Question about the data I can get from the ecs.input.SCREEN_TOUCH_START event

Hello!

I am writing a component for Niantic Studio.
I have a question about the data I can get from the ecs.input.SCREEN_TOUCH_START event.
When I log event.data to the console, I get:
{pointerId: 1, position: {...}, worldPosition: BN, target: undefined}

Logging event.data.worldPosition yields:
{x: -1.5845162968111624, y: -0.8738807687578227, z: 2.240194782272699}

I think this is the world position of the entity hit by a raycast performed during the SCREEN_TOUCH_START event. Is that correct?

I am using this sample (Studio: VPS Procedural | 8th Wall) as a reference to create a mesh from the bufferGeometry in the event.data of the reality.meshfound event and add it to the scene.
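
For context, here is a minimal sketch of roughly what that step looks like in plain three.js, assuming event.data exposes flat vertex and index buffers and that my component has access to a three.js scene; the buffer field names, the scene access, and the helper name buildMeshFromBuffers are my own assumptions, not the documented Studio API.

```ts
// Minimal sketch (not the documented Studio API): build a three.js mesh from
// flat vertex/index buffers and keep a reference so it can be raycast later.
// The buffer field names and the way the scene is obtained are assumptions;
// adapt them to whatever your console.log of event.data actually shows.
import * as THREE from 'three'

let environmentMesh: THREE.Mesh | null = null

const buildMeshFromBuffers = (
  vertices: Float32Array,  // assumed flat [x, y, z, ...] positions
  indices: Uint32Array,    // assumed triangle indices
  scene: THREE.Scene,      // the three.js scene the component has access to
): THREE.Mesh => {
  const geometry = new THREE.BufferGeometry()
  geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3))
  geometry.setIndex(new THREE.BufferAttribute(indices, 1))
  geometry.computeVertexNormals()

  environmentMesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({wireframe: true}))
  scene.add(environmentMesh)
  return environmentMesh
}
```

Keeping a reference like environmentMesh is what lets me try to raycast against the mesh later.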

What I want to do is raycast against this mesh on SCREEN_TOUCH_START.

Currently, however, the rays do not seem to hit this mesh: the hit points float slightly above it or a bit in front of it, and target is always undefined.

Do I have to create the target for SCREEN_TOUCH_START with ecs.createEntity()?
If so, is it possible to assign the bufferGeometry to an entity created with ecs.createEntity()?

I’ll have to test this out. The screen touch events use a three.js raycaster under the hood, so it should work.
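
If it helps while testing, here is a minimal sketch of doing the raycast manually with THREE.Raycaster as a cross-check, assuming the touch position arrives as normalized screen coordinates (0..1, top-left origin) and that the camera and the mesh built earlier are accessible; those assumptions and the helper name raycastTouch are mine, not part of the Studio API.

```ts
// Minimal manual-raycast cross-check (assumptions noted in the text above).
import * as THREE from 'three'

const raycaster = new THREE.Raycaster()

const raycastTouch = (
  screenX: number,  // assumed 0..1 from the left edge
  screenY: number,  // assumed 0..1 from the top edge
  camera: THREE.Camera,
  mesh: THREE.Mesh,
): THREE.Intersection | null => {
  // Convert screen coordinates to normalized device coordinates (-1..1).
  const ndc = new THREE.Vector2(screenX * 2 - 1, -(screenY * 2 - 1))
  raycaster.setFromCamera(ndc, camera)
  const hits = raycaster.intersectObject(mesh, true)
  return hits.length > 0 ? hits[0] : null
}
```

Comparing the returned hit point with event.data.worldPosition should show whether the built-in raycast is actually intersecting the VPS mesh.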


Thank you for your answer!
I see.
So it is normal for "target" to be undefined for a mesh added to the scene this way?
From what you say, it sounds like it could be due to the complexity of the mesh.

I would say so. If you can get it working in a standard three.js project, there might be something we can look into to learn why it isn't working here.

Roger that.
I will try it with Three.js.
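
For what it's worth, a standalone test outside Studio can be quite small; this sketch builds a mesh from raw buffers and raycasts against it on pointerdown (all names here are local to the sketch):

```ts
import * as THREE from 'three'

const scene = new THREE.Scene()
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100)
camera.position.set(0, 0, 3)

const renderer = new THREE.WebGLRenderer()
renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

// A single triangle built from raw buffers, standing in for the VPS mesh.
const geometry = new THREE.BufferGeometry()
geometry.setAttribute('position',
  new THREE.BufferAttribute(new Float32Array([-1, -1, 0, 1, -1, 0, 0, 1, 0]), 3))
geometry.setIndex([0, 1, 2])
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({wireframe: true}))
scene.add(mesh)

const raycaster = new THREE.Raycaster()
window.addEventListener('pointerdown', (e) => {
  // Convert the pointer position to normalized device coordinates.
  const ndc = new THREE.Vector2(
    (e.clientX / window.innerWidth) * 2 - 1,
    -(e.clientY / window.innerHeight) * 2 + 1)
  raycaster.setFromCamera(ndc, camera)
  const hits = raycaster.intersectObject(mesh)
  console.log(hits.length ? hits[0].point : 'no hit')
})

renderer.setAnimationLoop(() => renderer.render(scene, camera))
```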