Custom Three.js pipeline module using react-three-fiber (r3f)

Hello! Looking for some guidance on why the screenshots below differ (the phone and image target are positioned identically in both). I have a Three.js scene set up using react-three-fiber, and adding 8th Wall to it is not straightforward, so I'm trying different methods. In the 3D scene I basically have a plane positioned on top of the detected image target, and it should fit exactly on top of it.
Screenshot 1: https://www.dropbox.com/scl/fi/8kvb7dybmlxgk2kyhdhez/EightWallThreeJSPipeline.png?rlkey=6ohuip7ws9poy41zpj7u030yg&st=9ihvac9s&dl=0
In this screenshot I'm using 8th Wall's XR8.Threejs.pipelineModule() and the plane looks OK:

XR8.addCameraPipelineModules([
      XR8.GlTextureRenderer.pipelineModule(),
      XR8.Threejs.pipelineModule(),
      XR8.XrController.pipelineModule(),
      XRExtras.Loading.pipelineModule(),
      XRExtras.RuntimeError.pipelineModule(),
      pipelineModule.current,
    ]);
XR8.run({canvas: canvasRef.current});
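For reference, pipelineModule.current above holds a plain object with named lifecycle handlers. A minimal sketch of its shape (the name and body here are illustrative placeholders, not my exact module):

```javascript
// Illustrative sketch of a custom camera pipeline module.
// 8th Wall calls onUpdate once per camera frame with processing results.
const customPipelineModule = {
  name: 'r3f-sync',  // hypothetical name; any unique string works
  onUpdate: ({ processCpuResult }) => {
    const reality = processCpuResult && processCpuResult.reality;
    if (!reality) return;  // no tracking data this frame
    // reality.position / reality.rotation / reality.intrinsics get
    // applied to the r3f camera here (the real version is shown further down).
  },
};
```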

I'm also pointing 8th Wall at the r3f Canvas directly, so the GlTextureRenderer camera feed and the Three.js scene are drawn there:

<Canvas ref={canvasRef}>
      <BaseScene/>
</Canvas>

Screenshot 2: https://www.dropbox.com/scl/fi/emepfgowjs0zx4alm1kdw/CustomPipeline.png?rlkey=ndebu5ip029si8irjmi6xfjsd&st=nusjz1jk&dl=0
In this next screenshot I'm doing the Three.js camera adjustments in my custom pipeline module:

XR8.addCameraPipelineModules([
      XR8.GlTextureRenderer.pipelineModule(),
      XR8.XrController.pipelineModule(),
      XRExtras.FullWindowCanvas.pipelineModule(),
      XRExtras.RuntimeError.pipelineModule(),
      pipelineModule.current,
    ]);
XR8.run({canvas: cameraFeedCanvasRef.current});

In this case I'm using an extra canvas to draw the camera feed, like so:

<canvas ref={cameraFeedCanvasRef} />
<Canvas>
        <BaseScene/>
</Canvas>
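One assumption I'm making for this variant: both canvases have to cover the same full-window rectangle, since GlTextureRenderer sizes the feed to its canvas while r3f computes its camera aspect from its own. A rough sketch of the stacking (the inline styles are illustrative, not my exact CSS):

```jsx
// Illustrative stacking: both canvases fill the window and overlap.
const overlayStyle = { position: 'absolute', top: 0, left: 0, width: '100%', height: '100%' };

<canvas ref={cameraFeedCanvasRef} style={overlayStyle} />
<Canvas style={overlayStyle} gl={{ alpha: true }}>  {/* transparent, so the feed shows through */}
  <BaseScene />
</Canvas>
```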

In my custom pipeline module I adjust the camera of the r3f scene in the onUpdate method, like this:

onUpdate: ({ processCpuResult }) => {
      if (!processCpuResult.reality) {
        return;
      }

      const { rotation, position, intrinsics } = processCpuResult.reality;

      if (intrinsics) {
        // Copy 8th Wall's column-major projection matrix into the r3f camera.
        for (let i = 0; i < 16; i++) {
          r3fCamera.projectionMatrix.elements[i] = intrinsics[i];
        }
        // Recover the vertical FOV from the (1,1) focal term.
        const b = intrinsics[5];
        const fovRad = 2 * Math.atan(1 / b);
        const fovDeg = MathUtils.radToDeg(fovRad); // MathUtils from 'three'
        r3fCamera.fov = fovDeg;
        r3fCamera.updateProjectionMatrix();
      }

      // Keep the inverse in sync for raycasting/unprojection.
      r3fCamera.projectionMatrixInverse
        .copy(r3fCamera.projectionMatrix)
        .invert();

      // Apply the tracked camera pose.
      if (rotation) {
        r3fCamera.setRotationFromQuaternion(rotation);
      }
      if (position) {
        r3fCamera.position.set(position.x, position.y, position.z);
      }
    },
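As a sanity check on the FOV math above, here is a tiny standalone helper (hypothetical, not part of the 8th Wall or three.js APIs) that recovers the vertical FOV from a column-major projection matrix, assuming element 5 is the vertical focal term as in the intrinsics indexing above:

```javascript
// Hypothetical helper: vertical FOV in degrees from a column-major
// projection matrix whose elements[5] is 1 / tan(fov / 2).
function fovDegFromProjection(elements) {
  return 2 * Math.atan(1 / elements[5]) * (180 / Math.PI);
}

// Sanity check: a 60° camera has elements[5] = 1 / tan(30°).
const m = new Array(16).fill(0);
m[5] = 1 / Math.tan(Math.PI / 6);
console.log(fovDegFromProjection(m).toFixed(1)); // "60.0"
```

(One assumption I still need to double-check separately: whether PerspectiveCamera.updateProjectionMatrix() rebuilds the matrix from fov/aspect/near/far, which would replace the intrinsics I copied in just before.)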

My questions are the following:

  • Why does the field of view change when I use a separate canvas to draw the camera feed, given that both cases use XR8.GlTextureRenderer.pipelineModule()?
  • Why does the image target plane not align perfectly in the custom pipeline implementation? I suppose there is some magic to be done with the 3D scene camera that I have yet to find out.

Thank you!