I hope to place "birds" on horizontal lines in what the camera sees. That means identifying horizontal lines with some computer vision library (tracking.js?) and deducing the line positions in world space. Can you offer pointers on how to do that?
Welcome to the forums!
If you want it to work on any wire, your best option is to train a tracking.js model on your target environment and use a cameraPipelineModule to process the camera pixel array provided by the module. You can find documentation on accessing the cameraPixelArray through a cameraPipelineModule here:
Here's an example of how you can create a cameraPipelineModule in a Niantic Studio component:
```ts
import * as ecs from '@8thwall/ecs'

const {XR8} = window as any

ecs.registerComponent({
  name: 'camera-pipeline-component',
  schema: {},
  schemaDefaults: {},
  data: {},
  add: (world, component) => {
    XR8.addCameraPipelineModule({
      name: 'cameraPixelArrayModule',
      // Runs on the CPU each frame, after the GPU stage has finished.
      onProcessCpu: ({processGpuResult}) => {
        const {camerapixelarray} = processGpuResult
        if (!camerapixelarray || !camerapixelarray.pixels) {
          return  // No pixel data available for this frame.
        }
        const {rows, cols, rowBytes, pixels} = camerapixelarray
        console.log(rows, cols, rowBytes, pixels)
      },
    })
  },
  tick: (world, component) => {},
  remove: (world, component) => {},
})
```
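Once you have the pixel buffer, one simple way to start looking for horizontal lines is to score each row by its vertical intensity change. This is a minimal sketch, not part of the 8th Wall or tracking.js APIs: the helper name and threshold are made up for illustration, and it assumes `pixels` is a single-channel (luminance) buffer laid out row by row with `rowBytes` bytes per row. A real detector (e.g. a Hough transform or a trained tracking.js model) would be more robust:

```ts
// Hypothetical helper: returns the image rows that look like horizontal
// edges (e.g. a wire) in a grayscale buffer.
const findHorizontalLineRows = (
  pixels: Uint8Array,
  rows: number,
  cols: number,
  rowBytes: number,
): number[] => {
  const candidates: number[] = []
  for (let y = 1; y < rows - 1; y++) {
    let gradient = 0
    for (let x = 0; x < cols; x++) {
      // A large difference between the rows above and below marks a
      // horizontal edge passing through row y at column x.
      gradient += Math.abs(
        pixels[(y + 1) * rowBytes + x] - pixels[(y - 1) * rowBytes + x],
      )
    }
    // Threshold is arbitrary; tune it for your lighting and scene.
    if (gradient / cols > 20) {
      candidates.push(y)
    }
  }
  return candidates
}
```

You could call this from `onProcessCpu` with the values destructured above. Note that mapping a detected image row back to a world-space line position still requires camera intrinsics and a depth estimate, which this sketch doesn't cover.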
Has this been tested, or is there an end-to-end project that uses this? On PC and mobile, camerapixelarray.pixels is always null, and in the editor the XR8 library cannot be loaded.
Your code is running before XR8 is ready. Add an event listener on the window that waits for the xrloaded event, and then run your code.
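For example, a minimal sketch of that pattern (the `addCameraPipelineModule` call stands in for the component code above):

```ts
const onXrLoaded = () => {
  const {XR8} = window as any
  // XR8 is guaranteed to exist here, so it is safe to register the module.
  XR8.addCameraPipelineModule({
    name: 'cameraPixelArrayModule',
    onProcessCpu: ({processGpuResult}) => {
      // ... same handler as above ...
    },
  })
}

// If XR8 has already loaded, run immediately; otherwise wait for the event.
if ((window as any).XR8) {
  onXrLoaded()
} else {
  window.addEventListener('xrloaded', onXrLoaded)
}
```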