I hope to place "birds" on horizontal lines in what the camera sees. That means identifying horizontal lines with some computer vision library (tracking.js?) and deducing the line positions in world space. Can you offer pointers on how to do that?
Welcome to the forums!
If you want it to work on any wire, your best option is to train a tracking.js model on your target environment and use a cameraPipelineModule to process the camera pixel array the pipeline provides each frame.

You can find documentation on accessing the cameraPixelArray through a cameraPipelineModule here:
Here's an example of how you can create a cameraPipelineModule in a Niantic Studio component:
```ts
import * as ecs from '@8thwall/ecs'

const {XR8} = window as any

ecs.registerComponent({
  name: 'camera-pipeline-component',
  schema: {},
  schemaDefaults: {},
  data: {},
  add: (world, component) => {
    // Enable the built-in CameraPixelArray module so that
    // processGpuResult.camerapixelarray is populated for the custom
    // module below. luminance: true yields one grayscale byte per pixel.
    XR8.addCameraPipelineModule(XR8.CameraPixelArray.pipelineModule({luminance: true}))

    // Custom pipeline module that reads the camera frame on the CPU.
    XR8.addCameraPipelineModule({
      name: 'cameraPixelArrayModule',
      onProcessCpu: ({processGpuResult}) => {
        const {camerapixelarray} = processGpuResult
        if (!camerapixelarray || !camerapixelarray.pixels) {
          return  // No frame available yet.
        }
        // rows/cols describe the image size; rowBytes is the stride of
        // each row in the pixels buffer (it may be larger than cols).
        const {rows, cols, rowBytes, pixels} = camerapixelarray
        console.log(rows, cols, rowBytes, pixels)
      },
    })
  },
  tick: (world, component) => {},
  remove: (world, component) => {},
})
```
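
Once you have the luminance buffer, one cheap way to find candidate wires is to score each image row by its average vertical gradient: a thin wire against the sky produces a narrow band of rows with strong gradients. Below is a minimal sketch of that idea; `findHorizontalLines` and its `threshold` parameter are hypothetical names of my own, not part of the 8th Wall or tracking.js APIs, and the sketch assumes the single-channel luminance output shown above:

```ts
// Hypothetical helper (not an 8th Wall/tracking.js API): returns the row
// indices whose average vertical gradient exceeds `threshold`, i.e. rows
// that likely contain a horizontal edge such as a wire.
const findHorizontalLines = (
  pixels: Uint8Array,
  rows: number,
  cols: number,
  rowBytes: number,
  threshold = 40,  // tuning guess; adjust for your scene and lighting
): number[] => {
  const lines: number[] = []
  for (let y = 1; y < rows - 1; y++) {
    let score = 0
    for (let x = 0; x < cols; x++) {
      // Absolute difference between the pixels directly above and below.
      score += Math.abs(pixels[(y + 1) * rowBytes + x] - pixels[(y - 1) * rowBytes + x])
    }
    if (score / cols > threshold) {
      lines.push(y)  // Row index of a strong horizontal edge.
    }
  }
  return lines
}
```

You could call this from onProcessCpu in place of the console.log above. Keep in mind that a detected row is only a 2D observation: to place a bird in world space you still need a depth estimate for that row, for example from a hit test against detected surfaces or by assuming a fixed distance from the camera.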