2D Joints/Pose Detection

Hi,
I’m new to 8th Wall, and I was wondering whether pre-existing functions/templates exist for live 2D body joint detection on the camera feed. This would be similar to hand/face tracking, but for the whole body (feet to head).

If not, could we use or train a lightweight pose detector of our own that would run inside our 8th Wall project?

Thank you

Thank you, @Ian .
Is there anything that would prevent us from integrating third-party code such as PoseNet (TensorFlow Lite) with other 8th Wall functionality, such as recording video frames while performing world tracking?

https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=movenet
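For reference, this is roughly the kind of call we have in mind, using the @tensorflow-models/pose-detection package (just a sketch on our side, not yet tested inside an 8th Wall project; `videoElement` stands in for whatever camera-frame source we can get hold of):

```js
import * as poseDetection from '@tensorflow-models/pose-detection'
import '@tensorflow/tfjs-backend-webgl'  // WebGL backend for TF.js

// Load the MoveNet single-pose model once.
const detector = await poseDetection.createDetector(
  poseDetection.SupportedModels.MoveNet,
  {modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING},
)

// Estimate 2D body keypoints (nose, shoulders, hips, ankles, ...)
// from an image, video, or canvas source.
const poses = await detector.estimatePoses(videoElement)
// poses[0].keypoints -> [{x, y, score, name: 'left_shoulder'}, ...]
```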

You can certainly access the camera pixels and process them with a computer vision library using a custom camera pipeline module.
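Something along these lines should be workable. This is only a rough sketch, assuming the @tensorflow-models/pose-detection MoveNet model from your link; `XR8.addCameraPipelineModule` is the documented entry point, but the way the frame pixels are sourced here (the canvas handed to `onStart`) is an assumption you should verify against the camera pipeline module docs:

```js
import * as poseDetection from '@tensorflow-models/pose-detection'
import '@tensorflow/tfjs-backend-webgl'

let detector = null
let feedCanvas = null
let busy = false
let latestPoses = []  // most recent 2D keypoints, readable from your render loop

XR8.addCameraPipelineModule({
  name: 'pose-estimation',
  onStart: ({canvas}) => {
    // Assumption: run inference against the camera feed canvas. Reading raw
    // frame pixels in onProcessCpu would be the lower-level alternative.
    feedCanvas = canvas
    poseDetection.createDetector(poseDetection.SupportedModels.MoveNet)
      .then((d) => { detector = d })
  },
  onUpdate: () => {
    if (!detector || busy) return
    busy = true  // skip frames while an estimate is still in flight
    detector.estimatePoses(feedCanvas)
      .then((poses) => { latestPoses = poses })
      .finally(() => { busy = false })
  },
})
```

From there, `latestPoses` can drive your scene content in the same render loop that world tracking updates.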

We had one customer who used TensorFlow.js and trained a model to detect when two beer bottles came together as the trigger for an experience. https://www.8thwall.com/blog/post/48661573898/use-heineken-ar-cheers-for-the-chance-to-win-a-double-pass-to-the-heineken-pre-race-party

Also, an 8th Wall customer (JEELIZ) used their 3D object recognition tech in combination with our SLAM to do something similar with a coffee cup (see https://www.youtube.com/watch?v=3j7uB4-063w)