Semantic segmentation to unlock real and digital interactions

Hi! Love the new forum! Looks good!
So I wanted to post a feature request.

The ability to use a finger to interact with a virtual button placed on top of an image tracker or in a world-tracked scene.

This would let a user interact with virtual buttons using their bare hands.

3D buttons that could be pressed and held, toggled, or pushed, without the use of dedicated hand tracking.

Idk if it's possible, but it would be nice to have for an AR scenario where the user's hand can interact with virtual buttons naturally.

I think this could be useful not only for image tracking but also for world tracking in AR.

Maybe depth could be calculated to work out whether the pixels covering the virtual button belong to a hand or finger; when the button of a given color is occluded, that could trigger a function that runs an animation or some logic.

Idk if the Lightship ARDK functionality can be ported, but this could maybe be achieved with Semantic Segmentation (rough sketch of the idea below).
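To make the idea concrete, here's a rough TypeScript sketch of the kind of per-frame check I have in mind. Everything in it is hypothetical, the hand mask, depth map, and button region formats are just assumptions, not any real ARDK API; it only shows how a segmentation mask plus depth could decide that a finger has actually reached a virtual button:

```typescript
// Hypothetical per-frame inputs: a semantic mask (1 = "hand" pixel) and a
// depth map at the same resolution, plus the button's screen-space rectangle
// and its depth along the camera ray. None of these names come from a real
// SDK; they just sketch the idea.

interface ButtonRegion {
  x: number;      // top-left corner in pixels
  y: number;
  width: number;
  height: number;
  depth: number;  // distance from camera to the virtual button, in meters
  onPress: () => void; // the animation or logic to trigger
}

const TOUCH_TOLERANCE_M = 0.03; // how close a hand pixel must be to count as a press

function isButtonPressed(
  handMask: Uint8Array,   // 1 where segmentation says "hand", 0 elsewhere
  depthMap: Float32Array, // per-pixel depth in meters
  maskWidth: number,
  button: ButtonRegion
): boolean {
  let handPixelsAtButtonDepth = 0;
  let totalButtonPixels = 0;

  for (let row = button.y; row < button.y + button.height; row++) {
    for (let col = button.x; col < button.x + button.width; col++) {
      const i = row * maskWidth + col;
      totalButtonPixels++;
      // A "press" = a hand pixel whose depth roughly matches the button's
      // depth, i.e. the finger has reached the button rather than just
      // passing in front of it.
      if (
        handMask[i] === 1 &&
        Math.abs(depthMap[i] - button.depth) < TOUCH_TOLERANCE_M
      ) {
        handPixelsAtButtonDepth++;
      }
    }
  }

  // Require some minimum coverage so a single noisy pixel doesn't fire the button.
  return handPixelsAtButtonDepth / totalButtonPixels > 0.1;
}

// Per-frame usage: occlusion and triggering could both key off the same test.
function onFrame(
  handMask: Uint8Array,
  depthMap: Float32Array,
  maskWidth: number,
  button: ButtonRegion
) {
  if (isButtonPressed(handMask, depthMap, maskWidth, button)) {
    button.onPress(); // e.g. play an animation or run custom logic
  }
}
```

Press-and-hold or toggle behavior would then just be a matter of tracking that boolean across frames.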

Looking forward to checking the comments to see if this feature would be useful for other devs as well.

Thanks!


@Diego_Aguirre Thanks for the feedback! Combining hand tracking with other types of tracking (Image Targets, World Tracking, etc.) simultaneously is a feature request we’ve received from others as well.

cc @Joel for visibility!


UP for semantic segmentation :pray: Thanks @Diego_Aguirre
