WEB AR - Selfie Wall - Workflow

Dear 8th Wall, my question is simply about the most effective recommended workflow for a project (Web AR - Selfie Wall) we are currently developing with the following requirements: a media recorder (image and, if possible, also video) with a sharing function from within the app, use of the front and back cameras, body segmentation/occlusion, and, for content development, the use of video textures, simple 3D, animations, textures, etc.

While we see great benefits in the Studio editor for content, we are uncertain how the other functionalities, which would need to come from app.js and body.html code in the legacy editor mode, can be implemented.

We are open to recommendations on where this can be done. Many thanks for the help.

Hi, welcome to the forum!

I’d recommend using the Legacy Cloud Editor for your project, since that sounds like it would better suit your needs. You can access the Legacy Cloud Editor by going to your Projects page and scrolling all the way to the bottom, where you get the option to use Legacy Tools.

George, thanks a lot for your help. My question back: since we are stronger on the visual side and would prefer using the Studio editor, can it also be done with the Studio editor, or is there a major technical no-go (e.g. media recording) if we use it? Can we avoid the scripting-only legacy editor? Many thanks for your help.

Unfortunately you can’t combine them, but you could create a loading UI so the screen isn’t black for the end user.
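
For instance, here is a minimal sketch of such a loading overlay in plain JavaScript. The overlay id, and the assumption that the scene emits a realityready event once the camera feed is live, are placeholders on my side, so check the docs for the exact hook in your setup:

```js
// Hide a full-screen loading overlay once the AR camera feed is running.
// Assumes a <div id="loading-overlay"> in body.html and that the scene
// emits 'realityready' when the session has started (verify for your build).
window.addEventListener('DOMContentLoaded', () => {
  const scene = document.querySelector('a-scene')
  const overlay = document.getElementById('loading-overlay')
  scene.addEventListener('realityready', () => {
    overlay.style.display = 'none'  // camera is live, reveal the experience
  })
})
```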

Dear George, thanks for your reply. We understand that the capabilities of the legacy editor mode cannot be combined with those of Studio. But my question is simple: can we develop a Web AR project with two main aspects, 1. using video textures and 2. having a screen capture function with sharing capability, inside the Studio editor version 8th Wall is providing (without hard-coding the entire project)? Many thanks!

Yes, you can use video textures in Studio. There isn’t a built-in screen capture UX for video, but the events and API exist to implement it yourself.
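
For reference, here is a minimal sketch of one way to build that with plain browser APIs (canvas.captureStream, MediaRecorder and the Web Share API). The canvas selector, file names and button wiring are placeholders I've assumed, not part of the Studio API, so adapt them to your scene:

```js
// Record the render canvas and share (or download) the resulting video.
let recorder = null
let chunks = []

const startCapture = () => {
  const canvas = document.querySelector('canvas')  // assumed: your render canvas
  const stream = canvas.captureStream(30)          // capture at ~30 fps
  chunks = []
  recorder = new MediaRecorder(stream, {mimeType: 'video/webm'})
  recorder.ondataavailable = (e) => chunks.push(e.data)
  recorder.start()
}

const stopAndShare = async () => {
  const stopped = new Promise((resolve) => { recorder.onstop = resolve })
  recorder.stop()
  await stopped
  const blob = new Blob(chunks, {type: 'video/webm'})
  const file = new File([blob], 'selfie-wall.webm', {type: 'video/webm'})
  if (navigator.canShare && navigator.canShare({files: [file]})) {
    await navigator.share({files: [file], title: 'Selfie Wall'})  // native share sheet
  } else {
    const a = document.createElement('a')  // fallback: download the file
    a.href = URL.createObjectURL(blob)
    a.download = 'selfie-wall.webm'
    a.click()
  }
}

// Wire these to your own record/stop UI, e.g.:
// document.getElementById('recordBtn').onclick = startCapture
// document.getElementById('stopBtn').onclick = stopAndShare
```

Everything here stays on-device, so no upload step is needed before sharing.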

Thanks George, very helpful. I am now curious to find out how simultaneous tracking (image tracking and world tracking/SLAM) can work. So far, as soon as my smartphone camera moves away from the image tracker, all AR objects get lost and are not visible. As soon as I focus on the image tracker again, all AR objects come back. I would need an option where I initialize with the image tracker, but the objects remain in space even if the image tracker is out of sight of the camera. Is this possible? Or is there a workaround where I need to implement multiple image trackers in the scene to keep everything running, e.g. the first one to initialize and then continuously moving from tracker to tracker? Thanks!
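
One common workaround is not to parent the content to the image target at all: listen for the first xrimagefound event, copy the target's pose onto an entity that lives directly in world space, and let world tracking (enabled by default on supported devices) keep it anchored once the target leaves the view. A minimal sketch, assuming the legacy A-Frame build; the component name, the #content entity and the place-once logic are just illustrative:

```js
// Place content in world space on the first image-target detection so it
// stays visible after the target is lost. Attach this component to <a-scene>;
// the #content entity must NOT be a child of an xrextras-named-image-target.
AFRAME.registerComponent('persist-on-found', {
  init() {
    let placed = false
    this.el.sceneEl.addEventListener('xrimagefound', ({detail}) => {
      if (placed) return  // anchor only once
      const content = document.getElementById('content')
      content.object3D.position.copy(detail.position)
      content.object3D.quaternion.copy(detail.rotation)
      content.object3D.scale.set(detail.scale, detail.scale, detail.scale)
      content.setAttribute('visible', 'true')
      placed = true  // world tracking keeps it anchored from here
    })
  },
})
```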