Hello, I find AR models are NEVER CONSISTENTLY SCALED when I use the “Image Target + SLAM Manipulate 3D Model” template.
We have a major art gallery exhibition IN FIVE DAYS, and we want our AR experience to work exceptionally well. We’ve been using this template in art gallery venues for the past two years, but the experience is never as good as we’d like.
Can someone please help me identify the problem and a potential fix? Or is there a document, tutorial, or forum topic that addresses this issue? Or should I be trying a different approach? (I’m experienced in numerous programming languages and graphical systems, but I’m a novice at JS and A-Frame. I get by with modifying 8th Wall templates incrementally.)
This is the EXPERIENCE we want:
(1) Visitors are instructed to scan the QR code next to each painting.
(2) When the app launches, visitors point their camera at the painting.
(3) When the app finds the image, it displays the 3D model that was used in the creation of the painting. Note: we try to position the model in front of the painting and scale it to the right size relative to the painting (or relative to a picture of the painting in a book).
(4) The user is then encouraged to move around and turn around inside the 3D model. Note: we decided to use SLAM to persist the model’s size and position regardless of whether the painting is in view (a simplified sketch of this placement logic follows the list below). Unfortunately, the model is not scaled consistently from run to run, and it even changes scale abruptly within a run. The scale seems to be loosely related to the camera’s distance when the image is found, so for each installation we determine the optimal initial distance (e.g. 6 feet).
(5) We instruct users to begin by standing at the optimal distance. Even so, the model changes size (and position) too much from run to run.
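To make the approach concrete, here is a simplified sketch of the kind of placement logic I mean. It is an illustration, not our exact code: the event names and detail fields (e.g. `scaledWidth`) come from my reading of the xrextras image-target examples, and the component name `painting-scale` and its `widthFactor` parameter are placeholders I made up for this post.

```javascript
// Sketch only. Assumptions: the scene emits xrimagefound / xrimageupdated /
// xrimagelost as in the xrextras image-target examples, and detail.scaledWidth
// is the painting's estimated physical width in meters (which, as I understand
// it, requires the target's physical width to be set in the 8th Wall console).
AFRAME.registerComponent('painting-scale', {
  schema: {
    // Desired model width as a fraction of the painting's width (tuned per artwork).
    widthFactor: {type: 'number', default: 1.0},
  },
  init() {
    this.frozen = false

    const placeAndScale = ({detail}) => {
      if (this.frozen) return  // keep the last placement once the painting is lost
      // Pin the model to the detected painting's pose.
      this.el.object3D.position.copy(detail.position)
      this.el.object3D.quaternion.copy(detail.rotation)
      // Size the model relative to the painting's estimated physical width.
      const s = detail.scaledWidth * this.data.widthFactor
      this.el.object3D.scale.set(s, s, s)
      this.el.object3D.visible = true
    }

    this.el.sceneEl.addEventListener('xrimagefound', placeAndScale)
    this.el.sceneEl.addEventListener('xrimageupdated', placeAndScale)

    // When the painting leaves the camera view, stop re-anchoring and rely on
    // SLAM to hold the model's size and position as the visitor walks around.
    this.el.sceneEl.addEventListener('xrimagelost', () => { this.frozen = true })
  },
})
```

In the markup, the model entity would carry something like `<a-entity gltf-model="#artworkModel" painting-scale="widthFactor: 1.2"></a-entity>` (names hypothetical). Even when the model is pinned to the image target this way, the world-space size it ends up with still varies noticeably between runs.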
Please refer to the “Image Target + SLAM Manipulate 3D Model” template in the 8th Wall Playground.
(I will include the results of 10 test runs in a follow-up message, documenting how scaling varies from run to run.)
I would appreciate any and all suggestions.