AR Engine Graphics Inquiry + Scaniverse + feedback & suggestions

Hey team! :star_struck: I wanted to spend a few days working with the engine and gathering feedback before coming on here to ask some questions, and today is that day!

First of all, it’s always amazing to have these dedicated channels for support, thanks!

Today, I’m wondering about how the game engine is structured. Specifically, I’m curious about how the camera renders things in AR. Is it using A-Frame by default? Or Three.js?

I think I saw somewhere that we can use the XR8 engine. Is there a suggested approach for creating graphics in the engine? For example, in Spark AR we had render passes, similar to the effects you can create with shaders in Unity3D.

Those render passes let us enable graphic effects like glow. Are there any plans to make the engine documentation super easy for people who know nothing about engines, like creators?

I’m having a hard time understanding how the rendering magic happens in AR on mobile versus on desktop.

Additionally, it would be nice to have all the buttons mapped for the VR controllers for future generations. Meanwhile, I’m also trying to understand how I can migrate old code to the Niantic Studio way of creating and managing things via code.

Thank you so much for this new tool, which I think is already making a huge impact and will make an even greater one in the future if just a fraction of this feedback is implemented.

It feels like Blender in its controls and behaves like Unreal in how everything lives in the UI, yet I’m struggling to create basic things. YET! The rotation animation allowed me to create some pretty cool game mechanics, which was a huge plus! Perhaps it would be worthwhile to start thinking about a library of modules where different custom components can be found, from Niantic and, why not, from the community!

I was also thinking it would be nice to have an integrated 3D library within the engine to make the process of creating faster and smoother. In the Spark AR community, we used to have Sketchfab integration and some native 3D models.

It would be great to have, besides the awesome primitives we have now, official partnerships with 3D libraries so we have full support for importing models and our existing owned assets.

It would be nice to have documentation on how to make a Gaussian splat with Scaniverse.

I wonder if it is the same as making a VPS scan or a 3D model. How can we edit Gaussian Splats? It would be nice to have a desktop app to edit our Gaussian Splats. I’m thinking of making compositions of micro splats and I don’t know if there is software to compose and edit them, maybe even optimize them.

Creating a new dedicated section just for the engine that brings together multiple articles, with links to the relevant parts of the documentation, would be helpful. Yesterday, while searching Google for examples of how TypeScript works, I discovered that there is documentation for the ECS.

I’m a C# developer who happens to know a little bit of JavaScript, and thanks to AI, I managed to make some things. But learning TypeScript just for the engine adds another layer of complexity to getting things done inside it. It would be nice to have JavaScript and TypeScript versions of every example, in equal measure. This would make it easier for me to understand the engine and how to create things in it. I think we now have more TypeScript examples. That’s amazing, and it makes sense for the next generation of XR, but making it easier for existing JavaScript AR developers to jump in would be beneficial. Otherwise the shift is too rough and complex, and between work and everything else I don’t have much time to learn a new language just to get things done.

But maybe, by default, we already have all that power with JavaScript, which I think is the case. Still, from my perspective, it would be nice to be able to do the same things in JavaScript as in TypeScript.

One more thing: adding a blueprint / node editor / patch editor visual programming interface to Studio might take the engine to the next level. I think that would make Studio the number one choice for anyone looking to learn a new skill and build the future of XR.

I always seek that balance between the time spent learning something and its long-term gains. So far, Studio feels next level, yet some tweaks and adjustments must be made to reach the level of other software like Effect House or Lens Studio (sorry, someone had to say it), which make the process of creating AR easier for CREATORS and DEVELOPERS alike.

But all the underlying technology that makes Studio what it is, is already there, waiting for the world to see what it’s capable of.

The engine’s added value would be how easily we can port 3D experiences to AR, desktop, and mobile 3D, and maybe in the future to VR and MR. But what we really need is everything we have now, plus the ability to make it easier for everyone to build immersive games and experiences.

So far, so good! Let’s keep it going, because next year will be amazing! The work y’all are doing is really next level!

:star: Kudos to the whole team who is making this project possible.

Thanks, that would be it. Have a great week! :heart_eyes:

1 Like

One More Thing!

In the world of games, engines compete to offer similar features, aiming to attract and unify communities. The same is true for AR: creators and developers seek that sense of familiarity, those shared elements that build bridges and make it easier for everyone to adopt, explore, and create.

Creating games is hard work, but Niantic Studio has the potential to change that. Imagine a future where visual programming empowers creators to ride the wave of innovation instead of being left behind. When MR and VR support arrive, it will be the ultimate party, one where everyone is invited to dance!

Custom Components are a breakthrough, offering an elegant way to add behaviors to objects. But the real game-changer could be a Patch Editor, a tool that could reduce days of work into mere minutes, opening doors to a world of limitless possibilities.

A Patch Editor would do for AR what ChatGPT has done for AI. ChatGPT made a revolutionary impact by making advanced AI accessible to everyone, not just experts. Before, interacting with AI required specialized knowledge and skill, but now anyone can chat, create, and even generate code with ease. That’s the magic of a breakthrough: democratizing access, turning complexity into creativity, and sparking a movement that everyone can join.

Just as ChatGPT became a catalyst for AI adoption, a Node Editor could be the key to making AR creation as intuitive and inclusive. It would enable anyone, from seasoned developers to absolute beginners, to build, innovate, and share experiences in a powerful visual way.

Imagine creators dragging and dropping objects, linking custom components, and seeing their visions come to life in a Blueprint-style UI. This would break down barriers, ignite a creative revolution, and elevate Niantic Studio to a leader in XR development. :rocket: :fire:

Visual programming would be the bridge between JavaScript and TypeScript for developers and nodes for creators: a shared space where the Lens Studio community, XR enthusiasts, and even non-coders can come together to shape the future, one project at a time.

Please, please, please consider adding a node editor! Maybe the foundation is already there, just waiting for a fresh UI, more documentation, and tutorials to fully unlock its potential. With the right tools, this community could soar to new heights, making innovation and creativity accessible to everyone.

Thank you for creating this space for feedback and giving the community a voice. Your work is already making a difference, and I’m beyond excited to see how far we can go together.

Let’s build a future where everyone can create, innovate, and transform the world through XR. Keep up the amazing work, next year is going to be amaaaaaizzzziiiinngggg!!!

Thank you for allowing me to express my thoughts in this XR community that keeps me daydreaming! :two_hearts:

2 Likes

Hi @Diego_Aguirre

This is amazing feedback. Our team really appreciates all the great insight you’re providing here. So first, a BIG THANK YOU! :heart_eyes:

Addressing the questions/comments you have:

  • How does the camera render things in AR? Is it using A-Frame by default? Or Three.js?
    • Studio combines its own rendering system with Three.js. In the future, we may support different rendering systems, but it’s a custom rendering system that allows XR functionality and camera pass-through, structured most similarly to Three.js.
  • Is there a suggested approach for creating graphics in the engine?
    • You would use Studio’s 2D UI system and/or Particles System to create graphics (like buttons, text, screen overlays, animated effects, etc.). We just released an update to working with UI Elements :tada: so that you can preview UI in the Viewport. Learn more about these systems here: 2D UI & Particles guides. Additionally, check out our example project showcasing how to create an animated shader in Studio (a rough, Three.js-flavored sketch of the idea follows this list).
  • Are there any plans to make documentation about the engine super easy for people who know nothing about engines, like creators?
    • YES! We have the full API documentation here. We’re also continuing to add Sample Projects and Tutorials, which you can find here. We are taking note of your ask for a graphics-oriented tutorial or sample; that seems like a great next tutorial for us to prioritize.
  • How does the render magic happen when in AR Mobile and when on Desktop?
    • Using Studio’s XR & Camera Systems, you can configure how you want the project to behave on mobile and desktop, and whether you want AR functionality for mobile. Learn more about the XR system here. The rendering magic is happening through a combination of Studio’s gaming engine and XR engine.
  • Additionally, it would be nice to have all the buttons mapped for the VR controllers for future generations.
    • Noted! Studio has Gamepad API support via the Input System, but does not yet have VR controller inputs mapped. Integrated VR controller support is also being considered, but I cannot give a specific timeframe for release. (A small sketch of the plain web Gamepad API follows this list.)
  • How I can migrate old code to the Niantic Studio way of creating and managing things via code?
    • Studio’s game engine provides a different framework from the previous Cloud Editor (Three.js/A-Frame framework), so unfortunately there is not an immediate way to migrate projects. But there are many things in Studio that no longer require coding at all, such as positioning 3D objects (shapes and GLBs), setting up lighting, a camera, particle effects, image textures, basic animations, etc. (see the Guides section in the documentation for working with a particular system). For creating more complex behaviors and mechanics, take a look at the code files provided within the Studio Sample projects here and check out the Custom Components documentation (a generic sketch of the component idea follows this list). In the samples, you’ll see “Components” like Character-controller in the Vehicle Controller project, or Tap-to-Place in the World Effects project. These components tap into the API and gaming systems of Studio. We’re also continuing to build and release more sample projects with different Custom Components; let us know the kinds of game mechanics or interactions you would like to see demonstrated!
  • Perhaps it would be worthwhile to start thinking about making a library of modules where different custom components can be found by Niantic and, why not, from the community!
    • ABSOLUTELY, and this is our hope with Studio Modules! Actually, today any Studio user can create a custom Module in Studio and publish that Module to their public profile, which could contain sample code for creating different engine interactions and/or 3D models/assets like you mentioned. Learn more about Modules here. We don’t yet have a public-facing, user-generated Module Library, but we are considering building out this feature.
  • It would be great to have, besides the awesome primitives we have now, an official partnership with 3D libraries to have full support for importing models and our existing owned assets.
    • Love this idea. Side note: we have just released support for uploading FBX files (which are converted to GLB).
  • It would be nice to have documentation on how to make a Gaussian splat with Scaniverse. How can we edit Gaussian Splats? I’m thinking of making compositions of micro splats and I don’t know if there is software to compose and edit them, maybe even optimize them.
    • The best documentation on creating Splats is on the Scaniverse community site here. In Scaniverse you can crop and optimize your Splat, but unfortunately editing, cropping, and merging tools are not yet available in Studio.
  • Creating a new dedicated doc section just for the engine that encapsulates multiple articles with links to parts of the documentation would be helpful. … Learning TypeScript just for the engine adds another layer of complexity to just getting things done inside the engine.
    • Noted, we’ll look into how to organize and demonstrate this better. For creating more complex behaviors and mechanics, take a look at the code files provided in the Studio Sample projects and check out the Custom Components docs. In the samples, you’ll see “Components” like Character-controller in the Vehicle Controller project, or the Tap-to-Place mechanic in the World Effects project. These components tap into the API and gaming systems of Studio and are the best sources for learning how to build more complex interactions.
  • Patch/node editor request.
    • Will respond to the Patch Editor topic separately.
  • So far, Studio feels next level, yet some tweaks and adjustments must be made to reach the level of other software like Effect House or Lens Studio (sorry, someone had to say it), which make the process of creating AR easier for CREATORS and DEVELOPERS alike.
    • This feedback is on point :grinning: Niantic Studio is the first big step toward our mission to enable creators to build the next generation of immersive AR/VR experiences right in the browser. But it is a first step, and there are many areas for improvement. While in Beta, we are actively working to add new features, engine capabilities, and better documentation and examples, and to improve overall usability by fixing bugs and performance issues. Folks like you who are giving it a go and providing feedback really help us on this mission!! So please keep it coming!
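
To make the animated-shader suggestion above a bit more concrete, here is a minimal sketch in plain Three.js, offered only as an analogy (Studio’s renderer is its own system that is merely structured most similarly to Three.js). The uniform and shader code here are illustrative assumptions; the real Studio workflow is the example project linked above.

```typescript
import * as THREE from 'three'

// A flat plane whose fragment shader pulses over time, driven by a single
// uTime uniform that the render loop advances every frame.
const material = new THREE.ShaderMaterial({
  uniforms: {uTime: {value: 0}},
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform float uTime;
    varying vec2 vUv;
    void main() {
      // Simple animated glow: brightness pulses over time and across the surface.
      float pulse = 0.5 + 0.5 * sin(uTime * 2.0 + vUv.x * 6.2831);
      gl_FragColor = vec4(0.2, 0.8 * pulse, 1.0, 1.0);
    }
  `,
})
const mesh = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material)

// Called from the render loop with the elapsed time in milliseconds.
function animate(timeMs: number): void {
  material.uniforms.uTime.value = timeMs / 1000
}
```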
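
On the gamepad point, the sketch below uses only the standard web Gamepad API, just to show what gamepad input looks like at the browser level. How Studio’s Input System actually surfaces this state is not shown here; treat the wiring as an assumption and check the Input System docs.

```typescript
// Poll connected gamepads with the standard web Gamepad API.
function pollGamepads(): void {
  for (const pad of navigator.getGamepads()) {
    if (!pad) continue
    // Button 0 is the primary ("A") button on standard-mapping gamepads.
    if (pad.buttons[0]?.pressed) {
      console.log(`${pad.id}: primary button pressed`)
    }
    // Axes 0 and 1 are the left stick (x, y) in the standard mapping.
    const [x = 0, y = 0] = pad.axes
    console.log(`left stick: ${x.toFixed(2)}, ${y.toFixed(2)}`)
  }
  requestAnimationFrame(pollGamepads)
}

// Start polling once the first gamepad connects.
window.addEventListener('gamepadconnected', () => requestAnimationFrame(pollGamepads))
```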
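
Finally, to give a feel for what a Custom Component is conceptually, here is a generic, self-contained ECS-style sketch. It deliberately does not use the Studio API (every name in it is made up for illustration; the real registration and lifecycle calls are in the Custom Components docs). The idea is simply: a component is data attached to an entity, plus a tick function the engine runs every frame.

```typescript
type Entity = number

// Data carried by the hypothetical "spin" component.
interface SpinData {
  speed: number  // degrees per second
  angle: number  // accumulated rotation
}

const spinComponents = new Map<Entity, SpinData>()

// Attach the behavior to an entity (in Studio you would do this in the editor
// or through the Custom Components API instead).
function addSpin(entity: Entity, speed: number): void {
  spinComponents.set(entity, {speed, angle: 0})
}

// The engine calls this once per frame with the elapsed time in seconds,
// advancing every entity that carries the component.
function tickSpin(dtSeconds: number): void {
  for (const data of spinComponents.values()) {
    data.angle = (data.angle + data.speed * dtSeconds) % 360
  }
}

// Usage: spin entity 42 at 90 deg/s, then advance one 60 fps frame.
addSpin(42, 90)
tickSpin(1 / 60)
console.log(spinComponents.get(42)?.angle.toFixed(2))  // "1.50"
```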

Again, thank you @Diego_Aguirre :pray:

-Amanda Whitt
Group Product Manager, Niantic Studio

3 Likes

:star_struck: Wow, nobody had ever taken so much time and care in answering one of my questions. Thank you! Your answer helped me and answered all my questions.

Just on point! Thank you so, so much! And kudos to the whole team!

Take care! :heart:

1 Like