Hey team! I wanted to spend a few days working with the engine and gathering feedback before coming on here to ask some questions, and today is that day!
First of all, it’s always amazing to have these dedicated channels for support, thanks!
Today, I’m wondering about how the game engine is structured. Specifically, I’m curious about how the camera renders things in AR. Is it using A-Frame by default? Or Three.js?
I think I saw somewhere that we can use the XR8 engine. Is there a suggested approach for creating graphics in the engine? Are there any plans to make the engine documentation super approachable for people who know nothing about engines, like creators?
I’m having a hard time understanding how the render magic happens in AR on mobile versus on desktop. Coming from Spark AR, we had render passes there, similar to the shader-driven effects you can create in Unity3D, for enabling graphic effects like glow by managing the passes.
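To make the render-pass idea concrete, here’s the kind of glow setup I mean in plain Three.js. I don’t know whether Studio exposes the underlying scene, camera, and renderer, so treat those handles as hypothetical; the postprocessing classes themselves are standard Three.js:

```ts
import * as THREE from 'three'
import {EffectComposer} from 'three/examples/jsm/postprocessing/EffectComposer.js'
import {RenderPass} from 'three/examples/jsm/postprocessing/RenderPass.js'
import {UnrealBloomPass} from 'three/examples/jsm/postprocessing/UnrealBloomPass.js'

// Hypothetical handles into the engine's internals -- I don't know if
// Studio actually hands these to us:
declare const scene: THREE.Scene
declare const camera: THREE.PerspectiveCamera
declare const renderer: THREE.WebGLRenderer

const composer = new EffectComposer(renderer)
composer.addPass(new RenderPass(scene, camera))  // base pass: draw the scene
composer.addPass(new UnrealBloomPass(
  new THREE.Vector2(window.innerWidth, window.innerHeight),  // resolution
  1.2,   // strength
  0.4,   // radius
  0.85,  // luminance threshold -- only bright pixels bloom
))

// Then, instead of renderer.render(scene, camera) each frame:
composer.render()
```

If Studio handled this kind of composer setup for us behind a checkbox or a component, that would be the Spark AR level of ease I’m talking about.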
Additionally, it would be nice to have all the buttons mapped for the VR controllers, with future generations of devices in mind. Meanwhile, I’m also trying to understand how to migrate my old code to the Niantic Studio way of creating and managing things via code.
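To show the kind of migration I mean, here’s a spin component in the old 8th Wall A-Frame style next to my best guess at the Studio ECS equivalent. The A-Frame half is real; the `@8thwall/ecs` module name and the `registerComponent` shape are assumptions from the ECS docs I stumbled on, so the exact names may be off:

```ts
// Studio ECS style -- module name is my assumption from the ECS docs:
import * as ecs from '@8thwall/ecs'

// Old 8th Wall A-Frame style (what my existing projects use):
declare const AFRAME: any  // A-Frame's global

AFRAME.registerComponent('spin', {
  schema: {speed: {type: 'number', default: 90}},  // degrees per second
  tick(this: any, time: number, timeDelta: number) {
    // Rotate the entity a little each frame.
    this.el.object3D.rotation.y += (this.data.speed * Math.PI / 180) * (timeDelta / 1000)
  },
})

// What I understand the Studio ECS version to look like (unconfirmed):
ecs.registerComponent({
  name: 'spin',
  schema: {
    speed: ecs.f32,  // assuming the typed schema fields from the docs
  },
  schemaDefaults: {
    speed: 90,
  },
  tick: (world, component) => {
    // Same idea as the A-Frame tick above. I haven't found a documented
    // example of the transform API here, which is exactly the kind of
    // migration guide I'm asking for.
  },
})
```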
Thank you so much for this new tool, which I think is already making a huge impact and will make an even greater one in the future if just a fraction of this feedback is implemented.
It feels like Blender in its controls and behaves like Unreal in how everything is in the UI, yet I’m struggling to create basic things. YET! The rotation animation let me create some pretty cool game mechanics, which was a huge plus! Perhaps it would be worthwhile to start thinking about a library of modules where different custom components can be found, both from Niantic and, why not, from the community!
I was also thinking it would be nice to have an integrated 3D library within the engine to make creating things faster and smoother. In the Spark AR community, we used to have Sketchfab integration and some native 3D models.
Besides the awesome primitives we have now, it would be great to have an official partnership with 3D libraries, plus full support for importing both library models and the assets we already own.
It would be nice to have documentation on how to make a Gaussian splat with Scaniverse.
I wonder if the process is the same as making a VPS scan or a 3D model. And how can we edit Gaussian splats? A desktop app for editing them would be great. I’m thinking of making compositions of micro-splats, and I don’t know if there is software to compose and edit them, maybe even optimize them.
Creating a new dedicated section just for the engine, encapsulating multiple articles with links to the relevant parts of the documentation, would be helpful. Yesterday, while looking for examples of how TypeScript works, I only discovered the ECS documentation by searching on Google.
I’m a C# developer who happens to know a little bit of JavaScript, and thanks to AI, I’ve managed to build some things. But learning TypeScript just for the engine adds another layer of complexity to getting things done inside it. It would be nice to have JavaScript and TypeScript examples in equal measure. That would make it easier for me to understand the engine and how to create things in it. I know we now have more TypeScript examples, which is amazing and makes sense for the next generation of XR, but making it easier for existing JavaScript AR developers to jump in would be beneficial. Otherwise, the shift is too rough and complex; I have a day job and don’t have much time to learn a new language just to get things done.
But maybe, by default, we already have all that power, which I think is the case. Still, the sentiment from my perspective is that it would be nice to be able to do the same things in JavaScript.
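To show what I mean, here’s the A-Frame spin component from earlier with the TypeScript-only annotations stripped out; it’s valid JavaScript as-is, and this is the flavor I’d love to see sitting next to every TypeScript example:

```js
// Same spin component, plain JavaScript -- no annotations, same behavior:
AFRAME.registerComponent('spin', {
  schema: {speed: {type: 'number', default: 90}},
  tick(time, timeDelta) {
    this.el.object3D.rotation.y += (this.data.speed * Math.PI / 180) * (timeDelta / 1000)
  },
})
```

Often the delta really is that small, which is why pairing every TypeScript sample with its JavaScript twin feels like a cheap win.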
One more thing: adding a blueprint / node editor / patch editor visual programming interface to Studio might take the engine to the next level. I think that would make Studio the number one choice for anyone looking to learn a new skill and build the future of XR.
I always weigh the time spent learning something against its long-term gains. So far, Studio feels next level, yet some tweaks and adjustments must be made for it to reach the level of software like Effect House or Lens Studio (sorry, someone had to say it), which make the process of creating AR easier for CREATORS and DEVELOPERS alike.
But all the underlying technology that makes Studio what it is, is already there, waiting for the world to see what it’s capable of.
The added value the engine can offer is how easily we can port 3D experiences to AR / Desktop / Mobile 3D, and maybe in the future VR and MR. But what we really need is everything we have now, plus making it easier for everyone to build immersive games and experiences.
So far, so good! Let’s keep it going, because next year will be amazing! The work y’all are doing is really next level!
Kudos to the whole team who is making this project possible.
Thanks, that would be it. Have a great week!