If you follow the latest developments in AR, you’ve probably already heard the term persistence in the context of persistent AR experiences.
This feature, introduced in ARCore 1.2 and ARKit 2.0, allows AR experiences to persist between sessions. We are not talking about resuming the state of virtual objects relative to their origin, which can be done by saving the data in local storage, such as a database. We are talking about the ability to persist virtual objects in the same real-world location where they were previously placed (also known as relocalization).
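To make the distinction concrete, here is a minimal, hypothetical sketch (plain Python, not a real ARKit or ARCore API) of the simpler kind of persistence: saving object poses relative to the session origin in local storage. The names and file format are illustrative assumptions. Note what this alone cannot do: the restored poses are only meaningful if the next session shares the same origin, which in practice it does not, and that gap is exactly what relocalization fills.

```python
import json
from pathlib import Path

# Hypothetical sketch: persisting virtual object poses relative to the
# session origin. This is the "simple" persistence described above, as
# opposed to relocalization, which anchors content to the real world.

def save_scene(path, objects):
    """objects: mapping of object id -> 4x4 transform (nested lists)."""
    Path(path).write_text(json.dumps(objects))

def load_scene(path):
    return json.loads(Path(path).read_text())

scene = {
    "lamp": [[1, 0, 0, 0.5],
             [0, 1, 0, 0.0],
             [0, 0, 1, -1.2],
             [0, 0, 0, 1]],
}
save_scene("scene.json", scene)
restored = load_scene("scene.json")
# The pose survives, but only relative to the *old* session origin.
assert restored["lamp"][0][3] == 0.5
```

This round-trips fine as data, which is the author's point: local storage is a solved problem, while mapping a stored pose back onto the same physical spot is the hard part.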
ARKit 2.0 persistence demo by YouTube user azamsharp.
Quite a few startups have been focusing their efforts on building AR Clouds, which allow just that: persisting content with high accuracy in real-world space. So no matter where you place a virtual object in the world, it can be revisited, viewed, and interacted with in the same location where you left it after closing the application.
This localization of virtual content, bound to spatial data, can also be shared with a relatively high degree of accuracy between multiple users; ARKit calls this a shared experience, while ARCore uses Cloud Anchors.
I personally wanted to know the difference between technologies like Cloud Anchors and 6D.ai’s AR Cloud. According to developers.google.com, the sparse point map can be used to resolve Cloud Anchor requests for 24 hours after it is generated. After 24 hours, those Cloud Anchors are removed and are no longer accessible.
An AR Cloud, on the other hand, provides the infrastructure and/or platform to retain information for a longer period of time. Some providers might not impose any strict limitations, so you can basically retain the data indefinitely until you explicitly initiate its removal using an API call.
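The difference between the two retention models can be sketched with a toy anchor store. This is purely illustrative (the class and method names are my own invention, not the Cloud Anchors or 6D.ai API): one store expires anchors after 24 hours, the other keeps them until an explicit delete call.

```python
import time

# Toy sketch (not a real API) contrasting the two retention models:
# Cloud Anchors' 24-hour expiry vs. an AR Cloud-style store that keeps
# anchors until they are explicitly removed.

class AnchorStore:
    def __init__(self, ttl_seconds=None):
        self.ttl = ttl_seconds      # None means "no expiry"
        self._anchors = {}          # anchor id -> creation timestamp

    def host(self, anchor_id, now=None):
        self._anchors[anchor_id] = now if now is not None else time.time()

    def resolve(self, anchor_id, now=None):
        created = self._anchors.get(anchor_id)
        if created is None:
            return False
        now = now if now is not None else time.time()
        if self.ttl is not None and now - created > self.ttl:
            del self._anchors[anchor_id]  # expired, like a 24h Cloud Anchor
            return False
        return True

    def delete(self, anchor_id):
        self._anchors.pop(anchor_id, None)

cloud_anchors = AnchorStore(ttl_seconds=24 * 3600)  # 24-hour model
ar_cloud = AnchorStore(ttl_seconds=None)            # retain until deleted

t0 = 0.0
cloud_anchors.host("table", now=t0)
ar_cloud.host("table", now=t0)

two_days_later = t0 + 48 * 3600
assert cloud_anchors.resolve("table", now=two_days_later) is False  # expired
assert ar_cloud.resolve("table", now=two_days_later) is True        # still there
```

The design choice the sketch highlights is simply who controls the anchor’s lifetime: the platform’s TTL policy, or the developer’s explicit removal call.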
That doesn’t mean Google won’t offer this option in the future, perhaps through one of the startups it has funded, such as Blue Vision or Ubiquity6. No doubt long-term persistence will be crucial for certain apps.
Obviously, there will be privacy and security issues that need to be solved, but these technologies will definitely help fuel AR for years to come.
For me, persistence is a must-have feature for AR. When I take a physical item, like my iPad, and put it on the table, I know that when I wake up, it will be where I left it (unless my cat decided to knock it off the table). I want the same for virtual items. It might not seem significant to some, but persistence enables many types of functionality that let developers deliver experiences that weren’t possible without it. Some apps require a certain technology infrastructure in order to deliver certain functionality. Furthermore, persistence can act as a complementary feature that, used alongside other technologies like shared experiences, helps deliver more exciting AR experiences.
OK, all of this basic technical background is great, but what’s it actually useful for?
I personally care about how the technology can be used to enhance the Augmented Reality (AR) or Mixed Reality (MR) experience. The technology by itself is exciting, but if it can’t deliver exciting features to the end user, what’s the point, right?
Here are some practical examples of the benefits of persistence in AR, with or without long-term persistence:
- Create AR art in a specific place, then come back later and continue working on it in the exact same spot.
- Attach contextual information to real-world objects using virtual content. For example, creating a virtual AR tour for Airbnb guests who come to the house, so they know what is what and where to find things.
- Place notes in certain areas around you; those notes can later be seen in the exact same place, near the real-world objects they relate to.
- Build games that invite users to interact with virtual content in specific locations. With GPS, you might invite a user to a certain store, but due to its limited accuracy, the content might appear in the store next door instead. Location-based experiences will therefore be greatly enhanced by the advanced computer vision algorithms behind AR persistence.
- Local municipalities and business owners can place virtual signs in real time for other users to see (this will probably become popular once AR glasses are more common).
- Refurnish your new house digitally. This process might take several days, but when the location of the virtual content is preserved, you can continue working on it at a later date.
- Provide instructions to new workers in industrial manufacturing facilities.
- Deploy real-time danger road signs on-location
- Community-based, on-location translation of signs in frequently visited tourist areas.
- Show pathways to reach specific indoor locations.
- Location-based smart search of virtual content. With an AR Cloud as the base platform, you could find many types of location-based items, such as experiences, objects, notes, and widgets, depending on the functionality of the app or application ecosystem.
- Create a persistent, location-based permission architecture that filters access based on a predefined location.
- and much much more…
I could sit down and spend all day just writing practical use cases for persistence in augmented reality apps.
Without persistence, any content assigned to a specific location during a session is lost once the session ends. Many of the examples above wouldn’t be possible, or wouldn’t be practical, without a persistence feature available for AR developers to work with.
When you work in a physical space, or even with files and folders on your computer, you expect the changes you make to persist when you come back. Some applications are tied to a certain location, and because AR relies on a visual interface and virtual content placed within real-world space, persistence of content in relation to location is very much at its core.
In a visionary technological world where the virtual is supposed to blend seamlessly with the real, inheriting the behavior of physical objects to manage the space and the information within it matters as much as it does for their real-world counterparts. Without it, virtual things would behave like physical objects that, instead of staying where we left them, reset to a default location or, worse, need to be initialized and placed again each time we want to use them. That doesn’t happen in the real world, so why should it happen in our mixed reality?
In a well-designed mixed reality world, the virtual should behave somewhat like physical objects in the real world, in order to maintain the order, manageability, creativity, and proper functionality we humans are already used to. Beyond that, it should further improve the efficiency of certain human tasks and deliver enhanced experiences that are limited, or impossible, due to the nature of the physical world we live in.
The bottom line is that persistence in AR is a crucial and much-needed feature. I’m sure its absence has delayed many projects whose developers opted to wait rather than rely on current technologies (e.g., GPS) that are much less accurate and can severely impair the experience, or make it totally unusable, for many use cases, especially professional and enterprise applications.
I’m personally excited by 6D.ai’s AR Cloud technology, but I’m definitely looking forward to seeing what developers come up with using ARKit’s and ARCore’s current persistence features.