Today I came across a tweet by Micha Shipee that linked to an article showing a decade of change in NYC through Google Street View. In the past, I have mentioned several times the use of AR technology to view how our world looked in the past.
I think the idea popped into my head when I tried the app Flotogram. However, just an overlaid image isn’t the ultimate experience. The ultimate experience would be for all pictures and videos to be matched on top of real-world objects.
For example, I would rather look at a single building in front of me and see how it looked over the years than view an entire scene as an overlaid image or in virtual reality. This means the overlaid images and videos should be segmented based on a depth image created when the photo is taken: parts of the image get assigned to objects depending on your distance and angle from them. It also means we can combine images from different users, taken at different distances and angles, to reconstruct a past view of that same building from all sides – even from the roof, if a user photographed it from a plane, for example.
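To make the segmentation idea concrete, here is a minimal sketch of assigning pixels to depth layers. It assumes a hypothetical per-pixel depth map captured alongside the photo and a single hand-picked threshold; a real system would use learned segmentation, not fixed bins.

```python
import numpy as np

def segment_by_depth(depth, bin_edges):
    """Assign each pixel a segment id based on which depth bin it falls into.

    depth: 2D array of per-pixel distances (meters), captured with the photo.
    bin_edges: increasing depth thresholds separating near objects from background.
    Returns a 2D array of segment ids (0 = nearest bin).
    """
    return np.digitize(depth, bin_edges)

# Toy 2x3 depth map: a near object (1.5 m) in front of a far wall (10 m).
depth = np.array([[1.5, 1.5, 10.0],
                  [1.5, 10.0, 10.0]])

# Everything closer than 5 m becomes segment 0 (the object), the rest segment 1.
segments = segment_by_depth(depth, bin_edges=[5.0])
```

The segment-0 mask is what a past image of the object would be overlaid onto, while segment 1 stays live.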
This also means that everything can be used to construct that cloud-based photogrammetry time-travel machine, including indoor places and even specific items – you could see how your washing machine looked a year ago. This could even serve law enforcement as a crime-prevention system! The use cases for this are enormous!
With a lot of processing power and advanced systems, we could eventually see the world around us as it was, alive – the past replacing (or better said, mixing with and overlaying, in three dimensions) the current reality, looking startlingly real, based on the information collected from everyone who shared imagery data from their MR glasses and other devices.
That data should be centralized by a single entity in the cloud so it can be aggregated and used to build that 3D virtual image map from photos and depth-map data.
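As a rough illustration of what such a centralized store might aggregate, here is a sketch of a per-capture record and a naive way to group captures by location cell. Every field name here is hypothetical – this is not an existing API, just the minimum metadata the idea seems to require (position, orientation, time, image, depth).

```python
from dataclasses import dataclass

@dataclass
class CaptureRecord:
    """One user-contributed capture, as it might be stored centrally (hypothetical schema)."""
    lat: float
    lon: float
    altitude_m: float
    heading_deg: float   # camera direction, needed to merge views from different angles
    timestamp: float     # Unix time of capture, for browsing the past
    image_uri: str       # where the photo lives in cloud storage
    depth_map_uri: str   # the per-pixel depth image captured with it

def cell_key(rec, precision=4):
    """Bucket captures into ~10 m cells by rounding coordinates (a naive spatial index)."""
    return (round(rec.lat, precision), round(rec.lon, precision))

# Two users photographing the same spot end up in the same cell,
# so their images can be combined into one reconstruction.
rec_a = CaptureRecord(40.7580, -73.9855, 12.0, 90.0, 1.7e9, "img/0001.jpg", "depth/0001.png")
rec_b = CaptureRecord(40.75801, -73.98551, 2.0, 270.0, 1.6e9, "img/0002.jpg", "depth/0002.png")
```

A production system would use a real spatial index and anchor-relative poses, but the grouping principle is the same.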
I ask myself why Google couldn’t do this from Google Street View. In Street View you can walk through streets (though it is only image-based), and I have no doubt that Google stores all past images – this article is proof of it; no data is deleted. If Google also saves depth data, perhaps derived from its Google Maps satellite depth maps of areas at the time they were captured, that data might help construct this alternative AR scene.
The main problem is that the best solution can’t rely on Google Street View alone. It needs to be updated every second by millions of people, not every few months. That way, I could go out into the street right now and see how the garden looked yesterday at 18:23, provided there is data covering that area.
I want to be able to use AR glasses to have that experience. I want to be able to select a building, tap on it, and slide back in time 20 years to see how it looked back then. I want to enter a train station and see how it looked 50 years ago, even at the resolution of a single object or a specific area.
Looking at the current technology progression, we can assume this type of app will be implemented using the AR cloud. Once the 3D structural foundations of an environment are saved in the AR cloud, users can, with an accompanying app, lay imagery down on those 3D structures based on the point-cloud data associated with a particular location. That data is saved for that location with a timestamp (among other metadata), later enabling the feature of browsing that particular space as it looked in the past.
Now, the thing is that I want it to be very accurate. For example, if I scan, say, my electric fan, I want the texture scanned accurately enough that it is layered on the scanned 3D model in a way that lets me see the gaps between the grille slots, and what is behind them, while viewing how that fan looked in the past. For this to happen we need very accurate 3D scans of objects, which requires powerful devices and, of course, large bandwidth to transfer those high-resolution 3D models. Right now this seems far off, although the process can be optimized: the app could limit the distance at which objects are processed, or only process objects within your field of view.
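The distance-limit optimization mentioned above can be sketched as a simple culling pass. This assumes a hypothetical scene format of (id, position) pairs; the point is only that bounding the processing radius bounds how much high-resolution geometry must be fetched and rendered per frame.

```python
import math

def objects_to_process(objects, viewer_pos, max_range_m=15.0):
    """Keep only scanned objects within a processing radius of the viewer.

    objects: list of (object_id, (x, y, z)) positions in meters (hypothetical format).
    """
    keep = []
    for obj_id, pos in objects:
        if math.dist(pos, viewer_pos) <= max_range_m:
            keep.append(obj_id)
    return keep

# A nearby bench gets full-resolution treatment; a distant tower is skipped.
scene = [("bench", (1.0, 0.0, 2.0)), ("tower", (200.0, 0.0, 50.0))]
visible = objects_to_process(scene, viewer_pos=(0.0, 0.0, 0.0))
```

A view-frustum check would be layered on top of this for the field-of-view variant.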
It makes me want to try something myself: build a simple prototype that uses AR cloud technology, starting with flat images and continuing from there. I browsed the web trying to find an AR project similar to this idea that I could try right now on my device. If you’ve seen anything similar, like a prototype using AR cloud tech, please let me know.
Update: kkorouei mentioned in his reply that this technology exists. He linked to a video of a Microsoft engineer talking about PhotoSynth, a technology that used to connect the world’s images. The question still remains: if such technology exists, why hasn’t this idea been built already? Maybe implementing it – and I am talking about an AR implementation – is easier and more accessible now than ever. In any case, it made me want to take steps and try to create a little prototype myself. I have already started asking devs which technologies they think are best for making it a reality. I will update you on this.
Remember, we can attach any information to those objects – educational, social, and more.
This idea is an even better fit for the term World Wide Web (WWW): a world that is itself a browser, where you can flick back in time on almost anything – it’s huge! I wonder who will be the one to do it. I will share more of my self-brainstorming on this topic and spread even more ideas that could help make this the ultimate AR app ever. If you are making, or have already made, a prototype of this idea, please let me know – I really want to try it out.
Well, there is a lot of thinking to do on the subject. I want to hear your thoughts about it. Thanks.