When you look at the Magic Leap One headset and mixed reality technology in general, and understand their features, you cannot ignore how valuable this technology is and how it can take digital storytelling experiences to a whole new level.
Overview
I’ve been fascinated with AR storytelling since trying apps like 57° North, The Condor, dARK: Subject One, and Forensic Detective, among others.
57° North for the Merge Cube was a fascinating interactive storytelling experience for me, and the one my niece loved the most. I see a great future for storytelling in mixed reality, and for interactive storytelling in particular. Having the option to affect how the story evolves based on my interactions is a key component of a compelling storytelling experience. I simply enjoy the fact that I can affect the story and see how it evolves and which paths it takes based on how I interact with it. Whether it’s through multiple-choice options or open answers, I like that the experience puts me in control of how the story develops.
Intensifying Sense of Presence
In mixed reality, users are an inseparable part of the experience. They are inside it, in first person, in the same place the story takes place (e.g. your room). This opens up a whole new world of possibilities to make the storytelling experience more personal, emotional, and entertaining.
In spatial computing, the user exists in the physical space of the scene, and his or her location can be tracked. This means that developers can add certain interactions or trigger events based on the user’s proximity to virtual objects and characters. For example, if the user gets very close to a character, it might run and hide behind cover, or scream.
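To make this concrete, here is a minimal TypeScript-style sketch of such a proximity trigger. The `Vec3` type, the distance thresholds, and the action names are placeholders I made up; in a real app the positions would come from the headset’s tracking data.

```typescript
// Minimal sketch: pick a character reaction based on the user's proximity.
// Vec3, the thresholds, and the action names are illustrative assumptions.
type Vec3 = { x: number; y: number; z: number };

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function reactToProximity(userPos: Vec3, characterPos: Vec3): "hide" | "scream" | "idle" {
  const d = distance(userPos, characterPos);
  if (d < 0.5) return "scream"; // user is almost on top of the character
  if (d < 1.5) return "hide";   // user is getting close: run behind cover
  return "idle";                // far away: keep the normal behavior
}

// Example: called every frame with fresh tracking data.
console.log(reactToProximity({ x: 0, y: 0, z: 0 }, { x: 0.4, y: 0, z: 1.2 }));
```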
This type of interaction is essential in making the user feel part of the experience and not like an outsider, a spectator.
Furthermore, MR headsets like the Magic Leap One feature many different input methods that enable different types of interaction based on the user’s headpose, eye gaze, hand gestures, etc.
Utilizing those inputs means that users can see an immediate response to their physical actions. For example, looking at a specific character can lead that character to respond in a certain way. It’s also recommended to create storytelling experiences that drive the user to move and inspect the environment.
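As a rough illustration, a gaze check can be as simple as comparing the gaze direction with the direction from the head to the character. The sketch below assumes a hypothetical `Vec3` type and a 10-degree threshold; nothing here is tied to a specific SDK.

```typescript
// Minimal sketch: decide whether the user is looking at a character by
// comparing the gaze direction with the direction to the character.
type Vec3 = { x: number; y: number; z: number };

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

function isLookingAt(headPos: Vec3, gazeDir: Vec3, targetPos: Vec3, maxAngleDeg = 10): boolean {
  const toTarget = normalize({
    x: targetPos.x - headPos.x,
    y: targetPos.y - headPos.y,
    z: targetPos.z - headPos.z,
  });
  const g = normalize(gazeDir);
  const dot = g.x * toTarget.x + g.y * toTarget.y + g.z * toTarget.z;
  const angleDeg = Math.acos(Math.min(1, Math.max(-1, dot))) * (180 / Math.PI);
  return angleDeg <= maxAngleDeg;
}

// Example: gaze straight ahead (+z), character slightly off to the side.
console.log(isLookingAt({ x: 0, y: 0, z: 0 }, { x: 0, y: 0, z: 1 }, { x: 0.1, y: 0, z: 2 }));
```

If this returns true, the character can wave back, speak, or look at the user in return.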
This type of spatial storytelling experience can better ignite curiosity, invite exploration, and make interaction more inviting and exciting. Exactly the things kids love.
Storytelling in Mixed Reality by Example
One of the great things about mixed reality is that things happen seamlessly in front of you, in your personal space. With mixed reality, and using technologies like the one from selerio.io, developers can create interactive storytelling experiences with characters that are not just “aware” of the environment but can interact with objects in the scene. You can create stories where virtual character interactions are generated based on the available 3D reconstruction data of the real-world space (meshing) and the objects (object recognition) within it.
This means that developers can create certain character behaviors (e.g. animation, positioning, voice) that will be generated and adapted to the user’s physical space. For example, imagine a character telling a story about a big spider and how scared it was of it. The developer can create an event where the character will run for cover behind real-world objects if those exist, rather than just playing a scared animation with the character standing in place. If there is a couch in the scene, the character might run behind it, shake, and point its hand at the spider. Add some heavy breathing and make the character gaze at the user, and if you imagine this in your head, you can see how it can fundamentally intensify the experience. The user feels much more involved in the story that unfolds in front of them and can develop a stronger emotional connection with the character in the story.
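Here is a rough sketch of how that scene-aware behavior could be planned. The `RecognizedObject` shape and the label names are assumptions on my part; a meshing/object-recognition layer would supply the real data.

```typescript
// Minimal sketch: adapt the "scared" behavior to the recognized objects in
// the room. RecognizedObject, the labels, and the action names are assumed.
type Vec3 = { x: number; y: number; z: number };
interface RecognizedObject { label: string; position: Vec3; }

const COVER_LABELS = ["couch", "sofa", "table", "bed"];

interface ScaredBehavior {
  action: "run_behind_cover" | "cower_in_place";
  coverPosition?: Vec3;
}

function planScaredBehavior(objects: RecognizedObject[]): ScaredBehavior {
  const cover = objects.find(o => COVER_LABELS.includes(o.label.toLowerCase()));
  if (cover) {
    // Real furniture found: run behind it, shake, and point at the spider.
    return { action: "run_behind_cover", coverPosition: cover.position };
  }
  // No usable cover in this room: fall back to a scared animation in place.
  return { action: "cower_in_place" };
}

// Example: a couch was recognized in the scene.
console.log(planScaredBehavior([{ label: "Couch", position: { x: 1, y: 0, z: 2 } }]));
```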
Because the experience happens in the user’s own space, characters might be located in different places in the real-world environment, so you need to give strong cues about where the action is. In mixed reality, the user’s head is the camera, and you have no control over it. The way to direct attention is by applying visual or audio cues that guide the user to look at a certain location. This can be done using spatial audio, an overlaid arrow pointing to the designated location, edge hints in the peripheral vision, lighting effects, character-based animation (e.g. a hand pointing or eyes looking in a direction), etc.
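A simple way to pick a cue is to look at how far the target is from the center of the user’s view. The thresholds and cue names below are illustrative assumptions, not recommendations from any SDK.

```typescript
// Minimal sketch: choose an attention-guiding cue based on the angle between
// the user's view direction and the target. Thresholds are assumptions.
type AttentionCue = "none" | "character_points" | "edge_hint" | "spatial_audio";

function chooseAttentionCue(angleToTargetDeg: number): AttentionCue {
  if (angleToTargetDeg <= 15) return "none";             // already roughly in view
  if (angleToTargetDeg <= 45) return "character_points"; // gentle nudge from the character
  if (angleToTargetDeg <= 90) return "edge_hint";        // glow at the edge of the FOV
  return "spatial_audio";                                // target is behind the user
}
```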
Whether you decide to make the XR storytelling experience interactive or not is up to you, but I personally wrote this article to encourage developers to at least think about it.
Unique Interactions
Let’s continue the story of our character and the spider from above.
Let me give you a nice interaction that uses hand recognition and head tracking to enhance the experience. Once the character has hidden behind the couch, it asks you: “Can you see the spider?“. This can be used to encourage users to locate the spider in the physical space. It’s better than an explicit location cue because it delivers a more realistic and seamless MR experience. The app triggers an event once the user’s head and/or gaze is targeting the spider. Until then, our character will just shake like mad and wait for you to find it (maybe mumbling some stuff).
After that event happens, we trigger another question from the character: “Can you please tell me when it’s gone? Just give me a thumbs up when it’s OK for me to come out.“
If you signal a thumbs up while the spider is still there and the character comes out, it triggers one path in the story. If you do it when the spider is gone, it triggers another path.
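Under the hood, this whole spider sequence can be modeled as a tiny state machine. The event names (“gaze_on_spider“, “thumbs_up“) are hypothetical placeholders for whatever the headset’s eye- and hand-tracking actually report.

```typescript
// Minimal sketch: the spider sequence as a small state machine.
// States, event names, and branches mirror the example in the text.
type SpiderState =
  | "hiding_waiting_for_spider_found"
  | "waiting_for_thumbs_up"
  | "branch_spider_still_there"
  | "branch_spider_gone";

interface StoryInput { event: "gaze_on_spider" | "thumbs_up"; spiderStillThere: boolean; }

function nextState(state: SpiderState, input: StoryInput): SpiderState {
  switch (state) {
    case "hiding_waiting_for_spider_found":
      // The character shakes and mumbles until the user's gaze finds the spider.
      return input.event === "gaze_on_spider" ? "waiting_for_thumbs_up" : state;
    case "waiting_for_thumbs_up":
      if (input.event !== "thumbs_up") return state;
      // A thumbs up while the spider is still there takes a different path.
      return input.spiderStillThere ? "branch_spider_still_there" : "branch_spider_gone";
    default:
      return state; // terminal branches: the rest of the story continues from here
  }
}

// Example: the user gives a thumbs up after the spider has left.
console.log(nextState("waiting_for_thumbs_up", { event: "thumbs_up", spiderStillThere: false }));
```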
Other types of interaction can include paying attention to one character or event at a specific location while ignoring another that happens somewhere else. Events can be triggered based on the distance of the user from a certain story element, or on whether the user interacts with a certain object in the scene, for example by touching it with their hand or just gazing at it.
You can monitor the user’s movement speed and decide to make a certain character flee if the user moves too fast, or stay put if the user slowly moves toward it. It is kind of a game, but these are actual interactions that should replace the standard answer-selection or free-text ones from conventional games. To deliver the best experience in mixed reality, you should use natural movement and interaction and ditch keyboard input. Focus on the input methods that the device enables (e.g. eye gaze, headpose, 6DoF controller, etc.). Mixed reality is going to revolutionize interactive storytelling, and this is a great opportunity to take advantage of these amazing technologies and create something new and fresh in this genre.
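A speed-based reaction could look something like the sketch below; the 0.8 m/s threshold is an arbitrary assumption, and real tracking data would come from the headset.

```typescript
// Minimal sketch: flee if the user approaches too fast, stay put otherwise.
type Vec3 = { x: number; y: number; z: number };

function speed(prev: Vec3, curr: Vec3, dtSeconds: number): number {
  return Math.hypot(curr.x - prev.x, curr.y - prev.y, curr.z - prev.z) / dtSeconds;
}

function characterResponse(prevUserPos: Vec3, currUserPos: Vec3, dtSeconds: number): "flee" | "stay_put" {
  return speed(prevUserPos, currUserPos, dtSeconds) > 0.8 ? "flee" : "stay_put";
}

// Example: the user covered 0.3 m in the last 0.2 s (1.5 m/s), so the character flees.
console.log(characterResponse({ x: 0, y: 0, z: 0 }, { x: 0.3, y: 0, z: 0 }, 0.2));
```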
As you can see, we used several technologies to create a more compelling interactive storytelling experience. Yes, the examples I gave above are snippets, but I think they can inspire you to come up with your own ideas and to take advantage of the current and future capabilities of mixed reality to create more compelling interactive storytelling experiences.
What kid or adult won’t be fascinated by this type of experience happening in their home? This opens up a whole new world of creative storytelling possibilities that until now weren’t possible. It will require some extra work to create that dynamic, procedural storytelling experience, but I’m telling you, it will be well worth it. This is a fantastic way to demo the capabilities of a mixed reality headset. Right now, not all of these technologies are available for the Magic Leap One. However, I wrote this with currently available technology in general in mind, so you can get inspired and not be limited by the current limitations of one mixed reality headset or another. You can always choose to ditch certain functionality if the tech isn’t there yet or isn’t available for the platform you are targeting.
How to Achieve This Technically
Now, for the technical part. I am a web developer, so I can’t tell you exactly how this can be done on the Magic Leap One. However, if you use a context-aware 3D scene analysis technology, you can detect a common pre-defined set of objects and apply a certain behavior to those specific items, like a couch, table, bin, etc.
You can have a default behavior that runs if none of the objects are detected. The idea is to make use of the user’s environment and human interaction as much as possible. You create an adaptable app that can play out differently for every user but has common key interactive elements that do not change.
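In code, this pattern can be as simple as a lookup of behaviors per recognized object type with a default fallback. The labels and behavior names below are made up for illustration.

```typescript
// Minimal sketch: behavior per recognized object type, with a default that
// runs when none of the known objects are detected in the user's room.
type Behavior = string;

const behaviorByObject: Record<string, Behavior> = {
  couch: "hide_behind_couch",
  table: "crawl_under_table",
  bin: "peek_out_of_bin",
};

function pickBehavior(detectedLabels: string[]): Behavior {
  for (const label of detectedLabels) {
    const behavior = behaviorByObject[label];
    if (behavior) return behavior; // adapt to whatever the room actually contains
  }
  return "cower_in_place"; // default when nothing recognizable is around
}

// Example: only a table was detected in this user's room.
console.log(pickBehavior(["table"]));
```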
You can always add some nice interaction to create a stronger emotional bond with the character, like making the character say, for example: “I see you have a soccer ball, I love playing soccer!“. This is something that might not be related to the progress of the experience but can add another entertainment layer and deepen the emotional bond.
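Such flavor lines can live in a simple table keyed on recognized objects, completely decoupled from the story’s progress. Again, the labels and lines below are just illustrative.

```typescript
// Minimal sketch: optional, non-critical flavor lines keyed on recognized
// objects, to deepen the emotional bond without affecting the story flow.
const flavorLines: Record<string, string> = {
  soccer_ball: "I see you have a soccer ball, I love playing soccer!",
  guitar: "Is that a guitar? I wish I could play one.",
};

function maybeSaySomething(detectedLabels: string[]): string | undefined {
  const label = detectedLabels.find(l => flavorLines[l] !== undefined);
  return label ? flavorLines[label] : undefined; // stay quiet if nothing matches
}

// Example: a soccer ball was recognized in the room.
console.log(maybeSaySomething(["soccer_ball"]));
```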
I will leave this topic open, and I hope you can enlighten me and other developers on how this can be achieved programmatically for specific target platforms (e.g. HoloLens, Magic Leap One).
Custom Locations
The example I’ve given above is like taking an AR storytelling experience such as “57° North” and releasing it into the environment. The thing is that in many of those AR storytelling experiences, the author/developer wants to tell a story in a specific location or make it span several locations. This can be a dark, scary forest, the open sea, or a wide open green field.
How can you have a pre-defined location in mixed reality where the experience happens in the user’s environment?
There are ways you can achieve this. This is not VR, where you are totally immersed in a new virtual world, nor a standard app where you can build the 3D scene any way you like; AR and MR are different technologies. Still, you can get there to some degree:
- Use overlaid filters like the one used in Nightentfell: Shared AR
- Add objects that hint at the type of environment the story takes place in, for example, trees
- Add weather effects like rain, fog or snow
- Add relevant background sound or music
Important note: these are just a few examples, and it’s usually better to work with the natural environment. These effects can work great for storytelling; however, to make them look good, we need a headset with a large field of view so that they won’t appear cropped. This is why I do not recommend using them with a headset that has a limited field of view, as doing so will just emphasize the edges of the FOV.
Summary
Storytelling in mixed reality is going to be an extraordinary experience. However, for this to happen, we need those technologies to be available, and not just that, we need to be able to use several of them together on the single platform we are developing our app for.
As you can see, I am a huge believer in mixed reality technology, even more after seeing what the Magic Leap One has to offer. I am not saying that everything that I’ve mentioned here is achievable at the moment, but many things are.
I wanted to write an article about interactive storytelling in order to inspire developers to create these types of mixed reality experiences, whether for the Magic Leap One, HoloLens, or any other mixed reality headset. I also want to encourage hardware manufacturers like Magic Leap to emphasize adding new functionality to their headsets (hardware and software) to enable different types of interaction and widen the creative possibilities of this new emerging medium.
I will expand on this article in the near future. There are plenty of things to write about.