Magic Leap One headset

New Insights about Magic Leap One and Mixed Reality

I just finished watching Magic Leap's presentation at Unite Berlin, and I want to share some of my thoughts about what I've seen and heard in the live stream. In this article I will cover some of the key points from the session and add my own opinion about several of the topics discussed there.

The Presentation

The presentation was given by Aleissia Laidacker, an Interaction Director at Magic Leap. She focuses on finding innovative ways to create rich, immersive experiences centered on player interaction and narrative. The second host was Brian Schwab, Director of the Interaction Lab, who also focuses on creating new forms of compelling interaction using Magic Leap.

First, I was really excited to see some photos of the early days of Magic Leap and how things got started. It's funny to see how big the first prototype devices were. Some of the prototype hardware was still in use in late 2016, and only in early 2017 did the team move to the latest iteration of the Magic Leap One as we see it today.

This is the actual session live stream.

Mixed Reality Is Different, Developers Will Need to Conform to Certain Precepts

The presentation starts by explaining the current VR and AR technologies and how they differ from the Mixed Reality (MR) approach that Magic Leap uses. In VR, the entire scene is virtual and blocks the user's field of view, so no part of the real world is visible. In AR, digital content is overlaid on top of the real-world scene, with little or no awareness of the objects in it.

In Mixed Reality, using meshing, spatial computing, and camera technologies, the device builds a detailed model of the world around the user, so the virtual content can interact with the real world.

Brian shared some key guidelines for developing for Mixed Reality (MR). In MR, less is more. He talked about the importance of creating digital content that is part of the real world and extends it rather than replacing it, and about keeping the pixel count low so the digital content stands out. If most of the scene is digital, the digital content, as part of the real-world scene, is less exciting. The other point is to avoid putting the user's attentional state into "screen mode".

Storytelling in MR

I liked the discussion about delivering storytelling in MR. If the entire scene can be used to place digital content, how does the user know where to look? Brian explained that the Magic Leap One headset employs eye-tracking technology, so developers know exactly where the user is looking. He compared it to how magicians use perceptual shortcuts to direct the viewer's attention to a certain area. The developer can grab the user's attention and make him or her move to the place where the story continues to evolve, for example with motion onset in the periphery or a spatialized audio cue that draws the user's attention in a specific direction.
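
To make the spatialized audio idea concrete, here is a minimal Unity C# sketch of my own (not code from the talk): it plays a fully 3D sound at the position where the next story beat happens, so the user's ears pull their gaze in that direction. The class and field names are mine.

```csharp
using UnityEngine;

// Minimal sketch: draw the user's attention toward a story beat by playing a
// fully spatialized audio cue at the target's world position.
// "attentionClip" and "storyAnchor" are hypothetical fields assigned in the Inspector.
public class AttentionCue : MonoBehaviour
{
    public AudioClip attentionClip;   // e.g. a rustle or chime
    public Transform storyAnchor;     // where the next story beat happens

    public void DrawAttention()
    {
        // Create a temporary, fully 3D audio source at the anchor position.
        var go = new GameObject("AttentionCue");
        go.transform.position = storyAnchor.position;

        var source = go.AddComponent<AudioSource>();
        source.clip = attentionClip;
        source.spatialBlend = 1f;   // 1 = fully spatialized (3D) sound
        source.Play();

        Destroy(go, attentionClip.length);
    }
}
```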

Cognitive Bandwidth and Safety Issues

Another topic was what Magic Leap calls "cognitive bandwidth". Developers need to ensure the user has cognitive bandwidth available. Unlike standard app use, where the user sits in front of a screen, in MR the user might have physical interactions such as moving along the floor or picking up objects from a high point. These interactions can draw the user's attention to one place and lead him or her to ignore objects in the real world, which can result in falling down or colliding with physical objects the user isn't aware of. This touches on safety considerations that developers need to keep in mind. The same issue exists in augmented reality, not just in MR, and it comes up in both the ARKit and ARCore design guidelines.

In my opinion, though, in Mixed Reality developers will have a tool to warn the user about an unsuitable setting, because the Magic Leap One creates a 3D map of the environment. The thing is that there are endless variations of environments an app might run in, so the proper approach, in my opinion, is to adapt to any type of environment: creating an MR experience that lessens the risk of injury and avoids other safety or UX issues.

Lots of User Input Options

Developers building apps for the Magic Leap One will have access to a lot of user input, and unlike in standard apps, some of that input isn't an explicit action made by the user. It includes data mined from the head pose (e.g. gesture, heartbeat, gait, nonverbals), hands (gestures, tracking), eyes (gaze, blinking, emotional data, micro/macro expressions), and voice (commands, sentiment).
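
To make this concrete for myself, here is a tiny, entirely hypothetical C# sketch of how those channels might be collected into one per-frame snapshot that gameplay code can react to; none of these types or fields come from the Magic Leap SDK.

```csharp
using UnityEngine;

// Hypothetical per-frame snapshot of the non-explicit input channels listed above.
public enum GazeState { Fixating, Saccading, Blinking }

public struct UserStateSnapshot
{
    public Vector3 HeadPosition;
    public Quaternion HeadRotation;
    public Vector3 GazeDirection;
    public GazeState Gaze;
    public string CurrentHandPose;   // e.g. "OpenHand", "Fist", "Point"
    public float EstimatedArousal;   // derived from heart rate, expressions, etc.
    public string LastVoiceCommand;  // last recognized voice command, if any
}
```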

Magic Leap will aggregate the information collected from users of an app and can surface insights such as showing the developer a location where people mostly feel joy. This enhanced user context is really huge and gives developers the power to build unique types of experiences like never before. I see it as a kind of analytics for Mixed Reality: it takes all that input, analyzes it, and puts it in context so developers can enhance, optimize, or extend their experiences based on that valuable information. This is powerful stuff; we have never had so much information about the current state of the user/player during an experience.

This might also make things more complicated for some developers because there is much more user input. You are no longer developing a game by assigning actions to a limited number of buttons; the input sources have multiplied many times over. Furthermore, it's not just about knowing the new input options, it's about knowing how to use them well in order to deliver more compelling experiences.

I assume that Magic Leap created the "Geo/Temporal Information" feature to make it easier for developers to analyze and understand that information in context. Later on, Brian mentioned "Magic Kit", which should include source code and examples that developers can start working with. That is probably a good place to start experimenting with the new user inputs and to see good ways of implementing them in apps and games.

Complexities in Developing Apps for Mixed Reality

Aleissia started talking about the core issue of developing a Mixed Reality experience: it is going to take place in many different spaces. The first question developers might ask is how you can actually make sure the experience adapts and runs well in every place it is used.

Designing experiences for the real world is completely different from designing standard apps. In a standard app, you start with an empty canvas and design the place where the app takes place.

I see this as one of the most important issues developers will have to tackle. Just think about it for a second. Without any complementary accessory, or without knowing where the app is going to be used, you'll have a hard time designing an app that can fully utilize the environment. You can't make an app that adapts to the unlimited number of physical objects that might exist in the space where it runs. Just imagine developing an app for outdoor use, where the variety of objects to interact with is much larger.

I assume that some apps will be designed around interactions with common household objects. For example, if there is a character in the app, the character might interact with a couch, a chair, walls, etc. Others might create interactions that are more abstract and can work with a wide variety of object shapes and surfaces; if I remember right, one of the first demonstrations uses that type of design. Another option is to focus on simple interactions with characters, which puts more emphasis on the vast amount of user input that the Magic Leap One headset provides.
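
As a rough illustration of the "household objects" idea, here is a small C# sketch of my own: given horizontal surfaces reported by whatever plane or mesh detection the platform provides, it picks one at roughly seat height so a character could sit on it. The DetectedSurface type and the height range are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical container for surfaces reported by plane/mesh detection.
public struct DetectedSurface
{
    public Vector3 Center;
    public Vector3 Normal;
}

public static class SeatFinder
{
    // Seat heights in most homes fall roughly between 0.35m and 0.6m above the floor.
    public static bool TryFindSeat(List<DetectedSurface> surfaces, float floorY,
                                   out Vector3 seatPosition)
    {
        foreach (var surface in surfaces)
        {
            bool isHorizontal = Vector3.Dot(surface.Normal, Vector3.up) > 0.95f;
            float height = surface.Center.y - floorY;
            if (isHorizontal && height > 0.35f && height < 0.6f)
            {
                seatPosition = surface.Center;
                return true;
            }
        }
        seatPosition = Vector3.zero;
        return false;
    }
}
```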

Magic Leap has developed a toolkit called the Environment Toolkit that simplifies certain development tasks, and I'll talk about it later on.

Just a Game Example I Thought About

It will definitely be easier for developers to create experiences that target the built-in advanced user input and combine it with general environment mapping (not object-specific). For example, you could create a game with a troll character whose goal is to scare you. The troll moves around and hides behind, above, or below physical objects in the real world. You have to search for him: if you spot him before he sees you, you get a point. If he sees you and manages to scare you (based on contextual input like heart rate and facial expression), you lose a point.

This could be a cool game that can be played in any physical location. Obviously, though, it won't be great in an empty space, because the digital creature won't have anywhere to hide. What you can do as a developer is add digital objects to the scene itself. Remember, it's Mixed Reality, so they will blend almost seamlessly with the environment. You can of course control that in different ways and make it an optional feature in your game.
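
Here is a small C# sketch of how I imagine the hiding logic could work, using nothing more than a physics raycast against the scanned world mesh (assumed to have colliders on a "WorldMesh" layer, which is my own naming): a spot counts as a hiding place if real-world geometry blocks the line of sight from the user's head.

```csharp
using UnityEngine;

// Minimal hide-and-seek check: a candidate hiding spot is valid if the world
// mesh blocks the line of sight from the user's head to that spot.
public static class HidingSpotCheck
{
    public static bool IsHiddenFromUser(Vector3 spot, Transform userHead)
    {
        int worldMeshMask = LayerMask.GetMask("WorldMesh");   // assumed layer name
        Vector3 toSpot = spot - userHead.position;

        // If the ray from the head toward the spot hits real-world geometry
        // before reaching the spot, the spot is occluded (a good hiding place).
        return Physics.Raycast(userHead.position, toSpot.normalized,
                               toSpot.magnitude, worldMeshMask);
    }
}
```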

Developer Stuff

When I first read about the technology on Twitter, I didn't really understand much. I am a web developer with some experience in game development. The presentation at Unite Berlin definitely helped me understand the technical side of things. That matters to me, because having a good understanding of the technology and how things work behind the scenes makes it easier not just to understand how Magic Leap works, but also to see the creative possibilities and know the technology's limitations.

For example, BlockMesh is one of the terms introduced during the event. To quote: "BlockMesh spatially subdivides the real world into a set of cubic blocks, axis aligned with the coordinate system origin of the current head tracking map." The other cool thing is that the meshes of neighboring blocks are not connected, which means that when the environment changes, only the blocks covering that region are updated rather than the entire mesh, enabling real-time mesh updates. So if, for example, you move a physical chair from one place to another, only the affected blocks will be reconstructed, not the whole mesh structure.
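
Here is a minimal C# illustration of that block idea as I understand it (my own code, not Magic Leap's implementation): keep one mesh object per block ID and rebuild only the blocks the meshing system reports as changed.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Cache one mesh GameObject per block ID; only changed blocks get rebuilt.
public class BlockMeshCache : MonoBehaviour
{
    private readonly Dictionary<string, GameObject> blocks = new Dictionary<string, GameObject>();

    // Called with the IDs and new mesh data of the blocks that changed this update.
    public void OnBlocksUpdated(Dictionary<string, Mesh> changedBlocks)
    {
        foreach (var pair in changedBlocks)
        {
            if (!blocks.TryGetValue(pair.Key, out var go))
            {
                go = new GameObject("Block_" + pair.Key);
                go.AddComponent<MeshFilter>();
                go.AddComponent<MeshCollider>();
                blocks[pair.Key] = go;
            }

            // Only this block's mesh is replaced; all other blocks stay untouched.
            go.GetComponent<MeshFilter>().sharedMesh = pair.Value;
            go.GetComponent<MeshCollider>().sharedMesh = pair.Value;
        }
    }
}
```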

Aleissia also mentioned that the meshes generated by the spatial mapping system are much noisier than their real-world equivalents. Meshes are not generated perfectly, so they don't align exactly with the real-world objects as we see them with our own eyes. This requires some adjustments to a humanoid character's navigation settings, such as the Step Height, Max Slope, and Height of the virtual character, to make sure it won't simply walk on top of objects in the scene, like bags, tables, etc. By default, Unity's values fit standard games, so they will need to be adjusted for Mixed Reality.
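
For reference, these are the Unity NavMesh bake settings she was referring to; the values below are my own guesses for a noisy, room-scale scan, not numbers from the talk.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch of tightening the agent bake settings for a noisy world mesh.
public class MRAgentSettings : MonoBehaviour
{
    void Start()
    {
        // Copy the default bake settings and adjust them for Mixed Reality.
        NavMeshBuildSettings settings = NavMesh.GetSettingsByID(0);

        settings.agentHeight = 1.6f;   // rough human height for a life-size character
        settings.agentClimb  = 0.1f;   // "Step Height": don't step onto bags, books, etc.
        settings.agentSlope  = 20f;    // "Max Slope": reject steep, noisy mesh ramps
        settings.agentRadius = 0.3f;

        // These settings would then be passed to NavMeshBuilder.BuildNavMeshData(...)
        // when baking a NavMesh over the scanned world mesh at runtime.
    }
}
```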

Another interesting part was about pre-scanning the environment. One option is to ask the user to scan the environment, and once the scan is sufficient, alert the user that it's finished and start the MR experience. A second option is to update the mesh at a specific point, for example after asking the user to bring a certain object into the scene, like a chair; then you can trigger an explicit mesh update.

A third option is continuous mesh updates. This might be needed when the user is in a large room with lots of moving objects and the mesh needs to be updated continuously to reflect changes to physical objects in the real world. The developer can enable this kind of periodic update. Keep in mind that rebaking your NavMesh hurts performance, so you need to design your app with that in mind so it doesn't degrade the user experience.
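
To summarize the three strategies in code, here is a small C# sketch of my own; RequestMeshUpdate is a placeholder for whatever call the platform's meshing system actually exposes.

```csharp
using System.Collections;
using UnityEngine;

// The three mesh update strategies described above, as a simple controller.
public enum MeshUpdateMode { ScanOnceThenFreeze, OnDemand, Periodic }

public class MeshUpdateController : MonoBehaviour
{
    public MeshUpdateMode mode = MeshUpdateMode.ScanOnceThenFreeze;
    public float periodicIntervalSeconds = 5f;   // trade freshness against cost

    void Start()
    {
        if (mode == MeshUpdateMode.Periodic)
            StartCoroutine(PeriodicUpdates());
    }

    // Option 2: call this explicitly, e.g. after the user brings a chair into the room.
    public void RequestExplicitUpdate()
    {
        RequestMeshUpdate();
    }

    // Option 3: keep refreshing, but not every frame, since rebaking is expensive.
    IEnumerator PeriodicUpdates()
    {
        while (true)
        {
            RequestMeshUpdate();
            yield return new WaitForSeconds(periodicIntervalSeconds);
        }
    }

    void RequestMeshUpdate()
    {
        // Placeholder for the platform-specific meshing / NavMesh rebake call.
        Debug.Log("Mesh update requested");
    }
}
```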

The Environment Toolkit (Magic Kit)

The Environment Toolkit is an important tool that gives developers a better understanding of the structure of the real-world scene and builds affordances on top of it. For example, after a scan of the environment, developers will be able to know where the seating locations are, the hiding spots, and the room corners. It will also deliver information about placements where content can go, including navigable areas, accessible planes, open space areas, etc.
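
Purely as an illustration of what querying those affordances could look like from gameplay code, here is a hypothetical C# sketch; none of these type or method names come from the Environment Toolkit.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical affordance query interface, loosely inspired by the categories above.
public enum AffordanceType { Seat, HidingSpot, RoomCorner, OpenSpace, AccessiblePlane }

public struct Affordance
{
    public AffordanceType Type;
    public Vector3 Position;
    public Vector3 Normal;
}

public interface IEnvironmentAffordances
{
    // Returns every affordance of the given type found in the scanned scene.
    IReadOnlyList<Affordance> Query(AffordanceType type);
}

public static class TrollPlacement
{
    // Example use: spawn the troll at the first hiding spot the toolkit reports.
    public static bool TryGetSpawnPoint(IEnvironmentAffordances env, out Vector3 point)
    {
        var spots = env.Query(AffordanceType.HidingSpot);
        point = spots.Count > 0 ? spots[0].Position : Vector3.zero;
        return spots.Count > 0;
    }
}
```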

This is another huge feature that can further simplify development. Without any contextual understanding of the environment, we can't have common events, because every scene is different from the next. It's like having a PageLoad function when a place loads, or understanding the inner regions of a browser window. Unlike a web page, where the structure and its events are well defined by design, in MR things get more complicated. This is why an in-depth analysis of the scene is needed. By analyzing the real-world scene, developers can assign events and add interactions to those well-defined areas.

With Mixed Reality, many interactions will be based on physical location and on objects in the real world. The more we understand about the environment, the better we can introduce both common and very specific interactions. I'm sure third parties will introduce their own scene analysis features to extend what Magic Leap's Environment Toolkit offers, so they can build unique experiences that stand out from the rest. As in many programming frameworks, you always have the common functions, and those can be extended further based on the developer's needs. I'm pretty sure Magic Leap will have its own interface for extending that part of the Magic Kit's functionality.

Hand Gestures

Hand gestures are something I am really excited about. Unlike current mobile AR, where your hands are occupied holding the device itself, with Magic Leap your hands are free to interact with the environment. You can use a controller, but it's optional.

Magic Leap has a built-in hand gesture detection feature that can recognize a set of hand poses, and it ships with several default poses you can use in your project.

Magic Leap showed a nice example of a virtual character waving back when the user waves at it. We can also see how the head pose is used: as the user walks around the scene, the character's head orientation is programmed to follow the user's head position.
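
The head-follow part is easy to sketch in Unity, since the main camera tracks the user's head in an MR app; this is my own minimal version, with headBone standing in for the character's head transform.

```csharp
using UnityEngine;

// Make a character's head bone look at the user's head (the main camera).
public class HeadFollowUser : MonoBehaviour
{
    public Transform headBone;     // the character's head transform (assigned in the Inspector)
    public float turnSpeed = 5f;   // higher = snappier head turns

    void LateUpdate()
    {
        Vector3 toUser = Camera.main.transform.position - headBone.position;
        Quaternion target = Quaternion.LookRotation(toUser);

        // Smoothly rotate toward the user instead of snapping, which looks more natural.
        headBone.rotation = Quaternion.Slerp(headBone.rotation, target,
                                             turnSpeed * Time.deltaTime);
    }
}
```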

I was also excited to see that the Gesture API gives access to information about a specific finger via the MLFinger class, joint information about the wrist via the MLWrist class, and even information about the thumb via MLThumb, along with different key points on the fingers. Overall, developers will have great control over hand gestures, which can help them use different gestures or customize them to better fit their Mixed Reality experiences.
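
I don't know the exact shape of the MLFinger/MLThumb API, so rather than reproduce it, here is a generic C# sketch of the kind of logic those finger key points enable, assuming you can read the thumb-tip and index-tip positions from them.

```csharp
using UnityEngine;

// Simple pinch check built from two finger key points (positions assumed to
// come from whatever keypoint accessors the SDK exposes).
public static class PinchDetector
{
    // Returns true when thumb tip and index tip are close enough to count as a pinch.
    public static bool IsPinching(Vector3 thumbTip, Vector3 indexTip,
                                  float pinchThresholdMeters = 0.02f)
    {
        return Vector3.Distance(thumbTip, indexTip) < pinchThresholdMeters;
    }
}
```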

Input Pose (Non-verbal Communication Movement)

Brian talked about input poses; you can see it in the video. What really excited me is the option that lets the developer access information about the correlation between feature points during the transition between different poses.

I can imagine a spell-casting Mixed Reality game for the Magic Leap One where certain spells are cast by analyzing the transition between two or more poses. This feature is accessible to developers via the Pose Manager.
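
Here is how I imagine the pose-transition idea could be wired up, as my own C# sketch rather than the Pose Manager API: a spell is recognized when a specific sequence of hand poses is completed within a time window. The pose names are hypothetical labels from whatever pose detector you use.

```csharp
using UnityEngine;

// Recognize a "spell" as a specific sequence of hand poses completed in time.
public class SpellSequenceDetector : MonoBehaviour
{
    public string[] spellSequence = { "OpenHand", "Fist", "OpenHand" };
    public float maxSecondsBetweenPoses = 1.5f;

    private int progress;
    private float lastPoseTime;

    // Feed this from the pose detection callback each time a new pose is recognized.
    public bool OnPoseDetected(string poseName)
    {
        // Reset if the user took too long between poses.
        if (progress > 0 && Time.time - lastPoseTime > maxSecondsBetweenPoses)
            progress = 0;

        if (poseName == spellSequence[progress])
        {
            progress++;
            lastPoseTime = Time.time;

            if (progress == spellSequence.Length)
            {
                progress = 0;
                return true;   // full sequence matched: cast the spell
            }
        }
        else
        {
            progress = 0;
        }
        return false;
    }
}
```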

The ability to detect such a wide range of motion inputs with such high accuracy enables unique interactions like never before. Imagine, in that same spell-casting game, having to make an angry face and squeeze your fingers to release a dark-magic fireball. To cancel the action, you might simply blink twice or wave your hand. Alternatively, you could add a voice command that interrupts certain actions.

The level of analysis of body poses is unparalleled compared to anything I've seen before, and it all happens naturally with your own body. With the Magic Leap API you can detect hand flicking, swiping, stroking, and more. You can detect eye fixation, dwelling, saccades, and eye rolling. You can detect whether the user nods or shakes his or her head, and more.

Magic Leap also talked about understanding intent (what the user is targeting) based on the correlation between the input pose/motion data streams. They also brought up the user's focus area, which can likewise indicate intent. The user's focus also includes range determination: nearfield and midfield are within a 3m range (within reach), and farfield is beyond that.

The idea is to create, alongside the eye-focus data, a contextual targeting ("cursor") vector that indicates what the user's intent is and which action should follow. The ability to use such a large amount of user input gives developers a more precise way to predict the intended action, rather than triggering an action the user didn't want.

For example, an outstretched arm might suggest intent to interact with a faraway object, whereas pointing down with the arm half extended might suggest interaction with a nearfield object (1m or less).
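
Here is a rough C# sketch of that heuristic as I understand it (my own guess, not Magic Leap's input model): classify the target range from how far the arm is extended, and build a simple targeting ray by blending the gaze direction with the arm direction. All thresholds are assumptions for illustration.

```csharp
using UnityEngine;

// Combine gaze direction with arm extension to pick a target range and a targeting ray.
public enum TargetRange { Nearfield, Midfield, Farfield }

public static class IntentTargeting
{
    // armExtension: 0 = hand at the shoulder, 1 = arm fully outstretched.
    public static TargetRange ClassifyRange(float armExtension)
    {
        if (armExtension > 0.8f) return TargetRange.Farfield;    // reaching out -> far object
        if (armExtension > 0.4f) return TargetRange.Midfield;    // within ~3m, "within reach"
        return TargetRange.Nearfield;                            // ~1m or less
    }

    // A crude targeting "cursor": a ray from the head along a blend of the
    // gaze direction and the shoulder-to-hand direction.
    public static Ray TargetingRay(Vector3 headPos, Vector3 gazeDir,
                                   Vector3 shoulderPos, Vector3 handPos)
    {
        Vector3 armDir = (handPos - shoulderPos).normalized;
        Vector3 blended = (gazeDir.normalized + armDir).normalized;
        return new Ray(headPos, blended);
    }
}
```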

Magic Leap built an input model that makes the targeting process easier for developers to implement in their MR applications.

Summary

There is much more than this, but the session definitely gave me the information I needed to better understand how to develop Magic Leap One experiences: the constraints, the available APIs, user input in Mixed Reality, the tools, and more.

I am worried about how developers will be able to build rich reaction systems that take all of that available input into their apps. The thing is that, at least according to Aleissia, this is very important, and failing to act on certain inputs can "break" the immersion of the MR experience.

I think this might lead some developers to build a unified behavioral cloud system, accessible via an API, that helps with building advanced character behavior systems that can adapt to many types of external stimuli.

Aleissia mentioned the complexity of implementing an animation system that adapts to different environments. She talked about conforming to animation fail states, which makes things more complicated for developers, but it is one way of addressing some of the issues they have to deal with.

I think that further down the road, AI features based on deep learning might help suggest user actions instead of developers building conditional statements that target specific fail states.

The more I've learned, the more I feel for the developers making their first attempt at developing rich Mixed Reality experiences for this platform. Of course, that doesn't mean every MR app will suffer from these complex design paradigms, but I think big, ambitious projects will definitely have to put some serious thought into their game design in order to deliver fun and immersive MR experiences.