In this Part 4 of my “Designing Great Mixed Reality Games” series, I will talk about the Magic Leap One's field of view and Viewing Frustum: how to design your game so it works in synergy with the Viewing Frustum, how to reduce the stimuli of clipped virtual objects, and how to deliver a more comfortable and immersive experience to users while working within some of the hardware's limitations. It's an interesting and very important subject that every Magic Leap developer should be familiar with.
The Magic Leap One has a horizontal FOV of 40 degrees, a vertical FOV of 30 degrees, and a diagonal FOV of 50 degrees. The human visual field covers approximately 120 degrees horizontally, but most of that arc is peripheral vision. The eye has much greater resolution in the macula, where there is a higher density of cone cells; this is the area responsible for seeing details. However, it covers only a small fraction of the eye's field of view, approximately 18 degrees.
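To make the geometry concrete, here is a minimal sketch of a frustum-containment test built from those two FOV angles. This is my own illustration in Python, not a Magic Leap API; in a real project you would use your engine's camera frustum utilities, and the `near` default of 0.37 m is a hypothetical near-clipping distance, not an official spec.

```python
import math

# Approximate Magic Leap One display FOV, per the figures above.
H_FOV_DEG = 40.0
V_FOV_DEG = 30.0

def in_viewing_frustum(x, y, z, near=0.37):
    """Return True if a point in headset space (x right, y up, z forward,
    in meters) falls inside the display's viewing frustum.
    `near` is an assumed near-clipping distance, not an official value."""
    if z < near:
        return False  # too close: in front of the near clipping plane
    # Half-width and half-height of the frustum cross-section at depth z.
    h_limit = math.tan(math.radians(H_FOV_DEG / 2)) * z
    v_limit = math.tan(math.radians(V_FOV_DEG / 2)) * z
    return abs(x) <= h_limit and abs(y) <= v_limit
```

A check like this is the basic building block for the design techniques discussed later, such as deciding when a character has wandered off-display and a hint should be triggered.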
When designing games, or any app for that matter, you need to take that viewing limitation into account, and we'll soon talk about how to do that.
Here is a video by Robert McGregor that gives you a closer look at what it actually feels like to wear the glasses. The image quality is poor and, of course, the camera is positioned a bit further away than our eyes would be (it also has a different focal length), but you can clearly see that the virtual content is displayed only on a small rectangular portion of the lens.
When the Magic Leap One launched, one of the main criticisms was the limited field of view of its optical system, the part through which the lightfield objects are viewed. For some people, it recalls the visual experience that was very prominent in the early days of VR, where you felt like you were watching the experience through a window. The ML1 isn't quite like that, but when users put on the headset, they discover that content which falls outside the Viewing Frustum (the three-dimensional space within which lightfield objects are viewed) is clipped out or gradually fades towards the edges (depending on the implementation).
Although the field of view was regarded as better than that of the Microsoft HoloLens, it still felt quite limited. People expected to see the augmented content appear in a much wider cone, regardless of whether the eye can see it sharply, the same way you experience things around you in real life.
However, the Magic Leap One Viewing Frustum isn't as big a problem as some people might lead you to believe. Although I myself would prefer a larger FOV, the effect on the mixed reality experience depends not just on the raw hardware specs but also on how you design your app: the distance of objects from the user, their size, how you guide the user's attention to interactable content, the arrangement of content within the 3D space, gameplay area limitations, the ability to offer manual control over the size and location of objects, the interaction between objects, and so on.
Let's not forget that even right now, as you look at this page, only a limited amount of content is clearly visible and in focus. So even though there are other objects around you, your main focus is on a limited area.
That being said, we are talking about designing games. Having background elements or animations in the near or mid-peripheral field of view can add a visual layer that helps create an atmospheric feeling, or serve as a directional cue (a “peripheral indicator”) hinting that something is going on in a particular direction or area of the 3D space (spatial audio works well for this too). For example, when developing a horror experience, imagine a shadowy entity quickly moving through the peripheral vision area and suddenly vanishing when you turn around. Using that area can also create a perception of continuity.
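The "peripheral indicator" idea can be reduced to a simple bearing calculation: is the target inside the horizontal display FOV, and if not, on which side should the cue (a glint, a spatial audio ping) be played? The following is a hedged Python sketch of that decision, with my own simplified 2D world model (user at the origin, yaw in degrees); it is not from any Magic Leap SDK.

```python
import math

H_FOV_DEG = 40.0  # approximate horizontal display FOV, per the article

def peripheral_cue(head_yaw_deg, target_x, target_z):
    """Decide which side to play a directional hint for a target at
    (x, z) in world space, for a user at the origin facing
    `head_yaw_deg` (0 = along +z, positive = turning right)."""
    bearing = math.degrees(math.atan2(target_x, target_z))  # world bearing
    offset = (bearing - head_yaw_deg + 180) % 360 - 180     # wrap to [-180, 180)
    if abs(offset) <= H_FOV_DEG / 2:
        return "in-view"  # already on-display, no cue needed
    return "right" if offset > 0 else "left"
```

For example, a target straight ahead returns `"in-view"`, while one 90 degrees to the user's right returns `"right"`, which is where you would spawn the glint or pan the audio.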
So obviously the goal is to have lightfield objects appear in a much larger cone, and in the future, mixed reality headsets will be able to offer that. Right now, at least with the Magic Leap One, we have that limitation, but as I said, it's not as significant as some people might lead you to think.
How significant it is also depends on how you design your app. Ignoring this specific hardware limitation can lead you to make design decisions that eventually result in a less immersive and less comfortable mixed reality experience.
It's also important to understand that when working in spatial computing, you are working in volumes and spaces. A given interaction may involve manipulating many objects, but those objects are arranged in 3D space, including along the z-axis, enabling the user to interact with them by moving around a certain location. Developers can benefit from this by using depth to organize the game level in a way that is optimized for the limited Viewing Frustum.
For example, imagine a pet simulator mixed reality game for the Magic Leap One in which the pet is around 12 inches (~30 cm) long. Regardless of how the pet moves and interacts with the room, if it is the only character in the experience, it is likely to stay within the Viewing Frustum the entire time, especially if it moves at a slow pace with predictable movement. You can use other techniques to help the user keep track of a game object's location even when it moves outside the field of view: a mini-map, audio hints, breadcrumb trails, a peripheral glint, forcing the character to stay within the area the user is looking at, and so on.
These were actually among the recommendations that Magic Leap mentions in its documentation, under the “New Experience Considerations for MR” section of the Content Guide (source). When applying animation and movement to virtual game entities, try to present them in a gentle and gradual form, so the user has time to detect the motion and rotate their view towards where the game object is located.
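"Gentle and gradual" usually comes down to capping how far an object may move per frame, so the user can always catch up with it. Here is a minimal, engine-agnostic Python sketch of that idea (most engines ship an equivalent, e.g. a move-towards helper); the function and its parameter names are my own illustration.

```python
def step_towards(current, target, max_step):
    """Move `current` (a position coordinate or an angle in degrees)
    towards `target` by at most `max_step` per frame, keeping motion
    slow enough for the user to notice and follow."""
    delta = target - current
    if abs(delta) <= max_step:
        return target  # close enough: snap to the target this frame
    return current + max_step * (1 if delta > 0 else -1)
```

Calling this once per frame with a small `max_step` (tuned to the frame rate) yields the kind of trackable, predictable movement the guideline asks for, instead of teleporting the character across the Viewing Frustum.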
This type of behavior allows the user to comfortably track the character as it moves around the 3D space and allows for better accommodation (the eye's ability to refocus on an object when its distance from you changes).
Magic Leap does offer some recommendations on how to make the clipping less noticeable by using habituation. You can reduce the stimuli of clipped virtual objects by adding high spatial-frequency noise to the edges of the entire FOV (a vignetting effect) or to surface details. Also, our brain is less sensitive to angular objects being clipped than to round ones, so you might use these recommendations to make clipping less apparent. I'm sure developers will come up with many other ways to make that clipping far less intrusive.
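In practice the vignetting part is typically a shader that fades content out near the display edges instead of letting it clip hard. As a rough, hedged illustration of the falloff curve such a shader might use, here is a Python version (the function, the smoothstep choice, and the `fade` band width are my assumptions, not Magic Leap's implementation):

```python
def edge_fade(u, v, fade=0.15):
    """Alpha multiplier for a vignette: 1.0 at the centre of the display,
    falling smoothly to 0.0 at the edges. (u, v) are normalised display
    coordinates in [0, 1]; `fade` is the width of the fade band."""
    def fade_axis(t):
        d = min(t, 1.0 - t)            # distance to the nearest edge
        x = max(0.0, min(1.0, d / fade))
        return x * x * (3 - 2 * x)     # smoothstep for a soft transition
    return fade_axis(u) * fade_axis(v)
```

Objects then dim gradually as they approach the frustum boundary rather than being sliced off, which is exactly the stimulus reduction the guideline describes; adding a little high-frequency noise on top of this band would further mask the edge.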
In general terms (and many of these suggestions apply to mobile AR as well), try to keep your objects small enough that they don't obscure the entire FOV, and consider the Near Clipping Plane limitation. This might push objects farther away from the user, in which case you can provide a virtual tool that allows the user to interact with the content at a distance.
The problem I see with virtual tools designed to extend the user's reach is that, without shadows, it's hard to estimate an object's distance; the shadow acts as a depth perception hint. The image above was taken from an augmented reality game called AR Block Party.
Now, if you use something like a laser pointer, you probably won't need a shadow, but if you want to add one that looks like what you see in the image above, you can use a technique called Negative Shadows. It basically means applying an outer glow to a sprite placed below the object that should cast the shadow. I mimicked the effect in photo editing software and the results are really convincing (see image on the right): I applied an outer glow effect to a rectangular transparent shape. As you can see, it's cheating, but your brain perceives it as a shadow. I learned about this effect only yesterday; after learning that the Magic Leap One, due to its optics technology, cannot have virtual objects drop shadows, I was looking for a solution, and a developer gave me a link where I could read about this one.
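To show what such a fake-shadow sprite amounts to, here is a small Python sketch that generates a radially fading alpha map you could place under an object. This is my own approximation of the effect described above, done in plain code rather than a photo editor; the size, radius, and quadratic falloff are arbitrary choices.

```python
def negative_shadow(size=32, radius=0.8):
    """Generate a square alpha map approximating the 'negative shadow'
    trick: a soft, radially fading blob rendered under an object.
    Returns a size x size grid of opacity values in [0, 1]."""
    centre = (size - 1) / 2
    grid = []
    for row in range(size):
        grid.append([])
        for col in range(size):
            # Normalised distance from the centre (1.0 = sprite corner reach).
            d = (((row - centre) ** 2 + (col - centre) ** 2) ** 0.5) / centre
            t = max(0.0, 1.0 - d / radius)  # linear falloff to the rim
            grid[row].append(t * t)         # squared for a softer glow
    return grid
```

Used as the texture of a flat quad on the floor beneath the object, the result reads as a contact shadow even though nothing is actually darkened, which is why the trick works on an additive display.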
In the next chapters, I will shed some more light on keeping content within the comfort zone, the optimal zone in which holograms should be placed for maximum comfort, and more.
If you follow these design practices, your game should be well received by those who play it, especially those discovering mixed reality gaming for the first time. The game should have a gradual flow that adapts well to the hardware's limitations and to the deficiencies of human vision: a gameplay experience that feels comfortable, enables gradual adaptation, reduces cognitive overload, and delivers an immersive mixed reality experience to your users. This, of course, requires heavy testing to make sure that flow is maintained throughout the gameplay session.
This doesn't mean you can never make the game fast-paced or its movement unpredictable, but try to work within these constraints as much as possible. The constraints weren't designed to limit your creativity; they exist because the technology isn't quite there yet to enable a greater degree of development freedom. It will get there eventually.
It reminds me of some AR games I've played. Many developers wanted to take advantage of AR to deliver 360-degree gameplay experiences, but in practice, many of those experiences felt very uncomfortable and tiring. Some apps were designed so that users needed to hold their hands in a raised position, which, in long play sessions, led to arm fatigue and made the AR experience unbearable. When designing games for a specific device, you need to be aware not just of the device's limitations, but of people's as well, making a game that is accessible and usable to as many people as possible. At the end of the day, you want your game to succeed and to profit from your hard work. To do so, you need an understanding of different game design principles, especially when tackling a new technology. Failing to do so will simply lead your game to fail, and you might not even know why people uninstall it.
The topic of the Magic Leap One field of view is something every developer should be aware of. As always, I highly recommend reading the official guide on creator.magicleap.com to get solid foundations for developing apps for the ML platform, along with recommendations and tips from the people who built it.
I am spending hours each day educating myself on topics I see as important, ones that can help me come up with better game ideas and design games that make good use of the Magic Leap One's features while being aware of its limitations and working with or around them, in order to provide users with a great mixed reality gameplay experience.
There are a lot of things that need to sink in, I know. Even what seems like a small topic, the ML1 FOV, has great implications for how developers approach game design in mixed reality. This is no different from developing for mobile AR, VR, or other mediums. Once you get the theoretical knowledge and start developing, it becomes second nature. Even after reading the entire official documentation and this article, developers will continue to come up with new ways to improve the experience and bypass some of these limitations in very creative ways.
The Magic Leap One field of view has received a lot of criticism in the media, but I think I've shown you why there is no need to worry about it and how to work with these limitations in order to deliver great MR experiences to your users.