Enhancing Perception of Depth in Augmented Reality

I’ve been testing out many augmented reality games over the past few months. What excited me most about AR in these games is when I actually got to observe virtual objects looking like they are part of the real-world space. Of course, this is nothing new; this is what makes AR interesting in the first place. However, even with that in mind, some apps that I used have delivered a mediocre AR experience.

In this article, I want to talk about how to achieve a better perception of depth and presence in augmented reality apps. A good perception of depth helps deliver a better visual presentation of virtual objects, so they appear to be an integral part of the real world, and it makes for a more exciting user experience as a whole.

If you have anything to add, please comment at the bottom of the article and I’ll do my best to update this guide to make it as accurate, detailed and helpful as possible.

As you know, in some games, some of those virtual objects might not even be placed on a surface, but float in the air. Some games that I’ve played didn’t feel immersive because they looked flat, like regular games played over a projected real-world background.

There are a few things you can do to make virtual objects blend better with the environment, make them appear like an integral part of the world, and give the scene a better sense of depth and the objects in it a better sense of size and location.

Occlusion

Occlusion is the ability to make a virtual object appear behind physical objects in the real world. It comes by default in any mixed reality framework, but currently most AR frameworks do not support it.

This is probably the feature I’m waiting for the most in ARKit. As it stands, every virtual object you see on the screen is drawn in full, no matter whether a physical object in the real world is supposed to be located in front of it. With occlusion, if, for example, an object is placed on the floor and you move a table over it, it would appear hidden or partially hidden, depending on the angle you view it from.
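
Until that arrives, you can fake occlusion for real-world geometry you already know about. Below is a minimal SceneKit sketch of the common workaround (not an official ARKit feature): a plane that writes depth but no color, so the camera feed shows through it while virtual objects behind it are hidden. You would have to size and position it over the real object yourself.

import SceneKit

// Sketch: an "occluder" plane that hides virtual content behind it while
// staying invisible itself, letting the camera feed show through.
func makeOccluderNode(width: CGFloat, height: CGFloat) -> SCNNode {
    let plane = SCNPlane(width: width, height: height)
    let material = SCNMaterial()
    material.colorBufferWriteMask = []    // draw no color at all...
    material.writesToDepthBuffer = true   // ...but still write depth, so it occludes
    plane.materials = [material]
    let node = SCNNode(geometry: plane)
    node.renderingOrder = -1              // render before the regular content
    return node
}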

Use of shadows

The use of shadows can help the user tell whether something is touching the surface or hovering above it. Without them, it’s hard to know whether the object is on the ground or floating in mid-air.

Authentic 3D character shadows, W.AR game screenshot

You can also use back shadows, cast on the back wall or on other objects behind the virtual object, to help convey depth. The problem with front-lit shadows is that they work well only if the framework supports vertical plane detection; without it, you are going to see shadows in inappropriate places, like in the sky when playing a game outdoors. It is possible, I guess, to restrict the shadows so they appear only on other objects within the virtual scene.

Let me give you an example. I was playing around with Rampage AR Unleashed. When I positioned the monster on the building, the shadow was designed to be cast sideways onto the building, so it appeared that the monster was climbing it. However, when I moved the monster to the top of the building, part of the shadow was drawn onto the sky, outside the building. The thing is, the app isn’t aware of the building: there is no 3D mapping or computer vision algorithm that can detect it and mask out the shadow.

The same goes for casting shadows on surfaces, where the shadow falls on areas that shouldn’t have any. With 3D mapping or other advanced computer vision algorithms, virtual objects can be better “mixed” with the real-world scene, with realistic placement of shadows.

It’s also worth mentioning that shadows might not be very clear on certain surfaces, and a shadow becomes small or faint when the object is positioned far above the surface or is very small itself. I’ve seen some apps highlight the area beneath an object during interaction to make it clearer where the item is located relative to the surface.
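
For completeness, here is a rough sketch of how drop shadows are typically done in ARKit with SceneKit: a directional light plus an invisible “shadow catcher” plane that renders nothing but the shadows cast onto it. The light angle, plane size and shadow opacity below are arbitrary choices, not values from any of the apps mentioned.

import SceneKit
import UIKit

// Sketch: a directional light plus an invisible "shadow catcher" plane,
// so virtual objects cast a soft shadow onto the detected surface.
func addShadowSetup(to scene: SCNScene, floorY: Float) {
    let light = SCNLight()
    light.type = .directional
    light.castsShadow = true
    light.shadowMode = .deferred                      // needed for shadow-only surfaces
    light.shadowColor = UIColor(white: 0, alpha: 0.5) // soft, semi-transparent shadow
    let lightNode = SCNNode()
    lightNode.light = light
    lightNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)  // angled downwards
    scene.rootNode.addChildNode(lightNode)

    let plane = SCNPlane(width: 3, height: 3)         // size is arbitrary here
    plane.firstMaterial?.lightingModel = .shadowOnly  // render shadows, nothing else
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2                // lay the plane flat
    planeNode.position.y = floorY
    scene.rootNode.addChildNode(planeNode)
}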

Object Location and Animation

Another way to give your app a better sense of depth is to animate objects moving through 3D space, rather than keeping them fixed in place facing the user.

Imagine a character in mid-air facing the user. Changing its size or moving it back along the z-axis will yield the same visual result: the character appears to get smaller.

The only way to know which is which is to get a different angle or watch the shadows, so you have a reference that shows its position relative to other objects in the room. Using a drop-shadow effect leads to better spatial perception of aerial objects.

The problem with AR apps that use neither occlusion nor shadows is that even if a virtual object is designed to be placed underneath the table, that doesn’t mean it is actually positioned below the table’s surface along the z-axis. If you lower the camera a bit, a faraway object that lies outside the area of the table will appear as if it’s on the table, while with occlusion it would appear hidden behind it.

If you have floating objects (ones that do not attach to surfaces) with no occlusion or drop-shadow effect, animating the virtual object through 3D space diagonally along the z- and x-axes can give a better sense of depth. Having a fixed-in-place virtual object as a reference point can also lead to a better perception of depth.
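
As a quick sketch of that idea in SceneKit (the node, distances and timing here are placeholders), a repeating diagonal drift could look like this:

import SceneKit

// Sketch: drift a floating character diagonally along the x- and z-axes and
// back again, so motion parallax reveals its true position in depth.
func addDepthDrift(to characterNode: SCNNode) {
    let drift = SCNAction.move(by: SCNVector3(0.4, 0, -0.6), duration: 2.0)
    let loop = SCNAction.repeatForever(.sequence([drift, drift.reversed()]))
    characterNode.runAction(loop)
}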

Scene staging

According to Google’s AR design guidelines, the developer needs to consider comfortable viewing ranges, with the so-called “center stage” considered the most comfortable viewing range and optimal for interaction.

I also think it’s important to make sure that the objects themselves are rotated in a way that lets you perceive the depth of the object itself. A 3D object viewed from one angle can look flat, while at another angle its true 3D shape is revealed. This is something to keep in mind when designing objects for the game and staging the game scene for augmented reality.
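
Google doesn’t publish one magic number for this, so treat the thresholds in the following sketch as placeholders; the point is simply that a distance check against the camera is enough to tell which staging zone an object sits in.

import SceneKit
import simd

// Sketch: classify an object's distance from the camera into staging zones.
// The 0.5 m / 2 m thresholds are assumptions, not official values.
func stagingZone(for node: SCNNode, camera: SCNNode) -> String {
    let distance = simd_distance(node.simdWorldPosition, camera.simdWorldPosition)
    switch distance {
    case ..<0.5: return "too close"
    case ..<2.0: return "center stage"  // comfortable to view and interact with
    default:     return "too far"
    }
}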

A visually well-defined perspective

This means creating connected visual depth cues that produce a sense of depth and a 3D effect.

Once a surface is detected, ARKit will anchor virtual objects so their placement and orientation stay consistent with the camera’s as you move. However, the design of the scene itself can provide a better sense of depth.

In the Lego AR Studio game, it was beautiful to see the custom Lego train track in AR. Although I was the one who built the track, its visual design, with a faraway point connected to a spot close to where I stood, helped create a great sense of depth. It’s like taking a white sheet of paper and drawing a few lines to create perspective; it helps convey that 3D look. As you can see in the image below, the rails didn’t even have drop shadows, yet you still get a good sense of the depth of the virtual 3D scene.

Lego train track in augmented reality

I also noticed that AR games that are confined to a well-defined frame, or that have many virtual objects in them, give better depth perception. It doesn’t have to be strictly a frame; objects that help define the gameplay space can also serve as reference points for other moving virtual objects within the game, further increasing the sense of depth.

Orbu ARKit game screenshot

Encourage Movement

Not all AR games require the user to move around. Some games, like AMON, AR Bounce, ARise and others, require the user to move in physical space as part of the game’s puzzle-solving mechanics.

Even if this is not a necessary part of your game, adding something that encourages the user to move can give a better understanding of the scale, size and location of the objects in the scene. For example, in a shooter you can make it so the user has to move a bit to the sides to dodge incoming projectiles, or place a power-up that requires getting close to it in order to pick it up.
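
A hypothetical version of that power-up mechanic, assuming an ARSCNView and a node named “powerUp” (both made up for this example); you would call this once per frame, for instance from SCNSceneRendererDelegate’s renderer(_:updateAtTime:).

import ARKit

// Sketch: collect the power-up once the user has physically walked
// within ~30 cm of it.
func checkPowerUpPickup(in sceneView: ARSCNView) {
    guard let cameraNode = sceneView.pointOfView,
          let powerUp = sceneView.scene.rootNode.childNode(withName: "powerUp",
                                                           recursively: true)
    else { return }
    let distance = simd_distance(cameraNode.simdWorldPosition,
                                 powerUp.simdWorldPosition)
    if distance < 0.3 {
        powerUp.removeFromParentNode()  // picked up; grant the bonus here
    }
}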

Well-designed Interaction

AR Block Party, hamburger theme

I remember playing the game AR Block Party. One of my main criticisms was that it was hard for me, even with front-lit shadows, to tell when the item was about to hit the block. The main problem was that I couldn’t control the location of the item (e.g. a pencil or a banana), and from that particular viewing perspective it was hard to judge the item’s distance from the block.

So although the scene was designed well, interacting with it was a nightmare. If I had to do the same in the real world, to get a good understanding of the distance between the item in my hand and the blocks, I would probably move my head up a bit to get a more top-down view, so I could see where the pencil is relative to the blocks; that’s something I couldn’t do in the game. So the perception of depth is not just about the objects in the scene, but also about the way you interact with it using virtual objects.

On-surface location hint

When moving virtual objects from a menu onto the scene, even without using shadows, it’s good practice to provide some sort of placement marker that travels along the surface and shows the user where the object will land when dropped. Of course, adding shadows will make this indicator much clearer and also give a better estimate of where the object is located along the y-axis relative to the surface it’s floating over.
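
Here is a sketch of such a marker using ARKit’s plane hit testing; the marker node and the choice of the screen’s center point are assumptions for the example.

import ARKit

// Sketch: keep a placement marker glued to the detected surface under the
// screen's center point, hiding it when no plane is hit.
func updatePlacementMarker(in sceneView: ARSCNView, marker: SCNNode) {
    let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    guard let hit = sceneView.hitTest(center, types: .existingPlaneUsingExtent).first
    else {
        marker.isHidden = true
        return
    }
    let position = hit.worldTransform.columns.3       // plane hit point in world space
    marker.simdPosition = simd_float3(position.x, position.y, position.z)
    marker.isHidden = false
}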

Movement direction visual hint

Yesterday I reviewed a game called Nightenfell AR. The game has a dark theme, so I couldn’t see any shadows at all. Furthermore, comets were flying from the sky towards the surface.

Due to the lack of shadows and of a reference point relative to where the surface was located, it was not that easy to understand where each comet was going to land. However, the developer added a glowing trail to each comet, which hints at the direction in which it is moving. I could clearly tell that a specific comet was arriving at an angle aimed towards the close mushroom, rather than the one further away. Those directional visual hints helped me better understand the movement of the 3D objects in 3D space, considering the lack of shadows and the objects being in mid-air with no reference areas in the sky to compare their relative positions to.
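
Trails like these can be built with a particle system attached to the moving node. A rough SceneKit sketch, with all the particle values picked arbitrarily:

import SceneKit
import UIKit

// Sketch: attach a glowing particle trail to a comet node, making its
// direction of movement readable even against a dark, featureless sky.
func addGlowTrail(to cometNode: SCNNode) {
    let trail = SCNParticleSystem()
    trail.birthRate = 200                  // particles per second
    trail.particleLifeSpan = 0.5           // short life keeps the trail tight
    trail.particleSize = 0.01
    trail.particleColor = UIColor.orange
    trail.emitterShape = SCNSphere(radius: 0.02)
    cometNode.addParticleSystem(trail)     // the trail now follows the comet
}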

Let the real world be seen

I’ve played quite a few AR games that either create virtual objects that block the view of the real world behind them, or make the user play so close to the board, or at such an orientation, that I couldn’t view the real world while playing. This can hurt depth perception, especially when playing on a big flat surface.

AR Block Tower mobile game for kids

When designing an app, design it so it can be played at an angle where the user can view the real world as well. If not, what’s the point of having an AR game, if the virtual content doesn’t seem to be part of the real world? You’d be better off just using a nice creative background instead; it might look even better than seeing a wooden table texture as the background of the game.

If you don’t have shadows or anything else that contributes to a better understanding of depth in the scene, the game will turn out looking flat and boring.

Make use of the vertical space

Making good use of the vertical space can not only create a more compelling AR experience, but also better define the depth, scale, size and volume of the app.

So don’t just create a flat game; use 3D objects that take up more vertical space in the room and/or animate objects so they move up the y-axis.

RC Club iOS game screenshot

Summary

I hope you find this guide useful. I will add more things as I continue to experiment with new apps and read more about the subject. If you have anything to add or correct, please share your thoughts in the comment section below so we can open a fruitful conversation about this particular topic.

I plan to add more guides where I share my own experience, and hopefully developers can benefit from them.

If you find this article useful, please don’t forget to share it, it means a lot to me. Thank you.