Magic Leap One Hand Occlusion

Watching the Magic Leap One Sigur Ros video and this excellent Magic Leap One demo made me realize that when you interact with an app using your hands, virtual content that sits behind your hands and arms isn't occluded by them.

This doesn't surprise me, of course: hands are physical content that changes at high frequency, and the Magic Leap One can't update its mesh that fast. I'm also not sure it can mesh anything with the hands so close to the camera.

I mention this because occluding virtual content behind the hands and arms would lead to an even more believable and immersive mixed reality experience.

If you look at this video at around minute 3:45, you can see that the hand appears as a separate layer, unrelated to the virtual one. The only thing that makes the hands feel like part of the experience is the hand detection feature, which lets virtual content react to your hand movements or gestures in apps that support it. Imagine seeing a virtual cloth in the air that wraps around your hand once you pick it up, or picking up a virtual item like a ball and having the part below your palm masked, so it appears that you are holding the ball in your hands.

I first thought that the hand gesture algorithm might enable such a feature. For example, if it can recognize the hand, maybe it can remember the hand and arms’ structure and create a cached profile that can be used later for occlusion in apps that use hand recognition.

As you can see in the video above, the content always appears in front of the hand, because unless the hand is meshed, the content can't be occluded by it and appear to be behind it. This is different from meshing the environment, where once the environment is mapped, you can place virtual objects behind real physical objects.
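The underlying occlusion logic is simple once the system has a per-pixel depth estimate of the hand; the hard part is producing that depth in real time. Here is a minimal per-pixel compositing sketch in Python with NumPy (the depth maps are invented toy data, not anything the Magic Leap SDK exposes):

```python
import numpy as np

def composite(virtual_rgb, virtual_depth, real_depth):
    """Show a virtual pixel only where nothing real is closer to the camera.

    virtual_depth / real_depth: per-pixel distances in meters;
    np.inf means "nothing there".
    """
    # A real surface (e.g. a hand) closer than the virtual one occludes it.
    occluded = real_depth < virtual_depth
    out = virtual_rgb.copy()
    # Masked pixels render nothing, so the real world shows through the display.
    out[occluded] = 0.0
    return out, occluded

# Toy 2x2 frame: a virtual ball 1.0 m away, a hand at 0.4 m over the left column.
virtual_rgb = np.ones((2, 2, 3))
virtual_depth = np.full((2, 2), 1.0)
real_depth = np.array([[0.4, np.inf],
                       [0.4, np.inf]])

frame, mask = composite(virtual_rgb, virtual_depth, real_depth)
print(mask)  # left column True: those virtual pixels are hidden behind the hand
```

Without a live depth map of the hand, `real_depth` is effectively infinite everywhere, so nothing is ever masked and the content draws on top of the hand, which is exactly what the videos show.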

It might seem that this should work: the hand is a physical entity, and if it can be scanned it can be masked, letting virtual objects be occluded. From what I've seen, though, that isn't the case here.

This can be very useful in apps and games. For example, imagine holding a virtual sword or a ball, or casting a spell effect from the palm of your hand as you rotate and move it. To make hand interactions look and feel as authentic and realistic as possible, hand occlusion is needed. Occluding other parts of the body would be great too, but let's start with this.

I did a little Google search and found a post on Reddit from last month that discusses this topic.

The term "dynamic objects" is mentioned there, which is what I was talking about here. I believe that if the system is able to quickly track changing objects in the scene, detecting the hands shouldn't be a problem. A user mentioned that the HoloLens also doesn't have hand occlusion, while the Meta 2 and ZED Mini do; but those use a PC as the computing backend, which is probably why implementing it wasn't a problem. I also assume that tracking dynamic object changes requires a lot of computing power, which might have a negative impact on battery life. Since the Magic Leap One is supposed to be a self-contained mobile MR platform, battery life is a big concern. Or maybe it simply requires more computing power than the Magic Leap One's Lightpack can provide at the moment.

It’s an interesting topic that I’ll spend some time reading more about. If I find anything new I will share it with you here and on Twitter.

If any of the developers reading this have access to the SDK, it would be nice if you could shed some light on this topic. Thanks in advance.

P.S. It would be cool to play a card game with virtual cards and see them held naturally, with your thumb in front; that would look and feel real. Of course, there are lots of other examples.