How do you explain to a non-expert, casual person like myself how the Magic Leap One actually works, how it does its magic? Keep in mind that this entire article is based on information I’ve read and gathered, which doesn’t mean it’s fully accurate, correct, or that it describes well how the Magic Leap One works. It’s my attempt to learn more about the technology, get familiar with the new terms and try to make sense of it all. I don’t have the required education, and much of what I’ve read I couldn’t completely understand. However, I am a person who wants to educate himself in the field of mixed reality, so I can better understand how the technology works and have better knowledge of the current market’s technology status and where it is heading. Feel free to correct me and add your remarks. Thank you.
Let’s start this unofficial non-expert journey and see if I can figure this out by myself with the help of the web.
What is a Lightfield?
I started with the basic understanding that the holograms we see, the light that creates them, needs to be directed to the user’s eye. The light from the real world already reaches the user’s eye naturally. Now, seeing how the virtual content actually looks, kind of semi-transparent, led me to think that light constructs the holograms: somehow, the artificial light is added to the natural light from the scene, and once that combination hits the eyes, it gives the illusion that these holograms are part of the real world.
Let’s start with the little information that we get from the official Magic Leap website. There it says (source):
“.. lets in natural light waves together with softly layered synthetic lightfield.”
The first thing that I needed to understand is: what is a “lightfield”?
I needed to Google that. According to Wikipedia, a light field is a “vector function that describes the amount of light flowing in every direction through every point in space.”
So if you add that synthetic lightfield on the display and project the light rays to the user’s eyes, you basically inject and combine the light information that describes the virtual entity with the natural lightfield, thus making the virtual content appear like it’s naturally part of the environment. Magic Leap mentions on the official website: “both the real world and virtual light rays initiate neural signals..“. So it seems that this is indeed the case.
But why synthetic? Because the light fields of that virtual content are created internally by the ML glasses. To produce a light field, views must be obtained from a large collection of viewpoints. Now, if it were a real object, you would probably use a camera that circled the object, capturing the light from many viewpoints, but how do you create it synthetically?
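My naive picture of “creating it synthetically” is that instead of moving a physical camera around a real object, you render the virtual object from many virtual viewpoints. Here’s a tiny Python sketch of that idea — entirely my own simplification just to illustrate viewpoint sampling, not anything Magic Leap has described:

```python
# Toy illustration of building a synthetic light field by sampling many
# viewpoints -- my own simplification, not how Magic Leap actually does it.
# Instead of a physical camera circling a real object, we "render" a
# virtual object from a grid of virtual camera positions.

def render_view(camera_x, camera_y):
    # Hypothetical renderer: returns a single brightness value that
    # depends on the viewpoint, standing in for a full rendered image.
    distance_sq = camera_x ** 2 + camera_y ** 2
    return 1.0 / (1.0 + distance_sq)

# Sample a 3x3 grid of viewpoints; the collection of views is a
# (very coarse) synthetic light field.
light_field = {
    (x, y): render_view(x, y)
    for x in (-1, 0, 1)
    for y in (-1, 0, 1)
}

print(len(light_field))     # 9 sampled viewpoints
print(light_field[(0, 0)])  # brightest view: looking straight on
```

A real system would of course store full images (and directions) per viewpoint, but the principle of sampling many views of a purely virtual object is what I take “synthetic” to mean here.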
“Combiner” and Retina
I’ve also read the article on forbes.com that talks about how augmented reality displays work. There it says: “The optical device that combines this generated computer image with the real world is called a ‘combiner’.” That fits in beautifully with the previous description, doesn’t it? We combine the natural lightfield with the synthetic lightfield (based on the virtual image that we want to produce), which creates the full combined lightfield that passes from the retina to the visual part of the brain, which draws this combined (mixed) reality.
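One thing this helped me realize is why the holograms look semi-transparent: an optical combiner can only *add* the display’s light to the light already arriving from the real scene — it can’t subtract any. A toy per-pixel sketch of that (my own illustration, not Magic Leap’s actual pipeline):

```python
# A toy per-pixel model of an optical combiner -- my own illustration,
# not Magic Leap's actual pipeline. The key property: the combiner can
# only ADD the display's light to the light already arriving from the
# real scene; it cannot block or subtract any of it.

def combine(scene_pixel, display_pixel):
    # Sum the two light contributions, clamped to the displayable range.
    return tuple(min(s + d, 1.0) for s, d in zip(scene_pixel, display_pixel))

real_world = (0.6, 0.6, 0.6)   # grey light from the actual room
hologram   = (0.0, 0.4, 0.0)   # green light for a virtual object
nothing    = (0.0, 0.0, 0.0)   # display off at this pixel

print(combine(real_world, hologram))  # room light plus green
print(combine(real_world, nothing))   # real world passes through untouched
# Because light is only ever added, a virtual object can never appear
# darker than the real scene behind it -- which is (as far as I can tell)
# why see-through holograms look translucent and can't show true black.
```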
The retina (I don’t want to leave loose ends) is a light-sensitive layer at the back of the eyeball. When light hits its cells, it triggers nerve impulses that pass via the optic nerve to the brain, where a visual image is formed.
Exactly what is mentioned on the official Magic Leap One page: “from the retina to the visual part of the brain..“. Now those unclear words start making more sense to me 🙂
How is the Synthetic Lightfield Created and How is it Transmitted to the Front Displays?
Now my next question is what part of the Magic Leap One is projecting the imagery to the front displays?
From my understanding, based on this article on kguttag.com, the Magic Leap One uses field sequential color LCOS microdisplays. With these displays, a single full-color image is broken into color fields based on the primary colors, and each field is imaged by the microdisplay individually (source). The full-color image is perceived because these color fields are displayed in rapid sequence. This is what is called “Field Sequential Color”.
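To make sure I understood the term, I sketched the splitting-and-recombining idea in Python. This is just my illustration of the concept, not the actual LCOS drive scheme:

```python
# A toy sketch of Field Sequential Color -- my illustration of the
# concept, not the actual LCOS drive electronics. A full-color frame is
# split into red, green and blue fields that are shown one after another
# in rapid sequence; the viewer's eye fuses them into one color image.

frame = [
    [(0.9, 0.2, 0.1), (0.1, 0.8, 0.3)],
    [(0.0, 0.4, 0.9), (1.0, 1.0, 1.0)],
]

def color_field(frame, channel):
    # Keep only one primary color's intensity for every pixel.
    return [[pixel[channel] for pixel in row] for row in frame]

# The display shows these three fields in rapid sequence:
red_field   = color_field(frame, 0)
green_field = color_field(frame, 1)
blue_field  = color_field(frame, 2)

# Recombining the fields recovers the original full-color frame --
# which is effectively what the viewer's visual system does over time.
recombined = [
    [(r, g, b) for r, g, b in zip(r_row, g_row, b_row)]
    for r_row, g_row, b_row in zip(red_field, green_field, blue_field)
]
print(recombined == frame)  # True
```

(This sequential display is also, from what I’ve read, why fast head motion can cause brief color-fringing artifacts on such displays.)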
Continuing reading the article on Forbes, I came to meet a new term: “Waveguides“. I’ve met this term previously in several articles that I’ve read about the Magic Leap One.
What is Waveguide?: “A structure which guides waves, such as electromagnetic waves, light, or sound waves.“.
The Magic Leap One seems to have a waveguide combiner: a mechanism that directs the light of the virtual content onto the displays (the rectangular transparent displays placed in front of the user’s eyes), where it’s combined with the external light and directed to the user’s eye.
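From what I’ve read elsewhere, the trick that lets a waveguide carry light inside a thin slab of glass is total internal reflection: light hitting the glass surface beyond a certain “critical angle” bounces back inside instead of escaping. A quick sanity check of that idea in Python — the refractive index of 1.5 is just a typical value for glass that I’m assuming, not a Magic Leap spec:

```python
import math

# Total internal reflection keeps light bouncing inside a waveguide.
# Light escapes the glass only if it hits the surface at less than the
# critical angle (measured from the surface normal); beyond that angle
# it reflects back inside and travels along the guide.
# n_glass = 1.5 is a typical value for glass I'm assuming, not ML1 data.

n_glass = 1.5
n_air = 1.0

# Snell's law at the glass/air boundary: sin(critical) = n_air / n_glass
critical_angle = math.degrees(math.asin(n_air / n_glass))
print(f"critical angle: {critical_angle:.1f} degrees")  # about 41.8

def stays_in_waveguide(angle_from_normal_deg):
    # Rays steeper than the critical angle are totally internally
    # reflected and keep propagating inside the glass.
    return angle_from_normal_deg > critical_angle

print(stays_in_waveguide(60.0))  # True: trapped, travels along the guide
print(stays_in_waveguide(20.0))  # False: escapes into the air
```

If that picture is right, the light from the projector bounces along inside the thin eyepiece until special structures redirect it out toward the eye, which would explain how so much optical path fits into so little thickness.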
From my understanding, this is why many AR/MR glasses are so big: the waveguide system takes up a great deal of space. However, the Magic Leap One (ML1) is relatively very slim, so Magic Leap definitely uses a more novel waveguide optical system than what we’ve seen to date.
On the same page at kguttag.com we can see a projector that is placed in front of the eyepiece. The projected image comes from the LED light source at the top; then this light is projected (through another process that I am still not familiar with) to the eyepiece’s Orthogonal Pupil Expander (“OPE”) and exits through the Exit Pupil Expander (“EPE”) to the viewer’s eye.
I’ve read a bit on this technology in this research by Dewen Cheng, Yongtian Wang, Chen Xu, Weitao Song and Guofan Jin. It enables the creation of ultra-thin near-eye displays (NEDs), which is essential for these types of wearable mixed reality hardware. It combines two advanced technologies: geometrical waveguides and freeform optics.
Obviously, I won’t be able to draw any deep insight from reading it because it’s way beyond my education level. However, in Fig. 2 you can see the schematic side view of the waveguide near-eye display. This doesn’t mean that this is the exact technology used in the Magic Leap One, but this is as close as I got in trying to understand how the front display works. Maybe later this will lead me to more information and allow me to understand it after learning more about the NED topic.
The waveguide near-eye display features reflecting mirrors at the bottom part, which work like the viewfinder pentaprism in a DSLR camera, reflecting the light out at the bottom of the NED. You can see the projection optics at the top, very close to the NED, which makes you understand how the ML1 turned out to be so compact compared to other mixed reality headsets.
I was also interested to learn how the Magic Leap One implements its multifocal ability, but that’s for later research.
Now, all the information I’ve written here is based on my own research, and it might not be accurate or truthfully represent the actual way the Magic Leap One headset works. I don’t have the required education level, nor access to official references on which I can base this information. I try to make sense of all the information I could get my hands on, to at least start to understand how this technology actually works.
You can help me and other people better understand how this technology works by sharing your knowledge in the comments section below. I would be greatly thankful for that. I wish there were a video (preferably an official one) “for dummies” (like myself) that clearly demonstrated how this technology works. That being said, I really enjoy going over the available technological solutions, because it helps me understand why certain things work the way they do, and the restrictions and issues hardware manufacturers need to deal with.