One of the topics I was most interested in reading about regarding the Magic Leap One is its input methods. This is something that is going to fundamentally change the way we interact with the digital content around us, especially compared to how we have interacted within handheld augmented reality apps. This is a game changer. Once your hands are free and you have a wearable device in front of your eyes, a whole range of new interactions becomes possible.
The supported input methods for the Magic Leap One are as follows: Headpose (the rotation of your head), Eye Gaze (where you look and what you are focusing on), Gesture (hand gestures using your real hands), Voice, Control (using the bundled controller and future compatible accessories), Mobile app, and Keyboard.
Headpose, eye gaze, gesture, and voice are incorporated into the headset, while the Control, Mobile app, and keyboard are external peripheral input methods. I’ve already written about the dedicated Magic Leap One 6DoF Control, and I will share more information once I learn more about its technical specs.
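To keep that split clear in my own head, here is a minimal C++ sketch of how an app might categorize these input methods. The enum and function names below are my own placeholders, not actual Lumin SDK types.

```cpp
#include <iostream>

// Hypothetical enumeration of the Magic Leap One input methods described above.
// These names are my own; they are not actual Lumin SDK types.
enum class InputMethod {
    Headpose, EyeGaze, Gesture, Voice,   // embodied, built into the headset
    Control, MobileApp, Keyboard         // external peripherals
};

// Returns true for the input methods that come from the headset itself.
bool IsEmbodied(InputMethod method) {
    switch (method) {
        case InputMethod::Headpose:
        case InputMethod::EyeGaze:
        case InputMethod::Gesture:
        case InputMethod::Voice:
            return true;
        default:
            return false;
    }
}

int main() {
    std::cout << std::boolalpha
              << IsEmbodied(InputMethod::EyeGaze) << "\n"   // true
              << IsEmbodied(InputMethod::Keyboard) << "\n"; // false
}
```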
The other two compatible peripherals are the Magic Leap Mobile app and a Bluetooth keyboard. The Mobile app can be installed on a compatible Android (Android 6.0 and up) or iOS (iOS 10.0 and up) device. It serves as a backup input for the Magic Leap One Control and also gives users the option to use a virtual keyboard to quickly enter text into focused text fields. It basically turns your phone into an alternative/secondary input device, mapping the Control buttons (Home button, Touchpad, Digital Bumper, analog Trigger) to their digital equivalents in the app. The app does more than just that, but this is what’s relevant to input interaction.
The Magic Leap One is also compatible with external Bluetooth keyboards, which give users a more comfortable and productive way to enter text. A keyboard can also be mapped (via keybinds) to match the Control buttons.
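Just to illustrate the idea, here is a small C++ sketch of what keybinding a keyboard (or the mobile app’s digital buttons) to the Control buttons could look like. The button enum and key names are my own assumptions, not the actual Lumin SDK mapping.

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical Control buttons as described above (not actual Lumin SDK identifiers).
enum class ControlButton { Home, Touchpad, Bumper, Trigger };

// A sketch of how a keyboard (or the mobile app) could be keybound to the
// Control buttons. The key names here are just placeholders.
std::map<std::string, ControlButton> MakeDefaultKeybinds() {
    return {
        {"Escape", ControlButton::Home},
        {"Arrows", ControlButton::Touchpad},
        {"Tab",    ControlButton::Bumper},
        {"Space",  ControlButton::Trigger},
    };
}

int main() {
    for (const auto& [key, button] : MakeDefaultKeybinds()) {
        std::cout << key << " -> Control button " << static_cast<int>(button) << "\n";
    }
}
```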
Now for the more interesting part: the embodied interactions. These are the ones that were designed to give users a seamless and natural way to interact with content in the mixed reality scene. Most of the ML1 apps that you are going to see in the future will use one or more of these types of interactions.
Gesture and voice are emerging input methods. They are not widely used yet, but many developers are experimenting with them and offering them as alternative user input options. The Control is probably one of the most useful input methods because it resembles the controllers we are already familiar with. We might see some of these emerging interactions offered in a small portion of apps, but not as the main ones. Some developers worry about using them due to accessibility and accuracy concerns, but this will change: in mixed reality, these two input methods are the ones that will make interaction much more intuitive, especially in outdoor use.
Headpose is a key input method because it can tell the app about the user’s intent and point of interest. It tells the app which controls the user intends to interact with. It’s similar to monitoring the cursor location in a web application, except that in mixed reality you don’t have that cursor. As a developer, you can respond to a specific event based on what the user is staring at. For example, if the headpose is at an angle where a specific Prism is located, the developer can write code that responds with a focus event for that particular Prism.
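To make that concrete, here is a minimal C++ sketch (using my own made-up types, not the actual Lumin SDK API) of how an app could test whether the headpose ray passes through a Prism’s bounds and treat that as a focus event.

```cpp
#include <algorithm>
#include <iostream>
#include <utility>

// Minimal vector and pose types for this sketch; the real SDK has its own math types.
struct Vec3 { float x, y, z; };

struct Headpose {
    Vec3 position;   // where the head is
    Vec3 forward;    // unit vector the head is pointing along
};

// A very rough stand-in for a Prism: an axis-aligned box in world space.
struct Prism {
    Vec3 center;
    Vec3 halfExtents;
};

// Returns true if the headpose ray passes through the Prism's box
// (simple slab test; relies on IEEE infinities for zero direction components).
bool HeadposeHitsPrism(const Headpose& head, const Prism& prism) {
    const float origin[3] = {head.position.x, head.position.y, head.position.z};
    const float dir[3]    = {head.forward.x, head.forward.y, head.forward.z};
    const float lo[3] = {prism.center.x - prism.halfExtents.x,
                         prism.center.y - prism.halfExtents.y,
                         prism.center.z - prism.halfExtents.z};
    const float hi[3] = {prism.center.x + prism.halfExtents.x,
                         prism.center.y + prism.halfExtents.y,
                         prism.center.z + prism.halfExtents.z};
    float tMin = 0.0f, tMax = 1e9f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / dir[i];
        float t0 = (lo[i] - origin[i]) * inv;
        float t1 = (hi[i] - origin[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
    }
    return tMin <= tMax;
}

int main() {
    Headpose head{{0, 0, 0}, {0, 0, 1}};              // looking straight ahead (+Z)
    Prism photoViewer{{0, 0, 2}, {0.5f, 0.5f, 0.5f}}; // a Prism two meters in front
    if (HeadposeHitsPrism(head, photoViewer)) {
        std::cout << "Prism focused\n";               // respond to the focus event here
    }
}
```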
Headpose is recommended for nearby or large objects, rather than small or faraway objects, because trying to focus on a small object requires fine and unnatural movements of the head.
Of course, the head direction doesn’t always tell you where the user’s attention is. For this, you’ll need to combine that input data with eye gaze to know the exact piece of virtual content the user wants to interact with.
I try to imagine myself looking at different Prisms. If I were relying on eye gaze alone, interaction could get quite confusing because my eyes might move fast as they scan the available content. So relying on headpose for the main Prism interaction seems more stable and comfortable. Furthermore, it can be quite confusing when you just glance at something outside the Prism location and then need to get back to it. When your headpose is locked onto a certain location, it’s like locking a selection. I can definitely trust my head to stay in place more than I can trust my eyes to stay on the same exact spot.
Magic Leap also mentioned that eye gaze puts more physical strain on the eyes and can lead to eye fatigue, which is why Magic Leap recommends limiting its use.
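Here is a hedged sketch of how the two signals could be combined: headpose picks the Prism, and eye gaze only refines the selection inside it once the eyes have settled for a moment. The types, IDs, and fixation threshold are all my own assumptions for illustration, not Lumin SDK behavior.

```cpp
#include <iostream>
#include <optional>

// Hypothetical input snapshot combining both signals; not Lumin SDK types.
struct GazeSample {
    int headposePrismId;      // which Prism the headpose ray currently hits (-1 = none)
    int eyeGazeTargetId;      // which UI element inside that Prism the eyes rest on (-1 = none)
    float eyeFixationSeconds; // how long the eyes have stayed on that element
};

// Headpose provides the coarse, stable selection (the Prism); eye gaze refines it,
// but only after a short fixation so quick scanning glances are ignored.
std::optional<int> ResolveFocusedElement(const GazeSample& sample,
                                         float fixationThreshold = 0.4f) {
    if (sample.headposePrismId < 0) return std::nullopt;   // not looking at any Prism
    if (sample.eyeGazeTargetId < 0) return std::nullopt;   // no element under the eyes
    if (sample.eyeFixationSeconds < fixationThreshold) return std::nullopt;
    return sample.eyeGazeTargetId;
}

int main() {
    GazeSample scanning{2, 7, 0.1f};   // eyes still darting around
    GazeSample settled{2, 7, 0.6f};    // eyes settled on element 7 inside Prism 2

    std::cout << (ResolveFocusedElement(scanning) ? "focused\n" : "still scanning\n");
    if (auto element = ResolveFocusedElement(settled)) {
        std::cout << "element " << *element << " selected\n";
    }
}
```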
This is something that will be very interesting for developers to experiment with when developing their apps: trying to find the right input method, or combination of input methods, that works best for their particular app.
Gesture (hand gestures) is one of my favorite input methods. Although it is recommended as a tertiary (third in order or level) input after the primary Magic Leap One Control and the secondary Mobile app, I think quite a few developers will use it as a secondary or maybe even a primary one.
As a person who has tried out hundreds of AR apps, having my hands free for interaction was one of my big dreams. I have nothing against a controller, but having your hands free and being able to use them for interaction is something I feel most users would want. The main problems I have with it are that you don’t get haptic feedback, the hand tracking can lose focus if the hands are not within the camera’s detection area/range, and it’s more tiring to use over a long period of time, which is why it still can’t fully replace a controller.
The Lumin SDK can tell the difference between the right and left hand and can even differentiate between individual fingers. It recognizes the following gestures: Open Hand, Closed Fist, OK Sign, Thumbs Up, Open Pinch, Closed Pinch, Relaxed Point, and Closed Point. Hand gestures are a great way to interact with graphical user interfaces and can be fantastic when used in social applications.
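As a rough illustration, here is how an app might map those recognized gestures to its own actions. The gesture names follow the list above, but the enum and handler are my own sketch rather than the actual Lumin SDK API.

```cpp
#include <iostream>
#include <string>

// The gesture names below mirror the list from the "Input Methods" section;
// the Hand and Gesture enums and the handler are my own sketch, not the Lumin SDK API.
enum class Hand { Left, Right };
enum class Gesture {
    OpenHand, ClosedFist, OkSign, ThumbsUp,
    OpenPinch, ClosedPinch, RelaxedPoint, ClosedPoint
};

// Example of mapping recognized gestures to app-level actions.
std::string HandleGesture(Hand hand, Gesture gesture) {
    const std::string side = (hand == Hand::Left) ? "left" : "right";
    switch (gesture) {
        case Gesture::OpenPinch:   return side + " hand: start grabbing object";
        case Gesture::ClosedPinch: return side + " hand: drag object";
        case Gesture::OpenHand:    return side + " hand: release object";
        case Gesture::ThumbsUp:    return side + " hand: confirm";
        case Gesture::ClosedFist:  return side + " hand: cancel";
        default:                   return side + " hand: no action bound";
    }
}

int main() {
    std::cout << HandleGesture(Hand::Right, Gesture::OpenPinch) << "\n";
    std::cout << HandleGesture(Hand::Left, Gesture::ThumbsUp) << "\n";
}
```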
I didn’t see anywhere in the “Input Methods” section whether developers can define their own gestures beyond the default ones. Let’s imagine that I want to create a spellcasting mixed reality game; obviously, I would want to use gestures different from the ones presented in the “Input Methods” section. If I find any information, I will update you on that matter.
Voice is probably the least used input method, but it is one with huge potential for many different types of apps, especially enterprise/business apps. I personally have opted not to use voice commands; I don’t use Siri or the PS4 voice commands. I found them kind of awkward to use and not accurate enough to work flawlessly.
However, once you have no mouse, no keyboard (by preference, or because the app doesn’t support it), and only a controller, phone app, or hand gestures to control an app, voice commands become a very much needed input method for this medium. They make interaction faster, simpler, and more efficient.
When this is integrated with 3rd-party voice-supported services, it can make managing tasks so much easier for users. Even simple commands like “Put the app on my desk and launch it” or “Resize 3.5x” make things so much easier compared to the equivalent physical interaction. Sometimes it can save you from needing to fine-tune a physical movement with a controller or type an exact value into a text input field. The advantages of voice as an input method cannot be ignored, and they are there for the Magic Leap One in case you want to use them in your app.
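To show how simple such a command can be on the app side, here is a toy C++ parser for a transcript like “resize 3.5x”. The grammar and function are purely my own example; they are not a Lumin SDK voice API.

```cpp
#include <iostream>
#include <optional>
#include <sstream>
#include <string>

// A toy parser for a voice command transcript like "resize 3.5x".
// The command grammar is my own example; it is not a Lumin SDK voice API.
std::optional<float> ParseResizeCommand(const std::string& transcript) {
    std::istringstream stream(transcript);
    std::string verb;
    float factor = 0.0f;
    if (!(stream >> verb) || verb != "resize") return std::nullopt;
    if (!(stream >> factor) || factor <= 0.0f) return std::nullopt;
    return factor;  // e.g. 3.5 for "resize 3.5x" (the trailing "x" is left unread)
}

int main() {
    if (auto factor = ParseResizeCommand("resize 3.5x")) {
        std::cout << "Scaling focused object by " << *factor << "\n";
    }
    if (!ParseResizeCommand("rotate 90")) {
        std::cout << "Unrecognized command, falling back to Control input\n";
    }
}
```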
Summary
Overall, the Lumin SDK offers a wide range of input methods that can fundamentally change the way a user interacts with your app. This is a topic that I personally intend to learn about much more in depth. It can completely change your approach to designing apps in terms of user interaction, accessibility, UI, and many other aspects of app and game design in general.
I think every developer should spend a good deal of time reading about and fully understanding these input methods, and then see how he or she plans to incorporate them into their future mixed reality app or game.
It might be a bit confusing at first, but it also opens the door to so many unique and interesting ideas, and once implemented right, they can make the entire UX profoundly better. I highly recommend reading the “Input Methods” section in the Magic Leap Creator portal.
I also plan to spend more time reading about it. I will share my own opinions, insights, and suggestions once I become more knowledgeable on this topic. Have a great day, and don’t forget to share your creations with me on Twitter. Cheers.