Getting Started with MixedRealityToolkit – v2

Hello there, it’s been more than a year since I posted my last article. In the meantime I have been following the major rewrite of the HoloToolkit into a more generic SDK for Unity called MixedRealityToolkit – v2. It is an ongoing, evolving project: it is in Beta now, and the first official release of MRTK-v2 is currently expected around March this year. The new version of the toolkit is not just for HoloLens; it targets both MR and VR platforms. The current device supported for Mixed Reality is HoloLens, but feature development is under way for Magic Leap.

As you can imagine, when an SDK grows to support more features and platforms, there can be a bit of a learning curve. Much has changed about the way you set up the toolkit in your Unity projects, implement your interactions, and implement your custom features. This is a multi-part series; we’ll explore the toolkit starting from the basic setup and move on to more advanced features as we go.

Focus of the article

  • Get MixedRealityToolkit – v2
  • Set up the MixedRealityToolkit – v2 in a Unity project
  • Set up Gaze, Input and Speech interactions
  • Build and Run the application

Prerequisites

  • Visual Studio 2017
  • Unity 2018.3.3f1

Alright. Let’s jump in.

Get MixedRealityToolkit – v2

There are two ways to get MRTK-v2: downloading the repository directly and packaging it yourself (the longer path, but customizable), or downloading a stable release (the shorter path, but you only get the release branch). At the time of this writing the latest public release is Mixed Reality Toolkit v2.0.0 Beta 2, which you can download here. To take the longer path, proceed with the steps below. Those of you lazy enough to just click and download the release package can skip to the next section, Set up the MRTK-v2 in your Unity project 😉

  • Go to the MRTK Unity toolkit repository in GitHub.
  • As shown in Fig 1 you’ll see two branches among others – mrtk_release and mrtk_development.
Fig 1: MRTK GitHub repository
  • For this article we’ll use mrtk_development. Go ahead and clone it to your development machine as MixedRealityToolkit-Unity.
  • Open the MixedRealityToolkit-Unity folder in Unity as a new project. It will look as shown in Fig 2.
Fig 2 : Open MRTK in Unity
  • From the Project pane, under Assets, select all folders starting with
    MixedRealityToolkit (you can ignore Examples and Tests if you want).
  • Right-click there and select Export Package (Fig 3).
Fig 3 : Exporting MRTK as package
  • In the Export Package window click Export and select the folder to save the package in (Fig 4).
Fig 4 : Save MRTK as package

That’s it! We have our MRTK-v2 toolkit package ready. Now let’s set it up in our new project.

Set up the MixedRealityToolkit – v2 in a Unity project

  • Create a new Unity project using the 3D template as shown in Fig 5.
Fig 5 : New project
  • Once the project scene loads, in the Project pane, right-click on Assets and select Import Package -> Custom Package.
  • Navigate to the folder where we exported the MixedRealityToolkit-v2 Unity package, select it and click Open.
  • In the Import Unity Package window, if you exported all the folders from MRTK-v2, you can skip Examples and Tests.
  • Click Import (Fig 6).
Fig 6 : Import MRTK
  • While the toolkit is being imported, Unity will prompt you to apply the settings needed to enable Mixed Reality for the project. We’ll see this in detail later; for now click Apply.
  • Save the current scene.
Fig 7 : Save scene
  • From menu, click File -> Build Settings.
  • In the Build Settings window switch the platform to Universal Windows Platform and make sure the settings match those shown in Fig 8. (Make sure Minimum platform version is less than or equal to the HoloLens Emulator build you have, in case you are using an emulator.)
Fig 8 : Switch platform to UWP
  • Click Switch Platform.
  • In the Build Settings window click Player Settings.
  • In player settings, go to XR Settings at the bottom.
  • Check Virtual Reality Supported and make sure the settings look like Fig 9.
Fig 9 : Enable VR support
  • In Player Settings go to Publishing Settings -> Capabilities.
  • In the Capabilities list enable Microphone, InternetClientServer and Spatial Perception. We’ll need these for future articles based on the same project.
  • In the menu go to Edit -> Project Settings.
  • In Project Settings select Quality from left pane.
  • Set the default quality level for UWP to Very Low, as shown in Fig 10.
Fig 10 : Set Quality to Very Low
  • Close the Project Settings.
  • From the menu select Mixed Reality Toolkit -> Configure.
  • Mixed Reality Toolkit will add two GameObjects to your scene as shown in Fig 11 – MixedRealityToolkit, and MixedRealityPlayspace with the Main Camera in it.
  • The Main Camera will include a UIRaycastCamera, which is used for menu interactions in the app.
Fig 11 : Adding MRTK gameobject and PlaySpace

Well, that’s it! We are halfway through. Next we’ll configure the camera, the input actions and the interactions for the app, and create the handlers for those user input actions.

Set up Gaze, Input and Speech interactions

  • In the Hierarchy pane click on the MixedRealityToolkit game object, then double-click the Active Profile property of the MixedRealityToolkit script (Fig 12).
Fig 12 : Open default toolkit configuration

What are profiles?

Profiles are one of the major toolkit designer features written from the ground up for MRTK – v2. They are a user-friendly way to configure all the MR\VR interactions and services for an app. There are many profiles for different purposes: targeting a platform\device, configuring controller mappings for your app, setting up gesture actions, speech commands, spatial mapping\awareness and more. You can even inject your own custom-written profiles and swap them with the default ones. This helps people configure their app’s interactions and features and get going with the realization of their idea without much effort, while the toolkit handles the low-level details.

The toolkit adopts the dependency injection and service locator patterns: services are registered with the toolkit through profiles, then found and made to work at runtime through constructors and interfaces implemented by handlers. The registered services are managed by the MixedRealityToolkit script (attached to the MixedRealityToolkit GameObject) as the central piece of it all. I won’t go into more detail in this article; we’ll look at profiles and services more closely in upcoming articles.

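Under the hood, each profile is a Unity ScriptableObject asset, which is why you can create, copy and swap them right from the editor. As a minimal, hypothetical sketch (the menu path and the setting below are mine for illustration, not the toolkit’s), a custom profile-style asset could look like this:

using UnityEngine;

// A minimal, hypothetical sketch of a profile-style asset. MRTK profiles
// are ScriptableObjects; the menu path and the setting below are purely
// illustrative and not part of the toolkit.
[CreateAssetMenu(menuName = "Mixed Reality Toolkit/Example Custom Profile")]
public class ExampleCustomProfile : ScriptableObject
{
    [SerializeField]
    [Tooltip("An example setting a custom service could read at runtime.")]
    private float updateInterval = 0.5f;

    public float UpdateInterval => updateInterval;
}

With that mental model in place, let’s get to setting up our profiles.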

  • When you click on the Active Profile value, it opens the default configuration profile for the toolkit. Here we can copy and customize the default profiles or create new ones.
  • Let’s create a new one. Click the ‘Create new profiles’ button. This creates a MixedRealityToolkit.Generated folder with a sub-folder named CustomProfiles under Assets (Fig 13).
Fig 13 : Generating custom profiles
  • For a simpler project structure I moved CustomProfiles out to the main hierarchy and deleted the MixedRealityToolkit.Generated folder. This is optional.
  • Inside CustomProfiles you can see a MixedRealityToolkitConfigurationProfile. This is the base, or root, of all the other profiles. I renamed it to MRTKConfigurationProfile; you can rename it as you wish to suit your project.
  • Now, in the Hierarchy window, select the MixedRealityToolkit game object and set the newly created config profile as the Active Profile in the Inspector window (Fig 14).
Fig 14 : Assigning active config profile
  • Double-click the new config profile we created. You’ll see the details shown in Fig 15. For the scope of this article we are going to create only the Camera and Input profiles; in future articles we’ll expand on the other profiles too.
Fig 15 : Active config profile
  • Check the ‘Enable Camera Profile’ and ‘Enable Input System’ options in there.
  • Right-click in the CustomProfiles folder view and go to Create -> Mixed Reality Toolkit -> Mixed Reality Camera Profile.
  • Create an Input System Profile the same way (Fig 16).
Fig 16 : Creating Camera and Input System profiles
  • In the MRTK Configuration Profile assign the newly created Camera and Input System Profiles.
  • Set the Input System Type property in the MRTKConfigProfile to MixedRealityInputSystem from the dropdown. This is the class that handles the input system of the MR toolkit. We’ll dig into this later.
  • Once you have set everything, the profile will look as shown in Fig 17(a) below.
Fig 17(a) : Enable and Assign Camera and Input System profiles
  • The camera profile detects whether the device is an opaque (VR) or a transparent (MR\HoloLens) device and enables the right settings automatically. You don’t need to do anything for this project; just leave it as it is.
  • Now, in the Hierarchy panel expand the MixedRealityPlayspace game object and click on Main Camera. In the Inspector window add a script component by searching for Gaze Provider, as in Fig 17(b) below.
Fig 17(b) : Add Gaze Provider script component to Main Camera
  • Open the Input System Profile we created. You can see that many sub-profiles are needed here. This profile controls all the input, including the configuration of your hardware controller inputs. So we need to create the following:
    • Pointer profile
    • Input actions profile
    • Input actions rules profile
    • Controller mapping profile
    • Gestures profile
    • Speech commands profile
  • Go ahead and create these profiles just like we created Input System Profile.
  • Once you are done, configure it as shown below.

Input actions profile

Fig 18 : Define actions allowed by your app

The actions profile defines all the interactions a user can make in your app through the various input media – head\gaze, gestures, controllers and speech. Here we define a few simple actions, though in this example we’ll be using only Select (Air Tap); the others are placeholder actions for controller and gesture mapping for now.
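To see how these named actions surface in code, here is a hedged sketch of a handler method inspecting which action an input event was mapped to. It uses the InputEventData type from the Beta 2 namespace we import later in this article; the MixedRealityInputAction and Description members follow the toolkit sources of this era, so verify them against your imported package:

using Microsoft.MixedReality.Toolkit.Core.EventDatum.Input;
using UnityEngine;

// Hedged sketch: every input event carries the MixedRealityInputAction it
// was mapped to in the Input Actions profile, so a handler can branch on it.
// Verify the member names against the toolkit version you imported.
public class ActionLogger : MonoBehaviour
{
    public void LogAction(InputEventData eventData)
    {
        // Description is the name defined in the actions profile, e.g. "Select".
        Debug.Log("Action fired: " + eventData.MixedRealityInputAction.Description);
    }
}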

Input actions rules profile

Fig 19 : Define action rules to give alternative action definitions

The action rules profile is used to define alternatives to the base actions you created in your actions profile. Let’s say you have defined an OK action with six degrees of freedom (6DoF), but you want to interpret it as another action when there is some variation in the base OK action. You set the base action to start from, set your criteria (such as a position or rotation difference), and the input is then interpreted as a different action. That’s where this profile comes in handy. We’ll explore it in another article; for now just create an action rules profile, or even use the default one.

Gestures profile

Fig 20 : Mapping gestures to actions
Fig 21 : Combine gestures to manipulation or navigation

The gestures profile defines action mappings for gestures. Currently the three gestures supported are Hold, Manipulate and Navigate, and the current profile allows only one action per gesture. You can change this behaviour by creating your own profile. For this article we use only the Tap gesture and speech commands, so you can create a gestures profile as shown above or use the default one. The profile supports HoloLens only for now.

Controller mapping profile

Fig 22 : Mapping your actions to controller\device specific actions

Now this is where the real mapping of hardware actions\operations to software\toolkit-defined actions happens. Here we map our Select action to the Air Tap gesture of HoloLens, which is defined with a digital axis constraint in the Input Actions profile. Assign Grip Pose to Spatial Grip, which is a 6DoF axis. The Spatial Pointer is not really needed for HoloLens and could be removed in a future release; for now just assign the Pointer Press action to it. Many other platforms are supported too, like Oculus, Vive, Windows controllers, Xbox and so on.

Speech commands profile

Fig 23 : Define speech commands

The speech profile defines speech commands and the key codes used to simulate them in the emulator\Unity editor. We have Rotate Cube and Select Cube as our commands. Select Cube is a speech alternative to the Air Tap operation on a game object. The actions defined against these commands don’t have any actual effect yet; we set up the final hooking of commands to actual handlers later in this article, with a small preview of the pattern below.
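As that preview: the toolkit’s Speech Input Handler component (used in the App Implementation section below) invokes plain public methods through Unity events wired in the Inspector, so a responder script needs nothing more than methods like these (the class and method names here are illustrative):

using UnityEngine;

// Minimal sketch of a speech responder. The Speech Input Handler component
// calls these methods through Unity events wired in the Inspector; the
// method names are illustrative, not required by the toolkit.
public class SpeechResponder : MonoBehaviour
{
    // Wire this to the "Rotate Cube" keyword in the Speech Input Handler.
    public void OnRotateCube()
    {
        Debug.Log("Rotate Cube recognized");
    }

    // Wire this to the "Select Cube" keyword.
    public void OnSelectCube()
    {
        Debug.Log("Select Cube recognized");
    }
}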

Pointer profile

The pointer profile helps in setting up your gaze provider, which handles gaze-based interactions specifically. Gaze is considered a special case of pointer, as opposed to the other pointers like controllers or hands. Other pointer input sources are handled by another provider called FocusProvider (the GazeProvider is part of it).
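Since we added the Gaze Provider component to the Main Camera earlier, you can also query it from a script. The sketch below is an assumption-heavy illustration: the GazeProvider namespace and its GazeTarget member follow the toolkit’s provider API of this era, so check the component’s script in your imported package for the exact names before using it:

using Microsoft.MixedReality.Toolkit.SDK.Input; // assumption: adjust to where GazeProvider lives in your package
using UnityEngine;

// Hedged sketch: log whatever the gaze pointer is currently hitting.
public class GazeDebug : MonoBehaviour
{
    private GazeProvider gazeProvider;

    private void Start()
    {
        // We added GazeProvider to the Main Camera earlier in this article.
        gazeProvider = Camera.main.GetComponent<GazeProvider>();
    }

    private void Update()
    {
        // GazeTarget is assumed here to expose the currently gazed object.
        if (gazeProvider != null && gazeProvider.GazeTarget != null)
        {
            Debug.Log("Gazing at: " + gazeProvider.GazeTarget.name);
        }
    }
}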

Now that we have created and configured all our interactions, let’s put it all together in the Input System Profile as shown below (Fig 25).

Note: An important thing to note is that even though we are not creating a controller visualization profile, we need to assign the default visualization profile in the Input System Profile as shown in Fig 25. Without it the Air Tap feature doesn’t work. This could be a bug; at the time of this writing I have raised an issue on GitHub regarding it.

Fig 25 : Final input system profile

Now there is one small step left. We need to register the providers for these input subsystems.

What is a provider?

At a high level, providers in MixedRealityToolkit are nothing but device\platform-specific, or sometimes generic, implementations of various features or systems (controllers, gaze, speech, spatial mapping, networking and so on). You can write your own and plumb it in. They are currently registered with the toolkit as additional service providers. When you think about it, that might not be a good idea, as the purpose of the additional services profile is to let you write custom services and register them there. So there is already a plan to treat devices simply as data providers with their own registry rather than putting them in additional services. Let’s set up our providers; here we need a Speech provider and a Windows Mixed Reality provider, with the general idea sketched below.
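Conceptually, a provider is just a class the toolkit constructs from a profile entry and drives through a common service contract. The sketch below is deliberately hypothetical – the real Beta 2 contract (IMixedRealityService and its base classes) has more lifecycle members than shown, so treat every name here as illustration rather than the toolkit API:

using UnityEngine;

// Hypothetical sketch of the provider/service idea: a plain class the
// toolkit would construct and drive at runtime. The real toolkit contract
// has more lifecycle methods than shown here.
public class ExampleDataProvider
{
    public string Name
    {
        get { return "Example Data Provider"; }
    }

    // Called once when the toolkit spins the service up...
    public void Initialize()
    {
        Debug.Log(Name + " initialized");
    }

    // ...then once per frame, much like a MonoBehaviour without a GameObject...
    public void Update()
    {
        // Poll a device, raise input events, and so on.
    }

    // ...and once on shutdown.
    public void Destroy()
    {
        Debug.Log(Name + " destroyed");
    }
}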

  • Create a Registered Service Provider profile and set it up as shown in Fig 26.
Fig 26 : Register Speech provider and Windows Mixed Reality provider in additional services
  • Configure the Registered Service Provider profile in the Additional Service Providers property of MRTK Configuration Profile (Fig 27).
Fig 27 : Configure registered services in additional services

That’s it! Now we are ready to create some gameobjects and write some code to handle user interaction. We are almost there 🙂

App Implementation

  • In the Hierarchy pane, create a Cube.
  • Set its position to x:0, y:0, z:2 so that it appears in front of the camera.
  • Add a material to it to give some color.
  • Create a folder called scripts under Assets in the Project pane.
  • Create a new C# script called CubeController.cs
  • Now, add this code to CubeController:
using Microsoft.MixedReality.Toolkit.Core.EventDatum.Input;
using Microsoft.MixedReality.Toolkit.SDK.Input.Handlers;
using UnityEngine;

// Deriving from BaseFocusHandler lets the toolkit notify us when the
// user's gaze enters or leaves this game object.
public class CubeController : BaseFocusHandler
{
    // The 3D Text above the cube; assigned in the Inspector.
    public TextMesh textMesh;

    // Invoked by the Speech Input Handler / Pointer Click Handler
    // components we wire up in the Inspector below.
    public void Select_Cube()
    {
        textMesh.text = "User selected Cube";
    }

    public void Rotate_Cube()
    {
        textMesh.text = "User wants to rotate Cube";
    }

    // Raised when the gaze pointer starts hitting the cube.
    public override void OnFocusEnter(FocusEventData eventData)
    {
        textMesh.text = "User looking at cube";
    }

    // Raised when the gaze pointer leaves the cube.
    public override void OnFocusExit(FocusEventData eventData)
    {
        textMesh.text = "User not looking at cube";
    }
}
  • Add the CubeController script to the Cube as a script component.
  • Next, add a 3D Text to the scene and position it so that it sits directly above the Cube. We’ll update this text with all the user interactions.
  • Now add the Speech Input Handler and Pointer Click Handler scripts to the Cube. These components handle the speech commands and the Tap gesture for us, while our CubeController derives from BaseFocusHandler to handle the focus enter and exit events, as you can see above.
  • Once you have added them, select\add the actions we configured earlier through profiles and hook in the handlers we wrote in our CubeController code.
  • Here is the completed version of the Cube configuration in the Inspector.
Fig 28 : Final configuration for Cube

Now go ahead and click Play. You’ll see the white cursor (it’s small – you can swap in a better cursor prefab) pointing at the cube, and when you rotate the camera it moves in alignment with the Cube’s surface. You can also see the text message change as the camera moves in and out of focus on the Cube. Use the key codes S and R to see the feedback for selection and rotation of the cube in the text.

Fig 29 : Final product.

Let’s build this and run it in the emulator or on a real HoloLens.

Building and running the app

  • Click on File -> Build Settings.
  • Make sure your scene is in Scenes in Build and that the settings are for Universal Windows Platform, as we set up earlier.
  • Click build.
  • Create a new folder, give a name (e.g. App) and click Select.
  • Let the build commence.
  • Once the build is complete, go to the App folder that you created for Build and open the Visual Studio solution file in it.
Fig 31 : Open the app solution
  • Once the solution is open, change the architecture to x86 and set the deployment target to HoloLens Emulator, or use Remote Machine to try it on your HoloLens directly. Press F5.
Fig 32 : Run the app
  • Let the application compile, start the emulator (in this case) and deploy the application. Once the application has started you can try interacting and see the results as below.
Fig 33 : Gaze focus out of the Cube
Fig 34 : Gaze focus on the Cube
Fig 35 : Air Tap on the Cube
Fig 36 : Speech Command for the cube

The final Unity project for this article is placed here.

Well, that’s it! This concludes our introduction to MRTK-v2. A bit of a long introduction, I know, but from next time onwards things will move much faster now that you are familiar with the initial setup and configuration. In the next articles we’ll see how to set up and work with manipulation gestures, spatial mapping, networking, Azure integration and more, and make some cool apps along the way. Try it out a couple of times and get yourself familiar with it. Make your own changes, break it all, fix it back and keep learning. Let me know your feedback and comments.


9 Replies to “Getting Started with MixedRealityToolkit – v2”

  1. Fantastic article! Great step-by-step tutorial to get MRTK v2 running with Unity! Everything seems to work perfectly, except for the AirTap gesture (I’m using Unity 2018.3.8f1). I’ve also tried your project from the Github repository and the same issue occurs. Any idea what the issue could be here?

    1. As additional info, the air-tap seems to be recognised because I see the cursor animation (going from large to small when tapping), but nothing seems to happen.

      1. Apparently, the problem with non-recognised air-taps only occurs when running the application on the HoloLens device and with Holographic Remoting. In the emulator, everything seems to work fine.

  2. Hi Felix,

    How are you creating the Speech Input Handler and Pointer Click Handler scripts ? Please provide the code.

    Regards,
    Sheldon

  3. Could you please post a blog on how to use spatial mapping in the new MR toolkit v2? There is no getting-started documentation for the MR toolkit.
