Shubham Rana

Untangling the Microsoft Mesh Platform

Microsoft announced its brand-new Microsoft Mesh platform at its Ignite 2021 conference. Mixed Reality, holoportation, holographics: terms once reserved for sci-fi visions of a futuristic utopia are now becoming reality. Microsoft Mesh is the company’s new “Mixed Reality” platform, and possibly the cutting-edge medium that will make these concepts real.

Mesh is the fruit of years of Microsoft’s research and development, in areas ranging from hand and eye tracking and HoloLens hardware to persistent holograms and AI models. It is a platform designed to help build multi-user, cross-platform Mixed Reality applications based on AI-powered tools.

Microsoft Mesh will allow geographically distributed teams to join a meeting collaboratively, share holographic experiences across different kinds of devices, conduct virtual design sessions, assist others, learn together, and host virtual social meetups. For example, people around the world will attend conferences from their living rooms and home offices as if they were there in person, appearing as holographic avatars in a shared holographic world and watching events unfold in the meeting.

During the Ignite conference, Microsoft announced two apps built on the Microsoft Mesh platform: a Mesh app for HoloLens and a Mesh-enabled version of AltspaceVR. Over time, the company expects that people will be able to choose from an expanding set of Microsoft Mesh-enabled applications built by external developers and partners and benefit from the planned full integration of Mesh with Microsoft products such as Microsoft Teams and Dynamics 365.

"This has been the dream for Mixed Reality, the idea from the very beginning. You can actually feel like you're in the same place with someone sharing content or you can teleport from different Mixed Reality devices and be present with people even when you're not physically together," Microsoft Technical Fellow Alex Kipman said.

Kipman appeared on the Ignite virtual stage as a fully realized holoportation of himself, narrating his Mixed Reality experience of the show’s opening in real time as rays of light simulated his physical body.

Why Mesh?

Mixed Reality can be called the fourth wave in computing, after mainframes, PCs, and smartphones. It is becoming mainstream across consumer and commercial markets, liberating users from screen-bound experiences into instinctive interactions in their own space, among their things and with their people, as that space becomes holographic. People missing in-person social interactions and meet-ups such as concerts and fitness training are moving to the virtual world. Nearly 50% of Fortune 500 companies currently use Mixed Reality solutions such as HoloLens. Given that large number, it is surprising how few Mixed Reality applications and experiences exist. Some hard underlying problems stand in the way of developing such platforms: the time and resources needed to represent people convincingly, hologram stability in a shared MR space across time and device types, bringing high-fidelity 3D models into Mixed Reality, and synchronizing people’s actions and gestures.

Core Components - Microsoft Mesh Platform

With Mesh, Microsoft hopes to let developers design immersive MR apps without having to worry about complex technical issues. Not to be confused with an application or device, Mesh is a comprehensive developer platform built on Microsoft Azure. Its tooling includes a suite of AI-powered capabilities for avatars, synchronization across multiple users, session management, spatial rendering, and holoportation, which developers can combine to build Mixed Reality solutions. It also benefits from Azure’s privacy and enterprise-grade security advancements.

“More and more we are building value in our intelligent cloud, which is Azure,” Kipman said. “In these collaborative experiences, the content is not inside my device or inside my application. The holographic content is in the cloud, and I just need the special lenses that allow me to see it.”

Developer Platform: Developers need not worry about core infrastructure such as live state management, audio/video transmission, and billing. Mesh uses identity services such as Azure Active Directory and Microsoft Accounts to authenticate and authorize users and to enable secure, trusted sessions. Through the Microsoft Graph API, users can bring their connections, content, and preferences from both commercial and consumer spaces. Mesh also has AI capabilities for enabling massive multi-user online (MMO) scenarios in Mixed Reality.

Multi-Device Support: Mesh aims to ‘meet users where they are’ by supporting many devices for a three-dimensional volumetric experience. Mesh is compatible with fully immersive head-mounted displays (HMDs) such as the HP Reverb G2, Microsoft HoloLens, and Oculus Quest 2. Mesh provides an accessible 3D presence through representative avatars, created with the devices’ inside-out sensors, that can be seen, touched, and interacted with. Mesh’s avatar rig and customization studio allow out-of-the-box avatar creation, and the platform can also work with existing avatar rigs thanks to AI-powered motion models that accurately capture motion. Along with avatars, Mesh enables holoportation: the creation and transmission of a 360° 3D representation of a person using outside-in sensors, which can be custom camera setups.

Mesh allows both local stand-alone and cloud-connected remote holographic rendering for each scene and model, regardless of the device’s (mobile, PC, tablet) computing and thermal capacity. Additionally, Mesh-enabled apps can natively render 3D file formats.

To build Mesh-enabled apps that persist holographic content in the real world, Mesh must understand each participant’s physical world and space. Through spatial maps, it places holograms so that they persist across time, space, and devices. Mesh helps create a map of your world by delivering ‘world-locked holograms’ tied to specific points of interest. Mesh also generates holographic content aligned precisely to the layout and geometry of objects, allowing a developer to build apps with visual information, service records, and other data.
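The idea of a world-locked hologram can be illustrated with a toy model (all names here are hypothetical, not actual Mesh APIs): an anchor marks a fixed point of interest in the user’s space, and each hologram is stored as an offset relative to its anchor, so that re-locating the anchor on another device or in a later session re-locates the hologram with it.

```python
from dataclasses import dataclass


@dataclass
class SpatialAnchor:
    """A fixed point of interest in the user's physical space (toy model)."""
    x: float
    y: float
    z: float


@dataclass
class Hologram:
    """A hologram stored as an offset relative to its anchor."""
    anchor: SpatialAnchor
    dx: float
    dy: float
    dz: float

    def world_position(self):
        # World-locked: the position is always derived from the anchor,
        # so wherever the anchor is re-located, the hologram follows.
        a = self.anchor
        return (a.x + self.dx, a.y + self.dy, a.z + self.dz)


desk = SpatialAnchor(2.0, 0.0, -1.5)
manual = Hologram(desk, 0.0, 1.0, 0.0)   # floats 1 m above the desk anchor
print(manual.world_position())            # (2.0, 1.0, -1.5)
```

The key design point this sketches is indirection: apps never store absolute hologram coordinates, only anchor-relative ones, which is what lets content survive across time, space, and devices.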

The multi-user sync feature in Mesh gives all participants in a collaborative session a common perspective on a hologram. According to Microsoft, all motions, pose updates, expressions, and holographic transformations happen within 100 milliseconds of latency, whether a participant is in the same physical location or halfway around the world. This lets Mesh create a sense of being in the same physical space in a multi-user session.
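That 100 ms figure can be thought of as a per-update latency budget. A minimal sketch of the idea (hypothetical names and data shapes, not the Mesh API): timestamp each pose update, apply the ones that arrive within budget, and hand late ones back to the caller for smoothing rather than snapping avatars.

```python
LATENCY_BUDGET_MS = 100  # Microsoft's stated bound for sync updates


def within_budget(sent_ms: float, received_ms: float) -> bool:
    """True if an update arrived within the sync latency budget."""
    return (received_ms - sent_ms) <= LATENCY_BUDGET_MS


def apply_updates(updates, now_ms):
    """Split pose updates into in-budget and late ones.

    Each update is a (participant_id, pose, sent_ms) tuple. Late updates
    are returned separately so the caller can interpolate over them
    instead of applying stale poses directly.
    """
    applied, late = [], []
    for participant_id, pose, sent_ms in updates:
        bucket = applied if within_budget(sent_ms, now_ms) else late
        bucket.append((participant_id, pose))
    return applied, late


updates = [("alice", (0, 0, 0), 950.0),   # 50 ms old: in budget
           ("bob", (1, 0, 2), 800.0)]     # 200 ms old: late
applied, late = apply_updates(updates, now_ms=1000.0)
print(applied)  # [('alice', (0, 0, 0))]
print(late)     # [('bob', (1, 0, 2))]
```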

Mesh-Enabled Apps: On top of the development platform, Microsoft also delivers app experiences that bring the platform alive. Mesh lights up collaborative experiences for immersive headsets with Microsoft-built apps such as the HoloLens 2 Mesh app and AltspaceVR with new enterprise capabilities.

The Mesh platform is comprehensive, designed with tools and capabilities to help developers get started quickly and deliver engaging multi-user Mixed Reality experiences.

Seminal Patent

U.S. Patent No. 9,508,197 - Generating An Avatar From Real-Time Image Data

Assignee: Microsoft Technology Licensing LLC

Grant Date: 2016-11-29

The patent describes technology for automatically generating a facial avatar that resembles a user in a defined art style. One or more processors generate a 3D head model of the user from 3D image data captured by a communicatively coupled 3D image capture device. A set of transferable head features from the user’s 3D head model is automatically represented in the facial avatar by rules governing transferable user 3D head features. In some embodiments, a base or reference head model of the avatar is remapped to include the set of user head features. In other embodiments, an avatar head shape model is selected based on the user’s 3D head model, and the transferable features of the user’s 3D head are represented in that model.

The above figure illustrates an example of shape units of a 3D head model. A shape unit is a collection of 3D points that represents a head feature or an aspect of a head feature.
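The remapping embodiment can be sketched as a weighted blend of shape-unit offsets onto a base head: each shape unit contributes per-vertex 3D deltas, scaled by how strongly that user feature is expressed. This is an illustrative simplification; the patent does not specify this exact math, and all names here are hypothetical.

```python
def remap_head(base_points, shape_units, weights):
    """Remap a base avatar head by blending in user shape units.

    base_points: list of (x, y, z) vertices of the base/reference head.
    shape_units: one offset set per transferable head feature; each is a
                 list of (x, y, z) deltas, same length as base_points.
    weights:     how strongly each user feature is expressed (0..1).
    """
    remapped = []
    for i, (x, y, z) in enumerate(base_points):
        for unit, w in zip(shape_units, weights):
            dx, dy, dz = unit[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        remapped.append((x, y, z))
    return remapped


# Two-vertex toy head with two shape units (e.g. nose depth, jaw height).
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
nose_unit = [(0.0, 0.0, 0.1), (0.0, 0.0, 0.0)]  # pushes vertex 0 forward
jaw_unit = [(0.0, 0.0, 0.0), (0.0, -0.2, 0.0)]  # lowers vertex 1
print(remap_head(base, [nose_unit, jaw_unit], [1.0, 0.5]))
```

In practice such blending is done per vertex over thousands of points, but the structure — base model plus weighted feature offsets — is the same.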

Facebook’s AR and VR efforts, collectively named “Facebook Reality Labs,” recently announced a new consumer wearable as their latest advancement in augmented reality devices. Mark Zuckerberg revealed at the fittingly virtual event that Facebook will launch “the next step on the road to augmented reality glasses” next year: a pair of smart glasses that falls short of full AR capability. Zuckerberg also announced that Facebook is partnering with luxury eyewear giant Luxottica, and that the new consumer device will carry Ray-Ban branding.

Apple analyst Ming-Chi Kuo revealed in a recent research note that Apple will release a “helmet-type” mixed reality headset next year. “We foresee that the helmet product will provide AR and VR experiences, while glasses and contact lens types of products are more likely to focus on AR applications,” Kuo writes in the note. He added that the headset will likely be priced in the $1,000 range in the US.

Qualcomm, with its Qualcomm® Snapdragon™ XR2 Platform, the world’s first 5G-supported XR platform, has unveiled innovative features and boasts multiple firsts for XR that can scale across AR, VR, and Mixed Reality.

BAE Systems, a defense, security, and aerospace company, has been using HoloLens in its processes for making electric propulsion devices, realizing a 50% reduction in assembly time through the use of the device.


Microsoft Mesh is a platform built to help developers build their own multi-user Mixed Reality apps. It does a good job of addressing the challenges developers face in building Mixed Reality experiences, and it provides an interesting avatar system. Mesh has massive potential for integration into a variety of applications and industries, and it may eventually become an industry-changing technology.

Shubham is a research analyst at Copperpod IP. He has a Bachelor’s degree in Electronics and Communication Engineering. His areas of interest are the Internet of Things (IoT), networking, semiconductors, embedded systems, and software.










