A research prototype for authoring and deploying labeled Mixed Reality (MR) experiences across multiple platforms, including handheld AR (Android) and head-mounted MR (Meta Quest 3). The toolkit enables the creation of interactive, spatially anchored labels on 3D models, supporting guided and proximity-based experiences.
The system is released as an artifact alongside the short paper "The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels" accepted at the Mensch und Computer 2025 (MuC) conference.
- Desktop Authoring Tool: Create MR experiences by augmenting 3D models with textual descriptions, images, and structured guidance.
- Cross-Platform Experience Deployment: Deploy authored projects to:
  - Handheld AR (Android) using AR Foundation & ARCore
  - Head-mounted MR (Meta Quest 3) using the Meta XR SDK
- Core Labeling Features:
  - Spatial anchor point definition (manual or automatic)
  - Interactive label expansion
  - Proximity-based visibility (see the sketch after this list)
  - Guided narrative sequences
  - Temporal coherence
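Proximity-based visibility can be pictured with a minimal sketch like the following; the component and field names are illustrative, not the toolkit's actual code:

```csharp
using UnityEngine;

// Minimal sketch of proximity-based visibility: the label is shown
// only while the viewer's camera is within a distance threshold.
public class ProximityLabel : MonoBehaviour
{
    [SerializeField] GameObject labelVisual;        // the label's visual root (hypothetical)
    [SerializeField] float visibleDistance = 1.5f;  // threshold in metres

    void Update()
    {
        float distance = Vector3.Distance(
            Camera.main.transform.position, transform.position);
        labelVisual.SetActive(distance <= visibleDistance);
    }
}
```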
The Unity-based authoring environment lets users import 3D models (.glb format), define features, and place anchor points. Projects are saved as structured directories that bundle all content and metadata.
Main components:
- Project Management: Create and manage labeled MR projects
- Feature Configuration: Attach names, descriptions, and optional images to model components
- Anchor Placement: Define one or multiple anchor points per feature
- Narrative Sequences: Create step-by-step guided learning experiences
- Structured Export: All data (model, features, metadata) stored in a readable JSON format
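As a rough illustration of the structured export, the serializable classes below mirror what a project.json might contain; the field names are hypothetical and not the toolkit's exact schema:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the exported metadata; field names are
// illustrative, not the exact layout of project.json.
[Serializable]
public class ProjectData
{
    public string projectName;
    public string modelFile;                // e.g. "model.glb"
    public List<FeatureData> features;
    public List<string> narrativeSequence;  // ordered feature names
}

[Serializable]
public class FeatureData
{
    public string name;
    public string description;
    public string imageFile;                // optional
    public List<Vector3> anchorPoints;      // one or more anchors per feature
}

// Unity's JsonUtility can produce the human-readable JSON:
// string json = JsonUtility.ToJson(data, prettyPrint: true);
```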
The authored experiences are rendered using platform-specific MR implementations. Each runtime loads the exported project data, performs label placement, and provides interaction mechanisms tailored to the device.
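As a sketch of the loading step, a runtime might load an exported model via glTFast roughly like this (assuming glTFast 5.x's GltfImport API; method names differ in older versions):

```csharp
using System.IO;
using System.Threading.Tasks;
using GLTFast;
using UnityEngine;

// Minimal sketch: load a project's .glb with glTFast and instantiate
// its main scene under this GameObject.
public class ModelLoader : MonoBehaviour
{
    public async Task LoadProjectModel(string projectFolder)
    {
        string path = Path.Combine(projectFolder, "model.glb");
        var gltf = new GltfImport();
        if (await gltf.Load("file://" + path))
            await gltf.InstantiateMainSceneAsync(transform);
        else
            Debug.LogError($"Failed to load {path}");
    }
}
```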
- Android (ARCore via AR Foundation):
  - Touchscreen interactions
  - Label tap detection via raycast
  - Model placement on detected surfaces (both sketched below)
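The two touch interactions above could be implemented along these lines; this is a minimal sketch, and the Label component is a hypothetical stand-in:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: a physics raycast for label taps, an AR raycast for
// placing the model on a detected plane.
public class TouchInteraction : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager; // assigned in the Inspector
    [SerializeField] GameObject modelPrefab;          // the authored model
    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // 1) Did the tap hit a label?
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit) &&
            hit.collider.TryGetComponent(out Label label))
        {
            label.ToggleExpanded();
            return;
        }

        // 2) Otherwise, place the model on a detected plane.
        if (raycastManager.Raycast(touch.position, s_Hits,
                TrackableType.PlaneWithinPolygon))
        {
            Pose pose = s_Hits[0].pose;
            Instantiate(modelPrefab, pose.position, pose.rotation);
        }
    }
}

// Hypothetical stand-in for the toolkit's label behaviour.
public class Label : MonoBehaviour
{
    public void ToggleExpanded() { /* expand or collapse the label */ }
}
```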
- Meta Quest 3 (Meta XR SDK):
  - 3D UI (floating panels)
  - Hand tracking or controller input
  - Spatial positioning of content using passthrough and anchors (sketched below)
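A minimal sketch of the anchoring step, assuming the Meta XR Core SDK's OVRSpatialAnchor component (passthrough itself is enabled on the camera rig via OVRManager/OVRPassthroughLayer, not in code here):

```csharp
using UnityEngine;

// Sketch: adding an OVRSpatialAnchor component anchors the object
// at its current pose in the physical room.
public class AnchorPlacedModel : MonoBehaviour
{
    public void AnchorHere(GameObject placedModel)
    {
        if (placedModel.GetComponent<OVRSpatialAnchor>() == null)
            placedModel.AddComponent<OVRSpatialAnchor>();
    }
}
```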
Each project is saved as a self-contained folder with the following structure:
```
Persistent App Path
├── newModel.glb
├── someImage.png
└── Saves
    ├── Project1
    │   ├── model.glb
    │   ├── project.json
    │   └── thumbnail.png
    └── Project2
```
Projects are created in a Saves folder under the Authoring Tool's persistent data path. All materials needed to create a project (i.e., a 3D model and optional images) must first be placed in the app's persistent data path; when a project is created, the corresponding files are copied into the project folder.
The ExperienceViewers look for available projects in their respective persistent data paths.
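Project discovery can therefore be as simple as listing subfolders; a sketch, not the viewers' actual code:

```csharp
using System.IO;
using UnityEngine;

// Sketch: list the subfolders of "Saves" under the persistent data
// path to find the available projects.
public static class ProjectDiscovery
{
    public static string[] FindProjects()
    {
        string saves = Path.Combine(Application.persistentDataPath, "Saves");
        return Directory.Exists(saves)
            ? Directory.GetDirectories(saves)
            : new string[0];
    }
}
```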
Note: This prototype is a research artifact and not a production-ready toolkit.
- Clone the repository and open the Unity project.
- Build and run the desktop authoring tool.
- Place materials (3D models and images) in the application's persistent data path.
- Import a .glb model and begin authoring.
- Move the project folder to a target device for deployment.
- Android AR: Build the scene and install on device.
- Meta Quest 3: Build the scene and install on device.
To ensure correct rendering of 3D models across platforms, Unity's shader stripping behavior needs to be configured for each application (Authoring Tool, Android, Meta Quest):
- Import your 3D model and place it into the scene.
- Go to Project Settings → Graphics → Shader Stripping.
- Click "Save to asset" to generate a shader variant collection.
- Then, add that asset to "Preloaded Shaders" under the same menu.
- Repeat this process for each target application (Authoring Tool, Android, Meta Quest).
Without this step, shaders may be stripped during the build process, leading to broken or invisible visuals at runtime.
Use your mouse to navigate and place anchor points in the 3D model view:
- Rotate view: Right mouse button
- Pan view: Middle mouse button
- Zoom view: Mouse wheel
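A minimal sketch of how such orbit/pan/zoom navigation can be implemented with Unity's legacy input API (not necessarily the tool's actual implementation):

```csharp
using UnityEngine;

// Sketch of the authoring view's mouse navigation:
// right button rotates, middle button pans, wheel zooms.
public class OrbitCamera : MonoBehaviour
{
    [SerializeField] Transform pivot;          // e.g. the model's centre (hypothetical)
    [SerializeField] float rotateSpeed = 4f;   // degrees per mouse unit
    [SerializeField] float panSpeed = 0.05f;
    [SerializeField] float zoomSpeed = 1f;

    void Update()
    {
        if (Input.GetMouseButton(1)) // right button: rotate around the pivot
        {
            transform.RotateAround(pivot.position, Vector3.up,
                Input.GetAxis("Mouse X") * rotateSpeed);
            transform.RotateAround(pivot.position, transform.right,
                -Input.GetAxis("Mouse Y") * rotateSpeed);
        }
        else if (Input.GetMouseButton(2)) // middle button: pan in the view plane
        {
            transform.Translate(
                -Input.GetAxis("Mouse X") * panSpeed,
                -Input.GetAxis("Mouse Y") * panSpeed,
                0f);
        }

        // Mouse wheel: zoom along the view direction
        transform.position +=
            transform.forward * Input.mouseScrollDelta.y * zoomSpeed;
    }
}
```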
On the Android viewer:
- Place model: Tap on a detected plane
- Interact with labels: Tap a label directly; tap and hold to show all leader lines
- Navigate narrative sequence: Use the on-screen buttons
On the Meta Quest 3 viewer:
- Place model:
  - Controller: Press Button.One (A button) on the right-hand controller
  - Hand tracking: Touch thumb and index finger together
- Navigate narrative sequence:
  - Next: Thumbstick right
  - Previous: Thumbstick left
- Label interactions:
  - Show all leader lines: Press PrimaryHandTrigger
  - Interact with individual labels: Press Button.One on the right controller
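These bindings map onto the Meta XR SDK's OVRInput API; below is a minimal sketch with hypothetical handler methods:

```csharp
using UnityEngine;

// Sketch: read the controller bindings listed above via OVRInput.
public class QuestControls : MonoBehaviour
{
    bool thumbstickLatched; // avoid repeated steps while the stick is held

    void Update()
    {
        // A button (Button.One) on the right controller: place / interact
        if (OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.RTouch))
            PlaceModelOrInteract();

        // Grip trigger held: show all leader lines
        bool showAllLines = OVRInput.Get(OVRInput.Button.PrimaryHandTrigger);
        SetLeaderLinesVisible(showAllLines);

        // Thumbstick left/right: step through the narrative sequence
        float x = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick).x;
        if (!thumbstickLatched && Mathf.Abs(x) > 0.8f)
        {
            if (x > 0f) NextStep(); else PreviousStep();
            thumbstickLatched = true;
        }
        else if (Mathf.Abs(x) < 0.2f)
        {
            thumbstickLatched = false;
        }
    }

    void PlaceModelOrInteract() { /* hypothetical */ }
    void SetLeaderLinesVisible(bool visible) { /* hypothetical */ }
    void NextStep() { /* hypothetical */ }
    void PreviousStep() { /* hypothetical */ }
}
```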
This project makes use of the following components:
- glTFast by Unity Technologies and the glTFast authors. Licensed under the Apache-2.0 License. https://github.com/atteneder/glTFast
- AR Foundation by Unity Technologies. Licensed under the Unity Companion License. https://docs.unity3d.com/Packages/[email protected]/manual/index.html
- ARCore XR Plugin by Google & Unity Technologies. Licensed under the Unity Companion License. https://github.com/google-ar/arcore-unity-sdk
- QuickOutline by Chris Nolet. Licensed under the MIT License. https://github.com/chrisnolet/QuickOutline