Geometry Aware Passthrough Mitigates Cybersickness
Trishia El Chemaly, Mohit Goyal, Tinglin Duan, Vrushank Phadnis, Sakar Khattar, Bjorn Vlaskamp, Achin Kulshrestha, Eric Lee Turner, Aveek Purohit, Gregory Neiswander, Konstantine Tsotsos
Takeaway: Stop shipping raw camera feeds as passthrough. Implement depth-corrected rendering for any VST headset—especially for productivity apps where users need extended sessions. The computational overhead is minimal compared to the usability gain.
Problem: Video See-Through (VST) headsets cause cybersickness because raw camera feeds distort scale perception and exaggerate motion parallax. Users become nauseated by the mismatch between what they see and what their vestibular system expects.
Method: Geometry-aware passthrough renders the real world using depth maps and 3D reconstruction instead of raw video feeds. The system corrects for the physical offset between cameras and eyes (interpupillary distance mismatch) and eliminates the scale distortion that makes objects appear closer or farther than they are. In a 36-participant study, this reduced cybersickness scores by 31% compared to standard passthrough while maintaining the same visual quality.
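The core of the correction is a depth-based reprojection: each camera pixel is unprojected to a 3D point using its depth, rigidly transformed from the camera frame to the user's eye frame, and reprojected with the eye's intrinsics. A minimal numpy sketch of that idea, assuming a pinhole camera model (the function name, API, and formulation are my own illustration, not the paper's implementation):

```python
import numpy as np

def reproject_to_eye(depth, K_cam, K_eye, T_cam_to_eye):
    """Warp camera pixels toward the eye's viewpoint using per-pixel depth.

    depth        : (H, W) metric depth map from the passthrough camera.
    K_cam, K_eye : 3x3 pinhole intrinsics for the camera and the virtual eye.
    T_cam_to_eye : 4x4 rigid transform from camera frame to eye frame,
                   modeling the physical camera-eye offset.
    Returns an (H, W, 2) map of where each camera pixel lands in the eye
    image (a forward-warp; a real renderer would mesh or splat these points).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid, 'xy' indexing
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    # Unproject: back-project each pixel ray and scale it by its depth.
    rays = pix @ np.linalg.inv(K_cam).T
    pts_cam = rays * depth[..., None]

    # Rigid transform into the eye frame (corrects the camera-eye offset).
    pts_h = np.concatenate([pts_cam, np.ones((H, W, 1))], axis=-1)
    pts_eye = pts_h @ T_cam_to_eye.T

    # Project with the eye intrinsics and dehomogenize.
    proj = pts_eye[..., :3] @ K_eye.T
    return proj[..., :2] / proj[..., 2:3]
```

With an identity transform and identical intrinsics, every pixel maps to itself; a nonzero translation in `T_cam_to_eye` shifts near pixels more than far ones, which is exactly the depth-dependent parallax correction a raw video feed cannot provide.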
Caveats: Requires accurate depth-sensing hardware. Performance degrades in environments with reflective surfaces or poor lighting, where depth reconstruction fails.
Reflections: Does the cybersickness reduction persist across multi-hour sessions, or does adaptation occur? · Can simplified depth correction (e.g., planar approximation) achieve similar results with lower computational cost? · How does this interact with other cybersickness mitigation techniques like dynamic FOV restriction?