Decoding the Mind’s Eye: Reconstructing 3D Objects from Brainwaves in VR
- Eddie Avil


What if we could look directly into the human mind and reconstruct what someone is seeing in virtual reality, down to the object and its orientation? That's the question Ninon Lize Masclef and her team have been chasing at the MIT Media Lab, and at NeurIPS 2025 they unveiled a step toward answering it.
Together with Nataliya Kosmyna, Ph.D., Taisija Demchenko, and Antonella Catanzaro, they presented a system that translates raw brain activity into 3D object reconstructions. Using EEG signals, the dual-stream architecture separates two critical aspects of perception:
- Viewpoint-tolerant object identity – recognizing what the object is, regardless of perspective.
- Viewpoint-dependent orientation – capturing how the object is rotated in space.
This design allows the system to decode six object classes with up to 68% accuracy, while estimating rotation angles with a mean absolute error of just 10–11°.
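The dual-stream idea can be sketched in a few lines: a shared encoder turns an EEG trial into features, then one head classifies object identity (viewpoint-tolerant) while a second regresses the rotation angle (viewpoint-dependent). Everything below is a hypothetical illustration with untrained random weights and made-up shapes, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS, N_SAMPLES = 64, 256   # hypothetical EEG trial shape
N_FEATURES, N_CLASSES = 128, 6    # six object classes, as in the paper

def extract_features(eeg, W_shared):
    """Shared encoder: flatten the trial and project it to a feature vector."""
    x = eeg.reshape(-1)              # (channels * samples,)
    return np.tanh(W_shared @ x)     # (N_FEATURES,)

def identity_head(feat, W_id):
    """Viewpoint-tolerant stream: class logits -> softmax probabilities."""
    logits = W_id @ feat
    e = np.exp(logits - logits.max())
    return e / e.sum()

def orientation_head(feat, W_rot):
    """Viewpoint-dependent stream: predict rotation as (sin, cos) and recover
    the angle, which sidesteps the 0/360-degree wrap-around discontinuity."""
    s, c = W_rot @ feat
    return np.degrees(np.arctan2(s, c)) % 360.0

# Untrained random weights, purely to show the data flow.
W_shared = rng.normal(0, 0.01, (N_FEATURES, N_CHANNELS * N_SAMPLES))
W_id = rng.normal(0, 0.1, (N_CLASSES, N_FEATURES))
W_rot = rng.normal(0, 0.1, (2, N_FEATURES))

trial = rng.normal(size=(N_CHANNELS, N_SAMPLES))  # one fake EEG trial
feat = extract_features(trial, W_shared)
probs = identity_head(feat, W_id)      # distribution over the 6 classes
angle = orientation_head(feat, W_rot)  # predicted rotation in degrees
```

The point of the split is that the two heads can specialize: the identity stream is free to discard pose information, while the orientation stream keeps it.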
The Tech Behind the Breakthrough
The decoded embeddings condition a multiview diffusion model, fine-tuned with LoRA (Low-Rank Adaptation), to reconstruct 3D objects that remain consistent across viewpoints. In other words, the system doesn’t just guess—it builds coherent 3D shapes that align with how the brain perceives them in VR.
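LoRA itself is simple to state: rather than updating a large pretrained weight matrix, you freeze it and train only a low-rank correction, using W0 + (alpha/r)·B·A at inference. A toy NumPy illustration (all dimensions hypothetical, and a single linear layer standing in for the diffusion model's attention weights):

```python
import numpy as np

rng = np.random.default_rng(1)

d_out, d_in, r = 512, 512, 8         # rank r << d, so few trainable params

W0 = rng.normal(size=(d_out, d_in))  # frozen pretrained weight (never updated)
A = rng.normal(0, 0.01, (r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized
                                     # so training starts at the pretrained model

def lora_forward(x, alpha=16.0):
    """Adapted layer: frozen path plus scaled low-rank correction."""
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B at zero, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W0 @ x)

# Parameter savings: full fine-tuning vs. LoRA trainables for this layer.
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params} vs {full_params}")
```

Because only A and B are trained, adapting a large diffusion model to a new conditioning signal (here, decoded brain embeddings) touches a small fraction of its parameters.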
This is more than a technical achievement. It’s a glimpse into the future of brain foundation models, where scaling laws and multimodal architectures could unlock applications far beyond the lab.
Why It Matters
Neural decoding is no longer science fiction—it’s becoming a science of precision. The implications are vast:
- Next-gen BCIs (Brain-Computer Interfaces): Moving beyond cursor control to immersive, intuitive communication.
- Assistive technologies: Helping individuals with limited mobility express themselves through reconstructed imagery.
- Cognitive research: Offering new windows into how the brain encodes identity, orientation, and perception.
- Creative tools: Imagine artists and designers sketching directly with their thoughts in VR.
We’re entering an era where AI’s final frontier is the brain itself, and this work brings us one step closer to crossing it.
Looking Ahead
The journey is just beginning. Scaling these models, integrating richer neural signals, and exploring real-world applications will define the next wave of brain-AI research. But the excitement is palpable: decoding the mind’s eye is no longer a dream. It’s happening.
Paper: https://lnkd.in/eHbX448C
Project: https://lnkd.in/eA5Ggt-t




