Mind-reading computers that can produce high-fidelity images from your thoughts alone are coming sooner than you think

AI (Artificial Intelligence) concept - woman creating photos from her thoughts.

Mind-reading computer systems that can produce high-fidelity images from your thoughts alone are coming. Are we a decade or less away from being able to think images into existence?

Short answer: less.

But before we dig into the long answer, let’s first answer the question of how this might be possible. Meta, the parent company of Facebook, recently began sharing the early results of brain-decoding research that has been in development for the past couple of years and that builds on DINOv2, its self-supervised computer-vision model. The project involves researchers using magnetoencephalography (MEG) combined with AI to understand how the brain processes visual information.

Magnetoencephalography isn’t quite as complicated as you might first assume. It’s a functional brain-scanning technique that captures, in real time, the magnetic fields generated by the electrical activity of neurons firing in the brain. It’s often used to assess conditions such as epilepsy and to aid in the removal of brain tumours, but it can also be used to study what the brains of healthy volunteers do when they are shown a series of different images.
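For the technically curious, the raw material here is surprisingly accessible. Below is a minimal sketch, using the open-source MNE-Python library (my choice for illustration; Meta hasn’t published its exact pipeline), of how an MEG recording is typically loaded and cut into one segment per image shown. The file name, trigger channel and filter band are all placeholders.

```python
import mne

# Load a raw MEG recording (FIF is a common MEG file format)
raw = mne.io.read_raw_fif("sample_meg_recording.fif", preload=True)

# Find the moments at which images were shown, using the trigger channel
events = mne.find_events(raw, stim_channel="STI 014")

# Keep only the MEG sensors and band-pass filter the signal
raw.pick("meg")
raw.filter(l_freq=0.1, h_freq=40.0)

# Cut the continuous recording into one segment ("epoch") per image,
# from 100 ms before each picture appears to 800 ms after
epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=0.8, baseline=(None, 0))
print(epochs.get_data().shape)  # (n_images, n_sensors, n_timepoints)
```

Each epoch is a sensors-by-time snapshot of what the brain did in the moments after one picture appeared, and that millisecond-scale timing is exactly what MEG is prized for.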

By pairing this data with AI trained to interpret it, Meta claims it has created a way to decode brain activity, generating images that are close to the ones a volunteer was originally shown. In a blog post on its website, Meta explained the process like this: “Using magnetoencephalography, we showcase an AI system capable of decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution.”
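To make the idea of “decoding” concrete, here is a deliberately simplified sketch: learn a mapping from brain recordings to image embeddings, then ask which candidate image the predicted embedding sits closest to. Meta’s actual decoder is a deep network trained with a contrastive objective; the ridge-regression stand-in, the random data and every array shape below are my own illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in data: 1,000 image viewings, each a flattened MEG epoch
# (272 sensors x 181 timepoints) paired with a 768-dim image embedding
rng = np.random.default_rng(0)
n_images, n_features, emb_dim = 1000, 272 * 181, 768
X = rng.standard_normal((n_images, n_features))  # brain activity (fake)
Y = rng.standard_normal((n_images, emb_dim))     # image embeddings, e.g. from DINOv2

# Fit a linear decoder on 800 brain/image pairs...
decoder = Ridge(alpha=1e4).fit(X[:800], Y[:800])

# ...then predict embeddings for 200 held-out trials
pred = decoder.predict(X[800:])

# Retrieval: for each prediction, which candidate image is the best match?
best = cosine_similarity(pred, Y[800:]).argmax(axis=1)
top1 = (best == np.arange(len(best))).mean()
print(f"top-1 retrieval accuracy: {top1:.3f}")  # ~chance (0.005) on random data
```

On real MEG data the decoder ranks the true image far above chance; here the point is simply the shape of the pipeline: brain signal in, image embedding out, nearest match retrieved.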

The system can analyse thousands of brain-activity measurements per second and churn out approximated images that at least fit the categories of the originals, even if some of them miss the mark. Earlier approaches relied on people manually feeding and categorising thousands of images and video clips for an AI model. What’s critical about DINOv2 is that it can learn from images without human input, and without being told what they show or in what context. As a self-supervised model, DINOv2 can pick up key visual characteristics, including depth and the natural behaviour of light in different scenarios.
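That “no labels needed” point is easy to see in practice. Meta has published DINOv2’s pretrained backbones, and a few lines of Python (using the torch.hub entry point documented in the facebookresearch/dinov2 repository) will turn any photo into a feature vector without a single category label in sight. The image file name below is a placeholder.

```python
import torch
from PIL import Image
from torchvision import transforms

# Load the small DINOv2 Vision Transformer (public checkpoint from Meta)
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Standard ImageNet-style preprocessing; 224 px is divisible by the
# model's 14 px patch size
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = model(img)  # one embedding per image, learned without labels
print(features.shape)      # torch.Size([1, 384]) for the ViT-S/14 model
```

Embeddings like these, rather than raw pixels, are what brain-decoding systems learn to predict from MEG activity.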

What could all of this mean in terms of real-world applications? Computer-vision systems are already being used to help create the first generations of self-driving transport. But if computers can read our minds and generate high-fidelity representations, we could eventually create unique images from thoughts alone; no cameras, no canvas, no paintbrushes or pens required. Could we get to a point where we record our dreams like we save TV shows? Perhaps that’s further down the road.

Of course, there are huge implications for our privacy and autonomy as individuals in a world where our minds can be looked into and their contents displayed for others to see. In many ways, it’s a terrifying prospect. But as a technological optimist, I’m excited to see where research like this may lead. I’m excited to see the creations generated by people when there’s literally no filter between our personal vision and what we can generate.

Read more of Jon Devo's Scanning Ahead blogs