Technology can read our minds! I know that sounds like something straight out of a sci-fi horror movie, but this new development is actually pretty cool. Neuroscientists at the University of Toronto Scarborough have developed an algorithm that can reconstruct images of what a person is perceiving, based solely on brain activity recorded with electroencephalography (EEG).
The algorithm was developed by Dan Nemrodov, who said, “When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process.”
To test the new algorithm, the researchers hooked subjects up to an EEG. The subjects were then shown images of faces while the EEG recorded their brain activity. From there, the algorithm attempted to digitally recreate the image in the person’s mind based only on the EEG recordings. The algorithm relied on machine learning, so its reconstructions improved with each scan it processed.
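The article does not describe the authors' actual pipeline, but the general idea behind this kind of decoding can be sketched as learning a mapping from EEG activity to image features. The sketch below is a minimal, hypothetical illustration using synthetic data and a plain linear decoder; the dimensions, the linear model, and the data are all assumptions, not details from the study.

```python
import numpy as np

# Hypothetical setup: each trial yields an EEG feature vector (e.g. channel
# amplitudes over a time window) and the face image the subject was viewing,
# compressed to a low-dimensional feature vector (e.g. via PCA).
rng = np.random.default_rng(0)
n_trials, n_eeg_features, n_image_features = 200, 64, 16

# Synthetic stand-in data: a hidden linear map plus a little noise.
true_map = rng.normal(size=(n_eeg_features, n_image_features))
eeg = rng.normal(size=(n_trials, n_eeg_features))
images = eeg @ true_map + 0.1 * rng.normal(size=(n_trials, n_image_features))

# Training: fit a linear decoder from EEG activity to image features.
decoder, *_ = np.linalg.lstsq(eeg, images, rcond=None)

# Reconstruction: estimate image features for a new trial from EEG alone.
new_eeg = rng.normal(size=(1, n_eeg_features))
reconstruction = new_eeg @ decoder
print(reconstruction.shape)  # one reconstructed image-feature vector
```

In a real system, the decoded feature vector would then be mapped back into pixel space to produce the reconstructed face image.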
Surprisingly, this is not the first time that an algorithm has been able to recreate images from neuroimaging. Researchers have been able to successfully recreate images using functional magnetic resonance imaging (fMRI), but this is the first time they have found success with EEG scans.
fMRI can resolve finer spatial detail of what is happening in the brain, but EEG is more practical for this kind of mind reading because the equipment is more common, more portable, and far cheaper than an fMRI scanner. EEG also offers much greater temporal resolution, capturing changes in brain activity within milliseconds.
"fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG," says Nemrodov. It takes our brain about 170 milliseconds to figure out what the person is seeing.
The researchers’ next steps are to determine whether images can be reconstructed from EEG data when a subject recalls a face from memory rather than viewing it, and whether the algorithm can reconstruct things other than faces. They hope the technique could eventually serve many purposes, such as giving people who cannot communicate verbally a way to express themselves, or gathering eyewitness accounts of a crime.
"What's really exciting is that we're not reconstructing squares and triangles but actual images of a person's face, and that involves a lot of fine-grained visual detail," adds assistant professor Adrian Nestor, whose lab was used for this research.
"The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities. It unveils the subjective content of our mind and it provides a way to access, explore and share the content of our perception, memory and imagination."
The paper on this research was published in eNeuro.