AI is getting way better at deciphering our thoughts, for better or worse.
Scientists at the University of Texas at Austin published a study in Nature Neuroscience describing how they used functional magnetic resonance imaging (fMRI) and GPT-1, a predecessor of the AI system behind ChatGPT, to create a non-invasive mind decoder that can read brain activity and capture the essence of what someone is thinking.
To train the AI, researchers placed three people in an fMRI scanner and played entertaining podcasts for them to listen to, including The New York Times' Modern Love and The Moth Radio Hour. The scientists used transcripts of the podcasts to align the recorded brain activity with the words being heard, figuring out which patterns of activity corresponded to which words.
To see whether the AI could decode imagery, the scientists played silent, subtitled clips from Pixar movies, then tested whether the decoder could reconstruct the stories the subjects told in their heads without speaking. The results weren't word-for-word transcripts, but they were accurate enough for the decoder to capture the meaning behind the subjects' thoughts and convert it into text.
On one hand, this is really exciting news. Just imagine a future where people with neurological conditions or stroke survivors could once again communicate with the help of this type of technology.
The decoder, however, is not fully developed yet. The AI only works if it's trained on brain activity from the specific person it's used on, which limits how widely it can be deployed. There's also the barrier of the fMRI scanners themselves, which are large and expensive. Plus, the scientists found that the decoder can be thrown off if people 'lie' to it by deliberately thinking about something other than what they're hearing.
These obstacles may be a positive, because a machine that can decode people's thoughts raises serious privacy concerns. There's currently no way to limit the tech's use to medicine, and just imagine if the decoder could be used as a surveillance or interrogation tool. So, before AI mind-reading develops further, scientists and policymakers need to seriously consider the ethical implications and enact laws that protect mental privacy, ensuring this kind of tech is used only to benefit humanity.