

Groundbreaking AI system reads minds and produces text without implants

• https://newatlas.com, By Paul McClure

Called a semantic decoder, the system may help people who are conscious but unable to speak, such as those who've suffered a stroke.

This new brain-computer interface differs from other 'mind-reading' technology because it doesn't need to be implanted into the brain. The researchers at UT Austin took non-invasive recordings of the brain using functional magnetic resonance imaging (fMRI) and reconstructed perceived or imagined stimuli as continuous, natural language.

While fMRI produces excellent-quality images, the signal it measures, which depends on blood oxygen levels, is very slow: an impulse of neural activity causes a rise and fall in blood oxygen over about 10 seconds. Because naturally spoken English uses more than two words per second, each brain image can be affected by more than 20 words.
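As a rough illustration of that mismatch, the arithmetic works out as in the short sketch below (the two rates are the article's approximate figures, not precise study parameters):

```python
# Back-of-the-envelope arithmetic for the fMRI timing mismatch described above.
# Both numbers are the article's approximations, not measured study values.
speech_rate_wps = 2.0   # words per second in naturally spoken English (a lower bound)
hrf_window_s = 10.0     # approximate rise-and-fall of the blood-oxygen signal

words_per_image = speech_rate_wps * hrf_window_s
print(f"Each brain image reflects roughly {words_per_image:.0f} or more words")
```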

That's where the semantic decoder comes in. It uses an encoding model similar to those behind OpenAI's ChatGPT and Google's Bard that can predict how a person's brain will respond to natural language. To 'train' the decoder, the researchers recorded three people's brain responses while they listened to 16 hours of spoken stories. The decoder could predict, with considerable accuracy, how a person's brain would respond to hearing a sequence of words.
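The article doesn't detail the model, but encoding models of this kind are often built by extracting language-model features for the words a listener hears and fitting a regularized linear map from those features to each voxel's fMRI response. The sketch below is a hypothetical minimal version: the random arrays stand in for real features and scans, and the single ridge penalty is an assumption (real pipelines typically tune it per voxel).

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in data: one row per fMRI sample. In a real pipeline, X would hold
# language-model embeddings of the words heard around each sample, and Y the
# measured voxel responses. The shapes here are arbitrary.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((5000, 768))    # LM features per fMRI sample
Y_train = rng.standard_normal((5000, 1000))   # voxel activity per sample

# Regularized linear encoding model: features -> predicted voxel activity.
encoder = Ridge(alpha=10.0)
encoder.fit(X_train, Y_train)

def predict_brain_response(features: np.ndarray) -> np.ndarray:
    """Predict voxel-wise activity for the features of a candidate word sequence."""
    return encoder.predict(features)
```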

"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," said Alexander Huth, corresponding author of the study.

The result does not recreate the stimulus word for word. Rather, the decoder picks up on the gist of what's being said. It's not perfect, but about half the time, it produced text that was closely – sometimes precisely – matched to the original.
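The article doesn't say how candidate text is chosen, but one standard way to get this 'gist' behavior is to have a language model propose candidate word sequences, predict each candidate's brain response with an encoding model like the one sketched above, and keep the candidate whose prediction best correlates with the actual scan. A minimal, hypothetical sketch (the candidate features and the `predict` callable are assumptions for illustration):

```python
import numpy as np
from typing import Callable

def decode_gist(
    candidate_features: list[np.ndarray],
    measured: np.ndarray,
    predict: Callable[[np.ndarray], np.ndarray],
) -> int:
    """Return the index of the candidate whose predicted response fits best.

    `predict` is an encoding model such as the ridge sketch above; in practice
    the candidates come from a language model and this runs as a beam search.
    """
    def score(features: np.ndarray) -> float:
        predicted = predict(features)
        # Correlate the predicted voxel activity with the recorded scan.
        return float(np.corrcoef(predicted.ravel(), measured.ravel())[0, 1])

    return int(np.argmax([score(f) for f in candidate_features]))
```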

When participants were actively listening to a story and ignoring another story being played simultaneously, the decoder was able to capture the gist of the story being actively listened to.

Besides listening to and thinking about stories, participants were asked to watch four short, silent videos while their brains were scanned using fMRI. The semantic decoder translated their brain activity into accurate descriptions of certain events in the videos they watched.

