Paralyzed People Can Now 'Talk Again' With the Help of a Brain Implant That Converts Thoughts to Speech Instantly

Scientists have been making rapid strides in speech technology, and artificial intelligence has accelerated that progress, according to Live Science. Recently, a team of scientists announced a development that could help paralyzed individuals lead better lives. The findings have been published in the journal Nature Neuroscience.

The team has developed a speech neuroprosthesis, which they believe can broadcast an individual's thoughts through a speaker. The advance has attracted attention worldwide because it is the first time scientists have demonstrated near-synchronous brain-to-voice streaming. The device works by attaching electrodes to the brain's surface; through these electrodes, scientists detect speech signals and then interpret them.
The device's brain-computer interface (BCI) uses AI to decode neural signals, according to the study. An earlier version of the device was introduced by the same team in 2023. It performed the same function, but it was slower and its output sounded robotic. The team improved on both shortcomings in the updated device: the AI in the new version can stream intended speech from the brain in close to real time. The study reported that the implant sampled brain signals every 80 milliseconds (0.08 seconds) and could convert them to speech after three seconds of detection.
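The streaming idea described above can be illustrated with a minimal sketch: a decoder that processes fixed 80 ms windows of neural samples as they arrive, instead of buffering an entire utterance first. Everything here is invented for illustration, including the toy `decode_window` stand-in; the real system uses a trained AI model on actual cortical recordings.

```python
WINDOW_MS = 80  # the sampling interval reported in the article


def decode_window(window):
    """Toy stand-in for a neural decoder: maps one chunk of 'neural'
    samples to a speech-unit label. Purely illustrative."""
    return f"unit_{sum(window) % 10}"


def stream_decode(samples, sample_rate_hz=1000):
    """Emit a decoded speech unit for every 80 ms window as data streams
    in, rather than waiting for the whole sentence to finish."""
    window_len = sample_rate_hz * WINDOW_MS // 1000  # samples per 80 ms
    units = []
    for start in range(0, len(samples) - window_len + 1, window_len):
        window = samples[start:start + window_len]
        units.append(decode_window(window))  # output available immediately
    return units


# A one-second toy "recording" at 1 kHz splits into twelve 80 ms windows.
fake_neural_data = list(range(1000))
print(len(stream_decode(fake_neural_data)))  # 12
```

The point of the sketch is the loop structure: each window is decoded the moment it arrives, which is why output can begin long before the speaker finishes a sentence.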
Researchers incorporated decoding technology used by popular consumer devices into their speech neuroprosthesis. "Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," study co-principal investigator Gopala Anumanchipalli, an assistant professor of electrical engineering and computer sciences at UC Berkeley, said. "Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming."
The team is hopeful that this device will allow people suffering from paralysis to communicate through synthetic speech technology. "We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control," study co-lead author Cheol Jun Cho, a doctoral student in electrical engineering and computer sciences at UC Berkeley, said. "So what we’re decoding is after a thought has happened, after we've decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles."
The first individual to use the updated device was a woman named Ann, who lost the ability to speak after a stroke in 2005. This is not the first time she has used such technology to improve her communication; to date, 253 electrodes have been placed on her brain for this purpose. Beforehand, Ann helped the team train their AI algorithm to identify which neural activity was associated with which speech: the AI recorded her neural activity as she attempted to speak sentences shown to her on a screen.
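The training step described above amounts to pairing recorded neural activity with the sentence cues the participant attempted to speak, then learning the mapping between them. The sketch below reduces this to a toy lookup model over invented feature vectors; the names, data, and "model" are all hypothetical, and the real system trains a far more sophisticated AI on continuous recordings.

```python
from collections import defaultdict


def train(pairs):
    """Toy trainer: for each feature signature, remember the cue that
    co-occurred with it most often. Illustrative only."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, cue in pairs:
        counts[tuple(features)][cue] += 1
    return {sig: max(cues, key=cues.get) for sig, cues in counts.items()}


# Invented training pairs: (toy "neural" feature vector, attempted cue).
training_pairs = [
    ([1, 0, 1], "hello"),
    ([1, 0, 1], "hello"),
    ([0, 1, 1], "thank you"),
]
model = train(training_pairs)
print(model[(1, 0, 1)])  # hello
```

The structure mirrors the article's description: the cues on the screen provide the labels, and the recorded activity provides the inputs the decoder learns from.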
The team says this version is faster than the previous one because it can decode shorter neural signals, rather than waiting for a longer signal representing a complete sentence. "This proof-of-concept framework is quite a breakthrough," Cho said. "We are optimistic that we can now make advances at every level. On the engineering side, for example, we will continue to push the algorithm to see how we can generate speech better and faster."