Health problems such as a stroke or amyotrophic lateral sclerosis (ALS) can leave lasting damage to a person’s ability to speak. In the United States, two teams of researchers working separately have developed brain-reading implants enhanced by artificial intelligence (AI).
These devices have allowed two paralyzed people to communicate with unprecedented accuracy and speed. Both advances are described in studies published in the journal Nature.
In their reports, the research teams describe brain-computer interfaces that translate neural signals into text or into words spoken by an artificial voice.
These interfaces can decode speech at 62 and 78 words per minute, respectively. That is still slower than normal conversation, which runs at about 160 words per minute, but the new techniques are faster than any previous attempt.
“It is now possible to imagine a future in which we can restore fluent conversation to a person with paralysis, allowing them to say whatever they want freely, with an accuracy high enough to be understood reliably,” says Francis Willett, a neuroscientist at Stanford University in California and co-author of one of the studies.
Willett and his colleagues developed an interface that interprets neural activity at the cellular level and translates it into text. Their patient was Pat Bennett, 67, who has amyotrophic lateral sclerosis, a condition that causes a gradual loss of muscle control and, with it, difficulties in moving and speaking.
The researchers inserted arrays of tiny silicon electrodes into parts of Bennett’s brain responsible for speech, just a few millimeters below the surface.
They then trained deep learning algorithms to recognize unique signals in the patient’s brain as she tried to pronounce different sentences using a large vocabulary of 125,000 words and a small vocabulary of 50 words.
The artificial intelligence decodes words from phonemes, the subunits of speech that make up spoken words. For the 50-word vocabulary, the interface ran 2.7 times faster than the previous state-of-the-art interface and achieved a word error rate of 9.1%.
The error rate rose to 23.8% for the 125,000-word vocabulary. “About three out of every four words are decoded correctly,” Willett said at a news conference.
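To make the phoneme-based approach concrete, the Python sketch below illustrates the general idea: windows of multichannel neural features are scored against a small phoneme inventory, repeated labels are collapsed, and a lexicon maps the resulting phoneme string to a word. Everything in it (the channel count, the toy inventory, the random “decoder” weights and the one-entry lexicon) is invented for illustration and is not the model used in either study.

```python
import numpy as np

# Hypothetical illustration only: the channel count, phoneme inventory,
# random "decoder" weights and one-entry lexicon are all invented here
# and are not taken from either study.
N_CHANNELS = 128                                   # e.g. one feature per electrode
PHONEMES = ["SIL", "HH", "AH", "L", "OW"]          # toy phoneme inventory
rng = np.random.default_rng(0)
W = rng.normal(size=(N_CHANNELS, len(PHONEMES)))   # stand-in for a trained decoder

def decode_phonemes(windows: np.ndarray) -> list[str]:
    """Assign each window of neural features to its highest-scoring phoneme."""
    scores = windows @ W                           # shape: (n_windows, n_phonemes)
    return [PHONEMES[i] for i in scores.argmax(axis=1)]

def collapse(seq: list[str]) -> list[str]:
    """Merge repeated labels and drop silence, CTC-style."""
    out = []
    for p in seq:
        if p != "SIL" and (not out or out[-1] != p):
            out.append(p)
    return out

# Simulated neural features that weakly encode the phonemes of "hello".
target = ["HH", "HH", "AH", "L", "L", "OW", "SIL"]
windows = np.stack([W[:, PHONEMES.index(p)] + 0.1 * rng.normal(size=N_CHANNELS)
                    for p in target])

LEXICON = {("HH", "AH", "L", "OW"): "hello"}       # stand-in for a language model
units = collapse(decode_phonemes(windows))
print(units, "->", LEXICON.get(tuple(units), "<unknown>"))
```

In a real system the random weights would be replaced by a deep network trained on the patient’s recorded speech attempts, and the one-entry lexicon by a large-vocabulary language model.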
Bennett herself said: “For people who cannot speak, this means they can stay connected to the wider world, perhaps continue to work, and maintain friendships and family relationships.”
Meanwhile, Edward Chang, a neurosurgeon at the University of California, San Francisco, and his colleagues worked in their study with a 47-year-old woman named Ann, who had lost the ability to speak after suffering a stroke 18 years earlier.
They used a different method from Willett’s team: they placed a thin rectangle containing 253 electrodes on the surface of the cerebral cortex. The technique, called electrocorticography (ECoG), is less invasive and can record the combined activity of thousands of neurons at the same time.
The team trained AI algorithms to identify patterns in Ann’s brain activity associated with her attempts to pronounce 249 sentences with a vocabulary of 1,024 words. The device produced 78 words per minute with an average word error rate of 25.5%.
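Word error rate, the figure both teams report, is the word-level edit distance (substitutions, insertions and deletions) between the decoded sentence and the intended one, divided by the number of intended words; a rate of 23.8% or 25.5% therefore means roughly one word in four needs correcting. The short Python sketch below shows the standard calculation on an invented pair of sentences; it is illustrative only and uses no data from either study.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Toy example (invented sentences, not study data):
ref = "i would like a cup of coffee please"
hyp = "i would like a cup of tea"
print(f"WER = {word_error_rate(ref, hyp):.1%}")  # 1 substitution + 1 deletion over 8 words = 25.0%
```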
Although the implants used by Willett’s team, which capture neural activity more precisely, beat that score with a much larger vocabulary, “it is gratifying to see that such a low word error rate can be achieved with ECoG,” notes Blaise Yvert, a neurotechnology researcher at the Grenoble Institute of Neuroscience in France.
Chang and his team also created custom algorithms that convert the patient’s brain signals into an artificial voice and into an animated avatar that mimics facial expressions. They customized the voice to sound like Ann’s before her injury, training it on recordings from her wedding video.
“Simply hearing a voice like your own is moving,” Ann told the researchers in a session after the study. “When I was able to speak for myself, it was amazing!” she added.
“The voice is a really important part of our identity. It’s not just about communication, it’s also about who we are,” Chang said.
Despite the benefits the patients have experienced, many improvements are needed before brain implants can be made available for clinical use. “Ideally, the connection should be wireless,” Ann told the researchers. An interface suitable for everyday use would have to be a fully implantable system, with no visible connectors or cables, Yvert added.
Both teams hope to keep increasing the speed and accuracy of their devices with more powerful decoding algorithms.
According to Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands, the participants in both studies were still able to activate their facial muscles when trying to speak, and the brain areas associated with speech were intact, although he acknowledged that this will not be the case for every patient.
“We see it as a proof of concept and a catalyst for industry to turn it into a product that people can actually use,” Willett explained.
The devices still need to be tested in clinical trials with more people to prove their reliability. “Although these data are impressive and technically advanced, we have to understand them in context and in a very measured way,” says Judy Illes, a neuroethics researcher at the University of British Columbia in Vancouver, Canada.
She added: “We have to be wary of making overly promising generalizations to large segments of the population.” More optimistic, Herff believes that brain implants “could be products of the very near future.”