A team of scientists and doctors at Duke University has developed a new technology that can translate a person’s brain signals into speech. The speech prosthetic, revealed in the journal Nature Communications on Nov. 6, is seen as a potential breakthrough for individuals who struggle to communicate due to neurological disorders.
The prosthetic aims to assist people with conditions such as ALS (amyotrophic lateral sclerosis) or locked-in syndrome, in which speech impairment is a major challenge. Current communication tools for such patients are often slow and cumbersome, making it difficult for them to express themselves effectively.
The technology uses brain signals to predict the words a person is trying to say. It does this through a brain-computer interface, which interprets signals from the brain's speech center. The development could open new possibilities for people who have lost the ability to speak.
The best existing speech decoding technology works at about 78 words per minute, akin to listening to an audiobook at half-speed, while people typically speak at around 150 words per minute. The prosthetic aims to close this gap and provide a more natural, efficient means of communication.
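For context, the gap behind the audiobook analogy is a simple ratio of the two rates quoted above:

```python
# Quick check of the figures above: 78 wpm of decoded speech versus
# ~150 wpm of natural speech works out to roughly half-speed.
decoded_wpm = 78
natural_wpm = 150
print(f"Decoding runs at {decoded_wpm / natural_wpm:.0%} of natural speaking speed")
# -> Decoding runs at 52% of natural speaking speed
```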
To enhance the accuracy of decoding, the team incorporated 256 microscopic brain sensors onto a small, flexible, medical-grade plastic device. This device, about the size of a postage stamp, was placed on the surface of the brain during experiments.
The researchers collaborated with neurosurgeons at Duke University Hospital to conduct tests on four patients undergoing brain surgery for other conditions. Because time in the operating room was limited, the team had to work like a NASCAR pit crew, setting up quickly so the tests added as little time as possible to the overall procedure.
Participants in the study performed a simple listen-and-repeat task: they heard a series of nonsense words and spoke them aloud. The brain sensors recorded activity from the speech motor cortex, which coordinates the muscles involved in speech.
The recorded neural and speech data were then fed into a machine learning algorithm, which tried to predict the sound being made from the brain activity alone. Initial results were promising: the decoder reached an overall accuracy of 40%, a notable figure given the limited time and data available during the tests.
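The article does not publish the team's code, but the general approach, classifying short windows of multi-electrode neural activity into speech-sound labels, can be sketched roughly as follows. The trial counts, time-window size, phoneme inventory, and choice of a random-forest classifier are all illustrative assumptions, and synthetic random data stands in for real recordings; this is not the study's actual pipeline.

```python
# Illustrative sketch of decoding speech sounds from neural recordings.
# All shapes and the classifier choice are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_trials = 400      # one trial per spoken nonsense-word segment (assumed)
n_channels = 256    # matches the 256 sensors described above
n_timebins = 20     # short window of activity around each utterance (assumed)
n_phonemes = 39     # rough size of the English phoneme inventory

# Synthetic stand-in for recorded neural activity and phoneme labels.
X = rng.normal(size=(n_trials, n_channels * n_timebins))
y = rng.integers(0, n_phonemes, size=n_trials)

# Hold out a quarter of the trials to measure decoding accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"Decoding accuracy: {acc:.0%} (chance is ~{1 / n_phonemes:.0%})")
```

Run on this random stand-in data, a decoder like this hovers near the chance level of roughly 3% for a 39-sound inventory, which underscores why the 40% figure reported above is meaningful: it is more than ten times better than guessing.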
The researchers are optimistic about further development and, supported by a recent $2.4 million grant from the National Institutes of Health, are working on a wireless version of the device, which would give users more flexibility and mobility.