Imagine a future in which individuals can regain the ability to communicate almost naturally through a brain-computer interface. In a new development, a team of Duke University researchers has unveiled a cutting-edge speech prosthetic that holds promise for people with speech-impairing neurological disorders.
How does it work?
The prosthetic features 256 brain sensors on a small, flexible piece of medical-grade plastic, and uses these high-density sensors together with machine learning to convert neural signals from the speech motor cortex into realistic speech. The study included four participants, all of whom had pre-existing motor disorders, such as ALS, and were already undergoing brain surgery. The researchers collaborated with the surgeons, implanting the device in participants' brains while the participants engaged in a speech repetition task. During this task, participants heard unfamiliar, nonsensical words, such as “Yak”, and were instructed to repeat them. The speech prosthetic recorded brain activity as participants coordinated the muscles involved in speech, capturing and decoding these signals and translating the intended speech into audible words.
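To make the decoding step more concrete, below is a minimal sketch in Python of how patterns recorded from 256 sensors could, in principle, be mapped to intended speech sounds with a machine-learning classifier. The numbers, the random “neural” data and the choice of classifier are all illustrative assumptions; this is not the Duke team's actual decoder.

```python
# Illustrative sketch only - not the Duke team's decoder.
# We pretend each trial gives one averaged feature per sensor and train a
# simple classifier to map that 256-dimensional pattern to an intended phoneme.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_sensors = 256      # high-density electrodes over the speech motor cortex
n_trials = 300       # hypothetical repetitions of nonsense words
n_phonemes = 4       # e.g. the individual sounds making up a short word

# Hypothetical feature matrix: one row per trial, one feature per sensor.
X = rng.normal(size=(n_trials, n_sensors))
y = rng.integers(0, n_phonemes, size=n_trials)   # intended-phoneme labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

decoder = RandomForestClassifier(n_estimators=200, random_state=0)
decoder.fit(X_train, y_train)

print(f"Held-out decoding accuracy: {decoder.score(X_test, y_test):.0%}")
```

The real system works on continuous neural signals and far richer features than this toy example, but the overall idea is the same: learning a mapping from sensor patterns to the speech sounds a person intended to produce.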
In the initial trials, the prosthetic demonstrated an impressive 40% accuracy in converting brain signals into ‘spoken words’: the decoding algorithm uses the recorded brain activity to predict which speech sounds a participant intended to produce. What makes this achievement notable is that the interface reached this level of accuracy with just 90 seconds of data, a far shorter duration than the extensive datasets typically required by other existing methods.
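To put that 90-second figure in perspective, here is a quick back-of-the-envelope calculation. The length of each repeated word is an assumption made purely for illustration and is not reported in the article.

```python
# Rough illustration with assumed numbers (not from the study):
# if each repeated nonsense word takes about 1.5 seconds to record,
# 90 seconds of data yields only around 60 training examples for the decoder.
recording_seconds = 90
assumed_seconds_per_word = 1.5   # hypothetical average trial length
n_examples = int(recording_seconds / assumed_seconds_per_word)
print(f"Approximate training examples from 90 s of data: {n_examples}")
```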
How can we apply this?
In the future, research on these types of prosthetics could revolutionise current assistive technologies for individuals with speech-impairing neurological disorders. The device promises faster and more natural verbal communication than existing neural speech prostheses. With 256 high-density brain sensors on a flexible substrate and the adaptability of machine learning, it provides a more comprehensive and personalised communication experience than conventional methods. Future integration with communication aids might enable real-time speech decoding, fostering dynamic conversations and accommodating larger vocabularies. Furthermore, the researchers aim to build on this work and create a cordless device for speech deciphering.
What challenges need to be overcome?
While this innovative technology represents a significant leap forward from existing communication aids, the main hurdle for future advancement lies in the current decoding speed, which still falls well behind the pace of natural speech, and in the accuracy level of approximately 40%. The researchers remain optimistic about refining the technology to match the rapid pace of natural speech, bringing it closer to mainstream availability and potentially transforming the lives of those with speech-related challenges.
Photo by Josh Riemer on Unsplash
This article was written by Rebecca Parker and edited by Julia Dabrowska. Interested in writing for WiN UK yourself? Contact us through the blog page and the editors will be in touch!