
Stanford researchers have developed a brain-computer interface (BCI) that can turn imagined speech into text for people with severe paralysis, enabling communication without any physical movement.
The advance in assistive neuroscience, led by Frank Willett, PhD, an assistant professor of neurosurgery at Stanford, focuses on decoding inner speech, the silent words we form in our minds, rather than attempted speech or hand movements. By tapping directly into brain activity, the team aims to restore a natural mode of communication for patients who have lost the ability to speak.
How Neural Decoders Read Thoughts
The neural decoding system uses microelectrode arrays implanted in the motor cortex, the brain region that oversees movement, including the areas that control the muscles used for speech. The electrodes record neural activity, which machine-learning algorithms then interpret.
The algorithms are trained to recognize recurring patterns of activity associated with the smallest units of speech, known as phonemes. The computer then combines these units into full words and sentences. The approach builds on earlier Stanford studies showing that BCIs could decode brain signals when patients attempted to speak or write.
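The pipeline described above can be sketched in miniature: classify each window of neural features as a phoneme, then collapse the phoneme stream into a sequence. Everything here is a hypothetical stand-in, with random vectors playing the role of learned phoneme templates; real systems use neural-network decoders and language models, not nearest-template matching.

```python
import math
import random

PHONEMES = ["HH", "EH", "L", "OW"]  # tiny inventory for illustration

random.seed(0)
# Hypothetical per-phoneme template feature vectors (stand-ins for
# learned centroids of neural activity recorded during imagined speech).
templates = {p: [random.gauss(0, 1) for _ in range(8)] for p in PHONEMES}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_window(features):
    """Assign one window of neural features to the nearest phoneme template."""
    return min(templates, key=lambda p: distance(features, templates[p]))

def decode(windows):
    """Classify each window, then collapse consecutive repeats
    (a simplified stand-in for CTC-style sequence decoding)."""
    labels = [classify_window(w) for w in windows]
    return "-".join([labels[0]] + [b for a, b in zip(labels, labels[1:]) if b != a])

# Simulate noisy feature windows for the phoneme sequence of "hello",
# two windows per phoneme.
windows = [[v + random.gauss(0, 0.1) for v in templates[p]]
           for p in ["HH", "HH", "EH", "EH", "L", "L", "OW", "OW"]]
print(decode(windows))  # HH-EH-L-OW
```

The collapse step reflects a real design issue: neural activity is sampled faster than phonemes are produced, so a decoder must merge repeated labels rather than emit one phoneme per time step.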
In this study, the researchers asked four individuals with severe motor and speech impairments to use the system. Remarkably, their inner speech produced clear and consistent neural activity patterns, similar in structure, though weaker, to those observed during attempted speech.
Decoding Cognition from Spontaneous Neural Activity
The results demonstrated that imagined speech can be decoded, offering proof of principle that future BCIs could provide fluent, effortless communication through inner monologue alone. This would reduce the fatigue and frustration patients experience when trying to translate words into physical movements.
However, the technology also raises privacy concerns. Because inner speech is often habitual, there is a risk that BCIs could record thoughts not meant for transmission. Current systems are not yet refined enough to decode rapid, unstructured inner speech accurately, but researchers are already working on safeguards.
These include training BCIs to filter out unintended signals and consider only those associated with deliberate communication. Another safeguard is a "password" system, in which the machine activates only when the user imagines a pre-chosen, secret phrase. This keeps control firmly in the patient's hands, preventing accidental decoding.
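The password mechanism can be illustrated as a simple gate around the decoder's output: nothing is transmitted until the secret phrase is detected, and a stop phrase relocks it. The phrases and class design here are hypothetical placeholders; in a real BCI the match would be made against decoded phoneme sequences, not exact strings.

```python
class GatedDecoder:
    """Toy sketch of an imagined-password safeguard: decoded text is
    discarded unless the user has unlocked the system with a secret
    phrase, so habitual inner speech is never transmitted by accident."""

    def __init__(self, password, stop_phrase="stop listening"):
        self.password = password      # pre-chosen, imagined unlock phrase
        self.stop_phrase = stop_phrase
        self.active = False

    def process(self, decoded_phrase):
        """Return text only while the decoder is unlocked; otherwise None."""
        if not self.active:
            if decoded_phrase == self.password:
                self.active = True    # unlock: begin transmitting
            return None               # anything before the password is dropped
        if decoded_phrase == self.stop_phrase:
            self.active = False       # relock on the stop phrase
            return None
        return decoded_phrase

decoder = GatedDecoder(password="open sesame")     # placeholder phrase
print(decoder.process("private thought"))          # None (ignored while locked)
print(decoder.process("open sesame"))              # None (unlocks, not transmitted)
print(decoder.process("hello world"))              # hello world
```

Keeping the gate outside the decoder means the neural signal can be processed continuously while only deliberately released text ever leaves the system.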
Stanford’s research looks to a future where brain implants could restore a human voice after it has been lost.