Scientists have found a way to translate brain signals into understandable speech using a prosthetic voice decoder.
They measured the brain activity that controls the jaw, larynx, lips, and tongue as people attempted to speak, and used those signals to translate the intended movements into the intended speech.
Researchers have developed other virtual speech aids, which work by decoding the brain signals associated with letters and words — the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking.
The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.
The research could help individuals with speech impairments reconnect with society.
A simulated vocal tract animation is available from The New York Times.
(Image Credit: University of California, San Francisco)