TY - JOUR
T1 - A neural speech decoding framework leveraging deep learning and speech synthesis
AU - Chen, Xupeng
AU - Wang, Ran
AU - Khalilian-Gourtani, Amirhossein
AU - Yu, Leyao
AU - Dugan, Patricia
AU - Friedman, Daniel
AU - Doyle, Werner
AU - Devinsky, Orrin
AU - Wang, Yao
AU - Flinker, Adeen
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/4
Y1 - 2024/4
N2 - Decoding human speech from neural signals is essential for brain–computer interface (BCI) technologies that aim to restore speech in populations with neurological deficits. However, it remains a highly challenging task, compounded by the scarce availability of neural signals with corresponding speech, data complexity and high dimensionality. Here we present a novel deep learning-based neural speech decoding framework that includes an ECoG decoder that translates electrocorticographic (ECoG) signals from the cortex into interpretable speech parameters and a novel differentiable speech synthesizer that maps speech parameters to spectrograms. We have developed a companion speech-to-speech auto-encoder consisting of a speech encoder and the same speech synthesizer to generate reference speech parameters to facilitate the ECoG decoder training. This framework generates natural-sounding speech and is highly reproducible across a cohort of 48 participants. Our experimental results show that our models can decode speech with high correlation, even when limited to only causal operations, which is necessary for adoption by real-time neural prostheses. Finally, we successfully decode speech in participants with either left or right hemisphere coverage, which could lead to speech prostheses in patients with deficits resulting from left hemisphere damage.
UR - http://www.scopus.com/inward/record.url?scp=85189876581&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85189876581&partnerID=8YFLogxK
DO - 10.1038/s42256-024-00824-8
M3 - Article
AN - SCOPUS:85189876581
SN - 2522-5839
VL - 6
SP - 467
EP - 480
JO - Nature Machine Intelligence
JF - Nature Machine Intelligence
IS - 4
ER -