Electrical and Computer Engineering Department, New York University, Brooklyn, NY 11201.
Neurology Department, New York University, New York, NY 10016.
Proc Natl Acad Sci U S A. 2023 Oct 17;120(42):e2300255120. doi: 10.1073/pnas.2300255120. Epub 2023 Oct 11.
Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.
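The analysis of "derived receptive fields" over feedforward (neural activity preceding the acoustic output) versus feedback (neural activity following it) lags can be illustrated with a simple linear model. The sketch below is not the authors' implementation; it is a minimal example, with assumed sampling rate, lag range, ridge penalty, and synthetic data, of fitting a lagged decoder from cortical features to a speech feature and splitting the learned temporal weights into pre- and post-articulation contributions.

```python
# Minimal illustrative sketch (assumptions throughout): a ridge-regression
# temporal receptive field mapping lagged cortical features to one speech
# feature, with the weights split into "feedforward" (neural leads speech)
# and "feedback" (neural follows speech) parts.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: high-gamma power from n_elec electrodes and one speech
# feature (e.g., a spectrogram band), sampled on a shared time axis.
n_time, n_elec = 5000, 16
neural = rng.standard_normal((n_time, n_elec))
speech = rng.standard_normal(n_time)

# Lags in samples: negative = neural activity BEFORE the speech sample
# (feedforward-like), positive = neural activity AFTER it (feedback-like).
lags = np.arange(-20, 21)           # e.g., +/-200 ms at an assumed 100 Hz
n_lags = len(lags)

# Lagged design matrix: X[t, (electrode, lag)] = neural[t + lag, electrode].
valid = np.arange(20, n_time - 20)  # keep only fully valid windows
X = np.stack([neural[valid + lag] for lag in lags], axis=2)  # (T, elec, lag)
X = X.reshape(len(valid), n_elec * n_lags)
y = speech[valid]

# Ridge regression: w = (X^T X + alpha I)^{-1} X^T y.
alpha = 10.0
XtX = X.T @ X + alpha * np.eye(X.shape[1])
w = np.linalg.solve(XtX, X.T @ y).reshape(n_elec, n_lags)

# Compare receptive-field energy at feedforward vs. feedback lags.
ff_energy = np.sum(w[:, lags < 0] ** 2, axis=1)  # neural leads speech
fb_energy = np.sum(w[:, lags > 0] ** 2, axis=1)  # neural follows speech
print("per-electrode weight energy (feedforward, feedback):")
print(np.c_[ff_energy, fb_energy])
```

Under the paper's framing, electrodes in both frontal and temporal cortex would carry weight at both negative and positive lags, rather than segregating cleanly into a feedforward (frontal) and a feedback (temporal) network.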