CerCo UMR 5549, CNRS - Université Toulouse III, Toulouse, France.
IPAL, CNRS IRL 2955, Singapore, Singapore.
PLoS Comput Biol. 2024 Aug 2;20(8):e1012288. doi: 10.1371/journal.pcbi.1012288. eCollection 2024 Aug.
Sounds are temporal stimuli that the auditory nervous system decomposes into numerous elementary components. For instance, a temporal-to-spectro-temporal transformation modelling the frequency decomposition performed by the cochlea is a widely adopted first processing step in today's computational models of auditory neural responses. Similarly, increments and decrements in sound intensity (of the raw waveform itself or of its spectral bands) constitute critical features of the neural code, with high behavioural significance. However, despite the growing attention of the scientific community to auditory OFF responses, their relationship with transient ON responses, sustained responses and adaptation remains unclear. In this context, we propose a new general model, named AdapTrans, based on a pair of linear filters, that captures both sustained and transient ON and OFF responses in a unifying and easily extensible framework. We demonstrate that filtering audio cochleagrams with AdapTrans accurately reproduces known properties of neural responses measured in different mammalian species, such as the dependence of OFF responses on the stimulus fall time and on the preceding sound duration. Furthermore, by integrating our framework into gold-standard and state-of-the-art machine learning models that predict neural responses from audio stimuli, trained in a supervised fashion on a large compilation of electrophysiology datasets (ready-to-deploy PyTorch models and pre-processed datasets are shared publicly), we show that AdapTrans systematically improves the prediction accuracy of estimated responses within different cortical areas of the rat and ferret auditory brain. Together, these results motivate the use of our framework by computational and systems neuroscientists seeking to increase the plausibility and performance of their models of audition.
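To make the idea of a sustained-plus-transient filter pair concrete, here is a minimal, hypothetical sketch (not the authors' actual AdapTrans filters or parameters): each frequency channel of a cochleagram is convolved with a slow low-pass kernel for the sustained pathway, while the temporal derivative of a fast low-pass output is half-wave rectified into separate ON (increment) and OFF (decrement) channels. All function names, time constants, and kernel shapes below are illustrative assumptions.

```python
import numpy as np

def adaptrans_like(cochleagram, tau_sustained=0.1, tau_transient=0.01, fs=100):
    """Hypothetical sketch of a sustained/transient filter pair.

    cochleagram : array of shape (n_freq, n_time), sampled at `fs` Hz.
    Returns (sustained, on, off), each with the same shape as the input.
    """
    # Sustained pathway: causal exponential low-pass kernel (illustrative choice)
    t_slow = np.arange(0, 5 * tau_sustained, 1.0 / fs)
    lp_slow = np.exp(-t_slow / tau_sustained)
    lp_slow /= lp_slow.sum()
    sustained = np.apply_along_axis(
        lambda x: np.convolve(x, lp_slow, mode="full")[: x.size], 1, cochleagram)

    # Transient pathway: temporal derivative of a faster low-pass output
    t_fast = np.arange(0, 5 * tau_transient, 1.0 / fs)
    lp_fast = np.exp(-t_fast / tau_transient)
    lp_fast /= lp_fast.sum()
    fast = np.apply_along_axis(
        lambda x: np.convolve(x, lp_fast, mode="full")[: x.size], 1, cochleagram)
    transient = np.diff(fast, axis=1, prepend=fast[:, :1])

    # Half-wave rectification splits increments (ON) from decrements (OFF)
    on = np.maximum(transient, 0.0)
    off = np.maximum(-transient, 0.0)
    return sustained, on, off
```

On a toy cochleagram containing a tone burst, the ON channel of such a sketch peaks at sound onset and the OFF channel at sound offset, qualitatively matching the transient response types discussed in the abstract.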