McClure John P, Erkat O Batuhan, Corbo Julien, Polack Pierre-Olivier
Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ, United States.
Behavioral and Neural Sciences Graduate Program, Rutgers University-Newark, Newark, NJ, United States.
Front Syst Neurosci. 2022 May 9;16:869705. doi: 10.3389/fnsys.2022.869705. eCollection 2022.
Audiovisual perception results from the interaction between visual and auditory processing. Hence, presenting auditory and visual inputs simultaneously usually improves the accuracy of the unimodal percepts, but can also lead to audiovisual illusions. Cross-talk between visual and auditory inputs during sensory processing was recently shown to occur as early as the primary visual cortex (V1). In a previous study, we demonstrated that sounds improve the representation of the orientation of visual stimuli in the naïve mouse V1 by promoting the recruitment of neurons better tuned to the orientation and direction of the visual stimulus. However, we did not test whether this type of modulation was still present when the auditory and visual stimuli were both behaviorally relevant. To determine the effect of sounds on active visual processing, we performed calcium imaging in V1 while mice were performing an audiovisual task. We then compared the representations of the task stimulus orientations in the unimodal visual and audiovisual contexts using shallow neural networks (SNNs). SNNs were chosen for the biological plausibility of their computational structure and because they make it possible to identify the biological neurons with the strongest influence on the classification decision. We first showed that SNNs can categorize the activity of V1 neurons evoked by drifting gratings of 12 different orientations. Then, we demonstrated using the connection weight approach that SNN training assigns the largest computational weight to the V1 neurons with the best orientation and direction selectivity. Finally, we showed that SNNs can be used to determine how V1 neurons represent the orientations of stimuli that do not belong to the set of orientations used for SNN training. Once the SNN approach was established, we replicated the previous finding that sounds improve orientation representation in the V1 of naïve mice. Then, we showed that, in mice performing an audiovisual detection task, task tones improve the representation of the visual cues associated with the reward while degrading the representation of non-rewarded cues. Altogether, our results suggest that the direction of sound modulation in V1 depends on the behavioral relevance of the visual cue.
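The SNN pipeline described in the abstract reduces to a single-hidden-layer classifier plus a read-out of its trained weights. The sketch below is a minimal illustration on synthetic data, not the authors' code: the hidden-layer size, tanh units, full-batch gradient descent, and the simulated "V1 responses" are all assumptions, and the connection-weight read-out follows the Olden-style input-to-output weight products commonly meant by the "connection weight approach."

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for trial-wise V1 calcium responses (hypothetical) ---
# Rows are trials, columns are imaged neurons; labels index the 12 drifting-
# grating directions used as SNN classes in the study.
n_trials, n_neurons, n_classes = 600, 50, 12
labels = rng.integers(0, n_classes, size=n_trials)
class_means = rng.normal(0.0, 1.0, size=(n_classes, n_neurons))  # toy tuning
X = class_means[labels] + rng.normal(0.0, 0.5, size=(n_trials, n_neurons))
Y = np.eye(n_classes)[labels]                      # one-hot targets

# --- Shallow network: single tanh hidden layer, softmax output ---
n_hidden = 24
W1 = rng.normal(0.0, 0.1, size=(n_neurons, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, size=(n_hidden, n_classes)); b2 = np.zeros(n_classes)

def forward(A):
    H = np.tanh(A @ W1 + b1)                       # hidden activations
    logits = H @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return H, e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(2000):                              # full-batch gradient descent
    H, P = forward(X)
    d_logits = (P - Y) / n_trials                  # softmax cross-entropy grad
    dW2 = H.T @ d_logits
    dH = (d_logits @ W2.T) * (1.0 - H ** 2)        # back through tanh
    dW1 = X.T @ dH
    W2 -= lr * dW2; b2 -= lr * d_logits.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * dH.sum(axis=0)

_, P = forward(X)
print("training accuracy:", (P.argmax(axis=1) == labels).mean())

# --- Connection weight approach (Olden-style input-to-output products) ---
# A neuron's contribution to each class is the sum over hidden units of
# (input->hidden weight) x (hidden->output weight), i.e. the product W1 @ W2.
contrib = W1 @ W2                                  # (n_neurons, n_classes)
influence = np.abs(contrib).sum(axis=1)            # net computational weight
print("most influential neurons:", np.argsort(influence)[::-1][:5])

# Probing an orientation outside the training set: feed its population
# response and read the full softmax distribution over the trained classes.
untrained = 0.5 * class_means[3] + 0.5 * class_means[4]  # toy intermediate cue
_, p = forward(untrained[None, :])
print("class probabilities for untrained stimulus:", np.round(p[0], 2))
```

In this scheme, the neurons whose weight products are large across classes dominate the classification decision, which is consistent with the abstract's finding that training concentrates computational weight on the best orientation- and direction-tuned V1 neurons; the final probe illustrates, on toy data only, how a trained SNN can be queried with stimuli absent from the training set.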