

Extended performance analysis of deep-learning algorithms for mice vocalization segmentation.

Affiliations

Department of Information Engineering, University of Brescia, Brescia, Italy.

Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy.

Publication information

Sci Rep. 2023 Jul 11;13(1):11238. doi: 10.1038/s41598-023-38186-7.

DOI: 10.1038/s41598-023-38186-7
PMID: 37433808
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10336146/
Abstract

Ultrasonic vocalizations (USVs) analysis represents a fundamental tool to study animal communication. It can be used to perform a behavioral investigation of mice for ethological studies and in the field of neuroscience and neuropharmacology. The USVs are usually recorded with a microphone sensitive to ultrasound frequencies and then processed by specific software, which help the operator to identify and characterize different families of calls. Recently, many automated systems have been proposed for automatically performing both the detection and the classification of the USVs. Of course, the USV segmentation represents the crucial step for the general framework, since the quality of the call processing strictly depends on how accurately the call itself has been previously detected. In this paper, we investigate the performance of three supervised deep learning methods for automated USV segmentation: an Auto-Encoder Neural Network (AE), a U-NET Neural Network (UNET) and a Recurrent Neural Network (RNN). The proposed models receive as input the spectrogram associated with the recorded audio track and return as output the regions in which the USV calls have been detected. To evaluate the performance of the models, we have built a dataset by recording several audio tracks and manually segmenting the corresponding USV spectrograms generated with the Avisoft software, producing in this way the ground-truth (GT) used for training. All three proposed architectures demonstrated precision and recall scores exceeding [Formula: see text], with UNET and AE achieving values above [Formula: see text], surpassing other state-of-the-art methods that were considered for comparison in this study. Additionally, the evaluation was extended to an external dataset, where UNET once again exhibited the highest performance. We suggest that our experimental results may represent a valuable benchmark for future works.
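The abstract describes models that take the spectrogram of a recorded audio track as input and return the time regions where USV calls were detected. As a minimal, hypothetical sketch (not the authors' code — the synthetic signal, band limits, and energy threshold are illustrative assumptions), the spectrogram input and a naive energy-threshold baseline segmentation could look like:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for an ultrasound recording sampled at 250 kHz:
# background noise plus a 70 kHz tone burst between 0.2 s and 0.3 s,
# mimicking a single USV call.
fs = 250_000
rng = np.random.default_rng(0)
t = np.arange(0, 0.5, 1 / fs)
audio = 0.01 * rng.standard_normal(t.size)
call = (t > 0.2) & (t < 0.3)
audio[call] += np.sin(2 * np.pi * 70_000 * t[call])

# Spectrogram: the model input described in the abstract.
f, frames, Sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)

# Naive energy-threshold baseline (NOT one of the paper's networks):
# flag time frames whose 60-80 kHz band energy spikes above the noise floor.
band = (f >= 60_000) & (f <= 80_000)
energy = Sxx[band].sum(axis=0)
detected = energy > 10 * np.median(energy)
print(frames[detected].min(), frames[detected].max())  # roughly 0.2 ... 0.3 s
```

The deep-learning models in the paper replace the fixed threshold with a learned mapping from the spectrogram to call regions, which is what lets them cope with faint or overlapping calls.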

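The precision and recall scores reported in the abstract can be computed frame by frame from the predicted and ground-truth (GT) segmentation masks. A minimal sketch (the function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def segmentation_precision_recall(pred, gt):
    """Frame-level precision/recall for binary segmentation masks.

    pred, gt: 1-D binary arrays marking, per spectrogram time frame,
    whether a USV call is present (1) or absent (0).
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)    # call frames correctly detected
    fp = np.sum(pred & ~gt)   # noise frames wrongly flagged as call
    fn = np.sum(~pred & gt)   # call frames missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: GT marks a call in frames 2-5; the prediction covers frames 3-6.
gt   = [0, 0, 1, 1, 1, 1, 0, 0]
pred = [0, 0, 0, 1, 1, 1, 1, 0]
p, r = segmentation_precision_recall(pred, gt)
print(p, r)  # 0.75 0.75 — three of four predicted frames and of four GT frames match
```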

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/ad0e0ce99de4/41598_2023_38186_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/be01a1a6994b/41598_2023_38186_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/2ff6d954d18a/41598_2023_38186_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/f70a7d7e7f2e/41598_2023_38186_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/41971512ba81/41598_2023_38186_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/539c9b69d6a8/41598_2023_38186_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/263a77202802/41598_2023_38186_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/8265fc699269/41598_2023_38186_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/ae8f112f38cb/41598_2023_38186_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb50/10336146/214ab3fab1b2/41598_2023_38186_Fig10_HTML.jpg

Similar articles

1. Extended performance analysis of deep-learning algorithms for mice vocalization segmentation.
Sci Rep. 2023 Jul 11;13(1):11238. doi: 10.1038/s41598-023-38186-7.
2. Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks.
PLoS One. 2021 Jan 19;16(1):e0244636. doi: 10.1371/journal.pone.0244636. eCollection 2021.
3. HybridMouse: A Hybrid Convolutional-Recurrent Neural Network-Based Model for Identification of Mouse Ultrasonic Vocalizations.
Front Behav Neurosci. 2022 Jan 25;15:810590. doi: 10.3389/fnbeh.2021.810590. eCollection 2021.
4. Enhancing the analysis of murine neonatal ultrasonic vocalizations: Development, evaluation, and application of different mathematical models.
J Acoust Soc Am. 2024 Oct 1;156(4):2448-2466. doi: 10.1121/10.0030473.
5. DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations.
Neuropsychopharmacology. 2019 Apr;44(5):859-868. doi: 10.1038/s41386-018-0303-6. Epub 2019 Jan 4.
6. Capturing the songs of mice with an improved detection and classification method for ultrasonic vocalizations (BootSnap).
PLoS Comput Biol. 2022 May 12;18(5):e1010049. doi: 10.1371/journal.pcbi.1010049. eCollection 2022 May.
7. Acoustilytix™: A Web-Based Automated Ultrasonic Vocalization Scoring Platform.
Brain Sci. 2021 Jun 29;11(7):864. doi: 10.3390/brainsci11070864.
8. Analysis of ultrasonic vocalizations from mice using computer vision and machine learning.
Elife. 2021 Mar 31;10:e59161. doi: 10.7554/eLife.59161.
9. SqueakOut: Autoencoder-based segmentation of mouse ultrasonic vocalizations.
bioRxiv. 2024 Apr 23:2024.04.19.590368. doi: 10.1101/2024.04.19.590368.
10. Automatic segmentation and classification of mice ultrasonic vocalizations.
J Acoust Soc Am. 2022 Jul;152(1):266. doi: 10.1121/10.0012350.

Cited by

1. SqueakOut: Autoencoder-based segmentation of mouse ultrasonic vocalizations.
bioRxiv. 2024 Apr 23:2024.04.19.590368. doi: 10.1101/2024.04.19.590368.

References

1. Ultrasonic Vocalizations in Adult C57BL/6J Mice: The Role of Sex Differences and Repeated Testing.
Front Behav Neurosci. 2022 Jul 14;16:883353. doi: 10.3389/fnbeh.2022.883353. eCollection 2022.
2. TrackUSF, a novel tool for automated ultrasonic vocalization analysis, reveals modified calls in a rat model of autism.
BMC Biol. 2022 Jul 12;20(1):159. doi: 10.1186/s12915-022-01299-y.
3. HybridMouse: A Hybrid Convolutional-Recurrent Neural Network-Based Model for Identification of Mouse Ultrasonic Vocalizations.
Front Behav Neurosci. 2022 Jan 25;15:810590. doi: 10.3389/fnbeh.2021.810590. eCollection 2021.
4. Communication and social interaction in the cannabinoid-type 1 receptor null mouse: Implications for autism spectrum disorder.
Autism Res. 2021 Sep;14(9):1854-1872. doi: 10.1002/aur.2562. Epub 2021 Jun 26.
5. Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks.
PLoS One. 2021 Jan 19;16(1):e0244636. doi: 10.1371/journal.pone.0244636. eCollection 2021.
6. Ultrasonic vocalizations in mice: relevance for ethologic and neurodevelopmental disorders studies.
Neural Regen Res. 2021 Jun;16(6):1158-1167. doi: 10.4103/1673-5374.300340.
7. Ultrasonic vocalizations as a fundamental tool for early and adult behavioral phenotyping of Autism Spectrum Disorder rodent models.
Neurosci Biobehav Rev. 2020 Sep;116:31-43. doi: 10.1016/j.neubiorev.2020.06.011. Epub 2020 Jun 13.
8. USVSEG: A robust method for segmentation of ultrasonic vocalizations in rodents.
PLoS One. 2020 Feb 10;15(2):e0228907. doi: 10.1371/journal.pone.0228907. eCollection 2020.
9. Longitudinal analysis of ultrasonic vocalizations in mice from infancy to adolescence: Insights into the vocal repertoire of three wild-type strains in two different social contexts.
PLoS One. 2019 Jul 31;14(7):e0220238. doi: 10.1371/journal.pone.0220238. eCollection 2019.
10. Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework.
Sci Rep. 2019 May 30;9(1):8100. doi: 10.1038/s41598-019-44221-3.