


Automated extraction of odontocete whistle contours.

Affiliation

San Diego State University, Department of Computer Science, 5500 Campanile Drive, San Diego, California 92182-7720, USA.

Publication Info

J Acoust Soc Am. 2011 Oct;130(4):2212-23. doi: 10.1121/1.3624821.

DOI: 10.1121/1.3624821
PMID: 21973376
Abstract

Many odontocetes produce frequency modulated tonal calls known as whistles. The ability to automatically determine time × frequency tracks corresponding to these vocalizations has numerous applications including species description, identification, and density estimation. This work develops and compares two algorithms on a common corpus of nearly one hour of data collected in the Southern California Bight and at Palmyra Atoll. The corpus contains over 3000 whistles from bottlenose dolphins, long- and short-beaked common dolphins, spinner dolphins, and melon-headed whales that have been annotated by a human, and released to the Moby Sound archive. Both algorithms use a common signal processing front end to determine time × frequency peaks from a spectrogram. In the first method, a particle filter performs Bayesian filtering, estimating the contour from the noisy spectral peaks. The second method uses an adaptive polynomial prediction to connect peaks into a graph, merging graphs when they cross. Whistle contours are extracted from graphs using information from both sides of crossings. The particle filter was able to retrieve 71.5% (recall) of the human annotated tonals with 60.8% of the detections being valid (precision). The graph algorithm's recall rate was 80.0% with a precision of 76.9%.
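The abstract's second method connects spectral peaks across frames using an adaptive polynomial prediction. As a rough illustration only (not the authors' implementation; the peak picker, threshold, and first-order prediction here are simplified assumptions), the peak-linking idea can be sketched as:

```python
# Illustrative sketch of linking per-frame spectral peaks into
# time x frequency contours with a simple linear prediction.
# This is NOT the paper's algorithm; thresholds and the first-order
# predictor are placeholder assumptions for demonstration.

def frame_peaks(frame, threshold):
    """Return bin indices of local maxima in one spectral frame above threshold."""
    peaks = []
    for i in range(1, len(frame) - 1):
        if frame[i] > threshold and frame[i] >= frame[i - 1] and frame[i] > frame[i + 1]:
            peaks.append(i)
    return peaks

def link_contours(spectrogram, threshold=0.5, max_jump=2):
    """Greedily link peaks across frames into contours.

    Each open contour is extended to the unused peak closest to its
    linearly predicted next frequency, within max_jump bins; peaks that
    extend no contour start new ones.
    """
    contours = []  # finished contours: lists of (frame_index, bin_index)
    active = []    # contours still open at the previous frame
    for t, frame in enumerate(spectrogram):
        peaks = frame_peaks(frame, threshold)
        next_active = []
        used = set()
        for contour in active:
            # linear prediction from the last two points (or hold last value)
            if len(contour) >= 2:
                pred = 2 * contour[-1][1] - contour[-2][1]
            else:
                pred = contour[-1][1]
            best = None
            for p in peaks:
                if p in used:
                    continue
                if best is None or abs(p - pred) < abs(best - pred):
                    best = p
            if best is not None and abs(best - pred) <= max_jump:
                contour.append((t, best))
                used.add(best)
                next_active.append(contour)
            else:
                contours.append(contour)  # contour ended
        for p in peaks:
            if p not in used:
                next_active.append([(t, p)])  # start a new contour
        active = next_active
    contours.extend(active)
    return contours
```

For a spectrogram containing a single rising tone, the sketch returns one contour tracking the peak bin frame by frame; the paper's method additionally handles crossing whistles by merging graphs and resolving contours using information from both sides of each crossing.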


Similar Articles

1
Automated extraction of odontocete whistle contours.
J Acoust Soc Am. 2011 Oct;130(4):2212-23. doi: 10.1121/1.3624821.
2
Spectrogram denoising and automated extraction of the fundamental frequency variation of dolphin whistles.
J Acoust Soc Am. 2008 Aug;124(2):1159-70. doi: 10.1121/1.2945711.
3
Discriminating features of echolocation clicks of melon-headed whales (Peponocephala electra), bottlenose dolphins (Tursiops truncatus), and Gray's spinner dolphins (Stenella longirostris longirostris).
J Acoust Soc Am. 2010 Oct;128(4):2212-24. doi: 10.1121/1.3479549.
4
An adaptive filter-based method for robust, automatic detection and frequency estimation of whistles.
J Acoust Soc Am. 2011 Aug;130(2):893-903. doi: 10.1121/1.3609117.
5
Differences in the whistle characteristics and repertoire of Bottlenose and Spinner Dolphins.
An Acad Bras Cienc. 2004 Jun;76(2):386-92. doi: 10.1590/s0001-37652004000200030. Epub 2004 Jun 8.
6
A description of sounds recorded from melon-headed whales (Peponocephala electra) off Hawai'i.
J Acoust Soc Am. 2010 May;127(5):3248-55. doi: 10.1121/1.3365259.
7
Classification of echolocation clicks from odontocetes in the Southern California Bight.
J Acoust Soc Am. 2011 Jan;129(1):467-75. doi: 10.1121/1.3514383.
8
A method for detecting whistles, moans, and other frequency contour sounds.
J Acoust Soc Am. 2011 Jun;129(6):4055-61. doi: 10.1121/1.3531926.
9
Passive acoustic monitoring of the temporal variability of odontocete tonal sounds from a long-term marine observatory.
PLoS One. 2015 Apr 29;10(4):e0123943. doi: 10.1371/journal.pone.0123943. eCollection 2015.
10
Characteristics of whistles from rough-toothed dolphins (Steno bredanensis) in Rio de Janeiro coast, southeastern Brazil.
J Acoust Soc Am. 2012 May;131(5):4173-81. doi: 10.1121/1.3701878.

Cited By

1
Bioacoustic fundamental frequency estimation: a cross-species dataset and deep learning baseline.
Bioacoustics. 2025;34(4):419-446. doi: 10.1080/09524622.2025.2500380. Epub 2025 Jun 2.
2
Identification of western North Atlantic odontocete echolocation click types using machine learning and spatiotemporal correlates.
PLoS One. 2022 Mar 24;17(3):e0264988. doi: 10.1371/journal.pone.0264988. eCollection 2022.
3
Sensing ecosystem dynamics via audio source separation: A case study of marine soundscapes off northeastern Taiwan.
PLoS Comput Biol. 2021 Feb 18;17(2):e1008698. doi: 10.1371/journal.pcbi.1008698. eCollection 2021 Feb.
4
Signals from the deep: Spatial and temporal acoustic occurrence of beaked whales off western Ireland.
PLoS One. 2018 Jun 21;13(6):e0199431. doi: 10.1371/journal.pone.0199431. eCollection 2018.
5
Passive acoustic monitoring of the temporal variability of odontocete tonal sounds from a long-term marine observatory.
PLoS One. 2015 Apr 29;10(4):e0123943. doi: 10.1371/journal.pone.0123943. eCollection 2015.
6
An image processing based paradigm for the extraction of tonal sounds in cetacean communications.
J Acoust Soc Am. 2013 Dec;134(6):4435. doi: 10.1121/1.4828821.