
DCNN for Pig Vocalization and Non-Vocalization Classification: Evaluate Model Robustness with New Data.

Author Information

Pann Vandet, Kwon Kyeong-Seok, Kim Byeonghyeon, Jang Dong-Hwa, Kim Jong-Bok

Affiliation

Animal Environment Division, National Institute of Animal Science, Rural Development Administration, Wanju 55365, Republic of Korea.

Publication Information

Animals (Basel). 2024 Jul 9;14(14):2029. doi: 10.3390/ani14142029.

Abstract

Since pig vocalization is an important indicator for monitoring pig conditions, pig vocalization detection and recognition using deep learning play a crucial role in the management and welfare of modern pig livestock farming. However, collecting pig sound data for deep learning model training takes time and effort. Acknowledging the challenges of collecting pig sound data for model training, this study introduces a deep convolutional neural network (DCNN) architecture for pig vocalization and non-vocalization classification with a real pig farm dataset. Various audio feature extraction methods were evaluated individually to compare the performance differences, including Mel-frequency cepstral coefficients (MFCC), Mel-spectrogram, Chroma, and Tonnetz. This study proposes a novel feature extraction method called Mixed-MMCT, which improves classification accuracy by integrating the MFCC, Mel-spectrogram, Chroma, and Tonnetz features. These feature extraction methods were applied to extract relevant features from the pig sound dataset for input into a deep learning network. For the experiments, three datasets were collected from three actual pig farms: Nias, Gimje, and Jeongeup. Each dataset consists of 4000 WAV files (2000 pig vocalizations and 2000 pig non-vocalizations), each with a duration of three seconds. Various audio data augmentation techniques were applied to the training set to improve model performance and generalization, including pitch-shifting, time-shifting, time-stretching, and background-noising. The performance of the predictive deep learning model was assessed using k-fold cross-validation (k = 5) on each dataset. In these experiments, Mixed-MMCT achieved superior accuracy on Nias, Gimje, and Jeongeup, with rates of 99.50%, 99.56%, and 99.67%, respectively. Robustness experiments were also performed to prove the effectiveness of the model, using two farm datasets as the training set and the remaining farm dataset as the testing set. The average performance of Mixed-MMCT in terms of accuracy, precision, recall, and F1-score reached 95.67%, 96.25%, 95.68%, and 95.96%, respectively. All results demonstrate that the proposed Mixed-MMCT feature extraction method outperforms the other methods for pig vocalization and non-vocalization classification in real pig livestock farming.
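The abstract names the four feature types behind Mixed-MMCT but not their implementation. As a rough illustration, all four features are available in the librosa library and share the same frame rate under default settings, so the "integration" idea can be sketched as frame-wise concatenation along the feature axis. This is a minimal sketch under assumed parameters (16 kHz sample rate, 13 MFCCs, 128 Mel bands, default hop length); none of these values are reported in the abstract, and the paper's actual concatenation strategy and DCNN input shape may differ.

```python
import numpy as np
import librosa

def mixed_mmct_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Hypothetical sketch of Mixed-MMCT: stack MFCC, Mel-spectrogram,
    Chroma, and Tonnetz features into one 2-D array for a DCNN input.
    All parameters are assumptions, not values from the paper."""
    # Three-second clips, per the dataset description in the abstract.
    y, sr = librosa.load(wav_path, sr=sr, duration=3.0)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)            # (13, T)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)  # (128, T)
    mel_db = librosa.power_to_db(mel)                             # log scale
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)              # (12, T)
    tonnetz = librosa.feature.tonnetz(y=y, sr=sr)                 # (6, T)

    # All four use the same default hop length, so frame counts T match.
    # Stacked shape: (13 + 128 + 12 + 6, T) = (159, T). In practice each
    # feature block would typically be normalized before stacking.
    return np.vstack([mfcc, mel_db, chroma, tonnetz])
```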
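The four augmentation techniques named in the abstract are standard waveform transforms. A minimal sketch using librosa and NumPy is shown below; the shift amounts, stretch rate, and noise level are illustrative assumptions, and time-stretched clips would be padded or cropped back to three seconds before feature extraction.

```python
import numpy as np
import librosa

rng = np.random.default_rng(0)

def augment(y: np.ndarray, sr: int, noise: np.ndarray) -> list:
    """Return augmented copies of a waveform (illustrative parameters).
    `noise` is assumed to be a background recording at least as long as y."""
    max_shift = int(0.5 * sr)  # shift by up to +/- 0.5 s (assumed)
    return [
        librosa.effects.pitch_shift(y=y, sr=sr, n_steps=2),     # pitch-shifting
        np.roll(y, rng.integers(-max_shift, max_shift)),        # time-shifting
        librosa.effects.time_stretch(y=y, rate=1.1),            # time-stretching
        y + 0.05 * noise[: len(y)],                             # background-noising
    ]
```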
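The evaluation protocol described in the abstract, 5-fold cross-validation on each farm plus a leave-one-farm-out robustness test, can be expressed with scikit-learn utilities. In this hypothetical sketch, `farms` maps each farm name to its feature/label arrays, and `train_and_evaluate` stands in for the paper's DCNN training loop, which the abstract does not detail.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def kfold_eval(X, y, train_and_evaluate, k=5):
    """5-fold cross-validation on one farm's dataset, as in the paper."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    scores = [
        train_and_evaluate(X[tr], y[tr], X[te], y[te])
        for tr, te in skf.split(X, y)
    ]
    return float(np.mean(scores))

def leave_one_farm_out(farms, train_and_evaluate):
    """Robustness test: train on two farms, test on the held-out farm."""
    results = {}
    for held_out in farms:
        X_tr = np.concatenate([farms[f][0] for f in farms if f != held_out])
        y_tr = np.concatenate([farms[f][1] for f in farms if f != held_out])
        X_te, y_te = farms[held_out]
        results[held_out] = train_and_evaluate(X_tr, y_tr, X_te, y_te)
    return results

# Usage sketch: farms = {"Nias": (X1, y1), "Gimje": (X2, y2), "Jeongeup": (X3, y3)}
```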


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/148233ae2c98/animals-14-02029-g001.jpg
