
DCNN for Pig Vocalization and Non-Vocalization Classification: Evaluate Model Robustness with New Data.

Authors

Pann Vandet, Kwon Kyeong-Seok, Kim Byeonghyeon, Jang Dong-Hwa, Kim Jong-Bok

Affiliation

Animal Environment Division, National Institute of Animal Science, Rural Development Administration, Wanju 55365, Republic of Korea.

Publication

Animals (Basel). 2024 Jul 9;14(14):2029. doi: 10.3390/ani14142029.


DOI: 10.3390/ani14142029
PMID: 39061490
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11273863/
Abstract

Since pig vocalization is an important indicator for monitoring pig condition, pig vocalization detection and recognition using deep learning play a crucial role in the management and welfare of modern pig livestock farming. However, collecting pig sound data for deep learning model training takes time and effort. Acknowledging this challenge, this study introduces a deep convolutional neural network (DCNN) architecture for pig vocalization and non-vocalization classification on real pig farm datasets. Several audio feature extraction methods, including Mel-frequency cepstral coefficients (MFCC), Mel-spectrogram, Chroma, and Tonnetz, were evaluated individually to compare their performance. The study proposes a novel feature extraction method, Mixed-MMCT, which improves classification accuracy by integrating the MFCC, Mel-spectrogram, Chroma, and Tonnetz features. These methods were applied to extract relevant features from the pig sound datasets as input to the deep learning network. For the experiments, three datasets were collected from three actual pig farms: Nias, Gimje, and Jeongeup. Each dataset consists of 4000 three-second WAV files (2000 pig vocalization and 2000 pig non-vocalization). Several audio data augmentation techniques, including pitch-shifting, time-shifting, time-stretching, and background-noising, were applied to the training set to improve model performance and generalization. Model performance was assessed using k-fold cross-validation (k = 5) on each dataset. In these experiments, Mixed-MMCT achieved the highest accuracy on Nias, Gimje, and Jeongeup, at 99.50%, 99.56%, and 99.67%, respectively. Robustness experiments, in which two farm datasets served as the training set and the remaining farm's dataset as the testing set, further demonstrated the effectiveness of the model: Mixed-MMCT reached average accuracy, precision, recall, and F1-score of 95.67%, 96.25%, 95.68%, and 95.96%, respectively. All results demonstrate that the proposed Mixed-MMCT feature extraction method outperforms the other methods for pig vocalization and non-vocalization classification in real pig livestock farming.
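The abstract describes Mixed-MMCT as an integration of MFCC, Mel-spectrogram, Chroma, and Tonnetz features, but gives no implementation details here. A minimal NumPy sketch of one plausible reading, frame-level concatenation along the feature axis, is shown below; the feature-matrix shapes (13 MFCCs, 128 Mel bands, 12 chroma bins, 6 Tonnetz dimensions, as librosa produces by default) and the random stand-in arrays are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def mixed_mmct(mfcc, mel, chroma, tonnetz):
    """Fuse per-frame feature matrices (features x frames) by
    concatenating along the feature axis, one reading of the
    abstract's Mixed-MMCT integration."""
    for m in (mel, chroma, tonnetz):
        assert m.shape[1] == mfcc.shape[1], "all features need the same frame count"
    return np.concatenate([mfcc, mel, chroma, tonnetz], axis=0)

# Random stand-ins with librosa's default feature sizes (assumed):
# 13 MFCCs, 128 Mel bands, 12 chroma bins, 6 Tonnetz dimensions, 130 frames.
rng = np.random.default_rng(0)
frames = 130
fused = mixed_mmct(rng.normal(size=(13, frames)),
                   rng.normal(size=(128, frames)),
                   rng.normal(size=(12, frames)),
                   rng.normal(size=(6, frames)))
print(fused.shape)  # prints (159, 130)
```

The fused matrix has 13 + 128 + 12 + 6 = 159 feature rows per frame, which a DCNN can then consume as a single 2-D input.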
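The augmentation techniques named in the abstract (pitch-shifting, time-shifting, time-stretching, background-noising) are typically applied with `librosa.effects`. The NumPy-only sketch below approximates three of them so it runs without audio dependencies; the crude resampling-based stretch (which also alters pitch, unlike a proper time-stretch) and the 16 kHz / 3 s test tone are illustrative assumptions, and pitch-shifting is omitted because it needs an STFT-based implementation.

```python
import numpy as np

def time_shift(y, n):
    """Circularly shift the waveform by n samples."""
    return np.roll(y, n)

def add_background_noise(y, snr_db, rng):
    """Mix in white noise scaled to a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(scale=np.sqrt(noise_power), size=y.shape)
    return y + noise

def time_stretch(y, rate):
    """Crude stretch by linear resampling; note this also changes pitch,
    whereas a phase-vocoder stretch (e.g. librosa.effects.time_stretch)
    would preserve it."""
    n_out = int(round(len(y) / rate))
    x_old = np.linspace(0.0, 1.0, num=len(y))
    x_new = np.linspace(0.0, 1.0, num=n_out)
    return np.interp(x_new, x_old, y)

rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * 440 * np.arange(48000) / 16000)  # 3 s tone at 16 kHz

shifted = time_shift(y, 1600)                       # shift by 0.1 s
noisy = add_background_noise(y, snr_db=20, rng=rng)  # add noise at 20 dB SNR
stretched = time_stretch(y, rate=1.1)                # ~10% faster
```

Each transform yields a new training clip from an existing one, which is how augmentation multiplies a 4000-file dataset without new recordings.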
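The evaluation protocol, k-fold cross-validation with k = 5 on each 4000-clip dataset, can be sketched with plain NumPy as below (in practice one would likely use `sklearn.model_selection.KFold` or `StratifiedKFold`); the seed and the index-only formulation are assumptions for illustration.

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds;
    each fold serves once as the test set while the remaining k-1
    folds form the training set."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# 4000 clips per farm dataset, as in the abstract: 5 splits of 3200/800.
splits = list(k_fold_indices(4000, k=5))
```

Averaging the per-fold metrics gives the cross-validated accuracy figures reported per farm; the separate robustness test instead trains on two whole farms and tests on the third.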

Figures 1-9 (from the PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/148233ae2c98/animals-14-02029-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/fb072424ee8c/animals-14-02029-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/0d4228c89b8a/animals-14-02029-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/39a870483340/animals-14-02029-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/c0e7c40a6d48/animals-14-02029-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/421dfa77a6a4/animals-14-02029-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/c24576ef97c2/animals-14-02029-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/8efb4a94c295/animals-14-02029-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/73d9/11273863/701f36a91a72/animals-14-02029-g009.jpg

Similar Articles

[1]
DCNN for Pig Vocalization and Non-Vocalization Classification: Evaluate Model Robustness with New Data.

Animals (Basel). 2024-7-9

[2]
CovidCoughNet: A new method based on convolutional neural networks and deep feature extraction using pitch-shifting data augmentation for covid-19 detection from cough, breath, and voice signals.

Comput Biol Med. 2023-9

[3]
Classification of Infant Cry Based on Hybrid Audio Features and ResLSTM.

J Voice. 2024-9-20

[4]
Automatic Assessment of Aphasic Speech Sensed by Audio Sensors for Classification into Aphasia Severity Levels to Recommend Speech Therapies.

Sensors (Basel). 2022-9-14

[5]
Speech emotion recognition using machine learning techniques: Feature extraction and comparison of convolutional neural network and random forest.

PLoS One. 2023

[6]
Deep learning in automatic detection of dysphonia: Comparing acoustic features and developing a generalizable framework.

Int J Lang Commun Disord. 2023-3

[7]
Deep Learning-Based Cattle Vocal Classification Model and Real-Time Livestock Monitoring System with Noise Filtering.

Animals (Basel). 2021-2-1

[8]
Study on a Pig Vocalization Classification Method Based on Multi-Feature Fusion.

Sensors (Basel). 2024-1-5

[9]
Feature-Based Fusion Using CNN for Lung and Heart Sound Classification.

Sensors (Basel). 2022-2-16

[10]
Bird Species Identification Using Spectrogram Based on Multi-Channel Fusion of DCNNs.

Entropy (Basel). 2021-11-13

