Jung Dae-Hyun, Kim Na Yeon, Moon Sang Ho, Jhin Changho, Kim Hak-Jin, Yang Jung-Seok, Kim Hyoung Seok, Lee Taek Sung, Lee Ju Young, Park Soo Hyun
Smart Farm Research Center, Korea Institute of Science and Technology (KIST), Gangneung 25451, Korea.
Department of Bio-Convergence Science, College of Biomedical and Health Science, Konkuk University, Chungju 27478, Korea.
Animals (Basel). 2021 Feb 1;11(2):357. doi: 10.3390/ani11020357.
The growing priority placed on animal welfare in the meat industry is making an understanding of livestock behavior increasingly important. In this study, we developed a web-based monitoring and recording system that uses artificial-intelligence analysis to classify cattle sounds. The deep learning classification model of the system is a convolutional neural network (CNN) that takes as input vocal signals converted to Mel-frequency cepstral coefficients (MFCCs). The CNN model initially achieved an accuracy of 91.38% in recognizing cattle sounds. Short-time Fourier transform (STFT)-based noise filtering was then applied to remove background noise, improving the recognition accuracy to 94.18%. The recognized cattle vocalizations were subsequently classified into four classes; a total of 897 records were acquired for developing this classification model, which reached a final accuracy of 81.96%. The proposed web-based platform aggregates information from 12 sound sensors and monitors cattle vocalization in real time, enabling farm owners to assess the status of their cattle.
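The STFT-based noise-filtering step described above can be illustrated with a minimal spectral-gating sketch in NumPy. This is a generic illustration of the technique, not the authors' implementation; the frame size (512), hop (256), and threshold factor (1.5) are assumed values, and a noise-only reference clip is assumed to be available for estimating the noise floor.

```python
import numpy as np

def stft(x, n_fft=512, hop=256):
    # Hann-windowed short-time Fourier transform: one rFFT per frame.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(S, n_fft=512, hop=256):
    # Inverse transform via windowed overlap-add, normalized by the
    # accumulated squared window so interior samples reconstruct exactly.
    win = np.hanning(n_fft)
    frames = np.fft.irfft(S, n=n_fft, axis=1)
    out = np.zeros((len(frames) - 1) * hop + n_fft)
    norm = np.zeros_like(out)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + n_fft] += f * win
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def spectral_gate(x, noise_clip, n_fft=512, hop=256, factor=1.5):
    # Estimate a per-frequency noise floor from a noise-only clip, then
    # zero every STFT bin whose magnitude falls below factor * floor
    # (binary spectral gating) and resynthesize the cleaned signal.
    S = stft(x, n_fft, hop)
    floor = np.abs(stft(noise_clip, n_fft, hop)).mean(axis=0)
    mask = np.abs(S) >= factor * floor
    return istft(S * mask, n_fft, hop)
```

For example, gating a 440 Hz tone buried in white noise suppresses most noise-dominated bins while the tone's bin stays above the threshold, so the output is markedly closer to the clean tone than the noisy input was.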