Gammelgård Frej, Nielsen Jonas, Nielsen Emilia J, Hansen Malthe G, Alstrup Aage K Olsen, Perea-García Juan O, Jensen Trine H, Pertoldi Cino
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark.
Department of Nuclear Medicine & PET, Aarhus University Hospital and Department of Clinical Medicine, Aarhus University, Palle Juul Jensens Boulevard 99, 8000 Aarhus, Denmark.
Animals (Basel). 2024 Jun 8;14(12):1729. doi: 10.3390/ani14121729.
This article applies object detection to CCTV video material to investigate the potential of using machine learning to automate behavior tracking. The study is based on video recordings of two captive Bornean orangutans and their behavior. From a 2 min training video containing the selected behaviors, 334 images were extracted and labeled using RectLabel. The labeled training material was used to construct an object detection model in Create ML. Object detection was shown to have potential for automating tracking, especially of locomotion, whilst filtering out false positives. Potential improvements to this tool are addressed, and future implementations should take these into consideration; they include using adequately diverse training material and limiting the number of training iterations to avoid overfitting the model.
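To make the frame-extraction step concrete, the sketch below shows how still images can be pulled from a short training video at a fixed sampling interval. The study itself used RectLabel for annotation and Create ML for training; this snippet is only an analogous illustration in Python with OpenCV, not the authors' pipeline, and the video path, output directory, and sampling interval are hypothetical.

```python
import cv2
from pathlib import Path

# Hypothetical paths and sampling interval -- not taken from the study.
VIDEO_PATH = "training_video.mp4"   # e.g., a ~2 min clip containing the selected behaviors
OUT_DIR = Path("frames")            # where extracted frames are written
FRAME_STEP = 10                     # keep every 10th frame (illustrative value)

OUT_DIR.mkdir(exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
saved = 0
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                       # end of video reached
    if index % FRAME_STEP == 0:
        # Save the frame as an image that can later be annotated with
        # bounding boxes and used as training material for an object
        # detection model.
        cv2.imwrite(str(OUT_DIR / f"frame_{index:05d}.jpg"), frame)
        saved += 1
    index += 1
cap.release()
print(f"Extracted {saved} frames from {index} read")
```

At inference time, filtering false positives can be as simple as discarding detections whose confidence score falls below a chosen threshold before the remaining detections are summarized into behavior counts.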