Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
Sensors (Basel). 2018 May 29;18(6):1746. doi: 10.3390/s18061746.
Segmenting touching-pigs in real time is an important problem for surveillance cameras intended for 24-h tracking of individual pigs, yet methods to do so have not been reported. We focus in particular on segmenting touching-pigs in a crowded pig room from the low-contrast images produced by a Kinect depth sensor. We reduce execution time by combining object detection based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching-pigs. If the quality of the YOLO output is unsatisfactory, we then attempt to find a possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method separates touching-pigs effectively in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.
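The shape-analysis fallback described above can be illustrated with a toy sketch. The paper does not publish its exact boundary-finding procedure, so the following is a hypothetical simplification, not the authors' algorithm: given a binary mask containing two horizontally touching blobs, it proposes the narrowest "neck" column as a candidate boundary line.

```python
def find_boundary_column(mask):
    """Hypothetical stand-in for the shape-analysis step: given a
    binary mask (list of rows of 0/1) of two horizontally touching
    objects, return the column index of the narrowest neck, a crude
    candidate for the boundary line between the two objects."""
    cols = len(mask[0])
    # Count foreground pixels in each column of the mask.
    widths = [sum(row[c] for row in mask) for c in range(cols)]
    # Consider only interior columns that still contain foreground.
    interior = [c for c in range(1, cols - 1) if widths[c] > 0]
    # The touching region typically forms the narrowest neck.
    return min(interior, key=lambda c: widths[c])

# Two 3-pixel-wide blobs joined by a 1-pixel neck at column 3.
mask = [
    [1, 1, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
]
print(find_boundary_column(mask))  # → 3
```

In the actual system, this step would operate on a depth-image region that YOLO failed to split cleanly, and the candidate line would then be used to separate the two pig regions.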