Department of Computer Science, Utah State University, Logan, Utah, USA.
Sensors (Basel). 2023 Jul 29;23(15):6791. doi: 10.3390/s23156791.
A continuing trend in precision apiculture is to use computer vision methods to quantify characteristics of bee traffic in managed colonies at the hive's entrance. Since traffic at the hive's entrance is a contributing factor to the hive's productivity and health, we assessed the potential of three open-source convolutional network models, YOLOv3, YOLOv4-tiny, and YOLOv7-tiny, to quantify omnidirectional traffic in videos from on-hive video loggers on regular, unmodified one- and two-super Langstroth hives, and compared their accuracies, energy efficacies, and operational energy footprints. We trained and tested the models with a 70/30 split on a dataset of 23,173 flying bees manually labeled in 5819 images from 10 randomly selected videos, and manually evaluated the trained models on 3600 images from 120 randomly selected videos from different apiaries, years, and queen races. We designed a new energy efficacy metric as a ratio of performance units per energy unit required to make a model operational in a continuous hive monitoring data pipeline. In terms of accuracy, YOLOv3 was first, YOLOv7-tiny second, and YOLOv4-tiny third. All models underestimated the true amount of traffic due to false negatives. YOLOv3 was the only model with no false positives, but it had the lowest energy efficacy and the highest operational energy footprint in a deployed hive monitoring data pipeline. YOLOv7-tiny had the highest energy efficacy and the lowest operational energy footprint in the same pipeline. Consequently, YOLOv7-tiny is a model worth considering for training on larger bee datasets if a primary objective is the discovery of non-invasive computer vision models of traffic quantification with higher energy efficacies and lower operational energy footprints.
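The abstract defines the energy efficacy metric only at a high level: a ratio of performance units to the energy units required to make a model operational in the monitoring pipeline. The sketch below illustrates that ratio in Python; the function name, the choice of accuracy as the performance unit, and kilowatt-hours as the energy unit are assumptions for illustration, not the paper's exact formulation.

```python
def energy_efficacy(performance: float, energy_kwh: float) -> float:
    """Ratio of performance units per energy unit, as described in the
    abstract. Here we assume (hypothetically) that `performance` is a
    detection accuracy in [0, 1] and `energy_kwh` is the energy required
    to run the model in a continuous hive monitoring pipeline."""
    if energy_kwh <= 0:
        raise ValueError("energy must be positive")
    return performance / energy_kwh


# Illustrative (made-up) numbers: a model with slightly lower accuracy
# but a much smaller energy footprint scores higher on this metric,
# matching the abstract's finding that YOLOv7-tiny outranked YOLOv3
# on efficacy despite YOLOv3's higher accuracy.
accurate_but_costly = energy_efficacy(performance=0.95, energy_kwh=2.0)
lighter_model = energy_efficacy(performance=0.90, energy_kwh=0.5)
assert lighter_model > accurate_but_costly
```

Under this reading, the metric rewards models that trade a small loss in accuracy for a large reduction in operational energy, which is the comparison the abstract draws between YOLOv3 and YOLOv7-tiny.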