Zhang Qiangbo, Liu Yunxiang, Zhang Yu, Zong Ming, Zhu Jianlin
School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai 201418, China.
Sensors (Basel). 2023 Nov 10;23(22):9089. doi: 10.3390/s23229089.
Occluded pedestrian detection faces substantial challenges: false positives and false negatives in crowd-occlusion scenes reduce detection accuracy. To overcome this problem, we proposed an improved you-only-look-once version 3 (YOLOv3) based on squeeze-and-excitation networks (SENet) and an optimized generalized intersection over union (GIoU) loss for occluded pedestrian detection, named YOLOv3-Occlusion (YOLOv3-Occ). The proposed model incorporated SENet into YOLOv3, assigning greater weights to the features of the unoccluded parts of pedestrians to address the difficulty of extracting features from those visible regions. For the loss function, a new GIoU-based loss was developed that keeps the area of each predicted pedestrian box invariant, tackling the problem of inaccurate pedestrian localization. The proposed method, YOLOv3-Occ, was validated on the CityPersons and COCO2014 datasets. Experimental results show that it achieved a 1.2% MR gain on the CityPersons dataset and a 0.7% mAP@50 improvement on the COCO2014 dataset.
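For context, the two building blocks the method rests on are standard and can be sketched as follows. This is a minimal illustration assuming the usual SE-block (Hu et al., 2018) and GIoU (Rezatofighi et al., 2019) formulations; it is not the authors' exact modified loss, since the abstract does not specify the area-preserving term, and the class and function names are hypothetical.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel attention; a stand-in
    for the SENet module the paper inserts into YOLOv3."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight feature channels (e.g., toward visible pedestrian parts)

def giou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Standard GIoU loss for boxes given as (x1, y1, x2, y2);
    the paper's area-preserving modification is not reproduced here."""
    # intersection of predicted and target boxes
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)
    # smallest box enclosing both; its excess area penalizes non-overlapping predictions
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / (c_area + eps)
    return (1.0 - giou).mean()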