Weng Tianhang, Niu Xiaopeng
School of Computer Science and Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China.
Sensors (Basel). 2025 Jul 17;25(14):4463. doi: 10.3390/s25144463.
Drone-view object detection models operating under low-light conditions face several challenges, such as object scale variation, high image noise, and limited on-board computational resources. Existing models often struggle to balance detection accuracy with a lightweight architecture. This paper introduces ELS-YOLO, a lightweight object detection model tailored for low-light environments and built upon the YOLOv11s framework. ELS-YOLO features a re-parameterized backbone (ER-HGNetV2) with integrated Re-parameterized Convolution and Efficient Channel Attention mechanisms, a Lightweight Feature Selection Pyramid Network (LFSPN) for multi-scale object detection, and a Shared Convolution Separate Batch Normalization Head (SCSHead) to reduce computational complexity. Layer-Adaptive Magnitude-Based Pruning (LAMP) is employed to compress the model. Experiments on the ExDark and DroneVehicle datasets demonstrate that ELS-YOLO achieves high detection accuracy with a compact model. Here, we show that ELS-YOLO attains a mAP@0.5 of 74.3% and 68.7% on the ExDark and DroneVehicle datasets, respectively, while maintaining real-time inference capability.
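The pruning step can be illustrated with a minimal, framework-free sketch of the LAMP scoring rule (Lee et al., 2021) that the abstract refers to. This is not the authors' implementation: the function names are illustrative, and real usage would score the flattened weight tensors of each convolutional layer rather than plain Python lists. LAMP assigns each weight the ratio of its squared magnitude to the sum of squared magnitudes of all weights in the same layer that are at least as large, then prunes the globally lowest-scoring weights.

```python
def lamp_scores(weights):
    """LAMP score per weight: w^2 divided by the sum of squares of all
    weights in the same layer with magnitude >= |w|.  The layer's largest
    weight always scores 1.0; smaller weights score progressively less."""
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    sq = [weights[i] ** 2 for i in order]
    # Suffix sums of squared magnitudes over the ascending-sorted weights.
    suffix = [0.0] * (len(sq) + 1)
    for i in range(len(sq) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + sq[i]
    scores = [0.0] * len(weights)
    for rank, i in enumerate(order):
        scores[i] = sq[rank] / suffix[rank]
    return scores

def global_prune_masks(layers, sparsity):
    """Return per-layer keep-masks after pruning the `sparsity` fraction of
    weights with the globally smallest LAMP scores (layers: list of weight
    lists, standing in for per-layer weight tensors)."""
    all_scores = []
    for li, w in enumerate(layers):
        for wi, s in enumerate(lamp_scores(w)):
            all_scores.append((s, li, wi))
    all_scores.sort()
    n_prune = int(sparsity * len(all_scores))
    masks = [[True] * len(w) for w in layers]
    for _, li, wi in all_scores[:n_prune]:
        masks[li][wi] = False
    return masks
```

Because the score is normalized within each layer, LAMP needs no hand-tuned per-layer sparsity budget: layers whose large weights dominate their magnitude mass are pruned more aggressively under a single global threshold.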