Xu Huizhi, Tan Wenting, Li Yamei, Tian Yue
School of Civil Engineering and Transportation, Northeast Forestry University, Harbin 150040, China.
Sensors (Basel). 2025 Jun 9;25(12):3613. doi: 10.3390/s25123613.
Accurate vehicle type recognition in low-light environments remains a critical challenge for intelligent transportation systems (ITSs). To address the performance degradation caused by insufficient lighting, complex backgrounds, and light interference, this paper proposes a Twin-Stream Feature Fusion Graph Neural Network (TFF-Net). The model employs multi-scale convolutions combined with an Efficient Channel Attention (ECA) module to extract discriminative local features, while independent convolutional layers capture hierarchical global representations. These features are mapped to nodes of fully connected graphs, which a hybrid graph neural network (GNN) processes to model spatial dependencies and semantic associations. TFF-Net then enhances feature representation by fusing the local details and global context from the GNN outputs. To further improve robustness, we propose an Adaptive Weighted Fusion-Bagging (AWF-Bagging) algorithm, which dynamically weights base classifiers by their F1 scores. TFF-Net also incorporates dynamic feature weighting and label smoothing to address class imbalance. Finally, TFF-Net is integrated into YOLOv11n (a lightweight real-time object detector) together with an improved adaptive loss function. For experimental validation in low-light scenarios, we constructed the low-light vehicle dataset VDD-Light from the public UA-DETRAC dataset. Experimental results show that our model improves mAP50 and mAP50-95 by 2.6% and 2.2%, respectively, over the baseline. Compared with mainstream models and methods, the proposed model shows excellent performance and practical deployment potential.
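The ECA module mentioned above gates each channel with a weight derived from a 1D convolution over globally pooled channel descriptors. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the adaptive kernel-size rule follows the original ECA paper, and the uniform kernel stands in for learned convolution weights.

```python
import numpy as np

def eca_attention(x, gamma=2, b=1):
    """Illustrative sketch of Efficient Channel Attention.

    x: feature map of shape (C, H, W). The uniform averaging kernel
    below is a placeholder for the learned 1D convolution weights.
    """
    C = x.shape[0]
    # Adaptive 1D kernel size from the channel count (forced to be odd).
    t = int(abs((np.log2(C) + b) / gamma))
    k = t if t % 2 else t + 1
    # Global average pooling -> one descriptor per channel.
    y = x.mean(axis=(1, 2))
    # 1D convolution across neighboring channels with "same" padding.
    kernel = np.ones(k) / k
    y = np.convolve(np.pad(y, k // 2, mode="edge"), kernel, mode="valid")
    attn = 1.0 / (1.0 + np.exp(-y))       # sigmoid gate per channel
    return x * attn[:, None, None]        # channel-wise reweighting
```

The output keeps the input's shape; each channel is simply scaled by a value in (0, 1) computed from its neighborhood of channel descriptors.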
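The core idea of AWF-Bagging, as described in the abstract, is to weight each base classifier's vote by its F1 score rather than averaging uniformly. A minimal sketch of that fusion rule is shown below; the function names, the use of macro-averaged F1, and the validation-split weighting are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1 score in plain NumPy."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

def awf_bagging(probs_val, y_val, probs_test, n_classes):
    """Fuse base-classifier probability outputs with F1-derived weights.

    probs_val / probs_test: lists of (N, n_classes) probability arrays,
    one per base classifier, on a validation split and the test set.
    """
    # Score each base classifier on held-out data.
    weights = np.array([macro_f1(y_val, p.argmax(axis=1), n_classes)
                        for p in probs_val])
    weights = weights / weights.sum()     # normalize to a convex combination
    # Weighted soft-voting over the test-set probabilities.
    fused = sum(w * p for w, p in zip(weights, probs_test))
    return fused.argmax(axis=1)
```

Under this rule, a classifier that performs poorly on the validation split contributes proportionally less to the fused prediction, which is what makes the ensemble "adaptive".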
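Label smoothing, which the abstract lists as one of the tools against class imbalance, replaces hard one-hot targets with slightly softened distributions so the model is not pushed toward overconfident predictions on dominant classes. A generic sketch follows; the smoothing factor `eps` is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np

def smooth_labels(labels, n_classes, eps=0.1):
    """Soften one-hot targets: the true class keeps 1 - eps of the mass
    plus its uniform share, and eps is spread evenly over all classes."""
    onehot = np.eye(n_classes)[labels]
    return onehot * (1.0 - eps) + eps / n_classes
```

Each smoothed row still sums to 1, so it remains a valid target distribution for a cross-entropy loss.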