Seo Aria, Woo Seunghyun, Son Yunsik
Department of Computer Science and Engineering, Dongguk University, Seoul 04620, Republic of Korea.
Department of Artificial Intelligence, Dongguk University, Seoul 04620, Republic of Korea.
Sensors (Basel). 2024 Aug 10;24(16):5162. doi: 10.3390/s24165162.
This study develops a vision-based technique for enhancing taillight recognition in autonomous vehicles, aimed at improving real-time decision-making by analyzing the driving behaviors of vehicles ahead. The approach utilizes a 3D convolutional neural network (C3D) with feature simplification to classify taillight images into eight distinct states, adapting to various environmental conditions. The problem addressed is the variability of environmental conditions, which degrades the performance of vision-based systems. Our objective is to improve the accuracy and generalizability of taillight signal recognition under differing conditions. The methodology uses a C3D model to analyze video sequences, capturing both spatial and temporal features. Experimental results demonstrate a significant improvement in the model's accuracy (85.19%) and generalizability, enabling precise interpretation of preceding-vehicle maneuvers. The proposed technique effectively enhances autonomous vehicle navigation and safety by ensuring reliable taillight state recognition, with potential for further improvement under nighttime and adverse weather conditions. Additionally, the system reduces signal-processing latency, ensuring faster and more reliable decision-making directly on the edge devices installed in the vehicles.
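The architecture the abstract describes, stacked 3D convolutions over short video clips followed by a classification head over eight taillight states, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the layer widths, kernel sizes, input resolution, and clip length are assumptions; only the eight-state output comes from the abstract.

```python
import torch
import torch.nn as nn

class TaillightC3D(nn.Module):
    """Minimal 3D-CNN sketch for taillight-state classification.

    Two Conv3d blocks learn joint spatio-temporal features from a
    short clip, and a linear head scores the eight taillight states
    mentioned in the abstract. All layer sizes are illustrative.
    """

    def __init__(self, num_states: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # spatio-temporal filters
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),         # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # collapse T x H x W to 1x1x1
        )
        self.classifier = nn.Linear(32, num_states)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, channels, frames, height, width)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

model = TaillightC3D()
# A batch of 2 clips, each 16 RGB frames at 64x64 (assumed shapes).
logits = model(torch.randn(2, 3, 16, 64, 64))
print(logits.shape)  # torch.Size([2, 8])
```

In practice the per-clip logits would feed a softmax, and the predicted state (e.g. brake, left-turn, hazard) would be passed downstream to the planning module; the pooling-over-time design is what lets the network distinguish steady brake lights from blinking turn signals.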