Li Rongsong, Pei Xin, Xing Lu
Department of Automation, BNRist, Tsinghua University, Beijing, China; PowerChina Guiyang Engineering Co., Ltd, Guiyang 550081, China.
Department of Automation, BNRist, Tsinghua University, Beijing, China; Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China.
Accid Anal Prev. 2025 Sep;220:108163. doi: 10.1016/j.aap.2025.108163. Epub 2025 Jul 12.
The individual perception capabilities of autonomous vehicles face significant challenges in overcoming occlusions and achieving long-distance visibility. Consequently, cooperative or collaborative perception (COOP), which can effectively expand the perception field and help detect human-driven vehicles or vulnerable road users by leveraging vehicle-to-everything (V2X) communication among connected and automated vehicles (CAVs) and roadside units (RSUs), has garnered increasing academic attention in recent years. Despite notable advancements in datasets, simulation platforms, and algorithms, there remains a dearth of research on evaluation and testing methodologies for COOP systems, particularly concerning driving safety. This study proposes a general and effective framework for Risky Testing Scenarios Generation for Cooperative Perception (CoRTSG), which integrates traffic data and prior knowledge to sequentially produce risky functional, logical, and concrete scenarios. Functional scenarios pertinent to COOP are extracted from traffic crashes caused by visual occlusion, thereby defining its operational design domain. Risky logical scenarios are then determined by selecting appropriate sites on an OpenDRIVE map. A fast occlusion judgment algorithm is also developed, which assigns roles to objects within a logical scenario and employs autoregressive sampling to derive risky concrete scenarios. On this basis, a comprehensive large-scale library of risky testing scenarios, encompassing 11 functional and 17,490 concrete scenarios for COOP in a mixed traffic environment with CAVs, non-CAVs, and vulnerable road users, has been created for the first time in the literature. All concrete scenarios have been simulated in the CARLA environment, facilitating thorough testing of representative COOP algorithms in terms of detection accuracy, driving safety, and communication efficiency.
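The core geometric test behind such an occlusion judgment can be illustrated with a minimal 2D sketch: a target is treated as occluded when the line of sight from a sensor to the target crosses an edge of an occluder's bounding rectangle. This is an illustrative simplification, not the paper's exact algorithm; all names and the rectangular-occluder assumption are hypothetical.

```python
# Hypothetical sketch of a fast 2D occlusion check (not CoRTSG's
# actual implementation): a target is occluded if the sensor->target
# segment intersects any edge of an occluder's axis-aligned rectangle.

def _cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_occluded(sensor, target, occluders):
    """Check whether any occluder (xmin, ymin, xmax, ymax) blocks
    the straight line of sight from sensor to target."""
    for (xmin, ymin, xmax, ymax) in occluders:
        corners = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
        edges = [(corners[i], corners[(i + 1) % 4]) for i in range(4)]
        if any(_segments_intersect(sensor, target, a, b) for a, b in edges):
            return True
    return False
```

Because the test reduces to a handful of cross products per occluder edge, it can be evaluated for every (sensor, object) pair during scenario sampling without noticeable overhead.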
The results highlight that COOP significantly enhances driving safety and detection accuracy compared to individual perception; however, further optimization is needed to balance performance with bandwidth requirements and to ensure stable safety improvements. Data and code are released at https://github.com/RadetzkyLi/CoRTSG.