

What humanlike errors do autonomous vehicles need to avoid to maximize safety?

Author information

Insurance Institute for Highway Safety, United States.

Publication information

J Safety Res. 2020 Dec;75:310-318. doi: 10.1016/j.jsr.2020.10.005. Epub 2020 Nov 15.

Abstract

INTRODUCTION

The final failure in the causal chain of events in 94% of crashes is driver error. It is assumed most crashes will be prevented by autonomous vehicles (AVs), but AVs will still crash if they make the same mistakes as humans. By identifying the distribution of crashes among various contributing factors, this study provides guidance on the roles AVs must perform and errors they must avoid to realize their safety potential.

METHOD

Using the NMVCCS database, five categories of driver-related contributing factors were assigned to crashes: (1) sensing/perceiving (i.e., not recognizing hazards); (2) predicting (i.e., misjudging behavior of other vehicles); (3) planning/deciding (i.e., poor decision-making behind traffic law adherence and defensive driving); (4) execution/performance (i.e., inappropriate vehicle control); and (5) incapacitation (i.e., alcohol-impaired or otherwise incapacitated driver). Assuming AVs would have superior perception and be incapable of incapacitation, we determined how many crashes would persist beyond those with incapacitation or exclusively sensing/perceiving factors.
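The filtering logic described above can be sketched in code. This is a minimal illustration, not the study's actual analysis or data: the crash records below are invented, and only the rule itself — a crash is assumed preventable by AVs if the driver was incapacitated or if its only contributing factors were sensing/perceiving errors — comes from the method described.

```python
# Illustrative sketch of the study's filtering rule (hypothetical data).
# A crash is assumed preventable by AVs if incapacitation was a factor,
# or if sensing/perceiving errors were its ONLY contributing factors.

SENSING, PREDICTING, PLANNING, EXECUTION, INCAPACITATION = (
    "sensing/perceiving", "predicting", "planning/deciding",
    "execution/performance", "incapacitation",
)

# Each hypothetical crash is the set of contributing-factor categories
# assigned to it (a crash may have several).
crashes = [
    {SENSING},                # sensing only -> assumed prevented
    {INCAPACITATION},         # incapacitation -> assumed prevented
    {SENSING, PLANNING},      # planning also involved -> persists
    {PREDICTING, EXECUTION},  # persists
    {PLANNING},               # persists
]

def prevented_by_av(factors: set[str]) -> bool:
    """Prevented if the driver was incapacitated, or if every
    contributing factor was sensing/perceiving."""
    return INCAPACITATION in factors or factors == {SENSING}

persisting = [c for c in crashes if not prevented_by_av(c)]
share = len(persisting) / len(crashes)
print(f"{share:.0%} of the sample crashes would persist")  # -> 60%
```

Applied to the NMVCCS distribution reported below, the same rule yields the study's headline split: crashes with incapacitation or exclusively sensing/perceiving factors are assumed prevented, and everything else persists.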

RESULTS

Thirty-three percent of crashes involved only sensing/perceiving factors (23%) or incapacitation (10%). Even if AVs prevented all of those, the remaining 67% could still occur, many involving planning/deciding (41%), execution/performance (23%), and predicting (17%) factors. Crashes with planning/deciding factors often involved speeding (23%) or illegal maneuvers (15%).

CONCLUSIONS

Errors in choosing evasive maneuvers, predicting the actions of other road users, and traveling at speeds suitable for conditions will persist if designers program AVs to make errors similar to those of today's human drivers. Planning/deciding factors, such as speeding and disobeying traffic laws, reflect driver preferences, so AV design philosophies will need to prioritize safety over occupant preferences when the two conflict. Practical applications: This study illustrates the complex roles AVs will have to perform and the risks arising from occupant preferences that AV designers and regulators must address if AVs are to realize their potential to eliminate most crashes.

