State Key Laboratory of Automotive Safety and Energy, School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China.
Changan Automobile Global R&D Center, Chongqing Changan Automobile Co., Ltd., Chongqing 401133, China.
J Biomech Eng. 2024 Mar 1;146(3). doi: 10.1115/1.4063033.
Accurate occupant injury prediction in near-collision scenarios is vital in guiding intelligent vehicles to find the optimal collision condition with minimal injury risks. Existing studies have focused on boosting prediction performance by introducing deep-learning models but encountered computational burdens due to the inherently high model complexity. To better balance these two traditionally contradictory factors, this study proposed a training method for pre-crash injury prediction models, namely, knowledge distillation (KD)-based training. This method was inspired by the idea of knowledge distillation, an emerging model compression method. Technically, we first trained a high-accuracy injury prediction model using informative post-crash sequence inputs (i.e., vehicle crash pulses) and a relatively complex network architecture as an experienced "teacher". Following this, a lightweight pre-crash injury prediction model ("student") learned both from the ground truth at the output layer (i.e., conventional prediction loss) and from its teacher at intermediate layers (i.e., distillation loss). In such a step-by-step teaching framework, the pre-crash model significantly improved the prediction accuracy of the occupant's head abbreviated injury scale (AIS) (i.e., from 77.2% to 83.2%) without sacrificing computational efficiency. Multiple validation experiments proved the effectiveness of the proposed KD-based training framework. This study is expected to provide a reference for balancing the prediction accuracy and computational efficiency of pre-crash injury prediction models, promoting further safety improvements in next-generation intelligent vehicles.
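The following is a minimal sketch (not the authors' code) of the KD-based training step described in the abstract: a lightweight pre-crash "student" learns both from ground-truth AIS labels (conventional prediction loss) and from a pre-trained post-crash "teacher" at an intermediate layer (distillation loss). All module names, input dimensions, and the weighting factor `alpha` are illustrative assumptions.

```python
# Hypothetical sketch of intermediate-layer knowledge distillation for
# pre-crash injury prediction; architectures and dimensions are assumed.
import torch
import torch.nn as nn

class TeacherNet(nn.Module):
    """Complex model fed with post-crash sequence inputs (crash pulses)."""
    def __init__(self, in_dim=128, feat_dim=64, n_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      nn.Linear(256, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        feat = self.backbone(x)          # intermediate features to be distilled
        return self.head(feat), feat

class StudentNet(nn.Module):
    """Lightweight model fed with pre-crash inputs only."""
    def __init__(self, in_dim=16, feat_dim=64, n_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        feat = self.backbone(x)
        return self.head(feat), feat

def kd_training_step(student, teacher, pre_crash_x, post_crash_x, ais_label,
                     optimizer, alpha=0.5):
    """One training step combining prediction loss and distillation loss
    with a hypothetical weight `alpha`."""
    teacher.eval()
    with torch.no_grad():
        _, teacher_feat = teacher(post_crash_x)   # teacher's intermediate features

    student_logits, student_feat = student(pre_crash_x)
    pred_loss = nn.functional.cross_entropy(student_logits, ais_label)
    distill_loss = nn.functional.mse_loss(student_feat, teacher_feat)
    loss = pred_loss + alpha * distill_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the teacher is frozen during student training, so the distillation term only guides the student's intermediate representation toward the teacher's; the exact layer matching, loss weighting, and AIS class granularity used in the paper are not specified in the abstract.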