National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands.
Regul Toxicol Pharmacol. 2024 Mar;148:105589. doi: 10.1016/j.yrtph.2024.105589. Epub 2024 Feb 23.
Risk assessment of chemicals is a time-consuming process and needs to be optimized to ensure all chemicals are evaluated and regulated in a timely manner. This transition could be accelerated by valuable applications of in silico Artificial Intelligence (AI)/Machine Learning (ML) models. However, the implementation of AI/ML models in risk assessment is lagging behind. Most AI/ML models are considered 'black boxes' that lack mechanistic explainability, causing risk assessors to have insufficient trust in their predictions. Here, we explore 'trust' as an essential factor towards regulatory acceptance of AI/ML models. We provide an overview of the elements of trust, including technical and beyond-technical aspects, and highlight the elements that risk assessors consider most important for building trust. The results provide recommendations for risk assessors and computational modelers for the future development of AI/ML models: 1) Keep models simple and interpretable; 2) Offer transparency in the data and data curation; 3) Clearly define and communicate the scope/intended purpose; 4) Define adoption criteria; 5) Make models accessible and user-friendly; 6) Demonstrate the added value in practical settings; and 7) Engage in interdisciplinary collaboration. These recommendations should ideally be acknowledged in future developments to stimulate trust in, and acceptance of, AI/ML models for regulatory purposes.