Connor Skylar, Li Ting, Roberts Ruth, Thakkar Shraddha, Liu Zhichao, Tong Weida
National Center for Toxicological Research, US Food and Drug Administration, Jefferson, AR, United States.
ApconiX Ltd., Macclesfield, United Kingdom.
Front Artif Intell. 2022 Nov 8;5:1034631. doi: 10.3389/frai.2022.1034631. eCollection 2022.
Artificial intelligence (AI) has played a crucial role in advancing biomedical sciences but has yet to have the impact it merits in regulatory science. As the field advances, AI approaches are being evaluated as alternatives to animal studies in a drive to identify and mitigate safety concerns earlier in the drug development process. Although many AI tools are available, their acceptance in regulatory decision-making for drug efficacy and safety evaluation remains a challenge. It is a common perception that an AI model improves with more data, but does reality reflect this perception in drug safety assessments? Importantly, a model aiming at regulatory application needs to take a broad range of model characteristics into consideration. Among them is adaptability, defined as the adaptive behavior of a model as it is retrained on unseen data. This is an important model characteristic that should be considered in regulatory applications. Here, we set up a comprehensive study to assess adaptability in AI by mimicking the real-world scenario of the annual addition of new drugs to the market, using DeepDILI, a deep learning model we previously developed for predicting drug-induced liver injury (DILI). We found that the target test set plays a major role in assessing the adaptive behavior of our model. Our findings also indicated that adding more drugs to the training set does not significantly affect the predictive performance of our adaptive model. We concluded that the proposed adaptability assessment framework has utility in evaluating the performance of a model over time.
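The adaptability assessment described above can be sketched as a retraining loop: drugs are grouped by approval year, the model is retrained each year on the cumulative set of approved drugs, and each retrained model is scored against a fixed target test set. The following is a minimal illustrative sketch with synthetic data and a toy threshold classifier standing in for DeepDILI; all names, feature choices, and year ranges here are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an adaptability assessment loop (not the authors' code).
# Drugs are grouped by approval year; at each step the "model" is retrained on
# all drugs approved so far and evaluated on a fixed target test set, so that
# predictive performance can be tracked as new drugs accumulate.
import random

random.seed(0)

def make_drug(year):
    # Toy drug record: one feature plus a binary DILI label loosely tied to it.
    x = random.gauss(0.0, 1.0)
    label = 1 if x + random.gauss(0.0, 0.5) > 0 else 0
    return {"year": year, "x": x, "dili": label}

def train(drugs):
    # Toy classifier: a decision threshold at the midpoint of the class means.
    pos = [d["x"] for d in drugs if d["dili"] == 1]
    neg = [d["x"] for d in drugs if d["dili"] == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def accuracy(threshold, test_set):
    hits = sum((d["x"] > threshold) == bool(d["dili"]) for d in test_set)
    return hits / len(test_set)

years = range(2015, 2021)
by_year = {y: [make_drug(y) for _ in range(50)] for y in years}
target_test = [make_drug(2021) for _ in range(200)]  # fixed target test set

training = []
for y in years:
    training += by_year[y]   # annual addition of newly "approved" drugs
    model = train(training)  # retrain on the cumulative training set
    print(f"through {y}: n_train={len(training)}, "
          f"acc={accuracy(model, target_test):.3f}")
```

Holding the target test set fixed across retraining rounds is what isolates the effect of the growing training set, mirroring the paper's finding that the choice of target test set dominates the observed adaptive behavior.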