Ferreira-da-Silva Renato, Cruz-Correia Ricardo, Ribeiro Inês
Porto Pharmacovigilance Centre, Faculty of Medicine of the University of Porto, Alameda Professor Hernâni Monteiro, 4200-319, Porto, Portugal.
RISE-Health, Department of Community Medicine, Information and Health Decision Sciences, Faculty of Medicine of the University of Porto, Porto, Portugal.
Int J Clin Pharm. 2025 Sep 1. doi: 10.1007/s11096-025-02004-z.
Artificial intelligence (AI), particularly machine learning (ML), is increasingly influencing pharmacovigilance (PV) by improving case triage and signal detection. Several studies have reported encouraging performance, with high F1 scores and alignment with expert assessments, suggesting that AI tools can help prioritize reports and identify potential safety issues faster than manual review. However, integrating these tools into PV raises concerns. Most models are designed for prediction, not explanation, and operate as "black boxes," offering limited insight into how decisions are made. This lack of transparency may undermine trust and clinical utility, especially in a domain where causality is central. Traditional ML relies on correlational patterns and may amplify biases inherent in spontaneous reporting systems, such as under-reporting, missing data, and confounding. Recent developments in explainable AI (XAI) and causal AI aim to address these issues by offering more interpretable and causally meaningful outputs, but their use in PV remains limited. These methods face challenges, including the need for robust data, the difficulty of defining ground truth for adverse drug reactions (ADRs), and the lack of standard validation frameworks. In this commentary, we explore the promise and pitfalls of AI in PV and argue for a shift toward causally informed, interpretable models grounded in epidemiological reasoning. We identify four priorities: incorporating causal inference into AI workflows; developing benchmark datasets to support transparent evaluation; ensuring model outputs align with clinical and regulatory logic; and upholding rigorous validation standards. The goal is not to replace expert judgment, but to enhance it with tools that are more transparent, reliable, and capable of separating true signals from noise. Moving toward explainable and causally robust AI is essential to ensure that its application in pharmacovigilance is both scientifically credible and ethically sound.
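For readers less familiar with the metrics referenced above, the following minimal sketch (in Python, with purely hypothetical counts) shows how an F1 score for a binary case-triage classifier and a reporting odds ratio, one of the conventional disproportionality statistics used in signal detection, are computed. It illustrates the underlying arithmetic only; it is not the models or tools discussed in the commentary.

import math

def f1_score(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall for a binary triage classifier.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def reporting_odds_ratio(a, b, c, d):
    # 2x2 contingency table of spontaneous reports:
    #   a: drug of interest AND event of interest
    #   b: drug of interest, other events
    #   c: other drugs AND event of interest
    #   d: other drugs, other events
    # Returns the ROR with an approximate 95% confidence interval.
    ror = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts, for illustration only.
print(f"F1 = {f1_score(tp=80, fp=20, fn=10):.2f}")
ror, lower, upper = reporting_odds_ratio(a=25, b=475, c=500, d=99000)
print(f"ROR = {ror:.1f} (95% CI {lower:.1f} to {upper:.1f})")

A high F1 value alone, as the commentary argues, says nothing about whether a flagged association is causal, and disproportionality measures such as the ROR are likewise correlational and subject to the reporting biases described above.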