Lal Amos, Pinevich Yuliya, Gajic Ognjen, Herasevich Vitaly, Pickering Brian
Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55905, United States.
Multidisciplinary Epidemiology and Translational Research in Intensive Care Group, Mayo Clinic, Rochester, MN 55905, United States.
World J Crit Care Med. 2020 Jun 5;9(2):13-19. doi: 10.5492/wjccm.v9.i2.13.
Widespread implementation of electronic health records has led to the increased use of artificial intelligence (AI) and computer modeling in clinical medicine. The early recognition and treatment of critical illness are central to good outcomes but are made difficult by, among other things, the complexity of the environment and the often non-specific nature of the clinical presentation. To address this challenge, AI applications are increasingly being proposed as decision support for busy or distracted clinicians. Data-driven "associative" AI models are built from retrospective data registries with missing data and imprecise timing. Associative AI models lack transparency, often ignore causal mechanisms, and, while potentially useful for improved prognostication, have thus far had limited clinical applicability. To be clinically useful, AI tools need to provide bedside clinicians with actionable knowledge. Explicitly addressing causal mechanisms not only increases the validity and replicability of a model, but also adds transparency and helps gain the trust of bedside clinicians for real-world use of AI models in teaching and patient care.