BOSTON, MASSACHUSETTS.
Trans Am Clin Climatol Assoc. 2024;134:133-145.
Artificial intelligence (AI) in the form of ChatGPT has rapidly attracted attention from physicians and medical educators. While it holds great promise for more routine medical tasks, may broaden one's differential diagnosis, and may be able to assist in the evaluation of images, such as radiographs and electrocardiograms, the technology is largely based on advanced algorithms akin to pattern recognition. One of the key questions raised in concert with these advances is: What does the growth of artificial intelligence mean for medical education, particularly the development of critical thinking and clinical reasoning? In this commentary, we will explore the elements of cognitive theory that underlie the ways in which physicians are taught to reason through a diagnostic case and compare hypothetico-deductive reasoning, often employing illness scripts, with inductive reasoning, which is based on a deeper understanding of mechanisms of health and disease. Issues of cognitive bias and their impact on diagnostic error will be examined, and the constructs of routine and adaptive expertise will be delineated. The application of artificial intelligence to diagnostic problem solving, along with concerns about racial and gender bias, will then be discussed. Using several case examples, we will demonstrate the limitations of this technology and its potential pitfalls and outline the direction medical education may need to take in the years to come.