Ramos-Soto Oscar, Aranguren Itzel, Carrillo M Manuel, Oliva Diego, Balderas-Mata Sandra E
Universidad de Guadalajara, CUCEI, Departamento de Ingeniería Electro-fotónica, Guadalajara, México.
Hospital de Especialidades Bernardo Sepúlveda, Centro Médico Nacional Siglo XXI, División de Oftalmología, Ciudad de México, México.
J Med Imaging (Bellingham). 2025 Nov;12(6):061405. doi: 10.1117/1.JMI.12.6.061405. Epub 2025 Jun 19.
We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms. We address the significant challenges preventing immediate clinical adoption of AI, specifically from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the necessary steps to ensure safe, effective, and ethically sound clinical implementation.
We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients.
The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users.
Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.