

Applications of and issues with machine learning in medicine: Bridging the gap with explainable AI.

Author information

Karako Kenji, Tang Wei

Affiliations

Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.

National Center for Global Health and Medicine, Tokyo, Japan.

Publication information

Biosci Trends. 2025 Jan 14;18(6):497-504. doi: 10.5582/bst.2024.01342. Epub 2024 Dec 8.

Abstract

In recent years, machine learning, and particularly deep learning, has shown remarkable potential in various fields, including medicine. Advanced techniques like convolutional neural networks and transformers have enabled high-performance predictions for complex problems, making machine learning a valuable tool in medical decision-making. From predicting postoperative complications to assessing disease risk, machine learning has been actively used to analyze patient data and assist healthcare professionals. However, the "black box" problem, wherein the internal workings of machine learning models are opaque and difficult to interpret, poses a significant challenge in medical applications. The lack of transparency may hinder trust and acceptance by clinicians and patients, making the development of explainable AI (XAI) techniques essential. XAI aims to provide both global and local explanations for machine learning models, offering insights into how predictions are made and which factors influence these outcomes. In this article, we explore various applications of machine learning in medicine, describe commonly used algorithms, and discuss explainable AI as a promising solution to enhance the interpretability of these models. By integrating explainability into machine learning, we aim to ensure its ethical and practical application in healthcare, ultimately improving patient outcomes and supporting personalized treatment strategies.
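The abstract notes that XAI provides both global and local explanations but does not include an implementation. Purely as an illustrative sketch of the global-explanation idea, the Python snippet below ranks the inputs of a black-box model by permutation importance using scikit-learn; the synthetic data, the clinical-sounding feature names, and the random-forest model are assumptions made for demonstration and are not taken from the article.

```python
# Illustrative sketch only (not from the article): a global explanation of a
# black-box clinical risk model via permutation feature importance.
# The data and feature names below are synthetic assumptions for demonstration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "bmi", "blood_pressure", "hba1c", "creatinine"]
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black box") model on the synthetic patient data.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global explanation: how much does shuffling each feature degrade accuracy
# on held-out data? Larger drops indicate more influential features.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name}: {mean_imp:.3f}")
```

For the local, per-patient explanations the abstract also refers to, methods such as LIME or SHAP attribute a single prediction to the contributions of individual features; they are not shown here.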

