Teng Zixuan, Li Lan, Xin Ziqing, Xiang Dehui, Huang Jiang, Zhou Hailing, Shi Fei, Zhu Weifang, Cai Jing, Peng Tao, Chen Xinjian
School of Future Science and Engineering, Soochow University, Suzhou, China.
Healthy Inspection and Testing Institute, The Center for Disease Control and Prevention of Huangshi, Huangshi, China.
Quant Imaging Med Surg. 2024 Dec 5;14(12):9620-9652. doi: 10.21037/qims-24-723. Epub 2024 Nov 29.
Medical image segmentation is a vital aspect of medical image processing, allowing healthcare professionals to conduct precise and comprehensive lesion analyses. Traditional segmentation methods are often labor-intensive and influenced by the subjectivity of individual physicians. The advent of artificial intelligence (AI) has transformed this field by reducing the workload of physicians and improving the accuracy and efficiency of disease diagnosis. However, conventional AI techniques are not without challenges. Issues such as a lack of explainability, uncontrollable decision-making processes, and unpredictability can lead to confusion and uncertainty in clinical decision-making. This review explores the evolution of AI in medical image segmentation, focusing on the development and impact of explainable AI (XAI) and trustworthy AI (TAI).
This review synthesizes existing literature on traditional segmentation methods, AI-based approaches, and the transition from conventional AI to XAI and TAI. The review highlights the key principles and advancements in XAI that aim to address the shortcomings of conventional AI by enhancing transparency and interpretability. It further examines how TAI builds on XAI to improve the reliability, safety, and accountability of AI systems in medical image segmentation.
XAI has emerged as a solution to the limitations of conventional AI by providing greater transparency and interpretability, allowing healthcare professionals to better understand and trust AI-driven decisions. However, XAI itself faces challenges, including those related to safety, robustness, and value alignment. TAI has been developed to overcome these challenges, offering a more reliable framework for AI applications in medical image segmentation. By integrating the principles of XAI with enhanced safety and dependability, TAI addresses the critical need for trustworthy AI systems in clinical settings.
TAI presents a promising future for medical image segmentation, combining the benefits of AI with improved reliability and safety. Thus, TAI is a more viable and dependable option for healthcare applications, and could ultimately lead to better clinical outcomes for patients and advance the field of medical image processing.