Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI).

Author Information

Brandenburg Johanna M, Müller-Stich Beat P, Wagner Martin, van der Schaar Mihaela

Affiliations

Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany.

National Center for Tumor Diseases (NCT), Heidelberg, Germany.

Publication Information

Langenbecks Arch Surg. 2025 Jan 28;410(1):53. doi: 10.1007/s00423-025-03626-7.

Abstract

PURPOSE

This brief report aims to summarize and discuss the methodologies of eXplainable Artificial Intelligence (XAI) and their potential applications in surgery.

METHODS

We briefly introduce explainability methods, including global and individual explanatory features, methods for imaging data and time series, as well as similarity classification, and unraveled rules and laws.
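The distinction between global and individual explanatory features can be illustrated with a toy example. The sketch below is not from the paper: it uses a hypothetical linear risk model with made-up surgical predictors (`age`, `bmi`, `op_time_min`) and assumed coefficients. For a linear model, a global explanation reduces to coefficient magnitudes, while an individual explanation decomposes one patient's prediction into per-feature contributions, analogous to additive attribution methods such as SHAP.

```python
# Hypothetical sketch (illustrative only): global vs. individual
# explanatory features for a toy linear surgical risk model.

feature_names = ["age", "bmi", "op_time_min"]  # assumed predictors
weights = [0.03, 0.05, 0.01]                   # assumed coefficients
bias = -2.0

def predict(x):
    """Linear risk score: w . x + b."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

# Global explanation: which features matter most across all patients?
# For a linear model this is simply the coefficient magnitudes.
global_importance = dict(zip(feature_names, (abs(w) for w in weights)))

# Individual explanation: how much each feature contributed to one
# specific patient's prediction (additive decomposition of the score).
patient = [65, 28.0, 180]
individual_contrib = {
    name: w * xi for name, w, xi in zip(feature_names, weights, patient)
}
```

Real clinical models are nonlinear, so in practice model-agnostic methods (e.g. SHAP, permutation importance) play the role of the coefficient inspection shown here.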

RESULTS

Given the increasing interest in artificial intelligence within the surgical field, we emphasize the critical importance of transparency and interpretability in the outputs of applied models.

CONCLUSION

Transparency and interpretability are essential for the effective integration of AI models into clinical practice.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8951/11775030/2ad52fb684f5/423_2025_3626_Fig1_HTML.jpg
