Advancing ethical AI in healthcare through interpretability.

Author Information

Ning Yilin, Liu Mingxuan, Liu Nan

Affiliations

Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore.

Duke-NUS AI + Medical Sciences Initiative, Duke-NUS Medical School, Singapore, Singapore.

Publication Information

Patterns (N Y). 2025 Jun 13;6(6):101290. doi: 10.1016/j.patter.2025.101290.

Abstract

Interpretability is essential for building trust in health artificial intelligence (AI), but ensuring trustworthiness requires addressing broader ethical concerns, such as fairness, privacy, and reliability. This opinion article discusses the multilayered role of interpretability and transparency in addressing these concerns by highlighting their fundamental contribution to the responsible adoption and regulation of health AI.

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2bbc/12191714/fbedffd4ec61/gr1.jpg
