

Transparent human - (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies.

Author information

Ott Tabea, Dabrock Peter

Affiliations

Chair of Systematic Theology II (Ethics), Faculty of Humanities, Social Sciences, and Theology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.

Publication information

Front Genet. 2022 Aug 22;13:902960. doi: 10.3389/fgene.2022.902960. eCollection 2022.

Abstract

The use of Artificial Intelligence and Big Data in health care opens up new opportunities for the measurement of the human. Their application aims not only at gathering more and better data points but also at doing so less invasively. With this change in health care towards its extension to almost all areas of life and its increasing invisibility and opacity, new questions of transparency arise. While the complex human-machine interactions involved in deploying and using AI tend to become non-transparent, the use of these technologies makes the patient seemingly transparent. Papers on the ethical implementation of AI plead for transparency but neglect the factor of the "transparent patient" as intertwined with AI. Transparency in this regard appears to be Janus-faced: The precondition for receiving help - e.g., treatment advice regarding one's own health - is to become transparent for the digitized health care system, that is, for instance, to donate data and become visible to the AI and its operators. The paper reflects on this entanglement of transparent patients and (non-) transparent technology. It argues that transparency regarding both AI and humans is not an ethical principle per se but an infraethical concept. Further, it is not a sufficient basis for avoiding harm and violations of human dignity. Rather, transparency must be enriched by intelligibility, following Judith Butler's use of the term. Intelligibility is understood as an epistemological presupposition for recognition and the ensuing humane treatment. Finally, the paper highlights ways to testify to intelligibility in dealing with AI in health care ex ante, ex post, and continuously.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e9fb/9444183/fd2c673799fe/fgene-13-902960-g001.jpg
