

Similar Articles

1. A riddle, wrapped in a mystery, inside an enigma: How semantic black boxes and opaque artificial intelligence confuse medical decision-making.
Bioethics. 2022 Feb;36(2):113-120. doi: 10.1111/bioe.12924. Epub 2021 Aug 10.

2. Call for the responsible artificial intelligence in the healthcare.
BMJ Health Care Inform. 2023 Dec 21;30(1):e100920. doi: 10.1136/bmjhci-2023-100920.

3. Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care.
Hastings Cent Rep. 2021 Jul;51(4):38-45. doi: 10.1002/hast.1248. Epub 2021 Apr 6.

4. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice.
Transl Vis Sci Technol. 2020 Oct 15;9(2):55. doi: 10.1167/tvst.9.2.55. eCollection 2020 Oct.

5. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making.
Curr Oncol. 2023 Feb 9;30(2):2178-2186. doi: 10.3390/curroncol30020168.

6. Artificial Intelligence in Medical Practice: Regulative Issues and Perspectives.
Wiad Lek. 2020;73(12 cz 2):2722-2727.

7. The artificial intelligence revolution in primary care: Challenges, dilemmas and opportunities.
Aten Primaria. 2024 Feb;56(2):102820. doi: 10.1016/j.aprim.2023.102820. Epub 2023 Dec 5.

8. Trusting AI made decisions in healthcare by making them explainable.
Sci Prog. 2024 Jul-Sep;107(3):368504241266573. doi: 10.1177/00368504241266573.

9. Evaluation of artificial intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues.
Bioethics. 2021 Sep;35(7):623-633. doi: 10.1111/bioe.12885. Epub 2021 May 28.

10. Ethical Issues in the Utilization of Black Boxes for Artificial Intelligence in Medicine.
Stud Health Technol Inform. 2022 Jun 29;295:249-252. doi: 10.3233/SHTI220709.

Cited By

1. Engaging an advisory board in discussions about the ethical relevance of algorithmic bias and fairness.
NPJ Digit Med. 2025 May 18;8(1):292. doi: 10.1038/s41746-025-01711-1.

2. Role and Potential of Artificial Intelligence in Biomarker Discovery and Development of Treatment Strategies for Amyotrophic Lateral Sclerosis.
Int J Mol Sci. 2025 May 2;26(9):4346. doi: 10.3390/ijms26094346.

3. When can we Kick (Some) Humans "Out of the Loop"? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis.
Asian Bioeth Rev. 2024 May 15;17(1):207-223. doi: 10.1007/s41649-024-00290-9. eCollection 2025 Jan.

4. Ethical considerations on the use of big data and artificial intelligence in kidney research from the ERA ethics committee.
Nephrol Dial Transplant. 2025 Feb 28;40(3):455-464. doi: 10.1093/ndt/gfae267.

5. Analyzing the Predictability of an Artificial Intelligence App (Tibot) in the Diagnosis of Dermatological Conditions: A Cross-sectional Study.
JMIR Dermatol. 2023 Mar 1;6:e45529. doi: 10.2196/45529.

6. Digital determinants of health: opportunities and risks amidst health inequities.
Nat Rev Nephrol. 2023 Dec;19(12):749-750. doi: 10.1038/s41581-023-00763-4.

7. Democratising or disrupting diagnosis? Ethical issues raised by the use of AI tools for rare disease diagnosis.
SSM Qual Res Health. 2023 Jun;3:100240. doi: 10.1016/j.ssmqr.2023.100240.

8. An exploration of expectations and perceptions of practicing physicians on the implementation of computerized clinical decision support systems using a Qsort approach.
BMC Med Inform Decis Mak. 2022 Jul 16;22(1):185. doi: 10.1186/s12911-022-01933-3.

A riddle, wrapped in a mystery, inside an enigma: How semantic black boxes and opaque artificial intelligence confuse medical decision-making.

Affiliations

Tilburg Institute for Law, Markets, Technology, and Society, Tilburg Law School, Tilburg, The Netherlands.

Bioethics Institute Ghent, Ghent University, Ghent, Belgium.

Publication Information

Bioethics. 2022 Feb;36(2):113-120. doi: 10.1111/bioe.12924. Epub 2021 Aug 10.

DOI: 10.1111/bioe.12924
PMID: 34374441
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9291279/
Abstract

The use of artificial intelligence (AI) in healthcare comes with opportunities but also numerous challenges. A specific challenge that remains underexplored is the lack of clear and distinct definitions of the concepts used in and/or produced by these algorithms, and how their real world meaning is translated into machine language and vice versa, how their output is understood by the end user. This "semantic" black box adds to the "mathematical" black box present in many AI systems in which the underlying "reasoning" process is often opaque. In this way, whereas it is often claimed that the use of AI in medical applications will deliver "objective" information, the true relevance or meaning to the end-user is frequently obscured. This is highly problematic as AI devices are used not only for diagnostic and decision support by healthcare professionals, but also can be used to deliver information to patients, for example to create visual aids for use in shared decision-making. This paper provides an examination of the range and extent of this problem and its implications, on the basis of cases from the field of intensive care nephrology. We explore how the problematic terminology used in human communication about the detection, diagnosis, treatment, and prognosis of concepts of intensive care nephrology becomes a much more complicated affair when deployed in the form of algorithmic automation, with implications extending throughout clinical care, affecting norms and practices long considered fundamental to good clinical care.
