
Artificial Intelligence Bias in Health Care: Web-Based Survey.

Affiliations

Core Facility Digital Medicine and Interoperability, Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany.

Institute for Medical Informatics, Charité - Universitätsmedizin Berlin, Berlin, Germany.

Publication information

J Med Internet Res. 2023 Jun 22;25:e41089. doi: 10.2196/41089.

DOI: 10.2196/41089
PMID: 37347528
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10337406/
Abstract

BACKGROUND

Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve diagnosis, treatment, and prevention of diseases. While the need for transparency and reduction of bias in data and algorithm development has been addressed in past studies, little is known about the knowledge and perception of bias among AI developers.

OBJECTIVE

This study's objective was to survey AI specialists in health care to investigate developers' perceptions of bias in AI algorithms for health care applications and their awareness and use of preventative measures.

METHODS

A web-based survey was provided in both German and English language, comprising a maximum of 41 questions using branching logic within the REDCap web application. Only the results of participants with experience in the field of medical AI applications and complete questionnaires were included for analysis. Demographic data, technical expertise, and perceptions of fairness, as well as knowledge of biases in AI, were analyzed, and variations among gender, age, and work environment were assessed.
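
The inclusion rule described above (only respondents with experience in medical AI applications and a fully completed questionnaire were analyzed) can be expressed as a simple filter. The sketch below is illustrative only and is not the authors' analysis code; the field names are hypothetical.

```python
# Illustrative sketch (not the authors' code): applying the inclusion criteria
# from the Methods -- experience with medical AI applications and a complete
# questionnaire. All field names are hypothetical.
records = [
    {"id": 1, "medical_ai_experience": True, "questionnaire_complete": True},
    {"id": 2, "medical_ai_experience": False, "questionnaire_complete": True},
    {"id": 3, "medical_ai_experience": True, "questionnaire_complete": False},
]

analysis_set = [
    r for r in records
    if r["medical_ai_experience"] and r["questionnaire_complete"]
]
print(f"records included for analysis: {len(analysis_set)}")  # -> 1 in this toy example
```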

RESULTS

A total of 151 AI specialists completed the web-based survey. The median age was 30 (IQR 26-39) years, and 67% (101/151) of respondents were male. One-third rated their AI development projects as fair (47/151, 31%) or moderately fair (51/151, 34%), 12% (18/151) reported their AI to be barely fair, and 1% (2/151) not fair at all. One participant identifying as diverse rated AI developments as barely fair, and among the 2 undefined gender participants, AI developments were rated as barely fair or moderately fair, respectively. Reasons for biases selected by respondents were lack of fair data (90/132, 68%), guidelines or recommendations (65/132, 49%), or knowledge (60/132, 45%). Half of the respondents worked with image data (83/151, 55%) from 1 center only (76/151, 50%), and 35% (53/151) worked with national data exclusively.
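
The percentages above follow directly from the reported counts. The minimal Python sketch below reproduces that arithmetic using only the numbers given in the abstract; it is not the study's analysis code.

```python
# Minimal sketch reproducing the percentage arithmetic reported in the Results.
# All counts are taken directly from the abstract; this is not the study's code.
n_total = 151  # completed surveys
fairness_ratings = {
    "fair": 47,
    "moderately fair": 51,
    "barely fair": 18,
    "not fair at all": 2,
}
for label, count in fairness_ratings.items():
    print(f"{label}: {count}/{n_total} = {100 * count / n_total:.0f}%")

n_reason = 132  # respondents who selected reasons for bias
bias_reasons = {
    "lack of fair data": 90,
    "lack of guidelines or recommendations": 65,
    "lack of knowledge": 60,
}
for label, count in bias_reasons.items():
    print(f"{label}: {count}/{n_reason} = {100 * count / n_reason:.0f}%")
```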

CONCLUSIONS

This study shows that the perception of biases in AI overall is moderately fair. Gender minorities did not once rate their AI development as fair or very fair. Therefore, further studies need to focus on minorities and women and their perceptions of AI. The results highlight the need to strengthen knowledge about bias in AI and provide guidelines on preventing biases in AI health care applications.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fe/10337406/d9085049df6a/jmir_v25i1e41089_fig1.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fe/10337406/5b4b82c56cc3/jmir_v25i1e41089_fig2.jpg

Similar articles

1. Artificial Intelligence Bias in Health Care: Web-Based Survey. J Med Internet Res. 2023 Jun 22;25:e41089. doi: 10.2196/41089.
2. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol. 2024 Jan;42(1):3-15. doi: 10.1007/s11604-023-01474-3. Epub 2023 Aug 4.
3. Future Medical Artificial Intelligence Application Requirements and Expectations of Physicians in German University Hospitals: Web-Based Survey. J Med Internet Res. 2021 Mar 5;23(3):e26646. doi: 10.2196/26646.
4. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas. Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
5. A roadmap to artificial intelligence (AI): Methods for designing and building AI ready data to promote fairness. J Biomed Inform. 2024 Jun;154:104654. doi: 10.1016/j.jbi.2024.104654. Epub 2024 May 11.
6. A scoping review of fair machine learning techniques when using real-world data. J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
7. Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health. Orthod Craniofac Res. 2023 Dec;26 Suppl 1:124-130. doi: 10.1111/ocr.12721. Epub 2023 Oct 17.
8. Recommendations to promote fairness and inclusion in biomedical AI research and clinical use. J Biomed Inform. 2024 Sep;157:104693. doi: 10.1016/j.jbi.2024.104693. Epub 2024 Jul 15.
9. Latent bias and the implementation of artificial intelligence in medicine. J Am Med Inform Assoc. 2020 Dec 9;27(12):2020-2023. doi: 10.1093/jamia/ocaa094.
10. Lack of Transparency and Potential Bias in Artificial Intelligence Data Sets and Algorithms: A Scoping Review. JAMA Dermatol. 2021 Nov 1;157(11):1362-1369. doi: 10.1001/jamadermatol.2021.3129.

Cited by

1. Application of Machine Learning for Patients With Cardiac Arrest: Systematic Review and Meta-Analysis. J Med Internet Res. 2025 Mar 10;27:e67871. doi: 10.2196/67871.
2. Harnessing artificial intelligence in sepsis care: advances in early detection, personalized treatment, and real-time monitoring. Front Med (Lausanne). 2025 Jan 6;11:1510792. doi: 10.3389/fmed.2024.1510792. eCollection 2024.
3. The Sociodemographic Biases in Machine Learning Algorithms: A Biomedical Informatics Perspective. Life (Basel). 2024 May 21;14(6):652. doi: 10.3390/life14060652.
4. A commentary on 'ChatGPT in medicine: prospects and challenges - a review article'. Int J Surg. 2024 Aug 1;110(8):5171-5172. doi: 10.1097/JS9.0000000000001450.
5. Large language models for generating medical examinations: systematic review. BMC Med Educ. 2024 Mar 29;24(1):354. doi: 10.1186/s12909-024-05239-y.

References

1. A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models. J Am Med Inform Assoc. 2022 Jul 12;29(8):1323-1333. doi: 10.1093/jamia/ocac065.
2. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016.
3. Evaluation of User-Prosthesis-Interfaces for sEMG-Based Multifunctional Prosthetic Hands. Sensors (Basel). 2021 Oct 26;21(21):7088. doi: 10.3390/s21217088.
4. Sex Differences in Cancer Genomes: Much Learned, More Unknown. Endocrinology. 2021 Nov 1;162(11). doi: 10.1210/endocr/bqab170.
5. Lack of consideration of sex and gender in COVID-19 clinical studies. Nat Commun. 2021 Jul 6;12(1):4015. doi: 10.1038/s41467-021-24265-8.
6. Sample size, power and effect size revisited: simplified and practical approaches in pre-clinical, clinical and laboratory studies. Biochem Med (Zagreb). 2021 Feb 15;31(1):010502. doi: 10.11613/BM.2021.010502. Epub 2020 Dec 15.
7. Sex differences in immune responses that underlie COVID-19 disease outcomes. Nature. 2020 Dec;588(7837):315-320. doi: 10.1038/s41586-020-2700-3. Epub 2020 Aug 26.
8. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl Vis Sci Technol. 2020 Feb 27;9(2):14. doi: 10.1167/tvst.9.2.14.
9. Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning. J Am Med Inform Assoc. 2020 Dec 9;27(12):2024-2027. doi: 10.1093/jamia/ocaa085.
10. Sex differences in autophagy-mediated diseases: toward precision medicine. Autophagy. 2021 May;17(5):1065-1076. doi: 10.1080/15548627.2020.1752511. Epub 2020 Apr 17.