Pitfalls and Best Practices in Evaluation of AI Algorithmic Biases in Radiology.

Authors

Yi Paul H, Bachina Preetham, Bharti Beepul, Garin Sean P, Kanhere Adway, Kulkarni Pranav, Li David, Parekh Vishwa S, Santomartino Samantha M, Moy Linda, Sulam Jeremias

Affiliations

From the Department of Radiology, St Jude Children's Research Hospital, 262 Danny Thomas Pl, Memphis, TN 38105-3678 (P.H.Y.); Johns Hopkins University School of Medicine, Baltimore, Md (P.B.); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Md (B.B., J.S.); Uniformed Services University of the Health Sciences, Bethesda, Md (S.P.G.); Institute for Health Computing, University of Maryland School of Medicine, Baltimore, Md (A.K., P.K.); Department of Medical Imaging, Western University Schulich School of Medicine & Dentistry, London, Ontario, Canada (D.L.); Department of Diagnostic and Interventional Imaging, McGovern Medical School at The University of Texas Health Science Center at Houston (UTHealth Houston), Houston, Tex (V.S.P.); Drexel University School of Medicine, Philadelphia, Pa (S.M.S.); and Department of Radiology, New York University Grossman School of Medicine, New York, NY (L.M.).

Publication Information

Radiology. 2025 May;315(2):e241674. doi: 10.1148/radiol.241674.

PMID: 40392092
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12127964/
Abstract

Despite growing awareness of problems with fairness in artificial intelligence (AI) models in radiology, evaluation of algorithmic biases, or AI biases, remains challenging due to various complexities. These include incomplete reporting of demographic information in medical imaging datasets, variability in definitions of demographic categories, and inconsistent statistical definitions of bias. To guide the appropriate evaluation of AI biases in radiology, this article summarizes the pitfalls in the evaluation and measurement of algorithmic biases. These pitfalls span the spectrum from the technical (eg, how different statistical definitions of bias impact conclusions about whether an AI model is biased) to those associated with social context (eg, how different conventions of race and ethnicity impact identification or masking of biases). Actionable best practices and future directions to avoid these pitfalls are summarized across three key areas: medical imaging datasets, demographic definitions, and statistical evaluations of bias. Although AI bias in radiology has been broadly reviewed in the recent literature, this article focuses specifically on underrecognized potential pitfalls related to the three key areas. By providing awareness of these pitfalls along with actionable practices to avoid them, exciting AI technologies can be used in radiology for the good of all people.
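One of the abstract's technical points is that different statistical definitions of bias can lead to opposite conclusions about the same model. A minimal toy sketch (not from the article; the data and variable names are hypothetical) showing how demographic parity and equal opportunity can disagree on identical predictions:

```python
import numpy as np

def selection_rate(preds):
    # Fraction of cases flagged positive, regardless of true label
    return preds.mean()

def tpr(labels, preds):
    # True-positive rate: sensitivity among truly positive cases
    return preds[labels == 1].mean()

# Hypothetical toy predictions for two demographic groups
labels_a = np.array([1, 1, 0, 0]); preds_a = np.array([1, 1, 0, 0])
labels_b = np.array([1, 1, 1, 0]); preds_b = np.array([1, 0, 1, 0])

# Demographic parity: compare rates of positive predictions
dp_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))

# Equal opportunity: compare true-positive rates
tpr_gap = abs(tpr(labels_a, preds_a) - tpr(labels_b, preds_b))

print(f"Demographic parity gap: {dp_gap:.2f}")        # 0.00 -> "unbiased"
print(f"Equal opportunity (TPR) gap: {tpr_gap:.2f}")  # 0.33 -> "biased"
```

Under demographic parity the model looks fair (both groups are flagged at the same rate), yet under equal opportunity it misses a third more true positives in group B, illustrating why the choice of statistical definition shapes the conclusion.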


Similar Articles

[1]
Pitfalls and Best Practices in Evaluation of AI Algorithmic Biases in Radiology.

Radiology. 2025-5

[2]
Sociodemographic Variables Reporting in Human Radiology Artificial Intelligence Research.

J Am Coll Radiol. 2023-6

[3]
AI pitfalls and what not to do: mitigating bias in AI.

Br J Radiol. 2023-10

[4]
Pitfalls in Interpretive Applications of Artificial Intelligence in Radiology.

AJR Am J Roentgenol. 2024-10

[5]
Bias in artificial intelligence for medical imaging: fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects.

Diagn Interv Radiol. 2025-3-3

[6]
Fairness of artificial intelligence in healthcare: review and recommendations.

Jpn J Radiol. 2024-1

[7]
"Shortcuts" Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation.

J Am Coll Radiol. 2023-9

[8]
Call for algorithmic fairness to mitigate amplification of racial biases in artificial intelligence models used in orthodontics and craniofacial health.

Orthod Craniofac Res. 2023-12

[9]
Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models.

J Am Med Inform Assoc. 2024-4-19

[10]
Understanding Biases and Disparities in Radiology AI Datasets: A Review.

J Am Coll Radiol. 2023-9

Cited By

[1]
The ethics of data mining in healthcare: challenges, frameworks, and future directions.

BioData Min. 2025-7-11

