

Algorithmic Fairness of Machine Learning Models for Alzheimer Disease Progression.

Affiliations

Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia.

Penn Statistics in Imaging and Visualization Endeavor, Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia.

Publication information

JAMA Netw Open. 2023 Nov 1;6(11):e2342203. doi: 10.1001/jamanetworkopen.2023.42203.

DOI: 10.1001/jamanetworkopen.2023.42203
PMID: 37934495
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10630899/
Abstract

IMPORTANCE

Predictive models using machine learning techniques have potential to improve early detection and management of Alzheimer disease (AD). However, these models potentially have biases and may perpetuate or exacerbate existing disparities.

OBJECTIVE

To characterize the algorithmic fairness of longitudinal prediction models for AD progression.

DESIGN, SETTING, AND PARTICIPANTS

This prognostic study investigated the algorithmic fairness of logistic regression, support vector machines, and recurrent neural networks for predicting progression to mild cognitive impairment (MCI) and AD using data from participants in the Alzheimer Disease Neuroimaging Initiative evaluated at 57 sites in the US and Canada. Participants aged 54 to 91 years who contributed data on at least 2 visits between September 2005 and May 2017 were included. Data were analyzed in October 2022.

EXPOSURES

Fairness was quantified across sex, ethnicity, and race groups. Neuropsychological test scores, anatomical features from T1 magnetic resonance imaging, measures extracted from positron emission tomography, and cerebrospinal fluid biomarkers were included as predictors.

MAIN OUTCOMES AND MEASURES

Outcome measures quantified fairness of prediction models (logistic regression [LR], support vector machine [SVM], and recurrent neural network [RNN] models), including equal opportunity, equalized odds, and demographic parity. Specifically, if the model exhibited equal sensitivity for all groups, it aligned with the principle of equal opportunity, indicating fairness in predictive performance.
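The three criteria named above can all be computed from a model's binary predictions and group labels. A minimal sketch (illustrative only, not the study's code; the function `fairness_gaps` and its group encoding are assumptions):

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Largest between-group gaps for demographic parity,
    equal opportunity (sensitivity/TPR), and equalized odds (TPR and FPR)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        m = group == g
        pos = y_true[m] == 1            # true progressors in this group
        neg = y_true[m] == 0            # true non-progressors
        rates[g] = (
            y_pred[m].mean(),                                 # P(pred=1 | group)
            y_pred[m][pos].mean() if pos.any() else np.nan,   # TPR (sensitivity)
            y_pred[m][neg].mean() if neg.any() else np.nan,   # FPR
        )
    sel, tpr, fpr = zip(*rates.values())
    return {
        # demographic parity: equal positive-prediction rates across groups
        "demographic_parity_gap": max(sel) - min(sel),
        # equal opportunity: equal sensitivity across groups
        "equal_opportunity_gap": max(tpr) - min(tpr),
        # equalized odds: equal TPR and FPR across groups
        "equalized_odds_gap": max(max(tpr) - min(tpr), max(fpr) - min(fpr)),
    }
```

Under this formulation, a model satisfies equal opportunity exactly when its sensitivity gap is zero for every pair of groups, which is the criterion the abstract describes.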

RESULTS

A total of 1730 participants in the cohort (mean [SD] age, 73.81 [6.92] years; 776 females [44.9%]; 69 Hispanic [4.0%] and 1661 non-Hispanic [96.0%]; 29 Asian [1.7%], 77 Black [4.5%], 1599 White [92.4%], and 25 other race [1.4%]) were included. Sensitivity for predicting progression to MCI and AD was lower for Hispanic participants compared with non-Hispanic participants; the difference (SD) in true positive rate ranged from 20.9% (5.5%) for the RNN model to 27.8% (9.8%) for the SVM model in MCI and 24.1% (5.4%) for the RNN model to 48.2% (17.3%) for the LR model in AD. Sensitivity was similarly lower for Black and Asian participants compared with non-Hispanic White participants; for example, the difference (SD) in AD true positive rate was 14.5% (51.6%) in the LR model, 12.3% (35.1%) in the SVM model, and 28.4% (16.8%) in the RNN model for Black vs White participants, and the difference (SD) in MCI true positive rate was 25.6% (13.1%) in the LR model, 24.3% (13.1%) in the SVM model, and 6.8% (18.7%) in the RNN model for Asian vs White participants. Models generally satisfied metrics of fairness with respect to sex, with no significant differences by group, except for cognitively normal (CN)-MCI and MCI-AD transitions (eg, an absolute increase [SD] in the true positive rate of CN-MCI transitions of 10.3% [27.8%] for the LR model).

CONCLUSIONS AND RELEVANCE

In this study, models were accurate in aggregate but failed to satisfy fairness metrics. These findings suggest that fairness should be considered in the development and use of machine learning models for AD progression.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3d/10630899/a38ea9a59e84/jamanetwopen-e2342203-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da3d/10630899/516dff206f6c/jamanetwopen-e2342203-g002.jpg

Similar articles

1
Algorithmic Fairness of Machine Learning Models for Alzheimer Disease Progression.
JAMA Netw Open. 2023 Nov 1;6(11):e2342203. doi: 10.1001/jamanetworkopen.2023.42203.
2
Fairness in Predicting Cancer Mortality Across Racial Subgroups.
JAMA Netw Open. 2024 Jul 1;7(7):e2421290. doi: 10.1001/jamanetworkopen.2024.21290.
3
Racial and Ethnic Bias in Risk Prediction Models for Colorectal Cancer Recurrence When Race and Ethnicity Are Omitted as Predictors.
JAMA Netw Open. 2023 Jun 1;6(6):e2318495. doi: 10.1001/jamanetworkopen.2023.18495.
4
Racial/Ethnic Disparities in the Performance of Prediction Models for Death by Suicide After Mental Health Visits.
JAMA Psychiatry. 2021 Jul 1;78(7):726-734. doi: 10.1001/jamapsychiatry.2021.0493.
5
Optimizing Machine Learning Methods to Improve Predictive Models of Alzheimer's Disease.
J Alzheimers Dis. 2019;71(3):1027-1036. doi: 10.3233/JAD-190262.
6
Disparities by Race and Ethnicity Among Adults Recruited for a Preclinical Alzheimer Disease Trial.
JAMA Netw Open. 2021 Jul 1;4(7):e2114364. doi: 10.1001/jamanetworkopen.2021.14364.
7
Racial Disparity in Cerebrospinal Fluid Amyloid and Tau Biomarkers and Associated Cutoffs for Mild Cognitive Impairment.
JAMA Netw Open. 2019 Dec 2;2(12):e1917363. doi: 10.1001/jamanetworkopen.2019.17363.
8
A Stable and Scalable Digital Composite Neurocognitive Test for Early Dementia Screening Based on Machine Learning: Model Development and Validation Study.
J Med Internet Res. 2023 Dec 1;25:e49147. doi: 10.2196/49147.
9
A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer's disease.
Neuroimage. 2019 Apr 1;189:276-287. doi: 10.1016/j.neuroimage.2019.01.031. Epub 2019 Jan 14.
10
Fairness of Machine Learning Algorithms for Predicting Foregone Preventive Dental Care for Adults.
JAMA Netw Open. 2023 Nov 1;6(11):e2341625. doi: 10.1001/jamanetworkopen.2023.41625.

Cited by

1
Neurotechnological Approaches to Cognitive Rehabilitation in Mild Cognitive Impairment: A Systematic Review of Neuromodulation, EEG, Virtual Reality, and Emerging AI Applications.
Brain Sci. 2025 May 28;15(6):582. doi: 10.3390/brainsci15060582.
2
A scoping review and evidence gap analysis of clinical AI fairness.
NPJ Digit Med. 2025 Jun 14;8(1):360. doi: 10.1038/s41746-025-01667-2.
3
Tailoring task arithmetic to address bias in models trained on multi-institutional datasets.
J Biomed Inform. 2025 Aug;168:104858. doi: 10.1016/j.jbi.2025.104858. Epub 2025 Jun 8.
4
Ensuring Fairness in Detecting Mild Cognitive Impairment with MRI.
AMIA Annu Symp Proc. 2025 May 22;2024:1119-1128. eCollection 2024.
5
External validation of a proprietary risk model for 1-year mortality in community-dwelling adults aged 65 years or older.
J Am Med Inform Assoc. 2025 Jul 1;32(7):1110-1119. doi: 10.1093/jamia/ocaf062.
6
Exploring trade-offs in equitable stroke risk prediction with parity-constrained and race-free models.
Artif Intell Med. 2025 Jun;164:103130. doi: 10.1016/j.artmed.2025.103130. Epub 2025 Apr 10.
7
Mitigating bias in AI mortality predictions for minority populations: a transfer learning approach.
BMC Med Inform Decis Mak. 2025 Jan 17;25(1):30. doi: 10.1186/s12911-025-02862-7.
8
Assessment of Racial Bias within the Risk Analysis Index of Frailty.
Ann Surg Open. 2024 Sep 25;5(4):e490. doi: 10.1097/AS9.0000000000000490. eCollection 2024 Dec.
9
Addressing fairness issues in deep learning-based medical image analysis: a systematic review.
NPJ Digit Med. 2024 Oct 17;7(1):286. doi: 10.1038/s41746-024-01276-5.
10
Machine Learning Models for Predicting Mortality in Critically Ill Patients with Sepsis-Associated Acute Kidney Injury: A Systematic Review.
Diagnostics (Basel). 2024 Jul 24;14(15):1594. doi: 10.3390/diagnostics14151594.

References

1
Association of the Informant-Reported Memory Decline With Cognitive and Brain Deterioration Through the Alzheimer Clinical Continuum.
Neurology. 2023 Jun 13;100(24):e2454-e2465. doi: 10.1212/WNL.0000000000207338. Epub 2023 Apr 21.
2
Prevention of Bias and Discrimination in Clinical Practice Algorithms.
JAMA. 2023 Jan 24;329(4):283-284. doi: 10.1001/jama.2022.23867.
3
Algorithmic fairness in computational medicine.
EBioMedicine. 2022 Oct;84:104250. doi: 10.1016/j.ebiom.2022.104250. Epub 2022 Sep 6.
4
Self-supervised learning in medicine and healthcare.
Nat Biomed Eng. 2022 Dec;6(12):1346-1352. doi: 10.1038/s41551-022-00914-1. Epub 2022 Aug 11.
5
AI in health and medicine.
Nat Med. 2022 Jan;28(1):31-38. doi: 10.1038/s41591-021-01614-0. Epub 2022 Jan 20.
6
Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations.
Nat Med. 2021 Dec;27(12):2176-2182. doi: 10.1038/s41591-021-01595-0. Epub 2021 Dec 10.
7
Black and White individuals differ in dementia prevalence, risk factors, and symptomatic presentation.
Alzheimers Dement. 2022 Aug;18(8):1461-1471. doi: 10.1002/alz.12509. Epub 2021 Dec 2.
8
The Problem of Fairness in Synthetic Healthcare Data.
Entropy (Basel). 2021 Sep 4;23(9):1165. doi: 10.3390/e23091165.
9
Ethical Machine Learning in Healthcare.
Annu Rev Biomed Data Sci. 2021 Jul;4:123-144. doi: 10.1146/annurev-biodatasci-092820-114757. Epub 2021 May 6.
10
Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health.
Front Artif Intell. 2021 Apr 15;3:561802. doi: 10.3389/frai.2020.561802. eCollection 2020.