Suppr 超能文献


Fairness-related performance and explainability effects in deep learning models for brain image analysis.

Authors

Stanley Emma A M, Wilms Matthias, Mouches Pauline, Forkert Nils D

Affiliations

University of Calgary, Department of Biomedical Engineering, Calgary, Alberta, Canada.

University of Calgary, Department of Radiology, Calgary, Alberta, Canada.

Publication

J Med Imaging (Bellingham). 2022 Nov;9(6):061102. doi: 10.1117/1.JMI.9.6.061102. Epub 2022 Aug 26.

DOI: 10.1117/1.JMI.9.6.061102
PMID: 36046104
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9412191/
Abstract

Explainability and fairness are two key factors for the effective and ethical clinical implementation of deep learning-based machine learning models in healthcare settings. However, there has been limited work on investigating how unfair performance manifests in explainable artificial intelligence (XAI) methods, and how XAI can be used to investigate potential reasons for unfairness. Thus, the aim of this work was to analyze the effects of previously established sociodemographic-related confounders on classifier performance and explainability methods. A convolutional neural network (CNN) was trained to predict biological sex from T1-weighted brain MRI datasets of 4547 9- to 10-year-old adolescents from the Adolescent Brain Cognitive Development study. Performance disparities of the trained CNN between White and Black subjects were analyzed and saliency maps were generated for each subgroup at the intersection of sex and race. The classification model demonstrated a significant difference in the percentage of correctly classified White male ( ) and Black male ( ) children. Conversely, slightly higher performance was found for Black female ( ) compared with White female ( ) children. Saliency maps showed subgroup-specific differences, corresponding to brain regions previously associated with pubertal development. In line with this finding, average pubertal development scores of subjects used in this study were significantly different between Black and White females ( ) and males ( ). We demonstrate that a CNN with significantly different sex classification performance between Black and White adolescents can identify different important brain regions when comparing subgroup saliency maps. Importance scores vary substantially between subgroups within brain structures associated with pubertal development, a race-associated confounder for predicting sex. 
We illustrate that unfair models can produce different XAI results between subgroups and that these results may explain potential reasons for biased performance.

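The core fairness analysis in the abstract — comparing classifier accuracy across intersectional subgroups (sex × race) — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's dataset or code; the subgroup labels, the injected error rate, and the `subgroup_accuracy` helper are all hypothetical.

```python
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy of a classifier within each demographic subgroup."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return results

# Synthetic illustration (hypothetical data): binary sex labels,
# predictions, and intersectional subgroup tags (White/Black x male/female).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = y_true.copy()
groups = rng.choice(["WM", "BM", "WF", "BF"], 1000)

# Inject a subgroup-specific error rate to mimic the kind of biased
# performance the study reports for one subgroup.
flip = (groups == "BM") & (rng.random(1000) < 0.2)
y_pred[flip] = 1 - y_pred[flip]

acc = subgroup_accuracy(y_true, y_pred, groups)
disparity = max(acc.values()) - min(acc.values())
```

A disparity near zero would indicate performance parity across subgroups; the study's saliency-map comparison then asks *where* in the brain the model attends differently for the subgroups that diverge.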

Similar Articles

1
Fairness-related performance and explainability effects in deep learning models for brain image analysis.
J Med Imaging (Bellingham). 2022 Nov;9(6):061102. doi: 10.1117/1.JMI.9.6.061102. Epub 2022 Aug 26.
2
Explainable classification of Parkinson's disease using deep learning trained on a large multi-center database of T1-weighted MRI datasets.
Neuroimage Clin. 2023;38:103405. doi: 10.1016/j.nicl.2023.103405. Epub 2023 Apr 17.
3
Toward explainable AI-empowered cognitive health assessment.
Front Public Health. 2023 Mar 9;11:1024195. doi: 10.3389/fpubh.2023.1024195. eCollection 2023.
4
A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
J Neurosci Methods. 2024 Aug;408:110159. doi: 10.1016/j.jneumeth.2024.110159. Epub 2024 May 7.
5
Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis.
BioData Min. 2024 Jun 22;17(1):18. doi: 10.1186/s13040-024-00370-4.
6
Quantitative evaluation of Saliency-Based Explainable artificial intelligence (XAI) methods in Deep Learning-Based mammogram analysis.
Eur J Radiol. 2024 Apr;173:111356. doi: 10.1016/j.ejrad.2024.111356. Epub 2024 Feb 5.
7
Neuro-XAI: Explainable deep learning framework based on deeplabV3+ and bayesian optimization for segmentation and classification of brain tumor in MRI scans.
J Neurosci Methods. 2024 Oct;410:110247. doi: 10.1016/j.jneumeth.2024.110247. Epub 2024 Aug 10.
8
Image Embeddings Extracted from CNNs Outperform Other Transfer Learning Approaches in Classification of Chest Radiographs.
Diagnostics (Basel). 2022 Aug 28;12(9):2084. doi: 10.3390/diagnostics12092084.
9
Explainability of deep neural networks for MRI analysis of brain tumors.
Int J Comput Assist Radiol Surg. 2022 Sep;17(9):1673-1683. doi: 10.1007/s11548-022-02619-x. Epub 2022 Apr 23.
10
A deep dive into understanding tumor foci classification using multiparametric MRI based on convolutional neural network.
Med Phys. 2020 Sep;47(9):4077-4086. doi: 10.1002/mp.14255. Epub 2020 Jun 12.

Cited By

1
Towards machine learning fairness in classifying multicategory causes of deaths in colorectal or lung cancer patients.
Brief Bioinform. 2025 Jul 2;26(4). doi: 10.1093/bib/bbaf398.
2
Brain Aging in Patients With Cardiovascular Disease From the UK Biobank.
Hum Brain Mapp. 2025 Jun 1;46(8):e70252. doi: 10.1002/hbm.70252.
3
Dimensionality reduction in 3D causal deep learning for neuroimage generation: an evaluation study.
J Med Imaging (Bellingham). 2025 Mar;12(2):024506. doi: 10.1117/1.JMI.12.2.024506. Epub 2025 Apr 22.
4
Levelling up as a fair solution in AI enabled cancer screening.
Front Digit Health. 2025 Feb 25;7:1540982. doi: 10.3389/fdgth.2025.1540982. eCollection 2025.
5
Towards machine learning fairness in classifying multicategory causes of deaths in colorectal or lung cancer patients.
bioRxiv. 2025 Feb 19:2025.02.14.638368. doi: 10.1101/2025.02.14.638368.
6
Where, why, and how is bias learned in medical image analysis models? A study of bias encoding within convolutional networks using synthetic data.
EBioMedicine. 2025 Jan;111:105501. doi: 10.1016/j.ebiom.2024.105501. Epub 2024 Dec 12.
7
Sex differences in brain MRI using deep learning toward fairer healthcare outcomes.
Front Comput Neurosci. 2024 Nov 13;18:1452457. doi: 10.3389/fncom.2024.1452457. eCollection 2024.
8
Analysis and visualization of the effect of multiple sclerosis on biological brain age.
Front Neurol. 2024 Oct 10;15:1423485. doi: 10.3389/fneur.2024.1423485. eCollection 2024.
9
Addressing fairness issues in deep learning-based medical image analysis: a systematic review.
NPJ Digit Med. 2024 Oct 17;7(1):286. doi: 10.1038/s41746-024-01276-5.
10
An Investigation into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features.
Clin Image Based Proced Fairness AI Med Imaging Ethical Philos Issues Med Imaging (2023). 2023;14242:225-234. doi: 10.1007/978-3-031-45249-9_22. Epub 2023 Oct 9.

References

1
DARQ: Deep learning of quality control for stereotaxic registration of human brain MRI to the T1w MNI-ICBM 152 template.
Neuroimage. 2022 Aug 15;257:119266. doi: 10.1016/j.neuroimage.2022.119266. Epub 2022 Apr 29.
2
Multimodal biological brain age prediction using magnetic resonance imaging and angiography with the identification of predictive regions.
Hum Brain Mapp. 2022 Jun 1;43(8):2554-2566. doi: 10.1002/hbm.25805. Epub 2022 Feb 9.
3
Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations.
Nat Med. 2021 Dec;27(12):2176-2182. doi: 10.1038/s41591-021-01595-0. Epub 2021 Dec 10.
4
Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers.
Sensors (Basel). 2021 Aug 23;21(16):5657. doi: 10.3390/s21165657.
5
A Researcher's Guide to the Measurement and Modeling of Puberty in the ABCD Study at Baseline.
Front Endocrinol (Lausanne). 2021 May 5;12:608575. doi: 10.3389/fendo.2021.608575. eCollection 2021.
6
Correspondence Between Perceived Pubertal Development and Hormone Levels in 9-10 Year-Olds From the Adolescent Brain Cognitive Development Study.
Front Endocrinol (Lausanne). 2021 Feb 18;11:549928. doi: 10.3389/fendo.2020.549928. eCollection 2020.
7
Accurate brain age prediction with lightweight deep neural networks.
Med Image Anal. 2021 Feb;68:101871. doi: 10.1016/j.media.2020.101871. Epub 2020 Oct 19.
8
Supervised machine learning tools: a tutorial for clinicians.
J Neural Eng. 2020 Nov 19;17(6). doi: 10.1088/1741-2552/abbff2.
9
Deep learning identifies morphological determinants of sex differences in the pre-adolescent brain.
Neuroimage. 2020 Dec;223:117293. doi: 10.1016/j.neuroimage.2020.117293. Epub 2020 Aug 22.
10
CerebrA, registration and manual label correction of Mindboggle-101 atlas for MNI-ICBM152 template.
Sci Data. 2020 Jul 15;7(1):237. doi: 10.1038/s41597-020-0557-9.