
Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization.

Affiliations

Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel.

AIVF Ltd., Tel Aviv, 69271, Israel.

Publication Information

Nat Commun. 2024 Aug 27;15(1):7390. doi: 10.1038/s41467-024-51136-9.

DOI: 10.1038/s41467-024-51136-9
PMID: 39191720
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11349992/
Abstract

The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a "black box" lacking human meaningful explanations for the models' decision. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables "human-in-the-loop" interpretation by generating disentangled exaggerated counterfactual explanations. We apply DISCOVER to interpret classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties without previous explicit measurements, and quantitatively determine and empirically verify the classification decision of specific embryo instances. We show that DISCOVER provides human-interpretable understanding of "black box" classification models, proposes hypotheses to decipher underlying biomedical mechanisms, and provides transparency for the classification of individual predictions.
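
The abstract describes the core mechanism, traversing one disentangled latent feature at a time to generate exaggerated counterfactuals, only in prose. The sketch below is a minimal illustration of that generic pattern, not the authors' implementation: the `encoder`, `generator`, and `classifier` placeholders, their sizes, and the linear layers are all assumptions chosen for demonstration, whereas DISCOVER's real networks are deep models trained with the disentanglement objectives described in the paper.

```python
# Minimal sketch (NOT the authors' code) of disentangled counterfactual
# explanation by latent traversal: encode an image, exaggerate ONE latent
# dimension while holding the others fixed, decode, and watch how a frozen
# classifier's score responds. All modules below are illustrative stand-ins.
import torch
import torch.nn as nn

LATENT_DIM = 16
IMG_PIXELS = 64 * 64  # flattened grayscale image, for simplicity

# Placeholder networks; DISCOVER's actual encoder/generator are deep conv nets.
encoder = nn.Linear(IMG_PIXELS, LATENT_DIM)
generator = nn.Sequential(nn.Linear(LATENT_DIM, IMG_PIXELS), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(IMG_PIXELS, 1), nn.Sigmoid())  # frozen "black box"

@torch.no_grad()
def latent_traversal(image, dim, amplitudes):
    """Generate counterfactuals by sweeping a single latent feature `dim`.

    Returns (counterfactual images, classifier scores), so a human reviewer
    can name the visual property that this latent feature controls and see
    whether it drives the classification score.
    """
    z = encoder(image)
    images, scores = [], []
    for a in amplitudes:
        z_cf = z.clone()
        z_cf[:, dim] = a  # exaggerate one disentangled feature only
        x_cf = generator(z_cf)
        images.append(x_cf)
        scores.append(classifier(x_cf).item())
    return images, scores

# Usage: sweep (hypothetical) latent feature 3 across a wide amplitude range.
x = torch.rand(1, IMG_PIXELS)
imgs, scores = latent_traversal(x, dim=3, amplitudes=torch.linspace(-3, 3, 7))
print([round(s, 3) for s in scores])  # a monotone trend suggests this feature drives the decision
```

The point of the design is that when each latent dimension encodes a single classification-driving property, a reviewer can inspect the generated image sequence for one dimension, identify the visual property it manipulates, and read off from the score trend how that property influences the model's decision.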


Figures (PMC11349992):
Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c7b/11349992/ee4d2f7d2b99/41467_2024_51136_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c7b/11349992/dd80c1759392/41467_2024_51136_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c7b/11349992/31aaf5e908cd/41467_2024_51136_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c7b/11349992/b44ac38812c9/41467_2024_51136_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c7b/11349992/68ed24df8f05/41467_2024_51136_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c7b/11349992/2d99276845d3/41467_2024_51136_Fig6_HTML.jpg

Similar Articles

1. Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization.
   Nat Commun. 2024 Aug 27;15(1):7390. doi: 10.1038/s41467-024-51136-9.
2. BlastAssist: a deep learning pipeline to measure interpretable features of human embryos.
   Hum Reprod. 2024 Apr 3;39(4):698-708. doi: 10.1093/humrep/deae024.
3. Orthogonal Subspace Representation for Generative Adversarial Networks.
   IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):4413-4427. doi: 10.1109/TNNLS.2024.3377436. Epub 2025 Feb 28.
4. Generative artificial intelligence to produce high-fidelity blastocyst-stage embryo images.
   Hum Reprod. 2024 Jun 3;39(6):1197-1207. doi: 10.1093/humrep/deae064.
5. Understanding the black-box: towards interpretable and reliable deep learning models.
   PeerJ Comput Sci. 2023 Nov 29;9:e1629. doi: 10.7717/peerj-cs.1629. eCollection 2023.
6. Anonymizing medical case-based explanations through disentanglement.
   Med Image Anal. 2024 Jul;95:103209. doi: 10.1016/j.media.2024.103209. Epub 2024 May 17.
7. Retrieving and reconstructing conceptually similar images from fMRI with latent diffusion models and a neuro-inspired brain decoding model.
   J Neural Eng. 2024 Jun 28;21(4). doi: 10.1088/1741-2552/ad593c.
8. Decontextualized learning for interpretable hierarchical representations of visual patterns.
   Patterns (N Y). 2021 Jan 21;2(2):100193. doi: 10.1016/j.patter.2020.100193. eCollection 2021 Feb 12.
9. Explaining Black-Box Models for Biomedical Text Classification.
   IEEE J Biomed Health Inform. 2021 Aug;25(8):3112-3120. doi: 10.1109/JBHI.2021.3056748. Epub 2021 Aug 5.
10. Performance of a deep learning based neural network in the selection of human blastocysts for implantation.
    Elife. 2020 Sep 15;9:e55301. doi: 10.7554/eLife.55301.

Cited By

1. Representation of high-dimensional cell morphology and morphodynamics in 2D latent space.
   Phys Biol. 2025 Apr 24;22(3). doi: 10.1088/1478-3975/adcd37.
2. Autonomous learning of pathologists' cancer grading rules.
   bioRxiv. 2025 Apr 7:2025.03.18.643999. doi: 10.1101/2025.03.18.643999.
3. Interpretable representation learning for 3D multi-piece intracellular structures using point clouds.

References

1. Visual interpretability of bioimaging deep learning models.
   Nat Methods. 2024 Aug;21(8):1394-1397. doi: 10.1038/s41592-024-02322-6.
2. Segment anything in medical images.
   Nat Commun. 2024 Jan 22;15(1):654. doi: 10.1038/s41467-024-44824-z.
3. Auditing the inference processes of medical-image classifiers by leveraging generative AI and the expertise of physicians.
   bioRxiv. 2024 Aug 13:2024.07.25.605164. doi: 10.1101/2024.07.25.605164.
   Nat Biomed Eng. 2025 Mar;9(3):294-306. doi: 10.1038/s41551-023-01160-9. Epub 2023 Dec 28.
4. Revealing invisible cell phenotypes with conditional generative modeling.
   Nat Commun. 2023 Oct 11;14(1):6386. doi: 10.1038/s41467-023-42124-6.
5. Explaining the black-box smoothly-A counterfactual approach.
   Med Image Anal. 2023 Feb;84:102721. doi: 10.1016/j.media.2022.102721. Epub 2022 Dec 13.
6. An artificial intelligence model correlated with morphological and genetic features of blastocyst quality improves ranking of viable embryos.
   Reprod Biomed Online. 2022 Dec;45(6):1105-1117. doi: 10.1016/j.rbmo.2022.07.018. Epub 2022 Aug 3.
7. Morphology of inner cell mass: a better predictive biomarker of blastocyst viability.
   PeerJ. 2022 Aug 26;10:e13935. doi: 10.7717/peerj.13935. eCollection 2022.
8. Imaging cell biology.
   Nat Cell Biol. 2022 Aug;24(8):1180-1185. doi: 10.1038/s41556-022-00960-6.
9. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
   Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
10. GANterfactual-Counterfactual Explanations for Medical Non-experts Using Generative Adversarial Learning.
    Front Artif Intell. 2022 Apr 8;5:825565. doi: 10.3389/frai.2022.825565. eCollection 2022.