Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel.
AIVF Ltd., Tel Aviv, 69271, Israel.
Nat Commun. 2024 Aug 27;15(1):7390. doi: 10.1038/s41467-024-51136-9.
The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a "black box" lacking humanly meaningful explanations for the model's decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables "human-in-the-loop" interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret the classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties that had not previously been explicitly measured, and quantitatively determine and empirically verify the classification decisions of specific embryo instances. We show that DISCOVER provides human-interpretable understanding of "black box" classification models, proposes hypotheses for deciphering underlying biomedical mechanisms, and provides transparency for individual classification predictions.
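The counterfactual-traversal idea behind the abstract can be illustrated with a minimal sketch. The decoder, classifier, and latent dimensions below are hypothetical stand-ins, not the authors' DISCOVER implementation: a single disentangled latent coordinate is exaggerated, and the classifier's response along the traversal attributes the decision to that visual property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (assumptions, not the paper's architecture):
# a linear "decoder" from latent space to image space and a linear
# "classifier" scoring decoded images.
D, K = 64, 8                        # image dimensionality, latent dimensionality
decoder_W = rng.normal(size=(D, K))
clf_w = rng.normal(size=D)

def decode(z):
    # Map a latent vector to image space.
    return decoder_W @ z

def classify(x):
    # Sigmoid score of a linear classifier on the decoded image.
    return 1.0 / (1.0 + np.exp(-clf_w @ x))

def counterfactual_traversal(z, dim, alphas):
    """Exaggerate one latent dimension and record how the classifier's
    score responds; the score change attributes the decision to the
    visual property that dimension encodes."""
    scores = []
    for a in alphas:
        z_cf = z.copy()
        z_cf[dim] += a
        scores.append(classify(decode(z_cf)))
    return scores

# Traverse every latent dimension for one instance.
z = rng.normal(size=K)
alphas = np.linspace(-3.0, 3.0, 7)
scores = {d: counterfactual_traversal(z, d, alphas) for d in range(K)}

# The dimension whose traversal moves the score most is the one
# driving this instance's classification.
driving_dim = max(scores, key=lambda d: max(scores[d]) - min(scores[d]))
```

In a disentangled representation each such traversal changes one visual property at a time, which is what makes the resulting counterfactuals human-inspectable.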