IEEE Trans Med Imaging. 2024 Jan;43(1):427-438. doi: 10.1109/TMI.2023.3309821. Epub 2024 Jan 2.
The human brain is a complex system composed of many interacting components. A well-designed computational model, usually in the form of partial differential equations (PDEs), is vital for understanding the working mechanisms that explain its dynamic and self-organized behaviors. However, model formulation and parameters are often tuned empirically from predefined domain-specific knowledge, which lags behind the emerging paradigm of discovering novel mechanisms from unprecedented amounts of spatiotemporal data. To address this limitation, we sought to link the power of deep neural networks with the physics principles of complex systems, allowing us to design explainable deep models that uncover the mechanistic role of how the human brain (the most sophisticated complex system) maintains controllable functions while interacting with external stimulation. In the spirit of optimal control, we present a unified framework for designing an explainable deep model that describes the dynamic behaviors of the underlying neurobiological processes, allowing us to understand the latent control mechanism at the system level. We have uncovered the pathophysiological mechanism of Alzheimer's disease from the perspective of the controllability of disease progression, where the dissected system-level understanding yields higher prediction accuracy for disease progression and better explainability of disease etiology than conventional (black-box) deep models.
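The optimal-control framing above can be illustrated with a minimal toy sketch: a discrete-time linear system whose state is steered toward equilibrium by a feedback controller. All matrices, dimensions, and the gain computation below are hypothetical illustrations, not the paper's actual model of neurobiological dynamics.

```python
# Toy sketch (assumed illustration): closed-loop control of a linear
# system x_{t+1} = A x_t + B u_t with feedback u_t = -K x_t.
import numpy as np

rng = np.random.default_rng(0)

n = 4                                         # toy state/control dimension
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # open-loop dynamics
B = rng.standard_normal((n, n))                     # control input map

# Hypothetical gain: choose K so the closed loop A - B K equals 0.5 * I,
# a contraction that drives any initial state toward the origin.
K = np.linalg.solve(B, A - 0.5 * np.eye(n))

x = rng.standard_normal(n)
norms = []
for _ in range(20):
    u = -K @ x                 # controller reacts to the current state
    x = A @ x + B @ u          # closed-loop update: x -> 0.5 * x
    norms.append(np.linalg.norm(x))

print(norms[0], norms[-1])     # state norm shrinks geometrically
```

In this caricature, "controllability of disease progression" corresponds to asking whether a gain K exists that can steer the system's trajectory; the paper's framework instead learns the dynamics and control structure from spatiotemporal neuroimaging data.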