Riddhish Bhalodia, Shireen Elhabian, Ladislav Kavan, Ross Whitaker
Scientific Computing and Imaging Institute, 72 Central Campus Dr, University of Utah, Salt Lake City, Utah-84112, USA.
Scientific Computing and Imaging Institute, 72 Central Campus Dr, University of Utah, Salt Lake City, Utah-84112, USA; School of Computing, 50 Central Campus Dr, University of Utah, Salt Lake City, Utah-84112, USA.
Med Image Anal. 2021 Oct;73:102157. doi: 10.1016/j.media.2021.102157. Epub 2021 Jul 9.
In current biological and medical research, statistical shape modeling (SSM) provides an essential framework for the characterization of anatomy/morphology. Such analysis is often driven by the identification of a relatively small number of geometrically consistent features found across the samples of a population. These features can subsequently provide information about the population shape variation. Dense correspondence models are easy to compute with and, when followed by dimensionality reduction, yield an interpretable low-dimensional shape descriptor. However, automatic methods for obtaining such correspondences usually require image segmentation followed by significant preprocessing, which is taxing in terms of both computation and human resources. In many cases, the segmentation and subsequent processing require manual guidance and anatomy-specific domain expertise. This paper proposes a self-supervised deep learning approach for discovering landmarks from images that can be used directly as a shape descriptor for subsequent analysis. We use landmark-driven image registration as the primary task to force the neural network to discover landmarks that register the images well. We also propose a regularization term that allows for robust optimization of the neural network and ensures that the landmarks uniformly span the image domain. The proposed method circumvents segmentation and preprocessing and directly produces a usable shape descriptor from just 2D or 3D images. In addition, we propose two variants of the training loss function that allow prior shape information to be integrated into the model. We apply this framework to several 2D and 3D datasets to obtain their shape descriptors, and we analyze the efficacy of these descriptors in capturing shape information by performing different shape-driven applications depending on the data, ranging from shape clustering to severity prediction to outcome diagnosis.
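The abstract describes a composite objective: a landmark-driven registration term (landmarks should register image pairs well) plus a regularizer that keeps the discovered landmarks spread across the image domain. The sketch below is a minimal NumPy toy of that idea, not the paper's actual method: the registration term is replaced by a least-squares affine fit between corresponding landmark sets (the paper registers the images themselves via a neural network), and the coverage regularizer and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def registration_loss(src_pts, tgt_pts):
    """Proxy registration term: residual of the best least-squares affine
    map taking source landmarks onto target landmarks. Small residual means
    the landmarks are geometrically consistent across the two images."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])          # homogeneous coords (n, d+1)
    coef, *_ = np.linalg.lstsq(A, tgt_pts, rcond=None) # fit affine map
    resid = A @ coef - tgt_pts
    return float(np.mean(resid ** 2))

def coverage_regularizer(pts):
    """Toy stand-in for the uniform-coverage term: penalize clustered
    landmarks via mean inverse pairwise distance (larger when points bunch
    together, smaller when they span the domain)."""
    d = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((d ** 2).sum(-1) + 1e-12)
    n = pts.shape[0]
    off_diag = dist[~np.eye(n, dtype=bool)]
    return float(np.mean(1.0 / (off_diag + 1e-3)))

def total_loss(src_pts, tgt_pts, lam=0.01):
    """Composite objective: register well AND spread out (lam is illustrative)."""
    return registration_loss(src_pts, tgt_pts) + lam * coverage_regularizer(src_pts)
```

In the paper this objective drives a network that predicts landmarks from raw 2D/3D images end to end; here the landmarks are given explicitly just to show how the two terms trade off.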