IEEE J Biomed Health Inform. 2021 Mar;25(3):806-817. doi: 10.1109/JBHI.2020.3002582. Epub 2021 Mar 5.
In the past decade, anatomical context features have been widely used for cephalometric landmark detection, and significant progress is still being made. However, most existing methods rely on handcrafted graphical models rather than incorporating anatomical context during training, leading to suboptimal performance. In this study, we present a novel framework that allows a Convolutional Neural Network (CNN) to learn richer anatomical context features during training. Our key idea comprises two components: the Local Feature Perturbator (LFP) and the Anatomical Context loss (AC loss). During training, the LFP perturbs a cephalometric image based on a prior anatomical distribution, forcing the CNN to attend to relevant features more globally. The AC loss then helps the CNN learn anatomical context from the spatial relationships between landmarks. The experimental results demonstrate that the proposed framework makes the CNN learn a richer anatomical representation, leading to improved performance. In performance comparisons, the proposed scheme outperforms state-of-the-art methods on the ISBI 2015 Cephalometric X-ray Image Analysis Challenge.
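The abstract does not give the exact form of the LFP or the AC loss, but the two ideas can be illustrated with a minimal NumPy sketch under stated assumptions: here LFP is assumed to occlude an image patch sampled from a Gaussian prior over a landmark's location, and the AC loss is assumed to penalize errors in pairwise displacement vectors between landmarks. Both function names and all parameters are hypothetical, not the authors' implementation.

```python
import numpy as np


def local_feature_perturbator(image, landmark_means, landmark_stds,
                              patch=15, rng=None):
    """Hypothetical LFP sketch: occlude a patch sampled from a Gaussian
    prior over a randomly chosen landmark, so the network cannot rely
    solely on local appearance around that landmark."""
    rng = np.random.default_rng() if rng is None else rng
    img = image.copy()
    i = rng.integers(len(landmark_means))          # pick one landmark
    # Sample a perturbation center from that landmark's prior distribution.
    cy, cx = rng.normal(landmark_means[i], landmark_stds[i]).astype(int)
    h, w = img.shape[:2]
    y0, y1 = max(cy - patch, 0), min(cy + patch, h)
    x0, x1 = max(cx - patch, 0), min(cx + patch, w)
    img[y0:y1, x0:x1] = img.mean()                 # mask with mean intensity
    return img


def anatomical_context_loss(pred, gt):
    """Hypothetical AC-style loss sketch: compare pairwise displacement
    vectors between predicted and ground-truth landmarks (N, 2)."""
    pred_disp = pred[:, None, :] - pred[None, :, :]  # (N, N, 2) displacements
    gt_disp = gt[:, None, :] - gt[None, :, :]
    return float(np.mean(np.abs(pred_disp - gt_disp)))
```

Note that a pairwise-displacement loss of this form is translation-invariant, so in practice it would complement, not replace, a positional (e.g. heatmap regression) loss.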