Xue Zhiyun, Long Rodney, Jaeger Stefan, Folio Les, Thoma George R, Antani Sameer
Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul;2018:5890-5893. doi: 10.1109/EMBC.2018.8513560.
In this paper, we aim to extract the aortic knuckle (AK) contour in chest radiographs, an anatomical structure rarely addressed in the literature. Since the AK structure is small and thin, simply adopting the deep network methods that are successful for large-organ segmentation is inadequate for achieving good pixel-level accuracy and resolving local ambiguities. To address this challenge, we propose a new coarse-to-fine segmentation approach whose two stages focus on global and local information context, respectively. Two convolutional networks are used: Faster R-CNN for the coarse segmentation and U-Net for the fine segmentation. Our evaluation uses the publicly available JSRT dataset, and the results are promising. Besides presenting these results, we analyze issues such as the imprecision of manual contour marking and the automatic generation of the coarse-segmentation ground-truth masks used for deep network training. Our approach is general and can be applied to extract other curve-like objects of interest.
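To make the coarse-to-fine idea concrete, the following is a minimal sketch of such a two-stage pipeline, not the authors' implementation. It assumes a torchvision Faster R-CNN detector and a hypothetical pre-trained `UNet` module supplied by the caller; the score threshold and ROI padding are illustrative choices.

```python
import torch
import torchvision

def coarse_to_fine_segmentation(image, detector, unet, score_thresh=0.5, pad=16):
    """Sketch of a coarse-to-fine AK contour extraction pipeline.

    image    : float tensor (3, H, W) in [0, 1]
    detector : a trained torchvision-style detector (e.g. Faster R-CNN)
    unet     : a trained U-Net returning single-channel mask logits
    """
    # Coarse stage: localize the aortic-knuckle region with the detector
    # (global context over the whole radiograph).
    detector.eval()
    with torch.no_grad():
        det = detector([image])[0]
    keep = det["scores"] >= score_thresh
    if keep.sum() == 0:
        return None  # no candidate region found
    x1, y1, x2, y2 = det["boxes"][keep][0].round().int().tolist()

    # Expand the box slightly so the fine stage sees local context around the contour.
    _, H, W = image.shape
    x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
    x2, y2 = min(W, x2 + pad), min(H, y2 + pad)
    roi = image[:, y1:y2, x1:x2]

    # Fine stage: pixel-level segmentation of the thin AK contour inside the ROI
    # (local context only).
    unet.eval()
    with torch.no_grad():
        mask_logits = unet(roi.unsqueeze(0))           # (1, 1, h, w)
    mask = torch.sigmoid(mask_logits)[0, 0] > 0.5

    # Paste the local mask back into a full-size canvas.
    full_mask = torch.zeros(H, W, dtype=torch.bool)
    full_mask[y1:y2, x1:x2] = mask
    return full_mask

# Example wiring (untrained models, for illustration only):
# detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
# full_mask = coarse_to_fine_segmentation(image, detector, unet)
```

The split mirrors the abstract's motivation: the detector only needs to be roughly right at the image scale, while the U-Net operates on a small crop where the thin AK contour occupies a meaningful fraction of the pixels.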