Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea.
Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, 03063, South Korea.
Sci Rep. 2023 Jan 16;13(1):791. doi: 10.1038/s41598-023-27815-w.
Automated multi-organ segmentation plays an essential role in the computer-aided diagnosis (CAD) of chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging owing to several indistinct structures, variations in anatomical shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning framework for segmenting anatomical structures in chest radiographs that uses a dual encoder-decoder convolutional neural network (CNN). The first network in the dual encoder-decoder structure employs a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder's output is fed into a squeeze-and-excitation (SE) block to boost the network's representational power by performing dynamic channel-wise feature recalibration. The recalibrated features are passed to the first decoder to generate a mask. We integrate the generated mask with the input image and pass the result through a second encoder-decoder network equipped with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method on both multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that the proposed technique outperforms existing multi-class and single-class segmentation methods.
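The channel-wise recalibration performed by the SE block can be sketched as follows. This is a minimal, framework-free illustration in plain Python (the paper's implementation would use a deep-learning framework with trained weights); the weight matrices `w1`/`w2` and the reduction ratio are illustrative assumptions, not the authors' parameters.

```python
import math

def se_recalibrate(feature_maps, w1, w2):
    """Squeeze-and-excitation channel recalibration (illustrative sketch).

    feature_maps: list of C channels, each a 2-D list (H x W).
    w1: C x (C // r) weights of the squeeze FC layer.
    w2: (C // r) x C weights of the excitation FC layer.
    """
    # Squeeze: global average pooling reduces each channel to one scalar.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid produces one gate per channel.
    hidden = [max(0.0, sum(z[c] * w1[c][j] for c in range(len(z))))
              for j in range(len(w1[0]))]
    scores = [sum(hidden[j] * w2[j][c] for j in range(len(hidden)))
              for c in range(len(z))]
    gates = [1.0 / (1.0 + math.exp(-s)) for s in scores]
    # Scale: each input channel is multiplied by its learned gate,
    # emphasizing informative channels and suppressing weak ones.
    return [[[v * gates[c] for v in row] for row in feature_maps[c]]
            for c in range(len(feature_maps))]
```

With zero excitation weights every gate is the sigmoid of zero (0.5), so each channel is simply halved; with trained weights the gates differ per channel, which is the dynamic recalibration the abstract refers to.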