Yang Yingjian, Zheng Jie, Guo Peng, Wu Tianqi, Gao Qi, Guo Yingwei, Chen Ziran, Liu Chengcheng, Ouyang Zhanglei, Chen Huai, Kang Yan
Department of Radiological Research and Development, Shenzhen Lanmage Medical Technology Co., Ltd., Shenzhen, Guangdong, China.
Neusoft Medical System Co., Ltd., Shenyang, Liaoning, China.
Front Physiol. 2024 Aug 8;15:1416912. doi: 10.3389/fphys.2024.1416912. eCollection 2024.
The cardiothoracic ratio (CTR), measured on postero-anterior chest X-ray (P-A CXR) images, is one of the most commonly used cardiac measurements and an indicator for the initial evaluation of cardiac diseases. However, the heart is not readily observable on P-A CXR images compared with the lung fields. Radiologists therefore often locate the CTR's right and left heart border points manually, using the borders of the left and right lung fields adjacent to the heart. Such manual CTR measurement based on P-A CXR images requires experienced radiologists and is time-consuming and laborious.
Based on the above, this article proposes a novel, fully automatic CTR calculation method based on lung fields extracted from P-A CXR images using convolutional neural networks (CNNs), overcoming the limitations of heart segmentation and avoiding its errors. First, lung field mask images are extracted from the P-A CXR images by pre-trained CNNs. Second, a novel method is proposed for localizing the heart's right and left border points from the two-dimensional projection morphology of the lung field mask images using graphics techniques.
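The abstract does not specify the projection-morphology algorithm, but the underlying idea, inferring the heart borders from the inner edges of the segmented lung fields and the thoracic width from their outer edges, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `estimate_ctr`, the binary-mask input, and the heuristic of taking the widest inter-lung gap in the lower half of the image as the cardiac width are all hypothetical, not the authors' method.

```python
import numpy as np

def estimate_ctr(lung_mask: np.ndarray) -> float:
    """Hypothetical sketch: estimate the CTR from a binary lung-field mask.

    Assumes (for illustration only) that the cardiac silhouette fills the
    gap between the inner lung borders, and that the thoracic width spans
    the outermost lung borders.
    """
    h, w = lung_mask.shape
    mid = w // 2

    # Thoracic width: outermost columns containing any lung pixel.
    cols = np.where(lung_mask.any(axis=0))[0]
    thoracic_width = cols[-1] - cols[0]

    # Cardiac width: per row in the lower half (where the heart sits),
    # measure the gap between the left and right lung fields and keep
    # the widest such gap.
    best_gap = 0
    for y in range(h // 2, h):
        row = np.where(lung_mask[y] > 0)[0]
        left = row[row < mid]
        right = row[row >= mid]
        if left.size and right.size:
            best_gap = max(best_gap, right[0] - left[-1])

    return best_gap / thoracic_width
```

On a synthetic mask with two rectangular "lungs", the function returns the ratio of the inter-lung gap to the outer span, which is the basic geometric quantity the CTR captures.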
The results show that the mean distance errors in the x-axis direction of the CTR's four key points on the test sets T1 (21 × 512 × 512 static P-A CXR images) and T2 (13 × 512 × 512 dynamic P-A CXR images), based on the various pre-trained CNNs, are 4.1161 and 3.2116 pixels, respectively. In addition, the mean CTR errors on the test sets T1 and T2 based on the four proposed models are 0.0208 and 0.0180, respectively.
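The two reported metrics admit a straightforward reading: a mean absolute x-coordinate error over the four key points, and a mean absolute difference between predicted and reference CTR values. The helper names below are hypothetical; this is only a sketch of how such metrics are conventionally computed, not the paper's evaluation code.

```python
def mean_x_error(pred_pts, true_pts):
    """Mean absolute x-axis distance (pixels) over corresponding key points.

    pred_pts / true_pts: sequences of (x, y) pairs, e.g. the four CTR
    key points (left/right heart border, left/right thoracic border).
    """
    return sum(abs(p[0] - t[0]) for p, t in zip(pred_pts, true_pts)) / len(pred_pts)

def mean_ctr_error(pred_ctrs, true_ctrs):
    """Mean absolute error between predicted and reference CTR values."""
    return sum(abs(p - t) for p, t in zip(pred_ctrs, true_ctrs)) / len(pred_ctrs)
```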
Our proposed model matches the CTR-calculation performance of the previous CardioNet model while avoiding heart segmentation and requiring less time. Our proposed method is therefore practical and feasible, and may become an effective tool for the initial evaluation of cardiac diseases.