Salehi Mohammad, Ardekani Mahdieh Afkhami, Taramsari Alireza Bashari, Ghaffari Hamed, Haghparast Mohammad
Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran.
Clinical Research Development Center, Shahid Mohammadi Hospital, Hormozgan University of Medical Sciences, Bandar-Abbas, Iran.
Pol J Radiol. 2022 Aug 26;87:e478-e486. doi: 10.5114/pjr.2022.119027. eCollection 2022.
The novel coronavirus disease 2019 (COVID-19), which emerged in late December 2019 and spread worldwide, constitutes a global health crisis. Chest computed tomography (CT) has played a pivotal role in providing clinicians with useful information for detecting COVID-19. However, segmenting COVID-19-infected regions from chest CT images is challenging, so an efficient tool for automated segmentation of COVID-19 lesions on chest CT is desirable. We therefore aimed to propose 2D deep-learning algorithms that automatically segment COVID-19-infected regions from chest CT slices and to evaluate their performance.
Herein, three well-known deep-learning networks (U-Net, U-Net++, and Res-Unet) were trained from scratch to automatically segment COVID-19 lesions from chest CT images. The dataset consisted of 20 labelled COVID-19 chest CT volumes, from which a total of 2112 images were used. The data were split into 80% for training and validation and 20% for testing the proposed models. Segmentation performance was assessed using the Dice similarity coefficient, average symmetric surface distance (ASSD), mean absolute error (MAE), sensitivity, specificity, and precision.
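For reference, the overlap-based metrics listed above can be computed from the binary prediction and ground-truth masks as in the minimal sketch below. This is an illustrative implementation using NumPy, not the authors' code; the function name and the epsilon guard are assumptions. ASSD additionally requires extracting lesion surfaces and a distance transform (e.g. via scipy.ndimage) and is omitted here.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Illustrative computation of Dice, sensitivity, specificity, and
    precision for binary lesion masks (predicted vs. ground truth).

    pred, gt : NumPy arrays of the same shape, interpretable as booleans
    (True/1 = COVID-19 lesion, False/0 = background).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    tp = np.logical_and(pred, gt).sum()      # lesion voxels correctly detected
    fp = np.logical_and(pred, ~gt).sum()     # background predicted as lesion
    fn = np.logical_and(~pred, gt).sum()     # lesion voxels missed
    tn = np.logical_and(~pred, ~gt).sum()    # background correctly rejected

    eps = 1e-8  # assumed guard against division by zero for empty masks
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    return dice, sensitivity, specificity, precision
```

Under this convention, a mean Dice of 85.0% corresponds to an average per-image dice value of about 0.85 over the test split.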
All proposed models achieved good performance for COVID-19 lesion segmentation. The U-Net and U-Net++ models provided better results than Res-Unet, with a mean Dice value of 85.0%. Of the three models, U-Net achieved the highest segmentation performance, with 86.0% sensitivity and an ASSD of 2.22 mm. The U-Net model improved on the Res-Unet model by 1% in Dice, 2% in sensitivity, and 0.66 mm in ASSD. Compared with Res-Unet, U-Net++ achieved improvements of 1% in Dice, 2% in sensitivity, 0.1 mm in ASSD, and 0.23 mm in MAE.
Our data indicate that the proposed models achieve an average Dice value greater than 84.0%. Two-dimensional deep-learning models were able to accurately segment COVID-19 lesions from chest CT images, which can assist radiologists in faster screening and quantification of lesion regions for further treatment. Nevertheless, further studies are required to evaluate the clinical performance and robustness of the proposed models for COVID-19 semantic segmentation.