Peng Qi, Chen Xingcai, Zhang Chao, Li Wenyan, Liu Jingjing, Shi Tingxin, Wu Yi, Feng Hua, Nian Yongjian, Hu Rong
Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Third Military Medical University, Chongqing, China.
Department of Neurosurgery, First Affiliated Hospital (Southwest Hospital), Army Medical University, Third Military Medical University, Chongqing, China.
Front Neurosci. 2022 Oct 3;16:965680. doi: 10.3389/fnins.2022.965680. eCollection 2022.
This study aims to enhance the accuracy and practicability of CT image segmentation and volume measurement of intracerebral hemorrhage (ICH) using deep learning. A dataset comprising the brain CT images and clinical data of 1,027 patients with spontaneous ICH treated from January 2010 to December 2020 was retrospectively analyzed, and a deep segmentation network (AttFocusNet) integrating the focus structure and the attention gate (AG) mechanism is proposed to enable automatic, accurate CT image segmentation and volume measurement of ICH. On the internal validation set, AttFocusNet achieved a Dice coefficient of 0.908, an intersection-over-union (IoU) of 0.874, a sensitivity of 0.913, a positive predictive value (PPV) of 0.957, and a 95% Hausdorff distance (HD95) of 5.960 mm. The intraclass correlation coefficient (ICC) of the ICH volume measurement between AttFocusNet and the ground truth was 0.997. The average processing time per case was 5.6 s for AttFocusNet, 47.7 s for the Coniglobus formula, and 170.1 s for manual segmentation. On the two external validation sets, AttFocusNet achieved Dice coefficients of 0.889 and 0.911, IoUs of 0.800 and 0.836, sensitivities of 0.817 and 0.849, PPVs of 0.976 and 0.981, and HD95 values of 5.331 and 4.220 mm, respectively. The ICCs of the ICH volume measurement between AttFocusNet and the ground truth were 0.939 and 0.956, respectively. The proposed segmentation network AttFocusNet significantly outperforms the Coniglobus formula in ICH segmentation and volume measurement, yielding results closer to the true ICH volume while substantially reducing clinical workload.
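The overlap metrics reported in the abstract (Dice, IoU, sensitivity, PPV) and the voxel-based volume measurement can be computed from binary segmentation masks. The sketch below is an illustrative implementation of these standard definitions, not the paper's code; the function names and the voxel-spacing example are assumptions for demonstration.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, IoU, sensitivity, and PPV for two binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # true positives
    fp = np.logical_and(pred, ~gt).sum()    # false positives
    fn = np.logical_and(~pred, gt).sum()    # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)      # Dice coefficient
    iou = tp / (tp + fp + fn)               # intersection-over-union
    sens = tp / (tp + fn)                   # sensitivity (recall)
    ppv = tp / (tp + fp)                    # positive predictive value
    return dice, iou, sens, ppv

def hematoma_volume_ml(mask, spacing_mm):
    """Hemorrhage volume in mL: voxel count x voxel volume (mm^3 -> mL)."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0
```

For example, with a CT voxel spacing of 0.5 x 0.5 x 5 mm (a hypothetical value), each segmented voxel contributes 1.25 mm^3, so a mask of 20,000 hemorrhage voxels corresponds to 25 mL.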