Xia Xianwu, Gong Jing, Hao Wen, Yang Ting, Lin Yeqing, Wang Shengping, Peng Weijun
Department of Radiology, Municipal Hospital Affiliated to Medical School of Taizhou University, Taizhou, China.
Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China.
Front Oncol. 2020 Mar 31;10:418. doi: 10.3389/fonc.2020.00418. eCollection 2020.
For stage-I lung adenocarcinoma, the 5-year disease-free survival (DFS) rate of non-invasive adenocarcinoma (non-IA) differs from that of invasive adenocarcinoma (IA). This study aims to develop CT image-based artificial intelligence (AI) schemes to classify nodules as non-IA or IA and to fuse deep learning (DL) and radiomics features to improve classification performance. We collected 373 surgically and pathologically confirmed ground-glass nodules (GGNs) from 323 patients at two centers, comprising 205 non-IA (107 adenocarcinoma in situ and 98 minimally invasive adenocarcinoma) and 168 IA. We first proposed a recurrent residual convolutional neural network based on U-Net to segment the GGNs. We then built two schemes to classify non-IA and IA, namely a DL scheme and a radiomics scheme. Third, to improve classification performance, we fused the prediction scores of the two schemes using an information fusion method. Finally, we conducted an observer study on an independent test dataset to compare our scheme with two radiologists. Compared with the DL scheme and the radiomics scheme (area under the receiver operating characteristic curve (AUC): 0.83 ± 0.05 and 0.87 ± 0.04, respectively), the new fusion scheme (AUC: 0.90 ± 0.03) significantly improved risk classification performance (P < 0.05). In comparison with the two radiologists, the new model yielded a higher accuracy of 80.3%; the kappa value for inter-radiologist agreement was 0.6. These results demonstrate that applying AI methods is an effective way to improve the invasiveness risk prediction of GGNs. In the future, fusing DL and radiomics features may offer a way to handle classification tasks with limited datasets in medical imaging.