Liu Tao, Miao Kuo, Tan Gaoqiang, Bu Hanqi, Xu Mingda, Zhang Qiming, Liu Qin, Dong Xiaoqiu
Department of Medical Ultrasound, The Fourth Affiliated Hospital of Harbin Medical University, No. 37 Yiyuan Street, Nangang District, Harbin, 150001, Heilongjiang, China.
School of Basic Medical Sciences, Xiangnan College, Chenzhou, China.
Arch Gynecol Obstet. 2024 Dec;310(6):3111-3120. doi: 10.1007/s00404-024-07837-z. Epub 2024 Nov 23.
The study aimed to develop a deep convolutional neural network (DCNN) model based on ConvNeXt-Tiny to distinguish classic benign lesions (CBL) from other lesions (OL) within the Ovarian-Adnexal Reporting and Data System (O-RADS), thereby enhancing the system's utility for novice ultrasonographers.
Two sets of sonographic images of pathologically confirmed adnexal lesions were retrospectively collected: a development dataset (DD) and an independent test dataset (ITD). The ConvNeXt-Tiny model, optimized through transfer learning, was trained on the DD both on the original images directly and on images automatically segmented by a U-Net model. The models from both training paradigms were validated on the ITD for sensitivity, specificity, accuracy, and area under the curve (AUC). Two novice ultrasonographers performed O-RADS classification with and without the model's assistance to evaluate its application effectiveness.
The ConvNeXt-Tiny model trained on original images achieved AUCs of 0.978 on the DD and 0.955 on the ITD, while the model trained on U-Net-segmented images achieved 0.967 on the DD and 0.923 on the ITD, with no significant difference between the two. When lesion malignancy was assessed using O-RADS categories 4 and 5, the diagnostic performance of the two novice ultrasonographers, the senior ultrasonographer, and the model-assisted classifications showed no significant differences, except for one novice's lower accuracy. Model assistance reduced the novices' classification time by 62 and 64 minutes, and their kappa values for agreement with the senior ultrasonographer's classifications rose from 0.776 and 0.761 to 0.914 and 0.903, respectively.
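The agreement statistic reported above is Cohen's kappa, which corrects observed agreement between two raters for the agreement expected by chance. A minimal self-contained implementation (the raters and ratings below are illustrative, not the study's data):

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the marginal frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed proportion of items on which the two raters agree.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two raters label 20 cases (0 = benign, 1 = other),
# agreeing on 16 of 20 with balanced marginals.
a = [0] * 10 + [1] * 10
b = [0] * 8 + [1] * 2 + [1] * 8 + [0] * 2
print(round(cohen_kappa(a, b), 3))  # 0.6
```

On the study's scale, kappa values around 0.76 indicate substantial agreement and values above 0.9 near-perfect agreement, which is the improvement the model assistance produced.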
The ConvNeXt-Tiny model demonstrated excellent and stable performance in distinguishing CBL from OL within O-RADS. The diagnostic performance of novice ultrasonographers using O-RADS was essentially equivalent to that of a senior ultrasonographer, and the model's assistance improved their classification efficiency and their consistency with the senior ultrasonographer's results.