A Study on Automatic O-RADS Classification of Sonograms of Ovarian Adnexal Lesions Based on Deep Convolutional Neural Networks.

Authors

Liu Tao, Miao Kuo, Tan Gaoqiang, Bu Hanqi, Shao Xiaohui, Wang Siming, Dong Xiaoqiu

Affiliations

The Department of Ultrasound Medicine, Harbin Medical University Fourth Affiliated Hospital, Harbin, Heilongjiang, China.

Publication Information

Ultrasound Med Biol. 2025 Feb;51(2):387-395. doi: 10.1016/j.ultrasmedbio.2024.11.009. Epub 2024 Nov 26.

Abstract

OBJECTIVE

This study explored a new method for automatic O-RADS classification of sonograms based on a deep convolutional neural network (DCNN).

METHODS

A development dataset (DD) of 2,455 2D grayscale sonograms of 870 ovarian adnexal lesions and an intertemporal validation dataset (IVD) of 426 sonograms of 280 lesions were collected and classified according to O-RADS v2022 (categories 2-5) by three senior sonographers. Classification results were accepted for training only when a two-tailed z-test confirmed that the malignancy rate of each category was consistent with the O-RADS v2022 reference rates, indicating diagnostic performance comparable to that of a previous study; otherwise, the lesions were reclassified by two different sonographers. The DD was used to develop three DCNN models (ResNet34, DenseNet121, and ConvNeXt-Tiny) using transfer learning. Model performance was assessed by accuracy, precision, F1 score, and other metrics. The optimal model was selected, validated over time using the IVD, and used to analyze whether the model's assistance improved O-RADS classification efficiency for three sonographers with different years of experience.
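The abstract gives no implementation details; purely as illustration, a transfer-learning setup like the one described might look as follows in PyTorch/torchvision. The ImageNet weights, the 4-class head (one output per O-RADS category 2-5), and the optimizer settings are assumptions, not the authors' published configuration:

    # Hypothetical transfer-learning setup for one of the three DCNNs
    # (ConvNeXt-Tiny shown; resnet34 and densenet121 follow the same pattern).
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 4  # O-RADS categories 2, 3, 4, 5

    def build_convnext_tiny(num_classes: int = NUM_CLASSES) -> nn.Module:
        # Start from ImageNet-pretrained weights (the transfer-learning step).
        model = models.convnext_tiny(
            weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
        # Swap the final linear layer for a head sized to our class count.
        in_features = model.classifier[2].in_features
        model.classifier[2] = nn.Linear(in_features, num_classes)
        return model

    model = build_convnext_tiny()
    criterion = nn.CrossEntropyLoss()                           # multiclass loss
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed settings

Accuracy, precision, and F1 score on held-out sonograms can then be computed with standard tooling such as scikit-learn's classification_report.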

RESULTS

The proportion of malignant tumors in each O-RADS risk category was verified in the DD and IVD using a two-tailed z-test. Malignant lesions (O-RADS categories 4 and 5) were diagnosed with sensitivities of 0.949 (DD) and 0.962 (IVD) and specificities of 0.892 (DD) and 0.842 (IVD). ResNet34, DenseNet121, and ConvNeXt-Tiny achieved overall accuracies of 0.737, 0.752, and 0.878, respectively, for sonogram prediction in the DD. The ConvNeXt-Tiny model's accuracy in the IVD was 0.859, with no significant difference between the test sets. Model assistance significantly reduced O-RADS classification time for all three sonographers (Cohen's d = 5.75).
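As a rough guide to the two statistics above, a minimal sketch follows. The counts and timings are invented placeholders (the paper reports only the resulting figures), and the use of statsmodels for the one-proportion z-test is an assumption:

    # Two-tailed one-proportion z-test: is the observed malignancy rate in a
    # category consistent with a published O-RADS reference rate?
    from statsmodels.stats.proportion import proportions_ztest
    import numpy as np

    n_malignant, n_lesions = 52, 110  # hypothetical counts for one category
    p_reference = 0.45                # placeholder reference malignancy rate
    stat, p_value = proportions_ztest(count=n_malignant, nobs=n_lesions,
                                      value=p_reference)  # two-sided by default
    consistent = p_value >= 0.05      # consistent -> labels kept for training

    # Paired Cohen's d for the reduction in classification time:
    # d = mean(unaided - aided) / sd(unaided - aided). Times are placeholders.
    time_unaided = np.array([42.0, 55.0, 48.0])  # seconds per case
    time_aided = np.array([18.0, 22.0, 20.0])
    diff = time_unaided - time_aided
    cohens_d = diff.mean() / diff.std(ddof=1)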

CONCLUSION

ConvNeXt-Tiny showed robust and stable performance in classifying O-RADS categories 2-5 and improved sonographers' classification efficiency.

