

Artificial intelligence diagnosis of intrauterine adhesion by 3D ultrasound imaging: a prospective study.

Authors

Zhao Xingping, Liu Minqiang, Wu Susu, Zhang Baiyun, Burjoo Arvind, Yang Yimin, Xu Dabao

Affiliations

Department of Gynecology, Third Xiangya Hospital of Central South University, Changsha, China.

Software Solution Unit, Hunan KMS Medical Technology, Changsha, China.

Publication Information

Quant Imaging Med Surg. 2023 Apr 1;13(4):2314-2327. doi: 10.21037/qims-22-965. Epub 2023 Mar 22.

Abstract

BACKGROUND

Intrauterine adhesion (IUA) affects a very large number of patients. Improving the classification of three-dimensional transvaginal ultrasound (3D-TVUS) images as IUA or non-IUA remains a clinical challenge and is needed to avoid inappropriate surgery. Our study aimed to evaluate deep learning as a method for classifying 3D-TVUS IUA and non-IUA images acquired with panoramic technology.

METHODS

After the inclusion/exclusion criteria were applied, a total of 4,401 patients were selected for this study: 2,803 IUA patients and 1,598 non-IUA patients. IUA was confirmed by hysteroscopy, and each patient underwent one 3D-TVUS examination. Four well-known convolutional neural network (CNN) architectures were selected to classify the IUA images: Visual Geometry Group 16 (VGG16), InceptionV3, ResNet50, and ResNet101. We used these CNNs pretrained on ImageNet, implemented in both TensorFlow and PyTorch. All 3D-TVUS images were normalized and pooled, and the data set was split into a training set, a validation set, and a test set. The performance of our classification model was evaluated using sensitivity, precision, F1-score, and accuracy, computed from the true-positive (TP), false-positive (FP), true-negative (TN), and false-negative (FN) counts.
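The abstract does not include code; purely as a minimal sketch of the setup it describes (an ImageNet-pretrained CNN adapted for the two-class IUA / non-IUA task and evaluated with the metrics above), a PyTorch version might look like the following. The optimizer, learning rate, and the helper function classification_metrics are illustrative assumptions, not the authors' settings.

# Minimal sketch (not the authors' code): ImageNet-pretrained InceptionV3
# adapted for binary IUA / non-IUA classification, plus the metrics the
# abstract lists, computed from confusion-matrix counts.
import torch
import torch.nn as nn
from torchvision import models

# Load InceptionV3 with ImageNet weights and replace the classifier heads so the
# network outputs two classes (IUA vs. non-IUA). Inputs are normalized 299x299 RGB tensors.
model = models.inception_v3(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)  # auxiliary head, used only during training

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative hyperparameters, not from the paper


def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, precision, F1-score, and accuracy from TP/FP/TN/FN counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"sensitivity": sensitivity, "precision": precision, "f1": f1, "accuracy": accuracy}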

RESULTS

The overall performance of VGG16, InceptionV3, ResNet50, and ResNet101 was better with the PyTorch implementations than with TensorFlow. With PyTorch, the best CNN model was InceptionV3, achieving 94.2% sensitivity, 99.4% precision, a 96.8% F1-score, and 97.3% accuracy. The area under the curve (AUC) values of VGG16, InceptionV3, ResNet50, and ResNet101 were 0.959, 0.999, 0.997, and 0.999, respectively. The PyTorch models also successfully transferred information from the source domain to the target domain, allowing us to use another center's data as an external test set. No overfitting that could have adversely affected classification accuracy was observed. Finally, we established a webpage for diagnosing IUA from 3D-TVUS images.
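For reference, AUC values such as those reported above are typically computed from the model's predicted probabilities on a held-out or external test set; a hedged sketch using scikit-learn is shown below, where the arrays y_true and y_prob are placeholder values, not data from the study.

# Hedged sketch: ROC AUC for the binary IUA / non-IUA task from predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0])                  # ground-truth labels (1 = IUA, 0 = non-IUA), placeholders
y_prob = np.array([0.92, 0.08, 0.85, 0.67, 0.21])   # predicted probability of IUA, placeholders

auc = roc_auc_score(y_true, y_prob)
print(f"AUC = {auc:.3f}")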

CONCLUSIONS

Deep learning can assist in the binary classification of 3D-TVUS images to diagnose IUA. This study lays the foundation for future research into the integration of deep learning and blockchain technology.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ec60/10102785/dc9dbda13d23/qims-13-04-2314-f1.jpg
