

Improving the Segmentation Accuracy of Ovarian-Tumor Ultrasound Images Using Image Inpainting

Authors

Chen Lijiang, Qiao Changkun, Wu Meijing, Cai Linghan, Yin Cong, Yang Mukun, Sang Xiubo, Bai Wenpei

Affiliations

School of Electronic and Information Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing 100191, China.

Department of Obstetrics and Gynecology, Beijing Shijitan Hospital, Capital Medical University, Beijing 100038, China.

Publication

Bioengineering (Basel). 2023 Feb 1;10(2):184. doi: 10.3390/bioengineering10020184.

DOI: 10.3390/bioengineering10020184
PMID: 36829679
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9952248/
Abstract

Diagnostic results can be radically influenced by the quality of 2D ovarian-tumor ultrasound images. However, clinically processed 2D ovarian-tumor ultrasound images contain many manually added symbols, such as fingers, crosses, dashed lines, and letters, placed to aid recognition. These symbols are widely distributed within the lesion's boundary, where they interfere with feature extraction by neural networks and thus decrease the accuracy of lesion classification and segmentation. Image inpainting techniques are used to eliminate noise and unwanted objects from images. To solve this problem, we examined the MMOTU dataset and built a 2D ovarian-tumor ultrasound image inpainting dataset by finely annotating the various symbols in the images. This paper presents a novel framework, the mask-guided generative adversarial network (MGGAN), that removes these symbols from 2D ovarian-tumor ultrasound images. The MGGAN performs well in corrupted regions by using an attention mechanism in the generator to focus on valid information and ignore symbol information, making lesion boundaries more realistic. Moreover, fast Fourier convolutions (FFCs) and residual networks are used to enlarge the global receptive field, so the model can be applied to high-resolution ultrasound images. The greatest benefit of this algorithm is that it achieves pixel-level inpainting of distorted regions without requiring clean images. Compared with other models, ours achieved better results in a single stage in terms of both objective and subjective evaluations, and obtained the best results at both the 256 × 256 and 512 × 512 resolutions. At 256 × 256, our model achieved 22.66 for FID and 0.07806 for LPIPS; at 512 × 512, it achieved 0.9208 for SSIM, 25.52 for FID, and 0.08300 for LPIPS. Our method can considerably improve the accuracy of computerized ovarian-tumor diagnosis: on cleaned images, segmentation accuracy improved from 71.51% to 76.06% for the U-Net model and from 61.13% to 66.65% for the PSPNet model.
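The fast Fourier convolutions named in the abstract obtain a global receptive field by mixing features in the frequency domain rather than through stacked local kernels. As a rough illustration only (not the authors' MGGAN implementation; the channel-mixing weights `w` here stand in for learned parameters), the core spectral transform can be sketched in NumPy as FFT, a pointwise channel mix per frequency, then inverse FFT:

```python
import numpy as np

def spectral_transform(x, w):
    """Sketch of an FFC-style spectral transform: every output pixel
    depends on every input pixel via the Fourier basis.

    x: (C, H, W) real-valued feature map
    w: (C_out, C) channel-mixing weights (stands in for learned 1x1 conv)
    """
    spec = np.fft.rfft2(x)                        # (C, H, W//2+1) complex spectrum
    mixed = np.einsum('oc,chw->ohw', w, spec)     # 1x1 "conv" across channels per frequency
    return np.fft.irfft2(mixed, s=x.shape[-2:])   # back to the spatial domain

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))

# Sanity check: an identity channel mix reconstructs the input exactly,
# since irfft2 inverts rfft2.
y = spectral_transform(x, np.eye(4))
assert np.allclose(y, x)
```

In the actual FFC design the spectrum is processed by a learned convolution over stacked real and imaginary parts, and the layer is split into local and global branches; the sketch above only shows why a single such layer sees the whole image at once.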


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/3ba099c86678/bioengineering-10-00184-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/1c22542a02a3/bioengineering-10-00184-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/0e769fd19889/bioengineering-10-00184-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/6550c58aa509/bioengineering-10-00184-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/ba3ef0e701cf/bioengineering-10-00184-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/6e642cd53707/bioengineering-10-00184-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/98e22e43572b/bioengineering-10-00184-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/eb2731778396/bioengineering-10-00184-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/bdfaed50c775/bioengineering-10-00184-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/44db91d3a9d2/bioengineering-10-00184-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/a0e8dd794b0f/bioengineering-10-00184-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/01882ef6a1f1/bioengineering-10-00184-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/e2d4b65d2225/bioengineering-10-00184-g0A3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/8234d5e3af72/bioengineering-10-00184-g0A4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e8b/9952248/d1a35ed76bff/bioengineering-10-00184-g0A5.jpg

Similar Articles

1
Improving the Segmentation Accuracy of Ovarian-Tumor Ultrasound Images Using Image Inpainting.
Bioengineering (Basel). 2023 Feb 1;10(2):184. doi: 10.3390/bioengineering10020184.
2
Automatic consecutive context perceived transformer GAN for serial sectioning image blind inpainting.
Comput Biol Med. 2021 Sep;136:104751. doi: 10.1016/j.compbiomed.2021.104751. Epub 2021 Aug 10.
3
Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network.
Comput Methods Programs Biomed. 2020 Jun;189:105275. doi: 10.1016/j.cmpb.2019.105275. Epub 2019 Dec 12.
4
Panoptic blind image inpainting.
ISA Trans. 2023 Jan;132:208-221. doi: 10.1016/j.isatra.2022.10.030. Epub 2022 Nov 1.
5
Ultrasound image denoising using generative adversarial networks with residual dense connectivity and weighted joint loss.
PeerJ Comput Sci. 2022 Feb 16;8:e873. doi: 10.7717/peerj-cs.873. eCollection 2022.
6
Attention-VGG16-UNet: a novel deep learning approach for automatic segmentation of the median nerve in ultrasound images.
Quant Imaging Med Surg. 2022 Jun;12(6):3138-3150. doi: 10.21037/qims-21-1074.
7
Facial image inpainting for big data using an effective attention mechanism and a convolutional neural network.
Front Neurorobot. 2023 Jan 12;16:1111621. doi: 10.3389/fnbot.2022.1111621. eCollection 2022.
8
An Innovative Low-dose CT Inpainting Algorithm based on Limited-angle Imaging Inpainting Model.
J Xray Sci Technol. 2023;31(1):131-152. doi: 10.3233/XST-221260.
9
Lesion-aware generative adversarial networks for color fundus image to fundus fluorescein angiography translation.
Comput Methods Programs Biomed. 2023 Feb;229:107306. doi: 10.1016/j.cmpb.2022.107306. Epub 2022 Dec 14.
10
RNON: image inpainting via repair network and optimization network.
Int J Mach Learn Cybern. 2023 Mar 25:1-17. doi: 10.1007/s13042-023-01811-y.

Cited By

1
Intelligent system based on multiple networks for accurate ovarian tumor semantic segmentation.
Heliyon. 2024 Sep 3;10(17):e37386. doi: 10.1016/j.heliyon.2024.e37386. eCollection 2024 Sep 15.
2
Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology.
J Clin Med. 2023 Oct 29;12(21):6833. doi: 10.3390/jcm12216833.

References

1
MGML: Multigranularity Multilevel Feature Ensemble Network for Remote Sensing Scene Classification.
IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2308-2322. doi: 10.1109/TNNLS.2021.3106391. Epub 2023 May 2.
2
Explainable Deep Learning Models in Medical Image Analysis.
J Imaging. 2020 Jun 20;6(6):52. doi: 10.3390/jimaging6060052.
3
Multiple U-Net-Based Automatic Segmentations and Radiomics Feature Stability on Ultrasound Images for Patients With Ovarian Cancer.
Front Oncol. 2021 Feb 18;10:614201. doi: 10.3389/fonc.2020.614201. eCollection 2020.
4
Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: comparison with expert subjective assessment.
Ultrasound Obstet Gynecol. 2021 Jan;57(1):155-163. doi: 10.1002/uog.23530.
5
Texture Synthesis Based Thyroid Nodule Detection From Medical Ultrasound Images: Interpreting and Suppressing the Adversarial Effect of In-place Manual Annotation.
Front Bioeng Biotechnol. 2020 Jun 17;8:599. doi: 10.3389/fbioe.2020.00599. eCollection 2020.
6
Dataset of breast ultrasound images.
Data Brief. 2019 Nov 21;28:104863. doi: 10.1016/j.dib.2019.104863. eCollection 2020 Feb.
7
Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images.
Med Image Anal. 2019 Oct;57:186-196. doi: 10.1016/j.media.2019.07.005. Epub 2019 Jul 15.
8
Classification of Carotid Artery Intima Media Thickness Ultrasound Images with Deep Learning.
J Med Syst. 2019 Jul 5;43(8):273. doi: 10.1007/s10916-019-1406-2.
9
An improved deep learning approach for detection of thyroid papillary cancer in ultrasound images.
Sci Rep. 2018 Apr 26;8(1):6600. doi: 10.1038/s41598-018-25005-7.
10
Non-Local Means Inpainting of MS Lesions in Longitudinal Image Processing.
Front Neurosci. 2015 Dec 15;9:456. doi: 10.3389/fnins.2015.00456. eCollection 2015.