


Pixel-wise annotation for clear and contaminated regions segmentation in wireless capsule endoscopy images: A multicentre database.

Authors

Sadeghi Vahid, Sanahmadi Yasaman, Behdad Maryam, Vard Alireza, Sharifi Mohsen, Raeisi Ahmad, Nikkhah Mehdi, Mehridehnavi Alireza

Affiliations

Student Research Committee, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran.

Medical Image & Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran.

Publication

Data Brief. 2024 Sep 10;57:110927. doi: 10.1016/j.dib.2024.110927. eCollection 2024 Dec.

DOI: 10.1016/j.dib.2024.110927
PMID: 39351133
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11440793/
Abstract

Wireless capsule endoscopy (WCE) can non-invasively visualize the small intestine, the most complicated segment of the gastrointestinal tract, to detect different types of abnormalities. However, its main drawback is the vast number of captured images that must be reviewed (more than 50,000 frames). The recorded images are not always clear, and contaminating agents such as turbid materials and air bubbles degrade the visualization quality of WCE images. This can cause serious problems: reduced mucosal visualization, prolonged video reviewing time, and an increased risk of missed pathology. On the other hand, accurately quantifying the amount of turbid fluids and bubbles can indicate potential motility malfunction. To assist in developing computer vision-based techniques, we have constructed the first multicentre publicly available clear and contaminated annotated dataset by precisely segmenting 17,593 capsule endoscopy images from three different databases. In contrast to existing datasets, ours has been annotated at the pixel level, discriminating clear from contaminated regions and further differentiating bubbles and turbid fluids from normal tissue. To create the dataset, we first selected all images (2906 frames) in the reduced mucosal view class, covering different levels of contamination, and randomly selected 12,237 images from the normal class of the copyright-free CC BY 4.0 licensed small bowel capsule endoscopy (SBCE) images in the Kvasir capsule endoscopy database. To mitigate possible bias in that dataset and to increase the sample size, 2077 and 373 images were randomly chosen from the SEE-AI project and CECleanliness datasets, respectively, for subsequent annotation.
The selected images were annotated with the aid of ImageJ and ITK-SNAP software under the supervision of an expert SBCE reader with extensive experience in gastroenterology and endoscopy. For each image, two ground truth (GT) masks were created, one binary and one tri-colour, in which each pixel is indexed into two classes (clear and contaminated) and three classes (bubble, turbid fluid, and normal), respectively. To the best of the authors' knowledge, no capsule endoscopy reading software currently implements clear and contaminated region segmentation. The curated multicentre dataset can be used to implement segmentation algorithms that identify clear and contaminated regions and discriminate bubbles and turbid fluids from normal tissue in the small intestine. Since the annotated images come from three different sources, they provide a diverse representation of clear and contaminated patterns in WCE images. This diversity is valuable for training models that are more robust to variations in data characteristics and generalize well across different subjects and settings. The inclusion of images from three different centres also allows for robust cross-validation, where computer vision-based models can be trained on one centre's annotated images and evaluated on the others.
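The two ground-truth encodings described above (a binary clear/contaminated mask and a tri-colour bubble/turbid/normal mask) can be sketched in a few lines. Note the label values and the derivation of the binary mask from the tri-colour one are assumptions for illustration, not the dataset's documented scheme:

```python
import numpy as np

# Hypothetical label scheme (not confirmed by the dataset documentation):
# tri-colour mask: 0 = normal tissue, 1 = bubble, 2 = turbid fluid
NORMAL, BUBBLE, TURBID = 0, 1, 2

def binary_from_tricolour(tri_mask: np.ndarray) -> np.ndarray:
    """Collapse the three-class mask into clear (0) vs contaminated (1)."""
    return (tri_mask != NORMAL).astype(np.uint8)

def contamination_fraction(tri_mask: np.ndarray) -> float:
    """Fraction of pixels covered by bubbles or turbid fluid,
    which the paper links to potential motility malfunction."""
    return float(np.mean(tri_mask != NORMAL))

# Toy 4x4 tri-colour mask standing in for a real annotation
tri = np.array([[0, 0, 1, 1],
                [0, 2, 2, 1],
                [0, 0, 2, 0],
                [0, 0, 0, 0]], dtype=np.uint8)

binary = binary_from_tricolour(tri)
print(contamination_fraction(tri))  # 6 contaminated pixels / 16 = 0.375
```

In the real dataset the masks are images rather than arrays built inline, but the same collapse from three classes to two applies pixel-wise.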

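The cross-centre evaluation suggested above (train on one centre's annotated images, evaluate on the others) amounts to a leave-one-source-out split. A minimal sketch, assuming each image id can be mapped to its originating database (Kvasir, SEE-AI, or CECleanliness):

```python
from collections import defaultdict

def leave_one_source_out(image_sources):
    """Yield (held_out_source, train_ids, test_ids) splits.

    image_sources maps image id -> originating centre/database.
    Each split trains on all sources except one and tests on the held-out one.
    """
    by_source = defaultdict(list)
    for img_id, src in image_sources.items():
        by_source[src].append(img_id)
    for held_out in by_source:
        test = sorted(by_source[held_out])
        train = sorted(i for s, ids in by_source.items()
                       if s != held_out for i in ids)
        yield held_out, train, test

# Toy example with hypothetical image ids
sources = {"img1": "Kvasir", "img2": "Kvasir",
           "img3": "SEE-AI", "img4": "CECleanliness"}
for held_out, train, test in leave_one_source_out(sources):
    print(held_out, train, test)
```

With three sources this yields three folds; reporting performance on each held-out centre exposes how sensitive a model is to centre-specific imaging characteristics.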

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/8299d72686dc/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/696c9f83c09f/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/7c2dadb3ea1d/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/ebdcf2954d7b/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/ac50c8e8dd71/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/52d0b7361230/gr6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3762/11440793/f93bde3a196e/gr7.jpg

Similar articles

1. Pixel-wise annotation for clear and contaminated regions segmentation in wireless capsule endoscopy images: A multicentre database. Data Brief. 2024 Sep 10;57:110927. doi: 10.1016/j.dib.2024.110927. eCollection 2024 Dec.
2. Automatic detection of informative frames from wireless capsule endoscopy images. Med Image Anal. 2010 Jun;14(3):449-70. doi: 10.1016/j.media.2009.12.001. Epub 2010 Jan 4.
3. Categorization and segmentation of intestinal content frames for wireless capsule endoscopy. IEEE Trans Inf Technol Biomed. 2012 Nov;16(6):1341-52. doi: 10.1109/titb.2012.2221472.
4. Identification of Circular Patterns in Capsule Endoscopy Bubble Frames. J Med Signals Sens. 2024 Jul 2;14:15. doi: 10.4103/jmss.jmss_50_23. eCollection 2024.
5. Computer aided wireless capsule endoscopy video segmentation. Med Phys. 2015 Feb;42(2):645-52. doi: 10.1118/1.4905164.
6. Deep learning for registration of region of interest in consecutive wireless capsule endoscopy frames. Comput Methods Programs Biomed. 2021 Sep;208:106189. doi: 10.1016/j.cmpb.2021.106189. Epub 2021 May 25.
7. Deep learning for polyp recognition in wireless capsule endoscopy images. Med Phys. 2017 Apr;44(4):1379-1389. doi: 10.1002/mp.12147.
8. Semantic Segmentation Dataset for AI-Based Quantification of Clean Mucosa in Capsule Endoscopy. Medicina (Kaunas). 2022 Mar 7;58(3):397. doi: 10.3390/medicina58030397.
9. Multiple Linear Discriminant Models for Extracting Salient Characteristic Patterns in Capsule Endoscopy Images for Multi-Disease Detection. IEEE J Transl Eng Health Med. 2020 Jan 17;8:3300111. doi: 10.1109/JTEHM.2020.2964666. eCollection 2020.
10. Computer-aided gastrointestinal hemorrhage detection in wireless capsule endoscopy videos. Comput Methods Programs Biomed. 2015 Dec;122(3):341-53. doi: 10.1016/j.cmpb.2015.09.005. Epub 2015 Sep 9.

Cited by

1. Precision enhancement in wireless capsule endoscopy: a novel transformer-based approach for real-time video object detection. Front Artif Intell. 2025 Apr 30;8:1529814. doi: 10.3389/frai.2025.1529814. eCollection 2025.

References

1. Small bowel capsule endoscopy examination and open access database with artificial intelligence: The SEE-artificial intelligence project. DEN Open. 2023 Jun 22;4(1):e258. doi: 10.1002/deo2.258. eCollection 2024 Apr.
2. Semantic Segmentation Dataset for AI-Based Quantification of Clean Mucosa in Capsule Endoscopy. Medicina (Kaunas). 2022 Mar 7;58(3):397. doi: 10.3390/medicina58030397.
3. Kvasir-Capsule, a video capsule endoscopy dataset. Sci Data. 2021 May 27;8(1):142. doi: 10.1038/s41597-021-00920-z.
4. A neural network-based algorithm for assessing the cleanliness of small bowel during capsule endoscopy. Endoscopy. 2021 Sep;53(9):932-936. doi: 10.1055/a-1301-3841. Epub 2021 Jan 12.
5. Automatic evaluation of degree of cleanliness in capsule endoscopy based on a novel CNN architecture. Sci Rep. 2020 Oct 19;10(1):17706. doi: 10.1038/s41598-020-74668-8.
6. NIH Image to ImageJ: 25 years of image analysis. Nat Methods. 2012 Jul;9(7):671-5. doi: 10.1038/nmeth.2089.
7. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006 Jul 1;31(3):1116-28. doi: 10.1016/j.neuroimage.2006.01.015. Epub 2006 Mar 20.
8. The measurement of observer agreement for categorical data. Biometrics. 1977 Mar;33(1):159-74.