

On Smart Gaze Based Annotation of Histopathology Images for Training of Deep Convolutional Neural Networks.

Publication Info

IEEE J Biomed Health Inform. 2022 Jul;26(7):3025-3036. doi: 10.1109/JBHI.2022.3148944. Epub 2022 Jul 1.

DOI: 10.1109/JBHI.2022.3148944
PMID: 35130177
Abstract

Unavailability of large training datasets is a bottleneck that needs to be overcome to realize the true potential of deep learning in histopathology applications. Although slide digitization via whole slide imaging scanners has increased the speed of data acquisition, labeling of virtual slides requires a substantial time investment from pathologists. Eye gaze annotations have the potential to speed up the slide labeling process. This work explores the viability and timing comparisons of eye gaze labeling compared to conventional manual labeling for training object detectors. Challenges associated with gaze based labeling and methods to refine the coarse data annotations for subsequent object detection are also discussed. Results demonstrate that gaze tracking based labeling can save valuable pathologist time and delivers good performance when employed for training a deep object detector. Using the task of localization of Keratin Pearls in cases of oral squamous cell carcinoma as a test case, we compare the performance gap between deep object detectors trained using hand-labelled and gaze-labelled data. On average, compared to 'Bounding-box' based hand-labeling, gaze-labeling required 57.6% less time per label and compared to 'Freehand' labeling, gaze-labeling required on average 85% less time per label.


Similar Articles

1. On Smart Gaze Based Annotation of Histopathology Images for Training of Deep Convolutional Neural Networks.
IEEE J Biomed Health Inform. 2022 Jul;26(7):3025-3036. doi: 10.1109/JBHI.2022.3148944. Epub 2022 Jul 1.

2. Training high-performance deep learning classifier for diagnosis in oral cytology using diverse annotations.
Sci Rep. 2024 Jul 30;14(1):17591. doi: 10.1038/s41598-024-67879-w.

3. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
Med Image Anal. 2021 Oct;73:102165. doi: 10.1016/j.media.2021.102165. Epub 2021 Jul 14.

4. Person-Specific Gaze Estimation from Low-Quality Webcam Images.
Sensors (Basel). 2023 Apr 20;23(8):4138. doi: 10.3390/s23084138.

5. Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data.
Behav Res Methods. 2023 Apr;55(3):1372-1391. doi: 10.3758/s13428-022-01833-4. Epub 2022 Jun 1.

6. Multiview Multitask Gaze Estimation With Deep Convolutional Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2019 Oct;30(10):3010-3023. doi: 10.1109/TNNLS.2018.2865525. Epub 2018 Sep 3.

7. Training Robust Object Detectors From Noisy Category Labels and Imprecise Bounding Boxes.
IEEE Trans Image Process. 2021;30:5782-5792. doi: 10.1109/TIP.2021.3085208. Epub 2021 Jun 23.

8. Deep contrastive learning based tissue clustering for annotation-free histopathology image analysis.
Comput Med Imaging Graph. 2022 Apr;97:102053. doi: 10.1016/j.compmedimag.2022.102053. Epub 2022 Mar 12.

9. LiteGaze: Neural architecture search for efficient gaze estimation.
PLoS One. 2023 May 1;18(5):e0284814. doi: 10.1371/journal.pone.0284814. eCollection 2023.

10. Fully Automated DCNN-Based Thermal Images Annotation Using Neural Network Pretrained on RGB Data.
Sensors (Basel). 2021 Feb 23;21(4):1552. doi: 10.3390/s21041552.

Cited By

1. Application of artificial intelligence in oral potentially malignant disorders: current opinions and future barriers.
Clin Transl Oncol. 2025 Aug 30. doi: 10.1007/s12094-025-04043-4.

2. Eye-Guided Multimodal Fusion: Toward an Adaptive Learning Framework Using Explainable Artificial Intelligence.
Sensors (Basel). 2025 Jul 24;25(15):4575. doi: 10.3390/s25154575.

3. Deep learning quantifies pathologists' visual patterns for whole slide image diagnosis.
Nat Commun. 2025 Jul 1;16(1):5493. doi: 10.1038/s41467-025-60307-1.

4. Robust ROI Detection in Whole Slide Images Guided by Pathologists' Viewing Patterns.
J Imaging Inform Med. 2025 Feb;38(1):439-454. doi: 10.1007/s10278-024-01202-x. Epub 2024 Aug 9.

5. Eye tracking in digital pathology: A comprehensive literature review.
J Pathol Inform. 2024 May 17;15:100383. doi: 10.1016/j.jpi.2024.100383. eCollection 2024 Dec.

6. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review.
IEEE J Biomed Health Inform. 2024 Jun;28(6):3597-3612. doi: 10.1109/JBHI.2024.3371893. Epub 2024 Jun 6.

7. Research and application of artificial intelligence in dentistry from lower-middle income countries - a scoping review.
BMC Oral Health. 2024 Feb 12;24(1):220. doi: 10.1186/s12903-024-03970-y.