

A Corresponding Region Fusion Framework for Multi-Modal Cervical Lesion Detection.

Publication

IEEE/ACM Trans Comput Biol Bioinform. 2024 Jul-Aug;21(4):959-970. doi: 10.1109/TCBB.2022.3178725. Epub 2024 Aug 9.

Abstract

Cervical lesion detection (CLD) using multi-modal colposcopic images (acetic and iodine) is critical to computer-aided diagnosis (CAD) systems for accurate, objective, and comprehensive cervical cancer screening. To robustly capture lesion features and conform to clinical diagnostic practice, we propose a novel corresponding region fusion network (CRFNet) for multi-modal CLD. CRFNet first extracts feature maps and generates proposals for each modality, then performs proposal shifting to obtain corresponding regions under large position shifts between modalities, and finally fuses those region features with a new corresponding channel attention to detect lesion regions on both modalities. To evaluate CRFNet, we build a large multi-modal colposcopic image dataset collected from our collaborative hospital. We show that our proposed CRFNet surpasses known single-modal and multi-modal CLD methods and achieves state-of-the-art performance, especially in terms of Average Precision.
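The abstract describes the pipeline only at a high level: per-modality feature extraction and proposal generation, proposal shifting to locate corresponding regions across modalities, and channel-attention-based fusion of the paired region features. The PyTorch sketch below illustrates one plausible form of that fusion step; the class name CorrespondingChannelAttention, the gating design, the reduction ratio, and all tensor shapes are assumptions made for illustration and are not the authors' implementation.

```python
# Minimal sketch of a cross-modal channel-attention fusion of corresponding
# RoI features, assuming acetic and iodine regions have already been paired
# (e.g. via the proposal-shifting step described in the abstract).
import torch
import torch.nn as nn


class CorrespondingChannelAttention(nn.Module):
    """Hypothetical fusion module: channel weights are predicted from the
    pooled statistics of both modalities' corresponding regions and used to
    blend the two feature streams."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_acetic: torch.Tensor, feat_iodine: torch.Tensor) -> torch.Tensor:
        # feat_*: (num_rois, C, H, W) RoI-pooled features of corresponding regions
        pooled = torch.cat(
            [feat_acetic.mean(dim=(2, 3)), feat_iodine.mean(dim=(2, 3))], dim=1
        )                                            # (num_rois, 2C)
        weights = self.gate(pooled)[:, :, None, None]  # (num_rois, C, 1, 1)
        # Channel-wise weighted blend of the two modality streams
        return weights * feat_acetic + (1.0 - weights) * feat_iodine


if __name__ == "__main__":
    num_rois, channels = 8, 256
    acetic = torch.randn(num_rois, channels, 7, 7)   # acetic-modality RoI features
    iodine = torch.randn(num_rois, channels, 7, 7)   # shifted (corresponding) iodine RoI features
    fused = CorrespondingChannelAttention(channels)(acetic, iodine)
    print(fused.shape)  # torch.Size([8, 256, 7, 7])
```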

