
Histology image search using multimodal fusion.

Author Information

Caicedo Juan C, Vanegas Jorge A, Páez Fabian, González Fabio A

Affiliations

University of Illinois at Urbana-Champaign, Siebel Center for Computer Science, 201 N Goodwin Ave, Urbana, IL 61801, USA.

MindLab Research Laboratory, Universidad Nacional de Colombia, Bogotá, Colombia.

Publication Information

J Biomed Inform. 2014 Oct;51:114-28. doi: 10.1016/j.jbi.2014.04.016. Epub 2014 May 10.

Abstract

This work proposes a histology image indexing strategy based on multimodal representations obtained from the combination of visual features and associated semantic annotations. Both data modalities are complementary information sources for an image retrieval system, since visual features lack explicit semantic information and semantic terms do not usually describe the visual appearance of images. The paper proposes a novel strategy that builds a fused image representation using matrix factorization algorithms and data reconstruction principles to generate a set of multimodal features. The methodology can seamlessly recover the multimodal representation of images without semantic annotations, allowing new images to be indexed using visual features only and single example images to be accepted as queries. Experimental evaluations on three different histology image data sets show that our strategy is a simple yet effective approach to building multimodal representations for histology image search, and that it outperforms the popular late-fusion approach to combining information.
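The abstract does not spell out the exact factorization model, so the following Python sketch only illustrates the general idea under stated assumptions: visual features and term annotations of training images are stacked and jointly factorized (here with non-negative matrix factorization) into a shared latent space, and the latent "multimodal" code of an unannotated query image is reconstructed from its visual features alone via non-negative least squares. All matrix sizes, variable names, and the NMF/NNLS choices are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of matrix-factorization-based multimodal fusion (assumptions, not
# the authors' exact formulation): jointly factorize [visual | terms] for training
# images, then index/query unannotated images through the visual part of the basis.
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_train, d_visual, d_terms, k = 200, 64, 30, 10   # toy sizes (assumptions)
V = rng.random((n_train, d_visual))                # visual features (rows = images)
T = (rng.random((n_train, d_terms)) > 0.8) * 1.0   # binary semantic annotations

# Joint factorization: [V | T] ~= H @ W, where H holds the multimodal latent codes.
X = np.hstack([V, T])
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
H_train = model.fit_transform(X)                   # n_train x k multimodal index
W = model.components_                              # k x (d_visual + d_terms) basis
W_visual = W[:, :d_visual]                         # visual part of the basis

def multimodal_code(visual_vec):
    """Reconstruct the latent multimodal code of an unannotated image
    from its visual features alone (non-negative least squares)."""
    h, _ = nnls(W_visual.T, visual_vec)
    return h

# Query-by-example: rank indexed images by cosine similarity in the latent space.
query = rng.random(d_visual)                       # a new image, visual features only
h_q = multimodal_code(query)
sims = H_train @ h_q / (np.linalg.norm(H_train, axis=1) * np.linalg.norm(h_q) + 1e-12)
print("top-5 retrieved image indices:", np.argsort(-sims)[:5])
```

Because retrieval happens in the shared latent space, an image indexed from visual features alone can still be matched against images whose codes were learned with the help of semantic annotations, which is the practical point of the fused representation described in the abstract.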

