A graph-based approach for the retrieval of multi-modality medical images.

Affiliations

Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, Australia.

Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia.

Publication Information

Med Image Anal. 2014 Feb;18(2):330-42. doi: 10.1016/j.media.2013.11.003. Epub 2013 Dec 6.

Abstract

In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects.
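To make the graph-construction idea concrete, the sketch below builds a constrained tumour-anatomy graph of the kind the abstract describes: organ vertices form a fully connected sub-graph that models patient-specific anatomy, while each tumour vertex is connected only to organs within a spatial proximity threshold, so tumour edges encode localisation. This is a minimal illustration, not the authors' implementation; the centroid representation, Euclidean distance, the `proximity_mm` threshold, and all names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): proximity-constrained tumour-anatomy graph.
import numpy as np
import networkx as nx

def build_constrained_graph(organs, tumours, proximity_mm=30.0):
    """organs / tumours: dicts mapping a label to a 3-D centroid in millimetres (assumed inputs)."""
    g = nx.Graph()

    # Organ vertices: complete sub-graph capturing patient-specific anatomical layout.
    for name, centre in organs.items():
        g.add_node(name, kind="organ", centroid=np.asarray(centre, dtype=float))
    organ_names = list(organs)
    for i, a in enumerate(organ_names):
        for b in organ_names[i + 1:]:
            d = np.linalg.norm(g.nodes[a]["centroid"] - g.nodes[b]["centroid"])
            g.add_edge(a, b, distance=d)

    # Tumour vertices: edges only to spatially proximate organs, encoding tumour localisation.
    for name, centre in tumours.items():
        g.add_node(name, kind="tumour", centroid=np.asarray(centre, dtype=float))
        for organ in organ_names:
            d = np.linalg.norm(g.nodes[name]["centroid"] - g.nodes[organ]["centroid"])
            if d <= proximity_mm:
                g.add_edge(name, organ, distance=d)
    return g

# Example usage with made-up centroids (mm):
organs = {"lung_R": (120, 150, 200), "liver": (140, 160, 120)}
tumours = {"tumour_1": (125, 148, 195)}
g = build_constrained_graph(organs, tumours)
print(g.edges(data=True))
```

In this toy example the tumour vertex gains an edge only to the nearby right lung, not the distant liver, which is the kind of structural constraint that supports retrieval by tumour location; how the actual method derives features for organ and tumour vertices from the CT and PET data is described in the paper itself.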
