Kurtz Camille, Depeursinge Adrien, Napel Sandy, Beaulieu Christopher F, Rubin Daniel L
Department of Radiology, School of Medicine, Stanford University, USA; LIPADE Laboratory (EA 2517), University Paris Descartes, France.
Department of Radiology, School of Medicine, Stanford University, USA.
Med Image Anal. 2014 Oct;18(7):1082-100. doi: 10.1016/j.media.2014.06.009. Epub 2014 Jul 2.
Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means of providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, fully capturing the visual appearance of diseases with low-level image features is challenging, and the semantic gap between these features and the high-level visual concepts in radiology may impair system performance. To address this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images with semantic terms, and they ignore the intrinsic visual and semantic relationships between these annotations when comparing images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic "soft" prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomography (CT) images annotated with semantic terms from the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation against a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation against the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, and AUC scores of more than 0.77 with the second. This automated approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies.
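For illustration, the sketch below shows one way images described by soft term-likelihood vectors could be compared under a blended term dissimilarity. The linear blend, the `alpha` weight, and all function names are assumptions for this sketch only; the paper defines its own combination of image-based and ontological term relations, which is not reproduced here.

```python
import numpy as np

def combined_term_dissimilarity(img_based: np.ndarray,
                                ontological: np.ndarray,
                                alpha: float = 0.5) -> np.ndarray:
    # Blend two precomputed term-by-term dissimilarity matrices.
    # The linear blend and `alpha` are illustrative assumptions, not
    # the paper's actual combination rule.
    return alpha * img_based + (1.0 - alpha) * ontological

def image_dissimilarity(annot_a: np.ndarray,
                        annot_b: np.ndarray,
                        term_dissim: np.ndarray) -> float:
    # annot_a, annot_b: soft likelihood vectors, one entry per term.
    # term_dissim[i, j]: dissimilarity between terms i and j.
    # Normalize each annotation to a distribution over terms, then take
    # the expected pairwise term dissimilarity (a simple stand-in for
    # the paper's measure).
    pa = annot_a / annot_a.sum()
    pb = annot_b / annot_b.sum()
    return float(pa @ term_dissim @ pb)

# Toy usage with three hypothetical terms:
D_img = np.array([[0.0, 0.2, 0.8], [0.2, 0.0, 0.6], [0.8, 0.6, 0.0]])
D_ont = np.array([[0.0, 0.1, 0.9], [0.1, 0.0, 0.7], [0.9, 0.7, 0.0]])
D = combined_term_dissimilarity(D_img, D_ont)
a = np.array([0.9, 0.4, 0.1])  # soft term likelihoods for image A
b = np.array([0.8, 0.5, 0.2])  # soft term likelihoods for image B
print(image_dissimilarity(a, b, D))
```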
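The NDCG score reported above can be computed as in the following minimal sketch. It uses the common exponential-gain formulation; the abstract does not state which DCG variant the paper adopted.

```python
import numpy as np

def dcg(relevances) -> float:
    # Discounted cumulative gain of a ranked list of relevance grades,
    # using the (2^rel - 1) / log2(rank + 1) form.
    rel = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, rel.size + 1)
    return float(np.sum((2.0 ** rel - 1.0) / np.log2(ranks + 1)))

def ndcg(relevances) -> float:
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1]
    denom = dcg(ideal)
    return dcg(relevances) / denom if denom > 0 else 0.0

# Toy usage: relevance grades of retrieved images, in retrieval order.
print(ndcg([3, 2, 3, 0, 1, 2]))  # ~0.95 for this near-ideal ordering
```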