Gu Yun, Vyas Khushi, Shen Mali, Yang Jie, Yang Guang-Zhong
IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):481-492. doi: 10.1109/TNNLS.2020.2980129. Epub 2021 Feb 4.
Representation learning is a critical task for medical image analysis in computer-aided diagnosis. However, it is challenging to learn discriminative features due to the limited size of the data set and the lack of labels. In this article, we propose a deep graph-based multimodal feature embedding (DGMFE) framework for medical image retrieval, with application to breast tissue classification, by learning discriminative features of probe-based confocal laser endomicroscopy (pCLE). We first build a multimodality graph model based on the visual similarity between pCLE data and reference histology images. Latent similar pCLE-histology pairs are extracted by walking cyclic paths on the graph, while dissimilar pairs are extracted based on geodesic distance. Given the similar and dissimilar pairs, the latent feature space is discovered by reconstructing the similarity between pCLE and histology images via deep Siamese neural networks. The proposed method is evaluated on a clinical database with 700 pCLE mosaics. The image retrieval accuracy demonstrates that DGMFE outperforms previous feature-learning methods. In particular, the top-1 accuracy in an eight-class retrieval task is 0.739, a 10% improvement over the state-of-the-art method.
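The pipeline the abstract describes can be illustrated with a minimal sketch. The paper's exact graph construction and network architecture are not given here, so the following makes simplifying assumptions: cyclic-path matching is reduced to mutual nearest neighbors between pCLE and histology feature vectors (a pCLE-to-histology-to-pCLE walk that returns to its start), geodesic distance is measured as hop count on the resulting bipartite nearest-neighbor graph, and the Siamese objective is approximated by a standard contrastive loss. All function and variable names are illustrative, not from the paper.

```python
import numpy as np
from collections import deque


def mine_pairs(pcle, hist, max_hops=2):
    """Mine similar/dissimilar pCLE-histology pairs from a similarity graph.

    pcle: (n, d) array of pCLE feature vectors.
    hist: (m, d) array of histology feature vectors.
    Similar pairs are cycle-consistent matches (mutual nearest neighbors);
    dissimilar pairs are those whose geodesic (hop) distance on the bipartite
    nearest-neighbor graph exceeds max_hops (or that are unreachable).
    """
    n, m = len(pcle), len(hist)
    # Cosine similarity between every pCLE and histology feature.
    pn = pcle / np.linalg.norm(pcle, axis=1, keepdims=True)
    hn = hist / np.linalg.norm(hist, axis=1, keepdims=True)
    sim = pn @ hn.T                        # (n, m)

    p2h = sim.argmax(axis=1)               # nearest histology for each pCLE
    h2p = sim.argmax(axis=0)               # nearest pCLE for each histology

    # Similar pairs: the walk p -> h -> p returns to its starting node.
    similar = [(i, int(p2h[i])) for i in range(n) if h2p[p2h[i]] == i]

    # Bipartite graph: pCLE nodes 0..n-1, histology nodes n..n+m-1.
    adj = {u: set() for u in range(n + m)}
    for i in range(n):
        adj[i].add(n + int(p2h[i])); adj[n + int(p2h[i])].add(i)
    for j in range(m):
        adj[n + j].add(int(h2p[j])); adj[int(h2p[j])].add(n + j)

    def bfs_hops(src):
        # Breadth-first search gives hop counts from src to reachable nodes.
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    dissimilar = []
    for i in range(n):
        dist = bfs_hops(i)
        for j in range(m):
            if dist.get(n + j, np.inf) > max_hops:
                dissimilar.append((i, j))
    return similar, dissimilar


def contrastive_loss(za, zb, y, margin=1.0):
    """Contrastive loss over Siamese embeddings za, zb (both (k, d)).

    y[i] = 1 for similar pairs (pull together), 0 for dissimilar
    (push apart until the distance exceeds the margin).
    """
    d = np.linalg.norm(za - zb, axis=1)
    return float(np.mean(y * d**2 + (1 - y) * np.maximum(margin - d, 0.0)**2))
```

In practice the embeddings `za` and `zb` would come from the two branches of the Siamese network applied to a mined pair; here the loss is shown directly on feature vectors to keep the sketch self-contained.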