School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
School of Electrical and Information Engineering, University of Sydney, Australia.
Comput Med Imaging Graph. 2016 Apr;49:37-45. doi: 10.1016/j.compmedimag.2016.01.001. Epub 2016 Feb 4.
The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty of discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method with two retrieval strategies, weighted nearest-neighbour retrieval and multi-class classification, to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
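To make the retrieve-then-annotate idea concrete, the following is a minimal sketch, not the authors' implementation, of weighted nearest-neighbour annotation: a query image's low-level feature vector is compared against a labelled reference collection, and the semantic labels of the closest matches are aggregated with distance-based weights. The feature dimensionality, the example labels, the inverse-distance weighting, and the majority-vote threshold are all illustrative assumptions.

```python
import numpy as np

def annotate_by_weighted_knn(query_feat, ref_feats, ref_labels, k=5, eps=1e-8):
    """Assign semantic labels to a query image via weighted nearest-neighbour retrieval.

    query_feat : (d,) low-level feature vector of the query image
    ref_feats  : (n, d) feature vectors of the labelled reference images
    ref_labels : list of n sets of high-level semantic labels (e.g. {"calcification"})
    """
    # Euclidean distance between the query and every labelled reference image
    dists = np.linalg.norm(ref_feats - query_feat, axis=1)

    # Retrieve the k most similar reference images
    nearest = np.argsort(dists)[:k]

    # Inverse-distance weighting: closer images contribute more to each label's score
    scores = {}
    total_weight = 0.0
    for i in nearest:
        w = 1.0 / (dists[i] + eps)
        total_weight += w
        for label in ref_labels[i]:
            scores[label] = scores.get(label, 0.0) + w

    # Keep labels supported by a weighted majority of the retrieved images
    return {label for label, s in scores.items() if s > 0.5 * total_weight}


# Example usage with synthetic features and illustrative labels
rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(100, 32))
ref_labels = [{"calcification"} if i % 2 else {"vessel obstruction"} for i in range(100)]
query_feat = rng.normal(size=32)
print(annotate_by_weighted_knn(query_feat, ref_feats, ref_labels, k=7))
```

The same aggregation step could sit behind a different retrieval strategy (for instance, a multi-class classifier that selects the candidate reference set), which is the sense in which the annotation stage is independent of the underlying retrieval method.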