IEEE Trans Med Imaging. 2022 Nov;41(11):3266-3277. doi: 10.1109/TMI.2022.3181694. Epub 2022 Oct 27.
The identification of melanoma involves an integrated analysis of skin lesion images acquired using clinical and dermoscopy modalities. Dermoscopic images provide a detailed view of the subsurface visual structures that supplement the macroscopic details from clinical images. Visual melanoma diagnosis is commonly based on the 7-point visual category checklist (7PC), which involves identifying specific characteristics of skin lesions. The 7PC contains intrinsic relationships between categories that can aid classification, such as shared features, correlations, and the contributions of categories towards diagnosis. Manual classification is subjective and prone to intra- and interobserver variability. This presents an opportunity for automated methods to aid in diagnostic decision support. Current state-of-the-art methods focus on a single image modality (either clinical or dermoscopy) and ignore information from the other, or do not fully leverage the complementary information from both modalities. Furthermore, no existing method exploits the 'intercategory' relationships within the 7PC. In this study, we address these issues by proposing a graph-based intercategory and intermodality network (GIIN) with two modules. A graph-based relational module (GRM) leverages intercategorical relations and intermodal relations, and prioritises the visual structure details from dermoscopy by encoding category representations in a graph network. The category embedding learning module (CELM) captures representations that are specialised for each category and support the GRM. We show that our modules are effective at enhancing classification performance using three public datasets (7PC, ISIC 2017, and ISIC 2018), and that our method outperforms state-of-the-art methods at classifying the 7PC categories and diagnosis.
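The core GRM idea of encoding per-category, per-modality representations as nodes in a graph network can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture: the node layout (7 categories per modality), adjacency pattern, embedding size, and single random-weight graph-convolution step are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories, dim = 7, 16  # 7PC categories; embedding size is illustrative

# Hypothetical per-category embeddings for each modality
# (e.g. as might be produced by a CNN backbone or the CELM).
clinical = rng.standard_normal((n_categories, dim))
dermoscopy = rng.standard_normal((n_categories, dim))

# Stack into one 14-node graph: nodes 0-6 clinical, nodes 7-13 dermoscopy.
x = np.vstack([clinical, dermoscopy])          # shape (14, dim)

# Assumed adjacency: fully connect categories within each modality
# (intercategory edges) and link each category to its counterpart in the
# other modality (intermodality edges); self-loops keep own features.
n = 2 * n_categories
adj = np.zeros((n, n))
adj[:n_categories, :n_categories] = 1.0        # clinical intercategory
adj[n_categories:, n_categories:] = 1.0        # dermoscopy intercategory
for i in range(n_categories):                  # intermodality pairing
    adj[i, n_categories + i] = 1.0
    adj[n_categories + i, i] = 1.0
np.fill_diagonal(adj, 1.0)

# One graph-convolution step: row-normalised neighbourhood averaging,
# a (randomly initialised) linear projection, then ReLU.
deg = adj.sum(axis=1, keepdims=True)
w = rng.standard_normal((dim, dim)) / np.sqrt(dim)
h = np.maximum((adj / deg) @ x @ w, 0.0)       # updated node embeddings

print(h.shape)                                 # (14, 16)
```

In such a scheme, the updated embedding of each category node mixes information from the other checklist categories and from the paired modality; per-category classification heads would then read off the corresponding nodes.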