IEEE Trans Med Imaging. 2020 Jul;39(7):2482-2493. doi: 10.1109/TMI.2020.2972964. Epub 2020 Feb 10.
Automated skin lesion segmentation and classification are two of the most essential and closely related tasks in the computer-aided diagnosis of skin cancer. Despite their prevalence, deep learning models are usually designed for only one task, ignoring the potential benefits of performing both tasks jointly. In this paper, we propose the mutual bootstrapping deep convolutional neural networks (MB-DCNN) model for simultaneous skin lesion segmentation and classification. This model consists of a coarse segmentation network (coarse-SN), a mask-guided classification network (mask-CN), and an enhanced segmentation network (enhanced-SN). On the one hand, the coarse-SN generates coarse lesion masks that provide a prior bootstrapping for mask-CN, helping it locate and classify skin lesions accurately. On the other hand, the lesion localization maps produced by mask-CN are fed into enhanced-SN, transferring the localization information learned by mask-CN to enhanced-SN for accurate lesion segmentation. In this way, the segmentation and classification networks transfer knowledge to, and thereby facilitate, each other in a bootstrapping manner. Meanwhile, we design a novel rank loss and use it jointly with the Dice loss in the segmentation networks to address the issues caused by class imbalance and hard-easy pixel imbalance. We evaluate the proposed MB-DCNN model on the ISIC-2017 and PH2 datasets, achieving Jaccard indices of 80.4% and 89.4% in skin lesion segmentation and average AUCs of 93.8% and 97.7% in skin lesion classification, respectively, which are superior to the performance of representative state-of-the-art skin lesion segmentation and classification methods. Our results suggest that it is possible to boost the performance of skin lesion segmentation and classification simultaneously by training a unified model to perform both tasks in a mutual bootstrapping way.
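To illustrate the kind of objective described above, here is a minimal NumPy sketch of a Dice loss combined with a hard-pixel rank loss. The specific hard-pixel selection rule, the hyperparameters `k`, `margin`, and `alpha`, and the pairwise hinge form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # pred: foreground probabilities, target: binary lesion mask (same shape)
    inter = (pred * target).sum()
    union = pred.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def rank_loss(pred, target, k=10, margin=0.3):
    # Illustrative ranking term: pick the k hardest lesion pixels (lowest
    # predicted probability) and the k hardest background pixels (highest
    # predicted probability), and enforce a margin between the two groups.
    pos = np.sort(pred[target == 1])        # ascending: hardest lesion pixels first
    neg = np.sort(pred[target == 0])[::-1]  # descending: hardest background first
    k = min(k, pos.size, neg.size)
    if k == 0:
        return 0.0
    hard_pos, hard_neg = pos[:k], neg[:k]
    # pairwise hinge: each hard lesion pixel should outscore each hard
    # background pixel by at least `margin`
    diffs = hard_pos[:, None] - hard_neg[None, :]
    return np.maximum(margin - diffs, 0.0).mean()

def segmentation_loss(pred, target, alpha=0.5):
    # joint objective: Dice handles class imbalance, the rank term focuses
    # training on the hardest pixels (alpha is a hypothetical weight)
    return dice_loss(pred, target) + alpha * rank_loss(pred, target)
```

Because the rank term only involves the few hardest pixels on each side of the boundary, it concentrates the gradient on ambiguous regions instead of the many easy pixels that dominate the Dice term.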