Zhou Chenhong, Ding Changxing, Wang Xinchao, Lu Zhentai, Tao Dacheng
IEEE Trans Image Process. 2020 Feb 19. doi: 10.1109/TIP.2020.2973510.
Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue by running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and ignores the correlation among the models. To address these flaws of the MC approach, we propose in this paper a light-weight deep model, the One-pass Multi-task Network (OM-Net), which solves class imbalance better than MC does while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features and task-specific parameters to learn discriminative features. Second, to optimize OM-Net more effectively, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and the BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.
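To make the cross-task guidance idea concrete, the following is a minimal, hypothetical PyTorch sketch of a CGA-style module. It is not the authors' released implementation (the code is at the repository above); the class name, tensor shapes, bottleneck design, and the probability-weighted way category statistics are pooled are all assumptions introduced only to illustrate recalibrating channel-wise responses under the guidance of a previous task's prediction.

```python
# Hypothetical sketch of cross-task guided attention: channel recalibration
# driven by category-specific statistics from the previous task's prediction.
import torch
import torch.nn as nn


class CrossTaskGuidedAttention(nn.Module):
    def __init__(self, channels: int, num_categories: int, reduction: int = 4):
        super().__init__()
        # Small bottleneck mapping category-specific channel statistics to
        # per-channel scaling factors, similar in spirit to SE blocks.
        self.fc = nn.Sequential(
            nn.Linear(num_categories * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, prev_pred: torch.Tensor) -> torch.Tensor:
        # feat:      (B, C, D, H, W) features of the current task
        # prev_pred: (B, K, D, H, W) softmax probabilities from the previous task
        b, c = feat.shape[:2]
        k = prev_pred.shape[1]
        f = feat.flatten(2)                            # (B, C, N)
        p = prev_pred.flatten(2)                       # (B, K, N)
        p = p / (p.sum(dim=2, keepdim=True) + 1e-6)    # normalize per category
        # Category-specific statistics: probability-weighted average of each
        # channel over the voxels the previous task assigns to each category.
        stats = torch.einsum('bcn,bkn->bkc', f, p)     # (B, K, C)
        scale = self.fc(stats.reshape(b, k * c))       # (B, C)
        return feat * scale.view(b, c, 1, 1, 1)
```

Under these assumptions, the module would be called as `CrossTaskGuidedAttention(channels=32, num_categories=2)(features, coarse_probs)`, so the coarse task's probability map steers which channels are emphasized for the finer task.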