University College London, UK; University of Aberdeen, UK.
Nepal Applied Mathematics and Informatics Institute for research (NAAMII), Nepal.
Med Image Anal. 2023 Apr;85:102747. doi: 10.1016/j.media.2023.102747. Epub 2023 Jan 13.
We present a novel deep multi-task learning method for medical image segmentation. Existing multi-task methods demand ground truth annotations for both the primary and auxiliary tasks. In contrast, we propose to generate the pseudo-labels of an auxiliary task in an unsupervised manner. To generate the pseudo-labels, we leverage the Histogram of Oriented Gradients (HOG), one of the most widely used and powerful hand-crafted features for detection. Together with the ground truth semantic segmentation masks for the primary task and the pseudo-labels for the auxiliary task, we learn the parameters of the deep network to jointly minimize the losses of the primary and auxiliary tasks. We apply our method to two powerful and widely used semantic segmentation networks, UNet and U2Net, trained in a multi-task setup. To validate our hypothesis, we performed experiments on two different medical image segmentation data sets. Extensive quantitative and qualitative results show that our method consistently improves performance over the counterpart method. Moreover, our method is the winner of the FetReg Endovis Sub-challenge on Semantic Segmentation organised in conjunction with MICCAI 2021. Code and implementation details are available at: https://github.com/thetna/medical_image_segmentation.
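A minimal sketch of the idea described above, assuming scikit-image and PyTorch: per-cell HOG descriptors serve as unsupervised pseudo-labels for the auxiliary task, and a joint loss combines the primary segmentation loss with an auxiliary term. The per-cell target layout, the regression loss, and the weight `aux_weight` are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch (not the authors' exact pipeline): HOG pseudo-labels for the auxiliary
# task and a joint loss for a two-headed segmentation network.
import torch
import torch.nn as nn
from skimage.feature import hog


def hog_pseudo_label(image_gray, cell=8, orientations=9):
    """Unsupervised pseudo-label: per-cell HOG descriptors of a grayscale image."""
    features = hog(
        image_gray,
        orientations=orientations,
        pixels_per_cell=(cell, cell),
        cells_per_block=(1, 1),
        feature_vector=False,
    )
    # features has shape (H/cell, W/cell, 1, 1, orientations);
    # rearrange to (orientations, H/cell, W/cell) for a dense prediction target.
    return torch.from_numpy(features[:, :, 0, 0, :]).permute(2, 0, 1).float()


def joint_loss(seg_logits, seg_mask, hog_pred, hog_target, aux_weight=0.1):
    """Primary segmentation loss plus an auxiliary HOG regression loss.

    aux_weight is an assumed hyperparameter, not a value from the paper.
    """
    primary = nn.functional.cross_entropy(seg_logits, seg_mask)
    auxiliary = nn.functional.mse_loss(hog_pred, hog_target)
    return primary + aux_weight * auxiliary
```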