Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang
Arizona State University.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:3-11. doi: 10.1007/978-3-030-00889-5_1. Epub 2018 Sep 20.
In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim at reducing the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer faces an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We have evaluated UNet++ against the U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
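The nested, dense skip pathways described above can be sketched as a grid of nodes x^{i,j}, where i indexes the resolution level and j the skip column: each node receives all same-level predecessors plus an upsampled feature map from the deeper level. The following is a minimal numpy sketch of that wiring only, not the paper's implementation; the convolution block H, the pooling, and the upsampling are replaced by hypothetical stand-ins (elementwise averaging, 2x2 average pooling, nearest-neighbour repeat) so that the connectivity pattern is runnable in isolation.

```python
import numpy as np

def down(x):
    # 2x2 average pooling: a stand-in for the encoder's conv + downsampling
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    # nearest-neighbour upsampling: a stand-in for transposed convolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

def H(*feats):
    # placeholder for the convolution block: averages its (same-shape) inputs
    return np.mean(feats, axis=0)

def unetpp_nodes(x0, depth=4):
    # X[(i, j)] holds node x^{i,j}: i = resolution level, j = skip column
    X = {(0, 0): H(x0)}
    for i in range(1, depth):
        X[(i, 0)] = H(down(X[(i - 1, 0)]))          # encoder backbone
    for j in range(1, depth):
        for i in range(depth - j):
            prior = [X[(i, k)] for k in range(j)]    # dense same-level skips
            prior.append(up(X[(i + 1, j - 1)]))      # upsampled deeper node
            X[(i, j)] = H(*prior)
    return X
```

Under this wiring, the top-row nodes X[(0, 1)], X[(0, 2)], X[(0, 3)] all sit at full resolution, which is what makes deep supervision possible: each can carry its own segmentation head and loss.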