Center for Applied Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720.
Proc Natl Acad Sci U S A. 2018 Jan 9;115(2):254-259. doi: 10.1073/pnas.1715832114. Epub 2017 Dec 26.
Deep convolutional neural networks have been successfully applied to many image-processing problems in recent work. Popular network architectures often add operations and connections to the standard architecture to enable training deeper networks, and achieving accurate results in practice often requires a large number of trainable parameters. Here, we introduce a network architecture that uses dilated convolutions to capture features at different image scales and densely connects all feature maps with each other. The resulting architecture achieves accurate results with relatively few parameters, consists of a single set of operations (making it easier to implement, train, and apply in practice), and automatically adapts to different problems. We compare the proposed network architecture with popular existing architectures on several segmentation problems, showing that it achieves accurate results with fewer parameters and a reduced risk of overfitting the training data.
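The two ideas named in the abstract, dilated convolutions for multi-scale features and dense connections between all feature maps, can be illustrated with a minimal numpy sketch. This is not the paper's trained network: the layer count, the dilation cycle of 1 through 4, and the random weights are illustrative assumptions chosen only to show the connectivity pattern.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation):
    """3x3 convolution with the given dilation, zero-padded so the
    output has the same shape as the 2D input."""
    h, w = img.shape
    padded = np.pad(img, dilation)
    out = np.zeros((h, w), dtype=float)
    for ki in range(3):
        for kj in range(3):
            # Kernel tap (ki, kj) reads the input shifted by
            # (ki - 1) * dilation, (kj - 1) * dilation.
            out += kernel[ki, kj] * padded[ki * dilation:ki * dilation + h,
                                           kj * dilation:kj * dilation + w]
    return out

def msd_forward(x, num_layers=4, rng=None):
    """Sketch of a mixed-scale dense forward pass: each new feature map
    is computed from ALL previously computed maps (dense connectivity),
    with the dilation cycling over 1..4 so successive layers see
    features at different image scales. Weights here are random, purely
    for illustration."""
    if rng is None:
        rng = np.random.default_rng(0)
    maps = [x.astype(float)]
    for i in range(num_layers):
        dilation = (i % 4) + 1          # mixed scales across layers
        acc = np.zeros_like(maps[0])
        for m in maps:                  # dense: use every earlier map
            kernel = 0.1 * rng.standard_normal((3, 3))
            acc += dilated_conv2d(m, kernel, dilation)
        maps.append(np.maximum(acc, 0.0))  # ReLU nonlinearity
    return maps
```

Because every layer reuses all earlier feature maps instead of adding new wide layers, each layer needs only one new channel's worth of weights, which is the mechanism behind the "accurate results with relatively few parameters" claim.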