

Cascaded Subpatch Networks for Effective CNNs.

Author Information

Jiang Xiaoheng, Pang Yanwei, Sun Manli, Li Xuelong

Publication Information

IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):2684-2694. doi: 10.1109/TNNLS.2017.2689098. Epub 2017 May 12.

DOI: 10.1109/TNNLS.2017.2689098
PMID: 28504949
Abstract

Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size H×W (typically, H is small and is equal to W, e.g., H is 5 or 7). Generally, the size of the filter is equal to the size of the input patch. We argue that the representational ability of this equal-size strategy is not strong enough. To overcome the drawback, we propose a subpatch filter whose spatial size h×w is smaller than H×W. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size h×w and is aimed at extracting features from the spatial domain. The second one is of spatial size 1×1 and is used for strengthening the connections between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch, and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in the spatial domain. These subpatch networks form a new network called the cascaded subpatch network (CSNet). The feature layer generated by CSNet is called the csconv layer. For the whole input image, we construct a deep neural network by stacking a sequence of csconv layers. Experimental results on five benchmark data sets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 data set without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 data set.
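The compactness claim can be illustrated with a quick parameter count. The sketch below is not the paper's exact configuration: the 3×3 subpatch size, the constant channel width, and the omission of bias terms are illustrative assumptions. It counts the weights of a single equal-size 7×7 filter bank against a csconv layer that cascades 3×3 subpatch networks (each a 3×3 linear filter followed by a 1×1 filter, valid convolution) until the 7×7 patch is reduced to a single spatial neuron.

```python
def conv_params(k, c_in, c_out):
    """Weights of a k x k convolution with c_in -> c_out channels (bias ignored)."""
    return k * k * c_in * c_out

def subpatch_params(k, c_in, c_out):
    """One subpatch network: a k x k linear filter followed by a 1 x 1 filter."""
    return conv_params(k, c_in, c_out) + conv_params(1, c_out, c_out)

def csconv_params(patch, k, c):
    """Cascade k x k subpatch networks (valid convolution, stride 1) until the
    spatial output of the patch shrinks to 1 x 1, as in a csconv layer."""
    total, size = 0, patch
    while size > 1:
        total += subpatch_params(k, c, c)
        size -= k - 1  # each valid k x k convolution shrinks the side by k-1
    assert size == 1, "patch size must reduce exactly to 1"
    return total

C = 64  # assumed channel width, for illustration only
equal_size = conv_params(7, C, C)    # one 7x7 filter covering the whole patch
cascaded = csconv_params(7, 3, C)    # three stacked 3x3 subpatch networks
print(equal_size, cascaded)
```

With these assumptions the cascade uses 122,880 weights against 200,704 for the equal-size filter, while inserting extra nonlinearities between stages, which is the trade-off the abstract describes.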


Similar Articles

1. Cascaded Subpatch Networks for Effective CNNs. IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):2684-2694. doi: 10.1109/TNNLS.2017.2689098. Epub 2017 May 12.
2. Convolution in Convolution for Network in Network. IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1587-1597. doi: 10.1109/TNNLS.2017.2676130. Epub 2017 Mar 16.
3. Hierarchical Multi-Scale Convolutional Neural Networks for Hyperspectral Image Classification. Sensors (Basel). 2019 Apr 10;19(7):1714. doi: 10.3390/s19071714.
4. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features. IEEE Trans Image Process. 2018;27(1):106-120. doi: 10.1109/TIP.2017.2755766.
5. Two-Stream Convolutional Networks for Blind Image Quality Assessment. IEEE Trans Image Process. 2019 May;28(5):2200-2211. doi: 10.1109/TIP.2018.2883741. Epub 2018 Nov 28.
6. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks. Sensors (Basel). 2017 Nov 24;17(12):2720. doi: 10.3390/s17122720.
7. Gabor Convolutional Networks. IEEE Trans Image Process. 2018 Sep;27(9):4357-4366. doi: 10.1109/TIP.2018.2835143.
8. Source localization using deep neural networks in a shallow water environment. J Acoust Soc Am. 2018 May;143(5):2922. doi: 10.1121/1.5036725.
9. Model pruning based on filter similarity for edge device deployment. Front Neurorobot. 2023 Mar 2;17:1132679. doi: 10.3389/fnbot.2023.1132679. eCollection 2023.
10. Deep Recurrent Neural Networks for Human Activity Recognition. Sensors (Basel). 2017 Nov 6;17(11):2556. doi: 10.3390/s17112556.

Cited By

1. Linear leaky-integrate-and-fire neuron model based spiking neural networks and its mapping relationship to deep neural networks. Front Neurosci. 2022 Aug 24;16:857513. doi: 10.3389/fnins.2022.857513. eCollection 2022.
2. M-SAC-VLADNet: A Multi-Path Deep Feature Coding Model for Visual Classification. Entropy (Basel). 2018 May 4;20(5):341. doi: 10.3390/e20050341.