

PresB-Net: parametric binarized neural network with learnable activations and shuffled grouped convolution.

Authors

Shin Jungwoo, Kim HyunJin

Affiliation

School of Electronics and Electrical Engineering, Dankook University, Yongin, South Korea.

Publication

PeerJ Comput Sci. 2022 Jan 3;8:e842. doi: 10.7717/peerj-cs.842. eCollection 2022.

DOI: 10.7717/peerj-cs.842
PMID: 35111925
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8771783/
Abstract

In this study, we present a novel performance-enhancing binarized neural network model called PresB-Net: Parametric Binarized Neural Network. A binarized neural network (BNN) can achieve fast output computation at low hardware cost by using binarized weights and features; however, performance degradation is the most critical problem in BNN models. Our PresB-Net combines several state-of-the-art BNN structures, including a learnable activation with additional trainable parameters and shuffled grouped convolution. Notably, we propose a new normalization approach that reduces the imbalance between the shuffled groups arising in shuffled grouped convolutions. Moreover, the proposed normalization aids gradient convergence, so the instability of learning is mitigated when applying the learnable activation. Our novel BNN model improves classification performance over other existing BNN models; in particular, the proposed PresB-Net-18 achieves 73.84% Top-1 inference accuracy on the CIFAR-100 dataset, outperforming existing counterparts.
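The abstract combines three ingredients: sign-binarized weights and activations trained with a straight-through estimator, grouped convolution followed by a channel shuffle, and a PReLU-style activation with extra trainable parameters, plus a normalization that balances the shuffled groups. Below is a minimal PyTorch sketch of how such pieces could fit together. It is an illustration only, not the authors' implementation: all class names (BinarizeSTE, LearnableActivation, BinarizedShuffleBlock) are hypothetical, and GroupNorm is used as a stand-in for the paper's proposed group-balancing normalization.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator for the gradient."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped identity: pass gradients through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


def channel_shuffle(x, groups):
    """Interleave channels so information mixes across convolution groups."""
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))


class LearnableActivation(nn.Module):
    """PReLU-style activation with per-channel trainable shift and slope."""

    def __init__(self, channels):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.prelu = nn.PReLU(channels)

    def forward(self, x):
        return self.prelu(x + self.bias)


class BinarizedShuffleBlock(nn.Module):
    """Binarized grouped conv -> group norm -> channel shuffle -> learnable act."""

    def __init__(self, in_ch, out_ch, groups=2):
        super().__init__()
        self.groups = groups
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1,
                              groups=groups, bias=False)
        # GroupNorm normalizes each group separately; a stand-in (assumption)
        # for the paper's normalization that balances the shuffled groups.
        self.norm = nn.GroupNorm(groups, out_ch)
        self.act = LearnableActivation(out_ch)

    def forward(self, x):
        x = BinarizeSTE.apply(x)
        w = BinarizeSTE.apply(self.conv.weight)
        x = F.conv2d(x, w, padding=1, groups=self.groups)
        x = self.norm(x)
        x = channel_shuffle(x, self.groups)
        return self.act(x)


block = BinarizedShuffleBlock(16, 16)
out = block(torch.randn(1, 16, 8, 8))
print(out.shape)  # torch.Size([1, 16, 8, 8])

Normalizing each group before the shuffle approximates the balancing role the abstract attributes to the proposed normalization: without it, groups can drift to different scales, which destabilizes training once the learnable activation's extra parameters are added.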


[Figures 1-13 of the article are available in the PMC full text.]

Similar articles

1. PresB-Net: parametric binarized neural network with learnable activations and shuffled grouped convolution. PeerJ Comput Sci. 2022 Jan 3;8:e842. doi: 10.7717/peerj-cs.842. eCollection 2022.
2. AresB-Net: accurate residual binarized neural networks using shortcut concatenation and shuffled grouped convolution. PeerJ Comput Sci. 2021 Mar 26;7:e454. doi: 10.7717/peerj-cs.454. eCollection 2021.
3. A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks. PeerJ Comput Sci. 2022 Mar 29;8:e924. doi: 10.7717/peerj-cs.924. eCollection 2022.
4. E2FIF: Push the Limit of Binarized Deep Imagery Super-Resolution Using End-to-End Full-Precision Information Flow. IEEE Trans Image Process. 2023;32:5379-5393. doi: 10.1109/TIP.2023.3315540. Epub 2023 Oct 5.
5. ECG signal classification with binarized convolutional neural network. Comput Biol Med. 2020 Jun;121:103800. doi: 10.1016/j.compbiomed.2020.103800. Epub 2020 May 5.
6. Toward Accurate Binarized Neural Networks With Sparsity for Mobile Application. IEEE Trans Neural Netw Learn Syst. 2022 May 27;PP. doi: 10.1109/TNNLS.2022.3173498.
7. Pre-Computing Batch Normalisation Parameters for Edge Devices on a Binarized Neural Network. Sensors (Basel). 2023 Jun 14;23(12):5556. doi: 10.3390/s23125556.
8. Pattern Classification Using Quantized Neural Networks for FPGA-Based Low-Power IoT Devices. Sensors (Basel). 2022 Nov 10;22(22):8694. doi: 10.3390/s22228694.
9. FPGA Implementation of Keyword Spotting System Using Depthwise Separable Binarized and Ternarized Neural Networks. Sensors (Basel). 2023 Jun 19;23(12):5701. doi: 10.3390/s23125701.
10. Highly parallelized memristive binary neural network. Neural Netw. 2021 Dec;144:565-572. doi: 10.1016/j.neunet.2021.09.016. Epub 2021 Sep 27.
