

Stroke Sequence-Dependent Deep Convolutional Neural Network for Online Handwritten Chinese Character Recognition.

Publication

IEEE Trans Neural Netw Learn Syst. 2020 Nov;31(11):4637-4648. doi: 10.1109/TNNLS.2019.2956965. Epub 2020 Oct 29.

DOI: 10.1109/TNNLS.2019.2956965
PMID: 31905151
Abstract

We propose a novel model, called the stroke sequence-dependent deep convolutional neural network (SSDCNN), which uses the stroke sequence information and eight-directional features of Chinese characters for online handwritten Chinese character recognition (OLHCCR). SSDCNN learns the representation of online handwritten Chinese characters (OLHCCs) by incorporating the natural sequence information of the strokes, and it naturally incorporates the eight-directional features as well. First, SSDCNN takes the stroke sequence as input and transforms it into stacks of feature maps following the writing order of the strokes. Second, a fixed-length, stroke sequence-dependent representation of the OLHCC is derived through convolutional, residual, and max-pooling operations. Third, the stroke sequence-dependent representation is combined with the eight-directional features via several fully connected layers. Finally, the character is recognized by a softmax classifier. SSDCNN is trained in two stages: 1) the whole architecture is pretrained on the training data until performance converges to an acceptable degree; 2) the stroke sequence-dependent representation is combined with the eight-directional features through a fully connected network and a softmax layer for further training. The model was evaluated on the OLHCCR competition tasks of the International Conference on Document Analysis and Recognition (ICDAR) 2013. SSDCNN reduced the recognition error by up to 58.28% relative to a model using the eight-directional features alone (2.14% versus 5.13% error). With an accuracy of 97.86%, SSDCNN also reduced the recognition error by approximately 18.0% compared with the winning system of the ICDAR 2013 competition. Integrated with an adaptation mechanism (the SSDCNN+Adapt model), it reached a new state-of-the-art (SOTA) accuracy of 97.94%. SSDCNN exploits the stroke sequence information to learn high-quality OLHCC representations, and the learned representation and the classical eight-directional features complement each other within the SSDCNN architecture.
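The input side of the pipeline described above (an ordered stroke sequence turned into stacked feature maps, plus direction-based features) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the 32x32 grid, the 8-stroke depth, and the simplified direction histogram (a stand-in for the classical eight-directional features) are all illustrative assumptions.

```python
import numpy as np

def strokes_to_maps(strokes, size=32, max_strokes=8):
    """Rasterize an ordered stroke sequence into a fixed-depth stack of
    binary maps, one map per stroke in writing order (illustrative sketch
    of SSDCNN's stroke-sequence input; sizes are assumptions)."""
    maps = np.zeros((max_strokes, size, size), dtype=np.float32)
    for i, stroke in enumerate(strokes[:max_strokes]):
        pts = np.asarray(stroke, dtype=np.float32)
        # Normalize each stroke's coordinates onto the map grid.
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        span = np.maximum(hi - lo, 1e-6)
        grid = ((pts - lo) / span * (size - 1)).astype(int)
        maps[i, grid[:, 1], grid[:, 0]] = 1.0
    return maps

def eight_direction_features(strokes):
    """Histogram of writing directions over all stroke segments, quantized
    into 8 bins: a simplified stand-in for eight-directional features."""
    hist = np.zeros(8, dtype=np.float32)
    for stroke in strokes:
        pts = np.asarray(stroke, dtype=np.float32)
        d = np.diff(pts, axis=0)                      # segment vectors
        angles = np.arctan2(d[:, 1], d[:, 0])         # in [-pi, pi]
        bins = ((angles + np.pi) / (2 * np.pi) * 8).astype(int) % 8
        for b in bins:
            hist[b] += 1
    return hist / max(hist.sum(), 1.0)

# Two toy strokes of the character "十": a horizontal then a vertical line.
horizontal = [(x, 50) for x in range(0, 101, 5)]
vertical = [(50, y) for y in range(0, 101, 5)]

maps = strokes_to_maps([horizontal, vertical])
print(maps.shape)   # (8, 32, 32): fixed-length stack, unused strokes stay zero
feat = eight_direction_features([horizontal, vertical])
print(feat)         # mass split between the two direction bins used
```

In the actual model, the stacked maps would feed the convolutional/residual/max-pooling stack, and the directional features would join the learned representation at the fully connected layers before the softmax.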


Similar Articles

1. Stroke Sequence-Dependent Deep Convolutional Neural Network for Online Handwritten Chinese Character Recognition.
IEEE Trans Neural Netw Learn Syst. 2020 Nov;31(11):4637-4648. doi: 10.1109/TNNLS.2019.2956965. Epub 2020 Oct 29.
2. Drawing and Recognizing Chinese Characters with Recurrent Neural Network.
IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):849-862. doi: 10.1109/TPAMI.2017.2695539. Epub 2017 Apr 18.
3. Full depth CNN classifier for handwritten and license plate characters recognition.
PeerJ Comput Sci. 2021 Jun 18;7:e576. doi: 10.7717/peerj-cs.576. eCollection 2021.
4. Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition.
IEEE Trans Pattern Anal Mach Intell. 2018 Aug;40(8):1903-1917. doi: 10.1109/TPAMI.2017.2732978. Epub 2017 Jul 28.
5. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
IEEE Trans Image Process. 2018;27(1):106-120. doi: 10.1109/TIP.2017.2755766.
6. Recognition of Pashto Handwritten Characters Based on Deep Learning.
Sensors (Basel). 2020 Oct 17;20(20):5884. doi: 10.3390/s20205884.
7. Analysis of stroke structures of handwritten Chinese characters.
IEEE Trans Syst Man Cybern B Cybern. 1999;29(1):47-61. doi: 10.1109/3477.740165.
8. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.
IEEE Trans Cybern. 2017 Apr;47(4):1017-1027. doi: 10.1109/TCYB.2016.2536638. Epub 2016 Mar 16.
9. Cross-Convolutional-Layer Pooling for Image Recognition.
IEEE Trans Pattern Anal Mach Intell. 2017 Nov;39(11):2305-2313. doi: 10.1109/TPAMI.2016.2637921. Epub 2016 Dec 9.
10. Evaluation of convolutional neural networks for visual recognition.
IEEE Trans Neural Netw. 1998;9(4):685-96. doi: 10.1109/72.701181.

Cited By

1. Interpol questioned documents review 2019-2022.
Forensic Sci Int Synerg. 2023 Feb 24;6:100300. doi: 10.1016/j.fsisyn.2022.100300. eCollection 2023.
2. Detecting COVID-19 from digitized ECG printouts using 1D convolutional neural networks.
PLoS One. 2022 Nov 4;17(11):e0277081. doi: 10.1371/journal.pone.0277081. eCollection 2022.
3. One shot ancient character recognition with siamese similarity network.
Sci Rep. 2022 Sep 1;12(1):14820. doi: 10.1038/s41598-022-18986-z.