

DeepMoney: counterfeit money detection using generative adversarial networks.

Authors

Ali Toqeer, Jan Salman, Alkhodre Ahmad, Nauman Mohammad, Amin Muhammad, Siddiqui Muhammad Shoaib

Affiliations

Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah, Saudi Arabia.

Malaysian Institute of Information Technology, University Kuala Lumpur, Kuala Lumpur, Malaysia.

Publication

PeerJ Comput Sci. 2019 Sep 2;5:e216. doi: 10.7717/peerj-cs.216. eCollection 2019.

DOI: 10.7717/peerj-cs.216
PMID: 33816869
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7924467/
Abstract

Conventional paper currency and modern electronic currency are two important modes of transactions. In several parts of the world, conventional methodology has clear precedence over its electronic counterpart. However, the identification of forged currency paper notes is now becoming an increasingly crucial problem because of the new and improved tactics employed by counterfeiters. In this paper, a machine assisted system-dubbed DeepMoney-is proposed which has been developed to discriminate fake notes from genuine ones. For this purpose, state-of-the-art models of machine learning called Generative Adversarial Networks (GANs) are employed. GANs use unsupervised learning to train a model that can then be used to perform supervised predictions. This flexibility provides the best of both worlds by allowing unlabelled data to be trained on whilst still making concrete predictions. This technique was applied to Pakistani banknotes. State-of-the-art image processing and feature recognition techniques were used to design the overall approach of a valid input. Augmented samples of images were used in the experiments which show that a high-precision machine can be developed to recognize genuine paper money. An accuracy of 80% has been achieved. The code is available as an open source to allow others to reproduce and build upon the efforts already made.
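The record includes no code, but the abstract's core idea -- training a GAN discriminator on unlabelled data while still making supervised class predictions -- can be illustrated. Below is a rough, hypothetical numpy sketch (not the authors' implementation) of the standard semi-supervised GAN objective: the discriminator outputs K genuine-note classes plus one extra "fake" class, a cross-entropy term is computed on labelled samples, and a real-vs-fake term on unlabelled and generator samples. All names, dimensions, and the choice of K are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "discriminator": one linear layer mapping 64-d note features to
# K + 1 logits -- K genuine-note classes plus one counterfeit/fake class.
K = 2  # hypothetical: e.g. two denominations of genuine notes
W = rng.normal(scale=0.1, size=(64, K + 1))

def d_logits(x):
    return x @ W

def supervised_loss(x, y):
    # Cross-entropy on labelled samples, over the K + 1 classes.
    p = softmax(d_logits(x))
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def unsupervised_loss(x_real, x_fake):
    # Unlabelled real samples should avoid the fake class (index K);
    # generator samples should be assigned to it.
    p_real = softmax(d_logits(x_real))[:, K]
    p_fake = softmax(d_logits(x_fake))[:, K]
    return (-np.mean(np.log(1.0 - p_real + 1e-12))
            - np.mean(np.log(p_fake + 1e-12)))

# Random stand-ins for labelled notes, unlabelled notes, and G's samples.
x_lab = rng.normal(size=(8, 64))
y_lab = rng.integers(0, K, size=8)
x_unl = rng.normal(size=(8, 64))
x_gen = rng.normal(size=(8, 64))

total = supervised_loss(x_lab, y_lab) + unsupervised_loss(x_unl, x_gen)
print(float(total))
```

In a real system the linear map would be a convolutional network and `x_gen` would come from the trained generator; the point here is only the shape of the combined objective that lets unlabelled banknote images contribute to training.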


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/739f9d9a9db9/peerj-cs-05-216-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/8b41ab1f0f7b/peerj-cs-05-216-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/a5e3b8dc0e47/peerj-cs-05-216-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/0f67bc3ced4e/peerj-cs-05-216-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/2a61021df770/peerj-cs-05-216-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/e90406d8b41e/peerj-cs-05-216-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/6ac0d854bd2a/peerj-cs-05-216-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/d2e5f897154b/peerj-cs-05-216-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/aaba386e519d/peerj-cs-05-216-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/24abeb01ce00/peerj-cs-05-216-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/6733f3b0e5ad/peerj-cs-05-216-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/ab667eb37cad/peerj-cs-05-216-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/0b7a537c188e/peerj-cs-05-216-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/5122fdb10272/peerj-cs-05-216-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/dddf660b137b/peerj-cs-05-216-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/8287c8fae959/peerj-cs-05-216-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/a2a7d5ff9529/peerj-cs-05-216-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3db/7924467/3eaf8c62b303/peerj-cs-05-216-g018.jpg

Similar articles

1. DeepMoney: counterfeit money detection using generative adversarial networks.
PeerJ Comput Sci. 2019 Sep 2;5:e216. doi: 10.7717/peerj-cs.216. eCollection 2019.
2. Semi-Supervised Generative Adversarial Nets with Multiple Generators for SAR Image Recognition.
Sensors (Basel). 2018 Aug 17;18(8):2706. doi: 10.3390/s18082706.
3. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Neuroimage. 2018 Jul 1;174:550-562. doi: 10.1016/j.neuroimage.2018.03.045. Epub 2018 Mar 20.
4. Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks.
Neural Netw. 2021 Jan;133:148-156. doi: 10.1016/j.neunet.2020.10.016. Epub 2020 Nov 6.
5. Generative adversarial networks with decoder-encoder output noises.
Neural Netw. 2020 Jul;127:19-28. doi: 10.1016/j.neunet.2020.04.005. Epub 2020 Apr 9.
6. Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) - A Systematic Review.
Acad Radiol. 2020 Aug;27(8):1175-1185. doi: 10.1016/j.acra.2019.12.024. Epub 2020 Feb 5.
7. Exploiting Images for Video Recognition: Heterogeneous Feature Augmentation via Symmetric Adversarial Learning.
IEEE Trans Image Process. 2019 Nov;28(11):5308-5321. doi: 10.1109/TIP.2019.2917867. Epub 2019 May 24.
8. Semi-Supervised Learning for Low-Dose CT Image Restoration with Hierarchical Deep Generative Adversarial Network (HD-GAN).
Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:2683-2686. doi: 10.1109/EMBC.2019.8857572.
9. Hardness Recognition of Robotic Forearm Based on Semi-supervised Generative Adversarial Networks.
Front Neurorobot. 2019 Sep 6;13:73. doi: 10.3389/fnbot.2019.00073. eCollection 2019.
10. Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network.
Comput Methods Programs Biomed. 2020 Jun;189:105275. doi: 10.1016/j.cmpb.2019.105275. Epub 2019 Dec 12.

Cited by

1. Recognizing Egyptian currency for people with visual impairment using deep learning models.
Sci Rep. 2025 Oct 1;15(1):34288. doi: 10.1038/s41598-025-20646-x.
2. Achieving model explainability for intrusion detection in VANETs with LIME.
PeerJ Comput Sci. 2023 Jun 22;9:e1440. doi: 10.7717/peerj-cs.1440. eCollection 2023.

References

1. Long short-term memory.
Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.