

Graph Regularized Auto-Encoders for Image Representation.

Publication Info

IEEE Trans Image Process. 2017 Jun;26(6):2839-2852. doi: 10.1109/TIP.2016.2605010. Epub 2016 Aug 31.

DOI: 10.1109/TIP.2016.2605010
PMID: 28113587
Abstract

Image representation has been intensively explored in the domain of computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves its inherent information from the original image space. From the perspective of manifold learning, this is implemented with the local invariance idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, the experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and they also suggest that GAE is a superior solution among current deep representation learning techniques compared with variant auto-encoders and existing local invariant methods.
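The abstract describes the GAE objective in words: an auto-encoder reconstruction term plus a graph regularizer that keeps input-space neighbors close in the representation space. A minimal numpy sketch of such a loss is below; this is an illustrative reading of the abstract, not the authors' implementation, and the function name, the regularization weight `lam`, and the matrix shapes are assumptions:

```python
import numpy as np

def graph_regularized_ae_loss(X, H, X_hat, W, lam=0.1):
    """Illustrative graph-regularized auto-encoder loss.

    X     : (n, d) original inputs
    H     : (n, k) hidden representations (encoder outputs)
    X_hat : (n, d) reconstructions (decoder outputs)
    W     : (n, n) symmetric affinity matrix built in the input space
    lam   : weight of the graph regularizer
    """
    # Standard auto-encoder reconstruction error.
    recon = np.sum((X - X_hat) ** 2)

    # Graph regularizer: sum_ij W_ij * ||h_i - h_j||^2 = 2 * tr(H^T L H),
    # with L = D - W the graph Laplacian. It penalizes representations
    # that pull apart points that are neighbors in the input space.
    D = np.diag(W.sum(axis=1))
    L = D - W
    reg = np.trace(H.T @ L @ H)

    return recon + lam * reg
```

If two points are connected in `W` and receive identical hidden codes, the regularizer contributes nothing; codes that separate connected neighbors are penalized, which is the local-invariance behavior the abstract attributes to the graph term.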


Similar Articles

1. Graph Regularized Auto-Encoders for Image Representation.
   IEEE Trans Image Process. 2017 Jun;26(6):2839-2852. doi: 10.1109/TIP.2016.2605010. Epub 2016 Aug 31.
2. Dual Graph Regularized Latent Low-Rank Representation for Subspace Clustering.
   IEEE Trans Image Process. 2015 Dec;24(12):4918-33. doi: 10.1109/TIP.2015.2472277. Epub 2015 Aug 24.
3. Laplacian Regularized Low-Rank Representation and Its Applications.
   IEEE Trans Pattern Anal Mach Intell. 2016 Mar;38(3):504-17. doi: 10.1109/TPAMI.2015.2462360.
4. Graph regularized sparse coding for image representation.
   IEEE Trans Image Process. 2011 May;20(5):1327-36. doi: 10.1109/TIP.2010.2090535. Epub 2010 Nov 1.
5. Discerning Feature Supported Encoder for Image Representation.
   IEEE Trans Image Process. 2019 Aug;28(8):3728-3738. doi: 10.1109/TIP.2019.2900646. Epub 2019 Feb 21.
6. Graph Regularized Non-Negative Low-Rank Matrix Factorization for Image Clustering.
   IEEE Trans Cybern. 2017 Nov;47(11):3840-3853. doi: 10.1109/TCYB.2016.2585355. Epub 2016 Jul 20.
7. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.
   IEEE Trans Cybern. 2017 Apr;47(4):1017-1027. doi: 10.1109/TCYB.2016.2536638. Epub 2016 Mar 16.
8. Automated graph regularized projective nonnegative matrix factorization for document clustering.
   IEEE Trans Cybern. 2014 Oct;44(10):1821-31. doi: 10.1109/TCYB.2013.2296117.
9. Two-layer contractive encodings for learning stable nonlinear features.
   Neural Netw. 2015 Apr;64:4-11. doi: 10.1016/j.neunet.2014.09.008. Epub 2014 Sep 28.
10. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.
    Neural Netw. 2015 May;65:1-17. doi: 10.1016/j.neunet.2015.01.001. Epub 2015 Jan 10.

Articles Citing This Paper

1. Role of Deep Learning in Loop Closure Detection for Visual and Lidar SLAM: A Survey.
   Sensors (Basel). 2021 Feb 10;21(4):1243. doi: 10.3390/s21041243.
2. Sparse Feature Learning of Hyperspectral Imagery via Multiobjective-Based Extreme Learning Machine.
   Sensors (Basel). 2020 Feb 26;20(5):1262. doi: 10.3390/s20051262.