
Double shrinking sparse dimension reduction.

Author Information

Centre for Quantum Computation & Intelligent Systems and the Faculty of Engineering & Information Technology, University of Technology, Sydney NSW 2007, Australia.

Publication Information

IEEE Trans Image Process. 2013 Jan;22(1):244-57. doi: 10.1109/TIP.2012.2202678. Epub 2012 Jun 5.

Abstract

Learning tasks such as classification and clustering usually perform better and cost less (in time and space) on compressed representations than on the original data. Previous work mainly compresses data via dimension reduction. In this paper, we propose "double shrinking" to compress image data in both dimensionality and cardinality by building either sparse low-dimensional representations or a sparse projection matrix for dimension reduction. We formulate the double shrinking model (DSM) as an l1-regularized variance maximization with the constraint ||x||_2 = 1, and develop a double shrinking algorithm (DSA) to optimize DSM. DSA is a path-following algorithm that builds the whole solution path of locally optimal solutions at different sparsity levels. Each solution on the path is a "warm start" for searching the next, sparser one. In each iteration of DSA, the direction, the step size, and the Lagrange multiplier are deduced from the Karush-Kuhn-Tucker (KKT) conditions. The magnitudes of trivial variables are shrunk while the importance of critical variables is simultaneously augmented along the selected direction with the determined step size. Double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can be combined with classification and clustering to boost their performance. The experimental results suggest that double shrinking produces efficient and effective data compression.
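The core objective in the abstract, l1-regularized variance maximization under ||x||_2 = 1, can be illustrated with a much simpler relative of DSA: power iteration with soft-thresholding (the truncated power method used in sparse PCA). The sketch below is an illustrative approximation of that objective only, not the paper's KKT-based path-following algorithm; the function name `sparse_leading_direction` and its parameters are hypothetical.

```python
import numpy as np

def sparse_leading_direction(C, lam, x0=None, n_iter=200, seed=0):
    """Approximately maximize x^T C x - lam * ||x||_1 subject to ||x||_2 = 1
    by power iteration with soft-thresholding (truncated power method).
    Illustrative sketch of the l1 + unit-norm objective only; NOT the
    paper's KKT-based path-following DSA."""
    rng = np.random.default_rng(seed)
    d = C.shape[0]
    x = np.array(x0, dtype=float) if x0 is not None else rng.normal(size=d)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        g = C @ x                        # ascent direction of the variance term
        # shrink trivial coordinates toward zero (soft-thresholding)
        x = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
        nrm = np.linalg.norm(x)
        if nrm == 0.0:                   # lam too large: all coordinates shrunk away
            return np.zeros(d)
        x /= nrm                         # project back onto the unit sphere
    return x
```

Sweeping `lam` upward and passing each solution as `x0` for the next, sparser problem loosely mimics the warm-start solution path the abstract describes.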

