


Robust Matrix Factorization by Majorization Minimization.

Publication information

IEEE Trans Pattern Anal Mach Intell. 2018 Jan;40(1):208-220. doi: 10.1109/TPAMI.2017.2651816. Epub 2017 Jan 11.

DOI: 10.1109/TPAMI.2017.2651816
PMID: 28092520
Abstract

L1-norm based low rank matrix factorization in the presence of missing data and outliers remains a hot topic in computer vision. Due to non-convexity and non-smoothness, all the existing methods either lack scalability or robustness, or have no theoretical guarantee on convergence. In this paper, we apply the Majorization Minimization technique to solve this problem. At each iteration, we upper bound the original function with a strongly convex surrogate. By minimizing the surrogate and updating the iterates accordingly, the objective function has sufficient decrease, which is stronger than just being non-increasing that other methods could offer. As a consequence, without extra assumptions, we prove that any limit point of the iterates is a stationary point of the objective function. In comparison, other methods either do not have such a convergence guarantee or require extra critical assumptions. Extensive experiments on both synthetic and real data sets testify to the effectiveness of our algorithm. The speed of our method is also highly competitive.
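The abstract's core mechanism, upper-bounding the objective at each iterate with a strongly convex surrogate and then minimizing that surrogate, can be illustrated on a much simpler problem than matrix factorization. The sketch below is an assumption for illustration, not the paper's algorithm: it applies majorization-minimization to L1-loss regression, where |r| is majorized by the quadratic r²/(2|rₖ|) + |rₖ|/2 (tight at r = rₖ), so each MM step reduces to a weighted least-squares solve.

```python
import numpy as np

def mm_l1_regression(A, b, iters=50, eps=1e-8):
    """Minimize sum_i |a_i^T x - b_i| by majorization-minimization.

    Each |r| is upper-bounded by a quadratic surrogate that touches it
    at the current residual r_k, so minimizing the surrogate is a
    weighted least-squares problem (this MM instance is IRLS).
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # plain L2 warm start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)  # surrogate weights 1/(2|r_k|) up to scale
        W = A * w[:, None]                    # rows of A scaled by w_i
        # Normal equations of the weighted least-squares majorizer:
        # (A^T diag(w) A) x = A^T diag(w) b
        x = np.linalg.solve(A.T @ W, W.T @ b)
    return x
```

Because the surrogate equals the objective at the current iterate and upper-bounds it everywhere else, each step cannot increase the L1 loss; the paper's contribution for the non-convex factorization setting is the stronger sufficient-decrease property and the proof that limit points are stationary.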


Similar articles

1
Robust Matrix Factorization by Majorization Minimization.
IEEE Trans Pattern Anal Mach Intell. 2018 Jan;40(1):208-220. doi: 10.1109/TPAMI.2017.2651816. Epub 2017 Jan 11.
2
Fast and Robust Non-Rigid Registration Using Accelerated Majorization-Minimization.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9681-9698. doi: 10.1109/TPAMI.2023.3247603. Epub 2023 Jun 30.
3
Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.
IEEE Trans Image Process. 2016 Feb;25(2):829-39. doi: 10.1109/TIP.2015.2511584. Epub 2015 Dec 22.
4
Efficient Recovery of Low-Rank Matrix via Double Nonconvex Nonsmooth Rank Minimization.
IEEE Trans Neural Netw Learn Syst. 2019 Oct;30(10):2916-2925. doi: 10.1109/TNNLS.2019.2900572. Epub 2019 Mar 18.
5
An efficient matrix bi-factorization alternative optimization method for low-rank matrix recovery and completion.
Neural Netw. 2013 Dec;48:8-18. doi: 10.1016/j.neunet.2013.06.013. Epub 2013 Jul 8.
6
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
IEEE Trans Image Process. 2015 Feb;24(2):646-54. doi: 10.1109/TIP.2014.2380155. Epub 2014 Dec 12.
7
Efficient Low-Rank Semidefinite Programming With Robust Loss Functions.
IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6153-6168. doi: 10.1109/TPAMI.2021.3085858. Epub 2022 Sep 14.
8
Projective multiview structure and motion from element-wise factorization.
IEEE Trans Pattern Anal Mach Intell. 2013 Sep;35(9):2238-51. doi: 10.1109/TPAMI.2013.20.
9
A Look at the Generalized Heron Problem through the Lens of Majorization-Minimization.
Am Math Mon. 2014 Feb;121(2):95-108. doi: 10.4169/amer.math.monthly.121.02.095.
10
Low-rank structure learning via nonconvex heuristic recovery.
IEEE Trans Neural Netw Learn Syst. 2013 Mar;24(3):383-96. doi: 10.1109/TNNLS.2012.2235082.