Robust Matrix Factorization by Majorization Minimization.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2018 Jan;40(1):208-220. doi: 10.1109/TPAMI.2017.2651816. Epub 2017 Jan 11.

Abstract

ℓ1-norm based low-rank matrix factorization in the presence of missing data and outliers remains a hot topic in computer vision. Due to non-convexity and non-smoothness, all the existing methods either lack scalability or robustness, or have no theoretical guarantee on convergence. In this paper, we apply the Majorization Minimization technique to solve this problem. At each iteration, we upper-bound the original function with a strongly convex surrogate. By minimizing the surrogate and updating the iterates accordingly, the objective function achieves sufficient decrease, a stronger property than the mere non-increase that other methods offer. As a consequence, without extra assumptions, we prove that any limit point of the iterates is a stationary point of the objective function. In comparison, other methods either lack such a convergence guarantee or require extra critical assumptions. Extensive experiments on both synthetic and real data sets testify to the effectiveness of our algorithm. The speed of our method is also highly competitive.
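As a concrete illustration of the majorize-then-minimize pattern the abstract describes, below is a minimal Python sketch for the ℓ1 objective ||W ⊙ (X − UV)||₁ with a binary observation mask W. It is an assumption-laden stand-in, not the paper's algorithm: it majorizes each residual with the classic quadratic bound |r| ≤ r²/(2|r_k|) + |r_k|/2 and adds proximal terms for strong convexity, whereas the authors construct their own strongly convex surrogate. The function name robust_mf_mm and the parameters rho and eps are illustrative assumptions.

```python
import numpy as np

def robust_mf_mm(X, W, rank, iters=100, eps=1e-6, rho=1e-3, seed=0):
    """Illustrative MM loop for min_{U,V} ||W * (X - U @ V)||_1.

    W is a binary observation mask (1 = observed, 0 = missing).
    Majorization: the quadratic bound |r| <= r^2 / (2|r_k|) + |r_k|/2,
    tight at the current residual r_k, turns the L1 objective into a
    weighted least-squares surrogate; proximal terms rho*||u - u_k||^2
    make each block subproblem strongly convex. Minimization: exact
    alternating solves over the rows of U and columns of V decrease the
    surrogate and hence the L1 objective. This is an IRLS-style sketch,
    not the paper's exact surrogate construction.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((rank, n))
    for _ in range(iters):
        # Majorization step: per-entry weights 1 / (2|r_k|) from the
        # current residuals; missing entries (W == 0) get zero weight.
        R = W * (X - U @ V)
        S = W / (2.0 * np.maximum(np.abs(R), eps))
        # Minimization step: weighted proximal ridge solve per row of U
        # (rows are independent, so U[i] on the right is still the old
        # iterate when row i is solved) ...
        for i in range(m):
            A = (V * S[i]) @ V.T + rho * np.eye(rank)
            U[i] = np.linalg.solve(A, (V * S[i]) @ X[i] + rho * U[i])
        # ... and per column of V, with the freshly updated U.
        for j in range(n):
            A = (U.T * S[:, j]) @ U + rho * np.eye(rank)
            V[:, j] = np.linalg.solve(
                A, (U.T * S[:, j]) @ X[:, j] + rho * V[:, j])
    return U, V


if __name__ == "__main__":
    # Toy check: rank-3 matrix, 20% missing entries, 5% gross outliers.
    rng = np.random.default_rng(1)
    m, n, r = 60, 40, 3
    X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    W = (rng.random((m, n)) > 0.2).astype(float)
    X = X_true.copy()
    mask = rng.random((m, n)) < 0.05
    X[mask] += 10.0 * rng.standard_normal(mask.sum())
    U, V = robust_mf_mm(X, W, rank=r, iters=200)
    err = np.abs(W * (X_true - U @ V)).sum() / W.sum()
    print("mean |error| on observed entries:", float(err))
```

Because each iteration only tightens a pointwise upper bound and then solves strongly convex subproblems exactly, the ℓ1 objective is monotonically non-increasing in this sketch; the sufficient-decrease and stationary-point guarantees stated in the abstract belong to the paper's own surrogate, not to this simplified variant.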

