

Fast dictionary learning from incomplete data

Authors

Naumova Valeriya, Schnass Karin

Affiliations

Simula Metropolitan Center for Digital Engineering, Martin Linges vei 25, Fornebu, 1325, Norway.

Department of Mathematics, University of Innsbruck, Technikerstraße 13, Innsbruck, 6020 Austria.

Publication

EURASIP J Adv Signal Process. 2018;2018(1):12. doi: 10.1186/s13634-018-0533-0. Epub 2018 Feb 22.

DOI: 10.1186/s13634-018-0533-0
PMID: 29503663
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5823970/
Abstract

This paper extends the recently proposed and theoretically justified iterative thresholding and residual means (ITKrM) algorithm to learning dictionaries from incomplete/masked training data (ITKrMM). It further adapts the algorithm to the presence of a low-rank component in the data and provides a strategy for recovering this low-rank component again from incomplete data. Several synthetic experiments show the advantages of incorporating information about the corruption into the algorithm. Further experiments on image data confirm the importance of considering a low-rank component in the data and show that the algorithm compares favourably to its closest dictionary learning counterparts, wKSVD and BPFA, either in terms of computational complexity or in terms of consistency between the dictionaries learned from corrupted and uncorrupted data. To further confirm the appropriateness of the learned dictionaries, we explore an application to sparsity-based image inpainting. There the ITKrMM dictionaries show a similar performance to other learned dictionaries like wKSVD and BPFA and a superior performance to other algorithms based on pre-defined/analytic dictionaries.
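To make the masked-data idea concrete, here is a minimal numpy sketch of one iterative-thresholding / residual-means sweep on incomplete data. This is not the authors' ITKrMM implementation: the atom selection (keep the S largest mask-rescaled correlations), the least-squares coefficients, and the sign-weighted residual-means update are simplified assumptions, and all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, S, N = 16, 24, 2, 2000          # signal dim, atoms, sparsity, samples

# Ground-truth dictionary with unit-norm atoms
D_true = rng.standard_normal((d, K))
D_true /= np.linalg.norm(D_true, axis=0)

# S-sparse signals and random erasure masks (1 = observed entry)
Y = np.zeros((d, N))
for n in range(N):
    idx = rng.choice(K, S, replace=False)
    Y[:, n] = D_true[:, idx] @ rng.standard_normal(S)
M = (rng.random((d, N)) > 0.3).astype(float)   # roughly 30% of entries erased
Ym = M * Y                                      # masked observations

def masked_itkrm_step(D, Ym, M, S):
    """One masked thresholding + residual-means sweep (illustrative sketch)."""
    D_new = np.zeros_like(D)
    for n in range(Ym.shape[1]):
        m = M[:, n]
        Dm = m[:, None] * D                         # atoms restricted to observed entries
        norms = np.linalg.norm(Dm, axis=0) + 1e-12
        corr = (Dm.T @ Ym[:, n]) / norms            # mask-rescaled correlations
        sel = np.argsort(-np.abs(corr))[:S]         # thresholding: keep S largest
        coef, *_ = np.linalg.lstsq(Dm[:, sel], Ym[:, n], rcond=None)
        res = Ym[:, n] - Dm[:, sel] @ coef          # residual on observed entries
        for j, k in enumerate(sel):                 # residual-means accumulation
            D_new[:, k] += np.sign(corr[k]) * (res + Dm[:, k] * coef[j])
    norms = np.linalg.norm(D_new, axis=0)
    used = norms > 1e-10
    D_new[:, used] /= norms[used]
    D_new[:, ~used] = D[:, ~used]                   # keep unused atoms unchanged
    return D_new

D = rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)
for _ in range(20):
    D = masked_itkrm_step(D, Ym, M, S)
```

The key difference from the uncorrupted ITKrM iteration is that every atom is re-normalized per signal on the observed coordinates before correlations are compared, so erased entries neither penalize nor favour any atom.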


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/35005913f71e/13634_2018_533_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/f6c081911d05/13634_2018_533_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/68aa8b359b74/13634_2018_533_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/77e05255625b/13634_2018_533_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/038002edc6f5/13634_2018_533_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/db7acec1a554/13634_2018_533_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/e816c2e033c3/13634_2018_533_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/0845d063ade8/13634_2018_533_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/468c/5823970/fafb080e4dac/13634_2018_533_Fig9_HTML.jpg

Similar Articles

1. Fast dictionary learning from incomplete data.
EURASIP J Adv Signal Process. 2018;2018(1):12. doi: 10.1186/s13634-018-0533-0. Epub 2018 Feb 22.
2. Learning Stable Multilevel Dictionaries for Sparse Representations.
IEEE Trans Neural Netw Learn Syst. 2015 Sep;26(9):1913-26. doi: 10.1109/TNNLS.2014.2361052. Epub 2014 Oct 16.
3. Image transformation based on learning dictionaries across image spaces.
IEEE Trans Pattern Anal Mach Intell. 2013 Feb;35(2):367-80. doi: 10.1109/TPAMI.2012.95.
4. Robust Kronecker Component Analysis.
IEEE Trans Pattern Anal Mach Intell. 2019 Oct;41(10):2365-2379. doi: 10.1109/TPAMI.2018.2881476. Epub 2018 Nov 15.
5. Deep supervised dictionary learning by algorithm unrolling—Application to fast 2D dynamic MR image reconstruction.
Med Phys. 2023 May;50(5):2939-2960. doi: 10.1002/mp.16182. Epub 2023 Jan 17.
6. Alternatively Constrained Dictionary Learning for Image Superresolution.
IEEE Trans Cybern. 2014 Mar;44(3):366-77. doi: 10.1109/TCYB.2013.2256347. Epub 2013 May 2.
7. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.
Sensors (Basel). 2017 Dec 15;17(12):2920. doi: 10.3390/s17122920.
8. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation.
PLoS One. 2017 Jan 19;12(1):e0169663. doi: 10.1371/journal.pone.0169663. eCollection 2017.
9. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.
IEEE Trans Comput Imaging. 2017 Dec;3(4):694-709. doi: 10.1109/TCI.2017.2697206. Epub 2017 Apr 21.
10. Blind compressive sensing dynamic MRI.
IEEE Trans Med Imaging. 2013 Jun;32(6):1132-45. doi: 10.1109/TMI.2013.2255133. Epub 2013 Mar 27.

Cited By

1. Application of artificial intelligence and machine learning in lung transplantation: a comprehensive review.
Front Digit Health. 2025 May 1;7:1583490. doi: 10.3389/fdgth.2025.1583490. eCollection 2025.

References

1. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity.
IEEE Trans Image Process. 2012 May;21(5):2481-99. doi: 10.1109/TIP.2011.2176743. Epub 2011 Dec 14.
2. Task-driven dictionary learning.
IEEE Trans Pattern Anal Mach Intell. 2012 Apr;34(4):791-804. doi: 10.1109/TPAMI.2011.156.
3. Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images.
IEEE Trans Image Process. 2012 Jan;21(1):130-44. doi: 10.1109/TIP.2011.2160072. Epub 2011 Jun 20.
4. Image super-resolution via sparse representation.
IEEE Trans Image Process. 2010 Nov;19(11):2861-73. doi: 10.1109/TIP.2010.2050625. Epub 2010 May 18.
5. Robust face recognition via sparse representation.
IEEE Trans Pattern Anal Mach Intell. 2009 Feb;31(2):210-27. doi: 10.1109/TPAMI.2008.79.
6. Filling-in by joint interpolation of vector fields and gray levels.
IEEE Trans Image Process. 2001;10(8):1200-11. doi: 10.1109/83.935036.
7. Sparse representation for color image restoration.
IEEE Trans Image Process. 2008 Jan;17(1):53-69. doi: 10.1109/tip.2007.911828.
8. Dictionary learning algorithms for sparse representation.
Neural Comput. 2003 Feb;15(2):349-96. doi: 10.1162/089976603762552951.
9. Learning overcomplete representations.
Neural Comput. 2000 Feb;12(2):337-65. doi: 10.1162/089976600300015826.
10. Emergence of simple-cell receptive field properties by learning a sparse code for natural images.
Nature. 1996 Jun 13;381(6583):607-9. doi: 10.1038/381607a0.