

DiCoDiLe: Distributed Convolutional Dictionary Learning

Authors

Moreau Thomas, Gramfort Alexandre

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2426-2437. doi: 10.1109/TPAMI.2020.3039215. Epub 2022 Apr 1.

DOI: 10.1109/TPAMI.2020.3039215
PMID: 33211653
Abstract

Convolutional dictionary learning (CDL) estimates a shift-invariant basis adapted to represent signals or images. CDL has proven useful for image denoising and inpainting, as well as for pattern discovery in multivariate signals. In contrast to standard patch-based dictionary learning, patterns estimated by CDL can be positioned anywhere in the signal or image. Optimization techniques consequently face the difficulty of working with extremely large inputs with millions of pixels or time samples. To address this optimization problem, we propose a distributed and asynchronous algorithm, employing locally greedy coordinate descent and a soft-locking mechanism that does not require a central server. Computation can be distributed over a number of workers that scales linearly with the size of the data. The parallel computation accelerates the parameter estimation, and the distributed setting allows our algorithm to be used with data that do not fit into a single computer's RAM. Experiments confirm the theoretical scaling properties of the algorithm. This allows us to demonstrate improved pattern recovery as images grow in size, and to learn patterns on images from the Hubble Space Telescope containing tens of millions of pixels.
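The abstract refers to locally greedy coordinate descent for the sparse-coding step of CDL. As an illustration only (not the authors' distributed DiCoDiLe implementation), the sketch below shows single-worker greedy coordinate descent for 1-D convolutional sparse coding: at each step, every coordinate's closed-form soft-threshold update is computed, and only the coordinate whose value would change the most is actually updated. The function names and the naive full-residual recomputation are assumptions for clarity, not part of the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def greedy_cd_csc(x, D, lam, n_iter=1000):
    """Greedy coordinate descent for 1-D convolutional sparse coding.

    Approximately solves
        min_z 0.5 * ||x - sum_k conv(z_k, d_k)||^2 + lam * ||z||_1
    where D has shape (K, L) and each activation map z_k has
    length len(x) - L + 1 ('valid' convolution support).
    """
    K, L = D.shape
    T = len(x) - L + 1
    z = np.zeros((K, T))
    norms = np.sum(D ** 2, axis=1)  # ||d_k||^2 for each atom
    for _ in range(n_iter):
        # Residual of the current reconstruction (recomputed naively here).
        r = x - sum(np.convolve(z[k], D[k]) for k in range(K))
        # beta[k, t]: correlation of the residual with atom k at shift t,
        # plus the current coordinate's own contribution.
        beta = np.array([np.correlate(r, D[k], mode="valid") for k in range(K)])
        beta += norms[:, None] * z
        # Closed-form optimal value for each coordinate taken in isolation.
        z_new = soft_threshold(beta, lam) / norms[:, None]
        # Greedy rule: update only the coordinate with the largest change.
        k, t = np.unravel_index(np.argmax(np.abs(z_new - z)), z.shape)
        if abs(z_new[k, t] - z[k, t]) < 1e-10:
            break  # no coordinate improves the objective any further
        z[k, t] = z_new[k, t]
    return z
```

Because each update touches a single coordinate, and a coordinate at position t only affects the residual within a window of length L around t, updates at distant positions are independent; this locality is what lets the paper's algorithm partition the signal across workers and resolve boundary conflicts with soft-locking instead of a central server.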


Similar Articles

1. DiCoDiLe: Distributed Convolutional Dictionary Learning.
IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2426-2437. doi: 10.1109/TPAMI.2020.3039215. Epub 2022 Apr 1.

2. Slice-Based Online Convolutional Dictionary Learning.
IEEE Trans Cybern. 2021 Oct;51(10):5116-5129. doi: 10.1109/TCYB.2019.2931914. Epub 2021 Oct 12.

3. Joint and Direct Optimization for Dictionary Learning in Convolutional Sparse Representation.
IEEE Trans Neural Netw Learn Syst. 2020 Feb;31(2):559-573. doi: 10.1109/TNNLS.2019.2906074. Epub 2019 Apr 19.

4. Convolutional Dictionary Learning: Acceleration and Convergence.
IEEE Trans Image Process. 2018 Apr;27(4):1697-1712. doi: 10.1109/TIP.2017.2761545. Epub 2017 Oct 9.

5. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.
IEEE Trans Image Process. 2015 Nov;24(11):4556-69. doi: 10.1109/TIP.2015.2468172. Epub 2015 Aug 13.

6. Person Re-Identification by Cross-View Multi-Level Dictionary Learning.
IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2963-2977. doi: 10.1109/TPAMI.2017.2764893. Epub 2017 Oct 26.

7. Multi-Modal Convolutional Dictionary Learning.
IEEE Trans Image Process. 2022;31:1325-1339. doi: 10.1109/TIP.2022.3141251. Epub 2022 Jan 25.

8. Learning Smooth Pattern Transformation Manifolds.
IEEE Trans Image Process. 2013 Apr;22(4):1311-25. doi: 10.1109/TIP.2012.2227768. Epub 2012 Nov 16.

9. Online Adaptive Image Reconstruction (OnAIR) Using Dictionary Models.
IEEE Trans Comput Imaging. 2020;6:153-166. doi: 10.1109/tci.2019.2931092.

10. Blind Compressive Sensing Dynamic MRI.
IEEE Trans Med Imaging. 2013 Jun;32(6):1132-45. doi: 10.1109/TMI.2013.2255133. Epub 2013 Mar 27.