

DiCoDiLe: Distributed Convolutional Dictionary Learning.

Authors

Moreau Thomas, Gramfort Alexandre

Publication

IEEE Trans Pattern Anal Mach Intell. 2022 May;44(5):2426-2437. doi: 10.1109/TPAMI.2020.3039215. Epub 2022 Apr 1.

Abstract

Convolutional dictionary learning (CDL) estimates shift-invariant bases adapted to represent signals or images. CDL has proven useful for image denoising and inpainting, as well as for pattern discovery on multivariate signals. In contrast to standard patch-based dictionary learning, patterns estimated by CDL can be positioned anywhere in signals or images. Optimization techniques consequently face the difficulty of working with extremely large inputs with millions of pixels or time samples. To address this optimization problem, we propose a distributed and asynchronous algorithm that employs locally greedy coordinate descent and a soft-locking mechanism that does not require a central server. Computation can be distributed over a number of workers that scales linearly with the size of the data. The parallel computation accelerates parameter estimation, and the distributed setting allows our algorithm to be used with data that do not fit into a single computer's RAM. Experiments confirm the theoretical scaling properties of the algorithm. This allows us to demonstrate improved pattern recovery as images grow in size, and to learn patterns on images from the Hubble Space Telescope containing tens of millions of pixels.

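The abstract names locally greedy coordinate descent (LGCD) as the inner sparse-coding rule that each worker applies to its local portion of the data. The following is a minimal, single-machine sketch of that rule for a 1D signal using numpy; the function and parameter names (lgcd_sparse_code, reg, n_seg) are illustrative assumptions and do not reflect the authors' released code or API.

```python
import numpy as np


def soft_threshold(b, reg):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(b) * np.maximum(np.abs(b) - reg, 0.0)


def lgcd_sparse_code(x, D, reg=0.1, n_iter=200, n_seg=10):
    """Sparse code a 1D signal x with a dictionary D of shape (n_atoms, atom_len).

    Minimizes 0.5 * ||x - sum_k z_k * d_k||^2 + reg * ||z||_1 by repeatedly
    updating, inside one local segment at a time, the single coordinate whose
    update would change the most (the locally greedy rule).
    """
    n_atoms, atom_len = D.shape
    n_times_valid = len(x) - atom_len + 1
    z = np.zeros((n_atoms, n_times_valid))
    norms = (D ** 2).sum(axis=1)       # ||d_k||^2 for each atom
    residual = x.astype(float).copy()  # r = x - sum_k z_k * d_k

    # Contiguous segments of valid positions; in the distributed algorithm,
    # each worker owns segments like these.
    bounds = np.linspace(0, n_times_valid, n_seg + 1, dtype=int)

    for it in range(n_iter):
        start, end = bounds[it % n_seg], bounds[it % n_seg + 1]
        best, best_diff = None, 0.0
        for k in range(n_atoms):
            for t in range(start, end):
                # Correlation of d_k with the residual, adding back the current
                # contribution of coordinate (k, t), followed by soft-thresholding.
                beta = D[k] @ residual[t:t + atom_len] + norms[k] * z[k, t]
                z_new = soft_threshold(beta, reg) / norms[k]
                if abs(z_new - z[k, t]) > best_diff:
                    best, best_diff = (k, t, z_new), abs(z_new - z[k, t])
        if best is None:  # nothing to update in this segment
            continue
        k, t, z_new = best
        residual[t:t + atom_len] -= (z_new - z[k, t]) * D[k]  # keep r consistent
        z[k, t] = z_new
    return z


# Toy usage: a random dictionary of 3 atoms of length 32 on a random signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((3, 32))
x = rng.standard_normal(1000)
z_hat = lgcd_sparse_code(x, D, reg=5.0)
print("non-zero coefficients:", np.count_nonzero(z_hat))
```

In the paper's distributed setting, workers run such updates asynchronously on their own segments and exchange border updates with neighbours under the soft-locking mechanism; the sketch above only illustrates the coordinate update and the locally greedy selection on a single process.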
