
Hybrid ISTA: Unfolding ISTA With Convergence Guarantees Using Free-Form Deep Neural Networks.

Authors

Zheng Ziyang, Dai Wenrui, Xue Duoduo, Li Chenglin, Zou Junni, Xiong Hongkai

Publication

IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3226-3244. doi: 10.1109/TPAMI.2022.3172214. Epub 2023 Feb 3.

Abstract

It is promising to solve linear inverse problems by unfolding iterative algorithms (e.g., the iterative shrinkage thresholding algorithm (ISTA)) as deep neural networks (DNNs) with learnable parameters. However, existing ISTA-based unfolded algorithms restrict the network architectures for iterative updates with the partial weight coupling structure to guarantee convergence. In this paper, we propose hybrid ISTA to unfold ISTA with both pre-computed and learned parameters by incorporating free-form DNNs (i.e., DNNs with arbitrary feasible and reasonable network architectures), while ensuring theoretical convergence. We first develop HCISTA to improve the efficiency and flexibility of classical ISTA (with pre-computed parameters) without compromising the convergence rate in theory. Furthermore, the DNN-based hybrid algorithm is generalized to popular variants of learned ISTA, dubbed HLISTA, to enable a free architecture of learned parameters with a guarantee of linear convergence. To the best of our knowledge, this paper is the first to provide a convergence-provable framework that enables free-form DNNs in ISTA-based unfolded algorithms. This general framework endows arbitrary DNNs with convergence guarantees when solving linear inverse problems. Extensive experiments demonstrate that hybrid ISTA can reduce the reconstruction error with an improved convergence rate in the tasks of sparse recovery and compressive sensing.

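For reference, the classical ISTA baseline that the abstract builds on (gradient step with a pre-computed step size 1/L followed by soft thresholding) can be sketched as below. This is a generic textbook sketch of ISTA applied to a sparse-recovery toy problem, not code from the paper; the problem sizes, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Element-wise shrinkage: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iters=500):
    """Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Uses the pre-computed step size 1/L, where L = ||A||_2^2 is the
    Lipschitz constant of the gradient of the data-fidelity term.
    """
    L = np.linalg.norm(A, 2) ** 2  # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)          # gradient of 0.5*||Ax - y||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse-recovery setup: observe y = A @ x0 for a 5-sparse x0.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
x0 = np.zeros(128)
x0[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = A @ x0
x_hat = ista(A, y, lam=0.01)
```

Unfolded variants such as LISTA replace the fixed matrices and thresholds in each iteration with learned, layer-wise parameters; hybrid ISTA, per the abstract, interleaves such learned updates with the pre-computed ones above while retaining convergence guarantees.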
