No, Albert; Weissman, Tsachy
Department of Electrical Engineering, Stanford University, Stanford, CA 94305 USA.
IEEE Trans Inf Theory. 2016 Oct;62(10):5484-5495. doi: 10.1109/tit.2016.2598148. Epub 2016 Aug 12.
We begin by presenting a simple lossy compressor operating at near-zero rate: the encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of the source components based on this information. This scheme turns out to be near optimal for the memoryless Gaussian source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme consisting of iterating the above lossy compressor on an appropriately transformed version of the difference between the source and its reconstruction from the previous iteration. The proposed scheme achieves the rate-distortion function of the memoryless Gaussian source (under squared error distortion) when employed on any finite-variance ergodic source. It further possesses desirable properties, which we respectively refer to as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than n^(1+ε)/log n per source symbol, for any ε > 0, at both the encoder and the decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced Sparse Regression Codes of Venkataramanan et al.
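For orientation, recall that for a memoryless Gaussian source of variance σ² under squared error, the distortion-rate function is D(R) = σ² e^(-2R) (rate in nats), whose slope at R = 0 is -2σ²; the "extremes" compressor is near optimal in the sense of matching this zero-rate slope. The Python sketch below is not the paper's construction, but it conveys the flavor of the idea: a near-zero-rate step that flags only a handful of components, applied repeatedly to the residual. The choice of largest-magnitude components with transmitted sign bits, the sqrt(2 ln(n/k)) reconstruction estimate, and the plain rescaling of the residual are illustrative assumptions standing in for the paper's maxima-based encoder and its residual transform.

import numpy as np

def encode_extremes(residual, k):
    # Toy "extremes" encoder: indices (and, in this simplified variant,
    # sign bits) of the k largest-magnitude components of the residual.
    idx = np.argsort(np.abs(residual))[-k:]
    signs = np.sign(residual[idx])
    return idx, signs

def decode_extremes(idx, signs, n, k, scale):
    # Toy decoder: place a natural estimate of an extreme value of n
    # roughly Gaussian samples, about scale*sqrt(2*ln(n/k)), at the
    # flagged positions and zero elsewhere.
    est = scale * np.sqrt(2.0 * np.log(n / k))
    xhat = np.zeros(n)
    xhat[idx] = signs * est
    return xhat

def iterate_extremes(x, k=8, rounds=200):
    # Iterate the toy compressor on the residual between the source and
    # the running reconstruction (a crude stand-in for the paper's
    # appropriately transformed residual).  residual.std() is used as the
    # scale; in a real scheme this would be known or sent at negligible rate.
    n = len(x)
    xhat = np.zeros(n)
    for _ in range(rounds):
        residual = x - xhat
        idx, signs = encode_extremes(residual, k)
        xhat += decode_extremes(idx, signs, n, k, scale=residual.std())
    return xhat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)   # memoryless Gaussian source
    xhat = iterate_extremes(x)
    print("per-symbol squared error:", np.mean((x - xhat) ** 2))

Each round of this toy version spends roughly k*(log2 n + 1) bits on indices and signs, so the description can be cut off after any number of rounds, which is the spirit of the ratelessness property discussed in the paper.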