
Memory-controlled deep LSTM neural network post-equalizer used in high-speed PAM VLC system.

Author Information

Lu Xingyu, Lu Chao, Yu Weixiang, Qiao Liang, Liang Shangyu, Lau Alan Pak Tao, Chi Nan

Publication Information

Opt Express. 2019 Mar 4;27(5):7822-7833. doi: 10.1364/OE.27.007822.

Abstract

Linear and nonlinear impairments severely limit the transmission performance of high-speed visible light communication systems. Neural network-based equalizers have been applied to optical communication systems, enabling significant improvements in system performance such as transmission data rate and distance. In this paper, a memory-controlled deep long short-term memory (LSTM) neural network post-equalizer is proposed to mitigate both linear and nonlinear impairments in pulse amplitude modulation (PAM) based visible light communication (VLC) systems. Both 1.15-Gbps PAM4 and 0.9-Gbps PAM8 VLC systems are successfully demonstrated based on a single red LED, with bit error ratio (BER) below the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10⁻³. Compared with the traditional finite impulse response (FIR) based equalizer, the Q-factor performance is improved by 1.2 dB and the transmission distance is increased by one-third in the same experimental hardware setup. Compared with traditional nonlinear hybrid Volterra equalizers, the significant complexity and system performance advantages of the LSTM-based equalizer are demonstrated. To the best of our knowledge, this is the first demonstration of a deep LSTM in VLC systems.
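The abstract describes the general approach only at a high level: a sliding window ("memory") of received PAM samples is fed to stacked LSTM layers, which output the transmitted symbol, thereby compensating both linear intersymbol interference and nonlinear LED distortion. The sketch below illustrates that idea in PyTorch; the memory length, layer sizes, number of LSTM layers, and training setup are illustrative assumptions, not the parameters or architecture reported in the paper.

```python
# Minimal sketch of an LSTM-based post-equalizer for PAM signals (PyTorch).
# All hyperparameters here are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class LSTMPostEqualizer(nn.Module):
    def __init__(self, memory_len=15, hidden_size=32, num_layers=2, num_levels=4):
        super().__init__()
        self.memory_len = memory_len          # length of the received-sample window ("memory")
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_levels)  # classify the PAM level of the target symbol

    def forward(self, x):
        # x: (batch, memory_len) received samples surrounding the symbol of interest
        out, _ = self.lstm(x.unsqueeze(-1))   # (batch, memory_len, hidden_size)
        return self.fc(out[:, -1, :])         # logits over the PAM levels

# Usage: slice the received waveform into overlapping windows, train with
# cross-entropy against the known transmitted symbols, then equalize new data.
model = LSTMPostEqualizer(num_levels=4)       # PAM4; use num_levels=8 for PAM8
rx_windows = torch.randn(64, 15)              # dummy batch of received-sample windows
logits = model(rx_windows)
symbols = logits.argmax(dim=1)                # recovered PAM levels
```

Framing equalization as symbol classification over a finite memory window is one common way to realize such a post-equalizer; the paper's "memory-controlled" design refers to how far back the LSTM is allowed to look, which here corresponds to the assumed memory_len parameter.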

