Imtiaz Hamza, Zheng Zibo, Homayoun Nejad Rizan, Rusch Leslie A, Zeng Ming
Opt Express. 2023 Nov 6;31(23):38513-38528. doi: 10.1364/OE.500467.
Optical communications at high bandwidth and high spectral efficiency rely on the use of a digital-to-analog converter (DAC). We propose the use of a neural network (NN) for digital pre-distortion (DPD) to mitigate the quantization and bandwidth-limitation impairments from a DAC in such systems. We experimentally validate our approach with a 64 Gbaud 8-level pulse amplitude modulation (PAM-8) signal. We examine NN-DPD training with both direct and indirect learning methods. We compare the performance with typical Volterra, look-up table (LUT) and linear DPD solutions. We sweep regimes where nonlinear quantization becomes more prominent to highlight the advantages of NN-DPD. The proposed NN-DPD trained via direct learning outperforms the Volterra, LUT and linear DPDs by almost 0.9 dB, 1.9 dB and 2.9 dB, respectively. We find that an indirect learning recurrent NN offers better performance at the same complexity as Volterra, while a direct learning recurrent NN pushes performance to a higher level than a Volterra DPD can achieve.
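The indirect learning architecture mentioned in the abstract trains a postdistorter on the DAC's output-to-input mapping and then copies it in front of the DAC as the predistorter. The following is a minimal toy sketch of that principle; the soft-saturation DAC model, the degree-9 polynomial standing in for the paper's neural network, and all parameter values are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def dac(v):
    # Toy memoryless DAC model: soft saturation, an illustrative
    # stand-in for the nonlinear impairments discussed in the paper
    return 1.2 * np.tanh(1.5 * v)

# --- Indirect learning step 1: fit a postdistorter g so that ---
# --- g(dac(u)) ~= u on the signal range of interest           ---
u_train = np.linspace(-0.9, 0.9, 2001)
y_train = dac(u_train)
# A degree-9 polynomial is used here in place of the paper's NN
g = np.polyfit(y_train, u_train, 9)

# --- Step 2: copy the postdistorter in front of the DAC ---
levels = np.linspace(-0.9, 0.9, 8)      # PAM-8 symbol levels (assumed)
x = rng.choice(levels, size=5000)       # random PAM-8 symbols
y_no_dpd = dac(x)                       # uncompensated DAC output
y_dpd = dac(np.polyval(g, x))           # predistorted DAC output

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE without DPD: {rmse(y_no_dpd, x):.4f}")
print(f"RMSE with DPD:    {rmse(y_dpd, x):.4f}")
```

Direct learning, by contrast, optimizes the predistorter in place against the error at the DAC output; the paper finds this pushes performance beyond what the Volterra DPD achieves.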