Zhu Yiming, Wei Yuan, Chen Chaoxu, Chi Nan, Shi Jianyang
Key Laboratory for Information Science of Electromagnetic Waves (MoE), Department of Communication Science and Engineering, Fudan University, Shanghai 200433, China.
Shanghai Engineering Research Center of Low-Earth-Orbit Satellite Communication and Applications, Shanghai 200433, China.
Sensors (Basel). 2024 Mar 1;24(5):1612. doi: 10.3390/s24051612.
An equalizer based on a recurrent neural network (RNN), especially one with a bidirectional gated recurrent unit (biGRU) structure, is a good choice for mitigating nonlinear impairments and inter-symbol interference (ISI) in optical communication systems because of its excellent performance in processing time-series information. However, its recursive structure prevents parallelization of the computation, resulting in a low equalization rate. To improve the speed without compromising equalization performance, we propose a minimalist 1D convolutional neural network (CNN) equalizer, which is converted from a biGRU via knowledge distillation (KD). In this work, we apply KD to a regression problem and explain how KD helps the student learn from the teacher in regression tasks. In addition, we compare the biGRU, the 1D-CNN with KD, and the 1D-CNN without KD in terms of Q-factor and equalization speed. The experimental data showed that the Q-factor of the 1D-CNN increased by 1 dB after KD learning from the biGRU, and KD improved the received-optical-power (RoP) sensitivity of the 1D-CNN by 0.89 dB at the HD-FEC threshold of 1 × 10. At the same time, compared with the biGRU, the proposed 1D-CNN equalizer reduced computational time consumption by 97% and the number of trainable parameters by 99.3%, at the cost of only a 0.5 dB Q-factor penalty. The results demonstrate that the proposed minimalist 1D-CNN equalizer holds significant promise for future practical deployment in optical wireless communication systems.
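To illustrate the idea of knowledge distillation for a regression task such as channel equalization, a common formulation blends a hard-label loss (student output vs. ground-truth symbols) with a soft loss (student output vs. teacher output). The sketch below is a minimal illustration under that assumption; the weighting factor `alpha` and the exact loss form used by the authors are not specified in the abstract and are hypothetical here.

```python
import numpy as np

def kd_regression_loss(student_out, teacher_out, target, alpha=0.5):
    """Illustrative distillation loss for regression (assumed form):
    a convex combination of the hard MSE against the ground truth
    and the soft MSE against the teacher's predictions.
    `alpha` is a hypothetical weighting knob, not from the paper."""
    hard = np.mean((student_out - target) ** 2)    # student vs. true symbols
    soft = np.mean((student_out - teacher_out) ** 2)  # student vs. teacher (biGRU)
    return alpha * hard + (1.0 - alpha) * soft

# Toy example: a teacher (e.g., a trained biGRU) produces outputs close
# to the target; the student (e.g., a small 1D-CNN) is still far off.
target  = np.array([0.0, 1.0, -1.0])
teacher = np.array([0.1, 0.9, -1.1])
student = np.array([0.5, 0.5, -0.5])

loss = kd_regression_loss(student, teacher, target)
```

The soft term lets the student imitate the teacher's learned input-output mapping even on samples where the ground truth alone gives a weaker training signal, which is the mechanism by which the compact 1D-CNN recovers most of the biGRU's equalization performance.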