Choi Jongsoo, Bouchard Martin, Yeap Tet Hin
School of Information Technology and Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada.
IEEE Trans Neural Netw. 2005 May;16(3):699-708. doi: 10.1109/TNN.2005.845142.
Real-time recurrent learning (RTRL), commonly employed to train a fully connected recurrent neural network (RNN), suffers from a slow convergence rate. Because of this deficiency, a decision feedback recurrent neural equalizer (DFRNE) trained with RTRL requires long training sequences to achieve good performance. In this paper, extended Kalman filter (EKF) algorithms based on RTRL are presented for the DFRNE in a state-space formulation of the system, in particular for complex-valued signal processing. The main features of the global EKF and decoupled EKF algorithms are fast convergence and good tracking performance. Through nonlinear channel equalization experiments, the performance of the DFRNE with the EKF algorithms is evaluated and compared with that of the DFRNE trained with RTRL.
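The core idea — treating the network weights as the state of an extended Kalman filter and using RTRL-style sensitivities as the measurement Jacobian — can be illustrated with a minimal sketch. The example below is a real-valued, single-recurrent-neuron system-identification toy, not the paper's complex-valued DFRNE; the teacher weights, noise covariances, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: global EKF trains the weights of one recurrent neuron
#   h_t = tanh(w_in * x_t + w_rec * h_{t-1} + b)
# by treating theta = [w_in, w_rec, b] as the EKF state. The paper's
# actual setting (complex-valued DFRNE, decoupled EKF) is richer.

rng = np.random.default_rng(0)
theta_true = np.array([0.8, -0.5, 0.2])   # hypothetical "teacher" weights

def neuron(theta, x, h_prev):
    return np.tanh(theta[0] * x + theta[1] * h_prev + theta[2])

theta = np.zeros(3)           # student weights = EKF state estimate
P = 10.0 * np.eye(3)          # weight-error covariance
Q = 1e-6 * np.eye(3)          # small process noise keeps the filter adaptive
R = 1e-2                      # assumed measurement-noise variance
dh = np.zeros(3)              # RTRL sensitivity dh_t/dtheta
h_s = h_d = 0.0               # student / teacher recurrent states
errs = []

for t in range(500):
    x = rng.standard_normal()
    d = neuron(theta_true, x, h_d)   # desired signal from teacher system
    y = neuron(theta, x, h_s)        # student prediction
    # RTRL recursion for the measurement Jacobian:
    # dh_t/dtheta = (1 - h_t^2) * ([x, h_{t-1}, 1] + w_rec * dh_{t-1}/dtheta)
    H = (1.0 - y**2) * (np.array([x, h_s, 1.0]) + theta[1] * dh)
    S = H @ P @ H + R                # scalar innovation variance
    K = P @ H / S                    # Kalman gain
    theta = theta + K * (d - y)      # EKF measurement update of the weights
    P = P - np.outer(K, H @ P) + Q   # covariance update
    dh = H                           # propagate sensitivity (theta drift ignored)
    h_s, h_d = y, d
    errs.append((d - y) ** 2)

mse_early = float(np.mean(errs[:50]))
mse_late = float(np.mean(errs[-50:]))
```

The squared error over the last 50 steps should be well below that of the first 50, reflecting the fast convergence the abstract attributes to EKF training; a plain RTRL gradient step in place of the Kalman update would typically need far more samples. The decoupled EKF variant mentioned in the abstract reduces cost by block-diagonalizing `P` per neuron.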