Birla Global University, Gothapatna, Bhubaneswar, Odisha, India.
School of Computing Science & Engineering, Department of CSE, Galgotias University, Greater Noida, UP, India.
Comput Intell Neurosci. 2023 Oct 10;2023:5113417. doi: 10.1155/2023/5113417. eCollection 2023.
Computational intelligence is built on several learning and optimization techniques. Incorporating cutting-edge learning techniques to balance the interaction between exploitation and exploration is therefore an inspiring field, especially when combined with IoT. Reinforcement learning techniques developed in recent years have largely focused on incorporating deep learning to improve the generalization ability of the algorithm, while neglecting the exploration-exploitation dilemma. To increase the effectiveness of exploration, this study proposes a deep reinforcement learning algorithm based on computational intelligence, using intelligent sensors and the Bayesian approach. In addition, the technique for computing the posterior distribution of parameters in Bayesian linear regression is extended to nonlinear models such as artificial neural networks. The Bayesian Bootstrapped Deep Q-Network (BBDQN) algorithm is created by combining bootstrapped DQN with the proposed computing technique. Finally, tests in two scenarios demonstrate that, when faced with severe exploration problems, BBDQN outperforms both DQN and bootstrapped DQN in terms of exploration efficiency.
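The Bayesian linear-regression posterior the abstract refers to can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the feature matrix (standing in for last-layer network features), the noise variance, the prior variance, and the number of sampled "heads" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical last-layer features phi(s, a) for a batch of transitions,
# with regression targets y (e.g. TD targets). Dimensions are illustrative.
n, d = 50, 4
X = rng.normal(size=(n, d))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=n)

sigma2 = 0.1 ** 2      # assumed observation-noise variance
prior_var = 10.0       # assumed isotropic Gaussian prior variance on weights

# Closed-form Gaussian posterior over the linear weights: N(mean, cov)
cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(d) / prior_var)
mean = cov @ X.T @ y / sigma2

# Thompson-style exploration: each bootstrapped "head" draws one weight
# vector from the posterior and acts greedily under it, so disagreement
# between heads drives deep exploration.
heads = rng.multivariate_normal(mean, cov, size=5)
```

Extending this to a nonlinear Q-network, as the abstract describes, would replace the fixed features `X` with learned network features, but the posterior-sampling step remains the same idea.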