Yuan Jinyu, Peng Jingyi, Yan Qing, He Gang, Xiang Honglin, Liu Zili
School of Knowledge Based Technology and Energy, Tech University of Korea, Siheung-si 15073, Gyeonggi-do, Republic of Korea.
China Industrial Control Systems Cyber Emergency Response Team, Beijing 100040, China.
Sensors (Basel). 2024 Mar 1;24(5):1632. doi: 10.3390/s24051632.
The rapid development of sensors in wireless sensor networks (WSNs) raises a major challenge: meeting low energy consumption requirements, and peer-to-peer (P2P) communication has become an important way to break this bottleneck. However, the interference caused by different sensors sharing the spectrum, together with power limitations, seriously constrains the improvement of WSNs. Therefore, in this paper, we propose a deep reinforcement learning-based energy consumption optimization for P2P communication in WSNs. Specifically, P2P sensors (PUs) are treated as agents that share the spectrum of authorized sensors (AUs). An authorized sensor has permission to access specific data or systems, whereas a P2P sensor communicates directly with other sensors without a central server: one is defined by permission, the other by direct sensor-to-sensor communication. Each agent can control its transmit power and select resources to avoid interference. Moreover, we use a double deep Q network (DDQN) algorithm to help each agent learn finer-grained features of the interference. Simulation results show that the proposed algorithm outperforms both the deep Q network scheme and the traditional algorithm, effectively lowering the energy consumption of P2P communication in WSNs.
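The abstract does not detail the DDQN update, but its key difference from plain DQN can be sketched. The following is a minimal illustration (all function and variable names are hypothetical, not from the paper): the online network selects the greedy next action, while the target network evaluates it, which reduces the Q-value overestimation that standard DQN suffers from.

```python
import numpy as np

def ddqn_target(q_online_next, q_target_next, reward, gamma):
    """Compute the Double DQN bootstrap target for one transition.

    q_online_next: Q-values of the next state from the online network.
    q_target_next: Q-values of the next state from the target network.
    """
    # Online network picks the greedy action...
    a_star = int(np.argmax(q_online_next))
    # ...but the target network evaluates that action's value.
    return reward + gamma * q_target_next[a_star]

# Example: an agent choosing among two power/resource actions.
q_online = np.array([1.0, 2.0])   # online net prefers action 1
q_target = np.array([0.5, 0.3])   # target net's value for action 1 is 0.3
y = ddqn_target(q_online, q_target, reward=1.0, gamma=0.9)
# y = 1.0 + 0.9 * 0.3 = 1.27 (plain DQN would use max(q_target) = 0.5)
```

In the paper's setting, the "actions" would correspond to joint power-level and spectrum-resource choices, and the reward would penalize energy consumption and interference with authorized sensors.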