Hasani Zahra, Mahdavimoghadam Maryam, Mohammadi Razieh, Shirmohammadi Zahra, Nikoofard Amirhossein, Nikahd Eesa, Davoodi Kasra
Department of Electrical Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran.
Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran.
Sci Rep. 2025 Aug 3;15(1):28321. doi: 10.1038/s41598-025-14111-y.
Energy Harvesting Wireless Sensor Networks (EH-WSNs) are widely adopted for their ability to scavenge ambient energy. However, these networks face significant challenges because energy availability at individual nodes is limited and varies continuously, depending on unpredictable environmental sources. To operate effectively under such conditions, energy fluctuations must be regulated, which requires continuously monitoring each node's energy level and adaptively adjusting its operation. State-of-the-art mechanisms often categorize nodes or discretize energy levels, which prevents them from selecting actions that match a node's actual energy state. Discretization simplifies the representation of energy states and reduces complexity, making such schemes easier to design and implement; however, it overlooks subtle variations in energy level, producing inaccurate assessments and suboptimal performance. To overcome this limitation, this paper proposes an energy-aware transmission method based on a Deep Reinforcement Learning (DRL) algorithm that integrates Q-learning with Deep Neural Networks (DNNs). The method enables each node to adaptively select transmission actions based on its real-time, continuous energy state, improving responsiveness to dynamic network conditions. Simulation results show that the proposed method improves throughput by 11.79% compared with traditional methods. These findings demonstrate the effectiveness of DRL-based control in enhancing performance and energy efficiency in EH-WSNs.
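The abstract's key idea — choosing transmission actions directly from a node's continuous energy state instead of a discretized level — can be illustrated with a minimal sketch. The paper pairs Q-learning with a deep neural network; for brevity this sketch substitutes a linear function approximator over polynomial features of the battery level. The environment model (harvest range, transmit costs and throughput rewards) and all names here are hypothetical illustration values, not taken from the paper.

```python
import random

# Toy energy-harvesting node. State: continuous battery level in [0, 1].
# Actions: 0 = sleep, 1 = low-power transmit, 2 = high-power transmit.
# Costs and rewards below are hypothetical, not from the paper.
TX_COST = [0.00, 0.05, 0.15]   # energy drained per action
TX_REWARD = [0.0, 1.0, 2.5]    # throughput credited per action

def features(battery):
    """Polynomial features of the raw battery level -- no discretization."""
    return (1.0, battery, battery * battery)

def q_value(w, battery, action):
    return sum(wi * fi for wi, fi in zip(w[action], features(battery)))

def feasible_actions(battery):
    """Only actions the current charge can pay for."""
    return [a for a in range(3) if TX_COST[a] <= battery]

def select_action(w, battery, eps):
    """Epsilon-greedy selection over the continuous-state Q approximation."""
    acts = feasible_actions(battery)
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda a: q_value(w, battery, a))

def train(episodes=200, steps=50, alpha=0.01, gamma=0.95, eps=0.1, seed=0):
    random.seed(seed)
    w = [[0.0, 0.0, 0.0] for _ in range(3)]  # one weight vector per action
    for _ in range(episodes):
        battery = 0.5
        for _ in range(steps):
            a = select_action(w, battery, eps)
            harvest = random.uniform(0.0, 0.08)      # stochastic ambient source
            nxt = min(1.0, battery - TX_COST[a] + harvest)
            best_next = max(q_value(w, nxt, b) for b in feasible_actions(nxt))
            td_error = TX_REWARD[a] + gamma * best_next - q_value(w, battery, a)
            for i, fi in enumerate(features(battery)):
                w[a][i] += alpha * td_error * fi     # semi-gradient Q update
            battery = nxt
    return w
```

With the learned weights, `select_action(w, level, 0.0)` maps any real-valued battery reading to a transmission decision: a nearly empty node can only afford to sleep, while fuller nodes trade immediate throughput against drain, without ever binning the energy state.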