
Application of deep neural network and deep reinforcement learning in wireless communication.

Affiliations

National Intellectual Property Administration, Beijing City, China.

Publication

PLoS One. 2020 Jul 2;15(7):e0235447. doi: 10.1371/journal.pone.0235447. eCollection 2020.

DOI: 10.1371/journal.pone.0235447
PMID: 32614858
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7332070/
Abstract

OBJECTIVE

To explore the application of deep neural networks (DNNs) and deep reinforcement learning (DRL) in wireless communication and accelerate the development of the wireless communication industry.

METHOD

This study proposes a simple cognitive radio scenario consisting of only one primary user and one secondary user. The secondary user attempts to share spectrum resources with the primary user. An intelligent power algorithm model based on DNNs and DRL is constructed. Then, the MATLAB platform is utilized to simulate the model.
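The scenario above (one primary user, one secondary user sharing spectrum, with transmit power chosen by a learned policy) can be sketched as a toy single-state Q-learner. This is an illustrative stand-in only, not the authors' MATLAB DQN; all channel gains, noise levels, SINR thresholds, and power levels below are invented for the demo.

```python
import numpy as np

# Toy cognitive-radio power control: one primary user (PU) transmitting at
# fixed power, one secondary user (SU) choosing a discrete transmit power.
# All numeric values here are made up for illustration.

rng = np.random.default_rng(0)

POWERS = np.array([0.1, 0.2, 0.4, 0.8, 1.6])  # candidate SU powers (W)
PU_POWER = 1.0
G_SU = G_PU = 1.0        # direct-link channel gains
G_CROSS = 0.3            # cross-interference gain (symmetric for simplicity)
NOISE = 0.1
SINR_TARGET_SU = 1.5     # SU transmission succeeds above this SINR
SINR_TARGET_PU = 2.0     # PU must be kept above this SINR (protection)

def step(p_su):
    """Return (su_ok, pu_ok) for one transmission at SU power p_su."""
    sinr_su = G_SU * p_su / (NOISE + G_CROSS * PU_POWER)
    sinr_pu = G_PU * PU_POWER / (NOISE + G_CROSS * p_su)
    return sinr_su >= SINR_TARGET_SU, sinr_pu >= SINR_TARGET_PU

# Single-state epsilon-greedy Q-learning, a minimal stand-in for the DQN:
# the agent is rewarded only when the SU succeeds without harming the PU.
q = np.zeros(len(POWERS))
alpha, eps = 0.1, 0.2
for t in range(2000):
    a = rng.integers(len(POWERS)) if rng.random() < eps else int(np.argmax(q))
    su_ok, pu_ok = step(POWERS[a])
    r = 1.0 if (su_ok and pu_ok) else -1.0
    q[a] += alpha * (r - q[a])  # running average toward the reward

best = POWERS[int(np.argmax(q))]
print("learned SU power:", best)
```

With these invented numbers, only one power level both clears the SU's own SINR target and keeps the PU protected, so the learner settles on it; the paper's DQN plays the same role over a richer state space.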

RESULTS

In the performance analysis of the algorithm model under different strategies, the second power control strategy proves more conservative than the first: both its loss function and its success rate require more iterations to converge. The two strategies show the same trend in the average number of transmissions, and the success rate can reach 1. Compared with the traditional distributed clustering and power control (DCPC) algorithm, the proposed algorithm converges noticeably faster; the DRL-based DQN algorithm needs only a few steps to converge, which verifies its effectiveness.

CONCLUSION

Applying DNNs and DRL to algorithm models built for wireless scenarios yields a higher success rate and faster convergence, providing an experimental basis for later improvements to wireless communication networks.


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e0f/7332070/51529c6a4d03/pone.0235447.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e0f/7332070/81ba60e86638/pone.0235447.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e0f/7332070/e9250cc1338f/pone.0235447.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e0f/7332070/9c2af711bde0/pone.0235447.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e0f/7332070/b5008f8468d5/pone.0235447.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e0f/7332070/ef7c5c07f118/pone.0235447.g006.jpg

Similar Articles

1
Application of deep neural network and deep reinforcement learning in wireless communication.
PLoS One. 2020 Jul 2;15(7):e0235447. doi: 10.1371/journal.pone.0235447. eCollection 2020.
2
Two Tier Slicing Resource Allocation Algorithm Based on Deep Reinforcement Learning and Joint Bidding in Wireless Access Networks.
Sensors (Basel). 2022 May 4;22(9):3495. doi: 10.3390/s22093495.
3
Spectrum-efficient user grouping and resource allocation based on deep reinforcement learning for mmWave massive MIMO-NOMA systems.
Sci Rep. 2024 Apr 17;14(1):8884. doi: 10.1038/s41598-024-59241-x.
4
Joint Deep Reinforcement Learning and Unsupervised Learning for Channel Selection and Power Control in D2D Networks.
Entropy (Basel). 2022 Nov 24;24(12):1722. doi: 10.3390/e24121722.
5
Dynamic Spectrum Sharing Based on Deep Reinforcement Learning in Mobile Communication Systems.
Sensors (Basel). 2023 Feb 27;23(5):2622. doi: 10.3390/s23052622.
6
Deep Reinforcement Learning for Physical Layer Security Enhancement in Energy Harvesting Based Cognitive Radio Networks.
Sensors (Basel). 2023 Jan 10;23(2):807. doi: 10.3390/s23020807.
7
Deep reinforcement learning for automated radiation adaptation in lung cancer.
Med Phys. 2017 Dec;44(12):6690-6705. doi: 10.1002/mp.12625. Epub 2017 Nov 14.
8
Deep Reinforcement Learning-Based Adaptive Scheduling for Wireless Time-Sensitive Networking.
Sensors (Basel). 2024 Aug 15;24(16):5281. doi: 10.3390/s24165281.
9
Joint Beamforming, Power Allocation, and Splitting Control for SWIPT-Enabled IoT Networks with Deep Reinforcement Learning and Game Theory.
Sensors (Basel). 2022 Mar 17;22(6):2328. doi: 10.3390/s22062328.
10
Security Enhancement for Deep Reinforcement Learning-Based Strategy in Energy-Efficient Wireless Sensor Networks.
Sensors (Basel). 2024 Mar 21;24(6):1993. doi: 10.3390/s24061993.

Cited By

1
Retraction: Application of deep neural network and deep reinforcement learning in wireless communication.
PLoS One. 2024 Nov 7;19(11):e0313643. doi: 10.1371/journal.pone.0313643. eCollection 2024.
2
Self-controlling photonic-on-chip networks with deep reinforcement learning.
Sci Rep. 2021 Nov 30;11(1):23151. doi: 10.1038/s41598-021-02583-7.

References

1
Improving the Antinoise Ability of DNNs via a Bio-Inspired Noise Adaptive Activation Function Rand Softplus.
Neural Comput. 2019 Jun;31(6):1215-1233. doi: 10.1162/neco_a_01192. Epub 2019 Apr 12.
2
Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram.
Light Sci Appl. 2019 Mar 6;8:25. doi: 10.1038/s41377-019-0139-9. eCollection 2019.
3
Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment.
Sci Rep. 2017 Aug 25;7(1):9425. doi: 10.1038/s41598-017-09891-x.
4
Speaker-dependent multipitch tracking using deep neural networks.
J Acoust Soc Am. 2017 Feb;141(2):710. doi: 10.1121/1.4973687.