
Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach

Affiliations

College of Information Science and Engineering, Xinjiang University, Urumqi 830000, China.

Network Department, China Mobile Communications Group Xinjiang Co., Ltd., Urumqi 830000, China.

Publication info

Sensors (Basel). 2022 Feb 27;22(5):1874. doi: 10.3390/s22051874.

DOI: 10.3390/s22051874
PMID: 35271024
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8914637/
Abstract

Vehicle-to-vehicle (V2V) communication has attracted increasing attention since it can improve road safety and traffic efficiency. In the underlay approach of mode 3, the V2V links need to reuse the spectrum resources preoccupied with vehicle-to-infrastructure (V2I) links, which will interfere with the V2I links. Therefore, how to allocate wireless resources flexibly and improve the throughput of the V2I links while meeting the low latency requirements of the V2V links needs to be determined. This paper proposes a V2V resource allocation framework based on deep reinforcement learning. The base station (BS) uses a double deep Q network to allocate resources intelligently. In particular, to reduce the signaling overhead for the BS to acquire channel state information (CSI) in mode 3, the BS optimizes the resource allocation strategy based on partial CSI in the framework of this article. The simulation results indicate that the proposed scheme can meet the low latency requirements of V2V links while increasing the capacity of the V2I links compared with the other methods. In addition, the proposed partial CSI design has comparable performance to complete CSI.
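The abstract's central algorithmic choice is a double deep Q network (DDQN) at the base station. As a rough illustration of why double DQN is preferred over plain DQN for this kind of resource-allocation policy, the target-value computation can be sketched as follows. This is a toy sketch, not the paper's implementation: the discount factor, the Q-values, and the function names are all assumptions for illustration.

```python
GAMMA = 0.9  # discount factor (assumed value, not from the paper)

def double_dqn_target(reward, next_q_online, next_q_target, gamma=GAMMA):
    """Double DQN target: the online network *selects* the next action,
    the target network *evaluates* it. Decoupling selection from
    evaluation reduces the overestimation bias of plain DQN."""
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_action]

def vanilla_dqn_target(reward, next_q_target, gamma=GAMMA):
    """Plain DQN target: the max over the target network both selects
    and evaluates the next action, which tends to overestimate."""
    return reward + gamma * max(next_q_target)

# Toy Q-values over 3 hypothetical spectrum/power actions for a V2V link.
q_online = [1.0, 3.0, 2.0]   # online network's estimates (made up)
q_target = [2.0, 0.5, 4.0]   # target network's estimates (made up)

# Double DQN: online net picks action 1, target net evaluates it (0.5).
# Plain DQN: target net's own max (4.0) is used, a higher target.
y_double = double_dqn_target(1.0, q_online, q_target)   # 1.0 + 0.9 * 0.5
y_plain = vanilla_dqn_target(1.0, q_target)             # 1.0 + 0.9 * 4.0
```

In the paper's setting the state would encode (partial) CSI reported to the BS and the actions would be spectrum/power choices for V2V links; the sketch only shows the target arithmetic that distinguishes double DQN from DQN.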


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ad4/8914637/bf628a2fa9bd/sensors-22-01874-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ad4/8914637/bdafb666238e/sensors-22-01874-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ad4/8914637/7131765d246c/sensors-22-01874-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ad4/8914637/e8060cc04506/sensors-22-01874-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ad4/8914637/b1b193efd234/sensors-22-01874-g005a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5ad4/8914637/3cc7a2f2bbe5/sensors-22-01874-g006.jpg

Similar articles

1
Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach.
Sensors (Basel). 2022 Feb 27;22(5):1874. doi: 10.3390/s22051874.
2
Energy-Efficient Resource Allocation Based on Deep Q-Network in V2V Communications.
Sensors (Basel). 2023 Jan 23;23(3):1295. doi: 10.3390/s23031295.
3
Intelligent Resource Allocation for V2V Communication with Spectrum-Energy Efficiency Maximization.
Sensors (Basel). 2023 Jul 29;23(15):6796. doi: 10.3390/s23156796.
4
Two Tier Slicing Resource Allocation Algorithm Based on Deep Reinforcement Learning and Joint Bidding in Wireless Access Networks.
Sensors (Basel). 2022 May 4;22(9):3495. doi: 10.3390/s22093495.
5
A Power Allocation Scheme for MIMO-NOMA and D2D Vehicular Edge Computing Based on Decentralized DRL.
Sensors (Basel). 2023 Mar 25;23(7):3449. doi: 10.3390/s23073449.
6
Sensing Traffic Density Combining V2V and V2I Wireless Communications.
Sensors (Basel). 2015 Dec 16;15(12):31794-810. doi: 10.3390/s151229889.
7
Deep Reinforcement Learning Based Resource Allocation for D2D Communications Underlay Cellular Networks.
Sensors (Basel). 2022 Dec 3;22(23):9459. doi: 10.3390/s22239459.
8
LoRa-Based Physical Layer Key Generation for Secure V2V/V2I Communications.
Sensors (Basel). 2020 Jan 26;20(3):682. doi: 10.3390/s20030682.
9
Novel Road Traffic Management Strategy for Rapid Clarification of the Emergency Vehicle Route Based on V2V Communications.
Sensors (Basel). 2021 Jul 28;21(15):5120. doi: 10.3390/s21155120.
10
Congestion based mechanism for route discovery in a V2I-V2V system applying smart devices and IoT.
Sensors (Basel). 2015 Mar 31;15(4):7768-806. doi: 10.3390/s150407768.

Cited by

1
Multi-head deep Q-learning for continuous beamforming with selective MC-CDMA operation in V2X highway communications.
Sci Rep. 2025 Aug 14;15(1):29860. doi: 10.1038/s41598-025-16016-2.
2
Task Offloading Decision-Making Algorithm for Vehicular Edge Computing: A Deep-Reinforcement-Learning-Based Approach.
Sensors (Basel). 2023 Sep 1;23(17):7595. doi: 10.3390/s23177595.

References

1
Human-level control through deep reinforcement learning.
Nature. 2015 Feb 26;518(7540):529-33. doi: 10.1038/nature14236.