Deep Reinforcement Learning-Based Resource Allocation for Integrated Sensing, Communication, and Computation in Vehicular Network

Cited: 0
Authors
Yang, Liu [1 ,2 ]
Wei, Yifei [3 ]
Feng, Zhiyong [4 ]
Zhang, Qixun
Han, Zhu [5 ,6 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Elect Engn, Beijing 100876, Peoples R China
[2] Beijing Union Univ, Coll Robot, Beijing Key Lab Informat Serv Engn, Beijing 100101, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Elect Engn, Beijing Key Lab Work Safety Intelligent Monitoring, Beijing 100876, Peoples R China
[4] Beijing Univ Posts & Telecommun, Key Lab Universal Wireless Commun, Minist Educ, Beijing 100876, Peoples R China
[5] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77004 USA
[6] Kyung Hee Univ, Dept Comp Sci & Engn, Seoul 446701, South Korea
Funding
Japan Science and Technology Agency; National Natural Science Foundation of China;
Keywords
Array signal processing; Resource management; Optimization; Robot sensing systems; Integrated sensing and communication; Wireless communication; Interference; Autonomous vehicles; 6G mobile communication; Federated learning; Integrated sensing, communication, and computation; beamforming; resource allocation; deep reinforcement learning; OVER-THE-AIR COMPUTATION; JOINT COMMUNICATION; MIMO COMMUNICATIONS; RADAR; SYSTEMS; OPTIMIZATION; ROBUST;
DOI
10.1109/TWC.2024.3470873
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In developing the sixth-generation (6G) system, integrated sensing and communication technology is becoming increasingly essential, especially for applications such as autonomous driving. This paper develops an architecture for integrated sensing, communication, and computation (ISCC) in vehicular networks, where vehicles perform environment sensing, sensing-data computation, and transmission. To support low-latency cooperation among vehicles and extend their sensing range, over-the-air computation federated learning is employed. The joint beamforming design and power allocation problem in the ISCC scenario is formulated to maximize the achievable data rate while guaranteeing sensing and computing performance. Solving this joint optimization problem is challenging, however, because the resources are highly coupled and the channel environment is time-varying. Therefore, a hybrid reinforcement learning scheme is proposed in this work. First, semidefinite relaxation and Gaussian randomization are leveraged to obtain an approximate solution for the aggregation beamformer. Then, the deep deterministic policy gradient (DDPG) algorithm is applied to the transmit beamforming design and resource allocation problem in the continuous action space. Extensive simulation results validate the favorable convergence and achievable sum-rate performance of the proposed scheme compared with the benchmark schemes. In addition, the impact of key system parameters on the optimization performance is demonstrated through numerical results.
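To make the DDPG component of the abstract concrete, the following is a minimal sketch, not the authors' implementation: a generic actor-critic DDPG update for continuous-valued transmit beamforming/power decisions, written in PyTorch. The state and action dimensions, network sizes, and the placeholder transition batch are all hypothetical; in the paper, the reward would be derived from the achievable-rate objective under the sensing and computation constraints.

```python
# Minimal DDPG sketch (hypothetical dimensions; not the paper's exact model).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 8   # hypothetical: channel/sensing features -> power/beam weights
GAMMA, TAU, LR = 0.99, 0.005, 1e-3

def mlp(in_dim, out_dim, out_act=nn.Tanh):
    # Two hidden layers; output activation bounds actions to [-1, 1] for the actor.
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, out_dim), out_act())

actor = mlp(STATE_DIM, ACTION_DIM)                    # deterministic policy mu(s)
critic = mlp(STATE_DIM + ACTION_DIM, 1, nn.Identity)  # action-value Q(s, a)
actor_t = mlp(STATE_DIM, ACTION_DIM)                  # target networks
critic_t = mlp(STATE_DIM + ACTION_DIM, 1, nn.Identity)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=LR)
critic_opt = torch.optim.Adam(critic.parameters(), lr=LR)

def update(batch):
    s, a, r, s2 = batch  # states, actions, rewards, next states (replay-buffer sample)
    with torch.no_grad():
        q_target = r + GAMMA * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    # Critic step: regress Q(s, a) toward the bootstrapped target.
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor step: ascend the critic's estimate of Q(s, mu(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft (Polyak) target-network updates.
    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

# Usage with random placeholder transitions; a real agent would instead sample
# transitions generated by interacting with the ISCC environment model.
batch = (torch.randn(64, STATE_DIM), torch.rand(64, ACTION_DIM) * 2 - 1,
         torch.randn(64, 1), torch.randn(64, STATE_DIM))
update(batch)
```

DDPG is chosen in the paper precisely because the beamforming and power variables are continuous, which rules out value-based methods that require a discrete action set.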
Pages: 18608 - 18622
Page count: 15
Related Papers
50 records
  • [1] Reinforcement Learning-Based UAVs Resource Allocation for Integrated Sensing and Communication (ISAC) System
    Wang, Min
    Chen, Peng
    Cao, Zhenxin
    Chen, Yun
    ELECTRONICS, 2022, 11 (03)
  • [2] A deep reinforcement learning resource allocation strategy for integrated sensing, communication and computing
    Cai, Lili
    He, Jincan
    PHYSICAL COMMUNICATION, 2024, 64
  • [3] Deep Reinforcement Learning-Based Resource Allocation for Cellular Vehicular Network Mode 3 with Underlay Approach
    Fu, Jinjuan
    Qin, Xizhong
    Huang, Yan
    Tang, Li
    Liu, Yan
    SENSORS, 2022, 22 (05)
  • [4] Poster Abstract: Deep Reinforcement Learning-based Resource Allocation in Vehicular Fog Computing
    Lee, Seung-seob
    Lee, Sukyoung
IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM 2019 WKSHPS), 2019: 1029 - 1030
  • [5] Deep Reinforcement Learning Based Resource Allocation and Trajectory Planning in Integrated Sensing and Communications UAV Network
    Qin, Yunhui
    Zhang, Zhongshan
    Li, Xulong
Huangfu, Wei
    Zhang, Haijun
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (11) : 8158 - 8169
  • [6] Deep reinforcement learning-based joint optimization model for vehicular task offloading and resource allocation
    Li, Zhi-Yuan
    Zhang, Zeng-Xiang
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2024, 17 (04) : 2001 - 2015
  • [7] Computation Migration and Resource Allocation in Heterogeneous Vehicular Networks: A Deep Reinforcement Learning Approach
    Wang, Hui
    Ke, Hongchang
    Liu, Gang
    Sun, Weijia
    IEEE ACCESS, 2020, 8 : 171140 - 171153
  • [8] Deep Reinforcement Learning-Based Computation Offloading in Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Min, Geyong
    Duan, Hancong
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [9] Deep Reinforcement Learning-Based Adaptive Computation Offloading and Power Allocation in Vehicular Edge Computing Networks
    Qiu, Bin
    Wang, Yunxiao
    Xiao, Hailin
    Zhang, Zhongshan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (10) : 13339 - 13349
  • [10] Deep reinforcement learning-based joint task offloading and resource allocation in multipath transmission vehicular networks
    Yin, Chenyang
    Zhang, Yuyang
    Dong, Ping
    Zhang, Hongke
    TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIES, 2024, 35 (01)