FeDRL-D2D: Federated Deep Reinforcement Learning-Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks

Cited by: 1
Authors
Noman, Hafiz Muhammad Fahad [1 ]
Dimyati, Kaharudin [1 ]
Noordin, Kamarul Ariffin [1 ]
Hanafi, Effariza [1 ]
Abdrabou, Atef [2 ]
Affiliations
[1] Univ Malaya, Fac Engn, Dept Elect Engn, Adv Commun Res & Innovat ACRI, Kuala Lumpur 50603, Malaysia
[2] UAE Univ, Coll Engn, Elect & Commun Engn Dept, Al Ain, U Arab Emirates
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
6G; device-to-device communications; double deep Q-network (DDQN); energy efficiency; federated-deep reinforcement learning (F-DRL); resource allocation; POWER-CONTROL; OPTIMIZATION; SELECTION;
DOI
10.1109/ACCESS.2024.3434619
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality of service (QoS) requirements for ultra-massive connectivity, limited network resources, and interference management pose significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Hence, intelligent resource management and power control are indispensable for alleviating interference among DDPs to optimize overall system performance and global energy efficiency. Considering this, we present a federated DRL-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint optimization problem of power control and channel allocation to maximize the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning as a decentralized training paradigm that preserves user privacy, and a double deep Q-network (DDQN) is used for intelligent resource management. The proposed DDQN method uses two separate Q-networks for action selection and target estimation to rationalize the transmit power and dynamic channel selection, in which DDPs, acting as agents, can reuse the uplink channels of CUEs. Simulation results show that the proposed method improves the overall system energy efficiency by 41.52% and achieves sum-rate gains of 11.65%, 24.78%, and 47.29% over multi-agent actor-critic (MAAC), distributed deep deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively.
Moreover, the proposed scheme achieves a 5.88%, 15.79%, and 27.27% reduction in cellular outage probability compared to MAAC, D3PG, and DQN scheduling, respectively, which makes it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
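The two mechanisms the abstract names, DDQN's decoupled action selection / target estimation and federated aggregation of the agents' locally trained models, can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the linear `q_net`, the `fed_avg` helper, and all dimensions are hypothetical stand-ins for the actual networks and aggregation rule used in the scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_net(weights, state):
    """Tiny linear Q-network: returns Q(s, a) for every action a."""
    return state @ weights  # shape: (n_actions,)

def ddqn_target(w_online, w_target, next_state, reward, gamma=0.99):
    """Double-DQN target: the online network selects the greedy action,
    the target network evaluates it (decoupled selection / estimation)."""
    a_star = int(np.argmax(q_net(w_online, next_state)))  # action selection
    q_eval = q_net(w_target, next_state)[a_star]          # target estimation
    return reward + gamma * q_eval

def fed_avg(local_weights):
    """FedAvg-style aggregation: the server averages locally trained
    weights, so raw channel observations never leave the D2D agents."""
    return np.mean(local_weights, axis=0)

# Hypothetical setup: 3 D2D-pair agents, 4-dim channel state, 5 actions
n_agents, state_dim, n_actions = 3, 4, 5
local_ws = [rng.normal(size=(state_dim, n_actions)) for _ in range(n_agents)]
w_global = fed_avg(local_ws)  # aggregated global model
print(w_global.shape)         # (4, 5)

# One DDQN target computed with the aggregated model as the online net
s_next = rng.normal(size=state_dim)
w_tgt = rng.normal(size=(state_dim, n_actions))
y = ddqn_target(w_global, w_tgt, s_next, reward=1.0)
```

Using separate networks for selecting and evaluating the greedy action is what distinguishes DDQN from vanilla DQN: it curbs the overestimation bias that a single maximizing network introduces into the bootstrapped target.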
Pages: 109775-109792
Page count: 18
Related papers
(50 total)
  • [41] Energy-spectral efficient resource allocation and power control in heterogeneous networks with D2D communication
    Khazali, Azadeh
    Sobhi-Givi, Sima
    Kalbkhani, Hashem
    Shayesteh, Mahrokh G.
    WIRELESS NETWORKS, 2020, 26 (01) : 253 - 267
  • [42] Deep Reinforcement Learning for Joint Channel Selection and Power Control in D2D Networks
    Tan, Junjie
    Liang, Ying-Chang
    Zhang, Lin
    Feng, Gang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (02) : 1363 - 1378
  • [43] S-MFRL: Spiking Mean Field Reinforcement Learning for Dynamic Resource Allocation of D2D Networks
    Ye, Pei-Gen
    Wang, Yuan-Gen
    Tang, Weixuan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (01) : 1032 - 1047
  • [44] JOAGT: Latency-Oriented Joint Optimization of Computation Offloading and Resource Allocation in D2D-Assisted MEC System
    Wang, Xue
    Han, Yingbin
    Shi, Haotian
    Qian, Zhihong
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11 (09) : 1780 - 1784
  • [45] Distributed Spectrum and Power Allocation for D2D-U Networks: a Scheme Based on NN and Federated Learning
    Yin, Rui
    Zou, Zhiqun
    Wu, Celimuge
    Yuan, Jiantao
    Chen, Xianfu
    MOBILE NETWORKS & APPLICATIONS, 2021, 26 (05) : 2000 - 2013
  • [47] Hybrid Centralized-Distributed Resource Allocation Based on Deep Reinforcement Learning for Cooperative D2D Communications
    Yu, Yang
    Tang, Xiaoqing
    IEEE ACCESS, 2024, 12 : 196609 - 196623
  • [48] Autonomous Resource Slicing for Virtualized Vehicular Networks With D2D Communications Based on Deep Reinforcement Learning
    Sun, Guolin
    Boateng, Gordon Owusu
    Ayepah-Mensah, Daniel
    Liu, Guisong
    Wei, Jiang
    IEEE SYSTEMS JOURNAL, 2020, 14 (04): : 4694 - 4705
  • [49] Optimizing resource allocation for cluster D2D-assisted fog computing networks: A three-layer Stackelberg game approach
    Chen, Wen
    Yang, Yuxiao
    Liu, Sibin
    Hu, Wenjing
    COMPUTER NETWORKS, 2024, 250
  • [50] Energy efficiency in cognitive radio assisted D2D communication networks
    Ahmad, Mushtaq
    Orakzai, Farooq Alam
    Ahmed, Ashfaq
    Naeem, Muhammad
    Iqbal, Muhammad
    Umer, Tariq
    TELECOMMUNICATION SYSTEMS, 2019, 71 (02) : 167 - 180