FeDRL-D2D: Federated Deep Reinforcement Learning-Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks

Cited: 1
Authors
Noman, Hafiz Muhammad Fahad [1 ]
Dimyati, Kaharudin [1 ]
Noordin, Kamarul Ariffin [1 ]
Hanafi, Effariza [1 ]
Abdrabou, Atef [2 ]
Affiliations
[1] Univ Malaya, Fac Engn, Dept Elect Engn, Adv Commun Res & Innovat ACRI, Kuala Lumpur 50603, Malaysia
[2] UAE Univ, Coll Engn, Elect & Commun Engn Dept, Al Ain, U Arab Emirates
Source
IEEE ACCESS | 2024年 / 12卷
Keywords
6G; device-to-device communications; double deep Q-network (DDQN); energy efficiency; federated-deep reinforcement learning (F-DRL); resource allocation; POWER-CONTROL; OPTIMIZATION; SELECTION;
DOI
10.1109/ACCESS.2024.3434619
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality of service (QoS) requirements for ultra-massive connectivity, limited network resources, and interference management pose significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Hence, intelligent resource management and power control are indispensable for alleviating interference among DDPs and optimizing overall system performance and global energy efficiency. Considering this, we present a federated DRL-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint optimization problem of power control and channel allocation to maximize the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning as a decentralized training paradigm to preserve user privacy, while a double deep Q-network (DDQN) performs intelligent resource management. The DDQN method uses two separate Q-networks for action selection and target estimation to rationalize the transmit power and dynamic channel selection, in which DDPs, acting as agents, reuse the uplink channels of CUEs. Simulation results show that the proposed method improves overall system energy efficiency by 41.52% and achieves sum-rate gains of 11.65%, 24.78%, and 47.29% over multi-agent actor-critic (MAAC), distributed deep deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively.
Moreover, the proposed scheme achieves a 5.88%, 15.79%, and 27.27% reduction in cellular outage probability compared to MAAC, D3PG, and DQN scheduling, respectively, which makes it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
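The abstract's core mechanism (a double deep Q-network whose online network selects actions while a target network estimates their value, combined with federated averaging of locally trained models) can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and array shapes are illustrative assumptions.

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online network selects the greedy action,
    the target network evaluates it, decoupling selection from estimation
    and reducing the overestimation bias of vanilla DQN."""
    if done:
        return float(reward)
    a_star = int(np.argmax(next_q_online))              # action selection (online net)
    return float(reward + gamma * next_q_target[a_star])  # target estimation (target net)

def fed_avg(local_weights):
    """Federated averaging: a server averages parameters trained locally by
    each agent (here, each D2D pair), so raw observations stay on-device."""
    return [np.mean(layer, axis=0) for layer in zip(*local_weights)]
```

Here each DDP would train its own DDQN on local channel and power observations, and only the resulting weights are shared for averaging, which is how the scheme addresses user privacy without centralizing training data.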
Pages: 109775 - 109792 (18 pages)
Related Papers (50 total)
  • [31] D2D Resource Allocation Based on Reinforcement Learning and QoS
    Kuo, Fang-Chang
    Wang, Hwang-Cheng
    Tseng, Chih-Cheng
    Wu, Jung-Shyr
    Xu, Jia-Hao
    Chang, Jieh-Ren
    MOBILE NETWORKS & APPLICATIONS, 2023, 28 (03) : 1076 - 1095
  • [33] D2D-Assisted Multi-User Cooperative Partial Offloading in MEC Based on Deep Reinforcement Learning
    Guan, Xin
    Lv, Tiejun
    Lin, Zhipeng
    Huang, Pingmu
    Zeng, Jie
    SENSORS, 2022, 22 (18)
  • [34] Hybrid Deep Reinforcement Learning-Based Task Offloading for D2D-Assisted Cloud-Edge-Device Collaborative Networks
    Fan, Wenhao
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 13455 - 13471
  • [35] Improving the Spectral Efficiency in Dense Heterogeneous Networks Using D2D-Assisted eICIC
    Elshatshat, Mohamed A.
    Papadakis, Stefanos
    Angelakis, Vangelis
    2018 IEEE 23RD INTERNATIONAL WORKSHOP ON COMPUTER AIDED MODELING AND DESIGN OF COMMUNICATION LINKS AND NETWORKS (CAMAD), 2018, : 32 - 37
  • [36] Deep Multi-Agent Reinforcement Learning for Resource Allocation in D2D Communication Underlaying Cellular Networks
    Zhang, Xu
    Lin, Ziqi
    Ding, Beichen
    Gu, Bo
    Han, Yu
    APNOMS 2020: 2020 21ST ASIA-PACIFIC NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM (APNOMS), 2020, : 55 - 60
  • [37] Distributed Learning in Noisy-Potential Games for Resource Allocation in D2D Networks
    Ali, M. Shabbir
    Coucheney, Pierre
    Coupechoux, Marceau
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19 (12) : 2761 - 2773
  • [38] Deep Reinforcement Learning-Based Optimization Method for D2D Communication Energy Efficiency in Heterogeneous Cellular Networks
    Pan, Ziyu
    Yang, Jie
    IEEE ACCESS, 2024, 12 : 140439 - 140455
  • [39] Balancing Fairness and Energy Efficiency in SWIPT-Based D2D Networks: Deep Reinforcement Learning Based Approach
    Han, Eun-Jeong
    Sengly, Muy
    Lee, Jung-Ryun
    IEEE ACCESS, 2022, 10 : 64495 - 64503
  • [40] A 3-Dimensional Matching Method for Resource Allocation in D2D-Assisted Indoor Wireless Communications
    Li, Chenglin
    Zhang, Zhi
    Liu, Baoling
    2019 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2019,