FeDRL-D2D: Federated Deep Reinforcement Learning-Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks

Cited by: 1
Authors
Noman, Hafiz Muhammad Fahad [1 ]
Dimyati, Kaharudin [1 ]
Noordin, Kamarul Ariffin [1 ]
Hanafi, Effariza [1 ]
Abdrabou, Atef [2 ]
Affiliations
[1] Univ Malaya, Fac Engn, Dept Elect Engn, Adv Commun Res & Innovat ACRI, Kuala Lumpur 50603, Malaysia
[2] UAE Univ, Coll Engn, Elect & Commun Engn Dept, Al Ain, U Arab Emirates
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
6G; device-to-device communications; double deep Q-network (DDQN); energy efficiency; federated-deep reinforcement learning (F-DRL); resource allocation; POWER-CONTROL; OPTIMIZATION; SELECTION;
DOI
10.1109/ACCESS.2024.3434619
Chinese Library Classification (CLC): TP [automation technology, computer technology]
Discipline code: 0812
Abstract
Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality-of-service (QoS) requirements for ultra-massive connectivity, limited network resources, and interference management pose significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Hence, intelligent resource management and power control are indispensable for alleviating interference among DDPs and optimizing overall system performance and global energy efficiency. Considering this, we present a federated DRL-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint optimization problem of power control and channel allocation to maximize the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning for a decentralized training paradigm that preserves user privacy, and a double deep Q-network (DDQN) for intelligent resource management. The DDQN method uses two separate Q-networks for action selection and target estimation to rationalize the transmit power and dynamic channel selection, in which DDPs, acting as agents, reuse the uplink channels of CUEs. Simulation results show that the proposed method improves overall system energy efficiency by 41.52% and achieves sum-rate gains of 11.65%, 24.78%, and 47.29% over multi-agent actor-critic (MAAC), distributed deep deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively.
Moreover, the proposed scheme achieves a 5.88%, 15.79%, and 27.27% reduction in cellular outage probability compared to MAAC, D3PG, and DQN scheduling, respectively, making it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
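The two mechanisms named in the abstract, DDQN's decoupling of action selection (online network) from target estimation (target network), and federated aggregation of locally trained parameters, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the tabular Q "networks", state/action sizes, discount factor, and equal aggregation weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 3   # hypothetical sizes; the paper's state/action spaces differ
GAMMA = 0.9                  # illustrative discount factor

# Tabular stand-ins for the online and target Q-networks.
q_online = rng.random((N_STATES, N_ACTIONS))
q_target = q_online.copy()

def ddqn_target(reward, next_state):
    """Double-DQN target: the online net selects the next action,
    the target net evaluates it, decoupling selection from estimation."""
    best_action = int(np.argmax(q_online[next_state]))          # action selection
    return reward + GAMMA * q_target[next_state, best_action]   # target estimation

def fedavg(local_params, weights):
    """FedAvg-style aggregation: a weighted average of per-agent parameters,
    so raw experience never leaves each D2D pair (DDP)."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, local_params))

# Each DDP trains locally; only parameters are sent for aggregation.
locals_ = [q_online + 0.01 * rng.standard_normal(q_online.shape) for _ in range(3)]
global_q = fedavg(locals_, weights=[1, 1, 1])
print(global_q.shape)  # (4, 3)
```

The key design point illustrated here is that using the target network only to *evaluate* the action chosen by the online network reduces the overestimation bias of plain DQN, while FedAvg keeps training decentralized.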
Pages: 109775-109792
Page count: 18
Related papers
50 records in total
  • [21] Joint Computation Offloading and Resource Allocation for D2D-Assisted Mobile Edge Computing
    Jiang, Wei
    Feng, Daquan
    Sun, Yao
    Feng, Gang
    Wang, Zhenzhong
    Xia, Xiang-Gen
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (03) : 1949 - 1963
  • [22] Deep reinforcement learning empowered joint mode selection and resource allocation for RIS-aided D2D communications
    Guo, Liang
    Jia, Jie
    Chen, Jian
    Du, An
    Wang, Xingwei
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (25) : 18231 - 18249
  • [23] A Collaborative Task Offloading Scheme in D2D-Assisted Fog Computing Networks
    Fan, Nanxin
    Wang, Xiaoxiang
    Wang, Dongyu
    Lan, Yanwen
    Hou, Junxu
    2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2020,
  • [24] Energy-Efficiency Based Resource Allocation for D2D Communication and Cellular Networks
    AlWreikat, Layanah
    Chai, Rong
    Abu-Sharkh, Osama M. F.
    2014 IEEE FOURTH INTERNATIONAL CONFERENCE ON BIG DATA AND CLOUD COMPUTING (BDCLOUD), 2014, : 722 - 728
  • [25] Power Controlled Resource Allocation and Task Offloading via Optimized Deep Reinforcement Learning in D2D Assisted Mobile Edge Computing
    Gottam, Sambi Reddy
    Kar, Udit Narayana
    IEEE ACCESS, 2025, 13 : 19420 - 19437
  • [26] D2D Resource Allocation Mechanism Based on Energy Efficiency Optimization in Heterogeneous Networks
    Zhang Damin
    Zhang Huijuan
    Yan Wei
    Chen Zhongyun
    Xin Ziyun
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (02) : 480 - 487
  • [27] Joint resource allocation and power control for D2D communication with deep reinforcement learning in MCC
    Wang, Dan
    Qin, Hao
    Song, Bin
    Xu, Ke
    Du, Xiaojiang
    Guizani, Mohsen
    PHYSICAL COMMUNICATION, 2021, 45
  • [28] Resource allocation for UAV-aided energy harvesting-powered D2D communications: A reinforcement learning-based scheme
    Xu, Yi-Han
    Sun, Qi-Ming
    Zhou, Wen
    Yu, Gang
    AD HOC NETWORKS, 2022, 136
  • [29] Energy-Efficient D2D-Assisted Computation Offloading in NOMA-Enabled Cognitive Networks
    Cheng, Yuxia
    Liang, Chengchao
    Chen, Qianbin
    Yu, F. Richard
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (12) : 13441 - 13446
  • [30] Sum Throughput Maximization Scheme for NOMA-Enabled D2D Groups Using Deep Reinforcement Learning in 5G and Beyond Networks
    Khan, Mohammad Aftab Alam
    Kaidi, Hazilah Mad
    Ahmad, Norulhusna
    Rehman, Masood Ur
    IEEE SENSORS JOURNAL, 2023, 23 (13) : 15046 - 15057