Channel Access and Power Control for Energy-Efficient Delay-Aware Heterogeneous Cellular Networks for Smart Grid Communications Using Deep Reinforcement Learning

Cited by: 17
Authors
Asuhaimi, Fauzun Abdullah [1 ]
Bu, Shengrong [1 ]
Klaine, Paulo Valente [1 ]
Imran, Muhammad Ali [1 ]
Affiliations
[1] Univ Glasgow, Dept Elect Engn, Glasgow G12 8QQ, Lanark, Scotland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Energy efficiency; end-to-end delay; device-to-device communications; cellular networks; smart grids; DEVICE-TO-DEVICE; RESOURCE-ALLOCATION; SENSOR NETWORKS; CHALLENGES; MANAGEMENT; UPLINK;
DOI
10.1109/ACCESS.2019.2939827
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Cellular technology based on long-term evolution (LTE) standards is a preferable choice for smart grid neighborhood area networks due to its high availability and scalability. However, integrating cellular networks with smart grid communications poses a significant challenge because the simultaneous transmission of real-time smart grid data can cause radio access network (RAN) congestion. Heterogeneous cellular networks (HetNets) have been proposed to improve the performance of LTE because HetNets can alleviate RAN congestion by offloading access attempts from a macrocell to small cells. In this paper, we study energy-efficiency and delay problems in HetNets when transmitting smart grid data with different delay requirements. We propose a distributed channel access and power control scheme and develop a learning-based approach for phasor measurement units (PMUs) to transmit data successfully under interference and signal-to-interference-plus-noise ratio (SINR) constraints. In particular, we exploit a deep reinforcement learning (DRL)-based method to train the PMUs to learn an optimal policy that maximizes the reward earned from successful transmissions without prior knowledge of the system dynamics. Results show that the DRL approach achieves good performance without knowing the system dynamics beforehand and outperforms the Gittins index policy under different normal ratios, minimum SINR requirements, and numbers of users in the cell.
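To make the abstract's setup concrete, the sketch below illustrates the kind of agent it describes: each PMU selects a joint (channel, power level) action and earns a positive reward only when its SINR clears a minimum threshold. This is an illustrative reconstruction, not the authors' implementation: all numeric values (channel count, power levels, channel gain, noise power, learning rate) are assumptions, and a tiny one-hidden-layer Q-network trained with a plain TD(0) update stands in for the full deep Q-network used in the paper.

```python
# Minimal sketch of a DRL-style agent for joint channel access and power
# control. A PMU picks a (channel, power) pair; reward is +1 when
# SINR = gain * power / (interference + noise) >= SINR_MIN, else -1.
# All constants below are illustrative assumptions, not paper values.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS, N_POWERS = 4, 3               # assumed action grid
POWER_LEVELS = np.array([0.1, 0.5, 1.0])  # transmit powers (W), assumed
NOISE, SINR_MIN = 1e-3, 2.0               # noise power and SINR threshold, assumed
N_ACTIONS = N_CHANNELS * N_POWERS
STATE_DIM = N_CHANNELS                    # state: last interference level per channel

def step(state, action):
    """Toy environment: reward +1 if the SINR constraint is met, else -1."""
    ch, p = divmod(action, N_POWERS)
    gain = 0.8                                      # assumed channel gain
    sinr = gain * POWER_LEVELS[p] / (state[ch] + NOISE)
    reward = 1.0 if sinr >= SINR_MIN else -1.0
    next_state = rng.uniform(0.0, 0.5, N_CHANNELS)  # fresh interference draw
    return next_state, reward

# One-hidden-layer Q-network standing in for the paper's deep Q-network.
W1 = rng.normal(0, 0.1, (STATE_DIM, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, N_ACTIONS)); b2 = np.zeros(N_ACTIONS)
LR, GAMMA, EPS = 1e-2, 0.9, 0.1           # learning rate, discount, exploration

def q_values(s):
    h = np.maximum(s @ W1 + b1, 0.0)      # ReLU hidden layer
    return h, h @ W2 + b2

state = rng.uniform(0.0, 0.5, N_CHANNELS)
for t in range(5000):
    h, q = q_values(state)
    a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(q))
    next_state, r = step(state, a)
    _, q_next = q_values(next_state)
    td_err = r + GAMMA * q_next.max() - q[a]        # TD(0) error
    # Backpropagate the TD error through both layers for the chosen action.
    grad_q = np.zeros(N_ACTIONS); grad_q[a] = -td_err
    grad_h = (W2 @ grad_q) * (h > 0)                # through ReLU, pre-update W2
    W2 -= LR * np.outer(h, grad_q); b2 -= LR * grad_q
    W1 -= LR * np.outer(state, grad_h); b1 -= LR * grad_h
    state = next_state

print("greedy action after training:", int(np.argmax(q_values(state)[1])))
```

In the paper the scheme is distributed, so each PMU would run such an agent locally, and the state would also encode the delay requirements of its queued smart grid data; a production version would further add experience replay and a target network, as in standard DQN training.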
Pages: 133474-133484
Page count: 11
Related Papers
50 records
  • [31] Energy-Efficient Power Control and Resource Allocation for D2D Communications in Underlaying Cellular Networks
    Guan, Xiaoxiao
    Zhai, Xiangping
    Yuan, Jiabin
    Liu, Hu
    CLOUD COMPUTING AND SECURITY, PT I, 2017, 10602
  • [32] Energy-efficient access point clustering and power allocation in cell-free massive MIMO networks: a hierarchical deep reinforcement learning approach
    Tan, Fangqing
    Deng, Quanxuan
    Liu, Qiang
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2024, 2024 (01)
  • [34] A Social-Aware Virtual MAC Protocol for Energy-Efficient D2D Communications Underlying Heterogeneous Cellular Networks
    Fan, Bo
    Tian, Hui
    Jiang, Li
    Vasilakos, Athanasios V.
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2018, 67 (09) : 8372 - 8385
  • [35] QoS-Aware Flow Control for Power-Efficient Data Center Networks with Deep Reinforcement Learning
    Sun, Penghao
    Guo, Zehua
    Liu, Sen
    Lan, Julong
    Hu, Yuxiang
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3552 - 3556
  • [36] Energy-efficient power allocation for device-to-device communications underlaid cellular networks using stochastic geometry
    Zabetian, Negar
    Mohammadi, Abbas
    Masoudi, Meysam
    TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIES, 2019, 30 (12)
  • [37] Comfortable and energy-efficient speed control of autonomous vehicles on rough pavements using deep reinforcement learning
    Du, Yuchuan
    Chen, Jing
    Zhao, Cong
    Liu, Chenglong
    Liao, Feixiong
    Chan, Ching-Yao
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2022, 134
  • [38] Collaborative Multi-Agent Deep Reinforcement Learning for Energy-Efficient Resource Allocation in Heterogeneous Mobile Edge Computing Networks
    Xiao, Yang
    Song, Yuqian
    Liu, Jun
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (06) : 6653 - 6668
  • [39] Dynamic Channel Access and Power Control in Wireless Interference Networks via Multi-Agent Deep Reinforcement Learning
    Lu, Ziyang
    Zhong, Chen
    Gursoy, M. Cenk
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (02) : 1588 - 1601
  • [40] A Physiological Control System for Pulsatile Ventricular Assist Device Using an Energy-Efficient Deep Reinforcement Learning Method
    Li, Te
    Cui, Wenbo
    Liu, Xingjian
    Li, Xu
    Xie, Nan
    Wang, Yongqing
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72