Deep Q-network Based Reinforcement Learning for Distributed Dynamic Spectrum Access

Cited by: 1
Authors
Yadav, Manish Anand [1]
Li, Yuhui [1]
Fang, Guangjin [1]
Shen, Bin [1]
Affiliations
[1] Chongqing Univ Posts & Telecommun CQUPT, Sch Commun & Informat Engn SCIE, Chongqing 400065, Peoples R China
Source
2022 IEEE 2ND INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND ARTIFICIAL INTELLIGENCE (CCAI 2022) | 2022
Keywords
dynamic spectrum access; Q-learning; deep reinforcement learning; double deep Q-network
DOI
10.1109/CCAI55564.2022.9807797
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
To address spectrum scarcity and spectrum under-utilization in wireless networks, we propose a double deep Q-network (DDQN) based reinforcement learning algorithm for distributed dynamic spectrum access. Each channel in the network is either busy or idle and evolves according to a two-state Markov chain. At the start of each time slot, every secondary user (SU) performs spectrum sensing on each channel and accesses one based on the sensing result as well as the output of the algorithm's Q-network. Over time, the deep reinforcement learning (DRL) agent learns the spectrum environment and models the behavior patterns of the primary users (PUs). Simulations show that the proposed algorithm is simple to train, yet effective in reducing interference to both primary and secondary users and in achieving a higher successful transmission rate.
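The abstract names three main ingredients: channels that evolve as two-state (busy/idle) Markov chains, per-slot sensing by each SU, and action selection driven by a double deep Q-network. The sketch below is not taken from the paper; it is a minimal PyTorch illustration of those pieces, where the network size, transition probabilities, reward handling, epsilon value, and the extra "stay silent" action are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a double deep Q-network agent for
# distributed dynamic spectrum access over channels modeled as independent
# two-state (busy/idle) Markov chains. All sizes and probabilities are
# illustrative assumptions.
import random
import torch
import torch.nn as nn

N_CHANNELS = 4              # number of licensed PU channels (assumption)
N_ACTIONS = N_CHANNELS + 1  # access one channel, or action 0 = stay silent

class MarkovChannel:
    """Two-state (0 = idle, 1 = busy) Markov chain for one PU channel."""
    def __init__(self, p_idle_to_busy=0.3, p_busy_to_idle=0.4):
        self.p01, self.p10 = p_idle_to_busy, p_busy_to_idle
        self.state = random.randint(0, 1)
    def step(self):
        if self.state == 0:
            self.state = 1 if random.random() < self.p01 else 0
        else:
            self.state = 0 if random.random() < self.p10 else 1
        return self.state

class QNet(nn.Module):
    """Small MLP mapping the sensed channel-occupancy vector to Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CHANNELS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

def ddqn_loss(online, target, batch, gamma=0.95):
    """TD loss with the double-DQN target: the online net selects the next
    action, the target net evaluates it (reduces Q-value over-estimation)."""
    s, a, r, s2 = batch  # tensors: states, actions, rewards, next states
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_star = online(s2).argmax(dim=1, keepdim=True)   # action selection
        q_next = target(s2).gather(1, a_star).squeeze(1)  # action evaluation
    return nn.functional.mse_loss(q_sa, r + gamma * q_next)

def choose_action(online, sensed, eps=0.1):
    """Epsilon-greedy decision at the start of a time slot, combining the
    (possibly noisy) sensing result with the Q-network output."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    q = online(torch.tensor(sensed, dtype=torch.float32).unsqueeze(0))
    return int(q.argmax(dim=1).item())
```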
Pages: 227-232
Number of pages: 6