Deep Q-network Based Reinforcement Learning for Distributed Dynamic Spectrum Access

Cited by: 1
Authors
Yadav, Manish Anand [1 ]
Li, Yuhui [1 ]
Fang, Guangjin [1 ]
Shen, Bin [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun CQUPT, Sch Commun & Informat Engn SCIE, Chongqing 400065, Peoples R China
Source
2022 IEEE 2ND INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND ARTIFICIAL INTELLIGENCE (CCAI 2022) | 2022
Keywords
dynamic spectrum access; Q-learning; deep reinforcement learning; double deep Q-network;
DOI
10.1109/CCAI55564.2022.9807797
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To address spectrum scarcity and spectrum under-utilization in wireless networks, we propose a double deep Q-network based reinforcement learning algorithm for distributed dynamic spectrum access. Each channel in the network alternates between busy and idle states according to a two-state Markov chain. At the start of each time slot, every secondary user (SU) performs spectrum sensing on each channel and accesses one based on both the sensing result and the output of the algorithm's Q-network. Over time, the Deep Reinforcement Learning (DRL) algorithm learns the spectrum environment and models the behavior patterns of the primary users (PUs). Through simulation, we show that our proposed algorithm is simple to train, yet effective in reducing interference to primary as well as secondary users and in achieving a higher successful transmission rate.
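The setting described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the channel transition probabilities, reward, and Q-values below are hypothetical placeholders, and a plain vector-valued double-Q target update stands in for the full double deep Q-network.

```python
import random
import numpy as np

class TwoStateChannel:
    """Illustrative two-state (idle/busy) Markov channel.
    Transition probabilities are hypothetical, not taken from the paper."""
    def __init__(self, p_idle_to_busy=0.2, p_busy_to_idle=0.3, seed=0):
        self.rng = random.Random(seed)
        self.busy = False  # start in the idle state
        self.p_ib, self.p_bi = p_idle_to_busy, p_busy_to_idle

    def step(self):
        # Flip the state with the corresponding transition probability.
        flip = self.p_bi if self.busy else self.p_ib
        if self.rng.random() < flip:
            self.busy = not self.busy
        return self.busy  # True = busy, False = idle

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double-DQN target: the online network selects the greedy next
    action, the target network evaluates it (reduces overestimation)."""
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[a_star]

# One time slot for one SU: sense all channels, then pick among idle ones.
channels = [TwoStateChannel(seed=k) for k in range(3)]
sensed = [ch.step() for ch in channels]
idle = [i for i, busy in enumerate(sensed) if not busy]
print("idle channels this slot:", idle)
print("double-Q target:",
      double_q_target(1.0, 0.9, np.array([1.0, 3.0]), np.array([0.5, 2.0])))
```

In a full implementation, `q_online_next` and `q_target_next` would be the outputs of two copies of the Q-network, with the target copy updated only periodically; decoupling action selection from action evaluation is what distinguishes double DQN from vanilla DQN.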
Pages: 227-232
Page count: 6
Related Papers
50 records in total
  • [1] Dynamic spectrum access based on double deep Q-network and convolution neural network
    Fang, Guangjin
    Shen, Bin
    Zhang, Hong
    Cui, Taiping
    2022 24TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY (ICACT): ARTIFICIAL INTELLIGENCE TECHNOLOGIES TOWARD CYBERSECURITY, 2022, : 112 - +
  • [2] Distributed Deep Reinforcement Learning with Wideband Sensing for Dynamic Spectrum Access
    Kaytaz, Umuralp
    Ucar, Seyhan
    Akgun, Baris
    Coleri, Sinem
    2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2020,
  • [3] Multi-User Dynamic Spectrum Access Based on LR-Q Deep Reinforcement Learning Network
    Li, Yuhui
    Wang, Yu
    Li, Yue
    Shen, Bin
    2023 25TH INTERNATIONAL CONFERENCE ON ADVANCED COMMUNICATION TECHNOLOGY, ICACT, 2023, : 79 - 84
  • [4] Deep Reinforcement Learning Pairs Trading with a Double Deep Q-Network
    Brim, Andrew
    2020 10TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE (CCWC), 2020, : 222 - 227
  • [5] Deep Multi-User Reinforcement Learning for Distributed Dynamic Spectrum Access
    Naparstek, Oshri
    Cohen, Kobi
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2019, 18 (01) : 310 - 323
  • [6] Timeslot Scheduling with Reinforcement Learning Using a Double Deep Q-Network
    Ryu, Jihye
    Kwon, Juhyeok
    Ryoo, Jeong-Dong
    Cheung, Taesik
    Joung, Jinoo
    ELECTRONICS, 2023, 12 (04)
  • [7] Deep Reinforcement Learning. Case Study: Deep Q-Network
    Vrejoiu, Mihnea Horia
    ROMANIAN JOURNAL OF INFORMATION TECHNOLOGY AND AUTOMATIC CONTROL-REVISTA ROMANA DE INFORMATICA SI AUTOMATICA, 2019, 29 (03): : 65 - 78
  • [8] Dynamic spectrum access based on deep reinforcement learning for multiple access in cognitive radio
    Li, Zeng-qi
    Liu, Xin
    Ning, Zhao-long
    PHYSICAL COMMUNICATION, 2022, 54
  • [9] Learning to schedule dynamic distributed reconfigurable workshops using expected deep Q-network
    Yang, Shengluo
    Wang, Junyi
    Xu, Zhigang
    ADVANCED ENGINEERING INFORMATICS, 2024, 59
  • [10] Microgrid energy management using deep Q-network reinforcement learning
    Alabdullah, Mohammed H.
    Abido, Mohammad A.
    ALEXANDRIA ENGINEERING JOURNAL, 2022, 61 (11) : 9069 - 9078