Deep Q-network Based Reinforcement Learning for Distributed Dynamic Spectrum Access

Cited by: 1
Authors
Yadav, Manish Anand [1 ]
Li, Yuhui [1 ]
Fang, Guangjin [1 ]
Shen, Bin [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun CQUPT, Sch Commun & Informat Engn SCIE, Chongqing 400065, Peoples R China
Source
2022 IEEE 2ND INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND ARTIFICIAL INTELLIGENCE (CCAI 2022) | 2022
Keywords
dynamic spectrum access; Q-learning; deep reinforcement learning; double deep Q-network;
DOI
10.1109/CCAI55564.2022.9807797
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To address spectrum scarcity and spectrum under-utilization in wireless networks, we propose a double deep Q-network based reinforcement learning algorithm for distributed dynamic spectrum access. Each channel in the network is modeled as either busy or idle according to a two-state Markov chain. At the start of each time slot, every secondary user (SU) senses each channel and decides which one to access based on the sensing result and the output of the algorithm's Q-network. Over time, the deep reinforcement learning (DRL) agent learns the spectrum environment and models the behavior patterns of the primary users (PUs). Simulations show that the proposed algorithm is simple to train, yet effective in reducing interference to both primary and secondary users and in achieving a higher successful transmission rate.
Pages: 227-232
Number of pages: 6
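
As an illustrative companion to the abstract above (not the authors' implementation), the sketch below shows one possible way to code the two ingredients the abstract names: channels that each evolve as a two-state (busy/idle) Markov chain, and the double-DQN target in which an online network selects the next action and a target network evaluates it. The channel count, transition probabilities, layer sizes, the extra "stay silent" action, and all identifiers (MarkovChannels, QNet, double_dqn_target) are assumptions made for this sketch, not details taken from the paper.

```python
# Hypothetical sketch of the setup described in the abstract; all parameters are assumed.
import numpy as np
import torch
import torch.nn as nn

N_CHANNELS = 4   # assumed number of PU channels
GAMMA = 0.9      # assumed discount factor


class MarkovChannels:
    """Each channel switches between idle (1) and busy (0) via a two-state Markov chain."""

    def __init__(self, p_idle_to_busy=0.3, p_busy_to_idle=0.4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.p_ib, self.p_bi = p_idle_to_busy, p_busy_to_idle
        self.state = self.rng.integers(0, 2, size=N_CHANNELS)  # 1 = idle, 0 = busy

    def step(self):
        # Flip each channel according to its current state's transition probability.
        flip_idle = self.rng.random(N_CHANNELS) < self.p_ib
        flip_busy = self.rng.random(N_CHANNELS) < self.p_bi
        self.state = np.where(self.state == 1,
                              np.where(flip_idle, 0, 1),
                              np.where(flip_busy, 1, 0))
        return self.state.copy()


class QNet(nn.Module):
    """Maps sensed channel states to Q-values over N_CHANNELS + 1 actions
    (access one of the channels, or stay silent)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CHANNELS, 64), nn.ReLU(),
            nn.Linear(64, N_CHANNELS + 1))

    def forward(self, x):
        return self.net(x)


def double_dqn_target(online, target, reward, next_obs, done):
    """Double DQN: the online net picks the next action, the target net evaluates it."""
    with torch.no_grad():
        next_a = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_a).squeeze(1)
        return reward + GAMMA * (1.0 - done) * next_q


# Example: roll the channels forward one slot and sense the new occupancy.
env = MarkovChannels()
obs = torch.tensor(env.step(), dtype=torch.float32).unsqueeze(0)  # shape [1, N_CHANNELS]
```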