Q-learning based heterogenous network self-optimization for reconfigurable network with CPC assistance

Cited by: 0
Authors
ZhiYong Feng
LiTao Liang
Li Tan
Ping Zhang
Affiliations
[1] Beijing University of Posts and Telecommunications, Key Laboratory of Universal Wireless Communications, Ministry of Education
[2] China Mobile Group Beijing Co., Ltd.
Source
Science in China Series F: Information Sciences | 2009 / Vol. 52
Keywords
reconfigurable system; self-optimization; Q-learning; CPC;
DOI
None available
CLC Number
Subject Classification Code
Abstract
With the irreversible trend of convergence and cooperation among heterogeneous networks, several important issues emerge for network evolution. One of them is reconfiguring network elements such as cellular base stations (BSs) or access points (APs) of wireless local area networks (WLANs) according to the real-time network environment, in order to maximize the cooperation gain across the different networks. In this paper, we consider the cognitive pilot channel (CPC) as an assistant that enables cooperation among heterogeneous networks. Building on the widely used Q-learning reinforcement learning algorithm, this paper proposes the heterogeneous network self-optimization algorithm (HNSA) to solve the adaptation problem in reconfigurable systems. In the algorithm, distributed agents perform reinforcement learning and make decisions cooperatively with the help of the CPC, in order to reduce the system blocking rate and improve network revenue. Finally, our simulations show that the anticipated goals are achieved.
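At its core, HNSA rests on the standard tabular Q-learning update. The following is a minimal illustrative sketch, not the paper's algorithm: the state space (a toy "load level"), the action space (which network admits a request), and the reward shaping are all hypothetical stand-ins; the paper's HNSA additionally distributes the learning across agents that cooperate via the CPC.

```python
import random

# Hypothetical toy problem: state = current load level (0..2),
# action = which network admits a request (0 = cellular BS, 1 = WLAN AP).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
STATES, ACTIONS = 3, 2

# Q-table initialized to zero.
Q = [[0.0] * ACTIONS for _ in range(STATES)]

def choose_action(state, rng):
    """Epsilon-greedy action selection over the Q-table row."""
    if rng.random() < EPSILON:
        return rng.randrange(ACTIONS)
    return max(range(ACTIONS), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    """Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def step(state, action, rng):
    """Toy environment: offloading to the WLAN AP pays off only at high load;
    otherwise the cellular BS is the better choice. Next load level is random."""
    good = (state == 2 and action == 1) or (state < 2 and action == 0)
    return (1.0 if good else 0.0), rng.randrange(STATES)

rng = random.Random(0)
state = 0
for _ in range(5000):
    action = choose_action(state, rng)
    reward, next_state = step(state, action, rng)
    update(state, action, reward, next_state)
    state = next_state
```

After training, the learned policy prefers the WLAN AP at high load and the cellular BS otherwise, mirroring the kind of per-state admission decision the paper's distributed agents learn.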
Pages: 2360-2368
Number of pages: 8