Q-learning based heterogeneous network self-optimization for reconfigurable network with CPC assistance

Cited: 0
Authors
ZhiYong Feng
LiTao Liang
Li Tan
Ping Zhang
Institutions
[1] Beijing University of Posts and Telecommunications, Key Laboratory of Universal Wireless Communications, Ministry of Education
[2] China Mobile Group Beijing Co., Ltd.
Source
Science in China Series F: Information Sciences | 2009 / Vol. 52
Keywords
reconfigurable system; self-optimization; Q-learning; CPC;
DOI: Not available
Abstract
With the irreversible trend of convergence and cooperation among heterogeneous networks, several important issues arise for network evolution. One of them is to reconfigure network elements such as cellular base stations (BSs) or access points (APs) of wireless local area networks (WLANs) according to the real-time network environment, in order to maximize the cooperation gain across different networks. In this paper, we consider the cognitive pilot channel (CPC) as an assistant for enabling cooperation among heterogeneous networks. Building on the widely used Q-learning reinforcement learning algorithm, this paper proposes a heterogeneous network self-optimization algorithm (HNSA) to solve the adaptation problem in reconfigurable systems. In the algorithm, distributed agents perform reinforcement learning and make decisions cooperatively with the help of the CPC, in order to reduce the system blocking rate and improve network revenue. Finally, our simulations show that the anticipated goal is achieved.
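The abstract does not give the details of HNSA itself; as a rough illustration of the tabular Q-learning update it builds on, the sketch below trains an agent on a hypothetical two-state admission-control toy (state = network load, actions = admit/reject, with a penalty standing in for blocking). The state/action/reward model and all names here are illustrative assumptions, not the paper's algorithm.

```python
import random

def q_learning_demo(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy admission problem.

    States: 0 = low network load, 1 = high load.
    Actions: 0 = admit a request, 1 = reject it.
    Admitting pays off at low load but is penalized at high load
    (a stand-in for a blocking penalty).
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]
    state = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1
        # illustrative reward model
        if action == 0:
            reward = 1.0 if state == 0 else -1.0
        else:
            reward = 0.0
        next_state = rng.randrange(2)  # load fluctuates randomly
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state
    return Q
```

After training, the greedy policy admits at low load and rejects at high load, i.e. `Q[0][0] > Q[0][1]` and `Q[1][1] > Q[1][0]`. In the paper's setting, each distributed agent would learn such a table over network reconfiguration actions, with the CPC supplying shared environment information.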
Pages: 2360-2368 (8 pages)
Related Papers (50 total)
[41]   Feature selection optimization algorithm based on evolutionary Q-learning [J].
Yang, Guan ;
Zeng, Zhiyong ;
Pu, Xinrui ;
Duan, Ren .
INFORMATION SCIENCES, 2025, 719
[42]   A Novel Hybrid Path Planning Method Based on Q-Learning and Neural Network for Robot Arm [J].
Abdi, Ali ;
Adhikari, Dibash ;
Park, Ju Hong .
APPLIED SCIENCES-BASEL, 2021, 11 (15)
[43]   Q-Learning Based and Energy-Aware Multipath Congestion Control in Mobile Wireless Network [J].
Qin, Jiuren ;
Gao, Kai ;
Zhong, Lujie ;
Yang, Shujie .
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, 2022, 38 (01) :165-183
[44]   Deep Reinforcement Learning-Based Self-Optimization of Flow Chemistry [J].
Yewale, Ashish ;
Yang, Yihui ;
Nazemifard, Neda ;
Papageorgiou, Charles D. ;
Rielly, Chris D. ;
Benyahia, Brahim .
ACS ENGINEERING AU, 2025, 5 (03) :247-266
[45]   Few-shot Self-optimization Learning Based on Deep Metric [J].
Ma, Yong ;
Dou, Quansheng .
2020 IEEE THE 3RD INTERNATIONAL CONFERENCE ON ELECTRONICS AND COMMUNICATION ENGINEERING (ICECE), 2020, :134-139
[46]   Comparative analysis of Q-learning, SARSA, and deep Q-network for microgrid energy management [J].
Ramesh, Sreyas ;
Sukanth, B. N. ;
Sathyavarapu, Sri Jaswanth ;
Sharma, Vishwash ;
Kumaar, A. A. Nippun ;
Khanna, Manju .
SCIENTIFIC REPORTS, 2025, 15 (01)
[47]   Remote Proximity Sensing With a Novel Q-Learning in Bluetooth Low Energy Network [J].
Ng, Pai Chet ;
She, James .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (08) :6156-6166
[48]   Exploiting Q-learning in Extending the Network Lifetime of Wireless Sensor Networks with Holes [J].
Khanh Le ;
Nguyen Thanh Hung ;
Kien Nguyen ;
Phi Le Nguyen .
2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, :602-609
[49]   Deep Spatio-Temporal Graph Network with Self-Optimization for Air Quality Prediction [J].
Jin, Xue-Bo ;
Wang, Zhong-Yao ;
Kong, Jian-Lei ;
Bai, Yu-Ting ;
Su, Ting-Li ;
Ma, Hui-Jun ;
Chakrabarti, Prasun .
ENTROPY, 2023, 25 (02)
[50]   BIOINSPIRED NEURAL NETWORK-BASED Q-LEARNING APPROACH FOR ROBOT PATH PLANNING IN UNKNOWN ENVIRONMENTS [J].
Ni, Jianjun ;
Li, Xinyun ;
Hua, Mingang ;
Yang, Simon X. .
INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2016, 31 (06) :464-474