Q-learning based heterogeneous network self-optimization for reconfigurable network with CPC assistance

Cited: 0
Authors
ZhiYong Feng
LiTao Liang
Li Tan
Ping Zhang
Affiliations
[1] Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications
[2] China Mobile Group Beijing Co., Ltd.
Source
Science in China Series F: Information Sciences | 2009 / Vol. 52
Keywords
reconfigurable system; self-optimization; Q-learning; CPC;
DOI
Not available
Abstract
With the irreversible trend toward convergence and cooperation among heterogeneous networks, several important issues arise for network evolution. One of them is reconfiguring network elements such as cellular base stations (BSs) or access points (APs) of wireless local area networks (WLANs) according to the real-time network environment, in order to maximize the cooperation gain across different networks. In this paper, we consider the cognitive pilot channel (CPC) as an assistant that enables cooperation among heterogeneous networks. Based on the widely used Q-learning reinforcement learning algorithm, this paper proposes the heterogeneous network self-optimization algorithm (HNSA) to solve the adaptation problem in reconfigurable systems. In the algorithm, distributed agents perform reinforcement learning and make decisions cooperatively with the help of the CPC, in order to reduce the system blocking rate and improve network revenue. Finally, our simulation shows that the anticipated goals are achieved.
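The core idea the abstract describes (an agent learning which access network to assign sessions to, rewarded for avoiding blocking) can be sketched with standard tabular Q-learning. Everything below is an illustrative assumption: the toy two-network environment, the capacity and departure parameters, and the +1/-1 reward shape are not taken from the paper's HNSA model.

```python
import random

# Illustrative tabular Q-learning sketch (NOT the paper's actual HNSA):
# an agent assigns each arriving session to a cellular BS or a WLAN AP
# and learns from a blocking-based reward.
ACTIONS = ["BS", "AP"]            # candidate access networks (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(Q, state):
    """Epsilon-greedy selection over the Q-table (unseen pairs default to 0)."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(Q, state, action, reward, next_state):
    """Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q(s',a') - Q)."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def toy_episode(Q, loads, capacity=5, steps=200):
    """Toy environment: +1 reward for an admitted session, -1 when blocked."""
    for _ in range(steps):
        state = tuple(loads[a] for a in ACTIONS)
        action = choose_action(Q, state)
        blocked = loads[action] >= capacity
        reward = -1.0 if blocked else 1.0
        if not blocked:
            loads[action] += 1
        # Sessions depart at random, freeing capacity on each network.
        for a in ACTIONS:
            if loads[a] > 0 and random.random() < 0.3:
                loads[a] -= 1
        update(Q, state, action, reward, tuple(loads[a] for a in ACTIONS))

random.seed(0)
Q = {}
toy_episode(Q, {"BS": 0, "AP": 0})
```

In the paper's distributed setting, each agent would run such an update locally while the CPC supplies the cross-network state information that a single-network agent could not observe on its own.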
Pages: 2360-2368
Page count: 8