Deep Reinforcement Learning for Intelligent Communications

Cited by: 0
Authors
Tan J.-J. [1 ]
Liang Y.-C. [1 ]
Affiliations
[1] National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu
Source
Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China | 2020, Vol. 49, No. 2
Keywords
Deep reinforcement learning; Heterogeneous networks; Intelligent communications; Intelligent network management
DOI
10.12178/1001-0548.2020040
Abstract
In the era of data explosion, the rapid growth of mobile devices has dramatically increased the scale of wireless networks. Meanwhile, users place ever higher demands on wireless communications, requiring networks to provide precisely on-demand services so as to exploit limited resources. For these two reasons, traditional model-then-optimize approaches to network management will hit a performance bottleneck in the future. Fortunately, the emergence of artificial intelligence and machine learning offers a new solution to this problem. As a data-driven machine learning technique, deep reinforcement learning can directly learn the dynamics of an environment and use what it learns to make optimal decisions. Deep reinforcement learning therefore enables wireless networks to manage and optimize themselves according to their environments, making intelligent communications possible. This paper surveys applications of deep reinforcement learning to wireless communications from the aspects of resource management, access control, and network maintenance, and shows that deep reinforcement learning is an effective approach to realizing intelligent communications. © 2020, Editorial Board of Journal of the University of Electronic Science and Technology of China. All rights reserved.
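To make the abstract's "learn the dynamics, then decide" idea concrete, the following is a minimal sketch, not taken from the paper, of deep Q-learning applied to a toy dynamic spectrum access task: an agent repeatedly picks one of several channels and is rewarded when the chosen channel is free. The ToyChannelEnv environment, network sizes, and all hyperparameters are illustrative assumptions; PyTorch is used for the Q-network.

# Minimal deep Q-learning sketch for toy dynamic spectrum access.
# Everything here (environment, sizes, hyperparameters) is an
# illustrative assumption, not the paper's method.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_CHANNELS = 4        # number of channels in the toy model (assumption)
GAMMA = 0.9           # discount factor
EPSILON = 0.1         # exploration rate
BATCH_SIZE = 32
BUFFER_SIZE = 10_000

class ToyChannelEnv:
    """Each channel is 0 (free) or 1 (busy) and flips independently."""
    def __init__(self):
        self.state = np.random.randint(0, 2, N_CHANNELS).astype(np.float32)
    def step(self, action):
        # Each channel flips busy/free with probability 0.2 per step,
        # i.e. independent two-state Markov chains.
        flip = np.random.rand(N_CHANNELS) < 0.2
        self.state = np.where(flip, 1.0 - self.state, self.state).astype(np.float32)
        reward = 1.0 if self.state[action] == 0.0 else -1.0  # free -> success
        return self.state.copy(), reward

# Q-network maps the observed occupancy vector to one Q-value per channel.
q_net = nn.Sequential(
    nn.Linear(N_CHANNELS, 64), nn.ReLU(),
    nn.Linear(64, N_CHANNELS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=BUFFER_SIZE)  # experience replay buffer

env = ToyChannelEnv()
state = env.state.copy()
for t in range(5000):
    # Epsilon-greedy action selection over channel indices.
    if random.random() < EPSILON:
        action = random.randrange(N_CHANNELS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = env.step(action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    if len(buffer) >= BATCH_SIZE:
        # Sample a minibatch of past transitions and fit the Q-network
        # to one-step temporal-difference targets.
        batch = random.sample(buffer, BATCH_SIZE)
        s, a, r, s2 = map(np.array, zip(*batch))
        s = torch.from_numpy(s.astype(np.float32))
        s2 = torch.from_numpy(s2.astype(np.float32))
        a = torch.from_numpy(a).long()
        r = torch.from_numpy(r.astype(np.float32))
        with torch.no_grad():
            # TD target: r + gamma * max_a' Q(s', a').
            target = r + GAMMA * q_net(s2).max(dim=1).values
        pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The same loop structure (observe the environment, act epsilon-greedily, store transitions, regress the Q-network toward one-step TD targets) underlies the resource-management and access-control applications the paper surveys; real systems differ mainly in the state, action, and reward design.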
Pages: 169-181 (12 pages)