Model-Based Reinforcement Learning for Cavity Filter Tuning

Cited: 0
Authors
Nimara, Doumitrou Daniil [1 ]
Malek-Mohammadi, Mohammadreza [2 ]
Wei, Jieqiang [1 ]
Huang, Vincent [1 ]
Ogren, Petter [3 ]
Affiliations
[1] Ericsson GAIA, Stockholm, Sweden
[2] Qualcomm, San Diego, CA USA
[3] KTH, Div Robot Percept & Learning, Stockholm, Sweden
Source
LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211 | 2023 / Vol. 211
Keywords
Reinforcement Learning; Model-Based Reinforcement Learning; Telecommunication
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and computer technology]
Discipline classification code
0812
Abstract
The ongoing development of telecommunication systems such as 5G has led to an increased demand for well-calibrated base transceiver station (BTS) components. A pivotal component of every BTS is the cavity filter, which provides a sharp frequency characteristic that selects a particular band of interest and rejects the rest. Unfortunately, these characteristics, combined with manufacturing tolerances, make cavity filters difficult to mass-produce and often necessitate costly manual post-production fine tuning. To address this, numerous approaches have been proposed to automate the tuning process. One particularly promising approach, which has emerged in the past few years, is model-free reinforcement learning (MFRL); however, MFRL agents are not sample efficient. This poses a serious bottleneck, as utilising complex simulators or training on real filters is prohibitively time-consuming. This work advocates the use of model-based reinforcement learning (MBRL) and shows how it can significantly decrease sample complexity while maintaining a similar success rate. More specifically, we propose an improvement over a state-of-the-art (SoTA) MBRL algorithm, namely the Dreamer algorithm. This improvement can serve as a template for other similar high-dimensional, non-image-data problems. We carry out experiments on two complex filter types and show that our modification of the Dreamer architecture reduces sample complexity by factors of 4 and 10, respectively. Our findings pioneer the use of MBRL for this task, paving the way for more precise and accurate simulators whose use was previously prohibitively time-consuming.
Pages: 11
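
The abstract does not spell out the architectural modification, but the general idea it alludes to (adapting Dreamer's world model from image observations to high-dimensional, non-image vector data such as filter frequency responses) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the observation size OBS_DIM, action size ACT_DIM, latent sizes, the MLP encoder/decoder, and the simplified recurrent state-space model are all choices made for the example.

```python
# Sketch only: a Dreamer-style world model where the convolutional image
# encoder/decoder is replaced by MLPs so the latent dynamics model can consume
# vector observations (e.g. flattened S-parameters). Sizes are illustrative.
import torch
import torch.nn as nn

OBS_DIM = 128      # assumed flattened vector-observation size
ACT_DIM = 6        # assumed number of tuning screws (actions)
LATENT_DIM = 32    # assumed stochastic latent size
HIDDEN_DIM = 256   # assumed deterministic/hidden size


class VectorEncoder(nn.Module):
    """MLP stand-in for Dreamer's CNN encoder (non-image observations)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, HIDDEN_DIM), nn.ELU(),
            nn.Linear(HIDDEN_DIM, HIDDEN_DIM), nn.ELU(),
        )

    def forward(self, obs):
        return self.net(obs)


class VectorDecoder(nn.Module):
    """MLP stand-in for Dreamer's transposed-CNN decoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + HIDDEN_DIM, HIDDEN_DIM), nn.ELU(),
            nn.Linear(HIDDEN_DIM, OBS_DIM),
        )

    def forward(self, latent, deter):
        return self.net(torch.cat([latent, deter], dim=-1))


class RSSM(nn.Module):
    """Heavily simplified recurrent state-space model: GRU + Gaussian latent."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRUCell(LATENT_DIM + ACT_DIM, HIDDEN_DIM)
        self.prior = nn.Linear(HIDDEN_DIM, 2 * LATENT_DIM)
        self.posterior = nn.Linear(HIDDEN_DIM + HIDDEN_DIM, 2 * LATENT_DIM)

    def step(self, latent, action, deter, embed=None):
        # Deterministic state update from previous latent and action.
        deter = self.gru(torch.cat([latent, action], dim=-1), deter)
        # Posterior uses the encoded observation; prior is used for imagination.
        stats = (self.posterior(torch.cat([deter, embed], dim=-1))
                 if embed is not None else self.prior(deter))
        mean, std = stats.chunk(2, dim=-1)
        std = nn.functional.softplus(std) + 0.1
        latent = mean + std * torch.randn_like(std)   # reparameterised sample
        return latent, deter


if __name__ == "__main__":
    enc, dec, rssm = VectorEncoder(), VectorDecoder(), RSSM()
    obs = torch.randn(8, OBS_DIM)                 # batch of vector observations
    act = torch.randn(8, ACT_DIM)
    latent = torch.zeros(8, LATENT_DIM)
    deter = torch.zeros(8, HIDDEN_DIM)
    latent, deter = rssm.step(latent, act, deter, embed=enc(obs))
    recon = dec(latent, deter)                    # reconstruction target for training
    print(recon.shape)                            # torch.Size([8, 128])
```

In a full Dreamer-style pipeline this world model would be trained on replayed tuning trajectories, and the policy would then be optimised on imagined rollouts generated by the prior, which is what reduces the number of interactions needed with the simulator or the physical filter.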