Exploring the Potential of Model-Free Reinforcement Learning using Tsetlin Machines

Cited by: 0
Authors
Drosdal, Didrik K. [1 ]
Grimsmo, Andreas [1 ]
Andersen, Per-Arne [1 ]
Granmo, Ole-Christoffer [1 ]
Goodwin, Morten [1 ]
Affiliations
[1] Univ Agder, Ctr AI Res, Grimstad, Norway
Source
2023 INTERNATIONAL SYMPOSIUM ON THE TSETLIN MACHINE, ISTM | 2023
Keywords
Machine Learning; Tsetlin Machine; Reinforcement Learning; OpenAI Gym; Cartpole; Gymnasium; Pong; Game Playing;
DOI
10.1109/ISTM58889.2023.10455080
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper investigates the potential of model-free reinforcement learning using the Tsetlin Machine by evaluating its performance in two widely recognized benchmark environments for reinforcement learning: Cartpole and Pong. Our study has two primary objectives. First, we analyze the effectiveness of the Tsetlin Machine in learning from the actions of expert agents in the Cartpole environment. Second, we assess the ability of the multiclass Tsetlin Machine to learn to play both the Cartpole and Pong environments from scratch. Our findings indicate that the Tsetlin Machine can successfully learn and solve the Cartpole environment. Although the Pong environment remains unsolved, the Tsetlin Machine demonstrates its learning capability by scoring a few points in multiple test runs. Through our empirical investigation, we conclude that the Tsetlin Machine shows promise in the field of reinforcement learning. Nonetheless, further research is needed to address the limitations observed in its performance in some of the examined environments.
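Because a Tsetlin Machine operates on boolean inputs, applying it to Cartpole's continuous 4-dimensional observations requires a binarization step before classification. The sketch below illustrates one common threshold-based encoding; the threshold values and function names are illustrative assumptions, not the authors' exact scheme.

```python
def binarize_observation(obs, thresholds):
    """Encode each continuous feature as one bit per threshold:
    bit = 1 if the feature exceeds the threshold, else 0."""
    bits = []
    for value, cuts in zip(obs, thresholds):
        for t in cuts:
            bits.append(1 if value > t else 0)
    return bits

# Illustrative cut points for CartPole's four observation dimensions:
# cart position, cart velocity, pole angle, pole angular velocity.
THRESHOLDS = [
    [-1.0, 0.0, 1.0],   # cart position
    [-0.5, 0.0, 0.5],   # cart velocity
    [-0.1, 0.0, 0.1],   # pole angle (radians)
    [-0.5, 0.0, 0.5],   # pole angular velocity
]

obs = [0.02, -0.3, 0.05, 0.7]
features = binarize_observation(obs, THRESHOLDS)
# `features` is a 12-element boolean vector, suitable as input to a
# multiclass Tsetlin Machine whose classes are the discrete actions.
```

With expert demonstrations, pairs of such feature vectors and expert actions form a supervised training set (the paper's first objective); learning from scratch instead derives the action labels from environment reward.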
Pages: 8