Exploring the Potential of Model-Free Reinforcement Learning using Tsetlin Machines

Cited by: 0
Authors
Drosdal, Didrik K. [1 ]
Grimsmo, Andreas [1 ]
Andersen, Per-Arne [1 ]
Granmo, Ole-Christoffer [1 ]
Goodwin, Morten [1 ]
Affiliations
[1] Univ Agder, Ctr AI Res, Grimstad, Norway
Source
2023 INTERNATIONAL SYMPOSIUM ON THE TSETLIN MACHINE, ISTM | 2023
Keywords
Machine Learning; Tsetlin Machine; Reinforcement Learning; OpenAI Gym; Cartpole; Gymnasium; Pong; Game Playing
DOI
10.1109/ISTM58889.2023.10455080
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper investigates the potential of model-free reinforcement learning using the Tsetlin Machine by evaluating its performance in two widely recognized reinforcement learning benchmark environments: Cartpole and Pong. Our study has two primary objectives. First, we analyze the effectiveness of the Tsetlin Machine in learning from the actions of expert agents in the Cartpole environment. Second, we assess the ability of the multiclass Tsetlin Machine to learn to play both the Cartpole and Pong environments from scratch. Our findings indicate that the Tsetlin Machine can successfully learn and solve the Cartpole environment. Although the Pong environment remains unsolved, the Tsetlin Machine demonstrates its learning capability by scoring a few points across multiple test runs. Through our empirical investigation, we conclude that the Tsetlin Machine shows promise for reinforcement learning. Nonetheless, further research is needed to address the limitations observed in its performance in some of the examined environments.
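The abstract does not give implementation details, but one prerequisite of any Tsetlin Machine pipeline is worth illustrating: Tsetlin Machines operate on Boolean literals, so the four continuous CartPole observations (cart position, cart velocity, pole angle, pole angular velocity) must be binarized before training. The sketch below shows one common scheme, thermometer encoding; the helper name, bit count, and feature bounds are all hypothetical illustrations, not the paper's actual setup.

```python
# Hypothetical per-feature bounds for CartPole's 4-dimensional observation;
# the real paper's binarization scheme and ranges are not specified.
CARTPOLE_BOUNDS = [(-2.4, 2.4), (-3.0, 3.0), (-0.21, 0.21), (-3.0, 3.0)]

def thermometer_encode(obs, bounds=CARTPOLE_BOUNDS, bits=8):
    """Map each continuous feature to `bits` Booleans: bit i is 1 iff the
    feature exceeds the i-th of `bits` evenly spaced thresholds, so larger
    values light up longer prefixes of 1s (like mercury in a thermometer)."""
    encoded = []
    for value, (lo, hi) in zip(obs, bounds):
        for i in range(bits):
            threshold = lo + (hi - lo) * (i + 1) / (bits + 1)
            encoded.append(1 if value > threshold else 0)
    return encoded

# Encode one observation: 4 features x 8 bits = 32 Boolean inputs.
x = thermometer_encode([0.0, 0.5, 0.05, -0.5])
print(len(x))  # 32
```

A supervised "learning from expert actions" setup, as described for Cartpole above, would then fit a multiclass Tsetlin Machine classifier on (encoded observation, expert action) pairs; learning from scratch instead requires deriving training targets from the environment's reward signal.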
Pages: 8