Enhancing Learning Efficiency in FACL: A Novel Fuzzy Rule Transfer Method for Transfer Learning

Cited by: 1
Authors
Ni, Dawei [1 ]
Schwartz, Howard M. [2 ]
Affiliations
[1] Ericsson, Global Artificial Intelligence Accelerator, Montreal, QC, Canada
[2] Carleton University, Department of Systems and Computer Engineering, Ottawa, ON, Canada
Keywords
Transfer learning; Fuzzy Actor-Critic Learning; Differential games; Reinforcement learning; Reinforcement; Controllers
DOI
10.1007/s40815-023-01662-3
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
The concept of leveraging knowledge from previous experience to accelerate learning is the crux of transfer learning. In reinforcement learning (RL), an agent typically requires prolonged interaction with its environment, which is time-consuming and leads to slow convergence. Transfer learning offers a promising remedy in such settings. In this paper, we investigate transfer learning in the fuzzy reinforcement learning domain, specifically in the context of differential games. We introduce a novel approach for knowledge transfer across analogous tasks that employs fuzzy logic controllers as function approximators, specifically within the Fuzzy Actor-Critic Learning (FACL) algorithm. In particular, we propose a fuzzy rule transfer strategy that maps fuzzy rules from the source task to the target task. The target task is assumed to be related to the source task but to involve a more complex state space. Our approach has been implemented and tested on differential games in which both the state space and the action space are continuous. The simulation results demonstrate that knowledge transfer enables RL agents to learn faster and reach asymptotic performance more quickly in the target task.
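To make the rule-transfer idea concrete, the following is a minimal, illustrative Python sketch rather than the authors' implementation: a zero-order Takagi-Sugeno fuzzy controller of the kind commonly used as the actor/critic approximator in FACL, together with a hypothetical transfer_rules helper that initializes each target-task rule consequent from the source controller's output at the projected location of that rule's antecedent center. All class and function names, and the state_map projection from the more complex target state space onto the source state space, are assumptions made for illustration.

```python
import numpy as np

class FuzzyController:
    """Zero-order Takagi-Sugeno controller with Gaussian membership functions.

    Each rule i has antecedent centers self.centers[i] (one per input dimension)
    and a scalar consequent self.weights[i]; the output is the normalized
    firing-strength-weighted sum of the consequents.
    """
    def __init__(self, centers, sigma=0.5):
        self.centers = np.asarray(centers, dtype=float)   # shape (n_rules, n_inputs)
        self.sigma = sigma
        self.weights = np.zeros(len(self.centers))        # rule consequents

    def firing_strengths(self, x):
        # Gaussian membership per rule, then normalize across rules.
        d2 = np.sum((self.centers - np.asarray(x, dtype=float)) ** 2, axis=1)
        phi = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return phi / (phi.sum() + 1e-12)

    def output(self, x):
        return float(self.firing_strengths(x) @ self.weights)


def transfer_rules(source, target, state_map):
    """Hypothetical rule-transfer step (illustrative assumption).

    Each target rule inherits the source controller's output evaluated at the
    projection of its antecedent center onto the source-task state space.
    """
    for i, center in enumerate(target.centers):
        target.weights[i] = source.output(state_map(center))
    return target


if __name__ == "__main__":
    # Toy setup: source task has a 1-D state; target task adds a second state
    # variable, so its rule base covers a 2-D grid of antecedent centers.
    source = FuzzyController(centers=[[x] for x in np.linspace(-1, 1, 5)])
    source.weights = np.linspace(-1, 1, 5)                # stand-in for learned consequents
    target = FuzzyController(
        centers=[[x, y] for x in np.linspace(-1, 1, 5) for y in np.linspace(-1, 1, 5)]
    )
    transfer_rules(source, target, state_map=lambda s: s[:1])  # ignore the extra dimension
    print(target.output([0.4, -0.2]))
```

After this initialization, the target controller would still be refined by the usual FACL actor-critic updates; the transferred consequents only serve as a warm start so that early episodes in the target task behave like the converged source policy on the shared part of the state space.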
Pages: 1215-1232 (18 pages)