Linguistic reward-oriented Takagi-Sugeno fuzzy reinforcement learning
Cited by: 0
Authors:
Yan, XW [1]; Deng, ZD [1]; Sun, ZQ [1]
Affiliations:
[1] Tsing Hua Univ, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
Source:
10TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-3: MEETING THE GRAND CHALLENGE: MACHINES THAT SERVE PEOPLE | 2001
Keywords:
DOI:
Not available
Chinese Library Classification:
TP [Automation technology; computer technology];
Discipline Code:
0812
Abstract:
This paper presents a new learning method that addresses two significant sub-problems in reinforcement learning at the same time: continuous spaces and linguistic rewards. Linguistic reward-oriented Takagi-Sugeno fuzzy reinforcement learning (LRTSFRL) is constructed by combining Q-learning with Takagi-Sugeno-type fuzzy inference systems. The proposed paradigm is capable of solving complicated learning tasks in continuous domains and can also be used to design Takagi-Sugeno fuzzy logic controllers. Experiments on the double inverted pendulum system demonstrate the performance and applicability of the presented scheme. Finally, concluding remarks are drawn.
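As a rough illustration of the combination the abstract describes, the sketch below shows generic fuzzy Q-learning with a zero-order Takagi-Sugeno inference system: each fuzzy rule holds per-action q-values, the global value is a firing-strength-weighted sum, and the TD error is distributed to rules in proportion to their activations. The scalar state, Gaussian memberships, action set, and learning constants are illustrative assumptions, not the authors' exact LRTSFRL formulation (which additionally handles linguistic reward labels).

```python
import numpy as np

# Illustrative fuzzy Q-learning sketch (assumed setup, not the paper's exact method):
# a zero-order Takagi-Sugeno fuzzy inference system whose rule consequents are
# per-action q-values over a 1-D continuous state.

N_RULES = 9          # fuzzy rules with Gaussian antecedents (assumption)
N_ACTIONS = 3        # discrete action set, e.g. {-F, 0, +F} (assumption)
ALPHA, GAMMA = 0.1, 0.95

centers = np.linspace(-1.0, 1.0, N_RULES)   # rule centers on the state axis
sigma = 0.3                                  # shared membership width
q = np.zeros((N_RULES, N_ACTIONS))           # consequent q-values per rule/action

def firing_strengths(x):
    """Normalized rule activations for a scalar state x."""
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return w / w.sum()

def q_values(x):
    """TS-style output: firing-strength-weighted sum of rule consequents."""
    return firing_strengths(x) @ q

def update(x, a, r, x_next):
    """Distribute the TD error to each rule in proportion to its activation.

    Here r is a numeric reward; in LRTSFRL the reward is given linguistically
    and would first be mapped to a numeric value via fuzzy membership.
    """
    phi = firing_strengths(x)
    td_error = r + GAMMA * q_values(x_next).max() - q_values(x)[a]
    q[:, a] += ALPHA * td_error * phi

# One illustrative interaction step on a toy transition.
x, a, r, x_next = 0.2, 1, 0.5, 0.25
update(x, a, r, x_next)
print(q_values(x))
```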
Pages: 533-536
Page count: 4