Multiagent Reinforcement Learning: Spiking and Nonspiking Agents in the Iterated Prisoner's Dilemma

Cited by: 14
Authors
Vassiliades, Vassilis [1 ]
Cleanthous, Aristodemos [1 ]
Christodoulou, Chris [1 ]
Affiliations
[1] Univ Cyprus, Dept Comp Sci, CY-1678 Nicosia, Cyprus
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2011, Vol. 22, No. 4
Keywords
Multiagent reinforcement learning; Prisoner's Dilemma; reward transformation; spiking neural networks; TIMING-DEPENDENT PLASTICITY; INFINITE-HORIZON; MODEL; NETWORKS; ALGORITHM; EVOLUTION; NEURONS; SYSTEMS; ANSWER;
DOI
10.1109/TNN.2011.2111384
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper investigates multiagent reinforcement learning (MARL) in a general-sum game whose payoff structure requires the agents to exploit each other in a way that benefits all of them. The contradictory nature of such games makes their study in multiagent systems quite challenging. In particular, we investigate MARL with spiking and nonspiking agents in the Iterated Prisoner's Dilemma by exploring the conditions required to enhance its cooperative outcome. The spiking agents are neural networks of leaky integrate-and-fire neurons trained with two different learning algorithms: 1) reinforcement of stochastic synaptic transmission, and 2) reward-modulated spike-timing-dependent plasticity with an eligibility trace. The nonspiking agents use a tabular representation and are trained with the Q-learning and SARSA algorithms, with a novel reward transformation process also being applied to the Q-learning agents. According to the results, the cooperative outcome is enhanced by: 1) transformed internal reinforcement signals and a combination of a high learning rate and a low discount factor with an appropriate exploration schedule in the case of nonspiking agents, and 2) a longer eligibility trace time constant in the case of spiking agents. Moreover, it is shown that spiking and nonspiking agents exhibit similar behavior and can therefore be used equally well in a multiagent interaction setting. For training the spiking agents in the case where more than one output neuron competes for reinforcement, a novel and necessary modification that enhances competition is applied to the two learning algorithms, in order to avoid possible synaptic saturation. This is done by administering to the networks additional global reinforcement signals for every spike of the output neurons that were not "responsible" for the preceding decision.
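To make the tabular setting concrete, the sketch below (not taken from the paper) pits a Q-learning agent against a SARSA agent in the Iterated Prisoner's Dilemma. It assumes the standard payoff matrix, uses the last joint action as the state, and picks hyperparameter values only to illustrate the abstract's finding that a high learning rate, a low discount factor, and a decaying exploration schedule favor cooperation; the paper's novel reward transformation is its own contribution and is not reproduced here.

    # Illustrative sketch, not the authors' implementation: one Q-learning and
    # one SARSA tabular agent play the Iterated Prisoner's Dilemma.
    import random

    C, D = 0, 1                                  # actions: cooperate, defect
    PAYOFF = {(C, C): (3, 3), (C, D): (0, 5),    # standard PD payoff matrix
              (D, C): (5, 0), (D, D): (1, 1)}

    ALPHA, GAMMA = 0.9, 0.1                      # high learning rate, low discount factor
    EPISODES, ROUNDS = 200, 100

    def epsilon(t):                              # simple decaying exploration schedule
        return max(0.01, 1.0 - t / EPISODES)

    def act(q, state, eps):                      # epsilon-greedy action selection
        if random.random() < eps:
            return random.choice((C, D))
        return C if q[(state, C)] >= q[(state, D)] else D

    # State = previous joint action; values initialized to zero.
    q1 = {((a, b), x): 0.0 for a in (C, D) for b in (C, D) for x in (C, D)}
    q2 = dict(q1)

    for ep in range(EPISODES):
        state = (C, C)                           # assume a cooperative opening state
        eps = epsilon(ep)
        a2_next = act(q2, state, eps)            # SARSA needs the on-policy next action
        for _ in range(ROUNDS):
            a1, a2 = act(q1, state, eps), a2_next
            r1, r2 = PAYOFF[(a1, a2)]
            next_state = (a1, a2)
            # Q-learning update (off-policy, bootstraps on the greedy next value)
            best_next = max(q1[(next_state, C)], q1[(next_state, D)])
            q1[(state, a1)] += ALPHA * (r1 + GAMMA * best_next - q1[(state, a1)])
            # SARSA update (on-policy, bootstraps on the action actually chosen next)
            a2_next = act(q2, next_state, eps)
            q2[(state, a2)] += ALPHA * (r2 + GAMMA * q2[(next_state, a2_next)] - q2[(state, a2)])
            state = next_state

Inspecting the greedy actions of q1 and q2 after training shows whether the pair settles into mutual cooperation or mutual defection under a given hyperparameter choice; the paper's transformed internal reinforcement signals would enter where r1 is used in the Q-learning update.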
Pages: 639-653
Page count: 15