Reinforcement learning explains various conditional cooperation

Cited by: 29
Authors
Geng, Yini [1 ,2 ]
Liu, Yifan [3 ,4 ]
Lu, Yikang [3 ]
Shen, Chen [3 ]
Shi, Lei [3 ,4 ,5 ]
Affiliations
[1] Hunan Normal Univ, Sch Math & Stat, MOE LCSM, Changsha 410081, Peoples R China
[2] Hunan Normal Univ, Coll Hunan Prov, Key Lab Appl Stat & Data Sci, Changsha 410081, Peoples R China
[3] Yunnan Univ Finance & Econ, Sch Stat & Math, Kunming 650221, Peoples R China
[4] Dongbei Univ Finance & Econ, Sch Econ, Dalian 116025, Peoples R China
[5] Shanghai Lixin Univ Accounting & Finance, Interdisciplinary Res Inst Data Sci, Shanghai 201209, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Evolutionary games; Q-learning; Conditional cooperation; PRISONERS-DILEMMA GAME; EVOLUTIONARY GAMES; PUBLIC-GOODS; DYNAMICS; BEHAVIOR; SYSTEM;
DOI
10.1016/j.amc.2022.127182
CLC Number
O29 [Applied Mathematics];
Discipline Code
070104;
Abstract
Recent studies show that, for a well-mixed population or a homogeneous network, evolutionary outcomes are invariant under different update rules. In this paper, we investigate how the Q-learning algorithm, one of the reinforcement learning methods, affects evolutionary outcomes on a square lattice. Specifically, we consider a mixed strategy-update rule in which some agents adopt the Q-learning method to update their strategies; the proportion of these agents (denoted as Artificial Intelligence, AI) is controlled by a single parameter rho. The remaining agents, whose proportion is 1 - rho, update their strategies via the Fermi function. Through extensive numerical simulations, we find that the mixed strategy-update rule can facilitate cooperation compared with the pure Fermi-function-based update rule. Moreover, if the proportion of AI is moderate, cooperators in the population exhibit conditional behavior and moody conditional behavior. However, if the whole population adopts either the pure Fermi-function-based rule or the pure Q-learning-based rule, cooperators exhibit hump-shaped conditional behavior. Our results provide new insight into the evolution of cooperation from the perspective of AI. (c) 2022 Elsevier Inc. All rights reserved.
Pages: 9
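
To make the mixed update rule concrete, the following is a minimal Python sketch of the scheme the abstract describes: a fraction rho of agents on a periodic square lattice updates by Q-learning, while the remaining agents imitate a random neighbor with the standard Fermi probability W = 1 / (1 + exp((P_self - P_neighbor) / K)). The payoff matrix (a weak prisoner's dilemma), the Q-learning state and reward definitions, and all numeric values below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Sketch of the mixed update rule on a periodic square lattice playing a
# weak prisoner's dilemma (mutual cooperation pays 1, defecting against a
# cooperator pays B, all other payoffs 0). Assumption: these payoffs, the
# Q-learning state/reward definitions, and all constants are illustrative.

L = 50                                # lattice side length
RHO = 0.5                             # fraction of Q-learning ("AI") agents
B, K = 1.1, 0.1                       # temptation payoff, Fermi noise
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05    # Q-learning rate, discount, exploration

rng = np.random.default_rng(0)
strategy = rng.integers(0, 2, size=(L, L))   # 1 = cooperate, 0 = defect
is_ai = rng.random((L, L)) < RHO             # which agents update via Q-learning
Q = np.zeros((L, L, 2, 2))                   # Q[i, j, state, action]; state = own strategy

def neighbors(i, j):
    # Von Neumann neighborhood with periodic boundaries.
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(i, j):
    # Accumulated payoff of agent (i, j) against its four neighbors.
    s, total = strategy[i, j], 0.0
    for ni, nj in neighbors(i, j):
        if strategy[ni, nj] == 1:             # neighbor cooperates
            total += 1.0 if s == 1 else B     # mutual cooperation vs. exploitation
    return total

def step():
    global strategy
    new = strategy.copy()
    for i in range(L):
        for j in range(L):
            if is_ai[i, j]:
                # Q-learning: state = current strategy, action = next strategy,
                # reward = payoff just collected (one plausible formulation).
                s = strategy[i, j]
                a = rng.integers(0, 2) if rng.random() < EPS else int(np.argmax(Q[i, j, s]))
                r = payoff(i, j)
                Q[i, j, s, a] += ALPHA * (r + GAMMA * Q[i, j, a].max() - Q[i, j, s, a])
                new[i, j] = a
            else:
                # Fermi imitation: copy a random neighbor with probability
                # 1 / (1 + exp((P_self - P_neighbor) / K)).
                ni, nj = neighbors(i, j)[rng.integers(0, 4)]
                if rng.random() < 1.0 / (1.0 + np.exp((payoff(i, j) - payoff(ni, nj)) / K)):
                    new[i, j] = strategy[ni, nj]
    strategy = new

for _ in range(200):
    step()
print("cooperation fraction:", strategy.mean())
```

Sweeping RHO from 0 to 1 in this sketch reproduces the comparison the abstract alludes to: the resulting cooperation fraction can be contrasted across the pure Fermi regime (RHO = 0), the pure Q-learning regime (RHO = 1), and the mixed regime in between.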