Reinforcement learning explains various conditional cooperation

Cited: 21
Authors
Geng, Yini [1 ,2 ]
Liu, Yifan [3 ,4 ]
Lu, Yikang [3 ]
Shen, Chen [3 ]
Shi, Lei [3 ,4 ,5 ]
Affiliations
[1] Hunan Normal Univ, Sch Math & Stat, MOE LCSM, Changsha 410081, Peoples R China
[2] Hunan Normal Univ, Coll Hunan Prov, Key Lab Appl Stat & Data Sci, Changsha 410081, Peoples R China
[3] Yunnan Univ Finance & Econ, Sch Stat & Math, Kunming 650221, Peoples R China
[4] Dongbei Univ Finance & Econ, Sch Econ, Dalian 116025, Peoples R China
[5] Shanghai Lixin Univ Accounting & Finance, Interdisciplinary Res Inst Data Sci, Shanghai 201209, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Evolutionary games; Q-learning; Conditional cooperation; Prisoner's dilemma game; Public goods; Dynamics; Behavior; System
DOI
10.1016/j.amc.2022.127182
Chinese Library Classification
O29 [Applied Mathematics]
Discipline Code
070104
Abstract
Recent studies show that different update rules leave the evolutionary outcomes invariant in a well-mixed population or on a homogeneous network. In this paper, we investigate how the Q-learning algorithm, one of the reinforcement learning methods, affects the evolutionary outcomes on a square lattice. Specifically, we consider a mixed strategy update rule in which a fraction of agents adopt the Q-learning method to update their strategies; the proportion of these agents (referred to as Artificial Intelligence (AI) agents) is controlled by a simple parameter rho. The remaining agents, whose proportion is 1 - rho, adopt the Fermi function to update their strategies. Through extensive numerical simulations, we find that the mixed strategy update rule facilitates cooperation compared with the pure Fermi-function-based update rule. Moreover, if the proportion of AI agents is moderate, cooperators in the whole population exhibit conditional behavior and moody conditional behavior. However, if the whole population adopts either the pure Fermi-function-based or the pure Q-learning-based strategy update rule, then cooperators exhibit hump-shaped conditional behavior. Our results provide new insight into the evolution of cooperation from the perspective of AI. (c) 2022 Elsevier Inc. All rights reserved.
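The sketch below is a minimal illustration of the mixed update rule described in the abstract: a fraction rho of agents ("AI" agents) update their strategies by Q-learning, while the rest imitate a random neighbor through the Fermi function. The lattice size, the temptation payoff b, the Fermi noise K, the Q-learning parameters, and the choice of an agent's own previous action as the Q-learning state are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch: spatial prisoner's dilemma with a mixed update rule.
# A fraction RHO of agents update via Q-learning; the rest via the Fermi rule.
# All parameter values below are assumptions for illustration only.
import numpy as np

L = 20            # lattice side length (L x L agents, periodic boundaries)
RHO = 0.5         # fraction of Q-learning ("AI") agents
B = 1.2           # temptation payoff of the weak prisoner's dilemma (assumed)
K = 0.1           # Fermi noise parameter (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.02   # Q-learning rate, discount, exploration (assumed)

rng = np.random.default_rng(0)
strategy = rng.integers(0, 2, size=(L, L))   # 1 = cooperate, 0 = defect
is_ai = rng.random((L, L)) < RHO             # which agents use Q-learning
q_table = np.zeros((L, L, 2, 2))             # per-agent Q[state, action]

def neighbors(i, j):
    """Von Neumann neighbours on a periodic lattice."""
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(i, j):
    """Accumulated payoff of agent (i, j) against its four neighbours."""
    s, total = strategy[i, j], 0.0
    for ni, nj in neighbors(i, j):
        sn = strategy[ni, nj]
        if s == 1 and sn == 1:
            total += 1.0          # mutual cooperation: R = 1
        elif s == 0 and sn == 1:
            total += B            # defection against a cooperator: T = b
        # D-D gives P = 0 and C-D gives S = 0 in the weak dilemma
    return total

def step():
    """One Monte Carlo step: each agent is updated once on average."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        if is_ai[i, j]:
            # Q-learning update; the state is the agent's own previous action (assumed encoding)
            state = strategy[i, j]
            if rng.random() < EPS:
                action = int(rng.integers(0, 2))                 # explore
            else:
                action = int(np.argmax(q_table[i, j, state]))    # exploit
            strategy[i, j] = action
            reward = payoff(i, j)
            q_table[i, j, state, action] += ALPHA * (
                reward + GAMMA * q_table[i, j, action].max()
                - q_table[i, j, state, action]
            )
        else:
            # Fermi imitation: copy a random neighbour with a payoff-dependent probability
            ni, nj = neighbors(i, j)[rng.integers(0, 4)]
            p_i, p_n = payoff(i, j), payoff(ni, nj)
            if rng.random() < 1.0 / (1.0 + np.exp((p_i - p_n) / K)):
                strategy[i, j] = strategy[ni, nj]

for t in range(200):
    step()
print("cooperation level:", strategy.mean())
```

Sweeping rho from 0 to 1 in such a sketch is one way to probe the abstract's claim that a moderate share of Q-learning agents changes the shape of conditional cooperation relative to the pure Fermi or pure Q-learning population.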
Pages: 9