Social Concepts Simplify Complex Reinforcement Learning

Cited by: 2
Authors
Hackel, Leor M. [1 ]
Kalkstein, David A. [2 ]
Affiliations
[1] Univ Southern Calif, Dept Psychol, Los Angeles, CA 90007 USA
[2] Stanford Univ, Dept Psychol, Stanford, CA USA
Keywords
concepts; generalization; reinforcement learning; relational reasoning; rewards; social cognition; open data; open materials; preregistered; RELATIONAL LANGUAGE; INFERENCES; KNOWLEDGE; SELECTION; CHOICES;
DOI
10.1177/09567976231180587
Chinese Library Classification (CLC)
B84 [Psychology]
Discipline codes
04; 0402
Abstract
Humans often generalize rewarding experiences across abstract social roles. Theories of reward learning suggest that people generalize through model-based learning, but such learning is cognitively costly. Why do people seem to generalize across social roles with ease? Humans are social experts who easily recognize social roles that reflect familiar semantic concepts (e.g., "helper" or "teacher"). People may associate these roles with model-free reward (e.g., learning that helpers are rewarding), allowing them to generalize easily (e.g., interacting with novel individuals identified as helpers). In four online experiments with U.S. adults (N = 577), we found evidence that social concepts ease complex learning (people generalize more and more quickly) and that people attach reward directly to abstract roles (they generalize even when roles are unrelated to task structure). These results demonstrate how familiar concepts allow complex behavior to emerge from simple strategies, highlighting social interaction as a prototype for studying cognitive ease in the face of environmental complexity.
Pages: 968-983
Page count: 16
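
The mechanism the abstract describes, attaching model-free reward value directly to an abstract role concept rather than to individual identities, can be illustrated with a toy simulation. The sketch below is not the authors' task or analysis code; the partner names, reward probabilities, and learning rate are illustrative assumptions. It contrasts a learner that stores value per individual with one that stores value per role concept, showing why only the latter generalizes immediately to a novel partner labeled a "helper":

import random

random.seed(1)

ALPHA = 0.3                        # learning rate (illustrative assumption)
REWARD_PROB = {"helper": 0.8,      # hypothetical reward probabilities per role
               "hinderer": 0.2}


def update(values, key, reward, alpha=ALPHA):
    """Model-free delta-rule update: V <- V + alpha * (reward - V)."""
    v = values.get(key, 0.0)
    values[key] = v + alpha * (reward - v)


# Familiar partners encountered during training (name, role) -- made-up examples.
training_partners = [("Ana", "helper"), ("Ben", "helper"),
                     ("Cal", "hinderer"), ("Dia", "hinderer")]

identity_values = {}   # value attached to each individual identity
role_values = {}       # value attached to each abstract role concept

for _ in range(200):
    name, role = random.choice(training_partners)
    reward = 1.0 if random.random() < REWARD_PROB[role] else 0.0
    update(identity_values, name, reward)   # learner keyed by individuals
    update(role_values, role, reward)       # learner keyed by role concepts

# A novel individual identified as a "helper": the identity-keyed learner has
# no stored value for them, but the concept-keyed learner transfers its value.
novel_name, novel_role = "Eve", "helper"
print("identity-based value for novel partner:", identity_values.get(novel_name, 0.0))
print("concept-based value for novel partner:", round(role_values[novel_role], 2))

Under these assumptions, the concept-keyed learner's value for "helper" sits near the helpers' reward rate, so a first encounter with the novel partner inherits that value at no extra cost, whereas the identity-keyed learner must start from zero.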