A motivational-based learning model for mobile robots

Cited: 0
Authors
Berto, Leticia [1 ,4 ]
Costa, Paula [2 ,4 ]
Simoes, Alexandre [3 ,4 ]
Gudwin, Ricardo [2 ,4 ]
Colombini, Esther [1 ,4 ]
Affiliations
[1] Univ Estadual Campinas, Inst Comp, Ave Albert Einstein,1251 Cidade Univ, BR-13083852 Campinas, SP, Brazil
[2] Univ Estadual Campinas, Sch Elect & Comp Engn, Ave Albert Einstein,400 Cidade Univ, BR-13083852 Campinas, SP, Brazil
[3] Sao Paulo State Univ, Dept Control & Automat Engn, Ave Tres de Marco,511 Alto Boa Vista, BR-18087180 Sorocaba, SP, Brazil
[4] Artificial Intelligence & Cognit Architectures Hb, Ave Albert Einstein,1251 Cidade Univ, BR-13083852 Campinas, SP, Brazil
Funding
São Paulo Research Foundation (FAPESP), Brazil
Keywords
Motivation; Action selection and planning; Models of internal states; Internal reinforcers; DECISION-MAKING; PLEASURE; EMOTIONS; REWARD;
DOI
10.1016/j.cogsys.2024.101278
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Humans have needs that motivate their behavior according to intensity and context. However, we also form preferences associated with each action's perceived pleasure, and these preferences are susceptible to change over time. This makes decision-making more complex, requiring learning to balance needs and preferences according to the context. To understand how this process works and to enable the development of robots with a motivational-based learning model, we computationally model a motivation theory proposed by Hull. In this model, the agent (an abstraction of a mobile robot) is motivated to keep itself in a state of homeostasis. We introduced hedonic dimensions to explore the impact of preferences on decision-making and employed reinforcement learning to train our motivation-based agents. In our experiments, we deploy three agents with distinct energy decay rates, simulating different metabolic rates, within two diverse environments. We investigate the influence of these conditions on their strategies, movement patterns, and overall behavior. The findings reveal that agents learn more effective strategies when the environment allows for choices that align with their metabolic requirements. Furthermore, we observe that incorporating pleasure as a component of the motivational mechanism affects behavior learning, particularly for agents with regular metabolisms, depending on the environment. Our study also reveals that, when confronted with survival challenges, agents prioritize immediate needs over pleasure and equilibrium. These insights shed light on how robotic agents can adapt and make informed decisions in demanding scenarios, demonstrating the intricate interplay between motivation, pleasure, and environmental context in autonomous systems.
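The homeostatic mechanism the abstract describes (a Hull-style drive that the agent is rewarded for reducing, plus a hedonic bonus) can be sketched in a few lines. This is a minimal, hypothetical illustration of drive-reduction reward shaping, not the authors' implementation; all function names, set points, and weights below are assumptions.

```python
def drive(state, setpoints):
    """Total drive: how far the internal variables (e.g. energy)
    are from their homeostatic set points."""
    return sum(abs(s - p) for s, p in zip(state, setpoints))

def homeostatic_reward(prev_state, next_state, setpoints,
                       pleasure=0.0, w_pleasure=0.1):
    """Reward = reduction in drive achieved by the transition,
    plus a weighted hedonic (pleasure) term."""
    reduction = drive(prev_state, setpoints) - drive(next_state, setpoints)
    return reduction + w_pleasure * pleasure

# Example: eating raises energy from 0.4 toward the set point 0.8,
# so drive falls from 0.4 to 0.1, and the pleasurable action adds a bonus.
r = homeostatic_reward(prev_state=[0.4], next_state=[0.7],
                       setpoints=[0.8], pleasure=1.0)
# r is approximately 0.4 (drive reduction 0.3 + pleasure bonus 0.1)
```

A reward of this shape is what a standard reinforcement-learning loop (e.g. Q-learning over the robot's actions) would maximize; raising `w_pleasure` makes the agent trade homeostasis for preferred actions, which is the tension the paper's experiments probe.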
Pages: 18
Related Papers (50 total)
  • [1] Mobile game-based learning with a mobile app: Motivational effects and learning performance
    Huang Y.-L.
    Chang D.-F.
    Wu B.
    Journal of Advanced Computational Intelligence and Intelligent Informatics, 2017, 21 (06) : 963 - 970
  • [2] Iterative learning-based model predictive control for mobile robots in space applications
    Baldauf, Niklas
    Turnwald, Alen
    2023 27TH INTERNATIONAL CONFERENCE ON METHODS AND MODELS IN AUTOMATION AND ROBOTICS, MMAR, 2023, : 434 - 439
  • [3] Validating a Motivational Process Model for Mobile-Assisted Language Learning
    Tseng, Wen-Ta
    Cheng, Hsing-Fu
    Hsiao, Tsung-Yuan
    ENGLISH TEACHING AND LEARNING, 2019, 43 (04): : 369 - 388
  • [4] Simultaneous learning of motion and sensor model parameters for mobile robots
    Yap, Teddy N., Jr.
    Shelton, Christian R.
    2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-9, 2008, : 2091 - 2097
  • [5] Mobile learning with a mobile game:: design and motivational effects
    Schwabe, G
    Göth, C
    JOURNAL OF COMPUTER ASSISTED LEARNING, 2005, 21 (03) : 204 - 216
  • [6] Learning for intelligent mobile robots
    Hall, EL
    Liao, X
    Ali, SMA
    INTELLIGENT ROBOTS AND COMPUTER VISION XXI: ALGORITHMS, TECHNIQUES, AND ACTIVE VISION, 2003, 5267 : 12 - 25
  • [7] Formation Control of Multiple Mobile Robots Based on Iterative Learning Distributed Model Predictive Control
    Shang, Wei
    Liu, Meng
    Zhang, Daode
    Zhu, Hanzong
    IEEE ACCESS, 2023, 11 : 120034 - 120048
  • [8] Model-based sonar localisation for mobile robots
    Triggs, Bill
    Robotics and Autonomous Systems, 1994, 12 (3-4) : 173 - 186
  • [9] Safe Reinforcement Learning-Based Motion Planning for Functional Mobile Robots Suffering Uncontrollable Mobile Robots
    Cao, Huanhui
    Xiong, Hao
    Zeng, Weifeng
    Jiang, Hantao
    Cai, Zhiyuan
    Hu, Liang
    Zhang, Lin
    Lu, Wenjie
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (05) : 4346 - 4363
  • [10] Motivational strategies in a mobile inquiry-based language learning setting
    Chang, Ching
    Chang, Chih-Kai
    Shih, Ju-Ling
    SYSTEM, 2016, 59 : 100 - 115