A motivational-based learning model for mobile robots

Cited by: 0
Authors
Berto, Leticia [1 ,4 ]
Costa, Paula [2 ,4 ]
Simoes, Alexandre [3 ,4 ]
Gudwin, Ricardo [2 ,4 ]
Colombini, Esther [1 ,4 ]
Affiliations
[1] Univ Estadual Campinas, Inst Comp, Ave Albert Einstein,1251 Cidade Univ, BR-13083852 Campinas, SP, Brazil
[2] Univ Estadual Campinas, Sch Elect & Comp Engn, Ave Albert Einstein,400 Cidade Univ, BR-13083852 Campinas, SP, Brazil
[3] Sao Paulo State Univ, Dept Control & Automat Engn, Ave Tres de Marco,511 Alto Boa Vista, BR-18087180 Sorocaba, SP, Brazil
[4] Artificial Intelligence & Cognit Architectures Hb, Ave Albert Einstein,1251 Cidade Univ, BR-13083852 Campinas, SP, Brazil
Source
Funding
Sao Paulo Research Foundation (FAPESP), Brazil;
Keywords
Motivation; Action selection and planning; Models of internal states; Internal reinforcers; DECISION-MAKING; PLEASURE; EMOTIONS; REWARD;
DOI
10.1016/j.cogsys.2024.101278
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Humans have needs that motivate their behavior according to intensity and context. However, we also create preferences associated with each action's perceived pleasure, which can change over time. This makes decision-making more complex, requiring learning to balance needs and preferences according to the context. To understand how this process works and enable the development of robots with a motivational-based learning model, we computationally model a motivation theory proposed by Hull. In this model, the agent (an abstraction of a mobile robot) is motivated to keep itself in a state of homeostasis. We introduce hedonic dimensions to explore the impact of preferences on decision-making and employ reinforcement learning to train our motivation-based agents. In our experiments, we deploy three agents with distinct energy decay rates, simulating different metabolic rates, within two diverse environments. We investigate the influence of these conditions on their strategies, movement patterns, and overall behavior. The findings reveal that agents excel at learning more effective strategies when the environment allows for choices that align with their metabolic requirements. Furthermore, we observe that incorporating pleasure as a component of the motivational mechanism affects behavior learning, particularly for agents with regular metabolisms, depending on the environment. Our study also shows that, when confronted with survival challenges, agents prioritize immediate needs over pleasure and equilibrium. These insights shed light on how robotic agents can adapt and make informed decisions in demanding scenarios, demonstrating the intricate interplay between motivation, pleasure, and environmental context in autonomous systems.
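To make the mechanism described in the abstract concrete, the minimal Python sketch below illustrates one way a Hull-style drive-reduction reward could be combined with a hedonic (pleasure) term for agents whose internal energy decays at different metabolic rates. The setpoint, hedonic weight, decay values, and function names are illustrative assumptions and are not taken from the paper.

    import random

    # Illustrative sketch (not the authors' implementation): a Hull-style
    # drive-reduction reward with an added hedonic (pleasure) term, for agents
    # whose internal energy decays at different metabolic rates. All names,
    # setpoints, and weights are assumptions made for illustration.

    SETPOINT = 1.0        # assumed homeostatic target for the internal energy variable
    HEDONIC_WEIGHT = 0.3  # assumed weight of perceived pleasure in the reward

    def drive(energy: float) -> float:
        """Drive grows with the deviation of energy from its homeostatic setpoint."""
        return abs(SETPOINT - energy)

    def reward(prev_energy: float, energy: float, pleasure: float) -> float:
        """Reward = drive reduction (progress toward homeostasis) + weighted pleasure."""
        return (drive(prev_energy) - drive(energy)) + HEDONIC_WEIGHT * pleasure

    def step(energy: float, decay_rate: float, intake: float) -> float:
        """One step: metabolism drains energy; the chosen action may replenish it."""
        return max(0.0, min(SETPOINT, energy - decay_rate + intake))

    if __name__ == "__main__":
        # Three agents with distinct energy decay rates, echoing the abstract's setup.
        for decay in (0.01, 0.05, 0.10):
            e_prev = 0.8
            e = step(e_prev, decay, intake=random.choice([0.0, 0.1]))
            print(f"decay={decay:.2f} energy={e:.2f} "
                  f"reward={reward(e_prev, e, pleasure=random.random()):.3f}")

A scalar reward of this shape could be plugged into any standard reinforcement-learning algorithm; a faster decay rate makes the drive term dominate the hedonic term, which matches the abstract's observation that agents under survival pressure prioritize needs over pleasure.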
Pages: 18
Related papers
50 records in total
  • [21] Pre-service teachers' intention to adopt mobile learning: A motivational model
    Baydas, Ozlem
    Yilmaz, Rabia M.
    BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, 2018, 49 (01) : 137 - 152
  • [22] Planning to Learn: Integrating Model Learning into a Trajectory Planner for Mobile Robots
    Greytak, Matthew
    Hover, Franz
ICIA: 2009 INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, VOLS 1-3, 2009: 13 - 18
  • [23] Embedded Learning-based Model Predictive Control for Mobile Robots using Gaussian Process Regression
    Janssen, N. H. J.
    Kools, L.
    Antunes, D. J.
2020 AMERICAN CONTROL CONFERENCE (ACC), 2020: 1124 - 1130
  • [24] MODEL-BASED AND NON-MODEL-BASED VELOCITY ESTIMATORS FOR MOBILE ROBOTS
    Matsuo, Takami
    Wada, Shuhei
    Suemitsu, Haruo
INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2008, 4 (12): 3123 - 3133
  • [25] Movement model of multiple mobile robots based on servo system
    Niitsuma, M
    Hashimoto, H
    Kimura, Y
    Ishijima, S
DISTRIBUTED AUTONOMOUS ROBOTIC SYSTEMS 5, 2002: 247 - 256
  • [26] Learning to understand tasks for mobile robots
    ten Hagen, SHG
    Kröse, BJA
2004 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN & CYBERNETICS, VOLS 1-7, 2004: 2942 - 2947
  • [27] Path Planning for Mobile Robots Based on a Modified Potential Model
    Jia, Qian
    Wang, Xingsong
2009 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION, VOLS 1-7, CONFERENCE PROCEEDINGS, 2009: 4946 - 4951
  • [28] Model-Based Fault Diagnosis Techniques for Mobile Robots
    Kuestenmacher, Anastassia
    Ploeger, Paul G.
IFAC PAPERSONLINE, 2016, 49 (15): 50 - 56
  • [29] Rapid concept learning for mobile robots
    Mahadevan, S
    Theocharous, G
    Khaleeli, N
    MACHINE LEARNING, 1998, 31 (1-3) : 7 - 27
  • [30] Advances in learning for intelligent mobile robots
    Hall, EL
    Ghaffari, M
    Liao, XS
    Ali, SMA
    INTELLIGENT ROBOTS AND COMPUTER VISION XXII: ALGORITHMS, TECHNIQUES, AND ACTIVE VISION, 2004, 5608 : 13 - 24