Learning task-specific sensing, control and memory policies

Cited by: 0
Authors
Rajendran, S [1]
Huber, M [1]
Affiliations
[1] Univ Texas, Dept Comp Sci & Engn, Arlington, TX 76019 USA
Keywords
focus of attention; event memory; reinforcement learning
DOI
10.1142/S0218213005002119
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
AI agents and robots that can adapt and handle multiple tasks in real time promise to be powerful tools. To address the control challenges involved in such systems, the underlying control approach has to take the important sensory information into account. Modern sensors, however, can generate huge amounts of data, rendering the processing and representation of all sensor data in real time computationally intractable. This issue can be addressed by developing task-specific focus of attention strategies that limit the sensory data processed at any point in time to the data relevant for the given task. On its own, however, this mechanism is not adequate for solving complex tasks, since the robot also has to maintain selected pieces of past information. This requires that AI agents and robots be able to remember significant past events needed for task completion. This paper presents an approach that treats focus of attention as the problem of selecting, at any given point in time, the controller and feature pairs to be processed so as to optimize system performance. This approach is further extended by incorporating short-term memory and a learned memory management policy. The result is a system that learns task-specific control, sensing, and memory policies adaptable to real-world situations using feedback from the world in a reinforcement learning framework. The approach is illustrated using table cleaning, sorting, stacking, and copying tasks in the blocks world domain.
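The abstract describes focusing attention by selecting controller and feature pairs and augmenting them with learned memory operations inside a reinforcement learning loop. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration of that idea: tabular Q-learning over an action space of (controller, feature, memory-op) triples in a toy two-state task where the agent must store an observation before a place action can succeed. All controller, feature, and memory-op names, and the toy transition function, are invented for illustration.

```python
# Minimal sketch (NOT the authors' implementation): tabular Q-learning over an
# action space of (controller, feature, memory-op) triples, to illustrate how
# control, sensing, and memory policies can be learned jointly from reward.
# All controller, feature, and memory-op names and the toy task are hypothetical.
import random
from collections import defaultdict

CONTROLLERS = ["pick", "place"]           # hypothetical low-level controllers
FEATURES = ["top_block", "table_area"]    # hypothetical sensing features
MEMORY_OPS = ["ignore", "store"]          # hypothetical memory-management ops

# Each action commits the agent to one controller, one feature, and one memory op.
ACTIONS = [(c, f, m) for c in CONTROLLERS for f in FEATURES for m in MEMORY_OPS]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)                    # Q[(state, action)] -> estimated return


def choose_action(state):
    """Epsilon-greedy selection over (controller, feature, memory-op) triples."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def toy_step(state, action):
    """Toy two-state task: the agent succeeds only if it has already stored
    the 'top_block' observation on an earlier step and now runs 'place'."""
    controller, feature, mem_op = action
    done = state == "remembered_top" and controller == "place"
    if mem_op == "store" and feature == "top_block":
        next_state = "remembered_top"     # memory contents are part of the state
    else:
        next_state = state
    reward = 1.0 if done else -0.01       # small step cost, bonus on completion
    return next_state, reward, done


def train(episodes=2000):
    for _ in range(episodes):
        state, done = "start", False
        while not done:
            action = choose_action(state)
            next_state, reward, done = toy_step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            target = reward + (0.0 if done else GAMMA * best_next)
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state = next_state


if __name__ == "__main__":
    train()
    print("Greedy action in 'start':", max(ACTIONS, key=lambda a: Q[("start", a)]))
```

Run as a script, the sketch converges to a policy that first selects a "store" memory operation on the "top_block" feature and then the "place" controller, mirroring in highly simplified form the paper's idea of learning which features to attend to and which events to remember. In the actual system, the states, controller and feature sets, and rewards come from the robot's tasks rather than from this toy transition function.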
Pages: 303 - 327
Number of pages: 25