Human Intention Recognition using Markov Decision Processes

Cited by: 0
Authors
Lin, Hsien-I [1 ]
Chen, Wei-Kai [1 ]
Affiliations
[1] Natl Taipei Univ Technol, Grad Inst Automat Technol, Taipei, Taiwan
Source
2014 CACS INTERNATIONAL AUTOMATIC CONTROL CONFERENCE (CACS 2014) | 2014
Keywords
Human intention recognition; human-robot interaction (HRI); Markov decision processes (MDPs); frequency-based reward function;
DOI
Not available
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Human intention recognition in human-robot interaction (HRI) has been a popular topic. This paper presents a human-intention recognition framework based on Markov decision processes (MDPs). The framework is composed of an object layer and a motion layer, which capture object information and human hand gestures, respectively. The information extracted from both layers is used to represent the states of the MDP. To learn the human intention behind a task, a frequency-based reward function for the MDP is proposed; it drives the MDP to converge to a policy that reflects how frequently each task has been performed. In our experiments, four tasks of pouring water and making coffee, trained with different numbers of trials, were used to validate the proposed framework. With the frequency-based reward function, the plausible intentional actions in certain states were distinguishable from those obtained with the default reward function.
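The abstract's key ingredient is an MDP whose states combine object-layer and motion-layer observations and whose reward is derived from how often each task was demonstrated. The Python sketch below illustrates one possible reading of that idea; the state and action sets, the demonstration data, the identity transition model, and the value-iteration solver are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's code): states pair object information with a
# hand-gesture label; the reward of each (state, action) is its relative
# frequency in demonstration trials, so more frequent tasks dominate the policy.
from collections import defaultdict
from itertools import product

objects = ["cup", "kettle", "coffee_jar"]          # object layer (assumed)
gestures = ["reach", "grasp", "pour", "scoop"]     # motion layer (assumed)
states = list(product(objects, gestures))          # state = (object, gesture)
actions = ["pour_water", "make_coffee", "idle"]    # task-level intentions (assumed)

# Demonstration trials: sequences of (state, intended task) pairs (toy data).
demos = [
    [(("kettle", "grasp"), "pour_water"), (("cup", "pour"), "pour_water")],
    [(("coffee_jar", "scoop"), "make_coffee"), (("cup", "pour"), "make_coffee")],
    [(("kettle", "grasp"), "pour_water"), (("cup", "pour"), "pour_water")],
]

# Frequency-based reward: count how often each action was demonstrated in each
# state and normalize the counts into rewards.
counts = defaultdict(float)
for trial in demos:
    for s, a in trial:
        counts[(s, a)] += 1.0
total = sum(counts.values())
reward = {(s, a): counts[(s, a)] / total for s in states for a in actions}

# Toy transition model: the observed state is unchanged by the chosen action;
# a real system would estimate P(s'|s,a) from the object/motion layers.
def transition(s, a):
    return {s: 1.0}

# Standard value iteration to extract the intention policy.
gamma, theta = 0.9, 1e-6
V = {s: 0.0 for s in states}
while True:
    delta = 0.0
    for s in states:
        q = [reward[(s, a)] + gamma * sum(p * V[s2] for s2, p in transition(s, a).items())
             for a in actions]
        best = max(q)
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:
        break

policy = {s: max(actions, key=lambda a: reward[(s, a)] +
                 gamma * sum(p * V[s2] for s2, p in transition(s, a).items()))
          for s in states}
print(policy[("cup", "pour")])   # intention inferred when pouring at the cup

Because the reward scales with demonstration frequency, tasks performed more often win the greedy policy in ambiguous states, which matches the abstract's claim that the learned policy corresponds to how frequently each task has been performed.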
Pages: 340-343
Page count: 4