Touch-text answer for human-robot interaction via supervised adversarial learning

Cited by: 4
Authors
Wang, Ya-Xin [1 ]
Meng, Qing-Hao [1 ]
Li, Yun-Kai [2 ]
Hou, Hui-Rang [1 ]
Affiliations
[1] Tianjin Univ, Inst Robot & Autonomous Syst, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Zhengzhou Univ, Sch Elect & Informat Engn, Zhengzhou 450001, Henan, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Human-robot interaction; Cross-modal retrieval; Adversarial learning; Touch gesture; Text;
DOI
10.1016/j.eswa.2023.122738
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In daily life, the touch modality plays an important role in conveying human intentions and emotions. To further improve touch-based human-robot interaction, robots need to infer human emotions from touch signals and respond accordingly. A major challenge is therefore to correlate the emotional state of touch gestures with text responses. At present, there is little research on touch-text dialogue, and robots cannot respond to human tactile gestures with appropriate text, so touch-text-based human-robot interaction is not yet possible. To solve these problems, we first built a touch-text dialogue (TTD) corpus based on six basic emotions through experiments, containing 1109 touch-text sample pairs. We then designed a supervised adversarial learning for touch-text answer (SATTA) model to realize touch-text-based human-robot interaction. The SATTA model correlates data from the text modality with data from the touch modality by reducing the emotion discrimination loss in the common space and the feature difference between sample pairs of the two modalities. At the same time, the feature representation is mapped into the label space to reduce the sample classification loss. Experiments on the TTD corpus validate the proposed method.
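As a rough illustration of the kind of objective the abstract describes (a shared embedding space trained with a supervised emotion-classification loss, a pairwise feature-alignment loss between touch/text sample pairs, and an adversarial modality discriminator), the following is a minimal PyTorch sketch. The module names, dimensions, and loss weights are assumptions for illustration only, not the authors' SATTA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality (touch or text features) into the shared space."""
    def __init__(self, in_dim, shared_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, shared_dim),
        )

    def forward(self, x):
        return self.net(x)

class SATTASketch(nn.Module):
    """Illustrative supervised adversarial cross-modal model (not the official SATTA code)."""
    def __init__(self, touch_dim, text_dim, shared_dim=128, num_emotions=6):
        super().__init__()
        self.touch_enc = ModalityEncoder(touch_dim, shared_dim)
        self.text_enc = ModalityEncoder(text_dim, shared_dim)
        # Label-space projection: predicts one of the six basic emotions.
        self.classifier = nn.Linear(shared_dim, num_emotions)
        # Modality discriminator: tries to tell touch embeddings from text embeddings.
        self.discriminator = nn.Sequential(
            nn.Linear(shared_dim, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def losses(self, touch_x, text_x, emotion_y, lam=0.1):
        zt = self.touch_enc(touch_x)   # touch embedding in the shared space
        zx = self.text_enc(text_x)     # text embedding in the shared space
        # 1) Supervised emotion-classification loss in the label space.
        cls_loss = (F.cross_entropy(self.classifier(zt), emotion_y)
                    + F.cross_entropy(self.classifier(zx), emotion_y))
        # 2) Pairwise feature-alignment loss between paired touch/text samples.
        pair_loss = F.mse_loss(zt, zx)
        # 3) Adversarial term: the discriminator predicts the modality label;
        #    the encoders would be trained to fool it (e.g. via gradient reversal
        #    or alternating min-max updates, omitted here for brevity).
        mod_logits = self.discriminator(torch.cat([zt, zx], dim=0))
        mod_labels = torch.cat([torch.zeros(len(zt)), torch.ones(len(zx))]).long()
        adv_loss = F.cross_entropy(mod_logits, mod_labels)
        return cls_loss + pair_loss + lam * adv_loss
```

A real training loop would alternate discriminator and encoder updates (or apply a gradient-reversal layer) so that the adversarial term actually drives the two modalities to become indistinguishable in the shared space.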
Pages: 10
Related Papers
50 records in total
  • [41] A Hybrid PAC Reinforcement Learning Algorithm for Human-Robot Interaction
    Zehfroosh, Ashkan
    Tanner, Herbert G.
    FRONTIERS IN ROBOTICS AND AI, 2022, 9
  • [42] Structured learning for spoken language understanding in human-robot interaction
    Bastianelli, Emanuele
    Castellucci, Giuseppe
    Croce, Danilo
    Basili, Roberto
    Nardi, Daniele
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2017, 36 (5-7) : 660 - 683
  • [43] A Taxonomy of Robot Autonomy for Human-Robot Interaction
    Kim, Stephanie
    Anthis, Jacy Reese
    Sebo, Sarah
    PROCEEDINGS OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024, 2024, : 381 - 393
  • [44] Robot Curiosity in Human-Robot Interaction (RCHRI)
    Ayub, Ali
    Scheunemann, Marcus
    Mavrogiannis, Christoforos
    Rhim, Jimin
    Dautenhahn, Kerstin
    Nehaniv, Chrystopher L.
    Hafner, Verena V.
    Polani, Daniel
    PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022, : 1231 - 1233
  • [45] Learning of Social Skills for Human-Robot Interaction by Hierarchical HMM and Interaction Dynamics
    Kim, Min Gu
    Lee, Sang Hyoung
    Suh, Il Hong
    2014 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION AND COMMUNICATIONS (ICEIC), 2014,
  • [46] Can Human-Inspired Learning Behaviour Facilitate Human-Robot Interaction?
    Carfi, Alessandro
    Villalobos, Jessica
    Coronado, Enrique
    Bruno, Barbara
    Mastrogiovanni, Fulvio
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2020, 12 (01) : 173 - 186
  • [47] Assessment of Cognitive skills via Human-robot Interaction and Cloud Computing
    Di Nuovo, Alessandro
    Varrasi, Simone
    Lucas, Alexandr
    Conti, Daniela
    McNamara, John
    Soranzo, Alessandro
    JOURNAL OF BIONIC ENGINEERING, 2019, 16 : 526 - 539
  • [48] Robot behavior adaptation for human-robot interaction based on policy gradient reinforcement learning
    Mitsunaga, N
    Smith, C
    Kanda, T
    Ishiguro, H
    Hagita, N
    2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 2005, : 1594 - 1601
  • [49] Human-Robot Interaction for Surgical Robot Based on Fuzzy Model Reference Learning Control
    Lin A.
    Gan M.
    Ge H.
    Tang Y.
    Xu H.
    Kuang S.
    Huang L.
    Sun L.
    Jiqiren/Robot, 2019, 41 (04) : 543 - 550
  • [50] Learning smiling expression for Human-robot Interaction: A Malaysia Case Study
    Zulkifli, Winal Zikril
    Shamsuddin, Syamimi
    Hwee, Lim Thiam
    Noor, Ahamad Zaki Mohamed
    PROCEEDINGS OF INNOVATIVE RESEARCH AND INDUSTRIAL DIALOGUE 2018 (IRID'18), 2019, : 48 - 49