Touch-text answer for human-robot interaction via supervised adversarial learning

Times Cited: 4
Authors
Wang, Ya-Xin [1 ]
Meng, Qing-Hao [1 ]
Li, Yun-Kai [2 ]
Hou, Hui-Rang [1 ]
Affiliations
[1] Tianjin Univ, Inst Robot & Autonomous Syst, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Zhengzhou Univ, Sch Elect & Informat Engn, Zhengzhou 450001, Henan, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Human-robot interaction; Cross-modal retrieval; Adversarial learning; Touch gesture; Text;
DOI
10.1016/j.eswa.2023.122738
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In daily life, the touch modality plays an important role in conveying human intentions and emotions. To further improve touch-based human-robot interaction, robots need to infer human emotions from touch signals and respond accordingly. A major challenge is therefore to correlate the emotional state of touch gestures with text responses. At present, there is little research on touch-text dialogue, and robots cannot respond to human tactile gestures with appropriate text, so touch-text-based human-robot interaction is not yet possible. To address these problems, we first built a touch-text dialogue (TTD) corpus covering six basic emotions through experiments; it contains 1109 touch-text sample pairs. We then designed a supervised adversarial learning for touch-text answer (SATTA) model to realize touch-text-based human-robot interaction. The SATTA model correlates text-modality data with touch-modality data by reducing both the emotion discrimination loss in the common representation space and the feature difference between paired samples of the two modalities. At the same time, the feature representations are mapped into the label space to reduce the sample classification loss. Experiments on the TTD corpus validate the proposed method.
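As a rough illustration of the loss structure the abstract describes, the following is a minimal sketch, assuming a PyTorch-style implementation with hypothetical network sizes and names (ModalityEncoder, satta_style_losses, etc.); it is not the authors' code. The adversarial term is shown here as a modality discriminator over the shared space, a common choice in supervised adversarial cross-modal retrieval, and the paper's exact formulation of the emotion discrimination loss may differ.

```python
# Hypothetical sketch (not the authors' implementation): three loss terms in the spirit
# of the abstract -- (1) a pairwise feature-difference term between matched touch/text
# samples in the shared space, (2) a classification loss after projecting shared features
# into the emotion label space, and (3) an adversarial discrimination term over the
# shared space. All dimensions and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

SHARED_DIM, NUM_EMOTIONS = 128, 6  # six basic emotions, as in the TTD corpus

class ModalityEncoder(nn.Module):
    """Maps one modality's input features into the shared (common) space."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, SHARED_DIM))
    def forward(self, x):
        return self.net(x)

touch_enc = ModalityEncoder(in_dim=64)     # assumed touch-gesture feature size
text_enc = ModalityEncoder(in_dim=300)     # assumed text feature size
label_proj = nn.Linear(SHARED_DIM, NUM_EMOTIONS)           # label-space projection
modality_disc = nn.Sequential(nn.Linear(SHARED_DIM, 64),   # adversarial discriminator
                              nn.ReLU(), nn.Linear(64, 2))

def satta_style_losses(touch_x, text_x, emotion_y):
    """Compute the three loss terms for a batch of paired touch/text samples."""
    zt, zx = touch_enc(touch_x), text_enc(text_x)

    # (1) pairwise feature difference: matched pairs should coincide in the shared space
    pair_loss = F.mse_loss(zt, zx)

    # (2) classification loss in the label space for both modalities
    cls_loss = (F.cross_entropy(label_proj(zt), emotion_y)
                + F.cross_entropy(label_proj(zx), emotion_y))

    # (3) adversarial term: the discriminator tries to tell the modalities apart, while
    # the encoders are trained against it (gradient reversal omitted for brevity)
    z = torch.cat([zt, zx])
    m = torch.cat([torch.zeros(len(zt), dtype=torch.long),
                   torch.ones(len(zx), dtype=torch.long)])
    adv_loss = F.cross_entropy(modality_disc(z), m)

    return pair_loss, cls_loss, adv_loss

# toy usage with random tensors standing in for real TTD features
touch_x, text_x = torch.randn(8, 64), torch.randn(8, 300)
emotion_y = torch.randint(0, NUM_EMOTIONS, (8,))
print(satta_style_losses(touch_x, text_x, emotion_y))
```

In a full training loop the encoders and the discriminator would be updated adversarially (for example via a gradient-reversal layer or alternating optimization), and the three terms would be weighted and summed into a single objective.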
Pages: 10
Related Papers
50 records in total
  • [21] Implementing social and affective touch to enhance user experience in human-robot interaction
    Cansev, M. Ege
    Miller, Alexandra J.
    Brown, Jeremy D.
    Beckerle, Philipp
    FRONTIERS IN ROBOTICS AND AI, 2024, 11
  • [22] How Do Communication Cues Change Impressions of Human-Robot Touch Interaction?
    Hirano, Takahiro
    Shiomi, Masahiro
    Iio, Takamasa
    Kimoto, Mitsuhiko
    Tanev, Ivan
    Shimohara, Katsunori
    Hagita, Norihiro
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2018, 10 (01) : 21 - 31
  • [23] Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction
    Zhou, Bo
    Altamirano, Carlos Andres Velez
    Zurian, Heber Cruz
    Atefi, Seyed Reza
    Billing, Erik
    Martinez, Fernando Seoane
    Lukowicz, Paul
    SENSORS, 2017, 17 (11)
  • [24] On Interaction Quality in Human-Robot Interaction
    Bensch, Suna
    Jevtic, Aleksandar
    Hellstrom, Thomas
    ICAART: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 1, 2017, : 182 - 189
  • [25] Human-Robot Proxemics: Physical and Psychological Distancing in Human-Robot Interaction
    Mumm, Jonathan
    Mutlu, Bilge
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2011), 2011, : 331 - 338
  • [26] Expressiveness in human-robot interaction
    Marti, Patrizia
    Giusti, Leonardo
    Pollini, Alessandro
    Rullo, Alessia
    INTERACTION DESIGN AND ARCHITECTURES, 2008, (5-6) : 93 - 98
  • [27] Communication in Human-Robot Interaction
    Bonarini, Andrea
    CURRENT ROBOTICS REPORTS, 2020, 1 (4) : 279 - 285
  • [28] The Effect of Multiple Robot Interaction on Human-Robot Interaction
    Yang, Jeong-Yean
    Kwon, Dong-Soo
    2012 9TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS AND AMBIENT INTELLIGENCE (URAI), 2012, : 30 - 33
  • [29] The Science of Human-Robot Interaction
    Kiesler, Sara
    Goodrich, Michael A.
    ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION, 2018, 7 (01)
  • [30] A developmental approach to robotic pointing via human-robot interaction
    Chao, Fei
    Wang, Zhengshuai
    Shang, Changjing
    Meng, Qinggang
    Jiang, Min
    Zhou, Changle
    Shen, Qiang
    INFORMATION SCIENCES, 2014, 283 : 288 - 303