Multi-modal human robot interaction for map generation

Cited: 0
Authors
Saito, H [1 ]
Ishimura, K [1 ]
Hattori, M [1 ]
Takamori, T [1 ]
Affiliation
[1] Kobe Univ, Kobe, Hyogo 6578501, Japan
Source
SICE 2002: PROCEEDINGS OF THE 41ST SICE ANNUAL CONFERENCE, VOLS 1-5 | 2002
Keywords
DOI
None available
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
This paper describes an interface for multi-modal human-robot interaction that enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot generates a map of the environment from the knowledge learned through this communication and uses the map for navigation.
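The teaching loop the abstract describes (a speech label plus a gesture-derived position is fused into a semantic map, which is later queried for navigation) might be sketched as follows. This is a minimal illustration, not the paper's implementation; all class and method names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MapEntry:
    """An object or place the robot has been taught about."""
    name: str
    position: tuple                      # (x, y) in the room frame, e.g. from a pointing gesture
    attributes: dict = field(default_factory=dict)

class SemanticMap:
    """Minimal map built from multi-modal teaching events."""
    def __init__(self):
        self.entries = {}

    def teach(self, name, position, **attributes):
        # Fuse a speech label ("this is the red chair") with the
        # gesture-derived position into a single map entry.
        entry = self.entries.setdefault(name, MapEntry(name, position))
        entry.position = position
        entry.attributes.update(attributes)

    def locate(self, name):
        # Navigation query: where is the named object or place?
        entry = self.entries.get(name)
        return entry.position if entry else None

# Example teaching session
m = SemanticMap()
m.teach("chair", (1.0, 2.5), color="red")
m.teach("door", (0.0, 4.0))
print(m.locate("chair"))  # (1.0, 2.5)
```

The map is deliberately keyed by the spoken label, so a later navigation command ("go to the chair") reduces to a dictionary lookup followed by path planning to the stored position.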
Pages: 2721-2724
Page count: 4