LIMIT: Learning Interfaces to Maximize Information Transfer

Cited by: 0
Authors
Christie, Benjamin A. [1]
Losey, Dylan P. [1]
Affiliations
[1] Virginia Tech, Dept Mech Engn, Blacksburg, VA 24061 USA
Keywords
Interfaces; Information Theory; Co-Adaptation; Human-Robot Interaction; MOTION INTENT; ROBOT; REALITY
DOI
10.1145/3675758
CLC number
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Robots can use auditory, visual, or haptic interfaces to convey information to human users. The way these interfaces select signals is typically pre-defined by the designer: for instance, a haptic wristband might vibrate when the robot is moving and squeeze when the robot stops. But different people interpret the same signals in different ways, so what makes sense to one person might be confusing or unintuitive to another. In this article, we introduce a unified algorithmic formalism for learning co-adaptive interfaces from scratch. Our method does not need to know the human's task (i.e., what the human is using these signals for). Instead, our insight is that interpretable interfaces should select signals that maximize correlation between the human's actions and the information the interface is trying to convey. Applying this insight, we develop Learning Interfaces to Maximize Information Transfer (LIMIT). LIMIT optimizes a tractable, real-time proxy of information gain in continuous spaces. The first time a person works with our system the signals may appear random, but over repeated interactions the interface learns a one-to-one mapping between displayed signals and human responses. Our resulting approach is both personalized to the current user and not tied to any specific interface modality. We compare LIMIT to state-of-the-art baselines across controlled simulations, an online survey, and an in-person user study with auditory, visual, and haptic interfaces. Overall, our results suggest that LIMIT learns interfaces that enable users to complete the task more quickly and efficiently, and that users subjectively prefer LIMIT to the alternatives. See videos here: https://youtu.be/IvQ3TM1_2fA.
Pages: 26