A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-Based Variational Autoencoder

Cited by: 678
Authors
Park, Daehyung [1 ]
Hoshi, Yuuna [1 ]
Kemp, Charles C. [1 ]
Affiliations
[1] Healthcare Robotics Lab, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, United States
Funding
U.S. National Science Foundation
Keywords
Anomaly detection; Robots
DOI
10.1109/LRA.2018.2801475
Chinese Library Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can help detect a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem for model-based anomaly detection. We introduce a long short-term memory-based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution by introducing a progress-based varying prior. Our LSTM-VAE-based detector reports an anomaly when a reconstruction-based anomaly score is higher than a state-based threshold. In evaluations with 1555 robot-assisted feeding executions, including 12 representative types of anomalies, our detector achieved a higher area under the receiver operating characteristic curve (0.8710) than five baseline detectors from the literature. We also show that variational autoencoding and state-based thresholding are effective in detecting anomalies from 17 raw sensory signals without significant feature-engineering effort. © 2018 IEEE.
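The detection rule summarized in the abstract, an anomaly is reported when a reconstruction-based score exceeds a state-based threshold, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the Gaussian negative log-likelihood score, the per-state threshold dictionary, and all numeric values are assumptions for demonstration.

```python
import numpy as np

def anomaly_score(x, mu, var):
    # Reconstruction-based score: negative Gaussian log-likelihood of the
    # observation x under a decoder's predicted mean mu and variance var.
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def detect(scores, states, thresholds):
    # State-based thresholding: each execution state (e.g. a task-progress
    # bin) has its own threshold; flag an anomaly when the score exceeds it.
    return [s > thresholds[st] for s, st in zip(scores, states)]

# Toy example: three timesteps over 17 raw sensory signals (the paper's
# input dimensionality), with a near-perfect reconstruction.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 17))
mu = x + rng.normal(scale=0.01, size=x.shape)
var = np.full_like(x, 0.1)
scores = [anomaly_score(x[t], mu[t], var[t]) for t in range(3)]
flags = detect(scores, states=[0, 0, 1], thresholds={0: 50.0, 1: 50.0})
```

With an accurate reconstruction the scores stay well below the (hypothetical) thresholds, so no anomaly is flagged; a strongly corrupted observation drives the squared-error term up and trips the detector.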
Pages: 1544–1551