Explainable Activity Recognition for Smart Home Systems

Cited by: 17
Authors
Das, Devleena [1 ]
Nishimura, Yasutaka [2 ]
Vivek, Rajan P. [1 ]
Takeda, Naoto [2 ]
Fish, Sean T. [1 ]
Plotz, Thomas [1 ]
Chernova, Sonia [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] KDDI Res, Fujimino, Japan
Keywords
Explainable AI; smart home activity recognition; human-AI-interaction; ANOMALY DETECTION; CLASSIFIERS; TECHNOLOGY;
DOI
10.1145/3561533
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Smart home environments are designed to provide services that improve the occupant's quality of life via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems are not perfectly accurate, and the resulting inconsistencies in smart home operations can lead users who rely on smart home predictions to wonder "Why did the smart home do that?" In this work, we build on insights from Explainable Artificial Intelligence (XAI) and introduce an explainable activity recognition framework that leverages leading XAI methods (Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Anchors) to generate natural language explanations of what about an activity led to the given classification. We evaluate our framework in the context of a commonly targeted smart home scenario: autonomous remote caregiver monitoring for individuals who are living alone or need assistance. Within this context, we perform a two-step evaluation: (a) we ask Machine Learning experts to assess the sensibility of the generated explanations, and (b) we recruit non-experts in two remote caregiver monitoring user scenarios, synchronous and asynchronous, to assess the effectiveness of explanations generated via our framework. Our results show that the XAI approach SHAP has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence.
Taking all studied factors into consideration, we make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation and discuss a range of topics for future work to further improve explainable activity recognition.
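The core idea the abstract describes, attributing a classification to individual sensor features and verbalizing the top contributors in natural language, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor names, weights, and explanation template are hypothetical, the activity scorer is a toy linear model, and the Shapley values are computed exactly by coalition enumeration rather than via the SHAP library.

```python
from itertools import combinations
from math import factorial

# Toy linear scorer for the class "Cooking" over binary sensor features.
# All feature names and weights are illustrative assumptions.
WEIGHTS = {"stove_on": 2.0, "kitchen_motion": 1.5, "fridge_open": 0.5, "tv_on": -1.0}
BASELINE = {f: 0.0 for f in WEIGHTS}  # reference input: every sensor "absent"

def score(x):
    return sum(WEIGHTS[f] * x[f] for f in x)

def shapley_values(x):
    """Exact Shapley values by enumerating all feature coalitions."""
    feats = list(x)
    n = len(feats)
    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Inputs with coalition S (plus/minus f); absent features use the baseline.
                with_f = {g: (x[g] if g in S or g == f else BASELINE[g]) for g in feats}
                without_f = {g: (x[g] if g in S else BASELINE[g]) for g in feats}
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (score(with_f) - score(without_f))
        phi[f] = total
    return phi

def explain(x, label="Cooking", top_k=2):
    """Template-based natural language explanation from the top positive attributions."""
    phi = shapley_values(x)
    top = sorted((f for f in phi if phi[f] > 0), key=lambda f: -phi[f])[:top_k]
    readable = " and ".join(f.replace("_", " ") for f in top)
    return f"The activity was classified as {label} mainly because {readable}."

reading = {"stove_on": 1, "kitchen_motion": 1, "fridge_open": 0, "tv_on": 0}
print(explain(reading))
# -> The activity was classified as Cooking mainly because stove on and kitchen motion.
```

For the linear scorer the attributions reduce to weight times feature deviation from the baseline, so the exact enumeration agrees with what SHAP would report; the evaluation question the paper studies is whether such templated verbalizations are sensible and useful to end users.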
Pages: 39