Occupant behavior monitoring and emergency event detection in single-person households using deep learning-based sound recognition

Cited by: 40
Authors
Kim, Jinwoo [1 ,2 ]
Min, Kyungjun [3 ]
Jung, Minhyuk [2 ]
Chi, Seokho [1 ,2 ]
Affiliations
[1] Seoul Natl Univ, Dept Civil & Environm Engn, 1 Gwanak Ro, Seoul, South Korea
[2] Seoul Natl Univ, Inst Construct & Environm Engn, 1 Gwanak Ro, Seoul, South Korea
[3] TmaxA&C Co Ltd, 29 Hwangsaeul Ro 258Beon Gil, Seongnam Si, Gyeonggi Do, South Korea
Funding
National Research Foundation of Singapore
Keywords
Occupant behaviors; Emergency events; Single-person households; Deep learning; Sound recognition; Home environments; Lonely deaths; FALL DETECTION; EARTHMOVING EXCAVATORS; CLASSIFICATION; FEATURES; SYSTEM; MODEL;
DOI
10.1016/j.buildenv.2020.107092
Chinese Library Classification (CLC)
TU [Building Science]
Discipline Code
0813
Abstract
The number of single-person households (SPHs) has been steadily increasing owing to various social changes, such as separation by death, a declining marriage rate, and a rising divorce rate. Unfortunately, this demographic change is creating a new social problem: lonely death. In response, many researchers have attempted to develop wearable-sensor-based and computer-vision-based systems that monitor occupant behaviors and detect possible emergency events in indoor environments. However, existing approaches face challenges in monitoring SPHs owing to their technical limitations; for instance, if the occupant is not wearing the electronic sensor, or if the signal is occluded by other objects, monitoring is impossible. Moreover, because existing studies focus only on classifying the occupant's daily activities, such as eating, sitting, and talking, the emergency events that matter most for SPH monitoring remain unclear. To address these challenges, this study identifies emergency events that have a critical impact on the occupant's health and proposes a deep-learning-based sound recognition model to monitor occupant behaviors and detect possible emergency events in SPH environments. Experiments were conducted using audio data collected from actual SPH home environments and from online data-sharing websites. The average precision and recall of the developed model were 78.0% and 90.8%, respectively. The results demonstrate that the model can successfully distinguish emergency sound events from the sounds of regular human activities. The findings can not only help secure and rescue SPH occupants in danger but also suggest new research directions for indoor occupant and event monitoring.
Pages: 11
References: 55