EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition

Times cited: 20
Authors
Choi, Jun-Ho [1 ]
Lee, Jong-Seok [1 ]
Affiliations
[1] Yonsei Univ, Sch Integrated Technol, Incheon, South Korea
Source
UBICOMP/ISWC'19 ADJUNCT: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2019 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS | 2019
Keywords
multimodal fusion; activity recognition; deep learning;
DOI
10.1145/3341162.3344871
CLC number
TP3 [computing technology, computer technology]
Subject classification code
0812
Abstract
Human activity recognition using multiple sensors has been a challenging but promising task over recent decades. In this paper, we propose a deep multimodal fusion model for activity recognition based on the recently proposed feature-fusion architecture EmbraceNet. Our model processes the data from each sensor independently, combines the resulting features with the EmbraceNet architecture, and post-processes the fused feature to predict the activity. In addition, we propose additional processing steps to boost the performance of our model. We submit the results obtained from our proposed model to the SHL recognition challenge under the team name "Yonsei-MCML."
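The fusion step described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the fixed feature length, and the modality-selection probabilities below are illustrative assumptions. It only shows the core EmbraceNet idea of stochastically picking, for each element of the fused vector, the value from exactly one modality.

```python
import numpy as np

def embrace_fusion(features, probs, rng):
    """EmbraceNet-style fusion (illustrative sketch).

    features: list of K same-length modality feature vectors
              (assumed outputs of per-modality "docking" layers).
    probs:    length-K modality-selection probabilities (sum to 1).
    For each element of the fused vector, one modality is drawn at
    random and its value is kept; the other modalities' values at
    that position are discarded.
    """
    stacked = np.stack(features)           # shape (K, c)
    K, c = stacked.shape
    # Draw one modality index per output element.
    choices = rng.choice(K, size=c, p=probs)
    # One-hot mask: mask[k, i] = 1 iff modality k is kept at position i.
    mask = np.eye(K)[choices].T            # shape (K, c)
    return (stacked * mask).sum(axis=0)    # shape (c,)

# Usage: fuse two hypothetical 8-dimensional modality features.
rng = np.random.default_rng(0)
a = np.ones(8)          # e.g. accelerometer features
b = np.full(8, 2.0)     # e.g. gyroscope features
fused = embrace_fusion([a, b], [0.5, 0.5], rng)
```

Because each fused element comes from exactly one modality, the model is pushed to make every modality's features individually informative, which is what gives EmbraceNet its robustness to missing or noisy sensors.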
Pages: 693-698
Page count: 6