An ensemble of autonomous auto-encoders for human activity recognition

Cited by: 40
Authors
Garcia, Kemilly Dearo [1 ,3 ]
de Sa, Claudio Rebelo [2 ]
Poel, Mannes [1 ]
Carvalho, Tiago [2 ]
Mendes-Moreira, Joao [2 ]
Cardoso, Joao M. P. [5 ]
Carvalho, Andre C. P. L. F. de [4 ]
Kok, Joost N. [1 ]
Affiliations
[1] Univ Twente, Fac Elect Engn Math & Comp Sci, Enschede, Netherlands
[2] Univ Twente, Enschede, Netherlands
[3] Univ Sao Paulo, Sao Paulo, Brazil
[4] Univ Sao Paulo, Inst Math & Comp Sci, Sao Paulo, Brazil
[5] Univ Porto, Fac Engn, Dept Informat Engn, Porto, Portugal
Keywords
Human activity recognition; Ensemble of auto-encoders; Semi-supervised learning; Autoencoder
DOI
10.1016/j.neucom.2020.01.125
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Human Activity Recognition is focused on the use of sensing technology to classify human activities and to infer human behavior. While traditional machine learning approaches use hand-crafted features to train their models, recent advancements in neural networks allow for automatic feature extraction. Auto-encoders are a type of neural network that can learn complex representations of the data and are commonly used for anomaly detection. In this work, we propose a novel multi-class algorithm consisting of an ensemble of auto-encoders, where each auto-encoder is associated with a unique class. We compared the proposed approach with other state-of-the-art approaches in the context of human activity recognition. Experimental results show that ensembles of auto-encoders can be efficient, robust and competitive. Moreover, this modular classifier structure allows for more flexible models: for example, the number of classes can be extended by including new auto-encoders, without the need to retrain the whole model. (c) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
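The one-auto-encoder-per-class idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: all class and method names here are ours, and a closed-form linear (PCA) auto-encoder stands in for the trained neural auto-encoders. A sample is assigned to the class whose auto-encoder reconstructs it with the lowest error, and new classes extend the ensemble without retraining the existing members.

```python
import numpy as np


class PCAAutoEncoder:
    """Closed-form linear auto-encoder: encode/decode through the
    top-k principal components (the MSE-optimal linear auto-encoder)."""

    def __init__(self, n_hidden):
        self.n_hidden = n_hidden

    def fit(self, X):
        self.mean = X.mean(axis=0)
        # Right singular vectors span the best-reconstructing subspace.
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = Vt[: self.n_hidden]
        return self

    def reconstruction_error(self, X):
        Z = (X - self.mean) @ self.components.T   # encode
        R = Z @ self.components + self.mean       # decode
        return np.mean((X - R) ** 2, axis=1)      # per-sample MSE


class AutoEncoderEnsemble:
    """One auto-encoder per class; a sample is assigned to the class
    whose auto-encoder reconstructs it with the lowest error."""

    def __init__(self, n_hidden=2):
        self.n_hidden = n_hidden
        self.models = {}

    def fit(self, X, y):
        for label in np.unique(y):
            self.models[label] = PCAAutoEncoder(self.n_hidden).fit(X[y == label])
        return self

    def add_class(self, label, X):
        # New classes are added without retraining existing members,
        # mirroring the modularity claimed in the abstract.
        self.models[label] = PCAAutoEncoder(self.n_hidden).fit(X)

    def predict(self, X):
        labels = list(self.models)
        errors = np.stack(
            [self.models[l].reconstruction_error(X) for l in labels]
        )
        return np.asarray(labels)[np.argmin(errors, axis=0)]
```

The minimum-reconstruction-error rule treats each auto-encoder as a one-class (novelty) detector for its activity, which is why the ensemble decomposes cleanly into independently trainable members.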
Pages: 271-280 (10 pages)