Three-Year Review of the 2018-2020 SHL Challenge on Transportation and Locomotion Mode Recognition From Mobile Sensors

Cited by: 22
Authors
Wang, Lin [1 ]
Gjoreski, Hristijan [2 ]
Ciliberto, Mathias [3 ]
Lago, Paula [4 ]
Murao, Kazuya [5 ]
Okita, Tsuyoshi [6 ]
Roggen, Daniel [3 ]
Affiliations
[1] Queen Mary Univ London, Ctr Intelligent Sensing, London, England
[2] Ss Cyril & Methodius Univ, Fac Elect Engn & Informat Technol, Skopje, North Macedonia
[3] Univ Sussex, Wearable Technol Lab, Brighton, E Sussex, England
[4] Univ Nacl Abierta & Distancia, Bogota, Colombia
[5] Ritsumeikan Univ, Coll Informat Sci & Engn, Shiga, Japan
[6] Kyushu Inst Technol, Kitakyushu, Fukuoka, Japan
Source
FRONTIERS IN COMPUTER SCIENCE | 2021, Vol. 3
Funding
European Union Horizon 2020;
Keywords
activity recognition; context-aware computing; deep learning; machine learning; mobile sensing; transportation mode recognition; SMARTPHONES;
DOI
10.3389/fcomp.2021.713719
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline classification codes
081203; 0835;
Abstract
The Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenges aim to advance and capture the state of the art in locomotion and transportation mode recognition from smartphone motion (inertial) sensors. The goal of this series of machine learning and data science challenges was to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Car, Bus, Train, Subway). The three challenges focused on time-independent (SHL 2018), position-independent (SHL 2019) and user-independent (SHL 2020) evaluations, respectively. Overall, we received 48 submissions (out of 93 teams who registered interest) involving 201 scientists over the three years. This survey captures the state of the art through a meta-analysis of the contributions to the three challenges, covering the approaches, recognition performance, computational requirements, and software tools and frameworks used. The results show that state-of-the-art methods can distinguish most modes of transportation with relative ease, although differentiating between subtly distinct activities, such as rail transport (Train vs. Subway) and road transport (Bus vs. Car), remains challenging. We summarize insightful methods from participants that could be employed to address practical challenges of transportation mode recognition, for instance to tackle over-fitting, to learn robust representations, to exploit data augmentation, and to apply smart post-processing to improve performance. Finally, we present baseline results that compare the three challenges using a unified recognition pipeline and decision window length.
Pages: 24
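
Illustration (not the authors' baseline implementation): the sketch below shows the kind of window-based recognition pipeline the abstract refers to, with fixed-length decision windows over inertial data, simple hand-crafted features, a generic classifier, and majority-vote post-processing. The 100 Hz sampling rate, 5 s window length, synthetic data and RandomForest classifier are all assumptions chosen for illustration only.

# Minimal sketch of a window-based transportation-mode recognition pipeline.
# Assumptions (not taken from the paper): 100 Hz sampling, 5 s non-overlapping
# decision windows, synthetic 3-axis accelerometer data, RandomForest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100                     # assumed sampling rate (Hz)
WIN = 5 * FS                 # assumed 5 s decision window, in samples
CLASSES = ["Still", "Walk", "Run", "Bike", "Car", "Bus", "Train", "Subway"]

def segment(stream, win=WIN):
    """Split a (n_samples, n_axes) sensor stream into non-overlapping windows."""
    n = stream.shape[0] // win
    return stream[:n * win].reshape(n, win, stream.shape[1])

def extract_features(window):
    """Simple per-axis statistics for one (win, n_axes) window."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

def majority_smooth(labels, k=5):
    """Post-process window-level predictions with a sliding majority vote."""
    smoothed = labels.copy()
    half = k // 2
    for i in range(len(labels)):
        seg = labels[max(0, i - half):i + half + 1]
        vals, counts = np.unique(seg, return_counts=True)
        smoothed[i] = vals[np.argmax(counts)]
    return smoothed

# Synthetic placeholder data standing in for labelled inertial recordings.
rng = np.random.default_rng(0)
train_stream = rng.normal(size=(600 * FS, 3))            # 10 min of 3-axis data
train_labels = rng.integers(0, len(CLASSES), size=600 * FS // WIN)
test_stream = rng.normal(size=(60 * FS, 3))              # 1 min of 3-axis data

X_train = np.array([extract_features(w) for w in segment(train_stream)])
X_test = np.array([extract_features(w) for w in segment(test_stream)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, train_labels)
predictions = majority_smooth(clf.predict(X_test))
print([CLASSES[i] for i in predictions])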