Leveraging Sensor Fusion and Sensor-Body Position for Activity Recognition for Wearable Mobile Technologies

Cited by: 4
Authors
Alam A. [1]
Das A. [1]
Tasjid S. [1]
Marouf A.A. [1]
Affiliations
[1] Daffodil International University, Dhaka
Keywords
deep learning; human activity recognition; machine learning; sensor fusion; sensors
DOI
10.3991/ijim.v15i17.25197
Abstract
Smart devices such as smartphones and smartwatches have made the world smarter. These wearable devices are the product of extensive research aimed at making them more usable and interactive for their users. Interactive mobile applications such as augmented reality (AR), virtual reality (VR), and mixed reality (MR) depend heavily on the devices' built-in sensors. Sensors such as the accelerometer and gyroscope enable many useful services: combined with artificial intelligence, they can recognize physical activities such as walking, jogging, and sitting, which is valuable for analyses like health-state prediction and exercise tracking. In this paper, we implement machine learning and deep learning algorithms to detect and recognize eight activities, namely walking, jogging, standing, walking upstairs, walking downstairs, sitting, sitting-in-a-car, and cycling, with a maximum accuracy of 99.3%. Some activities, such as sitting and sitting-in-a-car, involve nearly identical motion and are difficult to distinguish, which makes the recognition task more challenging. We hypothesize that with more sensors (sensor fusion) and more data-collection points (sensor-body positions), a wider range of activities can be recognized and recognition accuracy can be increased. Finally, we show that combining all sensor data from both the pocket/waist and wrist positions makes it possible to recognize a wide range of activities accurately. The proposed methodologies hold considerable promise for future mobile technologies. The adoption of recent deep learning algorithms such as the convolutional neural network (CNN) and the bidirectional Long Short-Term Memory (Bi-LSTM) network demonstrates the credibility of the presented methods. © 2021. All Rights Reserved.
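The abstract describes the overall pipeline: fuse accelerometer and gyroscope streams from two body positions (pocket/waist and wrist), window them, and classify each window with a CNN and a Bi-LSTM. Below is a minimal Keras sketch of one such CNN + Bi-LSTM classifier. The window length (128 samples), channel layout (12 fused channels), and all layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

# Assumed shapes: 128-sample windows of 12 channels
# (3-axis accelerometer + 3-axis gyroscope, from both the
# pocket/waist and wrist positions, concatenated channel-wise).
WINDOW_LEN, N_CHANNELS, N_CLASSES = 128, 12, 8


def build_cnn_bilstm(window_len=WINDOW_LEN,
                     n_channels=N_CHANNELS,
                     n_classes=N_CLASSES):
    """CNN feature extractor followed by a Bi-LSTM classifier;
    layer sizes are illustrative, not the paper's exact model."""
    model = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        # 1-D convolutions learn local motion patterns per window.
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        # The Bi-LSTM models temporal dependencies in both directions.
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.5),
        # One softmax output per activity class (8 in the paper).
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Usage with synthetic stand-in data; real inputs would be
# fixed-length windows cut from the fused sensor streams.
X = np.random.randn(32, WINDOW_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=32)
model = build_cnn_bilstm()
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```

Concatenating the two body positions' streams channel-wise is one simple fusion strategy consistent with the abstract's framing; per-position models whose features or decisions are merged later would be an equally plausible reading.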
Pages: 141-155
Page count: 14