Hybrid Multi-Modal Deep Learning using Collaborative Concat Layer in Health Bigdata

Cited by: 11
Authors
Kim, Joo-Chang [1 ]
Chung, Kyungyong [2 ]
Affiliations
[1] Kyonggi Univ, Dept Comp Sci, Suwon 16227, South Korea
[2] Kyonggi Univ, Div Comp Sci & Engn, Suwon 16227, South Korea
Keywords
Deep learning; Data models; Collaboration; Data mining; Smart healthcare; Neural networks; Health bigdata; data imputation; multi-modal; model concatenate; hybrid learning; WEIGHT; MODEL
DOI
10.1109/ACCESS.2020.3031762
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
A data-driven health model encounters a variety of missing values depending on the user's situation, and its accuracy drops when it requires variables the user cannot collect. A deep learning health model is fitted by learning weights to increase accuracy, but when such a model is applied to a user situation it was not trained for, accuracy may degrade. In this paper, we propose hybrid multimodal deep learning using a collaborative concat layer (CCL) in health big data. The proposed method uses a machine learning technique to alleviate the problem that arises in multimodal health deep learning when the range of observable data changes with the user's situation. The CCL is a layer composed of the connections, inputs, and outputs of collaborative nodes (CNs). A CN is a node that predicts absent variables through filtering based on the similarity of the input values. Using CNs, a CCL that handles missing values at the input of a health model can be configured, resolving the missing-value problem in the health model. With the proposed CCL, existing models can be reused and new models can be constructed by concatenating several single-modal deep learning models. By evaluating the effect of the CCL's structural position on the model's inputs and outputs, various networks can be configured while maintaining the performance of the single-modal models. In particular, experiments conducted under the assumption that specific variables are absent depending on the user's situation show that the accuracy of a deep learning model is more stable when the CCL is used.
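The collaborative node described in the abstract predicts absent variables by filtering reference records on the similarity of the observed input values, and the CCL then concatenates the completed inputs of several single-modal models. The paper's exact formulation is not given in this record, so the following is only a minimal sketch of that idea, assuming cosine similarity over the observed features and a top-k mean as the prediction; the function names and the k-nearest scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def collaborative_node(x, reference, missing_mask, k=3):
    """Sketch of a CN: predict the missing entries of input vector x
    from the k reference rows most similar to x, where similarity is
    computed only over the observed (non-missing) features.
    Assumed scheme -- the paper may define the CN differently."""
    obs = ~missing_mask
    ref_obs = reference[:, obs]                  # reference rows, observed columns only
    x_obs = x[obs]
    # cosine similarity between x and each reference row on observed features
    sims = ref_obs @ x_obs / (
        np.linalg.norm(ref_obs, axis=1) * np.linalg.norm(x_obs) + 1e-9
    )
    top = np.argsort(sims)[-k:]                  # indices of the k most similar rows
    filled = x.copy()
    # predict each absent variable as the mean over the top-k neighbours
    filled[missing_mask] = reference[top][:, missing_mask].mean(axis=0)
    return filled

def collaborative_concat(modal_inputs, references, k=3):
    """Sketch of a CCL at the input side: complete each modality's input
    vector with its CN, then concatenate them for the downstream model."""
    completed = [
        collaborative_node(x, ref, np.isnan(x), k=k)
        for x, ref in zip(modal_inputs, references)
    ]
    return np.concatenate(completed)
```

In this sketch the CCL sits in front of the concatenation point, so each single-modal model still receives a fully populated input and its learned weights can be reused unchanged, which matches the abstract's claim that single-modal performance is maintained.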
Pages: 192469-192480
Page count: 12