Hybrid Multi-Modal Deep Learning using Collaborative Concat Layer in Health Bigdata

Cited by: 11
Authors
Kim, Joo-Chang [1 ]
Chung, Kyungyong [2 ]
Affiliations
[1] Kyonggi Univ, Dept Comp Sci, Suwon 16227, South Korea
[2] Kyonggi Univ, Div Comp Sci & Engn, Suwon 16227, South Korea
Keywords
Deep learning; Data models; Collaboration; Data mining; Smart healthcare; Neural networks; Health bigdata; data imputation; multi-modal; model concatenate; hybrid learning; WEIGHT; MODEL;
DOI
10.1109/ACCESS.2020.3031762
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
A data-driven health model faces missing values that vary with the user's situation, and its accuracy drops when it requires variables that the user cannot collect. A deep learning health model is fitted by learning weights to increase accuracy, but when such a model is applied to a new user situation without retraining, its accuracy may degrade. In this paper, we propose hybrid multimodal deep learning using a collaborative concat layer (CCL) for health big data. The proposed method uses a machine learning technique to mitigate a problem that arises in multimodal health deep learning: the range of observable data changes with the user's situation. The CCL is a layer composed of the connections, inputs, and outputs of collaborative nodes (CNs). A CN is a node that predicts an absent variable through filtering based on the similarity of the input values. With CNs, a CCL that handles missing values at the input of a health model can be configured, resolving the missing-value problem in the health model. The proposed CCL makes it possible to reuse existing models or to construct new ones by concatenating several single-modal deep learning models. By evaluating the effect of the CCL's structural position on the model's input and output, various networks can be configured while the performance of each single-modal model is maintained. In particular, experiments conducted under the assumption that a specific variable is absent, depending on the user's situation, show that the accuracy of a deep learning model is more stable when the CCL is used.
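The abstract's core mechanism can be illustrated with a minimal sketch: a collaborative node (CN) predicts a missing input variable by filtering for the most similar complete records on the observed features and aggregating their values, and a collaborative-concat step applies CNs to every missing entry so the completed vector can be fed to (or concatenated with) a single-modal model. This is only an illustrative k-nearest-neighbor-style reading of the CN idea; the function names, the similarity measure (Euclidean distance), and `k` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def collaborative_node(train, query, missing_idx, k=3):
    """CN sketch: estimate query[missing_idx] by filtering the k
    training rows most similar to `query` on its observed features
    (Euclidean similarity assumed) and averaging their values for
    the missing variable."""
    observed = np.flatnonzero(~np.isnan(query))
    dists = np.linalg.norm(train[:, observed] - query[observed], axis=1)
    nearest = np.argsort(dists)[:k]
    return train[nearest, missing_idx].mean()

def collaborative_concat(train, query, k=3):
    """CCL sketch: impute every missing entry of `query` with a CN,
    yielding a complete vector ready to pass into, or concatenate
    with, a single-modal model's input."""
    filled = query.copy()
    for idx in np.flatnonzero(np.isnan(query)):
        filled[idx] = collaborative_node(train, query, idx, k)
    return filled
```

For example, if a user's record is missing its third variable, the CN fills it from the two complete training records whose observed features match best, so the downstream model never sees a NaN at its input.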
Pages: 192469-192480
Number of pages: 12