Deep and wide nonnegative matrix factorization with embedded regularization

Cited by: 0
Authors
Moayed, Hojjat [1]
Mansoori, Eghbal G. [1]
Affiliations
[1] Shiraz Univ, Sch Elect & Comp Engn, Shiraz, Iran
Keywords
Feature extraction; Deep learning; Nonnegative matrix factorization; Channel augmentation; Regularization; NETWORK; RECOGNITION
DOI
10.1016/j.patcog.2024.110530
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract
End-to-end learning is an advanced framework in deep learning: it combines feature extraction and subsequent pattern recognition (classification, clustering, etc.) in a unified learning structure. However, such deep networks face several challenges, including overfitting, vanishing gradients, computational complexity, information loss across layers, and weak robustness to noisy data and features. To address these challenges, this paper presents Deep and Wide Nonnegative Matrix Factorization (DWNMF) with embedded regularization for the feature extraction stage of end-to-end models. DWNMF aims to extract more robust features while preventing overfitting through embedded regularization. To this end, DWNMF augments the input data with noisy versions of itself, forming diverse channels, and then extracts the features of all channels in parallel using distinct network branches. The model's parameters learn the intrinsic hierarchical features in the channels of complex data objects. Finally, the features extracted in the different channels are aggregated into a single feature space to perform the classification task. To embed regularization in the DWNMF model, some NMF neurons in the layers are substituted with random neurons, which increases the stability and robustness of the extracted features. Experimental results confirm that DWNMF extracts more robust features, prevents overfitting, and achieves better classification accuracy than state-of-the-art methods.
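The pipeline the abstract describes (noisy channel augmentation, parallel per-channel deep NMF branches, aggregation into one feature space, and random-neuron substitution) can be illustrated with a short sketch. The following is a minimal, hypothetical Python reconstruction that uses scikit-learn's NMF as the per-layer factorizer; the names make_channels, deep_nmf_branch, and dwnmf_features, and the parameters n_channels, noise_scale, layer_sizes, and substitution_rate, are illustrative assumptions, not the paper's actual implementation.

import numpy as np
from sklearn.decomposition import NMF

def make_channels(X, n_channels=3, noise_scale=0.05, seed=None):
    # "Wide" part: augment the input with noisy copies to form diverse channels.
    rng = np.random.default_rng(seed)
    channels = [X]
    for _ in range(n_channels - 1):
        noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        channels.append(np.clip(noisy, 0.0, None))  # keep data nonnegative
    return channels

def deep_nmf_branch(X, layer_sizes=(64, 32), substitution_rate=0.1, seed=None):
    # "Deep" part: stack NMF layers, factorizing each layer's features again.
    # Embedded regularization (assumption): replace a fraction of the extracted
    # features (columns of W) with random nonnegative neurons at every layer.
    rng = np.random.default_rng(seed)
    H = X
    for k in layer_sizes:
        W = NMF(n_components=k, init="nndsvda", max_iter=300).fit_transform(H)
        n_sub = int(substitution_rate * k)
        idx = rng.choice(k, size=n_sub, replace=False)
        W[:, idx] = rng.random((W.shape[0], n_sub))  # random neurons
        H = W
    return H  # features of this branch/channel

def dwnmf_features(X, n_channels=3):
    # One branch per channel, run independently, then aggregated by
    # concatenation into a single feature space.
    branches = [deep_nmf_branch(C, seed=i)
                for i, C in enumerate(make_channels(X, n_channels, seed=0))]
    return np.hstack(branches)

# Example: extract features for a random nonnegative data matrix.
X = np.abs(np.random.default_rng(0).standard_normal((200, 100)))
Z = dwnmf_features(X)
print(Z.shape)  # (200, 96): 32 features per branch, 3 channels

Under these assumptions, Z plays the role of the aggregated feature space from the abstract; in the end-to-end model it would feed the downstream classifier.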
Pages: 16