Deep neural networks with visible intermediate layers

Cited by: 0
Authors
Gao, Ying-Ying [1 ]
Zhu, Wei-Bin [1 ]
Affiliations
[1] Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
Source
Zidonghua Xuebao/Acta Automatica Sinica | 2015, Vol. 41, No. 9
Keywords
Network layers; Speech recognition; Emotion recognition
DOI
10.16383/j.aas.2015.c150023
Abstract
The hidden nature of the intermediate layers in deep neural networks makes the learning process hard to track and the learned results difficult to explain, which to some extent restricts the development of deep networks. This work focuses on making these intermediate layers visible through prior knowledge, that is, giving them definite meanings and explicit interrelationships, in the hope of supervising the learning process of deep networks and guiding the learning direction. On the basis of the deep stacking network (DSN), we propose two networks whose intermediate layers are partially visible: the input-layer visible deep stacking network (IVDSN) and the hidden-layer visible deep stacking network (HVDSN). Keeping the layers partially rather than fully visible leaves room for the unknown and for error. The proposed networks are evaluated on text-based detection of speech emotion. The results confirm that making intermediate layers transparent helps improve the performance of deep neural networks; of the two proposed networks, HVDSN has the simpler structure and the better performance. Copyright © 2015 Acta Automatica Sinica. All rights reserved.
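The abstract describes the construction only at a high level; the sketch below is a hypothetical illustration (not the authors' code) of the core idea of a partially visible intermediate layer: one stacking module whose output units are supervised with interpretable, prior-knowledge targets (e.g., keyword-group indicators for text-based emotion detection), and a second module that stacks those visible outputs on the raw input to predict the emotion class. All names, dimensions, and the ridge-regression solve are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' implementation) of a two-module
# deep stacking network (DSN) with a "visible" intermediate layer.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_module(X, T, n_hidden=32):
    """One DSN-style module: random input->hidden weights, ridge solve for hidden->output."""
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))    # input -> hidden
    H = sigmoid(X @ W)                                          # hidden activations
    U = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ T)  # hidden -> output
    return W, U

def run_module(X, W, U):
    return sigmoid(X @ W) @ U

# Toy data: 200 samples, 20 text features, 4 interpretable intermediate targets
# (assumed prior-knowledge labels, e.g. keyword-group activations), 3 emotion classes.
X = rng.normal(size=(200, 20))
T_mid = (rng.random((200, 4)) > 0.5).astype(float)   # targets for the visible layer
T_out = np.eye(3)[rng.integers(0, 3, size=200)]      # one-hot emotion labels

# Module 1: its outputs are supervised with interpretable targets, so this
# intermediate layer has a definite meaning (the "visible" part).
W1, U1 = train_module(X, T_mid)
Z1 = run_module(X, W1, U1)

# Module 2: stacks the raw input with the visible intermediate outputs
# (the free hidden units leave room for the unknown) and predicts the emotion.
X2 = np.hstack([X, Z1])
W2, U2 = train_module(X2, T_out)
pred = run_module(X2, W2, U2)
print("train accuracy:", (pred.argmax(1) == T_out.argmax(1)).mean())
```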
Pages: 1627-1637
Related papers
50 records in total
  • [21] Automated INL/OPL subsidence detection in intermediate AMD with deep neural networks
    Aresta, Guilherme
    Araujo, Teresa
    Riedl, Sophie
    Reiter, Gregor Sebastian
    Guymer, Robyn H.
    Wu, Zhichao
    Schmidt-Erfurth, Ursula
    Bogunovic, Hrvoje
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)
  • [22] Compressing fully connected layers of deep neural networks using permuted features
    Nagaraju, Dara
    Chandrachoodan, Nitin
    IET COMPUTERS AND DIGITAL TECHNIQUES, 2023, 17 (3-4): 149 - 161
  • [23] Diverse Feature Visualizations Reveal Invariances in Early Layers of Deep Neural Networks
    Cadena, Santiago A.
    Weis, Marissa A.
    Gatys, Leon A.
    Bethge, Matthias
    Ecker, Alexander S.
    COMPUTER VISION - ECCV 2018, PT XII, 2018, 11216 : 225 - 240
  • [24] Quantile Layers: Statistical Aggregation in Deep Neural Networks for Eye Movement Biometrics
    Abdelwahab, Ahmed
    Landwehr, Niels
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 11907 : 332 - 348
  • [25] Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows
    Yoo, Gene Ryan
    Owhadi, Houman
    PHYSICA D-NONLINEAR PHENOMENA, 2021, 426
  • [26] Bridging of layers of neural networks
    Kopcanski, D
    Odri, S
    Petrovacki, D
    2002 6TH SEMINAR ON NEURAL NETWORK APPLICATIONS IN ELECTRICAL ENGINEERING, PROCEEDINGS, 2002, : 17 - 22
  • [27] FuzzyDCNN: Incorporating Fuzzy Integral Layers to Deep Convolutional Neural Networks for Image Segmentation
    Lin, Qiao
    Chen, Xin
    Chen, Chao
    Garibaldi, Jonathan M.
    IEEE CIS INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS 2021 (FUZZ-IEEE), 2021,
  • [28] Deep Neural Networks with Mixture of Experts Layers for Complex Event Recognition from Images
    Li, Mingyao
    Kamata, Sei-ichiro
    2018 JOINT 7TH INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV) AND 2018 2ND INTERNATIONAL CONFERENCE ON IMAGING, VISION & PATTERN RECOGNITION (ICIVPR), 2018, : 410 - 415
  • [29] Restructuring Output Layers of Deep Neural Networks using Minimum Risk Parameter Clustering
    Kubo, Yotaro
    Suzuki, Jun
    Hori, Takaaki
    Nakamura, Atsushi
    15TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2014), VOLS 1-4, 2014, : 1068 - 1072
  • [30] Performance-Portable Autotuning of OpenCL Kernels for Convolutional Layers of Deep Neural Networks
    Tsai, Yaohung M.
    Luszczek, Piotr
    Kurzak, Jakub
    Dongarra, Jack
    PROCEEDINGS OF 2016 2ND WORKSHOP ON MACHINE LEARNING IN HPC ENVIRONMENTS (MLHPC), 2016, : 9 - 18