Evolution of Abstraction Across Layers in Deep Learning Neural Networks

Cited by: 12
Authors
Kozma, Robert [1 ,2 ]
Ilin, Roman [3 ]
Siegelmann, Hava T. [1 ]
Affiliations
[1] Univ Massachusetts, Dept Comp Sci, Amherst, MA 01003 USA
[2] Univ Memphis, Dept Math, Memphis, TN 38152 USA
[3] RYAT, Air Force Res Lab, Dayton, OH 45433 USA
Source
INNS CONFERENCE ON BIG DATA AND DEEP LEARNING | 2018 / Vol. 144
Keywords
Deep Learning; Convolutional Neural Networks; Abstraction Level; Image Processing; Knowledge
DOI
10.1016/j.procs.2018.10.520
CLC Classification Number
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
Deep learning neural networks produce excellent results in various pattern recognition tasks. It is of great practical importance to answer open questions regarding model design and parameterization, and to understand how input data are converted into meaningful knowledge at the output. The layer-by-layer evolution of the abstraction level has been proposed previously as a quantitative measure to describe the emergence of knowledge in the network. In this work we systematically evaluate the abstraction level for a variety of image datasets. We observe a general tendency of increasing abstraction from input to output, with the exception of a drop in abstraction at some ReLU and pooling layers. The abstraction level is relatively low and does not change significantly in the first few layers following the input, while it fluctuates around a high saturation value in the layers preceding the output. Finally, the layer-by-layer change in abstraction is not normally distributed, but rather approximates an exponential distribution. These results point to salient local features of deep layers impacting overall (global) classification performance. We compare the results extracted from deep learning neural networks performing image processing tasks with results obtained by analyzing brain imaging data. Our conclusions may be helpful in future designs of more efficient, compact deep learning neural networks. (C) 2018 The Authors. Published by Elsevier Ltd.
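The abstract does not spell out the authors' abstraction measure, so the Python sketch below only illustrates the general workflow it describes: collect per-layer activations with forward hooks and assign each layer a scalar score, here a simple class-separability proxy (between-class variance over total variance). The small network, the proxy measure, and the random placeholder data are assumptions made for illustration, not the paper's method.

# Minimal sketch of a layer-by-layer "abstraction" analysis (illustrative only).
import torch
import torch.nn as nn

# Small CNN standing in for the image-classification networks analyzed in the paper (assumption).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Flatten each sample's activation into a vector for the statistics below.
        activations[name] = output.detach().flatten(start_dim=1)
    return hook

for idx, layer in enumerate(model):
    layer.register_forward_hook(make_hook(f"{idx}:{layer.__class__.__name__}"))

# Placeholder batch: random 32x32 "images" with random labels (stand-in for CIFAR-like data).
x = torch.randn(256, 3, 32, 32)
y = torch.randint(0, 10, (256,))
model.eval()
with torch.no_grad():
    model(x)

def separability(feats, labels):
    # Between-class variance / total variance: one possible abstraction proxy (not the authors' measure).
    overall_mean = feats.mean(dim=0)
    total_var = ((feats - overall_mean) ** 2).sum()
    between = 0.0
    for c in labels.unique():
        class_feats = feats[labels == c]
        between += len(class_feats) * ((class_feats.mean(dim=0) - overall_mean) ** 2).sum()
    return (between / total_var).item()

# Print one score per layer, in forward order, to inspect how it evolves from input to output.
for name, feats in activations.items():
    print(f"{name:20s} abstraction proxy = {separability(feats, y):.4f}")

With real labeled data, plotting the per-layer scores produced by such a sketch is one way to visualize the kind of input-to-output abstraction profile the paper reports.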
Pages: 203-213
Page count: 11