Deep Learning Architecture Reduction for fMRI Data

Citations: 0
Authors
Alvarez-Gonzalez, Ruben [1]
Mendez-Vazquez, Andres [1]
Affiliations
[1] Cinvestav Guadalajara, Dept Comp Sci, Zapopan 45015, Mexico
Keywords
CNN; machine learning; deep learning; computer vision; transfer learning; n-sphere; image; classification; algorithm; design
DOI
10.3390/brainsci12020235
Chinese Library Classification (CLC)
Q189 [Neuroscience]
Discipline code
071006
Abstract
In recent years, deep learning models have demonstrated an inherently better ability to tackle non-linear classification tasks, owing to advances in deep learning architectures. However, much remains to be done, especially in designing deep convolutional neural network (CNN) configurations. The number of hyperparameters that must be tuned to achieve acceptable classification accuracy grows with every layer added, and the choice of kernels in each CNN layer affects overall performance both during training and during classification. When a popular classifier fails to perform acceptably in practice, the cause may lie in the algorithm itself or in the data processing. Understanding the feature extraction process therefore provides insights that help optimize pre-trained architectures, improve model generalization, and reveal the context of each layer's features. In this work, we aim to improve feature extraction through a texture amortization map (TAM): an algorithm that extracts characteristics from the filters while amortizing each filter's effect according to the texture of the neighboring pixels. Building on this algorithm, we develop a novel geometric classification score (GCS), a measure of how one class affects another in a classification problem, expressed in terms of the learnability complexity at every layer of the deep learning architecture. For this, we assume that all data transformations in the inner layers remain within a Euclidean space. Under this assumption, we can evaluate which layers provide the best transformations in a CNN, allowing us to reduce the weights of the deep learning architecture using the geometric hypothesis.
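The abstract does not give the GCS formula, but the core idea — scoring how separable two classes are in a layer's feature space under the Euclidean assumption — can be illustrated with a generic geometric separability measure. The sketch below (a stand-in, not the paper's exact GCS; the function name and the Fisher-style between/within ratio are assumptions) scores simulated per-layer activations of two classes; layers with higher scores would be the better candidates to keep when reducing the architecture.

```python
import numpy as np

def geometric_separability(feats_a, feats_b):
    """Ratio of between-class centroid distance to mean within-class spread.

    Higher values mean the two classes are easier to separate in this
    layer's (assumed Euclidean) feature space. This is a generic geometric
    score for illustration, not the paper's exact GCS definition.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    between = np.linalg.norm(mu_a - mu_b)
    within = (np.linalg.norm(feats_a - mu_a, axis=1).mean()
              + np.linalg.norm(feats_b - mu_b, axis=1).mean()) / 2.0
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
# Simulated 64-dimensional activations of one layer for two classes:
# well-separated clusters score higher than heavily overlapping ones.
separated = geometric_separability(rng.normal(0.0, 1.0, (200, 64)),
                                   rng.normal(5.0, 1.0, (200, 64)))
overlapping = geometric_separability(rng.normal(0.0, 1.0, (200, 64)),
                                     rng.normal(0.2, 1.0, (200, 64)))
```

Comparing such a score across layers is one way to decide which transformations contribute most to class separation, and hence which layers are candidates for pruning.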
Pages: 27