XnODR and XnIDR: Two Accurate and Fast Fully Connected Layers for Convolutional Neural Networks

Cited by: 3
Authors
Sun, Jian [1]
Fard, Ali Pourramezan [2]
Mahoor, Mohammad H. [2]
Affiliations
[1] Univ Denver, Dept Comp Sci, 2155 E Wesley Ave, Denver, CO 80210 USA
[2] Univ Denver, Dept Comp Engn, 2155 E Wesley Ave, Denver, CO 80210 USA
Keywords
CapsNet; XNOR-Net; Dynamic routing; Binarization; Xnorization; Machine learning; Neural network
DOI
10.1007/s10846-023-01952-w
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Capsule Networks are powerful at modeling the positional relationships between features in deep neural networks for visual recognition tasks, but they are computationally expensive and unsuitable for running on mobile devices. The bottleneck lies in the computational complexity of the Dynamic Routing mechanism used between the capsules. By contrast, XNOR-Net is fast and computationally efficient, though it suffers from low accuracy due to information loss in the binarization process. To address the computational burden of the Dynamic Routing mechanism, this paper proposes new Fully Connected (FC) layers that xnorize the linear projection either outside or inside the Dynamic Routing within the CapsFC layer. Specifically, our proposed FC layers come in two versions: XnODR (Xnorize the Linear Projection Outside Dynamic Routing) and XnIDR (Xnorize the Linear Projection Inside Dynamic Routing). To test the generalization of both XnODR and XnIDR, we insert them into two different networks, MobileNetV2 and ResNet-50. Our experiments on three datasets, MNIST, CIFAR-10, and MultiMNIST, validate their effectiveness. The results demonstrate that both XnODR and XnIDR help networks achieve high accuracy with lower FLOPs and fewer parameters (e.g., 96.14% accuracy with 2.99M parameters and 311.74M FLOPs on CIFAR-10); a sketch of the idea follows the record below.
Pages: 17
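To make the abstract's core idea concrete, below is a minimal PyTorch sketch of xnorizing the linear projection of a capsule FC layer before the dynamic-routing loop (the XnODR placement described in the abstract). All names here (`xnor_linear`, `XnODRLayer`, the initialization scale, the routing iteration count) are illustrative assumptions, not the authors' implementation; the paper should be consulted for the exact formulation.

```python
# Illustrative sketch only: an XNOR-style binarized projection feeding
# routing-by-agreement (Sabour et al., 2017). Not the authors' code.
import torch
import torch.nn.functional as F


def xnor_linear(x, weight):
    """Binarize inputs and weights to {-1, +1} via sign(), then rescale by
    mean absolute values -- the usual XNOR-Net approximation."""
    alpha = weight.abs().mean()                 # weight scaling factor
    beta = x.abs().mean(dim=-1, keepdim=True)   # per-sample input scaling
    return F.linear(torch.sign(x), torch.sign(weight)) * alpha * beta


def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing nonlinearity: shrinks short vectors toward zero."""
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


def dynamic_routing(u_hat, iters=3):
    """Route predictions u_hat of shape (batch, in_caps, out_caps, out_dim)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(iters):
        c = F.softmax(b, dim=2).unsqueeze(-1)      # coupling coefficients
        v = squash((c * u_hat).sum(dim=1))         # (batch, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(-1)   # agreement update
    return v


class XnODRLayer(torch.nn.Module):
    """Xnorize the projection *outside* routing: predictions are computed once
    with the binarized projection, then routed at full precision. The XnIDR
    variant would instead binarize the projection applied within each routing
    iteration (not shown here, to keep the sketch short)."""

    def __init__(self, in_caps, in_dim, out_caps, out_dim):
        super().__init__()
        self.W = torch.nn.Parameter(0.05 * torch.randn(out_caps * out_dim, in_dim))
        self.out_caps, self.out_dim = out_caps, out_dim

    def forward(self, u):                  # u: (batch, in_caps, in_dim)
        u_hat = xnor_linear(u, self.W)     # (batch, in_caps, out_caps*out_dim)
        u_hat = u_hat.view(*u.shape[:2], self.out_caps, self.out_dim)
        return dynamic_routing(u_hat)      # (batch, out_caps, out_dim)


# Usage: 32 input capsules of dim 8 routed to 10 output capsules of dim 16.
layer = XnODRLayer(in_caps=32, in_dim=8, out_caps=10, out_dim=16)
v = layer(torch.randn(4, 32, 8))           # -> shape (4, 10, 16)
```

The point of the binarized projection is that the multiply-accumulate work of the capsule FC layer reduces to sign flips plus two cheap rescalings, which is where the reported FLOP savings would come from under these assumptions.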