Small and Slim Deep Convolutional Neural Network for Mobile Device

Cited by: 28
Authors
Winoto, Amadeus Suryo [1 ]
Kristianus, Michael [1 ]
Premachandra, Chinthaka [2 ]
Affiliations
[1] BINUS Univ, Dept Comp Sci, Jakarta 11480, Indonesia
[2] Shibaura Inst Technol, Grad Sch Engn & Sci, Dept Elect Engn, Sch Engn, Tokyo 1358548, Japan
Keywords
Artificial neural network; image recognition; machine learning; deep learning
DOI
10.1109/ACCESS.2020.3005161
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Recent development of deep convolutional neural networks (DCNNs) has been devoted to creating slim models for devices with lower specifications, such as embedded systems, mobile hardware, or microcomputers. A slim model can be achieved by minimizing computational complexity, which in theory shortens processing time. Our focus is therefore on building an architecture with a minimal number of floating-point operations (FLOPs). In this work, we propose a small and slim architecture that is then compared with state-of-the-art models. The architecture is implemented in two models, CustomNet and CustomNet2. Each model uses three convolutional blocks, which reduce computational complexity while maintaining accuracy, allowing it to compete with state-of-the-art DCNN models. The models are trained on ImageNet, CIFAR-10, CIFAR-100, and other datasets, and the results are compared in terms of accuracy, complexity, model size, processing time, and number of trainable parameters. We find that one of our models, CustomNet2, outperforms MobileNet, MobileNet-v2, DenseNet, and NASNetMobile in accuracy, trainable parameters, and complexity. For future work, this architecture can be adapted with region-based DCNNs for multiple-object detection.
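The abstract does not spell out the block design of CustomNet or CustomNet2, so the sketch below is only a rough, hypothetical illustration of what a three-convolutional-block, low-FLOP classifier of this kind might look like in TensorFlow/Keras. The layer widths, kernel sizes, and the choice of depthwise-separable convolutions are assumptions for illustration, not the published architecture.

# Hypothetical sketch of a compact three-convolutional-block classifier in the
# spirit of the abstract; widths, kernel sizes, and the use of separable
# convolutions are assumptions, not the published CustomNet/CustomNet2 design.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(32, 32, 3), num_classes=10):
    """Build a small CNN with three convolutional blocks (assumed structure)."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128):            # one block per filter width
        x = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPooling2D()(x)          # halve spatial resolution per block
    x = layers.GlobalAveragePooling2D()(x)    # avoid large dense layers to keep parameters low
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_small_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # parameter count serves as a rough proxy for model size

Replacing standard convolutions with depthwise-separable ones and ending with global average pooling instead of large fully connected layers are common ways to cut FLOPs and trainable parameters, which is the design goal the abstract describes.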
Pages: 125210-125222
Page count: 13