SdcNet for object recognition

Cited by: 5
Authors
Ma, Yunlong [1 ]
Wang, Chunyan [1 ]
Affiliations
[1] Concordia Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Convolutional neural network (CNN); Image processing; Object recognition; Feature extraction; Successive depthwise convolutions; Data flow control; Machine learning; Support vector machines; Features; Network
DOI
10.1016/j.cviu.2021.103332
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, a CNN architecture for object recognition is proposed, aiming to achieve good processing quality at the lowest possible computation cost. The work includes the design of SdcBlock, a convolution module for feature extraction, and that of SdcNet, an end-to-end CNN architecture. The module is designed to extract the maximum amount of high-density feature information from a given set of data channels. To this end, successive depthwise convolutions (Sdc) are applied to each group of data to produce feature elements of different filtering orders. To optimize the functionality of these convolutions, a particular pre- and post-convolution data control is applied. The pre-convolution control organizes the input channels of the module so that the depthwise convolutions can be performed on a single channel or on a combination of multiple data channels, depending on the nature of the data. The post-convolution control combines the critical feature elements of different filtering orders to enhance the quality of the convolved results. SdcNet is mainly composed of cascaded SdcBlocks. The hyperparameters of the architecture can be adjusted easily, so that each module can be tuned to suit its input signals and the processing quality of the entire network can be optimized. Three versions of SdcNet have been proposed and tested on the CIFAR dataset, and the results demonstrate that the architecture delivers better processing quality at a significantly lower computation cost than networks performing similar tasks. Two further versions have been tested with samples from ImageNet to demonstrate the applicability of SdcNet to object recognition with images in ImageNet format. In addition, an SdcNet for brain tumor detection has been designed and tested successfully, illustrating that SdcNet can perform the detection effectively and with high computational efficiency.
Pages: 12
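
The abstract describes the SdcBlock in words only; it does not specify the exact channel-grouping scheme, the kernel sizes, or the rule used to combine the different filtering orders. The PyTorch sketch below is therefore just one plausible reading of that description, not the paper's implementation: successive depthwise convolutions whose intermediate outputs (one per filtering order) are all retained, a pre-convolution 1x1 stage standing in for the input-channel organization, and a post-convolution 1x1 stage standing in for the combination of orders. Names such as num_orders, group_size, pre, and post are illustrative assumptions.

import torch
import torch.nn as nn

class SdcBlock(nn.Module):
    """Sketch of an SdcBlock under the assumptions stated above.

    num_orders: how many successive depthwise convolutions to apply
                (i.e., how many filtering orders to produce).
    group_size: channels per depthwise group; 1 means each filter acts
                on a single channel, >1 on a small channel combination.
    Both hyperparameters are illustrative, not taken from the paper.
    """

    def __init__(self, in_channels, num_orders=3, group_size=1):
        super().__init__()
        assert in_channels % group_size == 0
        # Pre-convolution control (assumed form): a 1x1 convolution that
        # reorganizes the input channels before the depthwise stage.
        self.pre = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Successive depthwise 3x3 convolutions; the groups argument
        # controls whether each filter sees one channel or several.
        self.dw = nn.ModuleList(
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1,
                      groups=in_channels // group_size)
            for _ in range(num_orders)
        )
        # Post-convolution control (assumed form): a 1x1 convolution that
        # combines the feature elements of all filtering orders.
        self.post = nn.Conv2d(in_channels * num_orders, in_channels,
                              kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.pre(x))
        feats = []
        for conv in self.dw:
            x = self.act(conv(x))  # each pass raises the filtering order
            feats.append(x)        # keep every order's output
        return self.act(self.post(torch.cat(feats, dim=1)))

# Shape check: a 32-channel input stays 32-channel after the block.
x = torch.randn(1, 32, 32, 32)
y = SdcBlock(32, num_orders=3, group_size=1)(x)
print(y.shape)  # torch.Size([1, 32, 32, 32])

Concatenating every intermediate depthwise output, rather than keeping only the last one, is what lets the block expose feature elements of all filtering orders to the combination stage, which matches the abstract's account of the post-convolution control.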