Feature Pooling - A Feature Compression Method Used in Convolutional Neural Networks

Cited by: 1
Authors
Pei, Ge [1 ]
Gao, Hai-Chang [1 ]
Zhou, Xin [1 ]
Cheng, Nuo [2 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
convolutional neural network; features compression; pooling; image classification; image denoising; IMAGE; SPARSE;
DOI
10.6688/JISE.202005_36(3).0007
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recent works have shown that convolutional neural networks (CNNs) are now the most effective machine learning method for solving various computer vision problems. A key advantage of CNNs is that they extract features automatically; users do not need to know which features should be extracted for a given task. It is typically believed that the deeper a CNN is, the higher-level the features it can extract and the more powerful the resulting representations will be. Therefore, present-day CNNs are becoming substantially deeper. However, previous works have shown that not all features extracted by deep CNNs are useful. In this paper, we consider a simple question: how can the useless features be removed? We propose a simple pooling method, called feature pooling, to compress the features extracted in deep CNNs. In contrast to traditional CNNs, which feed the feature maps from one layer directly into the next, feature pooling compresses the features from the preceding layer along the channel dimension, reconstructs the feature maps, and then sends them to the next layer. We evaluate feature pooling on two tasks, image classification and image denoising, each with a distinct network architecture and several benchmarks. Promising results are achieved on both tasks, especially image denoising, where we obtain state-of-the-art results. These findings verify that feature pooling is a straightforward way to perform further feature compression in CNNs. We also observe that feature pooling has several competitive advantages: it reduces the number of parameters, increases the compactness of the networks, and strengthens their representational power, with both high effectiveness and wide applicability.
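The abstract leaves the exact compression operator unspecified. Purely as an illustration of the general idea, the PyTorch sketch below implements one plausible reading: pooling over non-overlapping groups of channels so that C input channels are compressed to C / k before the maps are passed to the next layer. The module name FeaturePool, the group size k, and the choice of max as the pooling function are assumptions made for this sketch, not the authors' published design.

import torch
import torch.nn as nn

class FeaturePool(nn.Module):
    """Hypothetical channel-wise pooling: compresses C channels to C // k
    by taking the max over non-overlapping groups of k channels."""
    def __init__(self, k: int = 2):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        assert c % self.k == 0, "channel count must be divisible by k"
        # Regroup channels into (C // k) groups of k, then pool within each group.
        x = x.view(n, c // self.k, self.k, h, w)
        return x.max(dim=2).values  # shape (N, C // k, H, W)

# Usage: insert between convolutional layers to halve the channel count.
feat = torch.randn(8, 64, 32, 32)      # a batch of 64-channel feature maps
pooled = FeaturePool(k=2)(feat)        # -> torch.Size([8, 32, 32, 32])
print(pooled.shape)

Because the output has fewer channels, the convolution that follows needs proportionally fewer parameters, which is consistent with the parameter-reduction advantage claimed in the abstract.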
Pages: 577-596
Page count: 20