Feature Pooling - A Feature Compression Method Used in Convolutional Neural Networks

Cited by: 1
Authors
Pei, Ge [1 ]
Gao, Hai-Chang [1 ]
Zhou, Xin [1 ]
Cheng, Nuo [2 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
convolutional neural network; feature compression; pooling; image classification; image denoising; IMAGE; SPARSE;
DOI
10.6688/JISE.202005_36(3).0007
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Recent works have shown that convolutional neural networks (CNNs) are currently the most effective machine learning method for solving various computer vision problems. A key advantage of CNNs is that they extract features automatically; users do not need to know which features should be extracted for a given task. It is commonly believed that the deeper a CNN is, the higher-level the features it can extract and the more powerful the resulting representations will be. Therefore, present-day CNNs are becoming substantially deeper. However, previous works have shown that not all features extracted by deep CNNs are useful. In this paper, we consider a simple question: how can useless features be removed? We propose a simple pooling method called feature pooling to compress the features extracted in deep CNNs. In contrast to traditional CNNs, which feed the feature maps from the previous layer directly into the next layer, feature pooling compresses the features along the channel dimension, reconstructs the feature maps, and then sends them to the next layer. We evaluate feature pooling on two tasks: image classification and image denoising. Each task has a distinct network architecture and uses several benchmarks. Promising results are achieved in both tasks, especially image denoising, in which we obtain state-of-the-art results. These results verify that feature pooling is a straightforward way to perform further feature compression in CNNs. We also observe that feature pooling has several competitive advantages: it reduces the number of parameters, increases the compactness of the networks, and strengthens their representational power, with both high effectiveness and wide applicability.
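The abstract describes feature pooling as pooling applied along the channel dimension rather than the spatial dimensions. The following is a minimal sketch of one way such a layer could look, assuming PyTorch, a grouping of adjacent channels, and a max reduction; the class name FeaturePooling, the compression factor k, and the choice of reduction are illustrative assumptions, not the paper's exact definition.

import torch
import torch.nn as nn

class FeaturePooling(nn.Module):
    """Sketch of channel-wise feature pooling (assumptions noted above).

    Pools groups of k adjacent feature maps along the channel axis,
    compressing C input channels to C // k output channels before the
    reconstructed maps are passed to the next layer.
    """
    def __init__(self, k: int = 2):
        super().__init__()
        self.k = k  # channel compression factor (assumed hyperparameter)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        assert c % self.k == 0, "channel count must be divisible by k"
        # Group adjacent channels, then reduce each group with max.
        x = x.view(n, c // self.k, self.k, h, w)
        return x.max(dim=2).values  # shape: (n, c // k, h, w)

# Usage: compress 64 feature maps to 32 before the next conv layer.
fp = FeaturePooling(k=2)
out = fp(torch.randn(1, 64, 28, 28))
print(out.shape)  # torch.Size([1, 32, 28, 28])

Unlike a 1x1 convolution, which compresses channels with learned weights, a pooling-based reduction of this kind adds no parameters, which is consistent with the abstract's claim that feature pooling reduces the parameter count while compacting the network.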
Pages: 577-596
Page count: 20