Rectified Binary Network for Single-Image Super-Resolution

Times Cited: 0
Authors
Xin, Jingwei [1 ]
Wang, Nannan [1 ]
Jiang, Xinrui [2 ]
Li, Jie [3 ]
Wang, Xiaoyu [4 ]
Gao, Xinbo [5 ]
Affiliations
[1] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[3] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
[4] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Superresolution; Quantization (signal); Task analysis; Accuracy; Training; Backpropagation; Computational modeling; Activation rectified; adaptive approximation estimator (AAE); binary neural network (BNN); computational complexity; single-image super-resolution (SISR);
DOI
10.1109/TNNLS.2024.3438432
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The binary neural network (BNN) is an effective approach to reducing the memory usage and computational complexity of full-precision convolutional neural networks (CNNs) and has been widely used in deep learning. However, BNNs and real-valued models have different properties, which makes it difficult to draw on experience with CNN design when developing BNNs. In this article, we study the application of binary networks to the single-image super-resolution (SISR) task, in which the network is trained to restore the original high-resolution (HR) image. Generally, the distribution of features in an SISR network is more complex than that in recognition models, since it must preserve abundant image information, e.g., texture, color, and details. To enhance the representation ability of the BNN, we explore a novel activation-rectified inference (ARI) module that achieves a more complete representation of features by combining observations from different quantization perspectives. The activations are divided into several parts with different quantization intervals and are inferred independently. This allows the binary activations to retain more image detail and yield finer inference. In addition, we propose an adaptive approximation estimator (AAE) that gradually learns an accurate gradient estimation interval in each layer to alleviate the optimization difficulty. Experiments conducted on several benchmarks show that our approach learns a binary SISR model whose performance is superior to state-of-the-art methods. The code will be released at https://github.com/jwxintt/Rectified-BSR.
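The abstract describes two mechanisms only at a high level, so the following is a hedged, minimal sketch of the underlying ideas rather than the paper's actual formulation: binarizing one activation from several quantization intervals (the intuition behind the ARI module) and a straight-through gradient estimator whose pass-through interval could be adapted per layer (the intuition behind the AAE). All thresholds, function names, and interval values here are illustrative assumptions.

```python
def binarize(x, threshold=0.0):
    """Standard 1-bit quantization: sign of the activation relative
    to a threshold (threshold value is an assumption)."""
    return 1.0 if x >= threshold else -1.0


def multi_interval_binarize(x, thresholds=(-0.5, 0.0, 0.5)):
    """Sketch of the ARI idea: view one activation from several
    quantization perspectives. Each threshold yields an independent
    binary observation; together the views form a multi-level code
    that retains more detail than a single sign() would.
    The specific thresholds are illustrative, not from the paper."""
    return [binarize(x, t) for t in thresholds]


def ste_gradient(x, lo=-1.0, hi=1.0):
    """Clipped straight-through estimator: the binarization gradient
    is approximated as 1 inside [lo, hi] and 0 outside. In the AAE,
    this interval would be learned per layer during training
    (interpretation based on the abstract, not the paper's equations)."""
    return 1.0 if lo <= x <= hi else 0.0
```

For example, an activation of 0.3 maps to the three binary views [1.0, 1.0, -1.0] under the thresholds above, whereas plain sign() would collapse it to a single bit; widening or narrowing the STE interval changes which activations receive gradient during backpropagation.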
Pages: 9341-9355
Page count: 15