Deep CNN based Image Compression with Redundancy Minimization via Attention Guidance

Cited by: 7
Authors
Mishra, Dipti [1 ,3 ]
Singh, Satish Kumar [2 ,4 ]
Singh, Rajat Kumar [2 ,5 ]
Affiliations
[1] Mahindra Univ, Ecole Cent Sch Engn, Hyderabad, India
[2] Indian Inst Informat Technol Allahabad, Prayagraj, India
[3] Mahindra Univ, Indian Inst Informat Technol, Dept Elect & Commun Engn, Allahabad, India
[4] Indian Inst Informat Technol, Dept Informat Technol, Allahabad, India
[5] Indian Inst Informat Technol, Dept Elect & Commun Engn, Allahabad, India
Keywords
Contextual loss; Compression-decompression; Attention network; Redundancy; Multi-size kernel CNN; Perceptual loss; Style loss; Transform
DOI
10.1016/j.neucom.2022.08.009
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Almost all compression algorithms try to minimize one or another type of visual redundancy present in the image. Compression becomes challenging when contextual and other information must be preserved. Without considering contextual information, learning-based methods also learn unwanted features, which wastes computational resources. Motivated by this fact, we propose an attention-guided, multi-size kernel convolutional network for image compression-decompression, which focuses on the important local and global features needed for better reconstruction. Among the feature maps obtained after convolution at any stage, channel attention focuses on "what" is meaningful, while spatial attention focuses on "where" the important features are present in the feature map. Secondly, we propose a perceptual loss function for the task of image compression, which combines contextual, style, and ℓ2 losses. Training the proposed network with this perceptual loss yielded significant improvements on various datasets, including CLIC 2019, Tecnick, Kodak, FDDB, ECSSD, and HKU-IS. On the challenging CLIC 2019 dataset at low bit rates (around 0.1 bpp), the proposed algorithm outperformed JPEG, JPEG2000, and BPG by up to approximately 49.6%, 34.61%, and 20.69% in MS-SSIM, and 10.79%, 1.32%, and 3.36% in PSNR, respectively. We further investigated the effectiveness of the proposed algorithm on cartoon images and found it superior to other algorithms. Lastly, as cartoon images are scarcely available for experimentation with deep learning algorithms, we propose a cartoon image dataset, namely CARTAGE. © 2022 Elsevier B.V. All rights reserved.
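The abstract's split into channel attention ("what" is meaningful) and spatial attention ("where" features lie) follows a common pattern in attention-guided CNNs. Below is a minimal NumPy sketch of that pattern under stated assumptions: the paper's actual network uses learned weights (an MLP for channel gating and a convolution over pooled maps for spatial gating), which are replaced here by simple parameter-free pooling and sigmoid gating purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    # "what": weight each channel by a global importance score.
    # fmap has shape (C, H, W); global average pooling gives one score per channel.
    pooled = fmap.mean(axis=(1, 2))              # (C,)
    weights = sigmoid(pooled)                    # a real network passes this through a small MLP
    return fmap * weights[:, None, None]         # rescale each channel

def spatial_attention(fmap):
    # "where": weight each spatial location.
    avg_map = fmap.mean(axis=0)                  # (H, W) average across channels
    max_map = fmap.max(axis=0)                   # (H, W) max across channels
    weights = sigmoid(avg_map + max_map)         # a real network convolves [avg; max] instead
    return fmap * weights[None, :, :]            # rescale each location

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4)).astype(np.float32)  # a toy feature map: 8 channels, 4x4
y = spatial_attention(channel_attention(x))
print(y.shape)  # (8, 4, 4): attention reweights features but preserves the map's shape
```

Because both gates lie in (0, 1), the composed attention can only attenuate features, never amplify them; the network's learned layers decide which features survive toward reconstruction.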
Pages: 397-411 (15 pages)