Content-Aware Scalable Deep Compressed Sensing

Cited by: 44
Authors
Chen, Bin [1]
Zhang, Jian [1,2]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen Grad Sch, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Resource management; Scalability; Training; Image restoration; Image reconstruction; Task analysis; Detectors; Compressed sensing; Content-aware sampling; Model scalability; Deep unfolding network; Sparse representation; Image; Recovery; Rank; Completion; Algorithm; Networks; Matrix
DOI
10.1109/TIP.2022.3195319
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
To address image compressed sensing (CS) problems more efficiently, we present a novel content-aware scalable network dubbed CASNet, which collectively achieves adaptive sampling-rate allocation, fine-granular scalability, and high-quality reconstruction. We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling-rate allocation. A unified learnable generating matrix is then developed to produce a sampling matrix of any CS ratio with an ordered structure. Equipped with an optimization-inspired recovery subnet guided by saliency information and a multi-block training scheme that prevents blocking artifacts, CASNet jointly reconstructs image blocks sampled at various rates with a single model. To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy, both of which are extensible and introduce no extra parameters. All CASNet components can be combined and learned end-to-end. We further provide a four-stage implementation for evaluation and practical deployment. Experiments demonstrate that CASNet outperforms other CS networks by a large margin, validating the collaboration and mutual support among its components and strategies. Code is available at https://github.com/Guaishou74851/CASNet.
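The sampling side of the abstract can be illustrated with a short sketch. The NumPy snippet below shows (i) a saliency-proportional allocation of per-block CS ratios with a fixed image-level average, in the spirit of the BRA strategy, and (ii) sampling every block with row-prefixes of one shared, row-ordered measurement matrix, in the spirit of the unified generating matrix. The function names, clipping bounds, proportional rule, and the orthonormal stand-in for the learned matrix are all illustrative assumptions; the paper's actual BRA rule, learnable generating matrix, and SVD-based initialization are not reproduced here.

import numpy as np

def allocate_block_ratios(saliency, block_size, avg_ratio,
                          min_ratio=0.01, max_ratio=0.5):
    """Give salient blocks higher CS ratios while keeping the image-level
    average near avg_ratio (a BRA-style rule; the paper's exact
    aggregation may differ)."""
    hb = saliency.shape[0] // block_size
    wb = saliency.shape[1] // block_size
    # Mean saliency per block, normalized to a distribution over blocks.
    s = saliency[:hb * block_size, :wb * block_size] \
        .reshape(hb, block_size, wb, block_size).mean(axis=(1, 3))
    s = s / (s.sum() + 1e-12)
    ratios = avg_ratio * s * s.size              # proportional; mean == avg_ratio
    ratios = np.clip(ratios, min_ratio, max_ratio)
    return ratios * (avg_ratio / ratios.mean())  # re-center mean after clipping

def sample_blocks(image, ratios, block_size, Phi):
    """Sample block (i, j) with the first n_ij rows of Phi, so one
    row-ordered matrix serves every allocated ratio."""
    N = block_size * block_size
    measurements = {}
    for i in range(ratios.shape[0]):
        for j in range(ratios.shape[1]):
            block = image[i*block_size:(i+1)*block_size,
                          j*block_size:(j+1)*block_size].reshape(N)
            n_ij = max(1, int(round(ratios[i, j] * N)))
            measurements[(i, j)] = Phi[:n_ij] @ block  # prefix of ordered rows
    return measurements

# Usage with stand-ins: a random "saliency map" and an orthonormal Phi.
rng = np.random.default_rng(0)
img, sal = rng.random((96, 96)), rng.random((96, 96))
B = 32
Phi = np.linalg.qr(rng.standard_normal((B * B, B * B)))[0]  # orthonormal rows
ratios = allocate_block_ratios(sal, B, avg_ratio=0.1)
y = sample_blocks(img, ratios, B, Phi)
print({k: v.shape for k, v in y.items()})  # measurement counts vary per block

Running the usage lines prints differently sized measurement vectors per block, which is the scalability that a single ordered matrix provides: any prefix of its rows is itself a valid sampling operator at a lower ratio, so one matrix covers every allocated rate.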
Pages: 5412-5426
Page count: 15
Related Papers
50 items in total (items [41]-[50] shown)
  • [41] Le, Giang H.; Nguyen, Anh Q.; Kang, Byeongkeun; Lee, Yeejin. Content-aware preserving image generation. NEUROCOMPUTING, 2025, 617.
  • [42] Ding, Meng; Tong, Ruo-Feng. Content-aware copying and pasting in images. THE VISUAL COMPUTER, 2010, 26: 721-729.
  • [43] Masia, Belen; Gutierrez, Diego. Content-Aware Reverse Tone Mapping. PROCEEDINGS OF THE 2016 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE: TECHNOLOGIES AND APPLICATIONS, 2016, 127.
  • [44] Luo, Fangyuan; Wu, Jun; Wang, Tao. Discrete Listwise Content-aware Recommendation. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (01).
  • [45] Adhuran, Jayasingam; Kulupana, Gosala. Content-aware convex hull prediction. MHV 2023 - Proceedings of the 2nd Mile-High Video Conference, 2023: 1-7.
  • [46] Poese, Ingmar; Frank, Benjamin; Smaragdakis, Georgios; Uhlig, Steve; Feldmann, Anja; Maggs, Bruce. Enabling Content-aware Traffic Engineering. ACM SIGCOMM COMPUTER COMMUNICATION REVIEW, 2012, 42 (05): 21-28.
  • [47] Dihl, Leandro; Cruz, Leandro; Monteiro, Nuno; Goncalves, Nuno. A Content-aware Filtering for RGBD Faces. PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (GRAPP), VOL 1, 2019: 270-277.
  • [48] Guo, Mantang; Hou, Junhui; Jin, Jing; Liu, Hui; Zeng, Huanqiang; Lu, Jiwen. Content-Aware Warping for View Synthesis. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08): 9486-9503.
  • [49] Ravanifard, Rabeh; Mirzaei, Abdolreza; Buntine, Wray; Safayani, Mehran. Content-Aware Listwise Collaborative Filtering. NEUROCOMPUTING, 2021, 461: 479-493.
  • [50] Guo, Yong; Chen, Yaofo; Tan, Mingkui; Jia, Kui; Chen, Jian; Wang, Jingdong. Content-aware convolutional neural networks. NEURAL NETWORKS, 2021, 143: 657-668.