MAML-SR: Self-adaptive super-resolution networks via multi-scale optimized attention-aware meta-learning

Cited by: 2
Authors
Pal, Debabrata [1 ,2 ]
Bose, Shirsha [3 ]
More, Deeptej [4 ]
Jha, Ankit [1 ]
Banerjee, Biplab [1 ]
Jeppu, Yogananda [2 ]
Affiliations
[1] Indian Inst Technol, Bombay 400076, Maharashtra, India
[2] Honeywell Technol Solut lab Pvt Ltd, Bengaluru 560103, Karnataka, India
[3] Tech Univ Munich, Germany
[4] Manipal Inst Technol, Manipal 576104, Karnataka, India
Keywords
Image super-resolution; Meta-learning; Attention learning; Multi-scale optimization; IMAGE SUPERRESOLUTION; INTERPOLATION;
DOI
10.1016/j.patrec.2023.08.004
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep-learning-based super-resolution (SR) methods require an avalanche of training images, yet they do not adapt model parameters at test time to cope with novel blur-kernel scenarios. Even though recent meta-learning-based SR techniques adapt trained model parameters by leveraging a test image's internal patch recurrence, they need heavy pre-training on an external dataset for initialization. Moreover, their shallow exploration of internal information and failure to amplify salient edges lead to blurry SR images, and model inference is delayed by a threshold-dependent adaptation phase. In this paper, we present a Multi-scale Optimized Attention-aware Meta-Learning framework for SR (MAML-SR) that explores the multi-scale hierarchical self-similarity of recurring patches in a test image. Precisely, without any pre-training, we directly meta-train our model with a second-order optimization over the first-order adapted parameters from the intermediate scales, which are again directly supervised with the ground-truth HR images. At each scale, non-local self-similarity is maximized along with the amplification of salient edges using a novel cross-scale spectro-spatial attention learning unit. Also, we drastically reduce the inference delay by putting a metric-dependent constraint on the gradient updates for a test image. We demonstrate our method's superior super-resolving capability on four benchmark SR datasets.
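The inner/outer optimization the abstract describes (first-order adaptation at intermediate scales, followed by a meta-update of the shared initialization) follows the general MAML pattern. Below is a minimal NumPy sketch of that loop on a toy least-squares task, not the paper's network or its exact second-order scheme: it uses the first-order MAML approximation, and all names (`maml_first_order`, the task tuples) are hypothetical illustrations.

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error of a linear model X @ w against targets y.
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w, X, y):
    # Analytic gradient of the loss above with respect to w.
    r = X @ w - y
    return X.T @ r / len(y)

def maml_first_order(tasks, w, inner_lr=0.1, outer_lr=0.05, steps=100):
    """First-order MAML sketch: each task adapts from the shared init w
    with one inner gradient step, then the query-set gradients at the
    adapted parameters drive the outer (meta) update of w."""
    for _ in range(steps):
        meta_grad = np.zeros_like(w)
        for X_sup, y_sup, X_qry, y_qry in tasks:
            # Inner loop: task-specific adaptation on the support set.
            w_task = w - inner_lr * grad(w, X_sup, y_sup)
            # Outer loop: accumulate the query-set gradient at w_task.
            meta_grad += grad(w_task, X_qry, y_qry)
        w = w - outer_lr * meta_grad / len(tasks)
    return w
```

In the paper's setting, each "task" would correspond to a degraded/clean patch pair drawn from the test image's internal recurrence across scales, and the outer update would retain the full second-order gradient through the inner step rather than the first-order shortcut shown here.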
Pages: 101 - 107 (7 pages)
References
31 records in total
  • [11] Kim J, 2016, PROC CVPR IEEE, P1637, DOI [10.1109/CVPR.2016.181, 10.1109/CVPR.2016.182]
  • [12] Kingma DP, 2014, ARXIV, DOI 10.48550/ARXIV.1412.6980
  • [13] Lee CY, 2015, JMLR WORKSH CONF PRO, V38, P562
  • [14] Lim B, Son S, Kim H, Nah S, Lee KM, Enhanced Deep Residual Networks for Single Image Super-Resolution, 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, P1132-1140
  • [15] Luo Y, Zheng L, Guan T, Yu J, Yang Y, Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation, 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, P2502-2511
  • [16] Martin D, 2001, EIGHTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOL II, PROCEEDINGS, P416, DOI 10.1109/ICCV.2001.937655
  • [17] Mehta N, Murala S, MSAR-Net: Multi-scale attention based light-weight image super-resolution, PATTERN RECOGNITION LETTERS, 2021, V151, P215-221
  • [18] Santoro A, 2016, PR MACH LEARN RES, V48
  • [19] Seobin Park, 2020, Computer Vision - ECCV 2020. 16th European Conference. Proceedings. Lecture Notes in Computer Science (LNCS 12372), P754, DOI 10.1007/978-3-030-58583-9_45
  • [20] "Zero-Shot" Super-Resolution using Deep Internal Learning
    Shocher, Assaf
    Cohen, Nadav
    Irani, Michal
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3118 - 3126