Dynamic dual attention iterative network for image super-resolution

Cited by: 0
Authors
Hao Feng
Liejun Wang
Shuli Cheng
Anyu Du
Yongming Li
Affiliations
[1] Xinjiang University, School of Information Science and Engineering
Source
Applied Intelligence | 2022 / Volume 52
Keywords
Dynamic convolution; Feature refinement; Iterative loss; Image super-resolution
DOI
Not available
Abstract
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable performance in single image super-resolution (SISR). However, most existing CNN-based SISR methods focus only on increasing the width and depth of the network to improve SR performance, which imposes a heavy computational burden. In this paper, we propose a lightweight dynamic dual attention iterative network (DDAIN) for SISR. Specifically, to better realize attention over both channels and convolution kernels, we design a dynamic convolution unit (DYCU) at the head of the network. It improves SR performance by increasing model capacity without increasing the width or depth of the network. Compared with traditional static convolution, it can extract richer high- and low-frequency image features adapted to different input images. Moreover, to recover the high-frequency details of images at different resolutions as faithfully as possible, we embed multiple dual residual attention (DRA) blocks in the feature refinement unit (FRU). Finally, to alleviate the ill-posed nature of the SR problem, we introduce an iterative loss L_iter to further optimize the training process. Extensive experimental results on benchmark datasets show that DDAIN outperforms several existing classical methods under different degradation models.
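For readers unfamiliar with dynamic convolution, the PyTorch sketch below illustrates the generic idea that a unit like the DYCU builds on: a small attention branch predicts softmax weights over K candidate kernels for each input, and the input-dependent mixture of kernels is then applied as an ordinary convolution, adding capacity without adding width or depth. This is a minimal sketch of the general technique, not the authors' exact DYCU; the module name DynamicConv2d and the parameters num_kernels and reduction are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    """Input-conditioned mixture of K candidate kernels (generic dynamic convolution).

    Illustrative sketch only -- not the exact DYCU described in the paper.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4, reduction=4):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.kernel_size = kernel_size
        # K candidate kernels, mixed per input sample
        self.weight = nn.Parameter(
            0.02 * torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size))
        # Lightweight attention branch: GAP -> FC -> ReLU -> FC -> softmax over the K kernels
        hidden = max(in_ch // reduction, 4)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_kernels),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = F.softmax(self.attn(x), dim=1)                      # (B, K) kernel attention
        # Aggregate the K kernels per sample: (B, K) x (K, O, I, k, k) -> (B, O, I, k, k)
        weight = torch.einsum('bk,koiyx->boiyx', alpha, self.weight)
        # Grouped-convolution trick: each sample in the batch uses its own mixed kernel
        weight = weight.reshape(b * self.out_ch, self.in_ch,
                                self.kernel_size, self.kernel_size)
        x = x.reshape(1, b * c, h, w)
        out = F.conv2d(x, weight, padding=self.kernel_size // 2, groups=b)
        return out.reshape(b, self.out_ch, h, w)


if __name__ == "__main__":
    layer = DynamicConv2d(in_ch=64, out_ch=64)
    feats = torch.randn(2, 64, 48, 48)      # e.g. shallow features of two LR images
    print(layer(feats).shape)               # torch.Size([2, 64, 48, 48])
```

In a full SR pipeline, the output of such a unit would feed subsequent refinement stages (the FRU in this paper); the sketch only demonstrates how kernel-level attention makes the convolution adapt to each input image.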
Pages: 8189 - 8208
Number of pages: 19
Related papers
50 related records in total
  • [21] Information-Growth Attention Network for Image Super-Resolution
    Li, Zhuangzi
    Li, Ge
    Li, Thomas
    Liu, Shan
    Gao, Wei
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 544 - 552
  • [22] Lightweight image super-resolution with sliding Proxy Attention Network
    Hu, Zhenyu
    Sun, Wanjie
    Chen, Zhenzhong
    SIGNAL PROCESSING, 2025, 227
  • [24] A Novel Attention Enhanced Dense Network for Image Super-Resolution
    Niu, Zhong-Han
    Zhou, Yang-Hao
    Yang, Yu-Bin
    Fan, Jian-Cong
    MULTIMEDIA MODELING (MMM 2020), PT I, 2020, 11961 : 568 - 580
  • [25] Structured Fusion Attention Network for Image Super-Resolution Reconstruction
    Dai, Yaonan
    Yu, Jiuyang
    Hu, Tianhao
    Lu, Yang
    Zheng, Xiaotao
    IEEE ACCESS, 2022, 10 : 31896 - 31906
  • [26] CANS: Combined Attention Network for Single Image Super-Resolution
    Muhammad, Wazir
    Aramvith, Supavadee
    Onoye, Takao
    IEEE ACCESS, 2024, 12 : 167498 - 167517
  • [27] DANS: Deep Attention Network for Single Image Super-Resolution
    Talreja, Jagrati
    Aramvith, Supavadee
    Onoye, Takao
    IEEE ACCESS, 2023, 11 : 84379 - 84397
  • [28] Efficient residual attention network for single image super-resolution
    Hao, Fangwei
    Zhang, Taiping
    Zhao, Linchang
    Tang, Yuanyan
    APPLIED INTELLIGENCE, 2022, 52 (01) : 652 - 661
  • [29] A novel attention-enhanced network for image super-resolution
    Bo, Yangyu
    Wu, Yongliang
    Wang, Xuejun
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 130
  • [30] Frequency-Separated Attention Network for Image Super-Resolution
    Qu, Daokuan
    Li, Liulian
    Yao, Rui
    APPLIED SCIENCES-BASEL, 2024, 14 (10)