Stacked deformable convolution network with weighted non-local attention and branch residual connection for image quality assessment

Times Cited: 0
Authors
Fan, Xiaodong [1 ]
Peng, Chang [2 ]
Jiang, Xiaoli [2 ]
Han, Ying [1 ]
Hou, Limin [1 ]
Affiliations
[1] Liaoning Tech Univ, Fac Elect & Control Engn, Huludao 125105, Liaoning, Peoples R China
[2] Bohai Univ, Coll Math, Jinzhou 121013, Liaoning, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image quality assessment; Deep learning; Deformable convolution; Self-attention
DOI
10.1016/j.jvcir.2024.104214
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Convolutional neural networks are data-driven Image Quality Assessment (IQA) models built on convolution over a fixed rectangular window. Deformable convolutions, whose receptive fields are learnable, can efficiently extract structural features of irregular objects in an image. By stacking deformable convolutions, we design a plug-and-play module that captures information about irregular geometric shapes. To selectively fuse shallow and deep features, we propose a weighted non-local attention (WNLA) module that combines the input and output of self-attention in a weighted manner. Building on these modules, this paper proposes a dual-branch residual full-reference IQA network that combines weighted non-local attention with stacked deformable convolution. The network was trained on the PIPAL dataset and tested on LIVE and TID2013; this cross-dataset evaluation shows competitive generalization ability. Ablation experiments indicate that the proposed modules effectively improve performance, and comparative experiments show that our network outperforms existing state-of-the-art networks. Code for training, testing, and visualization is available at: https://github.com/Pengchang-haha/SDCN.git.
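To make the two architectural ideas concrete, below is a minimal PyTorch sketch of a stacked deformable convolution block and a weighted non-local attention module, assembled from the abstract's description alone. It is not the authors' released implementation (see the GitHub link above); the module names, layer counts, and the learnable fusion weight `alpha` are assumptions.

```python
# Minimal sketch, assuming PyTorch + torchvision; NOT the authors' SDCN code.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class StackedDeformableConv(nn.Module):
    """Plug-and-play stack of deformable convolutions; each layer predicts
    its own sampling offsets, so the receptive field can follow irregular
    object shapes instead of a fixed rectangular window."""

    def __init__(self, channels, num_layers=2, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.offset_convs = nn.ModuleList()
        self.deform_convs = nn.ModuleList()
        for _ in range(num_layers):
            # 2 offset values (dx, dy) per kernel sampling point
            self.offset_convs.append(
                nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                          kernel_size, padding=pad))
            self.deform_convs.append(
                DeformConv2d(channels, channels, kernel_size, padding=pad))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        for offset_conv, deform_conv in zip(self.offset_convs,
                                            self.deform_convs):
            offset = offset_conv(x)
            x = self.act(deform_conv(x, offset))
        return x


class WeightedNonLocalAttention(nn.Module):
    """Non-local (self-attention) block whose input and output are fused
    with a learnable scalar weight, one reading of the WNLA description."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        # learnable weight balancing attention output against the input
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C/2)
        k = self.key(x).flatten(2)                    # (B, C/2, HW)
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.alpha * out + (1 - self.alpha) * x


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    feat = StackedDeformableConv(64)(feat)
    print(WeightedNonLocalAttention(64)(feat).shape)  # (1, 64, 32, 32)
```

The fusion `alpha * out + (1 - alpha) * x` is one plausible interpretation of combining "the input and output of self-attention in a weighted manner"; the paper may instead use per-channel or otherwise normalized weights.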
Pages: 8
References
56 records in total
[31] Mittal A., Soundararajan R., Bovik A.C. Making a "Completely Blind" Image Quality Analyzer. IEEE Signal Processing Letters, 2013, 20(3): 209-212.
[32] Nafchi H.Z., Shahkolaei A., Hedjam R., Cheriet M. Mean Deviation Similarity Index: Efficient and Reliable Full-Reference Image Quality Evaluator. IEEE Access, 2016, 4: 5579-5590.
[33] Pei S.-C., Chen L.-H. Image Quality Assessment Using Human Visual DOG Model Fused With Random Forest. IEEE Transactions on Image Processing, 2015, 24(11): 3282-3292.
[34] Ponomarenko N., Jin L., Ieremeiev O., Lukin V., Egiazarian K., Astola J., Vozel B., Chehdi K., Carli M., Battisti F., Kuo C.-C.J. Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication, 2015, 30: 57-77.
[35] Prashnani E., Cai H., Mostofi Y., Sen P. PieAPP: Perceptual Image-Error Assessment through Pairwise Preference. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 1808-1817.
[36] Sheikh H.R., Sabir M.F., Bovik A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2006, 15(11): 3440-3451.
[37] Sheikh H.R., Bovik A.C. Image information and visual quality. IEEE Transactions on Image Processing, 2006, 15(2): 430-444.
[38] Shi S., Bai Q., Cao M., Xia W., Wang J., Chen Y., Yang Y. Region-Adaptive Deformable Network for Image Quality Assessment. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021: 324-333.
[39] Su S., Yan Q., Zhu Y., Zhang C., Ge X., Sun J., Zhang Y. Blindly Assess Image Quality in the Wild Guided by a Self-Adaptive Hyper Network. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 3664-3673.
[40] Sun S., Yu T., Xu J., Zhou W., Chen Z. GraphIQA: Learning Distortion Graph Representations for Blind Image Quality Assessment. IEEE Transactions on Multimedia, 2023, 25: 2912-2925.