Image quality assessment via spatial-transformed domains multi-feature fusion

Cited by: 6
Authors
Yu, Miaomiao [1 ,2 ]
Zheng, Yuanlin [1 ,2 ]
Liao, Kaiyang [1 ,2 ]
Tang, Zhisen [1 ,2 ]
Affiliations
[1] Xian Univ Technol, Fac Printing Packaging Engn & Digital Media Techn, Xian 710048, Peoples R China
[2] Key Lab Printing & Packaging Engn Shanxi Prov, Xian 710048, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
regression analysis; feature extraction; gradient methods; image fusion; transforms; vectors; random forests; learning (artificial intelligence); spatial-transformed domains multifeature fusion; image processing; subjective methods; image quality assessment tasks; edge contour information; mask scale; cross-database operation capability; gradient information operators; 12-dimensional feature vector generation; random forest regression technique; FVC-G; SALIENCY DETECTION; VISUAL SALIENCY; SIMILARITY;
DOI
10.1049/iet-ipr.2018.6417
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The basis of image processing is to evaluate and monitor image quality using algorithms rather than subjective methods. Conventional gradient operators have been widely used in previous image quality assessment tasks to reflect the edge contours of an image, but they have obvious defects in the selection of mask scale and direction. Some improved versions are also less than ideal because they fail to consider the gradient information of the same pixel in different directions simultaneously. The authors adopt a powerful gradient operator that captures edge information in all four directions at the same pixel point at once, taking more of the relevant directional responses into account instead of selecting only the maximum over the four directions. Furthermore, four complementary types of features extracted from the spatial and transform domains are considered. A 12-dimensional feature vector is generated for each image by multi-feature fusion. Finally, a random forest regression technique is employed to train the model and map the distortion effects to prediction scores. Experimental results show that the proposed FVC-G achieves better overall performance, stronger cross-database generalisation, and higher visual consistency than other state-of-the-art methods.
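The pipeline sketched in the abstract (four-directional gradient fusion → feature vector → random forest regression) can be illustrated with a minimal toy implementation. This is a sketch under stated assumptions, not the paper's method: the 3x3 Prewitt-style masks, the root-sum-of-squares fusion of directional responses, the 3-dimensional stand-in for the 12-dimensional feature vector, and the synthetic noise-based pseudo-scores are all illustrative choices, and the names `four_direction_gradient` and `quality_features` are hypothetical.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.ensemble import RandomForestRegressor

# Hypothetical 3x3 Prewitt-style masks for the 0, 45, 90 and 135 degree
# directions; the paper's actual operator may use different masks/scales.
MASKS = [
    np.array([[ 1,  1,  1], [ 0,  0,  0], [-1, -1, -1]], float),  # 0 deg
    np.array([[ 1,  1,  0], [ 1,  0, -1], [ 0, -1, -1]], float),  # 45 deg
    np.array([[ 1,  0, -1], [ 1,  0, -1], [ 1,  0, -1]], float),  # 90 deg
    np.array([[ 0, -1, -1], [ 1,  0, -1], [ 1,  1,  0]], float),  # 135 deg
]

def four_direction_gradient(img):
    """Per-pixel gradient magnitude that fuses all four directional
    responses (root sum of squares) instead of keeping only the max."""
    resp = np.stack([convolve(img, m, mode="nearest") for m in MASKS])
    return np.sqrt((resp ** 2).sum(axis=0))

def quality_features(ref, dist, eps=1e-6):
    """Toy 3-D stand-in for the 12-D fused feature vector: mean and std
    of a gradient-similarity map plus mean absolute intensity error."""
    g_r, g_d = four_direction_gradient(ref), four_direction_gradient(dist)
    sim = (2.0 * g_r * g_d + eps) / (g_r ** 2 + g_d ** 2 + eps)
    return np.array([sim.mean(), sim.std(), np.abs(ref - dist).mean()])

# Train a random forest to map feature vectors to quality scores,
# using synthetic reference/distorted pairs with made-up pseudo-scores.
rng = np.random.default_rng(0)
X, y = [], []
for sigma in np.linspace(0.0, 0.5, 20):
    ref = rng.random((32, 32))
    dist = np.clip(ref + sigma * rng.standard_normal((32, 32)), 0, 1)
    X.append(quality_features(ref, dist))
    y.append(1.0 - sigma)  # heavier noise -> lower pseudo "subjective" score
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X)
```

The fusion step is the point of difference the abstract emphasises: because every mask sums to zero, flat regions yield zero response in all four directions, while edges contribute from each orientation simultaneously rather than only through the single strongest direction.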
Pages: 648-657
Page count: 10
Cited References
44 references in total
[1]   Detection of Lung Nodules in CT Scans Based on Unsupervised Feature Learning and Fuzzy Inference [J].
Akbarizadeh, Gholamreza ;
Moghaddam, Amal Eisapour .
JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS, 2016, 6 (02) :477-483
[2]   A New Statistical-Based Kurtosis Wavelet Energy Feature for Texture Recognition of SAR Images [J].
Akbarizadeh, Gholamreza .
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2012, 50 (11) :4358-4368
[3]   A Novel Image Quality Assessment With Globally and Locally Consilient Visual Quality Perception [J].
Bae, Sung-Ho ;
Kim, Munchurl .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (05) :2392-2406
[4]  
Balanov A, 2015, IEEE IMAGE PROC, P2105, DOI 10.1109/ICIP.2015.7351172
[5]   Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment [J].
Bosse, Sebastian ;
Maniry, Dominique ;
Mueller, Klaus-Robert ;
Wiegand, Thomas ;
Samek, Wojciech .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (01) :206-219
[6]   Random forests [J].
Breiman, L .
MACHINE LEARNING, 2001, 45 (01) :5-32
[7]   Image Quality Assessment Using Directional Anisotropy Structure Measurement [J].
Ding, Li ;
Huang, Hua ;
Zang, Yu .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (04) :1799-1809
[8]   Image quality assessment based on multi-feature extraction and synthesis with support vector regression [J].
Ding, Yong ;
Zhao, Yang ;
Zhao, Xinyu .
SIGNAL PROCESSING-IMAGE COMMUNICATION, 2017, 54 :81-92
[9]   Optimized fuzzy cellular automata for synthetic aperture radar image edge detection [J].
Farbod, Mohammad ;
Akbarizadeh, Gholamreza ;
Kosarian, Abdolnabi ;
Rangzan, Kazem .
JOURNAL OF ELECTRONIC IMAGING, 2018, 27 (01)
[10]   DeepSim: Deep similarity for image quality assessment [J].
Gao, Fei ;
Wang, Yi ;
Li, Panpeng ;
Tan, Min ;
Yu, Jun ;
Zhu, Yani .
NEUROCOMPUTING, 2017, 257 :104-114