Color image definition evaluation method based on deep learning method

Cited by: 0
Authors
Liu Di [1 ]
Li YingChun [1 ]
Affiliations
[1] Academy of Equipment, Beijing 101416, People's Republic of China
Source
2017 INTERNATIONAL CONFERENCE ON OPTICAL INSTRUMENTS AND TECHNOLOGY: OPTICAL SYSTEMS AND MODERN OPTOELECTRONIC INSTRUMENTS | 2017, Vol. 10616
Keywords
deep learning; non-reference image clarity evaluation; feature learning; VGG16 net; human visual perception;
DOI
10.1117/12.2289589
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic technology, communication technology];
Discipline classification codes
0808; 0809;
Abstract
To evaluate different blurring levels of color images and improve image definition (clarity) evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, the VGG16 net is used as a feature extractor to obtain a 4,096-dimensional feature vector for each image; the extracted features and the image labels are then used to train a BP neural network, which performs the final color image definition evaluation. The method is tested on images from the CSIQ database, blurred at different levels to produce 4,000 images, which are divided into three categories, each representing one blur level. Of the 400 high-dimensional feature samples, 300 are used for training with the VGG16 net and the BP neural network, and the remaining 100 are used for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. Unlike most existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent image quality classification accuracy on the test set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images agree well with the perception of the human visual system.
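The pipeline described in the abstract (a fixed VGG16 feature extractor feeding a small BP classifier over three blur levels) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' code: it assumes the Keras VGG16 model with ImageNet weights and its 4,096-dimensional "fc2" layer as the extractor, a simple fully connected network standing in for the BP classifier, and a hypothetical load_blurred_csiq() helper for the blurred CSIQ images and labels.

```python
# Sketch of the described pipeline: VGG16 as a fixed 4096-dim feature
# extractor, plus a small fully connected (BP-style) network classifying
# three blur levels. Data loading is hypothetical and left commented out.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Input

# 1) Feature extractor: take the 4096-dim output of VGG16's 'fc2' layer.
vgg = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=vgg.input, outputs=vgg.get_layer("fc2").output)

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3), RGB order."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# 2) BP-style classifier over the extracted features (3 blur levels).
def build_bp_classifier(num_classes=3):
    model = Sequential([
        Input(shape=(4096,)),
        Dense(256, activation="relu"),
        Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# 3) Training / evaluation on a pre-split dataset (labels are 0, 1, 2).
# X_train, y_train, X_test, y_test = load_blurred_csiq()   # hypothetical loader
# f_train, f_test = extract_features(X_train), extract_features(X_test)
# bp = build_bp_classifier()
# bp.fit(f_train, y_train, epochs=50, batch_size=32, validation_split=0.1)
# print("test accuracy:", bp.evaluate(f_test, y_test)[1])
```

Freezing VGG16 and training only the small classifier mirrors the two-stage description in the abstract (feature extraction followed by BP training); an end-to-end fine-tuning variant would be a different design choice.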
Pages: 6
References (12 in total)
  • [1] Guo J. C. Journal of Image and Graphics, 2017, 22.
  • [2] Hou Weilong. Journal of Xidian University, 2013, 40: 200. DOI: 10.3969/j.issn.1001-2400.2013.05.032.
  • [3] Jin W. Proceedings of the International Conference on Signal Processing, 2000, 3: 1647.
  • [4] Larson E. C., Chandler D. M. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 2010, 19(1).
  • [5] LeCun Y., Bengio Y., Hinton G. Deep learning. Nature, 2015, 521(7553): 436-444.
  • [6] Li Lin, Yu Shengsheng. Image quality evaluation method based on a deep learning model. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2016, 44(12): 70-75.
  • [7] Najafabadi M. M. Journal of Big Data, 2015, 2: 1. DOI: 10.1186/s40537-014-0007-7.
  • [8] Sheikh H. R., Sabir M. F., Bovik A. C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 2006, 15(11): 3440-3451.
  • [9] Simonyan K., et al. Deep inside convolutional networks: visualising image classification models and saliency maps, 2013.
  • [10] Simonyan K., et al. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2015.