Fast and robust multiple ColorChecker detection using deep convolutional neural networks

Citations: 10
Authors
Marrero Fernandez, Pedro D. [1 ]
Guerrero Pena, Fidel A. [1 ]
Ren, Tsang Ing [1 ]
Leandro, Jorge J. G. [2 ]
Affiliations
[1] Univ Fed Pernambuco UFPE, Ctr Informat, Recife, PE, Brazil
[2] Motorola Mobil LLC, Sao Paulo, Brazil
Keywords
ColorChecker detection; Photograph; Image quality; Color science; Color balance; Segmentation; Convolutional neural network;
DOI
10.1016/j.imavis.2018.11.001
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
ColorCheckers are reference standards that professional photographers and filmmakers use to ensure predictable results under every lighting condition. The objective of this work is to propose a new fast and robust method for automatic ColorChecker detection. The process is divided into two steps: (1) ColorChecker localization and (2) ColorChecker patch recognition. For ColorChecker localization, we trained a detection convolutional neural network on synthetic images, which are generated by compositing 3D models of the ColorChecker onto different background images. The output of the network is a bounding box for each possible ColorChecker candidate in the input image. Each bounding box defines a cropped image that is passed to a recognition system, where it is normalized with respect to color and dimensions. Subsequently, all possible color patches are extracted and grouped according to the distances between their centers. Each group is evaluated as a candidate ColorChecker part, and its position in the scene is estimated. Finally, a cost function is applied to assess the accuracy of the estimation. The method is tested on both real and synthetic images. The proposed method is fast, robust to overlaps, and invariant to affine projections. The algorithm also performs well when detecting multiple ColorCheckers. (C) 2018 Elsevier B.V. All rights reserved.
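The patch-grouping step described in the abstract (color patches grouped according to the distances between their centers) can be sketched as a single-linkage clustering over patch centers. This is a minimal illustrative sketch, not the authors' implementation; the function name, the union-find approach, and the distance threshold are all assumptions.

```python
# Hypothetical sketch of grouping candidate color-patch centers by distance,
# as described in the abstract. Centers closer than `max_dist` end up in the
# same group (single-linkage, via union-find). Illustrative only.
import math


def group_patch_centers(centers, max_dist):
    """Group 2D centers so that any pair within max_dist shares a group."""
    parent = list(range(len(centers)))

    def find(i):
        # Path-halving find.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge every pair of centers closer than the threshold.
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) <= max_dist:
                union(i, j)

    # Collect members of each connected component.
    groups = {}
    for i in range(len(centers)):
        groups.setdefault(find(i), []).append(centers[i])
    return list(groups.values())
```

In the paper's pipeline, each resulting group would then be evaluated as a candidate ColorChecker part before the cost function scores the estimated pose.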
Pages: 15-24
Number of pages: 10