Dense Pixel-to-Pixel Harmonization via Continuous Image Representation

Cited by: 7
Authors
Chen, Jianqi [1 ,2 ]
Zhang, Yilan [1 ,2 ]
Zou, Zhengxia [3 ]
Chen, Keyan [1 ,2 ]
Shi, Zhenwei [1 ,2 ]
Affiliations
[1] Beihang Univ, Image Proc Ctr, Sch Astronaut, Beijing 100191, Peoples R China
[2] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[3] Beihang Univ, Sch Astronaut, Dept Guidance Nav & Control, Beijing 100191, Peoples R China
Keywords
Image harmonization; implicit neural representation; high resolution; pixel-to-pixel; COLOR; FRAMEWORK;
DOI
10.1109/TCSVT.2023.3324591
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
High-resolution (HR) image harmonization is of great significance in real-world applications such as image synthesis and image editing. However, due to high memory costs, existing dense pixel-to-pixel harmonization methods mainly focus on processing low-resolution (LR) images. Some recent works resort to combining them with color-to-color transformations but are either limited to certain resolutions or depend heavily on hand-crafted image filters. In this work, we explore leveraging the implicit neural representation (INR) and propose a novel image Harmonization method based on Implicit neural Networks (HINet), which, to the best of our knowledge, is the first dense pixel-to-pixel method applicable to HR images without any hand-crafted filter design. Inspired by the Retinex theory, we decouple the MLPs into two parts to capture the content and environment of composite images, respectively. A Low-Resolution Image Prior (LRIP) network is designed to alleviate the boundary inconsistency problem, and we also propose new designs for the training and inference processes. Extensive experiments demonstrate the effectiveness of our method compared with state-of-the-art methods. Furthermore, some interesting and practical applications of the proposed method are explored. Our code is available at https://github.com/WindVChen/INR-Harmonization.
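To make the core idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a Retinex-style implicit neural representation: two small coordinate-based MLPs, one standing in for image "content" (reflectance) and one for the "environment" (illumination), whose per-pixel product yields the output color. All names, layer sizes, and the random (untrained) weights are illustrative assumptions; the point is only that such a network can be queried at any resolution, which is what makes INR-based harmonization resolution-agnostic.

```python
# Illustrative sketch only: a Retinex-inspired INR that maps pixel
# coordinates to RGB as content(x) * environment(x). Weights are random
# (untrained); HINet's actual architecture and training are in the paper.
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden, out_dim):
    """Random weights for a 2-layer MLP (hypothetical, for illustration)."""
    return {
        "w1": rng.normal(0.0, 1.0, (in_dim, hidden)),
        "b1": np.zeros(hidden),
        "w2": rng.normal(0.0, 1.0, (hidden, out_dim)),
        "b2": np.zeros(out_dim),
    }

def mlp_forward(p, x):
    h = np.tanh(x @ p["w1"] + p["b1"])                  # hidden activation
    return 1.0 / (1.0 + np.exp(-(h @ p["w2"] + p["b2"])))  # sigmoid -> (0, 1)

content_mlp = make_mlp(2, 32, 3)      # stands in for image content (reflectance)
environment_mlp = make_mlp(2, 32, 3)  # stands in for illumination/environment

def render(height, width):
    """Query the INR on a dense pixel grid of arbitrary resolution."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (H*W, 2) coordinates
    # Retinex-style decomposition: output = content * environment, per pixel.
    rgb = mlp_forward(content_mlp, coords) * mlp_forward(environment_mlp, coords)
    return rgb.reshape(height, width, 3)

lr = render(16, 16)    # low-resolution query
hr = render(256, 256)  # the same network queried at high resolution
```

Because the network is a function of continuous coordinates rather than a fixed feature grid, the same weights produce the 16x16 and 256x256 renders above; only the query grid changes.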
Pages: 3876 - 3890 (15 pages)
Related papers
(50 in total)
  • [1] Deep pixel-to-pixel network for underwater image enhancement and restoration
    Sun, Xin
    Liu, Lipeng
    Li, Qiong
    Dong, Junyu
    Lima, Estanislau
    Yin, Ruiying
    IET IMAGE PROCESSING, 2019, 13 (03) : 469 - 474
  • [2] A Pixel-to-Pixel Convolutional Neural Network for Single Image Dehazing
    Zhu, Chengkai
    Zhou, Yucan
    Xie, Zongxia
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT III, 2017, 10636 : 270 - 279
  • [3] Pixel-to-pixel matching for image recognition using Hungarian graph matching
    Keysers, D
    Deselaers, T
    Ney, H
    PATTERN RECOGNITION, 2004, 3175 : 154 - 162
  • [4] Depth Discontinuities by Pixel-to-Pixel Stereo
    Birchfield, Stan
    Tomasi, Carlo
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 1999, 35 (03) : 269 - 293
  • [5] Depth discontinuities by pixel-to-pixel stereo
    Birchfield, S
    Tomasi, C
    SIXTH INTERNATIONAL CONFERENCE ON COMPUTER VISION, 1998, : 1073 - 1080
  • [6] Progressive pixel-to-pixel evaluation to obtain the hard and smooth region for image compression
    Taujuddin, N. S. A. M.
    Ibrahim, Rosziati
    Sari, Suhaila
    PROCEEDINGS SIXTH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS, MODELLING AND SIMULATION, 2015, : 102 - 106
  • [7] Guided Super-Resolution as Pixel-to-Pixel Transformation
    de Lutio, Riccardo
    D'Aronco, Stefano
    Wegner, Jan Dirk
    Schindler, Konrad
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 8828 - 8836
  • [8] Alignment Pixel-to-Pixel for Mammography Obtained by Dual Energy
    Costa, I. T.
    Oliveira, H. J. Q.
    4TH EUROPEAN CONFERENCE OF THE INTERNATIONAL FEDERATION FOR MEDICAL AND BIOLOGICAL ENGINEERING, 2009, 22 (1-3) : 799 - 802
  • [9] Pixel-to-pixel correspondence adjustment in DMD camera by moire methodology
    Ri, S
    Fujigaki, M
    Matui, T
    Morimoto, Y
    EXPERIMENTAL MECHANICS, 2006, 46 (01) : 67 - 75