Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks

Times Cited: 3
Authors
Xu, Dan [1 ]
Fan, Xiaopeng [1 ,2 ]
Gao, Wen [2 ,3 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
[2] Pengcheng Lab, Shenzhen 518052, Peoples R China
[3] Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China; National High-Tech Research and Development Program of China (863 Program);
Keywords
attention; depth map; fusion; generative adversarial networks; multiscale; super-resolution;
DOI
10.3390/e25060836
CLC Number
O4 [Physics];
Subject Classification Code
0702;
Abstract
Color images have long been used as important supplementary information to guide the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of the color image on the depth map has remained a neglected issue. To address this problem, and inspired by the excellent results recently achieved by generative adversarial networks in color image super-resolution, we propose a depth map super-resolution framework based on generative adversarial networks with multiscale attention fusion. Fusing color features and depth features at the same scale through a hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map. Fusing the joint color-depth features across different scales balances the influence of features at each scale on the super-resolved depth map. A generator loss composed of content loss, adversarial loss, and edge loss helps restore sharper depth map edges. Experimental results on several types of benchmark depth map datasets show that the proposed multiscale attention fusion framework achieves significant subjective and objective improvements over the latest algorithms, verifying the validity and generalization ability of the model.
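The abstract describes two technical ingredients: same-scale attention fusion of color and depth features, and a generator loss built from content, adversarial, and edge terms. The PyTorch-style sketch below is not the authors' code; the AttentionFusion module, the Sobel-based edge term, the layer widths, and the loss weights w_adv and w_edge are assumptions added only to illustrate how such a fusion block and composite loss could be assembled.

# Illustrative sketch (assumed implementation, not the paper's code): channel-attention
# fusion of color-guidance and depth features at one scale, plus a composite generator
# loss with content, adversarial, and edge terms as outlined in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Fuse same-scale color and depth features with a learned channel attention mask."""

    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, depth_feat: torch.Tensor, color_feat: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([depth_feat, color_feat], dim=1)
        # The attention weights act as a learned measure of how much the color
        # guidance should contribute at this scale.
        weight = self.excite(self.squeeze(joint))
        guided = weight * color_feat + depth_feat
        return self.merge(torch.cat([guided, depth_feat], dim=1))


def sobel_edges(x: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude of a single-channel depth map via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def generator_loss(sr_depth, hr_depth, disc_fake_logits, w_adv=1e-3, w_edge=0.1):
    """Content (L1) + adversarial + edge loss; weights are illustrative assumptions."""
    content = F.l1_loss(sr_depth, hr_depth)
    adversarial = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    edge = F.l1_loss(sobel_edges(sr_depth), sobel_edges(hr_depth))
    return content + w_adv * adversarial + w_edge * edge

In this reading, the attention mask plays the role of the quantitative measure of color guidance, and the edge term is what encourages sharper depth boundaries; the exact network and weighting in the paper may differ.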
Pages: 15