Unsupervised Deep Image Stitching: Reconstructing Stitched Features to Images

Cited by: 128
Authors
Nie, Lang [1 ,2 ]
Lin, Chunyu [1 ,2 ]
Liao, Kang [1 ,2 ]
Liu, Shuaicheng [3 ]
Zhao, Yao [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image stitching; Image reconstruction; Feature extraction; Image resolution; Estimation; Strain; Solid modeling; Computer vision; deep image stitching; deep homography estimation; HOMOGRAPHY;
DOI
10.1109/TIP.2021.3092828
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traditional feature-based image stitching technologies rely heavily on feature detection quality, often failing to stitch images with few features or low resolution. Learning-based image stitching solutions are rarely studied due to the lack of labeled data, which makes supervised methods unreliable. To address these limitations, we propose an unsupervised deep image stitching framework consisting of two stages: unsupervised coarse image alignment and unsupervised image reconstruction. In the first stage, we design an ablation-based loss to constrain an unsupervised homography network, which is more suitable for large-baseline scenes. Moreover, a transformer layer is introduced to warp the input images in the stitching-domain space. In the second stage, motivated by the insight that pixel-level misalignments can be eliminated to a certain extent at the feature level, we design an unsupervised image reconstruction network to eliminate the artifacts from features to pixels. Specifically, the reconstruction network is implemented with a low-resolution deformation branch and a high-resolution refinement branch, learning the deformation rules of image stitching and enhancing resolution simultaneously. To establish an evaluation benchmark and train the learning framework, we present and release a comprehensive real-world image dataset for unsupervised deep image stitching. Extensive experiments demonstrate the superiority of our method over other state-of-the-art solutions. Even compared with supervised solutions, our image stitching quality is still preferred by users.
Pages: 6184-6197
Page count: 14
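To make the two-stage pipeline in the abstract concrete, below is a minimal PyTorch sketch of the architecture it describes. Everything here is an illustrative assumption rather than the authors' released implementation: the layer counts and channel widths are arbitrary, the homography is regressed as a raw 3x3 residual for brevity (the paper uses a 4-point parameterization), `warp_with_homography` stands in for the transformer layer, and `ablation_loss` encodes one plausible reading of the ablation-based constraint, i.e., photometric error measured only in the valid overlap region.

```python
# Minimal PyTorch sketch of the two-stage framework (illustrative assumptions,
# not the authors' released implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_homography(img, H):
    """Differentiable projective warp (the role the abstract's 'transformer
    layer' plays): apply a 3x3 homography H (N,3,3) to img (N,C,h,w) in
    normalized [-1, 1] coordinates, matching grid_sample's convention."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=img.device),
                            torch.linspace(-1, 1, w, device=img.device),
                            indexing="ij")
    grid = torch.stack([xs, ys, torch.ones_like(xs)], -1).expand(n, h, w, 3)
    pts = torch.einsum("nhwc,nkc->nhwk", grid, H)       # apply H per pixel
    pts = pts[..., :2] / pts[..., 2:3].clamp(min=1e-6)  # dehomogenize safely
    return F.grid_sample(img, pts, align_corners=True)


class HomographyNet(nn.Module):
    """Stage 1 (assumed form): regress a residual homography from the image
    pair. For brevity this sketch regresses a raw 3x3 residual; the paper
    uses a 4-point offset parameterization instead."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, 9),
        )
        # Zero-init the head so training starts from the identity warp.
        nn.init.zeros_(self.encoder[-1].weight)
        nn.init.zeros_(self.encoder[-1].bias)

    def forward(self, ref, tgt):
        delta = self.encoder(torch.cat([ref, tgt], 1)).view(-1, 3, 3)
        return torch.eye(3, device=ref.device) + delta


def ablation_loss(ref, warped_tgt, warped_mask):
    """One plausible reading of the 'ablation-based' constraint: photometric
    error measured only where the warped view is valid (the overlap region),
    ablating the contribution of out-of-view content."""
    return F.l1_loss(warped_tgt * warped_mask, ref * warped_mask)


class ReconstructionNet(nn.Module):
    """Stage 2 (assumed form): a low-resolution branch learns the coarse
    stitching deformation; a high-resolution branch refines detail."""
    def __init__(self):
        super().__init__()
        self.low_res = nn.Sequential(   # coarse branch on 1/4-size input
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        self.high_res = nn.Sequential(  # full-resolution refinement
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, warp_a, warp_b):
        x = torch.cat([warp_a, warp_b], 1)
        coarse = F.interpolate(self.low_res(F.avg_pool2d(x, 4)),
                               scale_factor=4, mode="bilinear",
                               align_corners=False)
        return coarse + self.high_res(torch.cat([x, coarse], 1))


if __name__ == "__main__":
    ref, tgt = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    H = HomographyNet()(ref, tgt)
    warped_tgt = warp_with_homography(tgt, H)
    valid_mask = warp_with_homography(torch.ones_like(tgt), H)
    alignment_loss = ablation_loss(ref, warped_tgt, valid_mask)
    stitched = ReconstructionNet()(ref, warped_tgt)
    print(alignment_loss.item(), stitched.shape)  # torch.Size([1, 3, 128, 128])
```

Zero-initializing the regression head so training starts at the identity homography is a common stabilization choice for unsupervised alignment. The paper additionally trains the reconstruction stage with its own reconstruction losses, which this sketch omits.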