Robust Cylindrical Panorama Stitching for Low-Texture Scenes Based on Image Alignment Using Deep Learning and Iterative Optimization

Cited by: 10
Authors
Kang, Lai [1 ]
Wei, Yingmei [1 ]
Jiang, Jie [1 ]
Xie, Yuxiang [1 ]
Affiliations
[1] National University of Defense Technology, College of Systems Engineering, Changsha 410073, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
cylindrical panorama; low-texture environments; convolutional neural network (CNN); robust image alignment; sub-pixel optimization;
DOI
10.3390/s19235310
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Classification Code
070302 ; 081704 ;
Abstract
Cylindrical panorama stitching can generate high-resolution images of a scene with a wide field-of-view (FOV), making it a useful scene representation for applications such as environmental sensing and robot localization. Traditional image stitching methods based on hand-crafted features are effective for constructing a cylindrical panorama from a sequence of images when the scene contains sufficient reliable features. However, these methods cannot handle low-texture environments in which no reliable feature correspondences can be established. This paper proposes a novel two-step image alignment method based on deep learning and iterative optimization to address this issue. In particular, a lightweight, end-to-end trainable convolutional neural network (CNN) architecture called ShiftNet is proposed to estimate the initial shifts between images; these initial estimates are then refined to sub-pixel accuracy in an iterative optimization procedure based on a specified camera motion model. Extensive experiments on a synthetic dataset, rendered photo-realistic images, and real images were carried out to evaluate the performance of our proposed method. Both qualitative and quantitative experimental results demonstrate that cylindrical panorama stitching based on our proposed image alignment method leads to significant improvements over traditional feature-based methods and recent deep-learning-based methods in challenging low-texture environments.
Pages: 22
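
As a rough, non-authoritative illustration of the two-step alignment summarized in the abstract, the PyTorch sketch below pairs a small shift-regression CNN with an iterative photometric refinement of its prediction. The ShiftNetSketch layer layout, the translation-only warp, the L1 photometric loss, and the Adam settings are all assumptions made for this sketch; they do not reproduce the authors' ShiftNet architecture or their camera-motion-model-based sub-pixel refinement.

```python
# Illustrative sketch only: a tiny shift-regression CNN followed by an
# iterative, gradient-based photometric refinement of the predicted shift.
# Layer sizes, loss, warp model (pure translation), and optimizer are
# assumptions for demonstration, not the paper's ShiftNet or its
# camera-motion-model-based refinement.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShiftNetSketch(nn.Module):
    """Lightweight CNN mapping a stacked grayscale image pair to a (dx, dy) shift."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 2)  # (dx, dy) in pixels

    def forward(self, pair):  # pair: (B, 2, H, W)
        return self.regressor(self.features(pair).flatten(1))


def refine_shift(img_a, img_b, shift_init, iters=100, lr=0.1):
    """Refine the shift to sub-pixel accuracy by minimizing photometric error.

    img_a, img_b: (1, 1, H, W) tensors; shift_init: (1, 2) initial (dx, dy).
    Warps img_b toward img_a with differentiable bilinear sampling and
    descends on an L1 photometric loss with respect to the shift.
    """
    shift = shift_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([shift], lr=lr)
    _, _, h, w = img_a.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # (1, H, W, 2), (x, y) order
    for _ in range(iters):
        opt.zero_grad()
        # convert the pixel shift to normalized grid offsets
        offset = torch.stack(
            [2 * shift[:, 0] / (w - 1), 2 * shift[:, 1] / (h - 1)], dim=-1)
        warped = F.grid_sample(img_b, base + offset.view(1, 1, 1, 2),
                               align_corners=True)
        loss = F.l1_loss(warped, img_a)
        loss.backward()
        opt.step()
    return shift.detach()


if __name__ == "__main__":
    # Smooth synthetic test image (Gaussian blob) shifted by 3 pixels in x.
    yy, xx = torch.meshgrid(torch.arange(128.0), torch.arange(128.0), indexing="ij")
    a = torch.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 400.0).view(1, 1, 128, 128)
    b = torch.roll(a, shifts=3, dims=-1)
    init = ShiftNetSketch()(torch.cat([a, b], dim=1))  # untrained: a coarse guess
    print("initial:", init.detach(), "refined:", refine_shift(a, b, init))
```

Running the demo prints the untrained network's coarse guess followed by the refined shift, which should approach the synthetic three-pixel horizontal offset.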