Accurate 3-D Reconstruction Under IoT Environments and Its Applications to Augmented Reality

Cited by: 14
Authors
Cao, Mingwei [1 ,2 ,3 ]
Zheng, Liping [1 ,2 ,3 ]
Jia, Wei [1 ,2 ,3 ]
Lu, Huimin [4 ]
Liu, Xiaoping [1 ,2 ,3 ]
Affiliations
[1] Hefei Univ Technol, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230009, Anhui, Peoples R China
[2] Hefei Univ Technol, Anhui Prov Key Lab Ind Safety & Emergency Technol, Hefei 230009, Peoples R China
[3] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
[4] Kyushu Inst Technol, Kitakyushu, Fukuoka 8048550, Japan
Keywords
Three-dimensional displays; Image reconstruction; Cameras; Feature extraction; Computational modeling; Solid modeling; Surface reconstruction; Augmented reality; Internet of Things (IoT); Mixed reality; Modeling; 3-D reconstruction
DOI
10.1109/TII.2020.3016393
Chinese Library Classification
TP [automation technology, computer technology]
Discipline Classification Code
0812
Abstract
With the remarkable development of sensor devices and the Internet of Things (IoT), today's researchers can easily learn what changes have taken place in the real world by acquiring a 3-D model. At the same time, the large volume of available image data promotes the development of perceptual computing technology. In this article, we focus on modeling 3-D scenes from the multisource image data obtained from IoT cameras. Although great progress has been made in 3-D reconstruction, it remains challenging to recover a 3-D model from IoT data because the captured images are usually noisy, incomplete, of varying scale, and contain repetitive structures or features. We propose an accurate 3-D reconstruction method for IoT environments that supports perceptual computing of the scene. The method consists of sparse, dense, and surface reconstruction stages, which gradually recover a high-quality geometric model from the image data and efficiently handle various repetitive structures. By analyzing the reconstructed model, we can detect changes in the scene. We evaluate the proposed method on benchmark data sets (i.e., Tanks and Temples) and on publicly available data sets whose samples typically contain repeated structures, lighting changes, and different scales. Experimental results show that the proposed method outperforms state-of-the-art methods according to the standard evaluation metric. We also use our method to enhance real scenes with virtual objects, producing promising results.
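The abstract describes a three-stage pipeline: sparse reconstruction (feature matching and triangulation), dense reconstruction (multiview stereo), and surface reconstruction (meshing). The sketch below illustrates only the shape of such a pipeline on toy data; it is not the authors' implementation, and all function names, the toy point lists, and the bounding-box "surface" proxy are illustrative assumptions.

```python
# Hypothetical sketch of a sparse -> dense -> surface pipeline.
# Each "image" is modeled as a list of already-triangulated 3-D points;
# real systems would run feature extraction, matching, SfM, and MVS here.

def sparse_reconstruction(images):
    # Stand-in for structure-from-motion: merge point observations
    # from all views into one deduplicated sparse cloud.
    seen = set()
    for img in images:
        seen.update(img)
    return sorted(seen)

def dense_reconstruction(sparse_points):
    # Stand-in for multiview stereo: densify the cloud by inserting
    # midpoints between consecutive sparse points.
    dense = list(sparse_points)
    for a, b in zip(sparse_points, sparse_points[1:]):
        mid = tuple((x + y) / 2 for x, y in zip(a, b))
        dense.append(mid)
    return sorted(dense)

def surface_reconstruction(dense_points):
    # Stand-in for meshing (e.g., Poisson reconstruction): report the
    # axis-aligned bounding box of the dense cloud as a crude proxy.
    xs, ys, zs = zip(*dense_points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Toy multisource input: two overlapping "views" of the same scene.
views = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
]
sparse = sparse_reconstruction(views)
dense = dense_reconstruction(sparse)
lo, hi = surface_reconstruction(dense)
print(len(sparse), len(dense), lo, hi)
```

Each stage consumes the previous stage's output, mirroring how the paper's pipeline gradually refines the geometric model from sparse points to a dense cloud to a surface.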
Pages: 2090-2100
Page count: 11
References
31 entries in total
[1] Alcantarilla, Pablo Fernandez; Bartoli, Adrien; Davison, Andrew J. "KAZE Features." Computer Vision - ECCV 2012, Pt VI, 2012, 7577: 214-227.
[2] [Anonymous], 2014, GCH 2014 - Eurographics Workshop on Graphics and Cultural Heritage.
[3] [Anonymous], 2018, CoRR.
[4] Bailo, Oleksandr; Rameau, Francois; Joo, Kyungdon; Park, Jinsun; Bogdan, Oleksandr; Kweon, In So. "Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution." Pattern Recognition Letters, 2018, 106: 53-60.
[5] Furukawa, Yasutaka; Ponce, Jean. "Accurate, Dense, and Robust Multiview Stereopsis." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(8): 1362-1376.
[6] Galliani, Silvano; Lasinger, Katrin; Schindler, Konrad. "Massively Parallel Multiview Stereopsis by Surface Normal Diffusion." 2015 IEEE International Conference on Computer Vision (ICCV), 2015: 873-881.
[7] Heller, J. 2015 14th IAPR International Conference on Machine Vision Applications (MVA), 2015: 30. DOI 10.1109/MVA.2015.7153126.
[8] Vu, Hoang-Hiep; Labatut, Patrick; Pons, Jean-Philippe; Keriven, Renaud. "High Accuracy and Visibility-Consistent Dense Multiview Stereo." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(5): 889-901.
[9] Kazhdan, M. Proceedings of the 4th Eurographics Symposium on Geometry Processing, 2006, 7: 1. DOI 10.2312/SGP/SGP06/061-070.
[10] Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J. "BigSUR: Large-scale Structured Urban Reconstruction." ACM Transactions on Graphics, 2017, 36(6).