Generative deep-learning-embedded asynchronous structured light for three-dimensional imaging

Cited by: 13
Authors
Lu, Lei [1 ]
Bu, Chenhao [1 ]
Su, Zhilong [2 ,3 ]
Guan, Banglei [4 ]
Yu, Qifeng [4 ]
Pan, Wei [5 ]
Zhang, Qinghui [1 ]
Affiliations
[1] Henan Univ Technol, Coll Informat Sci & Engn, Zhengzhou, Peoples R China
[2] Shanghai Univ, Shanghai Inst Appl Math & Mech, Sch Mech & Engn Sci, Shanghai Key Lab Mech Energy Engn, Shanghai, Peoples R China
[3] Shanghai Univ, Shaoxing Res Inst, Shaoxing, Peoples R China
[4] Natl Univ Def Technol, Coll Aerosp Sci & Engn, Changsha, Peoples R China
[5] OPT Machine Vis Tech Co Ltd, Dongguan, Peoples R China
Source
ADVANCED PHOTONICS | 2024, Vol. 6, No. 4
Funding
National Natural Science Foundation of China
Keywords
structured light; fringe pattern projection; asynchrony; deep learning; generative neural networks; three-dimensional imaging; SHAPE MEASUREMENT;
DOI
10.1117/1.AP.6.4.046004
Chinese Library Classification
O43 [Optics]
Subject Classification Codes
070207; 0803
Abstract
Three-dimensional (3D) imaging with structured light is crucial in diverse scenarios, ranging from intelligent manufacturing and medicine to entertainment. However, current structured light methods rely on projector-camera synchronization, which limits the use of affordable imaging devices and restricts consumer applications. In this work, we introduce an asynchronous structured light imaging approach based on generative deep neural networks that relaxes the synchronization constraint and addresses the resulting challenge of fringe pattern aliasing, without relying on any a priori constraint of the projection system. Specifically, we propose a generative deep neural network with a U-Net-like encoder-decoder architecture that learns the underlying fringe features directly by exploiting the intrinsic priors in the aliased fringe patterns. The network is trained within an adversarial learning framework and supervised by a statistics-informed loss function. We demonstrate its performance by evaluating the intensity, phase, and 3D reconstruction results. The trained network separates aliased fringe patterns and produces results comparable to those of the synchronous case: the absolute error is no greater than 8 μm, and the standard deviation does not exceed 3 μm. Evaluation on multiple objects and pattern types shows that the method generalizes to arbitrary asynchronous structured light scenes.
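The fringe pattern aliasing the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and parameters are assumptions. With an unsynchronized projector-camera pair, one camera exposure can straddle two consecutive phase-shifted projections, so the captured image is a weighted blend of two ideal fringe patterns, which the generative network must then disentangle.

```python
import numpy as np

def fringe(width, freq, phase, a=0.5, b=0.5):
    """Ideal sinusoidal fringe pattern I(x) = a + b*cos(2*pi*freq*x + phase)."""
    x = np.linspace(0.0, 1.0, width)
    return a + b * np.cos(2.0 * np.pi * freq * x + phase)

def aliased_capture(p1, p2, alpha):
    """Camera exposure overlapping two projections: alpha is the fraction
    of the exposure spent on the first pattern (0 <= alpha <= 1)."""
    return alpha * p1 + (1.0 - alpha) * p2

width, freq = 640, 16
shift = 2.0 * np.pi / 3.0                 # 3-step phase-shifting step
p1 = fringe(width, freq, 0.0)             # first projected pattern
p2 = fringe(width, freq, shift)           # next pattern in the sequence

# Asynchronous capture: 40% of the exposure sees p1, 60% sees p2.
# The blend is again a sinusoid of the same frequency, but with reduced
# modulation and a shifted phase, corrupting phase retrieval if used directly.
captured = aliased_capture(p1, p2, alpha=0.4)
```

Because the blend of two same-frequency sinusoids is itself a sinusoid, the aliased image looks superficially like a valid fringe pattern, which is why separating the components requires learned priors rather than simple thresholding.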
Pages: 14