Single-shot real-time compressed ultrahigh-speed imaging enabled by a snapshot-to-video autoencoder

Times Cited: 15
Authors
Liu, Xianglei [1]
Monteiro, Joao [1]
Albuquerque, Isabela [1]
Lai, Yingming [1]
Jiang, Cheng [1]
Zhang, Shian [2]
Falk, Tiago H. [1]
Liang, Jinyang [1]
Affiliations
[1] Inst Natl Rech Sci, Ctr Energie Mat Telecommun, Varennes, PQ J3X 1S2, Canada
[2] East China Normal Univ, State Key Lab Precis Spect, Shanghai 200062, Peoples R China
Funding
Canada Foundation for Innovation; Natural Sciences and Engineering Research Council of Canada;
Keywords
ULTRAFAST PHOTOGRAPHY; NEURAL-NETWORKS; ALGORITHMS;
DOI
10.1364/PRJ.422179
Chinese Library Classification (CLC)
O43 [Optics];
Discipline Codes
070207; 0803;
Abstract
Single-shot 2D optical imaging of transient scenes is indispensable for numerous areas of study. Among existing techniques, compressed optical-streaking ultrahigh-speed photography (COSUP) uses a cost-efficient design to endow off-the-shelf CCD and CMOS cameras with ultrahigh frame rates. Thus far, COSUP's application scope has been limited by the long processing time and unstable image quality of existing analytical-modeling-based video reconstruction. To overcome these problems, we have developed a snapshot-to-video autoencoder (S2V-AE), a deep neural network that maps a compressively recorded 2D image to a movie. The S2V-AE preserves spatiotemporal coherence in reconstructed videos and presents a flexible structure that tolerates changes in the input data. Implemented in compressed ultrahigh-speed imaging, the S2V-AE enables the development of single-shot machine-learning-assisted real-time (SMART) COSUP, which features a reconstruction time of 60 ms and a large sequence depth of 100 frames. SMART-COSUP is applied to wide-field multiple-particle tracking at 20,000 frames per second. As a universal computational framework, the S2V-AE is readily adaptable to other modalities in high-dimensional compressed sensing. SMART-COSUP is also expected to find wide applications in applied and fundamental sciences. (C) 2021 Chinese Laser Press
Pages: 2464-2474
Number of Pages: 11
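
The abstract describes the S2V-AE as a network that maps a compressively recorded 2D snapshot back to a movie. As a reading aid, below is a minimal, hedged sketch (not the authors' code) of the forward model commonly used for optical-streaking compressed imaging such as COSUP: each frame of the dynamic scene is spatially encoded by a static pseudo-random binary mask, sheared (optically streaked) along one sensor axis, and integrated into a single 2D snapshot. The mask pattern, shear step, and array sizes are illustrative assumptions only.

# Illustrative sketch of an optical-streaking compressed-imaging forward model.
# All parameters (mask, shear step, frame size, sequence depth) are assumptions.
import numpy as np

def cosup_snapshot(video, mask, shear_px=1):
    """video: (T, H, W) dynamic scene; mask: (H, W) binary encoding mask.
    Returns E: (H, W + (T-1)*shear_px), the compressively recorded snapshot."""
    T, H, W = video.shape
    E = np.zeros((H, W + (T - 1) * shear_px))
    for t in range(T):
        coded = video[t] * mask          # spatial encoding by the binary mask
        s = t * shear_px                 # temporal shearing (optical streaking)
        E[:, s:s + W] += coded           # integration on the 2D sensor
    return E

# Example with random data and a sequence depth of 100 frames,
# matching the sequence depth quoted in the abstract.
rng = np.random.default_rng(0)
video = rng.random((100, 64, 64))
mask = (rng.random((64, 64)) > 0.5).astype(float)
snapshot = cosup_snapshot(video, mask)

A snapshot-to-video network along the lines of the S2V-AE would be trained on (snapshot, video) pairs to approximate the inverse of such a mapping; replacing iterative analytical reconstruction with a single network pass is what underlies the 60 ms reconstruction time quoted in the abstract.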