On the interplay between physical and content priors in deep learning for computational imaging

Cited by: 30
Authors
Deng, Mo [1 ]
Li, Shuai [2 ]
Zhang, Zhengyun [4 ]
Kang, Iksung [1 ]
Fang, Nicholas X. [3 ]
Barbastathis, George [3 ,4 ]
Affiliations
[1] MIT, Dept Elect Engn & Comp Sci, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] SenseBrain Technol Ltd LLC, 2550 N 1st St,Suite 300, San Jose, CA 95131 USA
[3] MIT, Dept Mech Engn, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[4] Singapore MIT Alliance Res & Technol SMART Ctr, One Create Way, Singapore 117543, Singapore
Funding
National Research Foundation of Singapore;
Keywords
NETWORK;
DOI
10.1364/OE.395204
Chinese Library Classification
O43 [Optics];
Discipline codes
070207; 0803;
Abstract
Deep learning (DL) has been applied extensively to many computational imaging problems, often yielding performance superior to that of traditional iterative approaches. However, two important questions remain largely unanswered: first, how well can a trained neural network generalize to objects very different from those seen in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often unavailable during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process to the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also find that a weaker regularization effect leads to better learning of the underlying propagation model, i.e., the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g., ImageNet, than if the same DNN is trained on a lower-entropy database, e.g., MNIST, since the former allows the underlying physics model to be learned better than the latter.
(C) 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
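The abstract's entropy argument can be made concrete with a short sketch: the Shannon entropy of an image's gray-level histogram is high for dense-textured (ImageNet-like) images and low for mostly-binary (MNIST-like) images. This is a minimal illustration of the entropy measure, not the paper's exact computation; the function name, random surrogates, and binning choice are assumptions.

```python
import numpy as np

def shannon_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
# Surrogate for a natural, ImageNet-like image: dense spread of gray levels.
natural_like = rng.random((64, 64))
# Surrogate for an MNIST-like image: essentially two gray levels.
binary_like = (rng.random((64, 64)) > 0.5).astype(float)

print(shannon_entropy(natural_like) > shannon_entropy(binary_like))  # True
```

Under this measure, the binary surrogate sits near 1 bit per pixel while the dense-gray surrogate approaches the 8-bit maximum for 256 bins, matching the paper's claim that MNIST-style training sets are low-entropy relative to ImageNet-style ones.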
Pages: 24152-24170
Page count: 19
References
47 in total
[1]  
Advani M. S., 2017, arXiv:1710.03667
[2]  
[Anonymous], 2018, arXiv:1805.12076
[3]  
Barbastathis G., 2019, OPTICA
[4]  
Cover TM, 2012, Elements of information theory
[5]  
Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848
[6]   Learning to synthesize: robust phase retrieval at low photon counts [J].
Deng, Mo ;
Li, Shuai ;
Goy, Alexandre ;
Kang, Iksung ;
Barbastathis, George .
LIGHT-SCIENCE & APPLICATIONS, 2020, 9 (01)
[7]   Probing shallower: perceptual loss trained Phase Extraction Neural Network (PLT-PhENN) for artifact-free reconstruction at low photon budget [J].
Deng, Mo ;
Goy, Alexandre ;
Li, Shuai ;
Arthur, Kwabena ;
Barbastathis, George .
OPTICS EXPRESS, 2020, 28 (02) :2511-2535
[8]  
Deng Mo, 2018, arXiv:1811.07945
[9]   Learning a Deep Convolutional Network for Image Super-Resolution [J].
Dong, Chao ;
Loy, Chen Change ;
He, Kaiming ;
Tang, Xiaoou .
COMPUTER VISION - ECCV 2014, PT IV, 2014, 8692 :184-199
[10]  
Goy A., 2019, P NAT ACAD SCI