Diffractive Deep Neural Networks at Visible Wavelengths

Cited: 101
Authors
Chen, Hang [1]
Feng, Jianan [1]
Jiang, Minwei [1]
Wang, Yiqun [2]
Lin, Jie [1,3]
Tan, Jiubin [1]
Jin, Peng [1,3]
Affiliations
[1] Harbin Institute of Technology, Center of Ultra-precision Optoelectronic Instrument, Harbin 150001, People's Republic of China
[2] Chinese Academy of Sciences, Suzhou Institute of Nano-Tech and Nano-Bionics, Nanofabrication Facility, Suzhou 215123, People's Republic of China
[3] Harbin Institute of Technology, Key Laboratory of Micro-systems and Micro-structures Manufacturing, Ministry of Education, Harbin 150001, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Optical computation; Optical neural networks; Deep learning; Optical machine learning; Diffractive deep neural networks; Light communication; Learning approach
DOI
10.1016/j.eng.2020.07.032
Chinese Library Classification (CLC) number
T [Industrial Technology]
Subject classification code
08
Abstract
Optical deep learning based on diffractive optical elements offers unique advantages in parallel processing, computational speed, and power efficiency. One landmark method is the diffractive deep neural network (D²NN), which is based on three-dimensional printing technology and operates in the terahertz spectral range. Since the terahertz bandwidth involves limited interparticle coupling and material losses, this paper extends the D²NN to visible wavelengths. A general theory, including a revised formula, is proposed to resolve the contradictions among wavelength, neuron size, and fabrication limitations. A visible-light D²NN classifier is used to recognize unchanged targets (handwritten digits 0 to 9) and changed targets (i.e., targets that have been covered or altered) at a visible wavelength of 632.8 nm. The experimental classification accuracy (84%) and numerical classification accuracy (91.57%) quantify the match between the theoretical design and the fabricated system's performance. The presented framework can be used to apply the D²NN to various practical applications and to design new ones. © 2021 The Authors. Published by Elsevier Ltd. on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company.
Pages: 1483-1491
Page count: 9
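As a rough illustration of the forward model behind a D²NN, in which free-space diffraction between successive phase-only layers takes the place of network connections, the sketch below propagates a field through a stack of phase masks with the angular-spectrum method at the 632.8 nm wavelength used in the paper. This is a minimal sketch, not the authors' implementation: the grid size, pixel pitch, layer spacing, and layer count are assumed placeholders, and the phase masks are random rather than trained.

```python
# Minimal D²NN forward-pass sketch (not the authors' code): phase-only layers
# separated by free-space propagation computed with the angular-spectrum method.
# All geometry parameters below are illustrative assumptions.
import numpy as np

WAVELENGTH = 632.8e-9     # He-Ne wavelength used in the paper (m)
PIXEL_PITCH = 2e-6        # assumed neuron size (m)
GRID = 256                # assumed grid resolution
LAYER_SPACING = 5e-3      # assumed axial gap between layers (m)
NUM_LAYERS = 5            # assumed number of diffractive layers


def angular_spectrum(field, distance, wavelength=WAVELENGTH, pitch=PIXEL_PITCH):
    """Propagate a complex field by `distance` using the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))    # evanescent components dropped
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)


def d2nn_forward(input_amplitude, phase_layers):
    """Propagate through phase-only layers and return the detector-plane intensity."""
    field = input_amplitude.astype(np.complex128)
    for phase in phase_layers:
        field = angular_spectrum(field, LAYER_SPACING)
        field = field * np.exp(1j * phase)            # phase-only modulation
    field = angular_spectrum(field, LAYER_SPACING)    # propagate to the detector plane
    return np.abs(field) ** 2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    digit = np.zeros((GRID, GRID))
    digit[96:160, 120:136] = 1.0                      # crude "1"-like amplitude object
    layers = [rng.uniform(0, 2 * np.pi, (GRID, GRID)) for _ in range(NUM_LAYERS)]
    intensity = d2nn_forward(digit, layers)
    # In a trained D²NN the phases would be optimized so that the output intensity
    # concentrates in one of ten detector regions, one per digit class.
    print("output intensity shape:", intensity.shape, "peak:", intensity.max())
```

In a training setup, the phase masks would be treated as learnable parameters and optimized (e.g., by gradient descent through this differentiable propagation model) before being fabricated as fixed diffractive layers.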