Heavy tails and pruning in programmable photonic circuits for universal unitaries

Cited by: 13
Authors
Yu, Sunkyu [1 ]
Park, Namkyoo [2 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Intelligent Wave Syst Lab, Seoul 08826, South Korea
[2] Seoul Natl Univ, Dept Elect & Comp Engn, Photon Syst Lab, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore;
DOI
10.1038/s41467-023-37611-9
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [Natural Sciences, General];
Discipline Classification Codes
07; 0710; 09;
Abstract
The authors model programmable photonic circuits targeting universal unitaries and verify that a class of unit rotation operators follows a heavy-tailed distribution. They propose hardware pruning for random unitaries and present design strategies for high fidelity and energy efficiency in large-scale quantum computation and photonic deep learning accelerators.

Developing hardware for high-dimensional unitary operators plays a vital role in implementing quantum computations and deep learning acceleration. Programmable photonic circuits are singularly promising candidates for universal unitaries owing to the intrinsic unitarity, ultrafast tunability and energy efficiency of photonic platforms. Nonetheless, as the scale of a photonic circuit increases, the effects of noise on the fidelity of quantum operators and deep learning weight matrices become more severe. Here we demonstrate a nontrivial stochastic property of large-scale programmable photonic circuits, namely heavy-tailed distributions of rotation operators, which enables the development of high-fidelity universal unitaries through designed pruning of superfluous rotations. The power law and the Pareto principle for the conventional architecture of programmable photonic circuits are revealed through the presence of hub phase shifters, allowing network pruning to be applied to the design of photonic hardware. For the Clements design of programmable photonic circuits, we extract a universal architecture for pruning random unitary matrices and prove that "the bad is sometimes better to be removed" to achieve high fidelity and energy efficiency. This result lowers the hurdle for high fidelity in large-scale quantum computing and photonic deep learning accelerators.
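To make the pruning idea in the abstract concrete, the following is a minimal numerical sketch, not the authors' implementation: it decomposes a Haar-random unitary into two-mode Givens rotations using a Reck-style triangular nulling scheme (a simplified stand-in for the Clements rectangular mesh studied in the paper), inspects how many rotation angles are nearly trivial, and applies a naive pruning rule that removes the smallest rotations before reconstruction. The function names (`decompose`, `rebuild`) and the angle-threshold criterion are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: Givens decomposition of a Haar-random unitary,
# inspection of the rotation-angle distribution, and naive angle-threshold pruning.
import numpy as np
from scipy.stats import unitary_group


def decompose(U):
    """Triangularize U with two-mode Givens rotations (Reck-style nulling).
    Returns the rotations (row index, theta, relative phase) and the residual
    diagonal D, so that U = G_1^† G_2^† ... G_K^† diag(D)."""
    N = U.shape[0]
    V = U.astype(complex).copy()
    rotations = []
    for j in range(N - 1):                  # column to clear
        for i in range(N - 1, j, -1):       # null V[i, j] using rows i-1 and i
            a, b = V[i - 1, j], V[i, j]
            theta = np.arctan2(abs(b), abs(a))
            phase = np.exp(1j * (np.angle(a) - np.angle(b))) if abs(b) > 1e-12 else 1.0
            c, s = np.cos(theta), np.sin(theta)
            G = np.eye(N, dtype=complex)
            G[i - 1, i - 1], G[i - 1, i] = c, s * phase
            G[i, i - 1], G[i, i] = -s * np.conj(phase), c
            V = G @ V
            rotations.append((i, theta, phase))
    return rotations, np.diag(V)


def rebuild(N, rotations, D, theta_min=0.0):
    """Reconstruct the unitary, skipping (pruning) rotations with theta < theta_min."""
    M = np.diag(D).astype(complex)
    for i, theta, phase in reversed(rotations):
        if theta < theta_min:
            continue
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(N, dtype=complex)
        G[i - 1, i - 1], G[i - 1, i] = c, s * phase
        G[i, i - 1], G[i, i] = -s * np.conj(phase), c
        M = G.conj().T @ M
    return M


N = 64
U = unitary_group.rvs(N)                    # Haar-random target unitary
rotations, D = decompose(U)
thetas = np.array([t for _, t, _ in rotations])

print("number of rotations:", len(thetas))
print("fraction with theta < 0.1 rad:", np.mean(thetas < 0.1))

U_pruned = rebuild(N, rotations, D, theta_min=0.1)
fidelity = abs(np.trace(U.conj().T @ U_pruned)) / N
print("fidelity after pruning small rotations:", fidelity)
```

Sweeping `theta_min` trades the number of active rotations against reconstruction fidelity, which is the kind of fidelity-versus-hardware trade-off the abstract's pruning argument concerns; the paper's actual analysis is performed on the Clements architecture rather than this simplified scheme.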
Pages: 10