Detection of Entangled States Supported by Reinforcement Learning

Cited: 5
Authors
Cao, Jia-Hao [1 ]
Chen, Feng [1 ]
Liu, Qi [1 ,6 ]
Mao, Tian-Wei [1 ]
Xu, Wen-Xin [1 ]
Wu, Ling-Na [2 ,3 ]
You, Li [1 ,4 ,5 ]
Affiliations
[1] Tsinghua Univ, Dept Phys, State Key Lab Low Dimens Quantum Phys, Beijing 100084, Peoples R China
[2] Hainan Univ, Ctr Theoret Phys, Haikou 570228, Peoples R China
[3] Hainan Univ, Sch Sci, Haikou 570228, Peoples R China
[4] Frontier Sci Ctr Quantum Informat, Beijing, Peoples R China
[5] Beijing Acad Quantum Informat Sci, Beijing 100193, Peoples R China
[6] Sorbonne Univ, ENS PSL Univ, Coll France, Lab Kastler Brossel,CNRS, Paris, France
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Compendex;
DOI
10.1103/PhysRevLett.131.073201
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
Discrimination of entangled states is an important element of quantum-enhanced metrology and typically requires low-noise detection technology. This challenge can be circumvented by introducing a nonlinear readout process. Traditionally, this is realized by reversing the very dynamics that generates the entangled state, which requires full control over the system evolution. In this Letter, we present nonlinear readout of highly entangled states by employing reinforcement learning to manipulate the spin-mixing dynamics in a spin-1 atomic condensate. The protocol found by reinforcement learning drives the system toward an unstable fixed point, whereby the (to-be-sensed) phase perturbation is amplified by the subsequent spin-mixing dynamics. Working with a condensate of 10 900 ⁸⁷Rb atoms, we achieve a metrological gain of 6.97 (+1.30/−1.38) dB beyond the classical precision limit. Our work opens up new possibilities for unlocking the full potential of entanglement-enabled quantum enhancement in experiments.
Pages: 7