Distilling dynamical knowledge from stochastic reaction networks

Cited by: 4
Authors
Liu, Chuanbo [1]
Wang, Jin [2,3]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Appl Chem, State Key Lab Electroanalyt Chem, Changchun 130022, Jilin, Peoples R China
[2] Univ Chinese Acad Sci, Wenzhou Inst, Ctr Theoret Interdisciplinary Sci, Wenzhou 325001, Zhejiang, Peoples R China
[3] SUNY Stony Brook, Dept Chem & Phys & Astron, Stony Brook, NY 11794 USA
Keywords
knowledge distillation; machine learning; stochastic reaction networks
DOI
10.1073/pnas.2317422121
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline classification codes
07; 0710; 09
Abstract
Stochastic reaction networks are widely used in the modeling of stochastic systems across diverse domains such as biology, chemistry, physics, and ecology. However, the comprehension of the dynamic behaviors inherent in stochastic reaction networks is a formidable undertaking, primarily due to the exponential growth in the number of possible states or trajectories as the state space dimension increases. In this study, we introduce a knowledge distillation method based on reinforcement learning principles, aimed at compressing the dynamical knowledge encoded in stochastic reaction networks into a singular neural network construct. The trained neural network possesses the capability to accurately predict the state conditional joint probability distribution that corresponds to the given query contexts, when prompted with rate parameters, initial conditions, and time values. This obviates the need to track the dynamical process, enabling the direct estimation of normalized state and trajectory probabilities, without necessitating the integration over the complete state space. By applying our method to representative examples, we have observed a high degree of accuracy in both multimodal and high-dimensional systems. Additionally, the trained neural network can serve as a foundational model for developing efficient algorithms for parameter inference and trajectory ensemble generation. These results collectively underscore the efficacy of our approach as a universal means of distilling knowledge from stochastic reaction networks. Importantly, our methodology also spotlights the potential utility in harnessing a singular, pretrained, large-scale model to encapsulate the solution space underpinning a wide spectrum of stochastic dynamical systems.
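The teacher-student idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation, which conditions on rate parameters, initial conditions, and time and uses reinforcement-learning-based training): a Gillespie simulator of a birth-death reaction network acts as the teacher, and a small softmax network is trained on the resulting empirical state distributions so that a single forward pass returns a normalized distribution for a queried rate. The network size, rate values, and training loop below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_birth_death(k_prod, k_deg, x0, t_end):
    # Gillespie SSA for the birth-death network: 0 -> X (rate k_prod), X -> 0 (rate k_deg * x).
    x, t = x0, 0.0
    while True:
        a0 = k_prod + k_deg * x          # total propensity (always > 0 since k_prod > 0)
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        x += 1 if rng.random() * a0 < k_prod else -1

S = 40                                    # truncated state space {0, ..., S-1}
params = np.array([3.0, 6.0, 9.0, 12.0])  # production rates used as query contexts
K_DEG, T_END, N_TRAJ = 1.0, 8.0, 2000

# Teacher: empirical state distributions sampled from the reaction network.
targets = np.zeros((len(params), S))
for i, k in enumerate(params):
    samples = [ssa_birth_death(k, K_DEG, 0, T_END) for _ in range(N_TRAJ)]
    targets[i] = np.bincount(np.clip(samples, 0, S - 1), minlength=S) / N_TRAJ

# Student: one-hidden-layer network mapping a rate to a normalized distribution.
H = 32
W1, b1 = rng.normal(0.0, 0.3, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0.0, 0.3, (H, S)), np.zeros(S)
x_in = (params[:, None] - params.mean()) / params.std()   # normalized inputs

def forward(x):
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))          # stable softmax
    return h, p / p.sum(axis=1, keepdims=True)

lr = 0.3
for _ in range(6000):                     # full-batch gradient descent on cross-entropy
    h, p = forward(x_in)
    g_z = (p - targets) / len(params)     # d(loss)/d(logits) for softmax + cross-entropy
    g_h = (g_z @ W2.T) * (1.0 - h ** 2)   # backprop through tanh (before updating W2)
    W2 -= lr * h.T @ g_z
    b2 -= lr * g_z.sum(axis=0)
    W1 -= lr * x_in.T @ g_h
    b1 -= lr * g_h.sum(axis=0)

# Query the distilled model: one forward pass, no trajectory tracking.
_, p_hat = forward(np.array([[(9.0 - params.mean()) / params.std()]]))
mean_hat = float(np.arange(S) @ p_hat[0])  # stationary mean of the true process is k_prod/k_deg = 9
```

Because the student ends in a softmax, its output is normalized by construction, mirroring the abstract's point that probabilities come out directly without integrating over the full state space.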
Pages: 11
Cited References
47 references in total
[1] Anderson D.F. Stochastic Analysis of Biochemical Systems. 2015.
[2] Brown T.B. Advances in Neural Information Processing Systems, 2020, Vol. 33.
[3] Cao D., Parker R. Computational modeling of eukaryotic mRNA turnover. RNA, 2001, 7(9): 1192-1212.
[4] Cao Y., Terebus A., Liang J. Accurate chemical master equation solution using multi-finite buffers. Multiscale Modeling & Simulation, 2016, 14(2): 923-963.
[5] Cao Y., Terebus A., Liang J. State space truncation with quantified errors for accurate solutions to discrete chemical master equation. Bulletin of Mathematical Biology, 2016, 78(4): 617-661.
[6] Caron M., Touvron H., Misra I., Jegou H., Mairal J., Bojanowski P., Joulin A. Emerging properties in self-supervised vision transformers. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 9630-9640.
[7] Chen T.Q. arXiv:1511.05641, 2016.
[8] Cheng Y., Wang D., Zhou P., Zhang T. Model compression and acceleration for deep neural networks: the principles, progress, and challenges. IEEE Signal Processing Magazine, 2018, 35(1): 126-136.
[9] Davis C.N., Hollingsworth T.D., Caudron Q., Irvine M.A. The use of mixture density networks in the emulation of complex epidemiological individual-based models. PLOS Computational Biology, 2020, 16(3).
[10] Delbruck M. J. Chem. Phys., 1940, 8: 120. DOI: 10.1063/1.1750549.