De novo drug design based on Stack-RNN with multi-objective reward-weighted sum and reinforcement learning

Cited by: 10
Authors
Hu, Pengwei [1 ,2 ]
Zou, Jinping [1 ,2 ]
Yu, Jialin [1 ,2 ]
Shi, Shaoping [1 ,2 ]
Affiliations
[1] Nanchang Univ, Sch Math & Comp Sci, Dept Math, Nanchang 330031, Peoples R China
[2] Nanchang Univ, Inst Math & Interdisciplinary Sci, Nanchang 330031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; De novo drug; Multi-objective; Drug design; Molecular generation; NEURAL-NETWORKS; VALIDATION; RECEPTORS; MOLECULES;
DOI
10.1007/s00894-023-05523-6
Chinese Library Classification
Q5 [Biochemistry]; Q7 [Molecular Biology];
Subject classification codes
071010; 081704;
Abstract
Context: In recent decades, drug development has become increasingly important as new diseases have emerged. However, drug discovery is a long, complex process with a very low success rate, and methods are needed to improve its efficiency and reduce the likelihood of failure. De novo drug design, in which molecules are generated from scratch, has become a promising approach because it reduces reliance on trial and error and on prefabricated molecular repositories; however, optimizing the properties of the generated molecules remains a challenging multi-objective optimization problem.
Methods: In this study, two stack-augmented recurrent neural networks composed a generative model for producing drug-like molecules, and reinforcement learning was then used to optimize the generated molecules toward desirable properties, such as binding affinity and logP (the logarithm of the octanol-water partition coefficient). In addition, a memory storage network was added to increase the internal diversity of the generated molecules. For multi-objective optimization, we proposed a new approach that uses the magnitudes of the different attribute reward values to assign weights during molecular optimization. The proposed model not only avoids the problem of generated molecules becoming extremely biased toward one attribute when attributes conflict, but also improves multiple properties of the generated molecules compared with the traditional weighted sum and the alternating weighted sum: molecular validity reaches 97.3%, internal diversity reaches 0.8613, and the proportion of desirable molecules increases from 55.9% to 92%.
Pages: 12
Related papers
50 records in total
  • [21] Hypervolume-Based Multi-Objective Reinforcement Learning
    Van Moffaert, Kristof
    Drugan, Madalina M.
    Nowe, Ann
    EVOLUTIONARY MULTI-CRITERION OPTIMIZATION, EMO 2013, 2013, 7811 : 352 - 366
  • [22] Emotion Regulation Based on Multi-objective Weighted Reinforcement Learning for Human-robot Interaction
    Hao, Man
    Cao, Weihua
    Liu, Zhentao
    Wu, Min
    Yuan, Yan
    2019 12TH ASIAN CONTROL CONFERENCE (ASCC), 2019, : 1402 - 1406
  • [23] Multi-Objective Molecular De Novo Design by Adaptive Fragment Prioritization
    Reutlinger, Michael
    Rodrigues, Tiago
    Schneider, Petra
    Schneider, Gisbert
    ANGEWANDTE CHEMIE-INTERNATIONAL EDITION, 2014, 53 (16) : 4244 - 4248
  • [24] Scalarized Multi-Objective Reinforcement Learning: Novel Design Techniques
    Van Moffaert, Kristof
    Drugan, Madalina M.
    Nowe, Ann
    PROCEEDINGS OF THE 2013 IEEE SYMPOSIUM ON ADAPTIVE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING (ADPRL), 2013, : 191 - 199
  • [25] Multi-objective de novo drug design using evolutionary graphs
    Nicolaou, Christos A.
    Pattichis, C. S.
    CHEMISTRY CENTRAL JOURNAL, 2 (Suppl 1)
  • [26] Deep Reinforcement Learning for Multiparameter Optimization in de novo Drug Design
    Stahl, Niclas
    Falkman, Goran
    Karlsson, Alexander
    Mathiason, Gunnar
    Bostrom, Jonas
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2019, 59 (07) : 3166 - 3176
  • [27] Multi-Objective Reinforcement Learning Based on Decomposition: A Taxonomy and Framework
    Felten, Florian
    Talbi, El-Ghazali
    Danoy, Gregoire
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2024, 79 : 679 - 723
  • [28] Multi-objective path planning based on deep reinforcement learning
    Xu, Jian
    Huang, Fei
    Cui, Yunfei
    Du, Xue
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 3273 - 3279
  • [29] Virtual machine placement based on multi-objective reinforcement learning
    Qin, Yao
    Wang, Hua
    Yi, Shanwen
    Li, Xiaole
    Zhai, Linbo
    APPLIED INTELLIGENCE, 2020, 50 : 2370 - 2383
  • [30] An XCS-based Algorithm for Multi-Objective Reinforcement Learning
    Cheng, Xiu
    Chen, Gang
    Zhang, Mengjie
    2016 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2016, : 4007 - 4014