Sequence Alignment with Q-Learning Based on the Actor-Critic Model

Cited by: 0
Authors
Li, Yarong [1 ]
Affiliation
[1] Beijing Normal Univ, Expt High Sch, Beijing 10000, Peoples R China
Keywords
Sequence alignment; reinforcement learning; Q-learning; Actor-Critic model; PROTEIN SECONDARY STRUCTURE; MULTIPLE
DOI
10.1145/3433540
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multiple sequence alignment methods refer to a family of algorithmic solutions for aligning evolutionarily related sequences while accounting for evolutionary events such as mutations, insertions, deletions, and rearrangements under certain conditions. In this article, we propose a Q-learning method based on the Actor-Critic model for sequence alignment. We cast the sequence alignment problem as an agent's autonomous learning process, in which the agent evaluates the reward of each possible next action and accumulates the reward over the entire alignment process. The results show that the proposed method outperforms the genetic algorithm and the dynamic programming method.
Pages: 7
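
To make the reinforcement-learning framing in the abstract concrete, the following Python sketch shows one common way to cast pairwise alignment as an agent walking the (i, j) alignment grid using plain tabular Q-learning. It is an illustration only: the match/mismatch/gap rewards, hyperparameters, and function names are assumptions, and it omits the Actor-Critic component that the paper describes; it is not the authors' implementation.

    # Minimal, illustrative sketch: pairwise sequence alignment cast as
    # tabular Q-learning. NOT the authors' method; rewards and
    # hyperparameters below are assumed values for illustration.
    import random
    from collections import defaultdict

    MATCH, MISMATCH, GAP = 2.0, -1.0, -2.0   # assumed scoring scheme
    ACTIONS = [(1, 1), (1, 0), (0, 1)]       # diagonal (align), down (gap), right (gap)


    def step_reward(a, b, i, j, di, dj):
        """Reward for moving from grid cell (i, j) by (di, dj)."""
        if di == 1 and dj == 1:              # align a[i] with b[j]
            return MATCH if a[i] == b[j] else MISMATCH
        return GAP                           # any other move inserts a gap


    def legal_moves(a, b, i, j):
        """Moves that stay inside the (len(a)+1) x (len(b)+1) grid."""
        return [(di, dj) for di, dj in ACTIONS
                if i + di <= len(a) and j + dj <= len(b)]


    def q_learning_align(a, b, episodes=5000, alpha=0.1, gamma=0.95, eps=0.2):
        """Learn Q-values over the alignment grid with epsilon-greedy episodes."""
        Q = defaultdict(float)               # Q[(state, action)] -> value
        goal = (len(a), len(b))
        for _ in range(episodes):
            state = (0, 0)
            while state != goal:
                i, j = state
                moves = legal_moves(a, b, i, j)
                action = (random.choice(moves) if random.random() < eps
                          else max(moves, key=lambda m: Q[(state, m)]))
                di, dj = action
                r = step_reward(a, b, i, j, di, dj)
                nxt = (i + di, j + dj)
                future = (0.0 if nxt == goal
                          else max(Q[(nxt, m)] for m in legal_moves(a, b, *nxt)))
                Q[(state, action)] += alpha * (r + gamma * future - Q[(state, action)])
                state = nxt
        return Q


    def greedy_alignment(a, b, Q):
        """Follow the greedy policy from (0, 0) to recover an alignment."""
        i = j = 0
        row_a, row_b = [], []
        while (i, j) != (len(a), len(b)):
            di, dj = max(legal_moves(a, b, i, j), key=lambda m: Q[((i, j), m)])
            row_a.append(a[i] if di else "-")
            row_b.append(b[j] if dj else "-")
            i, j = i + di, j + dj
        return "".join(row_a), "".join(row_b)


    if __name__ == "__main__":
        s1, s2 = "GATTACA", "GCATGCU"
        Q = q_learning_align(s1, s2)
        aligned_1, aligned_2 = greedy_alignment(s1, s2, Q)
        print(aligned_1)
        print(aligned_2)

A tabular Q-function like this scales poorly with sequence length; presumably the paper's Actor-Critic component addresses that with learned policy and value approximators, which this sketch does not attempt.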