Sequence Alignment with Q-Learning Based on the Actor-Critic Model

Cited: 0
Authors
Li, Yarong [1]
Affiliations
[1] Beijing Normal Univ, Expt High Sch, Beijing 10000, Peoples R China
Keywords
Sequence alignment; reinforcement learning; Q-learning; Actor-Critic model; PROTEIN SECONDARY STRUCTURE; MULTIPLE
DOI
10.1145/3433540
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multiple sequence alignment methods comprise a family of algorithms that align evolutionarily related sequences while accounting for evolutionary events such as mutations, insertions, deletions, and rearrangements under given conditions. In this article, we propose a sequence alignment method that combines Q-learning with the Actor-Critic model. We cast sequence alignment as an agent's autonomous learning process: at each step, the agent evaluates the reward of each possible next action, and the cumulative reward over the entire alignment is maximized. The results show that the proposed method outperforms both the genetic algorithm and the dynamic programming method.
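The record contains no source code. As an illustration of the agent formulation the abstract describes, the following is a minimal sketch of tabular Q-learning for pairwise alignment; the Actor-Critic refinement of the paper is omitted. The reward values (+2 match, -1 mismatch, -1 gap), the function name, and all parameters are assumptions for illustration, not taken from the paper.

```python
import random
from collections import defaultdict

def q_learn_align(a, b, episodes=1000, alpha=0.1, gamma=0.95, eps=0.2,
                  match=2.0, mismatch=-1.0, gap=-1.0, seed=0):
    """Tabular Q-learning sketch for pairwise alignment (assumed rewards).

    State: (i, j) = number of characters of a and b consumed so far.
    Actions: 0 = align a[i] with b[j], 1 = gap in b (advance i only),
             2 = gap in a (advance j only).
    """
    rng = random.Random(seed)
    Q = defaultdict(lambda: [0.0, 0.0, 0.0])  # Q[(i, j)][action]
    end = (len(a), len(b))

    def legal(i, j):
        acts = []
        if i < len(a) and j < len(b):
            acts.append(0)
        if i < len(a):
            acts.append(1)
        if j < len(b):
            acts.append(2)
        return acts

    def reward(i, j, act):
        if act == 0:
            return match if a[i] == b[j] else mismatch
        return gap

    def step(i, j, act):
        if act == 0:
            return i + 1, j + 1
        return (i + 1, j) if act == 1 else (i, j + 1)

    for _ in range(episodes):
        i = j = 0
        while (i, j) != end:
            acts = legal(i, j)
            # Epsilon-greedy choice among legal actions.
            if rng.random() < eps:
                act = rng.choice(acts)
            else:
                act = max(acts, key=lambda k: Q[(i, j)][k])
            r = reward(i, j, act)
            ni, nj = step(i, j, act)
            nxt = 0.0 if (ni, nj) == end else max(Q[(ni, nj)][k]
                                                  for k in legal(ni, nj))
            # Standard Q-learning update toward the bootstrapped target.
            Q[(i, j)][act] += alpha * (r + gamma * nxt - Q[(i, j)][act])
            i, j = ni, nj

    # Greedy rollout of the learned policy to build the alignment strings.
    out_a, out_b, total = [], [], 0.0
    i = j = 0
    while (i, j) != end:
        act = max(legal(i, j), key=lambda k: Q[(i, j)][k])
        total += reward(i, j, act)
        if act == 0:
            out_a.append(a[i]); out_b.append(b[j])
        elif act == 1:
            out_a.append(a[i]); out_b.append('-')
        else:
            out_a.append('-'); out_b.append(b[j])
        i, j = step(i, j, act)
    return ''.join(out_a), ''.join(out_b), total
```

The state space here is only (len(a)+1) x (len(b)+1), so a lookup table suffices; the Actor-Critic model in the paper would replace the greedy max over Q-values with a learned policy (actor) whose updates are guided by a value estimate (critic).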
Pages: 7
Related Papers (50 total)
  • [11] Research on actor-critic reinforcement learning in RoboCup
    Guo, He
    Liu, Tianying
    Wang, Yuxin
    Chen, Feng
    Fan, Jianming
    WCICA 2006: SIXTH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-12, CONFERENCE PROCEEDINGS, 2006, : 205 - 205
  • [12] An actor-critic learning framework based on Lyapunov stability for automatic assembly
    Li, Xinwang
    Xiao, Juliang
    Cheng, Yu
    Liu, Haitao
    APPLIED INTELLIGENCE, 2023, 53 (04) : 4801 - 4812
  • [13] TRANSFER LEARNING BASED ON FORBIDDEN RULE SET IN ACTOR-CRITIC METHOD
    Takano, Toshiaki
    Takase, Haruhiko
    Kawanaka, Hiroharu
    Kita, Hidehiko
    Hayashi, Terumine
    Tsuruoka, Shinji
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2011, 7 (5B): : 2907 - 2917
  • [14] Merging with Extraction Method for Transfer Learning in Actor-Critic
    Takano, Toshiaki
    Takase, Haruhiko
    Kawanaka, Hiroharu
    Tsuruoka, Shinji
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2011, 15 (07) : 814 - 821
  • [16] Actor-Critic Reinforcement Learning for Control With Stability Guarantee
    Han, Minghao
    Zhang, Lixian
    Wang, Jun
    Pan, Wei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (04) : 6217 - 6224
  • [17] Fast Learning in an Actor-Critic Architecture with Reward and Punishment
    Balkenius, Christian
    Winberg, Stefan
    TENTH SCANDINAVIAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2008, 173 : 20 - 27
  • [18] MARS: Malleable Actor-Critic Reinforcement Learning Scheduler
    Baheri, Betis
    Tronge, Jacob
    Fang, Bo
    Li, Ang
    Chaudhary, Vipin
    Guan, Qiang
    2022 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE, IPCCC, 2022,
  • [19] Influences of Reinforcement and Choice Histories on Choice Behavior in Actor-Critic Learning
    Katahira, K.
    Kimura, K.
    Computational Brain & Behavior, 2023, 6 (2) : 172 - 194
  • [20] An Actor-critic Reinforcement Learning Model for Optimal Bidding in Online Display Advertising
    Yuan, Congde
    Guo, Mengzhuo
    Xiang, Chaoneng
    Wang, Shuangyang
    Song, Guoqing
    Zhang, Qingpeng
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3604 - 3613