Novel Fine-Tuning Strategy on Pre-trained Protein Model Enhances ACP Functional Type Classification

Times cited: 0
Authors
Wang, Shaokai [1 ]
Ma, Bin [1 ]
Affiliations
[1] Univ Waterloo, David R Cheriton Sch Comp Sci, Waterloo, ON, Canada
Source
BIOINFORMATICS RESEARCH AND APPLICATIONS, PT I, ISBRA 2024 | 2024 / Vol. 14954
Keywords
Anti-cancer Peptide; Pre-training; Fine-tuning; LANGUAGE; PEPTIDE;
DOI
10.1007/978-981-97-5128-0_30
CLC classification
TP18 (Theory of artificial intelligence)
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Cancer remains one of the most formidable health challenges globally. Anti-cancer peptides (ACPs) have recently emerged as a promising therapeutic strategy, recognized for their targeted and efficient anti-cancer properties. To fully realize the potential of ACPs, computational methods that can accurately predict their functional types are indispensable. Leveraging a pre-trained protein sequence model, we present ACP-FT, a model fine-tuned specifically for predicting the functional types of ACPs. Employing a novel fine-tuning approach alongside an adversarial training technique, our model surpasses existing methods in classification performance on two public datasets. We also provide a thorough analysis of the effectiveness of our training strategy. The experimental results demonstrate that our two-step fine-tuning approach effectively prevents catastrophic forgetting in the pre-trained model, while adversarial training enhances the model's robustness. Together, these techniques significantly increase the accuracy of ACP functional type predictions.
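The abstract describes a two-step fine-tuning schedule combined with adversarial training, but the record contains no implementation details. The Python/PyTorch sketch below shows one common way such a schedule can be realised, assuming the two steps are (1) training a new classification head on a frozen pre-trained backbone and (2) unfreezing the backbone and fine-tuning end to end with an FGM-style perturbation of the embedding layer. All names and hyperparameters here (ACPTypeClassifier, train_two_step, epsilon, learning rates) are illustrative assumptions, not the authors' actual code, and the paper's procedure may differ.

# Hypothetical sketch only; not the authors' implementation.
import torch
import torch.nn as nn


class ACPTypeClassifier(nn.Module):
    """Pre-trained protein encoder plus a new classification head."""

    def __init__(self, backbone: nn.Module, hidden_dim: int, num_types: int):
        super().__init__()
        self.backbone = backbone                  # pre-trained protein language model
        self.head = nn.Linear(hidden_dim, num_types)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        reps = self.backbone(tokens)              # assumed: (batch, hidden_dim) sequence embeddings
        return self.head(reps)


def train_two_step(model, loader, embed_layer, head_epochs=2, full_epochs=10,
                   lr=1e-4, epsilon=1e-2):
    criterion = nn.CrossEntropyLoss()

    # Step 1: freeze the backbone and train only the randomly initialised head,
    # so noisy head gradients cannot overwrite the pre-trained weights
    # (the catastrophic-forgetting concern raised in the abstract).
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.AdamW(model.head.parameters(), lr=lr)
    for _ in range(head_epochs):
        for tokens, labels in loader:
            opt.zero_grad()
            criterion(model(tokens), labels).backward()
            opt.step()

    # Step 2: unfreeze everything and fine-tune end to end with a smaller
    # learning rate, adding an FGM-style adversarial pass on the embeddings.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.AdamW(model.parameters(), lr=lr / 10)
    for _ in range(full_epochs):
        for tokens, labels in loader:
            opt.zero_grad()
            criterion(model(tokens), labels).backward()   # clean-loss gradients

            # Perturb the embedding weights along their gradient direction (FGM).
            grad = embed_layer.weight.grad
            delta = None
            if grad is not None and grad.norm() > 0:
                delta = epsilon * grad / grad.norm()
                embed_layer.weight.data.add_(delta)

            criterion(model(tokens), labels).backward()   # accumulate adversarial gradients

            if delta is not None:
                embed_layer.weight.data.sub_(delta)       # restore embeddings before the update
            opt.step()

Passing the backbone's token-embedding module as embed_layer keeps the perturbation confined to the input representation, which is the usual FGM recipe for sequence models; other adversarial schemes (e.g. PGD on hidden states) would fit the same loop with minor changes.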
Pages: 371-382
Page count: 12
Related papers
50 records in total
  • [31] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models
    Liu, Xueqing
    Wang, Chi
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2286 - 2300
  • [32] Fine-tuning pre-trained voice conversion model for adding new target speakers with limited data
    Koshizuka, Takeshi
    Ohmura, Hidefumi
    Katsurada, Kouichi
    INTERSPEECH 2021, 2021, : 1339 - 1343
  • [33] Poster: Attempts on detecting Alzheimer's disease by fine-tuning pre-trained model with Gaze Data
    Nagasawa, Junichi
    Nakata, Yuichi
    Hiroe, Mamoru
    Zheng, Yujia
    Kawaguchi, Yutaka
    Maegawa, Yuji
    Hojo, Naoki
    Takiguchi, Tetsuya
    Nakayama, Minoru
    Uchimura, Maki
    Sonoda, Yuma
    Kowa, Hisatomo
    Nagamatsu, Takashi
    PROCEEDINGS OF THE 2024 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, ETRA 2024, 2024,
  • [34] Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation
    Tayaranian, Mohammadreza
    Ghaffari, Alireza
    Tahaei, Marzieh S.
    Rezagholizadeh, Mehdi
    Asgharian, Masoud
    Nia, Vahid Partovi
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1912 - 1921
  • [35] Efficient Fine-Tuning for Low-Resource Tibetan Pre-trained Language Models
    Zhou, Mingjun
    Daiqing, Zhuoma
    Qun, Nuo
    Nyima, Tashi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VII, 2024, 15022 : 410 - 422
  • [36] Fine-Tuning BERT-Based Pre-Trained Models for Arabic Dependency Parsing
    Al-Ghamdi, Sharefah
    Al-Khalifa, Hend
    Al-Salman, Abdulmalik
    APPLIED SCIENCES-BASEL, 2023, 13 (07):
  • [37] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction
    Alt, Christoph
    Huebner, Marc
    Hennig, Leonhard
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 1388 - 1398
  • [38] Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts
    Jiang, Gangwei
    Jiang, Caigao
    Xue, Siqiao
    Zhang, James Y.
    Zhou, Jun
    Lian, Defu
    Wei, Ying
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 12081 - 12095
  • [39] Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models
    Zhou, Kun
    Zhao, Wayne Xin
    Wang, Sirui
    Zhang, Fuzheng
    Wu, Wei
    Wen, Ji-Rong
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3875 - 3887
  • [40] Disfluencies and Fine-Tuning Pre-trained Language Models for Detection of Alzheimer's Disease
    Yuan, Jiahong
    Bian, Yuchen
    Cai, Xingyu
    Huang, Jiaji
    Ye, Zheng
    Church, Kenneth
    INTERSPEECH 2020, 2020, : 2162 - 2166