FluPMT: Prediction of Predominant Strains of Influenza A Viruses Via Multi-task Learning

Cited by: 3
Authors
Cai C. [1 ]
Li J. [1 ]
Xia Y. [2 ]
Li W. [1 ]
Affiliations
[1] School of Information Science and Engineering, Yunnan University, Kunming, China
[2] State Key Laboratory for Conservation and Utilization of Bio-Resources in Yunnan, Yunnan University, Kunming, China
Funding
National Natural Science Foundation of China
Keywords
Antigenic distance; Computer viruses; Influenza; Multi-task learning; Predictive models; Predominant strain prediction; Strain; Task analysis; Time series analysis; Vaccines;
DOI
10.1109/TCBB.2024.3378468
Abstract
Seasonal influenza vaccines save numerous lives every year. However, the constant evolution of the influenza A virus necessitates frequent vaccine updates to maintain effectiveness. The decision to develop a new vaccine strain is generally based on an assessment of the currently predominant strains. Because vaccine production and distribution are very time-consuming, new variants can emerge in the interim and reduce vaccine effectiveness; predictions of influenza A virus evolution can therefore inform vaccine evaluation and selection. Hence, we present FluPMT, a novel sequence prediction model that applies an encoder-decoder architecture to predict the hemagglutinin (HA) protein sequence of the upcoming season's predominant strain by capturing the evolutionary patterns of influenza A viruses. Specifically, we model the evolution of influenza A viruses as a time series and use attention mechanisms to capture dependencies among sequence residues. Additionally, antigenic distance prediction based on graph network representation learning is incorporated into the sequence prediction as an auxiliary task through a multi-task learning framework. Experimental results on two influenza datasets demonstrate the strong predictive performance of FluPMT and offer valuable insights into viral evolutionary dynamics, as well as vaccine evaluation and production.
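To make the described architecture concrete, the following is a minimal sketch of how such a multi-task objective could be set up, assuming a PyTorch-style Transformer encoder-decoder. The class and attribute names (FluPMTSketch, residue_head, distance_head) and the auxiliary weight lambda_aux are illustrative assumptions for exposition, not the authors' actual implementation, which the abstract does not detail.

# Minimal sketch of the multi-task setup described in the abstract.
# Assumes PyTorch; class names, heads, and lambda_aux are illustrative.
import torch
import torch.nn as nn

class FluPMTSketch(nn.Module):
    def __init__(self, vocab_size=21, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Encoder-decoder with attention over sequence residues.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        # Main task: per-residue prediction of the next season's HA sequence.
        self.residue_head = nn.Linear(d_model, vocab_size)
        # Auxiliary task: antigenic distance regression.
        self.distance_head = nn.Linear(d_model, 1)

    def forward(self, past_seqs, target_seqs):
        src = self.embed(past_seqs)          # (batch, src_len, d_model)
        tgt = self.embed(target_seqs)        # (batch, tgt_len, d_model)
        hidden = self.transformer(src, tgt)  # (batch, tgt_len, d_model)
        residue_logits = self.residue_head(hidden)
        distance_pred = self.distance_head(hidden.mean(dim=1)).squeeze(-1)
        return residue_logits, distance_pred

def multi_task_loss(residue_logits, target_seqs, distance_pred, distance_true,
                    lambda_aux=0.5):
    # Main loss: cross-entropy over the predicted HA residues.
    seq_loss = nn.functional.cross_entropy(
        residue_logits.reshape(-1, residue_logits.size(-1)),
        target_seqs.reshape(-1),
    )
    # Auxiliary loss: mean squared error on the antigenic distance.
    aux_loss = nn.functional.mse_loss(distance_pred, distance_true)
    return seq_loss + lambda_aux * aux_loss

Combining the per-residue cross-entropy with a weighted antigenic-distance regression term is a standard way to realize an auxiliary task in multi-task learning; the paper's actual loss formulation and its graph-based distance predictor may differ.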
Pages: 1-11
Number of pages: 10