Positional embeddings and zero-shot learning using BERT for molecular-property prediction

Cited by: 0
Authors
Mswahili, Medard Edmund [1 ]
Hwang, Junha [1 ]
Rajapakse, Jagath C. [2 ]
Jo, Kyuri [1 ]
Jeong, Young-Seob [1 ]
Affiliations
[1] Chungbuk Natl Univ, Dept Comp Engn, Cheongju 28644, South Korea
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
Source
JOURNAL OF CHEMINFORMATICS | 2025, Vol. 17, Issue 1
Keywords
Transformers; BERT; Positional embedding/encoding; Zero-shot learning; Molecular-property prediction; SMILES; DeepSMILES; GRAPH NEURAL-NETWORK; DRUG DISCOVERY; MODELS; ALGORITHM; DATABASE
DOI
10.1186/s13321-025-00959-9
Chinese Library Classification
O6 [Chemistry]
Discipline Code
0703
Abstract
Recently, advancements in cheminformatics, such as representation learning for chemical structures, deep learning (DL) for property prediction, data-driven discovery, and optimization of chemical data handling, have led to increased demand for processing chemical simplified molecular-input line-entry system (SMILES) data, particularly in text-analysis tasks. These advancements have driven the need to optimize components such as positional encodings and positional embeddings (PEs) in transformer models to better capture the sequential and contextual information embedded in molecular representations. SMILES data encode complex relationships among atoms or elements, rendering them critical for various learning tasks in cheminformatics. This study addresses the challenge of encoding those relationships by exploring various PEs within a transformer-based framework to improve the accuracy and generalization of molecular-property prediction. The success of transformer-based models, such as bidirectional encoder representations from transformers (BERT), in natural language processing has sparked growing interest in cheminformatics. However, the performance of these models during pretraining and fine-tuning is significantly influenced by positional information, which helps capture the intricate relationships within sequences. Integrating position information into transformer architectures has therefore emerged as a promising approach: the encoding mechanism provides essential supervision for modeling dependencies among elements situated at different positions within a given sequence. In this study, we first conduct pretraining experiments with various PEs to explore diverse methodologies for incorporating positional information into a BERT model for chemical text analysis of SMILES strings. Next, for each PE, we fine-tune the best-performing BERT model (pretrained via masked language modeling) on downstream molecular-property prediction tasks. We use two molecular representations, SMILES and DeepSMILES, to comprehensively assess the potential and limitations of the PEs in zero-shot learning analysis, demonstrating the model's proficiency in predicting properties of unseen molecular representations on both newly proposed and existing datasets.
Scientific contribution
This study explores the previously untapped potential of PEs in BERT models for molecular-property prediction. It involves pretraining and fine-tuning BERT models on datasets covering COVID-19, bioassay data, and other molecular and biological properties, using SMILES and DeepSMILES representations. The study details the pretraining architecture, the fine-tuning datasets, and the performance of the BERT model with different PEs; it also presents a zero-shot learning analysis and the model's performance on various classification and regression tasks. Newly proposed datasets from different domains were introduced during fine-tuning, in addition to existing, commonly used benchmarks. The results highlight the robustness of the BERT model in predicting chemical properties and its potential applications in cheminformatics and bioinformatics.
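As a concrete illustration of the kind of positional information the study varies, the sketch below computes the classic fixed sinusoidal positional encoding from the original transformer paper for a toy SMILES token sequence. This is not the authors' code: the character-level tokenizer and the model width d_model=16 are illustrative assumptions (real chemical tokenizers also handle multi-character atoms such as Cl or Br).

```python
# A minimal sketch of fixed sinusoidal positional encodings
# (Vaswani et al.), applied to a toy SMILES token sequence.
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of fixed sinusoidal PEs."""
    positions = np.arange(seq_len)[:, np.newaxis]           # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]                # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                   # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])                   # odd dims: cosine
    return pe

# Character-level tokenization for illustration only.
smiles = "c1ccccc1C(=O)O"            # benzoic acid in aromatic SMILES
tokens = list(smiles)
pe = sinusoidal_positional_encoding(len(tokens), d_model=16)
print(pe.shape)                      # (14, 16): one PE vector per position
```

Learned absolute PEs, as used in the original BERT, replace this fixed matrix with a trainable embedding table of the same shape; relative schemes instead inject position information into the attention scores. That design space is what the study's pretraining experiments sweep.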
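The study's second input representation, DeepSMILES, can be produced from standard SMILES with the open-source deepsmiles package by O'Boyle and Dalke; the snippet below is a usage sketch, and whether the authors used this exact package is an assumption.

```python
# A minimal SMILES -> DeepSMILES conversion sketch using the
# open-source `deepsmiles` package (an assumed toolkit choice).
import deepsmiles

# rings=True and branches=True enable both DeepSMILES rewrites:
# ring-closure digit pairs become a single trailing ring-size digit,
# and matched parentheses become closing parentheses only.
converter = deepsmiles.Converter(rings=True, branches=True)

smiles = "c1ccccc1"                  # benzene in standard SMILES
encoded = converter.encode(smiles)   # -> "cccccc6" (ring size 6)
print(encoded)

# The conversion is reversible, so models can train on either form.
roundtrip = converter.decode(encoded)
print(roundtrip)                     # -> "c1ccccc1" (or an equivalent SMILES)
```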
Pages: 22