Drug-Target Interaction Prediction Using Multi-Head Self-Attention and Graph Attention Network

Cited by: 192
Authors
Cheng, Zhongjian [1 ]
Yan, Cheng [1 ,2 ]
Wu, Fang-Xiang [3 ]
Wang, Jianxin [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Hunan Prov Key Lab Bioinformat, Changsha 410083, Peoples R China
[2] Qiannan Normal Univ Nationalities, Sch Comp & Informat, Duyun 558000, Guizhou, Peoples R China
[3] Univ Saskatchewan, Dept Mech Engn, Div Biomed Engn, Saskatoon, SK S7N 5A9, Canada
Funding
National Natural Science Foundation of China;
Keywords
Proteins; Drugs; Predictive models; Amino acids; Feature extraction; Compounds; Biological system modeling; Drug-target interactions; multi-head self-attention; graph attention network;
DOI
10.1109/TCBB.2021.3077905
CLC Number
Q5 [Biochemistry];
Discipline Code
071010 ; 081704 ;
Abstract
Identifying drug-target interactions (DTIs) is an important step in new drug discovery and drug repositioning, and accurate DTI predictions can improve the efficiency of drug discovery and development. Although rapid advances in deep learning have produced various computational methods, how to design efficient networks for predicting DTIs remains worth investigating. In this study, we propose an end-to-end deep learning method (called MHSADTI) to predict DTIs based on a graph attention network and a multi-head self-attention mechanism. First, the characteristics of drugs and proteins are extracted by the graph attention network and the multi-head self-attention mechanism, respectively. Then, attention scores are used to determine which amino acid subsequences in a protein are more important to a given drug when predicting their interaction. Finally, after obtaining the feature vectors of drugs and proteins, we predict DTIs with a fully connected layer. MHSADTI takes advantage of the self-attention mechanism to capture long-range contextual dependencies in amino acid sequences and to make DTI predictions interpretable, while the attention mechanism in the graph attention network yields more effective molecular characteristics. Multiple cross-validation experiments are adopted to assess the performance of MHSADTI. Experiments on four datasets (human, C. elegans, DUD-E, and DrugBank) show that our method outperforms the state-of-the-art methods in terms of AUC, Precision, Recall, AUPR, and F1-score. In addition, case studies further demonstrate that our method can provide effective visualizations that interpret the prediction results with biological insight.
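The pipeline the abstract describes (a graph attention layer over the drug molecular graph, multi-head self-attention over amino-acid subsequence embeddings, then a fully connected layer for the interaction score) can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; all shapes, layer sizes, and function names (`gat_layer`, `multi_head_self_attention`) are assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, A, W, a):
    """One graph-attention layer over a drug molecular graph.
    H: (n_atoms, d_in) node features; A: (n_atoms, n_atoms) adjacency
    with self-loops; W: (d_in, d_out); a: (2*d_out,) attention vector."""
    Z = H @ W
    n = Z.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    e = np.where(A > 0, e, -1e9)                # attend only along edges
    return softmax(e, axis=-1) @ Z              # attention-weighted aggregation

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Multi-head self-attention over amino-acid subsequence embeddings.
    X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, sl] @ K[:, sl].T / np.sqrt(d_head)
        heads.append(softmax(scores) @ V[:, sl])
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
d = 8
# toy drug: 4-atom molecular graph with a fully connected adjacency
H = rng.standard_normal((4, d))
A = np.ones((4, 4))
drug = gat_layer(H, A, rng.standard_normal((d, d)) * 0.1,
                 rng.standard_normal(2 * d) * 0.1).mean(axis=0)
# toy protein: 6 subsequence embeddings
X = rng.standard_normal((6, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
protein = multi_head_self_attention(X, Wq, Wk, Wv, n_heads=2).mean(axis=0)
# fully connected layer + sigmoid -> interaction probability
w_fc = rng.standard_normal(2 * d) * 0.1
score = 1.0 / (1.0 + np.exp(-(w_fc @ np.concatenate([drug, protein]))))
print(float(score))
```

The per-head attention weights in `multi_head_self_attention` are what the paper uses for interpretability: rows of the softmaxed score matrix indicate which subsequences each position attends to.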
Pages: 2208-2218
Page count: 11
Related Papers
50 records in total
  • [31] Multi-Head Self-Attention Model for Classification of Temporal Lobe Epilepsy Subtypes
    Gu, Peipei
    Wu, Ting
    Zou, Mingyang
    Pan, Yijie
    Guo, Jiayang
    Xiahou, Jianbing
    Peng, Xueping
    Li, Hailong
    Ma, Junxia
    Zhang, Ling
    FRONTIERS IN PHYSIOLOGY, 2020, 11
  • [32] Deep Bug Triage Model Based on Multi-head Self-attention Mechanism
    Yu, Xu
    Wan, Fayang
    Tang, Bin
    Zhan, Dingjia
    Peng, Qinglong
    Yu, Miao
    Wang, Zhaozhe
    Cui, Shuang
    COMPUTER SUPPORTED COOPERATIVE WORK AND SOCIAL COMPUTING, CHINESECSCW 2021, PT II, 2022, 1492 : 107 - 119
  • [33] Fast Neural Chinese Named Entity Recognition with Multi-head Self-attention
    Qi, Tao
    Wu, Chuhan
    Wu, Fangzhao
    Ge, Suyu
    Liu, Junxin
    Huang, Yongfeng
    Xie, Xing
    KNOWLEDGE GRAPH AND SEMANTIC COMPUTING: KNOWLEDGE COMPUTING AND LANGUAGE UNDERSTANDING, 2019, 1134 : 98 - 110
  • [34] Dual-input ultralight multi-head self-attention learning network for hyperspectral image classification
    Li, Xinhao
    Xu, Mingming
    Liu, Shanwei
    Sheng, Hui
    Wan, Jianhua
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2024, 45 (04) : 1277 - 1303
  • [35] Multi-head self-attention based gated graph convolutional networks for aspect-based sentiment classification
    Xiao, Luwei
    Hu, Xiaohui
    Chen, Yinong
    Xue, Yun
    Chen, Bingliang
    Gu, Donghong
    Tang, Bixia
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (14) : 19051 - 19070
  • [36] MASPP and MWASP: multi-head self-attention based modules for UNet network in melon spot segmentation
    Tran, Khoa-Dang
    Ho, Trang-Thi
    Huang, Yennun
    Le, Nguyen Quoc Khanh
    Tuan, Le Quoc
    Ho, Van Lam
    JOURNAL OF FOOD MEASUREMENT AND CHARACTERIZATION, 2024, 18 (5) : 3935 - 3949
  • [38] Self Multi-Head Attention for Speaker Recognition
    India, Miquel
    Safari, Pooyan
    Hernando, Javier
    INTERSPEECH 2019, 2019, : 4305 - 4309
  • [39] Automatic segmentation of golden pomfret based on fusion of multi-head self-attention and channel-attention mechanism
    Yu, Guoyan
    Luo, Yingtong
    Deng, Ruoling
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2022, 202
  • [40] SMGCN: Multiple Similarity and Multiple Kernel Fusion Based Graph Convolutional Neural Network for Drug-Target Interactions Prediction
    Wang, Wei
    Yu, Mengxue
    Sun, Bin
    Li, Juntao
    Liu, Dong
    Zhang, Hongjun
    Wang, Xianfang
    Zhou, Yun
    IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, 2024, 21 (01) : 143 - 154