Molformer: Motif-Based Transformer on 3D Heterogeneous Molecular Graphs

Times Cited: 0
Authors
Wu, Fang [1,3]
Radev, Dragomir [2]
Li, Stan Z. [1]
Affiliations
[1] Westlake Univ, Sch Engn, Hangzhou, Peoples R China
[2] Yale Univ, Dept Comp Sci, New Haven, CT USA
[3] Tsinghua Univ, Inst AI Ind Res, Beijing, Peoples R China
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4 | 2023
Keywords
DATABASE;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Procuring expressive molecular representations underpins AI-driven molecule design and scientific discovery. Prior research focuses mainly on atom-level homogeneous molecular graphs, ignoring the rich information contained in subgraphs or motifs. However, it is widely accepted that substructures play a dominant role in identifying and determining molecular properties. To address this issue, we formulate heterogeneous molecular graphs (HMGs) and introduce a novel architecture that exploits both molecular motifs and 3D geometry. Specifically, we extract functional groups as motifs for small molecules and employ reinforcement learning to adaptively select quaternary amino acids as motif candidates for proteins. HMGs are then constructed with both atom-level and motif-level nodes. To better accommodate these HMGs, we introduce a Transformer variant named Molformer, which adopts a heterogeneous self-attention layer to distinguish the interactions between multi-level nodes. It is further coupled with a multi-scale mechanism to capture fine-grained local patterns at increasing contextual scales, and an attentive farthest point sampling algorithm is proposed to obtain the molecular representation. We validate Molformer across a broad range of domains, including quantum chemistry, physiology, and biophysics. Extensive experiments show that Molformer outperforms, or achieves performance comparable to, several state-of-the-art baselines. Our work provides a promising way to utilize informative motifs from the perspective of multi-level graph construction. The code is available at https://github.com/smiles724/Molformer.
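To make the heterogeneous self-attention idea concrete, below is a minimal PyTorch sketch based only on the abstract's description, not on the authors' released code: attention scores between node pairs are offset by a learned bias indexed by the (source type, target type) pair, so atom-atom, atom-motif, and motif-motif interactions are treated differently. All names here (HeterogeneousSelfAttention, pair_bias, node_type) are illustrative assumptions.

import torch
import torch.nn as nn

class HeterogeneousSelfAttention(nn.Module):
    def __init__(self, dim: int, num_node_types: int = 2):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        # One learned scalar bias per (source type, target type) pair,
        # letting the layer distinguish multi-level node interactions.
        self.pair_bias = nn.Parameter(torch.zeros(num_node_types, num_node_types))
        self.scale = dim ** -0.5
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, node_type: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; node_type: (N,) with 0 = atom, 1 = motif
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale          # (N, N) scores
        attn = attn + self.pair_bias[node_type][:, node_type]  # type-pair bias
        attn = attn.softmax(dim=-1)
        return self.out(attn @ v)

# Usage on a toy HMG with 5 atom-level and 2 motif-level nodes:
layer = HeterogeneousSelfAttention(dim=16)
x = torch.randn(7, 16)
node_type = torch.tensor([0, 0, 0, 0, 0, 1, 1])
out = layer(x, node_type)  # (7, 16)

A single bias table is the simplest way to type the attention; the paper may instead use separate projections per node-type pair, which this sketch does not attempt to reproduce.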
Pages: 5312-5320
Page count: 9