3D-structure-attention graph neural network for crystals and materials

Cited by: 6
Authors
Lin, Xuanjie [1 ,2 ]
Jiang, Hantong [1 ,2 ]
Wang, Liquan [1 ,2 ]
Ren, Yongsheng [3 ,4 ]
Ma, Wenhui [5 ]
Zhan, Shu [1 ,2 ]
Affiliations
[1] Minist Educ, Key Lab Big Data Knowledge Engn, Hefei, Peoples R China
[2] Hefei Univ Technol, Sch Comp & Informat Engn, Hefei, Peoples R China
[3] Natl Engn Lab Vacuum Met, Kunming, Yunnan, Peoples R China
[4] Kunming Univ Sci & Technol, Fac Met & Energy Engn, Kunming, Yunnan, Peoples R China
[5] Puer Univ, Puer, Peoples R China
Keywords
Graph neural network; deep learning for materials science; machine learning; molecular property prediction; LEARNING FRAMEWORK; ATTENTION; DIVERSITY;
DOI
10.1080/00268976.2022.2077258
Chinese Library Classification
O64 [Physical chemistry (theoretical chemistry); chemical physics];
Discipline Classification
070304 ; 081704 ;
Abstract
Machine learning has been widely used in physics and chemistry. As deep learning methods based on graph-domain analysis, graph neural networks (GNNs) have natural advantages in predicting material properties. We find that most existing models focus on the topological relationships between atoms without considering their specific positions. However, the 3D spatial distribution is key to the atomic states and interactions, which have a decisive impact on material properties. Here, we present a 3D-structure-attention graph neural network (3SAGNN) model that introduces an attention mechanism. The model focuses on the critical regions of a material's 3D structure that significantly affect the predicted properties, effectively improving the accuracy of material property prediction. We show that 3SAGNN outperforms prior ML models, such as CGCNN, on a variety of datasets. Our proposed model was tested on datasets of 36,000 inorganic materials, 20,000 Pt nanoclusters, 18,000 porous materials, and 37,000 alloy surface reactions. The experimental results show that 3SAGNN can predict formation energies, total energies, band gaps, and surface catalytic properties more accurately and quickly than density functional theory. Finally, we improve the interpretability of the model through visualisation and show the working mechanism of the network.
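The abstract does not specify the network's equations; as a minimal, hedged sketch of the idea it describes (attention weights over atoms biased by their 3D interatomic distances rather than by graph topology alone), one could write a single message-passing step as scaled dot-product attention with a distance penalty. The function name `structure_attention_layer`, the penalty coefficient `gamma`, and the residual update are illustrative assumptions, not the authors' actual 3SAGNN formulation.

```python
import numpy as np

def structure_attention_layer(h, pos, W_q, W_k, W_v, gamma=1.0):
    """One illustrative 3D-structure-attention message-passing step.

    h:   (N, d) array of node (atom) features
    pos: (N, 3) array of Cartesian atom positions
    Attention logits combine learned query/key similarity with a
    distance penalty, so spatially close atoms get larger weights.
    """
    q, k, v = h @ W_q, h @ W_k, h @ W_v           # linear projections
    d = h.shape[1]
    logits = (q @ k.T) / np.sqrt(d)               # feature similarity
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    logits = logits - gamma * dist                # bias toward near neighbours
    np.fill_diagonal(logits, -np.inf)             # exclude self-attention
    a = np.exp(logits - logits.max(axis=1, keepdims=True))
    a = a / a.sum(axis=1, keepdims=True)          # row-wise softmax
    return h + a @ v                              # residual node update
```

In a full model such a layer would be stacked several times before pooling atom features into a crystal-level representation for property regression.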
Pages: 13