A Crystal Knowledge-Enhanced Pre-training Framework for Crystal Property Estimation

Cited by: 0
Authors
Yu, Haomin [1 ]
Song, Yanru [2 ]
Hu, Jilin [2 ]
Guo, Chenjuan [2 ]
Yang, Bin [2 ]
Jensen, Christian S. [1 ]
Affiliations
[1] Aalborg Univ, Aalborg, Denmark
[2] East China Normal Univ, Shanghai, Peoples R China
Source
MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES - APPLIED DATA SCIENCE TRACK, PT X, ECML PKDD 2024 | 2024 / Vol. 14950
Keywords
Crystal property; Pre-training; Knowledge-enhanced; NETWORKS
DOI
10.1007/978-3-031-70381-2_15
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The design of new crystalline materials, or simply crystals, with desired properties relies on the ability to estimate the properties of crystals based on their structure. To advance machine learning (ML) based property estimation, we address two key limitations. First, creating labeled data for training entails time-consuming laboratory experiments and physical simulations, yielding a shortage of such data. To reduce the need for labeled training data, we propose a pre-training framework that adopts a mutually exclusive mask strategy, enabling models to discern underlying patterns. Second, crystal structures obey physical principles. To exploit the principle of periodic invariance, we propose multi-graph attention (MGA) and crystal knowledge-enhanced (CKE) modules. The MGA module considers different types of multi-graph edges to capture complex structural patterns. The CKE module incorporates periodic attribute learning and atom-type contrastive learning by explicitly introducing crystal knowledge to enhance crystal representation learning. We integrate these modules in a CRystal knOwledge-enhanced Pre-training (CROP) framework. Experiments on eight different datasets show that CROP achieves promising estimation performance and outperforms strong baselines.
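Two of the ingredients named in the abstract lend themselves to a concrete illustration: the mutually exclusive mask strategy used during pre-training and the atom-type contrastive learning inside the CKE module. The record does not include the authors' code, so the sketch below is a minimal, hypothetical PyTorch rendering under common conventions; all function names, tensor shapes, the temperature value, and the InfoNCE-style loss are illustrative assumptions, not CROP's actual implementation.

```python
# Hypothetical sketch of two ideas from the abstract, in plain PyTorch.
# Not the authors' code: names, shapes, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def mutually_exclusive_masks(num_nodes: int, num_views: int = 2):
    """Partition atom indices into disjoint (mutually exclusive) mask sets,
    so each pre-training view hides a different subset of atoms."""
    perm = torch.randperm(num_nodes)
    return [perm[i::num_views] for i in range(num_views)]  # disjoint sets


def atom_type_contrastive_loss(z: torch.Tensor, atom_types: torch.Tensor,
                               temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE-style loss pulling embeddings of atoms that share a type
    together and pushing different types apart (one common way to realize
    'atom-type contrastive learning'; the paper may differ in detail)."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                       # pairwise similarity
    same_type = atom_types.unsqueeze(0) == atom_types.unsqueeze(1)
    not_self = ~torch.eye(len(z), dtype=torch.bool)     # drop self-pairs
    pos_mask = same_type & not_self
    # log-softmax over all non-self pairs, averaged over positive pairs
    log_prob = sim - torch.logsumexp(sim.masked_fill(~not_self, -1e9),
                                     dim=1, keepdim=True)
    denom = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask).sum(1).div(denom).mean()


# Toy usage: 6 atoms with 8-dim embeddings and integer atom types.
z = torch.randn(6, 8)
types = torch.tensor([0, 0, 1, 1, 2, 2])
view_a, view_b = mutually_exclusive_masks(num_nodes=6, num_views=2)
loss = atom_type_contrastive_loss(z, types)
```

In a pre-training loop, each disjoint index set would select the atoms to corrupt in one view, so no atom is hidden in both views simultaneously, and the contrastive term would be added to the masked-reconstruction objective.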
Pages: 231-246
Page count: 16
Related Papers
35 in total
  • [21] CETP: A novel semi-supervised framework based on contrastive pre-training for imbalanced encrypted traffic classification
    Lin, Xinjie
    He, Longtao
    Gou, Gaopeng
    Yu, Jing
    Guan, Zhong
    Li, Xiang
    Guo, Juncheng
    Xiong, Gang
    COMPUTERS & SECURITY, 2024, 143
• [22] HOP+: History-Enhanced and Order-Aware Pre-Training for Vision-and-Language Navigation
    Qiao, Yanyuan
    Qi, Yuankai
    Hong, Yicong
    Yu, Zheng
    Wang, Peng
    Wu, Qi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (07) : 8524 - 8537
  • [23] A Multi-Task Semantic Decomposition Framework with Task-specific Pre-training for Few-Shot NER
    Dong, Guanting
    Wang, Zechen
    Zhao, Jinxu
    Zhao, Gang
    Guo, Daichi
    Fu, Dayuan
    Hui, Tingfeng
    Zeng, Chen
    He, Keqing
    Li, Xuefeng
    Wang, Liwen
    Cui, Xinyue
    Xu, Weiran
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 430 - 440
  • [24] Leveraging Concept-Enhanced Pre-Training Model and Masked-Entity Language Model for Named Entity Disambiguation
    Ji, Zizheng
    Dai, Lin
    Pang, Jin
    Shen, Tingting
    IEEE ACCESS, 2020, 8 : 100469 - 100484
• [25] A Framework for pre-training hidden-unit conditional random fields and its extension to long short-term memory networks
    Kim, Young-Bum
    Stratos, Karl
    Sarikaya, Ruhi
    COMPUTER SPEECH AND LANGUAGE, 2017, 46 : 311 - 326
  • [26] Edema Estimation From Facial Images Taken Before and After Dialysis via Contrastive Multi-Patient Pre-Training
    Akamatsu, Yusuke
    Onishi, Yoshifumi
    Imaoka, Hitoshi
    Kameyama, Junko
    Tsurushima, Hideo
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (03) : 1419 - 1430
  • [27] An improved wav2vec 2.0 pre-training approach using enhanced local dependency modeling for speech recognition
    Zhu, Qiu-shi
    Zhang, Jie
    Wu, Ming-hui
    Fang, Xin
    Dai, Li-Rong
    INTERSPEECH 2021, 2021, : 4334 - 4338
  • [28] The effects of pre-training types on cognitive load, collaborative knowledge construction and deep learning in a computer-supported collaborative learning environment
    Jung, Jaewon
    Shin, Yoonhee
    Zumbach, Joerg
    INTERACTIVE LEARNING ENVIRONMENTS, 2019, : 1163 - 1175
  • [29] One multimodal plugin enhancing all: CLIP-based pre-training framework enhancing multimodal item representations in recommendation systems
    Mo, Minghao
    Lu, Weihai
    Xie, Qixiao
    Xiao, Zikai
    Lv, Xiang
    Yang, Hong
    Zhang, Yanchun
    NEUROCOMPUTING, 2025, 637
  • [30] Data-Driven Self-Triggered Control for Networked Motor Control Systems Using RNNs and Pre-Training: A Hierarchical Reinforcement Learning Framework
    Chen, Wei
    Wan, Haiying
    Luan, Xiaoli
    Liu, Fei
    SENSORS, 2024, 24 (06)