CVTGAD: Simplified Transformer with Cross-View Attention for Unsupervised Graph-Level Anomaly Detection

Cited by: 5
Authors
Li, Jindong [1 ]
Xing, Qianli [1 ]
Wang, Qi [1 ]
Chang, Yi [1 ]
Affiliations
[1] Jilin University, School of Artificial Intelligence, Changchun 130012, China
Source
Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Part I | 2023, Vol. 14169
Funding
National Natural Science Foundation of China
Keywords
Transformer; Cross-View Attention; Graph-level Anomaly Detection; Unsupervised Learning; Graph Neural Network
DOI
10.1007/978-3-031-43412-9_11
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised graph-level anomaly detection (UGAD) has achieved remarkable performance in critical disciplines such as chemical analysis and bioinformatics. Existing UGAD paradigms often adopt data augmentation to construct multiple views, and then employ different strategies to obtain representations from each view for jointly conducting UGAD. However, most previous works consider the relationships between nodes/graphs only within a limited receptive field, so key structural patterns and feature information are neglected. In addition, most existing methods process the views separately, in parallel, and therefore cannot directly explore the inter-relationships across views. A method with a larger receptive field that can directly explore these cross-view inter-relationships is therefore needed. In this paper, we propose CVTGAD, a novel Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection. To enlarge the receptive field, we construct a simplified transformer-based module that exploits the relationships between nodes/graphs from both intra-graph and inter-graph perspectives. Furthermore, we design a cross-view attention mechanism that directly exploits view co-occurrence, bridging the inter-view gap at both the node and graph levels. To the best of our knowledge, this is the first work to apply a transformer and cross attention to UGAD, enabling a graph neural network and a transformer to work collaboratively. Extensive experiments on 15 real-world datasets from 3 fields demonstrate the superiority of CVTGAD on the UGAD task. The code is available at https://github.com/jindongli-Ai/CVTGAD.
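Based only on the abstract's description, the sketch below illustrates one plausible form of the cross-view attention mechanism: queries come from one view's embeddings while keys and values come from the other, so each view's node- or graph-level representation is refined by the co-occurring view. The module name, single-head design, and dimensions are assumptions for illustration, not the authors' implementation (see the GitHub repository above for the reference code).

import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    # Hypothetical single-head cross-view attention: queries from view A,
    # keys/values from view B, so view A is re-weighted by its affinity to view B.
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x_a, x_b):
        # x_a, x_b: (num_nodes_or_graphs, d_model) embeddings of two augmented views
        attn = torch.softmax(
            self.q(x_a) @ self.k(x_b).transpose(-2, -1) * self.scale, dim=-1
        )
        return attn @ self.v(x_b)

# Usage sketch: fuse node-level embeddings from two augmented views.
view_a = torch.randn(32, 64)   # 32 nodes, 64-dim embeddings (view A)
view_b = torch.randn(32, 64)   # matching embeddings (view B)
fused_a = CrossViewAttention(64)(view_a, view_b)   # view A refined by view B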
Pages: 185-200 (16 pages)