Unbiased and Robust: External Attention-enhanced Graph Contrastive Learning for Cross-domain Sequential Recommendation

Cited by: 1
Authors
Wang, Xinhua [1 ]
Yue, Houping [1 ]
Wang, Zizheng [2 ]
Xu, Liancheng [1 ]
Zhang, Jinyu [3 ]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Shandong, Peoples R China
[2] Shandong Univ, Zhongtai Secur Inst Financial Studies, Shandong, Peoples R China
[3] Shandong Univ Sci & Technol, Sch Comp Sci & Engn, Shandong, Peoples R China
Source
2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023 | 2023
Keywords
Cross-domain sequential recommendation; Unbiased recommender system; Graph neural networks; Attention mechanism; Contrastive learning
DOI
10.1109/ICDMW60847.2023.00194
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Cross-domain sequential recommenders (CSRs) are gaining considerable research attention as they can capture users' sequential preferences by leveraging side information from multiple domains. However, these works typically follow an ideal setup, i.e., that different domains obey similar data distributions, which ignores the bias brought by asymmetric interaction densities (a.k.a. the inter-domain density bias). Besides, the frequently adopted mechanism in the sequence encoder (e.g., the self-attention network) only focuses on interactions within a local view, overlooking the global correlations between different training batches. To this end, we propose an External Attention-enhanced Graph Contrastive Learning framework, namely EA-GCL. Specifically, to remove the impact of the inter-domain density bias, an auxiliary Self-Supervised Learning (SSL) task is attached to the traditional graph encoder in a multi-task learning manner. To robustly capture users' behavioral patterns, we develop an external attention-based sequence encoder that contains an MLP-based memory-sharing structure. Unlike the self-attention mechanism, such a structure can effectively alleviate bias interference from the batch-based training scheme. Extensive experiments on two real-world datasets demonstrate that EA-GCL outperforms several state-of-the-art baselines on CSR tasks. The source code and relevant datasets are available at https://github.com/HoupingY/EA-GCL.
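The abstract does not give the exact formulation of the encoder, but the "external attention-based sequence encoder with an MLP-based memory-sharing structure" it describes follows the generic external attention idea: instead of computing attention among the items of one sequence (as self-attention does), each item attends over a pair of small learnable memory units that are shared across all sequences and batches, which is why the mechanism is less sensitive to batch composition. A minimal NumPy sketch of that generic mechanism, with hypothetical shapes (`n` items of dimension `d`, `s` memory slots), is:

```python
import numpy as np

def external_attention(x, m_k, m_v):
    """Generic external attention over two shared memory units.

    x   : (n, d) item embeddings of one sequence
    m_k : (s, d) shared key memory (learnable, shared across batches)
    m_v : (s, d) shared value memory (learnable, shared across batches)
    """
    attn = x @ m_k.T                                  # (n, s) slot affinities
    # Double normalization: softmax over items, then L1 over memory slots,
    # so the map stays stable regardless of sequence/batch composition.
    attn = np.exp(attn - attn.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)     # softmax over n
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-9)  # L1 over s
    return attn @ m_v                                 # (n, d) re-encoded items

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))      # 5 interacted items, dim 8
m_k = rng.normal(size=(4, 8))    # 4 memory slots
m_v = rng.normal(size=(4, 8))
out = external_attention(x, m_k, m_v)
print(out.shape)                 # (5, 8)
```

Because `m_k` and `m_v` are the same two small matrices for every sequence, the cost is linear in sequence length, and the memories can accumulate correlations across the whole training set rather than within a single batch.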
Pages: 1526-1534 (9 pages)