Hierarchical Contrastive Learning with Multiple Augmentations for Sequential Recommendation

Cited by: 0
Authors
Lee, Dongjun [1 ]
Ko, Donggeun [2 ]
Kim, Jaekwang [3 ]
Affiliations
[1] Maum AI, Sungnam, South Korea
[2] Aim Future, Seoul, South Korea
[3] Sungkyunkwan Univ, Sch Convergence, Convergence Program Social Innovat, Seoul, South Korea
Source
40TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING | 2025
Funding
National Research Foundation, Singapore;
Keywords
Sequential Recommendation; Contrastive Learning; Multiple Augmentations;
DOI
10.1145/3672608.3707902
CLC number
TP39 [Computer applications];
Discipline codes
081203 ; 0835 ;
Abstract
Sequential recommendation aims to predict users' next actions by analyzing their historical behavior. Recently, contrastive learning has become prominent in this domain, especially when user interactions with items are sparse. Although data augmentation methods have flourished in fields such as computer vision, their potential in sequential recommendation remains under-explored. We therefore present Hierarchical Contrastive Learning with Multiple Augmentations for Sequential Recommendation (HCLRec), a novel framework that harnesses multiple augmentation techniques to create diverse views of user sequences. The framework systematically composes existing augmentation techniques into a hierarchy that generates varied views. First, we augment input sequences into multiple views using several augmentations; by successively composing these augmentation methods, we form both low-level and high-level view pairs. Second, an effective sequence-based encoder embeds the input sequences, complemented by supplementary blocks that capture users' nonlinear behaviors, which the augmentations further diversify. Input sequences are routed to subsequent layers according to the number of augmentations applied, helping the model discern the intricate sequential patterns intensified by those augmentations. Finally, contrastive losses are calculated between view pairs of the same level within each layer, so the encoder learns from the contrastive losses between same-level augmented views while reducing the information gap that multiple augmentations introduce between low-level and high-level views. In evaluations, HCLRec outperforms state-of-the-art methods by up to 7.22% and demonstrates its effectiveness in handling sparse data.
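The hierarchy of composed augmentations and the same-level contrastive objective described above can be sketched roughly as follows. This is a minimal illustration under assumptions: the crop/mask augmentations, the view-pair construction, and the InfoNCE loss are common choices in contrastive sequential recommendation, not the authors' actual implementation, and all names here are hypothetical.

```python
import random
import numpy as np

# Hypothetical sequence augmentations (illustrative, not the paper's code).
def crop(seq, ratio=0.8):
    """Keep a random contiguous subsequence covering `ratio` of the items."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.2, mask_id=0):
    """Randomly replace items with a mask token, keeping sequence length."""
    return [mask_id if random.random() < ratio else x for x in seq]

def compose(seq, augs):
    """Apply a chain of augmentations; chain length = hierarchy level."""
    for aug in augs:
        seq = aug(seq)
    return seq

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss between two batches of embeddings (in-batch negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

random.seed(0)
seq = [3, 7, 2, 9, 5, 1, 8]

# Level 1 (low-level): one augmentation per view.
low_pair = (compose(seq, [crop]), compose(seq, [mask]))
# Level 2 (high-level): two composed augmentations per view.
high_pair = (compose(seq, [crop, mask]), compose(seq, [mask, crop]))
```

In the full framework, each view pair of the same level would be encoded and fed to `info_nce` within its own layer, so the loss only contrasts views produced by the same number of augmentations.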
Pages: 1231-1239
Page count: 9