Frequency Enhanced Hybrid Attention Network for Sequential Recommendation

Cited by: 48
Authors
Du, Xinyu [1 ]
Yuan, Huanhuan [1 ]
Zhao, Pengpeng [1 ]
Qu, Jianfeng [1 ]
Zhuang, Fuzhen [2 ,3 ]
Liu, Guanfeng [4 ]
Liu, Yanchi [5 ]
Sheng, Victor S. [6 ]
Affiliations
[1] Soochow Univ, Suzhou, Jiangsu, Peoples R China
[2] Beihang Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[3] Beihang Univ, SKLSDE, Sch Comp Sci, Beijing, Peoples R China
[4] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
[5] Rutgers State Univ, New Brunswick, NJ USA
[6] Texas Tech Univ, Lubbock, TX 79409 USA
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
Sequential Recommendation; Self-attention; Periodic Pattern; Frequency Domain; Behavior
DOI
10.1145/3539618.3591689
Chinese Library Classification (CLC) number
TP [Automation and computer technology]
Subject classification code
0812
Abstract
The self-attention mechanism, equipped with a strong capability for modeling long-range dependencies, is one of the most widely used techniques in sequential recommendation. However, many recent studies have shown that current self-attention based models act as low-pass filters and are inadequate for capturing high-frequency information. Furthermore, since the items in user behavior sequences are intertwined with each other, these models struggle to distinguish the inherent periodicity obscured in the time domain. In this work, we shift the perspective to the frequency domain and propose a novel Frequency Enhanced Hybrid Attention Network for Sequential Recommendation, namely FEARec. In this model, we first improve the original time-domain self-attention in the frequency domain with a ramp structure, so that both low-frequency and high-frequency information can be explicitly learned in our approach. Moreover, we design a similar attention mechanism via auto-correlation in the frequency domain to capture periodic characteristics, and fuse the time-level and frequency-level attention in a unified model. Finally, both contrastive learning and frequency regularization are utilized to ensure that multiple views are aligned in both the time domain and the frequency domain. Extensive experiments conducted on four widely used benchmark datasets demonstrate that the proposed model performs significantly better than state-of-the-art approaches(1).
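The frequency-domain auto-correlation attention the abstract describes rests on a standard signal-processing fact: periodicity in a sequence appears as peaks in its auto-correlation, which can be computed in O(n log n) via the Wiener-Khinchin theorem (FFT, multiply by the complex conjugate, inverse FFT). The sketch below illustrates only this underlying idea on a toy 1-D series; it is not the authors' FEARec implementation, and the function name is hypothetical.

```python
import numpy as np

def autocorrelation_fft(x: np.ndarray) -> np.ndarray:
    """Auto-correlation of a 1-D series at all lags, via FFT.

    Illustrative sketch of the Wiener-Khinchin route used by
    frequency-domain attention mechanisms; not the paper's code.
    """
    n = len(x)
    # Zero-pad to length 2n so circular convolution matches linear correlation.
    f = np.fft.rfft(x, n=2 * n)
    # Power spectrum = FFT(x) * conj(FFT(x)); its inverse FFT is the auto-correlation.
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]  # normalize so the lag-0 value is 1

# A series with period 4: the strongest nonzero-lag correlation is at lag 4,
# which is how auto-correlation exposes periodic patterns hidden in the time domain.
signal = np.tile([1.0, 0.0, -1.0, 0.0], 8)
acf = autocorrelation_fft(signal)
dominant_lag = int(np.argmax(acf[1:])) + 1
print(dominant_lag)  # → 4
```

In FEARec-style models the same trick is applied per feature dimension of the item-embedding sequence, and the top-k lags weight rolled versions of the sequence instead of forming a softmax attention map.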
Pages: 78-88 (11 pages)