Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation

Cited by: 3
Authors
Ji, Wei [1]
Liu, Xiangyan [1]
Zhang, An [1]
Wei, Yinwei [2]
Ni, Yongxin [1]
Wang, Xiang [3,4]
Affiliations
[1] National University of Singapore, Singapore
[2] Monash University, Melbourne, VIC, Australia
[3] University of Science and Technology of China, Hefei, People's Republic of China
[4] Hefei Comprehensive National Science Center, Institute of Artificial Intelligence, Institute of Dataspace, Hefei, People's Republic of China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Multi-modal Recommendation; Knowledge Distillation; Sequential Recommendation;
DOI
10.1145/3581783.3612091
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-modal recommendation systems, which integrate diverse types of information, have attracted widespread attention in recent years. However, compared to traditional collaborative-filtering-based multi-modal recommendation systems, research on multi-modal sequential recommendation is still in its nascent stages. Unlike traditional sequential recommendation models, which rely solely on item identifier (ID) information and focus on network-structure design, multi-modal recommendation models must emphasize item representation learning and the fusion of heterogeneous data sources. This paper investigates the impact of item representation learning on downstream recommendation tasks and examines the disparities in information fusion at different stages. Empirical experiments demonstrate the need for a framework suited to collaborative learning and fusion of diverse information. Based on this, we propose a new model-agnostic framework for multi-modal sequential recommendation, called Online Distillation-enhanced Multi-modal Transformer (ODMT), which enhances feature interaction and mutual learning among multi-source inputs (ID, text, and image) while avoiding conflicts among different features during training, thereby improving recommendation accuracy. Specifically, we first introduce an ID-aware Multi-modal Transformer module in the item representation learning stage to facilitate information interaction among different features. Second, we employ an online distillation training strategy in the prediction optimization stage so that multi-source predictions learn from one another, improving prediction robustness. Experimental results on a streaming-media recommendation dataset and three e-commerce recommendation datasets demonstrate the effectiveness of the two proposed modules, which yield approximately a 10% performance improvement over baseline models. Our code will be released at: https://github.com/xyliugo/ODMT.
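The abstract does not spell out the online-distillation objective. Below is a minimal sketch, assuming the strategy resembles deep-mutual-learning among the three prediction branches (ID, text, image): each branch is trained on the ground-truth next item and additionally matches the softened average distribution of its peers. All names (e.g., online_distillation_loss, temperature, alpha) are illustrative assumptions, not the paper's released API; the actual implementation is at https://github.com/xyliugo/ODMT.

```python
# Hypothetical sketch of online distillation over multi-source heads.
# Not the paper's code: a plausible mutual-learning loss for illustration.
import torch
import torch.nn.functional as F

def online_distillation_loss(logits_per_source: dict[str, torch.Tensor],
                             targets: torch.Tensor,
                             temperature: float = 2.0,
                             alpha: float = 0.5) -> torch.Tensor:
    """logits_per_source maps 'id'/'text'/'image' to [batch, n_items] logits."""
    # Supervised term: every source predicts the ground-truth next item.
    ce = sum(F.cross_entropy(z, targets) for z in logits_per_source.values())

    # Distillation term: each source matches the (detached) average of its
    # peers' softened distributions, so the branches teach one another online.
    kd = torch.zeros((), dtype=torch.float32)
    names = list(logits_per_source)
    for name in names:
        peers = [logits_per_source[p] for p in names if p != name]
        teacher = torch.stack(peers).mean(dim=0).detach()
        kd = kd + F.kl_div(
            F.log_softmax(logits_per_source[name] / temperature, dim=-1),
            F.softmax(teacher / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2  # standard rescaling for softened targets
    return ce + alpha * kd

# Usage with random stand-in logits:
batch, n_items = 32, 1000
logits = {m: torch.randn(batch, n_items) for m in ("id", "text", "image")}
loss = online_distillation_loss(logits, torch.randint(0, n_items, (batch,)))
```

Detaching the peer average treats it as a fixed teacher in each step, a common choice in mutual learning that keeps the branches from collapsing onto one another's gradients.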
Pages: 955-965
Number of pages: 11