Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation

Cited by: 3
Authors
Ji, Wei [1 ]
Liu, Xiangyan [1 ]
Zhang, An [1 ]
Wei, Yinwei [2 ]
Ni, Yongxin [1 ]
Wang, Xiang [3 ,4 ]
Affiliations
[1] Natl Univ Singapore, Singapore, Singapore
[2] Monash Univ, Melbourne, Vic, Australia
[3] Univ Sci & Technol China, Hefei, Peoples R China
[4] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Inst Dataspace, Hefei, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Multi-modal Recommendation; Knowledge Distillation; Sequential Recommendation;
DOI
10.1145/3581783.3612091
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-modal recommendation systems, which integrate diverse types of information, have gained widespread attention in recent years. However, compared to traditional collaborative filtering-based multi-modal recommendation systems, research on multi-modal sequential recommendation is still in its nascent stages. Unlike traditional sequential recommendation models, which rely solely on item identifier (ID) information and focus on network structure design, multi-modal recommendation models must emphasize item representation learning and the fusion of heterogeneous data sources. This paper investigates the impact of item representation learning on downstream recommendation tasks and examines the disparities in information fusion at different stages. Empirical experiments demonstrate the need for a framework suited to collaborative learning and fusion of diverse information. Based on this, we propose a new model-agnostic framework for multi-modal sequential recommendation, called Online Distillation-enhanced Multi-modal Transformer (ODMT), which enhances feature interaction and mutual learning among multi-source inputs (ID, text, and image) while avoiding conflicts among different features during training, thereby improving recommendation accuracy. Specifically, we first introduce an ID-aware Multi-modal Transformer module in the item representation learning stage to facilitate information interaction among different features. Second, we employ an online distillation training strategy in the prediction optimization stage so that multi-source data learn from one another, improving prediction robustness. Experimental results on a streaming media recommendation dataset and three e-commerce recommendation datasets demonstrate the effectiveness of the two proposed modules, yielding an approximately 10% performance improvement over baseline models. Our code will be released at: https://github.com/xyliugo/ODMT.
Pages: 955 - 965
Number of pages: 11
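
The abstract does not spell out the online distillation strategy, but its core idea, modality-specific prediction heads (ID, text, and image) that teach one another during training, can be sketched as a mutual-learning loss. Below is a minimal PyTorch sketch assuming three heads that each output next-item logits; the temperature, the weighting factor alpha, and the choice to detach each peer's logits as a soft teacher are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def online_distillation_loss(logits_id, logits_text, logits_image,
                             targets, temperature=2.0, alpha=0.5):
    # Each modality head is supervised by the ground-truth next item
    # (cross-entropy) and additionally pulled toward the softened
    # predictions of its peers (KL divergence), so the ID, text, and
    # image branches learn from one another online during training.
    # temperature/alpha are illustrative hyperparameters, not from the paper.
    heads = [logits_id, logits_text, logits_image]
    ce = sum(F.cross_entropy(h, targets) for h in heads)

    kl = torch.zeros((), device=targets.device)
    for i, student in enumerate(heads):
        for peer in heads[:i] + heads[i + 1:]:
            # Peer logits are detached so each head acts as a fixed
            # soft teacher for the others within this step.
            kl = kl + F.kl_div(
                F.log_softmax(student / temperature, dim=-1),
                F.softmax(peer.detach() / temperature, dim=-1),
                reduction="batchmean",
            ) * temperature ** 2
    return ce + alpha * kl

In a training loop, the three logits would come from the ID branch and the text/image branches of the item representation module described in the abstract, with a single optimizer step taken on the combined loss.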