TruthSR: Trustworthy Sequential Recommender Systems via User-generated Multimodal Content

Cited: 0
Authors
Yan, Meng [1 ]
Huang, Haibin [1 ]
Liu, Ying [2 ]
Zhao, Juan [3 ]
Gao, Xiyue [1 ]
Xu, Cai [1 ]
Guan, Ziyu [1 ]
Zhao, Wei [1 ]
Affiliations
[1] Xidian Univ, Xian, Peoples R China
[2] Northwest Univ, Xian, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Source
DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3 | 2025 / Vol. 14852
Funding
National Natural Science Foundation of China;
Keywords
User-generated content; Sequential recommender system; Trustworthy learning;
DOI
10.1007/978-981-97-5555-4_12
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Sequential recommender systems learn users' preferences and behavioral patterns from their historically generated data. Recently, researchers have aimed to improve sequential recommendation by exploiting massive user-generated multi-modal content, such as reviews and images. This content inevitably contains noise. Some studies attempt to reduce noise interference by suppressing cross-modal inconsistent information; however, such suppression can also constrain the capture of personalized user preferences. Moreover, it is almost impossible to entirely eliminate noise from diverse user-generated multi-modal content. To address these problems, we propose a trustworthy sequential recommendation method built on noisy user-generated multi-modal content. Specifically, we explicitly capture the consistency and complementarity of user-generated multi-modal content to mitigate noise interference, and on this basis we model users' multi-modal sequential preferences. In addition, we design a trustworthy decision mechanism that integrates the subjective user perspective and the objective item perspective to dynamically evaluate the uncertainty of prediction results. Experimental evaluation on four widely used datasets demonstrates the superior performance of our model compared to state-of-the-art methods. The code is released at https://github.com/FairyMeng/TrustSR.
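The trustworthy decision mechanism is described only at a high level in the abstract. Below is a minimal, hypothetical sketch of one common way such an uncertainty-aware decision step can be realized, assuming an evidential (Dirichlet-based) formulation in which each view contributes non-negative evidence; the function names, the additive fusion rule, and the toy scores are illustrative assumptions, not the paper's released code (see https://github.com/FairyMeng/TrustSR for the actual implementation).

```python
# Hypothetical sketch of an evidence-based ("trustworthy") decision step.
# Assumption (not from the paper's code): scores from a subjective user view
# and an objective item view are turned into non-negative evidence, mapped to
# Dirichlet parameters, and fused; the Dirichlet's total strength then yields
# a per-prediction uncertainty mass.
import numpy as np

def evidence_to_dirichlet(scores: np.ndarray) -> np.ndarray:
    """Map raw scores over K candidate items to Dirichlet parameters alpha = e + 1."""
    evidence = np.maximum(scores, 0.0)   # non-negative evidence (e.g. via ReLU/softplus)
    return evidence + 1.0                # alpha_k = e_k + 1

def fuse_views(alpha_user: np.ndarray, alpha_item: np.ndarray) -> np.ndarray:
    """Fuse the subjective (user) and objective (item) views.
    Here: a simple additive combination of evidence; the paper may use a different rule."""
    return (alpha_user - 1.0) + (alpha_item - 1.0) + 1.0

def predict_with_uncertainty(alpha: np.ndarray):
    """Return item probabilities and an uncertainty mass u = K / sum(alpha) in (0, 1]."""
    strength = alpha.sum()
    probs = alpha / strength
    uncertainty = alpha.size / strength
    return probs, uncertainty

# Toy usage: 4 candidate items, two views that mostly agree on item index 2.
user_scores = np.array([0.1, 0.3, 2.5, 0.2])
item_scores = np.array([0.0, 0.5, 1.8, 0.1])
alpha = fuse_views(evidence_to_dirichlet(user_scores), evidence_to_dirichlet(item_scores))
probs, u = predict_with_uncertainty(alpha)
print(probs.round(3), round(u, 3))  # small u -> the recommendation can be trusted more
```

In formulations of this kind, a small uncertainty mass indicates that both views contributed strong, consistent evidence, while a large one flags predictions that should be treated cautiously.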
Pages: 180 - 195
Page count: 16
Related Papers
50 records in total
  • [31] Bangkok Tours and Activities Data Analysis via User-Generated Content
    Chugh, Naina
    Phumchusri, Naragain
2020 INTERNATIONAL CONFERENCE ON COMPUTING, ELECTRONICS & COMMUNICATIONS ENGINEERING (ICCECE), 2020: 98 - 102
  • [32] The institutionalization of YouTube: From user-generated content to professionally generated content
    Kim, Jin
    MEDIA CULTURE & SOCIETY, 2012, 34 (01) : 53 - 67
  • [33] A Risk Management Framework for User-Generated Content on Public Display Systems
    Coutinho, Pedro
    Jose, Rui
    ADVANCES IN HUMAN-COMPUTER INTERACTION, 2019, 2019
  • [34] Social Connections in User-Generated Content Video Systems: Analysis and Recommendation
    Li, Zhenyu
    Lin, Jiali
    Salamatian, Kave
    Xie, Gaogang
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2013, 10 (01): 70 - 83
  • [35] Multimodal Semantics Extraction from User-Generated Videos
    Cricri, Francesco
    Dabov, Kostadin
    Roininen, Mikko J.
    Mate, Sujeet
    Curcio, Igor D. D.
    Gabbouj, Moncef
    ADVANCES IN MULTIMEDIA, 2012, 2012
  • [36] Studies of user-generated content: A systematic review
    Naab, Teresa K.
    Sehl, Annika
    JOURNALISM, 2017, 18 (10) : 1256 - 1273
  • [37] On the Use of User-generated Content in Critiquing Recommendation
    Contreras, David
    Salamo, Maria
    ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2015, 277 : 195 - 204
  • [38] Impact of Mobility and Timing on User-Generated Content
    Piccoli, Gabriele
    Ott, Myle
    MIS QUARTERLY EXECUTIVE, 2014, 13 (03) : 147 - 157
  • [39] Leveraging User-Generated Content for News Search
    McCreadie, Richard M. C.
    SIGIR 2010: PROCEEDINGS OF THE 33RD ANNUAL INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH DEVELOPMENT IN INFORMATION RETRIEVAL, 2010, : 919 - 919
  • [40] USER-GENERATED CONTENT AS WORD-OF-MOUTH
    Ramirez, Edward
    Gau, Roland
    Hadjimarcou, John
    Xu, Zhenning
    JOURNAL OF MARKETING THEORY AND PRACTICE, 2018, 26 (1-2) : 90 - 98