Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation

Cited by: 40
Authors
Krishnan, Adit [1 ]
Das, Mahashweta [2 ]
Bendre, Mangesh [2 ]
Yang, Hao [2 ]
Sundaram, Hari [1 ]
Affiliations
[1] Univ Illinois, Urbana, IL 61801 USA
[2] Visa Res, Palo Alto, CA USA
Source
PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20) | 2020
Keywords
Cross-Domain Recommendation; Contextual Invariants; Transfer Learning; Neural Layer Adaptation; Data Sparsity; Optimization
DOI
10.1145/3397271.3401078
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems. Historically, cross-domain co-clustering methods have successfully leveraged shared users and items across dense and sparse domains to improve inference quality. However, they rely on shared rating data and cannot scale to multiple sparse target domains (i.e., the one-to-many transfer setting). This, combined with the increasing adoption of neural recommender architectures, motivates us to develop scalable neural layer-transfer approaches for cross-domain learning. Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains, improving the user and item representations learned in the sparse domains. We leverage contextual invariances across domains to develop these shared modules, and demonstrate that with user-item interaction context, we can learn-to-learn informative representation spaces even with sparse interaction data. We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa, a global payments technology company (19% Item Recall, 3x faster vs. training separate models for each domain). Our approach is applicable to both implicit and explicit feedback settings.
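The abstract's one-to-many transfer scheme — a single domain-invariant context module shared across a dense source domain and many sparse target domains, each with its own user/item embeddings — can be illustrated with a minimal sketch. This is a hypothetical toy structure for intuition only, not the authors' architecture; all class and parameter names here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

class SharedContextModule:
    """Hypothetical domain-invariant module: a small projection over
    interaction-context features. In the paper's setting such layers
    would be trained on the dense domain and reused (frozen) by every
    sparse target domain."""

    def __init__(self, ctx_dim, dim):
        self.W = rng.normal(scale=0.1, size=(ctx_dim, dim))

    def transform(self, ctx):
        # ReLU projection of the context vector into embedding space
        return np.maximum(ctx @ self.W, 0.0)

class DomainModel:
    """Per-domain model: its own user/item embeddings, but the context
    module object is shared across all domains (one-to-many transfer)."""

    def __init__(self, n_users, n_items, shared, dim=DIM):
        self.U = rng.normal(scale=0.1, size=(n_users, dim))  # user embeddings
        self.V = rng.normal(scale=0.1, size=(n_items, dim))  # item embeddings
        self.shared = shared  # frozen, domain-invariant layers

    def score(self, u, i, ctx):
        # Context-conditioned gating of the user-item interaction
        c = self.shared.transform(ctx)
        return float((self.U[u] * c) @ (self.V[i] * c))

# One shared module, two domains of very different density
shared = SharedContextModule(ctx_dim=4, dim=DIM)
dense = DomainModel(n_users=1000, n_items=500, shared=shared)
sparse = DomainModel(n_users=50, n_items=30, shared=shared)

ctx = np.array([1.0, 0.0, 0.5, 0.2])
s = sparse.score(3, 7, ctx)
```

Only the small embedding tables must be learned per sparse domain; the shared module's parameters are amortized across all targets, which is why this style of layer transfer scales to many domains at once.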
Pages: 1081-1090 (10 pages)