Deep Distribution Network: Addressing the Data Sparsity Issue for Top-N Recommendation

Cited by: 21
Authors
Zheng, Lei [1 ]
Li, Chaozhuo [2 ]
Lu, Chun-Ta [1 ]
Zhang, Jiawei [3 ]
Yu, Philip S. [1 ]
Affiliations
[1] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
[2] Beihang Univ, Dept Comp Sci, Beijing, Peoples R China
[3] Florida State Univ, Dept Comp Sci, IFM Lab, Tallahassee, FL 32306 USA
Source
PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19) | 2019
Keywords
Sparsity; Recommendation; Distribution;
DOI
10.1145/3331184.3331330
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Existing recommendation methods mostly learn fixed vectors for users and items in a low-dimensional continuous space, and then compute the popular dot product to derive user-item distances. However, these methods suffer from two drawbacks: (1) the data sparsity issue prevents them from learning high-quality representations; and (2) the dot product violates the crucial triangle inequality and therefore results in sub-optimal performance. In this work, in order to overcome these two drawbacks, we propose the Deep Distribution Network (DDN) to model users and items via Gaussian distributions. We argue that, compared to fixed vectors, distribution-based representations are better able to characterize users' uncertain interests and items' distinct properties. In addition, we propose a Wasserstein-based loss, under which the critical triangle inequality is satisfied. In experiments, we evaluate DDN and comparative models on standard datasets. The results show that DDN significantly outperforms state-of-the-art models, demonstrating the advantages of the proposed distribution-based representations and Wasserstein loss.
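The record does not include code; as a minimal sketch of the core ingredient the abstract describes, the 2-Wasserstein distance between two diagonal Gaussian embeddings N(mu1, diag(sigma1^2)) and N(mu2, diag(sigma2^2)) has the closed form W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2 (a true metric, so it satisfies the triangle inequality, unlike the dot product). The function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def w2_squared_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Closed-form squared 2-Wasserstein distance between two diagonal
    Gaussians: W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

# Toy user/item embeddings as Gaussians (means and per-dimension std devs).
user_mu, user_sigma = np.array([0.5, -0.2]), np.array([0.3, 0.4])
item_mu, item_sigma = np.array([0.1, 0.1]), np.array([0.2, 0.5])

dist_sq = w2_squared_diag_gaussians(user_mu, user_sigma, item_mu, item_sigma)
# A smaller distance means the item is a better match for the user; in a
# ranking loss, dist_sq would replace the usual dot-product score.
```

This is only a sketch of the distance computation under the standard diagonal-Gaussian assumption; the paper's actual network architecture and loss are described in the full text.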
Pages: 1081-1084 (4 pages)