A probabilistic framework for multi-view feature learning with many-to-many associations via neural networks

Cited by: 0
Authors
Okuno, Akifumi [1 ,2 ]
Hada, Tetsuya [3 ]
Shimodaira, Hidetoshi [1 ,2 ]
Affiliations
[1] Kyoto Univ, Grad Sch Informat, Kyoto, Japan
[2] RIKEN, Ctr Adv Intelligence Project AIP, Tokyo, Japan
[3] Recruit Technol Co Ltd, Tokyo, Japan
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80 | 2018 / Vol. 80
Keywords
KERNEL; SETS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A simple framework, Probabilistic Multi-view Graph Embedding (PMvGE), is proposed for multi-view feature learning with many-to-many associations; it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding, where data vectors are nodes and their associations are links. Multi-view data vectors with many-to-many associations are transformed by neural networks into feature vectors in a shared space, and the probability of a new association between two data vectors is modeled by the inner product of their feature vectors. Whereas existing multi-view feature learning techniques can handle either many-to-many associations or nonlinear transformations, but not both, PMvGE treats the two simultaneously. By combining Mercer's theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihood-based estimator enables efficient computation of nonlinear transformations of data vectors on large-scale datasets via minibatch SGD, and numerical experiments show that PMvGE outperforms existing multi-view methods.
Pages: 10
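As a rough illustration of the model described in the abstract (not the authors' implementation), the NumPy sketch below builds toy two-view data, maps each view through a small neural network into a shared feature space, and models the probability of an association by the inner product of the two feature vectors. All sizes, the single-hidden-layer networks, the Bernoulli/sigmoid link, and the finite-difference training step are illustrative assumptions; the paper trains its likelihood with backpropagation and minibatch SGD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: view 1 is 5-dimensional, view 2 is 3-dimensional.
X = rng.normal(size=(20, 5))                      # view-1 data vectors
Y = rng.normal(size=(20, 3))                      # view-2 data vectors
W = (rng.random((20, 20)) < 0.3).astype(float)    # observed 0/1 association links

K = 4  # dimension of the shared feature space

def init_mlp(d_in, d_hid, d_out):
    """One-hidden-layer network standing in for a view-specific transform."""
    return {"W1": rng.normal(scale=0.1, size=(d_in, d_hid)), "b1": np.zeros(d_hid),
            "W2": rng.normal(scale=0.1, size=(d_hid, d_out)), "b2": np.zeros(d_out)}

def mlp(p, Z):
    return np.tanh(Z @ p["W1"] + p["b1"]) @ p["W2"] + p["b2"]

def assoc_prob(f, g, X, Y):
    # Association probability modeled through the inner product of shared features.
    return 1.0 / (1.0 + np.exp(-(mlp(f, X) @ mlp(g, Y).T)))

def nll(f, g, X, Y, W):
    # Negative log-likelihood of the observed links under a Bernoulli model.
    P = assoc_prob(f, g, X, Y)
    eps = 1e-9
    return -np.mean(W * np.log(P + eps) + (1.0 - W) * np.log(1.0 - P + eps))

def grad_step(f, g, X, Y, W, lr=0.1, h=1e-5):
    # Crude finite-difference gradient descent, a toy stand-in for
    # backpropagation with minibatch SGD on the full likelihood.
    for p in (f, g):
        for key in p:
            grad = np.zeros_like(p[key])
            for idx in np.ndindex(p[key].shape):
                old = p[key][idx]
                p[key][idx] = old + h
                up = nll(f, g, X, Y, W)
                p[key][idx] = old - h
                down = nll(f, g, X, Y, W)
                p[key][idx] = old
                grad[idx] = (up - down) / (2.0 * h)
            p[key] -= lr * grad

f = init_mlp(5, 8, K)   # transform for view 1
g = init_mlp(3, 8, K)   # transform for view 2

loss_before = nll(f, g, X, Y, W)
for _ in range(15):
    grad_step(f, g, X, Y, W)
loss_after = nll(f, g, X, Y, W)
print(f"NLL before: {loss_before:.4f}, after: {loss_after:.4f}")
```

Because both views land in the same K-dimensional space, a new cross-view association can be scored for any pair of data vectors, which is the property the abstract's Mercer/universal-approximation argument is about.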