Adversarial Graph Embedding for Ensemble Clustering

Cited by: 0
Authors
Tao, Zhiqiang [1]
Liu, Hongfu [2]
Li, Jun [3]
Wang, Zhaowen [4]
Fu, Yun [1,5]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Brandeis Univ, Michtom Sch Comp Sci, Waltham, MA USA
[3] MIT, Inst Med Engn & Sci, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[4] Adobe Syst Inc, Adobe Res, San Jose, CA USA
[5] Northeastern Univ, Khoury Coll Comp & Informat Sci, Boston, MA 02115 USA
Source
PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2019
Funding
National Science Foundation (US)
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Ensemble clustering generally integrates basic partitions into a consensus partition through a graph partitioning method, which, however, has two limitations: 1) it neglects to reuse the original features; 2) obtaining a consensus partition with learnable graph representations remains under-explored. In this paper, we propose a novel Adversarial Graph Auto-Encoders (AGAE) model that incorporates ensemble clustering into a deep graph embedding process. Specifically, a graph convolutional network is adopted as the probabilistic encoder to jointly integrate information from the feature content and the consensus graph, and a simple inner-product layer is used as the decoder to reconstruct the graph from the encoded latent variables (i.e., the embedding representations). Moreover, we develop an adversarial regularizer to guide the network training with an adaptive, partition-dependent prior. Experiments on eight real-world datasets show the effectiveness of AGAE over several state-of-the-art deep embedding and ensemble clustering methods.
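A minimal sketch of the pipeline described in the abstract, assuming PyTorch: a GCN encoder over the consensus graph, an inner-product decoder, and a small discriminator acting as the adversarial regularizer. The layer sizes, the identity placeholder adjacency, and the standard-normal stand-in for the partition-dependent prior are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: A_hat @ X @ W, where A_hat is a normalized
    adjacency (here, the consensus graph) with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        return a_hat @ self.linear(x)

class GraphAutoEncoder(nn.Module):
    """Probabilistic GCN encoder plus inner-product decoder."""
    def __init__(self, in_dim, hid_dim=64, z_dim=16):
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, hid_dim)
        self.gcn_mu = GCNLayer(hid_dim, z_dim)
        self.gcn_logvar = GCNLayer(hid_dim, z_dim)

    def encode(self, a_hat, x):
        h = F.relu(self.gcn1(a_hat, x))
        mu, logvar = self.gcn_mu(a_hat, h), self.gcn_logvar(a_hat, h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

    @staticmethod
    def decode(z):
        # Inner-product decoder: pairwise edge probabilities from embeddings.
        return torch.sigmoid(z @ z.t())

class Discriminator(nn.Module):
    """Tells prior samples apart from encoder outputs (adversarial regularizer)."""
    def __init__(self, z_dim=16, hid_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, z):
        return self.net(z)

# Usage sketch: x holds node features, a_hat a normalized consensus graph built
# from the basic partitions (construction omitted; identity used as placeholder).
n, d = 100, 32
x = torch.randn(n, d)
a_hat = torch.eye(n)                      # placeholder adjacency, illustration only
model, disc = GraphAutoEncoder(d), Discriminator()
z, mu, logvar = model.encode(a_hat, x)
recon = model.decode(z)
prior_z = torch.randn_like(z)             # stand-in for the partition-dependent prior
recon_loss = F.binary_cross_entropy(recon, (a_hat > 0).float())
enc_adv_loss = F.binary_cross_entropy_with_logits(disc(z), torch.ones(n, 1))
disc_loss = (F.binary_cross_entropy_with_logits(disc(prior_z), torch.ones(n, 1))
             + F.binary_cross_entropy_with_logits(disc(z.detach()), torch.zeros(n, 1)))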
Pages: 3562-3568
Number of pages: 7