Learning Representations of Ultrahigh-dimensional Data for Random Distance-based Outlier Detection

Cited by: 151
Authors
Pang, Guansong [1 ]
Cao, Longbing [1 ]
Chen, Ling [1 ]
Liu, Huan [2 ]
Affiliations
[1] Univ Technol Sydney, Sydney, NSW, Australia
[2] Arizona State Univ, Tempe, AZ USA
Source
KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING | 2018
Keywords
Outlier Detection; Representation Learning; Ultrahigh-dimensional Data; Dimension Reduction;
DOI
10.1145/3219819.3220042
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning expressive low-dimensional representations of ultrahigh-dimensional data, e.g., data with thousands/millions of features, has been a major way to enable learning methods to address the curse of dimensionality. However, existing unsupervised representation learning methods mainly focus on preserving the data regularity information and learn the representations independently of subsequent outlier detection methods, which can result in suboptimal and unstable performance in detecting irregularities (i.e., outliers). This paper introduces a ranking model-based framework, called RAMODO, to address this issue. RAMODO unifies representation learning and outlier detection to learn low-dimensional representations that are tailored for a state-of-the-art outlier detection approach - the random distance-based approach. This customized learning yields more optimal and stable representations for the targeted outlier detectors. Additionally, RAMODO can leverage a small amount of labeled data as prior knowledge to learn more expressive and application-relevant representations. We instantiate RAMODO into an efficient method called REPEN to demonstrate its performance. Extensive empirical results on eight real-world ultrahigh-dimensional data sets show that REPEN (i) enables a random distance-based detector to obtain significantly better AUC performance and a two orders of magnitude speedup; (ii) performs substantially better and more stably than four state-of-the-art representation learning methods; and (iii) leverages less than 1% labeled data to achieve up to 32% AUC improvement.
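The random distance-based detector that the abstract refers to scores each point by its distance to the nearest neighbour within small random subsamples of the data (the Sp/LeSiNN family of detectors). The sketch below illustrates that scoring idea in NumPy; the function name, subsample sizes, and averaging details are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def random_distance_scores(X, n_subsamples=50, subsample_size=8, seed=0):
    """Hedged sketch of a random distance-based outlier scorer:
    each point's score is its average distance to the nearest
    neighbour in several small random subsamples. Higher = more
    outlying. Not the paper's exact implementation."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    scores = np.zeros(n)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=subsample_size, replace=False)
        # Distance from every point to each subsampled point
        # (a point drawn into the subsample gets self-distance 0
        # for that round, which averages out over rounds).
        d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
        scores += d.min(axis=1)  # nearest-neighbour distance in the subsample
    return scores / n_subsamples
```

Because each round only touches a small subsample, the cost per round is O(n * subsample_size) distances rather than O(n^2), which is the source of the efficiency the abstract highlights when such a detector runs in a low-dimensional learned space.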
Pages: 2041-2050
Page count: 10