Pivotal-Aware Principal Component Analysis

Cited by: 3
Authors
Li, Xuelong [1 ,2 ]
Li, Pei [1 ,2 ]
Zhang, Hongyuan [1 ,2 ]
Zhu, Kangjia [2 ,3 ]
Zhang, Rui [2 ,3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Artificial Intelligence, Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[3] Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Principal component analysis; Training; Robustness; Pollution measurement; Dimensionality reduction; Data models; Biological system modeling; Collaborative-enhanced learning; dimensionality reduction; principal component analysis (PCA); robust loss; REGRESSION;
DOI
10.1109/TNNLS.2023.3252602
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conventional principal component analysis (PCA) frequently suffers from the disturbance of outliers, and a spectrum of PCA extensions and variations has therefore been developed. However, all existing extensions of PCA share the same motivation: alleviating the negative effect of occlusion. In this article, we design a novel collaborative-enhanced learning framework that, in contrast, aims to highlight the pivotal data points. Under the proposed framework, only a subset of well-fitting samples is adaptively highlighted, giving them greater significance during training, while the disturbance of polluted samples is collaboratively reduced at the same time. In other words, two opposing mechanisms work cooperatively within the proposed framework. Based on this framework, we further develop a pivotal-aware PCA (PAPCA), which simultaneously augments positive samples and constrains negative ones while retaining the rotational invariance property. Extensive experiments demonstrate that our model achieves superior performance compared with existing methods that focus only on the negative samples.
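The pivotal-aware idea can be illustrated with a minimal sketch. The exact PAPCA objective, weighting scheme, and solver are defined in the paper; the snippet below only shows a generic reweighted PCA loop in which well-fitting samples receive larger weights and poorly fitting (likely polluted) samples are down-weighted. The function name, the exponential weighting, and the parameter tau are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a generic reweighted PCA loop in the spirit of
# "highlight well-fitting samples, suppress polluted ones". The weighting
# rule and all names below are assumptions, not the PAPCA algorithm itself.
import numpy as np

def reweighted_pca(X, k, n_iter=20, tau=1.0):
    """X: (n_samples, n_features) data matrix; k: target dimensionality."""
    Xc = X - X.mean(axis=0)                 # center the data
    n = Xc.shape[0]
    w = np.full(n, 1.0 / n)                 # start with uniform sample weights
    for _ in range(n_iter):
        # Weighted covariance emphasizes the currently well-fitting samples.
        C = (Xc * w[:, None]).T @ Xc
        eigvals, eigvecs = np.linalg.eigh(C)
        W = eigvecs[:, -k:]                 # top-k principal directions
        # Per-sample reconstruction error under the current subspace.
        residual = Xc - (Xc @ W) @ W.T
        err = np.linalg.norm(residual, axis=1)
        # Small error -> larger weight (pivotal); large error -> suppressed.
        w = np.exp(-err / tau)
        w /= w.sum()
    return W, w
```

On toy data with a few grossly corrupted rows, a loop of this kind typically drives the weights of the corrupted samples toward zero so that the remaining samples dominate the fitted subspace; PAPCA additionally enforces the rotational invariance property described in the abstract, which this simple sketch does not address.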
Pages: 12201-12210
Page count: 10