Graph Regularized Auto-Encoders for Image Representation

Cited by: 36
Authors
Liao, Yiyi [1 ]
Wang, Yue [1 ]
Liu, Yong [1 ]
Institutions
[1] Institute of Cyber-Systems and Control, State Key Laboratory of Industrial Control Technology, Hangzhou 310027, Zhejiang, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Auto-encoders; graph regularization; local invariance; DIMENSIONALITY;
DOI
10.1109/TIP.2016.2605010
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image representation has been intensively explored in computer vision because of its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image that preserves the inherent information of the original image space. From the perspective of manifold learning, this is achieved through the local invariance idea, which captures the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE). Through graph regularization, the proposed method preserves local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes a weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local structure of the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, experimental results on both clustering and classification tasks demonstrate the effectiveness of GAE and the correctness of the proposed theoretical analysis, and suggest that, compared with auto-encoder variants and existing local invariant methods, GAE is a superior deep representation learning technique.
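A rough sketch of the Jacobian interpretation stated above, assuming the graph regularizer takes the common pairwise form over hidden codes $h_i = f(x_i)$ with affinity weights $W_{ij}$ (the paper's exact formulation may differ): a first-order Taylor expansion $f(x_j) \approx f(x_i) + J_f(x_i)(x_j - x_i)$ gives

$$\sum_{i,j} W_{ij}\,\bigl\lVert f(x_i) - f(x_j) \bigr\rVert_2^2 \;\approx\; \sum_{i,j} W_{ij}\,\bigl\lVert J_f(x_i)\,(x_j - x_i) \bigr\rVert_2^2,$$

so the penalty behaves like a Frobenius-type norm of the encoder Jacobian $J_f$, weighted by the local affinities and neighbour displacements captured in the input space.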
Pages: 2839-2852
Page count: 14
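A minimal, illustrative PyTorch sketch of such an objective (not the authors' implementation): it assumes the common graph-Laplacian penalty Tr(H^T L H) with L = D - W on the hidden codes, a single auto-encoder layer, and a k-nearest-neighbour heat-kernel affinity built in the input space; the layer sizes, k, sigma, and the trade-off weight lam are illustrative choices, not values from the paper.

# Minimal sketch (assumptions, not the authors' code): one auto-encoder layer
# trained with a reconstruction loss plus a graph-Laplacian regularizer
# Tr(H^T L H) = 1/2 * sum_ij W_ij ||h_i - h_j||^2 on the hidden codes H.
import torch
import torch.nn as nn


class GraphRegularizedAE(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)


def knn_heat_affinity(x, k=5, sigma=1.0):
    # Heat-kernel affinity restricted to k nearest neighbours (illustrative choice).
    d2 = torch.cdist(x, x).pow(2)
    w = torch.exp(-d2 / (2.0 * sigma ** 2))
    idx = d2.topk(k + 1, largest=False).indices      # self + k nearest neighbours
    mask = torch.zeros_like(w).scatter_(1, idx, 1.0)
    mask = torch.maximum(mask, mask.t())             # symmetrize the neighbourhood graph
    return w * mask


def gae_loss(model, x, w, lam=0.1):
    h, x_hat = model(x)
    recon = ((x_hat - x) ** 2).sum(dim=1).mean()     # reconstruction term
    lap = torch.diag(w.sum(dim=1)) - w               # graph Laplacian L = D - W
    reg = torch.trace(h.t() @ lap @ h) / x.size(0)   # Tr(H^T L H), batch-averaged
    return recon + lam * reg


if __name__ == "__main__":
    x = torch.rand(64, 784)                          # e.g. a batch of flattened images
    model = GraphRegularizedAE(784, 128)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    w = knn_heat_affinity(x)                         # affinities from the input space
    loss = gae_loss(model, x, w)
    opt.zero_grad()
    loss.backward()
    opt.step()

In the paper's setting such layers would be stacked and the affinity matrix would be computed from the training images; random data is used here only to keep the snippet self-contained and runnable.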