Efficient Stochastic Optimization for Low-Rank Distance Metric Learning

Cited by: 0
Authors
Zhang, Jie [1 ]
Zhang, Lijun [1 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing 210023, Jiangsu, Peoples R China
Source
THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2017
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Although distance metric learning has been successfully applied to many real-world applications, learning a distance metric from large-scale and high-dimensional data remains a challenging problem. Due to the PSD constraint, the per-iteration computational complexity of previous algorithms is at least O(d^2), where d is the dimensionality of the data. In this paper, we develop an efficient stochastic algorithm for a class of distance metric learning problems with nuclear norm regularization, referred to as low-rank DML. By utilizing the low-rank structure of the intermediate solutions and stochastic gradients, the complexity of our algorithm has a linear dependence on the dimensionality d. The key idea is to maintain all the iterates in factorized representations and construct stochastic gradients that are low-rank. In this way, the projection onto the PSD cone can be implemented efficiently by incremental SVD. Experimental results on several data sets validate the effectiveness and efficiency of our method.
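
The key idea in the abstract, keeping every iterate in a factorized form M = V diag(w) V^T so that stochastic gradients stay low-rank and the PSD projection acts on a small matrix, can be illustrated with a short NumPy sketch. This is a minimal sketch under assumed details: the pairwise hinge loss, the function name factorized_dml_step, and the hyperparameters are illustrative choices rather than the authors' exact algorithm, and the incremental eigen-update below plays the role of the incremental SVD mentioned in the abstract. It only shows why each update costs O(dk) instead of O(d^2).

# Illustrative sketch only (assumed loss and hyperparameters), not the authors' exact method.
import numpy as np

def factorized_dml_step(V, w, x_i, x_j, y, eta=0.01, lam=0.05, b=10.0):
    """One stochastic proximal step on the metric M = V @ np.diag(w) @ V.T.

    V : (d, k) matrix with orthonormal columns, w : (k,) non-negative weights,
    y : +1 for a similar pair, -1 for a dissimilar pair.
    The pairwise hinge loss max(0, 1 + y * (z^T M z - b)) with z = x_i - x_j
    has a rank-1 stochastic (sub)gradient y * z z^T, so the updated matrix
    lives in a (k+1)-dimensional subspace.  The nuclear-norm shrinkage and the
    PSD projection then reduce to shrinking and clipping the eigenvalues of a
    small (k+1) x (k+1) matrix; the whole step costs O(d*k + k^3), linear in d.
    """
    z = x_i - x_j
    c = V.T @ z                                    # coordinates of z in span(V)
    dist = float(w @ (c ** 2))                     # squared distance z^T M z
    g = y if 1.0 + y * (dist - b) > 0.0 else 0.0   # hinge (sub)gradient scale

    if g != 0.0:
        # Augment the basis with the part of z orthogonal to span(V).
        p = z - V @ c
        p_norm = np.linalg.norm(p)
        if p_norm > 1e-10:
            V = np.hstack([V, (p / p_norm)[:, None]])
            c = np.append(c, p_norm)
            w = np.append(w, 0.0)
        # M - eta * g * z z^T expressed in the (at most k+1)-dimensional basis V.
        S = np.diag(w) - eta * g * np.outer(c, c)
    else:
        S = np.diag(w)

    # Small eigendecomposition, nuclear-norm shrinkage, PSD projection.
    sig, W = np.linalg.eigh(S)
    sig = np.maximum(sig - eta * lam, 0.0)
    keep = sig > 0.0
    return V @ W[:, keep], sig[keep]

# Toy run on random data: the d x d metric is never formed explicitly.
rng = np.random.default_rng(0)
d = 1000
V = np.linalg.qr(rng.standard_normal((d, 5)))[0]   # initial rank-5 basis
w = np.ones(5)
for _ in range(100):
    x_i, x_j = rng.standard_normal(d), rng.standard_normal(d)
    y = float(rng.choice([-1, 1]))
    V, w = factorized_dml_step(V, w, x_i, x_j, y)
print("rank of the learned metric:", w.size)

The toy run only exercises the update on random pairs; the relevant point is that every operation touches the d x k factor and a small (k+1) x (k+1) matrix, never a d x d matrix.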
Pages: 933-939
Number of pages: 7
Related Papers
50 records in total
  • [21] Compressed Self-Attention for Deep Metric Learning with Low-Rank Approximation
    Chen, Ziye
    Gong, Mingming
    Ge, Lingjuan
    Du, Bo
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2058 - 2064
  • [22] Multi-Task Low-Rank Metric Learning Based on Common Subspace
    Yang, Peipei
    Huang, Kaizhu
    Liu, Cheng-Lin
    NEURAL INFORMATION PROCESSING, PT II, 2011, 7063 : 151 - 159
  • [23] Efficient Low-Rank Stochastic Gradient Descent Methods for Solving Semidefinite Programs
    Chen, Jianhui
    Yang, Tianbao
    Zhu, Shenghuo
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 33, 2014, 33 : 122 - 130
  • [24] On the distance to low-rank matrices in the maximum norm
    Budzinskiy, Stanislav
    LINEAR ALGEBRA AND ITS APPLICATIONS, 2024, 688 : 44 - 58
  • [25] A new perspective on low-rank optimization
    Bertsimas, Dimitris
    Cory-Wright, Ryan
    Pauphilet, Jean
    Mathematical Programming, 2023, 202 : 47 - 92
  • [26] Efficient Low-rank Federated Learning based on Singular Value Decomposition
    Kwon, Jungmin
    Park, Hyunggon
    PROCEEDINGS OF THE 2022 THE TWENTY-THIRD INTERNATIONAL SYMPOSIUM ON THEORY, ALGORITHMIC FOUNDATIONS, AND PROTOCOL DESIGN FOR MOBILE NETWORKS AND MOBILE COMPUTING, MOBIHOC 2022, 2022, : 285 - 286
  • [27] Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation
    Shah, Devavrat
    Song, Dogyoon
    Xu, Zhi
    Yang, Yuzhe
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [28] PELA: Learning Parameter-Efficient Models with Low-Rank Approximation
    Guo, Yangyang
    Wang, Guangzhi
    Kankanhalli, Mohan
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 15699 - 15709
  • [29] Efficient Wireless Federated Learning via Low-Rank Gradient Factorization
    Guo, Mingzhao
    Liu, Dongzhu
    Simeone, Osvaldo
    Wen, Dingzhu
    arXiv
  • [30] Efficient Wireless Federated Learning Via Low-Rank Gradient Factorization
    Guo, Mingzhao
    Liu, Dongzhu
    Simeone, Osvaldo
    Wen, Dingzhu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024