Self-Supervised Learning for Specified Latent Representation

Cited by: 4
Authors
Liu, Chicheng [1 ,2 ]
Song, Libin [1 ,2 ]
Zhang, Jiwen [1 ,2 ]
Chen, Ken [1 ,2 ]
Xu, Jing [1 ,2 ]
Affiliations
[1] Tsinghua Univ, State Key Lab Tribol, Beijing Key Lab Precis Ultra Precis Mfg Equipment, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Mech Engn, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Supervised learning; Shape; Semantics; Unsupervised learning; Encoding; Task analysis; Neural networks; Latent representation; neural networks; unsupervised learning; OBJECT DETECTION; REGRESSION;
DOI
10.1109/TFUZZ.2019.2904237
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Latent representations produced by current unsupervised learning methods carry no semantic meaning, so they are difficult to relate directly to physical tasks in the real world. To this end, this paper proposes a specified latent representation with physical semantic meaning. First, a few labeled samples are used to generate the framework of the latent space; these labeled samples are mapped to framework nodes in the latent space. Second, a self-learning method using structured unlabeled samples is proposed to shape the free space between the framework nodes. The proposed specified latent representation therefore combines the advantages of supervised and unsupervised learning. The method is verified by numerical simulations and real-world experiments.
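The two-step procedure described in the abstract (a few labeled samples pin down framework nodes of the latent space; structured unlabeled samples then fill the space between them) can be sketched roughly as follows. This is a hypothetical toy surrogate, not the paper's implementation: the observation function `observe`, the node placement, and the inverse-distance interpolation encoder are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a scalar physical quantity z (e.g. a pose
# parameter) is observed through a smooth nonlinear 3-D measurement.
def observe(z):
    return np.stack([np.sin(z), z + 0.1 * z**2, np.cos(2 * z)], axis=-1)

# Structured (ordered) unlabeled samples covering the task range.
z_unlabeled = np.sort(rng.uniform(0.0, 1.0, 200))
X_unlabeled = observe(z_unlabeled)

# Step 1: a few labeled samples define the framework nodes of the
# latent space; their latent codes ARE their physical values.
z_nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
X_nodes = observe(z_nodes)

# Step 2 (self-learning surrogate): encode an unlabeled sample by
# interpolating between its two nearest framework nodes in observation
# space, shaping the free space between the nodes.
def encode(x):
    d = np.linalg.norm(X_nodes - x, axis=1)
    i, j = np.argsort(d)[:2]                  # two nearest framework nodes
    wi, wj = 1.0 / (d[i] + 1e-9), 1.0 / (d[j] + 1e-9)
    return (wi * z_nodes[i] + wj * z_nodes[j]) / (wi + wj)

z_hat = np.array([encode(x) for x in X_unlabeled])
print("mean absolute latent error:", np.abs(z_hat - z_unlabeled).mean())
```

Because each estimate is a convex combination of framework-node values, the recovered latent codes stay inside the physically meaningful range spanned by the labeled samples, which is the key property the paper attributes to a "specified" latent representation.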
Pages: 47-59
Number of pages: 13