Variational Auto-Encoder Combined with Knowledge Graph Zero-Shot Learning

Cited by: 1
Authors
Zhang, Haitao [1 ]
Su, Lin [1 ]
Affiliations
[1] School of Software, Liaoning Technical University, Huludao, Liaoning, China
Keywords
graph convolutional network; knowledge graph; variational auto-encoder; variational graph auto-encoder; zero-shot learning;
DOI
10.3778/j.issn.1002-8331.2106-0430
Abstract
Recently, zero-shot learning combined with generative models has been widely studied, but such methods usually rely on attributes alone and lack class semantics; a single source of information is not expressive enough to represent a class. This easily causes the domain-shift problem, hinders knowledge transfer, and lowers classification accuracy. To address this problem, a zero-shot learning method combining a variational auto-encoder with a knowledge graph (KG-VAE) is proposed. A hierarchically structured knowledge graph built from class descriptions and word vectors serves as the semantic information, and the rich semantics of the knowledge graph are incorporated into the VAE model. The generated latent features retain more of the effective deterministic information, which reduces domain shift and promotes knowledge transfer. Evaluated on four public datasets and compared with the baseline method CADA-VAE, KG-VAE improves the average classification accuracy, and ablation experiments confirm the usefulness of the knowledge graph. © 2023 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
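The abstract describes an architecture in which class semantics from a knowledge graph are fused into a VAE's latent space. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: a small GCN encodes class word vectors over a knowledge-graph adjacency matrix, and a VAE over image features is trained with an extra term that aligns latent codes with the resulting class embeddings. All module names, dimensions, and loss weights (SemanticGCN, VisualVAE, kgvae_loss, beta, gamma) are assumptions introduced for illustration.

```python
# Minimal sketch (PyTorch) of a knowledge-graph-conditioned VAE for zero-shot learning.
# This is an illustrative assumption based on the abstract, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return F.relu(self.linear(a_hat @ h))


class SemanticGCN(nn.Module):
    """Encodes knowledge-graph nodes (classes) into semantic embeddings."""
    def __init__(self, word_dim=300, hidden_dim=512, sem_dim=128):
        super().__init__()
        self.gc1 = GCNLayer(word_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, sem_dim)

    def forward(self, word_vecs, a_hat):
        return self.gc2(self.gc1(word_vecs, a_hat), a_hat)


class VisualVAE(nn.Module):
    """VAE over image features; its latent space is aligned with class semantics."""
    def __init__(self, feat_dim=2048, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, feat_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar, z


def kgvae_loss(x, x_rec, mu, logvar, z, class_sem, labels, beta=1.0, gamma=1.0):
    """Reconstruction + KL + alignment between latent codes and their class embeddings."""
    rec = F.mse_loss(x_rec, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    align = F.mse_loss(z, class_sem[labels])  # pull z toward its class's graph embedding
    return rec + beta * kl + gamma * align


if __name__ == "__main__":
    n_classes, word_dim = 10, 300
    adj = torch.eye(n_classes)                    # toy knowledge graph (self-loops only)
    a_hat = adj / adj.sum(1, keepdim=True)        # simple row-normalization
    word_vecs = torch.randn(n_classes, word_dim)  # toy class word vectors

    gcn, vae = SemanticGCN(), VisualVAE()
    feats = torch.randn(32, 2048)                 # batch of image features
    labels = torch.randint(0, n_classes, (32,))
    x_rec, mu, logvar, z = vae(feats)
    loss = kgvae_loss(feats, x_rec, mu, logvar, z, gcn(word_vecs, a_hat), labels)
    print(loss.item())
```

After training such a model on seen classes, synthetic latent features could be sampled for unseen classes from their graph embeddings and used to train a conventional classifier, which is the usual pattern in generative zero-shot learning.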
Pages: 236-243
Page count: 7