Variational Auto-Encoder Combined with Knowledge Graph Zero-Shot Learning

Cited by: 1
Authors
Zhang, Haitao [1 ]
Su, Lin [1 ]
Affiliations
[1] School of Software, Liaoning Technical University, Huludao, Liaoning, China
Keywords
graph convolutional network; knowledge graph; variational auto-encoder; variational graph auto-encoder; zero-shot learning;
DOI
10.3778/j.issn.1002-8331.2106-0430
Abstract
Recently, zero-shot learning combined with generative models has been widely studied, but such methods usually use only attributes and lack class semantics; a single source of information is not expressive enough to represent a class, which easily causes the domain-shift problem, hinders knowledge transfer, and lowers classification accuracy. To solve this problem, a variational auto-encoder combined with a knowledge graph for zero-shot learning (KG-VAE) is proposed. A hierarchically structured knowledge graph is built with class descriptions and word vectors as the semantic information, and the rich semantics of the knowledge graph are incorporated into the VAE model, so that the generated latent features retain more effective discriminative information, which reduces domain shift and promotes knowledge transfer. Evaluated on four public datasets and compared with the baseline method CADA-VAE, the average classification accuracy is improved, and an ablation experiment further verifies the usefulness of the knowledge graph. © 2023 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
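The record contains no code; the following is a minimal sketch, under assumed names and hyper-parameters, of the general idea the abstract describes: class word vectors are refined over a knowledge-graph adjacency by a small GCN and then aligned with the VAE latent features of image samples, in the spirit of CADA-VAE-style cross-modal alignment. Module and variable names such as GCNSemanticEncoder and kg_adj are illustrative, not taken from the paper.

# Minimal sketch (not the paper's code): align VAE latents of image features
# with class embeddings propagated over a knowledge graph by a simple GCN.
# All names, dimensions, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNSemanticEncoder(nn.Module):
    """Two-layer GCN that refines class word vectors over a KG adjacency."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency with self-loops, shape (N, N)
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)


class VAEEncoder(nn.Module):
    """Encodes image features into a Gaussian latent (mu, logvar)."""

    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 512)
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)

    def forward(self, x):
        h = F.relu(self.fc(x))
        return self.mu(h), self.logvar(h)


def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)


def normalize_adj(adj):
    """D^{-1/2} (A + I) D^{-1/2} for a dense adjacency matrix."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


if __name__ == "__main__":
    num_classes, word_dim, feat_dim, latent_dim = 10, 300, 2048, 64
    kg_adj = normalize_adj((torch.rand(num_classes, num_classes) > 0.7).float())
    word_vecs = torch.randn(num_classes, word_dim)   # class word vectors
    img_feats = torch.randn(32, feat_dim)            # CNN image features
    labels = torch.randint(0, num_classes, (32,))

    gcn = GCNSemanticEncoder(word_dim, 256, latent_dim)
    enc = VAEEncoder(feat_dim, latent_dim)

    class_emb = gcn(word_vecs, kg_adj)               # KG-refined class semantics
    mu, logvar = enc(img_feats)
    z = reparameterize(mu, logvar)

    # KL term of the VAE plus a cross-modal alignment term that pulls each
    # image latent toward its class's KG embedding (a stand-in for the paper's
    # full reconstruction and alignment objectives).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    align = F.mse_loss(z, class_emb[labels])
    loss = kl + align
    print(float(loss))

In a full model, the aligned latent space would then be used to train a classifier on latents sampled from both seen-class image features and unseen-class KG embeddings; the sketch above only shows the alignment step.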
Pages: 236-243
Number of pages: 7
Related papers
27 in total
[11]  
MISHRA A, KRISHNA REDDY S, MITTAL A, et al., A generative model for zero shot learning using conditional variational autoencoders[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2188-2196, (2018)
[12]  
SCHONFELD E, EBRAHIMI S, SINHA S, et al., Generalized zero- and few-shot learning via aligned variational autoencoders[C], Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8247-8255, (2019)
[13]  
VERMA V K, ARORA G, MISHRA A, et al., Generalized zero-shot learning via synthesized examples[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4281-4289, (2018)
[14]  
ZHU Y, XIE J, LIU B, et al., Learning feature-to-feature translator by alternating back-propagation for generative zero-shot learning[C], Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9844-9854, (2019)
[15]  
FU Y, HOSPEDALES T M, XIANG T, et al., Transductive multi-view zero-shot learning[J], IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 11, pp. 2332-2345, (2015)
[16]  
LAMPERT C H, NICKISCH H, HARMELING S., Attribute-based classification for zero-shot visual object categorization[J], IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 3, pp. 453-465, (2013)
[17]  
SHIGETO Y, SUZUKI I, HARA K, et al., Ridge regression, hubness, and zero-shot learning[C], Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 135-151, (2015)
[18]  
GULRAJANI I, AHMED F, ARJOVSKY M, et al., Improved training of Wasserstein GANs[J], (2017)
[19]  
WANG X, YE Y, GUPTA A., Zero-shot recognition via semantic embeddings and knowledge graphs[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6857-6866, (2018)
[20]  
KAMPFFMEYER M, CHEN Y, LIANG X, et al., Rethinking knowledge graph propagation for zero-shot learning[C], Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11487-11496, (2019)