Learning to Generate with Memory

Cited by: 0
Authors
Li, Chongxuan [1 ]
Zhu, Jun [1 ]
Zhang, Bo [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Tech, State Key Lab Intell Tech & Sys, TNList Lab, Ctr Bioinspired Comp Res, Beijing 100084, Peoples R China
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification code
081104; 0812; 0835; 1405
Abstract
Memory units have been widely used to enrich the capabilities of deep networks in capturing long-term dependencies for reasoning and prediction tasks, but they have received little investigation in deep generative models (DGMs), which excel at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process of representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound on the data likelihood via auto-encoding variational Bayes, with an asymmetric recognition network learned jointly to infer high-level invariant representations. The asymmetric architecture reduces the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing-value imputation, and that DGMs with memory can achieve state-of-the-art quantitative results.
Pages: 10
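The abstract outlines the core architectural idea: a VAE-style generator whose decoder reads from a learnable external memory through a smooth (softmax) attention model, so the whole network stays differentiable and can be trained by maximizing a variational lower bound, while the recognition network remains a plain bottom-up encoder. The sketch below is an illustrative reconstruction of that idea under these assumptions, not the authors' implementation; the names MemoryVAE, mem_slots, mem_dim, and the simple MLP encoder/decoder are hypothetical choices made for brevity.

```python
# Minimal sketch (assumed architecture, not the paper's code): a VAE whose
# decoder attends over a learnable external memory with softmax ("smooth")
# attention, trained end-to-end by maximizing the ELBO.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256, mem_slots=128, mem_dim=256):
        super().__init__()
        # Asymmetric recognition network: a plain bottom-up encoder q(z|x).
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # External memory: a learnable slot matrix read only by the generator.
        self.memory = nn.Parameter(0.01 * torch.randn(mem_slots, mem_dim))
        self.query = nn.Linear(z_dim, mem_dim)  # maps latent code to an attention query
        self.dec = nn.Sequential(nn.Linear(z_dim + mem_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        # Smooth attention over memory slots keeps the read differentiable.
        attn = F.softmax(self.query(z) @ self.memory.t(), dim=-1)   # (B, mem_slots)
        read = attn @ self.memory                                   # (B, mem_dim)
        logits = self.dec(torch.cat([z, read], dim=-1))             # top-down generation
        return logits, mu, logvar

def elbo_loss(logits, x, mu, logvar):
    # Negative ELBO for binarized data: reconstruction term + KL(q(z|x) || N(0, I)).
    rec = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl) / x.size(0)

# Usage (with inputs x in [0, 1]):
#   model = MemoryVAE()
#   logits, mu, logvar = model(x)
#   loss = elbo_loss(logits, x, mu, logvar); loss.backward()
```

In this sketch only the top-down generative path reads the memory, while the bottom-up recognition network ignores it, mirroring the asymmetry the abstract describes for reducing competition between invariant feature extraction and detail generation.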