Learning to Generate with Memory

Cited by: 0
Authors
Li, Chongxuan [1 ]
Zhu, Jun [1 ]
Zhang, Bo [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Tech, State Key Lab Intell Tech & Sys, TNList Lab,Ctr Bioinspired Comp Res, Beijing 100084, Peoples R China
Keywords
DOI
Not available
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Memory units have been widely used to enrich the capabilities of deep networks in capturing long-term dependencies for reasoning and prediction tasks, but they have received little investigation in deep generative models (DGMs), which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process of representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound on the data likelihood via auto-encoding variational Bayes, with an asymmetric recognition network learned jointly to infer high-level invariant representations. The asymmetric architecture reduces the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and that DGMs with memory achieve state-of-the-art quantitative results.
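The abstract describes a variational auto-encoder whose generator reads an external memory through smooth (soft) attention, so the whole model remains differentiable and can be trained on the usual variational bound. Below is a minimal sketch of that idea in PyTorch; it is not the authors' implementation, and all names (MemoryVAE, mem_keys, mem_vals, read_memory) and dimensions are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a VAE whose decoder reads a
# learnable external memory via soft attention, trained with the
# auto-encoding variational Bayes (negative ELBO) objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256, mem_slots=128, mem_dim=64):
        super().__init__()
        # Recognition (inference) network: q(z|x) -> Gaussian parameters.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # External memory: learnable keys (addressing) and values (content).
        self.mem_keys = nn.Parameter(torch.randn(mem_slots, mem_dim) * 0.1)
        self.mem_vals = nn.Parameter(torch.randn(mem_slots, mem_dim) * 0.1)
        self.query = nn.Linear(z_dim, mem_dim)           # latent code -> memory query
        # Generator combines the latent code with the memory readout.
        self.dec = nn.Sequential(nn.Linear(z_dim + mem_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def read_memory(self, z):
        # Smooth attention: a softmax over key-query similarities keeps the
        # readout differentiable, so keys and values get gradients too.
        q = self.query(z)                                # (B, mem_dim)
        attn = F.softmax(q @ self.mem_keys.t(), dim=-1)  # (B, mem_slots)
        return attn @ self.mem_vals                      # (B, mem_dim)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)             # reparameterization trick
        logits = self.dec(torch.cat([z, self.read_memory(z)], dim=-1))
        return logits, mu, logvar

def neg_elbo(x, logits, mu, logvar):
    # Bernoulli reconstruction term plus analytic Gaussian KL, averaged per example.
    rec = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl) / x.size(0)

if __name__ == "__main__":
    model = MemoryVAE()
    x = torch.rand(16, 784).bernoulli()                  # toy binary "images"
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    opt.zero_grad()
    logits, mu, logvar = model(x)
    loss = neg_elbo(x, logits, mu, logvar)
    loss.backward()
    opt.step()
    print(f"negative ELBO on toy batch: {loss.item():.2f}")
```

Because the readout is a convex combination of memory slots, this is one way to realize the "smooth attention model" the abstract mentions: gradients flow into both the recognition network and the memory, allowing end-to-end training of the whole system.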
Pages: 10
Related papers
50 records in total
  • [31] Learning Methods to Generate Good Plans: Integrating HTN Learning and Reinforcement Learning
    Hogg, Chad
    Kuter, Ugur
    Munoz-Avila, Hector
    PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10), 2010, : 1530 - 1535
  • [32] Learning to Generate Realistic LiDAR Point Clouds
    Zyrianov, Vlas
    Zhu, Xiyue
    Wang, Shenlong
    COMPUTER VISION, ECCV 2022, PT XXIII, 2022, 13683 : 17 - 35
  • [33] Learning to Generate Product Reviews from Attributes
    Dong, Li
    Huang, Shaohan
    Wei, Furu
    Lapata, Mirella
    Zhou, Ming
    Xu, Ke
    15TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2017), VOL 1: LONG PAPERS, 2017, : 623 - 632
  • [34] PATHNET: LEARNING TO GENERATE TRAJECTORIES AVOIDING OBSTACLES
    Watt, Alassane M.
    Yoshiyasu, Yusuke
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 3194 - 3198
  • [35] SceneGen: Learning to Generate Realistic Traffic Scenes
    Tan, Shuhan
    Wong, Kelvin
    Wang, Shenlong
    Manivasagam, Sivabalan
    Ren, Mengye
    Urtasun, Raquel
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 892 - 901
  • [36] Learning to Generate Synthetic Data via Compositing
    Tripathi, Shashank
    Chandra, Siddhartha
    Agrawal, Amit
    Tyagi, Ambrish
    Rehg, James M.
    Chari, Visesh
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 461 - 470
  • [37] Learning to Generate Chairs with Generative Adversarial Nets
    Zamyatin, Evgeny
    Filchenkov, Andrey
    7TH INTERNATIONAL YOUNG SCIENTISTS CONFERENCE ON COMPUTATIONAL SCIENCE, YSC2018, 2018, 136 : 200 - 209
  • [38] LEARNING TO GENERATE DIVERSE QUESTIONS FROM KEYWORDS
    Pan, Youcheng
Hu, Baotian
    Chen, Qingcai
    Xiang, Yang
    Wang, Xiaolong
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8224 - 8228
  • [39] Learning to Generate SAR Images With Adversarial Autoencoder
    Song, Qian
    Xu, Feng
    Zhu, Xiao Xiang
    Jin, Ya-Qiu
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [40] CanvasVAE: Learning to Generate Vector Graphic Documents
    Yamaguchi, Kota
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 5461 - 5469