Multi-view dreaming: multi-view world model with contrastive learning

Cited by: 0
Authors
Kinose A. [1]
Okumura R. [2]
Okada M. [2]
Taniguchi T. [2,3]
Affiliations
[1] Research and Development Center, Panasonic Connect Co., Ltd., Tokyo
[2] Digital and AI Technology Center, Technology Division, Panasonic Holdings Corporation, Kadoma, Osaka
[3] College of Information Science and Engineering, Ritsumeikan University, Kusatsu
Keywords
multimodal; reinforcement learning; robotic manipulation; sensor integration; world models
DOI
10.1080/01691864.2023.2264363
Abstract
In this paper, we propose Multi-View Dreaming, a novel reinforcement learning agent for integrated recognition and control from multi-view observations, built by extending Dreaming. Most current reinforcement learning methods assume a single-view observation space, which imposes limitations on the observed data, such as missing spatial information and occlusions. This makes it difficult to obtain ideal observational information from the environment and is a bottleneck for real-world robotics applications. In this paper, we use contrastive learning to train a shared latent space between different viewpoints and show how the Product of Experts approach can be used to integrate and control the probability distributions of latent states for multiple viewpoints. We also propose Multi-View DreamingV2, a variant of Multi-View Dreaming that models the latent state with a categorical distribution instead of a Gaussian. Experiments show that the proposed method outperforms simple extensions of existing methods in a realistic robot control task. © 2023 Informa UK Limited, trading as Taylor & Francis Group and The Robotics Society of Japan.
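
The abstract names two concrete mechanisms: a contrastive objective that aligns the latents of different viewpoints, and Product-of-Experts (PoE) fusion of the per-view latent distributions. The sketch below illustrates both under stated assumptions; it is not the authors' code, and all names (ViewEncoder, info_nce, poe_gaussian, poe_categorical) and network sizes are hypothetical, assuming PyTorch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ViewEncoder(nn.Module):
        """Encodes one camera view into a diagonal-Gaussian latent
        (hypothetical architecture)."""
        def __init__(self, obs_dim: int, latent_dim: int):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ELU())
            self.mean = nn.Linear(256, latent_dim)
            self.log_std = nn.Linear(256, latent_dim)

        def forward(self, obs):
            h = self.net(obs)
            return self.mean(h), self.log_std(h)

    def info_nce(z_a, z_b, temperature=0.1):
        # InfoNCE contrastive loss: latents of the same timestep seen from
        # two views are positives; other batch elements serve as negatives.
        z_a = F.normalize(z_a, dim=-1)
        z_b = F.normalize(z_b, dim=-1)
        logits = z_a @ z_b.t() / temperature        # (B, B) similarity matrix
        targets = torch.arange(z_a.size(0), device=z_a.device)
        return F.cross_entropy(logits, targets)

    def poe_gaussian(means, log_stds):
        # Product of diagonal Gaussians: precision-weighted fusion, so the
        # more confident view dominates the fused multi-view latent.
        precisions = [torch.exp(-2.0 * ls) for ls in log_stds]  # 1 / sigma^2
        precision = sum(precisions)
        mean = sum(p * m for p, m in zip(precisions, means)) / precision
        return mean, 0.5 * torch.log(1.0 / precision)  # fused mean, log-std

    def poe_categorical(logits_list):
        # Categorical analogue (DreamingV2-style latent): the normalized
        # product of categorical distributions is the softmax of summed logits.
        return F.log_softmax(sum(logits_list), dim=-1)

    # Usage on random stand-in observations:
    enc_a, enc_b = ViewEncoder(64, 32), ViewEncoder(64, 32)
    mu_a, ls_a = enc_a(torch.randn(8, 64))
    mu_b, ls_b = enc_b(torch.randn(8, 64))
    loss = info_nce(mu_a, mu_b)                     # align the two views
    fused_mu, fused_ls = poe_gaussian([mu_a, mu_b], [ls_a, ls_b])

Precision-weighted fusion means an occluded or uninformative view contributes little to the fused state, which is the intuition behind using PoE for multi-view integration.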
Pages: 1212-1220
Number of pages: 8
Related papers
50 records in total
  • [41] Multi-View Graph Contrastive Learning for Urban Region Representation
    Zhang, Yu
    Xu, Yonghui
    Cui, Lizhen
    Yan, Zhongmin
    Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2023
  • [42] Multi-View Contrastive Enhanced Heterogeneous Graph Structure Learning
    Bing R.
    Yuan G.
    Meng F.
    Wang S.
    Qiao S.
    Wang Z.
    Ruan Jian Xue Bao/Journal of Software, 2023, 34 (10)
  • [43] Unsupervised Multi-view Learning
    Huang, Ling
    Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019: 6442-6443
  • [44] Multi-view Network Embedding with Structure and Semantic Contrastive Learning
    Shang, Yifan
    Ye, Xiucai
    Sakurai, Tetsuya
    2023 IEEE International Conference on Multimedia and Expo (ICME), 2023: 870-875
  • [45] Multi-view Contrastive Learning for Knowledge-Aware Recommendation
    Yu, Ruiguo
    Li, Zixuan
    Zhao, Mankun
    Zhang, Wenbin
    Yang, Ming
    Yu, Jian
    Neural Information Processing, ICONIP 2023, Pt V, 2024, 14451: 211-223
  • [47] Dual Contrastive Prediction for Incomplete Multi-View Representation Learning
    Lin, Yijie
    Gou, Yuanbiao
    Liu, Xiaotian
    Bai, Jinfeng
    Lv, Jiancheng
    Peng, Xi
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45 (4): 4447-4461
  • [48] Learning Contrastive Multi-View Graphs for Recommendation (Student Abstract)
    Cheng, Zhangtao
    Zhong, Ting
    Zhang, Kunpeng
    Walker, Joojo
    Zhou, Fan
    Thirty-Sixth AAAI Conference on Artificial Intelligence / Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence / Twelfth Symposium on Educational Advances in Artificial Intelligence, 2022: 12927-12928
  • [49] A review on multi-view learning
    Yu, Zhiwen
    Dong, Ziyang
    Yu, Chenchen
    Yang, Kaixiang
    Fan, Ziwei
    Chen, C. L. Philip
    Frontiers of Computer Science, 2025, 19 (7)
  • [50] Multi-View Reinforcement Learning
    Li, Minne
    Wu, Lisheng
    Ammar, Haitham Bou
    Wang, Jun
    Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019, 32