Hierarchical Indian Buffet Neural Networks for Bayesian Continual Learning

Cited by: 0
Authors:
Kessler, Samuel [1 ]
Vu Nguyen [2 ]
Zohren, Stefan [1 ]
Roberts, Stephen J. [1 ]
Affiliations:
[1] Univ Oxford, Oxford, England
[2] Amazon, Adelaide, SA, Australia
Source:
UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2021, Vol 161
Keywords:
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
We place an Indian Buffet Process (IBP) prior over the structure of a Bayesian Neural Network (BNN), allowing the complexity of the BNN to grow and shrink automatically. We further extend this model so that the prior on the structure of each hidden layer is shared globally across all layers, using a Hierarchical IBP (H-IBP). We apply this model to the problem of resource allocation in Continual Learning (CL), where new tasks occur and the network requires extra resources. Our model uses online variational inference with reparameterisation of the Bernoulli and Beta distributions that constitute the IBP and H-IBP priors. Since the number of weights in each layer of the BNN is learnt automatically, overfitting and underfitting problems are largely overcome. We show empirically that our approach offers a competitive edge over existing methods in CL.
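The two ingredients named in the abstract can be sketched briefly: the IBP's stick-breaking construction (Beta draws whose cumulative product gives decreasing activation probabilities for successive hidden units) and a reparameterised, differentiable relaxation of the Bernoulli mask (a Concrete/Gumbel-softmax-style sample), which is what makes gradient-based variational inference possible. This is a minimal illustrative sketch, not the paper's implementation; the function names, the truncation level `K`, and the temperature value are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ibp_stick_breaking_probs(alpha, K, rng):
    """Truncated stick-breaking construction of IBP feature probabilities:
    v_k ~ Beta(alpha, 1), pi_k = prod_{j<=k} v_j, so pi_1 >= pi_2 >= ..."""
    v = rng.beta(alpha, 1.0, size=K)
    return np.cumprod(v)

def relaxed_bernoulli(pi, temperature, rng):
    """Concrete (Gumbel-softmax) relaxation of Bernoulli(pi): a
    reparameterised sample in (0, 1) through which gradients can flow."""
    u = rng.uniform(1e-8, 1.0 - 1e-8, size=pi.shape)
    logits = np.log(pi) - np.log1p(-pi)        # logit of the probabilities
    noise = np.log(u) - np.log1p(-u)           # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

# Soft binary mask over 10 hidden units: early units are kept with high
# probability, later units decay, so effective layer width is learnt.
pi = ibp_stick_breaking_probs(alpha=3.0, K=10, rng=rng)
z = relaxed_bernoulli(pi, temperature=0.5, rng=rng)
```

As the temperature approaches 0 the relaxed sample approaches a hard 0/1 mask; during training a moderate temperature keeps the objective differentiable, which is the usual motivation for this kind of reparameterisation.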
Pages: 749-759 (11 pages)
Related papers (50 in total):
  • [1] Continual Learning Using Bayesian Neural Networks
    Li, Honglin
    Barnaghi, Payam
    Enshaeifare, Shirin
    Ganz, Frieder
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (09) : 4243 - 4252
  • [2] Bayesian continual learning via spiking neural networks
    Skatchkovsky, Nicolas
    Jang, Hyeryung
    Simeone, Osvaldo
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2022, 16
  • [3] A hierarchical Bayesian learning scheme for autoregressive neural networks
    Acernese, F
    Barone, F
    De Rosa, R
    Eleuteri, A
    Milano, L
    Tagliaferri, R
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 1356 - 1360
  • [4] Hierarchical Bayesian Inference and Learning in Spiking Neural Networks
    Guo, Shangqi
    Yu, Zhaofei
    Deng, Fei
    Hu, Xiaolin
    Chen, Feng
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49 (01) : 133 - 145
  • [5] Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks
    Luo, Yadan
    Huang, Zi
    Zhang, Zheng
    Wang, Ziwei
    Baktashmotlagh, Mahsa
    Yang, Yang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5021 - 5028
  • [6] INDIAN BUFFET GAME WITH NON-BAYESIAN SOCIAL LEARNING
    Jiang, Chunxiao
    Chen, Yan
    Gao, Yang
    Liu, K. J. Ray
    2013 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2013, : 309 - 312
  • [7] Continual Learning with Neural Networks: A Review
    Awasthi, Abhijeet
    Sarawagi, Sunita
    PROCEEDINGS OF THE 6TH ACM IKDD CODS AND 24TH COMAD, 2019, : 362 - 365
  • [8] Bayesian Hierarchical Convolutional Neural Networks
    Bensen, Alexis
    Kahana, Adam
    Woods, Zerotti
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [9] Hierarchical Prototype Networks for Continual Graph Representation Learning
    Zhang, Xikun
    Song, Dongjin
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4622 - 4636
  • [10] Continual robot learning with constructive neural networks
    Grossmann, A
    Poli, R
    LEARNING ROBOTS, PROCEEDINGS, 1998, 1545 : 95 - 108