Distributed Continual Learning With CoCoA in High-Dimensional Linear Regression

Cited: 0
Authors
Hellkvist, Martin [1 ]
Ozcelikkale, Ayca [1 ]
Ahlen, Anders [1 ]
Affiliations
[1] Uppsala Univ, Dept Elect Engn, S-75121 Uppsala, Sweden
Funding
Swedish Research Council;
Keywords
Task analysis; Training; Distributed databases; Distance learning; Computer aided instruction; Data models; Training data; Multi-task networks; networked systems; distributed estimation; adaptation; overparametrization; NEURAL-NETWORKS; ALGORITHMS;
DOI
10.1109/TSP.2024.3361714
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Telecommunication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
We consider estimation under scenarios where the signals of interest exhibit change of characteristics over time. In particular, we consider the continual learning problem where different tasks, e.g., data with different distributions, arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature focusing on the centralized setting, we investigate the problem from a distributed estimation perspective. We consider the well-established distributed learning algorithm CoCoA, which distributes the model parameters and the corresponding features over the network. We provide exact analytical characterization for the generalization error of CoCoA under continual learning for linear regression in a range of scenarios, where overparameterization is of particular interest. These analytical results characterize how the generalization error depends on the network structure, the task similarity and the number of tasks, and show how these dependencies are intertwined. In particular, our results show that the generalization error can be significantly reduced by adjusting the network size, where the most favorable network size depends on task similarity and the number of tasks. We present numerical results verifying the theoretical analysis and illustrate the continual learning performance of CoCoA with a digit classification task.
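The setup described in the abstract can be illustrated with a small sketch: features (columns of the regression matrix) are partitioned across K nodes, each node refines only its own coefficient block against a shared residual, and the local updates are combined by averaging, in the spirit of CoCoA. Tasks arrive sequentially and the model is warm-started from the previous task's solution. This is a simplified illustration under assumed details (averaging aggregation, exact local least-squares solves, a toy two-task sequence), not the paper's exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def cocoa_least_squares(A_blocks, y, w_blocks, iters=50):
    """One CoCoA-style pass on a single task: the feature columns are
    partitioned across K nodes; each node solves a local least-squares
    subproblem on its block against the shared residual, and the
    updates are combined with averaging (damping by 1/K)."""
    K = len(A_blocks)
    for _ in range(iters):
        # shared residual, formed from the sum of the local predictions
        r = y - sum(A @ w for A, w in zip(A_blocks, w_blocks))
        for k in range(K):
            # local step: best update for this node's coefficient block
            dw = np.linalg.lstsq(A_blocks[k], r, rcond=None)[0]
            w_blocks[k] = w_blocks[k] + dw / K  # averaging aggregation
    return w_blocks

# Toy continual-learning sequence: two related tasks, solved in order,
# with the parameters warm-started from the previous task (illustrative
# choices of dimensions; p > n gives the overparameterized regime).
n, p, K = 30, 60, 4
w_true = rng.standard_normal(p)
w_blocks = [np.zeros(p // K) for _ in range(K)]
for shift in (0.0, 0.1):  # task 2 is a small perturbation of task 1
    A = rng.standard_normal((n, p))
    y = A @ (w_true + shift)
    A_blocks = np.split(A, K, axis=1)  # distribute features over nodes
    w_blocks = cocoa_least_squares(A_blocks, y, w_blocks)
    resid = np.linalg.norm(
        y - sum(Ab @ wb for Ab, wb in zip(A_blocks, w_blocks)))
```

Because each local step projects the residual onto that node's column span and the updates are damped by 1/K, the combined residual is non-increasing across iterations; warm-starting across tasks is what makes the interplay between task similarity, network size, and forgetting visible in experiments like the paper's.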
Pages: 1015 - 1031 (17 pages)