Distributed Learning of Fully Connected Neural Networks using Independent Subnet Training

Cited by: 11
Authors
Yuan, Binhang [1 ]
Wolfe, Cameron R. [1 ]
Dun, Chen [1 ]
Tang, Yuxin [1 ]
Kyrillidis, Anastasios [1 ]
Jermaine, Chris [1 ]
Affiliation
[1] Rice Univ, Houston, TX 77251 USA
Source
PROCEEDINGS OF THE VLDB ENDOWMENT | 2022, Vol. 15, No. 8
Keywords
ALGORITHMS;
DOI
10.14778/3529337.3529343
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Distributed machine learning (ML) can bring more computational resources to bear than single-machine learning, thus enabling reductions in training time. Distributed learning partitions models and data over many machines, allowing model and dataset sizes beyond the available compute power and memory of a single machine. In practice, though, distributed ML is challenging when distribution is mandatory rather than chosen by the practitioner. In such scenarios, data may unavoidably be separated among workers due to limited per-worker memory capacity or because of data privacy concerns. In these settings, existing distributed methods either fail outright because of dominant inter-worker transfer costs, or do not apply at all. We propose a new approach to distributed fully connected neural network learning, called independent subnet training (IST), to handle these cases. In IST, the original network is decomposed into a set of narrow subnetworks with the same depth. These subnetworks are then trained locally before parameters are exchanged to produce new subnets, and the training cycle repeats. Such a naturally "model parallel" approach limits memory usage by storing only a portion of the network parameters on each device. Additionally, no data sharing between workers is required (i.e., subnet training is local and independent), and both communication volume and frequency are reduced by decomposing the original network into independent subnets. These properties allow IST to cope with distributed data, slow interconnects, and limited device memory, making IST a suitable approach for cases of mandatory distribution. We show experimentally that IST yields training times much lower than those of common distributed learning approaches.
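To make the decompose/train/exchange cycle described above concrete, the following is a minimal single-process sketch of one IST round for a one-hidden-layer ReLU network, with workers simulated sequentially in NumPy. All names (partition_neurons, ist_round) and hyperparameters are illustrative assumptions, not the authors' implementation; biases and the paper's actual communication layer are omitted.

import numpy as np

def partition_neurons(num_neurons, num_workers, rng):
    # Randomly assign each hidden neuron to exactly one worker.
    perm = rng.permutation(num_neurons)
    return np.array_split(perm, num_workers)

def ist_round(W1, W2, X, Y, num_workers, lr, local_steps, rng):
    # One IST cycle: decompose into subnets, run local SGD on each,
    # then write updated parameters back (the "exchange" step).
    parts = partition_neurons(W1.shape[1], num_workers, rng)
    for neurons in parts:                      # each iteration simulates one worker
        w1 = W1[:, neurons].copy()             # this worker's slice of layer 1
        w2 = W2[neurons, :].copy()             # matching slice of layer 2
        for _ in range(local_steps):           # independent local training
            h = np.maximum(X @ w1, 0.0)        # ReLU forward pass of the subnet
            grad = 2.0 * (h @ w2 - Y) / len(X) # gradient of MSE loss at the output
            dh = (grad @ w2.T) * (h > 0)       # backprop through ReLU
            w2 -= lr * (h.T @ grad)
            w1 -= lr * (X.T @ dh)
        W1[:, neurons] = w1                    # reassemble the full network
        W2[neurons, :] = w2
    return W1, W2

# Usage: repeat the decompose/train/reassemble cycle on toy data.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(256, 32)), rng.normal(size=(256, 1))
W1 = 0.1 * rng.normal(size=(32, 64))
W2 = 0.1 * rng.normal(size=(64, 1))
for _ in range(20):
    W1, W2 = ist_round(W1, W2, X, Y, num_workers=4, lr=0.01, local_steps=5, rng=rng)

In an actual distributed deployment, each subnet would presumably train on a separate worker in parallel, with the write-back step realized as a parameter exchange over the network rather than in-place slicing; because each hidden neuron belongs to exactly one subnet, workers exchange only their disjoint parameter slices, which is what keeps communication volume low.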
Pages: 1581-1590
Number of pages: 10