On the Challenge of Training Small Scale Neural Networks on Large Scale Computing Systems

Cited by: 0
Authors
Malysiak, Darius [1 ]
Grimm, Matthias [1 ]
Affiliations
[1] Hsch Ruhr West, Inst Comp Sci, Bottrop, Germany
Source
2015 16TH IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND INFORMATICS (CINTI) | 2015
Keywords
statistics; gradient descent; gpgpu; high performance computing; neural networks; backpropagation; opencl; cuda
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We present a novel approach to distributing small- to mid-scale neural networks onto modern parallel architectures. In this context we discuss the induced challenges and possible solutions. We provide a detailed theoretical analysis with respect to space and time complexity and support our computation model with evaluations that show a performance gain over state-of-the-art approaches.
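The record's abstract summarizes the approach without reproducing the distribution scheme itself. For orientation only, the following is a minimal sketch of plain data-parallel gradient averaging for a small network, the standard baseline against which such distribution schemes are typically compared, not the authors' method; the helper names (init_params, grads, parallel_step) and the toy network shape are illustrative assumptions.

```python
import numpy as np

# Generic data-parallel gradient averaging for a one-hidden-layer MLP.
# This is an illustrative baseline, NOT the scheme proposed in the paper:
# each "worker" computes backpropagation gradients on its shard of the
# minibatch, and the shard gradients are averaged before a single update.

rng = np.random.default_rng(0)

def init_params(n_in, n_hid, n_out):
    return {
        "W1": rng.standard_normal((n_in, n_hid)) * 0.1,
        "W2": rng.standard_normal((n_hid, n_out)) * 0.1,
    }

def grads(params, X, y):
    """Backpropagation for mean squared error on one data shard."""
    h = np.tanh(X @ params["W1"])            # forward: hidden activations
    out = h @ params["W2"]                   # forward: linear output
    d_out = (out - y) / len(X)               # dL/d_out for mean squared error
    dW2 = h.T @ d_out
    d_h = (d_out @ params["W2"].T) * (1 - h**2)  # tanh' = 1 - tanh^2
    dW1 = X.T @ d_h
    return {"W1": dW1, "W2": dW2}

def parallel_step(params, X, y, n_workers=4, lr=0.1):
    """One step: shard the batch, compute per-worker grads, average them."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    shard_grads = [grads(params, Xs, ys) for Xs, ys in shards]  # per-device work
    for k in params:                         # all-reduce style averaging
        params[k] -= lr * np.mean([g[k] for g in shard_grads], axis=0)

# Toy usage: learn y = sum(x) with an 8-16-1 network.
X = rng.standard_normal((256, 8))
y = X.sum(axis=1, keepdims=True)
params = init_params(8, 16, 1)
for step in range(500):
    parallel_step(params, X, y)
pred = np.tanh(X @ params["W1"]) @ params["W2"]
print("final MSE:", float(np.mean((pred - y) ** 2)))
```

In a real deployment each shard's gradient computation would run on a separate device (e.g., as CUDA or OpenCL kernels, per the paper's keywords) and the averaging would be an all-reduce over the interconnect; the NumPy loop above only stands in for that structure.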
Pages: 273-284
Page count: 12