Parallel Implementation of Feedforward Neural Networks on GPUs

Cited by: 0
Authors
Gurgel, Saskya T. A. [1 ]
Formiga, Andrei de A. [1 ]
Affiliation
[1] Univ Fed Paraiba, Ctr Informat, BR-58059900 Joao Pessoa, Paraiba, Brazil
Source
2013 BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS) | 2013
Keywords
neural networks; parallel; GPUs
DOI
10.1109/BRACIS.2013.32
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Neural networks are often seen as a natural model of parallel computation, especially when contrasted with more traditional sequential models such as the Turing machine. The parallelism of neural networks has become more important in recent years through the confluence of two trends in the evolution of computer and information technologies: first, parallel computing devices are now ubiquitous rather than confined to a niche market; and second, the amount of data available to analyze and learn from in machine learning applications has grown at a rapid pace. Graphics Processing Units (GPUs), composed of many simple execution units, provide great computational power in standard desktop computers. This paper presents a technique for the parallel implementation of feedforward neural networks on GPUs, explained in relation to the difficulties imposed by the GPU execution model. Experimental results indicate that the proposed implementation techniques can easily attain a performance gain of more than one order of magnitude and scale with the processing power of the GPU used.
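The parallel structure the abstract refers to can be illustrated with a minimal NumPy sketch. This is an assumption about how such implementations are typically organized, not the authors' actual code: the forward pass of a feedforward network reduces to dense matrix products in which every output element is independent, which is exactly the shape of computation that maps one GPU thread per neuron (or per output element) when ported to a kernel.

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic activation; applied independently per neuron.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate a batch of inputs through the network layer by layer.

    Every element of the product a @ W is computed independently of the
    others, so on a GPU each can be assigned to its own thread; only the
    layer-to-layer dependency forces sequential steps.
    """
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

rng = np.random.default_rng(0)
# Hypothetical topology for illustration: 4 inputs, 8 hidden units, 3 outputs.
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
biases = [np.zeros(8), np.zeros(3)]
batch = rng.standard_normal((16, 4))  # batch of 16 input vectors
out = forward(batch, weights, biases)
print(out.shape)  # (16, 3)
```

Batching inputs, as above, is what keeps the many simple execution units of a GPU busy: a single input vector rarely exposes enough independent work to saturate the device.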
Pages: 143-149
Page count: 7
Related Papers
50 records in total
  • [1] A parallel algorithm for gradient training of feedforward neural networks
    Hanzalek, Z
    PARALLEL COMPUTING, 1998, 24 (5-6) : 823 - 839
  • [2] Parallel implementation of the feedforward back-propagation algorithm on pyramid networks
    Maelainin, SA
    Bellaachia, A
    PARALLEL AND DISTRIBUTED COMPUTING SYSTEMS - PROCEEDINGS OF THE ISCA 9TH INTERNATIONAL CONFERENCE, VOLS I AND II, 1996, : 444 - 449
  • [3] Parallel implementation of non recurrent neural networks
    Calonge, T
    Alonso, L
    Ralha, R
    Sanchez, AL
    VECTOR AND PARALLEL PROCESSING - VECPAR'96, 1997, 1215 : 313 - 325
  • [4] The capacity of feedforward neural networks
    Baldi, Pierre
    Vershynin, Roman
    NEURAL NETWORKS, 2019, 116 : 288 - 311
  • [5] ON TRAINING FEEDFORWARD NEURAL NETWORKS
    KAK, S
    PRAMANA-JOURNAL OF PHYSICS, 1993, 40 (01): : 35 - 42
  • [6] An efficient implementation of parallel simulated annealing algorithm in GPUs
    Ferreiro, A. M.
    García, J. A.
    López-Salas, J. G.
    Vázquez, C.
    JOURNAL OF GLOBAL OPTIMIZATION, 2013, 57 (03) : 863 - 890
  • [7] Topology of Learning in Feedforward Neural Networks
    Gabella, Maxime
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (08) : 3588 - 3592
  • [8] Interpolation representation of feedforward neural networks
    Li, HX
    Li, LX
    Wang, JY
    MATHEMATICAL AND COMPUTER MODELLING, 2003, 37 (7-8) : 829 - 847
  • [9] On the nonlinear properties of feedforward neural networks
    Peng, TM
    Papalexopoulos, AD
    ENGINEERING INTELLIGENT SYSTEMS FOR ELECTRICAL ENGINEERING AND COMMUNICATIONS, 1996, 4 (02): : 67 - 73