Comparing the performance of multi-layer perceptron training on electrical and optical network-on-chips

Cited: 5
Authors
Dai, Fei [1 ]
Chen, Yawen [1 ]
Huang, Zhiyi [1 ]
Zhang, Haibo [1 ]
Zhang, Hao [1 ]
Xia, Chengpeng [1 ]
Affiliations
[1] Univ Otago, Dept Comp Sci, Dunedin 9054, Otago, New Zealand
Source
JOURNAL OF SUPERCOMPUTING | 2023, Vol. 79, Issue 10
Keywords
Multi-layer perceptron; Optical network-on-chip; Artificial neural networks; Energy consumption; Performance comparison; Parallel computation
DOI
10.1007/s11227-022-04945-y
Chinese Library Classification
TP3 [Computing technology; computer technology]
Subject Classification Code
0812
Abstract
Multi-layer perceptron (MLP) is a class of artificial neural networks widely used in regression, classification, and prediction. To accelerate MLP training, more cores can be used for parallel computing on many-core systems. However, as the number of cores integrated on a chip increases, the communication bottleneck of MLP training on an electrical network-on-chip (ENoC) becomes severe and degrades training performance. Replacing the ENoC with an optical network-on-chip (ONoC) can break this communication bottleneck. To facilitate the development of ONoC for MLP training, it is necessary to model and compare the MLP training performance of ONoC and ENoC in advance. This paper first analyzes and compares the differences between ONoC and ENoC. We then formulate performance and energy models of MLP training on ONoC and ENoC by analyzing the communication time, computation time, static energy, and dynamic energy consumption. Furthermore, we conduct extensive simulations with our simulation infrastructure to compare their MLP training performance and energy consumption. The experimental results show that, compared with ENoC, ONoC reduces MLP training time by 65.16% and 52.51% on average across different numbers of cores and batch sizes, respectively. The results also show that ONoC achieves average energy reductions of 54.86% and 43.13% across different numbers of cores and batch sizes, respectively, compared with ENoC. However, with a small number of cores (e.g., fewer than 50), ENoC consumes less energy than ONoC for MLP training. These experiments confirm that, in terms of both performance and energy consumption, ONoC is generally a good replacement for ENoC for MLP training when a large number of cores is used.
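To make the modeling approach described above concrete, the following minimal Python sketch shows the general shape of such a performance and energy model: per-iteration training time as parallelized computation plus communication, and energy as static power drawn over the runtime plus dynamic energy. All function names and numeric values here (t_comp_total, t_comm, p_static, e_dynamic, and the example parameters) are illustrative assumptions, not the paper's actual formulation.

# Minimal illustrative sketch of a per-iteration performance/energy model
# for MLP training on a many-core NoC. All names and numbers are assumed
# for illustration; real values would come from measurement or simulation.

def training_time(t_comp_total: float, num_cores: int, t_comm: float) -> float:
    # Computation is parallelized across cores; communication time is
    # added on top (typically lower on ONoC than on ENoC).
    return t_comp_total / num_cores + t_comm

def training_energy(t_total: float, num_cores: int,
                    p_static: float, e_dynamic: float) -> float:
    # Static energy: per-core static power drawn for the whole runtime.
    # Dynamic energy: energy spent on computation and data movement.
    return p_static * num_cores * t_total + e_dynamic

if __name__ == "__main__":
    for cores in (16, 64, 256):
        # Assumed trend: ENoC communication time grows with core count,
        # while ONoC communication time stays nearly flat.
        t_enoc = training_time(1.0, cores, t_comm=0.5 + 0.002 * cores)
        t_onoc = training_time(1.0, cores, t_comm=0.1)
        # Assumed trend: ONoC pays higher static power (e.g., laser
        # sources), which can favor ENoC at small core counts.
        e_enoc = training_energy(t_enoc, cores, p_static=0.10, e_dynamic=0.5)
        e_onoc = training_energy(t_onoc, cores, p_static=0.25, e_dynamic=0.3)
        print(f"{cores:4d} cores | time ENoC {t_enoc:.3f}s vs ONoC {t_onoc:.3f}s"
              f" | energy ENoC {e_enoc:.3f}J vs ONoC {e_onoc:.3f}J")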
Pages: 10725-10746
Page count: 22
Related Papers
50 records in total
  • [1] Performance Comparison of Multi-layer Perceptron Training on Electrical and Optical Network-on-Chips
    Dai, Fei
    Chen, Yawen
    Huang, Zhiyi
    Zhang, Haibo
    PARALLEL AND DISTRIBUTED COMPUTING, APPLICATIONS AND TECHNOLOGIES, PDCAT 2021, 2022, 13148 : 129 - 141
  • [2] Using Multi-Layer Perceptron and Complex Network Metrics to Estimate the Performance of Optical Networks
    de Araujo, Danilo R. B.
    Martins-Filho, Joaquim F.
    Bastos-Filho, Carmelo J. A.
    2013 SBMO/IEEE MTT-S INTERNATIONAL MICROWAVE & OPTOELECTRONICS CONFERENCE (IMOC), 2013,
  • [3] Performance and Features of Multi-Layer Perceptron with Impulse Glial Network
    Ikuta, Chihiro
    Uwate, Yoko
    Nishio, Yoshifumi
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 2536 - 2541
  • [4] Battle royale optimizer for training multi-layer perceptron
    Agahian, Saeid
    Akan, Taymaz
    EVOLVING SYSTEMS, 2022, 13 (04) : 563 - 575
  • [5] Training multi-layer perceptron with artificial algae algorithm
    Turkoglu, Bahaeddin
    Kaya, Ersin
    ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, 2020, 23 (06): 1342 - 1350
  • [6] Many-objective training of a multi-layer perceptron
    Koeppen, Mario
    Yoshida, Kaori
    NEURAL NETWORK WORLD, 2007, 17 (06) : 627 - 637
  • [7] A Study on Single and Multi-layer Perceptron Neural Network
    Singh, Jaswinder
    Banerjee, Rajdeep
    PROCEEDINGS OF THE 2019 3RD INTERNATIONAL CONFERENCE ON COMPUTING METHODOLOGIES AND COMMUNICATION (ICCMC 2019), 2019, : 35 - 40
  • [8] Training Multi-Layer Perceptron Using Harris Hawks Optimization
    Eker, Erdal
    Kayri, Murat
    Ekinci, Serdar
    Izci, Davut
    2ND INTERNATIONAL CONGRESS ON HUMAN-COMPUTER INTERACTION, OPTIMIZATION AND ROBOTIC APPLICATIONS (HORA 2020), 2020, : 279 - 283