A parallel variable memory BFGS training algorithm

Cited: 0
Authors
Mc Loone, S [1 ]
Affiliation
[1] Queens Univ Belfast, Sch Elect & Elect Engn, Intelligent Syst & Control Res Grp, Belfast BT9 5AH, Antrim, North Ireland
Source
ALGORITHMS AND ARCHITECTURES FOR REAL-TIME CONTROL 2000 | 2000年
Keywords
neural networks; parallel algorithms; training; second-order;
DOI
None available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper considers the parallel implementation of a novel variable memory quasi-Newton neural network training algorithm recently developed by the author. Unlike existing training methods, this new technique is able to optimize performance in relation to the available memory. Numerically it has properties equivalent to Full Memory BFGS optimization (FM) when there are no restrictions on memory, and to FM with periodic reset when memory is limited. Parallel implementations of both the Full and Variable Memory BFGS algorithms are outlined, and performance results are presented for a PVM target architecture. Copyright (C) 2000 IFAC.
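The abstract contrasts full-memory BFGS with a memory-bounded variant. The paper's actual Variable Memory BFGS update is not reproduced here; as a hedged illustration of the underlying memory/accuracy trade-off, the sketch below implements the standard limited-memory BFGS (L-BFGS) two-loop recursion with a bounded history of `(s, y)` pairs. The function names (`lbfgs`, `two_loop`), the memory size `m`, and the toy quadratic objective are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: limited-memory BFGS with a bounded (s, y) history.
# NOT the paper's Variable Memory BFGS algorithm; it only illustrates how
# a quasi-Newton method trades curvature information against memory.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axpy(alpha, x, y):
    # Return alpha * x + y for plain-list vectors.
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def two_loop(grad, history):
    # Standard L-BFGS two-loop recursion: apply the inverse-Hessian
    # approximation implied by the stored (s, y) pairs to the gradient.
    q = list(grad)
    alphas = []
    for s, y in reversed(history):           # newest pair first
        a = dot(s, q) / dot(y, s)
        alphas.append(a)
        q = axpy(-a, y, q)
    if history:                              # scale by gamma * I
        s, y = history[-1]
        gamma = dot(s, y) / dot(y, y)
        q = [gamma * qi for qi in q]
    for (s, y), a in zip(history, reversed(alphas)):  # oldest pair first
        b = dot(y, q) / dot(y, s)
        q = axpy(a - b, s, q)
    return q

def lbfgs(f, f_grad, x, m=5, iters=200):
    history = []                             # at most m (s, y) pairs
    g = f_grad(x)
    for _ in range(iters):
        d = [-qi for qi in two_loop(g, history)]
        t, gd = 1.0, dot(g, d)               # Armijo backtracking search
        while f(axpy(t, d, x)) > f(x) + 1e-4 * t * gd:
            t *= 0.5
        x_new = axpy(t, d, x)
        g_new = f_grad(x_new)
        s = [b - a for a, b in zip(x, x_new)]
        y = [b - a for a, b in zip(g, g_new)]
        if dot(y, s) > 1e-12:                # keep pair only if curvature > 0
            history.append((s, y))
            if len(history) > m:             # bounded memory: drop oldest pair
                history.pop(0)
        x, g = x_new, g_new
        if dot(g, g) < 1e-16:
            break
    return x

# Toy problem: f(x) = sum_i (i+1) * (x_i - 1)^2, minimum at x = (1, ..., 1).
f = lambda x: sum((i + 1) * (xi - 1.0) ** 2 for i, xi in enumerate(x))
grad = lambda x: [2.0 * (i + 1) * (xi - 1.0) for i, xi in enumerate(x)]
x_star = lbfgs(f, grad, [0.0] * 10, m=3)
```

Growing `m` toward the problem dimension recovers full-memory BFGS behaviour, while a small `m` bounds storage at the cost of a cruder curvature model; dropping the whole history at once would correspond to the "FM with periodic reset" behaviour mentioned in the abstract.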
Pages: 129-134
Number of pages: 4
Related Papers (50 total)
  • [31] A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images
    Moschini, Ugo
    Meijster, Arnold
    Wilkinson, Michael H. F.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (03) : 513 - 526
  • [32] A Parallel Algorithm for Subgraph Isomorphism
    Carletti, Vincenzo
    Foggia, Pasquale
    Ritrovato, Pierluigi
    Vento, Mario
    Vigilante, Vincenzo
    GRAPH-BASED REPRESENTATIONS IN PATTERN RECOGNITION, GBRPR 2019, 2019, 11510 : 141 - 151
  • [33] Is Working Memory Training Effective?
    Shipstead, Zach
    Redick, Thomas S.
    Engle, Randall W.
    PSYCHOLOGICAL BULLETIN, 2012, 138 (04) : 628 - 654
  • [34] Variable projections neural network training
    Pereyra, V.
    Scherer, G.
    Wong, F.
    MATHEMATICS AND COMPUTERS IN SIMULATION, 2006, 73 (1-4) : 231 - 243
  • [35] Speculative Backpropagation for CNN Parallel Training
    Park, Sangwoo
    Suh, Taeweon
    IEEE ACCESS, 2020, 8 : 215365 - 215374
  • [36] Parallel and Distributed Structured SVM Training
    Jiang, Jiantong
    Wen, Zeyi
    Wang, Zeke
    He, Bingsheng
    Chen, Jian
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (05) : 1084 - 1096
  • [37] Parallel genetic simulated annealing: A massively parallel SIMD algorithm
    Chen, H
    Flann, NS
    Watson, DW
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 1998, 9 (02) : 126 - 136
  • [38] Computer Go Research Based on Variable Scale Training and PUB-PMCTS
    Huang, Jinhan
    Huang, Zhixing
    Cen, Shengcai
    Shi, Wurui
    Huang, Xiaoxiao
    Chen, Xueyun
    IEEE ACCESS, 2024, 12 : 67246 - 67255
  • [39] Local Critic Training for Model-Parallel Learning of Deep Neural Networks
    Lee, Hojung
    Hsieh, Cho-Jui
    Lee, Jong-Seok
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 4424 - 4436
  • [40] Optimization of a Stirling Engine by Variable-Step Simplified Conjugate-Gradient Method and Neural Network Training Algorithm
    Cheng, Chin-Hsiang
    Lin, Yu-Ting
    ENERGIES, 2020, 13 (19)