Parallel learning by multitasking neural networks

Cited by: 3
Authors
Agliari, Elena [1 ]
Alessandrelli, Andrea [2 ,5 ]
Barra, Adriano [3 ,5 ]
Ricci-Tersenghi, Federico [4 ,5 ,6 ]
Affiliations
[1] Sapienza Univ Roma, Dipartimento Matemat, Piazzale Aldo Moro 5, I-00185 Rome, Italy
[2] Univ Pisa, Dipartimento Informat, Lungarno Antonio Pacinotti 43, I-56126 Pisa, Italy
[3] Univ Salento, Dipartimento Matemat & Fis, Via Arnesano, I-73100 Lecce, Italy
[4] Sapienza Univ Roma, Dipartimento Fis, Piazzale Aldo Moro 2, I-00185 Rome, Italy
[5] Ist Nazl Fis Nucl, Sez Roma1 & Lecce, Lecce, Italy
[6] CNR, Nanotec, Rome Unit, I-00185 Rome, Italy
Source
JOURNAL OF STATISTICAL MECHANICS: THEORY AND EXPERIMENT | 2023, Vol. 2023, No. 11
Keywords
machine learning; computational neuroscience; optimization over networks; systems neuroscience
DOI
10.1088/1742-5468/ad0a86
CLC Classification
O3 [Mechanics]
Discipline Code
08; 0801
Abstract
Parallel learning, namely the simultaneous learning of multiple patterns, constitutes a modern challenge for neural networks. While this cannot be accomplished by standard Hebbian associative neural networks, in this paper we show how the multitasking Hebbian network (a variation on the theme of the Hopfield model, working on sparse datasets) is naturally able to perform this complex task. We focus on systems processing in parallel a finite (up to logarithmic growth in the size of the network) number of patterns, mirroring the low-storage setting of standard associative neural networks. When the patterns to be reconstructed are mildly diluted, the network handles them hierarchically, distributing the amplitudes of their signals as power laws with respect to the pattern information content (hierarchical regime), while, for strong dilution, the signals pertaining to all the patterns are simultaneously raised with the same strength (parallel regime). Further, we prove that the training protocol (either supervised or unsupervised) neither alters the multitasking performance nor changes the thresholds for learning. We also highlight (analytically and by Monte Carlo simulations) that a standard cost function (i.e. the Hamiltonian) used in statistical mechanics exhibits the same minima as a standard loss function (i.e. the sum of squared errors) used in machine learning.
Pages: 38
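
To make the abstract concrete, the following is a minimal NumPy sketch of the kind of model it describes: a Hopfield-type network with Hebbian couplings built from sparsely diluted patterns, relaxed by zero-temperature Monte Carlo dynamics. All sizes and parameter names (N, K, d), the mixture initialization, and the number of sweeps are illustrative assumptions, not values from the paper; the quadratic Hamiltonian computed here is a standard cost of the kind the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not taken from the paper): N neurons, K patterns,
# dilution probability d (each pattern entry is 0 with probability d).
N, K, d = 1000, 4, 0.7

# Diluted +/-1 patterns with a fraction d of blank (zero) entries.
xi = rng.choice([-1, 1], size=(K, N)) * (rng.random((K, N)) > d)

# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, no self-coupling.
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

def hamiltonian(sigma):
    """Standard quadratic cost H(sigma) = -(1/2) sigma^T J sigma."""
    return -0.5 * sigma @ J @ sigma

# Start from a noisy mixture of all patterns and relax at zero temperature.
sigma = np.sign(xi.sum(axis=0) + 0.1 * rng.standard_normal(N)).astype(int)
sigma[sigma == 0] = 1

print("H before relaxation:", hamiltonian(sigma))
for _ in range(50):                          # asynchronous Monte Carlo sweeps
    for i in rng.permutation(N):
        h = J[i] @ sigma                     # local field on neuron i
        if h != 0.0:
            sigma[i] = 1 if h > 0 else -1
print("H after relaxation: ", hamiltonian(sigma))

# Mattis magnetizations m_mu = (1/N) sum_i xi_i^mu sigma_i: under strong
# dilution several of them can be macroscopic at once, in the spirit of
# the parallel regime described in the abstract.
print("Mattis magnetizations:", np.round(xi @ sigma / N, 3))
```

Because the couplings are symmetric with zero diagonal, the asynchronous zero-temperature updates can only decrease the Hamiltonian, so the two printed values of H bracket the relaxation, and the final Mattis magnetizations report how strongly each pattern's signal is raised in the reached minimum.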