Homogeneous Learning: Self-Attention Decentralized Deep Learning

Cited by: 6
Authors
Sun, Yuwei [1 ,2 ]
Ochiai, Hideya [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo 1138654, Japan
[2] RIKEN AIP, Tokyo 1030027, Japan
Keywords
Peer-to-peer computing; training; data models; task analysis; computer architecture; computational modeling; servers; collective intelligence; distributed computing; knowledge transfer; multi-layer neural network; supervised learning; blockchain
DOI
10.1109/ACCESS.2022.3142899
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) has been facilitating privacy-preserving deep learning in many walks of life, such as medical image classification and network intrusion detection. However, it requires a central parameter server for model aggregation, which introduces communication delays and vulnerability to adversarial attacks. A fully decentralized architecture such as Swarm Learning allows peer-to-peer communication among distributed nodes without a central server. One of the most challenging issues in decentralized deep learning is that the data owned by each node are usually non-independent and identically distributed (non-IID), which slows the convergence of model training. To this end, we propose a decentralized learning model called Homogeneous Learning (HL) that tackles non-IID data with a self-attention mechanism. In HL, training is performed on the node selected in each round, and at the end of the round the trained model is sent to the next selected node. Notably, for this selection, the self-attention mechanism leverages reinforcement learning to observe a node's inner state and the state of its surrounding environment, and to determine which node should be selected to optimize the training. We evaluate our method under various scenarios for two different image classification tasks. The results suggest that HL achieves better performance than standalone learning and greatly reduces both the total number of training rounds, by 50.8%, and the communication cost, by 74.6%, for decentralized learning with non-IID data.
Pages: 7695-7703 (9 pages)
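
To make the training flow described in the abstract concrete, below is a minimal sketch of the HL round loop: train on the currently selected node, then score peers with a dot-product, self-attention-style policy over node states and hand the model to the sampled peer. This is an illustration under assumed simplifications, not the paper's implementation: local training is reduced to a weight perturbation, the learned reinforcement-learning policy is replaced by softmax sampling over raw attention scores, and the names Node and select_next_node are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

class Node:
    """One peer in the decentralized topology."""
    def __init__(self, node_id, state_dim=8):
        self.node_id = node_id
        # Inner state summarizing, e.g., the local data distribution.
        self.state = rng.normal(size=state_dim)

    def train_local(self, model):
        # Stand-in for one round of local training on non-IID data:
        # perturb the received weights instead of running real SGD.
        return model + rng.normal(scale=0.01, size=model.shape)

def select_next_node(current, peers):
    # Attention-style scoring: the current node's state is the query,
    # peer states are the keys; sample from the softmax to mimic a
    # stochastic selection policy.
    keys = np.stack([p.state for p in peers])
    scores = keys @ current.state / np.sqrt(current.state.size)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return peers[rng.choice(len(peers), p=probs)]

# Peer-to-peer rounds: no central server; the trained model is handed
# from the current node to the next selected node each round.
nodes = [Node(i) for i in range(5)]
model = np.zeros(4)  # toy model weights
current = nodes[0]
for round_id in range(3):
    model = current.train_local(model)
    peers = [n for n in nodes if n is not current]
    current = select_next_node(current, peers)
    print(f"round {round_id}: model handed to node {current.node_id}")

In the full method, the selection policy would be trained with a reward reflecting training progress (e.g., accuracy gained per communication round), so the scores above would come from a learned network rather than raw state dot-products.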