Toward Byzantine-Robust Distributed Learning for Sentiment Classification on Social Media Platform

Cited by: 1
Authors
Zhang, Heyi [1 ]
Wu, Jun [2 ]
Pan, Qianqian [3 ]
Bashir, Ali Kashif [4 ,5 ,6 ]
Omar, Marwan [7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Waseda Univ, Grad Sch Informat Prod & Syst, Tokyo 1698050, Japan
[3] Univ Tokyo, Sch Engn, Tokyo 1130033, Japan
[4] Manchester Metropolitan Univ, Dept Comp & Math, Manchester M15 6BH, England
[5] Woxsen Univ, Woxsen Sch Business, Hyderabad 502345, India
[6] Lebanese Amer Univ, Dept Comp Sci & Math, Beirut 11022801, Lebanon
[7] Illinois Inst Technol, Dept Informat Technol & Management, Chicago, IL 60616 USA
Source
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024
Funding
National Natural Science Foundation of China
Keywords
Blockchains; Training; Blockchain; Byzantine robust; coded computing; distributed learning; sentiment classification; social media platform; BLOCKCHAIN;
DOI
10.1109/TCSS.2024.3361465
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Distributed learning empowers social media platforms to handle massive data for image sentiment classification and to deliver intelligent services. However, with the rise of privacy threats and malicious activities, three major challenges emerge: securing privacy, alleviating straggler problems, and mitigating Byzantine attacks. Although recent studies explore coded computing for privacy and straggler problems, as well as Byzantine-robust aggregation for poisoning attacks, they are not designed to address both classes of threats simultaneously. To tackle these obstacles and achieve an efficient Byzantine-robust and straggler-resilient distributed learning framework, in this article we present Byzantine-robust and cost-effective distributed machine learning (BCML), a codesign of coded computing and Byzantine-robust aggregation. To balance Byzantine resilience and efficiency, we design a cosine-similarity-based Byzantine-robust aggregation method tailored for coded computing that filters out malicious gradients efficiently in real time. Furthermore, trust scores derived from the similarities are published to a blockchain for the reliability and traceability of social users. Experimental results show that BCML tolerates Byzantine attacks without compromising convergence accuracy while consuming less time than state-of-the-art approaches: it is 6x faster than the uncoded approach and 2x faster than the Lagrange coded computing (LCC) approach. In addition, the cosine-similarity-based aggregation method effectively detects and filters out malicious social users in real time.
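The aggregation idea sketched in the abstract, scoring each worker's decoded gradient by its cosine similarity to a reference direction, excluding low-score submissions as Byzantine, and reusing the similarities as trust scores, can be illustrated with a minimal NumPy sketch. The reference direction (coordinate-wise median), the acceptance threshold, and the function names below are illustrative assumptions rather than BCML's published algorithm; the coded-computing decoding and the blockchain publication of trust scores are omitted.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two flattened gradient vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def robust_aggregate(gradients, threshold=0.0):
    """Filter and average worker gradients by cosine similarity.

    gradients : list of 1-D numpy arrays, one per (decoded) worker result.
    threshold : workers whose similarity to the reference falls below this
                value are treated as Byzantine and excluded (assumed rule).
    Returns the aggregated gradient and per-worker trust scores.
    """
    stacked = np.stack(gradients)                      # shape: (n_workers, dim)
    # Reference direction: coordinate-wise median, a common robust choice
    # (an assumption here; the paper's exact reference may differ).
    reference = np.median(stacked, axis=0)

    trust_scores = np.array([cosine_similarity(g, reference) for g in gradients])
    accepted = trust_scores > threshold

    if not accepted.any():                             # degenerate case: keep the robust reference
        return reference, trust_scores
    aggregated = stacked[accepted].mean(axis=0)        # average only the accepted gradients
    return aggregated, trust_scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=8) for _ in range(6)]
    byzantine = [-5.0 * np.ones(8) for _ in range(2)]  # sign-flipping attackers
    agg, scores = robust_aggregate(honest + byzantine)
    print(np.round(scores, 2))                         # attackers receive strongly negative trust scores
```

In this sketch the returned trust scores are simply handed back to the caller; in the framework described by the abstract, such scores would instead be recorded on a blockchain so that malicious social users remain traceable across rounds.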
Pages: 1-11
Number of pages: 11