Toward Byzantine-Robust Distributed Learning for Sentiment Classification on Social Media Platform

Cited: 1
Authors
Zhang, Heyi [1 ]
Wu, Jun [2 ]
Pan, Qianqian [3 ]
Bashir, Ali Kashif [4 ,5 ,6 ]
Omar, Marwan [7 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Waseda Univ, Grad Sch Informat Prod & Syst, Tokyo 1698050, Japan
[3] Univ Tokyo, Sch Engn, Tokyo 1130033, Japan
[4] Manchester Metropolitan Univ, Dept Comp & Math, Manchester M15 6BH, England
[5] Woxsen Univ, Woxsen Sch Business, Hyderabad 502345, India
[6] Lebanese Amer Univ, Dept Comp Sci & Math, Beirut 11022801, Lebanon
[7] Illinois Inst Technol, Dept Informat Technol & Management, Chicago, IL 60616 USA
Source
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS | 2024
Funding
National Natural Science Foundation of China
Keywords
Blockchain; Byzantine robust; coded computing; distributed learning; sentiment classification; social media platform; training
DOI
10.1109/TCSS.2024.3361465
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline code
0812
Abstract
Distributed learning empowers social media platforms to handle massive data for image sentiment classification and deliver intelligent services. However, with growing privacy threats and malicious activity, three major challenges emerge: preserving privacy, alleviating the straggler problem, and mitigating Byzantine attacks. Although recent studies explore coded computing for the privacy and straggler problems, as well as Byzantine-robust aggregation against poisoning attacks, they are not designed to counter both threats simultaneously. To tackle these obstacles and achieve an efficient Byzantine-robust and straggler-resilient distributed learning framework, this article presents Byzantine-robust and cost-effective distributed machine learning (BCML), a codesign of coded computing and Byzantine-robust aggregation. To balance Byzantine resilience and efficiency, we design a cosine-similarity-based Byzantine-robust aggregation method tailored for coded computing that filters out malicious gradients efficiently in real time. Furthermore, trust scores derived from the similarity measure are published to a blockchain for the reliability and traceability of social users. Experimental results show that BCML tolerates Byzantine attacks without compromising convergence accuracy, and at lower time cost than state-of-the-art approaches: it is 6x faster than the uncoded approach and 2x faster than Lagrange coded computing (LCC). In addition, the cosine-similarity-based aggregation method effectively detects and filters out malicious social users in real time.
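The abstract describes cosine-similarity-based filtering of malicious gradients. The following is a minimal illustrative sketch of that general idea (not the paper's exact BCML algorithm, whose details are not given here): each worker's gradient is scored by its cosine similarity to a robust reference direction (the coordinate-wise median is assumed here), low-scoring gradients are discarded, and the scores could serve as the trust scores the paper publishes to a blockchain. The threshold value and median reference are assumptions for illustration.

```python
# Sketch of cosine-similarity-based Byzantine-robust aggregation.
# Assumptions (not from the paper): coordinate-wise median as the
# reference direction, and a similarity threshold of 0.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (0.0 if either is zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def robust_aggregate(gradients, threshold=0.0):
    """Score each gradient against the median gradient; average the
    gradients whose score exceeds `threshold`. Returns (aggregate, scores)."""
    grads = np.asarray(gradients, dtype=float)
    reference = np.median(grads, axis=0)  # robust reference direction
    scores = [cosine_similarity(g, reference) for g in grads]
    kept = grads[[s > threshold for s in scores]]
    return kept.mean(axis=0), scores

# Three honest workers push gradients near [1, 1]; one Byzantine
# worker submits a sign-flipped, scaled-up gradient.
honest = [[1.0, 0.9], [0.9, 1.1], [1.1, 1.0]]
byzantine = [[-5.0, -5.0]]
agg, trust = robust_aggregate(honest + byzantine)
```

The Byzantine gradient points opposite the median direction, so its cosine score is negative and it is excluded; the aggregate is the mean of the three honest gradients, and the per-worker scores are exactly the kind of similarity-derived trust values the abstract mentions.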
Pages: 1-11
Page count: 11
Related papers
48 items in total
  • [21] AFL: Attention-based Byzantine-robust Federated Learning with Vector Filter
    Chen, Hao
    Lv, Xixiang
    Zheng, Wei
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 595 - 600
  • [22] Better Safe Than Sorry: Constructing Byzantine-Robust Federated Learning with Synthesized Trust
    Geng, Gangchao
    Cai, Tianyang
    Yang, Zheng
    ELECTRONICS, 2023, 12 (13)
  • [23] Byzantine-robust federated learning performance evaluation via distance-statistical aggregations
    Colosimo, Francesco
    Rocca, Giovanni
    ASSURANCE AND SECURITY FOR AI-ENABLED SYSTEMS, 2024, 13054
  • [24] Byzantine-robust federated learning via credibility assessment on non-IID data
    Zhai, Kun
    Ren, Qiang
    Wang, Junli
    Yan, Chungang
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2022, 19 (02) : 1659 - 1676
  • [25] Communication-Efficient and Byzantine-Robust Federated Learning for Mobile Edge Computing Networks
    Zhang, Zhuangzhuang
    Wu, Libing
    He, Debiao
    Li, Jianxin
    Cao, Shuqin
    Wu, Xianfeng
    IEEE NETWORK, 2023, 37 (04): : 112 - 119
  • [26] FedNAT: Byzantine-robust Federated Learning through Activation-based Attention Transfer
    Wang, Mengxin
    Fang, Liming
    Chen, Kuiqi
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 1005 - 1012
  • [27] C-RSA: Byzantine-robust and communication-efficient distributed learning in the non-convex and non-IID regime
    He, Xuechao
    Zhu, Heng
    Ling, Qing
    SIGNAL PROCESSING, 2023, 213
  • [28] FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority
    Dong, Ye
    Chen, Xiaojun
    Li, Kaiyun
    Wang, Dakui
    Zeng, Shuai
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 497 - 518
  • [29] Privacy-preserving and Byzantine-robust federated broad learning with chain-loop structure
    Li, Nan
    Ren, Chang-E
    Cheng, Siyao
    NEUROCOMPUTING, 2025, 636
  • [30] Semantic labeling of social big media using distributed online robust classification
    Sadigh, Alireza Naeimi
    Bahraini, Tahereh
    Yazdi, Hadi Sadoghi
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 132