FedMultimodal: A Benchmark For Multimodal Federated Learning

Cited by: 20
Authors
Feng, Tiantian [1]
Bose, Digbalay [1]
Zhang, Tuo [1]
Hebbar, Rajat [1]
Ramakrishna, Anil [2]
Gupta, Rahul [3]
Zhang, Mi [4]
Avestimehr, Salman [1]
Narayanan, Shrikanth [1]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Amazon Alexa AI, Los Angeles, CA USA
[3] Amazon Alexa AI, Boston, MA USA
[4] Ohio State Univ, Columbus, OH USA
Source
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023 | 2023
Keywords
Federated Learning; Multimodal Learning; Multimodal Benchmark;
DOI
10.1145/3580305.3599825
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Over the past few years, Federated Learning (FL) has emerged as a machine learning technique for tackling data privacy challenges through collaborative training. In a typical FL algorithm, clients submit locally trained models, and the server aggregates their parameters over communication rounds until convergence. Despite the significant effort devoted to FL in fields such as computer vision, audio, and natural language processing, FL applications that utilize multimodal data streams remain largely unexplored. Multimodal learning has broad real-world applications in emotion recognition, healthcare, multimedia, and social media, where user privacy remains a critical concern; yet there are no existing FL benchmarks targeting multimodal applications or related tasks. To facilitate research on multimodal FL, we introduce FedMultimodal, the first FL benchmark for multimodal learning, covering five representative multimodal applications from ten commonly used datasets with a total of eight unique modalities. FedMultimodal offers a systematic FL pipeline that enables end-to-end modeling, from data partitioning and feature extraction to FL benchmark algorithms and model evaluation. Unlike existing FL benchmarks, FedMultimodal provides a standardized approach to assess the robustness of FL against three common data corruptions in real-life multimodal applications: missing modalities, missing labels, and erroneous labels. We hope that FedMultimodal can accelerate numerous future research directions, including designing multimodal FL algorithms for extreme data heterogeneity, robust multimodal FL, and efficient multimodal FL. The datasets and benchmark results can be accessed at: https://github.com/usc-sail/fed-multimodal.
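
The client-to-server parameter aggregation described in the abstract corresponds to the standard federated averaging (FedAvg) pattern. The sketch below illustrates one such server-side aggregation step; it is a generic illustration under that assumption, and the function and variable names are hypothetical rather than part of the FedMultimodal API.

# Minimal sketch of FedAvg-style server aggregation (illustrative only;
# the names below are hypothetical and not taken from FedMultimodal).
from typing import Dict, List
import numpy as np

def fedavg_aggregate(
    client_weights: List[Dict[str, np.ndarray]],
    client_sizes: List[int],
) -> Dict[str, np.ndarray]:
    """Average client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return {
        name: sum(
            (size / total) * weights[name]
            for weights, size in zip(client_weights, client_sizes)
        )
        for name in client_weights[0]
    }

# One communication round: each client trains on its private data (not shown),
# uploads its parameters, and the server replaces the global model with the
# weighted average before broadcasting it back to the clients.
# global_weights = fedavg_aggregate(round_client_weights, round_client_sizes)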
Pages: 4035-4045
Page count: 11