FedMultimodal: A Benchmark For Multimodal Federated Learning

Cited by: 9
Authors
Feng, Tiantian [1 ]
Bose, Digbalay [1 ]
Zhang, Tuo [1 ]
Hebbar, Rajat [1 ]
Ramakrishna, Anil [2 ]
Gupta, Rahul [3 ]
Zhang, Mi [4 ]
Avestimehr, Salman [1 ]
Narayanan, Shrikanth [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Amazon Alexa AI, Los Angeles, CA USA
[3] Amazon Alexa AI, Boston, MA USA
[4] Ohio State Univ, Columbus, OH USA
Source
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023 | 2023
Keywords
Federated Learning; Multimodal Learning; Multimodal Benchmark;
DOI
10.1145/3580305.3599825
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
Over the past few years, Federated Learning (FL) has emerged as a machine learning technique that tackles data privacy challenges through collaborative training. In a typical Federated Learning algorithm, each client submits a locally trained model, and the server aggregates these model parameters, repeating until convergence. Despite the significant efforts devoted to FL in fields like computer vision, audio, and natural language processing, FL applications that utilize multimodal data streams remain largely unexplored. Multimodal learning has broad real-world applications in emotion recognition, healthcare, multimedia, and social media, where user privacy persists as a critical concern. Yet there are no existing FL benchmarks targeting multimodal applications or related tasks. To facilitate research in multimodal FL, we introduce FedMultimodal, the first FL benchmark for multimodal learning, covering five representative multimodal applications from ten commonly used datasets with a total of eight unique modalities. FedMultimodal offers a systematic FL pipeline, enabling an end-to-end modeling framework that ranges from data partitioning and feature extraction to FL benchmark algorithms and model evaluation. Unlike existing FL benchmarks, FedMultimodal provides a standardized approach to assessing the robustness of FL against three common data corruptions in real-life multimodal applications: missing modalities, missing labels, and erroneous labels. We hope that FedMultimodal can accelerate numerous future research directions, including designing multimodal FL algorithms for extreme data heterogeneity, robust multimodal FL, and efficient multimodal FL. The datasets and benchmark results can be accessed at: https://github.com/usc-sail/fed-multimodal.
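The abstract describes the standard FL loop: each client submits locally trained model parameters, and the server aggregates them. The sketch below illustrates one round of FedAvg-style server aggregation (a weighted average by local dataset size); the function and variable names are illustrative assumptions, not taken from the FedMultimodal codebase.

```python
# Minimal sketch of FedAvg-style server aggregation, as described in
# the abstract: clients send locally trained parameters, and the server
# combines them, weighting each client by its local sample count.
# All names here are illustrative, not from the FedMultimodal codebase.

def fedavg_aggregate(client_params, client_sizes):
    """Return the weighted average of client parameter vectors.

    client_params: list of parameter vectors (one list of floats per client)
    client_sizes:  list of local dataset sizes (one int per client)
    """
    total = sum(client_sizes)
    num_params = len(client_params[0])
    global_params = [0.0] * num_params
    for params, size in zip(client_params, client_sizes):
        weight = size / total  # clients with more data contribute more
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Two clients with unequal data: the larger client dominates the average.
aggregated = fedavg_aggregate(
    client_params=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[1, 3],
)
print(aggregated)  # [2.5, 3.5]
```

In a full training loop, the server would broadcast `global_params` back to the clients and repeat this aggregation each round until convergence.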
Pages: 4035 / 4045
Page count: 11