Federated Learning for Personalized Image Aesthetics Assessment

Cited by: 1
Authors
Xiong, Zhiwei [1 ,2 ]
Yu, Han [1 ]
Shen, Zhiqi [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[2] Nanyang Technol Univ, Alibaba NTU Singapore Joint Res Inst, Singapore, Singapore
Source
2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME | 2023
Funding
National Research Foundation, Singapore;
Keywords
Image aesthetics assessment; federated learning; personalization;
DOI
10.1109/ICME55011.2023.00065
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image aesthetics assessment (IAA) evaluates the generic aesthetic quality of images. Due to the subjectivity of IAA, personalized IAA (PIAA) is essential for offering dedicated image retrieval, editing, and recommendation services to individual users. However, existing PIAA approaches are trained under the centralized machine learning paradigm, which exposes sensitive image and rating data. To enhance PIAA in a privacy-preserving manner, we propose the first-of-its-kind Federated Learning-empowered Personalized Image Aesthetics Assessment (FedPIAA) approach, with a simple yet effective model structure to capture image aesthetic patterns and personalized user aesthetic preferences. Extensive experimental comparison against eight baselines using the real-world dataset FLICKR-AES demonstrates that FedPIAA outperforms FedAvg by 1.56% under the small support set and by 4.86% under the large support set in terms of the Spearman rank-order correlation coefficient between predicted and ground-truth personalized aesthetics scores, while achieving performance comparable to the best non-FL centralized PIAA approaches.
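The evaluation metric named in the abstract, the Spearman rank-order correlation coefficient (SRCC), is the Pearson correlation computed on the rank vectors of the predicted and ground-truth scores. A minimal pure-Python sketch is shown below; the score values are illustrative only, not taken from the paper.

```python
# Spearman rank-order correlation (SRCC): Pearson correlation of the ranks.
# Illustrative sketch only; the scores below are made-up example data.

def ranks(xs):
    """1-based ranks of xs, with ties assigned their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        # extend j over the run of tied values starting at position i
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """SRCC = Pearson correlation between the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

pred  = [3.2, 4.1, 2.5, 4.8, 3.9]  # hypothetical predicted scores
truth = [3.0, 4.5, 2.2, 4.9, 3.5]  # hypothetical ground-truth scores
print(round(spearman(pred, truth), 3))  # → 1.0 (identical orderings)
```

Because SRCC depends only on orderings, a model that consistently ranks a user's preferred images higher scores well even if its absolute score scale is off, which is why rank correlation is the standard PIAA metric.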
Pages: 336 - 341
Number of pages: 6