Meta-DPSTL: meta learning-based differentially private self-taught learning

Cited by: 0
Authors
Singh, Upendra Pratap [1 ,2 ]
Sinha, Indrajeet Kumar [1 ,3 ]
Singh, Krishna Pratap [1 ,3 ]
Verma, Shekhar [1 ,3 ]
Affiliations
[1] Indian Inst Informat Technol Allahabad, Dept Informat Technol, Prayagraj, Uttar Pradesh, India
[2] LNM Inst Informat Technol, Dept Comp Sci & Engn, Jaipur, India
[3] Indian Inst Informat Technol Allahabad, Dept Informat Technol, Machine Learning & Optimizat Lab, Prayagraj, India
Keywords
Self-taught learning; Meta-learning; Relative reconstruction distance; Differential privacy; Inversion attack;
DOI
10.1007/s13042-024-02134-2
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-taught learning models have been successfully applied to improve a target model's performance in a range of low-resource environments. In this setting, features are learned from unlabeled instances in the source domain; the learned feature representations are then transferred to the target domain for a supervised classification task. Two important challenges in this setup are learning efficient feature representations in the source domain and securing instance privacy against attacks carried out during knowledge transfer from the source to the target domain. We propose Meta-DPSTL, a novel meta-learning-based differentially private self-taught learning model, to overcome these challenges. The proposed approach implements self-taught learning in a meta-learning framework; the meta-learner and base-learner are trained episodically, which is equivalent to estimating the source-domain and target-domain parameters, respectively. Further, to protect the sensitive source data from a potential attacker, differential privacy is applied to the meta-parameters learned in an episode before they are passed to the target domain to train the base-learner. To measure the immunity of the proposed model to an inversion attack, we propose a novel Relative Reconstruction Distance (RRD) metric.
Lastly, an inversion attack is carried out on the meta-parameters; empirical results obtained on the handwritten digits recognition dataset, the COVID-19 X-Ray Radiography dataset, and the COVID-19 Lung CT Scans dataset confirm the utility of meta-learning-based self-taught features in obtaining richer feature representations and, hence, more generalizable base-learners. Relative Reconstruction Distance values computed on these datasets show that the differentially private meta-parameters are robust to inversion attacks. Consequently, the proposed approach may be used in applications where the privacy of sensitive source-domain datasets is paramount.
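The two mechanisms the abstract describes, perturbing the episode's meta-parameters with differential privacy before they are released to the target domain, and scoring inversion robustness by a reconstruction distance relative to the original instance, can be sketched as below. This is a minimal illustration, not the paper's implementation: the noise calibration shown is the standard Gaussian mechanism, and `relative_reconstruction_distance` is a hypothetical ratio-style definition, since the paper's exact RRD formula is not given in this record.

```python
import numpy as np

def privatize_meta_params(theta, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Gaussian-mechanism sketch: clip the meta-parameters to bound their
    L2 sensitivity, then add calibrated noise before releasing them to the
    target domain's base-learner."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta, dtype=float)
    # Clip so the released vector has L2 norm at most clip_norm.
    norm = np.linalg.norm(theta)
    theta_clipped = theta * min(1.0, clip_norm / max(norm, 1e-12))
    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return theta_clipped + rng.normal(0.0, sigma, size=theta.shape)

def relative_reconstruction_distance(x_true, x_recon):
    """Hypothetical RRD-style score: reconstruction error of an inversion
    attack relative to the norm of the original instance. Larger values
    indicate the instance is harder to reconstruct."""
    x_true = np.asarray(x_true, dtype=float)
    x_recon = np.asarray(x_recon, dtype=float)
    return np.linalg.norm(x_true - x_recon) / max(np.linalg.norm(x_true), 1e-12)
```

Under this sketch, a smaller epsilon yields noisier released meta-parameters, so an attacker's reconstructions drift further from the source instances and the relative distance grows.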
Pages: 4021-4053 (33 pages)
Related papers
50 in total
  • [41] Self-taught learning via exponential family sparse coding for cost-effective patient thought record categorization
    Wang, Hua
    Huang, Heng
    Basco, Monica
    Lopez, Molly
    Makedon, Fillia
    PERSONAL AND UBIQUITOUS COMPUTING, 2014, 18 (01) : 27 - 35
  • [42] STAR-Lite: A light-weight scalable self-taught learning framework for older adults' activity recognition
    Ramamurthy, Sreenivasan Ramasamy
    Ghosh, Indrajeet
    Gangopadhyay, Aryya
    Galik, Elizabeth
    Roy, Nirmalya
    PERVASIVE AND MOBILE COMPUTING, 2022, 87
  • [43] ASAD: A Meta Learning-Based Auto-Selective Approach and Tool for Anomaly Detection
    Rashid, Nadia
    Mehmood, Rashid
    Alqurashi, Fahad
    Alqahtany, Saad
    Corchado, Juan M.
    IEEE ACCESS, 2025, 13 : 4341 - 4367
  • [45] A Federated Learning Framework Based on Differentially Private Continuous Data Release
    Cai, Jianping
    Liu, Ximeng
    Ye, Qingqing
    Liu, Yang
    Wang, Yuyang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4879 - 4894
  • [46] Meta Learning-Based Hybrid Ensemble Approach for Short-Term Wind Speed Forecasting
    Ma, Zhengwei
    Guo, Sensen
    Xu, Gang
    Aziz, Saddam
    IEEE ACCESS, 2020, 8 : 172859 - 172868
  • [47] A Differentially Private Blockchain-Based Approach for Vertical Federated Learning
    Tran, Linh
    Chari, Sanjay
    Khan, Md Saikat Islam
    Zachariah, Aaron
    Patterson, Stacy
    Seneviratne, Oshani
    2024 IEEE INTERNATIONAL CONFERENCE ON DECENTRALIZED APPLICATIONS AND INFRASTRUCTURES, DAPPS 2024, 2024, : 86 - 92
  • [48] Curriculum-Based Meta-learning
    Zhang, Ji
    Song, Jingkuan
    Yao, Yazhou
    Gao, Lianli
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 1838 - 1846
  • [49] Meta-Transfer Learning-Based Handover Optimization for V2N Communication
    Sohaib, Rana Muhammad
    Onireti, Oluwakayode
    Tan, Kang
    Sambo, Yusuf
    Swash, Rafiq
    Imran, Muhammad Ali
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (11) : 17331 - 17346
  • [50] MRLM: A meta-reinforcement learning-based metaheuristic for hybrid flow-shop scheduling problem with learning and forgetting effects
    Zhang, Zeyu
    Shao, Zhongshi
    Shao, Weishi
    Chen, Jianrui
    Pi, Dechang
    SWARM AND EVOLUTIONARY COMPUTATION, 2024, 85