Patient-Centered and Practical Privacy to Support AI for Healthcare

Times cited: 0
Authors
Liu, Ruixuan [1 ]
Lee, Hong Kyu [1 ]
Bhavani, Sivasubramanium V. [1 ]
Jiang, Xiaoqian [2 ]
Ohno-Machado, Lucila [3 ]
Xiong, Li [1 ]
Affiliations
[1] Emory Univ, Atlanta, GA 30322 USA
[2] UTHlth Houston, Houston, TX USA
[3] Yale Univ, New Haven, CT USA
Source
2024 IEEE 6TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA | 2024
Funding
US National Science Foundation (NSF);
Keywords
Privacy-preserving; machine learning; healthcare;
DOI
10.1109/TPS-ISA62245.2024.00038
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing integration of artificial intelligence (AI) in healthcare holds great promise for enhancing patient care through predictive modeling and clinical decision support. However, privacy concerns arise when deploying and sharing AI models, as adversaries can exploit vulnerabilities to infer sensitive patient information. Differential privacy (DP) has been the state-of-the-art approach to mitigating these risks, yet its adoption in healthcare remains limited due to complex privacy needs and the trade-off between privacy guarantees and model utility. This vision paper highlights the challenges and potential research directions for creating patient-centered privacy solutions that are practical, flexible, and transparent. These include improving patient awareness and control, developing privacy-enhanced training mechanisms that respect diverse patient preferences, and enabling post-training unlearning to adapt to evolving privacy requirements. While healthcare serves as a critical use case, the strategies discussed in this paper apply to other privacy-sensitive domains, advancing the development of privacy-preserving AI systems for real-world applications.
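The DP approach named in the abstract works by calibrating random noise to a query's sensitivity, bounding how much any single patient's record can influence a released result. As a minimal illustrative sketch (not from the paper itself; the function name and parameters are assumptions for illustration), the classic Laplace mechanism for a counting query looks like this:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    providing epsilon-DP for this single query (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a patient count. A counting query has
# sensitivity 1, since adding or removing one patient changes it by at most 1.
true_count = 128
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

A smaller epsilon gives a stronger privacy guarantee but a noisier answer, which is the privacy/utility trade-off the abstract identifies as a barrier to adoption in healthcare.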
Pages: 265-272 (8 pages)