WiP: An On-device LLM-based Approach to Query Privacy Protection

Cited by: 1
Authors
Yuan, Yizhen [1 ,2 ]
Kong, Rui [1 ,3 ]
Li, Yuanchun [1 ]
Liu, Yunxin [1 ]
Affiliations
[1] Tsinghua Univ, Inst AI Ind Res AIR, Beijing, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[3] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai, Peoples R China
Source
PROCEEDINGS OF THE 2024 WORKSHOP ON EDGE AND MOBILE FOUNDATION MODELS, EDGEFM 2024 | 2024
Funding
National Science Foundation (USA);
Keywords
Large language models; query obfuscation; privacy protection;
DOI
10.1145/3662006.3662060
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Privacy leakage from user queries is a widespread concern in search engines and chatbot services. Existing solutions based on removing, obfuscating, or encrypting private information inevitably hurt service quality or require fully trusting the service provider. Inspired by the remarkable language understanding and generation abilities of large language models (LLMs), we introduce LLM-QueryGuard, an LLM-based tool designed to mitigate privacy leakage in continuous user queries. The core of LLM-QueryGuard is an on-device LLM that automatically identifies the private properties contained in user queries and generates false queries to obfuscate those properties. By making the generated queries indistinguishable from real ones and mixing them into the real query stream, our approach can be seamlessly integrated into any query-driven application (e.g., search engines and ChatGPT), at the cost of additional queries.
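The paper provides no implementation details beyond the abstract, but the core mixing step it describes can be sketched roughly as follows. This is an illustrative sketch only: the function names are invented here, and the toy decoy generator stands in for the on-device LLM, which in the actual system would produce contextually indistinguishable fake queries conditioned on the detected private properties.

```python
import random

def generate_decoys(real_query, num_decoys, decoy_generator):
    """Produce decoy queries; decoy_generator stands in for the on-device LLM."""
    return [decoy_generator(real_query, i) for i in range(num_decoys)]

def obfuscate(real_query, num_decoys, decoy_generator, rng=random):
    """Mix the real query into a shuffled batch of decoys before sending upstream."""
    batch = generate_decoys(real_query, num_decoys, decoy_generator)
    batch.append(real_query)
    rng.shuffle(batch)  # the service provider cannot tell which query is real
    return batch

# Hypothetical stand-in for the on-device LLM's decoy generation.
def toy_decoy_generator(real_query, i):
    topics = ["weather in Oslo", "python list sort", "history of tea"]
    return topics[i % len(topics)]

batch = obfuscate("symptoms of condition X", 3, toy_decoy_generator, random.Random(0))
```

Note the privacy/cost trade-off the abstract mentions: each real query costs `num_decoys` extra requests to the service, so the decoy count tunes protection against overhead.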
Pages: 7-9
Page count: 3