Detoxifying Large Language Models via Knowledge Editing

Cited by: 0
Authors
Wang, Mengru [1 ]
Zhang, Ningyu [1 ,6 ]
Xu, Ziwen [1 ]
Xi, Zekun [1 ]
Deng, Shumin [3 ]
Yao, Yunzhi [1 ]
Zhang, Qishen [2 ]
Yang, Linyi [4 ]
Wang, Jindong [5 ]
Chen, Huajun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Ant Grp, Hangzhou, Peoples R China
[3] Natl Univ Singapore, NUS NCS Joint Lab, Singapore, Singapore
[4] Westlake Univ, Hangzhou, Peoples R China
[5] Microsoft Res Asia, Beijing, Peoples R China
[6] Southeast Univ, Key Lab New Generat Artificial Intelligence Techn, Minist Educ, Nanjing, Peoples R China
Source
PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS | 2024
Funding
National Natural Science Foundation of China
DOI: not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper investigates detoxifying Large Language Models (LLMs) via knowledge editing. We construct a benchmark, SafeEdit, which covers nine unsafe categories with various powerful attack prompts and provides comprehensive metrics for systematic evaluation. Experiments with several knowledge editing approaches indicate that knowledge editing has the potential to detoxify LLMs efficiently, with limited impact on general performance. We then propose a simple yet effective baseline, dubbed Detoxifying with Intraoperative Neural Monitoring (DINM), which diminishes the toxicity of LLMs within a few tuning steps using only a single instance. We further provide an in-depth analysis of the internal mechanisms of various detoxifying approaches, demonstrating that previous methods such as SFT and DPO may merely suppress the activations of toxic parameters, whereas DINM mitigates the toxicity of the toxic parameters themselves to a certain extent, making permanent adjustments. We hope these insights will shed light on future work on detoxifying approaches and the underlying knowledge mechanisms of LLMs.
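The abstract's contrast between suppressing toxic activations and editing toxic parameters can be illustrated with a heavily simplified, hypothetical sketch. This is not the paper's DINM implementation: the "model" is just a stack of linear layers, and the localization rule (pick the layer where safe and unsafe representations diverge most, then tune only that layer's weights) is an illustrative assumption.

```python
# Hypothetical sketch of locate-then-edit detoxification (NOT the DINM code):
# 1) contrast hidden states of a safe vs. unsafe input to locate one layer,
# 2) freeze everything else and run a few tuning steps on that layer only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an LLM: a small stack of linear layers, not a transformer.
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])

def hidden_states(x):
    """Return the hidden state produced after each layer."""
    hs = []
    for layer in layers:
        x = torch.tanh(layer(x))
        hs.append(x)
    return hs

safe = torch.randn(1, 8)    # stand-in embedding of a safe response
unsafe = torch.randn(1, 8)  # stand-in embedding of an unsafe response

# Step 1: locate the layer where safe/unsafe representations diverge most.
with torch.no_grad():
    gaps = [(s - u).norm().item()
            for s, u in zip(hidden_states(safe), hidden_states(unsafe))]
toxic_idx = max(range(len(gaps)), key=gaps.__getitem__)

# Step 2: freeze all parameters except the located "toxic" layer,
# so the edit is a permanent, localized change rather than suppression.
for i, layer in enumerate(layers):
    for p in layer.parameters():
        p.requires_grad = (i == toxic_idx)

# Step 3: a few tuning steps pulling the unsafe output toward the safe one.
with torch.no_grad():
    target = hidden_states(safe)[-1]  # safe representation before editing

opt = torch.optim.Adam(layers[toxic_idx].parameters(), lr=1e-2)
loss_before = None
for step in range(20):  # "a few tuning steps"
    opt.zero_grad()
    loss = (hidden_states(unsafe)[-1] - target).pow(2).mean()
    if loss_before is None:
        loss_before = loss.item()
    loss.backward()
    opt.step()
```

In this sketch only `layers[toxic_idx]` receives gradient updates, mirroring the idea of adjusting toxic parameters directly instead of training the whole model to suppress their activations.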
Pages: 3093-3118
Number of pages: 26