30 entries in total
- [1] BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements. 37th Annual Computer Security Applications Conference (ACSAC 2021), 2021: 554-569
- [2] NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol 1, 2023: 15551-15565
- [3] RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 8365-8381
- [4] Concealed Data Poisoning Attacks on NLP Models. 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021), 2021: 139-150
- [6] Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation. Proceedings of the 31st USENIX Security Symposium, 2022: 3611-3628
- [7] Using Random Perturbations to Mitigate Adversarial Attacks on NLP Models. Thirty-Sixth AAAI Conference on Artificial Intelligence / Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence / Twelfth Symposium on Educational Advances in Artificial Intelligence, 2022: 13142-13143
- [8] Dynamic Backdoor Attacks Against Machine Learning Models. 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P 2022), 2022: 703-718
- [9] Integrated Directional Gradients: Feature Interaction Attribution for Neural NLP Models. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol 1 (ACL-IJCNLP 2021), 2021: 865-878
- [10] TARGET: Template-Transferable Backdoor Attack Against Prompt-Based NLP Models via GPT4. Natural Language Processing and Chinese Computing, Pt II, NLPCC 2024, 2025, 15360: 398-411