[32] UOR: Universal Backdoor Attacks on Pre-trained Language Models [C]//Findings of the Association for Computational Linguistics: ACL 2024. 2024: 7865-7877.
[34] Quantifying Adaptability in Pre-trained Language Models with 500 Tasks [C]//NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022: 4696-4715.
[35] Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model [C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2024). 2024: 2663-2673.
[36] Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models [C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023: 6620-6630.
[37] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [C]//Computer Vision - ECCV 2024, Pt XXXVI. 2025, 15094: 346-365.
[40] Adversarial Attacks on Pre-trained Deep Learning Models for Encrypted Traffic Analysis [J]. Journal of Web Engineering, 2024, 23(6): 749-768.