50 entries in total
- [1] Boost Vision Transformer with GPU-Friendly Sparsity and Quantization. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 22658-22668
- [2] Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models. 61st Conference of the Association for Computational Linguistics (ACL 2023), Vol 2, 2023: 1329-1339
- [4] Ouroboros: On Accelerating Training of Transformer-Based Language Models. Advances in Neural Information Processing Systems 32 (NIPS 2019), 2019, 32
- [5] Transformer-Based Language Models for Software Vulnerability Detection. Proceedings of the 38th Annual Computer Security Applications Conference (ACSAC 2022), 2022: 481-496
- [6] A Comparison of Transformer-Based Language Models on NLP Benchmarks. Natural Language Processing and Information Systems (NLDB 2022), 2022, 13286: 490-501
- [9] TAG: Gradient Attack on Transformer-based Language Models. Findings of the Association for Computational Linguistics (EMNLP 2021), 2021: 3600-3610