共 50 条
- [1] Is Transformer-Based Attention Agnostic of the Pretraining Language and Task? SOUTH AFRICAN COMPUTER SCIENCE AND INFORMATION SYSTEMS RESEARCH TRENDS, SAICSIT 2024, 2024, 2159 : 95 - 123
- [2] A Study on Performance Enhancement by Integrating Neural Topic Attention with Transformer-Based Language Model APPLIED SCIENCES-BASEL, 2024, 14 (17):
- [3] The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models ACL-IJCNLP 2021: THE 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 2, 2021, : 130 - 140
- [4] Ouroboros: On Accelerating Training of Transformer-Based Language Models ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
- [5] Transformer-Based Language Models for Software Vulnerability Detection PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022, : 481 - 496
- [6] BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2103 - 2115
- [7] A Comparison of Transformer-Based Language Models on NLP Benchmarks NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2022), 2022, 13286 : 490 - 501
- [10] TAG: Gradient Attack on Transformer-based Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3600 - 3610