50 records in total
- [1] Learned Token Pruning for Transformers. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 784-794
- [2] Multimodal Token Fusion for Vision Transformers. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 12176-12185
- [3] Efficient Transformers with Dynamic Token Pooling. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1, 2023: 6403-6417
- [4] Robustifying Token Attention for Vision Transformers. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 17511-17522
- [5] Token Pooling in Vision Transformers for Image Classification. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023: 12-21
- [7] Token-Label Alignment for Vision Transformers. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 5472-5481
- [8] TLM: Token-Level Masking for Transformers. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 14099-14111
- [9] Adaptive Token Sampling for Efficient Vision Transformers. Computer Vision, ECCV 2022, Pt. XI, 2022, 13671: 396-414