- [1] The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models. Proceedings of ACL-IJCNLP 2021: The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol 2, 2021: 130-140.
- [2] Transformer-Based Models for Predicting Molecular Structures from Infrared Spectra Using Patch-Based Self-Attention. Journal of Physical Chemistry A, 2025, 129(8): 2077-2085.
- [3] Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020: 556-568.
- [5] Transformer-Based Streaming ASR with Cumulative Attention. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 8272-8276.
- [6] TripleFormer: Improving Transformer-Based Image Classification Using Multiple Self-Attention Inputs. Visual Computer, 2024, 40(12): 9039-9050.
- [7] Simplified Self-Attention for Transformer-Based End-to-End Speech Recognition. 2021 IEEE Spoken Language Technology Workshop (SLT), 2021: 75-81.
- [8] Synthesizer: Rethinking Self-Attention for Transformer Models. International Conference on Machine Learning (ICML), Vol 139, 2021: 7192-7203.
- [9] Transformer-Based Acoustic Modeling for Streaming Speech Synthesis. Interspeech 2021, 2021: 146-150.