共 50 条
- [1] TWO-STAGE TRAINING METHOD FOR JAPANESE ELECTROLARYNGEAL SPEECH ENHANCEMENT BASED ON SEQUENCE-TO-SEQUENCE VOICE CONVERSION 2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 949 - 954
- [2] Improving AMR Parsing with Sequence-to-Sequence Pre-training PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 2501 - 2511
- [3] MASS: Masked Sequence to Sequence Pre-training for Language Generation INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
- [4] Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 571 - 582
- [5] Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training INTERSPEECH 2021, 2021, : 811 - 815
- [6] Denoising based Sequence-to-Sequence Pre-training for Text Generation 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 4003 - 4015
- [7] SELF-TRAINING AND PRE-TRAINING ARE COMPLEMENTARY FOR SPEECH RECOGNITION 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3030 - 3034
- [8] Uncovering Hidden Consequences of Pre-training Objectives in Sequence-to-Sequence Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 7010 - 7022
- [9] Pre-training with a rational approach for antibody sequence representation FRONTIERS IN IMMUNOLOGY, 2024, 15
- [10] Unified Speech-Text Pre-training for Speech Translation and Recognition PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 1488 - 1499