50 entries in total
- [1] Assessing Multilingual Fairness in Pre-trained Multimodal Representations. Findings of the Association for Computational Linguistics (ACL 2022), 2022: 2681-2695.
- [2] Pre-trained Affective Word Representations. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), 2019.
- [3] Diffused Redundancy in Pre-trained Representations. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [4] Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models. NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022: 1693-1706.
- [5] On the Language Neutrality of Pre-trained Multilingual Representations. Findings of the Association for Computational Linguistics, EMNLP 2020, 2020: 1663-1674.
- [7] Pre-trained Language Model Representations for Language Generation. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2019), Vol. 1, 2019: 4052-4059.
- [8] Inverse Problems Leveraging Pre-trained Contrastive Representations. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
- [9] Are Pre-trained Convolutions Better than Pre-trained Transformers? 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021: 4349-4359.
- [10] Pre-trained Visual Dynamics Representations for Efficient Policy Learning. Computer Vision - ECCV 2024, Pt LXXXI, 2025, 15139: 249-267.