50 entries in total
[31] Controllable Generation from Pre-trained Language Models via Inverse Prompting [C]. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 2450-2460.
[32] Enhancing Scalability of Pre-trained Language Models via Efficient Parameter Sharing [C]. Findings of the Association for Computational Linguistics (EMNLP 2023), 2023: 13771-13785.
[33] From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Models to Pre-trained Machine Reader [C]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
[35] Probing Pre-Trained Language Models for Disease Knowledge [C]. Findings of the Association for Computational Linguistics, ACL-IJCNLP 2021, 2021: 3023-3033.
[36] Emotional Paraphrasing Using Pre-trained Language Models [C]. 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2021.
[37] Analyzing Individual Neurons in Pre-trained Language Models [C]. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 4865-4880.
[38] Supporting Undotted Arabic with Pre-trained Language Models [C]. Proceedings of the 4th International Conference on Natural Language and Speech Processing, ICNLSP 2021, 2021: 89-94.
[39] A Close Look into the Calibration of Pre-trained Language Models [C]. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Vol 1, 2023: 1343-1367.
[40] Deep Entity Matching with Pre-Trained Language Models [J]. Proceedings of the VLDB Endowment, 2020, 14(1): 50-60.