Controllable Generation from Pre-trained Language Models via Inverse Prompting

Citations: 24
Authors
Zou, Xu [1 ,2 ]
Yin, Da [1 ,2 ]
Zhong, Qingyang [1 ,2 ]
Yang, Hongxia [4 ]
Yang, Zhilin [2 ,3 ]
Tang, Jie [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
[3] Recurrent AI Ltd, Beijing, Peoples R China
[4] Alibaba Inc, Hangzhou, Peoples R China
Source
KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING | 2021
Funding
National Key R&D Program of China;
Keywords
Language Modeling; Machine Question Answering; Poem Generation; Controllable Generation; Beam Search;
DOI
10.1145/3447548.3467418
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large-scale pre-trained language models have demonstrated strong capabilities of generating realistic texts. However, it remains challenging to control the generation results. Previous approaches such as prompting are far from sufficient, and the lack of controllability limits the usage of language models. To tackle this challenge, we propose an innovative method, inverse prompting, to better control text generation. The core idea of inverse prompting is to use the generated text to inversely predict the prompt during beam search, which enhances the relevance between the prompt and the generated text and thus improves controllability. Empirically, we pre-train a large-scale Chinese language model to perform a systematic study using human evaluation on the tasks of open-domain poem generation and open-domain long-form question answering. Results demonstrate that our proposed method substantially outperforms the baselines and that our generation quality is close to human performance on some of the tasks.
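The scoring idea in the abstract — rank beam candidates not only by forward likelihood but also by how well the generated text predicts the original prompt back — can be sketched as below. This is a minimal illustration, not the authors' implementation: `toy_log_prob` is a hypothetical word-overlap stand-in for a pretrained language model's conditional log-likelihood, and `lam` is an assumed weighting hyperparameter for the inverse term.

```python
import math

def toy_log_prob(condition, target):
    # Hypothetical stand-in for a language model's log p(target | condition).
    # Scores by word overlap so the example stays self-contained.
    cond_words = set(condition.split())
    tgt_words = target.split()
    if not tgt_words:
        return float("-inf")
    hits = sum(1 for w in tgt_words if w in cond_words)
    return math.log((hits + 1) / (len(tgt_words) + 1))

def inverse_prompting_score(prompt, candidate, lm_log_prob=toy_log_prob, lam=1.0):
    # Forward term: how likely the candidate is given the prompt.
    forward = lm_log_prob(prompt, candidate)
    # Inverse term: how well the candidate predicts the prompt back,
    # which rewards candidates that stay relevant to the prompt.
    inverse = lm_log_prob(candidate, prompt)
    return forward + lam * inverse

def rerank_beam(prompt, candidates, lam=1.0):
    # Pick the beam candidate with the best combined score.
    return max(candidates, key=lambda c: inverse_prompting_score(prompt, c, lam=lam))

prompt = "a poem about the moon"
beam = ["the moon rises over the sea", "stocks fell sharply on friday"]
best = rerank_beam(prompt, beam)  # the on-topic candidate wins
```

In the paper's setting the same pretrained model would supply both conditional likelihoods, and the inverse score is applied to partial sequences at each beam-search step rather than only to finished candidates.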
Pages: 2450-2460
Page count: 11