In-Context Impersonation Reveals Large Language Models' Strengths and Biases

Times Cited: 0

Authors
Salewski, Leonard [1,2]
Alaniz, Stephan [1,2]
Rio-Torto, Isabel [1,3,4]
Schulz, Eric [2,5]
Akata, Zeynep [1,2]
Affiliations
[1] Univ Tubingen, Tubingen, Germany
[2] Tubingen AI Ctr, Tubingen, Germany
[3] Univ Porto, Porto, Portugal
[4] INESC TEC, Porto, Portugal
[5] Max Planck Inst Biol Cybernet, Tubingen, Germany
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is, impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona that is associated either with a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their strengths and hidden biases. Our code is available at https://github.com/ExplainableML/in-context-impersonation.
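
The core mechanism the abstract describes, prefixing the task prompt with a persona, is simple enough to sketch. Below is a minimal illustration in Python; the persona phrasing, the helper names (build_impersonation_prompt, query_llm), and the example task are illustrative assumptions, not the authors' exact templates, which are in the linked repository.

# Minimal sketch of in-context impersonation: prepend a persona to the
# task prompt before querying an LLM. Phrasing and helper names are
# assumptions for illustration, not the paper's exact templates.

def build_impersonation_prompt(persona: str, task: str) -> str:
    # Personas can encode a social identity ("a 4 year old", "a woman")
    # or domain expertise ("an ornithologist", "a car mechanic").
    return f"If you were {persona}, {task}"

def query_llm(prompt: str) -> str:
    # Placeholder: wire this to any chat/completions API or local model.
    raise NotImplementedError

if __name__ == "__main__":
    task = "how would you describe the appearance of a black-capped chickadee?"
    for persona in ("an ornithologist", "a car mechanic"):
        print(build_impersonation_prompt(persona, task))
        # The two responses would then be scored downstream to compare
        # expert vs. non-expert personas on the same fixed task.

Holding the task fixed while swapping only the persona is what lets performance differences (e.g., bird expert vs. car expert, or man vs. woman) be attributed to the impersonated role alone.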
Pages: 14