Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users' Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools

Cited by: 0
Authors
Chen, Cheng [1 ]
Lee, Sangwook [2 ]
Jang, Eunchae [3 ]
Sundar, S. Shyam [3 ]
Affiliations
[1] Elon Univ, Sch Commun, Commun Design Dept, Elon, NC 27244 USA
[2] Univ Colorado Boulder, Dept Advertising Publ Relat & Media Design, Boulder, CO USA
[3] Penn State Univ, Bellisario Coll Commun, Media Effects Res Lab, University Pk, PA USA
Source
PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON TRUSTWORTHY AUTONOMOUS SYSTEMS, TAS 2024 | 2024
Keywords
Prompt coaching; Generative AI; Perceived trust calibration; Cognitive elaboration; User engagement; User interface; User experience; Satisfaction; Psychology; Experience
DOI
10.1145/3686038.3686060
Chinese Library Classification
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Prompts are the primary medium for interacting with generative AI tools. However, users often lack sufficient prompt literacy and motivation to fully benefit from these tools. To address this, we explore whether introducing prompt coaching into a chatbot-based generative AI interface can influence users' perceptions of and engagement with prompting, and further affect their trust in the system. In a user study (N = 132), we found that prompt coaching encourages users to specify more details in their prompts, even though over half initially believed their prompts were sufficient. Furthermore, the coach increased users' cognitive elaboration, which was associated with higher perceived trust calibration. However, prompt coaching did not significantly enhance user experience (UX), although users in the coaching-absent condition expressed a strong need for prompt assistance to improve their experience. These findings have practical implications for the design of trustworthy and responsible generative AI interfaces.
Pages: 12