Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users' Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools

Cited by: 0
Authors
Chen, Cheng [1 ]
Lee, Sangwook [2 ]
Jang, Eunchae [3 ]
Sundar, S. Shyam [3 ]
Affiliations
[1] Elon Univ, Sch Commun, Commun Design Dept, Elon, NC 27244 USA
[2] Univ Colorado Boulder, Dept Advertising Publ Relat & Media Design, Boulder, CO USA
[3] Penn State Univ, Bellisario Coll Commun, Media Effects Res Lab, University Pk, PA USA
Source
PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON TRUSTWORTHY AUTONOMOUS SYSTEMS, TAS 2024 | 2024
Keywords
Prompt coaching; Generative AI; Perceived trust calibration; Cognitive elaboration; User engagement; User interface; User experience; SATISFACTION; PSYCHOLOGY; EXPERIENCE
DOI
10.1145/3686038.3686060
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Prompts are the primary medium for interacting with generative AI tools. However, users often lack sufficient prompt literacy and motivation to fully benefit from these tools. To address this, we explore whether introducing prompt coaching into a chatbot-based generative AI interface can influence users' perceptions of and engagement with prompting, and in turn affect their trust in the system. In a user study (N = 132), we found that prompt coaching encouraged users to specify more details in their prompts, even though over half initially believed their prompts were sufficient. Furthermore, the coach increased users' cognitive elaboration, which was associated with higher perceived trust calibration. However, prompt coaching did not significantly enhance user experience, although users in the coaching-absent condition expressed a strong need for prompt assistance to improve their experience. These findings have practical implications for the design of trustworthy and responsible generative AI interfaces.
Pages: 12