Looping In: Exploring Feedback Strategies to Motivate Human Engagement in Interactive Machine Learning

Cited by: 0
Authors
Shin, Hyorim [1 ]
Park, Jeongeun [2 ]
Yu, Jeongmin [3 ]
Kim, Jungeun [3 ]
Kim, Ha Young [1 ]
Oh, Changhoon [1 ]
Affiliations
[1] Yonsei Univ, Grad Sch Informat, Seoul, South Korea
[2] Hanyang Univ, Dept Human Comp Interact, Ansan, South Korea
[3] Yonsei Univ, Dept Artificial Intelligence, Seoul, South Korea
Keywords
Human-AI interaction; human-in-the-loop; interactive machine learning; task criticality; user engagement; AI feedback; gamification; curiosity; people; model
DOI
10.1080/10447318.2024.2413293
CLC number: TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
This study investigates effective feedback mechanisms to maintain human engagement in interactive machine learning (IML) systems, focusing on social media platforms. We developed "Loop," an IML system based on human-in-the-loop (HITL) principles that recommends content while encouraging users to report inaccuracies for model refinement. Loop implements three types of artificial intelligence (AI) feedback on user reports: (a) machine learning (ML)-centric, (b) personal-centric, and (c) community-centric feedback. In addition, we evaluated the relative effectiveness of these feedback types under two different task criticality scenarios: high and low. A user study with 30 participants was conducted to evaluate Loop through questionnaires and interviews. Results showed that participants preferred algorithmic improvements for personal benefit over altruistic contributions to the community, especially for low-criticality tasks. Furthermore, personal-centric feedback had a significant impact on user engagement and satisfaction. Our findings provide insights into the effectiveness of machine feedback in HITL-ML systems, contributing to the design of more engaging and effective IML interfaces. We discuss implications and strategies for encouraging proactive user engagement in HITL-ML-based systems, emphasizing the importance of tailored feedback mechanisms.
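To make the described interaction concrete: the abstract says Loop recommends content, lets users report inaccuracies for model refinement, and acknowledges each report in one of three styles (ML-centric, personal-centric, or community-centric). Below is a minimal Python sketch of how such report routing might look. The paper publishes no code, so every name here (FeedbackStyle, Report, acknowledge) and every message string is a hypothetical illustration of the described design, not the authors' implementation.

```python
# Hypothetical sketch of Loop-style report acknowledgment (not the authors' code).
from dataclasses import dataclass
from enum import Enum


class FeedbackStyle(Enum):
    ML_CENTRIC = "ml"                # explains the model update
    PERSONAL_CENTRIC = "personal"    # emphasizes benefit to the reporter
    COMMUNITY_CENTRIC = "community"  # emphasizes benefit to other users


@dataclass
class Report:
    user_id: str
    item_id: str
    reason: str  # e.g., "irrelevant" or "inaccurate"


def acknowledge(report: Report, style: FeedbackStyle) -> str:
    """Return the AI feedback message shown after a user files a report."""
    if style is FeedbackStyle.ML_CENTRIC:
        return (f"Your report on item {report.item_id} was added to the "
                f"training queue; the model will be retrained with it.")
    if style is FeedbackStyle.PERSONAL_CENTRIC:
        return ("Thanks! Your future recommendations will better reflect "
                "what you flagged as inaccurate.")
    return ("Thanks! Your report helps improve recommendations for the "
            "whole community.")
```

For instance, acknowledge(Report("u1", "post-42", "irrelevant"), FeedbackStyle.PERSONAL_CENTRIC) yields the personal-centric message; the study compared user responses to such styles under high- and low-criticality task scenarios.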
Pages: 18
Related References
79 records in total
  • [1] Personalized and Diverse Task Composition in Crowdsourcing
    Alsayasneh, Maha
    Amer-Yahia, Sihem
    Gaussier, Eric
    Leroy, Vincent
    Pilourdault, Julien
    Borromeo, Ria Mae
    Toyama, Motomichi
    Renders, Jean-Michel
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2018, 30 (01) : 128 - 141
  • [2] "I Really Don't Know What 'Thumbs Up' Means": Algorithmic Experience in Movie Recommender Algorithms
    Alvarado, Oscar
    Vanden Abeele, Vero
    Geerts, David
    Verbert, Katrien
    [J]. HUMAN-COMPUTER INTERACTION, INTERACT 2019, PT III, 2019, 11748 : 521 - 541
  • [3] Power to the People: The Role of Humans in Interactive Machine Learning
    Amershi, Saleema
    Cakmak, Maya
    Knox, W. Bradley
    Kulesza, Todd
    [J]. AI MAGAZINE, 2014, 35 (04) : 105 - 120
  • [4] Symphony: Composing Interactive Interfaces for Machine Learning
    Baeuerle, Alex
    Cabrera, Angel Alexander
    Hohman, Fred
    Maher, Megan
    Koski, David
    Suau, Xavier
    Barik, Titus
    Moritz, Dominik
    [J]. PROCEEDINGS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI' 22), 2022,
  • [5] Bergen, Mark, 2022, Like, Comment, Subscribe: Inside YouTube's Chaotic Rise to World Domination
  • [6] Bontempelli, A., 2023, arXiv preprint, arXiv:2205.15769
  • [7] One size fits all? What counts as quality practice in (reflexive) thematic analysis?
    Braun, Virginia
    Clarke, Victoria
    [J]. QUALITATIVE RESEARCH IN PSYCHOLOGY, 2021, 18 (03) : 328 - 352
  • [8] A survey on active learning and human-in-the-loop deep learning for medical image analysis
    Budd, Samuel
    Robinson, Emma C.
    Kainz, Bernhard
    [J]. MEDICAL IMAGE ANALYSIS, 2021, 71
  • [9] Breaking monotony with meaning: Motivation in crowdsourcing markets
    Chandler, Dana
    Kapelner, Adam
    [J]. JOURNAL OF ECONOMIC BEHAVIOR & ORGANIZATION, 2013, 90 : 123 - 133
  • [10] Chanseau, A., 2018, IEEE RO-MAN, p. 1057, DOI: 10.1109/ROMAN.2018.8525663