Deep Learning Uncertainty in Machine Teaching

Cited by: 17
Authors
Sanchez, Teo [1 ]
Caramiaux, Baptiste [2 ]
Thiel, Pierre [2 ]
Mackay, Wendy E. [1 ]
Affiliations
[1] Univ Paris Saclay, CNRS, INRIA, LISN, Gif Sur Yvette, France
[2] Sorbonne Univ, CNRS, ISIR, Paris, France
Source
IUI'22: 27TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES | 2022
Keywords
ML uncertainty; Machine Teaching; Interactive Machine Learning; Human-AI Interaction; Human-centered analysis; HUMANS;
DOI
10.1145/3490099.3511117
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine Learning models can output confident but incorrect predictions. To address this problem, ML researchers use various techniques to reliably estimate ML uncertainty, typically evaluated on controlled benchmarks after the model has been trained. We explore how the two types of uncertainty, aleatoric and epistemic, can help non-expert users understand the strengths and weaknesses of a classifier in an interactive setting. We are interested in users' perception of the difference between aleatoric and epistemic uncertainty, and in their use of these measures to teach and understand the classifier. We conducted an experiment in which non-experts trained a classifier to recognize card images and were then tested on their ability to predict classifier outcomes. Participants who used either larger or more varied training sets significantly improved their understanding of both epistemic and aleatoric uncertainty. However, participants who relied on the uncertainty measure to guide their choice of training data neither significantly improved classifier training nor were better able to guess the classifier outcome. We identified three specific situations in which participants successfully distinguished aleatoric from epistemic uncertainty: placing a card in exactly the same position as a training card; placing different cards next to each other; and placing a non-card object, such as their hand, next to or on top of a card. We discuss our methodology for estimating uncertainty in Interactive Machine Learning systems and question the need for two-level uncertainty in Machine Teaching.
Pages: 173-190
Page count: 18