Semi-Supervised Active Learning with Temporal Output Discrepancy

Cited by: 38
Authors
Huang, Siyu [1 ]
Wang, Tianyang [2 ]
Xiong, Haoyi [1 ]
Huan, Jun [3 ]
Dou, Dejing [1 ]
Affiliations
[1] Baidu Research, Sunnyvale, CA 94089, USA
[2] Austin Peay State University, Clarksville, TN 37044, USA
[3] Styling AI, Beijing, People's Republic of China
Source
2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021) | 2021
DOI
10.1109/ICCV48922.2021.00343
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While deep learning succeeds in a wide range of tasks, it depends heavily on massive collections of annotated data, which are expensive and time-consuming to obtain. To lower the cost of data annotation, active learning interactively queries an oracle to annotate a small proportion of informative samples in an unlabeled dataset. Inspired by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this paper we present a novel deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur high loss. The core of our approach is a measure, Temporal Output Discrepancy (TOD), which estimates the sample loss by evaluating the discrepancy between outputs of models taken at different optimization steps. Our theoretical investigation shows that TOD lower-bounds the accumulated sample loss, so it can be used to select informative unlabeled samples. On the basis of TOD, we further develop an effective unlabeled-data sampling strategy as well as an unsupervised learning criterion that enhances model performance by incorporating the unlabeled data. Owing to the simplicity of TOD, our active learning approach is efficient, flexible, and task-agnostic. Extensive experimental results demonstrate that our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
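The abstract describes TOD as the discrepancy between outputs of model snapshots taken at different optimization steps, used to rank unlabeled samples for annotation. The following is a minimal PyTorch sketch of that sampling idea only; the snapshot pair, the loader interface, and the helper names (`temporal_output_discrepancy`, `select_for_annotation`) are illustrative assumptions, not the authors' implementation.

```python
import torch

@torch.no_grad()
def temporal_output_discrepancy(model_t, model_t_prev, x):
    """Squared L2 distance between the outputs of two model snapshots
    taken at different optimization steps (the TOD measure)."""
    diff = model_t(x) - model_t_prev(x)
    return (diff ** 2).flatten(start_dim=1).sum(dim=1)  # one score per sample

@torch.no_grad()
def select_for_annotation(model_t, model_t_prev, unlabeled_loader, budget, device="cpu"):
    """Rank unlabeled samples by TOD and return the indices of the
    top-`budget` samples, i.e., those believed to incur the highest loss."""
    scores, indices = [], []
    for idx, x in unlabeled_loader:          # assumed to yield (sample indices, input batch)
        x = x.to(device)
        scores.append(temporal_output_discrepancy(model_t, model_t_prev, x).cpu())
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = torch.topk(scores, k=min(budget, scores.numel())).indices
    return indices[top]                       # sample indices to send to the oracle
```

In practice the two snapshots could be, for example, the current model and a copy saved a few optimization steps earlier; the abstract only specifies that the outputs come from models at different optimization steps.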
Pages: 3427-3436
Page count: 10