Crowdsourcing in Computer Vision

Cited by: 56
Authors
Kovashka, Adriana [1 ]
Russakovsky, Olga [2 ]
Li Fei-Fei [3 ]
Grauman, Kristen [4 ]
Affiliations
[1] Univ Pittsburgh, Pittsburgh, PA 15260 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Stanford Univ, Stanford, CA 94305 USA
[4] Univ Texas Austin, Austin, TX 78712 USA
Source
FOUNDATIONS AND TRENDS IN COMPUTER GRAPHICS AND VISION | 2014 | Vol. 10 | No. 3
Keywords
Computer vision; Data acquisition; Object recognition
DOI
10.1561/0600000071
CLC Classification Number
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Computer vision systems require large amounts of manually annotated data to properly learn challenging visual concepts. Crowdsourcing platforms offer an inexpensive way to capture human knowledge and understanding for a vast number of visual perception tasks. In this survey, we describe the types of annotations computer vision researchers have collected using crowdsourcing, and how they have ensured that these data are of high quality while minimizing annotation effort. We begin by discussing data collection on both classic (e.g., object recognition) and recent (e.g., visual storytelling) vision tasks. We then summarize key design decisions for creating effective data collection interfaces and workflows, and present strategies for intelligently selecting the most important data instances to annotate. Finally, we conclude with some thoughts on the future of crowdsourcing in computer vision.
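One family of strategies for "intelligently selecting the most important data instances to annotate" is active learning via uncertainty sampling: send the images the current model is least sure about to crowd workers first. A minimal sketch, not the survey's specific method; the function names and the toy prediction data are hypothetical:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(predictions, budget):
    """Rank unlabeled images by model uncertainty and return the `budget`
    most uncertain ones, i.e. the candidates whose labels should help most.

    predictions: dict mapping image id -> predicted class probabilities.
    """
    ranked = sorted(predictions, key=lambda img: entropy(predictions[img]),
                    reverse=True)
    return ranked[:budget]

# Hypothetical model outputs over three classes:
preds = {
    "img_a": [0.98, 0.01, 0.01],  # confident -> low annotation priority
    "img_b": [0.40, 0.35, 0.25],  # near-uniform -> annotate first
    "img_c": [0.70, 0.20, 0.10],
}
print(select_for_annotation(preds, 2))  # → ['img_b', 'img_c']
```

In a crowdsourcing pipeline, the selected batch would be posted to a platform such as Amazon Mechanical Turk, the new labels folded into the training set, and the model retrained before the next selection round.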
Pages: I / 243
Page count: 69