Evaluating Visual Data Analysis Systems: A Discussion Report

Cited by: 12
Authors
Battle, Leilani [1 ]
Angelini, Marco [6 ]
Binnig, Carsten [2 ,3 ]
Catarci, Tiziana [6 ]
Eichmann, Philipp [3 ]
Fekete, Jean-Daniel [4 ]
Santucci, Giuseppe [6 ]
Sedlmair, Michael [7 ]
Willet, Wesley [5 ]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Tech Univ Darmstadt, Darmstadt, Germany
[3] Brown Univ, Providence, RI 02912 USA
[4] INRIA, Rocquencourt, France
[5] Univ Calgary, Calgary, AB, Canada
[6] Univ Roma La Sapienza, Rome, Italy
[7] Jacobs Univ Bremen, Bremen, Germany
Source
HILDA'18: PROCEEDINGS OF THE WORKSHOP ON HUMAN-IN-THE-LOOP DATA ANALYTICS | 2018
Keywords
DOI
10.1145/3209900.3209901
CLC number
TP [automation technology; computer technology]
Discipline code
0812
Abstract
Visual data analysis is a key tool for helping people to make sense of and interact with massive data sets. However, existing evaluation methods (e.g., database benchmarks, individual user studies) fail to capture the key points that make systems for visual data analysis (or visual data systems) challenging to design. In November 2017, members of both the Database and Visualization communities came together in a Dagstuhl seminar to discuss the grand challenges in the intersection of data analysis and interactive visualization. In this paper, we report on the discussions of the working group on the evaluation of visual data systems, which addressed questions centered around developing better evaluation methods, such as "How do the different communities evaluate visual data systems?" and "What could we learn from each other to develop evaluation techniques that cut across areas?". In their discussions, the group brainstormed initial steps towards new joint evaluation methods and developed a first concrete initiative - a trace repository of various real-world workloads and visual data systems - that enables researchers to derive evaluation setups (e.g., performance benchmarks, user studies) under more realistic assumptions, and enables new evaluation perspectives (e.g., broader meta-analysis across analysis contexts, reproducibility and comparability across systems).
Pages: 6