GraphUnit: Evaluating Interactive Graph Visualizations Using Crowdsourcing

Cited by: 12
Authors: Okoe, Mershack [1]; Jianu, Radu [1]
Affiliation: [1] Florida Int Univ, Miami, FL 33199 USA
Keywords
Graph evaluation; graph user studies; automating graph evaluation; crowdsourcing graphs; DESIGN; INFORMATION;
DOI: 10.1111/cgf.12657
CLC number: TP31 [Computer Software]
Discipline codes: 081202; 0835
Abstract
We present GraphUnit, a framework and online service that automates the process of designing, running and analyzing results of controlled user studies of graph visualizations by leveraging crowdsourcing and a set of evaluation modules based on a graph task taxonomy. User studies play an important role in visualization research but conducting them requires expertise and is time consuming. GraphUnit simplifies the evaluation process by allowing visualization designers to easily configure user studies for their web-based graph visualizations, deploy them online, use Mechanical Turk to attract participants, collect user responses and store them in a database, and analyze incoming results automatically using appropriate statistical tools and graphs. We demonstrate the effectiveness of GraphUnit by replicating two published evaluation studies on network visualization, and showing that these studies could be configured in less than an hour. Finally, we discuss how GraphUnit can facilitate quick evaluations of alternative graph designs and thus encourage the frequent use of user studies to evaluate design decisions in iterative development processes.
Pages: 451 - 460 (10 pages)
Related papers (50 in total)
  • [1] Using Gaze Data in Evaluating Interactive Visualizations
    Siirtola, Harri
    Raiha, Kari-Jouko
    HUMAN ASPECTS OF VISUALIZATION, 2011, 6431 : 127 - 141
  • [2] Crowdsourcing interactions: using crowdsourcing for evaluating interactive information retrieval systems
    Zuccon, Guido
    Leelanupab, Teerapong
    Whiting, Stewart
    Yilmaz, Emine
    Jose, Joemon M.
    Azzopardi, Leif
    INFORMATION RETRIEVAL, 2013, 16 (02): 267 - 305
  • [4] On Evaluating Runtime Performance of Interactive Visualizations
    Bruder, Valentin
    Mueller, Christoph
    Frey, Steffen
    Ertl, Thomas
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2020, 26 (09) : 2848 - 2862
  • [5] Pedagogy and usability in interactive algorithm visualizations: Designing and evaluating CIspace
    Amershi, Saleema
    Carenini, Giuseppe
    Conati, Cristina
    Mackworth, Alan K.
    Poole, David
    INTERACTING WITH COMPUTERS, 2008, 20 (01) : 64 - 96
  • [6] Evaluating Interactive Visualizations for Supporting Navigation and Exploration in Enterprise Systems
    Babaian, Tamara
    Lucas, Wendy
    Chircu, Alina
    Power, Noreen
    PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS, VOL 2 (ICEIS), 2016: 368 - 377
  • [7] Evaluating overall quality of graph visualizations based on aesthetics aggregation
    Huang, Weidong
    Huang, Mao Lin
    Lin, Chun-Cheng
    INFORMATION SCIENCES, 2016, 330 : 444 - 454
  • [8] WiGis: A Framework for Scalable Web-Based Interactive Graph Visualizations
    Gretarsson, Brynjar
    Bostandjiev, Svetlin
    O'Donovan, John
    Hoellerer, Tobias
    GRAPH DRAWING, 2010, 5849 : 119 - 134
  • [9] Evaluating visualizations: using a taxonomic guide
    Morse, E
    Lewis, M
    Olsen, KA
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2000, 53 (05) : 637 - 662
  • [10] epiCG: A GraphUnit Based Graph Processing Engine on epiC
    Shen, Yanyan
    Cai, Qingchao
    Lu, Wei
    Sun, Dalie
    Xie, Zhongle
    BIG DATA RESEARCH, 2016, 4 : 59 - 69