Evaluation in visualization: some issues and best practices (3 February 2014)
The first data and information visualization techniques and systems were developed and presented without systematic evaluation; however, researchers have become increasingly aware of its importance (Plaisant, 2004) [1]. Evaluation is not only a means of improving techniques and applications; it can also produce evidence of measurable benefits that will encourage adoption. Yet evaluating visualization applications or techniques is not simple. We contend that visualization applications should be developed using a user-centered design approach and that evaluation should take place in several phases along the process, with different purposes. An account of the issues we consider relevant when planning an evaluation in Medical Data Visualization can be found in (Sousa Santos and Dillenseger, 2005) [2]. In that work the question “how well does a visualization represent the underlying phenomenon and help the user understand it?” is identified as fundamental and is decomposed into two aspects: (a) the evaluation of the representation of the phenomenon (first part of the question); (b) the evaluation of users’ performance in their tasks when using the visualization, which implies understanding the phenomenon (second part of the question). We contend that these questions transcend Medical Data Visualization and can be considered central to evaluating Data and Information Visualization applications and techniques in general. In fact, the latter part of the question is related to the question Freitas et al. (2009) [3] deem crucial to user-centered visualization evaluation: “How do we know if information visualization tools are useful and usable for real users performing real visualization tasks?” In what follows, issues and methods that we have been using to tackle this latter question are briefly addressed.
This excludes equally relevant topics, such as algorithm optimization and accuracy, which can be addressed using concepts and methods well known in other disciplines and are mainly related to how well the phenomenon is represented. A list of guidelines that we consider best practices for performing evaluations is presented, and some conclusions are drawn.
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Beatriz Sousa Santos and Paulo Dias, "Evaluation in visualization: some issues and best practices", Proc. SPIE 9017, Visualization and Data Analysis 2014, 90170O (3 February 2014); https://doi.org/10.1117/12.2038259


