Information Visualization specific heuristics

From Cmsc734_f12

Since "the students' answer" is editable and a draft can be iterated on, I am posting a few observations, ideas, suggestions, and resources on this topic as a first step that we can build on, add to, or reject.


Observations

  1. Much of the literature does not mention heuristics, although problems with other types of InfoVis evaluation are often studied.
  2. Heuristics often focus on interaction alone, ignoring the quality of the visual elements.
  3. The frame is often ignored, even though a browser or windowing operating system is not a passive piece of paper. For example, most browsers can quickly search for a word in a text-oriented visualization via Ctrl-F, Cmd-F, or Find in the menu bar, and some systems can easily save or print an interesting visual state to share with others.


Ideas

  1. Perhaps a different set of heuristics is needed for static information visualizations than for visual analytics systems; the lack of interaction in static visualizations creates different needs and expectations.
  2. Related to Observation 1: heuristics can often result in biased evaluations. However, they might be especially useful for visual analytics systems, in that they work around the limitations of other evaluation methods, particularly if performed not only by multiple usability experts but also by the analysts themselves.


Suggestions

  1. Adapt heuristics from other domains to meet the needs of visual analytics systems.
  2. What if the taxonomy described in Heer & Shneiderman (2012) were used? That is, evaluate how well the system supports (and meets minimum requirements for) each of the following functions:
    a) Visualize
    b) Filter
    c) Sort
    d) Derive
    e) Select
    f) Navigate
    g) Coordinate
    h) Organize
    i) Record
    j) Annotate
    k) Share
    l) Guide
  3. Any set of heuristics that focuses on interaction can be augmented with guidelines for evaluating the visual elements.
  4. A thoughtful list from Forsell & Johansson (2010):

    B5: Information coding. Perception of information is directly dependent on the mapping of data elements to visual objects. This should be enhanced by using realistic characteristics/techniques or the use of additional symbols.

    E7: Minimal actions. Concerns workload with respect to the number of actions necessary to accomplish a goal or a task.

    E11: Flexibility. Flexibility is reflected in the number of possible ways of achieving a given goal. It refers to the means available to customization in order to take into account working strategies, habits and task requirements.

    B7: Orientation and help. Functions like support to control levels of details, redo/undo of actions and representing additional information.

    B3: Spatial organization. Concerns users’ orientation in the information space, the distribution of elements in the layout, precision and legibility, efficiency in space usage and distortion of visual elements.

    E16: Consistency. Refers to the way design choices are maintained in similar contexts, and are different when applied to different contexts.

    C6: Recognition rather than recall. The user should not have to memorize a lot of information to carry out tasks.

    E1: Prompting. Refers to all means that help the user know all alternatives when several actions are possible, depending on the context.

    D10: Remove the extraneous. Concerns whether any extra information can be a distraction and take the eye away from seeing the data or making comparisons.

    B9: Data set reduction. Concerns provided features for reducing a data set, their efficiency and ease of use.
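To make Suggestion 2 concrete, here is a minimal sketch of recording a heuristic evaluation against the Heer & Shneiderman taxonomy. Everything here is an illustrative assumption of mine, not from the papers: the `Evaluation` class, the 0–4 severity scale (borrowed from common usability-inspection practice), and the example ratings.

```python
from dataclasses import dataclass, field

# The 12 interactive dynamics from Heer & Shneiderman (2012).
TAXONOMY = [
    "visualize", "filter", "sort", "derive", "select", "navigate",
    "coordinate", "organize", "record", "annotate", "share", "guide",
]

@dataclass
class Evaluation:
    """Collects per-heuristic severity ratings (0 = no problem, 4 = severe).

    Hypothetical helper for illustration only; the scale and API are assumptions.
    """
    scores: dict = field(default_factory=dict)

    def rate(self, heuristic: str, severity: int, note: str = "") -> None:
        # Reject heuristics outside the taxonomy and out-of-range severities.
        if heuristic not in TAXONOMY:
            raise ValueError(f"unknown heuristic: {heuristic}")
        if not 0 <= severity <= 4:
            raise ValueError("severity must be between 0 and 4")
        self.scores[heuristic] = (severity, note)

    def worst(self):
        """Return rated heuristics sorted by descending severity."""
        return sorted(self.scores.items(), key=lambda kv: -kv[1][0])

# Example: two hypothetical findings for some system under review.
ev = Evaluation()
ev.rate("filter", 3, "no dynamic query widgets")
ev.rate("share", 1, "export limited to PNG")
print(ev.worst()[0][0])  # prints "filter", the most severe problem area
```

A per-heuristic record like this also makes it easy to merge ratings from multiple evaluators, which Idea 2 above suggests is where heuristic evaluation of visual analytics systems pays off.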

Related Resources

  1. DesJardins, M., Bulka, B., Carr, R., Hunt, A., Rathod, P., & Rheingans, P. (2006). Heuristic search and information visualization methods for school redistricting. Proceedings of the 18th conference on Innovative applications of artificial intelligence - Volume 2, IAAI’06 (pp. 1774–1781). AAAI Press.
  2. Forsell, C., & Johansson, J. (2010). An heuristic set for evaluation in information visualization. Proceedings of the International Conference on Advanced Visual Interfaces, AVI ’10 (pp. 199–206). New York, NY, USA: ACM. doi:10.1145/1842993.1843029.
  3. Heer, J., & Shneiderman, B. (2012). Interactive Dynamics for Visual Analysis. Queue, 10(2), 30:30–30:55. doi:10.1145/2133416.2146416.
  4. O’Connell, T. A., & Choong, Y.-Y. (2008). Metrics for measuring human interaction with interactive visualizations for information analysis. Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, CHI ’08 (pp. 1493–1496). New York, NY, USA: ACM. doi:10.1145/1357054.1357287.
  5. Plaisant, C. (2004). The challenge of information visualization evaluation. Proceedings of the working conference on Advanced visual interfaces, AVI ’04 (pp. 109–116). New York, NY, USA: ACM. doi:10.1145/989863.989880.
  6. Shneiderman, B., & Plaisant, C. (2006). Strategies for evaluating information visualization tools: multi-dimensional in-depth long-term case studies. Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization, BELIV ’06 (pp. 1–7). New York, NY, USA: ACM. doi:10.1145/1168149.1168158.
  7. Zuk, T., Schlesier, L., Neumann, P., Hancock, M. S., & Carpendale, S. (2006). Heuristics for information visualization evaluation. Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization, BELIV ’06 (pp. 1–6). New York, NY, USA: ACM. doi:10.1145/1168149.1168162.