In much of Bertin's <i>Semiology of Graphics</i>, marks representing individuals are arranged on paper according to their
various attributes (components). Paper and computer monitors can conveniently map two attributes to width and height,
and can map other attributes into nonspatial dimensions such as texture or colour. Good visualizations exploit the
human perceptual apparatus so that key relationships are quickly detected as interesting patterns.
Graphical models take a somewhat dual approach with respect to the original information. Components, rather than
individuals, are represented as marks. Links between marks represent conceptually simple, easily computable, and
typically probabilistic relationships of possibly varying strength, and the viewer studies the diagram to discover deeper
relationships. Although visually annotated graphical models have been around for almost a century, they have not been
widely used. We argue that they have the potential to represent multivariate data as generically as pie charts represent
univariate data. The present work suggests a semiology for graphical models and discusses the consequences for

Recent data mining techniques exploit patterns of statistical independence in multivariate data to make conjectures about cause/effect relationships. These relationships can be used to construct causal graphs, which are sometimes represented as weighted node-link diagrams, with nodes representing variables and combinations of weighted links and/or nodes showing the strength of causal relationships.

We present an interactive visualization for causal graphs (ICGs), inspired in part by the Influence Explorer. The key principles of this visualization are as follows. Variables are represented as vertical bars attached to nodes in a graph. Direct manipulation of a variable is achieved by sliding its value up and down, which reveals causality by producing instantaneous change in causally and/or probabilistically linked variables. This direct manipulation technique gives users the impression that they are causally influencing the variables linked to the one they are manipulating. In this context, we demonstrate the subtle distinction between <i>seeing</i> and <i>setting</i> variable values, and, in an extended example, show how this visualization can help a user understand the relationships in a large variable set and, with some intuition about the domain and a few basic concepts, quickly detect bugs in causal models constructed by these data mining techniques.
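The <i>setting</i> half of the seeing/setting distinction can be sketched as an intervention on a toy causal chain. This is a minimal illustration, not the paper's implementation: the names (<code>CausalModel</code>, <code>set_value</code>), the rain/wet-ground/slippery chain, and the linear edge weights are all invented for the example. Setting a variable clamps it and propagates change only downstream along causal links; merely <i>seeing</i> a value would instead call for probabilistic inference (which could also update beliefs about upstream variables), and that machinery is deliberately omitted here.

```python
# Illustrative sketch of "setting" (intervention) in a linear causal chain.
# All names and weights are hypothetical, not taken from the original work.

class CausalModel:
    def __init__(self):
        self.parents = {}  # child -> list of (parent, weight)
        self.values = {}   # current value of each variable
        self.order = []    # variables in topological (causal) order

    def add_variable(self, name, parents=()):
        self.parents[name] = list(parents)
        self.order.append(name)
        self.values[name] = 0.0

    def _recompute(self, clamped):
        # Propagate values top-down along causal links; a clamped
        # (manipulated) variable keeps its value, so its incoming
        # edges are effectively cut -- the hallmark of an intervention.
        for name in self.order:
            if name in clamped:
                continue
            ps = self.parents[name]
            if ps:
                self.values[name] = sum(w * self.values[p] for p, w in ps)

    def set_value(self, name, value):
        # "Setting": downstream variables change instantly,
        # upstream variables are untouched.
        self.values[name] = value
        self._recompute(clamped={name})
        return dict(self.values)

# Toy chain: rain -> wet_ground -> slippery
m = CausalModel()
m.add_variable("rain")
m.add_variable("wet_ground", parents=[("rain", 1.0)])
m.add_variable("slippery", parents=[("wet_ground", 0.8)])

# Setting "rain" propagates downstream to both linked variables.
after = m.set_value("rain", 1.0)

# Setting "wet_ground" changes "slippery" but leaves "rain" untouched:
# manipulating a variable does not alter its causes.
after2 = m.set_value("wet_ground", 0.0)
```

Sliding a bar in the visualization corresponds to repeated calls to <code>set_value</code>: the instantaneous downstream updates, and the absence of upstream ones, are exactly what lets a user distinguish causes from effects by manipulation.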