I’m starting to recognize several clusters of data visualization books. These include:
- how-to and best-practices books like Stephen Kosslyn’s Graph Design for the Eye and Mind [my notes] or Stephen Few’s Now You See It
- academic books like Colin Ware’s Information Visualization: Perception for Design or Leland Wilkinson’s The Grammar of Graphics [my notes]
- a category exemplified by Edward Tufte’s The Visual Display of Quantitative Information: lots of concrete examples, but with abstract commentary rather than how-to advice; a book you wouldn’t mind setting on your coffee table for the pretty pictures, though with far more content than what a “coffee-table book” usually connotes
(Of course this list calls out for a flowchart or something to visualize it!)
Howard Wainer’s Visual Revelations falls in this last category. And it’s no surprise Wainer’s book emulates Tufte’s, given how often the author refers back to Tufte’s work (including comments like “As Edward Tufte told me once…”). And The Visual Display of Quantitative Information is still probably the best introduction to the genre. But Visual Revelations is different enough to be a worthwhile read too if you enjoy such books, as I do.
Most of all, I appreciated that Wainer presents many bad graph examples found “in the wild” and follows them with improvements of his own. Not all are successful, but even so I find this approach very helpful for learning to critique and improve my own graphics. (Tufte’s classic book critiques plenty, but spends less time on before-and-after redesigns. On the other hand, Kosslyn’s book is full of redesigns, but his “before” graphs are largely made up by him to illustrate a specific point, rather than real graphics created by someone else.)
Of course, Wainer covers the classics like John Snow’s cholera map and Minard’s plot of Napoleon’s march on Russia (well-trodden by now, but perhaps less so in 1997?). But I was pleased to find some fascinating new-to-me graphics. In particular, the Mann Gulch Fire section (p. 65-68) gave me shivers: it’s not a flashy graphic, but it tells a terrifying story and tells it well.
[Edit: I should point out that Snow's and Minard's plots are so well-known today largely thanks to Wainer's own efforts. I also meant to mention that Wainer is the man who helped bring into print an English translation of Jacques Bertin's seminal Semiology of Graphics and a replica volume of William Playfair's Commercial and Political Atlas and Statistical Breviary. He has done amazing work at unearthing and popularizing many lost gems of historical data visualization!
See also Alberto Cairo's review of a more recent Wainer book.]
Finally, Wainer’s tone overall is also much lighter and more humorous than Tufte’s. His first section gives detailed advice on how to make a bad graph, for example. I enjoyed Wainer’s jokes, though some might prefer more gravitas.
Below are my notes-to-self, with things-to-follow-up in bold:
- p. 11: “When looking at a good graph, your response should never be ‘what a great graph!’ but ‘what interesting data!'” It’s a matter of taste and context, but my personal interests align with Wainer’s here. I’m currently much less interested in artsy visualizations that do not aid understanding; I’m reminded of one recently highlighted on FlowingData with the comment, “I can’t say how accurate it is or if the described mechanisms are accurate, but it sure is fun to play with.”
- p. 43: “after more than two hundred practice exercises with [bivariate choropleth] maps, graduate students in perception at Johns Hopkins University were unable to internalize the legend.” Read the study: Wainer and Francolini (1980)
- p. 47: Sandy Zabell used graphs to highlight “inconsistencies, clerical errors, and a remarkable amount of other information” that earlier researchers had missed in the London Bills of Mortality. I’d love to find these graphs: Zabell, 1976, “Arbuthnot, Heberden and the Bills of Mortality,” Technical Report #40, Department of Statistics, University of Chicago.
- p. 47: data graphics were uncommon, even in scientific journals, before William Playfair — but how & when did journals start including graphics?
- p. 52: the famous O-ring example is a case of plotting the wrong data for the question at hand. In the plot used for decision-making, they showed failures vs. temperature only for those space shuttle flights that had failures, omitting the flights with none. That is, if y is the number of failures and t is the temperature, they plotted (y vs. t | y > 0) rather than all y vs. t. Hence, they had a distorted view of how y depends on t. Perhaps a related idea is key to Wald’s study of armoring airplanes (p. 58): consider not just the cases where you’ve observed the event of interest, but also the cases where you haven’t.
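This selection effect is easy to see in a simulation. The sketch below uses made-up flight data (not the book’s, and not the real shuttle record): failure counts depend strongly on temperature, but conditioning on “at least one failure” washes out most of the apparent relationship.

```python
# Sketch (hypothetical data): plotting failures vs. temperature only for
# flights that HAD failures distorts the apparent relationship.
import math
import random

random.seed(0)

def corr(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

temps, fails = [], []
for _ in range(20000):
    t = random.uniform(50, 82)              # launch temperature (F)
    lam = math.exp(4.0 - 0.1 * t)           # colder -> more expected failures
    # each of 6 O-rings fails independently with small probability
    y = sum(random.random() < lam / 6 for _ in range(6))
    temps.append(t)
    fails.append(y)

all_corr = corr(temps, fails)               # full data: clear negative trend
sub = [(t, y) for t, y in zip(temps, fails) if y > 0]
sub_corr = corr([t for t, _ in sub], [y for _, y in sub])
print(all_corr, sub_corr)                   # conditional view is much weaker
```

With all flights included the correlation is strongly negative; restricted to failure flights it nearly vanishes, which is roughly the view the decision-makers had.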
- p. 55: “Good graphs can make difficult problems trivial.” For a great example, see the inclined-plane question on p. 71-72, which can be answered either with trigonometry and calculus… or at a glance with the right graph. Also related to Colin Ware’s focus on “external cognition”: how resources outside the mind can be used to boost the mind.
- p. 80: “a reasonable strategy in what ought to be an iterative process. Sometimes one has a data-related question and then draws a graph to try to answer it. After drawing the graph a new question might suggest itself, and hence a different graph, better suited to this new question (perhaps with additional data), is drawn. This in turn suggests something else, and so on, until either the data or the grapher is exhausted. [...] My experience suggests that if you begin with a general-purpose plot there is a greater chance of finding what you had not expected.” This is my experience as well, and reminds me also of Hadley Wickham’s description of statistics as iterating between models and graphics.
- p. 84: Futurism that actually came true, for once! “Indeed, it is easy to imagine a general-purpose device that might have (among many other things) all of the Los Angeles bus routes inside [...] I see no reason why Streetmap™-like software won’t become available eventually for cheap pocket computers of the sort now called ‘personal organizers.'”
- p. 93-94: examples of misuse of double y-axes, and a comment that it would only be okay if “the same dependent variable can be represented in a transformed way. For example, plot log of per pupil expenditures on the left and per pupil expenditures on the right, the latter spaced to match the left-hand scale [...] Ironically, no graphics package I know of allows this latter use to be done easily, whereas the misuse is often a touted option.”
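As it happens, the “legitimate” double axis Wainer wished for is now easy in at least one package: matplotlib’s secondary_yaxis takes a forward/inverse transform pair, so the right-hand raw scale is spaced to match the left-hand log scale. A minimal sketch with hypothetical expenditure data (not the book’s):

```python
# One dependent variable shown twice: log scale on the left, raw dollars on
# the right, with right-hand ticks spaced by the same log transform.
# (Illustrative data, not from the book.)
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

years = np.arange(1980, 2000)
spending = 2000 * 1.07 ** (years - 1980)   # hypothetical per-pupil dollars

fig, ax = plt.subplots()
ax.plot(years, np.log10(spending))
ax.set_ylabel("log10(per-pupil expenditure)")

# Right-hand axis: raw dollars, positioned by the log spacing of the left.
secax = ax.secondary_yaxis("right", functions=(lambda v: 10 ** v, np.log10))
secax.set_ylabel("per-pupil expenditure ($)")
fig.savefig("double_axis.png")
```

The functions pair (forward, inverse) is what keeps the two scales in lockstep; this is exactly the transformed-rescaling Wainer describes, rather than the usual two-independent-variables misuse.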
- p. 97: Wainer really wants us to round the data for presentation: readers rarely comprehend more than 2 digits easily, statisticians can rarely justify more than 2 digits of precision, and more than 2 digits are rarely of practical use.
I love this part: “The standard error of any statistic is proportional to one over the square root of the sample size. God did this, and there is nothing we can do to change it.” (Say you print 2 digits of a correlation. That implies its standard error must be less than 0.005, which requires a sample size on the order of 40,000 — do you really have that much data?)
And then on p. 99: “Round the numbers, and if you must, insert a footnote proclaiming that the unrounded details are available from the author. Then sit back and wait for the deluge of requests.”
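The arithmetic behind that 40,000 is quick to check, assuming SE(r) ≈ 1/√n for a correlation near zero:

```python
# Trusting the 2nd decimal of a reported correlation means its standard
# error should be below half the last reported digit, i.e. 0.005.
# Assumes SE(r) ~= 1/sqrt(n), reasonable for a correlation near zero.
target_se = 0.005
n_needed = round((1 / target_se) ** 2)   # SE = 1/sqrt(n)  =>  n = 1/SE^2
print(n_needed)  # 40000
```

A third digit would need half-a-digit precision of 0.0005, i.e. a sample on the order of four million.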
- p. 101: nice example of spacing rows of a table by the values of one column, showing clusters in the data.
- Ch. 11 and 12: he argues in favor of Nightingale roses and trilinear plots, but I don’t find them of much use, except maybe the example on p. 116.
- p. 111: people have been complaining about the size and complexity of big data for centuries! William Playfair’s classic 1786 Commercial and Political Atlas was a response to these kinds of concerns.
- p. 121-123: I love these implicit graphs or nomographs, explicitly making handy tools out of data graphics. Jonathan Rougier has an example of using nomograms to turn a predictive statistical model into something easily used in the field by non-math-savvy folks.
- p. 128: Besides graphics, Wainer has a strong interest in education and standardized testing: “Basing a characterization of an examinee’s ability to understand graphical displays on a question paired with a flawed display is akin to characterizing someone’s ability to read by asking questions about a passage full of spelling and grammatical errors. What are we really testing?”
- p. 138: great back-to-back stem and leaf plot, instead of an unhelpful table, for comparing test scores in US states vs. international countries.
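A back-to-back stem-and-leaf display like this takes only a few lines to mock up. A sketch with made-up scores (not the book’s data): one group’s leaves grow leftward, the other’s rightward, sharing the stems in the middle.

```python
# Back-to-back stem-and-leaf display: two groups of scores sharing stems.
# (Hypothetical scores, not the book's data.)
from collections import defaultdict

us_states = [78, 72, 65, 61, 58, 55, 54, 47, 43]
countries = [81, 76, 74, 68, 63, 62, 57, 49]

def leaves(scores):
    """Map each stem (tens digit) to its sorted leaves (ones digits)."""
    d = defaultdict(list)
    for s in sorted(scores):
        d[s // 10].append(s % 10)
    return d

L, R = leaves(us_states), leaves(countries)
for stem in sorted(set(L) | set(R)):
    left = "".join(str(x) for x in reversed(L[stem]))  # leaves grow leftward
    right = "".join(str(x) for x in R[stem])
    print(f"{left:>8} | {stem} | {right}")
```

Each printed row is one ten-point bin, so the two distributions can be compared at a glance where a table of numbers would hide the shape.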
- p. 147, 149: I’m not too pleased with either Cleveland’s clean-but-boring computer-defaults plot or with Wainer’s cheesy Playfair-style remake. This is where I and many other statisticians feel a huge gap in our data-graphics skillset: once you’re happy with the content and inherent form of your graph, how do you make it look nice too, without being either bland or tacky?
- Ch. 20: good advice on making readable slides, still aimed at overhead transparencies but largely applicable to PowerPoint etc. too. “If you can’t read it when you are against the back wall, either redo the ineffectual overheads or have as many of the back rows of chairs removed as necessary.”
And of course limit the number of fonts, colors, significant digits, and equations in your talk.