Gerard van Belle’s Statistical Rules of Thumb has piqued my curiosity at conferences. It turns out my work library has a copy, which has been fun to skim, or should I say, to thumb through.
The book’s examples focus largely on medical and environmental studies, but most of the book does apply to statistics in general.
The book starts off with good “rules of thumb” in the sense of quick calculations, e.g. for the approximate sample size you’d need to get suitably precise estimates in several common situations. But van Belle also offers more general good advice, such as which model to start with: when to use a Normal vs. Exponential vs. Poisson distribution as your initial model, and so on.
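To give a flavor of these quick sample-size calculations, here is a minimal sketch of a standard one (the textbook normal-approximation formula for estimating a mean to a given precision, not necessarily one of van Belle’s specific rules; the function name and example numbers are mine):

```python
import math

def n_for_mean(sigma, d, z=1.96):
    """Sample size so that a 95% CI for the mean has half-width
    about d, assuming roughly Normal data with std. deviation sigma."""
    return math.ceil((z * sigma / d) ** 2)

# e.g. IQ-scale data (sigma ~ 15), estimated to within +/- 3 points:
print(n_for_mean(sigma=15, d=3))  # 97
```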
Some of my favorite pithy or self-explanatory “rules”:
- 1.9: “Use p-values to determine sample size, confidence intervals to report results”
- 3.3: “Do not correlate rates or ratios indiscriminately”
  i.e. if X, Y, and Z are mutually independent, then X/Z and Y/Z will show spurious correlation
- 5.8: “Distinguish between variability and uncertainty”
  i.e. “reduce uncertainty but account for variability”
- 5.13: “Distinguish between confidence, prediction, and tolerance intervals”
- 6.2 “Blocking is the key to reducing variability”
- 6.6 “Analysis follows design”
  i.e. the possible analyses will depend on how the randomization was done
- 6.11: “Plan for missing data”
  i.e. be explicit about how you intend to deal with it
- 6.12: “Address multiple comparisons before starting the study”
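Rule 3.3’s spurious-correlation effect is easy to demonstrate by simulation (a minimal sketch; the variable names and distribution choices are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# X, Y, Z mutually independent, so X and Y are genuinely uncorrelated
x = rng.normal(10, 1, n)
y = rng.normal(10, 1, n)
z = rng.normal(10, 1, n)

r_xy = np.corrcoef(x, y)[0, 1]             # near zero, as expected
r_ratio = np.corrcoef(x / z, y / z)[0, 1]  # clearly positive: the shared
                                           # denominator induces correlation
print(f"corr(X, Y)     = {r_xy:.3f}")
print(f"corr(X/Z, Y/Z) = {r_ratio:.3f}")
```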
Longer notes:
- In rule 1.14, van Belle mentions Neyman-Pearson methods and Likelihood methods as two distinct schools of statistical inference (of three, along with Bayesian). This surprised me since I had not heard of Likelihood as its own philosophy and would have assumed it’s lumped together with Neyman-Pearson as the Frequentist approach. However, van Belle points readers to Michael Oakes’ Statistical Inference, which I’m consequently reading as well. It turns out when Oakes wrote the book in 1986 he held out hope for a Likelihood school of inference, indeed distinct from Neyman-Pearson or Bayesian thought. Apparently the book to read is A.W.F. Edwards’ Likelihood, but I don’t know of many people citing this influence nowadays, unless (as an Amazon reviewer suggests) it has become the basis for Art Owen’s work on empirical likelihood. I’ll need to read up more on this.
- Bland-Altman plots are one visual way to assess the agreement between two different measurements or raters on the same set of units. For each measurement pair, plot their mean on the x-axis and their difference on the y-axis, so you can see whether the differences are systematically related to the mean measure/rating. For example, if the differences get far from zero at low and high x-values, maybe your instrument is only calibrated well for the mid-range values.
Tie-Hua Ng presented an extension to comparing three methods/raters at the 2012 JSM conference.
- Although medians are more robust than means, they have a drawback: means are more directly related to the population total, which is often of inherent interest.
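The Bland-Altman construction above boils down to a few lines. A minimal sketch with made-up paired measurements (data and names are mine); in practice you’d scatter-plot `mean` against `diff` and draw horizontal lines at the bias and the limits of agreement:

```python
import numpy as np

# Paired measurements of the same six units by two methods (made-up data)
a = np.array([5.1, 6.3, 7.0, 8.2, 9.5, 10.1])
b = np.array([5.0, 6.6, 7.4, 8.1, 9.9, 10.8])

mean = (a + b) / 2             # x-axis: average of each pair
diff = a - b                   # y-axis: difference between the methods
bias = diff.mean()             # systematic offset between the methods
loa = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
print(f"bias = {bias:.2f}, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
```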
- Peter Sandman’s article on “Mass Media and Environmental Risk: Seven Principles” is worthwhile reading on how the media use and interpret statistics. van Belle’s summary: “the media emphasizes outrage rather than risk, blame rather than hazard, and fear rather than objectivity.”
- Remember to arrange tables in a useful way: if a table is meant for insight rather than lookup, don’t alphabetize; sort by a meaningful variable instead. And strip out unwarranted precision in the significant digits.
- van Belle insists we should never ever use pie charts 🙂 (although Hadley Wickham told us otherwise at a recent R Meetup) and suggests we try to think up alternatives to bar charts whenever possible.
- Sir David Cox has his own list of rules under “Some remarks on consulting” (p. 28-30)… Bonus for the Rosetta-Stone-like bilingual format!
I’m especially fond of the merciful rule #19: “If more than ten per cent of what you do ends up by being directly useful, you are doing well.”