Failure to reject anole hypothesis

-Is that a gecko?
-No, I think it’s an anole.
-You sure?
-I’m 95% confident…

Evidence in favor of anole hypothesis

I am writing this first blog post while at the 2011 Joint Statistical Meetings (JSM) in Miami, FL. When several thousand statisticians converge on Miami, dizzy from the heat and observing the tiny lizards that make up the local wildlife, you may overhear remarks like the title of this post 🙂

Several statistics blogs are already covering JSM events day by day, including posts on The Statistics Forum by Julien Cornebise and Christian Robert:
http://statisticsforum.wordpress.com/2011/08/01/jsm-first-sessions-day/
http://statisticsforum.wordpress.com/2011/08/02/jsm-impressions-day-1/
as well as the Twitter feed of the American Statistical Association:
http://twitter.com/#!/AmstatNews

I have attended presentations by many respected statisticians. For instance, Andrew Gelman, author of several popular textbooks and blogs, gave a talk pitting mathematical models against statistical models, claiming that mathematicians are more qualitative than we quantitative statisticians are. In his view, it’s better to predict a quantitative, continuous value (the incumbent’s share of the popular vote) than a qualitative, binary outcome (who won the election). Pop scientists should stop trying to provide “the” reason for an outcome and instead explore how all of the relevant factors play together. I particularly liked Dr Gelman’s view that statistics is to math as engineering is to physics.
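
To make the continuous-vs-binary contrast concrete, here is a minimal sketch of my own (not from the talk; all numbers and variable names are invented): a model fit to vote share recovers the size of the effect, while the win/lose version of the same data cannot distinguish a squeaker from a landslide.

```python
# Toy illustration (my own, all numbers invented) of why a continuous
# outcome is more informative than its binary version.
import numpy as np

rng = np.random.default_rng(0)
n = 50
growth = rng.normal(2.0, 1.5, n)                      # hypothetical economic growth (%)
vote_share = 50 + 2.5 * growth + rng.normal(0, 3, n)  # incumbent's share of the vote (%)
won = vote_share > 50                                 # the binary version of the outcome

# A least-squares fit to the continuous outcome estimates the effect's size...
X = np.column_stack([np.ones(n), growth])
intercept, slope = np.linalg.lstsq(X, vote_share, rcond=None)[0]
print(f"vote_share ~ {intercept:.1f} + {slope:.2f} * growth")

# ...while the binary outcome collapses a 50.1% squeaker and a 65% landslide
# into the same "win", discarding most of the information.
print(f"incumbent won {won.sum()} of {n} hypothetical elections")
```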

In another talk, Sir David Cox (of the Cox proportional hazards model) encouraged us to provide a unified view of statistics in terms of common objectives, rather than fragmenting into over-specialized tribes. He noted that a physicist whose experiment disagrees with F=ma will immediately say they must have missed something in the data; a statistician would just shrug that the model is an approximation… but in some cases, we really should look to see if there are additional variables we have missed. Sir David’s wide-ranging talk also covered a great example on badger culling; the fact that many traditional hypothesis testing problems are better framed as estimation (we already know the effect is not 0, so what is it?); and a summary of 2 faces of frequentism and 7 faces of Bayesianism, all of which “work” but some of which are philosophically inconsistent with one another. He believes the basic division should not be between Bayesians and frequentists, but between letting the data speak for itself (leaving interpretation for later) vs. more ambitiously integrating interpretation into the estimation itself. (An audience member thanked him for giving Bayesians faces, but noted that many are more interested in their posteriors.) Finally, he emphasized that the core of statistical theory is about concepts, and that the mathematics is just part of the implementation.
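
Cox’s testing-vs-estimation point is easy to demonstrate. Here is a small sketch of my own (simulated data, not from the talk): instead of only testing whether an effect is exactly zero, report an interval estimate of its size.

```python
# Minimal sketch (simulated data, my own example): reframing a hypothesis
# test as estimation. If we already believe the effect isn't exactly 0,
# the more useful question is "how big is it?"
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effect = rng.normal(0.4, 1.0, 100)   # hypothetical paired differences

# Testing framing: is the effect exactly zero?
_, p_value = stats.ttest_1samp(effect, popmean=0.0)
print(f"p-value for H0: effect = 0  ->  {p_value:.4f}")

# Estimation framing: what is the effect, and how precisely do we know it?
mean = effect.mean()
lo, hi = stats.t.interval(0.95, df=len(effect) - 1, loc=mean, scale=stats.sem(effect))
print(f"estimated effect: {mean:.2f}  (95% CI: {lo:.2f} to {hi:.2f})")
```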

ASA president Nancy Geller gave an inspiring address calling on statisticians to participate more and “make our contributions clear and widely known.” She asked us to stand against the “assumption that statistics are merely a tool to be used by the consultant to get a result, rather than an intrinsic part of the creative process.” The following awards ceremony was mishandled by an overzealous audio engineer who played booming dramatic music over the presentation… I suspect this would be perfect for sales awards at a business conference but was off the mark for recognizing statisticians, who seem less comfortable with self-promotion — though I suppose it was fitting that this followed Dr Geller’s recommendations 😛

I’ve appreciated seeing other people whose names I recognized, including Leland Wilkinson, Dianne Cook, and Jim Berger. These annual meetings are also a great opportunity to meet respected experts in your subfield face to face. In my case, working in Small Area Estimation, this has included meeting JNK Rao, author of the field’s standard textbook; saying hello to Bob Fay, of the pervasive Fay-Herriot model; and being heckled by Avi Singh and Danny Pfeffermann during my own presentation 🙂

Lastly, I also enjoyed meeting with the board of Statistics Without Borders, whose website I help to run as co-chair:
http://community.amstat.org/statisticswithoutborders/home/
SWB is always looking for volunteers, and I will discuss our work further in another post. For now, I will point out that former co-chair Jim Cochran was honored as an ASA Fellow tonight, partly in recognition of his work with SWB.

Next up is the JSM Dance Party. Statisticians are, shall we politely say, remarkable dancers… I have seen some truly unforgettable dance skills at past JSM dance parties and I hope to see some more tonight!

Edit: See comments for Cox’s “faces” of frequentism and Bayesianism.

2 thoughts on “Failure to reject anole hypothesis”

  1. Yay to dancing at scientific conferences. Sounds like Cox is saying what everyone is already thinking, though I’d like to know what his 9 faces were.

  2. Thanks for the comment! The 2 faces of frequentism seemed to be, as Christian Robert’s blog summarized them, “long-term validation versus calibration”.
    Here are the notes I scrawled down:
    1) Rules of behavior have specified long-run properties (Neyman?); “A statistician is someone who goes through life trying to make sure exactly 5% of what he or she does is wrong.” (A quick simulation of this appears below.)
    2) Calibrating procedures in hypothetical repeated use, relevant to the particular data under analysis (Fisher?)
    (At the moment I’m not quite following his distinction between long-run properties vs. hypothetical repeated use.)
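
    Face #1’s “exactly 5% wrong” quip is easy to check by simulation. Here is a quick sketch of my own (not Cox’s): when the null hypothesis is true, a level-0.05 test rejects about 5% of the time in the long run.

    ```python
    # Quick simulation (my own, not Cox's) of the long-run guarantee behind
    # "exactly 5% of what he or she does is wrong": test a true null many
    # times and count how often a level-0.05 t-test rejects it.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    trials, rejections = 10_000, 0
    for _ in range(trials):
        sample = rng.normal(0.0, 1.0, size=30)   # the null (mean = 0) is true
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        rejections += p < 0.05
    print(f"rejected {rejections / trials:.3f} of true nulls (long-run target: 0.05)")
    ```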

    Cox’s 7 faces of Bayes mostly seemed to be a list of ways that people may interpret/justify priors:
    1) The prior represents frequencies over an ill-specified set of semi-similar applications (Edgeworth, K. Pearson)
    2) Empirical Bayes (uncontroversial)
    3) Flat prior represents ignorance (Laplace, Jeffreys) and lets the data speak for itself
    4) Elaborations of #3 by Jaynes, Bernardo, Berger on reference priors so that what you get out is a property of the data alone
    5) The prior encodes info from expert’s knowledge
    6) Personalized use of priors for individual decision making (Ramsey, de Finetti, Savage)
    7) The prior is just a way to get good frequentist properties (sometimes)

    #6 is clearly a hard sell for science or public policy.
    For #5, which experts? It may be fine if the experts agree, but when different experts each have strongly held diverging views, choosing one expert to give a prior is the kind of thing we want to avoid.
    #3, 4, and 7 have the same objective: what can these data tell us?
    #5 and 6 try to merge information from the data with other views.
    Some of these priors don’t have a clear meaning or implications. At this point Cox said something like, “If you multiply something nebulous by something clear, is the answer clear or nebulous?”
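
    To see faces #3 and #5 side by side, here is a toy conjugate example of my own (all numbers invented): a flat Beta(1,1) prior leaves the posterior to the data alone, while a strongly held “expert” prior pulls the answer toward the expert’s view.

    ```python
    # Toy illustration (my own, all numbers invented) of faces #3 vs #5.
    # Binomial data with a conjugate Beta(a, b) prior gives a
    # Beta(a + successes, b + failures) posterior.
    from scipy import stats

    successes, failures = 7, 3   # hypothetical data: 7 successes in 10 trials

    # Face #3: a flat Beta(1,1) prior "lets the data speak for itself"
    flat_post = stats.beta(1 + successes, 1 + failures)

    # Face #5: an expert who firmly believes the rate is near 0.2 (Beta(4,16))
    expert_post = stats.beta(4 + successes, 16 + failures)

    print(f"flat prior:   posterior mean = {flat_post.mean():.2f}")   # ~0.67
    print(f"expert prior: posterior mean = {expert_post.mean():.2f}") # ~0.37
    ```
    The same 10 observations give the two analysts quite different answers, which is exactly the “which experts?” worry above.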

    Again, I think his point was that the division between frequentists and Bayesians might not be as important as the one between (a) people who separate the estimation and analysis from the interpretation, and (b) people who are more ambitious and make interpretation part of the estimation (like Bayesian faces #5 and #6, and perhaps frequentist face #2).
