The Testimator: Significance Day

A few more thoughts on JSM, from the Wednesday sessions:

I enjoyed the discussion on the US Supreme Court’s ruling regarding statistical significance. Some more details of the case are here.
In short, the company Matrixx claimed it did not need to tell investors about certain safety reports, since those results did not reach statistical significance. Matrixx essentially argued for a “bright line rule” that only statistically significant results need to be reported.
However, the Supreme Court ruled against this view. All of the discussants seemed to agree that the Court made the right call: statistical significance is not irrelevant, but we have to consider “the totality of the evidence.” That’s good advice for us all, in any context!

In particular, Jay Kadane and Don Rubin did not prepare slides and simply spoke (and spoke well), which was a nice change of presentation style from most other sessions. Rubin brought up the fact that the p-value is not a property solely of the data, but also of the null hypothesis, the test statistic, the covariate selection, and so on. So even if the court wanted a bright-line rule of this sort, how could it specify one in sufficient detail?
For that matter, while wider confidence intervals are more conservative when trying to show the superiority of one drug over another, there are safety settings where narrower confidence intervals are actually the more conservative ones, yet “everyone still screws it up.” And “nobody really knows how to do multiple comparisons right” for subgroup analyses checking whether a drug is safe in all subgroups. So p-values are no substitute for human judgment on the “totality of the evidence.”
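Rubin’s point that a p-value is a property of the data *and* the analyst’s choices can be seen in a quick sketch (my own illustration with made-up numbers, not an example from the talk): the very same two samples yield different p-values depending on which test statistic you choose.

```python
# Sketch: one dataset, several "the" p-values, depending on the chosen test.
# The data below are hypothetical adverse-event scores, invented for illustration.
from scipy import stats

treated = [4.1, 5.3, 6.0, 4.8, 7.2, 5.9, 6.5, 4.4]
control = [3.9, 4.2, 5.1, 4.0, 4.7, 4.3, 5.0, 3.8]

# Pooled-variance t-test, Welch's t-test, and a rank-based test
# all answer "is there a difference?" -- with different p-values.
p_t     = stats.ttest_ind(treated, control).pvalue
p_welch = stats.ttest_ind(treated, control, equal_var=False).pvalue
p_rank  = stats.mannwhitneyu(treated, control, alternative="two-sided").pvalue

print(p_t, p_welch, p_rank)
```

Any bright-line rule pegged to “the” p-value would have to legislate all of these choices (and covariate selection, subgroup definitions, etc.) in advance.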

I also enjoyed Rubin’s quote from Jerzy Neyman: “You’re getting misled by thinking that the mathematics is the statistics. It’s not.” This reminded me of David Cox’s earlier comments that statistics is about the concepts, not about the math. In the next session, Paul Velleman and Dick DeVeaux continued this theme by arguing that “statistics is science more than math.”
(I also love DeVeaux and Velleman’s 2008 Amstat News article on how “math is music; statistics is literature.” Of course, Andrew Gelman presented his own views on stats vs. math on Sunday; and Persi Diaconis talked about the need for conceptually-unifying theory, rather than math-ier theory, at JSM 2010. See also the recent discussion at The Statistics Forum. Clearly, defining “statistics” is a common theme lately!)

In any case, Velleman presented the popular telling of the history behind Student’s t-test, then proceeded to bust myths behind every major point in the story. Most of all, he argued that we commonly take the wrong lessons from it. Perhaps it is not Gosset’s final result (the t-test) that should be taught so much as the computationally-intensive method he used first, an approach that is far easier to carry out nowadays and may be more pedagogically valuable.
I’m also jealous of Gosset’s title at Guinness: “Head Experimental Brewer” would look great on a resume 🙂

After their talks, I went to the session honoring Joe Sedransk in order to hear Rod Little and Don Malec talk about topics closer to my work projects. Little made a point about “inferential schizophrenia”: if you use direct survey estimates for large areas, and model-based estimates for small areas, your entire estimation philosophy jumps drastically at the arbitrary dividing line between “large” and “small.” Wouldn’t it be better to use a Bayesian approach that transitions smoothly, closely approaching the direct estimates for large areas and the model estimates in small areas?
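Little’s smooth alternative can be sketched as a precision-weighted composite in the spirit of the Fay-Herriot model (my own toy illustration, not Little’s actual formulation; all numbers are hypothetical): the weight on the direct estimate grows continuously as its sampling variance shrinks, so there is no arbitrary large/small cutoff.

```python
# Sketch of a composite (shrinkage) estimate that transitions smoothly
# between direct and model-based estimates, instead of jumping at a
# "large area" vs "small area" dividing line.
def composite_estimate(direct, model, sampling_var, model_var):
    """Precision-weighted blend: as sampling_var -> 0 (a large area),
    the weight on the direct estimate approaches 1; as sampling_var
    grows (a small area), the estimate approaches the model estimate."""
    w = model_var / (model_var + sampling_var)
    return w * direct + (1 - w) * model

# Two hypothetical areas with the same direct and model estimates,
# differing only in how precisely the direct estimate was measured:
big_area   = composite_estimate(direct=10.0, model=8.0, sampling_var=0.1,  model_var=2.0)
small_area = composite_estimate(direct=10.0, model=8.0, sampling_var=50.0, model_var=2.0)
print(big_area, small_area)  # near the direct estimate vs near the model estimate
```

The single formula covers both regimes, which is exactly the “smooth transition” Little advocated over an abrupt philosophical switch.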
Pfeffermann and Rao commented afterwards that they don’t feel things are as “schizophrenic” as Little claims, but are glad that Bayesians are now okay with measuring the frequentist properties of their procedures (and Little claimed that Bayesian models can often end up with better frequentist properties than classical models).

In the afternoon, I sat in on Hadley Wickham’s talk about starting off statistics courses with graphical analysis. This less-intimidating approach lets beginners describe patterns right from the start.
He also commented that each new tool you introduce should be motivated by an actual problem where it’s needed: find an interesting question that is answered well by the new tool. In particular, when you combine a good dataset with an interesting question that’s well-answered by graphics, this gives students a good quick payoff for learning to program. Once they’re hooked, *then* you can move to the more abstract stuff.

Wickham grades students on their curiosity (what can we discover in this data?), skepticism (are we sure we’ve found a real pattern?), and organization (can we replicate and communicate this work well?). He provides practice drills to teach “muscle memory,” as well as many opportunities for mini-analyses to teach a good “disposition.”
This teaching philosophy reminds me a lot of Dan Meyer and Shawn Cornally’s approaches to teaching math (which I will post about separately sometime) (edit: which I have posted about elsewhere).
Wickham also collects interesting datasets, cleans them up, and posts them on GitHub along with his various R packages and tools, including the excellent ggplot2.

The last talks I attended (by Eric Slud and Ansu Chatterjee, on variance estimation) were also related to my work on small area modeling.
I was amused by the mixed metaphors in Chatterjee’s warning to “not use the bootstrap as a sledgehammer,” and Bob Fay’s discussion featured the excellent term “Testimator” 🙂
This reminds me that last year Fay presented on the National Crime Victimization Survey, and got a laugh from the audience for pointing out that, “From a sampling point of view, it’s a problem that crime has gone down.”

Overall, I enjoyed JSM (as always). I did miss a few things from past JSM years:
This year I did not visit the ASA Student Stat Bowl competition, and I’m a bit sad that as a non-student I can no longer compete and defend my 2nd place title… although that ranking may not have held up across repeated sampling anyway 😛
I was also sad that last year’s wonderful StatAid / Statistics Without Borders mixer could not be repeated this year due to lack of funding.
But JSM was still a great chance to meet distant friends and respected colleagues, get feedback on my research and new ideas on many topics, see what’s going on in the wider world of stats (there are textbooks on Music Data Mining now?!?), and explore another city.
(Okay, I didn’t see too much of Miami beyond Lincoln Rd,
but I loved that the bookstore was creatively named Books & Books …
and the empanadas at Charlotte Bakery were outstanding!)
I also appreciate that it was an impetus to start this blog — knock on wood that it keeps going.

I look forward to JSM 2012 in San Diego!

Failure to reject anole hypothesis

-Is that a gecko?
-No, I think it’s an anole.
-You sure?
-I’m 95% confident…

Evidence in favor of anole hypothesis

I am writing this first blog post while at the 2011 Joint Statistical Meetings conference, in Miami, FL. When several thousand statisticians converge on Miami, dizzy from the heat and observing the tiny lizards that make up the local wildlife, you may overhear some remarks like the title of this post 🙂

Several statistics blogs are already covering JSM events day by day, including posts on The Statistics Forum by Julien Cornebise and Christian Robert:
http://statisticsforum.wordpress.com/2011/08/01/jsm-first-sessions-day/
http://statisticsforum.wordpress.com/2011/08/02/jsm-impressions-day-1/
as well as the Twitter feed of the American Statistical Association:
http://twitter.com/#!/AmstatNews

I have attended presentations by many respected statisticians. For instance, Andrew Gelman, author of several popular textbooks and blogs, gave a talk pitting mathematical models against statistical models, claiming that mathematicians are more qualitative than we quantitative statisticians are. In his view, it’s better to predict a quantitative, continuous value (the incumbent’s share of the popular vote) than a qualitative, binary outcome (who won the election). Pop scientists should stop trying to provide “the” reason for an outcome and instead explore how all of the relevant factors play together. I particularly liked Dr. Gelman’s view that statistics is to math as engineering is to physics.

In another talk, Sir David Cox (of the Cox proportional hazards model) encouraged us to present a unified view of statistics in terms of common objectives, rather than fragmenting into over-specialized tribes. He noted that a physicist whose experiment disagrees with F=ma will immediately say they must have missed something in the data, while a statistician would just shrug that the model is an approximation… but in some cases we really should look for additional variables we may have missed. Sir David’s wide-ranging talk also covered a great example on badger-culling; the fact that many traditional hypothesis-testing problems are better framed as estimation (we already know the effect is not 0, so what is it?); and a summary of 2 faces of frequentism and 7 faces of Bayesianism, all of which “work” but some of which are philosophically inconsistent with one another. He believes the basic division should be not between Bayesians and frequentists, but between letting the data speak for themselves (leaving interpretation for later) vs. more ambitiously integrating interpretation into the estimation itself. (An audience member thanked him for giving Bayesians faces, but noted that many are more interested in their posteriors.) Finally, he emphasized that the core of statistical theory is about concepts, and that the mathematics is just part of the implementation.

ASA president Nancy Geller gave an inspiring address calling on statisticians to participate more and “make our contributions clear and widely known.” She asked us to stand against the “assumption that statistics are merely a tool to be used by the consultant to get a result, rather than an intrinsic part of the creative process.” The awards ceremony that followed was marred by an overzealous audio engineer who played booming dramatic music over the presentation… I suspect that would be perfect for sales awards at a business conference, but it was off the mark for recognizing statisticians, who seem less comfortable with self-promotion (though I suppose it was fitting that this followed Dr. Geller’s recommendations 😛).

I’ve appreciated seeing other people whose names I recognized, including Leland Wilkinson, Dianne Cook, and Jim Berger. These annual meetings are also a great opportunity to meet firsthand with respected experts in your subfield. In my case, in the subject of Small Area Estimation, this has included meeting JNK Rao, author of the field’s standard textbook; saying hello to Bob Fay, of the pervasive Fay-Herriot model; and being heckled by Avi Singh and Danny Pfeffermann during my own presentation 🙂

Lastly, I also enjoyed meeting with the board of Statistics Without Borders, whose website committee I help to co-chair:
http://community.amstat.org/statisticswithoutborders/home/
SWB is always looking for volunteers, and I will discuss our work further in another post. However, I will point out that former co-chair Jim Cochran was honored as an ASA Fellow tonight, partly in recognition of his work with SWB.

Next up is the JSM Dance Party. Statisticians are, shall we politely say, remarkable dancers… I have seen some truly unforgettable dance skills at past JSM dance parties and I hope to see some more tonight!

Edit: See comments for Cox’s “faces” of frequentism and Bayesianism.