Carl Morris Symposium on Large-Scale Data Inference (2/3)

Continuing the summary of last week’s symposium on statistics and data visualization (see part 1 and part 3)… Here I describe Dianne Cook’s discussion of visual inference, and Rob Kass’ talk on statistics in cognitive neuroscience.

[Edit: I’ve added a few more related links throughout the post.]

Continue reading “Carl Morris Symposium on Large-Scale Data Inference (2/3)”

Carl Morris Symposium on Large-Scale Data Inference (1/3)

I enjoyed this week’s Symposium on Large-Scale Data Inference, which honored Harvard’s Carl Morris as the keynote speaker. This was the 2nd such symposium; last year’s honoree was Brad Efron (whose new book I also recommend after seeing it at this event).

This year’s focus was the intersection of statistics and data visualization around the question, “Can we believe what we see?” I was seriously impressed by the variety and quality of the speakers & panelists — many thanks to Social & Scientific Systems for organizing! Look for the lecture videos to be posted online in January.

See below for the first two speakers, Carl Morris and Mark Hansen. The next posts will summarize talks by Di Cook and Rob Kass (part 2), and Chris Volinsky and the final panel discussion (part 3).

Continue reading “Carl Morris Symposium on Large-Scale Data Inference (1/3)”

Making R graphics legible in presentation slides

I only visited a few JSM sessions today, as I’ve been focused on preparing for my own talk tomorrow morning. However, I went to several talks in a row which all had a common problem that made me cringe: graphics where the fonts (titles, axes, labels) are too small to read.

You used R's default settings when putting this graph in your slides? Too bad I won't be able to read it from anywhere but the front of the room.

Dear colleagues: if we’re going to the effort of analyzing our data carefully, and creating a lovely graph in R or otherwise to convey our results in a slideshow, let’s PLEASE save our graphs so that the text is legible on the slides! If the audience has to strain to read your graphics, they’re no easier to digest than a slide of dense equations or massive tables of numbers.

For those of us working in R, here are some very quick suggestions that would help me focus on the content of your graphics, not on how hard I’m squinting to read them.
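To give a flavor of what I mean (the specific sizes and file names below are just my own starting points, not rules), here’s a quick sketch for both base graphics and ggplot2:

```r
# Base R: save at a higher resolution and a larger point size,
# and scale up the axis, label, and title text.
png("my_plot.png", width = 1200, height = 800, res = 150, pointsize = 14)
par(cex.lab = 1.4, cex.axis = 1.2, cex.main = 1.5)
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon",
     main = "Fuel economy vs. weight")
dev.off()

# ggplot2: raise the theme's base font size before saving.
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  theme_grey(base_size = 18)
ggsave("my_plot_gg.png", p, width = 8, height = 5, dpi = 150)
```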

Continue reading “Making R graphics legible in presentation slides”

JSM 2012: Sunday

Greetings from lovely San Diego, CA, site of this year’s Joint Statistical Meetings. I can’t believe it’s already been a year since I was inspired to start blogging during the JSM in Miami!

If you’re keeping tabs on this year’s conference, there’s a fair amount of #JSM2012 activity on Twitter. Sadly, I haven’t seen any recent posts on The Statistics Forum, which blogged JSM so actively last year.

Yesterday’s Dilbert cartoon was also particularly fitting for the start of JSM, with its focus on big data 🙂

Continue reading “JSM 2012: Sunday”

useR 2012: main conference braindump

I knew R was versatile, but DANG, people do a lot with it:

> … I don’t think anyone actually believes that R is designed to make *everyone* happy. For me, R does about 99% of the things I need to do, but sadly, when I need to order a pizza, I still have to pick up the telephone. —Roger Peng

> There are several chains of pizzerias in the U.S. that provide for Internet-based ordering (e.g. www.papajohnsonline.com) so, with the Internet modules in R, it’s only a matter of time before you will have a pizza-ordering function available. —Doug Bates

> Indeed, the GraphApp toolkit … provides one (for use in Sydney, Australia, we presume as that is where the GraphApp author hails from). —Brian Ripley

So, heads up: the following post is super long, given how much R was covered at the conference. Much of this is a “notes-to-self” braindump of topics I’d like to follow up on later. I’m writing up the invited talks, the presentation and poster sessions, and a few other notes. The conference program has links to all the abstracts, and the main website should collect most of the slides eventually.

Continue reading “useR 2012: main conference braindump”

useR 2012: impressions, tutorials

First of all, useR 2012 (the 8th International R User Conference) was, hands down, the best-organized conference I’ve had the luck to attend. The session chairs kept everything moving on time, tactfully but sternly; the catering was delicious and varied; and Vanderbilt University’s leafy green campus and comfortable facilities were an excellent setting. Many thanks to Frank Harrell and the rest of Vanderbilt’s biostatistics department for hosting!

Plus there's a giant statue of bacon. What's not to love?

Continue reading “useR 2012: impressions, tutorials”

JSM: accessible for first-year grad students?

A friend of mine has just finished his first year of a biostatistics program. I’m encouraging him to attend the Joint Statistical Meetings (JSM) conference in San Diego this July/August. He asked:

Some of the talks look really interesting, though as someone who’s only been through the first year of a master’s program, I wonder if I’d be able to understand much. When you went as a student, did you find the presentations to be accessible?

I admit a lot of the talks went over my head the first year — and many still do. Some talks are too specialized even for an experienced statistician who just has a different focus… But there are always plenty of accessible talks as well:

  • Talks on teaching statistical literacy or Stats 101 might be useful if you’re ever a TA or consultant
  • Talks on data visualization may focus on communicating results rather than on technical details
  • Overview lectures can introduce you to a new field
  • Some folks are known for generally being accessible speakers (a few off the top of my head: Hadley Wickham, Persi Diaconis, Andrew Gelman, Don Rubin, Dick DeVeaux, David Cox, Hal Varian… and plenty of others)

And it’s worthwhile for a grad student to start getting to know other statisticians and becoming immersed in the field.

  • There’s a nice opening-night event for first-time attendees, and the Stat Bowl contest for grad students; at both of those I made friends I keep running into at later JSMs
  • Even when the talk is too advanced, it’s still fun to see a lecture by the authors of your textbooks, meet the folks who invented a famous estimator, etc.
  • You can get involved in longer-term projects: after attending the Statistics Without Borders sessions, I’ve become co-chair of the SWB website and co-authored a paper that’s now under review
  • It’s fun to browse the books in the general exhibit hall, get free swag, and see if any exhibitors are hiring; there is also a career placement center although I haven’t used it myself

Even if you’re a grad student or young statistician just learning the ropes, I definitely think it’s worth the trip!

The Testimator: Significance Day

A few more thoughts on JSM, from the Wednesday sessions:

I enjoyed the discussion on the US Supreme Court’s ruling regarding statistical significance. Some more details of the case are here.
In short, the company Matrixx claimed they did not need to tell investors about certain safety reports, since those results did not reach statistical significance. Matrixx essentially suggested that there should be a “bright line rule” that only statistically-significant results need to be reported.
However, the Supreme Court ruled against this view: All of the discussants seemed to agree that the Supreme Court made the right call in saying that statistical significance is not irrelevant, but we have to consider “the totality of the evidence.” That’s good advice for us all, in any context!

In particular, Jay Kadane and Don Rubin gave their talks without slides, and spoke well, which was a nice change of presentation style from most other sessions. Rubin brought up the fact that the p-value is not a property solely of the data, but also of the null hypothesis, test statistic, covariate selection, etc. So even if the court wanted a bright-line rule of this sort, how could they specify one in sufficient detail?
For that matter, while wider confidence intervals are more conservative when trying to show superiority of one drug over another, there are safety situations where narrower confidence intervals are actually the more conservative ones, but “everyone still screws it up.” And “nobody really knows how to do multiple comparisons right” for subgroup analyses checking whether the drug is safe in all subgroups. So p-values are not a good substitute for human judgment on the “totality of the evidence.”

I also enjoyed Rubin’s quote from Jerzy Neyman: “You’re getting misled by thinking that the mathematics is the statistics. It’s not.” This reminded me of David Cox’s earlier comments that statistics is about the concepts, not about the math. In the next session, Paul Velleman and Dick DeVeaux continued this theme by arguing that “statistics is science more than math.”
(I also love DeVeaux and Velleman’s 2008 Amstat News article on how “math is music; statistics is literature.” Of course Andrew Gelman presented his own views about stats vs. math on Sunday; and Persi Diaconis talked about the need for conceptually-unifying theory, rather than math-ier theory, at JSM 2010. See also recent discussion at The Statistics Forum. Clearly, defining “statistics” is a common theme lately!)

In any case, Velleman presented the popular telling of the history behind Student’s t-test, and then proceeded to bust myths behind every major point in the story. Most of all, he argued that we commonly take the wrong lessons from the story. Perhaps it is not Gosset’s result (the t-test) that should be taught so much as the computationally-intensive method he used first, an approach that’s far easier to carry out nowadays and may be more pedagogically valuable.
I’m also jealous of Gosset’s title at Guinness: “Head Experimental Brewer” would look great on a resume 🙂
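To make that simulation-first idea concrete, here’s a toy sketch of my own (not an example from Velleman’s talk): simulate many small samples, compute Student’s statistic each time, and only then compare the simulated distribution to the t density.

```r
# Simulation-first look at Student's statistic for small samples
# (a toy illustration of my own, not an example from the talk).
set.seed(1)
n <- 4
t_stats <- replicate(10000, {
  x <- rnorm(n)                    # repeatedly draw a small sample
  mean(x) / (sd(x) / sqrt(n))      # the statistic Gosset studied
})

hist(t_stats, breaks = 100, freq = FALSE,
     main = "Simulated t statistics, n = 4", xlab = "t")
curve(dt(x, df = n - 1), add = TRUE, lwd = 2)   # overlay the t density
```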

After their talks, I went to the session honoring Joe Sedransk in order to hear Rod Little and Don Malec talk about topics closer to my work projects. Little made a point about “inferential schizophrenia”: if you use direct survey estimates for large areas, and model-based estimates for small areas, your entire estimation philosophy jumps drastically at the arbitrary dividing line between “large” and “small.” Wouldn’t it be better to use a Bayesian approach that transitions smoothly, closely approaching the direct estimates for large areas and the model estimates in small areas?
Pfeffermann and Rao commented afterwards that they don’t feel things are as “schizophrenic” as Little claims, but are glad that Bayesians are now okay with measuring the frequentist properties of their procedures (and Little claimed that Bayesian models can often end up with better frequentist properties than classical models).
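To make the “smooth transition” idea concrete, here’s a minimal sketch of a normal-normal shrinkage estimator; the numbers, variances, and stand-in model estimate are made up for illustration and aren’t from either talk.

```r
# Toy normal-normal shrinkage, in the spirit of that "smooth transition"
# (all numbers below are invented for illustration).
direct_est <- c(12.1, 9.8, 15.3, 11.0)   # direct survey estimates, one per area
n          <- c(2000, 500, 50, 5)        # area sample sizes
sigma2     <- 100                        # within-area sampling variance (per unit)
tau2       <- 1                          # between-area (model) variance
model_est  <- mean(direct_est)           # crude stand-in for a model-based estimate

# The weight on the direct estimate grows toward 1 with n, so large areas
# keep (nearly) their direct estimates while small areas borrow strength
# from the model -- no arbitrary cutoff between "large" and "small".
w         <- tau2 / (tau2 + sigma2 / n)
composite <- w * direct_est + (1 - w) * model_est

round(data.frame(n, w, direct_est, composite), 2)
```

With these toy numbers the weight on the direct estimate is about 0.95 for the largest area and about 0.05 for the smallest, so the estimator glides from “trust the survey” to “trust the model” instead of jumping at a cutoff.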

In the afternoon, I sat in on Hadley Wickham’s talk about starting off statistics courses with graphical analysis. This less-intimidating approach lets beginners describe patterns right from the start.
He also commented that each new tool you introduce should be motivated by an actual problem where it’s needed: find an interesting question that is answered well by the new tool. In particular, when you combine a good dataset with an interesting question that’s well-answered by graphics, this gives students a good quick payoff for learning to program. Once they’re hooked, *then* you can move to the more abstract stuff.
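As a made-up illustration of that kind of quick payoff (mine, not an example from the talk), one ggplot2 call on a built-in dataset already hands students a question worth arguing about:

```r
library(ggplot2)

# Question: do bigger engines always mean worse highway mileage?
# One plot gives beginners a pattern, a smoother, and a few outliers to discuss.
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(aes(colour = class)) +
  geom_smooth(se = FALSE) +
  labs(x = "Engine displacement (litres)", y = "Highway miles per gallon")
```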

Wickham grades students on their curiosity (what can we discover in this data?), skepticism (are we sure we’ve found a real pattern?), and organization (can we replicate and communicate this work well?). He provides practice drills to teach “muscle memory,” as well as many opportunities for mini-analyses to teach a good “disposition.”
This teaching philosophy reminds me a lot of Dan Meyer and Shawn Cornally’s approaches to teaching math (which I will post about separately sometime) (edit: which I have posted about elsewhere).
Wickham also collects interesting datasets, cleans them up, and posts them on GitHub, along with his various R packages and tools, including the excellent ggplot2.

The last talks I attended (by Eric Slud and Ansu Chatterjee, on variance estimation) were also related to my work on small area modeling.
I was amused by the mixed metaphors in Chatterjee’s warning to “not use the bootstrap as a sledgehammer,” and Bob Fay’s discussion featured the excellent term “Testimator” 🙂
This reminds me that last year Fay presented on the National Crime Victimization Survey, and got a laugh from the audience for pointing out that, “From a sampling point of view, it’s a problem that crime has gone down.”

Overall, I enjoyed JSM (as always). I did miss a few things from past JSM years:

  • This year I did not visit the ASA Student Stat Bowl competition, and I’m a bit sad that as a non-student I can no longer compete and defend my 2nd place title… although that ranking may not have held up across repeated sampling anyway 😛
  • I was also sad that last year’s wonderful StatAid / Statistics Without Borders mixer could not be repeated this year due to lack of funding.
But JSM was still a great chance to meet distant friends and respected colleagues, get feedback on my research and new ideas on many topics, see what’s going on in the wider world of stats (there are textbooks on Music Data Mining now?!?), and explore another city.
(Okay, I didn’t see too much of Miami beyond Lincoln Rd, but I loved that the bookstore was creatively named Books & Books … and the empanadas at Charlotte Bakery were outstanding!)
I also appreciate that it was an impetus to start this blog — knock on wood that it keeps going.

I look forward to JSM 2012 in San Diego!