JSM: accessible for first-year grad students?

A friend of mine has just finished his first year of a biostatistics program. I’m encouraging him to attend the Joint Statistical Meetings (JSM) conference in San Diego this July/August. He asked:

Some of the talks look really interesting, though as someone who’s only been through the first year of a master’s program, I wonder if I’d be able to understand much. When you went as a student, did you find the presentations to be accessible?

I admit a lot of the talks went over my head the first year — and many still do. Some talks are too specialized even for an experienced statistician who just has a different focus… But there are always plenty of accessible talks as well:

  • Talks on teaching statistical literacy or Stats 101 might be useful if you’re ever a TA or consultant
  • Talks on data visualization may focus on communicating results rather than on technical details
  • Overview lectures can introduce you to a new field
  • Some folks are known for generally being accessible speakers (a few off the top of my head: Hadley Wickham, Persi Diaconis, Andrew Gelman, Don Rubin, Dick DeVeaux, David Cox, Hal Varian… and plenty of others)

And it’s worthwhile for a grad student to start getting to know other statisticians and to become immersed in the field.

  • There’s a nice opening night event for first-time attendees, and the Stat Bowl contest for grad students; in both of those, I made friends I keep running into at later JSMs
  • Even when the talk is too advanced, it’s still fun to see a lecture by the authors of your textbooks, meet the folks who invented a famous estimator, etc.
  • You can get involved in longer-term projects: after attending the Statistics Without Borders sessions, I’ve become co-chair of the SWB website and co-authored a paper that’s now under review
  • It’s fun to browse the books in the general exhibit hall, get free swag, and see if any exhibitors are hiring; there is also a career placement center although I haven’t used it myself

Even if you’re a grad student or young statistician just learning the ropes, I definitely think it’s worth the trip!

In defense of the American Community Survey

Disclaimer: All opinions expressed on this blog are my own and are not intended to represent those of the U.S. Census Bureau.
Edit: Please also read the May 11th official statement responding to the proposed cuts, by Census Bureau Director Robert Groves.
(Again, although of course my opinions are informed by my work with the Bureau, my post below is strictly as a private citizen. I have neither the authority nor the intent to be an official spokesperson for the Census Bureau.)

Yesterday the U.S. House of Representatives voted to eliminate the American Community Survey (ACS). The Senate has not passed such a measure yet. I do not want to get political, but in light of these events it seems appropriate to highlight some of the massive benefits that the ACS provides.

For many variables and indicators, the ACS is the only source of nationally-comparable local data. That is, if you want a detailed look at trends and changes over time, across space, or by demographic group, the ACS is your best dataset for many topics. Take a look at the list of data topics on the right-hand side of the ACS homepage: aging, disability, commuting to work, employment, language, poverty…

Businesses use the ACS to analyze markets: Can people afford our product here? Should we add support for speakers of other languages? Does the aging population here need the same services as the younger population there? Similarly, public health officials use ACS information about population density when deciding where to place a new hospital. Dropping the ACS would make these decisions riskier, with no corresponding direct benefit to businesses or local governments.

Local authorities can and do commission their own local studies of education levels or commute times; but separate surveys by each area might use incompatible questions. Only the ACS lets them compare such data to their neighbors, to similar localities around the country, and to their own past.

The Census Bureau works long and hard to ensure that each survey is well-designed to collect only the most important data with minimal intrusion. For example, even the flush toilet question (cited deprecatingly by the recent measure’s author) is useful data about infrastructure and sanitation. From the ACS page on “Questions on the form and why we ask”:

Complete plumbing facilities are defined as hot and cold running water, a flush toilet, and a bathtub or shower. These data are essential components used by the U.S. Department of Housing and Urban Development in the development of Fair Market Rents for all areas of the country. Federal agencies use this item to identify areas eligible for public assistance programs and rehabilitation loans. Public health officials use this item to locate areas in danger of ground water contamination and waterborne diseases.

Besides the direct estimates from the ACS itself, the Census Bureau uses ACS data as the backbone of several other programs. For example, the Small Area Income and Poverty Estimates program provides annual data to the Department of Education for use in allocating funds to school districts, based on local counts and rates of children in poverty. Without the ACS we would be limited to using smaller surveys (and thus less accurate information about poverty in each school district) or older data (which can become outdated within a few years, such as during the recent recession). Either way, it would hurt our ability to allocate resources fairly to schoolchildren nationwide.

Similarly, the Census Bureau uses the ACS to produce other timely small-area estimates required by Congressional legislation or requested by other agencies: the number of people with health insurance, people with disabilities, minority language speakers, etc. The legislation requires a data source like the ACS not only so that it can be carried out well, but also so its progress can be monitored.

Whatever our representatives may think about the costs of this survey, I hope they reflect seriously on all its benefits before deciding whether to eliminate the ACS.

Updated d3 idiopleth

I’ve updated the interactive poverty map from last month, providing better labels, legends, and a clickable link to the data source. It also now compares confidence intervals correctly. I may have switched the orange and purple colors too. (I also reordered the code so that things are defined in the right order; I think that was why you’d sometimes need to reload the map before the interactivity would work.)
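For readers curious what “comparing confidence intervals” means here, below is a minimal R sketch of one crude approach: checking whether two estimates’ intervals overlap, given each estimate and its margin of error. This is only an illustration (the function name and the numbers are made up, and the map itself runs in Javascript), not the map’s actual logic.

# Illustrative only: a crude check of whether two confidence intervals overlap.
# Overlapping intervals suggest the difference may not be statistically
# significant (a conservative shortcut, not a formal hypothesis test).
ci_overlap <- function(est1, moe1, est2, moe2) {
  (est1 + moe1) >= (est2 - moe2) & (est2 + moe2) >= (est1 - moe1)
}
# Example: poverty rates of 14.3% +/- 0.8 and 12.1% +/- 0.7
ci_overlap(14.3, 0.8, 12.1, 0.7)   # FALSE: the intervals do not overlap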

Please click the screenshot to try the interactive version (seems to work better in Firefox than Internet Explorer):

Next steps: redo the default color scheme so it shows the states relative to the national average poverty rate; figure out why there are issues in the IE browser; clean up the code and share it on Github.
[Edit: the IE issues seem to be caused by D3’s use of the SVG format for its graphics; older versions of IE do not support SVG graphics. I may try to re-do this map in another Javascript library such as Raphaël, which can apparently detect old versions of IE and use another graphics format when needed.]

For lack of a better term I’m still using “idiopleth”: idio as in idiosyncratic (i.e. what’s special about this area?) and pleth as in plethora (or choropleth, the standard map for a multitude of areas). Hence, together, idiopleth: one map containing a multitude of idiosyncratic views. Please leave a comment if you know of a better term for this concept already.

Getting SASsy

Although I am most familiar with R for statistical analysis and programming, I also use a fair amount of SAS at work.

I found it a huge transition at first, but one thing that helped make SAS “click” for me was that it was designed around those (now-ancient) computers that used punch cards. So the DATA step processes one observation at a time, as if you were feeding it punch cards one after another, and never loads the whole dataset into memory at once. I think this is also why many SAS procedures require you to sort your dataset first. It makes some things awkward to do, and often it takes more code than the equivalent in R, but on the other hand it means you can process huge datasets without worrying about whether they will fit into memory. (Well… memory size should be a non-issue for the DATA step, but not for all procedures. We’ve run into serious memory issues on large datasets when using PROC MIXED and PROC MCMC, so using SAS does not guarantee that you never have to fear large data.)

The Little SAS Book (by Delwiche and Slaughter) and Learning SAS by Example (by Cody) are two good resources for learning SAS. If you’re able to take a class directly from the SAS Institute, they tend to be taught well, and you get a book of class notes with a very handy cheat sheet.

Matrix vs Data Frame in R

Today I ran into a double question that might be relevant to other R users:
Why can’t I assign a dataframe row into a matrix row?
And why won’t my function accept this dataframe row as an input argument?

A single row of a dataframe is a one-row dataframe, i.e. a list, not a vector. R won’t automatically treat dataframe rows as vectors, because a dataframe’s columns can be of different types. So converting them to a vector (which must be all of a single type) would be tricky to generalize.

But if you know all your columns are numeric (no characters, factors, etc.), you can convert the dataframe to a numeric matrix yourself, using the as.matrix() function, and then treat its rows as vectors.

> # Create a simple dataframe
> # and an empty matrix of the same size
> my.df <- data.frame(x=1:2, y=3:4)
> my.df
  x y
1 1 3
2 2 4
> dim(my.df)
[1] 2 2
> my.matrix <- matrix(0, nrow=2, ncol=2)
> my.matrix
     [,1] [,2]
[1,]    0    0
[2,]    0    0
> dim(my.matrix)
[1] 2 2
>
> # Try assigning a row of my.df into a row of my.matrix
> my.matrix[1,] <- my.df[1,]
> my.matrix
[[1]]
[1] 1

[[2]]
[1] 0

[[3]]
[1] 3

[[4]]
[1] 0

> dim(my.matrix)
NULL
> # my.matrix became a list!
>
> # Convert my.df to a matrix first
> # before assigning its rows into my.matrix
> my.matrix <- matrix(0, nrow=2, ncol=2)
> my.matrix[1,] <- as.matrix(my.df)[1,]
> my.matrix
     [,1] [,2]
[1,]    1    3
[2,]    0    0
> dim(my.matrix)
[1] 2 2
> # Now it works.
>
> # Try using a row of my.df as input argument
> # into a function that requires a vector,
> # for example stem-and-leaf-plot:
> stem(my.df[1,])
Error in stem(my.df[1, ]) : 'x' must be numeric
> # Fails because my.df[1,] is a list, not a vector.
> # Convert to matrix before taking the row:
> stem(as.matrix(my.df)[1,])

  The decimal point is at the |

  1 | 0
  1 |
  2 |
  2 |
  3 | 0

> # Now it works.

For clarifying dataframes vs matrices vs arrays, I found this link quite useful:
http://faculty.nps.edu/sebuttre/home/R/matrices.html#DataFrames

Director Groves leaving Census Bureau

I’m sorry to hear that our Census Bureau Director, Robert Groves, is leaving the Bureau for a position as provost of Georgetown University. The Washington Post, Deputy Commerce Secretary Rebecca Blank, and Groves himself reflect on his time here.

I have only heard good things about Groves from my colleagues. Besides the achievements listed in the links above, my senior coworkers tell me that the high number and quality of visiting scholars and research seminars here in recent years are largely thanks to his encouragement. He has also set a course for improving the accessibility and visualization of the Bureau’s data; I strongly hope future administrations will continue supporting these efforts.

Finally, here is a cute story I heard (in class with UMich’s Professor Steven Heeringa) about Groves as a young grad student. I’m sure the Georgetown students will enjoy having him there:

“In the days in ’65 when Kish’s book was published, there were no computers to do these calculations. So variance estimation for complex sample designs was all done through manual calculations, typically involving calculating machines, rotary calculators.

I actually arrived in ’75 as a graduate student in the sampling section, and they were still using rotary calculators. I brought the first electronic calculator to the sampling section at ISR, and people thought it was a little bit of a strange device, but within three months I had everybody convinced.

Otherwise we had these large rotary calculators that would hum and make noise, and Bob Groves and I — there was a little trick with one of the rotary calculators: if you pressed the correct sequence of buttons, it would sort of iterate and it would start humming like a machine gun, and so if you can imagine Bob Groves fiddling around on a rotary calculator to sorta create machine gun type noises in the sampling section at ISR… I’m sure he’d just as soon forget that now, but we were all young once, I guess.”

Dr Groves, I hope you continue to make the workplace exciting 🙂 and wish you all the best in your new position!

Localized Comparisons: Idiopleth Maps?

In which we propose a unifying theme, name, and some new prototypes for visualizations that allow “localization of comparisons,” aka “How do I relate to others?”

When Nathan Yau visited the Bureau a few months ago, he compared two world maps of gasoline prices by country. The first one was your typical choropleth: various color shades correspond to different gas prices. Fair enough, but (say) an American viewing the map is most likely interested in how US gas prices compare to the rest of the world. So the second map instead put America in a neutral color (grey) and recolored the other countries relative to the US, showing whether their prices are higher or lower than here (for instance, red for pricier and green for cheaper gas).
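The recoloring rule itself is simple. Here is a minimal R sketch with made-up country names, prices, and function names; it is purely an illustration of the idea, not code from either map:

# Hypothetical data and a toy recoloring rule: grey out the reference country,
# then color the rest by whether their price is above or below the reference.
gas <- data.frame(country = c("USA", "Norway", "Venezuela", "Canada"),
                  price   = c(3.75, 9.50, 0.10, 4.60))   # made-up prices
recolor <- function(data, reference) {
  ref_price <- data$price[data$country == reference]
  ifelse(data$country == reference, "grey",                  # neutral reference
         ifelse(data$price > ref_price, "red", "green"))     # pricier vs cheaper
}
recolor(gas, "USA")     # "grey" "red" "green" "red"
recolor(gas, "Canada")  # choosing a new reference just recolors the same data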

I liked this idea but wanted to take it further: Instead of a one-off map just for Americans, why not make an interactive map that recolors automatically when you select a new country?
As a statistician, I’m also interested in how to communicate uncertainty: is your local area’s estimate statistically significantly different from your neighbors’ estimates? Continue reading “Localized Comparisons: Idiopleth Maps?”

Stats 101 resources

A few friends have asked for self-study resources on learning (or brushing up on) basic statistics. I plan to keep updating this post as I find more good suggestions.

Of course the ideal case is to have a good teacher in a nice classroom environment:

The best classroom setting

For self-study, however, you might try an open course online. MIT has some OpenCourseWare for mathematics (including course 18.443, “Statistics for Applications”), and Carnegie Mellon offers free online courses in statistics. I have not tried them myself yet but hear good things so far.

As for textbooks: Freedman, Pisani, and Purves’ Statistics is a classic intro to the key concepts and seems a great way to get up to speed.
Two other good “gentle” conceptual intros are The Cartoon Guide to Statistics and How to Lie with Statistics. Also useful is Statistics Done Wrong [see my review], an overview of common mistakes in designing studies or applying statistics.
But I believe they all try to avoid equations, so you might need another source to show you how to actually crunch the numbers.
My undergrad statistics class used Devore and Farnum’s Applied Statistics for Engineers and Scientists. I haven’t touched it in years, so I ought to browse it again, but I remember it demonstrated the steps of each analysis quite clearly.
If you end up using the Devore and Farnum book, Jonathan Godfrey has converted the 2nd edition’s examples into R.
[Edit: John Cook’s blog and his commenters have some good advice about textbooks. They also cite a great article by George Cobb about how to choose a stats textbook.]

Speaking of R, I would suggest it if you don’t already have a particular statistical software package in mind. It is open source and a free download, and I find that working in R is similar to the way I think about math while working it out on paper (unlike SPSS or SAS or Stata, all of which are expensive and require a very different mindset).
I list plenty of R resources in my R101 post. In particular, John Verzani’s simpleR seems to be a good introduction to using R, and it reviews a lot of basic statistics along the way (though not in detail).
People have also recommended some books on intro stats with R, especially Dalgaard’s Introductory Statistics with R or Maindonald & Braun’s Data Analysis and Graphics Using R.

For a very different approach to introductory stats, my former professor Allen Downey wrote a book called Think Stats aimed at programmers and using Python. I’ve only just read it, and I have a few minor quibbles that I want to discuss with him, but it’s a great alternative to the classic approach. As Allen points out, “standard statistical techniques are really computational shortcuts, which is less important when computation is cheap.” Both mindsets are good to have under your belt, but Allen’s is one of the few intro books so far for the computational route. It’s published by O’Reilly but Allen also makes a free version available online, as well as a related blog full of good resources.
Speaking of O’Reilly, apparently their book Statistics Hacks contains major conceptual errors, so I would have to advise against it unless they fix them in a future edition.

DC Datadive

This weekend I had an absolute blast taking part in the DC Datadive hosted by the NYC-based Data Without Borders (DWB). It was somewhat like a hackathon, but rather than competing to develop an app with commercial potential, we were tasked with exploring data to produce insights for social good. (Perhaps it’s more like the appropriate-technology flash conferences for engineers that my classmates organized back at Olin.) In any case, we mingled on Friday night, chose one of three projects to focus on for Saturday (10am to 5am, in our case!), and presented Sunday morning.

The author, eager to point out a dotplot

There were three organizations acting as project sponsors (presentations here):

  • The National Environmental Education Foundation wondered how to evaluate their own efforts to increase environmental literacy among the US public. Their volunteers came up with great advice and even found some data NEEF didn’t realize they already had.
  • GuideStar, a major database of financial information on nonprofit organizations, wanted early-warning prediction of nonprofits that are at risk of failing, as well as ways to highlight high-performing organizations that are currently under the radar. This group of datadivers essentially ran their own Netflix prize contest, assembling an amazing range of machine learning approaches that each gave a new insight into the data.
  • DC Action for Children tasked us with creating a visualization to clearly express how children’s well-being, health, school performance, etc. are related to the neighborhood where they live. I chose to work on this project and am really pleased with the map we produced: screenshot and details below.

Click above and try it out. Mousing over each area gives its neighborhood-level information; hovering over a school gives school details.
In short, our map situates school performance (percent of children with Proficient or Advanced scores on reading and math tests) in the context of their DC neighborhood. Forgive me if I’m leaving out important nuances, but as I understood it the idea was to change the conversation from “The schools on this list are failing so they must have poor administration, bad teachers, etc.” towards “The children attending the schools in this neighborhood have it rough: socioeconomic conditions, few resources like libraries and swimming pools, no dentists or grocery stores, etc. Maybe there are other factors that public policy should address before putting full responsibility on the school.” I think our map is a good start on conveying this more effectively than a bunch of separated tables.

It was so exciting to have a tangible “product” to show off. There may be a few minor technical glitches, and we did not have time to show all of the data that the other subteams collected, but it’s a good first draft.
Planning and coordinating our giant group was a bit tough at first, but our DWB coordinator, Zac, gamely kept us moving and communicating across the several sub-teams that we formed. The data sub-team found, organized, and cleaned a bunch more variables than we could put in, so that whoever continues this work will have lots of great data to use. And the GIS sub-team aggregated it all to several levels (Census tract, neighborhood, and ward); again we only had time to implement one level on the map, but all is ready to add the other levels when time allows.
As for myself, I worked mostly with the visualization sub-team: Nick who set up the core map in TileMill; Jason who kept pushing it forward until 5am; Sisi who styled it and cobbled together the info boxes out of HTML and the Google Charts API and who knows what else; and a ton of other fantastic people whose handles I can’t place at the moment. I learned A TON from everybody and was just happy that my R skills let me contribute to this great effort.
[Edit: It was remiss of me not to mention Nick’s coworkers Troy and Andy, who provided massive help with the GIS prep and the TileMill hosting. Andy has a great writeup of the tools, which they also use for their maps of the week.]
I absolutely loved the collaborative spirit: people brought so many different skills and backgrounds to the team, and we made new connections that I hope will continue with future work on this or similar projects. Perhaps some more of us will join the Data Science DC Meetup group, for example.
I do wish I had spent more time talking to people on the other projects — I was so engrossed in my own team’s work that I didn’t get to see what other groups were doing until the Sunday presentations. Thank goodness for catching up later via Twitter and #dcdatadive.

A huge thanks to New America Foundation for hosting us (physically as well as with a temporary TileMill account), to the Independent Sector NGEN Fellows for facilitating, to whoever brought all the delicious food, and of course to DWB for putting it all together. I hope this is just the start of much more such awesomeness!

PS — my one and only concern: The wifi clogged up early on Saturday, when everyone was trying to get data from the shared Dropboxes at once. If you plan to attend a future datadive, I’d suggest bringing a USB stick to ease sharing of big files if the wifi collapses.

[Edit: I also recommend DC Action for Children’s blog posts on their hopes before the datadive and their reactions afterwards. They have also shared a good article with more open questions about how kids are impacted by inequality in and among DC neighborhoods.]

R101

I’m preparing “R101,” an introductory workshop on the statistical software R. Perhaps other beginners might find some use in the following summary and resources. (See also the post on resources for teaching yourself introductory statistics.)

Do you have obligatory screenshots of nifty graphics that R can produce? Yes, we do.

Nice. So what exactly is R? It is an open-source software tool for statistics, data processing, data visualization, etc. (Technically there’s a programming language called S, and R is just one open-source software tool that implements the S language. But you’ll often hear people just say “the R language.” Beginners can worry about the nuances later.)
Open source means it is free to download and use; this is great for academics and others with low budgets. It also means you can inspect the code of any algorithm if you want to double-check it or just to see how it’s done; this is great for validating and building on each other’s ideas. And it is easy to share code in user-defined “packages,” of which there are thousands, all helping people use cutting-edge statistical tools as soon as they are invented.
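For example, installing and loading a contributed package takes just two commands (shown here with ggplot2, a popular graphics package; any CRAN package works the same way):

# Download and install a package from CRAN (needed only once per machine):
install.packages("ggplot2")
# Load it in each R session where you want to use it:
library(ggplot2)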

How do I get started? Download and install R from CRAN, the Comprehensive R Archive Network. There are Windows, Mac, and Linux versions.
In Windows at least, when you open the program there is a big window containing a smaller window, the R Console. You can type and submit commands in the Console window at the prompts (the “>” signs). Try typing 3+5 and hitting Enter, and you should see the output [1] 8: the result is a one-item vector (hence the [1]) containing the value 8, as it should be.
Great, now you know how to use R as a desktop calculator!
Or you can type your commands in a script, so that you can save your code easily. Go to “File -> New script” and it will open the R Editor window. Type 3+5 in there, highlight it, and then either click the “Run line or selection” icon on the top menu bar or just hit Ctrl+R on the keyboard. It should copy the command into the Console window and run it, with the same result as before.
Sweet, now you can save the code you used to do your calculations.
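Putting those two ideas together, here is the kind of tiny script (purely illustrative) you could type into the R Editor, save, and re-run later:

# A two-line script you might save and re-run later:
3 + 5       # basic arithmetic; prints [1] 8
sqrt(2)     # built-in functions work the same way; prints [1] 1.414214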
Quick-R has more details on using the R interface.
Next, try A Sample Session from the R manual to see examples of other things R can do.

What are the key concepts? Basically, everything is a function or an object. Objects are where your data and results are stored: data frames, matrices, vectors, lists, etc. Functions take objects in, think about them, and spit new objects out. Functions sometimes also have side effects (like displaying a table of output or a graph, or changing a display setting).
If you want to save the results or output of a function, use <- which is the assignment operator (think of an arrow pointing left). For example, to save the natural log of 10 into a variable called x, type the command x <- log(10). Then you can use x as the input to another function.
Note that functions create new output rather than modifying the input variable. If you have a vector called y that you need sorted, sort(y) will print out a sorted copy of y but will not change y itself. If you actually want y to be sorted, you have to reassign it: y <- sort(y).
Functions always take their input in parentheses: (). So if you see a word followed by parentheses, you know it’s a function in R. You will also see square brackets: []. These are used for locating or extracting data in objects. For example, if you have a vector called y, then y[3] gives you the 3rd element of that vector. If y is a matrix, then y[4,7] is the element in the 4th row, 7th column.
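Here is a short sketch tying those pieces together (the variable names are just for illustration):

x <- log(10)     # assignment: x now holds the natural log of 10
x                # prints [1] 2.302585

y <- c(5, 2, 9)  # a small vector; c() combines values
sort(y)          # prints a sorted copy: 2 5 9
y                # ...but y itself is unchanged: 5 2 9
y <- sort(y)     # reassign to actually store the sorted version

m <- matrix(1:12, nrow = 4, ncol = 3)  # a 4-by-3 matrix filled with 1 to 12
y[3]             # third element of the vector y (9, now that y is sorted)
m[4, 3]          # element in the 4th row, 3rd column (here, 12)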

How do I get help? If you know you want to use a function named foo, you can learn more about it by typing ?foo which will bring up the help file for that function. The “Usage” section tells you the arguments, their default order, and their default values. (If no default value is given, it is a required argument.) “Arguments” gives more details about each argument. “Value” gives the structure of the output. “Examples” shows an example of the function in use.
If you know what you want to do but don’t know what the function is called, I suggest looking through the R Reference Card. If that does not answer your question, you can try searching using RSeek.org or search.r-project.org, search engines tuned to the R sites and mailing lists… since just typing the letter R into Google is not always helpful 🙂
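For instance, all of these come with a standard R installation:

?mean                               # help page for a function you know by name
help.search("standard deviation")   # keyword search of installed help pages
                                    # (same as ??"standard deviation")
RSiteSearch("mixed model")          # searches the R sites and mailing lists
                                    # in your web browser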

Where do I read more?
Online resources for general beginners:
R for Beginners
Simple R
Official Introduction to R
R Fundamentals
Kickstarting R
Let’s Use R Now
UCLA R Class Notes
A Quick and (Very) Dirty Intro to Doing Your Statistics in R
Hints for the R Beginner
R Tutorials from Universities Around the World (88 as of last count)

For statisticians used to other packages:
Quick-R
R for SAS and SPSS Users

For programmers:
R’s unconventional features
Google’s R code style guide

Good books (as suggested by Cosma Shalizi):
Paul Teetor, The R Cookbook: “explains how to use R to do many, many common tasks”
Norman Matloff, The Art of R Programming: “Good introduction to programming for complete novices using R.”