I’ll admit it: before grad school I wasn’t fully clear on the distinction between statistics and applied mathematics. In fact — gasp! — I may have thought statistics was a branch of mathematics, rather than its own discipline. (On the contrary: see Cobb & Moore (1997) on “Mathematics, Statistics, and Teaching”; William Briggs’s blog; and many others.)

Of course the two fields overlap considerably; but clearly a degree in one area will not emphasize exactly the same concepts as a degree in the other. One such difference I’ve seen is that statisticians have a greater focus on variability. That includes not just quantifying the usual uncertainty in your estimates, but also modeling the variability in the underlying population.

In many introductory applied-math courses and textbooks I’ve seen, the goal of modeling is usually to get the equivalent of a point estimate: the system’s behavior after converging to a steady state, the maximum or minimum necessary amount of something, etc. You may eventually get around to modeling the variability in the system too, but it’s not hammered into you from the start like it is in a statistics class.

For example, I was struck by some comments on John Cook’s post about (intellectual) traffic jams. Skipping the “intellectual” part for now, here’s what Cook said:

Imagine you’re on a highway with two lanes in each direction. Two cars are traveling side-by-side at exactly the speed limit. No one can pass, and so the cars immediately behind the lead pair go a little slower than the speed limit in order to maintain a safe distance. This process cascades until traffic slows down to a crawl miles behind the pair of cars responsible for the traffic jam.

Commenter 1:

That’s a flawed assumption. After driving a little bit slower for a short period of time, the second pair of cars can speed up to drive at the same speed as the leading pair. The distance, providing everyone’s driving at a constant speed, is going to remain the same.

Commenter 2:

Why do the cars behind have to go slower? As they approach the two lead cars that cruise abreast, they will have to slow to the speed limit at a safe following distance. They might initially overcompensate. But ultimately, the system ought to stabilize to a point where everyone’s going the speed limit (given a sufficiently long highway, such that road capacity doesn’t become the limiting factor).

My first reaction was that these commenters showed an applied-math way of thinking: There’s a steady state in which all the cars could go at the same speed as the leading pair, so presumably that’s what must happen.

A statistician, on the other hand, is trained to think about variability from the start, and should immediately recognize that the following cars *won’t* be able to match speeds perfectly (even with cruise control you’re not likely to keep *exactly* the same speed as the leading cars), and that this is going to be a major part of the problem. Indeed, see Cook’s response:

Drivers speed up and slow down for various reasons over time. Say someone’s speed varies 10 mph. On an open highway they can average 55 by driving between 50 and 60. But if someone in front of them is driving a constant 55, they will have to slow down to 50 in order to be able to maintain their usual variability.
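Cook's point can be sketched with a toy simulation (my own construction, not from his post): the lead car holds exactly 55, each follower's desired speed fluctuates uniformly between 50 and 60, and no car may close within a safe gap of the car ahead. Slow moments cost a follower ground, and fast moments are capped by the car in front, so followers tend to average below 55.

```python
# Toy simulation (my own sketch, not Cook's): cars on a single lane.
# The lead car holds a constant 55 mph; each follower "wants" a speed
# drawn uniformly from [50, 60] each second, but may never get closer
# than a safe gap to the car ahead.
import random

random.seed(42)

def simulate(n_cars=10, steps=3600, target=55.0, swing=5.0,
             spacing=0.03, safe_gap=0.02):
    dt = 1.0 / 3600.0                          # 1-second steps, speeds in mph
    pos = [-i * spacing for i in range(n_cars)]  # positions in miles
    dist = [0.0] * n_cars                      # distance covered by each car
    for _ in range(steps):
        new_pos = pos[:]
        for i in range(n_cars):
            if i == 0:
                speed = target                 # lead car: constant 55
            else:
                # desired speed fluctuates in [target - swing, target + swing]
                speed = target + random.uniform(-swing, swing)
            step = speed * dt
            if i > 0:
                # can't advance past the safe gap behind the car ahead
                step = min(step, new_pos[i - 1] - safe_gap - pos[i])
                step = max(step, 0.0)
            new_pos[i] = pos[i] + step
            dist[i] += step
        pos = new_pos
    hours = steps * dt
    return [d / hours for d in dist]           # average speed of each car

avg = simulate()
print([round(v, 1) for v in avg])  # lead car averages 55; followers less
```

The asymmetry is the whole story: a follower's slow seconds are never fully repaid, because its fast seconds run into the bumper ahead, and the shortfall compounds down the chain.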

In other words, if you’re “thinking like a statistician,” you’re likely to notice certain features of the problem that you might miss by “thinking like an applied mathematician.” Now, the kind of models or simulations you’d use here might not be the ones traditionally taught in a stats class, so the statistician may in fact need an applied mathematician to help with the modeling… but that key insight may be more likely to come from the statistician in the first place.

Of course this has been a caricature — a good applied mathematician will get around to considering variability too — but it still seems to be a difference between the fields’ focuses. Do you agree, or am I seeing a spurious pattern?

Also, in hindsight, I’m glad that I went into statistics instead of applied math, since otherwise I would likely not have gained such a strong focus on quantifying variation. But I’m sure there are major conceptual insights I missed by not getting an applied math degree instead — I wonder what they are?

As for the traffic jam idea itself, here’s an empirical example (embedded in the original post).

A commenter:

Well … yes and no. I consider myself an applied mathematician but the majority of what I’ve done in that realm over the years would be better classified as ‘scientific and statistical computing’, aka getting shit done on computers for scientists and engineers.

I think there’s a *huge* overlap between applied mathematics and statistics now. I think it started with Gauss / Legendre in astronomy, Boltzmann in physics and Darwin in biology. Once you get to von Neumann, Ulam, scientific computing and the Monte Carlo method, I don’t see how you can tease the two disciplines apart any more.

My reply:

For sure, the overlap is immense, including the scientific computing part. I don’t mean to separate them artificially.

But if, say, some quantitatively-minded young person wants to get a degree in one or the other, how should they decide? What would you advise them?

Another commenter:

A VERY big difference in their problem-solving approaches is the concept of experimental design. Statisticians put very little weight on data collected for a hypothesis test without proper controls and a proper experimental design, while an applied mathematician may see a ton of such data as a great resource for curve fitting, etc.
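That point can be illustrated with a toy example (my own construction, not from the comment): a hidden confounder drives both a “dose” x and an outcome y. Curve fitting on the observational data finds a strong relationship, while randomly assigning x — i.e., imposing an experimental design — makes the apparent relationship vanish.

```python
# Toy illustration (my construction): confounded observational data vs.
# a randomized design for the same outcome-generating process.
import random

random.seed(0)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n = 2000
confounder = [random.gauss(0, 1) for _ in range(n)]

# Observational: x tracks the confounder; y depends ONLY on the confounder.
x_obs = [c + random.gauss(0, 0.3) for c in confounder]
y = [2 * c + random.gauss(0, 0.3) for c in confounder]

# Experimental: x is assigned at random, independent of everything else.
x_rand = [random.gauss(0, 1) for _ in range(n)]

r_obs = correlation(x_obs, y)
r_rand = correlation(x_rand, y)
print(round(r_obs, 2), round(r_rand, 2))  # strong vs. near-zero
```

The observational fit looks excellent even though x has no causal effect on y at all — which is exactly why the statistician discounts data collected without proper controls.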