The Ranking Jangle in Higher Ed

There are three kinds of lies: lies, damned lies, and statistics. – Mark Twain

How does one choose a single university out of all the stellar universities in this country? Answer: Rankings?

Earlier this month, US News and World Report put out their annual ranking of US colleges. Happily, the University of Baltimore was ranked in the “Top 25 Public Universities in North Region.” And considering how many public colleges there are in the northern region of the US, that ain’t too shabby. The university has come a long way, even since I started in 2009.

But what exactly does that mean? How did all those highly-caffeinated statisticians in their binder-clogged offices come up with that number? What was taken into account? And how could they possibly have quantified some of the most intangible, and arguably the most important, aspects of a great education–like engaging teachers, or peer groups that challenge you to always do better?

Well, the short answer is: they can't. (To find out how they do come up with their scores, here is a great article written a few years ago in the New Yorker, which goes into US News's process–and the pitfalls of that process.)

Just to give an example of how rankings work, or don't work, let's look at two schools that, by almost any criterion, are pretty damned good (to use Twain's turn of phrase): MIT and Harvard.

According to US News and World Report, MIT is listed as the top school in the world, just above Cambridge (#2) and Harvard (#3). But in the United States, MIT is ranked a lowly 7th, way behind Harvard and Yale and even Duke–not to cast aspersions on Duke, I'm just saying. And Harvard, which is listed 3rd overall but first among US schools in the world-wide ranking, is, when compared only to schools in the US, ranked 2nd. To say that another way: when compared to every other school on the planet, Harvard is the best in the US, but when compared to every school in the US, they are number 2. If none of this makes any sense to you, good. It shouldn't.

But there really is a good reason why this doesn’t make sense.

Though we sometimes fall under the impression that everything can be quantified and evaluated in an empirical manner using the voodoo of big data and data mining and really all things data–data just cannot explain everything, especially when it comes to how humans actually interact. Even if it were possible to get numbers that show the quality of human interaction (whatever that means), or the satisfaction of a set of humans after an interaction, those numbers would be highly, highly subjective. And beyond that, someone–a statistician–has to interpret that data and weigh it alongside other factors–like cost and endowments and student/teacher ratio and on and on–which all go into, in this case, the ranking of a university. So even if you believe that they are able to quantify each factor accurately, it is highly unlikely that they have weighed one factor against another in the way that would best suit any one prospective student.
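To see just how much the weighting step matters, here's a minimal sketch. The schools, scores, and weights below are all invented for illustration–none of this comes from US News's actual formula–but it shows how two rankers using the very same data can put the same two schools in opposite orders simply by prizing different factors:

```python
# Invented per-factor scores (higher is better), normalized to 0-1.
# These numbers are made up purely to illustrate the weighting problem.
schools = {
    "School A": {"reputation": 0.95, "cost": 0.40, "ratio": 0.70},
    "School B": {"reputation": 0.80, "cost": 0.90, "ratio": 0.85},
}

def composite(scores, weights):
    """Weighted sum of factor scores -- the core of most ranking formulas."""
    return sum(scores[f] * weights[f] for f in weights)

# A ranker who prizes reputation...
w1 = {"reputation": 0.70, "cost": 0.15, "ratio": 0.15}
# ...versus one who prizes affordability.
w2 = {"reputation": 0.15, "cost": 0.70, "ratio": 0.15}

for w in (w1, w2):
    ranked = sorted(schools, key=lambda s: composite(schools[s], w), reverse=True)
    print(ranked)  # the two weightings produce opposite orderings
```

Same data, same formula, opposite "number 1"–which is exactly why the single number attached to a school tells you more about the ranker's priorities than about the school.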

So, long story short: the project itself is flawed. Deeply, deeply flawed.

But all that being said, I really enjoy looking at rankings. Who doesn't? They are great entertainment, and an easy, quick way to separate the good from the bad. But if you are looking for a sole factor on which to base your decision–any decision, but especially one as important as college–I wouldn't put too much stock in the exact number assigned to any given school.

Still, it’s pretty great to be in the top 25.
