It’s worse than we could have expected. After a blog post published in February by university math professor Michael Thaddeus showed that Columbia had provided fraudulent data to the magazine, U.S. News summarily unranked the school. When the university was only able to update some of the data in time for the latest rankings, the editors “assigned competitive fixed values”.
In other words, the magazine invented data to keep a popular university in its rankings.
It’s a decision that shows how much of the magazine’s so-called objective assessment of institutions’ educational quality rests on the beliefs, feelings, and judgments of its editors. It also demonstrates that the driving force behind the rankings is to generate publicity and boost their prestige. A steep fall by Columbia would have forced readers to question the legitimacy of the rankings and to doubt U.S. News’s authority.
This authority has been building since the magazine published its first college rankings nearly 40 years ago. To fully understand the void that U.S. News filled in 1983, and the problems that followed in the ensuing decades, it is worth considering the genesis of college rankings.
Attempts to quantify the academic quality of American colleges began in the early 20th century. This period saw a rapid growth in the sciences of measurement (testing, grading, etc.) and a boom in the number of colleges. Unfortunately, this is also when eugenics emerged, and the overlap between measurement scientists and eugenicists at the time was significant. The two types of rankings that grew out of this early scholarship shaped U.S. News’s efforts: results-based rankings and reputation rankings.
Results-based rankings, in particular, have a troubled history. They are largely based on the work of James McKeen Cattell, a psychologist – and eugenicist – at Columbia University, and Kendric Charles Babcock, a specialist at the Bureau of Education, precursor to the US Department of Education. In 1903, Cattell created an evaluation of colleges based on the number of “prominent men” producing work on their campuses, and he used these results to devise a ranking. He also believed that the West was in decline and that we could “improve the stock by weeding out the unfit or favoring the gifted”.
As Cattell sounded the alarm about the decline of “great men of science,” the Association of American Universities asked Babcock to determine which colleges best prepared their students for graduate school. The AAU believed that by working with the impartial Office of Education, the rankings would be better accepted. However, an early draft of Babcock’s report leaked, and the ensuing backlash from lower-ranking colleges prompted incumbent President William Howard Taft to issue an executive order to rescind the report.
The other ranking methodology that emerged – one based on reputation – required soliciting informed opinions from expert evaluators about institutions or programs in their field. Early reputation rankings primarily assessed graduate programs and were grounded in outcomes (publication output, for example) rather than simply the perception of an institution as a whole.
The U.S. News editors, on the other hand, chose to base the first version of their rankings entirely on a reputation survey of 1,300 university presidents, many of whom had no familiarity with the institutions they were rating. The group of evaluators eventually grew, but problems remained. A National Opinion Research Center report commissioned by the magazine in 1997 found that, for evaluators, classifying institutions into quartiles was “a nearly impossible cognitive task.” The center also pointed out that each assessor was asked to evaluate a large number of institutions – around 2,000.
Good researchers try to limit the influence of personal opinion and bias when selecting criteria. The editors of U.S. News have apparently always done the opposite, starting with their own judgment and only reluctantly allowing expert opinion to influence the methodology.
U.S. News and World Report’s methodology has been repeatedly shown to benefit wealthier institutions and to suffer from measurement error – the difference between what we want to know and what is actually measured. The 1997 National Opinion Research Center report stated that the weights used “lack any defensible empirical or theoretical basis” and that “we were troubled by how little is known about the statistical properties of the measures”. In response to the center’s recommendation that the methodology remain constant for five to seven years, the U.S. News editors wrote that “we prefer to retain our options to make small changes to the ranking model whenever we believe it will improve the quality of the results”.
Even as the rankings evolved in response to criticism and feedback, Morse chose which additional criteria to include and how much weight each factor carried, stating in 2004 that “each factor is assigned a weight which reflects our judgment of the importance of a measure”. The magazine boldly claimed in 2008 that “it draws on quantitative measures that education experts have proposed as reliable indicators of academic quality, and it is based on our nonpartisan view of what matters in education”. This statement shows the hubris of the publishers – that what they think matters in education should be the guiding force.
Time and again, Morse asserted his judgment over that of educators, researchers, and the colleges themselves, and penalized colleges that did not value the magazine’s rankings.
Rather than adjust the formula to account for the varying rates at which test scores were submitted, for example, the magazine arbitrarily assigned lower scores to colleges, like Sarah Lawrence, that adopted test-optional policies. For colleges like Reed, which didn’t submit any data at all, U.S. News created data, artificially lowering the institution’s ranking. Schools without the national profile to garner enough peer reviews each year were either unranked (in the 2010 rankings) or “assigned values equivalent to the lowest average score among schools” (in the 2023 rankings).
The rankings’ error is also clear in their use of “graduation rate performance”, which quantifies how a college performs against the magazine’s own prediction of its graduation rate. Again, Morse chose to present his beliefs and judgments as facts and data.
When asked in a recent interview whether the professionals completing the peer review survey create circular feedback by relying on previous years’ U.S. News rankings to score unfamiliar colleges, Morse replied, “If they’re telling you that, then that’s not what we expect the way people do their ratings. … We think there’s more to thought than that, but we haven’t done any kind of social science research to prove or disprove that point.”
While Eric Gertler, chief executive of U.S. News, claims that the rankings are an “objective resource to help high school students and their families make the most informed decisions about college and to ensure that the institutions themselves are held accountable for the education and the experience they offer their students”, the reality is very different.
Colleges will continue to engage in deception, manipulation, influence, and outright lying to alter rankings. Rankings bring out the worst in colleges and damage the higher education landscape. Public colleges suffer because the rankings are tilted in favor of wealthier, smaller private colleges. Colleges that choose to focus on excellence in areas other than those U.S. News values, or that do not want to participate in the rankings at all, suffer because uninformed students decide where to enroll based in part on the rankings.
This doesn’t seem to bother Morse and U.S. News – as long as they continue to sell magazines.