For over twenty years, colleges across the nation have been subject to ranking systems, tools that prospective students and parents use to determine which school would be the wisest choice. These ranking systems have had a significant impact, both because of their usefulness and because of the controversy that surrounds them.
There is a wide variety of ways in which ranking systems work. As a college education has become more mainstream in the life of a student and almost required in today’s job market, the popularity of these systems has increased. To that end, many organizations have joined the effort to collect data and provide easy-to-understand lists of colleges and universities, ranked in numerical order according to a formula that the respective organization has devised.
These publications are widely anticipated, and because of the extreme competition among educational institutions, the concerns, criticisms, and confusion surrounding ranking systems have attracted the attention and skepticism of many. Questions have been raised about the truthfulness and accuracy of the rankings. People are concerned about the subjective and seemingly biased manner in which the results are compiled; as a result, these dueling organizations have made efforts to outdo one another in their methods of producing an authentic product.
Data to support these rankings can be gathered in many different ways. Organizations wishing to publish rankings collect information either directly from the colleges or through various research foundations. The material that is collected can be statistical figures, subjective data, or a combination of both. The subjective data is typically drawn from student or faculty surveys whose questions are based on categories of perceived importance.
Once the data is received and broken down into categories, each grouping is given a percentage weight and calculated. The rankings are based on those proportions. For example, consider the following (based on U.S. News and World Report methodology):
Peer assessment, or peer review, is a process in which peers or colleagues evaluate each other and provide data or feedback. A school’s performance is based on the evaluators’ perception of the school’s quality, so the answers are inevitably shaped by each evaluator’s own opinion.
This method of obtaining data is most often weighted most heavily when calculating rank, and it has been the root of the aforementioned outcry over bias.
Another common data source is the school’s student retention rate, which is broken down into two parts: eighty percent of the retention score is based on the six-year graduation rate, while twenty percent is based on freshman retention.
Faculty resources incorporate a spectrum of largely objective data, which can look something like this:
Class Size: 30% weight for the proportion of classes with fewer than 20 students and 10% weight for the proportion of classes with 50 or more students.
Thus, a school benefits more for having a large proportion of classes with fewer than 20 students and a small proportion of large classes.
Faculty Salary Plus Benefits: 35% (adjusted for regional differences in cost of living)
Proportion of Professors with the Highest Degree In Their Fields: 15%
Student-Faculty Ratio: 5%
Proportion of Faculty Who Are Full Time: 5%
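To make the arithmetic concrete, the weights above can be combined into a single faculty-resources subscore. Below is a minimal Python sketch, assuming each indicator has already been normalized to a 0–1 scale; the function and variable names are illustrative, not U.S. News’ own, and the large-class proportion is inverted since a smaller share of large classes is better:

```python
# Hypothetical faculty-resources subscore using the weights listed above.
# All inputs are assumed to be normalized to the 0-1 range.

def faculty_resources_score(small_class_prop, large_class_prop,
                            salary_index, top_degree_prop,
                            student_faculty_index, full_time_prop):
    """Weighted sum of faculty-resource indicators (weights total 1.0)."""
    return (0.30 * small_class_prop          # classes with < 20 students
            + 0.10 * (1 - large_class_prop)  # fewer large classes is better
            + 0.35 * salary_index            # cost-of-living-adjusted salary
            + 0.15 * top_degree_prop         # highest degree in their fields
            + 0.05 * student_faculty_index   # favorable student-faculty ratio
            + 0.05 * full_time_prop)         # share of full-time faculty

# Example: a school strong on small classes, salaries, and credentials.
score = faculty_resources_score(0.6, 0.1, 0.8, 0.9, 0.7, 0.85)
print(round(score, 4))  # → 0.7625
```

Because the weights sum to one, a school that maxed out every indicator would score exactly 1.0 on this subscale.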
Student selectivity is calculated in part from a composite of standardized test scores, including SAT and ACT reading and math marks. This makes up only fifty percent of the student selectivity score. The other fifty percent is based on the proportion of freshmen who graduated in the top ten percent of their high school class and the ratio of students accepted to students who applied.
Spending per student can also serve as a measure of an institution’s quality. The more a school is shown to spend on instruction, student services, research, and other education-related costs, the more programs and resources it may be able to offer.
Graduation rate performance is an “added value” factor in calculating rank. In this example, the modeled organization compares a class’s actual six-year graduation rate against an estimated figure. If the actual rate exceeds the estimate, the data shows improved performance by the institution’s services and programs.
Alumni giving is an “indirect measure of student satisfaction,” based on the percentage of alumni who gave to their school within a certain window of time.
Based on this example, the final scores are arrived at by taking the weighted sum of each school’s category scores and then rescaling. The top-ranking school in each category is valued at 100, and the remaining schools’ scores are a “proportion of the top score.” They are then rounded and sequenced in descending order.
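The weighted-sum-and-rescale step described above can be sketched in Python. The category weights and school scores below are invented for illustration and do not reflect any publisher’s actual figures:

```python
# Illustrative final-ranking calculation: weighted sum of category
# scores, rescaled so the top school scores 100, rounded, and sorted.

# Hypothetical weights: peer review, retention, faculty resources,
# selectivity, spending, graduation-rate performance, alumni giving.
weights = [0.25, 0.20, 0.20, 0.15, 0.10, 0.05, 0.05]

# Made-up category scores (each normalized to 0-1) for three schools.
schools = {
    "Alpha U":      [0.90, 0.95, 0.80, 0.85, 0.90, 0.70, 0.60],
    "Beta College": [0.70, 0.85, 0.75, 0.70, 0.60, 0.80, 0.50],
    "Gamma Tech":   [0.60, 0.70, 0.65, 0.60, 0.70, 0.60, 0.40],
}

# Weighted sum per school, then rescale against the top raw score.
raw = {name: sum(w * s for w, s in zip(weights, scores))
       for name, scores in schools.items()}
top = max(raw.values())
rescaled = {name: round(100 * v / top) for name, v in raw.items()}

# Final ranking, highest score first.
ranking = sorted(rescaled.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(name, score)
```

The top school always lands at 100 by construction; the others come out as rounded proportions of its raw score.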
Since 1983, U.S. News and World Report has been collecting data and publishing college rankings, and it is the best-known source on the subject. Its methodology has received the most negative attention and criticism because of the constant changes made to the formulas and percentage weights each year.
In a U.S. News article explaining its method for calculating rank, the system is said to rest on two pillars: its reliance on "quantitative measures that education experts have proposed as reliable indicators of academic quality" (peer review) and U.S. News’ “nonpartisan view of what matters in education.”
Using the Carnegie classification, a common research tool, the schools are categorized, and the data that is gathered is assigned weights that reflect U.S. News’ judgment about which factors matter most.
(The above example of methodology and calculation was derived from U.S. News’ formula.)
The Princeton Review is another source of college and university rankings. It calculates its rankings of 62 schools based entirely on student surveys, tallied into eight different categories: Academic/Administration, Quality of Life, Politics, Demographics, Social Life, Extracurriculars, Parties, and Schools by Type. The results are based on students’ answers to the survey questions.
Ordo Ludus obtains its results by averaging data from four categories: academics, athletics, quality of life, and tuition and costs.
Forbes.com teamed with Dr. Richard Vedder, an economist at Ohio University, and the Center for College Affordability and Productivity to rank 569 undergraduate colleges. Like U.S. News, Forbes.com uses a combination of empirical and statistical data. Rankings are derived from five components (information taken from Forbes.com):
1. Listing of Alumni in the 2008 Who’s Who in America (25%)
2. Student Evaluations of Professors from RateMyProfessors.com (25%)
3. Four-Year Graduation Rates (16 2/3%)
4. Enrollment-adjusted numbers of students and faculty receiving nationally competitive awards. (16 2/3%)
5. Average four year accumulated student debt of those borrowing money (16 2/3%)
Washington Monthly has created a different approach to college rankings. The criterion this source sets forth is: “What are reasonable indicators of how much a school is benefiting the country?” Accordingly, it uses three factors in determining how to measure the data:
1. How well does the school perform as an engine of social mobility (ideally, helping the poor to get rich rather than the very rich to get very, very rich)?
2. How well does the school foster scientific and humanistic research?
3. How well does the school promote an ethic of service to country?
NOTE: The top schools ranked using Washington Monthly’s criteria are not consistent with U.S. News’ top schools. In fact, Princeton, the top ranked school for U.S. News fares significantly worse, ranking at 28.
There is a large variety of ways in which organizations, research establishments, and publications have determined rank among the nation’s many hundreds of colleges and universities. Even though many of the ranking methods in place can be considered skewed or biased, there is value in the fact that the information is out there and accessible. Rankings have opened the door to easy-access research; whether through the eyes of another or through the crunching of cold, hard numbers, the opportunity to learn and evaluate is available.
Update: StateUniversity.com is proud to announce the release of our own ranking system! Visit the school of your choice and see what its score is or navigate to the main page, where you can find a list of top overall schools to help with your school research.