
ilpenseroso

Giving to airy nothing | A local habitation and a name.


How should a university be measured?

Ranking season came to an end for another year today, with the last ranking in the annual Academic Ranking of World Universities (ARWU) – QS World University Rankings (QSWUR) – Times Higher Education (THE) World University Rankings triumvirate now out. The last of these, THE’s, threw up one of the major stories of this year’s cycle: the announcement that the University of Oxford had become the first UK-based university to top a global university ranking since the University of Cambridge led the QSWUR in 2011.

It has not been the best summer for UK higher education, so the wave of triumphalism that arose in the wake of this announcement was – is – perhaps understandable. QS showed 38 of the United Kingdom’s 48 top-400 institutions dropping, and Oxford topping THE’s table shouldn’t obscure the equally unmissable regressive trend experienced by UK universities in that particular ranking. Amid that triumphalism, however, another query came up, posed most vociferously by Professor Alan Smithers of the University of Buckingham. It ran something like this: why does the University of Oxford lead THE’s table while ranking 6th in QS’s table and 6th in the ARWU tables? Why does Caltech perform so much better in THE’s table than in QS’s? Why do two ranking organisations, measuring the same object, publish tables yielding such differing results?

The answer is, in one sense, a simple one: different methodologies, with different criteria used to measure universities. Where similar criteria are used, they are weighted differently. THE use 13 criteria, including measures of research income and industry income. QS’s rankings use six criteria, including a unique measure of graduate employability. ARWU’s indicators include the number of Nobel Prize-winning alumni a university has produced. QS include two indices of internationalisation, while ARWU measure the number of times a university’s researchers publish papers in Nature or Science, two highly regarded academic journals for scientists. When different measurements are being made, it seems a truism to state that different results will ensue.
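To see how this plays out, here is a minimal sketch. The indicator names, scores, and weights are invented for illustration and do not reproduce any compiler’s real methodology; the point is simply that two rankers weighting the same underlying data differently can place the same two universities in opposite orders.

```python
# Purely illustrative: invented indicator scores and weights, not the real
# QS / THE / ARWU methodologies.

universities = {
    "University A": {"teaching": 92, "research": 85, "citations": 78, "international": 95},
    "University B": {"teaching": 84, "research": 90, "citations": 93, "international": 70},
}

# Two hypothetical compilers weighting the same four indicators differently.
weights_by_ranker = {
    "Ranker 1": {"teaching": 0.40, "research": 0.30, "citations": 0.20, "international": 0.10},
    "Ranker 2": {"teaching": 0.10, "research": 0.30, "citations": 0.40, "international": 0.20},
}

def composite_score(scores, weights):
    """Weighted sum of indicator scores."""
    return sum(scores[indicator] * weight for indicator, weight in weights.items())

for ranker, weights in weights_by_ranker.items():
    ordering = sorted(universities,
                      key=lambda u: composite_score(universities[u], weights),
                      reverse=True)
    print(f"{ranker} table: {ordering}")
```

Run as written, the same pair of universities swaps places purely because of the weighting, which is the effect that plays out, at far greater scale, across the real tables.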

This point has been made repeatedly, including in an earlier blog post I wrote in June. However, it is clear that this answer does not, will not, and should not appease those interested in university rankings. The qualms are as follows: (1) the use of such different measurements in itself – quite aside from the different results they yield – is taken to indicate that all rankings lack authenticity; (2) it is difficult, for the academic as well as the layperson, to work out which ranking best captures the essence of a university; (3) because all three rankings – each with their competing claims – are released in such quick succession, the public are presented with three competing, potentially contradictory narratives at the same time.

The first two qualms seem related, both being concerned with the construct validity of university rankings and the measurements used in their compilation. Which of these competing rankings is really measuring what it means to be the best university in the world? These concerns are understandable – after all, it would make the lives of a number of people much easier if one were able to say, with utter certainty, that MIT is the best university in the world.

To be able to answer the question with that level of certainty, however, would require a number of conditions to be fulfilled. These conditions do not currently exist. For one, it would require an indisputable, uncontroversial answer to the question of university purpose. This question has not been conclusively answered; perhaps the simplest proof of this is a recent All Souls General Paper that asked candidates, very simply, ‘What is a university for?’

Moreover, where the question has been answered, it is clear that the answer is not a single one. Universities are multifaceted institutions with multiple purposes. Not only is there disagreement about precisely what universities should look to achieve; where there is agreement (most would agree that a university’s purposes include teaching and research), there is also disagreement on which of those purposes matter most. Is teaching more important than research? Should selectivity feature at all, and, if so, is it more or less important than internationalisation – a trait that, it is argued, shows that universities are aware of their status as global knowledge hubs?

The question also remains broadly unanswerable because different universities prioritise different things. The King Abdullah University of Science & Technology, the institution that scores most highly on QS’s citations per faculty indicator, does not admit undergraduates, being focused solely on graduate research and teaching. Clearly, were the administrators of KAUST to attempt to answer that All Souls question, teaching undergraduates would not feature in their answer. The attempt to measure universities according to their essence is a misguided one: there is no such thing as the essence of a university – the very word implies something immutable and universal that does not exist.

This affects anybody looking to measure university performance, because they will need to settle upon their best definition of what should matter when trying to assess such institutions. In the light of these disagreements, it is neither surprising nor alarming that different ranking organisations reach different conclusions about what factors unite all universities, and which warrant comparison.

Nor does it render redundant the very attempt to measure university performance. While no unassailable answer to these questions has been reached, and while universities do prioritise different objectives, common objectives do exist. To put this another way: while there is no objective answer to the question of what universities should try to do, there are entirely reasonable answers to the question of what universities are trying to do. Where comparable data can be sourced and methodologies are available to the public, there are good arguments for attempting to answer that question.

Nor is the unwillingness of students, journalists, or parents to check and compare methodologies a fault of the ranking compilers. As with any other consumption choice, consumer responsibility exists. Ranking compilers are no more at fault when readers choose not to read methodologies and make decisions based on a single ranking than Apple would be if a buyer, having read nothing about the iPhone 7’s specifications, became very upset that they were unable to charge their phone and play music simultaneously.

This provides a hopefully neat segue into dealing with (3): the idea that having three (four, if one includes US News) rankings providing conflicting information is a bad thing for students, parents, and academics.

This criticism only makes sense if one assumes that there is an essential purpose to a university: only then would it follow that one compiler had struck upon the one best way to measure performance against that essence, or essential purpose. Only then would it follow that the others merely served to obfuscate the truth reached by that one – that students, and other key parties, were disadvantaged by having multiple sources of information.

To take the view that multiple rankings provide contradictory narratives also requires one to believe that rankings must necessarily be in competition with one another. Of course, there are those who will endorse this view, and I do not speak for them. However, I would argue that it makes far more sense, if one is a student making a decision, or a journalist trying to assess the state of national higher education, to see the information provided by multiple rankings as complementary rather than contradictory. Here are two examples of how this process might work:

(1) Looking for similarities at the institutional level. Of course, if one is searching for an objective answer as to which is the best university in the world, seeing Oxford ranked 6th in QS’s rankings and 1st in THE’s will cause confusion. Similarly, seeing Durham ranked 6th in The Guardian’s national rankings, 96th in THE’s, 74th in QS’s, and in the 151-200 band in ARWU’s, will seem bizarre.

Perceiving the information as complementary rather than contradictory becomes much easier when one treats a university’s position as a general guide to its performance, rather than zooming in on a single numerical position and making a direct comparison. UCL finishing 15th, 7th, and 17th across the three tables suggests, compellingly, that UCL is one of the world’s outstanding institutions, and that it performs better on key metrics than, say, Cardiff (182nd, 140th, and the 101-150 band respectively). The rank correlations between QS and THE are high: 0.8 overall, and 0.71 across the top 100.
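For anyone who wants to sanity-check that kind of figure, a rank correlation is straightforward to compute. The sketch below uses SciPy’s Spearman correlation on placeholder positions for a handful of institutions; the numbers are illustrative inputs, not the actual 2016 tables.

```python
# Illustrative only: the positions below are placeholders, not the real QS/THE
# tables. Spearman's rho measures how similarly two rankings order the same
# institutions (1.0 means an identical ordering).
from scipy.stats import spearmanr

qs_positions  = [1, 4, 6, 7, 15, 74, 140]   # hypothetical QS positions
the_positions = [5, 2, 1, 3, 17, 96, 182]   # hypothetical THE positions

rho, _p_value = spearmanr(qs_positions, the_positions)
print(f"Spearman rank correlation: {rho:.2f}")
```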

Again, this becomes of far less concern the moment one becomes aware of the different methodologies. Durham has never had a Nobel Prize winner, so will naturally suffer in ARWU’s table. It scores 90/100 for Teaching Satisfaction, so does well in The Guardian’s table. If, as a student, you perceive teaching quality as fairly unimportant but see a university’s ability to produce Nobel Prize winners as an essential determinant of its quality, ARWU’s information would be the more useful.

(2) Looking for narratives at the national and/or global level.

QS’s rankings this year had one central narrative: as noted in my opening paragraphs, UK institutions are performing worse. Taken alone, this leaves policy-makers, students, and other interested parties with two options. They can either take the single source of information as definitive, and act accordingly – or they can take it as entirely worthless, and ignore it.

Neither would be an optimal course of action. To take a single ranking, adopting a single methodology, as authoritative would be to make decisions blind to what that ranking missed, overvalued, or undervalued. To ignore the ranking completely would be equally inadvisable, forgoing the opportunity to act on what it said about performance according to the key (and relevant) indicators it did use.

When THE’s rankings also show, overwhelmingly, that the UK’s top-200 universities are dropping, the two observations essentially corroborate one another, and any actions taken in response are made from a stronger evidential position. This works at the institutional level as well as the national: Cambridge drops one place in QS’s rankings, stays stable in THE’s, and rises one in ARWU’s, resulting in a net change of zero places. Conversely, LSE drops two in THE’s, drops two in QS’s, and falls one band in ARWU’s, suggesting a minuscule but unmissable drop in performance.
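As a rough illustration of that reading, one could tabulate the year-on-year movements quoted above and look at the overall direction. This is only a sketch of the idea; in particular, LSE’s one-band fall in ARWU is approximated here as a single place.

```python
# Year-on-year movements across the three tables (negative = a fall).
# LSE's one-band drop in ARWU is approximated as -1 purely for illustration.
movements = {
    "Cambridge": {"QS": -1, "THE": 0, "ARWU": +1},
    "LSE":       {"QS": -2, "THE": -2, "ARWU": -1},
}

for university, deltas in movements.items():
    net = sum(deltas.values())
    verdict = "rising" if net > 0 else "falling" if net < 0 else "flat"
    print(f"{university}: net change {net:+d} place(s) -> broadly {verdict}")
```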

When meaningful disparities do occur – and there are very few as far as UK universities are concerned – what then? In every instance, the answer will be a methodological difference, with major discrepancies likely to be due to a sharp alteration in performance in an indicator used by, say, QS, but not THE or ARWU. When this happens, the clear and entirely acceptable course of action is for an individual to assess whether the indicator in question is one that matters to them, and act accordingly.

Finally, I would argue that these rankings are enhanced, rather than etiolated, by their being produced in quick succession. This ensures that universities are being measured at the same point in time; were one ranking to launch using data from one academic year, and another to use data from another, the findings would be far less comparable than they currently are.


Dissecting Rankings: A Blog Response

Much of the following post was penned as part of a LinkedIn comment in response to an article written by Dean Hoke on 15 June 2016; the article, and my comment, can be found here: https://www.linkedin.com/pulse/ranking-uae-universities-us-news-world-report-vs-qs-dean-hoke. It provides important context for the piece that follows.

University rankings are currently being produced in their multitudes: each year, one can expect various offerings from QS Quacquarelli Symonds, Times Higher Education, Shanghai Jiao Tong (ARWU), The Guardian, Forbes, Business Week, US News – and others. Some of these rankings are global, some are regional, and others rank universities by subject or by faculty.

Of course, to justify its existence, each ranking needs to provide a unique, fresh analysis – and different rankings will necessarily yield different results, as a consequence of different methodologies. These differences can surprise, and can (and should) prompt questions – all the more so when rankings appear to be measuring the same universities or nations.

Continue reading “Dissecting Rankings: A Blog Response”
