Much of the following post was penned as part of a LinkedIn comment response to an article written by Dean Hoke on 15 June 2016; the article, and my comment response, can be found here: https://www.linkedin.com/pulse/ranking-uae-universities-us-news-world-report-vs-qs-dean-hoke. It provides important context for the piece that follows.

University rankings are currently being produced in their multitudes: each year, one can expect various offerings from QS Quacquarelli Symonds, Times Higher Education, Shanghai Jiao Tong (ARWU), The Guardian, Forbes, Business Week, US News – and others. Some of these rankings are global, some are regional, and others rank universities by subject or by faculty.

Of course, to justify its existence, each ranking needs to provide a unique, fresh analysis – and different rankings will necessarily yield different results, as a consequence of different methodologies. These differences can surprise, and can (and should) be cause for question – all the more so when rankings seem to be measuring the same universities or nations.

Mr. Hoke, in his post, rightfully queries why US News’s recent Arab Rankings produce vastly different results from those yielded by my own ranking organisation, QS. Though there is general agreement, Mr. Hoke notes, on United Arab Emirates University (9th in US News’s ranking, 6th in QS’s), there are otherwise significant discrepancies. Masdar Institute of Science and Technology, 14th in US News’s version, does not feature among the 100 Arab institutions ranked by QS. The American University of Sharjah, 21st in US News’s rankings, is 7th in those published by QS.

Image credit: US News.com via http://bit.ly/1sGDKha

“What does this say about rankings?”

The answer is both simple and complicated. The simple answer is that rankings are designed to provide a model – not a definitive answer, a model – by which university performance can be compared, but different rankings have different beliefs about which aspects need to be captured, and how much emphasis should be given to these aspects. The complicated answer involves looking at what those different beliefs entail.

Image credit: QS Quacquarelli Symonds via http://bit.ly/1sGDKha

Looking at the US News methodology, one finds that nine of its indicators involve some sort of assessment of research impact and/or productivity – indeed, these indicators account for 75% of an institution’s score. This approach certainly has merit in illuminating university performance for those who believe that research accounts for roughly three-quarters of a top university’s mission. However, it will naturally give vastly different results from those yielded by a methodology in which research-based indicators account for only 10% of an institution’s final score – as with the QS University Rankings for the Arab Region. It would be wrong, I think, to call either approach ‘correct’ – both are aiming to capture different elements of university performance, and shine differing proportions of their respective spotlights on these elements.
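To make the arithmetic concrete, here is a minimal Python sketch of how the same indicator scores can produce opposite orderings under two weighting schemes. The universities, indicators, scores, and weights below are entirely hypothetical, chosen only to illustrate the mechanism; neither US News nor QS publishes its data in this form.

# Hypothetical sketch: how different weightings reorder the same institutions.

def composite(scores, weights):
    # Weighted sum of normalised indicator scores (0-100 scale assumed).
    return sum(scores[k] * weights[k] for k in weights)

# Illustrative indicator scores for two invented universities.
universities = {
    "University A": {"research": 90, "employability": 55, "teaching": 60},
    "University B": {"research": 50, "employability": 85, "teaching": 80},
}

# Two illustrative schemes: one research-heavy, one spreading weight
# across teaching and employability instead.
research_heavy = {"research": 0.75, "employability": 0.125, "teaching": 0.125}
research_light = {"research": 0.10, "employability": 0.45, "teaching": 0.45}

for label, weights in [("research-heavy", research_heavy),
                       ("research-light", research_light)]:
    ranked = sorted(universities,
                    key=lambda u: composite(universities[u], weights),
                    reverse=True)
    print(label, "->", ranked)

# Output:
# research-heavy -> ['University A', 'University B']
# research-light -> ['University B', 'University A']

Neither ordering is more ‘correct’ than the other; each simply reflects a different judgement about what a university is for.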

However, QS’s methodology does have a rationale behind the reduced weight given to research-related metrics in its Arab Region rankings, and I – biased though I am – believe this rationale means that our rankings are better-placed to meaningfully capture Arab university performance. The rationale is twofold. The first part is that QS regard research and reputation as only one part of a university’s mission: consequently, other factors like internationalisation and teaching quality (measured via proxy) are also accounted for. For example, QS aim to account for an institution’s teaching quality using faculty/student ratio, on the assumption that students at universities with a lower student-to-teacher ratio will enjoy closer academic supervision.

The second is that universities whose reputation is not yet global have different priorities and strategic objectives from those of, say, MIT. When considering universities in the Arab Region – which typically have considerably smaller research portfolios than those in other regions – it is worthwhile, we believe, to measure not just research output, but also the steps universities are taking to facilitate future research success. As a result, we measure the proportion of staff at Arab universities holding PhDs – assuming that faculty members with a qualification designed to denote research expertise will be better-placed to produce field-changing research in the future. Perhaps even more importantly, it’s worth noting that our rankings are designed specifically with students in mind, and the majority of the focus groups, events, and surveys we conduct have students as their main audience. This means that employability – a ubiquitous concern among students – is allocated greater weight in our Arab rankings: 20%, as opposed to the 12.5% it is allocated in the US News equivalent.

(Transparency is essential, and all data can be found here: http://www.iu.qs.com/employer-survey-responses/)

This is made possible in part by the fact that we enjoy a far larger pool of employer survey responses: responses gathered from 2,388 employers in the Arab Region contribute to our Employer Reputation indicator, as opposed to the 303 employer responses yielded by US News’s survey. Different rankings capture different aspects of university performance – each not without merit – but we believe ours are better-placed to capture an aspect so important to student choice.

It’s also worth noting that QS have different eligibility requirements to those of US News – specialist institutions are typically excluded from QS’s rankings (an exception being the QS World University Rankings by Subject).

“I would be hard pressed to tell a university administrator, high school counselors or parent which ranking company is the most accurate or the best to follow.”

One easy answer to this qualm, then, is that the “best to follow” will be determined by the priorities of the university administrator, high school counselor, or parent. It so happens that none of the above groups are QS’s main audience, and we aim to provide a methodology that produces results that are best for students to follow. I believe that the focus on multiple aspects of university performance – teaching, ability to produce employable graduates, internationalisation, research, academic repute, employment of highly qualified staff – means that students seeking to compare universities in the Arab Region – or indeed any of the regions we cover – will find their concerns better accounted for in a QS ranking.

The accord between the two rankings as far as UAEU is concerned means that its status as a top-class comprehensive university is fairly well corroborated. Where there are discrepancies, it simply means that different aspects of university quality are being emphasised. This, of course, is why students are well-advised to understand and appreciate methodology when consulting rankings. I encourage them, of course, to start by looking over our most recent Arab Region rankings in full, along with the relevant methodology.

http://www.topuniversities.com/arab-region-rankings/methodology
http://bit.ly/1tqYvh7

Jack N. Moran
15. 06. 2016