
South African Journal of Science

On-line version ISSN 1996-7489
Print version ISSN 0038-2353

S. Afr. J. Sci. vol.114 no.3-4 Pretoria Mar./Apr. 2018

http://dx.doi.org/10.17159/sajs.2018/a0265 

NEWS AND VIEWS

 

Launch of the ASSAf Presidential Roundtable: University rankings

 

 

John Butler-Adam

Academy of Science of South Africa, Pretoria, South Africa


 

 


Keywords: ranking systems; indicators; Quacquarelli Symonds; Times Higher Education


 

 

On 7 February this year, the Academy of Science of South Africa (ASSAf) launched its Presidential Roundtable series on Science, Scholarship and Society at an event in Stellenbosch, with a discussion on the subject of 'University Rankings: Helpful or Harmful?'

The roundtable discussions are held quarterly, bringing together experts in specific fields to address, in each case, a critical issue percolating in society that requires the deliberation of the best minds on the topic.

In 2013, the Journal's Leader, titled 'Being the best? Yes - but best for what?'1, expressed several concerns about rankings:

The ranking system assumes that there is just one kind of university, with common criteria for measuring comparative success, while in many countries there are institutions that differ in terms of their markets and purposes in the higher education system.

That concern remains true 4 years later. While the presenters at the first roundtable provided a wide range of (often differing) views about, and insights into, the ranking systems, this old theme remained a common thread, albeit expressed in a variety of ways. Professor Lis Lange, for instance, expressed one of her concerns as follows:

One of the unintended consequences of rankings is that the idea of being in the top 100 becomes the strategy of universities. The whole being of the university is reduced to being one in the top 100 and this has very serious implications.2

Jonathan Jansen, President of ASSAf, who moderated the roundtable, put it this way:

Ranking for the sake of claiming bragging rights or boosting national egos is a problem, for then the practice of rank-ordering universities serves simply as a hurtful reminder of the academic inequities embedded in the global system of knowledge production.3

Only one system may escape these concerns - the U-Multirank system, which is both more sophisticated and more complicated than other major ranking systems.

This article is not, however, an overview of the four presentations (which is given elsewhere4) but a consideration of some of the implications of the different indicators, definitions and variables, and varying metrics, used by the major ranking systems. There are about 30 'global' ranking systems for universities, and 31 countries have their own (often multiple) internal ranking systems. Of the global rankings, there are really just four that are consistently taken seriously - Academic Ranking of World Universities (ARWU, formerly Shanghai Jiao Tong Rankings); Quacquarelli Symonds (QS); Times Higher Education (THE); and University Ranking by Academic Performance (URAP). QS and THE rely primarily (but not solely) on information submitted by institutions in response to the questions posed by the ranking system while ARWU relies on Internet sources and URAP specifically on information available from the Web of Science and InCites.

There are two major implications of the different indicators, definitions and variables, and varying metrics, used by the major ranking systems. The first is that the systems are not comparable with one another, so comparing a university's position on, say, the QS and ARWU lists makes no sense. The second is that the systems change their methodologies in various ways from time to time, and the participating institutions change in number from year to year, so that longitudinal comparisons for any one university most often make little or no sense. To make the point about variables and weightings, consider the QS and THE systems shown in Table 1.

Different ways of measuring, varying definitions, different weightings and, in three instances, different indicators, mean that, other than in exceptional cases, there can be little or no comparability. And even in the 'top 10' case, the specific rankings vary despite the tight, high-level competition. Figure 1 shows the 2018 rankings for the top 10 institutions as determined by QS and THE - where only the 'bottom' three institutions have consistent ranks, while Princeton University does not appear in the QS list, nor University College London on the THE list.

 

 

As far as year-on-year comparisons of rankings outcomes go, these are made very difficult by regular, often yearly, methodological changes, including changes to citations and survey data window periods, bibliometric data and periods that are considered, and percentages assigned to local and international perceptions. In addition, the expansion of rankings lists increases the pool of ranked universities each year and this renders trend conclusions meaningless by varying the scale. It also tends to make ranking a zero-sum game. This is also complicated by the proliferation of ranking systems in recent years in all rankings spheres: global, regional, young, subject rankings and employability.

Although varying in their approaches to the question posed by the ASSAf Presidential Roundtable, the presenters agreed on one key matter: although rankings are often decried (even derided) in public, they are assiduously followed by universities and their leaderships, and so they are (for the meantime) an unavoidable reality, one which may serve to influence institutional decision-making - and spending. At the same time, they are also dubious measures to use in any attempt to undertake a systematic analysis of their results within and between the systems.

 

References

1. Butler-Adam J. Being the best? Yes - but best for what? S Afr J Sci. 2013;109(9/10), Art. #a0038, 1 page. http://dx.doi.org/10.1590/sajs.2013/a0038

2. Basson A. Rankings ignore local contexts of universities, say experts [webpage on the Internet]. c2018 [cited 2018 Mar 15]. Available from: http://www.sun.ac.za/english/Lists/news/DispForm.aspx?ID=5419

3. Jansen J. Rankings not whole story. Herald Live. 2018 February 15. Available from: http://www.heraldlive.co.za/opinion/2018/02/15/jonathan-jansen-rankings-not-whole-story/

4. Makoni M. The great global rankings debate. University World News. 2018 March 09. Available from: http://www.universityworldnews.com/article.php?story=20180306114540487&query=great+global+rankings+debate

 

 

Correspondence:
John Butler-Adam
Email: j.butleradam@gmail.com

PUBLISHED: 27 Mar. 2018

Creative Commons License All the contents of this journal, except where otherwise noted, is licensed under a Creative Commons License