South African Journal of Science

On-line version ISSN 1996-7489
Print version ISSN 0038-2353

S. Afr. j. sci. vol.114 no.3-4 Pretoria Mar./Apr. 2018

http://dx.doi.org/10.17159/sajs.2018/20170344 

REVIEW ARTICLE

A review of South Africa's National Research Foundation's ratings methodology from a social science perspective

Chris Callaghan

Economic and Business Sciences, University of the Witwatersrand, Johannesburg, South Africa

Correspondence


ABSTRACT

One of the South African National Research Foundation's (NRF) activities is to award ratings, according to predefined categories, to academics who apply. Explicitly or not, these ratings form part of the submissions academics make for promotion and employment in South African universities. As such, methodological assessment of the validity of this system is important. This paper seeks to conceptually evaluate certain characteristics of this system against general principles of reliability and validity. On the basis of this evaluation, it is argued that the NRF rating system cannot unconditionally be assumed to be a valid or reliable differentiator of individual academics. Using Management Science as an example of a social science field that draws from multidisciplinary theoretical and methodological frameworks, this paper identifies certain validity issues associated with the current NRF rating system, and makes recommendations for improvements.
SIGNIFICANCE:
Certain validity issues are highlighted and arguments are made to improve the methodology used by the NRF to rate researchers.
Issues related to multidisciplinarity and mode two knowledge production are considered.
Technological advances that have made scientific measurement of research productivity and impact possible are discussed.
Problems with subjective methodologies are identified, together with their ethical consequences.

Keywords: scientific methodology; NRF; rating methodology; South Africa; subjectivity bias


Introduction

If one sought to identify dominant tensions in the literature relating to the progress of science, a candidate would be the tension between Popper's1 falsifiability thesis and Kuhn's2 thesis that science progresses as much through changes in the shared values of scientists as through scientific advances in their own right. According to Kuhn2(p.2), 'science does not develop through the accumulation of individual discoveries and inventions' but through changes in the values and beliefs of scientists, termed 'paradigms', which typically resist evidence-based change until evidence has accumulated sufficiently to tip this balance of beliefs.

According to Still and Dryden3(p.273), Kuhn's theory 'seemed to put a distance between nature and scientific practice, and to undermine Popper's principles of demarcation'. What is of critical importance about Kuhn's2 contribution is perhaps the way human subjectivity is placed centre stage in what was considered 'objective' natural science, thus inviting academic scrutiny of the role of subjectivity in holding back the progress of science, notwithstanding social scientific critique of objectivity itself and other questions around the legitimacy of scientific progress as a goal.4

It has long been known that systems theory underlies the workings of human systems, particularly in fields such as Management Science5, and that there are fundamental differences between the natural and social sciences, not only in methodological approaches but also in terms of focus6. These differences have important implications for the tension between monodisciplinary and non-monodisciplinary research, which is summarised by van den Besselaar and Heimeriks7 as follows:

Interdisciplinarity is an important and complex issue. It is important as modern society increasingly demands application-oriented knowledge, and the usability of scientific knowledge generally requires the combination and integration of knowledge from various scientific disciplines. Traditionally, the disciplines have been very dominant in the organisation of the science system, in the reward system, and in the career system. Nevertheless, funding agencies are increasingly stressing the social relevance of research results, and consequently a new mode of application-oriented research is emerging, on top of traditional academic research. (p. 1)

These changes have therefore essentially given rise to two modes of knowledge production, and to a differentiation of research according to the extent to which it is disciplinary versus interdisciplinary.7 This longstanding differentiation is highlighted by Gibbons et al.8, who argue that these trends 'amount, not singly but in their interaction and combination, to a transformation in the mode of knowledge production', which in turn 'is profound and calls into question the adequacy of familiar knowledge producing institutions' (p.1). Given this differentiation between modes of research, and the growing need for applied research that seeks to solve societally important problems (research defined more by the problem than by disciplinary origin, and therefore necessarily interdisciplinary7,8), it is argued here that researcher rating systems applied in ways that discriminate against interdisciplinary research in the social sciences can cause harm. Such systems might disincentivise societally important research in favour of monodisciplinary research, and may give rise to conditions that incentivise 'gaming', in which research is conducted for the express purpose of meeting the goals of a system, or in which these goals are prioritised at the expense of societal contributions. It is argued in this paper that the societal costs of such a system might be particularly salient in the South African context, and similar contexts, in which localised knowledge is particularly important, yet localised contexts can be poorly represented in international high-impact journals.

Alvesson and Gabriel9 decry the standardisation of research and publications 'into formulaic patterns that constrain the imagination and creativity of scholars and restrict the social relevance of their work' (p.245), which therefore results in the proliferation of non-innovative research publications. This concern is echoed in criticisms of the culture of 'publish or perish'10, which seems to contribute to wasteful publication and unethical practices11. In light of the potentially serious limitations of a system that creates a culture of accumulating points and impact factor scores, while at the same time rejecting rating applications on account of a lack of monodisciplinary focus, notwithstanding societal contribution, this paper seeks to strike a cautionary note, and to offer certain insights from the literature that might usefully be incorporated into such a competitive system to reduce the harm it may cause.

Drawing from the relevant literature, this paper also seeks to make the argument that a system that rates academics through subjective rather than strictly objective evaluation might lack the validity required to serve as a differentiator of the quality of academics, based on their research. Similarly, given evidence of strong cross-disciplinary differences in the relationship between objective criteria and the subjective ratings awarded under the NRF system,12 research into ratings in the Management field is considered important, and perhaps timely.

Fedderke12 found, for example, that, on average, 'C-rated scholars in the Biological Sciences have the same h-index as A-rated scholars in the Social Sciences' (p.3), and that ratings in the Business Sciences were the most difficult to attain for individuals with high h-indices, exceeded in difficulty only by those in the Medical and Biological Sciences.12 Arguably, such attempts to ascribe a rating to an individual can suffer from a host of biases well documented in the scientific literature. This paper therefore seeks to identify certain potential biases associated with the application of the South African National Research Foundation's (NRF) rating system, and to link these potential biases to a discussion of the consequences of such a system, as well as of how these consequences accrue differently to different stakeholders, particularly societal stakeholders, who might be the most powerless in the face of a system that might not incentivise societal problem solving. These societal costs are expected to also result from decision criteria that subjectively deviate from relatively more objective measures of research performance.
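As context for Fedderke's12 comparison: the h-index is itself a fully objective computation, defined as the largest h such that a researcher has at least h papers with at least h citations each. A minimal Python sketch follows; the citation counts are invented for illustration.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Invented example: six papers with these citation counts yield an h-index of 3.
print(h_index([10, 8, 5, 3, 1, 0]))  # 3
```

An index of this kind illustrates the sort of objective measure that, as discussed below, Fedderke12 suggests could be computed directly, without subjective panel judgement.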

Justification of the research

The arguments made in this paper are considered important for the following reasons. Firstly, the violation of central tenets of the academic process of gatekeeping itself might be considered in turn a violation of academic ethics, in that principles of anonymity and confidentiality of identity are not upheld in NRF rating assessments. This is perhaps especially problematic given the intensity of identity politics13, and the racially oppressive history of the country associated with institutional racial discrimination on the part of the apartheid regime14-16. Given this historical context, and given the career implications of rating, to have the racial and gender identity of an applicant known to assessors is perhaps unethical. This is especially concerning if objective evaluations of the applicant's published work have already been undertaken by expert peers in the topic areas of journals, and that work has therefore already been vetted under conditions of anonymity.

Secondly, a similar violation of the principles of anonymity might relate to issues of academic freedom. The requirement for a 'coherent stream' of research has arguably been widely interpreted to suggest that an applicant's research should fall into a 'silo', or a largely monodisciplinary stream of research that does not deviate in its focus. Because an individual's entire portfolio of research is 'declared', any deviation from silo focus can be penalised. This is at odds with principles of academic freedom, for a number of reasons. Arguably, in doing so, the NRF rating system effectively shapes the growth of research to remain in silo areas, which might stunt important multidisciplinary or transdisciplinary innovations, as already stated above. This perhaps harks back to Lysenkoism,17 in that shaping research to grow in silos, or 'straight monodisciplinary' lines, might deny important changes in research trajectories, or might militate against important scientific advances in the applied social sciences, particularly in socially important areas, given that the 'second mode' of social science knowledge creation8 is associated with applied interdisciplinary research that is necessarily defined by its problems (including those that are societally important). This might not be as big a problem for the natural sciences, as multidisciplinary work is arguably a characteristic of certain social research as a result of the multiple influences that can come into play in causing social conditions. Applied research in the social sciences, and in Management Science, can in many cases require transdisciplinary approaches, and for grant funding purposes, a multidisciplinary focus is often necessary. If Management Science researcher rating applications are rejected on account of a lack of a monodisciplinary focus, this issue should be the topic of further research and discussion.

Similarly, how scientific is a rating system that potentially penalises changes in a researcher's trajectory, away from a singular monodisciplinary focus, or even toward another? Arguably, denying a researcher a rating because of changes in trajectory (and hence a lack of a 'coherent' focus) could count as harmful practice, as it can disincentivise innovation and constrain natural changes in the trajectory of an individual's research interests. Such systems might operationalise the exact problems identified by Kuhn2.

Thirdly, another violation of the principles of academic freedom might be associated with the prescriptive nature of research 'authorities' in general. By not allowing subsidy for many good journals, yet officially including 'bad' journals in official lists ('white lists'), the stage is set for perverse incentives. It is common knowledge that journals identified as 'predatory' by Beall's List were, in the same year, still fully accredited for subsidy by the South African Department of Higher Education and Training (DHET). However, Beall's List was discontinued at the start of 2017,18 which has left academic staff, particularly those new to the system, at the mercy of official lists. The predatory journal phenomenon would be a non-issue if authorities implemented 'white lists' (lists of accredited journals) with the diligence required.
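The overlap described here is, in principle, mechanically detectable: a simple cross-check of an accreditation list against a predatory-journal list would flag conflicts before subsidy decisions are made. A minimal Python sketch follows; the journal names are invented for illustration, and neither list's actual contents are reproduced.

```python
# Hypothetical excerpts of the two lists; all journal names are invented.
dhet_accredited = {"Journal of Applied Widgets", "Management Dynamics Quarterly"}
bealls_predatory = {"Journal of Applied Widgets", "Global Megascience Letters"}

# Journals simultaneously accredited for subsidy and flagged as predatory.
conflicts = dhet_accredited & bealls_predatory
for journal in sorted(conflicts):
    print(f"Accredited yet flagged as predatory: {journal}")
```

A check of this kind is, of course, only as good as the lists it compares; the discontinuation of Beall's List removes one of the inputs such diligence would rely on.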

Indeed, who can forget the case of a journal that was fully accredited by DHET (and IBSS indexed) being de-accredited retrospectively, 2 years after South African academics had (perversely) accounted for a large share of its contents? One has to ask: has the NRF, through its rating system, not further reduced social science academic activity to that of a 'game'? Gamification of the system is hugely problematic if it results in the proliferation of ever-growing volumes of non-innovative research that is simply targeted at formulaic journal publication.9 Have we created a monster? The test of this would perhaps be the extent to which research publication genuinely contributes to the benefit of societal stakeholders. If much of the research produced is not read by many, then what of the high levels of investment in the production of barely read research? If such a system incentivised innovative research or societally important research findings, it is possible that the system might be less wasteful. The NRF rating system, at least to the extent that it relates to the rating of Management academics, might do well to take cognisance of these issues.

One may ask: who gets hurt in such games? Is it those established in publication, or is it the emerging cohorts of young academics who rely on the mentorship of those more established? Is the NRF rating system one which facilitates inclusion and development, or is its effect the opposite, acting as a mechanism of exclusion, or penalising innovative or societally oriented interdisciplinary research? Similarly, is this rating system acting as a catalyst to create a culture of competition which differentiates publicly between 'winners' and 'losers' in an academic game? If submission to such a system resulted in societal good, or was aligned with societally important needs, then tolerating the downsides of such a system would be justified. If not, then further research and discussions into this topic are needed.

Perhaps it takes courage to speak truth to power, or to take a stand on issues that affect an academic's career progress within a powerful system in which many are invested. Nevertheless, such research is important if it leads to more transparent debate and scrutiny of a system that either directly or indirectly affects everyone in this country, either as academics or as societal stakeholders.

As indicated previously, given evidence that ratings outcomes are not consistent across different academic fields,12 the objective of this paper is therefore to question certain of the assumptions that underpin the South African NRF researcher rating system, as it relates to the rating of Management researchers, in order to highlight instances in which principles of ethical and equitable assessment might not relate to practice. In doing so, certain suggestions for improved ethical use of such a system are made.

Context and background

The NRF is a South African state research funding agency that applies a peer-based evaluation system in rating researchers. The NRF's predecessor - the Foundation for Research Development - was established in the 1980s (see Pouris19 for a useful history of the NRF and its origins). The mandate of the NRF is to 'promote and support research' through 'funding, human resource development and the provision of the necessary facilities' in order to facilitate 'the creation of knowledge, innovation and development in all fields of science and technology, including indigenous knowledge' and thereby contribute to 'the improvement of the quality of life of all the peoples of the Republic'20. Its strategy is based on 'four core tenets': transformation, excellence, service culture and sustainability. Its mission statement includes the following corporate values: 'passion for excellence; world-class service; ethics and integrity; respect; people-centered; accountability'.

In terms of ratings, an individual is assessed, by peers, on their recent research outputs and impact as 'perceived by international peer reviewers'20. Although the NRF rating methodology is based on qualitative, or subjective, assessments, according to Fedderke12 there should be no problem in developing an objective index of impact, based either on citations or on a formula that takes into account the impact factors of publications. Instead, what seems to happen is that an individual's research is subjectively assessed by a small group of evaluators, during which, for example, four reviewers can recommend a rating but two might object, resulting in the rejection of the rating. There seems to be a clear problem in that much variance exists in ratings - an issue expressed by Fedderke as follows12: