South African Journal of Science

On-line version ISSN 1996-7489
Print version ISSN 0038-2353

S. Afr. J. Sci. vol.103 no.11-12, Pretoria, Nov./Dec. 2007

 

NEWS & VIEWS

 

The National Research Foundation's rating system: why scientists let their ratings lapse

 

 

A. Pouris

Institute for Technological Innovation, University of Pretoria, Pretoria 0002, South Africa. E-mail: anastassios.pouris@up.ac.za

 

 


ABSTRACT

The National Research Foundation (NRF) in South Africa operates an individualized evaluation and rating system for funded researchers. Some researchers, however, allow their ratings to lapse. This article reports the results of an investigation into the reasons for this, as given by the researchers themselves. The findings can help the NRF and the universities to improve their operations, and may encourage researchers to express their opinions of the evaluation programme. Linking ratings to funding is the most important recommendation for improving the present system, with likely benefits for the NRF, researchers and the reviewers who provide the evaluations.


 

 

Introduction

The National Research Foundation (NRF) is a South African government research funding agency that operates an evaluation and rating system applied to all researchers applying for funding. This serves as a peer-based benchmark of each applicant's recent research outputs and their impact. The system was established in the 1980s by the NRF's predecessor, the Foundation for Research Development (FRD), in response to the perception among research scientists at universities and museums that research funding was 'spread too thinly' and that the allocation of funds was not based on well-defined and widely accepted criteria.1 A brief history of the rating system is available as an appendix to this article in supplementary information online at www.sajs.co.za.

The NRF5 considers that the evaluation and rating system provides independent and objective information on the quality of an individual's research and South Africa's research capacity in different fields, reinforces the importance of internationally competitive research, stimulates competition between researchers, and can be used by the universities to position themselves as research-intensive institutions. Some universities use it for the promotion and recruitment of staff. It has been argued6 that the NRF's approach has made South African research known internationally and that it addresses the biases peculiar to scientifically small countries.

The NRF detected a decline of approximately 10% in the number of rated researchers in the natural sciences and engineering during the period 2000–04, after an increasing trend since 1985.5 The total number of rated researchers had grown from just over 600 in 1985 to more than 1000 in 1999. Even though the decline was reversed in 2005, the NRF decided to investigate the matter. It therefore supported the current investigation, which was undertaken to establish why researchers permit their ratings to lapse and, in particular, to identify those reasons that could be influenced by NRF policy, in the hope that the results could also serve as a benchmark for future monitoring.

 

Methods

Figure 1 illustrates, using a systems dynamics approach,7 the flow of researchers from a particular institution and discipline through the NRF funding system. The same representation applies to any number of institutions and situations, and such flows can be quantified in terms of motivation and other extrinsic causes.

[Figure 1]

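By way of illustration only, a minimal stock-and-flow sketch of such a model in Python; the rates below are invented placeholders, not parameters taken from Fig. 1 or the text:

```python
# Minimal stock-and-flow sketch of the rated-researcher population.
# All rates are invented placeholders, not parameters from the article.

def simulate(years, initial_stock, new_ratings_per_year, lapse_rate):
    """Advance the stock of rated researchers one year at a time.

    new_ratings_per_year: inflow of newly rated researchers.
    lapse_rate: fraction of the stock whose rating lapses each year
                (moves, retirement, loss of interest, etc.).
    """
    stock = initial_stock
    history = [round(stock)]
    for _ in range(years):
        stock = stock + new_ratings_per_year - lapse_rate * stock
        history.append(round(stock))
    return history

# Under these illustrative rates the stock grows from ~600 past 1000,
# roughly the 1985-1999 trend reported in the text.
print(simulate(years=14, initial_stock=600,
               new_ratings_per_year=75, lapse_rate=0.04))
```
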
The NRF provided a database of researchers in its records between 2000 and 2004. This included the records of evaluated researchers already in the system, of those who had been evaluated but had left their institutions, and of researchers who had just applied for evaluation but whose assessment had not been completed. Researchers who had let their rating lapse between 2000 and 2004, for whatever reason, no longer appeared in this set; they were identified as indicated below.

There were 2308 records in the original data set, including some duplicates, as some rated scientists had changed discipline. The data were cleaned and the duplicate records merged so that each individual was counted only once, leaving 1793 records. From the 2000 data set were excluded the records of deceased individuals (25), of people born in 1940 or earlier, presumably retirees (160), of people classified as retired or who had moved to non-academic occupations, of those with a final rating of 'unsuccessful' or 'no rating' in 2000, and of researchers with a foreign address (54). The remaining records had a valid rating other than 'unsuccessful'.
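
A sketch of these cleaning steps in Python/pandas, assuming hypothetical file and column names (the layout of the NRF extract is not described in the article):

```python
import pandas as pd

# Hypothetical file and column names; the layout of the NRF extract
# is not described in the article.
df = pd.read_csv("nrf_records_2000_2004.csv")      # 2308 raw records

# Merge duplicate records created when a rated scientist changed
# discipline, so that each individual is counted only once (1793 records).
df = df.drop_duplicates(subset="researcher_id")

# Apply the exclusions listed above to the 2000 snapshot.
valid_2000 = df[
    (~df["deceased"])                               # 25 deceased
    & (df["birth_year"] > 1940)                     # 160 presumed retirees
    & (~df["status"].isin(["retired", "non-academic"]))
    & (~df["rating_2000"].isin(["unsuccessful", "no rating"]))
    & (df["country"] == "South Africa")             # 54 foreign addresses
]
# The remainder hold a valid rating other than 'unsuccessful'.
```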

Other reasons for leaving the system were identified by means of questionnaires completed by individual researchers or the university administrations, or both. These were forwarded to researchers who had allowed their rating to lapse where the reasons were not apparent. The questionnaire was administered by the NRF to safeguard the anonymity of the respondents.

The questionnaire was distributed in May 2006 to 244 researchers with known addresses. Forty-five e-mails were returned as 'undeliverable'. Reminders were sent in early June to 92 participants, which elicited responses from a further 30 recipients. The final tally was 118 responses, a 59% return rate among those with valid addresses. These response rates are comparable with those of other surveys of professional groups.8,9 Table 3 shows the distribution of the 291 researchers with lapsed ratings and the number and percentage of those who responded to the questionnaire.
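
The quoted return rate follows directly from these counts:

```python
sent = 244           # questionnaires distributed in May 2006
undeliverable = 45   # e-mails returned as 'undeliverable'
responses = 118      # completed questionnaires received in total

valid_addresses = sent - undeliverable     # 199
return_rate = responses / valid_addresses  # 0.593...
print(f"{return_rate:.0%}")                # 59%, as reported
```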

[Tables 1–3]

The questionnaire had two sections, A and B. Section A listed seven specific reasons (plus an 'other' option) that may have led a respondent to let his or her rating lapse; 96 of the 118 respondents completed this section. Tables 1 and 2 show the distribution of researchers with lapsed ratings (as defined above) according to race, gender and 'disciplinary panel'. While the re-allocation of researchers to different disciplinary panels (as they move from one scientific discipline to another) may blur the picture somewhat, Table 1 shows that researchers in some disciplines are more prone to let their rating lapse than those in others. For example, 28 researchers in biochemistry let their ratings lapse, out of the 55 rated researchers in that discipline in 2004. This phenomenon is of particular importance to the NRF and for policy, as it may mean that researchers in particular disciplines are not dependent on the NRF for funding.

Table 2 shows the number of researchers who let their rating lapse, according to gender and race. The NRF reports that 'by 2003 about 20% of all rated researchers were women and just over 9% were black scientists.'5 Black researchers therefore appear among those who let their rating lapse in the same proportion as in the rated population as a whole. In contrast, women constitute 16% of the researchers who let their rating lapse but 20% of all rated researchers.

Table 4 shows the reasons researchers gave for allowing their ratings to lapse, the number of respondents choosing each reason, and the percentage of respondents identifying with it. As respondents could give more than one reason, the percentages sum to more than 100%. The table shows that 39% of the respondents had moved from their institution to another organization (and hence did not need a rating). The second most popular reason, identified by 29% of respondents, was that the NRF rating is not linked to financial support; the third was 'I wish to improve my research profile before I return for re-evaluation to the NRF' (28%).

[Table 4]

We examined the number of respondents who indicated that they had moved either to a new organization or to a non-research position within their original institution: 54 of the 96 respondents (56%) gave one of these two reasons. Seventy-five of the respondents (78%) stated that they had let their rating lapse because they had moved away from their institution, had moved to a non-research post, or wanted to improve their research profile before re-applying for rating. The NRF has little or no influence over these reasons.

The respondents who declared both that 'they are supported by other funding organizations and they don't need NRF rating' and that 'NRF rating is not linked to financial support' were also examined. If other organizations offered easier access to funding, it could be argued that the de-linking of funding from evaluation was also a reason for their lack of interest in the NRF system. Even though 28 respondents identified the fact that rating is not linked to funding as a reason for allowing their rating to lapse, only six of those (21%) also answered that 'they are supported by other funding organizations and they don't need NRF rating'.
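
The cross-tabulations in the last two paragraphs reduce to the following counts (a sketch; the individual-level responses are not published, and only the aggregates below appear in the text):

```python
# Only the aggregate counts below appear in the article.
n_section_a = 96        # respondents who completed section A

moved = 54              # moved to a new organization or a non-research post
moved_or_profile = 75   # the above, or improving their profile first

not_linked = 28         # cited 'rating not linked to financial support'
also_other_funds = 6    # of those, also supported by other organizations

print(f"moved:            {moved / n_section_a:.0%}")            # 56%
print(f"moved or profile: {moved_or_profile / n_section_a:.0%}") # 78%
print(f"overlap:          {also_other_funds / not_linked:.0%}")  # 21%
```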

Section B of the questionnaire asked for advice on what could make the NRF's rating system more appealing and useful. Eighty-one recommendations were received. Forty-five (56%) suggested that guaranteed financial support based on the rating would encourage participation in the system. Most respondents linked the low level of (or absence of) funding support with the burden of preparing the documentation required for evaluation. A number of respondents commented that being rated yet receiving no support, while unrated colleagues were supported, was an inconsistency and an unfairness in the NRF system. Three respondents mentioned, or insinuated, that this perceived unfairness had led them to emigrate or to move away from research.

Eight respondents suggested that the NRF should 'market' its policies better: the purpose of evaluation and rating should be made explicit, and the associated benefits (for example, that some universities use the NRF rating for promotion purposes, or that a rating may facilitate funding from other sources) should be made publicly known. Three respondents averred that their field (for instance, taxonomy, pure mathematics or science education) was not appropriate to, or not recognized by, the NRF, while two respondents argued against the use of local or international referees.

Finally, three respondents mentioned that their experience abroad proved to them the value of the NRF system in its recognition of individuals and their research contributions.

 

Discussion and recommendations

While the NRF can affect the turnover of academics in the higher education sector only marginally, one may ask whether the loss of 163 evaluated researchers (the 56% of the 291 researchers with lapsed ratings who moved to other organizations or to non-research positions) out of the approximately 1000 NRF-rated researchers in the country, over a period of four years, is an 'acceptable loss' for the sector. Just below 30% of the respondents declared that one of the reasons they let their rating lapse was that the NRF rating is not linked to financial support; restoring that link was also identified by more than 55% of the respondents as the best way to make the NRF rating more appealing and useful. In the same context, some researchers argued that the application procedures could be simplified and that the NRF should market itself better.
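
The figure of 163 quoted above is simply:

```python
lapsed = 291           # researchers with lapsed ratings, 2000-04
moved_fraction = 0.56  # moved to other organizations or non-research posts
print(round(lapsed * moved_fraction))  # 163
```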

The entire NRF system is compromised, according to some respondents, when a rated researcher is supported financially only to a limited extent or not at all, while unrated researchers are generously supported. However, the NRF supports unrated researchers only for a limited period; continued support requires an application for rating (see online appendix).

I therefore recommend that the NRF consider the automatic partial funding of researchers with a valid rating. The level of such funding should depend on the rating and be independent of other considerations. For example, C-rated researchers could be guaranteed a minimum of, say, R25 000 if they wished to avoid applying for a larger amount through other NRF channels. Such an approach would be consistent with the original purposes for which the system was set up and could reduce the overhead expenses of both the NRF (by eliminating the cost of administering small grants) and the National System of Innovation (by limiting the time academics are asked to spend assessing proposals).

I thank L. Di Santolo for support in undertaking the survey and for constructive comments, and also A.M. Kaniki, A. Lourens, J.R. Midgley, W.R. Siegfried, G.O. West, G.U. Schirge and an anonymous referee.

 

1. NRF (2006). Evaluation history. Online: www.nrf.ac.za/evaluation/content/evaluationhistory.htm

2. FRD (1988). South African National Scientific Programmes: Report 101E, Report of the Main Research Support Programme. Foundation for Research Development, Pretoria.

3. Pouris A. (1986). Peer review in scientifically small countries. R&D Management 18(4), 333–340.

4. Pouris A. (1991). Effects of funding policies on research publications in South Africa. S. Afr. J. Sci. 87, 78–81.

5. NRF (2005). Facts and Figures 2005: The NRF Evaluating and Rating System. National Research Foundation, Pretoria.

6. Pienaar M., Blankley W., Schirge G.U. and von Gruenewaldt G. (2000). The South African system of evaluating and rating individual researchers: its merits, shortcomings, impact and future. Research Evaluation 9(1), 27–36.

7. Doyle J.K. and Ford D.N. (1998). Mental models concepts for system dynamics research. Syst. Dyn. Rev. 14(1), 3–29.

8. Martinson B.C., Anderson M.S. and de Vries R. (2005). Scientists behaving badly. Nature 435, 737–738.

9. Asch D.A., Jedrziewski M.K. and Christakis N.A. (1997). Response rates to mail surveys published in medical journals. J. Clin. Epidemiol. 50, 1129–1136.

 

 

This article is accompanied by a supplementary Appendix online at www.sajs.co.za.
Note added in proof
The NRF announced in December 2007 that it is changing the researcher evaluation and rating system. Starting in 2008, rated researchers will receive an additional incentive, called continuity funding or 'glue money'. A-rated researchers will receive R100 000 annually, B-rated researchers R80 000 and C-rated researchers R40 000; P-, L- and Y-rated researchers will get R80 000, R40 000 and R40 000 per annum, respectively, and their institutions will have to add support on a 1:1 basis. The phasing-in of the system will start with support to those with no NRF funding, extending to others as their existing support falls away. Rated researchers will qualify to receive glue money as well as other funds (on a competitive basis) from other NRF programmes.
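
The announced amounts, together with the required 1:1 institutional matching, can be summarized as a simple lookup (a sketch based only on the figures quoted above):

```python
# Annual 'glue money' per rating category, in rands, as announced
# in December 2007.
GLUE_MONEY = {"A": 100_000, "B": 80_000, "C": 40_000,
              "P": 80_000, "L": 40_000, "Y": 40_000}

def total_support(rating: str) -> int:
    """NRF grant plus the required 1:1 institutional matching."""
    return 2 * GLUE_MONEY[rating]

print(total_support("A"))  # 200000: R100 000 NRF + R100 000 matched
```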

 

 

Supplementary material to:

Pouris A. (2007). The National Research Foundation's rating system: why scientists let their ratings lapse. S. Afr. J. Sci. 103, 439–441.

 

Appendix

Outline history of the evaluation systems of the FRD and NRF

The Foundation for Research Development (FRD) was established by the amalgamation of the Research Grants Division and the Cooperative Scientific Programmes (CSP) of the Council for Scientific and Industrial Research (CSIR). The Research Grants Division funded self-initiated research on a merit basis, whereas the CSP facilitated multidisciplinary research through multi-institutional participation. The policy of the CSIR to become less reliant on government funding contributed to the adoption of a new approach, which separated intramural from university-based research.

In 1990, the FRD became an independent body with its own budget.1 The FRD's first priority was to remedy 'the inadequacy of funds for researchers in general and for outstanding researchers in particular, as well as the inadequate bursaries for selected research students'.2 The FRD decided that the most important criteria for funding would be the quality of the scientific output of individual researchers and their students. In essence, 'the country had moved away from the science procurement tradition and research support had become an investment in people.'3 This led to the concept, novel for the country, of peer evaluation and rating of individual researchers in higher education, based on their recent track records and research outputs, with the level of financial support linked to the rating. The system was widely acclaimed,4 attracting favourable international comment at the time. Its novelty was that it reduced bureaucracy for both applicants and reviewers, who no longer needed to prepare and review detailed proposals, on the assumption that past performance predicted future performance.

A peer-review approach was adopted, as it became evident that evaluation could be done only by individuals accepted by the applicants and the broader scientific community as peers actively involved in the relevant field of research. A large number of leading researchers, nationally and internationally, became involved in adjudicating the quality of South African researchers. The NRF estimates that between 1984 and 2002, more than 11 000 local and foreign researchers participated in the evaluation and rating system. More than 3 500 applicants were evaluated in this way; some of them more than once.

In April 1999, the FRD and the Centre for Science Development (CSD) were united to form a new organization, the National Research Foundation (NRF). From 1984 to 2001, the evaluation and rating system was applied only to scientists in the natural sciences, engineering and technology, but the NRF Board approved the extension of the system to researchers in the social sciences and humanities in 2002. The direct linkage between rating and funding support was discontinued in 2001 in order to create space for applicants in the social sciences and humanities, to level the playing field after the creation of the NRF and its new programmes, and to reserve sufficient funding for development programmes. However, funding and rating remained linked in the following ways:

  • five-year grants to rated researchers (unrated researchers could qualify only for two-year grants);

  • unrated researchers qualified for a maximum of six years' funding (three two-year grants) and after that they would have to be rated to qualify for funding;

  • rated researchers who allowed their ratings to lapse, or who 'lost' their rating (that is, they applied for re-evaluation and were unsuccessful), were not eligible for funding until they regained their rating.

The ratings currently in operation are as follows:

Category A comprises researchers unequivocally recognized by their peers as leading international scholars in their field based on the high quality and impact of their recent research outputs.

Category B includes researchers who enjoy considerable international recognition by their peers for the high quality of their recent research outputs.

Category C comprises established researchers with a sustained recent record of productivity in their field, recognized by their peers as having produced a body of quality work, the core of which has coherence, attests to ongoing engagement with the field, and demonstrates the ability to conceptualize problems and apply research methods to investigating them.

Category P: Young researchers (normally younger than 35 years of age), who have held a doctorate or equivalent qualification for less than five years at the time of application and who, on the basis of exceptional potential demonstrated in their published doctoral work and/or their research outputs in their early post-doctoral careers, are considered likely to become future leaders in their field.

Category Y: Young researchers (normally younger than 35 years of age), who have held a doctorate or equivalent qualification for less than five years at the time of application, and who are recognized as having the potential to establish themselves as researchers within a five-year period after evaluation, based on their performance and productivity as researchers during their doctoral studies and/or early post-doctoral careers.

Category L: Persons (normally younger than 55 years) who were previously established as researchers or who previously demonstrated potential through their own research products, and who are considered capable of fully establishing or re-establishing themselves as researchers within a five-year period after evaluation. Candidates should be South African citizens or foreign nationals who have been resident in South Africa for five years, during which time they have been unable for practical reasons to realize their potential as researchers.

Candidates who are eligible in this category include: black researchers; female researchers; those employed in a higher education institution that lacked a research environment; and those who were previously established as researchers and have returned to a research environment.

 

1. NRF (2006). Evaluation history. Online: www.nrf.ac.za/evaluation/content/evaluationhistory.htm

2. FRD (1988). South African National Scientific Programmes: Report 101E, Report of the Main Research Support Programme. Foundation for Research Development, Pretoria.

3. Pouris A. (1986). Peer review in scientifically small countries. R&D Management 18(4), 333–340.

4. Pouris A. (1991). Effects of funding policies on research publications in South Africa. S. Afr. J. Sci. 87, 78–81.
