
South African Journal of Science

On-line version ISSN 1996-7489
Print version ISSN 0038-2353

S. Afr. j. sci. vol.104 n.3-4 Pretoria Mar./Apr. 2008




Alternatives to the publication subsidy for research funding



Christopher L. Vaughan

Research and Postgraduate Affairs, Faculty of Health Sciences, University of Cape Town, Observatory 7925, South Africa. E-mail:



Government policy on research funding has a direct impact on the behaviour of academics, so we need to ask what sort of behaviour should be encouraged in South Africa. Instead of an emphasis on the number of publications, our focus should rather be on a subsidy system that inspires our institutions to aim for a level of scholarship that is able to withstand the scrutiny of an international audience. Perhaps now is the time to grasp the nettle and to consider using the National Research Foundation's rating system instead of the publication count.



It has been estimated that advances in knowledge account for about one-third of the increases in the gross domestic product (GDP) of a country.1 Since much of that knowledge is created within universities and institutions of higher learning, governments around the world have adopted different strategies to stimulate research in their countries.

In the United Kingdom, the level of research funding provided by the government to individual institutions over the past two decades has been determined by the Research Assessment Exercise (RAE), where the focus is on the credentials of a limited number of the most prominent researchers at the academic departmental level.2 The RAE has been criticized over the years,3,4 which has led to the proposal of an alternative system, the Research Excellence Framework (REF), where the emphasis will be on individual researchers and the citation of their publications. As reported by Corbyn,5 the REF has already drawn considerable criticism from senior academics. For the past 15 years in Australia, funding support has been based on a research quantum which incorporates research output measures, including the number of scholarly publications by staff and students, as well as higher degrees completed (master's and Ph.D.s).6 In the United States, support for research from the federal government is based primarily on the assessment of individual research proposals.1

In South Africa, our current system of support emanates from the doctoral thesis of Melck,7 who, working at the University of Stellenbosch, provided an economic rationale for government financing of universities. A report, known as SAPSE-110, subsequently emerged in 1982 and was adopted as government policy by the National Department of Education.8 A formula was developed which, among other requirements, included 'a component that is independent of student numbers so as to reward universities substantially for meritorious academic achievements'. The reward was based on the number of articles published in both the natural sciences and humanities.

In 2007 the Department of Education (DoE) provided a total research subsidy of R1.4 billion to 23 institutions of higher learning in South Africa.9 To place this figure in context, two other key scientific agencies also provided competitive funding to individual researchers: the National Research Foundation (NRF) (R111 million for Focus Area Programmes, R199 million for the Innovation Fund, and R116 million for THRIP); and the Medical Research Council (MRC) (R15 million for self-initiated research grants, and R22 million for research units, out of a total government grant of R179 million).10

It is clear that significantly more research funding is available on the 'supply side', via the DoE, than on the 'demand side', via the Department of Science and Technology (DST), which funds the NRF, or the Department of Health (DoH), which funds the MRC. The politics surrounding these three agencies, particularly between the DoE and the DST, cannot be ignored, however, especially since the ratio between the two sources of funding is heavily skewed in favour of the 'supply side'. This raises the question: Is such an approach optimal for driving improvements in the research system in our country? I would argue that it is not, and that the danger of the 'supply-side' model, as currently implemented, is that it encourages the pursuit of mediocrity.

There have been some serious criticisms aimed at our country's research subsidy system, particularly the publication subsidy, first by the Academy of Science of South Africa (ASSAf)11 and more recently in this journal by Vaughan et al.12 The purpose of this commentary is to review the current subsidy system, to describe the problems specific to the publication subsidy, to suggest some alternative approaches and then to discuss the pros and cons of the proposed alternatives. Finally, some concluding remarks are made.


Current system of research subsidy

In November 2003, the minister of education, Kader Asmal, published the current policy for the funding of public higher education.13 The National Plan proposed that resources for research should be concentrated in those institutions where there was demonstrated capacity. It further specified that there should be greater accountability for the use of research funds and that research productivity should be enhanced.

The funding framework made no provision for research input grants and specified that research funding, apart from some development awards, would be determined solely on the basis of research outputs. There are two primary types of research outputs recognized by the DoE: publications, and postgraduate qualifications (research master's and doctoral graduates).

Table 1 summarizes the research subsidy provided to 23 institutions of higher learning in 2007.14 The normed research output for 2007 was determined by the headcount of permanently employed instructors and research staff in 2005 multiplied by a weighting factor. This factor is the number of research output units that each member of staff is expected to produce per annum. For universities, the weighting factor is 1.25 and for universities of technology it is 0.5. For merged institutions (such as Johannesburg, Nelson Mandela, South Africa and Walter Sisulu universities), separate weightings are applied, as indicated in the footnotes to Table 1.
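The normed-output rule described above is simple arithmetic, and can be sketched as follows (the headcounts in the example are hypothetical; only the weighting factors of 1.25 and 0.5 come from the text):

```python
# Expected research output units per permanent staff member per annum,
# by institution type (weighting factors from the DoE funding framework).
WEIGHTS = {"university": 1.25, "university_of_technology": 0.5}

def normed_output(headcount: int, institution_type: str) -> float:
    """Normed research output: staff headcount times the weighting factor."""
    return headcount * WEIGHTS[institution_type]

# Hypothetical institutions, for illustration only:
print(normed_output(1000, "university"))               # 1250.0
print(normed_output(400, "university_of_technology"))  # 200.0
```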



The actual research outputs for 2005 were based on the number of research publications (with a credit value of 1.0), research master's graduates (with a credit of 1.0), and doctoral graduates (with a credit of 3.0). When a master's degree is based on both course work and a thesis, the credit value varies between 0.5 and 1.0. Research publications include: articles in journals accredited either by the Institute for Scientific Information or the DoE (1.0); books (with a credit value up to 5.0 for a book of 300 pages) and book chapters (a pro-rated portion of the book); and peer-reviewed conference proceedings (0.5).15 Journal articles have contributed more than 90% of the publication outputs for the past two years.16
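The credit values above amount to a simple lookup table. The sketch below records them; note that the policy text only says book chapters earn 'a pro-rated portion of the book', so the page-proportional split in `chapter_credit` is an assumption:

```python
# Credit values for the recognized research outputs, as summarized in the text.
CREDITS = {
    "journal_article": 1.0,        # ISI- or DoE-accredited journals
    "research_masters": 1.0,       # 0.5-1.0 when partly course work
    "doctorate": 3.0,
    "conference_proceeding": 0.5,  # peer-reviewed proceedings
    "book_300_pages": 5.0,         # maximum credit, for a 300-page book
}

def chapter_credit(book_credit: float, chapter_pages: int, book_pages: int) -> float:
    """Pro-rated portion of a book's credit for one chapter.

    Assumption: pro-rating is proportional to page count (the policy
    does not spell out the rule).
    """
    return book_credit * chapter_pages / book_pages

print(chapter_credit(5.0, 30, 300))  # 0.5
```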

Each institution's success in meeting its research output target is known as 'delivery', which is the ratio of weighted output to normed output expressed as a percentage. A delivery of more than 100% indicates that the institution has exceeded its target. The 'shortfall' is simply the normed output less the weighted output and is set equal to zero when delivery exceeds 100%.

The actual grant earned by an individual institution in 2007 was equal to the institution's weighted research output multiplied by the total research subsidy (R1.385 billion) and divided by the normed output for all institutions. In 2007, one research output credit was worth R85 026. Those institutions with a delivery of less than 100% earned a development grant, sharing approximately R148 million, where the amount awarded was linearly related to their shortfall. Each institution's total grant for the year was the actual grant, based on research outputs, plus the development grant (Table 1).
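The delivery, shortfall and grant arithmetic of the two paragraphs above can be sketched directly (the institutional figures in the example are illustrative; only the R1.385 billion total subsidy is from the text):

```python
TOTAL_SUBSIDY = 1.385e9  # R1.385 billion research subsidy in 2007

def delivery(weighted: float, normed: float) -> float:
    """Delivery: weighted research output as a percentage of normed output."""
    return 100.0 * weighted / normed

def shortfall(weighted: float, normed: float) -> float:
    """Normed output less weighted output, set to zero when delivery exceeds 100%."""
    return max(0.0, normed - weighted)

def actual_grant(weighted: float, sector_normed: float) -> float:
    """An institution's actual grant: its weighted output times the total
    subsidy, divided by the normed output summed over all institutions.
    (The ratio TOTAL_SUBSIDY / sector_normed is the value of one research
    output credit, reported as R85 026 in 2007.)"""
    return weighted * TOTAL_SUBSIDY / sector_normed

# Illustration: 1200 units of weighted output against a norm of 1000
# gives a delivery of 120% and no shortfall.
print(delivery(1200, 1000))   # 120.0
print(shortfall(1200, 1000))  # 0.0
print(shortfall(800, 1000))   # 200.0
```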

The ASSAf report of 2006, while identifying the problems with the use of accredited research outputs for subsidization purposes, identified certain features of the policy that were contextually positive.11 The authors argued that the policy was: able to be reformed; inclusive of all disciplines, institutions and scholars; sensitive to the inherent value of local journals; and responsive to increased or decreased outputs in an immediate way.


Problems with the current system

The development grant has obviously been included in the funding formula to encourage institutions to develop a stronger research culture. However, there are six institutions which earn a greater development grant than actual grant, thus establishing a perverse incentive: the lower their delivery, the greater their total grant. The unintended consequences of this policy certainly need to be evaluated.

As highlighted above, the DoE has indicated that it wishes to increase research outputs and so the publication component has been included in the funding formula. Superficially, this seems to be a sensible method for providing universities with an incentive to increase research productivity. Unfortunately, however, this has led to the 'least publishable unit' (worth R85 026), where there is a powerful perverse incentive that encourages South African researchers to publish as many papers as possible in the least demanding journals. Instead of encouraging publication in high-impact but demanding international journals with high rejection rates, researchers and their institutions are rewarded for short reports of dubious validity and value in fifth-rate journals.12 Only 57% of journal articles accepted for the publication count in 2006 were published in journals that were internationally accredited.16

Other problems with the DoE's subsidization policy based on research outputs were described in some detail in the ASSAf report.11 These may be summarized as: the use of a bureaucratic as opposed to a peer-review approach to the accreditation of South African journals; ignoring field-specific variations in the annual rates of publication; a bias against co-authorship with international collaborators (e.g. an article in Nature where there is one South African author and three foreign co-authors earns 0.25 credits, whereas a paper in a local journal by four authors from a single institution, where the editorial board may even be from the same institution, yields 1.0 credits); and the absence of other stakeholders (besides the DoE) in developing the policy.
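The co-authorship bias in the Nature example above follows from pro-rating the publication unit across authors. A minimal sketch:

```python
def institutional_credit(unit_value: float, local_authors: int,
                         total_authors: int) -> float:
    """Publication credit attributed to one institution, pro-rated by
    its share of the article's authors."""
    return unit_value * local_authors / total_authors

# The Nature example from the text: one South African author among four.
print(institutional_credit(1.0, 1, 4))  # 0.25
# A local journal article with all four authors at the same institution.
print(institutional_credit(1.0, 4, 4))  # 1.0
```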

The authors of the ASSAf report believed that these problems were not insurmountable and made certain recommendations to reform the policy.11 These included: peer-review, by an ASSAf panel, of local journals grouped by discipline where the recommendations would be transparent and evidence-based; consideration of the method by which the credit value system was calculated, taking into account different fields and international collaboration; and the convening of a national forum for all stakeholders to discuss reform of the policy. To date, that forum has not yet been convened.

One of the major drawbacks of the current system is the amount of time required to assemble the necessary data. While counting (and verifying) the number of staff, and the number of master's and doctoral graduates is relatively straightforward, the same cannot be said for the publication output. Because many journal articles have multiple authors with various affiliations, support staff, both within academic departments and central university administrations, spend an inordinate amount of time determining what proportion of the publication unit count should be attributed to a particular institution. In addition, a high-level evaluation panel – with representatives from six universities, ASSAf and the DoE – in 2007 sat for two days to consider 2283 submissions of books, book chapters and conference proceedings.16 Of these submissions, 959 were rejected whereas those accepted represented less than 10% of all publications for the year. This is a rather poor yield for the amount of time spent by 15 people.

Of course, the issue of 'time/effort' in the current system – and the huge amount of funding that flows from the state to institutions via this channel – should also be seen in the context of the significant 'time/effort' required by researchers and their students to secure very modest grants from the NRF and MRC. The balance between 'supply-side' and 'demand-side' funding should probably swing towards the latter, where the emphasis has traditionally been based on peer review.

Although the publication count does include publications other than journal articles, it has been suggested that the credit values are totally inappropriate. Higgins17 has argued that a 100 000-word monograph in the humanities which could make a substantial contribution to scholarship – and therefore have a major impact in terms of the recognition for the individual, his or her institution and South Africa – can take up to three years to produce and yet contribute a small fraction of the subsidy generated by a natural scientist during the same period. It is also conceivable that certain books, such as those novels authored by J.M. Coetzee, which led to his Nobel Prize in Literature, would not qualify for research subsidy.

Some South African universities have provided incentives to their academics by passing a portion of the publication subsidy back to the individual staff member who authored the publication. While on the face of it this could be considered a reasonable strategy to improve research productivity, it does have some serious drawbacks. First, it rewards those who pursue the least publishable unit (described above). Second, it favours academics whose research field is conducive to multiple annual papers (such as zoologists) versus those who may publish only one or two papers per annum (for instance mathematicians) or those who publish important policy documents that draw no subsidy at all (for example, public health specialists). Third, there is the distinct possibility that such funds, even when paid into a research fund rather than directly into the individual staff member's personal bank account, may be considered a personal benefit by the South African Revenue Service and would therefore be liable to personal income tax.

Since the mission of the NRF is to contribute to the knowledge economy in South Africa, it has set the target of attaining at least 1% of global R&D output by 2015.18 One measure would be a greater share of publications indexed by the Institute for Scientific Information (ISI), and there was indeed a steady increase in ISI-accredited publications between 1990 and 2002, while the number of publications in DoE-accredited journals was static.11 The Australian experience has revealed that their funding formula, which also encourages publication, has led to a 25% increase in their share of global publications19 but, because there is no differentiation among journals, it has been at the expense of quality: the greatest gains have been in the journals with the lowest impact factors.20


Some alternatives to the publication subsidy

Just as the number of publications by individual researchers serves as a proxy for the research output of an institution, so too can the NRF ratings be seen as a proxy for an institution's scholarly output.21 In fact, the NRF ratings are a direct measure of the degree to which an institution aspires to international recognition for its research activities. My first alternative funding model is therefore the replacement of the publication count with an institution's NRF ratings, as illustrated in Table 2. The credit values for research master's (1.0) and doctoral degrees (3.0) have been retained, as has the provision of a development grant.



In deciding what the credit values for NRF ratings should be, two guidelines have been applied: (1) the relative weighting of the ratings in the NRF's proposed multi-criteria decision making (MCDM) tool22 has been adopted; and (2) the total weighted rating for all institutions should be approximately equal to the total number of publications for all institutions in Table 1 (7228). On this basis, an A rating has been valued at 8.0 research output credits, the B and P ratings at 6.0 credits, the C and Y ratings at 4.0 credits, and the L rating given a value of 2.0 credits. As seen in Table 2, the total weighted rating for all institutions is 7224.
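The rating-to-credit mapping proposed above can be written down directly. In the sketch below the credit values are those from the text, but the rating counts in the example are hypothetical, used only to show how an institution's weighted rating total would be computed:

```python
# Proposed credit values per NRF rating category (from the text).
RATING_CREDITS = {"A": 8.0, "B": 6.0, "P": 6.0, "C": 4.0, "Y": 4.0, "L": 2.0}

def weighted_rating(rating_counts: dict) -> float:
    """Total weighted NRF rating for an institution: the sum, over rating
    categories, of credit value times the number of rated researchers."""
    return sum(RATING_CREDITS[r] * n for r, n in rating_counts.items())

# Hypothetical institution: 5 A-, 20 B-, 60 C- and 10 Y-rated researchers.
print(weighted_rating({"A": 5, "B": 20, "C": 60, "Y": 10}))  # 440.0
```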

The ratings grant for an individual institution is, as before, equal to the institution's weighted output multiplied by the total research subsidy (R1.385 billion) and divided by the normed output for all institutions (cf. Table 1). In comparing the total grant for the current system (Table 1) with the alternative system based on NRF ratings (Table 2), it is evident that some institutions (Free State, KwaZulu-Natal, UNISA and Pretoria) would see a substantial reduction in their total grant, while others (Cape Town, Stellenbosch and the Witwatersrand) would derive a substantial increase.

As indicated above, the development grant of R148 million represents more funding than the NRF spends on its focus area programmes. It would certainly appear, on the evidence of the research outputs and NRF ratings in Tables 1 and 2, that the administrators of the universities benefiting from these grants are using the funds for various activities, most of them unrelated to research. In the next four alternative funding models proposed, the development grant has been removed (Table 3).



With the development grant removed, and the full R1.385 billion allocated according to the existing research output model (which is based on publications and postgraduates), there would be some significant changes (Table 3). All institutions with a delivery score of more than 100% would be major beneficiaries, whereas those with large shortfalls such as Limpopo and UNISA (Table 1) would see significant reductions in their subsidy. When the output is determined by a research output model based on NRF ratings and postgraduates, the changes would be even more dramatic, with institutions such as Cape Town, Stellenbosch and Witwatersrand seeing significant gains (Table 3).

A third 'hybrid' model, based on research outputs and incorporating publications, postgraduates and NRF ratings, has also been included in Table 3. Again there is no development grant and, as seen in Tables 1 and 2, the total weighted outputs for the three components are approximately equal: publications (7228); postgraduates (7318); and NRF ratings (7224). Given these weightings, the hybrid model has a less dramatic effect across all the institutions (Table 3).

In its recently published strategic plan, the NRF has stated that by 2015 it would like to double the number of Ph.D. graduates.18 The DST's Ten-Year Innovation Plan has set even more ambitious targets, suggesting that by 2018 South Africa should increase the annual output of Ph.D.s by a factor of five.23 With these aspirations in mind, I have included a fourth model (also without the development grant) based solely on the number of Ph.D. graduates produced by each institution (Table 3). Compared with the existing model (Table 1), there are some interesting differences, with Cape Town and Pretoria being the main beneficiaries and two institutions (Mangosuthu and Walter Sisulu) earning no subsidy at all.


The pros and cons of the alternative systems

The first major benefit of a system based on NRF ratings would be the huge savings in time. All the work has already been done by the NRF's evaluation panels. Second, the data are transparent and readily available on the NRF's website.24 In contrast, there is, at the moment, no publicly available document which records what proportion of an institution's publications are in internationally accredited journals. In other words, it is not possible to judge the quality of an institution's research output by studying the publication count. Third, the NRF ratings are updated annually and thus reflect the impact of an institution's researchers in the recent past. Fourth, the NRF ratings are based on peer review by national and international experts in the field. Fifth, the NRF recognizes scholarly output other than journal publications (such as artefacts, prototypes and policy documents). Sixth, since 2002 the NRF rating system has been available to all disciplines. At the University of Cape Town, for example, we have artists, lawyers and accountants with an NRF rating. Finally, the system has been in operation for more than 20 years and is considered to be relatively robust.

However, what about the potential problems with this alternative approach? First, there is the fact that only 1589 researchers in South Africa have a rating (Table 2), representing just 10% of the 15 315 permanent staff who are eligible for rating (Table 1). Of course, it could also reasonably be argued that a small proportion (perhaps just 10%) of these 15 315 permanent staff contribute to research publications each year.

In 2005, an independent panel, consisting of three foreign research experts and three local panellists, conducted an institutional review of the NRF.25 Over a 12-day period, the panel interviewed more than 400 stakeholders in three cities. The review panel was highly critical of the ratings system and identified the main problem areas as: controversy provoked by the system; lack of coherence within assessment instruments; relationship between ratings and funding; difficulties with the 'one size fits all' model; operational problems; and concerns about systemic sources of bias.

The sources of bias in the rating system reported by some researchers included: emphasis on a 'one person, one paper' model of publishing at the expense of multi-authored publications; bias in favour of long-term involvement in a single area compared with scholars whose research migrated across a number of topics and disciplinary boundaries; and difficulty in securing a high rating in a field that is vast. In the light of all these concerns, the panel recommended that a task group be convened by the higher education sector to consider the future of the NRF rating system.25

The panel's recommendation was adopted in 2007 by Higher Education South Africa (HESA) and the NRF, which conducted an extensive review of the rating system. The problems identified included the system's valuing of international standing above local standing and local journals, its privileging of long-term track records over short-term activity, and inherent flaws that may not be amenable to reform. Despite these misgivings, the HESA/NRF report felt strongly that the rating system should be retained, and concluded that there was 'no evidence to discontinue the system for the evaluation and rating of individual researchers'.26 It also recognized that rating should be directly linked to funding, that the NRF should address the criticisms of the rating system, and that the NRF or HESA should lobby for sufficient levels of funding to sustain the rating system.

In a recently published synthesis of the HESA/NRF report, Auf der Heyde and Mouton27 reported that key sectors of academia regard the rating system with growing scepticism and disillusionment. Some researchers, such as Cherry and Gibbons,28 have expressed serious misgivings about the NRF rating system, suggesting it is 'an idea whose time is long past', that it undermines academic collegiality in the country and that it should be 'abandoned before further damage is done'. It is unclear whether these comments on the NRF rating system represent the views of the majority of rated academics or whether they reflect the authors' personal bias in the context of their own rating.

In one of the few serious scientific analyses of the NRF rating system conducted to date, Lovegrove and Johnson29 compared the ratings of 163 South African botanists and zoologists with well-recognized bibliometric scores (such as the h-index and number of citations per publication) and found a good correlation. However, the peer-reviewed NRF ratings explained less than 40% of the variation in the scores. They concluded that a synergy between the peer-review system (NRF rating) and bibliometric scores would improve the assessment of scientific quality.

Another criticism of the proposed alternative systems of research subsidy is that my own institution, the University of Cape Town, would be one of the main beneficiaries, with its subsidy increasing by R40 million, an increase of almost 25% (cf. Tables 1 and 2), and even greater increases with the removal of the development grant (Table 3). Besides NRF ratings, however, there are two other measures that provide a gauge of UCT's research output and accomplishments: (1) it is the only South African institution ranked between 200 and 300 in the world rankings by the Shanghai Jiao Tong University30 and is ranked in the top 200 by the Times Higher Education Supplement31; and (2) it currently has one-third of the DST/NRF research chairs in the country.32


Concluding remarks

Government policy on research funding has a direct impact on the behaviour of academics, whether they are in the UK,5 Australia,20 the US1 or South Africa.11 The question that needs to be asked is this: What sort of behaviour do we wish to encourage in South Africa? Should we be rewarding universities whose academics produce the greatest number of publications, without regard to quality, or should our emphasis be on a system that inspires our academics to aim for a level of scholarship which can withstand the scrutiny of an international audience? I believe it is the latter.

If the current system of publication subsidy is replaced by one based on NRF ratings, then the NRF will have to ensure that individual researchers receive an annual incentive linked to their rating. This would go a long way to allaying any fears researchers may harbour that their ratings (like their publications) benefit the institution but not the individual. It would appear that such a policy has recently been implemented.33 It is evident that the rating system requires a significant administrative commitment by the NRF. For example, a total of 11 356 reviewers were approached during the period 2003 to 2006.21 If the number of rated academics were to double to 3000 because of the subsidy system, the NRF would need to have the resources to manage this growth.

Another challenge for the NRF in sustaining the rating system will be to address the substantial social costs of the peer-review system, particularly from the perspective of the individual researcher. As highlighted by Pouris,34 over 200 researchers let their ratings lapse between 2000 and 2004. There should, therefore, be an incentive for academics to aspire to an NRF rating.

In conclusion, it is evident that the publication subsidy serves as an extremely blunt instrument that probably has far more problems than benefits. In November 2003, the Education Ministry declared itself 'committed to considering the inclusion of additional indicators of research outputs in future years, as new national research policies are developed and implemented'.13 Perhaps now is the time to grasp the nettle and to consider using the NRF rating system instead of (or in addition to) the publication count.

I would like to thank my academic colleagues Cliff Moran, Tim Noakes and Daya Reddy, who encouraged me to pursue the ideas proposed in this commentary. I also acknowledge the assistance provided by UCT colleagues Hugh Amoore, Jane Hendry, Christina Pather and Marilet Sienaert, as well as Rian Cilliers of the Department of Education, who provided me with some of the key publications and information. Finally, the comments and suggestions from the reviewers of this paper were extremely valuable. The idea that NRF ratings should replace the publication subsidy is mine and does not necessarily reflect the position of the University of Cape Town or its Faculty of Health Sciences.


1. Axtell J. (1998). The Pleasures of Academe. A Celebration & Defense of Higher Education, p. 53. University of Nebraska Press, Lincoln.

2. Online:

3. Williams G. (1998). Misleading, unscientific, and unjust: the United Kingdom's research assessment exercise. Br. Med. J. 316, 1079–1082.

4. Elton L. (2000). The UK research assessment exercise: unintended consequences. Higher Education Quarterly 54, 274–283.

5. Corbyn Z. (2008). Researchers may play dirty to beat REF. Times Higher Education 1831: 6, 7 February.

6. Geuna A. and Martin B.R. (2003). University research evaluation and funding: an international comparison. Minerva 41, 277–304.

7. Melck A.P. (1982). Methods of financing universities with special reference to formula funding in South Africa. D.Com. thesis, University of Stellenbosch, South Africa.

8. Venter R.H. (1985). An Investigation of Government Financing of Universities, 2nd edn, Report: SAPSE-110, ISBN: 0 7970 0161 1. Department of National Education, Pretoria.

9. Ministerial Statement on Higher Education Funding: 2007/8 to 2009/10 (2007). Department of Education, Pretoria.

10. Online:

11. Gevers W., Hammes M., Mati X., Mouton J., Page-Shipp R. and Pouris A. (eds) (2006). Report on a Strategic Approach to Research Publishing in South Africa. Academy of Science of South Africa, Pretoria.

12. Vaughan C.L., Reddy B.D., Noakes T.D. and Moran V.C. (2007). A commentary on the intellectual health of the nation. S. Afr. J. Sci. 103, 22–26.

13. Funding of Public Higher Education (2003). Government Notice No. 2003, Department of Education, Pretoria.

14. Information on the State Budget of Higher Education, p. C13, March 2007. Department of Education, Pretoria.

15. Policy and Procedures for Measurement of Research Output of Public Higher Education Institutions (2003). Higher Education Act 101, 1997. Ministry of Education, Pretoria.

16. Report on the Evaluation of the 2006 Institutional Research Publication Outputs (2007). Department of Education, Pretoria.

17. Higgins J. (2008). Reproducing mediocrity. Mail & Guardian (Johannesburg), 19 February 2008.

18. NRF Vision 2015: Strategic Plan of the National Research Foundation, Pretoria. Online: www.nrf.

19. Butler L. (2003). Explaining Australia's increased share of ISI publications – the effects of a funding formula based on publication counts. Research Policy 32, 143–155.

20. Butler L. (2003). Modifying publication practices in response to funding formulas. Research Evaluation 12(1), 39–46.

21. Evaluation and Ratings: Facts and Figures (2007). National Research Foundation, Pretoria.

22. Multi-Criteria Decision Making (MCDM) Tool. Online:

23. Ten-Year Innovation Plan, Department of Science and Technology, Pretoria. Online:

24. Online:

25. Cozzens S., Gevers W., Letlape M., Marrett M., Posel D. and Webb C. (2005). Institutional Review of the National Research Foundation, Pretoria.

26. Findings and recommendations (2008). Review of the NRF System for the Evaluation and Rating of Individual Researchers. National Research Foundation, Pretoria. Online: 2008_01_14.stm.

27. Auf der Heyde T. and Mouton J. (2007). Review of the NRF Rating System: Synthesis Report, National Research Foundation, Pretoria. Online:

28. Cherry M.I. and Gibbons M.J. (2007). Rating the NRF's rating system. S. Afr. J. Sci. 103, 179–181.

29. Lovegrove B.G. and Johnson S.D. (2008). Assessment of research performance in biology: how well do peer review and bibliometry correlate? BioScience 58(2), 1–5.

30. Online:

31. Ince M. (2007). Ideas without borders as excellence goes global. Times Higher Education Supplement, pp. 2–3, 9 November.

32. Online:

33. Online:

34. Pouris A. (2007). The National Research Foundation's rating system: why scientists let their ratings lapse. S. Afr. J. Sci. 103, 439–441.
