South African Journal of Science

On-line version ISSN 1996-7489
Print version ISSN 0038-2353

S. Afr. j. sci. vol.115 n.7-8 Pretoria Jul./Aug. 2019

http://dx.doi.org/10.17159/sajs.2019/5785 

COMMENTARIES

 

The darker side of quantitative academic performance metrics

 

 

Casparus J. Crous I, II

I Department of Conservation Ecology and Entomology, Stellenbosch University, Stellenbosch, South Africa
II Previously: South African Environmental Observation Network, Arid Lands Node, Kimberley, South Africa


 

 


Keywords: economic incentives; hypercompetitive academia; productivity ratings; publishing behaviour; unethical practices


 

 

The economy of knowledge

Scientific information can be considered an economic commodity.1 Authors produce papers as currency with which to acquire research employment, and, in turn, publishers sell these papers for profit. The appropriateness of monetising knowledge, and the sustainability of such business models, remains contentious.2-4 But I do not wish to elaborate on science economics per se. Instead, I highlight here how current publishing behaviour has been shaped by economic incentives - behaviour that can now be predicted by basic economic models.5 Understanding such behavioural changes in science culture will help to identify potentially unethical publishing practices.6,7

 

The use of quantitative academic performance metrics

Eligibility criteria to secure employment or grants are generally based on quantitative academic performance indices, for example, the number of papers published, the h-index8,9, or citations per paper10. Science metrics have greatly advanced our understanding of publishing trends among authors, but they are clearly not without critique.11 I have previously commented on the pitfalls of unsophisticated performance measures in evaluating academic success among scholars from developed and developing nations.12 The current literature is filled with similar polemical opinions, where even choosing a journal to publish in has become an awkward affair.13 Yet for administrators these records remain a helpful first pass to separate the wheat from the chaff, and they are easy and free to obtain online.14
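Because so much hangs on it, it is worth being precise about the h-index: it is the largest number h such that h of an author's papers have each been cited at least h times8. A minimal sketch of the computation (the function name and example data are mine, purely for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations (Hirsch's definition)."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Hypothetical example: five papers cited 10, 8, 3, 2 and 1 times give h = 3,
# because three papers have at least three citations each, but not four.
print(h_index([10, 8, 3, 2, 1]))  # -> 3
```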

Despite progress on reforming performance evaluations, such as the Leiden Manifesto15, it will take some time for the playing field to level. The scene is therefore set for a new science culture of winners and losers. Aspiring academics will thus most likely have to manage their research productivity and performance actively5, and more aggressively than ever before in this hypercompetitive academic environment6,16.

 

The rise of unethical citation practices

A healthy economy depends on productivity (e.g. exports) exceeding consumption (e.g. imports). To maintain such an economy, it is crucial to predict future shortfalls in productivity that might lead to catastrophic losses in revenue. With millions of academic papers now in circulation, predicting future citation yield, and thereby personal performance, might prove extremely difficult for scholars.5 Fears of disadvantage thus arise, especially among aspiring academics, prompting collusive and coercive citation practices that reduce productivity uncertainty.5,6,17,18

Haley5 subsequently put forward three likely responses by academic authors facing metric performance uncertainty: (1) do nothing, relying on the natural accumulation of citations from the wider science community; (2) switch research fields, perhaps to those currently more topical, aiding rapid and steady citation accumulation; or (3) artificially inflate citations.

Examples of artificial inflation are in fact seen among authors, editors, reviewers and publishers. Authors may plan in advance to cite their under-cited papers in subsequent works, regardless of true applicability to the message at hand.17 Using coercive citation practices, reviewers may request that authors cite their papers, directly relevant or not, and editors may request that authors include more papers from their journal to improve its impact factor.18 In turn, collusive citation practices, although less common, occur when two or more journals conspire to cite primarily papers found in each other's issues, and in this way boost performance enormously.19 Also termed 'citation stacking', this is certainly one of the most concerning consequences of quantitative performance metrics. But perhaps the worst consequence of citation-based performance metrics is the possibility of directly manipulating citation scores by creating fake publications riddled with self-citations.20
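To see why citation stacking is detectable in principle, consider the share of a journal's outgoing references that point to one partner journal. The sketch below flags a journal pair whose mutual citation rates are both implausibly high; the function names, counts and threshold are all invented for illustration, and real screening (such as that behind journal de-listings) uses far more elaborate criteria:

```python
# A naive citation-stacking flag: both journals devote an outsized share of
# their outgoing references to each other. All numbers here are hypothetical.
def cross_citation_rate(cites_to_partner, total_outgoing_cites):
    """Fraction of a journal's outgoing citations aimed at one partner journal."""
    return cites_to_partner / total_outgoing_cites if total_outgoing_cites else 0.0

def looks_stacked(a_to_b, a_total, b_to_a, b_total, threshold=0.25):
    # Flag only when the inflation runs in both directions.
    return (cross_citation_rate(a_to_b, a_total) > threshold
            and cross_citation_rate(b_to_a, b_total) > threshold)

# Journal A sends 320 of its 900 references to B; B sends 410 of its 1100 to A.
print(looks_stacked(320, 900, 410, 1100))  # -> True: both rates exceed 25%
```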

 

Too many tactics, too little science?

Using Haley's5 three likely responses by authors to achieve metric success, I drew a three-bubble Venn diagram to conceptualise situations in which authors might combine responses to mitigate citation uncertainty (Figure 1; a brief sketch of these overlaps follows the list below). I imagined four scenarios:

The Desperate signifies authors who generally allow citations to their papers to accumulate naturally, but who sooner or later realise they are falling behind and begin actively managing ways to increase their scores.

The Schemer primarily looks for ways to artificially inflate citations while simultaneously switching to more popular research topics whenever possible and/or joining larger groups of collaborators to share in their productivity. Doing nothing is beyond the realm of The Schemer, who knows that the current publishing game favours the connected - for example, handling manuscripts as an editor provides more opportunities to place one's own papers.

The Survivor is an author who, like The Desperate, generally allows citations to accumulate naturally. But instead of artificially inflating scores upon realising that accumulation is slow, this author increases citation probability by constantly switching to hot, citable topics or more innovative fields.

The Abyss represents the point at which scholars utilise all three tactics. They switch to popular or innovative fields, often away from their expertise, and eventually build a wide network of collaborations to share in the larger citation economy. These authors might also downplay ethically blurry citation practices, such as artificial inflation, because they allow for natural accumulation too and might resort to playing the whole game. The Abyss is therefore the bottomless pit where the blurring of intellectual and moral standards captures the minds of young academics.
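The same taxonomy can be written down as a simple lookup from combinations of Haley's5 three responses to scenario labels; this set-based encoding is my own illustration of Figure 1, not something from the original analysis:

```python
# The three responses from Haley (2017) and the scenario each combination
# names in Figure 1. The encoding below is illustrative only.
DO_NOTHING, SWITCH_FIELDS, INFLATE = "do nothing", "switch fields", "inflate citations"

SCENARIOS = {
    frozenset({DO_NOTHING, INFLATE}): "The Desperate",        # natural accumulation, then inflation
    frozenset({SWITCH_FIELDS, INFLATE}): "The Schemer",       # inflation plus chasing hot topics
    frozenset({DO_NOTHING, SWITCH_FIELDS}): "The Survivor",   # natural accumulation plus topic switching
    frozenset({DO_NOTHING, SWITCH_FIELDS, INFLATE}): "The Abyss",  # all three tactics at once
}

print(SCENARIOS[frozenset({DO_NOTHING, SWITCH_FIELDS})])  # -> The Survivor
```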

[Figure 1: Three-bubble Venn diagram of Haley's responses - do nothing, switch fields, artificially inflate citations - whose overlaps define The Desperate, The Survivor, The Schemer and The Abyss.]
Should the status quo in current scientific culture be maintained (where citations are treated as currency), desperate and scheming authors will become increasingly common. Haley5 explained why: artificially inflating citations might require the least effort or funds to increase visibility. The Survivor perhaps represents the least frowned-upon route to better performance metrics. However, Haley5 noted that such strategies could lower publication quality, and citations are not guaranteed when shifting to more popular avenues of science. Ultimately, even though some scholars will remain honest and open, the evolution of citation behaviour under ill-advised incentives will eventually sow great confusion among those who can least afford it - aspiring scientists. Too many tactics with too little science can potentially devastate the scientific endeavour as we know it.6,7

Collateral damage of unethical publishing to science culture

The collateral damage of self-interested publishing games is becoming more visible. Peer reviewers are growing increasingly scarce.21 This is hard to believe given the staggering increase in published papers over the past few years - estimated at more than 2 million a year. Clearly, some reviewers are working more than others. In this era of paper gluttony and highly competitive scientific employment22, it is perhaps unsurprising that young, contract-based researchers might regard peer reviewing as a duty belonging to more comfortable, permanent employees - leaving themselves free to write and cross-cite many papers. Others, conversely, might choose to review extensively to promote citation of their own work. The spirit of peer reviewing is threatened.7

Lawrence16 summarised the dangers of an increasing 'aggression factor' for the psychology of people in a hypercompetitive science environment, where the battle for superiority may sap the will of gentler scholars to pursue academic careers. In this way many brilliant minds, grown desperate or troubled, will leave. Who knows what wonderful discoveries have been, and will be, lost.

 

Lighting up academia's darker side

Emerging economies must be attractive to heighten interest from investors; so too must the performance of aspiring academics. There are thus incentives to perform, and to perform well more often. No wonder economists can now describe how young researchers might behave to increase their chances of being spotted in the vast universe of academia. I wanted to highlight that behavioural changes in citation practices by authors facing performance uncertainty will become commonplace5 and will likely evolve further.6,7 It is crucial that more mature academics and administrators acknowledge these patterns, so that those in charge can help younger academics avoid becoming desperate or scheming, and ultimately getting lost in the moral abyss.

Torch-bearing policies are therefore urgently needed to light the way forward. Lane23 drew attention to the fact that if incentives are used to push productivity and performance in academia, then economists and social scientists must help reform the application of purely number-based metrics. It is high time we revisit existing and useful guidelines, such as the Leiden Manifesto, to better understand the limitations of available science metrics and how to apply them when evaluating personal excellence in complex academic environments.15,24

And while we are counting beans, the sixth mass extinction event has dawned.25 The ideal of an inclusive, noble science environment must be kept alive and kicking to protect our vulnerable planet.6,26 To keep that focus, we should consider eliminating quantitative performance measures altogether.

 

References

1. Stephan PE. How economics shapes science. Cambridge, MA: Harvard University Press; 2012.

2. Young NS, Ioannidis JP, Al-Ubaydli O. Why current publication practices may distort science. PLoS Med. 2008;5(10), e201, 5 pages. https://doi.org/10.1371/journal.pmed.0050201

3. Van Noorden R. Open access: The true cost of science publishing. Nature. 2013;495(7442):426. https://doi.org/10.1038/495426a

4. Bergstrom TC, Courant PN, McAfee RP, Williams MA. Evaluating big deal journal bundles. Proc Natl Acad Sci USA. 2014;111(26):9425-9430. https://doi.org/10.1073/pnas.1403006111

5. Haley MR. On the inauspicious incentives of the scholar-level h-index: An economist's take on collusive and coercive citation. Appl Econ Lett. 2017;24(2):85-89. https://doi.org/10.1080/13504851.2016.1164812

6. Edwards MA, Roy S. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ Eng Sci. 2017;34(1):51-61. https://doi.org/10.1089/ees.2016.0223

7. Grant DB, Kovács G, Spens K. Questionable research practices in academia: Antecedents and consequences. Eur Bus Rev. 2018;30(2):101-127. https://doi.org/10.1108/EBR-12-2016-0155

8. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569-16572. https://doi.org/10.1073/pnas.0507655102

9. Hirsch JE. Does the h index have predictive power? Proc Natl Acad Sci USA. 2007;104(49):19193-19198. https://doi.org/10.1073/pnas.0707962104

10. Lehmann S, Jackson AD, Lautrup BE. Measures for measures. Nature. 2006;444(7122):1003. https://doi.org/10.1038/4441003a

11. Costas R, Bordons M. The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. J Informetr. 2007;1(3):193-203. https://doi.org/10.1016/j.joi.2007.02.001

12. Crous CJ. Judge research impact on a local scale. Nature. 2014;513(7516):7. https://doi.org/10.1038/513007a

13. Lee AT, Simon CA. Publication incentives based on journal rankings disadvantage local publications. S Afr J Sci. 2018;114(9-10), Art. #a0289, 3 pages. https://doi.org/10.17159/sajs.2018/a0289

14. Harzing AW, Van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61-73. https://doi.org/10.3354/esep00076

15. Hicks D, Wouters P, Waltman L, Rijcke SD, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520(7548):429. https://doi.org/10.1038/520429a

16. Lawrence PA. The mismeasurement of science. Curr Biol. 2007;17(15):R583-R585. https://doi.org/10.1016/j.cub.2007.06.014

17. Van Raan AF. Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics. 2006;67(3):491-502. https://doi.org/10.1556/Scient.67.2006.3.10

18. Wilhite AW, Fong EA. Coercive citation in academic publishing. Science. 2012;335(6068):542-543. https://doi.org/10.1126/science.1212540

19. Van Noorden R. Brazilian citation scheme outed. Nature. 2013;500(7464):510-511. https://doi.org/10.1038/500510a

20. Delgado López-Cózar E, Robinson-García N, Torres-Salinas D. The Google Scholar experiment: How to index false papers and manipulate bibliometric indicators. J Assoc Inf Sci Technol. 2014;65(3):446-454. https://doi.org/10.1002/asi.23056

21. Lajtha K, Baveye PC. How should we deal with the growing peer-review problem? Biogeochemistry. 2010;101:1-3. https://doi.org/10.1007/s10533-010-9530-6

22. Siegel D, Baveye P. Battling the paper glut. Science. 2010;329(5998):1466. https://doi.org/10.1126/science.329.5998.1466-a

23. Lane J. Let's make science metrics more scientific. Nature. 2010;464(7288):488. https://doi.org/10.1038/464488a

24. Braun T, Bergstrom CT, Frey BS, Osterloh M, West JD, Pendlebury D, et al. How to improve the use of metrics. Nature. 2010;465(17):870-872. https://doi.org/10.1038/465870a

25. Ceballos G, Ehrlich PR, Dirzo R. Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines. Proc Natl Acad Sci USA. 2017;114(30):E6089-E6096. https://doi.org/10.1073/pnas.1704949114

26. Lawrence PA. The politics of publication. Nature. 2003;422(6929):259. https://doi.org/10.1038/422259a

 

 

Correspondence:
Casparus Crous
Email: cjcrous@gmail.com

 

 

Published: 30 July 2019

All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.