
South African Journal of Education

On-line version ISSN 2076-3433
Print version ISSN 0256-0100

S. Afr. j. educ. vol.31 n.1 Pretoria Jan. 2011

 


 

How an analysis of reviewers' reports can enhance the quality of submissions to a journal of education

 

 

Philip C van der Westhuizen*; J L van der Walt; C C Wolhuter

 

 


ABSTRACT

Not only has the number of scholarly journals worldwide increased substantially in recent years, but so has the number of articles published in them. Closer examination, however, reveals that the percentage of submitted articles actually published has remained in the region of 25%. This implies that much of researchers' time and energy is wasted through failure to have their research findings published. This has been occurring despite the availability of a surfeit of publications on the theme of 'How to write and publish a scientific article'. Analysis of the process of article writing and publishing reveals that it consists of four phases: writing and submitting an article, the processes followed by the editor, the actual review process by the reviewers, and how authors deal with the feedback. A literature survey shows that the last phase has not been discussed in the same detail as the other three. The authors contend that if prospective authors gave greater attention to this phase and learned from the findings outlined in this article, it would lead to an improvement in the quality of future submissions to a journal (in this particular case, a journal of education).


 

 

'We have read your manuscript with boundless delight. If we are to publish your paper, it would be impossible for us to publish any work of lower standard. And, as it is unthinkable that, in the next thousand years, we shall see its equal, we are, to our regret, compelled to return your divine composition, and to beg you a thousand times to overlook our short sight and timidity.'

Reputedly a rejection slip from a Chinese economics journal (Day, 1983:90).

 

Introductory remarks

The publication of scholarly articles grew worldwide in the decade between 1995 and 2005. According to available statistics, the number of articles in peer-reviewed journals increased in this period from 564,645 in 1995 to 709,541 in 2005 (National Science Board, USA, as quoted by Cummings, 2010). The number of scholarly journals has also increased: 11,429 journals appear on the list of ISI-accredited journals - up from 8,500 in 2002 (De la Rey, 2002). Despite the publication successes attained by scholars according to these figures, there is evidence of considerable failure. The 'mortality' rate of submissions to ISI-accredited journals is around 90%; in the case of South African SAPSE-accredited journals it is around 70%. This implies that if, say, the overall success rate were pitched at 25% (so that the 709,541 articles published in 2005 represent a quarter of all submissions), more than two million article submissions would have failed the peer review process in that year alone.
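
To make the arithmetic behind this claim explicit, the following minimal sketch (in Python) derives the implied numbers of submissions and failures; the 25% success rate is the illustrative assumption used above, not an observed figure.

```python
# Back-of-envelope check of the failure figures quoted above.
published_2005 = 709_541   # articles published in peer-reviewed journals in 2005
success_rate = 0.25        # illustrative overall acceptance rate assumed in the text

implied_submissions = published_2005 / success_rate
implied_failures = implied_submissions - published_2005

print(f"Implied submissions: {implied_submissions:,.0f}")  # ~2,838,164
print(f"Implied failures:    {implied_failures:,.0f}")     # ~2,128,623 (> 2 million)
```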

The picture becomes even gloomier when one takes into account that there is an abundance of publications on 'How to write and publish a scholarly article' (Fradkov, 2003; Venter, 2001; and, for scholarly books, ASSAf, 2009). A simple computer search with the descriptors write, publish, scientific article, education, journals, and 2006-2009 revealed the following (see Table 1).

[Table 1]

Despite the availability of all these manuals about writing scholarly articles, the mortality rate of manuscripts has remained high. The acceptance rate of submissions by the ISI- and IBSS-accredited South African Journal of Education (SAJE) is a case in point: 2006: 25%; 2007: 28%; 2008: 19%, and 2009: 16% (SAJE Annual Reports: 2006-2009).

While this phenomenon can arguably be related to the fact that South African (and other) academics have in recent years had to cope with the influx of large numbers of students, and hence have had less time to devote to research and publication, there is also the possibility that they have not yet mastered the 'art' and/or 'science' of writing a scholarly article (Van der Walt, 2001). This state of affairs prompted us to ask (a) whether we could understand the problem better by analysing reviewers' reports submitted to a particular journal, and (b) whether certain guidelines could be developed that might lead to greater acceptance of manuscripts.

In order to find answers to the conundrum as to why submissions to scholarly journals have failed at such a rate, we subjected the reviewers'/referees' reports of the South African Journal of Education to analysis. This journal was selected because of the availability of digitised reviewers' reports for 2006 to 2009. Based on this analysis, we concluded that the failure of prospective authors to submit publishable scholarly manuscripts can be ascribed mainly to two factors. Firstly, many manuscripts are based on unsound underlying research, i.e. a research project that was in itself methodologically and/or otherwise replete with all sorts of shortcomings. Secondly, many manuscripts are rejected because they fail to present the findings of the researcher(s) coherently and convincingly, or to substantiate their assertions, claims and/or contentions.

Our purpose in this paper is to provide evidence in support of the contention that manuscripts fail on at least these two counts. The remainder of this paper is, therefore, structured as follows. We commence by outlining the research methodology and the conceptual-theoretical framework against which we performed the empirical investigation. This is followed by a report on our findings. We then discuss the findings, and follow with a number of recommendations and concluding remarks.

 

Ethical considerations

We received permission from the editorial committee to analyse reviewers' reports and to report on our findings, on condition that neither the names of the reviewers nor those of the authors of manuscripts be divulged.

 

Research design and methodology

We followed a mixed methods or multi-analysis design for purposes of naturalistic generalization (Onwuegbuzie et al., 2009:passim, 117; also refer to Ivankova, Creswell & Clark, 2008:254 ff), i.e. rather than the researcher generalizing the findings, it is the reader who generalizes from his or her past experiences (Onwuegbuzie et al., 2009:120). This is a form of 'fuzzy generalization', in the sense that 'something (that) happened in one place ... might also happen elsewhere' (Ekiz, 2006:73).

In the sequential mixed analysis that we followed (Brannen, 2008:53; Onwuegbuzie et al., 2009:129), the first set of methods pertained to the development of the conceptual-theoretical framework as well as to the more qualitative part of the empirical work (see Data processing). Here we applied a heuristic or hermeneutic-interpretivist document, text and narrative analysis (Ashley & Orenstein, 2005:36-38), which enabled us to analyse the literature and documents, such as peer-review forms, hermeneutically and interpretively for the purpose of composing a conceptual-theoretical framework for the empirical work. Using the criteria contained in the South African Journal of Education's current editorial evaluation form as a springboard, we developed a conceptual-theoretical framework to provide us with a series of constructs with which we could approach the empirical work.

For the quantitative empirical investigation (see Data processing) we made use of so-called quasi-statistics which, according to Onwuegbuzie et al. (2009:126), enable one to assess the amount of evidence that bears on a particular conclusion or observation (e.g. the frequencies at which reviewers refer to shortcomings in a manuscript). Drawing on the work of Neuman (2000:145-146), we performed the empirical analysis and found that it brought to light 17 categories and/or constructs.

 

Conceptual-theoretical framework

The process of writing an article consists of four phases. In the input phase, the researcher prepares an article and submits it to the editor (Booth et al., 2003; Henning et al., 2002; Leedy & Ormrod, 2005:282 ff; Huff, 2009; McMillan & Schumacher, 2010:465 ff).

In the process phase, the editor and/or members of the editorial committee do a preliminary evaluation of the article to see whether it complies with editorial policy; if it passes this examination, the article is sent for review to at least two or three independent reviewers. They evaluate the article according to the editorial board's guidelines for the evaluation of articles (refer to Pickar, 2007:17).

In the feedback phase, the reviewers return their reports. Based on the reports of the (majority of the) reviewers, as well as on his/her own assessment of the suitability and standard of an article in terms of the criteria contained in the evaluation sheet, the editor decides on the acceptability of the manuscript. Some authors file unfavourable reports in a bottom drawer (Murray, 2005:194), never to return to them. Others study them and try to learn from mistakes and shortcomings. In cases where only minor corrections are advised, authors compile a change log in which they indicate where they have effected changes and where they did not or could not follow the advice and requests of the editor.

There is an abundance of publications on the first three phases, but there appears to be a paucity of literature on the fourth. Although some authors discuss the aspect of dealing with reviewers' reports, they tend to shy away from describing how to do a detailed analysis of such reports. Murray (2005:187-203) presents a few examples of reports and provides guidelines to authors for learning from them (p. 197). Although she regards this phase as a 'critical step' in writing a paper (p. 188), she does not offer a detailed description of how to deal with reviewers' reports. Mullin (1999) does not refer to reviewers' reports but confines himself to dealing with feedback from colleagues and peers. Klingner et al. (2005) make use of a few examples of reviewers' reports in their discussion of how to deal with such feedback. The same applies to Uchiyama et al. (1999). Lötter's (2000) and Fradkov's (2003) papers provide guidelines for reviewers for adjudging scientific articles, but none for prospective authors.

Pickar (2007) mentions that the actual value of peer review has been 'little studied', but that it clearly helps editors decide whether to accept or reject an article. In his opinion, peer review 'has helped both editors and authors to improve the quality of manuscripts'. He does not, however, enter into a detailed analysis of reviewers' reports to show how dealing with them can lead to improvement in the quality of manuscripts. Day (1983:80-93) likewise does not find it necessary to give an analysis of reviewers' reports to show how authors can learn from them; he merely provides advice to authors based on his personal experience and wisdom.

The above discussion of the literature regarding the review process, although by no means exhaustive, reveals a tendency among experts to discuss the review process with the aid of a few select extracts from actual reviewers' reports. Since we could find no publication in which the author makes a detailed analysis of actual reviewers' reports to a journal to show how authors could learn from them, we resorted to two alternatives. Firstly, we used the current evaluation form of the South African Journal of Education as a starting point; a copy can be obtained from nsosaje@nwu.ac.za. This form requires a reviewer to respond to the quality of a submission in terms of 11 items, the first nine of which are closed criteria:

1. the importance, relevance or appeal of the submission to the academic community;
2. originality and independence;
3. presentation and readability (language usage, accuracy of references and bibliography);
4. statement of problem, aim and objectives;
5. theoretical framework (literature review);
6. appropriateness of a number of aspects (research design, data collection and procedure, ethical guidelines, data analysis, data presentation and discussion, conclusion and recommendations);
7. the extent to which the line of argumentation is clear, cohesive and logical;
8. contribution to theory; and
9. contribution to practice.

The form concludes with space for a reviewer's recommendation on whether the submission should be published or not, and with an open item (item 11) for critical comments and suggestions for improvement.

Secondly, we checked the relevance and validity of the criteria contained in the standard evaluation form of the SAJE against the contents of sections on methodology in textbooks that typically treat the dissemination of research results under headings such as Form for evaluating a ... research report (Borg, Gall & Gall, 1993:427 ff), Preparing to draft, drafting and revising (Booth, Colomb & Williams, 2003:183 ff), The research report (Babbie & Mouton, 2004:563 ff) and Preparing the research report (Leedy & Ormrod, 2005:282 ff), to mention only a few. This revealed that the standard evaluation form of the Journal did indeed cover the most salient points of article writing and the effective dissemination of research results. Based on this finding, we then used the nine criteria contained in the SAJE peer review form as the instrument for analysing the reviewers' reports for the period 2006 to the end of 2009, as discussed later. This exercise enabled us to expand the original nine closed items of the Journal's review form to the 17 items reflected in Table 4. The expansion was necessary because some of the subcategories of the SAJE peer review form gained such prominence in the analysis of reviewers' reports that they had to be reflected in separate cells in Table 4.

A qualitative analysis of the reviewers' critical comments and suggestions cast more light on some of the problems that the reviews identified.

The findings and guidelines that we report will hopefully fill the lacuna regarding how prospective authors can learn from reviewers' reports. Our investigation is especially significant from a South African perspective. The results of the 2008 Changing Academic Profession international survey placed South Africa last among the 18 participating countries in terms of research productivity (Cummings, 2010). Since South African scholars are clearly lagging behind, it is important to discover the reasons for this and to suggest guidelines for remedying the situation.

 

Empirical investigation: analysis of reviewers' reports

Aim of the empirical part of the investigation

The purpose of the analysis was to discover precisely for which reasons manuscripts were deemed unacceptable for publication in the SAJE.

Sampling

The editorial committee of the SAJE made available all the reviewers' reports, for the years 2006 to the end of 2009, on manuscripts that had initially failed but were published after revision. We chose the SAJE because of its prominence in the educational fraternity, not only in South Africa but also worldwide. This Journal is one of only a handful of South African publications that are both ISI- and IBSS-accredited. All the reports for the years in question were also available in electronic format.

A total of 710 articles was submitted to the SAJE in the period in question. Of this number, 154 (21.6%) were published. Only seven were published as originally submitted; 147 were published after revision and, in some cases, reassessment by reviewers. A total of 674 reviewers was enlisted to review these articles. Of this number, 634 were attached to 16 higher education institutions in South Africa; the remaining 40 were attached to 18 higher education institutions outside South Africa.

Data processing

As stated earlier under Research design and methodology, we first determined the main themes or topics covered by the reviewers in their reports through coding. We followed the three-step coding procedure outlined by Neuman (2000:420-425), Henning et al. (2004:104-106), De Vos et al. (2005:334) and Ekiz (2006:72). In constructing and evaluating the different categories of reviewers' remarks, we applied hermeneutic-constructivist strategies, which included the establishment of external as well as internal statistical validity (quantitative data used in an interpretivist manner) (Onwuegbuzie et al., 2009:passim).

All the reports were independently as well as jointly analysed by three researchers. The salient failures of the reviews were organised into categories (themes or topics), and efforts were made to subsume these categories under broader headings so as to avoid reporting on a multitude of smaller factors.
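
To illustrate the quasi-statistical step in concrete terms, the sketch below tallies the frequency of coded reviewer remarks per category. It is a minimal illustration in Python with hypothetical category labels; the study's actual 17 categories and their frequencies appear in Table 4.

```python
from collections import Counter

# Hypothetical coded remarks: in the three-step coding procedure described
# above, each critical remark in a reviewer's report is assigned a category.
coded_remarks = [
    "theoretical framework", "research design", "language usage",
    "theoretical framework", "data analysis", "theoretical framework",
    "research design", "language usage",
]

# Quasi-statistics: the frequency with which reviewers refer to each
# shortcoming indicates how much evidence bears on that category.
frequencies = Counter(coded_remarks)
for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```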

 

Findings

Table 2 resembles the format of the SAJE review form, with two exceptions: no data were available for ethical aspects (refer to criterion 6 of the review form), and the 'appropriateness' criterion (6) of the review form is expanded by teasing out four 'appropriateness' sub-criteria, namely, appropriateness of data collection and procedure (item 7 of the Table), appropriateness of data analysis (item 8), appropriateness of data presentation and discussion (item 9), and appropriateness of conclusions (item 10). (Table 4 also reflects these and other sub-categories.) The coding of the various aspects of the reviewers' reports is given in Table 2.

[Table 2]

In order to facilitate rank-ordering, the above ratings were weighted: excellent ratings were multiplied by 2, good ratings were multiplied by 1, moderate ratings were given a value of zero, and poor ratings were multiplied by -1. The weighted ratings are given in Table 3.

[Table 3]
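
The weighting scheme amounts to a simple linear score per criterion. The sketch below expresses it in Python; the rating counts used here are hypothetical, while the actual frequencies underlying Table 3 appear in Table 2.

```python
# Weights assigned to the rating levels, as described above.
WEIGHTS = {"excellent": 2, "good": 1, "moderate": 0, "poor": -1}

def weighted_score(rating_counts: dict) -> int:
    """Weighted rating for one criterion, given the count per rating level."""
    return sum(WEIGHTS[level] * count for level, count in rating_counts.items())

# Hypothetical counts for one criterion (the real figures are in Table 2).
example = {"excellent": 5, "good": 40, "moderate": 60, "poor": 30}
print(weighted_score(example))  # 5*2 + 40*1 + 60*0 + 30*(-1) = 20
```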

According to Tables 2 and 3, the most important reason for the rejection of manuscripts was their poor contribution to theory. The scores in bold typeface for items 4 to 13 in Table 3 show which other factors led to rejection.

In Table 4, the third and fourth columns reflect criticisms against particular aspects of the manuscripts. A total of 1,748 sub-category responses (third and fourth columns), subsumed under 17 broader categories (columns 1 and 2), emerged from the analysis of the closed section of the reviewers' reports.

The qualitative investigation, on the basis of the open item 11 of the review form, revealed that the prospective author of a scholarly article should have mastered the following two aspects of scholarly work. Firstly, s/he should already have become a competent researcher and should possess research findings that are deemed worthwhile to share with the academic community. One reviewer wrote,

It is always useful to ask yourself this question: Am I submitting this paper because I want to get something published, or am I submitting this paper because I have some important knowledge to share with other educators? If you have some important knowledge to share, you can structure the paper around questions such as: What is this important knowledge? How did I obtain it? Why do I believe that it is true? Why is it worth sharing? What are the implications of this knowledge for other educators? How could other educators use this knowledge? What do I have to do to convince other educators that this knowledge is useful?

The following comments were made with respect to the ubiquitous problem of inadequate underlying research, especially regarding originality: 'No new insights about analysis, practice or theory are provided by the review'; 'In my opinion this is not new research - it is a reinvention of the wheel'; 'Don't people read educational legislation and policies, as all the "research" of this article is contained in these documents?'; 'It is always useful to ask yourself this question: If someone who is already knowledgeable in this field (e.g. technology education) reads my paper, what important thing will they learn? Unless a knowledgeable reader ... will learn something important from your paper, there is not much point having them read it'.

Secondly, a prospective author of a scholarly article should have mastered the intricacies of article-writing. In their qualitative remarks, reviewers tended to concentrate on a variety of factors regarding this second aspect. Some focused more on technical aspects, some more on the line of argumentation, others more on content. From the many remarks about the quality of articles, we selected the following as representative: 'The abstract looks sloppy', one reviewer remarked. Another advised,

Always make sure that your paper lives up to the expectations created by the title and the abstract. A simple test is this: show several colleagues the title of your paper and ask them to tell you what they would expect to find in the paper. If what they expect is not what you have written, reconsider the title or rewrite the paper. Do the same thing with the abstract.

With respect to the introduction to the article, one said, 'The introduction is weak'; others noted that the topicality of the research is not explained, and/or that the organisation or structure of the article is not acceptable ('it is very poorly organised and lacks both coherence and cohesion'). Regarding the use of language, one remarked: 'The language is laboured and convoluted; the syntax needs to be clarified'; 'the author should get the services of an editor'. Interference by the first language results in 'direct and clumsy translations from the mother tongue', according to another.

 

Discussion of findings

Researchers must keep in mind that the article itself is not the research project; it is a report of research that has been completed and of which the results are now being shared with other interested parties. Researchers should, therefore, resist the 'publish or perish culture' until they reach a point where they have substantial findings that should be shared with the academic (in this case, the education) community.

Having the data and the findings to share is only the first half of the publishing enterprise. A prospective author should also know and understand the intricacies of writing a publishable article. Prospective authors would, therefore, find it worthwhile to study comprehensive 'reject' reports.

The qualitative study of the reviewers' reports leaves one with the impression that papers presented to the SAJE in the period 2006-2009 have largely failed because of inept presentation. In some cases, the despair of the reviewers was quite obvious; they felt the need for findings and recommendations to be disseminated, but they did not see their way clear to approving a manuscript. In one case, a reviewer remarked that s/he had seen better papers from Honours students than the one s/he had just reviewed.

From the quantitative investigation, the bold typeface in Tables 2 and 3 shows where the main problems with manuscripts lie, according to the reviewers (as reflected in the closed section of their reports). Failure to state the problem and objectives of the underlying research; the absence or inadequacy of a conceptual-theoretical framework; and problems with the research design, with data collection and processing, with the discussion of findings, with the presentation of recommendations, with the underlying logic of the argument, and with the contribution to theory and practice appear to have been the most serious shortcomings.

Table 4 tells the same story from another perspective. The reviewers appear to have experienced the fewest problems with authors' selection of theme, with the introductions to their papers, with their statement of hypotheses, and with the technical editing of manuscripts. The frequencies in bold typeface show where they found the manuscripts to have fallen short. Of concern here are the problems with conceptual-theoretical frameworks and research method, since these are two aspects in which prospective academics (researchers) can be expected to be meticulously trained. The problem of language usage and editing is also a cause for concern. The problems in this respect can be ascribed to the fact that authors are expected to write in English, which in many cases is their second or even third language.

Columns 3 and 4 of Table 4 embody a much sharper instrument. The bold typeface in the third column (20 was arbitrarily taken as the cut-off point) shows exactly where the reviewers pinpointed the problems with respect to each facet of a manuscript. Of concern here are, once again, the absence or inadequacy of a conceptual-theoretical framework and problems with the method applied in the underlying research. These problems are compounded by the shortcomings mentioned in the last column.

 

Recommendations

Prospective authors should study a number of 'reject' reports by competent reviewers. How such reports can be accessed is not clear, but a study of these would be invaluable for an inexperienced author.

Prospective article writers should, furthermore, make a careful study of the data we have presented in Tables 2 to 4, especially the shortcomings highlighted in bold typeface. They should in fact consider keeping Tables 2 to 4 on the desk next to their computer, and refer to them constantly when planning a research project, during its execution, and when writing up the findings. Although special attention is required for the problems highlighted, attention should also be paid to all the aspects of article writing contained in these three tables. A comparison between the current review form of the SAJE and the criteria reflected in the three tables shows no constructs, dimensions or factors in connection with writing articles for the SAJE other than those already contained in the review form. Put differently, Table 4, although containing 17 categories as opposed to the nine in the peer review form of the SAJE, contains no new constructs or criteria not already reflected in the peer review form; some of its categories reflect sub-categories of criteria in the peer review form. It is, therefore, recommended that prospective authors keep the criteria embodied in the review form in mind.

Although the findings that we report here are specifically relevant to authors contemplating submitting a manuscript to the SAJE, we would argue that following these guidelines would also enhance the standard of article writing for other journals.

The findings of this investigation can also be construed as an indictment of many a faculty of education. Contrary to what one would expect from the training of educationists, authors do not appear to have been well prepared for the construction of a conceptual-theoretical framework or a research methodology. They also appear not to have been exposed to adequate training in how to present their findings in a scholarly paper. For this reason, it is recommended that designers of postgraduate training in education take cognisance of the problems highlighted in this study. Faculties of education should also consider enlisting the services of more senior researchers to help their less experienced colleagues not merely to file and/or ignore negative reviews, but to learn from them as much as they can.

Finally, the categories enumerated in Table 4 can be useful for editors when designing a questionnaire to be completed by reviewers.

 

Conclusion

We began the article by stating our contention that the failure of prospective authors to submit publishable scholarly manuscripts can be ascribed to a variety of factors, most notably the failure to do sound research and the inability to report findings to the academic community appropriately and effectively. This contention has been vindicated by the three sets of evidence we have presented. Firstly, the conceptual-theoretical overview of the process of article writing revealed that insufficient attention is devoted to the final phase of the process, namely, dealing with the contents of reviewers' reports. Secondly, the quantitative investigation pinpointed the areas of article writing in which manuscripts submitted to the SAJE have so far fallen short. Thirdly, the qualitative analysis demonstrated that much can be learned from a careful study of the lengthy narratives occasionally returned by reviewers.

 

References

Ashley D & Orenstein DM 2005. Sociological theory. Boston: Pearson.

ASSAf (Academy of Science of South Africa) 2009. Scholarly books: their production, use and evaluation in South Africa today. Pretoria: Academy of Science of South Africa.

Babbie E & Mouton J 2004. The practice of social research. Oxford: Oxford University Press.

Booth WC, Colomb GG & Williams JM 2003. The craft of research. Chicago and London: University of Chicago Press.

Borg WR, Gall JP & Gall MD 1993. Applying educational research. New York: Longman.

Brannen J 2008. The practice of a mixed method research strategy: personal, professional and project considerations. In: Bergman MM (ed.). Advances in mixed methods research. London: SAGE Publications.

Cummings WK 2010. Sleeping giants and leaping tigers: the Pacific Rim academic productivity challenge. Paper presented at the 2010 Annual Conference of the Comparative and International Education Society (CIES), Palmer House Hilton, Chicago, USA, 1-5 March.

Day RA 1983. How to write and publish a scientific paper. Philadelphia: ISI Press.

De la Rey C 2002. Quality evaluation and promotion in South African research journals. Paper presented at the first workshop on the strategic management of South African research journals, University of Pretoria, Pretoria, 18 October.

De Vos AS, Strydom H, Fouche CB & Delport CSL 2005. Research at grass roots: for the social sciences and human service professions. Pretoria: Van Schaik.

Ekiz D 2006. Improving primary teacher education departments through pre-service teachers' pedagogical voices. Journal of Theory and Practice in Education, 2:68-80.

Fradkov AL 2003. How to publish a good article and to reject a bad one. Notes of a reviewer. Automation and Remote Control, 64:149-157.

Henning E, Gravett S & Van Rensburg W 2002. Finding your way in academic writing. Pretoria: Van Schaik.

Henning E, Van Rensburg W & Smit B 2004. Finding your way in qualitative research. Pretoria: Van Schaik.

Huff AS 2009. Designing research for publication. Thousand Oaks, CA: SAGE Publications.

Ivankova NV, Creswell JW & Plano Clark VL 2008. Foundations and approaches to mixed methods research. In: Maree K (ed.). First steps in research. Pretoria: Van Schaik.

Klingner JK, Scanlon D & Pressley M 2005. How to publish in scholarly journals. Educational Researcher, 34:14-20.

Leedy PD & Ormrod JE 2005. Practical research. Upper Saddle River: Pearson.

Lötter H 2000. How to judge scientific research articles. Tydskrif vir Taalonderrig, 34:93-100.

McMillan JH & Schumacher S 2010. Research in education. Boston: Pearson.

Mullin CA 1999. "What I needed to know to get published": teaching (frightened) graduate students to write for publication. Journal on Excellence in College Teaching, 10:27-52.

Murray R 2005. Writing for academic journals. London: McGraw-Hill.

Neuman WL 2000. Social research methods. Boston: Allyn & Bacon.

Onwuegbuzie AJ, Johnson RB & Collins KMT 2009. Call for mixed analysis: a philosophical framework for combining qualitative and quantitative approaches. International Journal of Multiple Research Approaches, 3:114-139.

Pickar JH 2007. Do journals have a publication bias? Maturitas, 57:16-19.

South African Journal of Education 2010. Digital versions of reviewers' reports submitted from 2006 to 2009. Available from the editorial office.

South African Journal of Education. Annual Reports 2006-2009. Available from the editorial office.

Uchiyama K, Simone G & Borko H 1999. Publishing educational research: guidelines and tips. AERA-net electronic publication.

Van der Walt JL 2001. The art of writing an article for an academic journal. Paper presented at a seminar, Potchefstroom, 18-21 April.

Venter M 2001. Publikasiegereed? Oor aanbiedingswyses, keuringsprosedure en teksversorging van navorsingsartikels. (Ready for publication? On methods of presentation, review procedures and text editing of research articles). Koers, 66:673-689.

 

 

Authors
Philip C van der Westhuizen is Professor in Educational Management, Leadership and Organisational Theory at the North-West University, Potchefstroom Campus. He is a rated and extensively published researcher and currently the Editor of the South African Journal of Education.
J L (Hannes) van der Walt is a former Dean of the Faculty of Education at the North-West University, Potchefstroom Campus, and previously a member of the editorial board of the South African Journal of Education. He is widely published and specializes in the field of philosophy of education.
C C (Charl) Wolhuter is Professor in the Department of Comparative Education at the North-West University, Potchefstroom Campus. He has lectured in History of Education and Comparative Education at several universities and has published widely in these fields.

 

 

* philip.vanderwesthuizen@nwu.ac.za
