
Journal of Education (University of KwaZulu-Natal)

On-line version ISSN 2520-9868
Print version ISSN 0259-479X

Journal of Education, n.79, Durban, 2020



The use of peer assessment at a time of massification: Lecturers' perceptions in a teacher education institution



Vusi Msiza (I); Thabile Zondi (II); Londiwe Couch (III)

(I) Curriculum and Education Studies, School of Education, University of KwaZulu-Natal, Pinetown, South Africa
(II) Geography Education, School of Education, University of KwaZulu-Natal, Pinetown, South Africa
(III) Curriculum and Education Studies, School of Education, University of KwaZulu-Natal, Pinetown, South Africa




The massification of higher education in South Africa has spurred a demand for innovative ways of assessing students. Peer assessment is an approach that is rarely used in higher education. In this paper, we explore lecturers' perceptions of the use of peer assessment at a teacher education institution. Using scaffolding as the theoretical underpinning for the study, we used in-depth semi-structured interview conversations to generate data from a sample of nine lecturers from various disciplines. Most of the participants use peer assessment, although its application varies across disciplines. Participants also indicated a number of factors that they take into consideration when they use peer assessment, such as the students' level of study and the lecturers' disciplinary knowledge and teaching aims. Participants perceived peer assessment as significant in improving students' content knowledge and assessment skills as future teachers. The findings also indicated the limited use of e-learning assessment tools. In the South African context, further studies are required to broaden our understanding of peer assessment in teacher education institutions.

Keywords: peer assessment, lecturers, teacher education, scaffolding




Peer assessment is not a new area of research; it has a long history in basic education as well as in universities, dating back to the 1970s (Kane & Lawler, 1978; Smith, Cooper, & Lancaster, 2002). Since then, there has been a significant growth of research in peer assessment globally (Ashenafi, 2017; Falchikov, 2001), yet little in South Africa thus far. Scott, Van Der Merwe, and Smith (2005), Naicker and Bayat (2012), Mostert and Snowball (2013) and Kang'ethe (2014) have, however, contributed to scholarship regarding peer assessment in the South African context.

In this context, the 2015-2016 #Rhodesmustfall and #Feesmustfall movements demanded the transformation of higher education and the decolonisation of the curriculum. The student movements indicated the need for students to become participants and co-constructors of their educational experiences, including assessment. According to Luckett (2016) and Murris (2016), the principles of a decolonised curriculum should commence at the point of knowledge production as well as be apparent in how the disciplines are constructed. Here we take the stance that a decolonised curriculum puts pressure on traditional teaching and assessment methods (Mohamedbhai, 2014). When teaching methods are adjusted, assessment methods require equal adjustment to maintain constructive alignment (Biggs, 2014). Therefore, the traditional ways of assessment during which the lecturer is the sole assessor of students' work, ought to change. This parallels developments elsewhere since active assessment approaches such as peer, self, and online assessments are increasingly gaining attention in teacher education institutions in the Global North (Naicker & Bayat, 2012).

There has been intensified pressure on institutions of higher education from the South African government to achieve the national goals of equality, equity, and transformation through providing access to education to all. This has led to an increase in the number of students enrolled in higher education (Council on Higher Education, 2016) and is referred to as the massification of higher education (Mohamedbhai, 2014). Countries in the Global North have also experienced this phenomenon; it began in the 1980s in the United Kingdom and the United States of America (Giannakis & Bullivant, 2016), while in South Africa it began in the early 1990s.

The pressures created by the massification of higher education have been felt the most by the academic staff in that they have had to adjust their pedagogical practices and search for innovative assessment strategies that might enable them to cope with large class sizes (Selyutin, Kalashnikova, Danilova, & Frolova, 2017). There is thus pressure to find innovative ways of conducting assessment.

According to Barnard, De Luca, and Lie (2015), less attention has been paid to lecturers' perceptions of peer assessment. In light of this, we explore how lecturers in a South African teacher education institution perceive and use peer assessment to facilitate teaching and learning in large classes. Our research question is: What are lecturers' perceptions of peer assessment at a time of massification in a teacher education institution? First, we map an extensive literature review on peer assessment as it applies to teacher education, and then go on to discuss the theoretical framework, the methods and methodology, and the findings, before we offer our conclusion.


Review of literature

Peer assessment is often used in formative assessment. According to Earl (2006), formative assessment intends to facilitate learning to build understanding and knowledge. Peer assessment is one of the methods of formative assessment since its primary aim is learning. For Earl (2013), peer assessment is assessment as learning; while students are assessing each other, they also learn the skill and the content. There are many definitions of peer assessment in the literature. Falchikov (1995), Brew (1999), and Topping (2009) have defined peer assessment as making judgements and providing feedback on one another's work, which could be offered in a small group or to an individual. For instance, individuals could assess others in a classroom, or members of a group could assess each other on the work done in the group (Scott, Van Der Merwe, & Smith, 2005). We understand peers to be individuals who share a similar aspect of identity, such as the level of study. For Ashenafi (2017), peer assessment is "a scenario in which two or more students are involved in completing tasks that require fairly equivalent levels of participation for the entire process to be effective" (p. 245). Peer assessment is often presented together with peer learning and cooperative learning. In this review, we discuss peer assessment and peer learning as interwoven since learning and skills improvement also take place during peer assessment (Topping, 2009). Earl (2006) and Scott, Van Der Merwe, and Smith (2005) have suggested that those who administer peer assessment have an opportunity to learn the content and develop assessment skills.

The purpose of peer assessment is a critical component that can make the process successful or unsuccessful. Jordan (1999) described peer assessment as process-oriented, as opposed to other types of assessment that are product-oriented, so, for her, ongoing formative peer feedback is significant. Assessment is less about getting a good score and much more about the learning opportunities for students (Ramsden, 2003). In higher education, both peer assessment and learning are driven by recent approaches to knowledge production and how knowledge is transferred. There is an emphasis on treating students as active co-constructors of knowledge rather than as passive recipients of it (Naicker & Bayat, 2012; Wilson, Diao, & Huang, 2015). The purpose of peer assessment should be explained to students before they engage in it. This includes providing clear guidelines, rubrics, and training for students on how to provide useful comments as well as critical feedback (Kang'ethe, 2014; Smith et al., 2002; Topping, 2009; Wiliam, 2011; Zou, Schunn, Wang, & Zhang, 2017).

Peer assessment can be carried out in various ways; it can be accomplished by providing verbal or textual feedback, and it can be corrective, confirmatory, or suggestive (Cho & MacArthur, 2010; Topping, 2009). A peer providing corrective or suggestive feedback should note the aspects needing correction and then suggest improvements. Also, the process of peer assessment can be reciprocal, or it can be one way (Topping, 2009). For example, two peers could give each other feedback, or a peer could provide feedback without receiving anything in return.

Given the increasing student enrolment, lecturers are struggling to provide formative feedback, so institutions often resort to multiple-choice questions for assessment, as Mostert and Snowball (2013) have pointed out. While multiple-choice testing may provide an opportunity for in-depth learning if it has been appropriately set, the cooperative learning opportunity is lost. Studies by Dochy, Segers, and Sluijsmans (1999) and by Topping (2009) suggest that, where there is a choice, lecturers would prefer to use formative peer feedback with comments since this ensures ongoing learning. While peer assessment provides opportunities for cooperative learning, some researchers have raised concerns about its use, arguing that its benefits are blurred (Brew, 1999; Kang'ethe, 2014). In a study conducted at the University of Fort Hare, Kang'ethe (2014) found that institutions of higher learning are reluctant to embrace student-centred forms of assessment such as peer assessment. Some lecturers view assessment in general as something done to students by the knowledgeable lecturer instead of seeing it as an activity done with students (Brew, 1999).

Literature from South Africa shows that lecturers have concerns about the level of study at which peer assessment can be introduced to their students (Snowball & Mostert, 2013). Naicker and Bayat (2012) have noted that, at their first attempt at introducing peer assessment with second-year students, the students responded negatively because of its novelty. Students engaged in peer activities but valued feedback from their tutors or lecturers more. There needs to be a pedagogical rationale behind each lecturer's decision about the level of study at which to introduce peer assessment, and to which group. A study on implementing peer assessment for information systems, conducted at the University of Cape Town, demonstrated that introducing students to peer assessment over a long period of time allowed them to acquire assessment skills over the years so that, in their final year, they displayed a high level of competency in assessing their peers (Scott, Van Der Merwe, & Smith, 2005). Although there are concerns, the real value of peer assessment, contrary to what many students think, is not that it will reduce lecturers' marking workload (Wilson et al., 2015). Instead, the real value lies in the learning that comes with the process of allowing students to become assessors and to develop higher order thinking (Snowball & Mostert, 2013). The earlier students can become assessors and develop such thinking, the better.

The emergence of information and communication technology (ICT) has also contributed to the rise in the use of peer assessment, as we see in the use of automated peer assessment tools on e-learning platforms (Ashenafi, 2017). In South Africa, institutions have begun to roll out e-learning platforms (learning management systems). Naicker and Bayat (2012) administered e-peer assessment and found that current stumbling blocks, such as an unreliable internet connection, limited technical support, and the lack of institutional recognition of peer assessment as a legitimate form of assessment, should be addressed before successful implementation of e-peer assessment can be realised.

Smith et al. (2002) have stated that there is resistance towards peer assessment from both academics and students, particularly in relation to its reliability and validity, which terms have been conflated and used interchangeably. Several "studies of reliability appear actually to be studies of validity since they compare peer assessment with assessments made by professionals rather with those of other peers or the same peers over time" (Dochy et al., 1999, p. 338). To address the validity and reliability of peer assessment, lecturers should be guided by the following questions: What does it mean to assess the work of others? What are the assumptions underpinning such positions? (Brew, 1999). Also, anonymity should be implemented in peer assessment to ensure fairness and the quality of feedback (Wilson et al., 2015). In addressing the problems relating to student bias, collusive marking, and friendship marking, Dochy et al. (1999) have suggested that peer, self (formative) and co-assessment (summative) should be combined. We conceptualise the latter as the lecturers' assessment and contribution after peer feedback (Kang'ethe, 2014). Scoring rubrics may also be used to improve grading validity (Hamer, Ma, & Kwong, 2005).

The use of ICT tools to administer peer assessment has contributed towards addressing anonymity, bias, and efficiency (Ashenafi, 2017). Implementing e-peer assessment with a reliable internet connection and adequate security will ensure that bias and anonymity are addressed. ICT tools can assist with the calibration of grades assigned by both the students and the lecturer, although more work is still needed on how to use ICT in various contexts (Hamer et al., 2005; Ashenafi, 2017). Peer assessment can also triangulate forms of assessment whose reliability or validity is questioned, and this could address the concerns about validity and reliability (Topping, 2009).

The legitimacy of peer assessment is contested, especially in contexts where the significance of learner-centredness is not acknowledged. Naicker and Bayat (2012) reported that students often value their lecturers' feedback more than that offered by their peers since the latter are not seen to be knowledgeable. Other challenges are institutional assessment policies that do not acknowledge learner-centred forms of assessment. ICT tools are emerging as an alternative for conducting assessment at large, and specifically for facilitating e-peer assessment, but it is of concern that they are applicable only in settings with good infrastructure and cyber security. Nonetheless, the literature (see Dochy et al., 1999; Topping, 2009) has also shown the benefits of peer assessment, such as the learning opportunities for students to develop higher order thinking and for lecturers to enrich their classes by broadening students' perspectives.


Theoretical framework

Peer assessment gives students practice in learning how to assess and in learning from assessment which enables them to acquire and develop life-long assessment skills (Barefoot, Lou, & Russell, 2011). The development of such skills is especially significant for student teachers who will be assessing learners. Van Steendam, Rijlaarsdam, Sercu, and Van den Bergh (2010) stipulate that lecturers can improve students' assessment skills by deliberately playing a modelling role for them to learn how assessment is carried out. It is against this background that scaffolding theory emerges as a relevant theoretical tool to understand lecturers' perceptions on the use of peer assessment in a South African teacher education institution.

Scaffolding originates from the research of Wood, Bruner, and Ross in 1976. It also relates to the theory of social learning (Vygotsky, 1978), particularly his zone of proximal development (ZPD). Vygotsky's theory of social learning views learning as a social interaction among peers and knowledgeable adults which results in the development of an individual's learning process (Wilson & Devereux, 2014). Scaffolding, in the education context, refers to a process in which lecturers provide greater guidance to students in the early stages of learning. As students master the skills, the lecturer gradually minimises the support provided (Stone, 1998). During peer assessment, students can gain new insights from the feedback provided by their peers and the lecturer. After the scaffolding process, the knowledge and competencies of the students are expanded (Wood, Bruner, & Ross, 1976). Students learn assessment skills from the lecturer as he or she guides how peer assessment should be carried out, in a process referred to as instructional scaffolding. Essentially, through scaffolding, students are provided with the necessary support to assess their peers effectively; thus, peer assessment results in a more participatory culture of learning (Kollar & Fischer, 2010).



In this study, we adopted an interpretive qualitative research approach to explore lecturers' use of peer assessment at a higher education institution. Qualitative research is concerned with exploring a phenomenon from the participants' point of view in relation to how they interpret and make meaning (Creswell, 2013). We were interested in understanding lecturers' perceptions of the use of peer assessment and, following Cohen, Manion, and Morrison (2011), we interpreted their lived experiences from their natural subjective positions. Using convenience sampling, we recruited participants by e-mail. Our target was to have 10 participants and, of the 15 lecturers contacted, 10 responded, of whom 1 subsequently withdrew. All participants are employed in the same teacher education institution, teaching both undergraduate and postgraduate students through a contact mode in various disciplines. The teacher education institution, which is part of a larger university, focuses on training students who will become teachers. The assessment is informed by the broader university policy on assessment that states, "In module planning a range of assessment options should be considered such as peer and self-assessment; criterion- and norm-referenced assessment; formative and summative assessment; and continuous assessment, as appropriate to the outcomes of the particular module" (University of KwaZulu-Natal, 2012, p. 5).

The sample included both novices with less than five years' experience and experienced academics whose time in academia ranged between 2 and 21 years, with ages ranging between 30 and 50 years. Comprised of six men and three women, the sample was predominantly male. The details of the participants are presented below.



Informed consent forms were sent to lecturers, indicating the purpose of the study and requesting their voluntary participation and permission for us to audio-record the interviews. The participants responded by indicating their availability for the interviews. Appointments were then set up for semi-structured interview conversations that were conducted individually at the lecturers' offices; each session lasted between 45 and 60 minutes, and, following Creswell (2013), we probed where we needed clarity. The interview conversations had two parts, the first of which established the participants' background. In the second part, we offered prompts relating to the topic of this paper. All sessions were audio-recorded, transcribed, and given to the participants for member checking. All the co-authors also read the transcripts.

We analysed the transcripts using a thematic analysis strategy. Braun, Clarke, and Terry (2015) have pointed out that thematic analysis contains various stages of coding and analysing the data. The stages of our process included familiarising ourselves with the data, creating initial codes, creating subthemes, and formulating broader themes. After that, we read over the transcripts between ourselves to ensure thematic rigour. We upheld all ethical considerations throughout the study. The identity of the participants has been protected through our use of pseudonyms.



Our analysis of the data yielded four themes.

1) The tension between criticism and critique

2) Linking peer assessment to level of the students and the discipline

3) The process of peer assessment

4) Peer assessment for learning

In the discussion, we provide verbatim quotes to illustrate the views of participants. During data generation, we started the conversations by asking for participants' definitions of peer assessment; this was an important entry point that established part of the contextual background underlying the conversations. We found that their definitions were consistent with the existing literature (see Topping, 2009), such as having students of the same level assess each other and provide feedback. In the discussion of the findings, we refer to peer assessment and learning interchangeably. Although we are aware that the two do not mean the same thing, we use them in complementary ways, since Freeman and McKenzie (2013) have argued that they are intertwined.

The tension between criticism and critique

What came out strongly in this theme is the importance of having a conversation with the students first and explaining the purpose of the task. For instance, Adam, in the excerpt below, noted that communicating the purpose of assessment to students is a starting point for successful implementation. Adam emphasised further that students should critique, not criticise, their peers. We conceptualise the distinction between critique and criticise as a suggestion that students should provide constructive feedback informed by content knowledge and the rubric. According to Amin (2011), it is easy for students to confuse critique with its semantic neighbour, criticism. Providing training for students prior to and during the assessment activity is an essential component of scaffolding which serves to ease the tension between criticism and critique. Lecturers should introduce peer assessment through achievable steps and encourage students to contribute to the assessment criteria (Langan et al., 2005). Through this approach, students become active co-constructors of knowledge, lead their learning, support each other, and provide critique which is informed by the criteria (see Panadero & Brown, 2017; Wilson et al., 2015). Adam noted,

At first, [I] tell them that we are assessing for development and not punishment. I tell them that do not punish the author, but you should develop the person. I tell them that they shouldn't criticise but critique.

Thabiso found peer assessment in higher education suitable and useful to enhance the educational experience. But he highlighted the fact that meaningful peer assessment requires sufficient preparation on the side of the lecturer, especially since peer assessment is about students both learning the content and acquiring the assessment skills. Thabiso said,

Of course, if you think about the numbers that we deal with in a HE context it's valuable in that aspect because you can do much more through peer assessment . . . it elevates the whole educational experience. I think you would achieve outcomes or objectives better when you do peer assessment than when you [do] not . . . but I think that for you to have a good peer assessment experience you need to prepare a lot. It might not be necessarily on the day that you do a lot of work but there is much that goes behind the scene so that you present a proper peer assessment experience where students can benefit.

Nancy presented a different perspective to that of Thabiso. She highlighted her belief that peer assessment is useful because it has the potential to produce students who are competent at providing and receiving feedback. According to Nancy, emotional intelligence and critical thinking are among the skills that students develop during peer assessment. Drawing on our anecdotal teaching experiences in higher education, we have observed that students are resistant to critique and receive it as an attack. Reflecting on her postgraduate module, Amin (2011) argued that "students linked the critique of the assignment to a lack of care while I, the assessor, linked critique to intellectualism" (p. 269). This view on critique links to Nancy's; she said,

It helps in that way and because we are in HE [higher education] we need to develop students to be critical thinkers and we need to teach students to give and receive critical feedback from their peers and lecturers. If we do not open up this space, we will always develop professionals who are one minded, who want to get the marks and move on . . . It develops a level of openness on how people interact, and the goal is to produce professionals with emotional intelligence.

Linking peer assessment to the level of the students and the discipline

Given the few studies on peer assessment in South Africa, it was important to explore the nine lecturers' perceptions in a teacher education institution. Seven participants indicated that they use peer assessment in their practice and two do not; both groups provided their justification. Thabiso considered the students' level of study when planning for peer assessment, as did participants in a study by Scott, Van Der Merwe, and Smith (2005). He prefers using peer assessment with final-year students (4th-years) because of their level of maturity and familiarity with the learning context. Thabiso's assumption is that they would have received guidance in their previous years of study and would not personalise feedback. According to Petersen (2017), novice teachers blame teacher education institutions for not preparing them adequately for their professional roles when they face challenges at work. We argue that the type of scaffolding provided by Thabiso, who teaches sports science, is important for developing the student teachers' assessment skills. He explained,

I tend to use in the 4th-years where I know that the students are matured enough to understand that what [they] are doing is an assessment. It's not just either to score people down or to score them up just because they are your friends. I am not insinuating that students at lower year levels won't be able to do peer assessment. I tend to use it with 4th-years.

Yola, who teaches in an accounting discipline, prefers introducing peer assessment to students in the first year of study. She believes that this approach helps the students to learn, develop, and become better assessors of other people's work. This suggests that at the beginning students require more scaffolding and, over a period of time, they are able to transition to milder scaffolding. She said,

It was at undergraduate courses where we started introducing peer assessment at the first year of study. By the time they get to third year level they are not the same and they assess fairly. At third year level you can see that they getting closer to assessing their peers like the lecturer.

Peer assessment is better suited to modules or objectives in which the intention is to connect, learn, and negotiate meaning rather than provide the most accurate answer (Rotsaert, Panadero, & Schellens, 2018). Allowing students to engage collaboratively in an activity enables them to develop emotional intelligence and an ability to participate in a large and diverse classroom. Apart from the students' level of study, lecturers' disciplinary knowledge and teaching aims influence their decisions on peer assessment. Using peer assessment in collaborative modules, such as those in the arts that include plays and performances, requires students to plan and debate ideas together. This view seems to be echoed in Nancy's statement:

The reason why I have used peer assessment is because of the nature of the module (creative arts) that I teach. They are practically based, students spend a lot of time together, actively engaging physically and intellectually.

As noted earlier, not all the lecturers use peer assessment. Thuli cites increased student enrolments, and the probable administrative challenges, as deterrents to using it. He said,

No, I do not use it at all, I know it's there but I don't use it . . . in the institution we are taking a large number of students and it can be difficult to administer such an assessment.

Lwazi, too, had never used peer assessment before but he saw the possibility of linking peer assessment with a module such as academic literacy. He said,

For now, I don't use it and I have never used it before. But it would be an interesting way of assessing especially for modules like academic literacy, where students need to be critical in terms of how they write academically.

The process of peer assessment

Emphasising grading in peer assessment is controversial since the focus then shifts from its core purpose as a learning exercise to a summative one (Luckett & Sutherland, 2000; Panadero & Brown, 2017). The procedures through which individual lecturers administer peer assessment are diverse and determined by their disciplines and the purpose of the activity. For presentations, Elias prefers verbal feedback during which peers do not allocate marks to each other. Elias employs co-assessment (see Dochy et al., 1999), which allows the lecturer to maintain control and present the final decision in relation to grading. Elias explained,

I allow students to present then after presentations the other peers or the students who were the audience would, therefore, give feedback on what has been presented, though they do not allocate marks for them.

Innovative uses of peer assessment also emerged from the findings. Nancy uses peer assessment for various purposes. Her approach debunks the traditional conception of group work in which members are allocated the same mark. How she conducts peer assessment in the art, play, and performance discipline is similar to Lejk and Subramanian's (2013) conceptualisation of group peer assessment. They argue that when individual group members assess one another and make a contribution to the final product, other members should assess the process. Nancy's approach is suitable for large classes since the students are divided into smaller groups and, within the small groups, there is greater accountability and exchange of ideas. As with Elias above, for Nancy, the lecturer, as the instructor, makes the final decision. She pointed out that

[i]nitially, they work on the project and in the middle, they are told that they are going to assess each other and keep a record for attendance, their roles and responsibilities. It also helps them to build a portfolio of what they are doing. Once they have contributed, completed the portfolio and the practical performance, they are given a rubric for the performance, a rubric for the contribution and attendance. In the rubric, there is space for feedback, and they submit to me.

Adam explained his rationale.

I give them an activity like an assignment, so in my module, we use an online tool and I have a discussion forum there. When the student finishes his or her assignment, they upload it to the discussion forum, then peers make comments and critique the submissions. After they have critiqued it, the assignment is sent back to the discussion forum. I then become a third reviewer and state my position regarding the two reviews.

Innovations emanating from information and communication technologies (ICT) have found their way into one of the lecturers' assessment practices. Scaffolding is not confined to a one-on-one process; combining peer assessment with computer-based tools has made it possible, and more suitable, to administer even in a larger class. Adam therefore conducts peer assessment online, using the e-learning tools found in learning management systems such as the Modular Object-Oriented Dynamic Learning Environment (Moodle). Through this tool, he facilitates discussions and mark allocations online. Adam assumes the role of the expert between himself and the students. He uses a reviewer system in which he is a third reviewer after peers have assessed the students' submissions. As a third reviewer, Adam resolves possible clashes, deals with biases, and provides any direction required by students.

Both Thabiso and Tina (in languages) use peer assessment through procedural scaffolding in that students are given rubrics that guide them. In this approach, the lecturers focus on one criterion at a time, which, in turn, scaffolds support to students in a large classroom in categorised and specific ways, making it manageable. Thabiso said,

In a sport science module focusing on injuries, I ask students to do a presentation on an injury that we have not done in class. Then I give out the rubric to different groups, and each group will do a presentation. Using the criteria in the rubric, the other groups will grade them depending on what they saw and what they heard.

Tina added,

I come from languages and, with peer assessment for us, it means proper planning where students are writing essays. Students write the first draft of their essays. I then use peer assessment in the form of letting the students exchange their work for reading and provide feedback. Then we use a criterion for the class, showing aspects they must look for.

Peer assessment for learning

Students learn and acquire skills in the process of assessing their peers (Ashenafi, 2017; Naicker & Bayat, 2012). Lejk and Subramanian (2013) have stated that "the ability to assess peers is itself a learning process" (p. 371). During peer assessment, students can construct knowledge through giving and receiving feedback from their peers (Mostert & Snowball, 2013). The lecturer facilitates the learning, addresses the gaps, and directs students towards attaining more competencies. The participants indicated that they use peer assessment mainly because students can learn from each other. According to Paul, Thabiso, and Yola, peer assessment encourages students to appreciate the content presented and other people's ideas. All three infer that such learning is necessary in a teacher education context since the students acquire assessment skills that are essential for their future. Paul explained,

When they assess other people, they can learn to appreciate the content that is presented, how students think and maybe learn from that particular process . . . I was conscious of the fact that these are teachers; one day, they will have to go out there. I practise what I am preaching, expose learners to peer assessment and ideas. We learn by engaging ourselves in different ideas.

For Thabiso,

I base it on the notion that students learn much better when they learn from each other. It's the whole idea of promoting learning from each other, which is the reason why I even give out peer assessment tasks.

Yola added,

When students are giving narrative feedback, they tend to use a language that is accessible to the assessed, something that I might not use; for students, my language could be formal and difficult to understand. There is a learning aspect for both the lecturer and the student as well. When I read the students' comments, I also learn in the process. Even those who are assessing are also learning, as they need to have skills and values such as honesty, fairness, and being less judgemental.

According to Topping (2009), learning happens regardless of whether one is the assessor or the assessed. Learning in the process of peer assessment is enhanced more by the quality of comments than by the grading (Cho & Cho, 2011). Yola suggested that reading and moderating the feedback provided by students to their peers has changed her perspective. She raised an important issue in higher education: language use. This emphasises that students may learn better when they receive instructions or feedback in the everyday language with which they are familiar, rather than in professional jargon (see Cho & MacArthur, 2010). Through peer assessment, students can communicate, and language, as a component of social learning, provides access for the process of scaffolding to take place.

The process of scaffolding requires active participation between the one providing the scaffold and the receiver, which means that during peer assessment there is a shift away from the lecturer alone being the more knowledgeable other (Vygotsky, 1978). The lecturer becomes a co-constructor of knowledge with the students and acknowledges their input (Falchikov, 2005). Furthermore, Falchikov's view is consistent with those of Brew (1999) and Earl and Giles (2011), who argue that assessment becomes an activity lecturers do with the students, during which students become active participants instead of having assessment imposed on them. Adam's view of peer assessment and learning suggests a strong belief in cooperative learning. He highlighted the importance of learning between peers and the potential to produce assertive students when he said,

If peer assessment is not used, then students are denied an opportunity to learn and we are producing passive students who are waiting for the master . . . so, I call it assessment as learning because while they are assessing they are learning the content and learning how to write.


Discussion and conclusion

Peer assessment in higher education, especially in teacher education, is needed and is suitable for various modules, as demonstrated in our findings. Peer assessment provides opportunities for lecturers to enhance learning and student engagement in their classes. In a large class, students can work in smaller groups, conduct peer assessment, and learn from each other. The significance of peer assessment for teacher education lies in enabling students to learn the content, to engage in social learning, and to acquire the assessment skills necessary when they become teachers. Once these assessment skills are acquired and harnessed through scaffolded feedback from the lecturer, students are better positioned to participate actively in class and to provide and receive constructive feedback.

Massification of higher education brings with it a diversity of students in terms of race, language, cognitive abilities, gender, and culture (Mostert & Snowball, 2013). But we want to caution that, to some extent, diversity could make peer assessment more complex since there would be a need to negotiate with students on issues such as common understandings in the group. Equally, a lack of diversity would not in itself be an argument against peer assessment. It appears that peer assessment should, in the main, be driven by the outcomes of the module. We acknowledge that the lecturer retains the academic autonomy to choose the types and forms of assessment in line with the objectives of the module (Ramsden, 2003). However, in line with Panadero and Brown (2017), we are of the view that peer assessment is an essential pedagogical practice. For example, Thuli's and Lwazi's students miss out on an opportunity to learn the content and to practise comprehensive assessment skills.

Peer assessment has not gained traction in either the South African basic or higher education system despite its benefits (Reddy, Le Grange, Beets, & Lundie, 2015); some higher education institutions are reluctant to change their assessment approaches (Kang'ethe, 2014). We consider this a gap and think that teacher education institutions should revisit their assessment policies and their staff development workshops. Research is needed to understand and broaden the relationship between module outcomes and the types of assessment selected, as well as studies that explore ways in which peer assessment could be improved for use in both basic and higher education institutions. Given the emerging debates on e-learning and e-assessment, institutions should focus on innovative ways of assessment and provide advanced training to their staff. Moreover, and this can be seen as a limitation of this paper, future research should focus on student-teachers' experiences of peer assessment.



References

Amin, N. (2011). Critique and care in higher education assessment: From binary opposition to Möbius congruity. Alternation, 18(2), 268-288.

Ashenafi, M. M. (2017). Peer-assessment in higher education: Twenty-first century practices, challenges and the way forward. Assessment & Evaluation in Higher Education, 42(2), 226-251.

Barefoot, H., Lou, F., & Russell, M. (2011). Peer assessment: Educationally effective and resource efficient. Blended Learning in Practice, 1(1), 21-35.

Barnard, R., de Luca, R., & Li, J. (2015). First-year undergraduate students' perceptions of lecturer and peer feedback: A New Zealand action research project. Studies in Higher Education, 40(5), 933-944.

Biggs, J. (2014). Constructive alignment in university teaching. HERDSA Review of Higher Education, 1(1), 5-22.

Braun, V., Clarke, V., & Terry, G. (2015). Thematic analysis. In P. Rohleder & A. Lyons (Eds.), Qualitative research in clinical and health psychology (pp. 95-113). New York, NY: Palgrave Macmillan.

Brew, A. (1999). Towards autonomous assessment: Using self-assessment and peer assessment. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches (pp. 159-171). Buckingham, UK: Open University Press.

Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20(4), 328-338.

Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional Science, 39(5), 629-643.

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). London, UK: Routledge.

Council on Higher Education. (2016). South African higher education reviewed: Two decades of democracy. Pretoria, RSA: Council on Higher Education.

Creswell, J. W. (2013). Qualitative inquiry & research design (3rd ed.). Thousand Oaks, CA: Sage.

Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331-350.

Earl, K., & Giles, D. (2011). An-other look at assessment: Assessment in learning. New Zealand Journal of Teachers' Work, 8(1), 11-20.

Earl, L. (2006). Assessment: A powerful lever for learning. Brock Education: A Journal of Educational Research and Practice, 16(1), 1-15.

Earl, L. (2013). Assessment as learning: Using classroom assessment to maximize student learning (2nd ed.). Thousand Oaks, CA: Corwin Press.

Falchikov, N. (1995). Peer feedback marking: Developing peer assessment. Programmed Learning, 32(2), 175-187.

Falchikov, N. (2001). Learning together: Peer tutoring in higher education. London, UK: Routledge.

Falchikov, N. (2005). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. Abingdon, UK: RoutledgeFalmer.

Freeman, M., & McKenzie, J. (2013). Aligning peer assessment with peer learning for large classes: The case for an online self and peer assessment system. In D. Boud, R. Cohen, & J. Sampson (Eds.), Peer learning in higher education: Learning from and with each other (pp. 156-169). Abingdon, UK: Routledge.

Giannakis, M., & Bullivant, N. (2016). The massification of higher education in the UK: Aspects of service quality. Journal of Further and Higher Education, 40(5), 630-648.

Hamer, J., Ma, K. T., & Kwong, H. H. (2005, January). A method of automatic grade calibration in peer assessment. Paper presented at the 7th Australasian Conference on Computing Education, Newcastle, AU.

Jordan, S. (1999). Self-assessment and peer assessment. In S. Brown & A. Glasner (Eds.), Assessment matters in higher education: Choosing and using diverse approaches (pp. 172-182). Buckingham, UK: Open University Press.

Kane, J. S., & Lawler, E. E. (1978). Methods of peer assessment. Psychological Bulletin, 85(3), 555-586.

Kang'ethe, S. (2014). Peer assessment as a tool of raising students' morale and motivation: The perceptions of the University of Fort Hare social work students. International Journal of Educational Sciences, 6(3), 407-413.

Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, 20(4), 344-348.

Langan, A. M., Wheater, C. P., Shaw, E. M., Haines, B. J., Cullen, W. R., Boyle, J. C., . . . Preziosi, R. F. (2005). Peer assessment of oral presentations: Effects of student gender, university affiliation and participation in the development of assessment criteria. Assessment & Evaluation in Higher Education, 30(1), 21-34.

Lejk, M., & Subramanian, R. (2013). Enhancing student learning, participation and accountability in undergraduate group projects through peer assessment. South African Journal of Higher Education, 27(2), 368-382.

Luckett, K. (2016). Curriculum contestation in a post-colonial context: A view from the South. Teaching in Higher Education, 21(4), 415-428.

Luckett, K., & Sutherland, L. (2000). Assessment practices that improve teaching and learning. In S. Makoni (Ed.), Improving teaching and learning in higher education: A handbook for Southern Africa (pp. 98-130). Johannesburg, RSA: Witwatersrand University Press.

Mohamedbhai, G. (2014). Massification in higher education institutions in Africa: Causes, consequences and responses. International Journal of African Higher Education, 1(1), 59-83.

Mostert, M., & Snowball, J. D. (2013). Where angels fear to tread: Online peer-assessment in a large first-year class. Assessment & Evaluation in Higher Education, 38(6), 674-686.

Murris, K. (2016). #Rhodes Must Fall: A posthumanist orientation to decolonising higher education institutions. South African Journal of Higher Education, 30(3), 274-294.

Naicker, V., & Bayat, A. (2012). Towards a learner-centred approach: Interactive online peer assessment. South African Journal of Higher Education, 26(5), 891-907.

Panadero, E., & Brown, G. (2017). Teachers' reasons for using peer assessment: Positive experience predicts use. European Journal of Psychology of Education, 32(1), 133-156.

Petersen, N. (2017). The liminality of new foundation phase teachers: Transitioning from university into the teaching profession. South African Journal of Education, 37(2), 1-9.

Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London, UK: Routledge.

Reddy, C., Le Grange, L., Beets, P., & Lundie, S. (2015). Quality assessment in South African schools. Cape Town, RSA: Juta.

Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: Its effects on peer feedback quality and evolution in students' perceptions about peer assessment skills. European Journal of Psychology of Education, 33(1), 75-99.

Scott, E., Van Der Merwe, N., & Smith, D. (2005). Peer assessment: A complementary instrument to recognise individual contributions in IS student group projects. The Electronic Journal of Information Systems Evaluation, 8(1), 61-70.

Selyutin, A., Kalashnikova, T. V., Danilova, N., & Frolova, N. (2017, June). Massification of the higher education as a way to individual subjective wellbeing. Paper presented at the European Proceedings of Social & Behavioural Sciences (EpSBS), Nicosia, CY.

Smith, H., Cooper, A., & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: A case for student and staff development. Innovations in Education and Teaching International, 39(1), 71-81.

Snowball, J. D., & Mostert, M. (2013). Dancing with the devil: Formative peer assessment and academic performance. Higher Education Research & Development, 32(4), 646-659.

Stone, C. A. (1998). The metaphor of scaffolding: Its utility for the field of learning disabilities. Journal of Learning Disabilities, 31(4), 344-364.

Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20-27.

University of KwaZulu-Natal. (2012). Policy on assessment. Durban, RSA: University of KwaZulu-Natal.

Van Steendam, E., Rijlaarsdam, G., Sercu, L., & Van den Bergh, H. (2010). The effect of instruction type and dyadic or individual emulation on the quality of higher-order peer feedback in EFL. Learning and Instruction, 20(4), 316-327.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14.

Wilson, K., & Devereux, L. (2014). Scaffolding theory: High challenge, high support in Academic Language and Learning (ALL) contexts. Journal of Academic Language and Learning, 8(3), 91-100.

Wilson, M. J., Diao, M. M., & Huang, L. (2015). 'I'm not here to learn how to mark someone else's stuff': An investigation of an online peer-to-peer review workshop tool. Assessment & Evaluation in Higher Education, 40(1), 15-32.

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89-100.

Zou, Y., Schunn, C. D., Wang, Y., & Zhang, F. (2017). Student attitudes that predict participation in peer assessment. Assessment & Evaluation in Higher Education, 43(5), 800-811.



Received: 2 July 2019
Accepted: 8 May 2020

All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.