

South African Journal of Education

On-line version ISSN 2076-3433
Print version ISSN 0256-0100

S. Afr. j. educ. vol.38 no.2 Pretoria May 2018

http://dx.doi.org/10.15700/saje.v38n2a1386 

ARTICLES

 

Assessment of group work in initial teacher education and training

 

 

Mamsi Ethel Khuzwayo

Department of Further Education and Training, Faculty of Education, Cape Peninsula University of Technology, Cape Town, South Africa. kuzwayom@cput.ac.za

 

 


ABSTRACT

This article records the findings of a study carried out with a group of 100 students at a South African university. The study examined the group's assignments as a way of gathering evidence about pre-service teachers' achievements during their education and training. The empirical study was based on a comparative analysis of the scores students obtained for a group task and the scores the same students obtained when completing the same task individually. The results indicated a discrepancy between the marks obtained for the group task and the marks earned through individual effort. The findings based on the analysis of the results are displayed in frequency distribution tables: inconsistency in the scores, questionable trustworthiness of group assessment, and the equal allocation of marks to undeserving individuals in groups. High marks were allocated to students who did not warrant them. Moderation of the marks obtained by a group for a task is necessary to validate the reality of students' performance in a group assignment. The findings highlight that group assignments do not provide a valid reflection of student performance, which could mean that some fourth-year students obtain the qualification without earning it.

Keywords: assessment; group work; moderation; pre-service teachers; progression


 

 

Introduction

In my view, based on experience of working in teacher education and training at Higher Education Institutions (HEIs), assessment is treated as an internal affair. Levels of accountability vary from one institution to another. Policies and regulations on assessment are formulated to provide an overview of how the university accounts for the progression and retention of students in a programme. There is no source that provides a generic view of which assessment techniques or tools are required for assessing student performance against the pre-determined exit-level outcomes and level descriptors for the programme. Gravett and Geyser (2004) refer to the Higher Education Act to emphasise the shift from traditional assessment of rote learning to outcomes-based assessment, which focuses on individual construction of knowledge.

The common trend in the Faculty at present is that assessment tasks are designed by lecturers who aim at collecting evidence of students' abilities to demonstrate understanding of theoretical knowledge of the topic selected for the module.

Marking of the tasks is, to a great extent, based on rigid and visible memoranda or rubrics; hence internal and external moderation emphasises question papers and memoranda. By contrast, group work, which is one of the assessment techniques in use, is not mentioned in any of the institution's assessment policies or regulations as requiring moderation. Participation in courses and seminars on assessment in higher education exposed me to knowledge of the integrated system of assessment. This is the assessment paradigm proposed for quality learning by the South African Qualifications Authority (SAQA, 2001, in Gravett & Geyser, 2004:95-99).

Ewell (2008) confirms that outcomes-based assessment allows for the integration of assessment systems in higher education and training. The basic principle in the implementation of integrated assessment is the alignment of outcomes, assessment tasks and criteria (Biggs, 2003). Assessment procedures in HEIs should adhere to this principle of Outcomes Based Assessment (OBA) to ensure that the final judgement about students' competent performance in a course is authentic and reliable. Some researchers associate outcomes-based assessment with competence-based assessment, because the results provide evidence upon which the assessor and the student can account for the performance achieved (Knight, 2004; Li, 2001; White, Lloyd, Kennedy & Stuart, 2005). In the same vein, scholars who pioneer the view of quality learning and assessment emphasise that assessment should be driven by purpose, outcomes, competences, and criteria or standards. Criteria or standards are perceived as yardsticks for measuring the quality of competence-based and outcomes-based assessment in higher learning (Biggs, 1999; Earl, 2003; Sharp, 2006). If the results of assessment are to be authentic, valid and of high quality, lecturers in higher education should consider the principles guiding outcomes- and competence-based assessment. Exponents of the integrated assessment system (Bagnall, 1994; Gibbs & Dunbar-Goddet, 2007) emphasise that monitoring assessment entails creating an environment conducive to the demonstration of desired learning outcomes and providing relevant feedback for the development of competences and skills. Planning an assessment thoroughly is crucial for obtaining credible results. According to this view, a thoroughly planned assessment entails formulating achievable outcomes, selecting reasonable assessment criteria, and allocating sufficient time.

In the process of learning in higher education, the outcome of formative and summative assessment determines learners' progress from one year level, or grade, to the next. Inadequacies in the planning and monitoring of an assessment process in teacher education and training could result in incompetent and inadequately educated and trained professionals. Implementation of integrated assessment systems in teacher education and training is critical, because teachers are expected to demonstrate the attained competences in all aspects of the subject content knowledge in which they are specialising, namely factual, conceptual, procedural and meta-cognitive knowledge of subjects and disciplines. Teachers ought to demonstrate competences attributed to subject pedagogical content knowledge (SPCK) and pedagogical content knowledge (PCK). Discrepancies in the assessment of these vital types of knowledge could result in serious challenges in classroom teaching and learning, and lead to major setbacks for learner performance in the learning of subject content knowledge.

This paper highlights the shortcomings of allocating a common mark to individual students when assessing a group task. The findings of the empirical study attest to the lack of trustworthiness and the inconsistencies involved in the assessment of group tasks. The competences of individual students cannot be reliably determined when a general mark is assigned. The findings of this research project show that incompetent or underperforming students in a group could be awarded high marks that are not commensurate with their individual performances. Suggestions to teacher educators based on the findings of the empirical study, together with means of moderating scores obtained from group assignments, form part of this paper.

The argument expressed in this article encapsulates the theoretical views and suggestions of international researchers on the issue of assessment in higher education and training. The pioneers of outcomes-based assessment, and scholars such as Biggs (2003), Ewell (2008), James, McInnis and Devlin (2002), and Winchester-Seeto (2002), writing on content-based and norm-driven assessment in Australia, the United Kingdom and the United States of America, stand to benefit from the corroboration of the findings highlighted in this article. Researchers who share the view that assessment in higher education and training should focus on the demonstration of competences through integrated assessment systems, rather than norm-referenced assessment, are likely to welcome new research that endorses their conclusions. Scholars who advocate discourse on the assessment of competences and competitiveness in the training of professionals, academics and artisans for the job market could invoke the findings of this study to strengthen their arguments on the sharing of marks by individuals in the assessment of group work or collaborative tasks.

Literature Review/Conceptual Framework

The term assessment is commonly used in the context of production or the provision of services in the public sector, the private sector and education institutions. The view shared by scholars is that assessment is theorised in different ways and that its practice is contested by various conceptualisations. Siebörger and Macintosh (1998:6) differentiate between assessment conducted in the business sector and educational assessment:

The purpose of educational assessment is not simply to measure what learners have achieved, but to help learners to learn and achieve more. Assessment which does not motivate learners to learn and tell them what they need to do to improve does not fulfil its educational purpose.

The review of literature for this study identified the following key concepts as attributes of theories about what assessment entails and of the actions that determine the practices of assessment.

Assessment and assessing

In some contexts assessment could mean evaluation, but in this paper the definitions relate assessment, or the process of assessing, to the collection of evidence about learners' performance in the teaching and learning environment (Killen, 2005, 2010, 2015). Similarly, assessment is referred to as a system regulated by the principles of validity, reliability, fairness and authenticity. According to Biggs and Tang (2011), these principles should be considered during the planning of assessment, which entails determining the purpose of gathering the evidence and selecting the criteria, the outcomes to be assessed, and the tools or instruments. Murdoch and Grobbelaar (2004), in the same vein, emphasise that quality assurance of assessment is not only about internal and external moderation of question papers and students' answer sheets, but also about the alignment of assessment with the learning outcomes and competences crucial in the National Qualifications Framework (NQF). The issue of transparency is mentioned as a key component of quality assurance. Transparency in this instance entails providing students with suitable information regarding criteria, and with feedback.

To other scholars, assessment in the teaching and learning environment is an ongoing process that affords the one being assessed an opportunity to learn from his or her mistakes (Xing, Waldholm, Petakovic & Goggins, 2015). Continuous assessment is a concept linked to curriculum transformation in South Africa. Killen (2005, 2010, 2015) describes continuous assessment as a continuum that begins with baseline assessment for the purpose of identifying gaps and misconceptions in learners' previous knowledge. Formative assessment identifies difficulties in the learning process; it provides ongoing feedback for the process of teaching and learning, and it is developmental. Lastly, summative assessment provides overall results about learner performance, and it is upon the evidence collected from this assessment that judgement about learners' readiness to progress from one year level to the next is made. Pioneers of continuous assessment (CA) contest the practice of assessment as an event tied to summative judgement of learner performance on the basis of normative scores or marks. These scholars argue that learning is a process through which learners develop skills and acquire factual, conceptual, procedural and meta-cognitive subject content knowledge (Anderson & Krathwohl, 2001; Biggs & Tang, 2011).

Emerging trends on assessment in higher education and training internationally and in South Africa

The emerging progressive trend in instructional research, both locally and internationally, indicates great support for constructivist theory, which suggests the integration of assessment into teaching and learning. Pioneers of this trend consider assessment to be an integral part of teaching and learning (Biggs & Tang, 2007, 2011; Ewell, 2008; Killen, 2010, 2015; Knight, 2004). Scholars in constructivism contest the traditional view of assessment, which is normative and teacher-centred. The use of norms as determinants of learner performance in the process of teaching and learning is condemned for benchmarking and comparing learners' performance. Norm-referenced assessment is challenged for creating flawed impressions, where the attainment of a certain sub-minimum is taken to mean that learners have achieved the necessary level of competency in acquiring either cognitive competencies or skills in the subject content knowledge. Critics of norm-referenced assessment identify its lack of accountability as a weakness and shortcoming; hence, they dispute its relevance to quality teaching and learning (Bagnall, 1994; Ewell, 2008; Killen, 2005, 2010, 2015; Lejk & Wyvill, 2002).

Progressive and constructivist trends in theorising about assessment, teaching and learning suggest integrated assessment systems in higher education. The argument held by pioneers of integrated assessment systems (Biggs, 2003; Biggs & Tang, 2011; Gibbs & Dunbar-Goddet, 2007) is that the instructional framework in a higher education institution is three-dimensional, the first dimension being the development of competences, skills and academic subject knowledge. The framework of accountability therefore ought to be commensurate and resonant with the teaching and learning framework (Moon, 2004). According to Ewell (2008), evidence in integrated assessment systems embraces results about learners' performance gathered through both qualitative and quantitative approaches. An integrated assessment system comprises three dimensions. Criterion-referenced assessment focuses on learners' performance in attaining the competences or abilities benchmarked in the teaching and learning activities. Competence-referenced assessment focuses on the level of competency in the attained competencies; competence-, criteria- and outcomes-referenced assessment are similar in that they embrace qualitative approaches to gathering evidence about performance in the learning process and report results on learners' achievements qualitatively. The third dimension of integrated assessment systems is norm-referenced assessment, which in higher education gathers evidence quantitatively and provides a summative or overall judgement about learner progression from one level to another, vertically or horizontally.

In agreeing with the importance of a criterion-based approach to assessment, Black and Wiliam (2003:623-624) highlight the substantial principles for assessment in the 21st century as follows:

Assessment should be an integral component of course design and not something to add afterwards; good assessment requires clarity of purpose, goal, standards and criteria; assessment for improved performance involves feedback and reflections; and good assessment requires a variety of measures.

Similarly, Biggs (2003), by way of his constructive alignment theory, proposes the alignment of learning outcomes to the competences and skills that students, individually or in a group, are expected to demonstrate in an assessment task.
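To make constructive alignment concrete, the following is a minimal sketch in Python, an illustration under assumed names (the outcome, criterion and task labels are hypothetical) rather than Biggs' own formalism: each learning outcome is paired with the criteria and the task through which it is assessed, and any outcome left unaligned is flagged.

    # Illustrative sketch of constructive alignment (not Biggs' own formalism):
    # every learning outcome is paired with the criteria and task through which
    # it will be assessed; the check flags any outcome left unaligned.
    # Outcome, criterion and task names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class AlignedOutcome:
        outcome: str                                        # what students should demonstrate
        criteria: list[str] = field(default_factory=list)   # how it is judged
        task: str = ""                                      # where it is demonstrated

    course = [
        AlignedOutcome("Interpret CAPS guidelines",
                       ["links guidelines to learning theory"],
                       "group assignment"),
        AlignedOutcome("Design a lesson plan"),  # criteria and task still missing
    ]

    for o in course:
        if not o.criteria or not o.task:
            print(f"Unaligned outcome: {o.outcome!r}")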

Perspectives on assessment of group learning

Learning as a group is beneficial to students in various ways. In teacher education and training in particular, some students join the university after some years of practice as unqualified teachers in schools. Researchers point out that although group learning benefits learning, the assessment of work or assignments undertaken by a group is another dimension, with its own dynamics (Lejk & Wyvill, 2002; Li, 2001; White et al., 2005; Xing et al., 2015). Researchers highlight that group assessment has become a common trend in the assessment of students in higher education, particularly in assessing overcrowded classes.

Conceptualisation of assessment and its practices in teacher education and training requires a high level of accountability. Group work as a means of addressing large classes is recommended by researchers in higher education (Biggs, 2003). Although at the theoretical level this strategy sounds good, in practice it has proven not to be reliable. International researchers such as Daradoumis, Martínez-Monés and Xhafa (2006), Gress, Fior, Hadwin and Winne (2010) and Xing et al. (2015) point out that assessment of student abilities and capabilities through group assignments or group work provides flawed results. In spite of the concerns and discrepancies reported by these international researchers, group work remains an option for addressing the challenge of assessing large classes. Critics of the assessment of group work point out that shared marks obtained from group work are not a true reflection of individual performance (Earl, 2003; Moon, 2004). In the same vein, Sharp (2006) and Shay (2008) highlight that assessment of group work without adequate moderation has loopholes and is misleading to both assessors and students. Researchers point out that assessing group work or group learning can be summative or formative, as determined by the learning outcomes being assessed (Gress et al., 2010; Taras, 2002; Xing et al., 2015). Gibbs and Dunbar-Goddet (2007) argue that, since in most cases group learning focuses on the output of an activity carried out by individuals ranging from a pair upwards, assessment is likely to be more summative than formative. In support of the moderation of marks shared by a group, Daradoumis et al. (2006) and Gress et al. (2010) recommend observation, content analysis and interaction analysis as effective assessment techniques for collaborative learning, because they require each learner in the group to contribute ideas and give an account of how such ideas were reached.

The literature indicates that the shift from content-driven and norm-oriented assessment to outcomes-based assessment in South Africa introduced a wide range of assessment methods and techniques into higher education institutions. The South African Qualifications Authority (SAQA) introduced educational policies that proposed the adoption of learner-centred approaches to teaching and learning, as well as to assessment (Gravett & Geyser, 2004:60-97). In keeping with this view, Biggs' (1999) constructive alignment theory suggests that learning outcomes, assessment criteria and assessment tasks be aligned. Gravett and Geyser (2004) emphasise that the integrated system of assessment introduced to higher education institutions is based on the principles of constructive alignment theory. To transform assessment in HEIs, expert assessors and students need to take the process of assessment seriously, ensuring the highest levels of accountability and authenticity.

Many references in this article are made to international sources, because the review of literature revealed that research on the assessment of group work in South Africa has not received adequate scholarly attention. One research team, Clarence, Quinn and Vorster (2015:4-7), recorded findings from case studies conducted with a sample of lecturers across disciplines at a South African university. The following are among the recorded practices and experiences of lecturers regarding assessment.

Each lecturer decided what is most important about their discipline and designed assessment approaches and tasks which would best enable them to measure their students' learning.

Part of the reason for lecturers introducing peer, group and self-assessment is to promote the development of students' capacity to make judgements about their own and others' work.

Lecturers complain that students seem to ignore the feedback they are given; they are only interested in the mark they have been assigned.

Analysis of the 'activity system model' (ASM) of Engeström (1987, in Xing et al., 2015:112) provides guidelines for regulating the assessment of group work for effective collaborative learning, for both formative and summative purposes. This model proposes clarity of context, which inter alia focuses on the social behaviour and interdependencies of six interacting components: subjects (the individuals in a group); rules (guidelines clarifying learning outcomes and assessment criteria); tools (systems and environments); division of labour (co-ordination among individuals in a group); community (the direct and indirect communication enabling the group members to maintain a sense of belonging); and lastly the object (a task completed jointly, e.g. a group project or assignment).

Critics of this model contend that sharing of workloads by individuals in the group is problematic in such instances where other members are not demonstrating equal commitment to the task (Cheng & Warren, 2000; Knight, 2004; Xing et al., 2015).

Similarly, Barfield (2003) and Gibbs and Dunbar-Goddet (2007) argue that the division of labour creates stress among diligent students when they have to cover for fellow members who are lazy or uncooperative in fulfilling the sense of belonging to a community. Arguing from the same point of view, De Vita (2002) and Gibbs and Dunbar-Goddet (2007) assert that credits awarded for group work are not a true reflection of the competency and performance of all members of the group. These researchers refer to members who receive credits unduly as "freeloaders", meaning that they get away with credits for which they have not worked. This tendency is considered by these researchers to be detrimental to learners who achieve credits duly, as they become discouraged and decrease their efforts. According to Houldsworth and Mathews (2000), the low morale among hard-working students that results from allocating the same group mark to lazy learners is referred to as the 'sucker effect'. White et al. (2005) highlight that, to some extent, cooperative effort within a group diminishes when some members fail to meet deadlines for completing the task, because the sense of cooperation and collaboration subsequently breaks down. Barfield (2003), meanwhile, indicates that a shared group mark does not reflect any one individual's contribution to the task; as a result, stronger students may be unfairly disadvantaged by weaker ones and vice versa.

Nonetheless, Almond (2009:8-9) recommends the following measures to address the shortcomings in the assessment of group work:

... first, limiting the emphasis on group marks: the assessor should allocate a significant proportion of marks to an individual assignment or test other than the group project. Second, assessing the outcome of group work with an individual assignment or examination; this entails including questions in the test or examination that relate directly to the preceding group work. Third, dividing up the task between individuals and allocating some or all marks to components of a given task; this is possible when the components of the task are allocated to each member of the group and the marks are allocated equitably across the components. Fourth, moderating the group mark against individuals' performance profiles. This can be realised by requiring all group members to keep a project log or other portfolio that reveals individual engagement and effort. The alternative could be to conduct a brief viva for each student; this activity allows students to defend the marks they have acquired from a group project by answering questions based on the project.

Similarly, Houldsworth and Mathews (2000) emphasise the importance of moderating the marks obtained for a group task and recommend splitting the entire task into chunks and distributing the segments among individual members of the group. This system allows the assessor to provide continuous feedback to individual members of the group, while being mindful of the fact that, at the end, learners will organise all these chunks into the complete picture required by the learning outcomes to be achieved through collaborative learning.
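As a rough computational sketch (an illustration, not a procedure prescribed by Almond or by Houldsworth and Mathews), a shared group mark can be blended with the mark a student earns individually during moderation; the 50/50 weighting below is a hypothetical choice.

    # Minimal sketch of moderating a shared group mark against individual
    # performance (in the spirit of Almond's fourth measure). The 50/50
    # weighting is an illustrative assumption, not a prescribed value.

    def moderated_mark(group_mark: float, viva_mark: float,
                       group_weight: float = 0.5) -> float:
        """Blend the group's shared mark with the student's individual
        viva mark; both marks are percentages (0-100)."""
        return group_weight * group_mark + (1 - group_weight) * viva_mark

    # A group mark of 70% moderated against an individual viva mark of 40%
    print(moderated_mark(70.0, 40.0))  # 55.0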

Assumption: Pre-service teachers in their fourth year of study are able to utilise the opportunity of working in groups to share knowledge and to account responsibly for their own individual performance within their groups.

 

Methodology and Data Collection Procedure

The empirical study used a quantitative paradigm for data collection and analysis. The procedures for data collection were as follows. The targeted sample was fourth-year students enrolled in the Teaching Practice course. The class of 100 students was subdivided into groups of 10. The task was aligned to criteria derived from three competences. First, in terms of foundational competence, students were expected to read and interpret the Curriculum and Assessment Policy Statement (CAPS), which is the prescribed national curriculum guideline for school subjects. The guidelines were to: (i) explain the theoretical knowledge of the teacher, teacher role/s, the learner and teaching practice; and (ii) develop a summary from a synthesis of at least three sources about that theory or theories. Second was the task of designing a lesson template reflecting the required concepts or sub-headings: lesson topic, duration of the lesson, lesson objectives, learning outcomes, lesson exposition strategies, etc. This activity assessed practical competences.

Students met in their own time to prepare for their presentation. Each group selected a scribe and a presenter. The mark attained by the presenter was shared equally among the group members, because the presentation was taken to represent the group's collective effort.

The sample for the moderation process comprised 50 individuals, with each group represented by five members. Purposive sampling was conducted: I selected the first, third, fifth, seventh and tenth member from each group. Oral presentations guided by questions were used as the moderation tool.
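As a small illustration of this selection rule (the group labels and member names below are placeholders, not the study's participants):

    # Sketch of the purposive sampling described above: from each 10-member
    # group, the members at positions 1, 3, 5, 7 and 10 enter the moderation
    # sample. Group labels and member names are placeholders.

    SELECTED_POSITIONS = (1, 3, 5, 7, 10)   # 1-indexed, as in the text

    groups = {label: [f"student_{label}{m}" for m in range(1, 11)]
              for label in "ABCDEFGHIJ"}    # ten groups of ten members

    moderation_sample = {label: [members[p - 1] for p in SELECTED_POSITIONS]
                         for label, members in groups.items()}

    assert sum(len(v) for v in moderation_sample.values()) == 50  # 10 x 5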

Arrangements were made to meet each subgroup of five students in turn to answer questions. Open-ended questions were generated from the assignment submitted by the larger group. Meetings took place at times suggested by the students. All individuals in a group of five were expected to contribute to each question. The criteria were similar to those used to mark the assignment. Examples of questions were:

(i) With which theories or theory does your group associate the guidelines in the CAPS document?

(ii) What were your interpretations of the CAPS document in terms of the envisaged teacher, learner, classroom organisation, and preferred teaching strategies and learning styles?

Questions related to the lesson plan template were: (i) How is the lesson topic formulated? (ii) What is the importance of learning outcome/s in a lesson? (iii) What is the difference between teaching activities and learning activities in the lessons?

Marks were allocated according to ratings reflected on the analytic rubrics (Appendix A). The three competencies were aligned with descriptors. Each descriptor elaborated the expectations or competency level for responses to questions.

Data collected through the analytic rubrics were presented in frequency distribution tables. A comparative analysis of group scores and individual scores was then carried out, and the results were presented in tables from which the findings were identified.
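As a rough illustration of this comparison (the five individual scores are invented placeholders, chosen only to be consistent with the group-B summary reported under Findings: lowest score 40%, mean 58.80%):

    # Rough illustration of the comparative analysis: the shared group mark
    # versus the moderated individual marks. The individual scores are
    # invented placeholders consistent with the group-B summary in the
    # Findings (lowest 40%, mean 58.80%); they are not the study's raw data.

    from statistics import mean

    shared_group_mark = {"B": 70}
    individual_marks = {"B": [70, 64, 62, 58, 40]}  # five moderated members

    for group, marks in individual_marks.items():
        print(f"Group {group}: shared {shared_group_mark[group]}% | "
              f"individual range {min(marks)}-{max(marks)}%, "
              f"mean {mean(marks):.2f}%")
    # -> Group B: shared 70% | individual range 40-70%, mean 58.80%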

 

Results

Data collected from the group assessment and the individual learner assessment were presented in frequency distribution Tables 1 and 2. Summaries in bar graphs (Figures 1 and 2) form part of these results. Weightings were based on the values of the competencies, and the scores obtained were distributed accordingly.

(n = 100) (10 members in each group)

C1: foundational competences: demonstration of knowledge of theories of teaching and learning (45%)

C2: practical competences: demonstration of abilities to develop a conceptualised lesson providing all necessary phases (45%)

C3: teamwork and collaborative effort (10%)
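A short computation makes this weighting explicit (the component scores in the example are hypothetical):

    # Sketch of the weighting above: C1 and C2 each contribute 45% and C3
    # contributes 10% to the final percentage. The component scores used in
    # the example are hypothetical.

    WEIGHTS = {"C1": 0.45, "C2": 0.45, "C3": 0.10}

    def weighted_total(scores: dict) -> float:
        """scores maps each competency to a percentage (0-100)."""
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    print(weighted_total({"C1": 60, "C2": 55, "C3": 80}))  # 59.75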

The frequency distribution tables present the summary of raw data and the average scores, in percentages, obtained by individuals in each subgroup from the oral presentations (the moderation activity).

 

Findings and Discussions

Inconsistency in Students' Performance and Deceitful Feedback to Students

The results displayed in Table 1 and Table 2 expose the discrepancy between the scores obtained by the groups and those awarded to individuals during the moderation process. The contrast is evident in group B: in Table 1 the score is 70%, which was shared by everyone in the group, whereas in Table 2 the lowest individual score was 40% and the average was 58.80%. Similarly, the scores attained by individuals in all groups in Table 2 do not resonate with the marks shared by group members in Table 1. Among the scores in Table 1, no individual obtained a mark below 50%, whereas Table 2 displays a different scenario, in which the lowest scores ranged between 38% and 44%.

The implication of the result in Table 1 could be the development of a false impression among students about their performance in the competences assessed in the task. Further, a perverse perception of assessment could develop in the underperforming students: they could associate the mark in Table 1 with their own individual abilities to achieve the targeted competences in the task.

In the context of the classroom teaching practice, the marks obtained by groups in Table 1 indicated that all fourth-year students demonstrated abilities required in interpreting the theoretical principles underlying CAPS guidelines. The results create the impression that participants understood constructivist theory in terms of lesson preparation, selection of teaching strategies, and learning styles.

Interpretation of these findings creates the impression that participants were capable of designing a lesson plan with an adequate understanding of the key components that guide the delivery of a lesson, and with the ability to link learners' general or previous knowledge to new learning. According to Killen (2015), this implies that the student teacher comprehends the constructivist principle that prior knowledge provides learners with a context that enables them to make sense of the new learning.

However, the decline in the scores displayed in Table 2 points to the reality of some of the students' abilities and theoretical knowledge underlying the Curriculum and Assessment Policy Statement (CAPS). The incompatibility between the scores in Tables 1 and 2 confirmed the findings reported by De Vita (2002), Gibbs and Dunbar-Goddet (2007) and Xing et al. (2015) that the use of group assignments as a tool for gathering evidence on learner performance has serious flaws in the process of teaching and learning; for example, the awarding of marks to undeserving students and the allocation of the same marks to lazy students.

Importance of Moderating Marks Obtained through Group Work

The results presented in Table 2 indicate that the scores shared by individuals from the group task did not reflect student performance fairly. The variance between the low and high scores displayed in Table 2 revealed a lack of fairness in the distribution of marks among individual members. The inability of individual students to defend the authenticity of the marks obtained by the group confirmed the argument that some students in a group are awarded credits unduly (Houldsworth & Mathews, 2000; Xing et al., 2015). The lowest scores obtained by members of the group were of great concern in the study, because their underperformance indicated that, without moderation, these students would have managed to proceed with gaps in the knowledge and competences required for effective teaching. Some scores obtained by individuals during moderation were below the university subminimum for a pass, for example scores ranging from 36 to 48. The contrast reflected in the scores in Table 1 and Table 2 confirms the possibility of awarding students unwarranted credit; Knight (2004) argues that since in most cases group learning focuses on the output of the activity carried out by individuals in a group, assessment is as a consequence likely to be more summative than formative. This contrast in scores manifests the lack of accountability in the results on which judgements about student teachers' performance are made. The findings highlight that assessment of group assignments is of little benefit to students in their education and training. Houldsworth and Mathews (2000) and Xing et al. (2015) contend that some students find it discouraging to work hard for fellow students who do not cooperate in the task. On the other hand, those students who do not participate in and contribute to group tasks are deceived by high marks which they do not deserve. These findings indicate the importance of moderation for teacher educators. Group tasks reduce the burden of assessing a large number of students in highly subscribed courses, but there are serious repercussions to this practice. Teachers, unlike school learners, are expected to perform tasks aligned to their education and training. Teacher trainees who are unduly awarded scores that are not a true reflection of their competency in professional knowledge are likely to be a threat to the effective practice of teaching and learning in classrooms.
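The concern is straightforward to operationalise; in the snippet below, a hypothetical 50% pass threshold and invented marks are used to flag students whose moderated mark falls below the threshold even though their shared group mark cleared it.

    # Flags students whose moderated individual mark falls below the pass
    # threshold despite a passing shared group mark. The 50% threshold and
    # the sample marks are illustrative assumptions.

    PASS_THRESHOLD = 50  # assumed subminimum for a pass

    records = [("student_1", 70, 36), ("student_2", 70, 62),
               ("student_3", 70, 48)]  # (name, group mark, moderated mark)

    for name, group_mark, moderated in records:
        if group_mark >= PASS_THRESHOLD > moderated:
            print(f"{name}: passed via group mark ({group_mark}%) "
                  f"but scored {moderated}% on moderation")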

The gap between the group marks and the individual marks on moderation confirms the view raised by Barfield (2003) that a shared group mark does not reflect an individual student's authentic contribution. Incompetent students are unduly awarded marks that do not reflect their true performance. The concern over 'freeloading' is confirmed by the findings displayed in Table 1. There is a possibility that, through the assessment of group assignments or tasks, undeserving students get away with gaps in their theoretical and practical knowledge of classroom practice. The findings displayed in Tables 1 and 2 and in Figures 1 and 2 indicate that moderation of the marks shared by a group is of paramount importance for the validation of students' scores. This study confirms that assessment of group work in higher education ought to be moderated so that the trustworthiness of the results is checked. Oral presentation of the final product of the group task by a purposively selected subgroup of individuals provided a true reflection of individual students' performance.

Researchers Daradoumis et al. (2006) and Gress et al. (2010) recommend observation, content analysis and interaction analysis as effective assessment techniques for collaborative learning. Similarly, the view held in this paper is that, since teacher education and training in South Africa prioritises integrated assessment systems, assessment ought to be monitored and administered adequately. In the case of the assessment of activities carried out by a group, each learner ought to give an account of the contribution or role he or she played towards the accomplishment of the outcomes before marks are allocated.

These findings resonate with the lack of trustworthiness in the assessment of group work highlighted by international researchers such as Almond (2009), Barfield (2003), Lejk and Wyvill (2002), Sharp (2006) and Xing et al. (2015). The findings of the study and the argument raised in this article provide international researchers with reliable new data on the assessment of group work in higher education and training, with special reference to teacher education and training in the South African context.

 

Conclusion

The purpose of this empirical study was to examine, through moderation, the trustworthiness of marks or scores based on the assessment of a task carried out by a group of students. Comparative analysis of the scores obtained from group work and from moderation revealed disparities and shortcomings that result from the assessment of group work. Based on the findings, it is recommended that tasks carried out by groups be carefully aligned with moderation techniques to verify authenticity and fairness in the allocation of scores. This study confirms the importance of moderating group scores through oral testing or interviews based on the task undertaken by the group. Students who actively participated in the group task and paid attention to the benchmarked areas were quick to remember what the group had discussed during the task. It was easy to identify undeserving individuals in a group, because they remained unaware of the group's consensus and were unable to provide insight into aspects of the design of the lesson. This paper recommends that, for group learning to be adequately assessed through group assignments or tasks, moderation ought to be applied to verify the trustworthiness of group-work assessment in gathering evidence about individual learners' attainment of competent performance in the acquisition of theoretical and practical knowledge.

The findings of this study also raise the following questions for further research:

  • How do students perceive assessment of group work?

  • What are the implications of 'freeloading' for honest and committed students?

 

Note

i. Published under a Creative Commons Attribution Licence.

 

References

Almond RJ 2009. Group assessment: Comparing group and individual undergraduate module marks. Assessment and Evaluation in Higher Education, 34(2):141-148. https://doi.org/10.1080/02602930801956083

Anderson LW & Krathwohl DR (eds.) 2001. A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York, NY: Longman.

Bagnall RG 1994. Performance indicators and outcomes as measures of educational quality: A cautionary critique. International Journal of Lifelong Education, 13(1):19-32. https://doi.org/10.1080/0260137940130103

Barfield RL 2003. Students' perceptions of and satisfaction with group grades and the group experience in the college classroom. Assessment and Evaluation in Higher Education, 28(4):355-370. https://doi.org/10.1080/0260293032000066191

Biggs J 1999. Teaching for quality learning at university. Buckingham, England: Open University Press.

Biggs J 2003. Teaching for quality learning at university (2nd ed). Buckingham, England: Open University Press.

Biggs J & Tang C 2007. Teaching for quality learning at university (3rd ed). Buckingham, England: Open University Press.

Biggs J & Tang C 2011. Teaching for quality learning at university (4th ed). Berkshire, England: Open University Press.

Black P & Wiliam D 2003. 'In praise of educational research': Formative assessment. British Educational Research Journal, 29(5):623-637. https://doi.org/10.1080/0141192032000133721

Cheng W & Warren M 2000. Making a difference: Using peers to assess individual students' contributions to a group project. Teaching in Higher Education, 5(2):243-255. https://doi.org/10.1080/135625100114885

Clarence S, Quinn L & Vorster JA (eds.) 2015. Assessment in higher education: Reframing traditional understanding and practices. Grahamstown, South Africa: Rhodes University. Available at https://www.ru.ac.za/media/rhodesuniversity/content/chertl/documents/RU%20_%20Assessment%20in%20HE.pdf. Accessed 29 April 2018.

Daradoumis T, Martínez-Monés A & Xhafa F 2006. A layered framework for evaluating on-line collaborative learning interactions. International Journal of Human-Computer Studies, 64(7):622-635. https://doi.org/10.1016/j.ijhcs.2006.02.001

De Vita G 2002. Does assessed multicultural group work really pull UK students' average down? Assessment and Evaluation in Higher Education, 27(2):153-161. https://doi.org/10.1080/02602930220128724

Earl LM 2003. Assessment as learning: Using classroom assessment to maximize student learning. Thousand Oaks, CA: Corwin Press.

Ewell P 2008. Building academic cultures of evidence: A perspective on learning outcomes in higher education. Paper presented at the Symposium of the Hong Kong University Grants Committee on Quality Education, Hong Kong.

Gibbs G & Dunbar-Goddet H 2007. The effects of programme assessment environments on student learning. Heslington, England: The Higher Education Academy. Available at https://www.heacademy.ac.uk/system/files/gibbs_0506.pdf. Accessed 8 February 2016.

Gravett S & Geyser H (eds.) 2004. Teaching and learning in higher education. Pretoria, South Africa: Van Schaik.

Gress CLZ, Fior M, Hadwin AF & Winne PH 2010. Measurement and assessment in computer-supported collaborative learning. Computers in Human Behavior, 26(5):806-814. https://doi.org/10.1016/j.chb.2007.05.012

Houldsworth C & Mathews BP 2000. Group composition, performance and educational attainment. Education and Training, 42(1):40-53. https://doi.org/10.1108/00400910010317086

James R, McInnis C & Devlin M 2002. Assessing learning in Australian universities: Ideas, strategies and resources for quality in student assessment. Victoria, Australia: Centre for the Study of Higher Education, The University of Melbourne. Available at http://www.ntu.edu.vn/Portals/96/Tu%20lieu%20tham%20khao/Phuong%20phap%20danh%20gia/assessing%20learning.pdf. Accessed 20 February 2016.

Killen R 2005. Programming and assessment for quality teaching and learning. Southbank, Australia: Thomson/Social Science Press.

Killen R 2010. Teaching strategies for quality teaching and learning. Claremont, South Africa: Juta & Company Ltd.

Killen R 2015. Teaching strategies for quality teaching and learning (2nd ed). Claremont, South Africa: Juta & Company Ltd.

Knight J 2004. Comparison of student perception and performance in individual and group assessments in practical classes. Journal of Geography in Higher Education, 28(1):63-81. https://doi.org/10.1080/0309826042000198648

Lejk M & Wyvill M 2002. Peer assessment of contributions to a group project: Student attitudes to holistic and category-based approaches. Assessment and Evaluation in Higher Education, 27(6):569-577. https://doi.org/10.1080/0260293022000020327

Li LKY 2001. Some refinements on peer assessment of group projects. Assessment and Evaluation in Higher Education, 26(1):5-18. https://doi.org/10.1080/0260293002002255

Moon J 2004. Linking levels, learning outcomes and assessment criteria. Paper presented at the Bologna Seminar, Edinburgh, 1-2 July.

Murdoch N & Grobbelaar J 2004. Quality assurance of assessment in higher education. In S Gravett & H Geyser (eds). Teaching and learning in higher education. Pretoria, South Africa: Van Schaik.

Sharp S 2006. Deriving individual student marks from a tutor's assessment of group work. Assessment and Evaluation in Higher Education, 31(3):329-343. https://doi.org/10.1080/02602930500352956

Shay S 2008. Beyond social constructivist perspectives on assessment: The centring of knowledge. Teaching in Higher Education, 13(5):595-605. https://doi.org/10.1080/13562510802334970

Siebörger R & Macintosh H 1998. Transforming assessment: A guide for South African teachers. Cape Town, South Africa: Juta.

Taras M 2002. Using assessment for learning and learning from assessment. Assessment and Evaluation in Higher Education, 27(6):501-510. https://doi.org/10.1080/0260293022000020273

White F, Lloyd H, Kennedy G & Stuart C 2005. An investigation of undergraduate students' feelings and attitudes towards group work and group assessment. In Higher education in a changing world: Proceedings of the 28th HERDSA Annual Conference, Sydney, 3-6 July. Milperra, Australia: Higher Education Research and Development Society of Australasia. Available at http://www.herdsa.org.au/publications/conference-proceedings/research-and-development-higher-education-higher-education-119. Accessed 21 March 2018.

Winchester-Seeto T 2002. Assessment of collaborative work: Collaboration versus assessment. Invited paper presented at the Annual Uniserve Science Symposium, Australia, 5 April.

Xing W, Waldholm R, Petakovic E & Goggins S 2015. Group learning assessment: Developing a theory-informed analytics. Educational Technology and Society, 18(2):110-128.

 

 

Appendix A

Assessment Activity

Group assignment: (A group of 10 students)

 



 

The scoring rubrics for the main task and moderation task

 



 

Moderation Task

Question that guided oral presentation

i. With which theories or theory did your group associate the guidelines in the CAPS document? (Foundational competences)

ii. What were your interpretations of the CAPS document in terms of the envisaged teacher, learner, classroom organisation and preferred teaching strategies and learning styles? (Reflexive competences)

iii. How is the lesson topic formulated according to the guidelines of the CAPS documents? Mention the key issues that need to be included in a lesson plan.

iv. What is the importance of learning outcome/s or learning objectives in a lesson? What is the difference between teaching activities and learning activities in the lessons? (Practical competences)
