SciELO - Scientific Electronic Library Online

 

African Journal of Health Professions Education

On-line version ISSN 2078-5127

Abstract

BRITS, H; JOUBERT, G; BEZUIDENHOUT, J  and  VAN DER MERWE, L. Evaluation of assessment marks in the clinical years of an undergraduate medical training programme: Where are we and how can we improve?. Afr. J. Health Prof. Educ. (Online) [online]. 2021, vol.13, n.4, pp.223-229. ISSN 2078-5127.  http://dx.doi.org/10.7196/ajhpe.2021.v13i4.1379.

BACKGROUND: In high-stakes assessments, the accuracy and consistency of the decision to pass or fail a student is as important as the reliability of the assessment.

OBJECTIVE: To evaluate the reliability of the results of high-stakes assessments in the clinical phase of the undergraduate medical programme at the University of the Free State, as a step towards making recommendations for improving assessment quality.

METHODS: A cohort analytical study design was used. The final end-of-block marks and the end-of-year assessment marks of fourth-year and final-year medical students over 3 years were compared for decision reliability, test-retest reliability, stability and reproducibility.

RESULTS: A total of 1 380 marks in 26 assessments were evaluated. The G-index of agreement for decision reliability ranged from 0.86 to 0.98. In 88.9% of assessments, the test-retest correlation coefficient was <0.7. Mean marks for end-of-block and end-of-year assessments were similar; however, the standard deviations of the differences between end-of-block and end-of-year assessment marks were high. Multiple-choice questions (MCQs) and objective structured clinical examinations (OSCEs) yielded good reliability results.

CONCLUSION: The reliability of pass/fail outcome decisions was good. Test reliability, as well as the stability and reproducibility of individual student marks, could not be accurately replicated. MCQs and OSCEs are practical examples of where the number of assessments can be increased to improve reliability. To increase the number of assessments and to reduce the stress of high-stakes assessments, more workplace-based assessment with observed clinical cases is recommended.
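The two reliability statistics named in the abstract can be sketched in code. This is an illustrative sketch only: the helper names and the sample data below are invented for demonstration, and the abstract does not describe the study's actual computations. The G-index of agreement for dichotomous (pass/fail) decisions is G = (agreements − disagreements) / N, and test-retest reliability is commonly expressed as a Pearson correlation between the two mark series.

```python
from statistics import mean

def g_index(decisions_a, decisions_b):
    """G-index of agreement for dichotomous (pass/fail) decisions:
    G = (agreements - disagreements) / N, i.e. 2 * P_observed - 1."""
    n = len(decisions_a)
    agreements = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return (2 * agreements - n) / n

def pearson_r(x, y):
    """Test-retest (Pearson) correlation between two mark series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented example: end-of-block vs end-of-year outcomes for 10 students
block_pass = ["pass"] * 9 + ["fail"]
year_pass = ["pass"] * 8 + ["fail", "fail"]
print(g_index(block_pass, year_pass))  # 0.8: 9 of 10 decisions agree

block_marks = [62, 55, 71, 48, 80, 66, 59, 73, 51, 68]
year_marks = [60, 58, 69, 52, 77, 70, 55, 75, 49, 71]
print(round(pearson_r(block_marks, year_marks), 2))
```

Note how the two statistics answer different questions: the G-index only counts agreement on the pass/fail *decision*, so it can be high even when the individual marks drift considerably between occasions, which is consistent with the study's finding of good decision reliability alongside weak test-retest correlations.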


 

Creative Commons License: All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.