South African Computer Journal
On-line version ISSN 2313-7835 · Print version ISSN 1015-7999
SACJ vol.37 no.1 Grahamstown Jun. 2025
https://doi.org/10.18489/sacj.v37i1.20602
RESEARCH ARTICLE
A multilevel analysis of digital technology to support teachers to improve their professional practice
Reuben Dlamini; Brahm Fleisch
Wits School of Education, University of the Witwatersrand, South Africa. Email: Reuben Dlamini - reuben.dlamini@wits.ac.za (corresponding); Brahm Fleisch - brahm.fleisch@wits.ac.za
ABSTRACT
Within the education sector, digital technology has increasingly entered school subjects, curricula, assessment and teaching methods. The common features of technology-orientated and supported programmes are independence from geographic boundaries and time constraints. The Covid-19 pandemic accelerated the debates about digitalisation in the education sector. Therefore, the purpose of this study was to conduct a multilevel analysis of evidence from a large-scale randomised control trial, complemented with the systematic collection of implementation information, to compare the impact of a conventional structured pedagogic programme to that of a pedagogically similar programme built with a digital technology overlay. The technology acceptance model framed our analysis. Results from the multilevel analysis provided an opportunity to advance knowledge about teachers' acceptance of digital technology to improve student learning and their professional practice. The evidence pointed to two hypotheses: while much of the literature on teachers' acceptance of digital technology concentrated on the subjective element of teachers' behavioural intentions, the current study suggests the extent to which professional accountability may be a key factor in successful digital technology interventions for teachers; and the results point to substantial differentiated uptake that is not linked to either perceived usefulness or perceived ease of use.
Categories ● Applied computing ~ Education ● Applied computing ~ Digitalisation
Keywords: Digital Technology Intervention, Randomised Control Trial, Structured Pedagogical Programme, Technology Acceptance Model, Virtual Coaching
1 INTRODUCTION
Computers, and more generally digital technology, permeate all aspects of our contemporary world. Within the education sector, information and communication technology (ICT) has increasingly entered school subjects, curricula, assessment, teaching methods, and pedagogy. Specifically, few education systems can avoid the pressure to include computer literacy as a new aim of schooling. The Covid-19 pandemic accelerated the debates about the use of digital technology to connect learners at a distance and to support teachers to improve their practice. Digital technology is "viewed as a solution to systemic inequalities" (Mhlongo & Dlamini, 2022, p. 1). Within the field of computers and education, one of the main research concerns is the challenges that teachers face adopting technology in their classrooms (Dlamini, 2022). Within this field, the conceptual framework, the technology acceptance model (TAM), has gained considerable ground. TAM research is focused on better understanding how perceived usefulness and ease of use influence the adoption of technology in the workplace. Systematic reviews and meta-analyses of the TAM for teachers largely confirmed earlier individual studies of the usefulness of an analysis of the TAM variables to explain the uneven uptake in the education sector (Scherer et al., 2019).
This paper contributes to the knowledge base of the conditions under which teachers teaching in poor communities in the Global South take up digital technology as a means to improve learning outcomes. The uneven take-up of technology is often attributed to issues of access in digital resource-constrained conditions (Brown & Czerniewicz, 2010). Using evidence from a large-scale randomised control trial (RCT), the aim of which was to compare the impact of a conventional structured pedagogic programme (CSPP) to a pedagogically similar programme built with a digital technology overlay, the study provided an opportunity to advance knowledge about teachers' acceptance of digital technology as a means to improve student learning. The original RCT study found that after three years, average student performance was significantly higher in the schools that received the CSPP relative to the control schools, but that average student performance in the digital technology intervention schools was only marginally higher than the average performance in the control schools (Cilliers et al., 2021). Given the initial positive findings in the first year and the considerable interest in the use of digital technology to improve teaching, it is not sufficient to know that the intervention had minimal impact; it is critical to know why and to better understand why the digital technology component did not add value. Deaton and Cartwright (2018, p. 2) noted the following:
RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative programme, combining with other methods, including conceptual and theoretical development, to discover not 'what works', but 'why things work.'
The purpose of this paper was thus to analyse the evidence that may explain why the digital technology intervention, which showed initial promise, came up short. Two hypotheses are offered in this paper. Firstly, while much of the literature on teachers' acceptance of digital technology concentrated on the subjective element of their behavioural intentions, the evidence in this study pointed to the centrality of professional accountability in the digital technology interventions. The second hypothesis points to the need to better understand factors associated with differential uptake not linked to either perceived usefulness or ease of use.
The paper is divided into four sections. The first section reviews the literature on both structured pedagogic programme models and the emerging literature on the use of digital technology in instructional coaching. This literature provides a useful context against which the study is framed. The second section of the paper describes the original research study itself, particularly the context and the methodology used in the design and implementation of the study. Although the main study focused on comparing learning outcomes in the three research groups, the current paper uses extensive implementation data to better understand teachers' engagement with digital technology. The third section revisits the main findings but concentrates on an analysis of both the quantitative and qualitative secondary findings on the uptake of technology and links to changes in instructional practices, and, ultimately, to learning outcomes. This is followed by the fourth section, which discusses these findings and builds hypotheses.
2 LITERATURE REVIEW
While the scale of the early grade reading crisis was highlighted recently (Azevedo et al., 2019), significant advances have been made in understanding the problem of early learning in low- and lower middle-income country contexts. Recent reviews of research consistently and convincingly demonstrated that structured pedagogic programmes are likely to be the most effective approaches to improving early grade reading outcomes and possibly early mathematics as well (Evans & Popova, 2016; Hoadley, 2024; Snilstveit et al., 2016). Current research in South Africa, Kenya, and Haiti confirmed these findings (Angrist et al., 2023; Cilliers, Fleisch, Kotze et al., 2019; Fleisch, 2018; Guzman et al., 2021; Piper et al., 2018). Within the structured pedagogic approach, studies pointed to the importance of CSPP as the optimal method for teacher in-service professional development (Kraft et al., 2018; Popova et al., 2022). Cilliers, Fleisch, Kotze et al. (2019) showed that when traditional training in a centralised venue is compared to training plus onsite instructional coaching, which is consistently more expensive, the latter is nonetheless more cost-effective. This is evidenced not only by effects immediately after the end of the intervention but also by the persisting effects (Cilliers et al., 2022). Majerowicz and Montero (2018) found similar persistence in their study in Peru. These findings expand on Piper and Zuilkowski's (2015) coaching study that provided valuable evidence on the relative cost-effectiveness of different coach-teacher ratios.
However, it must be asked whether other forms or modes of structured pedagogy, particularly structured pedagogy programmes that use digital technology, could be effective. As part of a structured pedagogic model, could e-coaching be as effective as conventional coaching? There is a small but growing body of research that suggested this may be true. Although the study by Piper et al. (2016) on ICT interventions designed to improve early grade reading in Kenya provided an important baseline, it did not really engage with the idea that technology could be harnessed for virtual coaching. The literature on using digital technology for coaching has accumulated over the past decade, primarily in developed system contexts. Rock et al. (2014) were among the pioneers in e-coaching, or virtual coaching, particularly in relation to bug-in-ear technology. Their earlier study (Rock et al., 2014) focused on using virtual coaching with pre-service student teachers. Rock et al. (2013) tested Skype and how teachers could develop their classroom management skills through virtual coaching. Geissler et al. (2014) presented original empirical findings on virtual coaching programmes that combined telephone coaching with an internet-based coaching platform, and although the study was small in scale (14 participants who received three coaching sessions), they found positive results.
Stapleton et al. (2017) tested virtual coaching software to assess the potential of linking principal candidates with teacher candidates to ascertain whether the technology would allow principals to give teachers live feedback on their classroom practices. A new direction in this research is tackling the problem of scaling up. Hennessy et al. (2022) and Rodriguez-Segura (2022), in research syntheses, showed benefits of ICT for teachers, but there is limited evidence that the technology is sustainable, cost-effective, or impactful on student outcomes. Uribe-Banda et al. (2023), in their comparative study of teacher professional development, found few outcome differences between a technology-based version and its blended equivalent, save that teachers preferred the blended version. Shal et al. (2025), exploring professional development not with direct training but via webinars, add important insights into wider approaches to teacher education using technology. El-Serafy et al. (2022) have extended this literature into emergency settings, where the use of technology for teacher professional development has a distinct advantage. Layton's (2023) work is important as it begins to explore the possible mechanisms involved in effective e-coaching for teachers.
While this research provided an important point of departure, given the substantive difference between the context of teachers' work in relatively affluent systems and teachers' work in low-income contexts in the Global South, the evidence that emerged speaks to the realities of resource-constrained contexts. Bruns et al.'s (2018) study was one of the first of its kind that described the impact of a Skype coaching intervention on secondary school teachers in Brazil. Possibly the most important is the study by Jukes et al. (2017). They used a combined model with multiple components, including training workshops, semi-scripted lesson plans, and weekly text support for teachers, and conducted a cluster randomised trial involving 51 primary schools in Kenya.
The results of this experiment showed that the combined intervention model, which included text message support, improved Grade 2 student literacy outcomes. While Jukes et al. (2017) provided strong support for the viability of using an ICT type intervention to support coaching teachers in their classrooms, they did not answer the following key questions: Is an ICT/digital technology bundled intervention with virtual or e-coaching (digital technology structured pedagogical programme [DTSPP]) as effective as a CSPP? If so, what are the impact mechanisms of the digital technology component on early grade reading outcomes?
2.1 Conceptual Framework
TAM (Davis, 1989) framed our analysis of the mechanisms of DTSPP. In terms of the framework, acceptance refers to the extent to which new digital technology is perceived to be useful and easy to use. According to Marangunić and Granić (2015, p. 81), the TAM "has evolved to become a key model in understanding predictors of human behaviour toward potential acceptance or rejection of the technology". However, our use of the conceptual framework was less concerned with the specific operational aspects of the model and more concerned with how it differentiates between potential users' views of its usefulness on the one hand, ease of use on the other, and the relationship between these two perceptions and actual system use in practice. Figure 1 presents the core variables of the TAM, namely perceived usefulness, perceived ease of use, attitude, and behavioural intention; perceived usefulness and ease of use contribute to attitude and, by extension, to teachers' behavioural intentions to accept or reject technology. This study was premised on the TAM and the DeLone and McLean (2003) information systems success model. The interplay between the TAM and the DeLone and McLean information systems success model provided a lens to more fully understand the complex dependent variables of technology-orientated operations in schools.
Accordingly, the DeLone and McLean information systems success model allowed us to explore complex dependent variables in the positioning of digital technologies in the teaching profession in order to draw meaningful conclusions. DeLone and McLean defined "six distinct dimensions of information systems success: system quality, information quality, use, user satisfaction, individual impact, and organisational impact" (Urbach & Müller, 2012, p. 3). The six dimensions of information systems success are consistent with the TAM's construct, and their complementarity provided an integrated view on technology-orientated support of teachers to improve their professional practice. While much of the literature on teachers' acceptance of digital technology concentrated on the subjective element of the findings, the current study suggests the extent to which professional accountability may be a key factor in successful digital technology interventions for teachers and points to substantial differentiated uptake that is not linked to either perceived usefulness or perceived ease of use. Although systems quality contributes to behavioural intention and user satisfaction, the current study extended the theoretical framework with the additional dimension of professional accountability.
The increased affordances offered by digital technologies have become an attractive trend in the education sector to improve teachers' professional practices, but the quality of the support teachers receive is also a factor. Therefore, it is important for education leaders to pursue digital strategic plans to support teachers and enhance their professional practices. The TAM has been used to understand acceptance of new technologies by users in developed system contexts (Marangunić & Granić, 2015; Wingo et al., 2017), but the current study positioned the TAM and the DeLone and McLean information systems success model core variables in a developing and resource-constrained context. It was also important to consider the individual impact and organisational impact beyond the perceived usefulness and perceived ease of use constructs. The interplay between the TAM and the DeLone and McLean information systems success model provided us with the lens to explore the multidimensional relationship and interdependencies among the different constructs in developing and resource-constrained contexts.
3 RESEARCH STUDY
Over the past ten years, the Early Grade Reading Study has been advancing knowledge on system-wide improvement of early grade reading in both African languages and English as a second language in South Africa (Cilliers, Fleisch, Kotze et al., 2019; Cilliers, Fleisch, Prinsloo & Taylor, 2019; Fleisch, 2018). Much of this new knowledge centres on the effectiveness and the mechanisms of a basic structured pedagogic change model, which is a combination of detailed daily lesson plans, high-quality educational materials, and centralised training, otherwise referred to as the CSPP. The current research showed that the approach is a cost-effective and sustainable way to improve early grade reading teaching system-wide. The South African research confirmed new insights from similar research programmes in India (Banerji & Chavan, 2016) and Kenya (Piper et al., 2018). These unique government and university research programmes are committed to building evidence-based knowledge through long-term cumulative research that integrates experimental and quasi-experimental research (RCTs and regression discontinuity designs), large-scale classroom observations, and in-depth qualitative case studies.
While there is a general agreement that structured pedagogy programme models are effective, concerns have been raised about the relatively high cost of the onsite instructional coaching component and the size of the pool of high-quality coaches available, particularly for remote rural areas in the Global South. To address these concerns, the original Early Grade Reading Study II investigated the viability and cost-effectiveness of an alternative model that involved a combination of components using digital technology. In this instance, the digital technology involved a mixture of low-cost Android tablets, a custom-built lesson plan application, a library of short video-clips, cell phone voice calls, text messaging, and virtual competitions. Preliminary findings from the first year of the intervention suggested that both conventional and digital technology versions were equally effective in improving early grade reading outcomes in English as a second language (Kotze et al., 2019). However, the learning outcome results at the end of the third year of the interventions revealed very different findings.
3.1 The Intervention Models
Both intervention models consisted of the following three core components:
1. detailed daily lesson plans;
2. integrated learning and teaching materials; and
3. professional development, which included both coaching and centralised training.
The first intervention model consisted of printed paper versions of the lesson plans, one-on-one onsite coaching, and cluster needs-based workshops after school. In the digital technology model, the teachers received a low-cost 10-inch Android tablet with electronic versions of the lesson plans in the form of a simple application. The tablet also included various audio-visual resources. Coaching was provided either on the tablet or on the teachers' smart phones (or both) via cell phone calls and WhatsApp messaging, which included text, voice, and video messages. The electronic lesson plan application was custom-developed for the study. This and all other electronic resources were available offline to ensure functionality without Wi-Fi or mobile data.
The additional electronic resources included short videos demonstrating good classroom practice; sound clips of the phonics sounds and songs and rhymes that appear in the learning and teaching materials; and PDF examples of learners' work. Teachers in both interventions were trained in a residential setting at the start of each term. Those in the conventional intervention group were trained for two days, and those in the digital technology group received an additional day to orientate themselves to the tablet and Android applications. In addition, both groups of teachers attended one-day cluster training sessions in small groups. If any teachers did not attend a training session, the onsite coaches organised a makeup session to ensure that the teachers had the learning and teaching materials and understood the instructional practices. Teachers in the conventional version of the intervention received visits from specialist reading coaches about once a month for the duration of the school year.
During these visits, the coaches modelled, observed, supported, and evaluated teachers' classroom practices and monitored learners' exercise books. Coaching in the digital intervention involved a phone conversation with each teacher once every two weeks, regular text messages, and the establishment of virtual communities of practice. In the conventional version of the intervention, the specialist reading coaches modelled professional practice to educators during their visits, but the virtual reading coaches used WhatsApp for weekly motivational messages and also sent teaching tips to answer questions on the lessons. While the virtual coach also ran biweekly competitions for teachers to showcase their work to the wider group, the absence of embodied simulation in the digital intervention could be the reason why the digital technology did not improve early grade reading. Embodied cognition theory shows that simulation (modelling) works best when it is face-to-face because of all the bodily-based moments of recognition that occur (Soylu et al., 2014). Table 1 is a comparison between the conventional version of the intervention and the digital intervention.
3.2 The Randomised Control Trial and Complementary Implementation Data
The original Early Grade Reading Study II (a cluster RCT with two intervention arms, in the Mpumalanga province, South Africa) was designed to test the viability and cost-effectiveness of two different intervention models. Alongside the core baseline and endline data collection, the study team collected extensive information about teachers' use of the technology and their beliefs about and attitudes towards digital technology in the classroom. The core instrument to assess the change in the average levels of learning was a modified version of the Early Grade Reading Assessment (Dubeck & Gove, 2015). Learner data were collected at the beginning of Grade 1, end of Grade 1, end of Grade 2, and end of Grade 3 (each was regarded as a data collection wave). Implementation data were collected at various points through the three years of implementation, particularly during the teacher training points and during onsite and virtual coaching sessions. Data on teacher use of the tablets were collected annually. The original sampling frame included 180 primary schools in two of the four districts in the Mpumalanga province, South Africa. Schools were included if they were designated in the three poorest quintiles on the national index and used either of two local languages, isiZulu or siSwati, as the home language in Grades 1-3.
The 2 684 learners who were assessed with the oral instrument and the slightly smaller number who were assessed with the written instrument were the same learners assessed at the original baseline assessment at the beginning of Grade 1. These learners included repeaters, mostly in Grade 2, and a handful of learners who appeared to be in Grade 4. Each successive set of instruments was designed in collaboration with language experts with the intention that the instruments would provide core information about learners' literacy skills in the home language and their language and literacy skills in English, as the additional language. The school groups were mostly balanced on the various subtests at baseline. There were only two imbalances out of the 10 subtests, and both of these were on items that had either strong ceiling or floor effects. In the 2019 testing, there was balance in terms of learner age, gender, and principal component analysis (PCA) sub-score standardised to control (Table 2).
Of the 3 327 learners that were tested at baseline, 2 684 were retested at the end of Grade 3, yielding an attrition rate of about 19.3%. When we regressed the probability of attrition on intervention assignment, controlling for the gender and language dummies, the results showed that there was no attrition bias in any of the intervention groups. As part of the pilot study in 2015, the research team worked with a group of eight teachers in schools that were not part of the full trial to field test the tablet and the application, and the team also observed classrooms and did extensive in-depth interviews to ascertain potential weaknesses or flaws in the tablets, virtual coaching processes, and the lesson plan application. As such, the qualitative data generated in the pilot phase were incorporated into the analysis. Administrative data collected as part of the implementation of the large-scale trial provided extensive information about attendance at face-to-face training sessions and the condition of tablets at the end of the intervention. One unanticipated source of key evidence emerged from an analysis of the data downloaded from the applications at the end of Term 3, Year 3.
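The attrition arithmetic and the differential-attrition check described above can be sketched in a few lines. This is a minimal illustration: the overall counts are the figures reported in the text, but the per-arm counts are hypothetical, and the study's actual check was a regression with covariates rather than a simple rate comparison.

```python
# Attrition arithmetic using the figures reported in the text.
tested_at_baseline = 3327  # learners assessed at the start of Grade 1
retested_grade3 = 2684     # same learners re-assessed at the end of Grade 3

attrition_rate = 1 - retested_grade3 / tested_at_baseline
print(f"Overall attrition: {attrition_rate:.1%}")  # -> Overall attrition: 19.3%

# A differential-attrition check compares attrition across arms.
# These per-arm (baseline, endline) counts are hypothetical; they merely
# sum to the reported totals to keep the sketch internally consistent.
arms = {"control": (1100, 885), "CSPP": (1110, 900), "DTSPP": (1117, 899)}
for arm, (baseline, endline) in arms.items():
    print(f"{arm}: {1 - endline / baseline:.1%} attrition")
```

Roughly equal per-arm rates (as the study found) mean the endline comparison is unlikely to be biased by who dropped out of which group.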
The extraction of data was originally designed to help the application designers better understand usage patterns. Data from 56 Grade 3 teachers' tablets were accessed; these data were derived from records of the frequency, time, and types of files accessed on the tablets. Several variables were prepared after cleaning the data and anonymising the records. These included average time spent on the videos, audio files, examples, and PDF files of explanations. These data provided invaluable information on the patterns and prevalence of tablet use. Special attention was paid to the pattern of use of the various lesson plan pages on the tablet application. An inverse cumulative distribution function was calculated over pages opened for at least five seconds, so that an individual slide was counted each time it was accessed for more than five seconds. The data on tablet use provided insights into actual ease of use and the distribution of use between teachers.
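The five-second counting rule can be illustrated with a short sketch. The log schema (`slide_id`, `seconds_open`) and the records below are hypothetical; the study's actual log format is not published in this paper.

```python
from collections import Counter

# Hypothetical tablet log records: each row is one opening of a lesson-plan
# slide, with how long it stayed open. Field names are illustrative only.
log_records = [
    {"slide_id": "term3_wk2_day1_p3", "seconds_open": 41},
    {"slide_id": "term3_wk2_day1_p3", "seconds_open": 2},  # too brief: skipped
    {"slide_id": "term3_wk2_day1_p4", "seconds_open": 7},
    {"slide_id": "term3_wk2_day2_p1", "seconds_open": 5},
]

MIN_SECONDS = 5  # a slide counts as 'accessed' only if open at least this long

access_counts = Counter(
    r["slide_id"] for r in log_records if r["seconds_open"] >= MIN_SECONDS
)
print(dict(access_counts))
```

Aggregating such per-slide counts across teachers yields the distribution of use that the analysis above draws on: which lesson-plan pages were actually consulted, how often, and by whom.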
4 FINDINGS
The main findings provide a picture of the impact of the interventions, relative to the control group, on learning outcomes after a full three years of implementation. In Table 3, we only provide the results for the overall impact of the interventions for learners who were enrolled in Grade 3 (for the results for the full sample, which do not differ significantly, see Cilliers et al. (2021)). Table 3 shows that learners in the conventional intervention model group performed better on both the oral proficiency and the decoding components of the assessment. This was not the case for learners in the digital technology group. The finding is congruent with other studies (Dlamini, 2022; Dlamini & Rafiki, 2022; Mhlongo & Dlamini, 2022) on the digital skills deficit among teachers to pedagogically integrate technology in their professional practice. Although these learners did a little better in the oral English tasks, they were substantially below the oral language proficiency levels of learners in the conventional intervention group and did no better than the learners in the control group in English reading aggregate statistics (PCA).
4.1 Why did the digital technology model fail to help students make gains?
Why would researchers be concerned with or pay close attention to the reasons for the failure of a digital technology-based innovation designed for early grade teachers and their students? Is it not sufficient that we know that it had limited impact? The evidence of the failure of a promising innovation is always likely to add to the wider body of knowledge, but if we take Deaton and Cartwright's (2018) insight about the 'why' of innovation, we discover that the mechanisms of both success and failure are of paramount importance in our efforts to better understand how digital technology can contribute to improving learning outcomes. This was not in any way a failure of the digital technology itself; however, the digital skills gap among in-service teachers is a reality (Dlamini & Mbatha, 2018; Dlamini & Rafiki, 2022). The RCT was never designed explicitly to furnish rigorous evidence of possible reasons for innovation failure. As such, an excavation of the various layers of the intervention would at best yield hypotheses or working theories that future work could corroborate. The methodology of the excavation began with the most straightforward surface questions: Was the digital technology ever actually delivered to the teachers? Was the technology operational? Once we had established this, we could proceed with questions that go deeper into the implementation process.
4.2 First layer: Functionality
Did the digital intervention fail for simple reasons, such as the teachers did not get the tablets; they did not get training on how to use the tablets; they did not have Wi-Fi or power to charge the tablets; or they did not have data to communicate effectively with the virtual coach? We hazarded a guess that in many digital technology interventions in schools in the Global South one needs to look no further than the first layer to discover the causes of intervention failure.
According to the administrative data from the main service provider, the teachers all reported receiving functioning tablets, and they did not seem to have any problems with either access to data or electricity to recharge the devices. The teachers all received training on both the technology and the content of the material on the tablet. In other words, there was close to comprehensive delivery of the basic technical infrastructure and training in the project (Table 4).
In terms of the virtual coaching itself, the administrative data suggested that in addition to the regularly scheduled contacts, there were additional virtual coach-initiated coaching sessions. For example, in Term 3, 2019, teachers received, on average, two individual coaching sessions each. Table 5 reveals that the Android tablet was remarkably robust and that teachers did not have serious problems with damage or theft. All teachers in all three cohorts had operational tablets at the end of the intervention year, and 88% of Grade 1 teachers had working tablets 24 months after their participation in the intervention had ended. Two possible interpretations could be offered for the durability of the tablets. Firstly, the teachers valued the devices, and thus, kept them in safe, good working order, or secondly, they seldom used them.
4.3 Second Layer: Technology Acceptance Model
The second layer relates to subjective aspects. Even if the digital technology was delivered, teachers were trained in how to use it, and it remained in working order for the duration of the project, emotional responses to the technology can either encourage or inhibit its effective use. The basic framework to understand these subjective factors is the TAM. The model assumes that if individuals perceive the technology not to be useful in the workplace and as difficult to use, the technology is unlikely to be accepted and used. Conversely, if the technology is viewed as useful and easy to use, then take-up is likely to be high.
In the pilot study in preparation for the larger trial, the research team tested the prototype tablet and related application with 11 schools that did not form part of the larger study. The focus of the pilot was on teachers' perceptions, their self-reported competencies with digital technology, the quality of the training, and detailed elements of the working of the application as it related to early grade reading methodologies. The key findings of the pilot were that most of the teachers owned smart phones but few had any experience with computers. All but one teacher reported that it was the first time they used a tablet. Despite the limited familiarity or experience with digital technology in general, five weeks into using the tablet and application almost all interviewed teachers reported agreeing or strongly agreeing that the tablet and the application were easy to use and that they were useful in their teaching. Hence, there was no question about the pedagogical affordances of the tablet and the application in their professional practice (Mhlongo et al., 2023). While the pedagogical affordances of digital technologies have been well researched and documented, the education sector in South Africa continues to struggle to adopt and leverage digital education innovations (Dlamini & Ndzinisa, 2020; Ndibalema, 2022; Ndzinisa & Dlamini, 2022).
Overall, most teachers found the instructions on how to use the application clear and said that the training enabled ease of navigation. They found the font size adequate and the colour coding of activities clear and easy to follow. A few teachers reported being confused by the colour coding on the activities and word charts. Overall, the teachers in the pilot study reported a preference for the tablet and the lesson plan application over the conventional paper version. The reasons given were that the tablet is easier to take home, portable, and more convenient than the paper versions (it fits into a handbag). They described the tablet as user-friendly and used phrases such as "everything is in". As one teacher suggested, "We are tired of paper!" Another teacher preferred the digital technology, saying, "The tablet, because the paper can be lost, torn easier". Another teacher said the following:
I don't carry a big file. At home, I open it and I know I have prepared for the next day. It is so easy. I don't have to read and have many books. It is easy to take home. Before, I was intimidated by the technology, but now it has opened my mind.
There were some less sanguine observations. One teacher observed, "I only use in preparation. We don't need it in the classroom, just in prep mode." That said, this teacher indicated that she did use the tablet "when I teach." Another teacher described reviewing the tablet application in preparation mode in the evening before the lesson. A number of teachers questioned the sound quality. As noted below, a number of teachers used the audio directly with the whole class, notwithstanding the fact that the tablet speaker was not intended for this purpose. One teacher said, "Mostly we used preparation mode. It is easy to understand and present to learners. The learners can listen to the songs; the learners understand the meaning of the song".
One teacher identified the songs and the flashcards as the two most valuable components of the digital programme, saying, "I use all audios, because they help to learn the tunes of the songs". On the resources in the preparation mode, some teachers indicated that they particularly liked specific features such as the videos. As one teacher put it, "It helped me set up my charts and with questions." This particular teacher indicated that she watched the video numerous times. Other teachers reported only watching the videos once, and one teacher never found the video button. One teacher found the questions of the day video most helpful: "I don't know how to question learners, but through the videos I am able to do it." One teacher singled out the video guide to improving classroom management, which demonstrated how to manage children while teaching. Unlike most teachers, one teacher insisted that "there are no videos." Another teacher admitted, "I haven't checked the videos as yet. I forgot about them", and a third said, "Never used them." On the WhatsApp group, one teacher said, "I used the WhatsApp group, but the data finished at the end of October." Another teacher noted, "I don't receive any messages from the group." One teacher raised a concern that "the tablet consumes lots of airtime [data]."
The researchers observed teachers using the tablet and the application in the classroom, and none of them appeared to be using them for the first time. Most had the tablet on at the start of the lesson. While some used in-class mode, most used preparation mode. A few teachers started the tablet from an off position. There was considerable variation in where teachers placed the tablet. Some placed it on their desks and moved back and forth to consult it between activities. Others held the tablet in their hands and read from it consistently throughout the lesson. One teacher struggled to find the reading 'Lucky's Taxi', and the researcher helped the teacher locate the story. There was considerable variation between teachers in the types of activities in which the application was used intensively. Some teachers used the application to help guide the questions of the day, and others used it for specific sections of the shared reading activity. Some used it only to read aloud. Although we did not observe this, it was evident from the interviews that a significant portion of the teachers used the audio clips with the learners, which explains why they suggested stronger speakers for the tablets. With a few exceptions, teachers using the tablet and application were either confident and competent or very confident and very competent. One teacher got stuck navigating between preparation mode and in-class mode. Another played a song and wished to play it again but was unable to navigate to it a second time. In an informal interview with a principal, it emerged that a teacher did not understand that the time allocation shown on each page applied to the entire activity, which may span multiple screens, rather than to a single screen.
In Term 3, 2019, teachers accessed slides between July and September for lesson planning and preparation. There is evidence that at the beginning of the term the digital technology group accessed slides to guide their teaching; however, towards the end of the term, fewer teachers referred to the slides. The lesson plan is intended as a roadmap of what should be learned, one that allows teachers to develop appropriate pedagogical activities. All the teachers in the digital technology group opened at least one of the lesson plan slides; about 65% reached at least 40% slide coverage; and 27% covered more than 60% of the term's slides. These data indicate that some teachers treat lesson plans as add-ons instead of as a critical resource for their instructional planning.
The week-by-week breakdown of slide coverage for Term 3, 2019, provides evidence that slide access depended on the period of the term. Earlier in the term about 64% of the slides were covered, while towards the end of the term, in Week 10, about 19% of the slides were covered. Week 7 was particularly well covered, which was almost certainly because this was the week in which the official curriculum-prescribed assessment task took place. Teachers were expected to upload assessment results onto the government-wide school management system into which teachers have to upload various learner assessment data. It is also interesting that, aside from Week 7, there seems to be a pattern of better coverage earlier in the term (Weeks 1, 2 and 3) followed by a steady decline until Weeks 9 and 10, which had the lowest levels of coverage. This indicates that some topics were not comprehensively covered, and it concerns us that teachers may lack an understanding of the importance of lesson plans. Lesson plans need to be consulted continuously as a roadmap, a resource, and a historical document through which teachers can reflect on their teaching and content coverage.
Again, from the perspective of ease of use of the digitised lesson plans, teachers should engage with and reflect on the slides weekly rather than opening the application only when a bureaucratic requirement, such as posting assessment results, demands it. From the outset, teachers are encouraged to think of instructional activities appropriate for their classrooms; the slides are an appropriate tool for such planning and reflection.
The second year case study report noted that "virtual coaches and tablets are a successful mode of delivery for the EFAL [English First Additional Language] programme and teachers in this study were unequivocally positive about it." The second year case study concentrated on interviews and observations of eight teachers, so the claim was based on a very small sample of teachers in the study. Botha and Schollar (2018) identified the specific reason teachers were "unequivocally positive": The teachers who got the tablets and the application particularly liked the audio files that gave second language teachers access to standard English pronunciation of both sounds and words.
4.4 Third Layer: Differentiated Uptake
To answer the question about the failure of the digital technology intervention to improve learning outcomes, we explored the possibilities that the technology and related training were either not delivered at the outset; that the technology and related training were of uneven quality; or that during the life of the project, the digital technology failed to function, got damaged, or was stolen. The available evidence suggests that none of these applied. The second layer of evidence related to the subjective or affective responses of teachers to the digital technology intervention. Following the TAM, we examined the available qualitative and quantitative evidence on the digital technology's ease of use and usefulness in daily work. Although a TAM survey was not conducted, an analysis of existing evidence suggested that, as a sample, the teachers in the intervention found the digital technology deployed in this experiment (the tablet and the application) relatively easy both to learn and to use. They found the tablet and the virtual coaching useful in their work as teachers, particularly for some core functions.
Could it then be that differences between types of teachers explain the lack of impact? Put another way, could it be that a small group of teachers really benefitted from the digital technology component of the intervention programme (as suggested by the case study teachers in the Year 2 evaluation) and improved their practice, with a strong positive impact on their learners' learning outcomes, while most teachers, who could use the technology and did so but only to a very limited extent, were never ignited by it, so that it had little effect on their practice and produced no gains in learning outcomes? A differential teacher effect hypothesis raised three questions. Firstly, are there clearly discernible initial characteristics associated with teachers who responded to the technology in a particular way? Then, the more complex questions: What are the mechanisms associated with the digital technology component, and what works for some types of teachers but not for many others? Lastly, why would the conventional intervention model work for different types of teachers?
One of the common assumptions is that age could explain different levels of engagement with a digital technology intervention. It is often assumed that digital natives (adults who grew up with digital technology) are substantially more open to using digital technology in a reading teaching intervention than digital immigrants (see Prensky (2001) for definitions of the categories). The most obvious factor that differentiates digital natives from digital immigrants is age. However, Brown and Czerniewicz (2010) clearly showed that this idea has limited validity in the South African context. The evidence from the Early Grade Reading Study II does not support this hypothesis. As the statistics in Table 6 show, in the subsample of teachers for whom we have tablet usage evidence, younger teachers (below 45) and older teachers (over 55) were more likely to spend time on the tablets than teachers in the middle age band. In terms of the total number of slides covered during the term, the numbers were very similar across the three age bands, even though the middle age band group of teachers covered less of the prescribed work in Term 3 on their tablets.
A potentially useful avenue through which to explore differential teacher engagement was analysing the teachers' responses to the virtual coach's competitions. To get a better sense of how teachers were implementing the core methodologies, the virtual coach introduced small competitions around specific themes. For instance, in Term 3, 2019, she focused on phonics and asked all her teachers to submit a photo or video via WhatsApp of one of their phonics activities. She then chose the best teacher in each of the teacher groups based on the image or video. The teachers won airtime as a prize. The competitions allowed the virtual coach to see into the classroom and to gather evidence about what teachers considered their best practice in their classrooms to be. Table 7 shows the variability of teacher participation in the competitions. Just under a quarter of the teachers entered every competition. At the other end of the spectrum, 22% of teachers were completely inactive. Given that teachers had discretion to enter or not, the pattern of participation may be a proxy indicator of motivation and commitment. The assumption is that those who did not enter at all were overall less likely to be engaging meaningfully with the digital technology, while those who participated in every competition thrived in the new technology space.

Teachers in both the conventional and the digital technology interventions attended the early grade reading training at roughly the same rate: close to full participation. While all school heads of department and other members of the school leadership teams were invited to the training sessions, attendance at these was substantially lower for the digital technology schools, especially in the second half of the year (Table 8). One possible explanation is that the conventional approach (paper-based and onsite one-on-one coaching) involved regular visits to schools during which the coach interacted with both other foundation phase teachers and school managers, thus strengthening the school communities of practice. In contrast, the technology intervention tended to focus more directly on the relationship of the virtual coach with the individual teachers and did not trigger conversations between teachers and management team members at school level.

The following observation, made by the virtual coach regarding the lower levels of enthusiasm on the part of some school leadership teams and their link to teacher engagement, confirmed this insight:
Yeah, it's really difficult. A lot of the time I have to like phone the SMT and say like, "Listen, I've been trying to get a hold of your teacher, please make sure that, you know, they've got their tablet switched on and they check their WhatsApp's and things like that". A lot of the time you'll find that if a teacher is one of those teachers, they come from a school where the SMT isn't very active, so it's difficult.
The final report from the implementing agents presented evidence that "innovative programs that allow meaningful support to teachers at a large scale must continue" (Cilliers et al., 2022, p. 29). In the second year school case studies, Botha and Schollar (2018) reported distinct advantages and disadvantages of the conventional model compared to the digital model. They found that three of the teachers in the digital technology case studies were not well prepared, were more dependent on the lesson plans on the tablets during teaching, and did not pay as close attention to pacing in the lessons. The 2019 case study researcher (Fleisch & Alsofrom, 2022) gave the following analysis:
The tech-based intervention inherently includes distance, the barrier to success may be self-motivation. Teachers still feel supported, but there is not a strong enough accountability mechanism. ...it was successful primarily for teachers who were motivated enough to drive their own development process. Basically, it is teachers who are self-motivated (or are perhaps in an already functional school environment where accountability is provided through principal or colleagues interactions) where the technological intervention seems likely to be most impactful.
A pattern emerged when we compared and contrasted the diverse evidence. There was substantial variability in the uptake of the digital technology intervention among teachers in the study sample, with roughly a quarter of them fully engaged and another quarter not engaged at all. Unlike an onsite coaching approach that involves visiting schools and talking to teachers face-to-face, the virtual coach struggled to hold teachers accountable and to mobilise the wider school management team to support and monitor compliance with the intervention requirements. More broadly, the virtual coaching model was less successful at building relationships and creating professional accountability linked to the new instructional practices associated with improved reading outcomes.
Figure 2 presents the distribution of average school Oral Reading Fluency (ORF), from lowest to highest, for the conventional (blue) compared to the digital technology (red) intervention schools. There is a high degree of overlap between the box-and-whisker plots, indicating that there is no significant difference between the two interventions. Although no causal inferences can be made from this graph, it suggests that both interventions had a roughly equal proportion of successful schools, but that average and below-average schools benefitted more from the conventional intervention than from the digital technology intervention.
5 DISCUSSION
Given the limitations associated with intervention administrative data, purposively chosen case studies, and information from the tablet logs, the insights presented here should be viewed as hypothesis building. Nested in a cluster randomised study of two similarly structured pedagogic programmes of which one used a conventional model of paper lesson plans and on-site coaching and the other used tablets and an application with virtual coaching, the study provided an opportunity to excavate the multiple layers required to understand how digital technology can become a tool to improve learning outcomes at scale. By a process of elimination, we were able to establish that it is unlikely that the digital technology hardware or software could explain the slow improvement. The evidence suggests that the devices were durable and that the custom-developed software had few glitches and was easy to master. Using the TAM framework and interpreting available evidence showed that there is reason to believe the slow progress should not be attributed to teachers' negative perceptions of its usefulness and ease of use.
We suggest the following alternative hypothesis: The teachers were not a homogeneous group but were diverse in their beliefs, attitudes, and take-up of the digital innovation. Smagorinsky et al. (2002) offered a similar insight about the various positions and responses imposed by top-down curriculum reforms. In their study, teachers adopted one of three responses to reform/innovation: open resistance, accommodation, and acquiescence. For our purposes, we think an alternative set of positions or responses is more appropriate: technology avoiders, minimalists, and technology enthusiasts (Zur & Walker, 2011). Technology avoiders are teachers who may not actively resist the reform innovation but who do not engage with the technology and associated tasks required of them. Minimalists, on the other hand, make an effort to appear to be engaged but only do so to save face or avoid sanction. At the other end of the spectrum, technology enthusiasts engage fully with the innovations and find ways to adapt and internalise them. These three positions/responses must be seen as heuristic devices rather than sharp, well-defined, and empirically proven categories. Although these are suitable categories for the purposes of better understanding mechanisms of change, teachers' responses exist on a spectrum rather than as distinct, clearly bounded groups.
The second assumption we make is that these groups/categories are likely present in any reform intervention, whether conventional or digital. The critical difference between conventional and digital interventions in this case, however, was that the conventional onsite coaching intervention did not allow minimalist teachers to get away without showing change in their classroom practices. Teachers accustomed to minimalist approaches in the conventional reform needed to save face, avoid humiliation or embarrassment, and preserve their dignity by showing some of the reform work. This explains why the average school performance was more or less similar at the top end and different at the middle and lower ends, with conventional intervention schools on average outperforming comparable digital technology intervention schools.
This observation adds credibility to the hypothesis that the slow progress of the digital technology intervention was due to: (a) challenges on the interpersonal and emotional side of the change process, and (b) the difficulty that the virtual coaches had in accessing actual teacher work in the classroom. In other words, face-to-face conventional interactions are more likely to improve trust and allow for professional accountability.
6 CONCLUSION
One of the key issues for discussion is not whether the digital technology intervention model works (i.e. that it provides consistent and statistically significant evidence of cost-effectiveness) but whether the intervention model helps reduce what the World Bank (Azevedo et al., 2019) called "learning poverty" fast enough to achieve the 2030 Sustainable Development Goal in education. Although eliminating learning poverty is generally associated with the ability to read by the end of primary school, reading for meaning in English at the end of Grade 3 is an appropriate benchmark in South Africa because English is the language of instruction from Grade 4 onward for the majority of learners and it is the de facto language of government and the economy. If escaping learning poverty is associated with being able to comprehend a simple story, the structured pedagogic programme with the CSPP model works. However, more research is needed to understand how a digital technology version of the structured pedagogic programme could be more successful.
The starting point of this paper was the following question: Why did the ICT intervention fail to improve learning outcomes in South Africa's Early Grade Reading Study? At its most basic, the TAM suggests that acceptance of digital technology is a function of a combination of individuals' perceptions of the usefulness of the technology and of its relative ease of use. If individuals perceive that the technology is useful in their work and/or personal lives and that the technology is easy to learn and use, they are likely to accept and adopt the new technology. In accordance with Zhang et al. (2019, p. 208), the perceived usefulness, perceived ease of use, and attitude constructs "are antecedents of technology acceptance." However, the evidence in this study points to the following key findings:
• The centrality of professional accountability in successful digital technology interventions for teachers; and
• There is substantial differentiated uptake not linked to either perceived usefulness or perceived ease of use.
This is significant for developing and resource-constrained contexts. Both the attitude and behavioural intention constructs are dependent variables in Figure 1, and in this case the differentiated uptake is dependent on professional accountability. According to UNESCO (2017, p. 1), accountability is "a process, aimed at helping actors meet responsibilities and ...goals." Although attitude influences users' behavioural intention to adopt and appropriate digital technologies in their professional practice, in this study there was no evidence that the differentiated uptake served as a means to improve learning outcomes. The converse applied: If individuals come to think of the technology as not particularly helpful or useful, and as difficult to learn and use even once they have mastered how it works, they are unlikely to accept and ultimately adopt it. Acceptance exists on a spectrum, with high usefulness and high ease of use at one end and low usefulness and low ease of use at the other. Most individuals and groups fall at different points along the continuum.
We did not conduct a survey of teachers' perceptions of the digital technology intervention, which would have told us where they fit on the TAM spectrum. The available evidence nonetheless suggests three critical insights. Firstly, there was considerable variability in teachers' engagement with the digital technology intervention. Secondly, this variability was not driven by teachers' perceptions of the usefulness or ease of use of the various components of the digital technology intervention. Thirdly, the digital technology intervention did not give the virtual coach access into the classroom, nor did it mobilise communities of practice at school level. The latter two insights suggest that trust and technology may be critical to understanding successful digital technology interventions in schools.
References
Angrist, N., Aurino, E., Patrinos, H. A., Psacharopoulos, G., Vegas, E., Nordjo, R., & Wong, B. (2023). Improving learning in low- and lower-middle-income countries. Journal of Benefit-Cost Analysis, 14(S1), 55-80. https://doi.org/10.1017/BCA.2023.26 [ Links ]
Azevedo, J. P., Crawford, M., Nayar, R., Rogers, H., Rodriguez, M. R. B., Ding, E., Bernal, M. G., Dixon, A., Saavedra, J., & Arias, O. (2019). Ending learning poverty: What will it take? [Accessed: 2 April 2025]. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/395151571251399043/ending-learning-poverty-what-will-it-take
Banerji, R., & Chavan, M. (2016). Improving literacy and math instruction at scale in India's primary schools: The case of Pratham's Read India program. Journal of Educational Change, 17(4), 453-475. https://doi.org/10.1007/S10833-016-9285-5 [ Links ]
Botha, D., & Schollar, E. (2018, December). Case studies in EGRS II schools 2018 [Accessed: 1 December 2022]. https://www.education.gov.za/Portals/0/Documents/Publications/EGRS/EGRS%20II%20Website%20Upload/Reports/4_EGRS%20II%20Case%20Study%202018_Final_WEB.pdf
Brown, C., & Czerniewicz, L. (2010). Debunking the 'digital native': Beyond digital apartheid, towards digital democracy. Journal of Computer Assisted Learning, 26(5), 357-369. https://doi.org/10.1111/J.1365-2729.2010.00369.X [ Links ]
Bruns, B., Costa, L., & Cunha, N. (2018). Through the looking glass: Can classroom observation and coaching improve teacher performance in Brazil? Economics of Education Review, 64, 214-250. https://doi.org/10.1016/J.ECONEDUREV.2018.03.003 [ Links ]
Cilliers, J., Fleisch, B., Kotze, J., Mohohlwane, M., & Taylor, S. (2019). The sustainability of early grade education interventions: Do learning gains and improved teacher practices persist? (Tech. rep.). RISE. https://riseprogramme.org/sites/default/files/inline-files/Cilliersv2.pdf
Cilliers, J., Fleisch, B., Kotze, J., Mohohlwane, N., Taylor, S., & Thulare, T. (2022). Can virtual replace in-person coaching? experimental evidence on teacher professional development and student learning. Journal of Development Economics, 155, 102815. https://doi.org/10.1016/J.JDEVECO.2021.102815 [ Links ]
Cilliers, J., Fleisch, B., Prinsloo, C., & Taylor, S. (2019). How to improve teaching practice? an experimental comparison of centralized training and in-classroom coaching. Journal of Human Resources, 55(3), 926-962. https://doi.org/10.3368/JHR.55.3.0618-9538R1 [ Links ]
Cilliers, J., Mbiti, I. M., & Zeitlin, A. (2021). Can public rankings improve school performance?: Evidence from a nationwide reform in Tanzania. Journal ofHuman Resources, 56(3), 655-685. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3390159 [ Links ]
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly: Management Information Systems, 13(3), 319-339. https://doi.org/10.2307/249008 [ Links ]
Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, 210, 2-21. https://doi.org/10.1016/J.SOCSCIMED.2017.12.005 [ Links ]
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30. https://doi.org/10.1080/07421222.2003.11045748 [ Links ]
Dlamini, R., & Ndzinisa, N. (2020). Universities trailing behind: Unquestioned epistemological foundations constraining the transition to online instructional delivery and learning. South African Journal ofHigher Education, 34(6), 52-64. https://doi.org/10.20853/34-6-4073 [ Links ]
Dlamini, R. (2022). Digital equity in schools: A multilevel analysis of in-service teachers' technological knowledge competencies. Journal ofEducational Studies, 21 (2), 40-60. https://hdl.handle.net/10520/ejc-jeds_v21_n2_a4 [ Links ]
Dlamini, R., & Mbatha, K. (2018). The discourse on ICT teacher professional development needs: The case of a South African teachers' union. International Journal ofEducation and Development using ICT, 14(2). https://eric.ed.gov/?id=EJ1190045 [ Links ]
Dlamini, R., & Rafiki, M. (2022). Teachers' perspectives on the integration of information and communication technology: The case of a teachers' union. Africa Education Review, 19(1), 34-55. https://doi.org/10.1080/18146627.2023.2181728 [ Links ]
Dubeck, M. M., & Gove, A. (2015). The early grade reading assessment (EGRA): Its theoretical foundation, purpose, and limitations. International Journal of Educational Development, 40, 315-322. https://doi.org/10.1016/J.IJEDUDEV.2014.11.004 [ Links ]
El-Serafy, Y., Adam, T., & Hassler, B. (2022). The effectiveness of technology-supported teacher professional learning communities in emergency settings. In Future-proofing teacher education (pp. 145-157). Routledge. https://www.routledge.com/Future-Proofing-Teacher-Education-Voices-from-South-Africa-and-Beyond/Gravett-Petersen/p/book/9781032028514
Evans, D. K., & Popova, A. (2016). Cost-effectiveness analysis in development: Accounting for local costs and noisy impacts. World Development, 77, 262-276. https://doi.org/10.1016/J.WORLDDEV.2015.08.020 [ Links ]
Fleisch, B. (2018). The education triple cocktail: System-wide instructional reform in South Africa. UCT Press/Juta; Company (Pty) Ltd.
Fleisch, B., & Alsofrom, K. (2022). Coaching research in the early grade reading studies in South Africa. In N. Spaull & S. Taylor (Eds.), Early grade reading and mathematics interventions in South Africa, Interventions (pp. 48-63). Oxford University Press Southern Africa. https://en.calameo.com/oxford-university-press-south-africa/read/006710753e7d90d4a4755
Geissler, H., Hasenbein, M., Kanatouri, S., & Wegener, R. (2014). E-coaching: Conceptual and empirical findings of a virtual coaching programme. International Journal of Evidence Based Coaching and Mentoring, 12(2), 165-187. https://radar.brookes.ac.uk/radar/items/585eb4f9-19ce-49e1-b600-509fde1e18c0/1/ [ Links ]
Guzman, J. C., Schuenke-Lucien, K., D'Agostino, A. J., Berends, M., & Elliot, A. J. (2021). Improving reading instruction and students' reading skills in the early grades: Evidence from a randomized evaluation in Haiti. Reading Research Quarterly, 56(1), 173-193. https://doi.org/10.1002/RRQ.297 [ Links ]
Hennessy, S., D'Angelo, S., McIntyre, N., Koomar, S., Kreimeia, A., Cao, L., Brugha, M., & Zubairi, A. (2022). Technology use for teacher professional development in low- and middle-income countries: A systematic review. Computers and Education Open, 3,100080. https://doi.org/10.1016/J.CAEO.2022.100080 [ Links ]
Hoadley, U. (2024). How do structured pedagogy programmes affect reading instruction in African early grade classrooms? International Journal of Educational Development, 107, 103023. https://doi.org/10.1016/J.IJEDUDEV.2024.103023 [ Links ]
Jukes, M. C., Turner, E. L., Dubeck, M. M., Halliday, K. E., Inyega, H. N., Wolf, S., Zuilkowski, S. S., & Brooker, S. J. (2017). Improving literacy instruction in Kenya through teacher professional development and text messages support: A cluster randomized trial. Journal of Research on Educational Effectiveness, 10(3), 449-481. https://doi.org/10.1080/19345747.2016.1221487 [ Links ]
Kotze, J., Fleisch, B., & Taylor, S. (2019). Alternative forms of early grade instructional coaching: Emerging evidence from field experiments in South Africa. International Journal of Educational Development, 66, 203-213. https://doi.org/10.1016/J.IJEDUDEV.2018.09.004
Kraft, M. A., Blazar, D., & Hogan, D. (2018). The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence. Review of Educational Research, 88(4), 547-588. https://doi.org/10.3102/0034654318759268
Layton, T. D. (2023). Can you see me now? The perceived impact of a virtual instructional coaching partnership applied through the lens of the partnership principles on first-year teacher professional growth: An explanatory sequential mixed-methods study [Doctoral dissertation, Baylor University]. https://eric.ed.gov/?id=ED633849
Majerowicz, S., & Montero, R. (2018). Can teaching be taught? Experimental evidence from a teacher coaching program in Peru (tech. rep.). Harvard. https://scholar.harvard.edu/files/smajerowicz/files/majerowicz_latest_jmp.pdf
Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review from 1986 to 2013. Universal Access in the Information Society, 14(1), 81-95. https://doi.org/10.1007/s10209-014-0348-1
Mhlongo, S., & Dlamini, R. (2022). Digital inequities and societal context: Digital transformation as a conduit to achieve social and epistemic justice. IFIP Advances in Information and Communication Technology, 645, 1-15. https://doi.org/10.1007/978-3-031-12825-7_1
Mhlongo, S., Mbatha, K., Ramatsetse, B., & Dlamini, R. (2023). Challenges, opportunities, and prospects of adopting and using smart digital technologies in learning environments: An iterative review. Heliyon, 9(6), e16348. https://doi.org/10.1016/J.HELIYON.2023.E16348
Ndibalema, P. (2022). Constraints of transition to online distance learning in higher education institutions during COVID-19 in developing countries: A systematic review. E-Learning and Digital Media, 19(6), 595-618. https://doi.org/10.1177/20427530221107510
Ndzinisa, N., & Dlamini, R. (2022). Responsiveness vs. accessibility: Pandemic-driven shift to remote teaching and online learning. Higher Education Research & Development, 41(7), 2262-2277. https://doi.org/10.1080/07294360.2021.2019199
Piper, B., Destefano, J., Kinyanjui, E. M., & Ong'ele, S. (2018). Scaling up successfully: Lessons from Kenya's Tusome national literacy program. Journal of Educational Change, 19(3), 293-321. https://doi.org/10.1007/S10833-018-9325-4
Piper, B., & Zuilkowski, S. S. (2015). Teacher coaching in Kenya: Examining instructional support in public and nonformal schools. Teaching and Teacher Education, 47, 173-183. https://doi.org/10.1016/J.TATE.2015.01.001
Piper, B., Zuilkowski, S. S., Kwayumba, D., & Strigel, C. (2016). Does technology improve reading outcomes? Comparing the effectiveness and cost-effectiveness of ICT interventions for early grade reading in Kenya. International Journal of Educational Development, 49, 204-214. https://doi.org/10.1016/J.IJEDUDEV.2016.03.006
Popova, A., Evans, D. K., Breeding, M. E., & Arancibia, V. (2022). Teacher professional development around the world: The gap between evidence and practice. The World Bank Research Observer, 37(1), 107-136. https://doi.org/10.1093/WBRO/LKAB006
Prensky, M. (2001). Digital natives, digital immigrants Part 2: Do they really think differently? On the Horizon, 9(6), 1-6. https://doi.org/10.1108/10748120110424843
Rock, M. L., Schoenfeld, N., Zigmond, N., Gable, R. A., Gregg, M., Ploessl, D. M., & Salter, A. (2013). Can you Skype me now? Developing teachers' classroom management practices through virtual coaching. Beyond Behavior, 22(3), 15-23. https://doi.org/10.1177/107429561302200303
Rock, M. L., Schumacker, R. E., Gregg, M., Howard, P. W., Gable, R. A., & Zigmond, N. (2014). How are they now? Longer term effects of e-coaching through online bug-in-ear technology. Teacher Education and Special Education, 37(2), 161-181. https://doi.org/10.1177/0888406414525048
Rodriguez-Segura, D. (2022). EdTech in developing countries: A review of the evidence. The World Bank Research Observer, 37(2), 171-203. https://doi.org/10.1093/WBRO/LKAB011
Scherer, R., Siddiq, F., & Tondeur, J. (2019). The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers' adoption of digital technology in education. Computers & Education, 128, 13-35. https://doi.org/10.1016/J.COMPEDU.2018.09.009
Shal, T., Ghamrawi, N., & Ghamrawi, N. A. (2025). Webinars for teacher professional development: Perceptions of members of a virtual professional community of practice. Open Learning: The Journal of Open, Distance and e-Learning. https://doi.org/10.1080/02680513.2023.2296645
Smagorinsky, P., Lakly, A., & Johnson, T. S. (2002). Acquiescence, accommodation, and resistance in learning to teach within a prescribed curriculum. English Education, 34(3), 187-213. https://doi.org/10.58680/EE20021609
Snilstveit, B., Stevenson, J., Menon, R., Phillips, D., Gallagher, E., Geleen, M., Jobse, H., Schmidt, T., & Jimenez, E. (2016, September). The impact of education programmes on learning and school participation in low- and middle-income countries (tech. rep.). International Initiative for Impact Evaluation (3ie). https://doi.org/10.23846/SRS007
Soylu, F., Brady, C., Holbert, N., & Wilensky, U. (2014). The thinking hand: Embodiment of tool use, social cognition and metaphorical thinking and implications for learning design. The Annual Meeting of the American Educational Research Association, Philadelphia, PA. https://www.aera.net/Publications/Online-Paper-Repository/AERA-Online-Paper-Repository
Stapleton, J., Tschida, C., & Cuthrell, K. (2017). Partnering principal and teacher candidates: Exploring a virtual coaching model in teacher education. Journal of Technology and Teacher Education, 25(4), 495-519. https://eric.ed.gov/?id=EJ1166362
UNESCO. (2017). Accountability in education: Meeting our commitments; global education monitoring report, 2017/8 [Accessed: 3 April 2025]. https://www.unesco.org/gem-report/en/accountability
Urbach, N., & Müller, B. (2012). The updated DeLone and McLean model of information systems success. Integrated Series in Information Systems, 1, 1-18. https://doi.org/10.1007/978-1-4419-6108-2_1
Uribe-Banda, C., Wood, E., Gottardo, A., Biddle, J., Ghaa, C., Iminza, R., Wade, A., & Korir, E. (2023). Assessing blended and online-only delivery formats for teacher professional development in Kenya. Cogent Education, 10(1). https://doi.org/10.1080/2331186X.2023.2191414
Wingo, N. P., Ivankova, N. V., & Moss, J. A. (2017). Faculty perceptions about teaching online: Exploring the literature using the technology acceptance model as an organizing framework. Online Learning, 21(1), 15-35. https://doi.org/10.24059/olj.v21i1.761
Zhang, T., Tao, D., Qu, X., Zhang, X., Lin, R., & Zhang, W. (2019). The roles of initial trust and perceived risk in public's acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98, 207-220. https://doi.org/10.1016/J.TRC.2018.11.018
Zur, O., & Walker, A. (2011). On digital immigrants and digital natives: How the digital divide affects families, educational institutions, and the workplace [Accessed: 3 April 2025]. https://drzur.com/digital-divide/
Received: 5 September 2023
Accepted: 7 October 2024
Online: 2 July 2025