
Social Work

On-line version ISSN 2312-7198
Print version ISSN 0037-8054

Social work (Stellenbosch. Online) vol.57 n.3 Stellenbosch  2021

http://dx.doi.org/10.15270/52-2-948 

ARTICLES

 

Barriers affecting effective monitoring and evaluation of poverty alleviation projects within Waterberg district

 

 

Dr Mmaphuti Percy Dipela I; Prof. Boitumelo Joyce Mohapi II

I Department of Social Work, University of South Africa (UNISA), Pretoria, South Africa
II Department of Social Work, University of South Africa (UNISA), Pretoria, South Africa


 

 


ABSTRACT

Community projects aimed at alleviating poverty under the supervision of social workers employed by the Department of Social Development (DSD) in the Waterberg district of the Limpopo Province appear to be collapsing. This prompted the researchers to undertake a qualitative study, using a contextual, descriptive and explorative research design, aimed at obtaining an in-depth understanding of the monitoring and evaluation (M&E) mechanisms applied in supporting community projects for alleviating poverty. This article reports on findings from interviews conducted with 21 participants. The study identified a lack of knowledge and training, as well as poor management and supervisory support, as contributors to weak M&E.

Keywords: evaluation, community development, community projects, poverty alleviation, monitoring, social workers


 

 

INTRODUCTION

One of the challenges facing many governments today, including South Africa's, is eradicating poverty and putting monitoring and evaluation (M&E) systems in place to support poverty-alleviation initiatives (Presidency of the Republic of South Africa, 2016). Poverty is a social problem evident in both developed and developing countries, and reducing it is currently regarded as one of the world's greatest challenges (UNDP, 2016). Moreover, the first of the Sustainable Development Goals (SDG 1) is to ensure that poverty in all its forms is eradicated worldwide, which remains one of the greatest challenges facing humanity today (United Nations, 2015). In South Africa poverty is regarded as one of the country's triple threats, the other two being unemployment and inequality (Presidency of the Republic of South Africa, 2016). The Department of Social Development (DSD), as an organ of state, is mandated to fight poverty through the funding of community projects for alleviating poverty as a way of empowering previously disadvantaged groups, most of which are based in rural villages (Department of Social Development, 2010-2016). Social workers and community development workers in the employ of the DSD are expected to monitor and evaluate the process and life cycle of these projects in line with the objectives which such projects were funded to achieve.

Monitoring and evaluation of the effectiveness of services and the utilisation of resources allocated by government is an important process in safeguarding good governance, which includes transparency, accountability, effectiveness and efficiency within government departments, NGOs and the private sector. The proper and effective application of M&E by social workers enables a project to be well coordinated, credible and relevant, which culminates in operational excellence (Kusek & Rist, 2004:46; Scriven, 2014). Furthermore, Bromberg and Henderson (2015) assert that M&E of the effectiveness of services and the utilisation of resources allocated to a project is important for the growth and sustainability of such projects. Improved M&E systems result in better planning and the correct implementation of community projects for poverty alleviation (Arcilla, Co & Occampo, 2011).

The World Health Organisation (2011) and Allen-Ile, Eresia-Eke and Ile (2012) assert that for monitoring to be successful, attention needs to be focused on planning and developing a set of actions that inform the M&E activities and the results that emerge from them. The monitoring of community projects should not be an afterthought or a hurried activity. Furthermore, planning for M&E assists in revealing the resource requirements in terms of human, material, financial and other resources. In most community projects M&E is not effective because there is no early preparation made for M&E. Sometimes this stems from the fact that M&E operations are not integrated into the planning phase of the community programme or project. For example, in most organisations provision is not made for dedicated personnel, finance and materials for continuous M&E of community programmes or projects (Arcilla, Co & Occampo, 2011).

 

PROBLEM FORMULATION

One of the challenges in government is that monitoring and evaluation of community programmes or projects are not well coordinated or planned, or they do not adequately inform planning, policy-making and budgeting decisions (Kusek & Rist, 2004). This results in government interventions missing the opportunity to improve the functioning of the project concerned (Presidency of the Republic of South Africa, 2016:16). In an article entitled "Time to take performance monitoring seriously", Keeton (2012:47) contends that performance monitoring constitutes an important element in providing an "early warning system of things going wrong within the project". According to Podems, Goldman and Jacob (2014:12), various data sources suggest that the lack of monitoring and evaluation, and the lack of technical knowledge and skills, explain why community projects for alleviating poverty are not sustainable.

Studies such as those of Tjale (2010) and Seretlo (2012) investigated the reasons for the failure of community projects designed to alleviate poverty. Frequently, the participants in these inquiries were project members, whilst the role of those offering support, training, monitoring, review and evaluation was ignored. Little attention has so far been given to researching the M&E mechanisms used by the DSD to support the projects and to engage with the challenges faced in implementing the community work process. The researchers therefore developed an interest in the operations of the Department.

They reviewed the mechanisms applied by the Department of Social Development in supporting community projects for alleviating poverty after observing the collapse of many funded community projects under the watch of the Department, with social workers spearheading the process of monitoring, mentoring, educating, evaluating and supporting the programmes. Most projects have continued to collapse despite the resources that the DSD has spent on them. One of the researchers had previously conducted a study investigating the reasons for the collapse of community projects; its findings showed a lack of support from the sponsors, poor M&E, a lack of knowledge and poor marketing skills among the project members. As a result, the researchers formed the opinion that for a project to succeed, both the project members and the sponsors should work together. Particulars of the role of the Department, which is to monitor and evaluate projects and consequently resolve the challenges identified during monitoring, appear to be lacking and unclear. This further motivated the undertaking of this research.

Against this background, the research problem or the "issue or concern that needs to be addressed" (Creswell, 2009:18) can be summarised as follows:

Programmes and projects aimed at alleviating poverty lack adequate monitoring and evaluation mechanisms, thus leading to the collapse or failure of the projects or programmes.

Therefore, it was resolved to undertake a study to achieve the following research goal:

To gain an in-depth understanding of the barriers affecting effective monitoring and evaluation of poverty-alleviation projects within Waterberg District of the Limpopo Province.

 

THEORETICAL FRAMEWORK

The study was based on evaluation theory. Generally, evaluation theory consists of activities that judge the worth of a programme (Scriven, 2007:7). Evaluation assesses the value or worth of a programme (Farell, Kratzmann, McWilliam, Robinson, Saunders, Ticknor & White, 2002:8) and relates to a set of research questions and methods geared to reviewing processes, activities and strategies for the purpose of improving them in order to achieve better results (Kahan & Goodstadt, 2005:11). Auriacombe (2013:716) and Stockman (2011:14) suggest that evaluation in its general form should be regarded as an assessment or judgement of a circumstance or object on the basis of information. The information is gathered, analysed and assessed for a specific end, namely to make a decision. This assessment is a systematic investigation of the worth of a programme or project for the purpose of reducing uncertainty in decision making (Auriacombe, 2013). It uses social research techniques to assess how an intervention was conceived, formulated, legitimised or approved and then implemented, and whether the intended purpose has been attained. Evaluation in this regard should be regarded as the systematic and objective assessment of an ongoing or completed project, programme or policy, and of its design, implementation and results (Gaarder & Briceno, 2010; Organisation for Economic Co-operation and Development (OECD), 2016).

In basing this study on evaluation theory, the focus was on evaluating and reviewing the strategies and mechanisms applied by the DSD in supporting and monitoring community projects for alleviating poverty. The researchers specifically investigated social workers' knowledge of implementing community work, their understanding of the M&E of programmes or projects aimed at alleviating poverty, and the challenges encountered in supporting the programmes and projects. Evaluation theories describe and recommend what evaluators do or should do when conducting evaluations and monitoring of programmes. These theories inter alia specify research aspects such as evaluation purposes, users and uses, who participates in the evaluation process and to what extent, general strategies, method choices, and the roles and responsibilities of the evaluator (Rogers, 2009).

 

RESEARCH METHOD

The research method is presented in terms of the research approach and design, population and sampling, data collection and data analysis.

 

Research approach and design

A qualitative study was undertaken using an exploratory, descriptive and contextual design as a way of realising the research goal. This approach was chosen because it is in line with the research goal, the research design and the research question (Maxwell, 2013). The objectives of the study were to explore and describe the experiences of the participants regarding the implementation of monitoring and evaluation of poverty-alleviation projects, and to investigate the perceived influence of monitoring and evaluation practices on such projects. For this purpose explorative, descriptive and contextual research designs were adopted. These designs assisted the researchers in developing an in-depth understanding of how monitoring and evaluation of community programmes and projects are conducted within the DSD.

 

Population and sampling

The population for this study consisted of social workers and community development workers who had been employed by the Department of Social Development for more than three years and who had some experience of monitoring and evaluating a community project. The study was conducted in five municipalities of the Waterberg District of the Limpopo Province. The focus was specifically on participants with at least three years of experience, as it was argued that such participants had experience in monitoring community projects and were able to provide the study with relevant information.

To draw a sample from the larger population, non-probability, purposive sampling was employed. With purposive sampling, prospective participants are selected on the basis of experiences relevant to the researcher's area of interest (Matthews & Ross, 2010:167). Generally, small samples are used in qualitative studies (Schmidt & Brown, 2015). The researchers did not focus on representativeness, as that is not required in qualitative research, but rather on a sample that would provide appropriate responses (Creswell, 2016; Nicholls, 2009). According to Rossman and Rallis (2012), sampling in qualitative research is "an exercise in exploring diversity, difference and variation". In line with qualitative study procedures, the sample was relatively small, consisting of 21 individuals drawn from the population concerned.

The sample size was not determined at the outset of the study, in line with qualitative requirements (Schmidt & Brown, 2015), but by means of the principle of data saturation. Data saturation refers to the point at which new data no longer emerge from interviews to bring additional insights into the research question and repetition of information becomes evident (Schmidt & Brown, 2015). When that happens, the number of participants constituting the sample is sufficient.

 

Data collection

The data were collected by means of face-to-face semi-structured interviews with 21 participants. The study utilised an interview guide which consisted of 11 open-ended questions to elicit responses from participants. The interview guide gave the researcher the flexibility to probe and ask follow-up questions where necessary (Rossman & Rallis, 2012). The open-ended questions were based on the M&E of poverty-alleviation projects.

 

Data analysis

Data analysis in a qualitative study refers to the process of organising and interrogating data in ways that allow researchers to see patterns, identify themes, discover relationships, develop explanations, make interpretations, initiate critiques, or generate theories (Leech & Onwuegbuzie, 2007:564). In the current study, the eight steps of qualitative data analysis proposed by Tesch (in Creswell, 2009:186) were applied. Applying these steps enabled the researchers to organise the data obtained, transcribe the interviews, read through the data thoroughly and apply the coding process. The coding process allowed the researchers to develop themes which led to the interpretation and analysis of the data and the compilation of the research report.

 

PROFILE OF PARTICIPANTS

Twenty-one participants were interviewed, 12 social workers and 9 community workers. The biographical profiles of the participants are presented in Table 1. To ensure anonymity, numbers were assigned to participants, as indicated.

THEMES AND SUBTHEMES: FINDINGS AND DISCUSSION

Table 2 presents an overview of the themes and their related sub-themes. The themes were derived from the information gathered from the individual in-depth interviews with the participants. This information was thematically grouped and analysed. Direct quotations from the participants' transcribed interviews are provided to buttress and substantiate the identified themes and the related subthemes. A literature control in the form of extracts from the literature was conducted and related ideas were added to confirm, contrast and elaborate on the participants' accounts and comment on a theme or subtheme.

Theme 1: Monitoring and evaluation skills

The skills theme was deduced from the information provided by the participants in answer to the question "What are the monitoring and evaluation skills that the practitioner should possess to effectively support poverty-alleviation projects?" This question was asked to identify some of the critical skills that social service professionals should have to conduct monitoring and evaluation of community projects effectively. Allen-Ile et al. (2012) state that, for community projects to be sustainable, M&E should be both operationally effective and methodologically sound. This requires considerable expertise, skills, knowledge and effort at the planning stages to ensure that M&E activities are properly scoped and planned. The lack of capacity amongst personnel with regard to M&E is regarded as one of the government's major challenges in effectively managing community and other developmental projects (Presidency of the Republic of South Africa, 2007). The correct application of M&E systems could help the public sector to assess and identify challenges and gaps in service delivery outcomes. According to Podems et al. (2014), various data sources suggest that the absence of M&E, together with a lack of technical knowledge and skills, explains why community projects for alleviating poverty are not sustainable. Engela and Ajam (2010) report that the Department of Performance Monitoring and Evaluation conducted a survey in 2010 that also specifically identified this capacity limitation (DPME, 2013). According to the data, social service professionals are not equipped with the skills that could enable them to conduct this monitoring and evaluation effectively.

Subtheme: Knowledge of monitoring and evaluation

This theme was deduced from the data provided by the participants in answer to the following question: "What are the critical skills that are required to effectively conduct monitoring and evaluation?" Although participants were not directly asked about their knowledge of monitoring and evaluation, they reported that one of the requisite skills for effective projects is knowledge of monitoring and evaluation. Participants reported that a lack of monitoring and evaluation skills has a negative impact on their ability to comprehensively support the projects. Monitoring and evaluation practices are acknowledged for their usefulness in ensuring project improvement (Engela & Ajam, 2010).

This sentiment is aptly supported by a participant as follows:

I think the most important element is monitoring itself of which most of us lack and this is very difficult for us. You must understand the importance of monitoring and following the business plan. (Participant 02)

The above sentiments are supported by Prennushi, Rubio and Subbarao (2011), who warn that M&E needs to be conducted by personnel and organisations that are competent in utilising the information to influence policies. These authors further indicate that one of the challenges is that M&E is conducted by personnel who lack capacity and have no strong links to the key decision-making processes (Prennushi, Rubio & Subbarao, 2011). This results in a loss of opportunity to learn what works and what does not, sometimes along with a loss of the funds.

Another participant shared similar sentiments, emphasising the lack of knowledge and understanding of the monitoring process as a challenge:

I think in terms of the government our M&E systems are poor and the most of us lack knowledge and have difficulty understanding the monitoring it. (Participant 14)

Research conducted on developmental NGOs across many Third World countries shows that management experienced multiple challenges in implementing these practices, as well as in formulating appropriate tools and designs (Muzinda, 2007). These challenges can hinder the effective implementation of M&E, which becomes problematic because the genuine progress and transformation envisaged in project objectives can only be achieved if M&E practices yield the information required for improvement. However, in South Africa there is a gap in knowledge about what informs M&E.

Monitoring continuously assesses the progress of a programme or project against the agreed objectives. Correctly implemented, M&E provides the assessor with constant feedback on the operations of the project and assists in identifying potential successes and constraints. Unfortunately, in many projects the role of monitoring is barely understood, and this impedes the success of these projects (Allen-Ile et al., 2012:46). The following excerpt encapsulates the view of a participant who suggested that practitioners lack expertise in monitoring and evaluation because they were never capacitated for it, even though they are expected to monitor and support projects monthly:

First of all there should be intensive training [offered] to all the officials who are responsible for monitoring [such that they] have a common understanding [of] what is expected from the projects and what is expected [of] them as officials because the other thing that is hindering the progress of the project is different understanding from different officers based on their own discretion and personal understanding of how the project should be monitored or how it should be operated I think [this] is something that also confuses people who are supposed to benefit from our assistance. (Participant 06)

As noted above, Podems et al. (2014) link the lack of M&E and of technical knowledge and skills to the unsustainability of community projects for alleviating poverty, and Engela and Ajam (2010) report that a 2010 survey by the Department of Performance Monitoring and Evaluation also points to this capacity limitation (DPME, 2013).

The above quotations reveal a lack of understanding of M&E concepts not only amongst practitioners, but perhaps across the entire M&E field in South Africa at the moment. It is evident that M&E in South Africa is still in its embryonic stages, to the extent that there is a scarcity of personnel who are properly trained to handle it competently. In turn, this shortage of sufficiently trained personnel hinders the achievement of the major goal of M&E, which is to assess whether a project has met its targets in order to determine its feasibility and appropriateness (Hamper & Baugh, 2011). The availability of adequately trained personnel who have a thorough understanding of M&E concepts is critical for the effective implementation and evaluation of current and future community projects for alleviating poverty. This means that the technicalities of M&E require all practitioners to have undergone some form of professional training in it (Görgens & Kusek, 2009; Scriven, 2014).

Theme 2: Employee capacity on monitoring and evaluation

Subtheme: Shortage of professional personnel trained in M&E

The shortage of professional personnel trained in M&E was highlighted as a critical area in need of attention. Participants felt overwhelmed by the workload because there are too few professionals with M&E training, or at times because they are the only ones with M&E training in their organisations. As a result, participants feel that they must carry the burden, as their fellow team members often rely on them. Participants also noted their awareness that M&E of community projects for alleviating poverty is only now beginning to receive the necessary attention in South Africa, because not many people are trained to do it. Adequate M&E capacity is one of the most important aspects of implementing an effective M&E system for community projects. This calls for organisations to ensure that their M&E systems are staffed with persons who have the necessary knowledge and skills to undertake M&E. It is also important that M&E staff members are continually empowered to keep abreast of new developments and approaches in the developing discipline and practice of M&E (Görgens & Kusek, 2009).

In response to the training conducted on M&E community projects and the effectiveness thereof, a participant said:

The government do not provide training opportunities. We are just given projects to monitor without intensive training or knowledge on M&E. (Participant 11)

It is critical for evaluators to acquire institutional capacity, including technical and managerial skills prior to the implementation of an M&E system that can be sustained over time (Kusek & Rist, 2004). Furthermore, the participants' view is captured and reinforced by Schiavo-Campo (2005), who indicated that in South Africa M&E-related capacity in terms of human resources and skills remains inadequate.

The literature reviewed (Görgens & Kusek, 2009; Kusek & Rist, 2004) suggests that capacity building must focus on the following three levels in an organisation:

· System capacity: the ability of a system to deliver the goal and objectives of a process and thereby contribute towards fulfilling the organisation's objectives. In a systems context, capacity is defined as a set of entities that operates to achieve a common purpose and according to certain rules and processes;

· Organisational capacity: the capacity of the organisation and its processes to deliver the organisation's goals and development objectives;

· Individual capacity: the ability of individuals to perform functions effectively, efficiently and sustainably.

M&E capacity and capacity building require a multi-faceted approach that needs to be a continuous process. This approach could help the DSD to build a comprehensive M&E capacity that is sustainable and capable of discharging its M&E mandate and functions in an effective and efficient way.

When a participant was asked about receiving any training from DSD, he reported as follows:

Unfortunately, we did not receive any specific training about M&E. (Participant 02)

When a follow-up question was posed to the participant on how he monitors a community project, even though he did not receive any training on monitoring and evaluation, the participant said:

You just learn along the way those who have been there before you will take you along on how this is done, so even if there were misinterpretations before you just follow suit and do whatever that was done before. (Participant 02)

A lack of sufficient training and the inadequate capacity of government employees were emphasised by the Presidency of the Republic of South Africa (2007:23): the lack of capacity of personnel in relation to M&E is one of the government's major challenges in effectively managing community and other developmental projects. The correct application of M&E systems can help the public sector to assess and identify challenges and gaps in service delivery outcomes. The United Nations Development Programme (2016) Handbook on Planning, Monitoring and Evaluation for Development emphasises that human resources are vital for the effective monitoring and evaluation of community projects, and that staff should possess the required contextualised technical expertise in the area in order to ensure high-quality monitoring and evaluation. Implementing an effective M&E system demands that staff undergo training and possess skills in research and project management; hence capacity building is critical (Nabris, 2011). The importance of contextualisation is in line with a participant's sentiment, expressed below, namely that

training attended did not assist because it was not in line with the challenges experienced in projects. (Participant 14)

The technical work of any project must be done by qualified staff to ensure that the quality of work is of a high standard. There is a lack of professional development and technical supervision in the public sector, which has led to poor project quality. The United Nations Development Programme (2016) stresses the need for training on M&E and the desire to acquire appropriate knowledge about it. This is also captured by a participant in this study who contends that lack of training and skills development impacted on how projects are supported:

Unfortunately, I have never attended any training. I just started training, like when I joined the department seven years ago. I started just monitoring like you are given monitoring tool, to just go and monitor, through years of experience that's where you learn. (Participant 18)

This poses a challenge for the quality of services rendered to the projects, owing to inadequate skills and a lack of continuous professional development in the public sector in South Africa. The participants' views in this regard are supported by Bana and Shitindi (2009), Dassah and Uken (2006), and Engela and Ajam (2010), who report that these inadequate skills stem from the lack of formal training opportunities in M&E offered by government departments, itself a result of a lack of formalisation and standardisation. The researchers inquired further about the efforts made by the Department to ensure that employees are empowered with skills, and a participant offered the following view:

To the best of my knowledge I don't think the department is doing enough I think this area is more neglected, because I do not remember officials at my level being taken to a workshop for M&E or project management. Normally, they take the supervisors and coordinators, but those who are responsible for supporting the organization no they are neglecting them and as a result it impacts negatively on the organization on how they run the organization. It is important for the department to ensure that relevant people who are responsible for monitoring and evaluation of community projects are equipped with relevant skills to be able to support the projects. (Participant 09)

The excerpt from an interview below, however, reveals that the Department sometimes empowers employees through in-service training, though the participant felt it was not sufficient and it did not address the gaps they encounter during monitoring. One participant described it as follows:

Let me say in-service training which we receive here is not that much. Actually, it is just an event and it is too administrative and rather than being given the correct knowledge, you are taught about how the forms are completed and all those things, but you're not being given the knowledge on real monitoring and evaluation. (Participant 15)

This participant's view was in line with that of Forrest (2007), who argues that training means much more than skills training alone: it includes a whole suite of learning approaches, from secondment to research institutes and opportunities to work on impact evaluations within the organisation or elsewhere, to time spent by programme staff in evaluation departments and, equally, time spent by evaluators in the field. This helps the employee to be more versatile in today's world. Evaluation must also be independent and relevant. Independence is achieved when evaluation is carried out by entities and persons free of the control of those responsible for the design and implementation of the development intervention (Gaarder & Briceno, 2010; Organisation for Economic Co-operation and Development (OECD), 2016).

It is clear from the above participant's view that the DSD is not empowering its officials in the monitoring and evaluation of community projects, even though it still expects them to support the projects without the necessary technical monitoring knowledge and skills. Perrin (2012) warns that organisations which ignore the training aspect of M&E find themselves faced with a number of challenges.

Theme 3: Monitoring for compliance

Subtheme: Monitoring and evaluation conducted as a tick-box exercise

Participants highlighted the point that, owing to the tedious nature of M&E of community projects, as well as the processes and pressure to meet deadlines and departmental expectations, team members often end up fulfilling M&E requirements in a half-hearted manner, just for the sake of reporting. It emerged in this research that M&E of community projects is often conducted under pressure when reports or reviews are due for submission, leading to fictitious figures being generated. Social service professionals felt that fellow team members frequently lack consistency and honesty, as they practise M&E of community projects minimally and for reporting purposes only, thereby hindering a thorough understanding of the various dynamics at play within project implementation. Some of the participants reported that, since monitoring and evaluation are forced on them through performance agreements, they monitor and evaluate community projects merely to reach the targets so that they can get a performance bonus. Others reported that they conduct monitoring and evaluation just for the sake of compliance with what is expected of them.

Regarding other challenges that social service practitioners responsible for M&E experience, one participant explained the situation by giving an example of his personal experiences:

Most of the officials are overwhelmed with the general work that is expected to do and the number of projects that are available so ... You end up not doing thorough monitoring and evaluation just to make sure that you cover each and every one for the sake of compliance. (Participant 07)

As noted in the introduction, the World Health Organisation (2011) and Allen-Ile et al. (2012) assert that for monitoring and evaluation to be successful, attention must be focused on planning and developing a set of actions that aid the M&E activities and the results that emerge from them; monitoring should not be an afterthought or a hurried activity. Planning for M&E also reveals the resource requirements in terms of human, material, financial and other resources. In most projects M&E is not effective because no advance preparation is made for it, sometimes because M&E is not integrated into the planning phase of the programme or project and no provision is made for dedicated personnel, finance and materials (Allen-Ile et al., 2012; World Health Organisation, 2011).

When participants were asked about the importance of monitoring and evaluation, one participant stated:

We sometimes with the workload that we have we neglect projects and we focus on other issues than monitoring, when we go to monitor we will be doing it for compliance; however, I still propose that it is of paramount important that in future the department consider hiring people who will be specialising in that field. (Participant 19)

Another participant reported:

I don't even know who the accounting officer of the M&E within the department is; what I know is that I am doing monitoring and evaluation for the sake of compliance and reaching targets, because they have never been in anyway where my findings and recommendations were thoroughly discussed or deliberated by any office. (Participant 14)

It is very clear from the above submissions that monitoring and evaluation of community projects within the government sector need to be well structured and coordinated for proper implementation. These thoughts are best supported by Sorrenson (2010), who reported that as far as completed projects are concerned, with very few exceptions, the M&E systems were poorly developed and implemented at the field level. Weaknesses in M&E are traced back to the design of the M&E system, particularly the absence of clearly identifiable monitorable indicators and a lack of ownership and participation by the stakeholders. M&E systems often reflect shortcomings in the description of project objectives, components and implementation arrangements. Delays in conducting complicated baseline surveys and impact assessments, and in operationalising the M&E system, are challenges and complications often encountered during project implementation. In this regard, a participant reported:

We are not necessarily interested in quality. I am forced to monitor five organisations a day, that in itself will not allow me to exhaust the knowledge that I have; that means I will only apply what the employer or the service training has offered, that is administrative work, ticking what is there and what is not - that's it. (Participant 21)

The correct implementation of M&E techniques should be supported by management principles within government departments that apply M&E findings in their planning and budgeting. Without this, M&E systems could be just a superficial 'tick the checklist' exercise that complies with the monitoring tool (United Nations Development Programme, 2016). This implies that information gathered during the monitoring of the projects must be treated as a valuable asset and that relevant decisions should be taken to build and empower the project members.

When asked about the importance and effectiveness of monitoring and evaluation within the Department of Social Development, participants reported that abnormal caseloads, unrealistic monitoring targets and limited resources compel them to commit "malicious compliance" during monitoring and evaluation. As stated by a participant:

One day you are having lot of community projects to cover because you want to reach a certain target before some can take the car, then you're just going to go there and do what we actually call malicious compliance, you're just going there to actually say I have arrived here, just sign, just sign. (Participant 08)

According to Allen-Ile et al. (2012) it is essential for M&E mechanisms to be ethically and consistently applied within community projects to assist in determining the extent to which policy outcomes or objectives have been attained, whether or not they are in line with anticipated outcomes, as well as determining the efficacy of the process followed.

It is clear from the above quotations that M&E reports are often compiled only when feedback is required by the funders, leading to unethical practices among M&E practitioners and their team members. According to Ortiz, Kuyama, Munch and Tang (2004), it has also continuously emerged within international NGOs that, despite the importance of top management acknowledging and prioritising M&E as a management tool, many managers rather see it as a "bureaucratic requirement to justify their resources". This results in management giving half-hearted recognition to M&E systems, as they feel obliged merely to meet a requirement, thus making it a 'tick-box exercise' for accountability purposes. From this research it is evident that conducting M&E as a 'last-minute act' for reporting purposes results in the 'cooking of results', as mentioned by participants. Furthermore, a participant reported:

We don't really do quality work, you must just go there and monitor little things; sometimes we go there sometimes; we even fill in the monitoring tool in the office because you have one day for one car for that day. (Participant 13)

The fact that M&E is done half-heartedly to fulfil donor expectations shows that the process is flawed altogether, as the critical findings which could serve as important lessons for future projects are fabrications; this confirms that there is no appreciation of the strategic value of M&E (Ferris & Graddy, 2014).

Theme 4: Poor stakeholder engagement

Subtheme: Lack of cooperation from project members

Participants highlighted that they often face resistance or difficulties in securing beneficiary involvement. Beneficiaries display a sense of entitlement to the project and regard the assistance offered to them as an attempt to take control of their project, even though the support will benefit them in the long run. A participant identified:

Lack of interest from the project members, because you can't force them if they are not interested. You can mentor and coach them, but if they are not interested there is nothing that you can do. Is it not that we are not running those projects. Those who are running them on daily basis are those who are facing those patterns of poverty, especially material poverty ,but if you have mental poverty then there is no way one will definitely run the project and they cannot be convinced. (Participant 04)

The above is in line with the World Bank Report (2018) in which institutional analysis and the assessment of capacity development needs of implementing agencies and project members are cited as being essential ingredients for M&E system design and programme sustainability.

When a follow-up question was posed to the participants about whether their superiors should not ensure that project members cooperate with them during monitoring, a participant reported:

Enforcement cannot work in any way. There is a saying that you can force the horse to the river, but you cannot force it to drink. So of course, you can try to turn them around to deal with them psychologically the fund is there, are available to fund those projects. Some of the community members you may find they are not using that fund appropriately, hence I say you cannot force them. There is no way you can force community members to participate. (Participant 04)

One of the key factors of the success of M&E systems, as described by Allen-Ile et al. (2012), is the ability of implementers to claim ownership. This implies that functionaries must see it as a part of their work and not as something that is imposed on them.

As Wu, Ramesh, Howlett and Fritzen (2010) point out, for the implementation to be successful, there must be a clear understanding between project members and DSD staff. Furthermore, project members must be motivated and self-driven to want to make it work.

The above quotations are evidence of a lack of willingness by beneficiaries to participate because of the absence of visible or tangible incentives. Beneficiaries perhaps feel the need to cater for their own personal needs before they can cooperate. One could argue that this is where the importance of community education comes in, so that the project is fully explained to beneficiaries for them to become aware of the bigger picture and the impact of the project in the long run, given their participation. Again, these comments highlight the level of desperation and poverty within most communities where development NGOs offer their services, as people are so poor that they often look out for their own needs before those of others.

 

RECOMMENDATIONS

This article has provided an overview of the M&E deficits associated with the support of poverty-alleviation projects. The study established a lack of knowledge and skills as one of the barriers to the effective monitoring and evaluation of projects. It is therefore recommended that social workers and community workers supplement their knowledge through continuous professional self-development focusing on the monitoring and evaluation of community projects. In addition, employees who are responsible for monitoring and evaluating community projects should receive intensive training in project management, financial skills, and monitoring and evaluation.

Moreover, establishing a coordinated system of group monitoring is highly encouraged. This should involve junior and senior practitioners within the Department working together for skills transfer. In devising this system, a multidisciplinary approach should be considered, with the inclusion of other strategic sectors, to enable social workers and community workers to effectively support community projects for alleviating poverty.

M&E feedback should be communicated to all parties involved in the programme or project, because this addresses issues of concern and enables all parties to plan corrective measures if the programme or project is not going according to plan. Data should always be available at key decision-making points such as staff meetings, review sessions and stakeholder update meetings to allow decision makers to take data-driven decisions. It is also beneficial to establish processes for reviewing how data have been used for decision making over time and to take corrective action to enhance data quality.

In relation to the lack of beneficiary cooperation, it is recommended that all stakeholders be involved in designing the M&E plan for community projects. Managers should be aware that it is important to gain increased interest, permission, support, commitment and buy-in from key stakeholders, both internal and external, when designing an M&E system. Stakeholders such as beneficiaries, community leaders, local authorities, government departments and civil society organisations should be involved in the design and implementation of an effective and efficient M&E system. Support from all levels of the organisation is crucial, because it ensures that stakeholders can mobilise adequate human, financial and material resources to undertake systematic participatory M&E processes and follow-up.

 

SUMMARY AND CONCLUDING REMARKS

This article focused on investigating the role played by the social workers and community development workers employed by the Department of Social Development who are responsible for M&E in supporting community projects for poverty alleviation, with special emphasis on utilising data elicited from monitoring. The researchers investigated the extent to which the monitoring and evaluation process is applied in supporting poverty-alleviation projects.

What is evident throughout this article is that participants are generally aware of the existence of monitoring and evaluation mechanisms for community projects within the Department of Social Development. However, a lack of knowledge and of critical skills regarding the process and system of implementing monitoring and evaluation in support of the projects was identified as a major concern. This has implications for the sustainability and smooth running of poverty-alleviation projects. Generally, monitoring and evaluation should be used to assess the gaps within projects and consequently to develop intervention strategies to address the identified gaps; however, it was established that participants conducted monitoring simply for the sake of compliance and viewed monitoring and evaluation of community projects as a tick-box exercise to reach departmental targets. This resulted in poor monitoring and evaluation mechanisms, which impacted negatively on the sustainability of poverty-alleviation projects.

 

REFERENCES

ALLEN-ILE, C., ERESIA-EKE, C. & ILE, I. 2012. Monitoring and evaluation of policies, programmes and projects. 1st ed. Hatfield, Pretoria: Van Schaik.

ARCILLA, G.R., CO, F.F. & OCCAMPO, R.S. 2011. Correlates of poverty: Evidence from the community-based monitoring system (CBMS) data. DLSU Business & Economics Review, 20(2):33-43.

AURIACOMBE, C.J. 2013. In search of an analytical evaluation framework to meet the needs of governance. Journal of Public Administration, 48(4.1):715-729.

BANA, B. & SHITINDI, E. 2009. Performance management in the Tanzanian Public Service. Proceedings of the Conference on Governance Excellence: Managing Human Potential, Arusha International Conference Centre, United Republic of Tanzania, 2-4 March.

BROMBERG, D.E. & HENDERSON, A.C. 2015. Performance information use in local government: Monitoring relationships with emergency medical services agencies. Public Performance & Management Review, 39:58-60.

CRESWELL, J.W. 2009. Research design: Qualitative, quantitative, and mixed methods approaches. 3rd ed. Los Angeles: Sage.

CRESWELL, J.W. 2016. 30 essential skills for the qualitative researcher. London: Sage.

DASSAH, M.O. & UKEN, E.A. 2006. Monitoring and evaluation in Africa with reference to Ghana and South Africa. Journal of Public Administration, 41(4):705-720.

DEPARTMENT OF SOCIAL DEVELOPMENT. 2010-2016. Strategic framework on poverty alleviation. Pretoria: Government Printers.

DPME (DEPARTMENT OF PERFORMANCE MONITORING AND EVALUATION). 2013. National Evaluation Policy Framework. [Online] Available: http://thepresidency.gov.za [Accessed: 05/06/2018].

ENGELA, R. & AJAM, T. 2010. Evaluation capacity development: Implementing a government-wide M&E system. In: Department of Monitoring and Evaluation, Monitoring and Evaluation Forum, Pretoria, 21 July 2005. Pretoria: Government Printers.

FARELL, K.M., KRATZMANN, S., MCWILLIAM, N., ROBINSON, D., SAUNDERS, S., TICKNOR, J. & WHITE, K. 2002. Evaluation made very easy, accessible, and logical. Atlantic Centre of Excellence for Women's Health. [Online] Available: http://www.acewh.dal.ca/eng/reports/EVAL.pdf [Accessed: 21/06/2015].

FERRIS, J.M. & GRADDY, E. 2014. Organisational choices for public service supply. Journal of Law, Economics and Organisation, 11(1):126-141.

FORREST, W. 2014. Cohabitation, relationship quality, and desistance from crime. Journal of Marriage and Family, 76:539-556.

GAARDER, M. & BRICENO, B. 2010. Institutionalizing evaluation: Review of international experience. Research Paper. London.

GÖRGENS, M. & KUSEK, J.Z. 2009. Making monitoring and evaluation systems work. Washington, DC: World Bank.

HAMPER, R.J. & BAUGH, L. 2011. Handbook for writing a proposal. New York: McGraw-Hill.

KAHAN, B. & GOODSTADT, M. 2005. The interactive domain model of best practices in health promotion: Developing and implementing a best practices approach to health promotion. Health Promotion Practice, 2(1):43-67.

KEETON, G. 2012. Time to take performance monitoring seriously. Business Day, 26 November.

KUSEK, J.Z. & RIST, R.C. 2004. Ten steps to a results-based M&E system. Washington, DC: The World Bank.

LEECH, L. & ONWUEGBUZIE, A.L. 2007. An array of qualitative data analysis tools: A call for data analysis triangulation. School Psychology Quarterly, 22(4):557-584.

MATTHEWS, B. & ROSS, L. 2010. Research methods: A practical guide for the social sciences. New York: Pearson Longman.

MAXWELL, A.J. 2013. Qualitative research design: An interactive approach. 3rd ed. Los Angeles: Sage.

MUZINDA, M. 2007. Monitoring and evaluation practices and challenges of Gaborone-based local NGOs implementing HIV/AIDS projects in Botswana. Gaborone: University of Botswana. (MA dissertation)

NABRIS, K. 2011. Civil society empowerment. Palestine Academic Society for the Study of International Affairs. [Online] Available: http://www.passia.org [Accessed: 26/07/2018].

NICHOLLS, D. 2009. Qualitative research: Part two - Methodologies. International Journal of Therapy and Rehabilitation, 16(10):586-592.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2016. Glossary of key terms in evaluation and results-based management. Development Assistance Committee: OECD. [Online] Available: http://www.oecd.org/dac/evaluation/18074294.pdf [Accessed: 25/11/2016].

ORTIZ, F.E., KUYAMA, S., MUNCH, W. & TANG, G. 2004. Implementation of results-based management in the United Nations organizations. Part 1. Series on managing for results in the United Nations system. JIU/REP/2004/6. Geneva.

PERRIN, B. 2012. Linking monitoring and evaluation to impact evaluation. Impact Evaluation Notes, No. 2. [Online] Available: http://www.interaction.org/sites/default/files/Linking%20Monitoring%20and%20Eva [Accessed: 18/08/2019].

PODEMS, D., GOLDMAN, I. & JACOB, C. 2014. Evaluator competencies: The South African government experience. The Canadian Journal of Program Evaluation, 28(3):71-85.

PRENNUSHI, G., RUBIO, G. & SUBBARAO, K. 2011. M&E. In: Core techniques and cross-cutting issues, Vol. 1 of PRS Source Book. Washington, DC: World Bank: 105-130.

PRESIDENCY OF THE REPUBLIC OF SOUTH AFRICA. 2007. Policy framework for the government-wide M&E system. Pretoria: Government Printer.

PRESIDENCY OF THE REPUBLIC OF SOUTH AFRICA. 2016. Municipal assessment tool. Pretoria: DPME.

ROGERS, P. 2009. Matching impact evaluation design to the nature of the intervention and the purpose of the evaluation. In: CHAMBERS, R., KARLAN, D., RAVALLION, M. & ROGERS, P. Designing impact evaluations: Different perspectives. Working Paper 4. New Delhi: International Initiative for Impact Evaluation: 24-31.

ROSSMAN, G. & RALLIS, S.F. 2012. Learning in the field: An introduction to qualitative research. 3rd ed. Thousand Oaks, CA: Sage.

SCHIAVO-CAMPO, S.S. 2005. Building capacity for monitoring and evaluation in the public sector. [Online] Available: http://www.samea.org.za/documents/Government_Wide_ME_System.pdf [Accessed: 16/02/2019].

SCHMIDT, N.A. & BROWN, J.M. 2015. Evidence-based practice for nurses: Appraisal and application of research. 3rd ed. Jones & Bartlett Learning.

SCRIVEN, M. 2007. Key evaluation checklist. Evaluation Checklists Project, Western Michigan University. [Online] Available: www.wmich.edu/evalctr/checklists [Accessed: 31/09/2015].

SCRIVEN, M. 2014. Reflections. In: Evaluation roots: Tracing theorists' views and influences. USA: Sage: 183-195.

SERETLO, M.J. 2012. Evaluating community projects of alleviating poverty in Waterberg district. Thohoyandou: University of Venda. (Honours thesis)

SORRENSON, W. 2010. The use of monitoring and evaluation in agriculture and rural development projects: Findings from a review of implementation completion reports. Rome: FAO.

STOCKMAN, R. 2011. A practitioner handbook on evaluation. Massachusetts: Edward Elgar Publishing.

TJALE, M.M. 2010. The impact of local economic development projects funded by the Department of Health and Social Development on poverty alleviation in Bakenberg area of Mogalakwena Municipality, Limpopo Province. Sovenga: University of Limpopo. (MA dissertation)

UN (UNITED NATIONS). 2015. Transforming our world: The 2030 Agenda for Sustainable Development. New York, NY: UN. [Online] Available: sustainabledevelopment.un.org/post2015/transformingourworld/publication [Accessed: 08/11/2020].

UNITED NATIONS DEVELOPMENT PROGRAMME. 2016. Development Assistance Framework for the Republic of South Sudan, 2011-2016. UNDP.

WORLD BANK REPORT. 2018. Monitoring and evaluation of World Bank agricultural research and extension projects. Agricultural and Rural Development, Discussion Paper 20. Washington, DC: The World Bank.

WORLD HEALTH ORGANISATION. 2011. Monitoring and evaluation toolkit: HIV and AIDS, tuberculosis and malaria. 2nd ed. Geneva. [Online] Available: http://www.who.int/hiv/pub/epidemiology/en/me_toolkit_en.pdf [Accessed: 25/01/2018].

WU, X., RAMESH, M., HOWLETT, M. & FRITZEN, S. 2010. The public policy primer: Managing the policy process. New York: Routledge.

 

 

Correspondence:
Mmaphuti Dipela
dipelmp1@unisa.ac.za

Boitumelo Mohapi
mohapbj@unisa.ac.za

All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.