    SAMJ: South African Medical Journal

On-line version ISSN 2078-5135; Print version ISSN 0256-9574

    SAMJ, S. Afr. med. j. vol.115 n.5 Pretoria Jun. 2025

    https://doi.org/10.7196/SAMJ.2025.v115i5.2673 

    SPECIAL SERIES ON THE DISTRICT HEALTH SYSTEM

     

    Monitoring District Health System performance in South Africa: A proposed dashboard based on key pragmatic indicators

     

     

P Barron (I); H Mahomed (II, III); T C Masilela (IV); K Vallabhjee (V); N Ndlovu (VI); C Goliath (VII); H Schneider (VIII)

I. FFCH (SA); School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
II. PhD; Western Cape Government Department of Health and Wellness, Cape Town, South Africa
III. PhD; Division of Health Systems and Public Health, Department of Global Health, Stellenbosch University, Cape Town, South Africa
IV. PhD; Department of Planning, Monitoring and Evaluation, South African Government, Pretoria, South Africa
V. FFCH (SA); Clinton Health Access Initiative, South Africa, and Division of Health Systems and Policy, School of Public Health, Faculty of Health Sciences, University of Cape Town, South Africa
VI. MSc; Health Systems Trust, Durban, South Africa
VII. MOT; Western Cape Government Department of Health and Wellness, Cape Town, South Africa
VIII. PhD; School of Public Health and SAMRC Health Services to Systems Research Unit, University of the Western Cape, Cape Town, South Africa


     

     


    ABSTRACT

Effective monitoring and evaluation (M&E) systems are central to ensuring the performance and accountability of the district health system (DHS). Current systems in South Africa are suboptimal and poorly oriented to the decision-making needs of district managers. Drawing on a WHO measurement framework for the performance of primary healthcare, and as a follow-up to a first article describing the challenges of M&E systems in the DHS, this article proposes a DHS performance monitoring dashboard that is both practical and pragmatic. The dashboard was constructed in an iterative and consultative process, and consists of 20 indicators for quarterly monitoring. A set of general criteria underpinning the choice of indicators is spelled out (e.g. the data are readily available and reliable). Indicators that do not have much variability, or that are better suited to annual evaluation (e.g. number of community health workers per 1 000 population), are not included. The dashboard includes the name or description of each indicator, its definition, why it is useful, challenges and pitfalls to be considered when analysing it, and how it can be used for decision-making. We propose that these indicators be assessed, tracked and monitored on a quarterly basis by relevant managers at the facility, subdistrict and district levels. We emphasise that the purpose of this dashboard is not external compliance, but rather to support district managerial decision-making and accountability.

    Keywords: information for decision-making, district health system dashboard


     

     

In many low- and middle-income countries, the performance of primary healthcare (PHC) systems faces significant challenges, ranging from resource limitations to variability in data collection and quality.[1] Effective monitoring and evaluation (M&E) tools, such as health system dashboards, have become essential for improving accountability, resource allocation and decision-making within public health systems. A health dashboard is a data-driven tool, updated as close to real time as possible, designed to provide an accessible overview of key health indicators, enabling health managers at all levels to respond quickly to emerging trends and make informed decisions. The need for a pragmatic and context-specific health dashboard in South Africa (SA) has become increasingly critical, given the country's unique healthcare landscape and systemic challenges.

    SA's District Health Information System (DHIS) collects extensive health and health system data, but its current use for decision-making is often limited by its lack of integration and timeous production. This article aims to address these limitations by proposing a quarterly dashboard of 20 key PHC indicators tailored to the SA context. This dashboard builds on a global framework, but focuses specifically on the nuances of the SA health system. Many global frameworks for health system monitoring, while comprehensive, require significant resources, including large-scale surveys and data collection efforts, which are often impractical in resource-constrained settings.

    This article builds on a previous publication by the authors entitled 'District Health System performance in South Africa: Are current monitoring systems optimal?'[2] In this publication we argued that the current monitoring and evaluation of PHC in SA could and should be improved. We further proposed that the latest conceptual framework developed by the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF)[3] could provide the basis for this. Fig. 1 shows the components of the WHO framework with the indicators measuring health system capacity (blue) and performance (green) that are relevant to the monitoring of DHS performance. The framework highlights the importance of simultaneously monitoring inputs ('health system determinants'), processes and outputs such as access, quality and continuity ('service delivery').

     

    Methodology

The main purpose of the dashboard is to be proactive and to give district-level managers a quick and regular overview of how key programmes are performing; values could be colour-coded red, orange and green for easy visualisation, like a traffic light (a minimal sketch of such coding follows the list below). The dashboard was originally constructed by the first author based on his experience of the DHS in SA, and drawing on the WHO framework. In applying and adapting the WHO framework for an SA dashboard, the following rationales and criteria were considered:

    SA has a unique health system, and the dashboard is tailored to the country's current public health and health system reality.

    Many indicators in the WHO framework require special surveys to obtain the metrics. These surveys take time, effort and resources, which are in short supply.

The dashboard needs to be constructed from data in the District Health Management Information System (DHMIS),[4] the official information system under the control of the National Department of Health, which limits which indicators can be constructed. Underpinning the DHMIS is the data dictionary, with definitions of indicators and data elements.[5]

Because we wanted to create a dashboard for managers, we have focused on a quarterly set of indicators that facility, subdistrict and district health managers can use for decision-making, within the constraints of the system.
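To illustrate the traffic-light idea mentioned above, the following is a minimal sketch in Python; the thresholds and the 'higher is better' convention are our own illustrative assumptions, not part of the proposed dashboard specification:

```python
# Minimal sketch of traffic-light colour coding for dashboard values.
# Thresholds and direction ("higher is better") are illustrative
# assumptions, not a specification from this article.

def traffic_light(value: float, green: float, orange: float,
                  higher_is_better: bool = True) -> str:
    """Classify an indicator value as green, orange or red."""
    if not higher_is_better:
        # Invert the comparison for indicators where lower is better,
        # e.g. average patient waiting time.
        value, green, orange = -value, -green, -orange
    if value >= green:
        return "green"
    if value >= orange:
        return "orange"
    return "red"

# Example: coverage of 78% against hypothetical thresholds of
# 90% (green) and 70% (orange) is flagged orange.
print(traffic_light(78.0, green=90.0, orange=70.0))  # -> "orange"
```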

    The proposed dashboard emphasises practicality by using existing data from the DHIS and the Basic Accounting System (BAS) database, ensuring that it is both cost-effective and easily implementable within the constraints of the existing health infrastructure. Several additional criteria underpinned the choice of indicators, including:

The indicators are based on quantitative data; no qualitative indicators (e.g. 'identify and address the structural barriers to prevent clients from remaining in care') were included.

    The data underpinning the indicators are readily and regularly available across all districts in SA.

    The data on which the indicators are based are likely to be reliable.

The indicator is likely to vary on a quarterly basis, depicting variation in district service delivery performance (if an indicator is static or constant, it is not useful for measuring performance); a screening sketch follows this list.

    The indicator should not require a special survey to be calculated or constructed.

    Indicators should be important for managers and measure priority issues within the DHS.

    The indicator should be easily understandable to most managers in the DHS.

    Managers can use the indicator to make decisions.
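To make the variability criterion concrete, the following is a minimal sketch, assuming a hypothetical 5% coefficient-of-variation cut-off (our illustration, not a threshold from this article):

```python
# Illustrative screen for the quarterly-variability criterion: an
# indicator whose quarterly values barely move is a poor candidate for
# a quarterly dashboard. The 5% coefficient-of-variation cut-off is a
# hypothetical assumption.

from statistics import mean, pstdev

def varies_quarterly(quarterly_values: list[float],
                     min_cv: float = 0.05) -> bool:
    """Return True if the series shows enough relative variation."""
    avg = mean(quarterly_values)
    if avg == 0:
        return False
    return pstdev(quarterly_values) / avg >= min_cv

print(varies_quarterly([1.02, 1.01, 1.02, 1.01]))  # static -> False
print(varies_quarterly([62.0, 71.0, 55.0, 68.0]))  # variable -> True
```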

The indicators are sometimes different to those in the national indicator dataset (NIDS), which does not use financial information to calculate indicators. Some of the definitions also differ from the NIDS, where we thought a different definition would provide an improved indicator.

The dashboard was developed in an iterative process of consultation and refinement with widening groups of stakeholders. It was first shared with the other authors, whose feedback was incorporated into the selection of the final set of 20 indicators. Furthermore, two of the authors (CG and HM) are currently senior managers in a provincial department of health, and another (TCM) works for a department tasked with monitoring the performance of the health system at a national level. The other authors are well versed in the routine information systems used in the public health sector.

Finally, the dashboard was tested for its relevance and pragmatism by distributing it among health service managers at subdistrict, district and provincial levels, as well as academics researching district health systems and public health specialists. This was done at a South African Learning Alliance on District Health Systems (SALAD)[6] workshop with around 30 participants. Each participant was given an opportunity to critique the dashboard, and based on the responses a final, consolidated dashboard of 20 indicators was constructed. Naturally, a longer list of indicators was generated at the outset. To truncate this and produce a shorter, more manageable list, each contributor who proposed an additional indicator was required to identify another indicator to be replaced by the suggested one.

The draft dashboard was revised, and is contained in the appendix: https://coding.samedical.org/file/2345. Inputs from the stakeholder consultations led to the removal of indicators that could be better evaluated on an annual rather than a quarterly basis (e.g. the number of community health workers per 1 000 population, the proportion of clinics and community health centres that are classified as 'ideal', and the proportion of clinics that have functional clinic committees). Another important suggestion was to include indicators measuring the performance of subcomponents of the district (notably district hospitals and community-based services), which are often not included in the performance of fixed PHC facilities.

    The exact display of the dashboard and how it will be created from the data, which reside in several databases, is beyond the scope of this in-practice piece. The dashboard will be generated four times a year so that poor performance can be investigated and mitigation strategies put in place, if necessary, and excellent performance rewarded on a regular basis.

     

    Results

    The dashboard that we have developed (outlined in the appendix: https://coding.samedical.org/file/2345) contains a range of indicators measuring several aspects of health systems determinants and service delivery. It covers many of the WHO building blocks. All the indicators are based on data that are readily available in the health system down to the facility level, and easy to manipulate to calculate the indicators. Except for average patient waiting times, no specific surveys need to be conducted. The indicators are all deemed to be important, either to measure some priority aspect of the health system or specific health programme (e.g. maternal and child health). We acknowledge that it is not a comprehensive set of indicators, but one that is manageable to work with on a quarterly basis. It is intended to be a starting point for such a set of indicators, and may be adapted and/or expanded over time. The table includes the name or description of the indicator, the definition of the indicator, reasons why it is a useful indicator, some of the challenges and pitfalls to be considered when analysing the indicator, and how the indicator can be used for decision-making.
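As a minimal sketch of how one row of the appendix table could be held as a structured record, the following uses field names that mirror the table columns; the example values are paraphrased for illustration and not quoted from the appendix:

```python
# Sketch of one row of the indicator table as a structured record.
# Field names mirror the table columns; example values are invented
# for illustration.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str          # name or description of the indicator
    definition: str    # how the indicator is calculated
    rationale: str     # why the indicator is useful
    pitfalls: str      # challenges to consider when analysing it
    decision_use: str  # how managers can act on it

example = Indicator(
    name="PHC utilisation rate",
    definition="PHC headcount divided by the catchment population",
    rationale="Tracks access to and use of primary care services",
    pitfalls="Depends on the accuracy of population estimates",
    decision_use="Investigate facilities with falling utilisation",
)
```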

    Indicators omitted include those measuring:

Ideal clinic status (i.e. the proportion of facilities that have scored sufficiently well on many parameters to be classified as 'ideal' by the National Department of Health, a precursor for certification by the Office of Health Standards Compliance). The reason is that this indicator is reviewed on an annual basis, and this is a quarterly dashboard.

    Proportion of functional clinic committees. This indicator is also measured annually, and tracks the functionality of governance structures at PHC level.

    Community health worker (CHW) to population ratio. This is also measured annually, and does not change much on a quarterly basis.

    Patient satisfaction rate. This requires a survey to obtain the data, and is also measured annually.

Emergency medical services (EMS) availability per population, or EMS response times. While EMS are provided in a district, they are often managed separately by the provincial EMS programme, independently of district health services, even though they remain a critical part of the DHS.

Health worker per population ratio. The WHO has transitioned from population-based assessment of health workforce availability to a workload indicators of staffing need (WISN) approach, which is specific to each setting.

    Social determinants of health. These are essential contributors to health outcomes in each district, but are not managed or tracked by district health management teams.

    Community and stakeholder engagement. These are important indicators, but they require qualitative data sets.

     

    Discussion

We have attempted to develop a quarterly dashboard that can be used to make decisions to improve the performance of the primary care component of the health system. This dashboard is targeted at the full range of managers providing oversight of the DHS, including facility, subdistrict, district and provincial managers. In constructing this dashboard, we have drawn heavily on the WHO/UNICEF conceptual framework, as well as the accompanying publication that focuses on the technical specifications of indicators. By focusing on a set of manageable and locally meaningful indicators, the adoption of our suggested dashboard, or one like it, could help to instil a culture of accountability and results-oriented management within the public sector and the DHS in SA, which are currently oriented towards upward compliance. It can also enable the regular assessment of the adequacy of resources allocated, and their efficient utilisation. This approach aligns not only with national health priorities but also with global health targets, such as the Sustainable Development Goals. The dashboard's use of quarterly updates ensures that resource distribution can be dynamic and responsive to emerging needs, as opposed to static annual budget allocations.

    The dashboard we have developed covers the full range of health services in a typical district. It includes indicators that measure performance in community settings, all the way through to performance measurement at the district hospital level. The bulk of the indicators cover various aspects of performance at clinics and community health centres. The dashboard is largely based on data contained in the DHIS, which is the official information-gathering system for district-level performance in SA. This has been augmented with financial data from BAS. These indicators could ultimately be developed into an automated and user-friendly visual dashboard for district managers, with different degrees of aggregation, from facility to subdistrict and district levels.
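The aggregation mentioned above can be illustrated with a minimal sketch: facility-level numerators and denominators are summed before rates are computed, so that subdistrict and district values are not distorted by averaging rates across facilities of very different sizes (facility groupings and values are invented for illustration):

```python
# Sketch of rolling facility-level counts up to subdistrict and
# district rates. Data are invented for illustration.

from collections import defaultdict

rows = [
    # (district, subdistrict, numerator, denominator)
    ("District A", "Subdistrict 1", 180, 200),
    ("District A", "Subdistrict 1", 90, 150),
    ("District A", "Subdistrict 2", 300, 400),
]

def rollup(rows, level: str) -> dict:
    """Sum numerators and denominators at the chosen level, then divide."""
    totals = defaultdict(lambda: [0, 0])
    for district, subdistrict, num, den in rows:
        key = district if level == "district" else (district, subdistrict)
        totals[key][0] += num
        totals[key][1] += den
    return {key: num / den for key, (num, den) in totals.items()}

print(rollup(rows, "subdistrict"))
# {('District A', 'Subdistrict 1'): 0.7714..., ('District A', 'Subdistrict 2'): 0.75}
print(rollup(rows, "district"))  # {'District A': 0.76}
```

Summing counts before dividing is the standard choice for rate indicators; averaging facility-level rates would give small facilities the same weight as large ones.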

Health systems need to respond to changing priorities, which should be constantly adjusted to reflect the most important needs of the population. Regular assessments of health system performance therefore help to identify priorities and inform decisions that will ensure the most appropriate responses to these priorities. Assessing the performance of a health system effectively is the first step to improving it. This requires a conceptual lens through which to view the health system: its structures, inputs, processes, outputs and the outcomes into which it feeds. We have used the WHO's PHC framework to develop a practical dashboard. This dashboard can help explain the health system bottlenecks that contribute to specific policy challenges, and support efforts to pinpoint the person, group or institution that can and should take responsibility for remedial action and promote accountability.[7,8] The key to making this dashboard meaningful is for managers to have ready, regular access to the data, to be able to understand the trends seen, and to be empowered to make decisions based on the observed trends.

    The use of the WHO framework has facilitated health system learning in other settings. In Oman,[9] it enabled the health ministry to obtain a holistic view of health system processes and outcomes, necessary for the identification of inefficiencies, restrictions and delays, while pointing to gaps in data (e.g. a lack of information about private healthcare providers). It also provided significant insights and benefits despite being conducted using modest resources and data already available. In South Asia, a five-country study[10] to assess the availability of information on indicators of the WHO PHC measurement framework found that the framework, when diligently used, provides a platform to readily assess and track the performance of PHC, concluding that countries should improve the completeness, quality and use of existing data for strengthening PHC.

     

    Conclusion

Based on the WHO measurement framework for the performance of PHC, the authors have constructed a proposed dashboard of 20 indicators, which is both practical and pragmatic. Some of the indicators are not in the current NIDS, or are defined differently. The authors intend to stimulate a conversation among relevant role players toward the next iteration of the NIDS. It is suggested that these indicators be monitored, reviewed and discussed on a quarterly basis by relevant managers at the facility, subdistrict and district levels. It is further suggested that the purpose of the dashboard should not be external compliance, but rather to form the basis for managerial accountability, decision-making and continuous improvement. The proposed dashboard focuses on quantitative indicators that can be regularly tracked. Moreover, these indicators provide a dynamic and actionable view of the health system's performance, allowing for proactive rather than reactive management. While this article has focused on the quantitative indicators themselves, the M&E processes, including who sits around the table, how meaning is made of the technical data, and how it is triangulated with lived experience on the frontline and with data from sources other than the routine DHIS, are all important aspects to be strengthened to deepen the M&E process and improve service delivery. This would include discussions around important aspects of health system functioning that do not lend themselves to hard quantitative indicators, such as staff and patient experiences, staff morale, organisational culture and system resilience. Finally, this article represents a process of co-production of knowledge between researchers, academic institutions and health policy-makers, which is advocated by both the WHO and the Alliance for Health Policy and Systems Research, and is steadily taking root in SA.

    Data availability. N/a.

    Declaration. None.

    Acknowledgements. The SALAD collective is thanked for insightful feedback on the original pragmatic dashboard proposed.

Author contributions. The initial article was conceived by PB, and revised with inputs from TCM, KV, HM and HS. NN provided input on HST's dashboard and CG provided community-based indicators. PB wrote the first draft, and all authors gave comments and edits to this, which were incorporated into the final article.

Funding. This publication was made possible with funding from the School of Public Health and the SA Medical Research Council Health Services to Systems Research Unit, University of the Western Cape, SA.

    Conflicts of interest. None.

     

    References

1. Bitton A, Fifield J, Ratcliffe H, et al. Primary healthcare system performance in low-income and middle-income countries: A scoping review of the evidence from 2010 to 2017. BMJ Glob Health 2019;4(Suppl 8):e001551. https://doi.org/10.1136/bmjgh-2019-001551

2. Barron P, Mahomed H, Masilela TC, Vallabhjee K, Schneider H. District Health System performance in South Africa: Are current monitoring systems optimal? S Afr Med J 2023;113(12):e1614. https://doi.org/10.7196/SAMJ.2023.v113i12.1614

3. World Health Organization, United Nations Children's Fund. Primary health care measurement framework and indicators: Monitoring health systems through a primary health care lens. Geneva: WHO, UNICEF, 2022. https://iris.who.int/bitstream/handle/10665/352205/9789240044210-eng.pdf?sequence=1 (accessed 4 July 2024).

4. National Department of Health, South Africa. District Health Management Information System (DHMIS) Policy. Pretoria: NDoH, 2011. https://knowledgehub.health.gov.za/elibrary/district-health-management-information-system-dhmis-standard-operating-procedures-facility (accessed 5 July 2024).

5. National Department of Health, South Africa. The NDoH Data Dictionary. Pretoria: NDoH, 2024. https://dd.dhmis.org/ (accessed 6 July 2024).

6. Schneider H, Masilela T, Mndebele J, et al, on behalf of the South African Learning Alliance on the District Health System. Special Series on the District Health System. S Afr Med J 2023;113(11):e1653. https://doi.org/10.7196/SAMJ.2023.v113i11.1653

7. Rajan D, Papanicolas I, Karanikolos M, Koch K, Rohrer-Herold K, Figueras J. Health system performance assessment: A primer for policymakers. Copenhagen: World Health Organization, 2022. https://iris.who.int/handle/10665/364198 (accessed 9 July 2024).

8. European Commission. Health system performance assessment: Reporting and communicating. Practical guide for policy makers. Brussels: European Commission, 2017. https://health.ec.europa.eu/document/download/165d7fcd-1d09-4ef1-b815-f78a158ef335_en?filename=2017hspa_reportingcommunicating_en.pdf (accessed 9 July 2024).

9. Lai T, Al Salmi Q, Koch K, Hashish A, Ravaghi H, Mataria A. Health system performance assessment and reforms, Oman. Bull World Health Organ 2024;102(7):533-537. https://doi.org/10.2471/BLT.24.291750

10. Purohit N, Kaur N, Zaidi SR, et al. Assessing the WHO-UNICEF primary health-care measurement framework; Bangladesh, India, Nepal, Pakistan and Sri Lanka. Bull World Health Organ 2024;102(7):476-485C. https://doi.org/10.2471/BLT.23.290655

     

     

    Correspondence:
    P Barron
    pbarron@iafrica.com

    Received 30 September 2024
    Accepted 16 April 2025