
South African Journal of Bioethics and Law

On-line version ISSN 1999-7639

SAJBL vol.15 no.1 Cape Town  2022

http://dx.doi.org/10.7196/SAJBL.2022.v15i1.797 

REVIEW

 

Artificial intelligence in healthcare: Proposals for policy development in South Africa

 

 

S NaidooI; D BottomleyI; M NaidooII; D DonnellyIII; D W ThaldarIII

ILLB; School of Law, College of Law and Management Studies, University of KwaZulu-Natal, Durban, South Africa
IILLM; School of Law, College of Law and Management Studies, University of KwaZulu-Natal, Durban, South Africa
IIIPhD; School of Law, College of Law and Management Studies, University of KwaZulu-Natal, Durban, South Africa



ABSTRACT

Despite the tremendous promise offered by artificial intelligence (AI) for healthcare in South Africa, existing policy frameworks are inadequate for encouraging innovation in this field. Practical, concrete and solution-driven policy recommendations are needed to encourage the creation and use of AI systems. This article considers five distinct problematic issues that call for policy development: (i) outdated legislation; (ii) data and algorithmic bias; (iii) the impact on the healthcare workforce; (iv) the dilemma of imposing liability; and (v) a lack of innovation and development of AI systems for healthcare in South Africa. The adoption of a national policy framework that addresses these issues directly is imperative to ensure the uptake of AI development and deployment for healthcare in a safe, responsible and regulated manner.


 

 

Artificial intelligence (AI) in healthcare is not a novel concept: the application of such systems in medicine dates back to as early as the 1950s,[1] and pilot projects were deployed in Africa during the mid-1980s.[2] However, AI-enabled systems are currently transforming the healthcare sector at an unprecedented rate, through their use in evaluating the risk of disease onset and potential treatment outcomes, alleviating or reducing complications, ongoing patient care, clinical research and drug development.[3] The rapid growth of AI is due to quantum leaps in computing power, the growth of big data, and significant investments in the research and development of basic AI technologies.[4]

AI-enabled systems can provide patients with increased access to better-quality healthcare while simultaneously reining in rising medical costs and making treatments more affordable.[5] For example, using Vantage's AI-powered software, Ugu District Municipality in KwaZulu-Natal was the first district to achieve the UNAIDS 90-90-90 targets for HIV patient treatment adherence and monitoring.[6] More recently, a mobile app developed by Vantage was utilised for COVID-19 community screening in Mpumalanga Province.[7] Despite the tremendous potential offered by AI in transforming and improving healthcare in low-resource areas,[8] the development and deployment of such technologies give rise to several important social, legal and ethical concerns. It is therefore imperative that South Africa (SA) develops and implements appropriate policy and regulatory frameworks for the responsible use and governance of AI and data for healthcare,[9] in order to truly harness the benefits promised by such technologies.

In September 2021, an online workshop on AI in healthcare in SA was hosted by the University of KwaZulu-Natal (UKZN) School of Law ('the workshop').[10] During the workshop, five distinct problematic issues were identified as calling for policy development: (i) outdated legislation; (ii) data and algorithmic bias; (iii) the impact on the healthcare workforce; (iv) the dilemma of imposing liability; and (v) a lack of innovation and development of AI systems for healthcare in SA. In the present article, we provide a pragmatic legal analysis of these five issues and make recommendations for policy development. This article also refers to the high-level ethics principles developed to guide policymaking in relation to AI, around which there is robust international discourse. We recognise that there should be a debate in SA on the extent to which these principles should be applied in, or adapted for, the SA context. Such a debate falls outside the scope and purpose of the present article, but will be a fruitful area for future research.

 

Ethics principles for AI regulation

For the benefit of novice readers, in this section we include a brief overview of the emerging ethics principles relevant to the regulation of AI. In recent years, AI's ethical and social implications have attracted much attention from numerous public, private and non-governmental organisations, many of which have produced normative documents comprising principles and guidance for ethical and socially responsible AI.[11] The sheer volume of principles put forth by such organisations threatens to become overwhelming and perplexing, with the potential development of a 'market for principles' in which stakeholders cherry-pick those most beneficial for their purposes.[12] As a result, scholars have begun to analyse the content of these documents, identifying the extent to which a global consensus on AI ethics is emerging.[11] It is important to note that this 'global consensus' does not incorporate a uniquely African perspective. Instead, the ethical stance of African countries is represented through their relation to international or supranational organisations that have produced normative documents.[13] The process of assimilating these normative values into a constitutionally and culturally appropriate, binding legal instrument in SA must therefore still be undertaken, mindful that divergences in approach, interpretation and emphasis remain.[11] Nonetheless, for purposes of the present analysis, we emphasise that there is significant overlap between the principles put forward by these organisations. Interestingly, no single principle appears to be common to all documents. However, the results of different studies[11-14] aimed at conceptually categorising ethics topics, and reducing them to a smaller number, are highly consistent.[11]

The more extensive of these studies[11,13] identify five principles most frequently referenced across AI ethics documents, albeit with some differences in nomenclature, namely: (i) transparency; (ii) justice and fairness; (iii) non-maleficence; (iv) responsibility; and (v) privacy. The concepts considered below may be referred to as the 'normative core' of a principle-based approach to AI ethics and governance.[14] The inclusion of all the principles that make up this normative core in more recent documents suggests that the conversation around AI ethics is beginning to converge.[14] However, navigating the sea of modern AI ethics documents requires a general understanding of this normative core, as these principles are sometimes articulated in different ways. This will be illustrated with reference to the Organisation for Economic Co-operation and Development (OECD)'s 2019 Recommendation of the Council on AI[15] and the UNESCO Recommendation on the ethics of AI.[16]

Transparency

While this is the most prevalent principle in the current literature, there is significant variation in its articulation and interpretation.[13] Most references to transparency comprise 'efforts to increase explainability, interpretability or other acts of communication and disclosure'.[13] It is presented as a way to 'minimise harm and improve AI'[13] through the requirement that systems be developed and deployed to allow for human oversight, including through 'translation of their operations into intelligible outputs and provision of information' regarding their use.[14] The OECD articulates this principle as 'transparency and explainability', suggesting that AI actors should provide meaningful, context-appropriate information to foster a general understanding of AI systems; inform stakeholders and those affected by AI outcomes; and enable those adversely affected to challenge its outcome.[15] Similarly, UNESCO emphasises that transparency and explainability of AI systems is an essential component of a 'trustworthy' AI system.[16]

Justice and fairness

Justice is primarily articulated in terms of calls for fairness,[13] encompassing both inclusive access to the benefits of AI technologies and the elimination of unfair discrimination, which may be perpetuated by bias in the datasets on which AI systems are trained.[12] AI bias is already impacting individuals worldwide; appeals for AI technologies to be designed and used to maximise fairness and promote inclusivity are therefore articulated through fairness and non-discrimination principles.[14] In alignment with the general trend, the OECD articulates justice under the principle of 'human-centred values and fairness', which states that throughout the lifecycle of the system, AI actors should uphold the rule of law, human rights and democratic values, including, among others, non-discrimination and equality.[15] This appeal is reiterated under the banner of 'inclusive growth, sustainable development and well-being', which suggests that AI actors should proactively ensure the inclusion of under-represented populations and the reduction of economic, social, gender and other inequalities.[15] The themes of inclusion and non-discrimination are given equally strong emphasis by UNESCO.[16]

Non-maleficence

This principle encompasses general calls for safety and security, which stipulate that AI technologies should perform as intended, never cause foreseeable or unintentional harm, and be secured against access by unauthorised parties.[14] Interestingly, references to non-maleficence outweigh those to beneficence,[13] with most organisations prioritising caution against the overuse or misuse of AI technologies, which may lead to a plethora of negative consequences. The OECD explicitly asserts the principle of 'robustness, security and safety',[15] stating that 'AI systems should be robust, secure and safe throughout their entire lifecycle' so that they function appropriately and do not pose unreasonable risks.[15] Guidelines for harm prevention most often focus on technical measures and governance strategies.[13] In line with this tendency, the OECD suggests that AI actors 'apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis' to address any risks that may arise.[15] In the UNESCO recommendation, the principle of non-maleficence finds expression in the requirement to 'do no harm' through implementation of proportionality (the choice of AI technologies that do not exceed what is appropriate and proportional to achieve a legitimate aim) and sustainability (the implementation of AI measures that benefit rather than hinder the realisation of social, cultural, economic and environmental sustainability objectives).[16]

Responsibility

While 'responsible AI' is widely referenced, both responsibility and accountability are rarely defined in the literature.[13] Responsibility and accountability recognise the importance of mechanisms that ensure that accountability for harm caused by AI systems 'is appropriately distributed, and that adequate remedies are provided'.[14] AI developers, designers, institutions and 'industry' are variously referenced as being responsible and accountable for decisions made and harm caused by AI systems.[13] The OECD asserts this principle under the banner of 'accountability', noting that AI actors should be accountable not only for the proper functioning of AI systems but also for ensuring respect for all the principles contained within the Recommendation.[15] The stated definition of AI actors is quite broad, encompassing all those 'who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI'.[15] In the UNESCO recommendation, the scope of accountability is extended to require both AI actors and member states[16] to develop oversight, impact assessment, audit and due diligence mechanisms, as well as whistle-blower protections.

Privacy

Within the current literature, privacy is seen 'both as a value to uphold and as a right to be protected'.[13] It expresses the requirement that AI systems respect individual privacy, both in the use of personal data for training algorithms and in allowing individuals agency over their data and the decisions made from them.[14] Interestingly, the OECD does not reference privacy as a stand-alone principle; instead, it is mentioned only through its relation to other principles. Under the principle of 'human-centred values and fairness', it is recommended that AI actors respect the rule of law, human rights and democratic values, including privacy and data protection.[15] In the more recent UNESCO recommendation, however, the importance of privacy is underscored as being essential to human dignity, autonomy and agency.[16]

 


 

Outdated legislation

SA does not currently have any specific laws in place that deal with AI, but may draw guidance from the UNESCO recommendation on the ethics of AI.[16] Further, as a member of the G20 policy development group, SA is guided by the G20 AI principles,[17] which are drawn from the OECD Recommendation of the Council on AI.[15] In addition, AI applications developed for healthcare use in SA will have to comply with a range of national statutes. However, this legislative framework presents several barriers to the development and deployment of AI in healthcare.

One such barrier is found in the definition of 'medical device' in the Medicines and Related Substances Act (the Act).[18] To fall within the ambit of the definition, any machine or software must be intended by the manufacturer to be used in the diagnosis, treatment, monitoring or alleviation of any disease or injury, as well as in the prevention of any disease. General software that is not specifically intended for such a purpose will not be considered a medical device, even where it is used in a healthcare context.[19] This definition therefore leaves a significant class of AI used in healthcare outside the regulatory net. The gap is particularly concerning for AI-enabled systems such as COVID-19 chatbots, which provide symptom checking, reporting and exposure services, and which can have clear health implications when they incorrectly advise a patient.[19] A reconsideration, and subsequent widening, of the definition of medical device could help in the adoption and use of such technologies in a safe, responsible and regulated manner, in line with AI ethics principles.

Currently, the Act provides for a single-stage model of regulatory review for medical devices, according to predefined static specifications and standards.[19] However, this traditional review process is unsuitable for 'unlocked' AI systems, which can 'adapt and optimise device performance in real-time'[20] through the use of big-data analytics and machine learning. How the machine will respond to, and interpret, data may therefore not be entirely predictable to physicians or patients.[20] To address this issue, the Food and Drug Administration (FDA) in the US has proposed a novel, multi-stage approach, termed the 'total product lifecycle (TPLC) regulatory approach'.[20] It requires evaluation and monitoring of unlocked AI systems at both the pre-market and post-market stages.[20] This approach is admittedly more onerous on the manufacturer, who must provide more data and predictions regarding how the device may behave going forward. However, given the risks involved in sectors such as healthcare, it may provide a regulatory solution to the risks that unlocked AI systems pose.
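To make the post-market limb of such an approach concrete, the following minimal Python sketch (our illustration, not the FDA's method; the baseline figure and alert threshold are hypothetical) tracks a deployed model's accuracy on successive batches of labelled cases and flags drift below the performance established at pre-market review:

```python
from dataclasses import dataclass, field

@dataclass
class PostMarketMonitor:
    """Track batch-level accuracy of a deployed, 'unlocked' model
    against a pre-market baseline and alert on degradation."""
    baseline_accuracy: float   # accuracy established at pre-market review
    max_drop: float = 0.05     # tolerated absolute drop (hypothetical threshold)
    history: list = field(default_factory=list)

    def record_batch(self, predictions, labels) -> bool:
        """Record one batch of confirmed outcomes; return True if an alert fires."""
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
        self.history.append(accuracy)
        return accuracy < self.baseline_accuracy - self.max_drop

# Usage with invented figures: a model cleared at 92% accuracy pre-market.
monitor = PostMarketMonitor(baseline_accuracy=0.92)
if monitor.record_batch([1, 0, 1, 1], [1, 1, 0, 0]):
    print("Performance drifted below the cleared baseline; escalate for re-evaluation.")
```

In practice, a regulator would specify clinically meaningful metrics and escalation pathways; the point of the sketch is simply that an adaptive system calls for continuous, rather than one-off, evaluation.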

 

Data and algorithmic bias

One of the workshop's main points of discussion focused on the quality of the data used to train AI systems and the potentially biased outcomes that may arise therefrom. Biased datasets that do not accurately represent a model's use case can result in the AI producing skewed outcomes. In general, training data for machine learning projects in the healthcare sector have to be representative of the real world.[10] Where the data are not representative, this can lead to many problematic outcomes, including injustice, discrimination, false diagnoses and even the possibility of rendering treatments ineffective, which in turn jeopardises patient safety.[21] The issue of non-representative data is further exacerbated in the SA context, as the invention and development of AI technologies mostly occurs outside our borders.[22] Thus, the data used to train these systems may not be representative of the SA population. However, it was noted in the workshop that while representative data are the way forward in eliminating bias, even where we are able to train AI healthcare systems on ideal high-quality data (data that are accurate, complete, consistent, unique and timely), we still see the perpetuation of discrimination or bias through existing structural inequalities, in the form of algorithmic bias.[10]
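As a concrete illustration of one part of this problem, the short Python sketch below (purely illustrative; the column name and reference proportions are hypothetical, not drawn from any real dataset) compares the demographic composition of a training dataset against reference population proportions and flags under-represented groups:

```python
import pandas as pd

# Hypothetical reference shares for the target patient population
# (in practice these would come from census or health-system data).
POPULATION_SHARE = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def flag_underrepresented(df: pd.DataFrame, column: str,
                          reference: dict, tolerance: float = 0.5) -> dict:
    """Return groups whose share of the training data is less than
    `tolerance` times their share of the reference population."""
    observed = df[column].value_counts(normalize=True)
    return {g: (float(observed.get(g, 0.0)), expected)
            for g, expected in reference.items()
            if observed.get(g, 0.0) < tolerance * expected}

# Example with an invented, skewed training set.
train_df = pd.DataFrame({"population_group": ["group_a"] * 90
                         + ["group_b"] * 8 + ["group_c"] * 2})
for group, (actual, expected) in flag_underrepresented(
        train_df, "population_group", POPULATION_SHARE).items():
    print(f"{group}: {actual:.1%} of training data vs {expected:.1%} of population")
```

Passing such a check is necessary but not sufficient: as the workshop noted, algorithmic bias can persist even when the training data are statistically representative.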

A possible solution to the data and algorithmic bias problem is the establishment of an institution akin to the UK's Centre for Data Ethics and Innovation, which deals with ethical issues related to AI, including the quality of the data fed into AI processes.[23] The Centre's bias review programme investigates algorithmic bias across various sectors through literature review, technical research and public engagement workshops, and is aimed at producing recommendations to government on how any potential harms can be identified and minimised.[24]

The South African Presidential Commission on the Fourth Industrial Revolution (4IR Commission) recommended the establishment of an AI Institute as one of SA's technological development plans.[25] While the AI Institute is intended to form part of all current and future global initiatives on AI, and to deal with ethical issues arising from the development and deployment of AI, its assigned powers and functions remain unclear. We recommend that a specific body be established to deal with issues relating to bias or, alternatively, that the AI Institute be given a specific mandate to create a programme within its structure that deals with this matter.

 

Impact on workforce

While noting that AI is not meant to replace the work of physicians or other healthcare practitioners, but rather to complement, facilitate and enhance human work, the workshop participants cautioned that new technologies have an uncanny way of displacing and even deskilling workers.[10] The right to work was recognised as a critical consideration for policy development around AI.[21]

The World Health Organization (WHO) has estimated that by the year 2030 there will be a shortage of 18 million health workers, mostly in low- and middle-income countries (LMICs).[26] The WHO advocates the use of AI to help close the gap between the current and projected shortfall in healthcare workers and the ideal workforce required to provide appropriate healthcare.[27] In addition, the workshop participants expressed optimism that AI will not take jobs away but rather help in creating them.[10]

The workshop participants highlighted the need to initiate education on AI and to incorporate AI knowledge into the current healthcare system and workforce.[10] Such initiatives are especially relevant in LMICs, where local staff may have insufficient experience with information technology (IT) systems or electronic data management, and may have poor technological literacy.

Countries around the world appear to consider education a cornerstone of the uptake of AI. The Indian AI Task Force report released in March 2018 emphasised the need for change in the education curriculum and for reskilling of the labour force to ensure an AI-ready future.[29] In addition, the United Arab Emirates (UAE) government established an AI 'smart lab' in 2017 to train employees in the public and private sectors to implement AI in their respective fields.[23]

We suggest that the establishment of a similar education and reskilling programme in the SA healthcare context could encourage the use of, and trust in, AI-enabled systems. Such a programme may fall under the ambit of the AI Institute proposed by the 4IR Commission, which is to be responsible for ensuring capacity-building in AI.[25]

 

Imposing liability

SA's status as the first and, at the time of writing, only country to grant a patent listing an AI system as the inventor[30] was a keen topic of discussion at the workshop. The crux of the debate focused on the implications that such a patent may have for the legal subjectivity and, subsequently, the legal liability of AI. Participants proposed that the granting of inventorship is the first prong of a 'slippery slope' leading to the granting of legal subjectivity to AI which, in turn, would merely create a legal loophole allowing companies, developers and users of AI systems to foist legal and financial responsibility onto the AI itself.[10]

The possibility of developing company law as a model for AI liability was also scrutinised, as such a liability regime requires the ability to 'pierce the corporate veil' and identify who is directing the company. It was proposed that such an undertaking would be impossible in the context of AI that operates autonomously, as opposed to a company, which operates through decisions made by human beings.[10]

Additionally, SA common law imposes fault-based liability on the human healthcare practitioner, which entails that one may be held liable if one fails to meet the objectively measured standard expected of a reasonable practitioner in one's branch of the profession. However, the use of AI systems raises the explicability or 'black box' problem: the inner logic by which a machine reaches certain conclusions is arguably inscrutable to health practitioners and patients, making it virtually impossible for practitioners to foresee an error, take reasonable steps to prevent it, and so meet the required standard of care.[19] Furthermore, it is uncertain how a practitioner should proceed and maintain the required standard of care where an AI system, trained on far more data than a human could reasonably comprehend, recommends unconventional treatments. Crucially, the autonomous manner in which some of these AI systems operate also creates challenges in assigning fault. It is difficult to justify a finding of fault or negligence against a human for an AI decision, and yet it is also not currently possible to attribute liability to an AI system.

The WHO, in its 2021 report on the Ethics and Governance of AI for Health ('the report'), listed accountability as a key ethics principle.[27] Importantly, despite the challenges associated with establishing fault, which often arise from the very nature of the technology, the report firmly requires that there be human accountability, whether through sole, or joint and several, liability.[27] Liability is, however, not limited to the healthcare practitioner, but extends to the manufacturer. In the event that fault cannot be attributed to either party, it could lie with 'the government agency or institution that selected, validated and deployed it'.[27]

In terms of remedies, the report notes that it will be desirable to have different types of possible redress, including forms of compensation, rehabilitation, restitution and possible sanctions, with guarantees of non-repetition from entities that develop and deploy AI health systems.[27] Collective responsibility is proposed to avoid the diffusion of responsibility and to encourage all those involved in the creation and use of AI to act with integrity and to minimise harm.[27]

The use of fault-based legal liability regimes poses many issues for AI development and deployment in the healthcare context, and seems at odds with the understanding of accountability put forward by the WHO in its report. One possible solution may lie in replacing the existing idea of liability, based on the Western legal tradition, with a reconciliatory approach aligned with the African tradition, particularly the concept of ubuntu, which has been described as 'foundational to the spirit of reconciliation and bridge-building'.[31] Instead of focusing on questions such as 'Who acted?' and 'Was the act wrongful?', which cause the persons involved to become antagonistic and defensive, the focus should shift to learning how to better use AI in healthcare, and to actively developing guidelines for AI developers and healthcare professionals who use AI systems. But how can this work in practice?

A model of what an 'AI in Healthcare Reconciliation Commission' might look like can be drawn from the Commission for Conciliation, Mediation and Arbitration (CCMA),[32] the Truth and Reconciliation Commission (TRC)[33] and the Road Accident Fund (RAF),[34] none of which strictly adheres to traditional Western notions of fault-based legal liability. The basis for such a model may be discerned from the CCMA, the inception of which signalled a shift from 'a highly adversarial model of relations to one based on promoting greater co-operation, industrial peace and social justice'.[35] Instead of litigation, disputes must be resolved inter alia through reconciliation. Given that AI technology is still in its infancy, society must learn from actual disputes and develop relevant, detailed legal rules accordingly. In this light, the TRC can also serve as a model: it held broad investigative powers, could insist on access to relevant information, and provided a platform for victims to share their stories, with a view to making recommendations aimed at preventing such abuses in the future.[36]

A critical element of the AI in Healthcare Reconciliation Commission would be the introduction of an insurance scheme, akin to that of the RAF, to compensate victims for harm caused by AI systems in the healthcare setting. The RAF is responsible for rehabilitating and compensating injured persons, as well as actively promoting the safe use of SA roads.[37] The introduction of a mandatory insurance regime for AI civil liability has also been proposed in the European Union (EU).[38] The proposal was based on the understanding that liability coverage is a key factor in the success of new technologies and is vital for ensuring that the public can trust them. Under such a regime, strict liability is imposed. In this way, the difficulties of assigning fault are avoided, while adequate victim compensation is assured.

While patients who suffer harm in the context of the use of AI in healthcare should be compensated, this does not mean that there should be a legal battle, or that specific persons ought to pay the compensation. We suggest that at these early stages of adopting such a qualitatively different new technology, it is more important for society to learn from past mistakes and to plot an informed path ahead. This will be optimised by excluding litigation in favour of reconciliation. Of course, as guidelines are gradually developed by the AI in Healthcare Reconciliation Commission, these can be enforced through professional bodies, and can inform legislation regarding the development and ongoing regulatory oversight of AI in healthcare.

 

Lack of innovation and development

The workshop also examined AI-related patenting activity in SA. Although only an approximate measure, patents are an established and useful method of measuring innovation,[39,40] and have been used as a proxy for the state of innovation in AI in SA.[41] The figures that were presented, however, did not give much cause for optimism: during 2012 - 2021, 9 231 AI patent applications listed SA as a designated country, yet just 10 AI patents were filed from within SA itself.[22] As a possible solution, some believe that granting inventorship status to AI systems could result in a more enabling environment that promotes the development of complex cognitive and creative AI systems, subject, of course, to human beings upstream being the actual owners of the patent.[10]

The question then becomes: how do we truly drive innovation in the AI and healthcare arena, given that these technologies offer a potential solution to the resource and capacity constraints that SA currently faces? One answer may be found in leveraging a public-sector health data institution.

A serious obstacle to the uptake and development of AI in Africa is the availability of data and the costs associated with its acquisition.[2] The National Digital Health Strategy for South Africa 2019 - 2024 identifies the development of a patient electronic health record as a key priority.[42] Such a record system would provide a sufficient volume of high-quality, representative data with which to train AI systems. The standardised nature of the record would also alleviate the significant investment and effort otherwise required to curate non-optimised data and to ensure its suitability for analysis by an AI system.[28]

Making public-sector data available to develop, train and improve AI-enabled systems is not an unorthodox concept. In the Declaration of Cooperation on Artificial Intelligence, signed by 25 European countries in April 2018,[43] member states agreed to ensure better access to public-sector data in order to 'influence AI development, fuelling innovative business models and creating economic growth and new qualified jobs'.[43]

However, access to the sensitive health data of patients raises many privacy and security concerns. A robust legal framework or governance system for such data may be key to encouraging innovation while simultaneously preserving the privacy and security of patients. A federated data system could safeguard against these concerns: the data never leave the participating organisation that holds them, yet authorised individuals can still access them to train algorithms.[27]
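To illustrate the federated principle that only model parameters, never raw records, leave a participating organisation, consider this toy Python sketch of federated averaging (our illustration under simplifying assumptions; the three 'sites', their data and the linear model are all synthetic):

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a site's private data.
    Only the updated weights -- never X or y -- are sent back."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
# Three hypothetical organisations, each holding private patient-level data.
sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

weights = np.zeros(3)
for _ in range(20):
    # Each site trains locally; the coordinator only ever sees parameters.
    updates = [local_update(weights, X, y) for X, y in sites]
    weights = np.mean(updates, axis=0)  # federated averaging

print("Global model weights after 20 rounds:", weights)
```

Real deployments would add secure aggregation, access control and auditing on top of this basic loop, but the core privacy property shown here, that the raw data stay in place, is the one highlighted in the WHO guidance.[27]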

If a public-sector data institution can be established alongside the proposed patient electronic health record, allowing SA developers to access health data in a secure and safe manner that respects the rights of the patient, the development and deployment of AI for use in healthcare in SA can be properly incentivised.

 

Conclusion

The potential of AI in healthcare is well documented in the substantial body of research on the topic and in several global, multilateral and national policy frameworks. National strategies for the use of AI in healthcare and the wider health system are currently being developed by numerous countries around the world.[9] These strategies typically acknowledge the significant role played by governments in creating an enabling environment for the adoption and use of AI for the greater good of society. Accordingly, it is imperative that the SA government embrace this central role through the adoption of a national policy framework to ensure the uptake of AI development and deployment for healthcare in a safe, responsible and regulated manner.

Acknowledgements. None.

Author contributions. SN - conceptualisation, project co-ordination, writing of original draft; DB - conceptualisation, writing of original draft (liability); MN - conceptualisation, writing of original draft (innovation and development); DD - conceptualisation, revision, supervision; DT - conceptualisation, revision, supervision, funding acquisition.

Funding. The support of the HSRC/Facebook Ethics & Human Rights and AI in Africa grant is gratefully acknowledged. We acknowledge the support of the US National Institute of Mental Health and the US National Institutes of Health (award number U01MH127690). The content of this article is solely our responsibility and does not necessarily represent the official views of the US National Institute of Mental Health or the US National Institutes of Health.

Conflict of interest. None.

 

References

1. Tran BX, Vu GT, Ha GH, et al. Global evolution of research in artificial intelligence in health and medicine: A bibliometric study. J Clin Med 2019;8(3):360. https://doi.org/10.3390/jcm8030360

2. Owoyemi AJ, Boyd AD. Artificial intelligence for healthcare in Africa. Front Digit Health 2020;2:6. https://doi.org/10.3389/fdgth.2020.00006

3. Becker A. Artificial intelligence in medicine: What is it doing for us today? Health Policy Technol 2019;8(2):198-205. https://doi.org/10.1016/j.hlpt.2019.03.004

4. Schoeman W, Moore R, Seedat Y, Chen JY. Artificial intelligence: Is South Africa ready? https://accenture.com/_acnmedia/pdf-107/accenture-ai-south-africa-ready.pdf (accessed 30 September 2021).

5. Sallstrom L, Morris O, Mehta H. Artificial intelligence in Africa's healthcare: Ethical considerations. ORF Issue Brief No. 312. 2019. https://orfonline.org/research/artificial-intelligence-in-africas-healthcare-ethical-considerations-55232/ (accessed 30 September 2021).

6. USAID. Vantage software from USAID partner BroadReach spotlighted at Microsoft conference. 2021. https://www.usaid.gov/global-health/health-areas/hiv-and-aids/information-center/news-and-updates/vantage-software-usaid-partner (accessed 19 February 2022).

7. BroadReach Group. Mpumalanga launches Vantage mobile app to stop Covid-19 spread. Cape Town: BroadReach, 2020. https://broadreachcorporation.com/together-we-will-conquer-mpumalanga-launches-mobile-app-to-stop-covid-19-spread/ (accessed 19 February 2022).

8. Hoodbhoy Z, Hasan B, Siddiqui K. Does artificial intelligence have any role in healthcare in low resource settings? J Med Artif Intell 2019;2:13. https://doi.org/10.21037/jmai.2019.06.01

9. Singh V. AI and data in South Africa's health sector. Pretoria: Policy Action Network, 2020. https://policyaction.org.za/sites/default/files/PAN_TopicalGuide_AIData6_Health_Elec.pdf (accessed 4 October 2021).

10. University of KwaZulu-Natal. Virtual workshop: Artificial intelligence in healthcare in South Africa. Durban: UKZN, 2021. https://law.ukzn.ac.za/virtual-workshop-artificial-intelligence-in-healthcare-in-south-africa/ (accessed 14 October 2021).

11. Schiff D, Borenstein J, Biddle J, et al. AI ethics in the public, private, and NGO sectors: A review of a global document collection. IEEE Trans Technol Soc 2021;2(1):31-42. https://doi.org/10.1109/TTS.2021.3052127

12. Floridi L, Cowls J. A unified framework of five principles for AI in society. Harvard Data Sci Rev 2019;1(1). https://doi.org/10.1162/99608f92.8cd550d1

13. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell 2019;1:389-399. https://doi.org/10.1038/s42256-019-0088-2

14. Fjeld J, Achten N, et al. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society, 2020. https://cyber.harvard.edu/publication/2020/principled-ai (accessed 17 May 2022).

15. Organisation for Economic Co-operation and Development. Recommendation of the Council on Artificial Intelligence. Paris: OECD, 2019. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed 19 February 2022).

16. United Nations Educational, Scientific and Cultural Organization. Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO, 2021. https://unesdoc.unesco.org/ark:/48223/pf0000380455 (accessed 19 February 2022).

17. Group of Twenty. G20 Ministerial Statement on Trade and Digital Economy. Japan: G20, 2019. https://www.mofa.go.jp/files/000486596.pdf (accessed 19 February 2022).

18. South Africa. Medicines and Related Substances Act No. 101 of 1965.

19. Donnelly D. AI in healthcare in South Africa: Dusty-Lee Donnelly. Durban: UKZN, 2021. https://www.youtube.com/watch?v=W32ga8-UynI (accessed 14 October 2021).

20. US Food and Drug Administration. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): Discussion paper and request for feedback. Silver Spring: FDA, 2019.

21. Arowosegbe J. AI in healthcare in South Africa: Jacob Arowosegbe. Durban: UKZN, 2021. https://www.youtube.com/watch?v=onuZqEvfw6M (accessed 14 October 2021).

22. Naidoo M. AI in healthcare in South Africa: Meshandren Naidoo. Durban: UKZN, 2021. https://www.youtube.com/watch?v=4bwuFNh6nLU (accessed 14 October 2021).

23. Access Partnership. Artificial intelligence for Africa: An opportunity for growth, development and democratisation. 2018. https://www.accesspartnership.com/artificial-intelligence-for-africa-an-opportunity-for-growth-development-and-democratisation/ (accessed 30 September 2021).

24. Centre for Data Ethics and Innovation. Introduction to the Centre for Data Ethics and Innovation. p. 8. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/973932/CDEI_Introduction-booklet_V2.pdf (accessed 22 September 2021).

25. National Department of Communications and Digital Technologies. Summary report and recommendations presented by the Presidential Commission on the Fourth Industrial Revolution. Pretoria: Government Gazette No. 42388:43834, 2019.

26. World Health Organization. Health workforce. Geneva: WHO, 2021. https://www.who.int/health-topics/health-workforce#tab=tab_1 (accessed 2 October 2021).

27. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: WHO, 2021. https://www.who.int/publications/i/item/9789240029200 (accessed 30 September 2021).

28. World Health Organization. Ethics and governance of artificial intelligence (AI) in global health: Background document for WHO. Geneva: WHO, 2020.

29. Hickok E, Mohanda S, Barooah SP. The AI Task Force Report - the first steps towards India's AI framework. Bengaluru: Centre for Internet and Society, 2018. https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework (accessed 23 September 2021).

30. Thaldar DW, Naidoo M. AI inventorship: The right decision? S Afr J Sci 2021;117(11/12):12509. https://doi.org/10.17159/sajs.2021/12509 (accessed 29 November 2021).

31. Dikoko v Mokhatla [2006] ZACC 10; 2006 (6) SA 235 (CC) para 113.

32. South Africa. Labour Relations Act No. 66 of 1995.

33. South Africa. Promotion of National Unity and Reconciliation Act No. 34 of 1995.

34. South Africa. Road Accident Fund Act No. 56 of 1996.

35. Commission for Conciliation, Mediation and Arbitration. About us. Johannesburg: CCMA, 2021. https://ccmarecovery.syncrony.com/About-Us (accessed 27 October 2021).

36. Apartheid Museum. The Truth and Reconciliation Commission (TRC). Johannesburg: Apartheid Museum, 2021. https://www.apartheidmuseum.org/exhibitions/the-truth-and-reconciliation-commission-trc (accessed 5 October 2021).

37. Road Accident Fund. Mandate. Centurion: RAF, 2021. http://www.raf.co.za/About-Us/Pages/profile.aspx (accessed 27 October 2021).

38. European Parliament. Civil liability regime for artificial intelligence. Strasbourg: European Parliament, 2021. https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html (accessed 27 October 2021).

39. Graham SJH, Merges RP, Samuelson P, Sichelman T. High technology entrepreneurs and the patent system: Results of the 2008 Berkeley patent survey. Berkeley Technol Law J 2009;24(4):1256-1328. https://doi.org/10.2139/ssrn.1429049 (accessed 15 February 2022).

40. South African Department of Science and Technology. White paper on science, technology and innovation. Pretoria: DST, 2019. https://www.gov.za/sites/default/files/gcis_document/201912/white-paper-science-technology-and-innovation.pdf (accessed 15 February 2022).

41. Jordaan DW. Biotech innovation in South Africa: Twenty years in review. Biotechnol Law Rep 2016;35(1). http://doi.org/10.1089/blr.2016.29000.dwj (accessed 15 February 2022).

42. National Department of Health, South Africa. National Digital Health Strategy for South Africa 2019 - 2024. https://www.health.gov.za/wp-content/uploads/2020/11/national-digital-strategy-for-south-africa-2019-2024-b.pdf (accessed 19 February 2022).

43. European Union. EU Declaration on Cooperation on Artificial Intelligence. 2018. https://ec.europa.eu/jrc/communities/en/node/1286/document/eu-declaration-cooperation-artificial-intelligence (accessed 2 October 2021).

 

 

Correspondence:
D Donnelly
donnellyd@ukzn.ac.za

Accepted 16 March 2022

All the content of this journal, except where otherwise identified, is licensed under a Creative Commons Attribution License.