
SAMJ: South African Medical Journal

Online version ISSN 2078-5135
Print version ISSN 0256-9574

SAMJ, S. Afr. med. j. vol.114 no.1 Pretoria Jan. 2024

http://dx.doi.org/10.7196/SAMJ.2024.v114i2.1631 

IN PRACTICE
CONTEMPORARY ISSUES IN MEDICINE

 

Artificial intelligence (AI) or augmented intelligence? How big data and AI are transforming healthcare: Challenges and opportunities

 

 

K Moodley

DPhil; Division of Medical Ethics and Law, Department of Medicine, Faculty of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa



ABSTRACT

The sanctity of the doctor-patient relationship is deeply embedded in tradition - the Hippocratic oath, medical ethics, professional codes of conduct, and legislation - all of which are being disrupted by big data and 'artificial' intelligence (AI). The transition from paper-based records to electronic health records, wearables, mobile health applications and mobile phone data has created new opportunities to scale up data collection. Databases of unimaginable magnitude can be harnessed to develop algorithms for AI and to refine machine learning. Complex neural networks now lie at the core of ubiquitous AI systems in healthcare. A transformed healthcare environment enhanced by innovation, robotics, digital technology, and improved diagnostics and therapeutics is plagued by ethical, legal and social challenges. Global guidelines are emerging to ensure governance in AI, but many low- and middle-income countries have yet to develop context-specific frameworks. Legislation must be developed to frame liability and account for negligence due to robotics in the same way human healthcare providers are held accountable. The digital divide between high- and low-income settings is significant and has the potential to exacerbate health inequities globally.


 

 

Healthcare has evolved significantly from its inception, when the sanctity of the doctor-patient relationship was based on Hippocratic traditions.[1] William Osler described medicine as 'a science of uncertainty and an art of probability'.[2] The human art of healing was embedded in a relationship of trust, with history-taking and the ritual of human touch via physical examination being critical methods to reach a diagnosis and prescribe treatment.

The use of the physician's ear to listen to heart sounds in the early 19th century gave way to simple technology in the form of a wooden stethoscope developed by Dr Rene Laennec in 1816.[3] The stethoscope was refined in 1852 by Arthur Leared and George Camman.[4] In 2019, the first artificial intelligence (AI)-powered stethoscope was on the market.[5,6]

However, AI has been in development since the 1950s. Alan Turing introduced the concept of a machine capable of thought in 1950.[7] John McCarthy, widely known as the father of AI, coined the term in 1956.[8] AI refers to the use of computers to imitate human intelligence and the ability to think critically.[8] Given the complexity of the human brain, neurological system and critical thought processes, AI is technically challenging at multiple levels. Incorporating human cognitive functions such as logic, reasoning, perception, association, planning, prediction, natural language processing and motor control into AI technologies is highly complex. Achieving this in healthcare is even more challenging because medical AI is unforgiving in terms of the cost of errors, yet herein lies the potential for the greatest impact and opportunity. It is therefore unsurprising that AI and big-data analytics have become ubiquitous in medicine and are transforming healthcare, medical research and public health across multiple disciplines.

In many hospitals globally, robots have been used to assist with less complex tasks such as the delivery of medication and meals.[9,10]

During the COVID-19 pandemic, digital technology was widely employed to facilitate multiple tasks, including communication between hospitalised patients and families at home.[11]

However, for diagnostic functions, more sophisticated AI algorithms are being employed. The visual image-based medical disciplines such as dermatology, radiology and pathology are more likely to transform first. A study conducted at Stanford University in 2017 incorporated 129 450 images of 2 032 different diseases into an algorithm and tested the ability of the AI technology to differentiate between malignant and benign skin lesions - keratinocyte carcinomas versus benign seborrhoeic keratoses, and malignant melanomas versus benign naevi. The AI technology was found to be equivalent to the diagnostic competence of 21 board-certified dermatologists.[12]

Rapid advances in oncology are occurring. Screening for breast cancer usually involves mammography, the images of which are read by two radiologists. Research conducted in Sweden on 80 000 women and published in Lancet Oncology has provided interim results in which AI-supported screening detected 20% more cancers than human readers and reduced the screen-reading workload by 44%.[13]

Undoubtedly, meteoric progress in AI-enhanced healthcare is occurring, and generative AI is amplifying its impact.

 

Generative AI: Large language models and large multimodal models

Language is central to all forms of human communication. It is critical to convey ideas and concepts and is particularly important in the healthcare provider-patient relationship. Language models in AI serve a similar purpose. Large language models (LLMs) and large multimodal models (LMMs) represent a quantum leap in AI technology. Unlike smaller language models, LLMs contain more than a billion parameters. In machine learning parlance, parameters refer to the variables used to train LLMs to generate new content. Since ChatGPT (Generative Pre-trained Transformer) was launched in 2022, this new AI chatbot has been playing several different roles in healthcare and health research.[14] This advanced language model has been trained on massive volumes of internet text using deep learning techniques.[15] When prompted, it generates human-like text and can perform various roles in clinical medicine, health research and health sciences education. Early adopters began using ChatGPT to assist with administrative and bureaucratic tasks such as writing sick certificates, patient letters, and motivation letters to medical insurers for access to costly medications for patients.[1] But it could also assist in real-world clinical workflows related to diagnosis or triage, which is critical in resource-depleted settings like South Africa (SA), and in participant enrolment in clinical trials.
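To make 'parameters' concrete, the following minimal Python sketch (using PyTorch; the toy transformer and its dimensions are illustrative assumptions, not any production model) counts the learnable weights in a small network. LLMs scale this same count into the billions.

import torch.nn as nn

# A toy transformer encoder; real LLMs stack far larger layers.
toy_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256),
    num_layers=2,
)

# Every learnable weight and bias counts as one 'parameter'.
n_params = sum(p.numel() for p in toy_model.parameters())
print(f"Toy model parameters: {n_params:,}")  # tens of thousands, not billions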

LLMs work mostly with language-related text, but is language enough in healthcare? Furthermore, standard language and medical language differ significantly. The clinician-patient interaction requires so much more - comprehensive history and physical examination findings, blood and radiology results, genomics, nonverbal cues, emotional input from patients, and consideration of social determinants of health - to assist with diagnosis and treatment. Hence LMMs like GPT-4 and Med-PaLM 2, which comprise over a trillion parameters and incorporate text, images, audio and video, are more applicable. They have more diverse capability and substantial potential for innovation in healthcare.[16]

Clearly, the potential for AI in advancing healthcare is enormous. To realise this transformative potential of AI, high-quality massive sets of data are required for algorithms and machine learning. Big data has been described as 'the oxygen on which AI depends'.[17]

 

A virtuous cycle or a vicious cycle: Big data, artificial intelligence, machine learning?

Unlike conventional research datasets, big data is defined by its volume, the velocity with which it is produced, processed and analysed, and the variety (heterogeneity and complexity) of the data that can be generated. Most importantly, data quality and reliability are essential - veracity is critical, and value to clinicians and patients is non-negotiable. Variability refers to the consistency of data over time.[18] Big-data analytics assist in uncovering trends, patterns and correlations in large volumes of raw data to help make data-informed decisions. Big data is essential for machine learning, the most common form of AI, which learns from data sets and improves its assessments over time.
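As an illustration of that last point, here is a minimal Python sketch (scikit-learn, on synthetic data - an assumption standing in for real health records) showing that the same model typically scores better as the volume of training data grows:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large clinical data set.
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The same model, trained on progressively more data, generally
# assesses unseen cases more accurately.
for n in (100, 1_000, 10_000):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))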

 

Potential harms with AI

As with all new technology, the potential for harm is a constant and tangible concern. Technical debt is incurred when innovation occurs and is rapidly implemented without adequate safety checks. Ethical debt is incurred when AI tools are 'created and deployed without fully examining and addressing the potential ethical consequences'.[17] The cornerstone of medical ethics is 'first do no harm', so the potential benefit of new technology and AI-driven systems in healthcare must be carefully weighed against harms that could later be catastrophic.

Poor-quality, inaccurate information

Like humans, LLMs and LMMs can make errors. More concerning, some LLMs may hallucinate data or produce false information that is not based on original training data.[19] This potentially contaminates the integrity of evidence-based medicine.[1] These hallucinations occur during reinforcement learning. Hallucinating medical information is harmful because it may fuel infodemics, as occurred during the COVID-19 pandemic.

Bias and discrimination

Healthcare data have always been associated with human bias that may be expressed in various forms.[20] Given the historical bias inherent in medical data produced in the clinical environment and via medical research, AI has the potential to amplify such bias.[21] This is because the large volumes of data required for AI have become 'the oxygen on which AI depends', and data that are 'inculcated with decades of ... discriminatory behavior' are likely to bias diagnosis and treatment.[17] Historically, women and children have been excluded from clinical trials.[22,23] Age-related algorithmic bias is particularly concerning.[24] Likewise, people with disabilities and some ethnic groups are not well represented in clinical research.[25] Such biases exacerbate health disparities, as became evident with the use of pulse oximeters during the COVID-19 pandemic.[26,27] Automation bias is equally concerning: healthcare professionals may overlook errors made by AI systems, thereby overestimating benefits and minimising risks.[28,29] Although full decision-making has not yet been transferred from human healthcare providers to AI, a risk of automation bias still exists. It also raises the question of whether full delegation would be legal, as laws increasingly recognise the right of individuals not to be subjected to solely automated decisions when such decisions would have a significant effect.
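One practical counter-measure is to audit model performance by subgroup before deployment. The Python sketch below (NumPy, with simulated labels and a hypothetical two-group split) illustrates the idea by comparing sensitivity across groups; a real audit would use actual model predictions and recorded demographics.

import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1_000)   # hypothetical demographic groups
y_true = rng.integers(0, 2, size=1_000)      # simulated true disease status
# Simulate a model that misses more true positives in group B.
miss = (group == "B") & (y_true == 1) & (rng.random(1_000) < 0.3)
y_pred = np.where(miss, 0, y_true)

# Sensitivity (recall) per group: a gap here signals algorithmic bias.
for g in ("A", "B"):
    positives = (group == g) & (y_true == 1)
    sensitivity = (y_pred[positives] == 1).mean()
    print(f"Group {g}: sensitivity = {sensitivity:.2f}")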

Erosion of clinical competence and dehumanised healthcare

Medical education has traditionally spanned several years during which knowledge and skills development are inculcated. Complex surgical and medical procedures are mastered by diligent practice and repetition. The potential for loss of skills exists when over-reliance on AI technology occurs.[30] Perceptions of loss of skills may erode trust in healthcare providers if patients believe that AI could impair human judgement or reduce clinical competence.

Privacy breaches

Successful and efficient AI depends on machine learning, which in turn requires that data are constantly fed back into AI neural networks. If identifiable patient data are fed into LLMs, they form part of the data that the AI system will use in the future. In other words, the more healthcare providers feed detailed and specific patient information into LLMs, the higher the risk of sensitive information becoming vulnerable to disclosure. Confidentiality of patient information anchors the value of trust in the doctor-patient relationship. LLMs threaten data privacy - a risk that vulnerable patients may not fully understand. This risk undermines consent processes in AI-assisted healthcare, creating fertile ground for litigation. Cybersecurity risks are increasing exponentially.[31] Despite attempts to protect data privacy via de-identification methods, anonymisation and pseudonymisation, concerns persist around data security, as several data points, especially from multiple large data sets, may unmask data assumed to be concealed.[32] Three-dimensional brain imaging has the potential for facial recognition, and despite the availability of software packages for de-identification of facial images, protection of identity may not always be possible.
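As a concrete illustration of pseudonymisation and its limits, the following minimal Python sketch (standard library only; the identifier format and record fields are hypothetical) replaces a direct identifier with a salted hash while leaving quasi-identifiers that could still be linked across data sets:

import hashlib
import secrets

SALT = secrets.token_hex(16)  # keep secret, stored apart from the data

def pseudonymise(patient_id: str) -> str:
    # Replace a direct identifier with a keyed one-way token.
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:12]

record = {
    "id": pseudonymise("SA-19870114-0042"),  # hypothetical hospital number
    "birth_year": 1987,
    "postcode": "7505",
    "diagnosis": "TB",
}
# The direct identifier is concealed, but birth_year + postcode may be
# unique enough to unmask the patient when joined with another data set.
print(record)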

Anthropomorphisation

When human-like characteristics such as emotion are attributed to non-human entities, the potential for harm from anthropomorphisation becomes obvious. AI systems are being built to mimic humans in terms of voice, physical attributes and the development of avatars. Prompting a chatbot to 'talk like a doctor' is one example. The potential for psychosocial harm is substantial.[33] This is particularly concerning when children are exposed to anthropomorphic AI technology.

Social justice, commercialisation of data and the digital divide

Inequity has many dimensions, including gender, geography, culture, religion and language. Differences in socioeconomic levels contribute to discrepancies in data literacy. Coupled with data and algorithmic bias, the production of low-quality, non-representative data could exacerbate inequities in healthcare. Although data are obtained freely from multiple sources, the downstream monetisation of AI services has created concern and controversy among data donors. The corporate sector survives on profit generation. Consequently, resource-rich countries with better access to generative AI may be able to extract more data from resource-poor countries (low- and middle-income countries, LMICs) at higher speeds.[1] Similar to what has occurred with biosamples,[34,35] data extraction across asymmetrical power differentials may easily be construed as exploitation of indigenous knowledge from marginalised populations.

Sustainability and environmental impact

Generation of big datasets is energy intensive. Some technology companies use electricity produced by fossil fuels, and data centres may also have a substantial water footprint. Although digital technologies such as online platforms have reduced international travel and our carbon footprint, the disproportionate energy and water consumption associated with big-data storage and use is concerning with regard to environmental impact.[36]

 

Governance throughout the AI life cycle

The potential for exploitation fuelled by unequal access and asymmetrical technological power underlines the importance of having specific regulations to govern the health uses of generative AI in LMICs. While global guidelines are emerging to promote governance in AI, many LMICs have yet to adapt and contextualise these frameworks or, better still, develop their own guidelines.

 

Ethics and values

World Health Organization AI guidelines

In response to the concerns raised by AI in healthcare, in 2021 the World Health Organization (WHO) published guidance on ethics and governance for AI in health.[37] Several ethical principles were outlined.

Protecting human autonomy

Respect for autonomy creates obligations with regard to informed consent and confidentiality. Obtaining consent for the use of data from electronic health records and in primary data science projects is important, as secondary uses of data may be anonymised. Data collected via various sources are fundamental to developing algorithms for AI and to using AI in healthcare. However, patients must be aware of and consent to the use of their health data and the use of AI in their healthcare. A high level of data literacy is necessary to facilitate autonomous decision-making. Low levels of data literacy remain a concern in resource-poor settings.

Promoting human wellbeing, safety and public interest

With all medical interventions, including drugs and devices, quality and safety are central. Likewise, high-quality, accurate, unbiased data are essential to contribute to safety in AI. When AI models hallucinate, fictional data are generated. It is important to minimise hallucinations by using clean, specific prompts or by using multishot prompting with more examples. The end users of AI in healthcare are usually patients who are vulnerable as a result of ill health. This vulnerability is exacerbated where children are concerned. Age-associated algorithmic bias has the potential to compromise safety in paediatric healthcare.[24]
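To show what 'multishot' (few-shot) prompting looks like in practice, here is a minimal Python sketch that assembles a prompt from worked examples (the clinical notes and summaries are invented placeholders). The examples constrain the model toward a verifiable format and explicitly permit it to flag missing information rather than guess:

# Hypothetical worked examples embedded in the prompt itself.
EXAMPLES = [
    ("Patient reports 3 days of productive cough and fever.",
     "Possible lower respiratory tract infection; recommend clinical review."),
    ("Routine visit, no complaints, vitals normal.",
     "No acute issue identified; routine follow-up."),
]

def build_prompt(note: str) -> str:
    shots = "\n\n".join(f"Note: {n}\nSummary: {s}" for n, s in EXAMPLES)
    return (
        "Summarise each clinical note in one sentence. "
        "If information is missing, say so rather than guessing.\n\n"
        f"{shots}\n\nNote: {note}\nSummary:"
    )

print(build_prompt("Patient reports intermittent chest pain on exertion."))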

Transparency, explainability, intelligibility

'Black-box' AI models may produce an output without explaining the underlying reasoning. At the heart of all scientific development lies the obligation for science translation, public consultation and engagement. Accountability and transparency in medical decision-making are ethical obligations. Interpretable AI uses 'white-box' algorithms that are easier to understand.[38] Explainable AI uses a second AI algorithm to explain black-box algorithms. Democratising medical knowledge is an important benefit of AI, but can only be achieved with public engagement and efforts to improve data literacy.[39]
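One common way to 'use a second algorithm to explain a black box' is a surrogate model. The Python sketch below (scikit-learn, on synthetic data - an assumed stand-in for clinical features) fits a shallow, human-readable decision tree to the predictions of an opaque random forest:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels,
# so the tree approximates the opaque model's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))

The surrogate is only an approximation of the black box, so in practice its fidelity to the original model must itself be measured before its explanations are trusted.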

Fostering responsibility and accountability

Developers of AI technology may be situated outside the healthcare profession in biotechnology industries, computer or data science start-up companies, and other corporate structures. Just as the healthcare profession, pharmaceutical companies, vaccine manufacturers and medical device companies are held to high standards in the healthcare ecosystem, so too must AI and big-data stakeholders be held responsible.

Inclusiveness and equity

Inequity manifests in various ways between resource-rich and resource-poor settings. Improved access to digital technology as well as benefit sharing in big-data research and AI is important to correct inequity.

Promotion of responsible, sustainable AI

Big-data storage requires climate control, which is water intensive. Finding suitable energy sources that impact minimally on the environment is essential. Sustainability of the environment, workspaces and health systems must be considered.

AI Bill of Rights

Similar to the WHO AI ethics guidelines, the AI Bill of Rights, released in October 2022, focuses on the protection of humans from unsafe or ineffective systems.[40] Algorithmic discrimination (unjustified different treatment based on factors such as ethnicity, gender, sexual orientation, religion, disability and age) must be minimised. Protection from abusive data and labour practices, including the use of poorly paid technology workers from LMICs,[41] is necessary via built-in safeguards and agency over how data are processed and used. The public must know when an automated system is being used - for example, that they are talking to a chatbot and not a human - and must understand its impact. Humans should have the ability to opt out, where appropriate, and have access to human services - for example, it should be possible to choose between a human surgeon and robotic surgery.[42]

Professional guidelines: Health Professions Council of South Africa

Apart from limited guidance on telehealth, the current Health Professions Council of South Africa guidelines do not include AI-specific guidance for health professionals.[43] Likewise, the draft updated guidance on research ethics in SA being developed by the National Health Research Ethics Council currently lacks content on the ethical impact of big data and AI on health research. However, a section on AI and big data in health research is under development.

Medicolegal challenges and legislative loopholes in SA

Similar to medical malpractice claims against human healthcare providers, the potential for liability claims in the context of digitally enhanced healthcare is complex. A doctor could reject good advice from an AI tool or follow inaccurate advice from one.[38] Further concerns exist around who bears liability where technology is concerned. Infringement of copyright laws is currently being contested, as major concerns have arisen regarding the sources of the datasets used to train LLMs, LMMs and other AI systems.[15]

Legal frameworks are being developed globally, particularly in jurisdictions such as China, the USA and the European Union (EU). The EU AI Act proposed by the European Commission in 2021 is a landmark piece of legislation currently awaiting approval by the EU Council and the EU Parliament.[44] It remains mired in controversy at the time of publication. SA has no AI-specific legislation, but important provisions in the SA Constitution, such as transparency, accountability and respect for human rights, apply.[45,46] The Consumer Protection Act 68 of 2008 may assist to a certain extent, but does not explicitly cover generative AI or other AI technologies.[47] The Protection of Personal Information Act 4 of 2013 regulates data processing and is intended to protect privacy, but does not deal specifically with AI.[48] Harmonisation of regulatory frameworks is imperative in view of the legislative nuances in resource-poor regions of the world.

 

Exploring new frontiers

Health in the metaverse

Virtual worlds and virtual communities have existed for many decades, but are less sophisticated than the metaverse. Although this may appear to be a stretch of the imagination, a metaverse is a three-dimensional virtual space that may be accessed online using devices such as virtual reality and augmented reality headsets. Healthcare providers and patients may enter this meta space as avatars to explore and meet each other in virtual clinics.[49] While only limited services - such as counselling and physiotherapy - may be provided in a virtual space, the extent to which medical funders will financially support therapy in the metaverse is uncertain. Such innovation will align the profession more closely with artificial general intelligence.

Digital twin technology

Digital twin technology uses computational models of complex systems to integrate data from multiple sources. The concept has been borrowed from engineering, but is highly applicable in clinical medicine. Given that healthcare providers draw on heterogeneous data from multiple sources to reach a diagnosis and plan treatment for complex human beings who differ from each other in several respects, multimodal biomedical data will more closely mimic decision-making in clinical medicine. Digital twin technology holds great promise for precision medicine and for clinical trials in drug discovery.[50]

Conclusion

Balancing innovation with rigorous regulation is a moral imperative in the design, development and deployment of new digital technologies throughout the AI life cycle. This moral imperative is non-negotiable in healthcare, as human dignity and the inherent worth of humans are the central values upon which all other ethical principles rest. Diminishing the role of human intelligence, experience and empathy in the care of biological life is a risk of AI in healthcare. A hybrid approach in which AI augments human healthcare provision seems prudent. Health science students must be educated and prepared for a future in which digitally augmented healthcare will be the norm. While the enormous benefits of augmented intelligence in healthcare must be celebrated, the energy and water footprints of the big data used to fuel AI must be minimised.

Declaration. None.

Acknowledgements. The National Institutes of Health and the Research for Ethical Data Science in sub-Saharan Africa (REDSSA) team.

Author contributions. Sole author.

Funding. Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under Award Number U01MH127704. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.

Conflicts of interest. None.

 

References

1. Moodley K, Rennie S. ChatGPT has many uses: Experts explore what this means for healthcare and medical research. The Conversation, 22 February 2023. https://theconversation.com/chatgpt-has-many-uses-experts-explore-what-this-means-for-healthcare-and-medical-research-200283 (accessed 12 October 2023).

2. Bean WB. Sir William Osler: Aphorisms from his bedside teachings and writings. Br J Philos Sci 1954;5(18):172-173. https://philpapers.org/rec/BEASWO (accessed 17 October 2023).

3. Roguin A. Rene Theophile Hyacinthe Laënnec (1781 - 1826): The man behind the stethoscope. Clin Med Res 2006;4(3):230-235. https://doi.org/10.3121/cmr.4.3.230

4. Peck P. Dr Cammann and the binaural stethoscope. J Kans Med Soc 1963;64:121-123.

5. Ghanayim T, Lupu L, Naveh S, et al. Artificial intelligence-based stethoscope for the diagnosis of aortic stenosis. Am J Med 2022;135(9):1124-1133. https://doi.org/10.1016/j.amjmed.2022.04.032

6. Zhang M, Li M, Guo L, Liu J. A low-cost AI-empowered stethoscope and a lightweight model for detecting cardiac and respiratory diseases from lung and heart auscultation sounds. Sensors (Basel) 2023;23(5):2591. https://doi.org/10.3390/s23052591

7. Turing AM. Computing machinery and intelligence. Mind 1950;59(236):433-460. https://phil415.pbworks.com/f/TuringComputing.pdf (accessed 17 September 2023).

8. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol 2019;28(2):73-81. https://doi.org/10.1080/13645706.2019.1575882

9. Morgan AA, Abdi J, Syed MAQ, Kohen GE, Barlow P, Vizcaychipi MP. Robots in healthcare: A scoping review. Curr Robot Rep 2022;3(4):271-280. https://doi.org/10.1007/s43154-022-00095-4

10. Ohneberg C, Stöbich N, Warmbein A, et al. Assistive robotic systems in nursing care: A scoping review. BMC Nurs 2023;22(1):72. https://doi.org/10.1186/s12912-023-01230-y

11. Sarker S, Jamal L, Ahmed SF, Irtisam N. Robotics and artificial intelligence in healthcare during COVID-19 pandemic: A systematic review. Rob Auton Syst 2021;146:103902. https://doi.org/10.1016/j.robot.2021.103902

12. Esteva A, Kuprel B, Novoa R, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542(7639):115-118. https://doi.org/10.1038/nature21056

13. Lang K, Josefsson V, Larsson AM, et al. Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): A clinical safety analysis of a randomised, controlled, non-inferiority, single-blinded, screening accuracy study. Lancet Oncol 2023;24(8):936-944. https://doi.org/10.1016/S1470-2045(23)00298-X

14. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature 2023;620(7972):172-180. https://doi.org/10.1038/s41586-023-06291-2

15. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell 2023;6:1169595. https://doi.org/10.3389/frai.2023.1169595

16. Topol ER. As artificial intelligence goes multimodal, medical applications multiply. Science 2023;381(6663):eadk6139. https://doi.org/10.1126/science.adk6139

17. Petrozzino C. Who pays for ethical debt in AI? AI Ethics 2021;1:205-208. https://doi.org/10.1007/s43681-020-00030-3

18. Ristevski B, Chen M. Big data analytics in medicine and healthcare. J Integr Bioinform 2018;15(3):20170030. https://doi.org/10.1515/jib-2017-0030

19. Azamfirei R, Kudchadkar SR, Fackler J. Large language models and the perils of their hallucinations. Crit Care 2023;27(1):120. https://doi.org/10.1186/s13054-023-04393-x

20. Abràmoff MD, Tarver ME, Loyo-Berrios N, et al. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med 2023;6(1):170. https://doi.org/10.1038/s41746-023-00913-9

21. Gudis DA, McCoul ED, Marino MJ, Patel ZM. Avoiding bias in artificial intelligence. Int Forum Allergy Rhinol 2023;13(3):193-195. https://doi.org/10.1002/alr.23129

22. Liu KA, Mager NA. Women's involvement in clinical trials: Historical perspective and future implications. Pharm Pract (Granada) 2016;14(1):708. https://doi.org/10.18549/PharmPract.2016.01.708

23. Kampmann B. Women and children last? Shaking up exclusion criteria for vaccine trials. Nat Med 2021;27(1):8. https://doi.org/10.1038/s41591-020-01199-0

24. Muralidharan V, Burgart A, Daneshjou R, Rose S. Recommendations for the use of paediatric data in artificial intelligence and machine learning ACCEPT-AI. NPJ Digit Med 2023;6(1):166. https://doi.org/10.1038/s41746-023-00898-5

25. Larson E. Exclusion of certain groups from clinical research. Image J Nurs Sch 1994;26(3):185-190. https://doi.org/10.1111/j.1547-5069.1994.tb00311.x

26. Tobin MJ, Jubran A. Pulse oximetry, racial bias and statistical bias. Ann Intensive Care 2022;12:2. https://doi.org/10.1186/s13613-021-00974-7

27. Singh AK, Sahi MS, Mahawar B, Rajpurohit S. Comparative evaluation of accuracy of pulse oximeters and factors affecting their performance in a tertiary intensive care unit. J Clin Diagn Res 2017;11(6):OC05-OC08. https://doi.org/10.7860/JCDR/2017/24640.9961

28. Goddard K, Roudsari A, Wyatt JC. Automation bias - a hidden issue for clinical decision support system use. Stud Health Technol Inform 2011;164:17-22.

29. Cengiz N, Obasa EA, Ganya W, Moodley K. Digital technologies and artificial intelligence in healthcare: Ethical challenges. In: Moodley K, ed. Bioethics, Medical Law and Human Rights: A South African Perspective. 3rd ed. Pretoria: Van Schaik Publishers, 2023:283-291.

30. Alami H, Rivard L, Lehoux P, et al. Artificial intelligence in health care: Laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Glob Health 2020;16(1):52. https://doi.org/10.1186/s12992-020-00584-1

31. De Silva D, Alahakoon D. An artificial intelligence life cycle: From conception to production. Patterns (N Y) 2022;3(6):100489. https://doi.org/10.1016/j.patter.2022.100489

32. Na L, Yang C, Lo CC, Zhao F, Fukuoka Y, Aswani A. Feasibility of reidentifying individuals in large national physical activity data sets from which protected health information has been removed with use of machine learning. JAMA Netw Open 2018;1(8):e186040. https://doi.org/10.1001/jamanetworkopen.2018.6040

33. Deshpande A, Rajpurohit T, Narasimhan K, Kalyan A. Anthropomorphisation of AI: Opportunities and risks. arXiv 2023;2305.14784. https://doi.org/10.48550/arXiv.2305.14784

34. Moodley K, Sibanda N, February K, Rossouw T. 'It's my blood': Ethical complexities in the use, storage and export of biological samples: Perspectives from South African research participants. BMC Med Ethics 2014;15:4. https://doi.org/10.1186/1472-6939-15-4

35. Moodley K, Kleinsmidt A. Allegations of misuse of African DNA in the UK: Will data protection legislation in South Africa be sufficient to prevent a recurrence? Dev World Bioeth 2021;21(3):125-130. https://doi.org/10.1111/dewb.12277

36. Ligozat A-L, Lefevre J, Bugeau A, Combaz J. Unraveling the hidden environmental impacts of AI solutions for environment life cycle assessment of AI solutions. Sustainability 2022;14(9):5172. https://doi.org/10.3390/su14095172

37. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 28 June 2021. https://www.who.int/publications/i/item/9789240029200 (accessed 10 September 2023).

38. Hedderich DM, Weisstanner C, van Cauter S, et al. Artificial intelligence tools in clinical neuroradiology: Essential medico-legal aspects. Neuroradiology 2023;65(7):1091-1099. https://doi.org/10.1007/s00234-023-03152-7

39. London AJ. Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Cent Rep 2019;49(1):15-21. https://doi.org/10.1002/hast.973

40. The White House Office of Science and Technology Policy. Blueprint for an AI Bill of Rights: Making automated systems work for the American people. 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights (accessed 17 October 2023).

41. Le Ludec C, Cornet M. How low-paid workers in Madagascar power French tech's AI ambitions. The Conversation, 30 March 2023. https://theconversation.com/how-low-paid-workers-in-madagascar-power-french-techs-ai-ambitions-202421 (accessed 10 September 2023).

42. Blumenthal-Barby J. An AI Bill of Rights: Implications for health care AI and machine learning - a bioethics lens. Am J Bioeth 2023;23(1):4-6. https://doi.org/10.1080/15265161.2022.2135875

43. Health Professions Council of South Africa. Guidelines for good practice in the health professions: General ethical guidelines for the healthcare professions. Booklet 1. Pretoria: HPCSA, 2021. https://www.hpcsa.co.za/Uploads/professional_practice/ethics/Booklet_1_Guidelines_for_Good_Practice_vDec_2021.pdf (accessed 18 October 2023).

44. Hacker P. The European AI liability directives - critique of a half-hearted approach and lessons for the future. Comput Law Secur Rev 2023;51:105871. https://doi.org/10.1016/j.clsr.2023.105871

45. Brand D. Responsible artificial intelligence in government: Development of a legal framework for South Africa. JeDEM - eJournal of eDemocracy and Open Government 2022;14(1):130-150. https://doi.org/10.29379/jedem.v14i1.678

46. South Africa. Constitution of the Republic of South Africa No. 108 of 1996. https://www.gov.za/sites/default/files/images/a108-96.pdf (accessed 30 September 2023).

47. South Africa. Department of Trade and Industry: Consumer Protection Act No. 68 of 2008. https://www.gov.za/sites/default/files/32186_467.pi (accessed 15 October 2023).

48. South Africa. Protection of Personal Information Act No. 4 of 2013. https://www.gov.za/documents/protection-personal-information-act (accessed 16 October 2023).

49. Solaiman B. Telehealth in the metaverse: Legal & ethical challenges for cross-border care in virtual worlds. J Law Med Ethics 2023;51(2):287-300. https://doi.org/10.1017/jme.2023.64

50. Acosta JN, Falcone GJ, Rajpurkar P, et al. Multimodal biomedical AI. Nat Med 2022;28(9):1773-1784. https://doi.org/10.1038/s41591-022-01981-2

 

 

Correspondence:
K Moodley
km@sun.ac.za

Accepted 20 November 2023

All the content of this journal, except where otherwise identified, is licensed under a Creative Commons License.