SAMJ: South African Medical Journal
Online version ISSN 2078-5135; print version ISSN 0256-9574
SAMJ, S. Afr. med. j. vol.115 no.5b Pretoria Jun. 2025
https://doi.org/10.7196/samj.2025.v115i5b.3669
RESEARCH
AI and its application within UK healthcare: British Medical Association perspective
D Norcliffe-BrownI; F McCartneyII
IMSc; Policy lead: medical ethics and human rights, British Medical Association, London, UK
IIMA; Policy advice and support officer, British Medical Association, London, UK
ABSTRACT
This paper is based on a talk given by Dominic Norcliffe-Brown of the British Medical Association (BMA) to the South African Medical Association on 28 November 2024, which explained the BMA policy paper 'Principles for Artificial Intelligence (AI) and its application in healthcare'. The policy paper provides further detail on the information discussed below.
Keywords: UK healthcare, artificial intelligence.
Interest in the application of AI in UK healthcare is as high as it has ever been. The UK's Labour government, since taking power in July 2024, has made AI investment a cornerstone of its programme, including in the National Health Service (NHS).[1] Nevertheless, there remain serious concerns for both clinicians and patients on how AI will impact their experience of the healthcare system.
This paper, based on a talk given to the South African Medical Association by an invited speaker from the British Medical Association (BMA), explains the BMA perspective on AI and its application in healthcare, utilising the BMA policy paper 'Principles for Artificial Intelligence (AI) and its application in healthcare'.[2] The paper sets out a definition of AI before acknowledging the potential benefits and drawbacks of AI in UK healthcare with respect to (i) health outcomes; (ii) health inequalities and historically marginalised groups; (iii) the doctor-patient relationship; (iv) job quality; and (v) efficiency and healthcare costs. The paper concludes by considering how to ensure the effects of AI are positive, highlighting that AI is just a tool, and what matters is how it is implemented, and noting the BMA principles for AI implementation and policy in a UK-healthcare setting.
What is artificial intelligence?
Despite the extent of discussion of AI in common discourse, there is no common agreed definition of the term. When people hear of AI, most think of artificial general intelligence (AGI), popularised in the media. This is, essentially, an AI system that mimics human cognitive abilities. AGI systems would be able to learn and understand any intellectual task that a human can and would be able to adapt to new situations. Regardless of the theoretical interest, it is generally considered unlikely to be reached in the near future, if at all.[3]
In healthcare, AGI is not the main consideration, but rather narrow AI, which is AI limited to specific tasks. The UK government gives a working general definition of 'Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks'. Similarly, the NHS Transformation Directorate defines AI as 'the use of digital technology to create systems capable of performing tasks commonly thought to require human intelligence'.[4]
However, 'tasks normally performed by human intelligence' is not well defined and may well change over time - for example, a pocket calculator can do simple mathematics beyond the reach of many intelligent humans, yet we do not tend to consider the average pocket calculator to be AI. What we consider to be AI covers an ever-changing set of capabilities as new technologies are developed.
Ironically, AI researchers and practitioners often observe that as soon as machines achieve a task that previously only humans could do, it is no longer considered a mark of 'intelligence' or AI. Some consider the term 'augmented intelligence' to be more accurate, but the BMA considers 'artificial intelligence' to be so pervasive in the common lexicon that switching to different terminology would be counterproductive, even if that terminology more accurately reflects the concepts being considered.
Therefore, we do not think it is the role of the BMA to provide a 'clean' definition of AI, as long as we recognise what we are talking about when we discuss AI in healthcare.
What do we mean by AI in healthcare?
The BMA focus is on AI technologies currently in use by the NHS or in other healthcare settings, or technologies that have been developed and may be used soon. These include:
• machine learning (a statistical technique in which models 'learn' by being fitted to, or trained on, data)
• natural language processing (including speech recognition, simulation of human conversation)
• vision/image recognition.
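To make the first item concrete, 'learning' in machine learning simply means fitting a model's parameters to example data so that it can make predictions about new cases. A minimal sketch in Python, using invented numbers rather than any clinical dataset:

```python
# Minimal machine-learning sketch: fit a model to labelled examples,
# then predict on unseen data. Purely illustrative -- the "measurements"
# and "condition" values below are invented, not clinical data.

def fit_threshold(features, labels):
    """Learn a single decision threshold that best separates the classes."""
    best_t, best_correct = None, -1
    for t in sorted(features):
        correct = sum((x >= t) == y for x, y in zip(features, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def predict(threshold, x):
    """Classify a new case using the learned threshold."""
    return x >= threshold

# Training data: a measurement and whether a condition was present.
measurements = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
condition = [False, False, False, True, True, True]

t = fit_threshold(measurements, condition)
print(predict(t, 5.5))  # → False (5.5 falls below the learned threshold of 6.0)
```

Real healthcare models replace the single threshold with millions of learned parameters, but the principle - tune a model against training data, then apply it to new cases - is the same.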
Key areas of AI use currently in healthcare include:
• healthcare organisation and administration
• diagnosis and treatment (clinical decision-making and service delivery)
• population health and prevention
• biomedical research.
Healthcare organisation and administration
AI functions, such as natural language processing, are used to automate and optimise 'backend' processes, including (i) staff rostering and appointment scheduling; (ii) repetitive admin, such as note taking and transcribing during consultations, filing of an individual's electronic patient health record and patient letter writing, printing and posting; and (iii) analysis of patient feedback to inform quality improvement.
For example, at Imperial College Healthcare NHS Trust, a pilot tested the use of natural language processing to analyse patient feedback in real time, which led to responses to feedback being implemented more quickly than without the tool.[5]
Clinical decision-making
AI pattern recognition of computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound scans, for instance, and AI analysis of clinical data, genomic data, health records, personal and family histories, speech patterns and voice modulations, clinical guidelines, best practice and medical research are used to support, optimise, personalise and/or automate decisions about triage, diagnostics, prognosis and subsequent care pathways at the point of care via decision support systems.
For example, Moorfields Eye Hospital have trialled AI analysis of optical coherence tomography scans (a non-invasive diagnostic technique that renders a cross-sectional view of the retina) to pick up retinal diseases, tagging 'urgent' cases in need of referral.[6]
Service delivery
AI-based apps, bots, personal, wearable and smart devices are used to interact directly with patients to (entirely or mostly) deliver therapies, provide health information, and/or support patients to stick to prescribed interventions and manage health conditions.
Sleepio is one example: a 6-week tailored programme delivered online that is designed to treat insomnia. The therapy is personalised using AI that tailors the intervention to patient data.[7]
Population health
Population-level applications of AI analysis, using large amounts of new data, allow for novel, real-time insights into: (i) epidemics, disease spread and the drivers of illness; and (ii) individuals/groups at risk of developing particular diseases who could benefit from proactive, early intervention.
During the pandemic, NHSX launched the Covid-19 Data Store, bringing together data from several sources within the health and social care system as part of a project to use AI to build a predictive model to inform the government's response to Covid-19.[8]
Biomedical research
AI analysis is applied to new forms of data, including genomic data and patient records, to inform new drug and treatment discoveries.
Recently, researchers at Massachusetts Institute of Technology (MIT) discovered a new class of compounds capable of killing drug-resistant bacteria by using AI screening of millions of potential chemical compounds to narrow down those with high predicted antibacterial activity and low toxicity to living tissue, yielding several hundred new compounds worth testing. Upon empirical testing, several were capable of killing drug-resistant bacteria.[9]
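The screening step in such work can be caricatured as scoring a large candidate library with learned predictors and keeping only the compounds that pass both filters. A hypothetical sketch - the placeholder scoring functions, names and thresholds below are invented stand-ins for the trained deep-learning models, not the MIT team's actual pipeline:

```python
# Sketch of AI-assisted compound screening: score a candidate library
# and keep only compounds with high predicted antibacterial activity
# and low predicted toxicity. The two "models" are placeholders
# standing in for trained predictors.

def predicted_activity(compound):
    # Placeholder: a real pipeline would run a trained model here.
    return compound["activity_score"]

def predicted_toxicity(compound):
    # Placeholder for a second trained model.
    return compound["toxicity_score"]

def screen(library, min_activity=0.8, max_toxicity=0.2):
    """Narrow a large candidate library down to a short list worth testing."""
    return [c["name"] for c in library
            if predicted_activity(c) >= min_activity
            and predicted_toxicity(c) <= max_toxicity]

library = [
    {"name": "cmpd-A", "activity_score": 0.95, "toxicity_score": 0.05},
    {"name": "cmpd-B", "activity_score": 0.90, "toxicity_score": 0.60},
    {"name": "cmpd-C", "activity_score": 0.30, "toxicity_score": 0.10},
]
print(screen(library))  # → ['cmpd-A']
```

The value of the approach is the funnel: models cheaply triage millions of candidates so that expensive laboratory testing is spent only on the most promising few hundred.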
How can AI impact outcomes in UK healthcare?
It is thought that AI can impact outcomes in healthcare through three primary mechanisms. Firstly, there is precision and personalisation. AI can match or outperform humans at executing certain healthcare tasks with accuracy. One well-evidenced area is accuracy in imaging diagnostic processes.[10] AI may also be able to develop more detailed understanding of disease and more effective medications and treatments, by integrating and analysing large amounts of personal, social, clinical, genomic, and epidemiological data. This information can then be made available at the click of a button to healthcare providers at the point of care to support diagnosis and prognosis - including drug prescription. This holds the potential to make medical interventions more accurate and precise, personalised to the individual and based on the most up-to-date knowledge, guidance, and best practice.
The second mechanism is efficiency and productivity. AI imaging and clinical decision-making allow the possibility of quicker and more efficient healthcare interventions. There are also quality and efficiency gains to be made from applying AI technologies to more routine, everyday tasks such as dealing with letters or appointment scheduling. The NHS England long-term workforce plan cites McKinsey research arguing that routine administrative tasks can take up to 70% of a health practitioner's time.[11]
The third mechanism is prevention and early intervention. AI algorithms have the potential to examine factors such as population demographics, disease prevalence, and geographical distribution to identify patients or patient groups at higher risk of certain conditions. If AI can provide such benefits as population health risk prediction, surveillance, a source of health information, and therapy, this can have positive implications for the efficacy of preventative medicine. Where upstream health and wellbeing support is prioritised, demand on healthcare can be mitigated, allowing resource to be deployed in other areas of need.
Benefits and risks of AI in healthcare
Proponents of AI use in healthcare point towards a number of potential benefits. Operating through the mechanisms outlined above, AI can help ensure that the healthcare system and staff are supported to better manage demand pressures. It can also reduce pressure on the healthcare system in the long term through, for example, improving access to more personalised health data and support. The results could be better job quality, recruitment, and retention for healthcare staff. Furthermore, it can improve the quality of care and reduce healthcare system costs.
Nevertheless, there are significant potential drawbacks of AI in healthcare, which could aggravate problems in each of the aforementioned areas, and this will depend on how AI is implemented. We now explore this in further detail.
Health outcomes
The potential benefits in health outcomes are numerous. For example, the greater precision AI potentially offers in diagnosis and treatment could lead to better outcomes for patients, by reducing the risk of errors made and aiding clinical decision-making. Studies have reported the accuracy of AI in determining if lung nodules seen on CT scans are cancerous.[12] Similar claims are made of AI retinal scanning and eye diseases.[13] Incorporating AI into imaging diagnostic processes could, therefore, improve accuracy. AI can also help aid clinicians in differential diagnoses. Tools are being developed with the aim of providing a list of potential diagnoses and their likelihoods to doctors based on patient health data and symptoms to assist in the process. It can potentially also speed up diagnosis and treatment, reducing risks to patients. For example, AI has been shown to aid analysis of brain tumours in cancer patients.
Despite the potential benefits, there are significant risks as well. A review by the Panel for the Future of Science and Technology (STOA) describes three main patient safety risks from AI technologies in healthcare: (i) false-negatives in the form of missed diagnoses of disease; (ii) unnecessary treatment due to false-positives; and (iii) unsuitable interventions due to imprecise diagnoses, or incorrect prioritisation of interventions in emergency departments.[14]
Though it must be acknowledged such risks exist in healthcare without AI, the underlying reasons behind such harms could be unique to AI. Unsuitable interventions may be made because of a lack of capability in understanding the patient empathically and as a whole person.
One of the primary concerns is the potential reduction in human interaction. Face-to-face communication with healthcare providers plays a significant role in patient care, particularly in mental health. These interactions offer empathy, understanding, and emotional support, which are difficult to replicate with AI.
Health inequalities and historically marginalised groups
AI has the potential to improve access and reduce barriers to treatment. Research by Healthwatch, National Voices, and the Good Things Foundation all find that digital services can be more accessible for some, including those with caring responsibilities, those with reduced mobility, and those who are immunosuppressed or shielding.[15] There is also some evidence[16] that digital mental health services bypass some of the access barriers to traditional services, including stigma. Societal inequalities and biases permeate all walks of life and healthcare is no exception. There is evidence that clinicians can think women exaggerate the pain they are experiencing[17] and that those from ethnic minorities are able to withstand more pain,[18] which can be detrimental to their treatment. AI could be developed in such a way to limit its exposure to such preconceptions and therefore help address inequalities in healthcare.
However, there is a growing body of evidence[19] that AI and data-driven health technologies can lead to discrimination against underserved or historically marginalised groups, exacerbating existing bias and systemic health and healthcare inequalities. Firstly, the data that AI is trained on presents a problem. If the datasets used to train AI models are not representative, then the resulting output will be inequitable, whether in prediction, prevention, triage, diagnostics or prognostics, and decision support. Secondly, digital exclusion is a considerable cause for concern. As the Joseph Rowntree Foundation have said, 'AI is a seismic shift in the goalposts of what it means to be digitally included.'[20] Digital exclusion is about not having the access, skills, and confidence to use the internet and benefit fully from digital technology in everyday life. In 2023, Ofcom estimated that 7% of the UK population (nearly 5 million people) did not have home internet access and are therefore at risk of digital exclusion.[21]
The doctor-patient relationship
AI could free up clinicians' time by taking over certain bureaucratic activities, which would allow clinicians to spend more time with each patient and develop a positive relationship with each one. This, in turn, could improve communication and trust between patients and doctors, and give more opportunities for continuity of care.
On the other hand, if the use of AI forms a barrier between a doctor and patient, and thus limits the latter's access to the former, the patient may feel alienated, resulting in distrust. In a situation, for example, in which a patient has been given a diagnosis of a life-limiting illness, to speak to a doctor to ask questions could make a significant difference to a patient's wellbeing. While these questions could be answered by AI, speaking to a doctor could be reassuring and comforting, particularly if a rapport has already been built.
Job quality
Workforce displacement from AI receives a lot of attention. One study suggests that 35% of UK jobs, economy-wide, could be lost to AI automation over a 20-year period. However, PwC predict the risk of job displacement in health and social care from AI and related technologies to be lower than in other sectors, with technology largely proving 'complementary'.[22] Fewer healthcare roles consist of wholly automatable tasks, and given the backdrop of rising demand for care, health and social care is predicted to see the largest net employment increase of any sector over the next two decades. Instead of replacing staff in healthcare, technology is likely to transform the nature of work, with doctors focusing on tasks where human value is added.
A potential benefit is reducing the risk of burnout in the UK health workforce. Burnout is recognised as a huge problem in UK health services, with the 2023 NHS England staff survey noting that about 30% of NHS staff experience it. AI could address this by reducing the workload NHS staff face, through efficiency and productivity mechanisms as outlined above. This could have knock-on effects on staff retention, as fewer staff feel the need to leave or retire due to health reasons. Increased productivity and efficiency also mean AI could reduce the bureaucratic burden on doctors, allowing them to pursue areas of their job they find more satisfying, as well as giving space for career development.
However, AI could also create new pressures. Routine work can act as a 'buffer' to more intensive or challenging work, and staff may find themselves working more often or continuously at the limit of their skillset - which could conversely increase the chance of burnout. It is worth noting that these outcomes will not necessarily follow from any productivity gains - it depends on how the productivity gains are realised and how the time saved is distributed. If, for instance, gains are used to keep care costs down by delivering more care with a similar number of staff, doctors could end up seeing more patients rather than having more time for each patient.
Efficiency and healthcare costs
AI could reduce pressures on staff time and healthcare system finances, particularly bureaucratic time, allowing them to reallocate resources to where they are needed. The NHS planned workforce expansion set out in NHS England's Long-term Workforce Plan, for instance, is predicated on 'stretching productivity ambitions', with innovations including AI as 'one of the most important ways of delivering' them. This can also present as a form of cost-saving, which would allow the saved funds to be reinvested in other areas, such as addressing staff shortages. Whether such productivity improvements lead to cost savings, depends on how those productivity savings are realised, as discussed above.
The Topol Review estimates a minimal saving of 1 minute per patient from new technologies such as AI, equating to 400 000 hours of emergency department consultation time and 5.7 million hours of general practitioner (GP) time.[23] In 2018, the Institute for Public Policy Research estimated that AI and automation could save the NHS in England £12.5 billion per year by freeing up staff time.[24]
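Headline figures of this kind follow directly from multiplying per-patient minutes by annual contact volumes. Assuming roughly 24 million emergency department attendances and 342 million GP appointments per year in England (illustrative assumptions for this sketch, not figures taken from the Topol Review itself), the arithmetic works out as:

```python
# Back-of-envelope check of a '1 minute saved per patient' estimate.
# The annual contact volumes below are illustrative assumptions,
# not figures taken from the Topol Review.

MINUTES_SAVED_PER_PATIENT = 1
ED_ATTENDANCES_PER_YEAR = 24_000_000    # assumed ED attendances, England
GP_APPOINTMENTS_PER_YEAR = 342_000_000  # assumed GP appointments, England

ed_hours = ED_ATTENDANCES_PER_YEAR * MINUTES_SAVED_PER_PATIENT / 60
gp_hours = GP_APPOINTMENTS_PER_YEAR * MINUTES_SAVED_PER_PATIENT / 60

print(f"{ed_hours:,.0f} ED hours")  # → 400,000 ED hours
print(f"{gp_hours:,.0f} GP hours")  # → 5,700,000 GP hours
```

Even a tiny per-contact saving compounds across hundreds of millions of annual contacts, which is why small efficiency gains are taken seriously at system scale.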
However, AI may not always reduce healthcare costs. A recent simulation study of AI in colonoscopy in the USA found that it may increase costs in the short term by increasing the number of detected abnormalities that require intensive surveillance colonoscopies, although it may contribute to reducing colorectal cancer in the long term, which could ultimately lead to cost reduction.[25] Similarly, a study of glaucoma screening in China found it would be able to reduce disease progression risks, but the excess costs of screening would be unlikely to be offset by this.[26] Part of the problem in understanding the true impact of AI on costs is a lack of real-world studies. When and if AI is implemented on a large scale in health services, robust large-scale studies with long-term follow-up will be required to understand whether its use truly contributes to cost savings and/or improved patient outcomes.
It is important to remember that these potential risks and benefits do not exist in isolation and are interrelated. For example, a doctor no longer suffering from burnout may be willing to give more time to develop the doctor-patient relationship. Furthermore, better patient outcomes could increase the trust between patients and clinicians and thus improve the doctor-patient relationship. There is, therefore, the potential for a multiplier effect with the positive outcomes of AI. Similarly, risks have the potential to multiply.
Ensuring the effects of AI are positive
It is evident from the above that the potential of AI, both positive and negative, is substantial. It is important to remember that AI is just a tool. Despite all the excitement, it is neither a panacea nor an intrinsically destructive force. What matters is how AI is implemented and applied within healthcare. AI is intrinsically neutral, though many AI applications incorporate values into the systems themselves, and their use can have both harmful and beneficial consequences. It has the potential to considerably improve the healthcare system but it also, if utilised poorly or with the wrong intentions, could be seriously harmful to both patients and doctors. It is not possible, therefore, to make sweeping generalisations but instead implementation of each AI technology must be considered on a case-by-case basis. The BMA believes the wellbeing of clinicians and patients must be at the core of any new technology. The BMA supports the adoption of new technologies that:
• are safe, effective, ethical, and equitable
• support doctors to deliver the best possible care and improve care quality
• improve job quality.
With this in mind, the BMA has developed principles for AI implementation and policy in a UK healthcare setting.
BMA principles for AI policy and implementation
• AI must be robustly assessed for safety and efficacy in clinical settings.
• Governance and regulation to protect patient safety is vital.
• Staff and patient involvement throughout the development and implementation process is necessary.
• Staff must be trained on new technologies (initially and continuously) and they must be integrated into workflows.
• AI requires a robust and functioning NHS to be effective.
• Existing information technology (IT) infrastructure and data must be improved.
• Legal liability must be clarified.
Conclusion
AI in healthcare is a complex issue, with ongoing debates even on how AI should be defined. There is significant potential to improve the healthcare system in the UK, as well as the working lives of clinicians, and outcomes for patients. However, it also carries with it significant risks, that could damage all of the above, if implemented poorly. Therefore, the crucial issue is how AI is implemented in the UK healthcare system, and the BMA's principles are key to ensuring it is utilised in a manner that is beneficial to healthcare as a whole, recognising that the wellbeing of clinicians and patients must be at the core of any new technology.
References
1. Clifford M. Department for Science, Innovation and Technology. AI Opportunities Action Plan. January 2025. https://assets.publishing.service.gov.uk/media/67851771f0528401055d2329/ai_opportunities_action_plan.pdf (accessed 24 February 2025). [ Links ]
2. British Medical Association. Principles for Artificial Intelligence (AI) and its application in healthcare. September 2024. https://www.bma.org.uk/media/njgfbmnn/bma-principles-for-artificial-intelligence-ai-and-its-application-in-healthcare (accessed 24 February 2025). [ Links ]
3. Liu B. 'Weak AI' is likely to never become 'strong AI', so what is its greatest value for us? March 2021. https://arxiv.org/pdf/2103.15294 (accessed 24 February 2025). [ Links ]
4. HM Government. National AI Strategy. September 2021. https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf (accessed 24 February 2025). [ Links ]
5. Mayer E, Harrison White S, Klaber B, et al. The Health Foundation. Innovating for improvement. December 2018. https://www.health.org.uk/sites/default/files/2019-05/IFI%20R6%20Imperial%20Final%20report_1.pi (accessed 24 February 2025). [ Links ]
6. Yim J, Chopra R, Spitz T, et al. Nature Medicine. Predicting conversion to wet age-related macular degeneration using deep learning. May 2020. https://www.nature.com/articles/s41591-020-0867-7 (accessed 24 February 2025). [ Links ]
7. National Institute for Health and Care Excellence. Sleepio to treat insomnia and insomnia symptoms. May 2022. https://www.nice.org.uk/guidance/mtg70/resources/sleepio-to-treat-insomnia-and-insomnia-symptoms-pdf-64372230458053 (accessed 24 February 2025). [ Links ]
8. Mourby M. 'Leading by Science' through Covid-19: The NHS Data Store & automated decision-making. Int J Population Data Sci. April 2021. https://pmc.ncbi.nlm.nih.gov/articles/PMC8189169/ (accessed 24 February 2025). [ Links ]
9. Wong F, Zheng E, Valeri L, et al. Discovery of a structural class of antibiotics with explainable deep learning. December 2023. https://www.nature.com/articles/s41586-023-06887-8 (accessed 24 February 2025). [ Links ]
10. Pinto-Coelho L. Bioengineering. How artificial intelligence is shaping medical imaging technology: A survey of innovations and applications. December 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10740686/ (accessed 24 February 2025). [ Links ]
11. Spatharou A, Hieronimus S, Jenkins J. McKinsey. Transforming healthcare with AI: The impact on the workforce and organizations. March 2020. https://www.mckinsey.com/industries/healthcare/our-insights/transforming-healthcare-with-ai (accessed 24 February 2025). [ Links ]
12. Ewals L, van der Wulp K, van den Borne B, et al. The effects of artificial intelligence assistance on the radiologists' assessment of lung nodules on CT scans: A systematic review. J Clin Med May 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10219568/ (accessed 24 February 2025). [ Links ]
13. Lenharo M. AI detects eye disease and risk of Parkinson's from retinal images. Nature September 2023. https://www.nature.com/articles/d41586-023-02881-2 (accessed 24 February 2025). [ Links ]
14. Panel for the Future of Science and Technology. European Parliament. Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts. June 2022. https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf (accessed 24 February 2025). [ Links ]
15. Healthwatch. Our report highlights impact of digital inclusion on access to healthcare. June 2024. https://www.healthwatchcambridgeshire.co.uk/news/2024-06-28/our-report-highlights-impact-digital-inclusion-access-healthcare (accessed 24 February 2025). [ Links ]
16. Lattie E, Stiles-Shields C, Graham A. An overview of and recommendations for more accessible digital mental health services. Nature Rev Psychol January 2022. https://www.nature.com/articles/s44159-021-00003-1 (accessed 24 February 2025). [ Links ]
17. Zhang L, Reynolds Losin E, et al. Gender biases in estimation of others' pain. J Pain. September 2021. https://www.jpain.org/article/S1526-5900(21)00035-3/fulltext (accessed 24 February 2025). [ Links ]
18. Raphael K. Harvard Global Health Institute. Racial bias in medicine. February 2020. https://globalhealth.harvard.edu/racial-bias-in-medicine/ (accessed 24 February 2025). [ Links ]
19. Machirori M. A knotted pipeline. November 2022. https://www.adalovelaceinstitute.org/report/knotted-pipeline-health-data-inequalities/ (accessed 24 February 2025). [ Links ]
20. Stone E. Joseph Rowntree Foundation. AI shifts the goalposts of digital inclusion. February 2024. https://www.jrf.org.uk/ai-for-public-good/ai-shifts-the-goalposts-of-digital-inclusion#:~:text=AI%20is%20already%20shifting%20the,them%20too%2C%20if%20they%20choose (accessed 24 February 2025). [ Links ]
21. Ofcom. Online Nation 2023 Report. November 2023. https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/online-research/online-nation/2023/online-nation-2023-report.pdf?v=368355 (accessed 24 February 2025). [ Links ]
22. PwC. Department of Business, Energy & Industrial Strategy. The potential impact of artificial intelligence on uk employment and the demand for skills. August 2021. https://assets.publishing.service.gov.uk/media/615d9a1ad3bf7f55fa92694a/impact-of-ai-on-jobs.pdf (accessed 24 February 2025). [ Links ]
23. Topol E. The Topol Review: Preparing the healthcare workforce to deliver the digital future. February 2019. https://topol.hee.nhs.uk/wp-content/uploads/HEE-Topol-Review-2019.pdf (accessed 24 February 2025). [ Links ]
24. Darzi A, Quilter-Pinner H, Kibasi T. Institute for Public Policy Research. Better health and care for all: A 10-point plan for the 2020s. June 2018. https://www.ippr.org/articles/better-health-and-care-for-all (accessed 24 February 2025). [ Links ]
25. Mori Y, East J, Hassan C, et al. Benefits and challenges in implementation of artificial intelligence in colonoscopy: World Endoscopy Organization position statement. February 2023. https://onlinelibrary.wiley.com/doi/full/10.1111/den.14531 (accessed 24 February 2025). [ Links ]
26. Xiao X, Xue L, Ye L, et al. Health care cost and benefits of artificial intelligence-assisted population-based glaucoma screening for the elderly in remote areas of China: A cost-offset analysis. BMC Pub Health. June 2021. https://pubmed.ncbi.nlm.nih.gov/34088286/ (accessed 24 February 2025). [ Links ]
Correspondence:
D Norcliffe-Brown
DNorcliffe-Brown@bma.org.uk