SAMJ: South African Medical Journal
On-line version ISSN 2078-5135; print version ISSN 0256-9574
SAMJ, S. Afr. med. j. vol.115 n.5b Pretoria Jun. 2025
https://doi.org/10.7196/samj.2025.v115i5b.3668
RESEARCH
AI in medicine: Hype, hope, and the path forward
A E Daryanani (I); J M Ehrenfeld (II, III, IV)

(I) MD; Advancing a Healthier Wisconsin Endowment, Medical College of Wisconsin, Milwaukee, USA
(II) MD, MPH; Advancing a Healthier Wisconsin Endowment, Medical College of Wisconsin, Milwaukee, USA
(III) MD, MPH; Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, USA
(IV) MD, MPH; American Medical Association, Chicago, USA
ABSTRACT
Artificial intelligence (AI) is rapidly transforming healthcare, with applications ranging from diagnostics and predictive analytics to administrative automation. AI holds immense potential to enhance clinical efficiency and improve patient outcomes; however, its integration into medical practice is not without challenges. Physicians remain divided: some view AI as a powerful tool for augmenting medical decision-making, while others question its reliability, ethical implications, and impact on the physician-patient relationship. This article examines the promise and limitations of AI in medicine, addressing critical concerns surrounding bias, liability, regulatory uncertainty, and physician adoption. It explores how AI is currently being used in healthcare, the barriers preventing its seamless integration, and the governance structures needed to ensure its responsible deployment.
Keywords: artificial intelligence, medical practice, AI integration, AI barriers.
The AI paradox in medicine
For decades, medicine has evolved alongside technology, with each advancement promising to ease burdens, improve accuracy, and expand the reach of care. Artificial intelligence (AI) is the latest frontier, offering unprecedented capabilities. AI-driven diagnostics can now detect disease earlier than previously possible, predictive analytics can anticipate patient deterioration before symptoms appear, and machine-learning models can automate time-consuming administrative tasks.[1] AI is already being integrated into patient care. In a recent American Medical Association (AMA) survey, 3 in 5 physicians indicated that they currently use AI tools in their practice, and advocates hail AI as a transformative force that will make healthcare more efficient, precise, and accessible.[2]
However, as AI's influence in medicine expands, so do the questions surrounding its limitations. While its potential to enhance clinical decision-making is undeniable, its rapid adoption has also raised concerns about bias, liability, transparency, data privacy, and the erosion of clinical judgement. AI does not simply process data; it learns from it, meaning that flawed or biased data can lead to unintended consequences. In 2019, researchers discovered that an AI system used by hospitals to allocate healthcare resources systematically disadvantaged black patients owing to racial biases embedded in its training data.[3] This is not an isolated case. Without careful oversight, AI could exacerbate existing disparities rather than eliminate them.
Unlike traditional medical tools, AI is not static. It learns, adapts, and, at times, fails in ways even its creators struggle to predict. When AI-generated recommendations contradict a physician's clinical intuition, the challenge is not just about trust but about accountability. Physicians must navigate a new layer of complexity where decisions are influenced by systems that do not always provide clear reasoning. Additionally, poor integration of AI tools with electronic health record (EHR) systems may compromise usability and further increase physicians' clerical and administrative workload, a known contributor to burnout.[4] In cases where AI makes an incorrect diagnosis or a flawed recommendation, the issue of liability remains unresolved. Should responsibility fall on the physician who relied on AI, the developer who built the algorithm, or the hospital that implemented the system? These are not hypothetical dilemmas. They are unfolding now, in real hospitals, affecting real patients.
Beyond these technical concerns, AI is reshaping the human dynamics of medicine. The physician-patient relationship has always been built on trust, communication and expertise. As AI plays an increasing role in diagnosis and treatment planning, its influence over clinical decisions will continue to grow. Patients may begin to question whether medical recommendations are derived from algorithmic calculations rather than human expertise. If AI integration is not handled carefully, there is a risk that medicine could drift toward a system where physicians are viewed more as interpreters of algorithmic outputs than as independent decision-makers. Ensuring that AI augments rather than diminishes the physician's role will be critical to maintaining the integrity of medical practice.[5,6]
Medicine now faces a defining question. Will AI alleviate burdens or introduce new ones? Will it empower physicians or deskill them? The answers will shape the next era of healthcare, not just for doctors but for the patients whose lives depend on them.
This article explores the promise, limitations, and governance challenges of AI in medicine. It examines some of AI's current applications, the barriers preventing its seamless adoption, and the regulatory and ethical frameworks needed to ensure it serves as an asset rather than a liability.
AI in action: Where it's delivering value today
AI, as it is currently used, is an umbrella term that includes various computer science techniques aimed at creating machines that can perform tasks which would typically require human intelligence.[1] The most commonly used approaches are: machine learning (ML), a subset of AI that enables systems to learn from data; deep learning (DL), a specialised branch of ML that uses neural networks to detect patterns with minimal human involvement; and natural language processing (NLP), which focuses on teaching machines to understand, interpret and generate human language.[1] Large language models (LLMs), a widely used form of AI, leverage DL techniques and massive datasets to understand, summarise, generate, and predict new text-based content.[1] Many LLMs, such as OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, and Microsoft's Copilot, are currently available to the public and have seen widespread adoption across countless sectors to automate tasks and streamline processes. LLMs in particular have been leveraged for interesting use cases in healthcare. Despite ongoing concerns, AI is demonstrating practical applications across healthcare settings.
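The "learning from data" that distinguishes ML from conventional rule-based software can be made concrete with a toy sketch. The code below is purely illustrative: the vital-sign values, labels, and the nearest-centroid classifier are invented for demonstration and bear no relation to any clinical model.

```python
import math

def train_centroids(samples, labels):
    # "Learning" here is simply averaging feature vectors per class --
    # a nearest-centroid classifier, one of the simplest ML models.
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    # Classify a new case by proximity to the learned class centroids.
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, c)))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Toy training data: [heart rate, temperature] -> label (illustrative only)
X = [[72, 36.8], [68, 36.5], [120, 39.1], [115, 38.7]]
y = ["stable", "stable", "febrile", "febrile"]
model = train_centroids(X, y)
print(predict(model, [110, 38.9]))  # -> "febrile"
```

The point of the sketch is the workflow, not the arithmetic: the model's behaviour is determined entirely by its training examples, which is why biased or unrepresentative data propagates directly into predictions.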
Imaging and diagnostics
Incredible advancements have been made in recent years in the development and implementation of AI-powered tools to enhance disease detection. Aidoc, an Israeli technology company founded in 2016, has been a pioneer and leader in AI-enhanced radiology tools, and currently holds the largest suite of FDA-cleared algorithms in a single proprietary platform.[7] Aidoc's detection algorithms aim to accelerate the diagnosis of time-sensitive and time-consuming pathologies, such as pulmonary embolism, intracranial haemorrhage, acute abdominal findings, and aortic dissection, to improve patient outcomes. Their wide portfolio also facilitates increased detection of incidental findings and spans numerous applications across neurovascular, chest, cardiothoracic, breast, abdominal, and musculoskeletal radiology.[7] Furthermore, independent clinical studies have validated their AI tools, demonstrating a high degree of diagnostic accuracy and clinical usefulness.[8-12] Institutions such as the University of Rochester Medical Center and the Einstein Healthcare Network in the USA have been early adopters of this technology and now integrate these algorithms into their workflow along with more than 1 000 hospitals worldwide.[7]
Beyond radiology, AI-driven diagnostic applications are advancing rapidly across various medical specialties. One of AI's distinct advantages is the ability to analyse and integrate multiple patient data sources, such as medical imaging, laboratory test results, EHR data, and vital signs, among others, at a very large scale to assist healthcare providers in identifying and diagnosing diseases faster and more accurately.[13]
Predictive analytics and risk assessment
AI models have shown significant promise in forecasting disease progression, hospital readmission risks, and treatment outcomes.[14-16] For instance, AI-driven sepsis prediction systems, analysing EHR data and continuous vital signs, have been shown to reduce mortality rates by enabling early detection, personalised treatment, and real-time monitoring of sepsis patients.[17,18]
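The general shape of such a risk model can be sketched without any clinical validity: logistic regression maps a weighted sum of patient features to a probability-like score between 0 and 1. The weights and threshold below are invented purely for illustration; a deployed sepsis model learns its parameters from large EHR datasets and draws on far richer inputs than three vital signs.

```python
import math

# Illustrative, hand-picked weights -- a real model would learn these
# from training data, not have them chosen by hand.
WEIGHTS = {"resp_rate": 0.15, "heart_rate": 0.04, "temp_dev": 0.9}
BIAS = -8.0

def sepsis_risk(resp_rate, heart_rate, temp_c):
    # Logistic regression: weighted feature sum squashed into (0, 1)
    # by the sigmoid function 1 / (1 + e^-z).
    z = (WEIGHTS["resp_rate"] * resp_rate
         + WEIGHTS["heart_rate"] * heart_rate
         + WEIGHTS["temp_dev"] * abs(temp_c - 37.0)
         + BIAS)
    return 1.0 / (1.0 + math.exp(-z))

low = sepsis_risk(14, 70, 36.9)    # unremarkable vitals -> low score
high = sepsis_risk(28, 125, 39.4)  # tachypnoeic, tachycardic, febrile -> high score
print(f"low={low:.2f} high={high:.2f}")
```

In practice, the value of such systems comes from running this kind of scoring continuously against streaming vitals and EHR data, flagging deterioration before it is clinically obvious, which is what the cited sepsis studies evaluate.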
Researchers are also leveraging traditional, non-invasive diagnostic tests in novel ways, using AI to detect patterns and correlations that were previously undetectable to clinicians. For example, Mayo Clinic has developed several AI algorithms that can accurately diagnose cardiac conditions, such as cardiac amyloidosis, atrial fibrillation, aortic valve stenosis, left ventricular systolic and diastolic dysfunction, and pulmonary hypertension, using standard 12-lead ECGs.[19-25]
Administrative AI and LLMs
Another promising area of AI application is clinical documentation automation, including medical notetaking, insurance processing, and clinical trial enrolment. Addressing administrative burden through automation was the top area of opportunity according to 57% of physicians in the previously referenced AMA survey.[2] LLMs such as ChatGPT and purpose-built AI-facilitated clinical documentation tools, such as DeepScribe and DAX Copilot, have been shown to reduce the administrative burden on physicians, both by improving documentation quality and by saving time.[26-28] Greater adoption by health systems and seamless EHR integration could further enhance physician efficiency, allowing more direct patient interaction and potentially reducing burnout driven by current high administrative loads.[4] However, results have been inconsistent, with significant provider-to-provider variability. AI scribes that summarise patient-provider interactions hold promise, but challenges persist, including hallucinations, omission of critical patient details, and a lack of complex medical reasoning.[27] As a result, constant fact-checking is required, making performance gains uncertain.
Challenges and risks in AI adoption
While AI is already demonstrating value in diagnostics, predictive analytics, and administrative automation, its widespread adoption is not without challenges. As previously mentioned, AI is not static; it learns, adapts, and sometimes fails, introducing uncertainty and risk. Although physician enthusiasm for AI is rising, with 66% of physicians now using AI tools compared with 38% in 2023, concerns over clinical reliability, liability, ethical implications, bias, workflow integration, and regulatory gaps have also increased.[2] Addressing these concerns is critical to ensuring that AI serves as an asset rather than a liability in modern healthcare.
The physician-patient relationship: A new dynamic?
AI is not only transforming how physicians diagnose and treat disease, but also reshaping the fundamental nature of patient interactions. While AI can improve efficiency and augment clinical decision-making, 39% of physicians worry that it may negatively affect patient interactions.[2] As AI-generated recommendations become more common, there is concern that they could erode patient trust in their providers. Additionally, physicians have expressed concerns about cognitive overload, as they must interpret AI-driven insights while maintaining direct patient engagement.
To ensure that AI enhances rather than disrupts care, thoughtful implementation and constant evaluation are necessary. The AMA advocates for 'augmented intelligence', a conceptualisation of AI that emphasises its assistive role rather than autonomy.[5] This human-centred approach aims to preserve the integrity of the physician-patient relationship, enhance clinical outcomes, and support provider wellbeing.
Legal uncertainty and liability
The rapid evolution of AI in healthcare has outpaced existing regulatory frameworks, creating significant challenges for oversight and implementation. Traditional medical technologies follow well-established approval processes, with clear guidelines for validation, safety, and physician oversight.[29] In contrast, AI operates in a state of continuous flux, adapting and evolving in ways that make standardised regulation difficult to establish and maintain.[30] Unlike passive medical devices, AI often functions as a dynamic decision-support system, directly influencing patient care while remaining subject to shifting capabilities, datasets, and real-world applications.
As AI takes on a greater role in clinical decision-making and administrative processes, it introduces complex legal and ethical questions that merit careful consideration. The most pressing issue is liability: who bears responsibility when AI makes an incorrect or harmful recommendation?
Liability in the use of AI in healthcare remains largely undefined, but it could fall on multiple parties: the physician who relied on the AI system's recommendation, the hospital or institution that deployed the system, or the AI developer that built it. The AMA advocates that physicians should not bear full responsibility for adverse outcomes if AI systems lack transparency, introduce unknown biases, or are mandated without physician oversight.[30] The AMA supports a risk-based approach to AI liability, meaning that accountability falls on the entity best positioned to mitigate risk or harm.[30] For example, if an AI developer used flawed data, they should be liable; if a hospital failed to implement AI systems correctly, liability should fall on the institution; and if a provider used an AI tool outside its intended use case in a way that led to harm, they should bear responsibility.
Bias and ethical challenges in AI
Bias in AI models remains a persistent challenge. AI is trained and learns from historical data, meaning it is inherently shaped by pre-existing - and often unrecognised - biases. This has the potential to exacerbate healthcare disparities rather than eliminate them. Without proper safeguards, AI tools risk producing skewed recommendations that disproportionately affect certain populations.
Two well-documented examples of AI bias in medicine include skin cancer detection algorithms that perform less accurately on darker skin tones as a result of non-representative training data, and AI-driven pain management tools that underestimate pain levels in black patients.[31,32] However, careful evaluation of these tools has led to innovative ways to redress these sometimes unexplained disparities to potentially enable expanded access to treatment for underserved patients.[33]
Beyond clinical bias, physicians have also raised concerns about AI's impact on health equity, with only 33% believing AI tools will improve equity in healthcare.[2] The AMA has advocated for AI transparency as a means of mitigating bias, requiring that developers disclose training data sources and potential limitations.[30] To promote fairness, AI systems must be trained on diverse and representative datasets, audited continuously to assess for disparities, and designed with explainability in mind to ensure physicians can understand how decisions are being made.
Shaping responsible AI in healthcare
AI is poised to transform healthcare delivery by improving diagnostic accuracy, enhancing risk assessment, and streamlining operational efficiency, among many other potential applications. However, its adoption must be guided by clear ethical, regulatory, and clinical principles that ensure patient safety, maintain physician trust, and uphold the integrity of medical decision-making.[6] AI should therefore be viewed through the lens of 'augmented intelligence', as emphasised by the AMA, reinforcing its role as a tool that enhances human judgement rather than replacing it. To achieve this, human intervention points must be built into AI-driven clinical and administrative decision-making to preserve human oversight and accountability.
Trustworthy AI requires adherence to three core pillars: ethics, evidence, and equity.[6] Ethical AI must align with the fundamental values of medicine, prioritising patient welfare, autonomy, fairness, and shared decision-making between patients and physicians. Owing to their unique and evolving nature, AI systems must undergo rigorous and continuous scientific validation to ensure the highest standards of accuracy, reliability, and clinical relevance. Moreover, given the potential risks of training AI algorithms with biased datasets, health equity must be central to AI deployment. Without appropriate safeguards, AI can unintentionally worsen or create new healthcare disparities rather than mitigate them.
Transparency is a critical concern in the responsible development of AI. Physicians must understand how AI tools make decisions, what specific problems they aim to solve, and the intended patient population and clinical setting for each system. Developers must provide detailed information on training data sources, model validation, and known limitations. Additionally, the use of AI in medical decision-making or patient access to care must be clearly communicated to patients, who should retain the right to make informed choices regarding AI-assisted care. As AI models become increasingly complex, 'black-box' systems, where the reasoning behind predictions or outcomes is opaque, risk undermining both patient and physician trust.
Governance structures and regulatory frameworks must evolve to keep pace with AI's rapid advancements. Collaboration between physicians, medical organisations, AI developers, and regulatory agencies is essential to establish standards that prioritise patient safety, transparency, and clinical efficacy. Ensuring responsible AI integration will require proactive oversight, continuing validation, and physician-led implementation strategies to harness AI's full potential while safeguarding medical ethics and public trust.
The path forward: AI as a partner, not a replacement
Physician trust in AI is a critical determinant for its long-term success in healthcare. The AMA's Physician Sentiment Report on AI use found that while adoption is increasing, many physicians remain cautious, with 47% of physicians prioritising stronger oversight as a key requirement for building trust in AI tools.[2] Physicians are uniquely positioned to guide AI's integration into clinical workflows, ensuring that AI aligns with the realities of patient care rather than introducing additional complexities.
Regulatory clarity is essential to AI's future in medicine. Given AI's rapid evolution and expanding use cases, its regulation presents unique challenges. The AMA advocates for standardised evaluation frameworks, similar to those used for drugs and medical devices, to assess AI's safety, efficacy, and clinical utility.[30] As legislative efforts progress, the demand for stricter regulations and greater transparency will be particularly important for high-risk AI applications that directly impact patient care.[30] Existing models, such as the International Medical Device Regulators Forum (IMDRF) risk-based categorisation for Software as a Medical Device (SaMD), provide a foundation for regulating AI based on its level of risk and intended use.[6,29] The AMA has called for ongoing validation and real-world performance monitoring, particularly for AI systems used in clinical decision support, to ensure they remain accurate, fair, and reliable over time.[30]
Physicians should retain clinical authority over AI-generated recommendations but should not bear full responsibility for AI-driven errors when the system is appropriately deployed within its intended use case. Liability frameworks must reflect AI's collaborative role in clinical decision-making, ensuring that responsibility is distributed among developers, institutions, and regulatory bodies based on their respective roles in AI implementation. Additionally, physicians, along with national, state, and subspecialty medical societies, must continue to advocate for AI systems that promote health equity and actively work to prevent the exacerbation of existing disparities in care.
The success of AI in healthcare will ultimately be determined by how well it integrates into clinical practice, regulatory frameworks, and ethical guidelines. Prioritising physician involvement, patient-centred implementation, and robust oversight is essential to ensuring that AI fulfils its potential to enhance care while preserving the core values of medical professionalism.
Conclusion
AI is no longer a distant concept but an active force shaping modern healthcare. It holds immense promise to positively impact health and wellbeing for humanity. However, its long-term impact depends not on technological advancements alone, but on how it is governed, trusted, and integrated into medical practice. The AMA has emphasised that AI must be evidence-based, ethically sound, and transparent to ensure that it enhances rather than disrupts healthcare delivery.
The transition from traditional medicine to AI-assisted care must be guided by physician leadership, regulatory oversight, and a steadfast commitment to patient welfare. Ensuring AI remains an assistive tool rather than a substitute for clinical judgement will require continuing validation, bias mitigation, and ethical safeguards. AI developers must commit to transparent model design, equitable training data, and explainability, while regulatory bodies must establish clear frameworks that balance innovation with patient safety.
Education and training will also be critical in preparing physicians to responsibly implement AI tools. As AI becomes more deeply embedded in medical decision-making, healthcare professionals must be equipped to critically evaluate its recommendations, advocate for ethical deployment, and maintain accountability in patient care. The alignment of physicians, policymakers, and AI developers in these efforts will determine whether AI fulfils its potential as a force for good in medicine.
AI will not replace physicians, but physicians who effectively use AI will be better positioned to lead the future of medicine. Ensuring AI remains a trusted, explainable and accountable tool will be essential to maximising its benefits while upholding the highest standards of medical practice.
AI disclosures. In the production of this manuscript, OpenEvidence® was used to aid the literature review, and ChatGPT (GPT-4o and o3) was used to improve grammar and refine language. All content, analysis, and conclusions were generated and validated by the authors. The AI-assisted tools were used solely for enhancing readability and organisation and did not independently generate original research, data interpretation, or critical arguments.
References
1. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med Educ 2023;23(1):689. https://doi.org/10.1186/s12909-023-04698-z
2. American Medical Association. Physician sentiments around the use of AI in health care: Motivations, opportunities, risks, and use cases. https://www.ama-assn.org/system/files/physician-ai-sentiment-report.pdf (accessed 22 February 2025).
3. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366(6464):447-453. https://doi.org/10.1126/science.aax2342
4. Melnick ER, Harry E, Sinsky CA, et al. Perceived electronic health record usability as a predictor of task load and burnout among US physicians: Mediation analysis. J Med Internet Res 2020;22(12):e23382. https://doi.org/10.2196/23382
5. American Medical Association. Augmented intelligence in medicine: 2025 (updated 02/12/2025). https://www.ama-assn.org/practice-management/digital/augmented-intelligence-medicine (accessed 21 February 2025).
6. Crigger E, Reinbold K, Hanson C, Kao A, Blake K, Irons M. Trustworthy augmented intelligence in health care. J Med Syst 2022;46(2):12. https://doi.org/10.1007/s10916-021-01790-z
7. Aidoc. AI empowering radiologists. https://www.aidoc.com/solutions/radiology/ (accessed 23 February 2025).
8. Graeve VIJ, Laures S, Spirig A, et al. Implementation of an AI algorithm in clinical practice to reduce missed incidental pulmonary embolisms on chest CT and its impact on short-term survival. Invest Radiol 2025;60(4):260-266. https://doi.org/10.1097/RLI.0000000000001122
9. Topff L, Ranschaert ER, Bartels-Rutten A, et al. Artificial intelligence tool for detection and worklist prioritization reduces time to diagnosis of incidental pulmonary embolism at CT. Radiol Cardiothorac Imaging 2023;5(2):e220163. https://doi.org/10.1148/ryct.220163
10. Voter AF, Meram E, Garrett JW, Yu JJ. Diagnostic accuracy and failure mode analysis of a deep learning algorithm for the detection of intracranial hemorrhage. J Am Coll Radiol 2021;18(8):1143-1152. https://doi.org/10.1016/j.jacr.2021.03.005
11. Weikert T, Winkel DJ, Bremerich J, et al. Automated detection of pulmonary embolism in CT pulmonary angiograms using an AI-powered algorithm. Eur Radiol 2020;30(12):6545-6553. https://doi.org/10.1007/s00330-020-06998-0
12. Winkel DJ, Heye T, Weikert TJ, Boll DT, Stieltjes B. Evaluation of an AI-based detection software for acute findings in abdominal computed tomography scans: Toward an automated work list prioritization of routine CT examinations. Invest Radiol 2019;54(1):55-59. https://doi.org/10.1097/RLI.0000000000000509
13. Al-Antari MA. Artificial intelligence for medical diagnostics: Existing and future AI technology! Diagnostics 2023;13(4). https://doi.org/10.3390/diagnostics13040688
14. Li YH, Li YL, Wei MY, Li GY. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep 2024;14(1):18994. https://doi.org/10.1038/s41598-024-70073-7
15. Lv H, Yang X, Wang B, et al. Machine learning-driven models to predict prognostic outcomes in patients hospitalized with heart failure using electronic health records: Retrospective study. J Med Internet Res 2021;23(4):e24996. https://doi.org/10.2196/24996
16. Chi CY, Ao S, Winkler A, et al. Predicting the mortality and readmission of in-hospital cardiac arrest patients with electronic health records: A machine learning approach. J Med Internet Res 2021;23(9):e27798. https://doi.org/10.2196/27798
17. Li F, Wang S, Gao Z, et al. Harnessing artificial intelligence in sepsis care: Advances in early detection, personalized treatment, and real-time monitoring. Front Med 2024;11:1510792. https://doi.org/10.3389/fmed.2024.1510792
18. Zhang Q, Wang J, Liu G, Zhang W. Artificial intelligence can use physiological parameters to optimize treatment strategies and predict clinical deterioration of sepsis in ICU. Physiol Meas 2023;44(1). https://doi.org/10.1088/1361-6579/acb03b
19. Attia ZI, Kapa S, Lopez-Jimenez F, et al. Screening for cardiac contractile dysfunction using an artificial intelligence-enabled electrocardiogram. Nat Med 2019;25(1):70-74. https://doi.org/10.1038/s41591-018-0240-2
20. Attia ZI, Noseworthy PA, Lopez-Jimenez F, et al. An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: A retrospective analysis of outcome prediction. Lancet 2019;394(10201):861-867. https://doi.org/10.1016/s0140-6736(19)31721-0
21. Cohen-Shelly M, Attia ZI, Friedman PA, et al. Electrocardiogram screening for aortic valve stenosis using artificial intelligence. Eur Heart J 2021;42(30):2885-2896. https://doi.org/10.1093/eurheartj/ehab153
22. DuBrock HM, Wagner TE, Carlson K, et al. An electrocardiogram-based AI algorithm for early detection of pulmonary hypertension. Eur Respir J 2024;64(1). https://doi.org/10.1183/13993003.00192-2024
23. Grogan M, Lopez-Jimenez F, Cohen-Shelly M, et al. Artificial intelligence-enhanced electrocardiogram for the early detection of cardiac amyloidosis. Mayo Clin Proc 2021;96(11):2768-2778. https://doi.org/10.1016/j.mayocp.2021.04.023
24. Kashou AH, Medina-Inojosa JR, Noseworthy PA, et al. Artificial intelligence-augmented electrocardiogram detection of left ventricular systolic dysfunction in the general population. Mayo Clin Proc 2021;96(10):2576-2586. https://doi.org/10.1016/j.mayocp.2021.02.029
25. Lee E, Ito S, Miranda WR, et al. Artificial intelligence-enabled ECG for left ventricular diastolic function and filling pressure. NPJ Digit Med 2024;7(1):4. https://doi.org/10.1038/s41746-023-00993-7
26. Bundy H, Gerhart J, Baek S, et al. Can the administrative loads of physicians be alleviated by AI-facilitated clinical documentation? J Gen Intern Med 2024;39(15):2995-3000. https://doi.org/10.1007/s11606-024-08870-z
27. Kernberg A, Gold JA, Mohan V. Using ChatGPT-4 to create structured medical notes from audio recordings of physician-patient encounters: Comparative study. J Med Internet Res 2024;26:e54419. https://doi.org/10.2196/54419
28. Ma SP, Liang AS, Shah SJ, et al. Ambient artificial intelligence scribes: Utilization and impact on documentation time. J Am Med Inform Assoc 2025;32(2):381-385. https://doi.org/10.1093/jamia/ocae304
29. Daryanani AE, Maduekwe UN, Baird P, Ehrenfeld JM. Ensuring medical device safety: The role of standards organizations and regulatory bodies. J Med Syst 2025;49(1):16. https://doi.org/10.1007/s10916-025-02150-x
30. American Medical Association. AMA AI state advocacy and policy priorities: Issue Brief, 2025 (updated 2024-12-01). https://www.ama-assn.org/system/files/issue-brief-ai-state-advocacy-policy-priorities.pdf (accessed 26 February 2025).
31. Liu Y, Primiero CA, Kulkarni V, Soyer HP, Betz-Stablein B. Artificial intelligence for the classification of pigmented skin lesions in populations with skin of color: A systematic review. Dermatology 2023;239(4):499-513. https://doi.org/10.1159/000530225
32. Deb B, Rodman A. Racial differences in pain assessment and false beliefs about race in AI models. JAMA Netw Open 2024;7(10):e2437977. https://doi.org/10.1001/jamanetworkopen.2024.37977
33. Pierson E, Cutler DM, Leskovec J, Mullainathan S, Obermeyer Z. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nat Med 2021;27(1):136-140. https://doi.org/10.1038/s41591-020-01192-7
Correspondence:
Jesse M Ehrenfeld
jehrenfeld@mcw.edu