One in five UK doctors uses generative artificial intelligence (GenAI) tools, such as OpenAI’s ChatGPT or Google’s Gemini, in clinical practice, according to a recent survey of around 1,000 general practitioners. The survey indicates that physicians are turning to GenAI for a range of tasks, including generating documentation after appointments, supporting clinical decisions, and drafting patient-friendly discharge summaries and treatment plans.
Given the many pressures health systems now face, it is not surprising that GenAI is attracting interest as a potentially transformative tool for modern healthcare. Nevertheless, its rapid adoption raises significant questions about patient safety and about how to integrate the technology responsibly into routine medical practice.
UNDERSTANDING GENAI’S ROLE IN HEALTHCARE
Generative AI tools have become increasingly popular because of their versatility. Unlike traditional AI applications, which are designed for specific tasks such as image classification in diagnostics, GenAI is built on broad foundation models that can generate text, audio, images, and more, adapting to many different applications.
Nonetheless, because GenAI is not designed for a specific clinical context, concerns remain about its reliability and safety in medical environments. The shift from narrowly focused AI to these more general applications presents challenges that must be addressed before widespread adoption.
THE HALLUCINATION DILEMMA
A significant concern about GenAI in healthcare is the issue of “hallucinations”: nonsensical or inaccurate outputs generated by the AI in response to its input. Research has shown that GenAI tools can produce misleading summaries, linking information incorrectly or introducing details that were never in the source material.
These hallucinations stem from how GenAI works: it predicts the most likely next word rather than understanding information as a human would, so its outputs can read as plausible while lacking factual accuracy. Such inaccuracies are especially concerning in medical contexts, where precise documentation is critical for patient care.
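To make the mechanism concrete, here is a deliberately toy sketch of next-word prediction, not how ChatGPT, Gemini, or any clinical tool is actually built. It uses a tiny bigram model over a handful of made-up consultation phrases (all text in it is illustrative) to show how a system that only continues with the statistically likeliest word can produce a plausible-sounding but unverified statement.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "language model" that, like GenAI at a
# vastly larger scale, simply predicts the most likely next word from the
# text it was trained on. The training text below is invented.
training_text = (
    "patient reports chest pain . "
    "patient reports mild headache . "
    "patient reports chest pain and shortness of breath ."
)

# Count how often each word follows each other word in the training text.
bigrams = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "."

# Generate a "summary" word by word, always taking the likeliest continuation.
word, generated = "patient", ["patient"]
for _ in range(6):
    word = most_likely_next(word)
    generated.append(word)
    if word == ".":
        break

print(" ".join(generated))
# Prints "patient reports chest pain ." -- a fluent, plausible sentence,
# even if the actual consultation only mentioned a headache. The model has
# no notion of which statement is true; it only reproduces likely word
# sequences, which is the root of hallucination.
```

Real GenAI systems are far more sophisticated, but the underlying principle, generating the most statistically likely continuation rather than a verified fact, is the same.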
POTENTIAL PITFALLS IN PATIENT CONSULTATIONS
Imagine a scenario in which a GenAI tool listens to a patient consultation and produces an electronic summary note. While this can free healthcare providers to focus on the patient, it also introduces the risk of misleading notes: the AI might change the severity or frequency of symptoms, include conditions that were never mentioned, or alter vital information.
A doctor who knows the patient might catch these inaccuracies, but in fragmented healthcare systems where patients see multiple providers, such errors could lead to significant harm, including delayed treatment or misdiagnosis.
IMPORTANCE OF PATIENT SAFETY
Patient safety should be a top priority when considering GenAI’s integration into healthcare. The effectiveness of such technologies hinges on how they interact with users and how well they fit the specific rules and cultural context of the healthcare systems in which they are deployed.
Without that context, the unpredictability of GenAI use can lead to unintended consequences. For example, a GenAI system used for patient triage might hinder access for people with lower digital literacy or for non-native speakers of the local language, inadvertently widening health disparities rather than narrowing them.
NEED FOR RESPONSIVE REGULATION
To protect patient safety, regulatory frameworks must keep pace with the evolving landscape of GenAI technologies. Traditional safety assessments may fall short, as they often focus on isolated failures rather than on how the technology behaves in real-world use.
Moreover, the rapid development and updating of GenAI tools complicate safety assessments. Regulators and developers must collaborate closely with healthcare communities to create guidelines that focus on safe and effective use.
COLLABORATIVE APPROACH TO AI INTEGRATION
Successful integration of GenAI into healthcare requires a partnership between developers, healthcare providers, and regulators. This collaboration is crucial for identifying practical applications of GenAI while mitigating associated risks.
Open dialogue among stakeholders can foster an understanding of patient needs and concerns. This understanding ultimately leads to the creation of GenAI tools that enhance patient care rather than hinder it.