
The Global Reckoning With AI in the Clinic: When Medical Decision Tools Hallucinate

As AI-powered clinical decision support tools spread across healthcare systems worldwide, the risk of 'hallucinations' — confidently delivered but factually wrong medical guidance — is emerging as a critical patient safety challenge. The case of Epocrates, a tool used by millions of clinicians globally, illustrates how the race to embed generative AI into medical workflows is outpacing regulatory frameworks on every continent.

ViaNews Editorial Team

February 18, 2026


When a clinician reaches for their phone mid-consultation to verify a drug dosage or check for dangerous interactions, they expect precision. For millions of healthcare professionals across the United States, Europe, Asia, and beyond, apps like Epocrates have become indispensable — a pocket-sized pharmacopoeia promising reliable clinical decision support at the point of care. But as Epocrates and a growing field of competitors integrate generative AI assistants into their core workflows, a troubling question is reverberating through hospitals and health ministries worldwide: what happens when the AI confidently gets it wrong?

Epocrates, which launched an AI-powered clinical assistant in September 2025, is now navigating one of the most consequential risk landscapes in healthcare technology. According to a recent internal risk assessment, the underlying AI model carries a high likelihood of producing hallucinations — fabricated or outdated clinical information delivered with unwarranted confidence. The severity of such failures has been rated catastrophic, a classification that reflects the direct and potentially fatal line between erroneous clinical guidance and patient harm.

A Problem Without Borders

AI hallucinations — instances where large language models generate plausible-sounding but factually incorrect outputs — are a well-documented limitation of current generative AI architectures. In consumer applications, a hallucinated restaurant recommendation is a nuisance. In a clinical setting, it can kill.

The risk is especially acute in three universally shared areas: rare or orphan drugs, where training data is sparse and model confidence is often inversely proportional to reliability; off-label uses, where clinical evidence is evolving and frequently absent from standard formularies used in countries from Germany to Japan; and newly approved therapies, which may postdate a model's training cutoff or exist only in limited regulatory filings that AI systems struggle to accurately synthesize.

These vulnerabilities are not unique to the United States. Clinical AI tools are being deployed at scale across the UK's NHS, in public health systems throughout Southeast Asia, and in rapidly digitising healthcare infrastructure across Latin America and sub-Saharan Africa — often in environments where specialist human oversight is thinner and the margin for error is narrower still.

The Regulatory Patchwork

Globally, regulators are struggling to keep pace. The US Food and Drug Administration (FDA) has begun developing frameworks for AI-enabled clinical decision support, but concrete guidance remains incomplete. The European Union's AI Act, which came into force in 2024, classifies certain medical AI systems as high-risk and imposes strict conformity requirements — but enforcement infrastructure is still being assembled across member states. The UK's Medicines and Healthcare products Regulatory Agency (MHRA) has signalled a more permissive, innovation-friendly approach, raising concerns among patient safety advocates. In most of the Global South, there is no specific regulatory framework for clinical AI at all.

This patchwork creates a dangerous asymmetry: a tool assessed as high-risk in one jurisdiction can be deployed without restriction in another. Multinational healthcare platforms are acutely aware of this arbitrage.

Liability in the Age of Clinical AI

Beyond patient safety, the liability implications are substantial and vary dramatically by jurisdiction. In the United States, questions of culpability cascade across the technology provider, the prescribing physician, and the institution when an AI recommendation proves incorrect. In France and Germany, stricter product liability regimes may place greater responsibility on software developers. In many emerging markets, legal frameworks for medical AI liability simply do not yet exist.

Epocrates has historically built its reputation on curated, medically reviewed drug monographs. The introduction of a conversational AI layer creates a fundamental tension between the open-ended flexibility users expect from an AI assistant and the rigid accuracy demands of clinical pharmacology. A physician in Lagos, London, or Los Angeles asking the AI to explain dosing adjustments for a novel oncology agent in a patient with renal impairment is not asking a trivia question — they are making a treatment decision.

Mitigation Strategies and the Human-in-the-Loop Debate

Industry and clinical bodies are converging on several risk-mitigation strategies, though implementation varies widely. These include mandatory disclosure warnings within AI interfaces (already required under EU AI Act provisions for high-risk systems); integration with regularly updated, curated drug databases rather than reliance on static training data; and clinical protocols requiring a human expert to verify any AI-generated recommendation before it influences treatment.
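How those three safeguards fit together can be seen in a minimal sketch, written in Python purely for illustration. Everything here is hypothetical: CURATED_MONOGRAPHS, draft_answer and present_to_clinician are stand-in names, not any vendor's actual API, and the dosing text is placeholder content, not clinical guidance.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical curated formulary; in practice this would be a regularly
# updated, medically reviewed drug database, not model memory.
CURATED_MONOGRAPHS = {
    "amoxicillin": "Adults: 500 mg every 8 hours (illustrative placeholder, not clinical advice).",
}


@dataclass
class AiAnswer:
    text: str                        # what the assistant generated
    cited_monograph: Optional[str]   # curated entry the answer was grounded in, if any


def draft_answer(question: str) -> AiAnswer:
    """Stand-in for a generative model call, grounded in the curated table when possible."""
    for drug, monograph in CURATED_MONOGRAPHS.items():
        if drug in question.lower():
            return AiAnswer(text=monograph, cited_monograph=drug)
    # No curated grounding found: the answer would be pure model output.
    return AiAnswer(text="(model-only answer, no curated source)", cited_monograph=None)


def present_to_clinician(question: str) -> str:
    """Apply the three mitigations: disclosure, grounding check, human sign-off."""
    answer = draft_answer(question)
    disclosure = "AI-generated suggestion - verify against the monograph before prescribing."
    if answer.cited_monograph is None:
        # Not grounded in the curated database: block rather than guess.
        return f"{disclosure}\nNo curated entry found; escalate to a pharmacist."
    # Human-in-the-loop gate: the recommendation stands only after explicit confirmation.
    approved = input(f"{disclosure}\n{answer.text}\nConfirm with clinical judgement (y/n): ")
    return answer.text if approved.strip().lower() == "y" else "Recommendation not approved."


if __name__ == "__main__":
    print(present_to_clinician("What is the usual adult dose of amoxicillin?"))
```

The essential design choice in this pattern is that an ungrounded answer is blocked rather than shown, and even a grounded one reaches the patient only after a clinician explicitly accepts it.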

The World Health Organization has called for international standards on AI in health, warning in its 2023 guidance that "the rapid commercialisation of AI for health has not been matched by the necessary regulatory and governance infrastructure." That gap has only grown as generative AI has accelerated into clinical settings.

The Broader Stakes

The Epocrates case is not an isolated incident but a bellwether. Dozens of clinical AI tools are now embedded in hospital systems, electronic health records, and point-of-care applications across the world. Each carries its own hallucination risk profile, its own training data limitations, and its own jurisdictional exposure.

What the Epocrates risk assessment makes plain is that the healthcare technology industry has not yet solved — and in many cases has not yet honestly confronted — the fundamental incompatibility between the probabilistic, sometimes unreliable nature of large language models and the zero-tolerance accuracy requirements of clinical medicine. Until that gap is closed, every AI-assisted consultation carries a risk that clinicians, patients, and policymakers around the world are only beginning to fully reckon with.


Sources:
1. Globe Newswire, "Drug Reference Apps Market to Reach USD 3.55 Billion by 2035, Driven by Digital Healthcare Adoption" (February 16, 2026)