The doctor who never gets tired

Why AI in medicine can no longer be ignored

It’s a story that keeps repeating itself. Every time a technology threatens to make a professional category obsolete, the controversy erupts. It happened with manufacturing, with accountants, with taxi drivers. Today it’s medicine’s turn. Newspapers run headlines about incorrect AI diagnoses, about patients who allegedly followed bad advice, about deaths attributed to an algorithm. But how many of these stories have been verified with the same scientific rigor demanded of the technology itself?

The question nobody wants to ask openly is this: who benefits from demonizing medical AI? The answer—uncomfortable but necessary—leads directly to medical lobbies. Not all doctors, of course. But the system as a whole—professional orders, insurance companies, private institutions—has every interest in maintaining the status quo. A patient who gets an initial assessment from AI is a patient who might not pay for a consultation.

The numbers nobody wants to cite

Before talking about risks, let’s talk about reality. The data on the traditional healthcare system tells a story that AI’s detractors prefer to ignore:

  • 250,000+ deaths per year in the US from medical errors (Johns Hopkins, 2016)—the third leading cause of death in the country
  • 1 in 10 European patients receives a misdiagnosis at some point in their lives
  • 7–10 minutes: average time a general practitioner spends with each patient
  • 40–50% of emergency room visits classified as non-urgent or low-urgency
  • 4.5 billion people worldwide without adequate access to medical care due to economic or geographic barriers

Faced with these numbers, what would be an acceptable risk for AI? And more importantly, who is being asked to accept the risk? Always the same patient who already navigates an imperfect, costly, and often inaccessible system.

Data Report · Global Healthcare · 2016–2023
The Broken System

Five verified statistics that reveal the scale of failure in today’s healthcare — and why AI is not the problem, but part of the solution.
  • 1 in 10 · Diagnostic errors. Up to 15% of diagnoses in healthcare settings are estimated to be inaccurate or untimely, and diagnostic error is projected to cost 17.5% of total healthcare expenditure in OECD countries. (OECD Health Working Paper No. 176, “The Economics of Diagnostic Safety”, 2025)
  • 7–10 · Minutes per visit. The average consultation time a GP dedicates to each patient — a timeframe in which a physician must listen, examine, reason, diagnose, and prescribe. Cognitive biases and fatigue are inevitable within such constraints. (British Journal of General Practice; European primary care studies, multiple sources)
  • 40–50% · Non-urgent ER visits. Nearly half of all emergency room visits across Europe and the US are classified as non-urgent or low-urgency — cases that could be managed through guided AI triage, freeing critical resources for true emergencies. (Emergency Medicine International; multiple national health authority reports)
  • 4.5B · Without full coverage. As of 2021, over half the world’s population lacked access to essential health services, and over 1.3 billion people were pushed into poverty by health expenses in 2019 alone. (WHO & World Bank, UHC Global Monitoring Report, 2023; SDG 3.8 tracking)
Context
These figures describe the baseline reality of healthcare before AI involvement. The debate on AI medical risk rarely contextualizes errors against this pre-existing systemic failure. A technology is not dangerous in a vacuum — it is dangerous relative to the alternative.
US causes of death, relative scale (annual estimates):

  • Heart disease: ~695,000
  • Cancer: ~605,000
  • Medical errors: 250,000+
  • Respiratory: ~160,000

Patient honesty

There is one aspect that almost no analysis considers, yet it is potentially revolutionary: people are more honest with a machine than with a human doctor.

Who among us hasn’t downplayed a symptom for fear of seeming like a hypochondriac? Who hasn’t concealed an unhealthy habit—smoking, drinking, or a sedentary lifestyle—to avoid the disapproval of a professional? Who hasn’t delayed a visit out of embarrassment, white-coat anxiety, or simply to avoid facing an answer they dread?

An AI system doesn’t judge. It doesn’t have a bad day. It doesn’t glance at the clock after seven minutes. It doesn’t project that subtle superiority that some professionals, often unconsciously, convey. The result? More complete medical histories, more accurate information, potentially more precise diagnoses.

AI clinics

What today seems like science fiction is, in reality, a logical and inevitable development. Imagine the scenario five to ten years from now:

A person experiences recurring symptoms. Instead of waiting weeks for an appointment with a GP, they access an AI clinic—physical or digital. An advanced system collects a complete medical history through natural dialogue, analyzes the symptoms, cross-references with available clinical records, and suggests targeted tests. Not definitive diagnoses: precise clinical guidance. The patient is then referred—if necessary—to the right specialist, with a complete dossier already prepared.
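To make the shape of that flow concrete, here is a deliberately toy sketch in Python. It is purely illustrative: the class names, the hand-written rules, and the referral logic are hypothetical placeholders for what, in a real AI clinic, would be a regulated clinical model sitting behind a natural-language intake dialogue.

```python
# Purely illustrative sketch of an AI-clinic triage flow.
# All names and rules here are hypothetical placeholders, not a real
# clinical system: the "analysis" step is a couple of hand-written rules
# standing in for the model that would actually reason over the history.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class IntakeRecord:
    """Structured history collected through the intake dialogue."""
    symptoms: list[str]
    duration_days: int
    known_conditions: list[str] = field(default_factory=list)


@dataclass
class TriageSuggestion:
    """Guidance only: suggested tests and a referral, never a diagnosis."""
    suggested_tests: list[str]
    referral: str | None
    urgency: str  # "routine" | "soon" | "urgent"


def triage(record: IntakeRecord) -> TriageSuggestion:
    """Toy stand-in for the analysis step described in the scenario."""
    tests: list[str] = []
    referral: str | None = None
    urgency = "routine"

    # Hypothetical rules; a real system would weigh the full history,
    # cross-reference clinical records, and escalate anything ambiguous
    # to a human clinician.
    if "chest pain" in record.symptoms:
        tests.append("ECG")
        referral = "cardiology"
        urgency = "urgent"
    elif "persistent cough" in record.symptoms and record.duration_days > 21:
        tests.append("chest X-ray")
        referral = "pulmonology"
        urgency = "soon"

    return TriageSuggestion(tests, referral, urgency)


if __name__ == "__main__":
    record = IntakeRecord(symptoms=["persistent cough"], duration_days=30)
    print(triage(record))  # suggests a chest X-ray and a pulmonology referral
```

The point of the sketch is the shape of the output: targeted tests and a referral with a prepared dossier, not a definitive diagnosis.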

This is not futurism. It is the natural trajectory of technologies like GPT-4, Google’s Med-PaLM 2, or Stanford’s BioMedLM, already tested in real clinical settings with results comparable—in some areas—to those of specialized physicians.

The economic model is equally clear: the organization managing the AI service assumes legal responsibility for its outputs, exactly as a hospital today is held liable for the errors of its staff. This is not an unsolvable problem—it is a matter of regulation, which always follows (and never precedes) innovation.

Where AI excels—and where it doesn’t

Being provocative doesn’t mean being uncritical. There are fields where medical AI is already superior, and others where human judgment remains irreplaceable, at least for now.

AI excels at interpreting diagnostic images: recent studies show that deep learning systems detect tumors in mammograms and CT scans with a lower false-negative rate than human radiologists. It excels at analyzing patterns across large datasets—identifying correlations between drugs, symptoms, and outcomes that would escape any individual professional. It excels at standardization: applying the same protocols every time, free from fatigue effects or cognitive bias.

Where the human element remains central is in managing rare and multifactorial conditions, in psychological support for patients, in complex ethical decisions, and in that clinical intuition—still difficult to encode—that emerges from directly observing a person in their entirety.

The point is not to replace. It is to integrate intelligently, freeing the human doctor from routine tasks to focus where their value is truly irreplaceable.

Healthcare democracy

There is an ethical dimension that goes beyond efficiency and costs. Today, access to quality medicine depends largely on where you were born and how much money you have. A wealthy patient in a major city has access to private specialists within days. A patient in a remote area, or with limited resources, waits months for a basic visit.

AI could be the greatest healthcare equalizer in history. Not because it is perfect, but because it is scalable, replicable, and—if well-designed—accessible to anyone with a smartphone. Intelligent triage that relieves overcrowded emergency rooms, personalized prevention that today is available only to a few, lifestyle advice that currently costs hundreds of euros in private consultations: all of this could become a universal right.

It’s not about choosing

The debate on AI in medicine is framed incorrectly. It is not about choosing between technology and humanity, between algorithms and empathy, between efficiency and care. It is about deciding whether we want a 21st-century healthcare system or whether we prefer to remain anchored to the models of the last century.

The risks are real—AI hallucinations exist, error cases must be studied and prevented with rigor—but they are not arguments for blocking development. They are arguments for regulating it properly. Every medical technology has required years of testing, errors, and adjustments: from antibiotics to X-rays, from laparoscopic surgery to biological drugs.

Medical AI is no different. It is simply faster, more powerful, and—this is the true novelty—potentially within everyone’s reach. And that is precisely why it frightens those who have built their position on the scarcity of access.
