Nearly Right

Healthcare systems deploy AI tools while doctors lose diagnostic skills within months

Regulatory gaps and institutional failures create systematic risk to medical expertise

Within three months of introducing AI-assisted colonoscopy, experienced doctors at four Polish hospitals lost a fifth of their ability to detect precancerous growths without the technology's help. These weren't trainees fumbling with new equipment—they were seasoned physicians who had each performed over 2,000 procedures. The artificial intelligence had made them demonstrably worse at the very skill it was meant to enhance.

This alarming finding, published in The Lancet Gastroenterology & Hepatology, exposes a critical blind spot in healthcare's embrace of artificial intelligence. Whilst regulators approve AI devices at breakneck speed and hospitals deploy them for immediate efficiency gains, no one is systematically monitoring what happens to human expertise. The result is a healthcare system that may be trading long-term resilience for short-term performance—creating institutions that appear more capable but are fundamentally more fragile.

The erosion hiding in plain sight

The Polish study tracked 19 experienced endoscopists whose detection rates for precancerous growths fell from 28.4% to 22.4% after AI exposure: an absolute drop of six percentage points, or roughly a fifth in relative terms. But this dramatic decline isn't isolated. Across medical specialties, similar warnings are emerging from practitioners who've watched their colleagues' skills atrophy in real time.

Neurologists describe residents who can no longer interpret EEGs without algorithmic assistance. Radiologists report that some colleagues perform worse when using AI tools, whilst others become so dependent they struggle to read images independently. A Harvard Medical School study found that AI's effects vary dramatically between individual doctors—helping some whilst actively harming others' diagnostic abilities.

The scale is staggering. The FDA has authorised more than 1,000 AI-enabled medical devices since 2018, each evaluated individually for safety and effectiveness. But no regulatory framework examines their cumulative impact on human capabilities. Hospitals deploy AI systems for radiology, pathology, and diagnostic support without considering whether their doctors are systematically losing expertise across multiple domains.

"We find that different radiologists, indeed, react differently to AI assistance—some are helped while others are hurt by it," explains Dr Pranav Rajpurkar from Harvard Medical School. The problem isn't just individual variation—it's institutional blindness to skill erosion happening under their noses.

Aviation learned this lesson decades ago

Healthcare is repeating mistakes the aviation industry confronted in the 1980s, when autopilot systems became sophisticated enough to handle routine flying and pilots began losing manual flight skills. The consequences were deadly: crews unable to control their aircraft when automation failed, and crashes that shocked the industry into action.

Aviation authorities responded with rigorous policies. Pilots must demonstrate manual flying proficiency regularly, airlines monitor capabilities through flight data analysis, and training emphasises scenarios where automation fails. Commercial pilots may fly manually for only minutes per flight, yet maintaining these skills is considered non-negotiable.

"In some respects, automated aircraft may require a higher standard of basic stick and rudder skills, if only because these skills are practiced less often and maybe called upon in the most demanding emergency situations," notes Jacques Drappier, former Airbus training chief.

Healthcare has access to this hard-won knowledge but ignores it entirely. Aviation treats pilots as the ultimate backup when technology fails; healthcare often treats AI as infallible. The difference could prove catastrophic when medical AI systems inevitably encounter situations they cannot handle.

Regulators fighting the wrong war

The FDA's approach reveals a fundamental misunderstanding of the challenge. Regulations developed for traditional medical devices focus on whether AI tools work correctly, not whether doctors can work correctly without them.

Current approval processes evaluate individual devices in isolation—does this AI detect lung cancer accurately? Does that algorithm identify heart arrhythmias? But they completely ignore systemic effects. A hospital might deploy AI across radiology, emergency medicine, and diagnostics, each tool individually FDA-approved, whilst remaining oblivious to an institution-wide erosion of its doctors' capabilities.

This regulatory blind spot stems from frameworks designed in the 1970s for static devices, not dynamic systems that reshape human behaviour. Modern AI implementation requires understanding how multiple tools interact with human expertise over time—something current regulations don't remotely address.

Legal experts warn that AI tools "should not replace a healthcare provider's clinical judgment" due to misdiagnosis risks, yet no regulatory mechanism ensures this principle translates into practice. The gap between policy intention and implementation reality grows wider as AI deployment accelerates.

The irreversible spiral

The Polish study reveals the most insidious aspect of this crisis: skill erosion creates dependency, which accelerates further erosion. Once doctors lose diagnostic abilities, institutions become more reliant on AI, making recovery nearly impossible.

This isn't like forgetting a phone number. Medical expertise develops through years of pattern recognition, building what specialists call "clinical intuition" from thousands of cases. When these capabilities atrophy through disuse, they don't simply return when needed. The skills require constant practice and can take years to rebuild—if rebuilding is even possible.

One neurologist warns of creating "an abyss of lost clinical skills" where each generation becomes less capable than the last. Senior doctors who trained before AI retain foundational abilities, but physicians who learn with AI assistance from the start may never develop these crucial backup capabilities.

The World Health Organization identifies this as "de-skilling" that contradicts principles of human-centred AI development. Yet it's happening at scale across healthcare systems that mistake technological sophistication for institutional strength.

Efficiency masquerading as progress

Healthcare systems optimise for seductive but misleading metrics. They celebrate AI's immediate benefits—faster diagnoses, reduced errors, cost savings—whilst remaining blind to mounting long-term costs.

The WHO warns that AI implementation might justify employing "less-skilled staff" when technology appears to compensate for reduced human capabilities. This creates apparent economic progress that becomes expensive when systems fail or complex cases require expertise that no longer exists.

Hospitals inadvertently destroy their own intellectual capital. When radiologists lose the ability to interpret images without AI assistance, or when emergency physicians become dependent on algorithmic triage, institutions become fundamentally more fragile despite appearing more efficient.

The Brookings Institution identifies this as a broader risk where "widespread use of AI will result in decreased human knowledge and capacity over time." Healthcare appears uniquely vulnerable because medical expertise takes decades to develop but can erode within months of disuse.

The path not taken

Some institutions are beginning to recognise this threat and implement safeguards. Leading medical schools emphasise learning fundamental skills before introducing AI assistance. Radiology programmes increasingly require residents to demonstrate traditional interpretation competency before accessing algorithmic tools.

Aviation provides proven models: regular proficiency testing, mandatory manual skill practice, scenario-based training simulating automation failures, and clear protocols for human override. Healthcare could adopt similar approaches—requiring doctors to maintain diagnostic abilities independently, implementing regular assessments, creating AI-free training scenarios, and establishing robust human oversight protocols.

But these efforts remain scattered and voluntary. Most healthcare institutions continue deploying AI without systematic consideration of skill preservation, driven by competitive pressures and efficiency demands that prioritise immediate gains over long-term capability.

The moment of choice

Healthcare faces a critical decision disguised as inevitable progress. Current trajectories suggest a future where doctors become dependent on systems they cannot effectively oversee, lacking the expertise needed when technology fails—and it will fail.

The Polish colonoscopy study provides an early warning of consequences likely already widespread across specialties and institutions. Unlike software bugs or device malfunctions, eroded human expertise cannot be restored through updates or replacements. These capabilities, once lost, may be gone permanently.

The promise of AI in healthcare remains extraordinary: enhanced diagnostics, reduced errors, improved outcomes. But realising these benefits requires implementing technology that strengthens rather than undermines human capabilities. This demands regulatory frameworks addressing systemic effects, institutional policies preserving expertise as deliberately as they deploy new tools, and medical education that develops skills robust enough to survive extensive AI assistance.

The choice isn't between human expertise and artificial intelligence—it's between thoughtful implementation that enhances both, or reckless deployment that sacrifices irreplaceable human capabilities for temporary efficiency gains. Current evidence suggests healthcare is choosing the latter path, creating a crisis that may not become fully apparent until reversal is impossible.

The question is whether healthcare will learn from aviation's experience or repeat its most dangerous mistakes. The answer will determine whether AI makes medicine stronger or merely makes it seem so.

#artificial intelligence