Nearly Right

Britain's cybersecurity experts cannot see the threats coming

Government studies reveal a cybersecurity establishment struggling with prediction, hype, and dangerous knowledge gaps

Britain's cybersecurity establishment is failing on three critical fronts simultaneously, according to explosive new government research that exposes dangerous weaknesses at the heart of national defence planning.

The studies [1, 2, 3], published by the Department for Science, Innovation and Technology, reveal a cybersecurity sector that cannot predict future threats, remains deeply sceptical of artificial intelligence despite industry hype, and is deploying life-critical technologies faster than it understands their vulnerabilities. Together, these findings paint a disturbing picture of a nation stumbling blind through an increasingly dangerous digital landscape.

The most startling revelation: the very experts trusted to protect Britain from cyber attacks consistently failed to identify emerging threats beyond those already catalogued in government documents. When Royal Holloway researchers asked 20 cybersecurity specialists to predict future technology convergences, participants "struggled, and were often reluctant, to articulate novel technology convergences outside of those previously identified by others."

Meanwhile, the professionals whose job involves breaking into systems deliver a reality check that contradicts billions in AI investment. "People have invested so much in AI, they just rammed it down our throats," one red team expert told government researchers. "It is useful. But that's it. It's just useful. It's not the world's changing."

Most alarming of all: Britain is rolling out sophisticated technologies across healthcare, defence, and critical infrastructure while researchers struggle to comprehend their security implications. Some emerging technology combinations have generated zero research papers on cybersecurity risks, even as they're deployed in hospitals and military applications.

The convergence of these findings suggests Britain faces a cybersecurity crisis hiding in plain sight—one where expert prediction fails, technological solutions disappoint, and dangerous knowledge gaps threaten national security.

The prediction paradox

At cybersecurity's core lies a fundamental contradiction: the people best qualified to anticipate future threats cannot see them coming. The Royal Holloway study reveals that domain experts consistently defaulted to existing strategic frameworks—NATO technology lists, EU critical areas, UK quantum strategies—rather than conducting independent threat assessment.

This isn't mere academic caution. When asked to consider how different technologies might converge, not one participant identified combinations involving three or more technology types. Their mental models for complex threat scenarios proved fundamentally limited, despite deep expertise in individual domains.

The January 2025 emergence of DeepSeek's large language model during the research perfectly illustrated this predictive paralysis. Participants cited the unexpected development as evidence that futures-oriented thinking is futile: a self-defeating logic that abandons the very task cybersecurity requires.

The smartphone provides the most revealing example of expert blind spots. Six participants used smartphones to illustrate technology convergence, noting how cybersecurity implications emerged not from combining cameras, computers, and telecommunications hardware, but from the social behaviours, economic models, and usage patterns that developed around the device.

"Smart phones represent a convergence that created a new technology ecology that radically reconfigured cyber security but one that would have been difficult to manage in advance," the researchers concluded. If experts couldn't foresee smartphone implications—arguably the most transformative technological convergence of recent decades—what confidence should we have in their ability to predict the next one?

The AI reality gap

While boardrooms bet billions on artificial intelligence transformation, cybersecurity professionals who actually test systems deliver a withering assessment of AI capabilities. Their scepticism carries particular weight because they're incentivised to use the most effective tools available—failure means clients discover vulnerabilities from actual attackers rather than friendly experts.

Red team practitioners acknowledge AI's effectiveness in one narrow area: creating sophisticated phishing attacks. The technology excels at generating "well-written, highly personalised emails" and eliminating language barriers that previously limited social engineering campaigns.

But this limited success reveals AI's fundamental constraint. While effective at manipulating language and exploiting human psychology, the technology struggles with technical system penetration—the actual business of cybersecurity. The limitation stems from AI's reliance on public data. Since sophisticated attack techniques remain unpublished, AI systems lack the knowledge base needed for genuine innovation.

"Most models do not have access to data showing how attacks are getting identified and prevented," the research notes. Without understanding defensive capabilities, AI cannot develop truly effective offensive techniques.

Perhaps most tellingly, cloud migration—not AI—has had the greatest practical impact on cybersecurity operations. The shift to cloud-based infrastructure, accelerated by COVID-19, forced firms to completely reimagine their methodologies. While AI dominates technology conferences, this prosaic change fundamentally altered how cybersecurity actually works.

The contrast suggests that incremental technological shifts often matter more than revolutionary ones—a lesson with implications far beyond cybersecurity.

The deployment danger

Most dangerous of all, Britain is implementing sophisticated technology systems while critical security research lags years behind. The government's comprehensive study of eleven technology convergence pairs discovered a troubling pattern: deployment racing ahead of understanding across healthcare, defence, and critical infrastructure.

The numbers tell a sobering story. Researchers identified academic papers on AI-powered personalised medicine, a field now affecting millions of patients, yet concluded that "none were directly focused on the specific impact of this pairing on cyber security." For brain-computer interfaces paired with robotics, already being tested in military applications, security research remains woefully inadequate.

Most shocking: swarm technology combined with neurotechnology generated zero research papers, despite development for both civilian and military use.

Healthcare emerges as the most vulnerable sector, where cyber attacks don't just steal data—they threaten lives. The convergence of AI, IoT, and personalised medicine creates systems where successful attacks could manipulate treatment plans, compromise diagnostic accuracy, or shut down life-support equipment.

Brain-computer interfaces present perhaps the most chilling vulnerability. These systems, designed to help paralysed patients control devices through thought alone, could expose "private thoughts and emotions" to cyber attackers. The government report warns that "visual stimuli shown to a user can increase the likelihood of private information being shared through brain activity."

Real-world examples demonstrate the stakes. In 2021, vulnerabilities discovered in IoT-connected hospital infusion pumps could have allowed attackers to remotely "alter dosages" delivered to critically ill patients. It's exactly the type of convergence risk that remains inadequately studied.

Meanwhile, adversaries are paying attention. Chinese research institutions have developed world-class capabilities for attacking the very brain-computer interfaces that Britain is deploying in military and medical applications.

The three-way failure

These three failures—prediction, AI assessment, and research gaps—interact in dangerous ways. Experts who cannot predict future threats fall back on institutional frameworks that may miss novel convergences. AI tools hyped as revolutionary prove limited in practice, creating false confidence in automated solutions. Most critically, deployment proceeds without adequate security research, creating vulnerabilities that neither human experts nor AI systems can properly assess.

The cybersecurity professionals' measured scepticism about AI capabilities becomes more significant in this context. Their resistance to vendor promises reflects hard-earned knowledge about the gap between marketing claims and operational reality. When one practitioner warns that AI enables people to "bluff knowledge in areas they lack experience," it highlights how technological solutions can create dangerous illusions of competence.

This proves particularly problematic in cybersecurity, where inexperienced practitioners may lack the judgement needed to challenge poor security practices. "You'll have people who don't have the years behind them to push back when a client says we're not doing that," the research notes.

The market dynamics exacerbate these problems. The £168 billion global cybersecurity market faces intense competitive pressure, with vendors competing on promises of revolutionary capability regardless of actual performance. When buyers struggle to evaluate competing claims, the result is fertile ground for overstated marketing.

The strategic implications

Britain's cybersecurity challenges reflect broader problems in how advanced democracies approach emerging technology governance. Policy responses remain fundamentally reactive, driven by "hype cycles" rather than systematic analysis. The National Cyber Security Centre has grown increasingly frustrated with this dynamic, with leaders openly calling for more political attention as regulation fails to keep pace with technological change.

Parliamentary evidence supports these concerns. The government's Cyber Security and Resilience Bill faces continued delays, ransomware response consultations were postponed by electoral politics, and promised cybersecurity measures remain stalled in Westminster's legislative machinery.

These delays compound the expert prediction problem. Cybersecurity practitioners remain focused on "fighting current fires rather than future fires" because resources are consumed responding to immediate threats. This reactive posture ensures emerging convergences receive attention only after malicious actors have exploited them.

The international dimension adds urgency. China has developed sophisticated capabilities for attacking brain-computer interfaces while America's DARPA funds dual-use research in these technologies. The emerging Space-Air-Ground Integrated Networks will require international cooperation to secure, yet current protection approaches remain inadequate.

Beyond prediction and hype

The government research suggests Britain needs fundamental reorientation rather than better forecasting. The smartphone transformed cybersecurity through social and economic innovations that consistently surprised domain experts. The next transformative convergence will likely prove equally unpredictable.

Rather than perfecting prediction mechanisms or trusting AI marketing, Britain might need to acknowledge expert limitations while building systems robust enough to respond effectively to unexpected convergences. This means investing in defensive capabilities, maintaining diverse expert networks, and developing institutional agility rather than attempting comprehensive threat prediction.

The cybersecurity professionals' approach offers instructive lessons: test rigorously, remain sceptical of vendor claims, and remember that sophisticated assessment often comes from those who've learned to trust nothing until they've tried to break it themselves.

Most fundamentally, security must be embedded into system design from the beginning, not bolted on afterwards. For AI systems, this means implementing adversarial training techniques and explainable AI mechanisms. For healthcare applications, it requires zero-trust architectures that continuously validate every device and user.
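
To make the zero-trust idea concrete, here is a minimal sketch in Python. It is not drawn from any of the government studies, and names such as DEVICE_KEYS and infusion-pump-07 are invented for illustration. The point is the discipline it encodes: every request is re-authenticated, integrity-checked, and checked for freshness, and nothing is trusted simply because of where on the network it originated.

```python
import hashlib
import hmac
import time
from dataclasses import dataclass

# Hypothetical registry of enrolled devices and their shared secrets.
# A real deployment would back this with a hardware-rooted identity
# store, not an in-memory dict.
DEVICE_KEYS = {"infusion-pump-07": b"example-shared-secret"}

@dataclass
class Request:
    device_id: str
    user_token: str
    timestamp: float
    payload: bytes
    signature: str  # hex-encoded HMAC over timestamp + payload

def verify_user(token: str) -> bool:
    # Placeholder: a real system would validate a short-lived,
    # centrally issued credential on every single request.
    return token == "valid-session-token"

def authorize(request: Request, max_age_seconds: int = 30) -> bool:
    """Re-validate identity, integrity, and freshness on every call:
    no request is trusted because of where it came from."""
    key = DEVICE_KEYS.get(request.device_id)
    if key is None:
        return False  # unknown device: deny by default
    if time.time() - request.timestamp > max_age_seconds:
        return False  # stale request: reject replays
    expected = hmac.new(
        key,
        f"{request.timestamp}".encode() + request.payload,
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected, request.signature):
        return False  # payload was tampered with in transit
    return verify_user(request.user_token)

if __name__ == "__main__":
    ts = time.time()
    payload = b'{"rate_ml_per_hour": 5}'
    sig = hmac.new(DEVICE_KEYS["infusion-pump-07"],
                   f"{ts}".encode() + payload,
                   hashlib.sha256).hexdigest()
    req = Request("infusion-pump-07", "valid-session-token", ts, payload, sig)
    print(authorize(req))  # True: identity, integrity, and freshness all pass
```

The design choice worth noting is deny-by-default: an unknown device, a stale timestamp, or a failed signature check each ends the request. That is precisely the posture the infusion-pump incident above argues for.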

The choice facing Britain is stark: acknowledge these interconnected failures and restructure cybersecurity approaches accordingly, or continue stumbling through an increasingly dangerous digital landscape where expert prediction fails, technological promises disappoint, and critical vulnerabilities remain undiscovered until adversaries exploit them.

The evidence suggests Britain's cybersecurity establishment needs not just better tools or sharper predictions, but fundamental changes in how it approaches the relationship between technological innovation and security understanding. The alternative—continuing current approaches while deployment accelerates and threats multiply—represents a strategic vulnerability that adversaries won't ignore.

  [1] Securing converged technologies: insights from subject matter experts

  [2] Commercial offensive cyber capabilities: red team subsector focus

  [3] Emerging technologies and their effect on cyber security

#cybersecurity #politics