
Britain elevates AI Security Institute CTO to direct Prime Ministerial advisory role

Jade Leung's appointment signals UK pivot from AI safety research to national security implementation

When Jade Leung walks into Downing Street for her first briefing as the Prime Minister's AI adviser, she'll carry with her something no previous government adviser has possessed: hands-on experience evaluating the world's most powerful artificial intelligence systems before they reach the public.

Leung's appointment as direct adviser to the Prime Minister creates an unprecedented channel between frontier AI assessment and executive power. The Chief Technology Officer of the AI Security Institute will now split her time between testing cutting-edge AI models for security risks and briefing the Prime Minister on what those tests reveal.

This isn't routine government reshuffling. It's recognition that AI governance has outgrown academic theory and industry self-regulation. When the AI Safety Institute rebranded itself as the "AI Security Institute" in February, focusing sharply on "national security implications," it signalled Britain's strategic pivot. Leung's elevation to Prime Ministerial adviser completes that transformation.

From academic theory to government practice

Leung's career reads like a roadmap of AI governance evolution. She began asking theoretical questions about AI safety as a Rhodes Scholar pursuing a DPhil at Oxford, co-founding the Centre for the Governance of AI when the field barely existed.

Then came the reality check. At OpenAI, where she served as Governance Lead, theory met practice. She helped develop safety standards and regulatory approaches as the company built GPT-4 and confronted the challenges of deploying increasingly powerful AI systems to millions of users worldwide.

Now she sits at the sharp end of government AI assessment. As CTO of the AI Security Institute, Leung personally oversees evaluations of the world's most advanced AI models before they reach public deployment. When OpenAI develops a new system, when Anthropic unveils Claude's latest capabilities, when Google DeepMind pushes boundaries—Leung's team tests them first.

Her progression mirrors AI governance itself: from academic speculation to industry implementation to government security assessment. The appointment suggests Britain recognises that AI oversight requires someone who's navigated every stage of this evolution.

Security over safety: the UK's strategic pivot

The change of a single word matters more than it might appear. When "AI Safety Institute" became "AI Security Institute" in February, Science Secretary Peter Kyle wasn't just updating letterhead. He was declaring Britain's position in a global competition for AI governance leadership.

Safety suggests caution, academic study, philosophical questions about distant risks. Security means immediate threats: cyber-attacks enabled by AI, chemical weapons designed by algorithms, fraud networks powered by machine learning. The Institute now partners directly with the Defence Science and Technology Laboratory and runs a criminal misuse team with the Home Office.

This strategic repositioning sets Britain apart. The EU builds comprehensive regulatory frameworks through its AI Office, implementing the AI Act's risk-based approach. The US convenes advisory committees and launches programmes under its National AI Initiative. Britain chooses a third path: treating AI governance as a national security issue requiring immediate, practical responses.

Leung's appointment operationalises this approach. Unlike advisory committees that write reports or regulatory bodies that enforce rules, she creates a real-time flow of intelligence from AI security assessment to Prime Ministerial decision-making. When her team discovers concerning capabilities in a new AI model, that intelligence can now reach the highest levels of government within hours, not months.

Implementation gaps that require executive intervention

The polite language of official announcements conceals a harsher reality. When the government promised "early or priority access" to frontier AI models for evaluation, AI companies delivered something far more limited. Instead of the deep technical access needed for thorough security assessment, they often provided little more than glorified chatbot interfaces.

This created an absurd situation: the government's AI Security Institute, tasked with protecting national security from AI risks, couldn't properly examine the very systems it was meant to evaluate. Companies retained control over when, how, and how thoroughly their models could be tested. Government evaluators found themselves in the position of security guards asked to inspect a building whilst the owners held all the keys.

Meanwhile, the government's own AI ambitions accelerated. The "Plan for Change" commits to deploying AI across public services, from "Humphrey" tools for civil servants to AI-powered NHS diagnostics. Each deployment raises security questions that require technical expertise to answer. Can these systems be manipulated? What data do they access? How do they make decisions that affect citizens' lives?

Leung's appointment signals recognition that these aren't departmental implementation challenges but strategic problems requiring Prime Ministerial authority to solve. Her OpenAI experience gives her first-hand insight into how AI companies actually operate, while her Security Institute role provides the government's view of what's needed. That combination is essential for bridging the persistent gap between what industry offers and what government requires.

Building technical capacity at the top of government

Traditional government advisory structures weren't designed for technologies that evolve monthly rather than annually. When AI systems gain new capabilities faster than departments can write guidance documents, standard policy processes become dangerously obsolete.

Consider the challenge: when Leung's team recently evaluated OpenAI's o1 model alongside their US counterparts, they discovered capabilities that could accelerate AI research itself—and found that existing safeguards could be "routinely circumvented." Such findings demand immediate strategic decisions, not lengthy consultation processes.

Leung's direct reporting line to both the Prime Minister and the Science Secretary creates something unprecedented in British government: technical AI expertise backed by executive authority. She doesn't just advise; she provides real-time technical intelligence that can trigger immediate policy responses.

This matters because AI governance increasingly demands swift decisions about rapidly evolving technologies. When a new AI model demonstrates concerning capabilities, governments face a narrow window for response before widespread deployment makes intervention far more difficult. Leung's appointment is designed to put technical assessment and executive decision-making on the same timescale as AI development itself.

International implications and competitive positioning

Leung's appointment sends a clear signal to international competitors: Britain intends to win the AI governance race by embedding technical expertise directly into executive decision-making. While other nations debate AI regulation in committees and conferences, the UK creates institutional architecture for immediate action.

The AI Security Institute's partnerships with the US AI Safety Institute and its participation in the International Network of AI Safety Institutes provide technical intelligence sharing that extends British influence. When Leung evaluates a frontier model, her findings inform not only UK policy but international discussions about AI risk assessment. Her government position amplifies Britain's voice in global AI governance conversations.

This competitive positioning matters because AI governance increasingly determines economic and security advantages. Nations that can assess AI risks accurately and respond quickly will shape global AI development. Those that lag behind will find themselves implementing standards set by others.

Britain's model—direct technical advisory access combined with security-focused implementation—offers a template that other countries are likely to examine closely. The success or failure of Leung's role in navigating AI governance challenges whilst maintaining innovation leadership will influence how governments worldwide integrate technical expertise into policy development.

For Britain, the stakes justify this institutional innovation. At a time when AI capabilities advance monthly and security threats evolve daily, the luxury of slow deliberation no longer exists. Leung's appointment represents confidence that combining technical understanding with executive authority can deliver effective AI governance when speed and accuracy both matter.

The elevation of an AI Security Institute technologist to Prime Ministerial adviser marks a decisive moment in British governance: the recognition that artificial intelligence is too important, too complex, and too fast-moving to be managed through traditional policy structures. Whether this institutional innovation succeeds in balancing innovation with security will determine not only Britain's AI future, but potentially the model for democratic AI governance itself.

#artificial intelligence #politics