Britain requires privacy tools to eliminate privacy as age verification breaches multiply
The House of Lords votes to extend identity checks to VPNs, completing a surveillance architecture that child safety experts increasingly oppose
A virtual private network exists to ensure that nobody knows who you are or what you're doing online. That is not a side benefit. It is the entire purpose of the technology. So when Britain's House of Lords voted on 21 January to require VPN providers to verify every user's identity, peers were not merely extending child protection to another service category. They were demanding that privacy tools destroy privacy as a condition of operating legally.
Amendment 92 passed 207 votes to 159. A companion measure requiring social media platforms to block users under sixteen passed more decisively still, 261 to 150. Both await House of Commons approval. But the Lords' comfortable majorities signal something beyond a procedural vote on children's legislation. Britain is methodically constructing an architecture of digital identity verification that will reshape the relationship between citizens and the internet—and doing so in language carefully chosen to make opposition sound like indifference to child safety.
The ratchet turns
Six months ago, the Online Safety Act's age verification requirements took effect. Platforms hosting adult content or material related to self-harm were required to implement what regulator Ofcom calls "highly effective age assurance"—in practice, uploading government identification or submitting to facial scanning by third-party verification services.
The public response was immediate. Within hours, Proton VPN reported signups surging 1,400 per cent. NordVPN confirmed a 1,000 per cent spike. Ofcom data shows daily VPN users peaked at 1.4 million in August, more than double the pre-legislation baseline. Citizens were not embracing the new regime. They were fleeing it.
This circumvention is precisely what Amendment 92 targets. Lord Nash, the Conservative peer who sponsored the measure, stated the reasoning in the Lords debate with admirable clarity. Children use VPNs to evade age restrictions for gambling and pornography. The circumvention tool must therefore be brought within the regulatory framework. VPN providers marketing to UK users, or serving "a significant number" of British customers, must implement age assurance before providing service.
The logic has a surface plausibility that evaporates on contact with reality. A determined teenager could use a parent's payment details, borrow credentials from an older friend, or subscribe to services based in jurisdictions with no interest in British law. The amendment captures consumer VPN providers who might plausibly comply with UK regulations. It cannot touch peer-to-peer networks, self-hosted servers, or offshore operators that technically literate users already know how to reach. What it can do—what it will do—is create vast new databases linking citizens' identities to their browsing habits.
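The triviality of self-hosting is worth seeing concretely. The sketch below is a generic TCP relay in Python, not any particular product: an illustration, under loose assumptions, of how little code turns a rented server abroad into a forwarding point that consumer-facing VPN regulation cannot reach.

```python
import socket
import threading

# Illustrative only: a bare-bones TCP relay. Anyone with a rented
# server in another jurisdiction can run something like this, which
# is why rules aimed at consumer VPN providers cannot reach
# self-hosted circumvention.

def _pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until the connection closes."""
    while data := src.recv(4096):
        dst.sendall(data)

def run_relay(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept local connections and forward each one to the target."""
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen()
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        # One thread per direction; daemon threads die with the process.
        threading.Thread(target=_pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=_pump, args=(upstream, client), daemon=True).start()
```

A real deployment would add encryption, but the structural point stands: the relay is a few dozen lines, and it lives wherever its operator chooses.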
The inevitable breach
The security record of such databases should alarm anyone paying attention.
In October 2025, Discord confirmed that hackers had potentially accessed government identification documents belonging to 70,000 users worldwide. The breach occurred through a third-party verification service engaged specifically to comply with the Online Safety Act. Discord implemented identity checks to satisfy British law. The breach exposed users who had complied with that law.
Three months earlier, the Tea app—a service allowing women to anonymously share safety information about men they date—suffered an intrusion exposing 72,000 images. Among them were 13,000 selfies and government IDs, many of which subsequently appeared on 4chan. The app required identity verification as a trust mechanism. That requirement created the database that was stolen.
In 2024, 404 Media reported that AU10TIX, a verification company used by X and TikTok, had left administrative credentials exposed online for more than a year. Names, dates of birth, nationalities, identification numbers, document images—all potentially accessible to anyone who found the exposed credentials. AU10TIX is exactly the sort of third-party service that platforms engage to satisfy age verification mandates.
Jan Jonsson, chief executive of the Swedish VPN provider Mullvad, has offered a useful analogy. A bartender glances at identification to confirm someone is eighteen, then forgets who they are. Digital systems should work the same way—confirming age without knowing or storing identity. Instead, governments pressure private companies to build centralised databases across networks of third parties, any of which might be compromised. The question is not whether these databases will be breached. The question is when, and how catastrophically.
The familiar trajectory
Britain's path follows a pattern visible elsewhere. China's 2017 Cybersecurity Law required internet providers to verify users' real names and store that information domestically. The stated purpose was security and prevention of harmful content—language that would not sound foreign in Westminster. VPNs were not banned outright; Chinese businesses need them. But providers were required to obtain government licensing, creating a registry of who used circumvention tools and why.
The parallels are inexact. Britain is not China. Democratic institutions, judicial oversight and press freedom constrain state power in ways that matter. But the infrastructure being constructed is remarkably similar in function. Real-name registration tied to internet access. Age verification requiring identity documents. VPN usage subject to government oversight. Each measure advances under its own justification. The cumulative effect is an internet where anonymous participation becomes progressively impossible.
Xiao Qiang, a researcher studying internet freedom at the University of California, Berkeley, has described China's identification requirements as infrastructure for "digital totalitarianism"—systems capable of removing voices from the internet, not merely surveilling them. Britain is nowhere near that point. But infrastructure outlasts intentions. The systems being built would serve such purposes should any future government choose to deploy them.
The genuine problem
The case for age verification is not invented. According to Ofcom research, eight per cent of British children aged eight to fourteen access online pornography monthly. Among boys, the figure approaches one in five. Social media platforms have been implicated in mental health crises and, in tragic cases, in suicides. Algorithmic amplification of self-harm content to vulnerable teenagers represents a genuine policy failure that leaves parents feeling helpless.
The Children's Wellbeing and Schools Bill attracted support from Hugh Grant, Peter Andre, and—most consequentially—Esther Ghey, whose daughter Brianna was murdered in 2023 by teenagers who had consumed disturbing online content. Ghey has spoken of her daughter's social media addiction and the constant fear about who Brianna might be communicating with online. These concerns are real. The grief is real.
Yet expertise on these harms does not automatically endorse the proposed solutions. Ian Russell, whose daughter Molly died by suicide after Pinterest's algorithms fed her self-harm content, spoke publicly during the Lords debate. He and numerous children's charities oppose blanket social media bans for under-sixteens, he explained, because they fear unintended consequences. Bans push young people toward less regulated platforms—gaming services, the dark web. Children turn to VPNs, the very route Lord Nash's amendment attempts to close, yet age-gating those tools may prove "extremely problematic."
The concern is not that children deserve unrestricted internet access. The concern is that identity verification requirements push users toward shadier alternatives while building surveillance infrastructure affecting everyone. More than 550,000 people signed petitions demanding the Online Safety Act's repeal—among the largest public responses to any UK digital law. The government held a parliamentary debate in December, then immediately rejected any changes.
The permanent architecture
The amendments create requirements, not yet capabilities. Amendment 92 directs the Secretary of State to make regulations within twelve months; it specifies neither methods nor standards. Ofcom will produce guidance. Industry will respond. Some providers will comply, some will relocate, some will ignore British law entirely.
But direction matters more than any single measure. Each iteration extends identity verification to new categories. The Online Safety Act covered platforms hosting harmful content. The Children's Wellbeing Bill extends to tools circumventing that coverage. Future legislation might address cloud services, virtual servers, any technical pathway around the restrictions. Each extension spawns new databases, new third-party relationships, new breach surfaces.
The stated goal—protecting children from harmful content—could be achieved without national identity infrastructure. Device-level parental controls, already available on major operating systems, restrict content without centralised databases. Default ISP filtering, which British mobile networks already implement, provides another layer. Digital literacy education addresses the problem at its source rather than attempting to wall off the internet.
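A filtering layer of this kind needs no identity database at all. As a hypothetical sketch (the blocklist entries are invented examples, not a real filtering list), a DNS-level filter decides per domain, never per person:

```python
# Minimal sketch of how an ISP- or router-level DNS filter decides
# whether to resolve a hostname. It never needs to know who asked,
# only what was asked for. Blocklist entries are invented examples.

BLOCKLIST = {"adult-example.com", "gambling-example.net"}

def is_blocked(hostname: str) -> bool:
    """Block a hostname if it, or any parent domain, is on the list."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "sub.adult-example.com", then "adult-example.com", and so on.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False
```

The design choice matters: the decision is a property of the domain being requested, so no record linking a citizen's identity to a browsing habit ever needs to exist.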
These approaches lack political appeal. They place responsibility on parents rather than platforms. They generate neither dramatic votes nor satisfying headlines about holding technology companies accountable. But they also avoid creating infrastructure that outlasts any particular child safety concern—infrastructure available for whatever purposes future governments might identify.
The House of Commons will now decide whether to accept what the Lords have passed. Technical communities will continue developing circumvention tools, as they always do when governments attempt to constrain internet access. Some parents will welcome any measure promising to protect their children. Some children will find workarounds within hours of implementation.
What persists, regardless of these amendments' fate, is the trajectory they represent. Britain is building an internet where participation requires identification, where privacy tools must eliminate privacy to operate legally, where each circumvention triggers further regulatory extension. The infrastructure being constructed will outlast every politician who voted for it, every child safety campaign that justified it, every breach that exposes its dangers.
The question worth asking is not whether any individual measure protects children. It is what kind of internet emerges when the construction is complete—and who, ultimately, it protects.