Nearly Right

The algorithm will see you now

Britain's biased gamble on refugee children

At Dover's Western Jet Foil facility, where Channel crossings end and bureaucratic nightmares begin, twenty human stories reveal everything wrong with Britain's asylum system. Between January 2023 and January 2025, twenty people arrived exhausted from small boats, were assessed as adults and dispatched to Manston processing centre. All twenty were later sent back when officials realised their mistake: these were children.

Not borderline cases. Not administrative mix-ups. Children. Wrongly classified as adults, thrust into detention conditions designed for grown men, denied protections intended for minors fleeing violence. David Bolt, chief inspector of borders and immigration, called the entire age assessment system "haphazard" - a diplomatic term for institutional failure that was causing documented mental health damage to vulnerable young people.

Immigration minister Angela Eagle announced facial age estimation technology would be piloted as "the most cost-effective option" for disputed assessments. AI trained on "millions of images" could deliver rapid judgement on whether someone clears the crucial 18-year threshold. The technology offers a "simple means" to test human judgement against algorithmic estimates.

Simple. Cost-effective. Rapid. The language of technocratic efficiency applied to children's lives.

Scratch beneath the surface, and something more disturbing emerges. This represents the latest iteration of a pattern spanning decades: systematic embrace of approaches that disproportionately harm non-white populations whilst maintaining the veneer of neutral, progressive policy-making.

The bias in the machine

The technical evidence against facial age estimation for asylum seekers is damning. Research published in Scientific Reports demonstrates that AI age estimation shows a "large decrease in accuracy for faces of older adults compared to faces of younger adults" and significantly worse performance "for female compared to male faces". The implications for asylum assessment are stark.

MIT researcher Joy Buolamwini's groundbreaking Gender Shades study found that commercial facial analysis systems misclassify darker-skinned faces at error rates 11.8 to 19.2 percentage points higher than lighter-skinned faces, with errors ballooning to over 34% for darker-skinned women compared with just 0.8% for lighter-skinned men. The National Institute of Standards and Technology found facial recognition technologies falsely identify Black and Asian faces 10 to 100 times more often than white faces.

Consider the demographics of modern asylum. Africa, Asia, the Middle East dominate current displacement patterns. Women and children comprise significant proportions of those seeking protection. The technology being deployed performs worst on precisely these populations.

This isn't coincidental. Training datasets exhibit massive skews toward lighter-skinned individuals and men. Government benchmark datasets reflect what Buolamwini terms "power shadows" - societal exclusion embedded in data. The algorithms haven't learned to see faces; they've learned to see faces that resemble their creators.
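A toy simulation makes the mechanism concrete. The sketch below is purely illustrative - the groups, the single feature and the classifier are invented for this example, not drawn from any age estimation product - but it shows what happens when one group supplies 95% of the training data and a second group, whose labels flip at a different point on the same feature, supplies 5%.

```python
# Toy illustration of training-set skew, not the deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, feature_shift, label_boundary):
    """Synthetic 1-D 'faces': one feature, binary label (e.g. over/under 18)."""
    x = rng.normal(loc=feature_shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] > label_boundary).astype(int)
    return x, y

# Majority group dominates training (95%); the minority group's labels flip
# at a different point on the same feature, mimicking a population the model
# has barely seen.
x_maj, y_maj = make_group(9_500, feature_shift=0.0, label_boundary=0.0)
x_min, y_min = make_group(500, feature_shift=1.5, label_boundary=2.0)

model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Fresh test samples from each group expose the accuracy gap.
for name, shift, boundary in [("majority", 0.0, 0.0), ("minority", 1.5, 2.0)]:
    x_test, y_test = make_group(5_000, shift, boundary)
    print(f"{name} group accuracy: {model.score(x_test, y_test):.1%}")
```

Run it and the accuracy gap appears immediately: the model fits the majority and errs far more often on the minority. No malice required, only skewed data.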

A £40 billion foundation of failure

This AI deployment emerges from institutions with a spectacular track record of technological disaster. Over 43 years, the UK government has accumulated approximately £40 billion in IT-related failures, with systemic causes including "truth decay, excessive secrecy, no consequences for getting it wrong and the misleading of Parliament".

The £12.7 billion NHS National Programme for IT, established in 2002 to create centralised electronic patient records, was scrapped in 2011 after nine years of delays and stakeholder opposition. The Emergency Services Network for police, ambulance and fire services is currently running seven years late, £3.1 billion over budget, and may never work as originally intended.

Parliamentary committees continue to warn of "dysfunctional, damaging and sometimes dangerous" government IT, with "most complex, large-scale digital programmes" failing to deliver promised transformations. Yet the same institutional machinery now embraces AI deployment across critical public services.

The pattern reveals institutional learning failure. As technology analysts note, government culture defaults to "failure is not an option" mentality, favouring "long-term, sequential processes" that set "requirements in stone early" without "scope for change or margin for error". This produces exactly the rigid, over-promised, under-delivered disasters that have characterised four decades of public sector technology.

The Windrush precedent

The most illuminating precedent isn't technological but systemic. The Windrush scandal demonstrated how institutional processes could systematically harm black communities whilst maintaining the appearance of neutral policy-making. Historical analysis reveals that "major immigration legislation in 1962, 1968 and 1971 was designed to reduce the proportion of people living in the United Kingdom who did not have white skin".

Hundreds of Commonwealth citizens were wrongly detained, deported and denied legal rights because complex immigration law changes affected black people differently from other racial groups, yet officials failed to recognise these differential impacts. The Williams review found "inexcusable ignorance and thoughtlessness", concluding the mistreatment was "foreseeable and avoidable".

The parallels are precise. Just as post-war immigration legislation created complex legal frameworks that systematically disadvantaged black communities, AI age estimation creates algorithmic frameworks that systematically disadvantage the same demographics. Research on Windrush survivors documents "clear pathways of social upheaval" contributing to "symptoms associated with mental health conditions including depression and suicidal ideation".

The historical analysis concludes: "The Windrush Scandal was caused by a failure to recognise that changes in immigration and citizenship law in Britain since 1948 had affected black people in the UK differently than they had other racial and ethnic groups". Current AI deployment repeats this exact failure - implementing technology that affects non-white asylum seekers differently whilst ignoring documented evidence of bias.

Innovation theatre and accountability laundering

The timing of this deployment reveals telling patterns. Technology Secretary Peter Kyle recently signed strategic partnerships with OpenAI and Google, exploring AI deployment across justice, security and education, describing the need to work with "global companies which are innovating on a scale the British state cannot match".

Critics describe these as "non-binding" agreements lacking "legal enforcement mechanisms" with "few details about how the partnership will work in practice". Digital rights campaigners warn that the "treasure trove of public data" government holds "would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT".

Academic Megan Kirkwood warns the partnerships risk "placing [tech companies] beyond the reach of regulatory enforcement" whilst "further entrenching their market power". UCL's Wayne Holmes described the approach as "utter drivel and neoliberal nonsense", warning policymakers are "succumbing to the AI hype".

The pattern suggests "innovation theatre" - political gestures that demonstrate technological sophistication whilst transferring implementation risks to those with least political power. Asylum seekers cannot vote, cannot lobby effectively, and face immediate consequences from algorithmic errors. Tech companies gain valuable real-world deployment data whilst ministers gain credit for embracing cutting-edge technology.

The human mathematics

The stakes extend far beyond technological criticism. Research across European countries reveals thousands of children have been wrongfully classified as adults, spending months in adult camps and detention facilities. Systematic reviews document extraordinarily high rates of mental health problems in immigration detainees, with anxiety, depression and PTSD commonly reported, alongside self-harm and suicidal ideation.

Studies specifically examining unaccompanied refugee minors show PTSD rates ranging from 40% to 63%, and depression rates from 25% to 50%. Research consistently finds that "time in detention was positively associated with severity of distress".

The mathematics are cruel. If facial age estimation is deployed on 1,000 asylum seekers and a third of them are darker-skinned women - roughly 330 people - then at the 34% error rate documented for that group, over 100 could face misclassification. Children dispatched to adult facilities. Adults denied child protections. Each error represents a life trajectory altered by inherited algorithmic bias - technology that learned exclusion from training data that systematically excluded people who look like them.
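A rough calculation shows where the "over 100" figure comes from. The sketch below is back-of-envelope arithmetic only: the cohort shares are hypothetical, and the error rates are the figures cited earlier from facial analysis research - assuming they carry over to age estimation of asylum seekers is exactly that, an assumption.

```python
# Hypothetical cohort of 1,000 arrivals; shares are illustrative, not Home
# Office data. Error rates reuse the figures cited above from facial-analysis
# research; applying them to age estimation is an assumption.
cohort = {
    "darker-skinned women": 333,   # roughly a third of 1,000
    "darker-skinned men": 500,
    "lighter-skinned men": 167,
}
error_rates = {
    "darker-skinned women": 0.34,  # cited worst-case error rate
    "darker-skinned men": 0.15,    # illustrative, within the cited gap
    "lighter-skinned men": 0.008,  # cited best-case error rate
}

expected = {group: n * error_rates[group] for group, n in cohort.items()}
for group, errors in expected.items():
    print(f"{group}: ~{errors:.0f} expected misclassifications")
print(f"total: ~{sum(expected.values()):.0f} of 1,000")
```

Even with generous assumptions for the other groups, the burden of error lands overwhelmingly on those the training data saw least.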

The Refugee Council estimates at least 1,300 children have been incorrectly deemed adults over the past 18 months using current methods. Deploying demonstrably biased technology could multiply these errors whilst lending them the appearance of objective, scientific assessment.

The democracy deficit

Perhaps most troubling is the systematic avoidance of democratic accountability. The government's own Centre for Data Ethics and Innovation warned in 2020 that "clear and consistent understanding of how to do this well is lacking, leading to a risk that algorithmic technologies will entrench inequalities". Previous investigations found bias in policing algorithms, yet deployment proceeds with no sign that those lessons have been absorbed.

The CDEI specifically highlighted immigration as a sector requiring careful attention to bias risks, warning that "public policy can support the development of technologies and industries which allow us to benefit safely from automated decision systems" only with proper governance frameworks.

The facial age estimation deployment proceeds despite institutional knowledge of bias risks, expert warnings about tech company partnerships, and documentary evidence of both algorithmic discrimination and government IT failure patterns. This suggests democratic dysfunction: institutions continuing policies despite evidence of harm because the costs fall on populations with minimal political influence.

The pattern that persists

As the ACLU-Minnesota notes, "Law enforcement and the criminal justice system already disproportionately target and incarcerate people of colour. Using technology that has documented problems with correctly identifying people of colour is dangerous". The same logic applies to immigration enforcement.

The UK government is deploying biased technology on vulnerable populations whilst maintaining partnerships with tech companies that benefit from the deployment. This represents something more systematic than policy error: it constitutes a method for institutionalising discrimination whilst maintaining plausible deniability through technological complexity.

As researchers warn, "algorithms tend to reflect the biases of those who develop these systems – mainly white males from higher socio-economic brackets". When these systems are deployed on asylum seekers from predominantly non-white countries, the result is predictable: systematic discrimination laundered through mathematical complexity.

The facial age estimation deployment represents the latest iteration of a pattern spanning from post-war immigration legislation through the Windrush scandal to contemporary AI partnerships. Each iteration maintains the appearance of neutral, progressive policy-making whilst systematically disadvantaging the same demographic groups.

Until institutions acknowledge this pattern and implement accountability mechanisms that prioritise evidence over innovation theatre, the cycle will continue. The algorithm may see asylum seekers now, but it sees them through lenses ground by exclusion and prejudice. For children fleeing violence and seeking protection, that mechanical gaze proves as damaging as human bias - but infinitely harder to challenge.

#artificial intelligence #politics