Nearly Right

Britain's chess prodigy steers the world's most consequential technology revolution

How Britain's former child chess prodigy accidentally became one of the world's most consequential decision-makers

The scene was undeniably surreal: a 49-year-old mixed-race Londoner, dressed entirely in black, receiving a Nobel Prize in Chemistry from the King of Sweden whilst wearing two watches - one smart, one analogue. Demis Hassabis admits he's "really bad at enjoying the moment", but this particular moment was "really special. It's something you dream about as a kid". Here was someone who learned chess at four, coded bestselling video games as a teenager, and now leads Google DeepMind - arguably the most important artificial intelligence laboratory on Earth. Yet hours after the ceremony, he was back to checking Liverpool FC's fixtures and planning poker nights with Magnus Carlsen.

This disconnect between the ordinary and the extraordinary captures something essential about our technological moment. We may be living through the final years before artificial general intelligence, which Hassabis predicts could arrive within five to ten years and bring a transformation "10 times bigger than the Industrial Revolution and maybe 10 times faster". Yet the key decisions about humanity's technological future are being made by individuals whose expertise lies not in political philosophy or social organisation, but in chess tactics, game design, and protein folding.

Hassabis embodies what might be called the accidental philosopher-king problem. Through a combination of exceptional talent, fortunate timing, and competitive pressures, a relatively small group of technologists have found themselves wielding unprecedented civilisational influence. The question is whether strategic brilliance in narrow domains translates into wisdom about humanity's broader trajectory.

The strategic mind in formation

The foundations were laid early. Between the ages of four and thirteen, Hassabis played competitive chess for England's junior teams, reaching Master standard with a 2300 Elo rating by thirteen - at the time the second-highest-rated Under-14 player in the world, behind only the legendary Judit Polgár. "When you do that at such a young age, it's very formative for the way your brain works," he reflects. "A lot of the way I think is influenced by strategic thinking from chess, and dealing with pressure."

This chess-trained mind shaped everything that followed. At eight, he bought his first computer with chess prize money. At seventeen, working at Bullfrog Productions, he coded Theme Park - a simulation game where players managed virtual amusement parks, complete with decision trees about ride placement, staff hiring, and visitor satisfaction. The game sold millions of copies and spawned an entire genre of management simulations.

There's something revealing about this progression from chess to game design to AI development. Each domain involves optimising systems, thinking several moves ahead, and finding patterns that others miss. But there's also something unsettling about applying chess-style strategic thinking to real-world systems. Chess is a perfect information game with clearly defined rules, predictable consequences, and the possibility of starting over. The decisions Hassabis makes now about AI development operate in a world of radical uncertainty, unintended consequences, and irreversible moves.

The acceleration nobody wanted

Perhaps the most illuminating aspect of Hassabis's story is his admission of reluctance about AI's rapid public deployment. "If I'd had my way, we would have left it in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that," he says. This wasn't his preferred timeline, but competitive pressures made the choice for him.

The turning point came in November 2022 with OpenAI's release of ChatGPT, which "caught big tech off guard, especially Google". Despite DeepMind's groundbreaking work on AlphaGo, which defeated Go world champion Lee Sedol in 2016, and AlphaFold, which solved protein structure prediction, ChatGPT's overnight success forced a fundamental recalibration. Within months, Google had merged its elite AI teams - DeepMind and Google Brain - to accelerate development in response to OpenAI's momentum.

This reveals what might be called the competitive acceleration paradox: even the most thoughtful AI leaders wanted to proceed more cautiously, but systemic pressures overrode individual preferences. The dynamic appears in Hassabis's recollection of meeting Elon Musk in 2012 at SpaceX's California factory. When Musk explained his priority was getting to Mars "as a backup planet, in case something went wrong here", Hassabis pointed out a flaw in the plan: "What if AI was the thing that went wrong? Then being on Mars wouldn't help you, because if we got there, it would obviously be easy for an AI to get there."

Musk "sat there for a minute without saying anything, just sort of thinking", then became an investor in DeepMind. Yet both men ended up as competitors in the race to develop the technology they privately worried about. When Google acquired DeepMind for £400 million in 2014, Musk switched to backing rival startup OpenAI. The very people most aware of AI's risks found themselves accelerating its development through competitive dynamics.

Ordinary humanity, extraordinary power

What makes Hassabis's position particularly striking is how recognisably human he remains despite wielding world-historical influence. He maintains a Liverpool FC season ticket, managing to attend "six, seven games a year" when his schedule permits. He plays online chess as mental exercise, describing it as "a bit like going to the gym, for the mind". His most enjoyable times are "playing games, board games" with his teenage sons - and he doesn't let them win.

This ordinariness in the face of extraordinary responsibility illuminates how technological revolutions actually unfold. We tend to mythologise figures like Thomas Edison or Andrew Carnegie as visionary giants, but they were recognisably human individuals thrust into pivotal roles by circumstances largely beyond their control. Edison spent the 1880s engaged in the "Battle of the Systems" - a fierce competition between his direct current electrical system and George Westinghouse's alternating current technology that would determine how the world received electric power. The outcome wasn't decided by pure technical merit, but by financial backing, strategic alliances, and marketing campaigns.

Similarly, Carnegie's dominance of American steel production resulted from his adoption of the Bessemer process and vertical integration strategies, but his empire was ultimately shaped by competitive forces beyond any individual's control. These men didn't set out to reshape civilisation; they were optimising specific technical problems whilst competitive pressures scaled their innovations beyond their original intentions.

The vision and its limits

Hassabis paints a compelling picture of AI's potential benefits: "radical abundance" through medical advances, room-temperature superconductors, nuclear fusion breakthroughs. His vision encompasses travelling to the stars and solving humanity's greatest challenges. "Assuming we steward it safely and responsibly into the world," he says, "we should be in a world of what I sometimes call radical abundance... where things don't have to be zero sum."

But when pressed on implementation - how this abundance gets distributed, what happens to employment, who makes these crucial decisions - his responses reveal the limitations of technical expertise applied to social questions. "That's going to be one of the biggest things we're gonna have to figure out," he acknowledges about potential mass unemployment. "Let's say we get radical abundance, and we distribute that in a good way, what happens next?"

These are profound political and philosophical questions, yet they're being shaped by decisions made in corporate laboratories by individuals selected for their technical brilliance rather than their wisdom about human society. Hassabis recognises this limitation, suggesting we need "great philosophers, but also economists to think about what the world should look like when something like this comes along. What is purpose? What is meaning?"

The competitive machine

The most sobering aspect of Hassabis's story may be how little control even he has over the pace and direction of AI development. DeepMind has become "the engine room of Google", as he puts it, with AI being built into every corner of the company's business. The competitive pressure is relentless - Meta is reportedly dangling $100 million pay packages to lure top researchers, whilst Microsoft recently poached more than twenty engineers from DeepMind.

Mustafa Suleyman, who co-founded DeepMind with Hassabis in 2010, left in 2019 and now heads Microsoft AI. This competitive dynamic creates what industry analysts describe as a "triathlon" between foundation model development, customer acquisition, and infrastructure building. Companies are judged not just on technical capabilities, but on their ability to attract users and build the computational infrastructure necessary to train ever-larger AI systems. The result is a system where thoughtful deliberation becomes a competitive disadvantage.

The philosopher-king problem

We return to the fundamental paradox embodied by Hassabis: someone optimised for strategic perfection navigating a world of radical uncertainty. The chess training that gave him pattern recognition and forward planning also taught him that problems have optimal solutions and that rational analysis leads to correct decisions. But the challenges posed by transformative AI operate according to different principles.

There may be no optimal solution to questions about how quickly to develop artificial general intelligence, how to distribute its benefits, or how to maintain human agency in a world of increasingly capable machines. These are not technical problems with algorithmic solutions, but fundamental questions about what kind of society we want to live in and what we value most about human existence.

Consider energy consumption. Hassabis acknowledges that AI systems will require enormous amounts of electricity and water, but argues that "the amount we're going to get back, even just narrowly for climate [solutions] from these models, it's going to far outweigh the energy costs." This is classic strategic thinking - weighing costs against benefits and concluding the trade-off is worthwhile. But it assumes that AI will solve climate problems quickly enough to justify the immediate environmental costs, and that the benefits will reach those who bear them.

The choices ahead

What emerges from Hassabis's story is how accidental our current situation really is. There was no grand plan that led a chess-playing teenage game designer to become one of the world's most influential decision-makers. Instead, a series of individual choices - studying neuroscience to understand intelligence, founding DeepMind to "solve intelligence and then use it to solve everything else", Google's decision to acquire the company - created a path that nobody fully anticipated.

This accidental quality extends to the broader AI revolution. The timing of breakthroughs, the competitive dynamics between companies, the particular individuals who happen to be in key positions - all of these factors are shaping outcomes that will affect billions of lives for generations. Yet there's surprisingly little systematic thinking about whether these are the right people making these decisions, or whether we've designed appropriate institutions to guide such momentous changes.

Hassabis describes himself as a "cautious optimist" who believes "in human ingenuity" and thinks "we'll get this right" because "humans are infinitely adaptable". But there's a crucial difference between adaptation and active participation in shaping change. The Industrial Revolution transformed human society, but the people experiencing that transformation had little say in its direction or pace.

The chess master's dilemma ultimately belongs to all of us: how to navigate a transformation that may be ten times bigger and faster than the Industrial Revolution, guided by individuals whose expertise lies in games rather than governance. We need new institutions and decision-making processes that complement technical expertise with other forms of wisdom. We need philosophers and social scientists involved in AI development as equal partners, not afterthoughts. Most importantly, we need to recognise that the current concentration of power in technologists represents choices we can still change.

The teenager who coded Theme Park and the Nobel laureate shaping humanity's AI future are the same person, pursuing the same fascination with complex systems and strategic thinking. The question is whether the rest of us will engage with the civilisational choices he and his peers are making on our behalf, or simply trust that genius in narrow domains somehow translates into wisdom about the human condition. The next moves, quite literally, will determine the world we inhabit for generations to come.

#artificial intelligence