How the technology industry chose convenience over user empowerment and created digital dependence
The failure of semantic web initiatives and structured computing has left users reliant on AI systems they cannot understand or control
Try this query in Google: "What animal is featured on a flag of a country where the first small British colony was established in the same year that Sweden's King Gustav IV Adolf declared war on France?" You'll get a jumble of unrelated results. ChatGPT, however, delivers the answer in seconds: the parrot on Dominica's flag, connecting the British colony established there in 1805 with Sweden's declaration of war on France that same year.
This comparison, circulating in technology circles, is being hailed as proof of AI's superiority. But the real story isn't about artificial intelligence succeeding—it's about decades of failure in computing design. The fact that we need billion-parameter neural networks to answer structured queries reveals how completely we abandoned approaches that could have made such questions simple to resolve.
Modern AI isn't a triumph of elegant engineering—it's an expensive workaround for fundamental failures in personal computing. And whilst it solves immediate problems, it creates a deeper crisis: users who can operate sophisticated systems but cannot understand, control, or improve them.
When the web's inventor got it wrong
Tim Berners-Lee had a bold vision for his creation. In 2001, the web's inventor published a manifesto promising that his network would evolve beyond documents into "semantically structured, linked, machine-readable data."
The idea was elegant: instead of leaving computers to guess what information meant, we would explicitly encode meaning into web content. Using standards like the Resource Description Framework (RDF) and the Web Ontology Language (OWL), every piece of information would carry semantic tags explaining its significance. A query about flags and historical dates would be answerable by any basic algorithm, because the relationships—between countries, colonies, and events—would be explicitly defined and machine-readable.
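To see why, consider a toy sketch (not the real RDF/SPARQL stack, and with illustrative rather than authoritative facts): once relationships are stored as explicit subject–predicate–object triples, the "impossible" flag query collapses into a couple of simple lookups.

```python
# Toy triple store: explicit subject-predicate-object facts, in the
# spirit of the semantic web. The facts themselves are illustrative.
facts = [
    ("Dominica", "flagFeatures", "parrot"),
    ("Dominica", "britishColonyEstablished", 1805),
    ("Sweden", "declaredWarOnFranceIn", 1805),
]

def objects(subject=None, predicate=None):
    """Return the objects of every fact matching the given pattern."""
    return [o for s, p, o in facts
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

# The "hard" query is just two joins over explicit relationships:
war_year = objects("Sweden", "declaredWarOnFranceIn")[0]
countries = [s for s, p, o in facts
             if p == "britishColonyEstablished" and o == war_year]
animals = [objects(c, "flagFeatures")[0] for c in countries]
print(animals)  # -> ['parrot']
```

No neural network required: the difficulty of the query lives entirely in extracting the relationships, not in reasoning over them once they are explicit.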
It was a beautiful failure. By 2013, fewer than 13% of web domains contained semantic markup, despite more than a decade of advocacy. Berners-Lee himself admitted in 2006 that "this simple idea…remains largely unrealised".
The failure reveals something crucial about the gap between technological idealism and human behaviour. Creating semantic markup was tedious work that benefited search engines and machines rather than the people doing the markup. As one critic observed, "The nature of human-produced content makes it extremely difficult to categorise without loss of accuracy."
The technology industry learned the wrong lesson. Rather than solving the hard problem of structured information, it simply waited for AI powerful enough to extract meaning from chaos. As researcher Rafe Brena noted, "Generative AI quickly overshadowed the SW [semantic web]. Its capabilities entranced developers, researchers, and corporations alike."
The great search substitution
Whilst the semantic web withered, a different pattern emerged: the systematic replacement of structured navigation with search boxes. This shift happened gradually, almost invisibly, across every corner of digital life.
File management tells the story perfectly. Traditional computing encouraged users to develop folder hierarchies that matched their thinking. Research consistently shows that "most subjects located files by browsing the folder structure, with searching used as a last resort." Users built mental maps of their information.
Then came the search-first revolution. Google Drive exemplifies the new philosophy: rather than improving organisational tools, simply provide comprehensive search. Just dump everything in and search for it later. The same pattern repeated everywhere: e-commerce sites abandoned logical navigation for search bars, software replaced menus with command palettes, and websites prioritised search over site maps.
The convenience seemed obvious. But the cost was invisible: users stopped understanding their information environment.
Why users still choose navigation
Here's the uncomfortable truth the search-first revolution ignored: when systems are well-structured, users prefer navigation. Extensive research by the Nielsen Norman Group consistently finds that "most users tend to start browsing over searching," with search usage typically ranging from just 11% to 21% of interactions.
The psychology is revealing. Navigation "replaces recall with recognition"—instead of forcing users to generate complex queries, they can recognise relevant categories and refine their path. Well-designed navigation teaches users about what's available whilst helping them find what they need.
Search, by contrast, demands that users already know what they want and how to express it. Even with reasonable queries, results are often irrelevant on specialist sites lacking Google's sophisticated algorithms. Where structured data has been implemented, the benefits are measurable: Rotten Tomatoes found 25% higher click-through rates for semantically enhanced pages.
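Structured-data markup of the kind behind results like Rotten Tomatoes' is typically schema.org JSON-LD embedded in a page's HTML. A minimal sketch—the property names are standard schema.org Movie fields, but the values are illustrative:

```python
import json

# Minimal schema.org "Movie" markup, as it would appear inside a
# <script type="application/ld+json"> tag on a film's page.
# Title and rating values here are made up for illustration.
movie = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Example Film",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "94",
        "bestRating": "100",
        "ratingCount": "250",
    },
}
print(json.dumps(movie, indent=2))
```

A few lines like these tell any crawler, unambiguously, what the page is about and how it was rated—exactly the explicit encoding of meaning the semantic web proposed.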
The semantic web's technical vision was sound. The problem was getting humans to create the structured foundation it required.
The road not taken
To grasp what we lost, consider HyperCard—a remarkable piece of 1987 software that embodied a radically different computing philosophy. HyperCard allowed anyone to create interactive, linked documents containing text, images, sounds, and simple programming. It was a personal web before the web existed.
Users built everything from simple databases to complex interactive experiences. The revolutionary game Myst began as a HyperCard stack. The Beatles released official HyperCard content. As one researcher noted, students found HyperCard provided "a powerful new medium of communication and new insights into organizing and synthesizing information."
HyperCard embodied empowerment over dependence. Instead of relying on centralised services, it put creative and organisational power directly in users' hands. When its creator, Bill Atkinson, later acknowledged that connecting HyperCard stacks over networks could have created the first web browser, he highlighted a missed opportunity: user-empowering networked computing.
The web that emerged prioritised document sharing over personal knowledge management. Search engines focused on finding information rather than helping users organise it. The result was a systematic retreat from the personal computing vision.
Digital democracy at stake
These aren't merely design choices—they represent a fundamental shift in power relationships. Recent research reveals that just 28% of American adults feel "very confident" in their digital literacy, despite 84% using digital tools daily for work. Users can operate modern systems but cannot understand, control, or choose meaningful alternatives.
This digital illiteracy becomes dangerous as AI systems grow more sophisticated and opaque. Modern large language models create "ephemeral semantic maps" that exist only within neural networks. Users can query these systems but cannot examine, verify, or build upon their knowledge structures.
The semantic web promised that "knowledge and connections would remain accessible and comprehensible, not hidden within impenetrable AI models." Instead, we've created systems where information processing happens inside black boxes that even their creators don't fully understand.
When the most marginalised communities bear the brunt of the "AI divide," as researchers document, the democratic implications become clear. Citizens who cannot understand the systems mediating their information access lose capacity for informed decision-making.
Reclaiming digital agency
This trajectory wasn't inevitable. The technical foundations for empowering computing existed decades ago. What failed was the collective will to do the hard work of creating and maintaining structured information systems.
Modern AI's success shouldn't blind us to what we've lost. When we replace navigation with search, and search with AI, we eliminate opportunities for users to understand their information environment. The convenience is real, but so is the cost to human capability.
Some encouraging signs point toward correction. Knowledge graphs powering AI systems represent a partial return to semantic principles. Advanced users are rediscovering structured note-taking and personal knowledge management tools.
But reclaiming digital agency requires more than better tools—it requires recognising that convenience-first design creates dependence rather than capability. The goal shouldn't be eliminating AI, but combining its power with user understanding and control.
Whether we achieve that balance may determine not just how we interact with information, but how we participate in democratic society when artificial intelligence mediates more of human experience.