Nearly Right

How a generation learned to think like its tools

In a corner of a Cambridge University library, I observe a ritual that would have been impossible three years ago. Twenty-year-old Emma quietly asks her laptop to explain Keynesian economics, receives an instant response, and moves on to the next assignment. Nearby, James whispers questions about a relationship to his phone, then shifts seamlessly to career planning. Neither student has spoken to another human being in two hours.

What I'm witnessing isn't simply technology adoption; it's the emergence of history's first generation to develop intellectual and emotional capabilities primarily through algorithmic rather than human conversation. The implications transcend academia. We're observing a species-level adaptation that may fundamentally alter how humans learn, think, and understand themselves.

The scope is staggering: 86% of students now use artificial intelligence in their studies, with 66% specifically relying on ChatGPT. Yet these statistics conceal a more profound transformation. An entire generation is learning to outsource the cognitive processes that historically distinguished our species.

The neurology of digital dependence

Behind these behavioural shifts lies compelling neurological evidence of the true cost of this cognitive migration. MIT Media Lab researchers, monitoring brain activity across 32 regions using EEG technology, found that students writing essays with ChatGPT demonstrated "the lowest brain engagement" and "consistently underperformed at neural, linguistic, and behavioral levels". Across successive assignments, these students grew progressively more passive, eventually resorting to simple copy-and-paste.

The University of Toronto documented a 42% decrease in divergent thinking scores (our ability to generate multiple solutions to a problem) among college students compared with their peers just five years earlier, a decline that coincides with the period of widespread artificial intelligence adoption.

Human learning evolved through social interaction: from earliest development, we "form memories and acquire knowledge about the world from and with others". Brain imaging reveals that authentic conversation activates complex networks involving mirror neurons, emotional processing, and pattern recognition, the kind of interaction AI simply cannot replicate.

Research examining extensive ChatGPT usage logs reveals the scope of this cognitive delegation. Students generate thousands of prompts monthly, covering everything from essay writing to mental health advice. Yet this constant algorithmic consultation fails to develop the critical thinking skills that emerge from wrestling with ideas alongside peers, mentors, and critics who challenge assumptions and demand justification.

The conversation we're losing

Understanding what's disappearing requires examining how humans traditionally built intellectual capabilities, and why that process cannot be replicated even by sophisticated algorithms. For millennia, learning combined trial-and-error experience with social modelling, in which "modelled activities convey rules for generative behaviour", allowing learners to "generate new patterns of behaviour that go beyond what they have seen or heard".

The messy, frustrating process of human conversation served crucial developmental functions. When students struggled to understand a complex political concept, they might spend hours debating with classmates, defending half-formed ideas, confronting opposing viewpoints, and gradually refining their understanding through social friction. Now they simply ask ChatGPT for clarification and move on, missing the cognitive development that emerges from intellectual struggle.

Stanford research on "productive struggle" demonstrates that effective learning requires students to grapple with challenges, yet AI often provides shortcuts that bypass this crucial developmental process. When Emma encounters economic concepts she doesn't understand, AI eliminates the confusion and uncertainty that historically drove deeper engagement. She receives polished explanations without experiencing the cognitive wrestling that builds analytical muscles.

The anthropological evidence is clear: conversation isn't just information exchange—it's the primary mechanism through which our species has transmitted knowledge and developed reasoning abilities. AI conversations may feel sophisticated, but they lack the unpredictability, emotional resonance, and intellectual challenge that drive human cognitive development.

The dependency paradox

Harvard Business School research reveals a disturbing dynamic: people with access to "strong, highly predictive AI were much more likely to metaphorically 'fall asleep' at the wheel," whilst those with less capable systems remained cognitively engaged.

AI proves most harmful precisely when it works best. Students don't become intellectually passive despite these capabilities; they become passive because of them. The more helpful these systems become, the less humans engage their critical faculties.

Research using the I-PACE model reveals that academic stress and performance expectations directly mediate the relationship between low academic self-efficacy and AI dependency. Students facing academic pressure increasingly turn to AI as a coping mechanism, yet this dependence further erodes the analytical capabilities they need to succeed independently.
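
For readers unfamiliar with the statistical language here, "mediation" claims of this kind are typically tested with a chain of regressions. Below is a minimal sketch of that chain run on synthetic data; the variable names and coefficients are illustrative assumptions, not figures from the studies cited.

```python
# A sketch of a Baron-Kenny style mediation test on synthetic data.
# Variable names (self_efficacy, academic_stress, ai_dependency) are
# hypothetical, chosen to mirror the I-PACE claim in the text.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulate the hypothesised chain: low self-efficacy -> stress -> AI dependency.
self_efficacy = rng.normal(0, 1, n)
academic_stress = -0.6 * self_efficacy + rng.normal(0, 1, n)   # path a
ai_dependency = 0.5 * academic_stress + rng.normal(0, 1, n)    # path b

# Step 1: total effect of self-efficacy on AI dependency (path c).
total = sm.OLS(ai_dependency, sm.add_constant(self_efficacy)).fit()

# Step 2: effect of self-efficacy on the mediator (path a).
a_path = sm.OLS(academic_stress, sm.add_constant(self_efficacy)).fit()

# Step 3: direct effect (path c') after controlling for the mediator (path b).
X = sm.add_constant(np.column_stack([self_efficacy, academic_stress]))
direct = sm.OLS(ai_dependency, X).fit()

print(f"total effect c:   {total.params[1]: .3f}")
print(f"path a:           {a_path.params[1]: .3f}")
print(f"path b:           {direct.params[2]: .3f}")
print(f"direct effect c': {direct.params[1]: .3f}")  # shrinks toward zero under mediation
```

If stress genuinely carries the effect, the direct path c' collapses toward zero once the mediator enters the model, which is the pattern the I-PACE research describes.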

Mental health conversations with ChatGPT illustrate this dynamic perfectly. Rather than developing emotional resilience through human relationships—learning to navigate disagreement, ambiguity, and authentic feedback—students train themselves to seek algorithmic validation. The AI never challenges their assumptions, never questions their interpretations, never forces them to defend their thinking. It simply provides comfort through confirmation.

The great cognitive outsourcing

Studies by Michael Gerlich at SBS Swiss Business School reveal a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. This isn't merely about using tools—it's about replacing human cognitive processes with algorithmic ones.
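
One way to see what "mediated by cognitive offloading" means in practice: the raw negative correlation between AI use and critical-thinking scores should weaken once offloading is held constant. The sketch below demonstrates this with a partial correlation on synthetic data; all variable names and effect sizes are assumptions for illustration, not Gerlich's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 400

# Hypothesised chain: AI use raises cognitive offloading, which in turn
# lowers critical-thinking scores. All coefficients are synthetic.
ai_use = rng.normal(0, 1, n)
offloading = 0.7 * ai_use + rng.normal(0, 1, n)
critical_thinking = -0.6 * offloading + rng.normal(0, 1, n)

# Raw correlation: AI use and critical thinking are negatively related.
r_raw, _ = stats.pearsonr(ai_use, critical_thinking)

# Partial correlation controlling for offloading: regress both variables
# on the mediator and correlate the residuals.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial, _ = stats.pearsonr(residuals(ai_use, offloading),
                              residuals(critical_thinking, offloading))

print(f"raw correlation:     {r_raw: .3f}")      # clearly negative
print(f"partial correlation: {r_partial: .3f}")  # near zero if offloading mediates
```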

Research demonstrates that AI creates cognitive offloading where individuals shift memory and problem-solving tasks to technology, leading to what researchers term "cognitive atrophy". When students use ChatGPT to write cover letters, they're not just getting assistance—they're avoiding the cognitive work of understanding what employers value, crafting persuasive arguments, and developing their own professional voice.

The broader implications become visible when we consider what distinguishes human intelligence. Our species excelled not because we could process information quickly, but because we could generate novel solutions through creative confusion, learn from mistakes, and build understanding through social interaction. AI assistance systematically circumvents each of these developmental processes.

Perhaps most troubling, students increasingly view AI-generated work as "their" projects because they provided the prompts, losing their sense of intellectual ownership whilst claiming credit for products that aren't truly theirs. This represents a fundamental shift in how humans understand creativity and achievement.

The hidden stakeholders

The beneficiaries and burden-bearers of this transformation are starkly misaligned. Technology companies gain users and data whilst students bear the cognitive costs. OpenAI's own research acknowledges that "state-by-state differences in student AI adoption could create gaps in workforce productivity," yet the company actively promotes student adoption.

Educational institutions benefit from efficiency gains and reduced grading loads whilst their graduates may enter the workforce with diminished analytical capabilities. Only 25% of universities provide the AI training that 75% of students say they want, suggesting institutions are managing appearances rather than addressing the substantive implications of cognitive dependency.

Younger participants consistently show higher AI dependence and lower critical thinking scores compared to older users, meaning those with the most to lose from cognitive atrophy face the greatest risk. Yet they're also the demographic most actively courted by AI companies seeking to establish early adoption patterns.

The species that forgot how to struggle

From an anthropological perspective, we're observing an unprecedented species-level adaptation. For the first time in human history, an entire generation is learning to replace the cognitive struggle that drives development with algorithmic assistance that eliminates friction.

Research confirms that ChatGPT usage increases procrastination and memory loss whilst dampening academic performance. Yet students continue adopting these tools because they provide immediate relief from intellectual discomfort—the very discomfort that historically drove cognitive growth.

The Cambridge students represent millions developing intellectual habits that may prove irreversible. They're learning to equate prompting with thinking, external validation with understanding, and algorithmic efficiency with genuine capability. These patterns extend far beyond university, shaping how they'll approach professional challenges, personal relationships, and civic responsibilities.

Reclaiming human development

This analysis shouldn't lead to technological rejection, but to conscious cultural adaptation. We can design educational environments that harness AI's capabilities whilst preserving the cognitive struggle essential for human development.

The solution requires recognising that conversation—messy, challenging, uncertain human conversation—remains irreplaceable for developing critical thinking, creativity, and authentic understanding. Universities must create spaces where students grapple with ideas together, defend half-formed thoughts, and experience the productive confusion that drives intellectual growth.

Research confirms that AI "cannot support all the activities that lead to new ideas, such as interactively exchanging and discussing ideas with others". This limitation becomes an opportunity if we consciously preserve and prioritise human developmental processes.

The Cambridge students I observed aren't inherently less capable than previous generations. They're adapting rationally to available tools. Our responsibility is ensuring those adaptations serve rather than undermine their long-term cognitive development.

The choice before us carries generational weight: allow an entire cohort to outsource intellectual growth to algorithms, or consciously design educational experiences that harness AI whilst preserving the cognitive struggle essential for human flourishing.

The last conversation generation needn't be the last. Preserving authentic human development demands recognising what we're losing—and protecting the cognitive processes that make us most distinctively human.

#artificial intelligence