Programmers reinvent 1970s concepts because nobody teaches computing history
Alan Kay's warning about software development's historical amnesia reveals a profession that treats its intellectual heritage as irrelevant trivia
Imagine a university course titled "Classical Software Studies" where students study Ivan Sutherland's 1963 Sketchpad dissertation with the same rigour philosophy students bring to Plato. They would trace how time-sharing systems evolved at MIT and Dartmouth, analyse why Unix composition mattered, understand Smalltalk not as a programming language but as a vision for computing itself.
This course doesn't exist. Most computer science students graduate without encountering the people who invented fundamental concepts they use daily. This isn't academic oversight—it reveals something disturbing about how software development positions itself amongst intellectual disciplines.
Alan Kay, who pioneered object-oriented programming and graphical interfaces at Xerox PARC, has spent decades watching this pattern with mounting frustration. In his 1997 OOPSLA keynote titled "The Computer Revolution Hasn't Happened Yet", he warned that computing was stuck in endless reinvention. The problem wasn't technology—it was that an entire profession kept rediscovering concepts thoroughly explored before most current programmers were born, each time marketing them as revolutionary breakthroughs.
The missing classical education
Art history students spend years analysing how Caravaggio's lighting influenced Baroque painting. Philosophy undergraduates work through Aristotle's logic, understanding how ancient arguments shaped modern thought. Both disciplines treat historical knowledge as foundation, not decoration.
The requirements reflect this. Art history majors at leading universities complete six to nine courses spanning ancient civilisations to contemporary practice. Philosophy programmes mandate multiple history sequences: ancient, medieval and early modern, nineteenth century. These aren't boxes to tick. They're recognition that creative and intellectual work exists within traditions.
An architect ignorant of historical building techniques cannot properly evaluate modern innovations. A philosopher who hasn't read Hume on causation will flounder in contemporary debates about scientific method. Historical knowledge provides the framework for understanding why current approaches matter.
Computer science took a different path. Survey typical undergraduate programmes and you'll find courses on current languages, contemporary frameworks, immediate practical skills. The history of computing appears as an optional elective, if at all. Students regularly complete entire degrees without encountering Dennis Ritchie, Douglas Engelbart, or the other pioneers who defined the concepts they'll use throughout their careers.
The expensive cycle of rediscovery
The consequences show up in how software debates recur across decades, each generation convinced they've discovered something new.
Cloud computing dominates technology discourse as if it represents novel thinking about computing resources. Yet the core idea, multiple users sharing centralised computing power, defined computing in the 1960s. MIT first demonstrated its Compatible Time-Sharing System in 1961, allowing multiple programmers to interact with a single mainframe simultaneously. Dartmouth followed with DTSS in 1964. By the 1970s, time-sharing was standard practice.
In the 1980s, personal computers with dedicated processors seemed to render time-sharing obsolete. Now cloud computing recreates that shared-resource model with different infrastructure. This isn't evolution; it's amnesia. The wheel gets reinvented whilst the original inventors' insights gather dust in unread papers.
Microservices architecture follows the same pattern. Presented as breakthrough thinking about system structure, it echoes debates from Unix development in the 1970s. Unix pioneered modular design: small programs doing one thing well, composed through clean interfaces. This philosophy addressed precisely what microservices claim to solve: managing complexity through decomposition, enabling independent scaling, facilitating team autonomy.
The underlying principles remain unchanged. Modern advocates describe escaping "monolithic" architecture, apparently unaware that they're restating arguments Dennis Ritchie and Ken Thompson documented fifty years ago. The technologies differ (containers versus processes, HTTP APIs versus pipes), but the architectural thinking is remarkably similar.
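A minimal sketch in Python makes the parallel concrete. It assumes a POSIX-like system where grep, sort, and uniq are installed; the log file name and the function are invented for illustration. Three small programs, each doing one job, are composed through pipes, which is the same decomposition microservices express with containers and HTTP.

```python
import subprocess

def count_unique_error_lines(logfile: str) -> bytes:
    # grep: select lines containing "ERROR" (filtering is its one job)
    grep = subprocess.Popen(["grep", "ERROR", logfile], stdout=subprocess.PIPE)
    # sort: order the selected lines (ordering is its one job)
    sort = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
    grep.stdout.close()  # let grep receive SIGPIPE if sort exits early
    # uniq -c: collapse duplicate lines and count them (aggregation is its one job)
    uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout, stdout=subprocess.PIPE)
    sort.stdout.close()
    output, _ = uniq.communicate()
    return output

if __name__ == "__main__":
    # "app.log" is a hypothetical input file used only for the example
    print(count_unique_error_lines("app.log").decode())
```

The microservice analogue would put each stage behind its own HTTP endpoint. The transport changes, but the architectural decision of splitting the work into small, independently composable pieces does not.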
This pattern costs money. Teams spend months discovering problems that were solved in the 1970s. They debate trade-offs that were thoroughly documented in papers nobody reads. They make mistakes that earlier generations already made, documented, and learned from. All because the profession treats its intellectual heritage as irrelevant.
How software development went its own way
The divergence has explicable origins. When universities began computing degrees in the 1960s and 1970s, the field changed so rapidly that historical study seemed pointless. Unlike philosophy or art, where centuries of accumulated knowledge provided obvious material for study, computing appeared to have no history worth teaching.
That initial pragmatism hardened into a structural norm. As computing matured, curricula never adopted the historical requirements standard in the humanities. The field positioned itself as relentlessly forward-looking: a technical discipline in which last year's approaches were already obsolete.
Industry reinforced this. Employers wanted graduates who could contribute immediately to current projects. Job postings specified recent frameworks, not foundational concepts. Academic departments, keen to prove their relevance, structured programmes around market demands.
The rapid pace of change provided justification. Why study 1970s systems when languages and architectures had evolved so dramatically? But this reasoning confuses implementation details with fundamental concepts. Programming languages change and hardware improves, yet questions about structuring complex systems, managing resources, and enabling human interaction persist across decades.
What the profession loses
Kay's frustration stems from watching concepts degraded through incomplete understanding. When he developed Smalltalk at Xerox PARC, object-oriented programming meant messaging between entities, inspired by biological systems. Modern object-oriented programming—particularly C++—took surface features like classes and inheritance whilst missing deeper ideas about component interaction.
"I made up the term object-oriented," Kay said in his OOPSLA talk, "and I can tell you I did not have C++ in mind." This isn't inventor's pique. It points to how concepts can be both widely adopted and fundamentally misunderstood when divorced from intellectual context. Programmers learned object-oriented syntax without reading the papers explaining why this approach mattered.
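The distinction is easier to see in code. The following Python sketch is purely illustrative, with invented names (Account, receive, and the message strings) and no claim to being Smalltalk: the sender knows only a message name, and the receiver decides how to interpret it, which is the emphasis Kay describes, as opposed to the class-and-inheritance surface that later languages foregrounded.

```python
class Account:
    """A toy object that responds to messages rather than exposing a fixed method surface."""

    def __init__(self, balance=0):
        self._balance = balance  # state stays hidden behind the message interface

    def receive(self, message, *args):
        # The receiver decides what a message means; the sender knows only its name.
        handler = getattr(self, "_on_" + message, None)
        if handler is None:
            raise AttributeError(f"Account does not understand {message!r}")
        return handler(*args)

    def _on_deposit(self, amount):
        self._balance += amount
        return self._balance

    def _on_balance(self):
        return self._balance


acct = Account()
acct.receive("deposit", 50)       # send a message; the object interprets it itself
print(acct.receive("balance"))    # prints 50
```

The point is not the syntax but where the decision-making sits: behaviour is negotiated between sender and receiver at runtime, rather than fixed by a class hierarchy.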
The absence of historical grounding creates other problems. Without a framework for evaluating new claims, every architectural pattern can be marketed as revolutionary, and few practitioners will recognise it as an iteration on established ideas. This makes the profession vulnerable to fashion cycles and vendor marketing, constantly abandoning functional approaches for "next big things" that resurrect forgotten concepts under new branding.
Consider experienced developers encountering genuinely new problems. Without understanding how previous generations approached similar challenges, they start from first principles rather than building on documented solutions. This doesn't produce innovation—it produces expensive rediscovery, repeating the false starts and dead ends earlier researchers already navigated.
The profession loses continuity of thought. In philosophy, contemporary scholars engage directly with historical arguments. A modern epistemologist might disagree with Hume but must address his positions. In computing, 1970s ideas are simply unknown rather than debated, accepted, or refined. Progress becomes indistinguishable from mere change.
The unrealised classical curriculum
A genuine classical software studies approach wouldn't mean programming in COBOL or treating old systems as sacrosanct. It would mean studying foundational papers with the intellectual rigour other disciplines apply to their classics.
Students would read Sutherland's 1963 dissertation on Sketchpad, understanding how his ideas about objects and constraints influenced subsequent interface design. They would study the development of time-sharing systems, tracing debates about resource allocation and user interaction that remain current. The Unix philosophy would receive serious treatment, not as a historical curiosity but as a coherent approach to system design that later work either extends or contradicts.
The curriculum would examine failures and abandoned approaches alongside successes, understanding why certain paths weren't pursued. It would study how concepts evolved, how "object-oriented" shifted meaning, how architectural patterns emerged from specific circumstances.
Most importantly, it would position current work within an intellectual tradition. Modern distributed systems would be understood as responses to the problems that motivated RPC development in the 1980s. Contemporary programming paradigms would be studied against earlier functional and logic programming research. New frameworks would be evaluated by asking what problems they solve that previous approaches didn't address.
This isn't about privileging old solutions. It's about recognising that software development, like any intellectual endeavour, advances through accumulated knowledge rather than perpetual reinvention.
Beyond the amnesia
Kay's decades of warnings point to something deeper than missing courses. They reveal how computing defined itself as a profession. Unlike architects, who study classical structures whilst designing modern buildings, or philosophers, who engage with historical arguments whilst developing contemporary theories, programmers positioned themselves as purely presentist, concerned only with what works now.
This served the field well during explosive early growth. When computing was genuinely new, forward focus made sense. But the discipline has matured. Fundamental concepts have been established, explored, refined. Computing now has genuine intellectual history worth studying.
The question is whether computer science will recognise this. Will universities treat historical computing as foundational knowledge rather than optional enrichment? Will the profession value understanding of conceptual development alongside practical skills? Will students actually read the papers that defined the concepts they use?
Or will the pattern continue—each generation rediscovering time-sharing and modularity and composition, convinced they've invented something new, whilst the people who pioneered these concepts watch in disappointed recognition?
The technology changes. The marketing evolves. But the underlying concepts remain, waiting to be learned properly rather than reinvented poorly. For now, Alan Kay remains disappointed. And the profession keeps running in circles.