Tech executives have become accidental philosophers
Silicon Valley's most powerful figures are routinely asked about philosophy, and their responses reveal why technical expertise shouldn't be confused with philosophical wisdom
When someone asked Sam Altman how we distinguish real from fake in our digital world, the OpenAI CEO's response was painful to watch. He mumbled something about cryptographic signatures—missing the point entirely—before drifting into the kind of stoned undergraduate philosophising that makes you wonder if anyone's steering the ship of artificial intelligence.
"Even like a photo you take out of your iPhone today, it's like mostly real, but it's a little not," Altman meandered, as if he'd just discovered that cameras process images. His rambling meditation on TikTok videos and trampolines revealed someone wrestling with profound questions about truth and reality as though Descartes, Baudrillard, and entire academic departments had never existed.
This wasn't just one awkward moment. It was a perfect specimen of our strangest cultural phenomenon: we've somehow decided that people who build software are the best guides to life's deepest mysteries.
The making of accidental gurus
How did we get here? The transformation began with Steve Jobs, who pioneered the tech-leader-as-lifestyle-philosopher model. His Stanford commencement speech about staying "hungry" and "foolish," his meditations on mortality and meaning—suddenly consumer electronics came packaged with life advice. Jobs at least spoke from genuine experience about creativity and design. Today's tech leaders field questions about consciousness itself.
Watch any major tech conference and you'll see CEOs holding forth on human nature, social organisation, and the meaning of existence. Elon Musk tweets about the simulation hypothesis. Tech investors fund "longtermist" philosophers who claim we should prioritise hypothetical future humans over present suffering. Silicon Valley billionaires have embraced philosophical frameworks like effective altruism, positioning themselves as architects of humanity's cosmic future—often without seriously engaging the established scholarship they're supposedly replacing.
It's intellectual colonisation, pure and simple. Questions that have occupied philosophers, theologians, and social theorists for millennia now get answered by people whose expertise lies in optimising code.
Why journalists keep asking the wrong people
The media didn't stumble into this dynamic. They actively constructed it. Technology journalists face a fundamental challenge: making technical developments compelling to general audiences. Asking "What does this mean for humanity?" generates more clicks than explaining database architecture.
But there's a deeper problem. Technology reporters increasingly rely on Twitter networks that are "too small, too predictable, and too reflective of the very technological power their reporting should view sceptically". The result is a closed loop: tech leaders become go-to sources for cosmic questions whilst actual experts in philosophy, ethics, and social theory remain invisible.
Think about the absurdity. When major outlets want commentary on reality in the digital age, they interview venture capitalists rather than philosophers of technology. Questions about AI and consciousness go to the CEO building the systems instead of the cognitive scientists studying consciousness or the philosophers who've spent decades thinking about machine intelligence.
We've created a star system where engineering credentials automatically confer wisdom about existence itself.
The cost of confused expertise
This matters because philosophical questions aren't just engineering problems waiting for technical solutions. When Altman suggests our sense of reality will simply "converge" around whatever technology produces, he's not offering analysis—he's revealing a worldview where technological capability determines truth itself.
The damage runs in both directions. Tech leaders asked to play public intellectual often produce responses that satisfy neither technical precision nor philosophical rigour. Meanwhile, the actual scholarship on technology and society—happening in universities across Europe and North America—gets drowned out by CEO soundbites.
Consider what we're losing. MIT runs collaborative programmes where philosophers and computer scientists tackle AI ethics together, developing exactly the interdisciplinary expertise public discourse needs. Researchers study how digital technologies reshape consciousness and social relations, doing careful analytical work that celebrity interviews bypass entirely. Universities host thriving programmes in technology ethics and digital philosophy, bringing sophisticated frameworks to bear on precisely the questions tech executives fumble.
But this scholarship rarely breaks through because it lacks star power. Why quote a professor when you can get Elon Musk?
What real expertise looks like
The philosophical tradition offers rich resources for thinking about technology and human experience—from phenomenology's insights into embodied cognition to critical theory's analysis of technological power. These aren't abstract academic exercises; they're sophisticated tools for understanding how digital systems reshape human life.
When researchers examine algorithmic bias, they're not just identifying technical flaws—they're uncovering how power structures get embedded in code. When philosophers analyse social media's impact on identity, they're drawing on centuries of thinking about selfhood and community. This is the kind of nuanced analysis that gets steamrolled when we treat tech CEOs as universal experts.
The irony is exquisite: we live in an age of unprecedented specialisation, yet we expect people who optimise search algorithms to also serve as guides to consciousness, meaning, and human flourishing.
Reclaiming intellectual territory
The solution isn't complicated. We could start treating technical expertise as exactly that—technical expertise. When interviews turn philosophical, we might acknowledge that engineering brilliance doesn't confer wisdom about consciousness or the human condition.
This would require journalists to develop better sourcing strategies, drawing on actual experts in philosophy, ethics, and social theory when covering technology's broader implications. It would mean recognising that the most important questions raised by artificial intelligence aren't necessarily best answered by the people building AI systems.
Most fundamentally, it would require abandoning the fantasy that technological and philosophical questions are the same type of challenge. Building sophisticated software requires different skills from understanding human consciousness. Optimising systems demands different expertise from evaluating social implications.
The next time a journalist asks a tech CEO about the nature of reality, perhaps the most honest answer would be refreshingly simple: "That's not my area. Let me tell you what I do know, then you should probably speak to a philosopher."
Until then, we'll keep getting cryptographic signatures as solutions to epistemological problems, and wondering why the answers feel so unsatisfying.