The bias that billions built
How artificial intelligence's economic architecture makes discrimination inevitable
The morning queue forms early outside a yellow shipping container in downtown Brooklyn, where visitors scan QR codes with the solemnity of pilgrims seeking digital absolution. Their stories—fragments of hope, struggle, and identity—feed into artificial intelligence systems that Stephanie Dinkins has trained to see faces like theirs. Meanwhile, 3,000 miles away in Silicon Valley boardrooms, OpenAI executives finalise another billion-dollar funding round for systems trained on data that systematically erases these same communities from digital existence.
This isn't merely an art installation versus corporate AI. It's a battle for the soul of artificial intelligence itself, fought between the community partnership required for authentic representation and the venture capital mathematics that made AI dominance possible. The victor will determine whether machines learn to see humanity through the eyes of the powerful few or the diverse many.
Dinkins' If We Don't, Who Will? installation occupies downtown Brooklyn through September, offering what its creator calls a challenge to "white-dominated generative AI" by training systems on Black and brown perspectives. Visitors contribute personal stories and answer prompts about their social experiences, feeding a model trained on African American Vernacular English and images by Roy DeCarava, the Harlem photographer who captured intimate portraits of Black life in the 1950s. The resulting AI-generated images prioritise faces of colour even when submissions come from white participants—a deliberate inversion of systems that default to whiteness.
Yet this artistic intervention unfolds against economic forces that dwarf its community-scale ambitions. Artificial intelligence startups captured over £80 billion in global venture capital during 2024 alone, with AI companies receiving 53% of all venture funding in the first half of 2025. Three companies—OpenAI, Anthropic, and Elon Musk's xAI—raised a combined £25 billion between 2023 and 2024, representing one in ten dollars invested across all American startups. This extraordinary capital concentration coincides with persistent demographic exclusion: Black workers comprise merely 7.4% of the high-tech workforce, whilst women represent less than a third of AI professionals globally.
The mathematics of exclusion
The economics are brutally simple and devastatingly effective. Venture capital's power-law returns demand 10x outcomes from a fund's winners to compensate for its many failures—an arithmetic as unforgiving as gravity for communities seeking authentic representation. Microsoft's £10 billion OpenAI investment, Amazon's £6 billion Anthropic stake, and Google's £1.5 billion competitive response aren't conscious acts of discrimination. They're the predictable outcome of a system where quarterly returns matter more than generational trust-building.
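To see why, run the arithmetic on a stylised fund. The sketch below is purely illustrative: the portfolio size, cheque size, and outcome mix are assumptions chosen for round numbers, not figures from any real firm.

```python
# Stylised venture-fund arithmetic (illustrative assumptions, not data from any real portfolio).
# A fund writes ten equal cheques; most companies fail, a couple return their capital,
# and the fund's fate rests on a single outlier.

investments = 10              # assumed number of portfolio companies
cheque = 10_000_000           # assumed investment per company, in pounds
fund_size = investments * cheque

failures, breakevens = 7, 2   # assumed outcomes: seven wipe-outs, two capital returns
winner_multiple = 10          # the "10x" expected of the one winner

proceeds = breakevens * cheque + winner_multiple * cheque
print(f"Overall fund multiple: {proceeds / fund_size:.1f}x")  # -> 1.2x

# Even a 10x winner barely returns the fund once the failures are counted,
# which is why investors hunt for outcomes big enough to return the fund on their own.
```

On those assumptions the lone 10x winner leaves the whole fund at roughly 1.2x its capital, which is why months of community trust-building, however valuable, struggle to clear the bar an investment committee sets.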
Consider the gulf between Dinkins' methodology and Silicon Valley's approach. She spends months cultivating relationships with community participants, learning cultural contexts, iteratively refining her system's understanding. Her AI learns African American Vernacular English not through algorithmic parsing but through respectful collaboration with the people who speak it.
Venture capital's quarterly reporting cycles treat such relationship-building as expensive inefficiency. The industry's obsession with "data at scale"—scraping the entire internet to train larger models faster—systematically excludes what the scholar Louis Chude-Sokei calls "liberating content" in favour of whatever can be harvested most efficiently. The result: AI systems that learn humanity from the debris of digital extraction rather than the wisdom of cultural partnership.
The research findings read like a systematic indictment of digital democracy. University of Washington researchers analysed how three state-of-the-art language models ranked resumes and found a pattern too consistent to dismiss as noise: white-associated names were preferred 85% of the time, Black-associated names a mere 9%. The systems never—not once—preferred Black male names over white male names. Black women fared marginally better than Black men but remained systematically disadvantaged. The researchers called this "unique harm against Black men that wasn't visible from looking at race or gender in isolation"—academic language for technological apartheid.
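The audit method behind findings like these is simple to sketch. What follows is a toy illustration rather than the University of Washington team's actual pipeline: the resume snippet, the names, and the placeholder score_resume function are hypothetical stand-ins for whatever ranking model is under test.

```python
# Toy name-substitution audit (illustrative only; not the published study's pipeline).
# Hold the resume constant, swap only the candidate's name, and count how often
# the system under test scores one version above the other.
import random

def score_resume(resume_text: str, job_description: str) -> float:
    """Placeholder for the model being audited; a real audit would call it here."""
    return random.random()

def preference_rate(resume: str, job: str, name_a: str, name_b: str, trials: int = 1000) -> float:
    wins = 0
    for _ in range(trials):
        score_a = score_resume(resume.replace("[NAME]", name_a), job)
        score_b = score_resume(resume.replace("[NAME]", name_b), job)
        wins += score_a > score_b
    return wins / trials

resume = "[NAME]\nTen years of software engineering experience..."
rate = preference_rate(resume, "Senior engineer role", "Emily", "Lakisha")
print(f"First name preferred in {rate:.0%} of trials")  # near 50% for an unbiased scorer
```

With an unbiased scorer the preference rate hovers around 50%; the audits described above report figures nowhere near it when only the name changes.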
These aren't isolated glitches but features of systems processing millions of decisions daily. ProPublica's investigation of COMPAS, the risk-scoring algorithm used across American criminal justice, found that Black defendants who did not reoffend were nearly twice as likely as comparable white defendants to be misclassified as high risk. Amazon scrapped a hiring algorithm after discovering it penalised resumes containing the word "women's". A widely used healthcare algorithm systematically underestimated the needs of Black patients relative to equally sick white patients. Each individual decision might seem defensible; the pattern reveals systematic exclusion by design.
The failure of conventional solutions
Historical precedent offers little comfort for those hoping traditional diversity initiatives might address AI bias. The most comprehensive study of corporate inclusion efforts, examining 829 companies over 31 years, found "no positive effects in the average workplace" from diversity training programmes. Mandatory training was actually followed by declines in the representation of several underrepresented groups—a finding that should terrify any organisation relying on standard approaches.
Harvard sociologist Frank Dobbin's research demonstrates why such programmes consistently backfire. "The typical all-hands-on-deck, 'everybody has to have diversity training' format in big companies doesn't have any positive effects on any historically underrepresented groups," he notes. Command-and-control approaches to bias reduction activate psychological reactance—people rebel against being told how to think, even when the instruction aims toward fairness.
The pattern repeats in technology. Google revealed in 2014 that 2% of its technical workforce was Black. After a decade of diversity initiatives costing hundreds of millions, that figure reached 4.4% in 2023—a doubling on paper, but at a pace that leaves parity decades away. Apple, Microsoft, and Facebook show similar patterns: ambitious announcements followed by incremental gains insufficient to meaningfully alter industry composition.
The problem extends beyond individual bias to an institutional convergence that should alarm anyone concerned with technological accountability. IBM, Google, Microsoft, and academic institutions demonstrate remarkable similarity in their bias mitigation strategies despite supposedly fierce competition. All acknowledge the problem with appropriate solemnity. All invest in ethics research that generates impressive white papers. All deploy incremental improvements that suggest progress without threatening profits. None question the fundamental extraction-based development model that creates bias in the first place.
This convergence isn't coincidental—it reveals that "AI bias" serves primarily as what economists call an "externality," a cost imposed on society that doesn't appear on corporate balance sheets. Companies can afford to acknowledge bias precisely because meaningful solutions remain economically optional. The EU AI Act's fines, capped at €35 million or 7% of global turnover for the gravest violations, register as a manageable business risk for companies valued in the hundreds of billions, particularly whilst enforcement remains untested. When Google spends more on Manhattan office space than any bias penalty yet levied, regulatory frameworks become corporate cost centres rather than behavioural constraints.
The result is what might be termed "ethics theatre"—elaborate performances of concern designed to satisfy stakeholder expectations whilst preserving profitable practices. Corporate diversity, equity, and inclusion programmes show identical patterns: ambitious announcements, substantial budgets, impressive metrics about training completion rates, minimal changes in actual representation or decision-making power.
The community laboratory
Inside the shipping container, something different is happening. Keisha, a 34-year-old teacher from Bedford-Stuyvesant, describes watching her grandmother's resilience appear in AI-generated portraits that capture her dignity instead of reproducing algorithmic assumptions about Black women. "I've never seen myself in these systems before," she says, examining an image that reflects not just her features but her story's complexity.
Dinkins' approach represents nothing less than a fundamental rejection of extraction-based AI development. Instead of scraping the internet for training data, she cultivates ongoing relationships with participants. Instead of treating cultural knowledge as raw material to be processed, she positions communities as partners in technological development. The resulting systems understand that okra symbolises ancestral memory, that DeCarava's photographs capture a dignity mainstream AI cannot see.
"What stories can we tell machines that will help them know us better from the inside of the community out?" Dinkins asks, her voice carrying the weight of someone who understands the stakes. The question challenges every assumption underpinning contemporary AI development. Silicon Valley treats training data as raw material; Dinkins treats it as cultural patrimony requiring respectful stewardship.
The methodology produces measurably different outcomes. Images generated by Dinkins' system consistently prioritise faces of colour and cultural references meaningful to Black and brown communities. Users describe seeing themselves authentically represented for the first time in AI-generated content. The contrast with mainstream systems—which default to whiteness unless explicitly prompted otherwise—demonstrates that alternative approaches can yield fundamentally different results.
Yet the project also illuminates the constraints facing any attempt to scale such approaches. The methodology is relationship-intensive by design: months of trust-building, cultural learning, and iterative refinement for each group of participants, a cadence that conflicts directly with venture capital's requirement for rapid scaling and measurable returns. The shipping container can accommodate dozens of participants; Silicon Valley's systems process millions of users daily.
Boston University's Chude-Sokei positions artists like Dinkins as "democratising AI by putting the tools into the hands of the global majority." But democratisation implies widespread adoption, and the economic forces shaping AI development systematically discourage approaches that prioritise authenticity over efficiency.
The incentive trap
The venture capital industry's fundamental structure creates what might be termed an "incentive trap" around inclusive AI development. Firms like Sequoia Capital, Andreessen Horowitz, and Greylock have shifted billions towards AI investments, with a16z launching a £1.2 billion AI-focused fund in 2024. These investments require returns that justify their opportunity costs compared to other sectors.
Microsoft's OpenAI partnership exemplifies the dynamic. The £10 billion investment includes product integration requirements, cloud service commitments, and technological dependency that effectively capture OpenAI's future development direction. Similar patterns appear in Amazon's Anthropic relationship and Google's competitive positioning. Strategic investors gain early access to breakthrough technologies whilst startup recipients accept constraints on their development approaches.
This creates systematic bias towards approaches that maximise investor returns rather than community benefit. Training systems on internet-scale data offers measurable efficiency gains compared to community consultation approaches. Deploying systems rapidly across large user bases generates network effects that justify high valuations. Maintaining competitive secrecy about training methodologies prevents potential rivals from copying successful approaches.
The result resembles what economists call "path dependence"—early technological choices constraining future options even when alternatives might prove superior. AI development has locked into extraction-based rather than partnership-based approaches, not through conscious discrimination but through the mathematical necessities of venture funding models.
Even regulatory responses demonstrate this dynamic. The Biden administration's AI Executive Order, recently reversed by the Trump administration, emphasised safety and bias mitigation but maintained fundamental assumptions about AI development approaches. European Union regulations focus on risk categorisation and compliance auditing rather than alternative development methodologies. Regulatory frameworks accept that AI systems will demonstrate bias and attempt to minimise harm rather than questioning whether current development approaches are categorically incompatible with equitable outcomes.
The unintended consequences
Yet Dinkins' success could birth its own demons. Imagine venture capitalists discovering that community-partnered AI generates superior returns by accurately serving diverse populations—a significant market. The same financial logic that created systematic bias could attempt to industrialise community consultation, transforming genuine cultural partnership into "diversity as a service" business models. Silicon Valley excels at commercialising authenticity; Burning Man became a corporate retreat, organic food became industrial agriculture with better marketing.
More insidiously, successful minority-specific AI systems could justify technological apartheid rather than technological inclusion. Instead of creating systems that serve everyone fairly, Dinkins' methodology might inadvertently prove that different populations require entirely separate AI infrastructures. History offers sobering precedent: "separate but equal" systems rarely remain equal for long.
The pharmaceutical industry provides an instructive warning. Community health advocates developed "participatory research" methodologies to ensure authentic representation in medical studies. These approaches became standardised protocols that corporations now deploy to access marginalised populations more efficiently whilst providing legal protection against exploitation claims. Academic "community-based participatory research" enables researchers to extract knowledge from communities whilst claiming partnership—extraction disguised as collaboration.
Most insidiously, Dinkins' own project could provide corporate AI developers with cultural knowledge that they then incorporate into mass-market systems without ongoing community partnership—a form of sophisticated cultural appropriation enabled by artistic research. The "slow food" movement's experience suggests how premium alternatives can legitimise mass-market problems by providing consumers with opt-out mechanisms rather than system-wide solutions.
University of Toronto professor Beth Coleman, who specialises in technology and society, acknowledges these tensions whilst maintaining that "there's a good spirit of 'how can we build a better world together?' in Stephanie's work, and at this moment in time that feels pretty revolutionary." Yet revolution implies transformation of existing systems rather than creation of parallel alternatives that potentially legitimise unchanged mainstream practices.
The technological sovereignty question
Perhaps the most profound implication of Dinkins' work concerns not inclusion within existing AI systems but community autonomy from them entirely. Her emphasis on "Afro-now-ism"—taking action towards equity today rather than deferring to future solutions—suggests priorities that extend beyond technological reform towards technological sovereignty.
This positions her project within traditions of community organising and mutual aid rather than corporate engagement. The shipping container becomes not a prototype for industry adoption but a demonstration that marginalised communities can develop technological infrastructure independently of corporate systems. Such an approach accepts technological separation as preferable to compromised inclusion.
Historical precedent suggests this interpretation might prove prescient. The civil rights movement's greatest successes often involved creating parallel institutions—schools, banks, media organisations—rather than reforming exclusionary ones. Technology offers similar possibilities: community-controlled AI systems designed according to community values rather than investor requirements.
Yet technological sovereignty requires resources and expertise that individual communities rarely possess independently. AI development demands massive computational infrastructure, specialised technical knowledge, and ongoing maintenance that corporate systems provide through economies of scale. The tension between autonomy and accessibility remains unresolved.
The scale question
Back in Brooklyn, the morning queue forms again outside Dinkins' shipping container. Each visitor carries stories that Silicon Valley's systems cannot see, feeding them into an AI that finally recognises their humanity. The container has become more than an art installation—it's a laboratory for technological democracy, a proof of concept that authentic representation remains possible if we're willing to abandon the scale obsessions that created the problem.
The choice facing democratic societies isn't technical but political: whether we'll accept AI systems that serve investors' return requirements or demand systems that serve human dignity. Dinkins' yellow container poses this question with quiet insistence, offering a glimpse of what artificial intelligence could become if we chose community partnership over venture capital mathematics.
That choice becomes more urgent daily as AI systems increasingly mediate access to employment, healthcare, justice, and democratic participation itself. The bias that billions built won't correct itself through market forces or corporate goodwill. Correcting it will require choosing authenticity over efficiency, community wisdom over extracted data, technological sovereignty over technological scale.
The queue outside the shipping container suggests that choice isn't hypothetical. It's happening now, one story at a time, one community at a time. The question isn't whether change is possible—Dinkins has proven it is. The question is whether we have the courage to demand it at the scale democracy requires.