Nearly Right

Salesforce's ChatGPT deal is really about stopping customers from wiring AI to their own data

Behind the productivity pitch lies a scramble to close a governance gap that enterprise software vendors didn't see coming

Kris Billmaier, the Salesforce executive responsible for Agentforce Sales, let slip something remarkable last week. Asked about the company's new ChatGPT integration, he didn't talk about productivity gains or eliminating the "toggle tax" between applications. He talked about fear.

"The thing that I worry about, and what I wanted to get ahead of, was homegrown MCP servers from customers just spitting out data to OpenAI around the trust boundary."

That sentence deserves unpacking. MCP—the Model Context Protocol—is an open standard that lets anyone connect AI models to external data sources. "Trust boundary" is security jargon for the perimeter within which a company controls its data. Billmaier was admitting, in plain terms, that Salesforce customers had started routing their CRM data directly to ChatGPT, bypassing Salesforce entirely. And Salesforce had to move fast to stop it.
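
To make the mechanics concrete, here is a minimal sketch of the kind of connector Billmaier is describing, assuming the official MCP Python SDK's FastMCP interface and the simple_salesforce client; the server name, credentials, tool name and query are illustrative rather than drawn from any real deployment.

```python
# Hypothetical sketch of a "homegrown MCP server" that exposes CRM data to any
# connected AI client. Assumes the official MCP Python SDK (FastMCP) and the
# simple_salesforce library; credentials, object and field names are illustrative.
from mcp.server.fastmcp import FastMCP
from simple_salesforce import Salesforce

mcp = FastMCP("crm-bridge")

# A single service account with org-wide visibility: one login, and whatever
# that account can see is available to whichever model calls the tool.
sf = Salesforce(
    username="integration@example.com",
    password="********",
    security_token="********",
)

@mcp.tool()
def open_opportunities(min_amount: float = 0) -> list[dict]:
    """Return open pipeline records to any client that asks."""
    result = sf.query(
        "SELECT Name, Amount, StageName, CloseDate "
        "FROM Opportunity WHERE IsClosed = false "
        f"AND Amount >= {min_amount}"
    )
    # No per-user permission check, no redaction, no audit trail: the records
    # leave the trust boundary as soon as the connected model requests them.
    return result["records"]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; any MCP-capable client can attach
```

A few dozen lines like these are enough to put pipeline data one tool call away from an external model, which is why the pattern spread faster than governance teams could track it.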

How fast? According to analyst Vernon Keenan, who covers the Salesforce ecosystem, the company built and shipped this integration in approximately four weeks. That's not a product launch. That's an emergency response.

The shadow AI explosion

The behaviour Billmaier described isn't isolated. It's an epidemic.

LayerX Security's 2025 enterprise report found that 77 per cent of employees paste data into ChatGPT. Nearly half use personal accounts that bypass corporate controls completely. Cisco reported that 46 per cent of organisations experienced internal data leaks through generative AI tools last year. A CybSafe survey found 43 per cent of workers admitting they share sensitive information with AI without telling their employers.

These aren't rogue actors or technical mavericks. They're ordinary employees discovering that AI works better when it knows things—and the fastest way to make it know things is to paste in whatever's relevant. Customer lists. Pipeline details. Contract terms. Competitive intelligence. The data flows out because the productivity flows back.

The technical infrastructure enabling more sophisticated bypass has matured with startling speed. The Model Context Protocol, launched by Anthropic in November 2024, grew from 100,000 downloads to over 8 million in six months. More than 5,800 MCP servers now exist. OpenAI, Google, Microsoft, and Amazon have all adopted the standard. In December, Anthropic donated MCP to the Linux Foundation. What started as an experiment became industry plumbing.

For Salesforce, this creates an uncomfortable reality. Every customer who builds their own MCP connector to route CRM data to ChatGPT is a customer operating outside Salesforce's view. Outside its governance. Outside its billing.

The telecom precedent

Enterprise software executives might find cold comfort in telecommunications history. The industry faced a structurally identical challenge thirty years ago—and the incumbents lost.

The Telecommunications Act of 1996 forced the Baby Bells to let competitors access their network infrastructure at regulated rates. Regulators called it unbundling. The reasoning was simple: competition benefits consumers. But for the Baby Bells, AT&T's offspring, unbundling meant watching upstarts build businesses atop copper wire the incumbents had spent decades laying.

What happened next reshaped the industry. New entrants didn't just compete; they redefined what mattered. The physical infrastructure that incumbents thought was their moat became a commodity. Value migrated to the services running over the wire.

The parallel to today is almost too neat. Salesforce and its peers built empires by aggregating customer relationship data—the digital equivalent of that copper running to every home. Now customers are discovering they can unbundle the data layer from the vendor's intelligence layer. MCP provides the connectors. Frontier AI models provide the reasoning. The customer supplies the data. What remains for the incumbent to sell?

The real stakes

Billmaier's security framing obscures the commercial dimension. Every query routed through a DIY integration is usage Salesforce cannot meter.

The company has struggled publicly with AI pricing, cycling through models that customers rejected. The initial $2-per-conversation approach drew complaints about unpredictability. Consumption-based "Flex Credits" added complexity. Seat-based licensing, ranging from $125 to $550 per user monthly, created sticker shock. A 6 per cent price increase took effect in August.

The confusion reflects a deeper problem. Gartner projects agentic AI could capture 30 per cent of enterprise software revenue by 2035, worth more than $450 billion. But AI that automates work inherently reduces headcount, undermining the seat-based licensing that funds enterprise software. Marc Benioff has promised "3x, 4x the ability to multiply monetization." Yet at Dreamforce 2024, Salesforce's own statistics showed just 122,000 AI prompts running weekly against 82 billion automated flows. Spread across Salesforce's customer base, that works out to less than one prompt per customer per week. The gap between ambition and adoption created space for shadow AI to fill.

Jan Cook, Gartner's senior software licensing expert, noted that customers "still adopt a cautious approach to investing in GenAI because of the unpredictability of pricing models." That caution hasn't stopped them from using free alternatives. When official tools seem expensive and confusing, people route around them. They always have.

The aggregation threat

Technology analyst Ben Thompson's Aggregation Theory explains how internet-era giants gained power: not by controlling supply, but by controlling demand. Google doesn't create content; it aggregates the world's demand for information. The platform's grip on users commoditises suppliers into interchangeable inputs.

Apply this framework to enterprise software and something alarming emerges. ChatGPT could become an aggregation layer sitting above applications like Salesforce. If workers interact with their CRM through ChatGPT's interface rather than Salesforce's purpose-built applications, who owns the relationship? Where does the value reside?

Salesforce's Nick Johnston, senior vice president of strategic partnerships, tried to draw a distinction. ChatGPT is a "single player experience," he argued; Slack (which Salesforce owns) is "multiplayer" for team collaboration. But this distinction matters less if single-player becomes most workers' default mode for accessing business data.

The ChatGPT integration represents Salesforce's answer: concede the interface, control the intelligence. When users work through OpenAI's conversational layer, Agentforce still handles the lead scoring, the pipeline recommendations, the strategic planning. ChatGPT provides the conversation. Salesforce provides the business logic that makes it useful. It is a partnership structured to prevent disintermediation.

Billmaier's insistence that Salesforce stay in control of its own destiny reveals the stakes. This isn't feature development. It's survival strategy.

The security case, honestly assessed

Salesforce's defenders have legitimate arguments. GDPR requires data subject rights that homegrown integrations cannot easily satisfy. HIPAA demands audit trails. SOC 2 presumes controlled data flows. The Trust Layer protecting the new integration enforces existing permissions and keeps proprietary data within Salesforce's security perimeter.
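
By way of contrast with the ungoverned sketch above, here is roughly what "enforces existing permissions" has to mean in code. This is not Salesforce's Trust Layer; the request context, field-masking policy and audit hook are assumptions about what any governed integration layer does before data crosses the boundary.

```python
# Illustrative sketch of permission enforcement in a governed integration layer.
# Not Salesforce's Trust Layer: the context model, masking policy and audit hook
# are stand-ins for whatever the real platform enforces.
from dataclasses import dataclass, field


@dataclass
class RequestContext:
    user_id: str
    allowed_objects: set[str]                              # objects this user may read
    masked_fields: set[str] = field(default_factory=set)   # fields redacted for this user


def audit_log(user_id: str, object_name: str, granted: bool) -> None:
    # Stand-in for a real audit trail, the piece HIPAA and SOC 2 presume exists.
    print(f"access user={user_id} object={object_name} granted={granted}")


def governed_fetch(ctx: RequestContext, object_name: str,
                   records: list[dict]) -> list[dict]:
    """Apply the requesting user's own permissions before anything leaves the boundary."""
    if object_name not in ctx.allowed_objects:
        audit_log(ctx.user_id, object_name, granted=False)
        raise PermissionError(f"{ctx.user_id} may not read {object_name}")
    audit_log(ctx.user_id, object_name, granted=True)
    return [
        {k: ("***" if k in ctx.masked_fields else v) for k, v in record.items()}
        for record in records
    ]
```

The point of the sketch is the ordering: the permission check and the audit record happen before any record is serialised for the model, which is precisely the step homegrown connectors tend to skip.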

The risks of shadow AI are real. IBM's 2025 Cost of a Data Breach Report found AI-associated incidents cost organisations over $650,000 each. Security vulnerabilities in MCP itself have emerged; one compromised package affected 437,000 developer environments. Governance matters.

But here's the uncomfortable truth: compliance concerns haven't changed behaviour. Workers optimise for productivity, not governance frameworks. The question isn't whether sanctioned integrations offer better security—they do—but whether that advantage translates into actual use. The statistics suggest it hasn't. Seventy-seven per cent of employees pasting data into ChatGPT aren't weighing GDPR implications. They're trying to finish their work.

What this means

Salesforce executed a competent defensive play. Building a ChatGPT integration in four weeks, timed precisely as DIY connectors proliferated, shows operational speed that distinguishes survivors from casualties in technology transitions. The company preserved its position in the data flow while ceding the interface to a partner it can negotiate with—better than customers it cannot control.

But the underlying dynamic hasn't changed. Customers discovered their business data is more portable than vendors wanted them to believe. MCP provides standardised pipes to move it. Third-party governance tools are emerging that could make DIY integrations enterprise-acceptable. The Trust Layer offers genuine value, but it's a feature, not a moat.

The pattern extends beyond Salesforce. Every enterprise software vendor faces the same question: when AI interfaces threaten to aggregate demand above existing applications, where does defensible value live? Salesforce is betting on business logic—the proprietary scoring and recommendations that make raw AI output actionable. Whether that bet pays off depends on whether intelligence proves harder to replicate than the data it interprets.

For technology leaders weighing AI strategy, the lesson is clear. Your vendors are racing to keep you inside their boundaries. Sometimes that race produces genuinely useful integrations. Sometimes it produces lock-in dressed as convenience. The difference lies in understanding what they actually fear—which is rarely what the press release says they're solving.

#artificial intelligence