Most enterprise leaders still think of ChatGPT as a chatbot. OpenAI is building something far more consequential — and the implications for vendor strategy deserve serious attention.

On March 31, 2026, OpenAI announced a $122 billion funding round at an $852 billion valuation. Buried inside the announcement was a phrase that should make every IT leader pause: “We are building a unified AI superapp.” That is not marketing language. It is a platform strategy.

What OpenAI Actually Announced

The superapp concept unifies ChatGPT, Codex, browsing, shopping, and agentic capabilities into one integrated surface. OpenAI described it explicitly as a “distribution and deployment strategy” — not just a product improvement.

The numbers behind the announcement are difficult to ignore. ChatGPT now has more than 900 million weekly active users. Over 50 million are paying subscribers. Enterprise revenue now exceeds 40 per cent of total revenue and is on track to reach parity with consumer revenue by year's end. APIs process more than 15 billion tokens per minute.

In the same week, OpenAI launched its Agentic Commerce Protocol (ACP), integrating shopping directly into ChatGPT. Target, Sephora, Nordstrom, Lowe’s, Best Buy, The Home Depot, and Wayfair are already onboard. Walmart launched an in-ChatGPT shopping experience with account linking, loyalty integration, and payments. Shopify’s entire merchant catalogue is now surfaced through ChatGPT automatically.

This is not a chatbot adding a feature. It is a platform consolidating the entire purchase decision workflow.

Why Platform Lock-In Matters for Enterprise

When an AI platform controls product discovery, comparison, and increasingly checkout — and when that same platform is embedded in enterprise workflows through Codex and API integrations — the switching costs compound fast.

The pattern is familiar. It mirrors what happened with Google Search, Apple’s App Store, and Amazon’s marketplace. The platform that controls the surface where decisions happen captures disproportionate value. The difference is that AI superapps do this across both consumer and enterprise contexts simultaneously.

For organisations already building on OpenAI’s API, the lock-in trajectory is straightforward. Custom agents, fine-tuned models, integrated workflows, and proprietary data pipelines all create dependency. Adding commerce, browsing, and agentic capabilities to the same platform accelerates that dependency into something structural.

The Risk for Mid-Market Organisations

Large enterprises have dedicated vendor management teams that evaluate concentration risk. Mid-market organisations often adopt tools based on capability and convenience, without mapping the long-term platform dependency.

Three areas deserve immediate attention.

Data gravity. Every interaction, preference, and workflow pattern fed into ChatGPT increases the platform’s value — and your switching cost. If employees are using ChatGPT for procurement research, technical decisions, and vendor evaluation, the data advantage compounds quickly.

Agentic workflow dependency. If AI agents are executing tasks autonomously — querying systems, making purchases, generating reports — those workflows become deeply embedded in the platform. Migrating an agent-based workflow to a competing platform is substantially harder than switching a basic API integration.

Vendor negotiation leverage. As enterprise revenue approaches parity with consumer revenue for OpenAI, pricing and terms will evolve. Organisations that have already embedded OpenAI into critical workflows will have limited negotiating leverage when those conversations arrive.

What This Means for Australian Organisations

Australian mid-market organisations face an additional consideration. OpenAI’s superapp strategy is US-centric in its initial design — shopping integrations, merchant partnerships, and payment infrastructure all start with US retailers and financial systems.

That creates both a timing window and a potential gap. Australian organisations adopting OpenAI’s platform now may find that localised features arrive later, while dependency on the platform grows immediately. Data residency, regulatory alignment with Australian privacy legislation, and local compliance requirements should be evaluated before embedding agentic AI workflows into core operations.

Three Steps to Take Now

Map your OpenAI surface area. Document every product, API integration, and workflow that depends on OpenAI. Include both sanctioned deployments and shadow AI usage. Most organisations underestimate the breadth.
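The mapping exercise can start as a simple structured register. A minimal sketch in Python, where the `AIDependency` record and every workflow name are hypothetical examples of the kind of entry worth capturing:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AIDependency:
    workflow: str      # business process that relies on the tool
    provider: str      # vendor behind it
    sanctioned: bool   # False marks shadow AI usage

# Hypothetical inventory entries — illustrative only.
inventory = [
    AIDependency("customer support triage", "OpenAI", True),
    AIDependency("code review assistant", "OpenAI", True),
    AIDependency("procurement research", "OpenAI", False),   # shadow usage
    AIDependency("document summarisation", "Anthropic", True),
]

# Group by provider and surface unsanctioned workflows.
by_provider = Counter(dep.provider for dep in inventory)
shadow = [dep.workflow for dep in inventory if not dep.sanctioned]
print(dict(by_provider))
print(shadow)
```

Even a register this small makes the breadth visible: the provider tally shows where usage clusters, and the shadow list shows what governance has not yet seen.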

Assess switching costs. For each dependency, evaluate what it would take to migrate to an alternative — Anthropic, Google, Microsoft Copilot, or an open-source model. If the answer is “we cannot easily switch,” that is concentration risk.
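One way to keep switching costs bounded is to put an interface between application code and any one vendor's SDK. A minimal sketch, assuming stub adapters (in practice each adapter would wrap a real vendor client; the class and method names here are illustrative, not any vendor's API):

```python
from typing import Protocol

class ChatProvider(Protocol):
    """The only surface application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

# Stub adapters standing in for vendor SDK wrappers.
class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class AnthropicAdapter:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

def summarise(report: str, provider: ChatProvider) -> str:
    # Application logic depends only on the interface, so swapping
    # vendors is a change at the call site, not a rewrite.
    return provider.complete(f"Summarise: {report}")

print(summarise("Q3 results", OpenAIAdapter()))
print(summarise("Q3 results", AnthropicAdapter()))
```

The test of the abstraction is exactly the question above: if moving a workflow from one adapter to another requires touching business logic, the dependency has already become structural.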

Establish multi-vendor guardrails. Set explicit thresholds for how much of your AI infrastructure can depend on a single provider. This is the same discipline organisations apply to cloud provider concentration and should extend to AI platforms.
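The guardrail itself can be automated once the thresholds are agreed. A minimal sketch — the 60 per cent cap and the spend figures are hypothetical placeholders, not a recommendation:

```python
def check_concentration(spend_by_provider: dict[str, float],
                        threshold: float = 0.60) -> list[str]:
    """Flag providers whose share of total AI spend exceeds the agreed cap."""
    total = sum(spend_by_provider.values())
    return [provider for provider, spend in spend_by_provider.items()
            if spend / total > threshold]

# Hypothetical annual spend, in dollars.
spend = {"OpenAI": 380_000, "Anthropic": 90_000, "Google": 30_000}
print(check_concentration(spend))
```

Here OpenAI holds 76 per cent of spend and breaches the 60 per cent cap, so it is flagged. Wiring a check like this into quarterly vendor reviews turns concentration risk from an anecdote into a tracked metric, the same way cloud spend caps are enforced.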

The Window for Strategic Positioning Is Narrowing

The organisations that will navigate this transition effectively are not the ones that avoid AI platforms. They are the ones that adopt them with clear-eyed understanding of the lock-in dynamics and deliberate vendor diversification strategies.

Our team works with mid-market Australian organisations to evaluate AI platform dependency, design multi-vendor architectures, and build governance frameworks that prevent vendor concentration from becoming an operational risk.

If your organisation is building on AI platforms without a vendor diversification strategy, this is a conversation worth having before the next contract renewal.