The release, in one paragraph
On May 6, 2026, OpenAI's Workspace Agents for ChatGPT moved from free research preview to credit-based pricing, formally launching the offering for ChatGPT Business, Enterprise, Edu, and Teachers plans. The agents are Codex-powered, run in the cloud (so they continue working when the user is offline), and connect natively to Slack, Salesforce, Microsoft 365, Google Drive, Notion, Atlassian Rovo, and the rest of the standard enterprise surface. They are shareable across an organization, governable through admin tooling that includes adoption analytics and per-agent visibility, and explicitly framed by OpenAI as the evolution of GPTs — not a parallel offering, but the replacement.
The headline is "agents went paid." The substance is that every team that built on custom GPTs over the last two years now has a clock running. The successor product is here, the connectors are wired, the governance is sketched, and the pricing implies that whatever workspace agents your team builds in May are going to be the surface end users actually use by Q3.
Why "successor to GPTs" is the framing that matters
For two years, custom GPTs were the easiest way an enterprise could give its team a "here is our internal assistant for X" surface inside ChatGPT. They were also profoundly limited: no real long-running work, no first-class connectors to enterprise systems, no shareable governance, no audit, no admin view. They worked for hobby use and for the lightest internal helpdesk patterns; they did not work for production workflows.
Workspace Agents collapse that gap. Three structural changes from GPTs are worth naming:
They run in the cloud, not in a chat tab. A workspace agent that's preparing a report can keep running while the user closes the laptop, comes back the next morning, and finds the report drafted with the supporting research already done. That single change converts an entire class of work from "sit and wait" to "queue and review," which is what most teams actually wanted from agents in the first place.
They have native connectors with permissions. The agent doesn't browse Slack like a human — it has scoped access to the Slack workspace through OpenAI's connector, with permissions inherited from the admin layer. Same for Salesforce, Microsoft 365, Notion, Atlassian. The IAM-aware data plane that custom GPTs never had is the load-bearing change here, because it's what lets a workspace agent be deployed in a real enterprise without a per-team security review.
They are an admin object, not a chat-tab object. Admins get a workspace-wide view of which agents exist, which teams are using them, what their tool interactions look like, what their connector activity is. That's the difference between "someone in marketing built a GPT and we don't know what it does" and "the marketing agent is owned by the marketing platform team, has version history, and has its tool access reviewed quarterly." Workspace Agents are positioned to be governed; GPTs were positioned to be shared.
What credit pricing actually changes
Credit-based pricing replaces the free preview that had run since the agents shipped earlier this year. Two things change immediately for buyers:
The CFO is now in the room. A free preview is a sandbox; credit pricing is a line item. Whatever sprawl of agents teams built during the free period now has to defend a per-credit cost, and the budgeting conversation will land in some teams as "why are we paying for fourteen agents when three of them get used?" The teams that prepared a per-agent ROI artifact during the free phase will be fine; the ones that didn't will spend Q3 explaining variance.
Build-vs-buy gets sharper. A workspace agent that runs on OpenAI's pricing model — credits per agent run, with overhead per connector call — will be cost-comparable to running the same workflow on a custom-built agent that hits the OpenAI API directly. For some workflows the workspace-agent surface is dramatically cheaper because it bundles the runtime, the connectors, the governance, and the persistence; for others a custom build is cheaper because the team can pick a smaller model, a different vendor, or a self-hosted route. The procurement question "which agents go in Workspace, which go in our own stack" is now a real procurement question instead of a hypothetical one.
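The build-vs-buy arithmetic above is worth making concrete. A minimal sketch follows; every rate in it (credits per run, per-connector-call overhead, the dollar-per-credit conversion, token pricing, infra cost) is an illustrative assumption, not published pricing:

```python
# Hypothetical cost comparison: credit-priced workspace agent vs. a custom
# API-backed agent. All rates are illustrative assumptions, not real pricing.

def workspace_cost(runs: int, credits_per_run: float, connector_calls_per_run: int,
                   credits_per_connector_call: float, usd_per_credit: float) -> float:
    """Monthly cost of a workflow on a credit-priced workspace agent."""
    credits = runs * (credits_per_run + connector_calls_per_run * credits_per_connector_call)
    return credits * usd_per_credit

def custom_cost(runs: int, tokens_per_run: int, usd_per_million_tokens: float,
                monthly_infra_usd: float) -> float:
    """Monthly cost of the same workflow self-built against a model API,
    including the fixed infrastructure you now own (runtime, connectors, audit)."""
    api_usd = runs * tokens_per_run / 1_000_000 * usd_per_million_tokens
    return api_usd + monthly_infra_usd

runs = 2_000  # monthly agent runs for one workflow
ws = workspace_cost(runs, credits_per_run=5, connector_calls_per_run=8,
                    credits_per_connector_call=0.5, usd_per_credit=0.01)
cu = custom_cost(runs, tokens_per_run=60_000, usd_per_million_tokens=3.0,
                 monthly_infra_usd=400)
print(f"workspace: ${ws:.2f}/mo, custom: ${cu:.2f}/mo")
```

Under these particular assumptions the bundled surface wins; change the model choice, run volume, or infra amortization and the custom stack wins instead, which is exactly why the split has to be computed per workflow rather than decided once.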
Vendor lock-in is non-trivial. A workspace agent that has accumulated three months of memory, that has been wired through five connectors, that is shared across two teams, is not portable to another vendor in any meaningful sense. The cost of that lock-in is acceptable for most workflows and unacceptable for some. Decide which is which now, before the migration cost compounds.
What this means alongside Anthropic and Microsoft 365
The most telling part of the May 6 timing is that Workspace Agents and Anthropic's Microsoft 365 + Cowork updates landed in the same news cycle. Both vendors are racing to be the in-org agent runtime. Both are wiring connectors to the same enterprise surface (Slack, Microsoft 365, Salesforce, Notion). Both are shipping admin governance views. Both are in active conversations with the same procurement teams.
For a buyer, that converges to one operational fact: multi-vendor is the realistic posture, not single-vendor. A team that standardizes on Workspace Agents for the workflows where Codex's strengths align (writing and running code, operating connected SaaS apps, drafting reports across Drive + Sheets + Slack) and runs a Claude Managed Agent or Cowork-resident agent for the workflows where Claude's strengths align (long-context reasoning, code review at depth, regulated drafting) is going to outperform a team that picks one and tries to make it cover everything. The routing layer for agents — not just for models — is the next infrastructure component most teams haven't built.
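The agent-routing layer described above can start much smaller than it sounds: a declarative table mapping workflow categories to runtimes, with one place to change the mapping when cost or eval data shifts. A minimal sketch, where the runtime names and workflow categories are placeholders of our own invention:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Runtime(Enum):
    WORKSPACE_AGENT = auto()   # OpenAI Workspace Agents (Codex-backed, cloud runtime)
    CLAUDE_AGENT = auto()      # Claude Managed Agent / Cowork-resident agent
    CUSTOM_STACK = auto()      # self-built agent hitting a model API directly

@dataclass(frozen=True)
class Route:
    runtime: Runtime
    rationale: str  # keep the "why" next to the routing decision

# Declarative routing table; edit here when eval or cost data changes.
ROUTES: dict[str, Route] = {
    "saas_ops":           Route(Runtime.WORKSPACE_AGENT, "native connectors, cloud runtime"),
    "report_drafting":    Route(Runtime.WORKSPACE_AGENT, "Drive + Sheets + Slack bundle"),
    "deep_code_review":   Route(Runtime.CLAUDE_AGENT,    "long-context reasoning"),
    "regulated_drafting": Route(Runtime.CLAUDE_AGENT,    "review depth, audit fit"),
}

def route(workflow: str) -> Route:
    """Route a workflow to a runtime. Unknown workflows go to the custom
    stack for explicit evaluation rather than defaulting to a vendor."""
    return ROUTES.get(workflow, Route(Runtime.CUSTOM_STACK, "unrouted: evaluate first"))

print(route("saas_ops").runtime.name)     # WORKSPACE_AGENT
print(route("ml_research").runtime.name)  # CUSTOM_STACK
```

The design choice that matters is the `rationale` field: when the table is the only artifact, the reasoning behind each route survives team turnover and makes the quarterly re-routing review a diff instead of an archaeology project.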
Where we'd push back on the launch narrative
"Successor to GPTs" is true; "drop-in replacement" is not. A custom GPT that's been quietly working for fourteen months has accumulated a specific set of instructions, file uploads, and behavioral expectations. The workspace-agent successor that does the same job won't behave identically; the connectors are different, the runtime model is different, the long-running context is different. Plan the migration as an explicit project with eval coverage, not as a one-click upgrade.
Connector breadth is not connector depth. "Connects to Slack" is a marketing statement; "connects to Slack with the precise scoping your security team requires across both bot and user tokens with audit logging that satisfies your retention policy" is the engineering reality. The first time a workspace agent fails an enterprise security review will not be over the model behavior — it'll be over the connector posture. Vet each connector you intend to use the same way you'd vet any third-party SaaS integration.
What we'd build differently this week
- Inventory every custom GPT in the workspace and grade each one against the question "does it earn a Workspace Agent migration?" Some won't (the GPT was never used). Some will (the GPT does real work and the agent successor is strictly better). Some will need a redesign (the workflow that worked as a chat-bound GPT becomes something different as a long-running cloud agent). Triage now, migrate deliberately.
- Pick one workflow and build the same agent twice: once in Workspace Agents and once on a custom OpenAI/Anthropic stack. Compare. The data on cost, latency, governance fit, and developer effort is the only reliable basis for the workspace-vs-custom split your team will be making for the next twelve months.
- Stand up the per-agent ROI artifact before the credit bill arrives. Per-agent active users, per-agent successful task count, per-agent time saved (estimated), per-agent credit cost. Without this, the Q3 budgeting conversation gets harder than it needs to be.
- Build the governance layer once, even if you only have two agents. The admin view, the connector inventory, the scoped permissions, the audit retention policy — get them right at small N. Retrofitting governance onto fifty agents is a different project than scaling it from two to fifty.
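The per-agent ROI artifact from the list above can be one record per agent plus one derived metric. A minimal sketch, with the field names and the cost-per-successful-task metric as our assumptions rather than anything OpenAI reports:

```python
from dataclasses import dataclass

@dataclass
class AgentROI:
    name: str
    active_users: int
    successful_tasks: int      # tasks completed and accepted by a reviewer
    est_minutes_saved: float   # estimated, as noted in the list above
    credit_cost_usd: float     # the new line item

    def cost_per_task(self) -> float:
        """USD per successful task; infinity flags an agent nobody uses."""
        if self.successful_tasks == 0:
            return float("inf")
        return self.credit_cost_usd / self.successful_tasks

# Illustrative records, not real usage data.
agents = [
    AgentROI("marketing-drafts", 23, 410, 2050.0, 190.0),
    AgentROI("sales-crm-sync",    4,  35,  140.0, 220.0),
    AgentROI("legacy-faq-bot",    0,   0,    0.0,  45.0),
]

# Sort worst-first: this is the list the Q3 budgeting conversation starts from.
for a in sorted(agents, key=lambda a: a.cost_per_task(), reverse=True):
    print(f"{a.name:18} ${a.cost_per_task():10.2f}/task  users={a.active_users}")
```

Even this toy version answers the "fourteen agents, three get used" question directly: the zero-task agent surfaces at the top of the report with an infinite cost per task before anyone has to argue about it.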
Sonnet Code's take
Workspace Agents going paid is the moment custom GPTs officially became legacy and the moment enterprise AI procurement stopped being a model conversation and started being an agent conversation. The teams that win the next two quarters are the ones who treat Workspace as one runtime in a multi-runtime portfolio, who build the governance layer before the agent count, and who pick which workflows belong on Codex and which belong on Claude based on data instead of vendor allegiance. We staff that work directly: AI development at Sonnet Code is the engineering that builds the agent-routing layer, wires the connector posture, ports legacy GPTs to workspace-resident successors, and stands up the per-agent ROI dashboard your CFO is about to ask for. We pair that with AI training engagements where senior practitioners author the rubrics, golden examples, and red-team prompts that calibrate workspace agents against the same standards a human reviewer at your firm would apply. If your team woke up to credit pricing and is now staring at a list of fourteen GPTs wondering which to migrate, the next conversation isn't about the migration. It's about the agent strategy that lets you stop migrating every time a vendor renames the product.

