
OpenAI Just Turned ChatGPT Into an Enterprise Automation Platform

Ripper · 7 min read

OpenAI dropped Workspace Agents into ChatGPT Business, Enterprise, Edu, and Teachers plans this week. The framing was modest: a research preview, free until May 6. The implication is anything but. Workspace Agents quietly retire Custom GPTs and turn ChatGPT itself into the same kind of long-running, multi-tool, scheduled workflow engine that companies have spent the last decade buying from Slack, Zapier, Salesforce, and Microsoft.

I've been watching every major lab converge on the same product shape for months. With this launch, the picture is finally clear. The next ChatGPT competitor is not another chatbot. It is the org chart.

What a Workspace Agent Actually Is

A Workspace Agent is a shared, persistent agent that lives inside an organization's ChatGPT workspace. Anyone with the right permissions can build one without writing code. The agent runs in the cloud on Codex, can be invoked on demand, scheduled on a cron, or triggered by an event in a connected app. It can call multiple tools across a single task, hold state across runs, and post its results back where the team already lives: Slack threads, Salesforce records, Notion pages, Drive folders.

That is the substantive change. Custom GPTs were chat surfaces. They reset every conversation, lived inside the ChatGPT app, and could not really do work over time. Workspace Agents are workers. They sit in a queue, take jobs, run for minutes or hours, and report back.
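The chat-versus-worker distinction can be sketched as a minimal job loop. This is hypothetical Python — OpenAI has not published a Workspace Agents SDK, so every name here (`AgentJob`, `WorkspaceAgentSketch`, the toy tools) is illustrative, not a real API:

```python
import json
from dataclasses import dataclass, field

@dataclass
class AgentJob:
    """One unit of work: a trigger (cron tick, app event, or manual) plus inputs."""
    trigger: str   # "schedule" | "event" | "on_demand"
    payload: dict

@dataclass
class WorkspaceAgentSketch:
    """Toy model of the worker pattern: persistent state, multi-tool runs."""
    tools: dict                                 # tool name -> callable
    state: dict = field(default_factory=dict)   # survives across runs

    def run(self, job: AgentJob) -> dict:
        # A real agent would let the model plan which tools to call; here we
        # simply chain every registered tool to show state carrying across runs.
        result = job.payload
        for name, tool in self.tools.items():
            result = tool(result, self.state)
        self.state["runs"] = self.state.get("runs", 0) + 1
        return {"trigger": job.trigger, "output": result, "run_count": self.state["runs"]}

# Illustrative tools standing in for connector calls (Gong, Salesforce, Slack).
def summarize_calls(payload, state):
    return {**payload, "summary": f"{len(payload.get('calls', []))} calls summarized"}

def draft_brief(payload, state):
    return {**payload, "brief": f"Deal brief for {payload.get('account')}"}

agent = WorkspaceAgentSketch(tools={"summarize": summarize_calls, "draft": draft_brief})
out = agent.run(AgentJob(trigger="schedule", payload={"account": "Acme", "calls": ["c1", "c2"]}))
print(json.dumps(out, indent=2))
```

The point of the sketch is the shape, not the details: a Custom GPT had no equivalent of `state` or `trigger` — every conversation started cold.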

OpenAI's reference example is a Rippling sales rep who built an agent that pulls account context, summarizes Gong calls, and drafts deal briefs every morning. The number they quote is five to six hours saved per rep per week. I'm skeptical of self-reported productivity numbers, but the workflow itself is exactly the kind of glue work that used to require a Zapier subscription, a halfhearted Salesforce dashboard, and a person.

Connectors Are the Whole Game

Capabilities only matter if the agent can reach the data. OpenAI shipped this with a serious connector list out of the gate.

| Category        | Connectors                               |
| --------------- | ---------------------------------------- |
| Messaging       | Slack, Microsoft Teams                   |
| Files & Docs    | Google Drive, Microsoft 365, Notion, Box |
| CRM & Sales     | Salesforce, HubSpot                      |
| Code & Issues   | GitHub, Linear, Jira                     |
| Calendar & Mail | Google Calendar, Outlook, Gmail          |

What is interesting is what you do not see: a custom connector SDK. Today, agents reach external systems through OpenAI's curated set or through Model Context Protocol servers. That MCP path is the one that matters long-term, because it lets internal tools, vertical SaaS, and the long tail of corporate systems plug in without OpenAI gatekeeping the integration. We covered why MCP became the de facto agent layer in our 97 million installs piece. Workspace Agents make that bet pay off.
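The MCP idea — a server that exposes typed tools any agent runtime can discover and call by name — can be illustrated without the real SDK. The decorator-registry sketch below is plain Python and deliberately not the actual `mcp` package API; `ToyMCPServer` and `lookup_account` are invented for illustration:

```python
import inspect

class ToyMCPServer:
    """Minimal stand-in for an MCP server: tools register themselves with a
    name and parameter list, and a client can discover or invoke them by name."""

    def __init__(self, name: str):
        self.name = name
        self._tools = {}

    def tool(self, fn):
        # Derive a crude input schema from the function signature at registration.
        params = list(inspect.signature(fn).parameters)
        self._tools[fn.__name__] = {"fn": fn, "params": params}
        return fn

    def list_tools(self):
        # What an agent runtime would call during discovery.
        return {name: meta["params"] for name, meta in self._tools.items()}

    def call(self, name: str, **kwargs):
        return self._tools[name]["fn"](**kwargs)

server = ToyMCPServer("internal-crm")

@server.tool
def lookup_account(account_id: str) -> dict:
    # Stand-in for a real internal CRM query.
    return {"id": account_id, "owner": "sales-team", "stage": "negotiation"}

print(server.list_tools())                                  # discovery
print(server.call("lookup_account", account_id="acct_42"))  # invocation
```

The discovery step is the strategic part: write the server once, and any MCP-speaking runtime — ChatGPT, Claude, or an in-house agent — can find and use the tool without a bespoke integration.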

The Pricing Tells You Where This Is Going

Workspace Agents are free through May 6, 2026. After that, they move to credit-based pricing on top of the existing per-seat ChatGPT Business and Enterprise contracts. OpenAI has not published the per-credit rate yet, but the model itself tells you the plan: charge for the work, not the seat. That is the same direction Anthropic took with Claude Code consumption pricing and Google has been edging toward with Vertex Agent Builder.

The seat-and-credit hybrid is a quiet but important shift. Per-seat SaaS pricing assumes every employee uses the tool roughly the same amount. Agents do not work that way. A single sales agent that runs every morning across 200 accounts will burn through compute that has nothing to do with how many humans are licensed. Credits are how OpenAI keeps a 5,000-seat enterprise account from accidentally costing them their margin.

For buyers, it means the total cost of agents is going to need its own line item. The model your agent calls, the connector calls it makes, and the Codex runtime that powers the loop are all priced separately. Plan accordingly. Our cost calculator is set up to help finance teams model this.
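A back-of-envelope version of that line item looks like the function below. Every rate is a placeholder — OpenAI has not published per-credit pricing — so treat this as a template for a finance model, not real numbers:

```python
def monthly_agent_cost(
    runs_per_day: int,
    tokens_per_run: int,
    token_price_per_mtok: float,      # blended input/output $ per million tokens
    connector_calls_per_run: int,
    credits_per_connector_call: float,
    credit_price: float,              # $ per credit (unpublished; assumed)
    runtime_credits_per_run: float,
    days: int = 30,
) -> float:
    """Sum the three separately priced components: model, connectors, runtime."""
    runs = runs_per_day * days
    model_cost = runs * tokens_per_run / 1_000_000 * token_price_per_mtok
    connector_cost = runs * connector_calls_per_run * credits_per_connector_call * credit_price
    runtime_cost = runs * runtime_credits_per_run * credit_price
    return round(model_cost + connector_cost + runtime_cost, 2)

# A daily deal-brief agent over 200 accounts (one run per account), with
# invented placeholder rates:
cost = monthly_agent_cost(
    runs_per_day=200, tokens_per_run=20_000, token_price_per_mtok=10.0,
    connector_calls_per_run=5, credits_per_connector_call=0.2,
    credit_price=0.05, runtime_credits_per_run=1.0,
)
print(cost)  # -> 1800.0 under these assumed rates
```

Note what the seat count never appears in: this is the decoupling the credit model exists to capture.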

Where This Lines Up Against Everyone Else

The agent platform race is now real, and the players have all staked out their positions. Here is how the four serious entrants stack up as of this week.

| Platform                    | Runtime          | Strength                                | Weakness                            |
| --------------------------- | ---------------- | --------------------------------------- | ----------------------------------- |
| OpenAI Workspace Agents     | Codex            | ChatGPT distribution, broad connectors  | No custom connector SDK yet         |
| Microsoft Copilot Studio    | Azure AI Foundry | Tight M365 + Power Platform integration | Heavy admin lift, Microsoft-centric |
| Google Vertex Agent Builder | Gemini 3.1       | Cheapest tokens, longest context        | Workspace ecosystem still maturing  |
| Anthropic Claude for Work   | Claude Opus 4.7  | Best-in-class coding and reasoning      | No native scheduled-agent surface   |

OpenAI's real advantage is distribution. ChatGPT has hundreds of millions of weekly active users, and a meaningful share of large enterprises already pay for Business or Enterprise seats. Microsoft has Office. Google has Workspace. Anthropic has the best model. OpenAI has the surface every employee already opens once a day. Workspace Agents turn that habit into a beachhead.

The honest counterpoint: Microsoft has spent two years selling Copilot Studio into IT departments, and Copilot has plumbing OpenAI does not (Power Automate, Dataverse, group policy controls). For Fortune 500 IT, the path of least resistance is still Microsoft. For everyone smaller, OpenAI just made the easier choice.

What It Means for Builders

If you build software that touches knowledge work, three things changed this week.

First, the Custom GPT moat evaporated. Anyone selling a Custom GPT directory, a GPT-powered SaaS wrapper, or a thin chat layer on top of an internal database needs to ask whether a Workspace Agent on Codex does the same job natively. In most cases it does, with better tools and better permissions.

Second, MCP just got more valuable. OpenAI is routing third-party tool access through MCP servers, which means writing one good MCP for your product gets you discovery across ChatGPT, Claude, Gemini, and any number of agent runtimes. If you have not shipped one, see our agent-readiness guide.

Third, the structure of B2B SaaS is starting to invert. Historically, every category (CRM, project management, HR, support) shipped its own AI features. With Workspace Agents, the agent is the surface and the apps are the tools. The interesting question is which categories still get to own the workflow and which become connector destinations.

Our Take

The labs have been telegraphing this for a year. GPT-5.5, which we covered in this analysis, scored 78.7 percent on OSWorld-Verified, the benchmark for autonomous OS-level agents. Codex got faster and cheaper. Claude Code shipped. MCP crossed 97 million installs. Workspace Agents are the productization step that turns all of those components into a product a finance director can buy.

For Evan and the rest of us building on top of these platforms, the takeaway is simple. Stop thinking about prompts and start thinking about jobs. The unit of value is no longer a clever instruction or a fine-tuned chatbot. It is a piece of work that finishes while you are asleep, costs three credits, and lands in your Slack before standup. OpenAI just made that the default mental model.

The free preview window closes on May 6. If you run a team that pays for ChatGPT Business or Enterprise, the cheapest learning you will do this year is the next ten days. Build one agent for one workflow you actually run. See what breaks. The next twelve months are going to be defined by which teams figured this out early and which ones are still arguing about Custom GPTs.