The release, in one paragraph
On May 12, 2026, Anthropic moved Claude Platform on AWS to general availability. The product lets enterprises consume Anthropic's native Claude Platform — Messages API, Files, Message Batches, Claude Managed Agents, Agent Skills, code execution, web search, prompt caching, citations, and MCP connectors — through their existing AWS account, authenticating with AWS IAM credentials, auditing via CloudTrail, and rolling the spend into their AWS invoice. No separate Anthropic contract, no new API keys to provision, no parallel SSO to wire. The launch covers 18 AWS regions across North America, South America, Europe, and Asia-Pacific. The architectural distinction worth holding on to: Anthropic operates the service itself, so customer data is processed inside Anthropic's data-handling boundary — unlike Claude models accessed through Amazon Bedrock, where AWS is the data processor.
The headline framing is "Claude, now on AWS." The substance is one tier deeper, and it's the part procurement teams should be reading carefully: the choice of how to access Claude inside an AWS-heavy enterprise is now a three-way decision — Bedrock, Claude Platform on AWS, or Direct API — and each one trades feature parity, data-processing boundary, and procurement friction differently.
Why "one more way to get Claude" is actually a procurement category change
For two years, the path to running Claude inside an AWS shop has run through one of two options, neither of which quite lined up:
Option A: Amazon Bedrock. AWS hosts Claude inside Bedrock, AWS is the data processor under the existing AWS DPA, billing is on the AWS invoice, IAM is the existing IAM. Procurement is clean. The cost is feature parity — Bedrock has historically lagged the direct API on the features Anthropic ships first. Managed Agents, Skills, prompt caching, MCP connectors, citations: these landed first on the direct API and arrived on Bedrock in stages, with caveats, or not at all.
Option B: Direct API. Full feature surface, day-one access to anything Anthropic ships. The cost is procurement — separate Anthropic contract, separate auth, separate billing, separate data-processing agreement, separate SOC reports to file with the security team. For an org that already runs everything through AWS, every one of those "separates" is a meeting and a quarter of legal review.
Claude Platform on AWS is the third option that didn't exist before May 12: the full direct-API feature surface, with AWS IAM as the identity plane, CloudTrail as the audit surface, and the AWS invoice as the billing surface. Anthropic still processes the data — that hasn't changed — but the operational surface a platform team has to integrate against is now AWS-native.
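"AWS IAM as the identity plane" means access gets scoped with ordinary IAM policy documents rather than API-key distribution. A minimal sketch of what that could look like, with the caveat that the action namespace below (`claude-platform:*`) is an illustrative assumption, not a published value — substitute whatever the service's IAM reference actually documents:

```python
import json

# Hypothetical IAM policy scoping which principals may invoke the service.
# The action names are placeholders, not the real namespace; the Condition
# pins calls to regions the platform team has approved.
claude_invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowClaudeMessages",
            "Effect": "Allow",
            "Action": [
                "claude-platform:InvokeMessages",  # assumption
                "claude-platform:CreateBatch",     # assumption
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(claude_invoke_policy, indent=2))
```

The operational point is that this artifact lives in the same policy pipeline (IaC review, permission boundaries, access analyzer) as every other IAM document the org already manages.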
For the procurement conversation, the practical effect is that the friction of getting Claude's full feature set into an AWS-native enterprise dropped substantially. Whether that friction reduction is worth the data-flow distinction (Anthropic-as-processor vs AWS-as-processor) is the real question for the next quarter.
The three Claude surfaces, and how to think about which goes where
A team that runs more than one Claude workload — which is to say, almost every team — now has three distinct surfaces to allocate to. The dimensions that actually matter:
Feature parity. Claude Platform on AWS and Direct API are at parity by design; Bedrock historically lags. If a workflow needs the newest feature shipped this month (a Managed Agents capability, a new Skill type, a new MCP integration), it's going to ship on Direct API and Claude Platform on AWS first.
Data-processing boundary. Bedrock processes through AWS; Claude Platform on AWS and Direct API process through Anthropic. Which of those is acceptable depends on the workflow's compliance posture, which DPAs are already signed, and what the data-residency officer's risk model looks like. Bedrock's "AWS is the processor" framing is sometimes the easier sell to a CISO; sometimes it isn't, depending on how aggressively the org has been hardening its Anthropic-direct posture over the last 12 months.
Identity and audit. Claude Platform on AWS and Bedrock both give you IAM and CloudTrail natively; Direct API requires your own identity wrapping and audit-log emission. The platform-engineering cost of integrating Direct API into an AWS shop is non-trivial, and Claude Platform on AWS collapses most of it.
Billing and commitments. AWS shops with negotiated AWS commitments can fold Claude Platform on AWS spend into those commitments. Direct API spend can't go there. For a large enough enterprise, that alone moves the needle.
Region coverage. Eighteen regions is a lot, but it isn't all of them. A workload running in a region not covered will still need Direct API or Bedrock.
The honest read is that most enterprises end up running two or three of these surfaces simultaneously, allocated per workload — and the workload allocation is the design decision that matters more than the vendor pitch.
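The per-workload allocation above can be codified rather than left as tribal knowledge. A sketch of one such rule, with the region list, data classes, and check ordering all stand-ins a real policy would replace with the org's own inputs:

```python
from dataclasses import dataclass

# Placeholder subset of the 18 covered regions -- replace with the real list.
CLAUDE_PLATFORM_REGIONS = {"us-east-1", "eu-west-1", "ap-southeast-1"}

@dataclass
class Workload:
    region: str
    data_class: str            # e.g. "public", "internal", "regulated"
    needs_newest_features: bool
    anthropic_dpa_signed: bool

def allocate_surface(w: Workload) -> str:
    # Outside the covered regions, the choice reduces to Bedrock vs Direct API.
    if w.region not in CLAUDE_PLATFORM_REGIONS:
        return "direct-api" if w.needs_newest_features else "bedrock"
    # If Anthropic-as-processor isn't acceptable for this data class,
    # Bedrock is the only AWS-processor surface -- at the cost of feature lag.
    if w.data_class == "regulated" and not w.anthropic_dpa_signed:
        return "bedrock"
    # Otherwise: full feature parity plus native IAM/CloudTrail/billing.
    return "claude-platform-on-aws"

print(allocate_surface(Workload("eu-west-1", "internal", True, True)))
```

The value isn't the twenty lines of logic; it's that the allocation becomes a reviewable, versioned artifact the security team can sign off on.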
What this changes for AWS-native AI development
Three structural shifts most engineering leaders haven't quite priced in yet.
The "build a Claude integration layer" project just got smaller. A platform team that was halfway through wrapping the direct Claude API with custom IAM proxying, custom audit-log emission, custom billing-attribution, and custom region-routing is now staring at a product that does most of that natively. The right move for most teams isn't to scrap the wrapper — it's to refactor it to delegate identity, audit, and billing to Claude Platform on AWS and keep the wrapper for the parts that are genuinely org-specific (per-workload guardrails, custom telemetry, internal observability).
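The refactored wrapper shape is worth making concrete. A sketch under the assumption that transport, auth, audit, and billing are all delegated to the platform — `invoke` here is an injected stand-in for whatever client call the service SDK actually exposes, so only the genuinely org-specific hooks remain:

```python
from typing import Callable

class ClaudeWrapper:
    """The post-refactor wrapper: guardrails and telemetry stay,
    identity/audit/billing are the platform's job."""

    def __init__(self, invoke: Callable[[str], str],
                 guardrail: Callable[[str], str],
                 telemetry: Callable[[str, str], None]):
        self._invoke = invoke        # delegated transport (hypothetical SDK call)
        self._guardrail = guardrail  # org-specific: stays in the wrapper
        self._telemetry = telemetry  # org-specific: stays in the wrapper

    def call(self, prompt: str) -> str:
        safe_prompt = self._guardrail(prompt)   # per-workload guardrails
        response = self._invoke(safe_prompt)    # auth/audit/billing: delegated
        self._telemetry(safe_prompt, response)  # internal observability
        return response

# Usage with stand-in callables:
log: list = []
w = ClaudeWrapper(invoke=lambda p: f"echo:{p}",
                  guardrail=lambda p: p.strip(),
                  telemetry=lambda p, r: log.append((p, r)))
print(w.call("  hello  "))
```

Everything the old wrapper did that isn't in this class — IAM proxying, audit emission, billing attribution, region routing — is the deletion candidate.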
Workload allocation becomes a real conversation. Until now, most teams have allocated Claude to Bedrock or the Direct API by accident, determined by whichever procurement contract was already in place. The next quarter is when allocation becomes deliberate: per-workflow, per-team, per-data-class, which Claude surface fits, and why. The teams that do this exercise carefully will have a defensible posture by Q3. The teams that don't will have three Claude bills, three audit surfaces, and three sets of integration code, all doing slightly different things.

CloudTrail-as-the-audit-surface is the load-bearing piece. For a regulated AWS shop, the ability to point at CloudTrail and say "every Claude call in our org is here, with the IAM principal that made it, the input shape, and the response metadata" is the answer to questions auditors have been asking and that Direct-API teams have been answering with brittle log-collection pipelines. CloudTrail isn't perfect — it doesn't capture full request bodies for free, retention is per the org's existing policy, integration with downstream SIEMs is whatever the org has already built — but it is the same audit surface every other AWS service flows through, and that consistency is the thing.
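CloudTrail records share one stable envelope (`eventTime`, `eventSource`, `eventName`, `userIdentity`, `awsRegion`), which is what makes the consistency argument work. A sketch of answering the auditor's question from one record — the `eventSource` and `eventName` values for Claude calls are placeholders here, not published names:

```python
# Placeholder record in the standard CloudTrail envelope shape.
sample_record = {
    "eventTime": "2026-05-12T14:03:07Z",
    "eventSource": "claude-platform.amazonaws.com",  # placeholder value
    "eventName": "InvokeMessages",                   # placeholder value
    "awsRegion": "us-east-1",
    "userIdentity": {
        "type": "AssumedRole",
        "arn": "arn:aws:sts::123456789012:assumed-role/app-team/worker-7",
    },
    "requestParameters": {"model": "claude-sonnet-4-5"},  # illustrative
}

def summarize_claude_call(record: dict) -> dict:
    """Reduce one record to the fields an auditor asks for."""
    return {
        "when": record["eventTime"],
        "who": record["userIdentity"]["arn"],
        "what": record["eventName"],
        "where": record["awsRegion"],
        "model": record.get("requestParameters", {}).get("model"),
    }

print(summarize_claude_call(sample_record)["who"])
```

Note the caveat from the paragraph above survives the sketch: the IAM principal and call metadata are in the envelope; full request bodies are not there by default.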
Where we'd push back on the launch narrative
"AWS IAM for Claude" is not the same as "AWS is the data processor for Claude." Anthropic operates Claude Platform on AWS; customer data is processed inside Anthropic's boundary, under Anthropic's DPA. A CISO who reads the launch announcement and assumes "this is just Bedrock with more features" is going to be surprised at the data-flow diagram. Build the procurement conversation around the actual data-processing boundary, not the auth surface.
Eighteen regions sounds like a lot until your workload runs in the nineteenth. APAC and EMEA coverage is real but uneven. Workloads in regions not yet covered will continue to need Bedrock or Direct API as the fallback. Inventory the workload geography before you commit to a single Claude surface.
Bedrock isn't going away, and its feature-parity gap will close over time. Teams that route everything onto Claude Platform on AWS today on the assumption that Bedrock is permanently behind are making a bet about Anthropic's product roadmap that hasn't been validated. The structural difference (data processor: Anthropic vs AWS) will persist; the feature-parity gap will probably shrink. Plan for both surfaces to remain relevant.
What we'd build differently this week
- Inventory Claude API usage across the org by surface. Bedrock workloads, Direct API workloads, anything else. Most orgs don't have this inventory and can't make the workload-allocation decision without it.
- Stand up a Claude Platform on AWS pilot in a non-prod AWS account. One workflow, one team, one quarter. Measure the integration cost (how much of the existing wrapper can be deleted), the feature parity (does the workflow gain anything from being off Bedrock), and the audit posture (does CloudTrail capture what the compliance team needs).
- Author the workload-allocation policy now, before the next migration. Per-workflow, per-data-class, which Claude surface is correct, and why. Get the policy signed by the security team and the data-residency officer. Use it as the configuration the platform team enforces.
- Wire CloudTrail Claude events into the SIEM and the eval surface. The audit log is the artifact the compliance team will ask for. It's also the input to a trajectory-trace pipeline that grades agent quality. Same plumbing, two consumers.
- Decide who in the org owns the multi-surface Claude posture. A platform engineer? A dedicated "AI infrastructure" role? An AI Center of Excellence? The org-chart answer matters more than the tooling answer; don't let it default by accident.
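The "same plumbing, two consumers" point from the CloudTrail bullet can be sketched as one record fanned out to two formatters — the event field values are placeholders mirroring the CloudTrail envelope, and the routing targets (SIEM, eval pipeline) are whatever the org already runs:

```python
import json

def to_siem_line(event: dict) -> str:
    # SIEMs commonly ingest one JSON object per line (NDJSON).
    return json.dumps({"ts": event["eventTime"],
                       "principal": event["userIdentity"]["arn"],
                       "action": event["eventName"]})

def to_eval_trace(event: dict) -> dict:
    # The eval pipeline wants calls grouped by workload for grading;
    # here the role session name stands in for the workload id (assumption).
    return {"workload": event["userIdentity"]["arn"].rsplit("/", 1)[-1],
            "action": event["eventName"],
            "region": event["awsRegion"]}

event = {"eventTime": "2026-05-12T14:03:07Z",
         "eventName": "InvokeMessages",  # placeholder event name
         "awsRegion": "us-east-1",
         "userIdentity": {"arn": "arn:aws:sts::123456789012:"
                                 "assumed-role/app-team/worker-7"}}

siem_line = to_siem_line(event)   # one NDJSON line for the compliance consumer
trace = to_eval_trace(event)      # one trace row for the quality consumer
print(trace["workload"])
```

One ingestion path, two projections: the compliance artifact and the eval input come from the same stream, which is why the bullet argues for wiring them together rather than building twice.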
Sonnet Code's take
Claude Platform on AWS going GA is the moment "how we get Claude into our AWS shop" stopped being a procurement workaround and started being a design decision with three credible options. The teams that win this cycle are the ones who allocate workloads deliberately across the three Claude surfaces, who treat CloudTrail-as-audit-surface as the load-bearing piece of their compliance posture, and who staff the platform engineering needed to keep one consistent observability and policy layer across all three. We staff that work directly: AI development at Sonnet Code is the engineering that inventories existing Claude usage, builds the per-workflow allocation policy, refactors internal wrappers to delegate to Claude Platform on AWS where it fits, and wires CloudTrail audit trails into the org's existing SIEM and eval infrastructure. We pair it with AI training engagements where senior practitioners — platform engineers, security architects, compliance specialists — author the per-workload guardrails and grade agent behavior on the surfaces they're allocated to. If your team is looking at the Claude-on-AWS launch this week and trying to figure out whether to migrate off Bedrock, the next conversation isn't about the launch — it's about the workload allocation you don't have yet and the platform engineer who'd own the multi-surface posture you're about to operate.

