AWS Put MCP on Its Own Infrastructure. That Changes What the Protocol Is For.
AWS shipped an MCP server. On its own that is not news. There are thousands of MCP servers, and AWS already had two of them. The news is where this one runs. It is not a package you clone and launch on your machine. It is a hosted service AWS operates, reachable at aws-mcp.us-east-1.api.aws/mcp with a second endpoint in Frankfurt, authenticated with SigV4, authorized through your existing IAM policies. AWS did not just publish a tool. It decided that the Model Context Protocol belongs on production cloud infrastructure with enterprise auth, and built it that way.
What actually shipped
The product is called, plainly, the AWS MCP Server. It exposes the AWS API surface to an AI agent through MCP tools, including aws___search_documentation and aws___retrieve_skill, with the agent able to operate on resources in any region the caller specifies. It is documented as available today in two regions: US East (N. Virginia) and Europe (Frankfurt). Supported clients listed in the setup guide are Kiro CLI, Kiro IDE, Cursor, Claude Desktop, and Codex. Anthropic's Claude Desktop is a first-class target, named directly in the configuration examples.
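For a sense of the shape, a client entry for the hosted server looks something like the following. The `mcpServers` structure is the standard configuration format shared by Claude Desktop and Cursor; the proxy invocation and endpoint argument here are illustrative assumptions, not copied from AWS's setup guide:

```json
{
  "mcpServers": {
    "aws": {
      "command": "uvx",
      "args": [
        "mcp-proxy-for-aws",
        "https://aws-mcp.us-east-1.api.aws/mcp"
      ]
    }
  }
}
```

The point of the shape is that the client never speaks to the AWS endpoint directly; it spawns the proxy, which signs each outbound request with the caller's AWS credentials.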
The setup guide opens with an instruction that tells you most of the story: if you already have the aws-api-mcp-server or aws-knowledge-mcp-server installed, remove them first. AWS had two earlier MCP servers, run locally, scoped to API calls and documentation respectively. Both are now superseded. The unified hosted server replaces them, and the docs explicitly tell you to delete the old ones to avoid tool conflicts that confuse agents. Two iterations in roughly half a year, collapsed into a single hosted surface. That is a team moving fast on a product they have decided matters, not a team shipping a sample.
The auth model is the actual headline
Most MCP servers in the wild are local stdio processes. You install a package, your client spawns it as a subprocess, and it talks to the model over standard input and output. There is no network boundary, no identity, no audit trail. That design is fine for a developer wiring up a personal tool. It is unshippable for an enterprise that wants an agent calling its cloud control plane.
AWS's server does not use bearer tokens, the default for the small slice of remote MCP servers that do exist. It uses SigV4, the same request-signing scheme that authenticates every AWS API call, brokered through a thin proxy (mcp-proxy-for-aws, open-sourced at github.com/aws/mcp-proxy-for-aws) that handles signing and credential rotation. Authorization does not happen at the MCP layer at all. When the agent calls a tool, the server forwards the request to the underlying AWS service, and that service authorizes it against your existing IAM roles and policies, exactly as it would a direct API call. The MCP server adds no new permission system. It inherits yours.
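To make the contrast with bearer tokens concrete, here is a minimal sketch of the SigV4 signing scheme itself, the same derivation a proxy like mcp-proxy-for-aws performs on each request. The service name, path, and payload are illustrative assumptions; this is the standard published algorithm, not AWS's proxy code:

```python
# Sketch of AWS Signature Version 4: each request is bound to a date,
# region, and service scope, and the secret key never travels on the wire.
import datetime
import hashlib
import hmac


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sign_request(access_key, secret_key, region, service,
                 host, path, payload, now=None):
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")

    # 1. Canonical request: method, path, query, headers, payload hash.
    payload_hash = hashlib.sha256(payload.encode()).hexdigest()
    canonical_headers = f"host:{host}\nx-amz-date:{amz_date}\n"
    signed_headers = "host;x-amz-date"
    canonical_request = "\n".join(
        ["POST", path, "", canonical_headers, signed_headers, payload_hash]
    )

    # 2. String to sign: the hashed canonical request plus the credential scope.
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # 3. Signing key derived from the secret through the scope chain,
    #    so a leaked signature is useless outside that date/region/service.
    key = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    key = _hmac(key, region)
    key = _hmac(key, service)
    key = _hmac(key, "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
            f"SignedHeaders={signed_headers}, Signature={signature}")
```

A bearer token is a static secret presented as-is; a SigV4 signature is recomputed per request and scoped to a specific day, region, and service, which is why it composes cleanly with credential rotation and IAM.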
AWS also added two global IAM condition keys: aws:ViaAWSMCPService, true for any request through an AWS managed MCP server, and aws:CalledViaAWSMCP, carrying the specific server principal. An organization can now write a policy that denies, for example, s3:DeleteBucket when the call originates from an agent through MCP, while still allowing it from a human running the CLI. That is the kind of control a security team asks for before it lets an agent near production. That these keys exist at launch is the tell. This was scoped as enterprise infrastructure from the start.
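A deny statement along those lines might look like the following. The policy shape is standard IAM JSON and the condition key is the one AWS announced; treat the choice of Bool operator and the blanket resource as an illustrative sketch rather than a recommended policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NoAgentBucketDeletes",
      "Effect": "Deny",
      "Action": "s3:DeleteBucket",
      "Resource": "*",
      "Condition": {
        "Bool": { "aws:ViaAWSMCPService": "true" }
      }
    }
  ]
}
```

A human running the same delete from the CLI never matches the condition, so the deny bites only on agent-originated calls routed through the managed MCP server.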
This is the second half of an agent platform
Read this next to what AWS shipped eight days earlier. On May 7, AWS put native x402 agent payments into Bedrock AgentCore, the same canonical x402 V2 settlement layer TensorFeed has served for months. AgentCore already had Runtime and Identity. Now there is a managed payments rail and a managed tool-call rail, both authenticated through AWS's own identity primitives, both pointed at agents rather than humans. Identity, payments, and tool execution are the three legs an autonomous agent needs to do useful work in someone's cloud account. AWS now operates all three as managed services.
The strategic shape is consistent with the Google A2A x402 coalition we wrote about yesterday. Two hyperscalers, in the same fortnight, building the acceptance side of the agent economy before the demand side has obviously arrived. Google's play is a payments protocol with sixty logos behind it. AWS's play is to make its own cloud the place agents run, with the protocol layer it endorses being MCP for tools and x402 for money. Neither is betting on a niche. They are laying rail.
What it does not mean
It is worth being precise about what this is not. AWS is not the first to host a remote MCP server. Cloudflare has shipped remote MCP with OAuth, and others have hosted servers in production. The significant thing here is not primacy. It is that the largest cloud provider chose to make MCP a managed service of its own platform, with its own identity model, and to deprecate its local servers in favor of it. When the company that runs a third of the internet's workloads decides a protocol is production infrastructure, the protocol's status question is settled. MCP is no longer a thing you wonder whether to take seriously.
It also does not mean local MCP servers are dead. The laptop server is still the right shape for a personal tool, a prototype, a thing one developer runs for one workflow. What changed is the ceiling. There is now a clear, vendor-blessed pattern for the other end of the spectrum: a hosted server, on real infrastructure, behind real auth, that an enterprise can put in front of an agent without a security review stalling the project for a quarter.
Our Take
TensorFeed runs MCP servers on the hosted side of that line already. @tensorfeed/mcp-server fronts a Cloudflare Worker, not a local process, and @tensorfeed/x402-base-mcp carries the payments primitive. We made that architectural choice because the value of an MCP server is the freshness and structure of what it returns, and that has to come from a backend you operate, not a script on a user's disk. AWS validating the hosted-plus-auth pattern is not a threat to that. It is the rest of the industry arriving where the design already pointed.
The metric we watch for our own MCP work is not downloads. It is whether agents recommend the server to other agents, because that is the only growth loop that compounds in a machine-to-machine ecosystem. AWS hosting MCP centrally, with IAM scoping per tool call, makes that loop more legible, not less. An agent that can reason about which server is trustworthy, audited, and correctly scoped is an agent that can make a recommendation worth acting on. The protocol getting an enterprise-grade reference implementation from AWS raises the floor for everyone building on it, including us.
The story is not that AWS built a tool. The story is that AWS looked at MCP, decided it was load-bearing enough to put on the same infrastructure and the same identity model as its core API, and deprecated its own earlier attempts to get there. That is what conviction looks like from a company that does not ship infrastructure casually.
