MCP v2 Beta: What Changes in Multi-Agent Communication
The Model Context Protocol (MCP), the open standard originally authored by Anthropic that defines how AI agents discover and invoke tools, just received its most significant redesign since Anthropic open-sourced it in November 2024. On March 13, 2026, @ai-sdk/mcp v2.0.0-beta.3 landed on GitHub with breaking changes that affect every team shipping multi-agent systems on the Vercel AI SDK.
If you're running agents in production, here's the short version: your imports break, your type names change, and you need to migrate before the stable release drops. The longer version is that these changes signal something important: MCP is no longer an experiment. It is maturing into the protocol standard that multi-agent systems actually need to scale in production.
What Is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard, originally authored by Anthropic, that defines how AI agents communicate with external tools and data sources. Think of it as the HTTP for agent-tool interaction: a universal contract that any AI client can speak and any tool provider can implement.
Before the Model Context Protocol existed, every AI framework invented its own connector format. OpenAI had function calling. LangChain had tool definitions. Each required vendor-specific glue code. MCP reuses the message-flow ideas of the Language Server Protocol (LSP) and transports everything over JSON-RPC 2.0, giving teams a single integration target regardless of which AI model or agent framework they use.
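To make the JSON-RPC framing concrete, here is a minimal sketch of what an MCP tool invocation looks like on the wire, following the spec's tools/call message shape. The tool name and arguments are made up for illustration.

```typescript
// Illustrative MCP request: invoking a tool over JSON-RPC 2.0.
// The method and params layout follow the MCP spec's tools/call message;
// "search_docs" and its arguments are hypothetical.
const toolCallRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "search_docs",            // hypothetical tool name
    arguments: { query: "MCP v2" }, // tool-specific input
  },
};

// Every MCP message is a plain JSON-RPC 2.0 envelope, so any transport
// (stdio, HTTP, WebSocket) can carry it unchanged.
const wire = JSON.stringify(toolCallRequest);
```

Because the envelope is transport-agnostic, the same message works whether the server runs as a local subprocess or a remote HTTP endpoint.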
The protocol has seen explosive adoption. As of early 2026, Slack, Visual Studio Code, JetBrains IDEs, Claude, and hundreds of third-party providers support the Model Context Protocol natively. Our breakdown of the CLIs, MCP apps, and agent skills powering the next generation of software covers why this matters for developers building production AI products.
What's New in MCP v2 Beta (@ai-sdk/mcp v2.0.0-beta.3)
The @ai-sdk/mcp package was previously embedded inside the ai package as an experimental feature. In the v2 beta, it graduates to a standalone, stable package with a production-ready API surface. According to the official GitHub changelog, the major additions include:
- OAuth 2.0 for MCP clients — Secure authentication is now built into the protocol layer
- Elicitation support — Servers can request structured input from clients mid-conversation
- Resources support — Expose structured data through the protocol, not just tool calls
- Prompt templates — Reusable, parameterized prompts that any client can invoke
- Structured output / outputSchema — Typed tool results with schema validation
- MCP protocol version 2025-11-25 — Support for the November 2025 spec update
- _meta field exposure — Servers can now pass metadata alongside tool definitions
The headline change from experimental to stable is more significant than it sounds. It means the API is frozen for the stable release — teams can build on it without worrying about the ground shifting.
Breaking Changes: What Actually Breaks
The most immediately disruptive change is the import path. Everything that was in ai is now in @ai-sdk/mcp:
// Before: AI SDK 4.x
import { experimental_createMCPClient } from 'ai';
import { Experimental_StdioMCPTransport } from 'ai/mcp-stdio';
// After: @ai-sdk/mcp v2.0.0-beta.3
import { createMCPClient } from '@ai-sdk/mcp';
import { StdioMCPTransport } from '@ai-sdk/mcp/mcp-stdio';
Note what else changed: experimental_createMCPClient drops the experimental_ prefix (it's now createMCPClient), and Experimental_StdioMCPTransport becomes StdioMCPTransport. These are clean breaking changes — no backwards compatibility layer.
Two additional renames affect the broader AI SDK 5.0 upgrade path that ships alongside @ai-sdk/mcp v2:
| Old Name (AI SDK 4.x) | New Name (AI SDK 5.0) |
|---|---|
| CoreMessage | ModelMessage |
| Message | UIMessage |
| convertToCoreMessages | convertToModelMessages |
| ToolCallOptions | ToolExecutionOptions |
| message.content (string) | message.parts (array) |
The content → parts change is the one that will break the most user-facing chat UI code. Where a message previously had a content: string, it now carries parts: Array<{type: string, text?: string, ...}>. Reasoning traces, tool calls, and text are all represented as separate items in the parts array — cleaner architecturally, but a non-trivial migration if you're rendering messages in custom UI components.
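A sketch of what this migration looks like in a chat renderer. The part shapes ("text", "tool-call") mirror the parts structure described above in simplified form; renderMessage and the types here are our own helpers, not SDK APIs.

```typescript
// Simplified stand-ins for the AI SDK 5.0 parts-based message shape.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "tool-call"; toolName: string };

interface UIMessageLike {
  role: "user" | "assistant";
  parts: MessagePart[]; // was: content: string
}

function renderMessage(message: UIMessageLike): string {
  // Before: return message.content;
  // After: walk the parts array and render each kind separately.
  return message.parts
    .map((part) =>
      part.type === "text" ? part.text : `[tool: ${part.toolName}]`
    )
    .join("");
}

const msg: UIMessageLike = {
  role: "assistant",
  parts: [
    { type: "text", text: "Searching " },
    { type: "tool-call", toolName: "search_docs" },
  ],
};
```

The upside of the migration: reasoning traces and tool calls get their own branches in your renderer instead of being flattened into one string.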
New Features That Justify the Upgrade
Breaking changes only make sense when the new capabilities are worth it. The MCP v2 beta delivers three additions that matter in production.
OAuth 2.0 integration closes the biggest gap in the v1 spec. Until now, MCP servers could only authenticate via headers or API keys passed at connection time. OAuth support means MCP clients can now initiate an authorization flow, receive tokens, and refresh them transparently — the same trust model that enterprise APIs have used for a decade. For multi-agent systems that need to call Jira, GitHub, or Google Workspace on behalf of real users, this was the blocking issue.
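To illustrate the kind of token lifecycle logic the protocol layer now owns, here is a generic sketch of an expiry check with refresh skew. Nothing in it is @ai-sdk/mcp API; it only shows the bookkeeping that previously lived in application code.

```typescript
// Generic OAuth token bookkeeping, illustrative only.
interface OAuthToken {
  accessToken: string;
  expiresAt: number; // epoch milliseconds
}

function needsRefresh(
  token: OAuthToken,
  now: number,
  skewMs = 60_000
): boolean {
  // Refresh slightly before expiry so in-flight requests don't race it.
  return now >= token.expiresAt - skewMs;
}

const token: OAuthToken = { accessToken: "abc", expiresAt: 1_000_000 };
```

With OAuth built into the MCP client, this check (and the refresh round-trip it triggers) happens transparently beneath your agent code.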
Structured output via outputSchema is the other significant addition for production systems. Previously, tool results returned raw strings or untyped JSON. With outputSchema, servers declare the exact TypeScript type their tools return. Clients can validate responses against the schema without writing custom validation code. This matters for multi-agent PR review workflows where agent A hands structured data to agent B — type safety across agent boundaries.
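A sketch of what schema-checked results buy you at an agent boundary. The PRSummary shape and validatePRSummary guard are our own illustration; in practice the server's declared outputSchema (e.g. a Zod or JSON Schema definition) drives this validation instead of a hand-written guard.

```typescript
// Hypothetical typed result that agent A's tool declares via outputSchema.
interface PRSummary {
  prNumber: number;
  riskLevel: "low" | "medium" | "high";
}

// Hand-rolled guard standing in for schema validation.
function validatePRSummary(value: unknown): value is PRSummary {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.prNumber === "number" &&
    (v.riskLevel === "low" ||
      v.riskLevel === "medium" ||
      v.riskLevel === "high")
  );
}

// Agent B receives agent A's tool result as untyped JSON and checks it
// before acting on it.
const raw: unknown = JSON.parse('{"prNumber": 128, "riskLevel": "low"}');
```

The point of outputSchema is that this boundary check is generated from the server's declaration rather than duplicated by every consumer.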
Resources support separates "what an agent can do" (tools) from "what data an agent can read" (resources). A database schema, a user's calendar, a product catalog — these are now first-class protocol entities, not workarounds built on top of tool calls. This distinction reduces prompt bloat and makes the protocol's intent clearer for both LLMs and developers.
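The tools/resources split is visible in the message shapes themselves. Below is an illustrative resources/read request following the MCP spec's resource messages; the URI is made up, and real servers advertise their own URIs via resources/list.

```typescript
// Illustrative MCP resource read: "what data can I see", as opposed to
// tools/call ("what can I do"). The URI is hypothetical.
const readResource = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "resources/read",
  params: { uri: "postgres://catalog/products" },
};
```

Because resources are addressed by URI rather than wrapped in ad-hoc "get_data" tools, clients can list, cache, and subscribe to them without inflating the tool list the model has to reason over.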
The Elicitation Pattern: A Paradigm Shift
The most architecturally interesting addition in MCP v2 beta is elicitation. In v1, the communication model was strictly one-directional: a client calls a tool, the server responds. Elicitation inverts this — a server can ask the client for more information mid-execution.
// MCP v2: Server-initiated elicitation
// Server requests structured input from the client
import { z } from 'zod';

const userInput = await server.elicit({
  message: "Which project should I assign this task to?",
  schema: z.object({
    projectId: z.string(),
    priority: z.enum(["high", "medium", "low"])
  })
});
This turns MCP from a request/response protocol into something closer to a dialogue protocol. For autonomous agents that need to pause, confirm, or collect missing context before proceeding, elicitation provides a standardized pattern instead of a home-grown interruption mechanism.
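For the other side of that dialogue, here is a hypothetical client-side counterpart: the client receives the elicitation request, collects answers, and returns only the fields the server asked for. The handler shape is our own illustration, not the @ai-sdk/mcp API; in the real protocol the requested schema travels as JSON Schema.

```typescript
// Simplified elicitation request: a prompt plus the fields the server
// wants back (standing in for a full JSON Schema).
interface ElicitationRequest {
  message: string;
  fields: string[];
}

function handleElicitation(
  request: ElicitationRequest,
  answers: Record<string, string>
): Record<string, string> {
  const response: Record<string, string> = {};
  for (const field of request.fields) {
    if (!(field in answers)) {
      // Surface missing context instead of guessing a value.
      throw new Error(`Missing elicited field: ${field}`);
    }
    response[field] = answers[field];
  }
  return response; // only the requested fields, nothing extra
}
```

Filtering the response to the requested fields keeps the answer within the server's declared schema, which is what makes the interrupt pattern safe to automate.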
At Context Studios, we've been testing elicitation-based interrupt patterns in our content pipeline agents. The ability for an agent to "ask back" through a structured schema — rather than either hallucinating a value or returning an error — is a meaningful improvement in production reliability.
How We Use Model Context Protocol at Context Studios
Context Studios runs multi-agent production workflows where several agents coordinate across a shared set of tools — blog creation, SEO analysis, image generation, social publishing — all wired up via the Model Context Protocol. We've been on the @ai-sdk/mcp beta since it dropped.
What we've actually found in production: the stability guarantee matters more than any individual feature. With experimental APIs, you schedule migration work into every sprint because the API changes under you. A stable MCPClient means we can build tooling on top of it and trust the interface won't shift. That's the real v2 payoff for teams running autonomous agent loops in production.
The OAuth addition also unblocked a workflow we'd been holding back: calling user-authorized third-party APIs (Google Calendar, HubSpot, GitHub) directly from agent code without building our own token management layer. MCP now owns that complexity.
One concrete observation: the migration from experimental_createMCPClient to createMCPClient took about 40 minutes across our codebase — mostly a find-and-replace followed by fixing the handful of places where we'd used message.content as a string. The codemods Vercel ships with AI SDK 5.0 cover most of this automatically.
Migration Guide: From v1 to MCP v2 Beta
Vercel provides an official codemod runner. For most projects, this handles 80-90% of the changes automatically:
# Install the latest AI SDK
npm install @ai-sdk/mcp ai@5.0.0 @ai-sdk/provider@2.0.0
# Run the official codemods
npx @ai-sdk/codemod v5
After running the codemods, check these areas manually:
- Custom UI components rendering message.content — update to message.parts
- Type imports — any CoreMessage or CreateMessage references need updating
- Tool handler signatures — ToolCallOptions → ToolExecutionOptions
- MCP server imports — if you built custom MCP servers, the transport classes are renamed
For the MCP-specific migration, Vercel also ships an AI SDK 5 Migration MCP Server you can point Cursor at for AI-assisted migration. It's a meta-move: using MCP to migrate to MCP v2.
Why MCP v2 Matters for the Multi-Agent Future
The Model Context Protocol v2 beta isn't just a library update — it's a signal about where agent infrastructure is heading. When the spec gains OAuth, resources, structured outputs, and server-initiated elicitation in a single release, it's saying: this is the protocol that enterprise multi-agent systems need, not a research toy.
Consider the trajectory. MCP v1 launched in November 2024 with tools and basic resource access. By early 2025, it was supported by every major AI platform. Now, in the v2 beta, it acquires the auth story, the data access story, and the conversational interrupt story that production deployments have been working around for over a year.
For teams building on the dual-model agent architecture, MCP v2 is the connective tissue that makes multi-agent coordination reliable at scale. And for solo founders building AI-native products — like those shipping $1M ARR with 30-day cycles — the stable MCPClient removes one more piece of infrastructure they need to own.
The Model Context Protocol is becoming the TCP/IP of the agentic web. MCP v2 beta is the point where the protocol grew up.
Frequently Asked Questions
What is MCP v2 beta and when does it release?
MCP v2 beta refers to @ai-sdk/mcp v2.0.0-beta.3, released March 13, 2026. It's the first standalone version of Vercel's Model Context Protocol client package. A stable release date hasn't been announced, but the beta API is frozen.
What breaks when upgrading to @ai-sdk/mcp v2.0.0-beta.3?
Three main breaking changes: (1) import path changes from ai to @ai-sdk/mcp, (2) experimental_createMCPClient renamed to createMCPClient, (3) the AI SDK 5.0 type renames — CoreMessage → ModelMessage, message.content → message.parts array. Vercel provides codemods that handle most of this automatically via npx @ai-sdk/codemod v5.
Does MCP v2 require AI SDK 5.0?
Yes. The @ai-sdk/mcp v2 package is designed to work with ai@5.0.0 and @ai-sdk/provider@2.0.0. If you're on AI SDK 4.x, you'll need to migrate the full SDK alongside the MCP package.
What is MCP elicitation and how does it work?
Elicitation is a new Model Context Protocol feature allowing servers to request structured input from clients during tool execution. Instead of returning an error or guessing, an MCP server can pause and ask "which project?" or "confirm this action?" with a typed schema. The client receives the request, collects input (from the user or another agent), and returns structured data.
Is the Model Context Protocol the same as OpenAI's function calling?
No. OpenAI function calling is a vendor-specific API feature. The Model Context Protocol is an open, transport-agnostic standard implemented by multiple vendors. A tool built for MCP works with Claude, Cursor, VS Code, and any other MCP-compatible client — with no vendor lock-in. MCP also adds resources, prompts, OAuth, and elicitation that function calling doesn't support.
Should we start using @ai-sdk/mcp v2.0.0-beta.3 in production today?
For new projects: yes, start with the beta to avoid migrating again at stable release. For existing projects: run the codemods in a feature branch, test thoroughly, and plan for the 40-60 minute migration window. The beta API is frozen — no further breaking changes are expected before stable.