Portkey MCP Server for AutoGen: 10 tools — connect in under 2 minutes
Microsoft AutoGen enables multi-agent conversations where agents negotiate, delegate, and execute tasks collaboratively. Add Portkey as an MCP tool provider through Vinkius and every agent in the group can access live data and take action.
Vinkius supports streamable HTTP and SSE.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams


async def main():
    # Your Vinkius token: get it at cloud.vinkius.com
    server_params = StreamableHttpServerParams(
        url="https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp",
    )
    async with McpWorkbench(server_params=server_params) as workbench:
        tools = await workbench.list_tools()
        agent = AssistantAgent(
            name="portkey_agent",
            # Any chat-capable model client works here
            model_client=OpenAIChatCompletionClient(model="gpt-4o"),
            workbench=workbench,
            system_message=(
                "You help users with Portkey. "
                "10 tools available."
            ),
        )
        print(f"Agent ready with {len(tools)} tools")


asyncio.run(main())
* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure page for details.
About Portkey MCP Server
What you can do
Connect AI agents to the Portkey AI Gateway for enterprise-grade observability and management:
AutoGen enables multi-agent conversations where agents negotiate, delegate, and collaboratively use Portkey tools. Connect 10 tools through Vinkius and assign role-based access: a data analyst queries while a reviewer validates, with optional human-in-the-loop approval for sensitive operations.
- Monitor logs and traces of all LLM calls passing through your gateway
- Analyze token usage, latency, and costs across models and teams
- Submit feedback (Likes/Dislikes) to improve model quality and agent performance
- Export logs for audit trails, compliance, and offline cost analysis
- Review gateway configurations including retry policies, fallbacks, and cache settings
- Manage virtual keys to track provider API key usage and limits
- Discover supported models from 1,600+ LLMs available via Portkey
- Enforce budget policies to prevent runaway AI costs per team or project
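The monitoring bullets above typically end in plain post-processing of log data. This sketch shows how an agent (or your own code) might rank calls by cost and aggregate spend per model; the field names (`model`, `total_tokens`, `cost`) are an assumed response shape, not Portkey's documented schema.

```python
# Sketch: rank gateway calls by cost and total spend per model.
# The entry fields below are assumptions about list_logs output.
def most_expensive_calls(logs, top_n=3):
    """Return the top_n log entries sorted by cost, descending."""
    return sorted(logs, key=lambda e: e.get("cost", 0.0), reverse=True)[:top_n]

def cost_by_model(logs):
    """Aggregate total cost per model across all log entries."""
    totals = {}
    for entry in logs:
        totals[entry["model"]] = totals.get(entry["model"], 0.0) + entry.get("cost", 0.0)
    return totals

sample = [
    {"model": "gpt-4o", "total_tokens": 1200, "cost": 0.04},
    {"model": "claude-3-5-sonnet", "total_tokens": 900, "cost": 0.03},
    {"model": "gpt-4o", "total_tokens": 300, "cost": 0.01},
]
print(most_expensive_calls(sample, top_n=1))
print(cost_by_model(sample))
```

The same two helpers cover the "most expensive calls" and "cost per team" questions from the example prompts later on.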
The Portkey MCP Server exposes 10 tools through Vinkius. Connect it to AutoGen in under two minutes — no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
How to Connect Portkey to AutoGen via MCP
Follow these steps to integrate the Portkey MCP Server with AutoGen.
Install AutoGen
Run pip install "autogen-ext[mcp]"
Replace the token
Replace [YOUR_TOKEN_HERE] with your Vinkius token
Integrate into workflow
Use the agent in your AutoGen multi-agent orchestration
Explore tools
The workbench discovers 10 tools from Portkey automatically
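Once the workbench has discovered the tools, you can hand different agents different subsets. The tool names below come from this page; the filtering helper itself is a sketch, not an AutoGen API.

```python
# Sketch: restrict each agent role to a subset of the discovered
# Portkey tools. The role sets and helper are our own convention.
ANALYST_TOOLS = {"list_logs", "get_log_details", "list_models"}
REVIEWER_TOOLS = {"submit_feedback", "export_logs"}

def tools_for_role(all_tools, allowed_names):
    """Filter discovered tool schemas down to the names a role may use."""
    return [t for t in all_tools if t["name"] in allowed_names]

# Minimal stand-in for workbench.list_tools() output (name field only)
discovered = [{"name": n} for n in [
    "create_policy", "delete_policy", "export_logs", "get_log_details",
    "get_virtual_keys", "list_configs", "list_logs", "list_models",
    "list_policies", "submit_feedback",
]]
analyst_tools = tools_for_role(discovered, ANALYST_TOOLS)
print([t["name"] for t in analyst_tools])
```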
Why Use AutoGen with the Portkey MCP Server
AutoGen provides unique advantages when paired with Portkey through the Model Context Protocol.
Multi-agent conversations: multiple AutoGen agents discuss, delegate, and collaboratively use Portkey tools to solve complex tasks
Role-based architecture lets you assign Portkey tool access to specific agents: a data analyst queries while a reviewer validates
Human-in-the-loop support: agents can pause for human approval before executing sensitive Portkey tool calls
Code execution sandbox: AutoGen agents can write and run code that processes Portkey tool responses in an isolated environment
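The human-in-the-loop point above can be reduced to a small guard around tool execution. This is a framework-agnostic sketch: `call_tool`, the `SENSITIVE` set, and the approver callback are all our own names, not AutoGen or Portkey APIs.

```python
# Sketch: gate sensitive tool calls behind an approval callback.
# SENSITIVE names are picked from this page's tool list.
SENSITIVE = {"create_policy", "delete_policy"}

def guarded_call(call_tool, name, args, approve):
    """Run call_tool unless the tool is sensitive and the approver declines."""
    if name in SENSITIVE and not approve(name, args):
        return {"status": "rejected", "tool": name}
    return call_tool(name, args)

# Example with a fake executor and an approver that always declines.
fake_executor = lambda name, args: {"status": "ok", "tool": name}
print(guarded_call(fake_executor, "delete_policy", {"policy_id": "p1"},
                   approve=lambda n, a: False))
print(guarded_call(fake_executor, "list_logs", {}, approve=lambda n, a: False))
```

In a real deployment the approver would prompt a human (or a supervising agent) instead of returning a constant.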
Portkey + AutoGen Use Cases
Practical scenarios where AutoGen combined with the Portkey MCP Server delivers measurable value.
Collaborative analysis: one agent queries Portkey while another validates results and a third generates the final report
Automated review pipelines: a researcher agent fetches data from Portkey, a critic agent evaluates quality, and a writer produces the output
Interactive planning: agents negotiate task allocation using Portkey data to make informed decisions about resource distribution
Code generation with live data: an AutoGen coder agent writes scripts that process Portkey responses in a sandboxed execution environment
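The researcher/critic/writer pipeline above can be sketched as three plain functions. The Portkey data here is stubbed; in a real run the researcher would call `list_logs` through the MCP workbench.

```python
# Sketch of an automated review pipeline: researcher fetches,
# critic validates, writer summarizes. Data is stubbed.
def researcher():
    return [{"model": "gpt-4o", "cost": 0.04}, {"model": "gpt-4o", "cost": 0.01}]

def critic(data):
    # Drop malformed or negative-cost entries before they reach the writer.
    return [d for d in data if "cost" in d and d["cost"] >= 0]

def writer(data):
    total = sum(d["cost"] for d in data)
    return f"Reviewed {len(data)} calls, total cost ${total:.2f}"

report = writer(critic(researcher()))
print(report)
```

In AutoGen each stage would be its own agent in a group chat; the function boundaries mirror the hand-offs between them.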
Portkey MCP Tools for AutoGen (10)
These 10 tools become available when you connect Portkey to AutoGen via MCP:
create_policy
Create a new budget or usage policy for AI gateway access. Requires a policy name, a budget limit (USD or token count), and optionally the target users or virtual keys to restrict. Returns the created policy details. Use this to enforce cost controls on specific teams or projects using the gateway.
delete_policy
Remove a budget or usage policy from Portkey. Requires the policy ID. Use this when a project ends or budget constraints are no longer needed.
export_logs
Export AI gateway logs for external analysis or compliance reporting. Optionally filters by date range, model, or user. Returns an export ID or download URL. Use this for audit trails, cost reporting, or offline analysis of AI usage patterns.
get_log_details
Get detailed information about a specific AI gateway log entry. Requires the log ID from list_logs results. Use this for deep debugging of specific AI interactions.
get_virtual_keys
List all virtual API keys managed by Portkey. Virtual keys map to underlying provider keys (OpenAI, Anthropic, etc.) with metadata, usage limits, and policy associations. Returns key IDs, names, provider targets, current usage, and status. Use this to audit API key usage or identify keys approaching limits.
list_configs
List all gateway configurations stored in Portkey. Returns config IDs, names, creation dates, and associated virtual keys. Use this to review how LLM requests are routed or to audit gateway behavior.
list_logs
List recent AI gateway logs and traces from Portkey. Returns log IDs, timestamps, model names, token usage, latency, costs, and status codes. Supports pagination via limit/offset. Use this to monitor AI usage, identify expensive calls, or debug latency issues.
list_models
List all LLM models supported by the Portkey gateway. Returns model names, provider names, supported endpoints (chat, embeddings, etc.), and capabilities. Use this to discover which models are routable via your gateway.
list_policies
List all budget and usage policies defined in Portkey. Returns policy names, limits, current consumption, and affected users/keys. Use this to review guardrails preventing runaway AI costs.
submit_feedback
Submit user feedback (Like/Dislike) for a specific AI response log. Requires the log ID, a rating (LIKE, DISLIKE, or UNLIKE to remove), and optional text feedback. Use this to build RLHF datasets or monitor user satisfaction with AI outputs.
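Under the hood, every one of these tool invocations is an MCP `tools/call` JSON-RPC request. The envelope below follows the MCP specification; the `create_policy` argument names are assumptions based on this page, not Portkey's documented schema.

```python
import json

# Sketch: the JSON-RPC payload an MCP client sends for a tool call.
# "tools/call" is the MCP spec method; argument names are assumed.
def tool_call_payload(name, arguments, request_id=1):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

payload = tool_call_payload("create_policy",
                            {"name": "marketing-cap", "budget_limit_usd": 500})
print(payload)
```

The workbench builds and sends these for you; seeing the shape mainly helps when debugging raw traffic to the Vinkius endpoint.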
Example Prompts for Portkey in AutoGen
Ready-to-use prompts you can give your AutoGen agent to start working with Portkey immediately.
"Show me the most expensive LLM calls from the last 24 hours"
"Create a budget policy limiting the Marketing team to $500/month on LLM usage"
"Export all logs from last week for our compliance audit"
Troubleshooting Portkey MCP Server with AutoGen
Common issues when connecting Portkey to AutoGen through Vinkius, and how to resolve them.
McpWorkbench not found
Run pip install "autogen-ext[mcp]" to install the MCP extras, then re-run your script.
Portkey + AutoGen FAQ
Common questions about integrating Portkey MCP Server with AutoGen.
How does AutoGen connect to MCP servers?
Through the McpWorkbench in the autogen-ext[mcp] package, which connects over streamable HTTP or SSE and exposes the server's tools to your agents.
Can different agents have different MCP tool access?
Yes. AutoGen's role-based architecture lets you assign Portkey tool access to specific agents, so a data analyst can query while a reviewer validates.
Does AutoGen support human approval for tool calls?
Yes. Agents can pause for human-in-the-loop approval before executing sensitive Portkey tool calls.
Connect Portkey with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Anthropic's native desktop app for Claude with built-in MCP support.
AI-first code editor with integrated LLM-powered coding assistance.
GitHub Copilot in VS Code with Agent mode and MCP support.
Purpose-built IDE for agentic AI coding workflows.
Autonomous AI coding agent that runs inside VS Code.
Anthropic's agentic CLI for terminal-first development.
Python SDK for building production-grade OpenAI agent workflows.
Google's framework for building production AI agents.
Type-safe agent development for Python with first-class MCP support.
TypeScript toolkit for building AI-powered web applications.
TypeScript-native agent framework for modern web stacks.
Python framework for orchestrating collaborative AI agent crews.
Leading Python framework for composable LLM applications.
Data-aware AI agent framework for structured and unstructured sources.
Microsoft's framework for multi-agent collaborative conversations.
Connect Portkey to AutoGen
Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.
