Portkey MCP Server for CrewAI: 10 tools, connect in under 2 minutes
Connect your CrewAI agents to Portkey through Vinkius: pass the Edge URL in the `mcps` parameter and every Portkey tool is auto-discovered at runtime. No credentials to manage, no infrastructure to maintain.
Vinkius supports streamable HTTP and SSE.
```python
from crewai import Agent, Task, Crew

agent = Agent(
    role="Portkey Specialist",
    goal="Help users interact with Portkey effectively",
    backstory=(
        "You are an expert at leveraging Portkey tools "
        "for automation and data analysis."
    ),
    # Your Vinkius token; get it at cloud.vinkius.com
    mcps=["https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"],
)

task = Task(
    description=(
        "Explore all available tools in Portkey "
        "and summarize their capabilities."
    ),
    agent=agent,
    expected_output=(
        "A detailed summary of 10 available tools "
        "and what they can do."
    ),
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```
Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure page for details.
About Portkey MCP Server
What you can do
When paired with CrewAI, Portkey becomes a first-class tool in your multi-agent workflows: each agent in the crew can call Portkey tools autonomously, with one agent querying data, another analyzing results, and a third compiling reports, all orchestrated through Vinkius with zero configuration overhead.
Connect AI agents to the Portkey AI Gateway for enterprise-grade observability and management (a concrete sketch follows this list):
- Monitor logs and traces of all LLM calls passing through your gateway
- Analyze token usage, latency, and costs across models and teams
- Submit feedback (Likes/Dislikes) to improve model quality and agent performance
- Export logs for audit trails, compliance, and offline cost analysis
- Review gateway configurations including retry policies, fallbacks, and cache settings
- Manage virtual keys to track provider API key usage and limits
- Discover supported models from 1,600+ LLMs available via Portkey
- Enforce budget policies to prevent runaway AI costs per team or project
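For example, here is a minimal sketch of a cost-monitoring agent built on the same pattern as the quick-start snippet above. The role, goal, and task wording are illustrative; only the Edge URL placeholder comes from the setup steps.

```python
from crewai import Agent, Task, Crew

# Illustrative cost-monitoring agent; the Edge URL placeholder is the one from the setup steps.
analyst = Agent(
    role="LLM Cost Analyst",
    goal="Track token usage, latency, and spend across the Portkey gateway",
    backstory="You audit LLM traffic flowing through the gateway and flag unusual spend.",
    mcps=["https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"],
)

report = Task(
    description=(
        "List recent gateway logs, identify the most expensive calls, "
        "and summarize token usage and latency by model."
    ),
    agent=analyst,
    expected_output="A short cost and latency report grouped by model.",
)

print(Crew(agents=[analyst], tasks=[report]).kickoff())
```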
The Portkey MCP Server exposes 10 tools through Vinkius. Connect it to CrewAI in under two minutes: no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
How to Connect Portkey to CrewAI via MCP
Follow these steps to integrate the Portkey MCP Server with CrewAI.
Install CrewAI
Run `pip install crewai`.
Replace the token
Replace `[YOUR_TOKEN_HERE]` with your Vinkius token from cloud.vinkius.com.
Customize the agent
Adjust the role, goal, and backstory to fit your use case
Run the crew
Run `python crew.py`. CrewAI auto-discovers the 10 Portkey tools at runtime.
Why Use CrewAI with the Portkey MCP Server
CrewAI, a multi-agent orchestration framework, provides unique advantages when paired with Portkey through the Model Context Protocol (a minimal two-agent sketch follows the list below):
- Multi-agent collaboration lets you decompose complex workflows into specialized roles: one agent researches, another analyzes, and a third generates reports, each with access to MCP tools
- CrewAI's native MCP integration requires zero adapter code: pass the Vinkius Edge URL directly in the `mcps` parameter and agents auto-discover every available tool at runtime
- Built-in task delegation and shared memory mean agents can pass context between steps without manual state management, enabling multi-hop reasoning across tool calls
- Sequential and hierarchical crew patterns map naturally to real-world workflows: enumerate subdomains → analyze DNS history → check WHOIS records → compile findings into actionable reports
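A minimal two-agent sketch of this pattern, assuming the same `mcps` auto-discovery shown in the quick-start and CrewAI's standard `context` hand-off between tasks. Agent roles and task wording are illustrative.

```python
from crewai import Agent, Task, Crew, Process

EDGE_URL = "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"

# The researcher gathers raw gateway data; the analyst interprets it.
researcher = Agent(
    role="Gateway Researcher",
    goal="Collect raw log and usage data from Portkey",
    backstory="You pull data from the gateway; you do not interpret it.",
    mcps=[EDGE_URL],
)
analyst = Agent(
    role="Usage Analyst",
    goal="Turn raw gateway data into actionable findings",
    backstory="You analyze data gathered by other agents.",
    mcps=[EDGE_URL],
)

collect = Task(
    description="List recent gateway logs and the currently active budget policies.",
    agent=researcher,
    expected_output="A raw list of recent logs and active policies.",
)
analyze = Task(
    description="Using the collected data, flag anomalies and any policy breaches.",
    agent=analyst,
    expected_output="A findings summary with flagged anomalies.",
    context=[collect],  # hand the researcher's output to the analyst
)

crew = Crew(
    agents=[researcher, analyst],
    tasks=[collect, analyze],
    process=Process.sequential,
)
print(crew.kickoff())
```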
Portkey + CrewAI Use Cases
Practical scenarios where CrewAI combined with the Portkey MCP Server delivers measurable value (a scheduled, multi-server sketch follows the list):
- Automated multi-step research: a reconnaissance agent queries Portkey for raw data, then a second analyst agent cross-references findings and flags anomalies, all without human handoff
- Scheduled intelligence reports: set up a crew that periodically queries Portkey, analyzes trends over time, and generates executive briefings in markdown or PDF format
- Multi-source enrichment pipelines: chain Portkey tools with other MCP servers in the same crew, letting agents correlate data across multiple providers in a single workflow
- Compliance and audit automation: a compliance agent queries Portkey against predefined policy rules, generates deviation reports, and routes findings to the appropriate team
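As a rough sketch of the scheduled and multi-source scenarios above: one agent is given two Edge URLs in its `mcps` list (the second URL is a hypothetical placeholder for any other MCP server), and a plain loop stands in for a real scheduler such as cron.

```python
import time

from crewai import Agent, Task, Crew

enricher = Agent(
    role="Enrichment Agent",
    goal="Correlate Portkey gateway data with a second data source",
    backstory="You combine data from every MCP server you are given.",
    mcps=[
        "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp",     # Portkey
        "https://edge.vinkius.com/[ANOTHER_TOKEN_HERE]/mcp",  # hypothetical second MCP server
    ],
)

briefing = Task(
    description=(
        "Summarize this week's gateway usage and cross-reference it "
        "with data from the second source."
    ),
    agent=enricher,
    expected_output="A markdown briefing covering both sources.",
)

crew = Crew(agents=[enricher], tasks=[briefing])

# Naive scheduler: run the crew once a day. Swap in cron or a task queue in production.
while True:
    print(crew.kickoff())
    time.sleep(24 * 60 * 60)
```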
Portkey MCP Tools for CrewAI (10)
These 10 tools become available when you connect Portkey to CrewAI via MCP:
create_policy
Create a new budget or usage policy for AI gateway access. Requires a policy name, a budget limit (USD or token count), and optionally the target users or virtual keys to restrict. Returns the created policy details. Use this to enforce cost controls on specific teams or projects using the gateway.
delete_policy
Remove a budget or usage policy from Portkey. Requires the policy ID. Use this when a project ends or budget constraints are no longer needed.
export_logs
Export AI gateway logs for external analysis or compliance reporting. Optionally filters by date range, model, or user. Returns an export ID or download URL. Use this for audit trails, cost reporting, or offline analysis of AI usage patterns.
get_log_details
Get detailed information about a specific AI gateway log entry. Requires the log ID from list_logs results. Use this for deep debugging of specific AI interactions.
get_virtual_keys
List all virtual API keys managed by Portkey. Virtual keys map to underlying provider keys (OpenAI, Anthropic, etc.) with metadata, usage limits, and policy associations. Returns key IDs, names, provider targets, current usage, and status. Use this to audit API key usage or identify keys approaching limits.
list_configs
List all gateway configurations stored in Portkey. Returns config IDs, names, creation dates, and associated virtual keys. Use this to review how LLM requests are routed or to audit gateway behavior.
list_logs
List recent AI gateway logs and traces from Portkey. Returns log IDs, timestamps, model names, token usage, latency, costs, and status codes. Supports pagination via limit/offset. Use this to monitor AI usage, identify expensive calls, or debug latency issues.
list_models
List all LLM models supported by the Portkey gateway. Returns model names, provider names, supported endpoints (chat, embeddings, etc.), and capabilities. Use this to discover which models are routable via your gateway.
list_policies
List all budget and usage policies defined in Portkey. Returns policy names, limits, current consumption, and affected users/keys. Use this to review the guardrails preventing runaway AI costs.
submit_feedback
Submit user feedback (Like/Dislike) for a specific AI response log. Requires the log ID, a rating (LIKE, DISLIKE, or UNLIKE to remove), and optional text feedback. Use this to build RLHF datasets or monitor user satisfaction with AI outputs.
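To show how several of these tools can chain inside a single task, here is a small sketch that repeats the quick-start agent definition; the task wording is illustrative, and the agent decides at runtime how to sequence list_logs, get_log_details, and submit_feedback.

```python
from crewai import Agent, Task, Crew

agent = Agent(
    role="Portkey Specialist",
    goal="Help users interact with Portkey effectively",
    backstory="You are an expert at leveraging Portkey tools.",
    mcps=["https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"],
)

triage = Task(
    description=(
        "Use list_logs to find the most recent failed gateway request, "
        "fetch its full record with get_log_details, and submit a DISLIKE "
        "rating via submit_feedback explaining what went wrong."
    ),
    agent=agent,
    expected_output="The reviewed log ID and a confirmation of the submitted feedback.",
)

print(Crew(agents=[agent], tasks=[triage]).kickoff())
```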
Example Prompts for Portkey in CrewAI
Ready-to-use prompts you can give your CrewAI agent to start working with Portkey immediately.
"Show me the most expensive LLM calls from the last 24 hours"
"Create a budget policy limiting the Marketing team to $500/month on LLM usage"
"Export all logs from last week for our compliance audit"
Troubleshooting Portkey MCP Server with CrewAI
Common issues when connecting Portkey to CrewAI through Vinkius, and how to resolve them.
MCP tools not discovered
Agent not using tools
Timeout errors
Rate limiting or 429 errors
Portkey + CrewAI FAQ
Common questions about integrating Portkey MCP Server with CrewAI.
How does CrewAI discover and connect to MCP tools?
When a crew starts, CrewAI queries each server in the `mcps` list for its tool catalog via the MCP `tools/list` method. This means tools are always fresh and reflect the server's current capabilities. No tool schemas need to be hardcoded.

Can different agents in the same crew use different MCP servers?
Yes. Each agent accepts its own `mcps` list, so you can assign specific servers to specific roles. For example, a reconnaissance agent might use a domain intelligence server while an analysis agent uses a vulnerability database server.

What happens when an MCP tool call fails during a crew run?

Can CrewAI agents call multiple MCP tools in parallel?
Yes. Crews can run multiple agents with `process=Process.parallel`, each calling different MCP tools concurrently. This is ideal for workflows where separate data sources need to be queried simultaneously.

Can I run CrewAI crews on a schedule (cron)?
Yes. The `crew.kickoff()` method runs synchronously by default, making it straightforward to integrate into existing pipelines.

Connect Portkey with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
- Anthropic's native desktop app for Claude with built-in MCP support.
- AI-first code editor with integrated LLM-powered coding assistance.
- GitHub Copilot in VS Code with Agent mode and MCP support.
- Purpose-built IDE for agentic AI coding workflows.
- Autonomous AI coding agent that runs inside VS Code.
- Anthropic's agentic CLI for terminal-first development.
- Python SDK for building production-grade OpenAI agent workflows.
- Google's framework for building production AI agents.
- Type-safe agent development for Python with first-class MCP support.
- TypeScript toolkit for building AI-powered web applications.
- TypeScript-native agent framework for modern web stacks.
- Python framework for orchestrating collaborative AI agent crews.
- Leading Python framework for composable LLM applications.
- Data-aware AI agent framework for structured and unstructured sources.
- Microsoft's framework for multi-agent collaborative conversations.
Connect Portkey to CrewAI
Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.
