Portkey MCP Server
AI gateway observability: monitor logs and costs, and manage LLM configurations, through AI agents.
Vinkius supports streamable HTTP and SSE.

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure page for details.
What is the Portkey MCP Server?
The Portkey MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to Portkey via 10 tools: monitor logs and costs, and manage LLM configurations for your AI gateway. Powered by Vinkius: no API keys, no infrastructure, and you can connect in under 2 minutes.
Built-in capabilities (10)
Tools for your AI Agents to operate Portkey
Ask your AI agent "Show me the most expensive LLM calls from the last 24 hours" and get the answer without opening a single dashboard. With 10 tools connected to real Portkey data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.
Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
Why teams choose Vinkius
One subscription gives you access to thousands of MCP servers, and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.
Build your own MCP Server with our secure development framework →
Vinkius works with every AI agent you already use
…and any MCP-compatible client
Portkey MCP Server capabilities
10 tools

- Create a new budget or usage policy for AI gateway access. Requires a policy name, a budget limit (USD or token count), and optionally the target users or virtual keys to restrict. Returns the created policy details. Use this to enforce cost controls on specific teams or projects using the gateway.
- Remove a budget or usage policy from Portkey. Requires the policy ID. Use this when a project ends or budget constraints are no longer needed.
- Export AI gateway logs for external analysis or compliance reporting. Optionally filters by date range, model, or user. Returns an export ID or download URL. Use this for audit trails, cost reporting, or offline analysis of AI usage patterns.
- Get detailed information about a specific AI gateway log entry. Requires the log ID from list_logs results. Use this for deep debugging of specific AI interactions.
- List all virtual API keys managed by Portkey. Virtual keys map to underlying provider keys (OpenAI, Anthropic, etc.) with metadata, usage limits, and policy associations. Returns key IDs, names, provider targets, current usage, and status. Use this to audit API key usage or identify keys approaching limits.
- List all gateway configurations stored in Portkey. Returns config IDs, names, creation dates, and associated virtual keys. Use this to review how LLM requests are routed or to audit gateway behavior.
- List recent AI gateway logs and traces from Portkey. Returns log IDs, timestamps, model names, token usage, latency, costs, and status codes. Supports pagination via limit/offset. Use this to monitor AI usage, identify expensive calls, or debug latency issues.
- List all LLM models supported by the Portkey gateway. Returns model names, provider names, supported endpoints (chat, embeddings, etc.), and capabilities. Use this to discover which models are routable via your gateway.
- List all budget and usage policies defined in Portkey. Returns policy names, limits, current consumption, and affected users/keys. Use this to review guardrails preventing runaway AI costs.
- Submit user feedback (Like/Dislike) for a specific AI response log. Requires the log ID, rating (LIKE, DISLIKE, or UNLIKE to remove), and optional text feedback. Use this to build RLHF datasets or monitor user satisfaction with AI outputs.
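Under the hood, an MCP client invokes tools like these with a JSON-RPC 2.0 `tools/call` request. A minimal sketch of building one for the `list_logs` tool, using the `limit`/`offset` pagination the tool description mentions (the exact argument schema exposed by the server may differ):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 tools/call request as used by the MCP protocol."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Page through recent gateway logs, 25 entries at a time.
request = build_tool_call("list_logs", {"limit": 25, "offset": 0})
print(request)
```

In practice your MCP client library assembles this message for you; the sketch only shows what travels over the wire when an agent asks for recent logs.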
What the Portkey MCP Server unlocks
What you can do
Connect AI agents to the Portkey AI Gateway for enterprise-grade observability and management:
- Monitor logs and traces of all LLM calls passing through your gateway
- Analyze token usage, latency, and costs across models and teams
- Submit feedback (Likes/Dislikes) to improve model quality and agent performance
- Export logs for audit trails, compliance, and offline cost analysis
- Review gateway configurations including retry policies, fallbacks, and cache settings
- Manage virtual keys to track provider API key usage and limits
- Discover supported models from 1,600+ LLMs available via Portkey
- Enforce budget policies to prevent runaway AI costs per team or project
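As a sketch of the budget-policy workflow above: the policy-creation tool takes a name, one budget limit (USD or tokens), and optionally the virtual keys to restrict. The field names and the helper below are assumptions for illustration, not Portkey's actual schema:

```python
# Hypothetical argument builder for the policy-creation tool. The page only
# specifies that a policy needs a name, a budget limit (USD or token count),
# and optionally the users or virtual keys it restricts; field names here
# are assumed.
def build_policy_args(name, limit_usd=None, limit_tokens=None, virtual_keys=None):
    if (limit_usd is None) == (limit_tokens is None):
        raise ValueError("specify exactly one of limit_usd or limit_tokens")
    args = {"name": name}
    if limit_usd is not None:
        args["budget_limit"] = {"type": "usd", "value": limit_usd}
    else:
        args["budget_limit"] = {"type": "tokens", "value": limit_tokens}
    if virtual_keys:
        args["virtual_keys"] = list(virtual_keys)
    return args

# Cap a research team's monthly gateway spend at $500.
args = build_policy_args("research-monthly-cap", limit_usd=500,
                         virtual_keys=["vk_research"])
```

An agent would pass arguments like these to the policy-creation tool; the validation step mirrors the tool's requirement that a limit be expressed either in USD or in tokens, not both.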
How it works
1. Get your Portkey API key from the dashboard Settings
2. Ask your AI agent to check usage, review costs, or manage policies
3. Natural language commands replace manual Portkey dashboard navigation
4. Unified observability across all your LLM providers (OpenAI, Anthropic, Google, etc.)
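For most MCP-compatible clients, the setup in step 1 comes down to a small JSON config entry pointing at the hosted server. A hedged sketch, where the server name, URL, and top-level key are placeholders (the exact key and auth mechanism vary by client):

```json
{
  "mcpServers": {
    "portkey": {
      "url": "https://your-vinkius-endpoint.example.com/portkey/mcp"
    }
  }
}
```

Once the client connects, the 10 Portkey tools appear in the agent's tool list automatically.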
Who is this for?
Essential for AI platform engineers, LLM ops teams, FinOps analysts, AI governance officers, and engineering managers using multiple LLM providers. Let AI agents monitor gateway health, identify cost spikes, enforce budget policies, and optimize routing. Perfect for organizations spending $10k+/month on LLMs who need granular visibility into usage, latency, and model performance across the enterprise.
Frequently asked questions about the Portkey MCP Server
Which LLM providers does Portkey support?
Portkey supports 1,600+ LLMs including OpenAI, Anthropic, Google, Mistral, Azure OpenAI, AWS Bedrock, Cohere, Hugging Face, and many more. Use the list_models tool to see the full catalog available via your gateway.
How does Portkey help control AI costs?
Portkey provides granular visibility into token usage, latency, and costs per model, team, or virtual key. You can create budget policies with hard limits to prevent runaway spending. The gateway also supports caching to reduce duplicate calls and fallbacks to cheaper models when appropriate.
Can I track feedback on AI responses?
Yes! Portkey allows you to submit Like/Dislike feedback for any logged LLM call. This data helps improve model selection, evaluate agent performance, and build RLHF datasets for fine-tuning.
More in this category
You might also like
Connect Portkey with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Anthropic's native desktop app for Claude with built-in MCP support.
AI-first code editor with integrated LLM-powered coding assistance.
GitHub Copilot in VS Code with Agent mode and MCP support.
Purpose-built IDE for agentic AI coding workflows.
Autonomous AI coding agent that runs inside VS Code.
Anthropic's agentic CLI for terminal-first development.
Python SDK for building production-grade OpenAI agent workflows.
Google's framework for building production AI agents.
Type-safe agent development for Python with first-class MCP support.
TypeScript toolkit for building AI-powered web applications.
TypeScript-native agent framework for modern web stacks.
Python framework for orchestrating collaborative AI agent crews.
Leading Python framework for composable LLM applications.
Data-aware AI agent framework for structured and unstructured sources.
Microsoft's framework for multi-agent collaborative conversations.
Give your AI agents the power of Portkey MCP Server
Production-grade Portkey MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents: connect and start using it immediately.