2,500+ MCP servers ready to use
Vinkius
MCP VERIFIED · PRODUCTION READY · VINKIUS GUARANTEED
Portkey

Portkey MCP Server

Built by Vinkius · GDPR Tools · Free for Subscribers

AI gateway observability: monitor logs, costs, and manage LLM configurations via agents.

Vinkius supports streamable HTTP and SSE.

AI Agent → Vinkius → Portkey
High Security · Kill Switch · Plug and Play

Fully Managed Vinkius Servers

  • 60%: Token savings
  • High Security: Enterprise-grade
  • IAM: Access control
  • EU AI Act: Compliant
  • DLP: Data protection
  • V8 Isolate: Sandboxed
  • Ed25519: Audit chain
  • <40ms: Kill switch
Stream every event to Splunk, Datadog, or your own webhook in real-time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS - a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure

What is the Portkey MCP Server?

The Portkey MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to Portkey via 10 tools for AI gateway observability: monitor logs, track costs, and manage LLM configurations through your agents. Powered by Vinkius, there are no API keys to manage and no infrastructure to run; connect in under 2 minutes.

Built-in capabilities (10)

create_policy · delete_policy · export_logs · get_log_details · get_virtual_keys · list_configs · list_logs · list_models · list_policies · submit_feedback

Tools for your AI Agents to operate Portkey

Ask your AI agent "Show me the most expensive LLM calls from the last 24 hours" and get the answer without opening a single dashboard. With 10 tools connected to real Portkey data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.

Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
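Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. As a minimal sketch (the request shape follows the MCP specification; the `list_logs` arguments shown are taken from the tool descriptions below), here is what an agent effectively sends when asked for recent gateway logs:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 payload for the MCP `tools/call` method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Ask for the 5 most recent gateway logs.
payload = build_tool_call("list_logs", {"limit": 5, "offset": 0})
print(json.dumps(payload, indent=2))
```

The transport (streamable HTTP or SSE) and authentication are handled by the Vinkius runtime, so your agent never constructs this by hand.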

Why teams choose Vinkius

One subscription gives you access to thousands of MCP servers - and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.

Build your own MCP Server with our secure development framework →

Vinkius works with every AI agent you already use

…and any MCP-compatible client

Cursor · Claude · OpenAI · VS Code · Copilot · Google · Lovable · Mistral · AWS

Portkey MCP Server capabilities

10 tools
create_policy

Create a new budget or usage policy for AI gateway access. Requires a policy name, a budget limit (USD or token count), and optionally the target users or virtual keys to restrict. Returns the created policy details. Use this to enforce cost controls on specific teams or projects using the gateway.
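A quick sketch of the arguments an agent might pass to create_policy. The field names here are illustrative assumptions, not Portkey's exact schema; what the tool requires, per the description above, is a name and a budget limit, with optional key/user targets:

```python
# Illustrative create_policy arguments (field names are assumptions).
def validate_policy_args(args: dict) -> dict:
    """Check the required fields before calling create_policy."""
    if not args.get("name"):
        raise ValueError("policy name is required")
    if args.get("budget_limit", 0) <= 0:
        raise ValueError("budget limit (USD or tokens) must be positive")
    return args

policy = validate_policy_args({
    "name": "marketing-q3",
    "budget_limit": 500,              # USD cap (could also be a token count)
    "virtual_keys": ["vk_marketing"], # optional: restrict to specific keys
})
```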

delete_policy

Remove a budget or usage policy from Portkey. Requires the policy ID. Use this when a project ends or budget constraints are no longer needed.

export_logs

Export AI gateway logs for external analysis or compliance reporting. Optionally filters by date range, model, or user. Returns an export ID or download URL. Use this for audit trails, cost reporting, or offline analysis of AI usage patterns.

get_log_details

Get detailed information about a specific AI gateway log entry. Requires the log ID from list_logs results. Use this for deep debugging of specific AI interactions.

get_virtual_keys

List all virtual API keys managed by Portkey. Virtual keys map to underlying provider keys (OpenAI, Anthropic, etc.) with metadata, usage limits, and policy associations. Returns key IDs, names, provider targets, current usage, and status. Use this to audit API key usage or identify keys approaching limits.

list_configs

List all gateway configurations stored in Portkey. Returns config IDs, names, creation dates, and associated virtual keys. Use this to review how LLM requests are routed or to audit gateway behavior.

list_logs

List recent AI gateway logs and traces from Portkey. Returns log IDs, timestamps, model names, token usage, latency, costs, and status codes. Supports pagination via limit/offset. Use this to monitor AI usage, identify expensive calls, or debug latency issues.
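Since list_logs paginates via limit/offset, an agent that needs a complete picture walks pages until a short page comes back. A minimal sketch, with `call_tool` standing in for whatever transport your client uses (the fake in-memory transport below just exercises the loop):

```python
from typing import Callable

def fetch_all_logs(call_tool: Callable[[str, dict], list], page_size: int = 100) -> list:
    """Page through list_logs via limit/offset until a short page signals the end."""
    logs, offset = [], 0
    while True:
        page = call_tool("list_logs", {"limit": page_size, "offset": offset})
        logs.extend(page)
        if len(page) < page_size:
            return logs
        offset += page_size

# Fake transport over 250 log stubs, purely to demonstrate the loop.
fake_db = [{"id": i} for i in range(250)]
fake_call = lambda name, args: fake_db[args["offset"]:args["offset"] + args["limit"]]
print(len(fetch_all_logs(fake_call)))  # 250
```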

list_models

List all LLM models supported by the Portkey gateway. Returns model names, provider names, supported endpoints (chat, embeddings, etc.), and capabilities. Use this to discover which models are routable via your gateway.

list_policies

List all budget and usage policies defined in Portkey. Returns policy names, limits, current consumption, and affected users/keys. Use this to review guardrails preventing runaway AI costs.

submit_feedback

Submit user feedback (Like/Dislike) for a specific AI response log. Requires the log ID, a rating (LIKE, DISLIKE, or UNLIKE to remove), and optional text feedback. Use this to build RLHF datasets or monitor user satisfaction with AI outputs.

What the Portkey MCP Server unlocks

What you can do

Connect AI agents to the Portkey AI Gateway for enterprise-grade observability and management:

  • Monitor logs and traces of all LLM calls passing through your gateway
  • Analyze token usage, latency, and costs across models and teams
  • Submit feedback (Likes/Dislikes) to improve model quality and agent performance
  • Export logs for audit trails, compliance, and offline cost analysis
  • Review gateway configurations including retry policies, fallbacks, and cache settings
  • Manage virtual keys to track provider API key usage and limits
  • Discover supported models from 1,600+ LLMs available via Portkey
  • Enforce budget policies to prevent runaway AI costs per team or project
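The cost-analysis items above boil down to grouping list_logs rows by model and summing their cost. A sketch of that aggregation (the `model` and `cost_usd` field names are illustrative; list_logs is documented to return model names and costs per entry):

```python
from collections import defaultdict

def cost_by_model(logs: list[dict]) -> dict[str, float]:
    """Sum per-model spend from list_logs-style rows (field names illustrative)."""
    totals: dict[str, float] = defaultdict(float)
    for row in logs:
        totals[row["model"]] += row["cost_usd"]
    return dict(totals)

sample = [
    {"model": "gpt-4o", "cost_usd": 0.50},
    {"model": "claude-sonnet", "cost_usd": 0.25},
    {"model": "gpt-4o", "cost_usd": 0.25},
]
print(cost_by_model(sample))  # {'gpt-4o': 0.75, 'claude-sonnet': 0.25}
```

The same grouping extends to per-team or per-virtual-key breakdowns by switching the grouping key.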

How it works

1. Get your Portkey API key from the dashboard Settings
2. Ask your AI agent to check usage, review costs, or manage policies
3. Natural language commands replace manual Portkey dashboard navigation
4. Unified observability across all your LLM providers (OpenAI, Anthropic, Google, etc.)

Who is this for?

Essential for AI platform engineers, LLM ops teams, FinOps analysts, AI governance officers, and engineering managers using multiple LLM providers. Let AI agents monitor gateway health, identify cost spikes, enforce budget policies, and optimize routing. Perfect for organizations spending $10k+/month on LLMs who need granular visibility into usage, latency, and model performance across the enterprise.

Frequently asked questions about the Portkey MCP Server

01

Which LLM providers does Portkey support?

Portkey supports 1,600+ LLMs including OpenAI, Anthropic, Google, Mistral, Azure OpenAI, AWS Bedrock, Cohere, Hugging Face, and many more. Use the list_models tool to see the full catalog available via your gateway.

02

How does Portkey help control AI costs?

Portkey provides granular visibility into token usage, latency, and costs per model, team, or virtual key. You can create budget policies with hard limits to prevent runaway spending. The gateway also supports caching to reduce duplicate calls and fallbacks to cheaper models when appropriate.

03

Can I track feedback on AI responses?

Yes! Portkey allows you to submit Like/Dislike feedback for any logged LLM call. This data helps improve model selection, evaluate agent performance, and build RLHF datasets for fine-tuning.

More in this category

You might also like

Give your AI agents the power of Portkey MCP Server

Production-grade Portkey MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.