LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Manage your LLM gateway via LiteLLM — generate API keys, track spending, and orchestrate model fallback paths.
Vinkius supports streamable HTTP and SSE.
* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure page.
What is the LiteLLM MCP Server?
The LiteLLM MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to LiteLLM via 10 tools. Manage your LLM gateway via LiteLLM — generate API keys, track spending, and orchestrate model fallback paths. Powered by Vinkius: no API keys, no infrastructure, connect in under 2 minutes.
Built-in capabilities (10)
Tools for your AI Agents to operate LiteLLM
Ask your AI agent "List all active model fallback paths in LiteLLM" and get the answer without opening a single dashboard. With 10 tools connected to real LiteLLM data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.
Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
Why teams choose Vinkius
One subscription gives you access to thousands of MCP servers - and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.
Build your own MCP Server with our secure development framework →
Vinkius works with every AI agent you already use
…and any MCP-compatible client
LiteLLM (LLM Proxy & Spend Tracking) MCP Server capabilities
10 tools
- Add fresh routing endpoints to the proxy (e.g., new Bedrock Llama 4 endpoints)
- Create team profiles for organizational isolation, tracking exact cost limits per division
- Create specific End-User identities that bridge Vinkius with proxy logs
- Delete an existing LLM proxy key entirely
- Remove routed LLM deployments to prevent downstream 500 errors
- Generate a new proxy API key to isolate distinct microservices or teams
- Get configuration and budget bounds for a specific LiteLLM API key
- Get model endpoints and trace exact fallback paths (e.g., OpenAI -> Anthropic)
- Get team configuration and budget bounds for its routing users via Team UUID
- Return End-User records tracking total USD consumed
What the LiteLLM (LLM Proxy & Spend Tracking) MCP Server unlocks
Connect your LiteLLM Proxy instance to any AI agent and take full control of your LLM infrastructure, load balancing, and spend management through natural conversation.
What you can do
- Key Orchestration — Generate and manage proxy API keys to isolate distinct microservices or teams, including precise budget and rate limit constraints, directly from your agent (see the sketch after this list)
- Model Routing Intelligence — Get detailed info on fallback paths (e.g., OpenAI -> Anthropic -> Groq) and verify exact routing endpoints assigned to your models
- Real-time Spend Audit — Track total USD consumed by specific end-users or teams and monitor budget ceilings to ensure cost-effective AI deployments
- Dynamic Model Control — Inject fresh routing endpoints (e.g., new AWS Bedrock or Azure OpenAI deployments) into your proxy runtime with zero downtime
- Team & Organizational Isolation — Create and manage team profiles to track exact cost limits and operational boundaries per organizational division
- Infrastructure Security — Instantly revoke malicious or leaked keys and remove broken LLM deployments to prevent downstream 500 errors
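To make the key-orchestration flow concrete, here is a minimal sketch of the equivalent call against LiteLLM's own key-management API. The proxy URL, key values, and budget numbers are placeholders, and the parameter names mirror LiteLLM's /key/generate endpoint as commonly documented — they may differ across LiteLLM versions:

```python
import requests

LITELLM_URL = "https://litellm.example.com"  # placeholder proxy URL
MASTER_KEY = "sk-master-..."                 # placeholder master key

# Issue a scoped proxy key for one microservice with a budget and rate limit.
payload = {
    "key_alias": "checkout-service",   # hypothetical service name
    "models": ["gpt-4o-mini"],         # models this key may call
    "max_budget": 25.0,                # USD ceiling (example value)
    "budget_duration": "30d",          # budget reset window
    "rpm_limit": 60,                   # requests per minute
}
resp = requests.post(
    f"{LITELLM_URL}/key/generate",
    json=payload,
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=10,
)
resp.raise_for_status()
print("new key:", resp.json()["key"])
```

Through the MCP server, your agent performs the same operation conversationally and reports the generated key and its limits back in the chat.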
How it works
1. Subscribe to this server
2. Enter your LiteLLM API URL and Master Key (a quick reachability check is sketched after these steps)
3. Start managing your LLM gateway from Claude, Cursor, or any MCP-compatible client
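Before step 2, it can help to confirm that your proxy is reachable. A minimal sketch, assuming LiteLLM's standard readiness endpoint and a placeholder URL and master key (the exact path can vary by LiteLLM version):

```python
import requests

LITELLM_URL = "https://litellm.example.com"  # placeholder proxy URL
MASTER_KEY = "sk-master-..."                 # placeholder master key

# Quick reachability check before wiring the proxy into Vinkius.
resp = requests.get(
    f"{LITELLM_URL}/health/readiness",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=5,
)
print(resp.status_code, resp.json())
```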
Who is this for?
- Platform Engineers — manage global LLM gateway configurations and audit model fallback paths through natural conversation
- AI Ops Teams — monitor real-time AI spending and adjust team budgets across multiple LLM providers
- Backend Developers — generate sub-keys for new microservices and verify model routing availability without leaving your IDE
Frequently asked questions about the LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Can I check the budget and rate limits for a specific proxy key?
Yes. Use the get_key_info tool with the specific Key ID. Your agent will retrieve the exact rate limits, budget constraints, and current RPM usage associated with that token.
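For reference, the data this tool surfaces comes from LiteLLM's key-management API. A minimal sketch of the equivalent direct call, assuming a placeholder proxy URL and master key and the standard /key/info endpoint (field names vary by LiteLLM version):

```python
import requests

LITELLM_URL = "https://litellm.example.com"  # placeholder proxy URL
MASTER_KEY = "sk-master-..."                 # placeholder master key

# Fetch budget and rate-limit settings for a specific proxy key.
resp = requests.get(
    f"{LITELLM_URL}/key/info",
    params={"key": "sk-child-key-..."},      # the key ID to inspect
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=10,
)
resp.raise_for_status()
info = resp.json().get("info", {})
print(info.get("max_budget"), info.get("spend"), info.get("rpm_limit"))
```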
How do I see the model fallback paths configured in my proxy?
The get_model_info tool allows your agent to extract the global model directory. You'll see the exact fallback chains (e.g., if OpenAI fails, use Anthropic) and the physical endpoints assigned to each model name.
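Under the hood this corresponds to LiteLLM's model directory. A minimal sketch of the equivalent direct call, assuming the standard /model/info endpoint and placeholder credentials; the response shape, and where fallback chains appear, depend on your LiteLLM version and router settings:

```python
import requests

LITELLM_URL = "https://litellm.example.com"  # placeholder proxy URL
MASTER_KEY = "sk-master-..."                 # placeholder master key

# List public model names and the concrete deployments behind them.
resp = requests.get(
    f"{LITELLM_URL}/model/info",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=10,
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    # litellm_params holds the underlying provider/deployment details.
    print(model.get("model_name"), "->", model.get("litellm_params", {}).get("model"))
```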
Can my agent create a new team to track specific division costs?
Absolutely. Use the create_team tool and provide a JSON payload defining the team name and optional budget limits. Your agent will provision the new team identity in LiteLLM, allowing for precise organizational cost tracking.
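As an illustration, the payload might look like the following. The field names mirror LiteLLM's /team/new parameters and are an assumption about your LiteLLM version; the team name and budget values are made up:

```python
# Illustrative create_team payload (hypothetical values).
new_team = {
    "team_alias": "data-science",    # division name to track
    "max_budget": 500.0,             # USD ceiling (example value)
    "budget_duration": "30d",        # budget reset window
    "models": ["gpt-4o", "claude-3-5-sonnet"],  # models the team may use
}
```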
Connect LiteLLM (LLM Proxy & Spend Tracking) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
- Anthropic's native desktop app for Claude with built-in MCP support.
- AI-first code editor with integrated LLM-powered coding assistance.
- GitHub Copilot in VS Code with Agent mode and MCP support.
- Purpose-built IDE for agentic AI coding workflows.
- Autonomous AI coding agent that runs inside VS Code.
- Anthropic's agentic CLI for terminal-first development.
- Python SDK for building production-grade OpenAI agent workflows.
- Google's framework for building production AI agents.
- Type-safe agent development for Python with first-class MCP support.
- TypeScript toolkit for building AI-powered web applications.
- TypeScript-native agent framework for modern web stacks.
- Python framework for orchestrating collaborative AI agent crews.
- Leading Python framework for composable LLM applications.
- Data-aware AI agent framework for structured and unstructured sources.
- Microsoft's framework for multi-agent collaborative conversations.
Give your AI agents the power of LiteLLM MCP Server
Production-grade LiteLLM (LLM Proxy & Spend Tracking) MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.