
LiteLLM (LLM Proxy & Spend Tracking) MCP Server

Built by Vinkius · GDPR Tools · Free for Subscribers

Manage your LLM gateway via LiteLLM — generate API keys, track spending, and orchestrate model fallback paths.

Vinkius supports streamable HTTP and SSE.

Fully managed Vinkius servers:

  • 60% token savings
  • Enterprise-grade security
  • IAM access control
  • EU AI Act compliant
  • DLP data protection
  • Sandboxed V8 isolates
  • Ed25519 audit chain
  • <40ms kill switch

Stream every event to Splunk, Datadog, or your own webhook in real time.

* Every MCP server runs on Vinkius-managed infrastructure inside AWS - a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure

What is the LiteLLM MCP Server?

The LiteLLM MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to LiteLLM via 10 tools: manage your LLM gateway, generate API keys, track spending, and orchestrate model fallback paths. Powered by Vinkius: no API keys, no infrastructure, connect in under two minutes.

Built-in capabilities (10)

create_model · create_team · create_user · delete_key · delete_model · generate_key · get_key_info · get_model_info · get_team_info · get_user_info

Tools for your AI Agents to operate LiteLLM

Ask your AI agent "List all active model fallback paths in LiteLLM" and get the answer without opening a single dashboard. With 10 tools connected to real LiteLLM data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.

Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.

Why teams choose Vinkius

One subscription gives you access to thousands of MCP servers - and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.

Build your own MCP Server with our secure development framework →

Vinkius works with every AI agent you already use

…and any MCP-compatible client

Cursor · Claude · OpenAI · VS Code · Copilot · Google · Lovable · Mistral · AWS

LiteLLM (LLM Proxy & Spend Tracking) MCP Server capabilities

10 tools
create_model

Add new routing endpoints to the proxy (e.g., new Bedrock Llama 4 endpoints)

create_team

Create a team with its own cost limits, isolating spend per organizational division

create_user

Create end-user identities that link Vinkius requests to proxy logs

delete_key

Delete an existing LLM proxy key entirely

delete_model

Remove LLM deployments from routing, preventing downstream 500 errors

generate_key

Generate a new proxy API key to isolate distinct microservices or teams

get_key_info

Get configuration and budget bounds for a specific LiteLLM API Key

get_model_info

List model endpoints and trace fallback paths (e.g., OpenAI -> Anthropic)

get_team_info

Look up a team by its UUID, including budget bounds and associated users

get_user_info

Look up an end user, including total USD consumed
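
Under MCP, each capability above is exposed as a named tool that the client invokes with a JSON-RPC tools/call request. As a rough illustration of the wire format (the tool name comes from the list above; the key value is a hypothetical placeholder), an agent inspecting a key sends something like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_key_info",
    "arguments": { "key": "sk-example-1234" }
  }
}
```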

What the LiteLLM (LLM Proxy & Spend Tracking) MCP Server unlocks

Connect your LiteLLM Proxy instance to any AI agent and take full control of your LLM infrastructure, load balancing, and spend management through natural conversation.

What you can do

  • Key Orchestration — Generate and manage proxy API keys to isolate distinct microservices or teams, including precise budget and rate limit constraints, directly from your agent (see the sketch after this list)
  • Model Routing Intelligence — Get detailed info on fallback paths (e.g., OpenAI -> Anthropic -> Groq) and verify exact routing endpoints assigned to your models
  • Real-time Spend Audit — Track total USD consumed by specific end-users or teams and monitor budget ceilings to ensure cost-effective AI deployments
  • Dynamic Model Control — Inject fresh routing endpoints (e.g., new AWS Bedrock or Azure OpenAI deployments) into your proxy runtime with zero downtime
  • Team & Organizational Isolation — Create and manage team profiles to track exact cost limits and operational boundaries per organizational division
  • Infrastructure Security — Instantly revoke malicious or leaked keys and remove broken LLM deployments to prevent downstream 500 errors
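
These tools map onto LiteLLM's proxy admin API. As a minimal sketch of roughly what generate_key does under the hood (assuming a standard LiteLLM deployment; the URL, master key, alias, and model names below are hypothetical placeholders):

```python
import requests

LITELLM_URL = "https://litellm.example.com"  # hypothetical proxy URL
MASTER_KEY = "sk-master-..."                 # your LiteLLM master key

# Create a scoped key for one microservice: $25 budget per 30 days, 100 RPM.
resp = requests.post(
    f"{LITELLM_URL}/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={
        "key_alias": "checkout-service",
        "models": ["gpt-4o", "claude-3-5-sonnet"],  # must match your proxy's model names
        "max_budget": 25.0,
        "budget_duration": "30d",
        "rpm_limit": 100,
    },
)
resp.raise_for_status()
print(resp.json()["key"])  # the newly minted proxy key
```

With the MCP server, your agent performs the same operation from a plain-language request, and the master key stays with Vinkius instead of being pasted into the model's context.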

How it works

1. Subscribe to this server
2. Enter your LiteLLM API URL and Master Key
3. Start managing your LLM gateway from Claude, Cursor, or any MCP-compatible client
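
In Cursor, for example, connecting a remote MCP server is a one-line config entry; the URL below is a hypothetical placeholder for the endpoint Vinkius provides after you subscribe (both streamable HTTP and SSE are supported):

```json
{
  "mcpServers": {
    "litellm": {
      "url": "https://mcp.vinkius.example/litellm"
    }
  }
}
```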

Who is this for?

  • Platform Engineers — manage global LLM gateway configurations and audit model fallback paths through natural conversation
  • AI Ops Teams — monitor real-time AI spending and adjust team budgets across multiple LLM providers
  • Backend Developers — generate sub-keys for new microservices and verify model routing availability without leaving your IDE

Frequently asked questions about the LiteLLM (LLM Proxy & Spend Tracking) MCP Server

01

Can I check the budget and rate limits for a specific proxy key?

Yes. Use the get_key_info tool with the specific Key ID. Your agent will retrieve the exact rate limits, budget constraints, and current RPM usage associated with that token.
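
For reference, this maps to a single lookup against LiteLLM's key info endpoint. A minimal sketch in Python, assuming the stock /key/info route (the URL and key values are hypothetical placeholders):

```python
import requests

resp = requests.get(
    "https://litellm.example.com/key/info",          # hypothetical proxy URL
    headers={"Authorization": "Bearer sk-master-..."},
    params={"key": "sk-example-1234"},               # the key to inspect
)
info = resp.json()["info"]
print(info.get("max_budget"), info.get("spend"), info.get("rpm_limit"))
```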

02

How do I see the model fallback paths configured in my proxy?

The get_model_info tool allows your agent to extract the global model directory. You'll see the exact fallback chains (e.g., if OpenAI fails, use Anthropic) and the physical endpoints assigned to each model name.
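
Under the hood this corresponds to LiteLLM's model info endpoint, which returns each routed model alongside its physical endpoint parameters. A minimal sketch, assuming the stock /model/info route (the URL is a hypothetical placeholder; the fallback chains themselves are defined in your proxy's router settings):

```python
import requests

resp = requests.get(
    "https://litellm.example.com/model/info",        # hypothetical proxy URL
    headers={"Authorization": "Bearer sk-master-..."},
)
for model in resp.json()["data"]:
    # Each entry pairs a public model name with its underlying deployment.
    print(model["model_name"], "->", model["litellm_params"].get("model"))
```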

03

Can my agent create a new team to track specific division costs?

Absolutely. Use the create_team tool and provide a JSON payload defining the team name and optional budget limits. Your agent will provision the new team identity in LiteLLM, allowing for precise organizational cost tracking.
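
The equivalent direct call is a single POST to LiteLLM's team creation endpoint. A minimal sketch, assuming the stock /team/new route (the URL, team name, and budget are hypothetical placeholders):

```python
import requests

resp = requests.post(
    "https://litellm.example.com/team/new",          # hypothetical proxy URL
    headers={"Authorization": "Bearer sk-master-..."},
    json={
        "team_alias": "growth-division",  # human-readable team name
        "max_budget": 500.0,              # USD ceiling for the whole team
    },
)
print(resp.json()["team_id"])  # UUID you can pass to get_team_info later
```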

Give your AI agents the power of LiteLLM MCP Server

Production-grade LiteLLM (LLM Proxy & Spend Tracking) MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using it immediately.