LiteLLM (LLM Proxy & Spend Tracking) MCP Server for Claude Code
10 tools — connect in under 2 minutes
Claude Code is Anthropic's agentic CLI for terminal-first development. Add LiteLLM (LLM Proxy & Spend Tracking) as an MCP server with one command and Claude Code will discover every tool at runtime, making it ideal for automation pipelines, CI/CD integration, and headless workflows via Vinkius.
Vinkius supports streamable HTTP and SSE.
Vinkius Desktop App
The modern way to manage MCP Servers — no config files, no terminal commands. Install LiteLLM (LLM Proxy & Spend Tracking) and 2,500+ MCP Servers from a single visual interface.
```bash
# Your Vinkius token; get it at cloud.vinkius.com
claude mcp add litellm-llm-proxy-spend-tracking --transport http "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"
```

* Every MCP server runs on Vinkius-managed infrastructure inside AWS - a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.
About LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Connect your LiteLLM Proxy instance to any AI agent and take full control of your LLM infrastructure, load balancing, and spend management through natural conversation.
Claude Code registers LiteLLM (LLM Proxy & Spend Tracking) as an MCP server in a single terminal command. Once connected, Claude Code discovers all 10 tools at runtime and can call them headlessly, which is ideal for CI/CD pipelines, cron jobs, and automated workflows where LiteLLM (LLM Proxy & Spend Tracking) data drives decisions without human intervention.
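For instance, a minimal headless invocation might look like this; the prompt text is illustrative, and it assumes the server was added with the command above:

```bash
# Run Claude Code in non-interactive print mode; the agent can then
# call the LiteLLM MCP tools without an open session.
claude -p "Using LiteLLM (LLM Proxy & Spend Tracking), list all active model fallback paths"
```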
What you can do
- Key Orchestration — Generate and manage proxy API keys to isolate distinct microservices or teams, with precise budget and rate-limit constraints, directly from your agent (see the sketch after this list)
- Model Routing Intelligence — Get detailed info on fallback paths (e.g., OpenAI -> Anthropic -> Groq) and verify exact routing endpoints assigned to your models
- Real-time Spend Audit — Track total USD consumed by specific end-users or teams and monitor budget ceilings to ensure cost-effective AI deployments
- Dynamic Model Control — Inject fresh routing endpoints (e.g., new AWS Bedrock or Azure OpenAI deployments) into your proxy runtime with zero downtime
- Team & Organizational Isolation — Create and manage team profiles to track exact cost limits and operational boundaries per organizational division
- Infrastructure Security — Instantly revoke malicious or leaked keys and remove broken LLM deployments to prevent downstream 500 errors
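These capabilities correspond, broadly, to LiteLLM's proxy management API. As a rough sketch of what the key-orchestration tools drive behind the scenes, here is a direct call to LiteLLM's `/key/generate` endpoint; the host, master key, and team ID are placeholder assumptions for a self-hosted proxy:

```bash
# Sketch: roughly what generate_key does against a LiteLLM proxy.
# localhost:4000, the master key, and the team ID are placeholders.
curl -X POST "http://localhost:4000/key/generate" \
  -H "Authorization: Bearer sk-MASTER-KEY" \
  -H "Content-Type: application/json" \
  -d '{"team_id": "customer-service", "max_budget": 50.0, "budget_duration": "30d", "rpm_limit": 100}'
```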
The LiteLLM (LLM Proxy & Spend Tracking) MCP Server exposes 10 tools through Vinkius. Connect it to Claude Code in under two minutes — no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
How to Connect LiteLLM (LLM Proxy & Spend Tracking) to Claude Code via MCP
Follow these steps to integrate the LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Claude Code.
Install Claude Code
Run `npm install -g @anthropic-ai/claude-code` if not already installed
Add the MCP Server
Run the command above in your terminal
Verify the connection
Run `claude mcp list` to list connected servers, or type `/mcp` inside a session (see the example below)
Start using LiteLLM (LLM Proxy & Spend Tracking)
Ask Claude: "Using LiteLLM (LLM Proxy & Spend Tracking), show me...". 10 tools are ready
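To double-check the registration from the terminal, something like the following should work; the server name matches the `claude mcp add` command above:

```bash
# List all registered MCP servers, then inspect the LiteLLM entry.
claude mcp list
claude mcp get litellm-llm-proxy-spend-tracking
```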
Why Use Claude Code with the LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Claude Code provides unique advantages when paired with LiteLLM (LLM Proxy & Spend Tracking) through the Model Context Protocol.
Single-command setup: `claude mcp add` registers the server instantly, with no config files to edit or applications to restart
Terminal-native workflow means MCP tools integrate seamlessly into shell scripts, CI/CD pipelines, and automated DevOps tasks
Claude Code runs headlessly, enabling unattended batch processing using LiteLLM (LLM Proxy & Spend Tracking) tools in cron jobs or deployment scripts
Built by the same team that created the MCP protocol, ensuring first-class compatibility and the fastest adoption of new protocol features
LiteLLM (LLM Proxy & Spend Tracking) + Claude Code Use Cases
Practical scenarios where Claude Code combined with the LiteLLM (LLM Proxy & Spend Tracking) MCP Server delivers measurable value.
CI/CD integration: embed LiteLLM (LLM Proxy & Spend Tracking) tool calls in your deployment pipeline to validate configurations or fetch secrets before shipping
Headless batch processing: schedule Claude Code to query LiteLLM (LLM Proxy & Spend Tracking) nightly and generate reports without human intervention
Shell scripting: pipe LiteLLM (LLM Proxy & Spend Tracking) outputs into other CLI tools for data transformation, filtering, and aggregation
Infrastructure monitoring: run Claude Code in a cron job to query LiteLLM (LLM Proxy & Spend Tracking) status endpoints and alert on anomalies (see the sketch below)
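A minimal sketch of the scheduled and scripted scenarios above. The schedule, log path, prompt text, and the `.result` field of the JSON output are assumptions to adapt to your setup:

```bash
# Cron entry: nightly spend report written to a log (illustrative).
# 0 2 * * * claude -p "Using LiteLLM, summarize yesterday's spend per team" >> /var/log/litellm-report.log 2>&1

# Shell pipeline: emit structured output and extract the answer with jq.
claude -p "Using LiteLLM, how much has user 'alex_dev' spent today?" \
  --output-format json | jq -r '.result'
```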
LiteLLM (LLM Proxy & Spend Tracking) MCP Tools for Claude Code (10)
These 10 tools become available when you connect LiteLLM (LLM Proxy & Spend Tracking) to Claude Code via MCP:
create_model
Add fresh routing endpoints (e.g., new Bedrock Llama 4 endpoints) to the proxy with zero downtime
create_team
Create a team profile with exact cost limits per organizational division
create_user
Register end-user identities that bridge Vinkius with proxy logs
delete_key
Delete an existing LLM proxy key entirely
delete_model
Remove routed LLM deployments to prevent downstream 500 errors
generate_key
Generate a new proxy API key to isolate distinct microservices or teams
get_key_info
Get configuration and budget bounds for a specific LiteLLM API key
get_model_info
Get model endpoints and exact fallback paths (e.g., OpenAI -> Anthropic)
get_team_info
Get team configuration and budget bounds by team UUID
get_user_info
Get per-end-user details, including total USD consumed
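Like the key-management tools, the read-only tools appear to mirror LiteLLM's proxy admin endpoints. A hedged sketch of the kind of call behind `get_model_info`, again assuming a self-hosted proxy at a placeholder address:

```bash
# Sketch: the proxy endpoint get_model_info's data comes from.
# Host and master key are placeholders.
curl "http://localhost:4000/model/info" \
  -H "Authorization: Bearer sk-MASTER-KEY"
```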
Example Prompts for LiteLLM (LLM Proxy & Spend Tracking) in Claude Code
Ready-to-use prompts you can give your Claude Code agent to start working with LiteLLM (LLM Proxy & Spend Tracking) immediately.
"List all active model fallback paths in LiteLLM"
"Generate a new API key for the 'Customer-Service' team with a $50 monthly budget"
"How much has user 'alex_dev' spent on LLM tokens today?"
Troubleshooting LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Claude Code
Common issues when connecting LiteLLM (LLM Proxy & Spend Tracking) to Claude Code through Vinkius, and how to resolve them.
Command not found: claude
Run `npm install -g @anthropic-ai/claude-code`
Connection timeout
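If the connection times out, one generic first check (an assumption, not an official diagnostic) is to confirm the Vinkius edge endpoint is reachable with your token before re-adding the server:

```bash
# Quick reachability check against the edge URL used in claude mcp add.
# Substitute your real token for [YOUR_TOKEN_HERE].
curl -i "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"
```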
LiteLLM (LLM Proxy & Spend Tracking) + Claude Code FAQ
Common questions about integrating LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Claude Code.
How do I add an MCP server to Claude Code?
Run `claude mcp add litellm-llm-proxy-spend-tracking --transport http "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"` in your terminal. Claude Code registers the server and discovers all tools immediately.
Can Claude Code run MCP tools in headless mode?
Yes. Claude Code can run non-interactively (for example, `claude -p "..."`), so MCP tools work in cron jobs, CI/CD pipelines, and deployment scripts without human intervention.
How do I list all connected MCP servers?
Run `claude mcp list` in your terminal to see all registered servers and their status, or type `/mcp` inside an active Claude Code session.
Connect LiteLLM (LLM Proxy & Spend Tracking) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Claude Desktop: Anthropic's native desktop app for Claude with built-in MCP support.
Cursor: AI-first code editor with integrated LLM-powered coding assistance.
VS Code: GitHub Copilot in VS Code with Agent mode and MCP support.
Windsurf: Purpose-built IDE for agentic AI coding workflows.
Cline: Autonomous AI coding agent that runs inside VS Code.
Claude Code: Anthropic's agentic CLI for terminal-first development.
OpenAI Agents SDK: Python SDK for building production-grade OpenAI agent workflows.
Google ADK: Google's framework for building production AI agents.
Pydantic AI: Type-safe agent development for Python with first-class MCP support.
Vercel AI SDK: TypeScript toolkit for building AI-powered web applications.
Mastra: TypeScript-native agent framework for modern web stacks.
CrewAI: Python framework for orchestrating collaborative AI agent crews.
LangChain: Leading Python framework for composable LLM applications.
LlamaIndex: Data-aware AI agent framework for structured and unstructured sources.
AutoGen: Microsoft's framework for multi-agent collaborative conversations.
Connect LiteLLM (LLM Proxy & Spend Tracking) to Claude Code
Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.
