LiteLLM (LLM Proxy & Spend Tracking) MCP Server for Windsurf
10 tools — connect in under 2 minutes
Windsurf brings agentic AI coding to a purpose-built IDE. Connect LiteLLM (LLM Proxy & Spend Tracking) through Vinkius and Cascade will auto-discover every tool: ask questions, generate code, and act on live data without leaving your editor.
Vinkius supports Streamable HTTP and SSE.
Vinkius Desktop App
The modern way to manage MCP Servers — no config files, no terminal commands. Install LiteLLM (LLM Proxy & Spend Tracking) and 2,500+ MCP Servers from a single visual interface.




{
  "mcpServers": {
    "litellm-llm-proxy-spend-tracking": {
      "url": "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"
    }
  }
}

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.
About LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Connect your LiteLLM Proxy instance to any AI agent and take full control of your LLM infrastructure, load balancing, and spend management through natural conversation.
Windsurf's Cascade agent chains multiple LiteLLM (LLM Proxy & Spend Tracking) tool calls autonomously: query data, analyze results, and generate code in a single agentic session. Paste the Vinkius Edge URL, reload, and all 10 tools are immediately available. Real-time tool feedback appears inline, so you see API responses directly in your editor.
What you can do
- Key Orchestration — Generate and manage proxy API keys to isolate distinct microservices or teams, with precise budget and rate limit constraints, directly from your agent (see the example call after this list)
- Model Routing Intelligence — Get detailed info on fallback paths (e.g., OpenAI -> Anthropic -> Groq) and verify exact routing endpoints assigned to your models
- Real-time Spend Audit — Track total USD consumed by specific end-users or teams and monitor budget ceilings to ensure cost-effective AI deployments
- Dynamic Model Control — Add new routing endpoints (e.g., new AWS Bedrock or Azure OpenAI deployments) to your proxy runtime with zero downtime
- Team & Organizational Isolation — Create and manage team profiles to track exact cost limits and operational boundaries per organizational division
- Infrastructure Security — Instantly revoke compromised or leaked keys and remove broken LLM deployments to prevent downstream 500 errors
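To make the key-orchestration flow concrete, here is a sketch of the JSON-RPC tools/call request an MCP client sends when Cascade generates a scoped key. The argument names (team_id, max_budget, budget_duration, rpm_limit) are illustrative assumptions; the authoritative schema is whatever the server's generate_key tool advertises.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_key",
    "arguments": {
      "team_id": "customer-service",
      "max_budget": 50,
      "budget_duration": "30d",
      "rpm_limit": 100
    }
  }
}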
The LiteLLM (LLM Proxy & Spend Tracking) MCP Server exposes 10 tools through Vinkius. Connect it to Windsurf in under two minutes — no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
How to Connect LiteLLM (LLM Proxy & Spend Tracking) to Windsurf via MCP
Follow these steps to integrate the LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Windsurf.
Open MCP Settings
Go to Settings → MCP Configuration or press Cmd+Shift+P and search "MCP"
Add the server
Paste the JSON configuration above into mcp_config.json
Save and reload
Windsurf will detect the new server automatically
Start using LiteLLM (LLM Proxy & Spend Tracking)
Open Cascade and ask: "Using LiteLLM (LLM Proxy & Spend Tracking), help me...". All 10 tools are available.
Why Use Windsurf with the LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Windsurf provides unique advantages when paired with LiteLLM (LLM Proxy & Spend Tracking) through the Model Context Protocol.
Windsurf's Cascade agent autonomously chains multiple tool calls in sequence, solving complex multi-step tasks without manual intervention
Purpose-built for agentic workflows. Cascade understands context across your entire codebase and integrates MCP tools natively
JSON-based configuration means zero code changes: paste a URL, reload, and all 10 tools are immediately available
Real-time tool feedback is displayed inline, so you see API responses directly in your editor without switching contexts
LiteLLM (LLM Proxy & Spend Tracking) + Windsurf Use Cases
Practical scenarios where Windsurf combined with the LiteLLM (LLM Proxy & Spend Tracking) MCP Server delivers measurable value.
Automated code generation: ask Cascade to fetch data from LiteLLM (LLM Proxy & Spend Tracking) and generate models, types, or handlers based on real API responses
Live debugging: query LiteLLM (LLM Proxy & Spend Tracking) tools mid-session to inspect production data while debugging without leaving the editor
Documentation generation: pull schema information from LiteLLM (LLM Proxy & Spend Tracking) and have Cascade generate comprehensive API docs automatically
Rapid prototyping: combine LiteLLM (LLM Proxy & Spend Tracking) data with Cascade's code generation to scaffold entire features in minutes
LiteLLM (LLM Proxy & Spend Tracking) MCP Tools for Windsurf (10)
These 10 tools become available when you connect LiteLLM (LLM Proxy & Spend Tracking) to Windsurf via MCP:
create_model
Add new routing endpoints (e.g., new Bedrock Llama 4 endpoints)
create_team
Create a new team profile with its own cost limits and organizational boundaries
create_user
Create end-user identities that link proxy requests and spend logs to specific users
delete_key
Delete an existing LLM proxy key entirely
delete_model
Remove an LLM deployment from routing to prevent downstream 500 errors
generate_key
Generate a new proxy API key to isolate distinct microservices or teams
get_key_info
Get configuration and budget bounds for a specific LiteLLM API key
get_model_info
Get model deployment details, including fallback paths such as OpenAI -> Anthropic
get_team_info
Get a team's configuration and budget bounds by team UUID
get_user_info
Get end-user details, including total USD spend
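As an illustration of how Cascade invokes one of these tools, the request below sketches a hypothetical tools/call for get_user_info. The user_id argument name is an assumption; the response comes back as MCP text content carrying the user's spend details.

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_user_info",
    "arguments": { "user_id": "alex_dev" }
  }
}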
Example Prompts for LiteLLM (LLM Proxy & Spend Tracking) in Windsurf
Ready-to-use prompts you can give your Windsurf agent to start working with LiteLLM (LLM Proxy & Spend Tracking) immediately.
"List all active model fallback paths in LiteLLM"
"Generate a new API key for the 'Customer-Service' team with a $50 monthly budget"
"How much has user 'alex_dev' spent on LLM tokens today?"
Troubleshooting LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Windsurf
Common issues when connecting LiteLLM (LLM Proxy & Spend Tracking) to Windsurf through Vinkius, and how to resolve them.
Server not connecting
LiteLLM (LLM Proxy & Spend Tracking) + Windsurf FAQ
Common questions about integrating LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Windsurf.
How does Windsurf discover MCP tools?
Windsurf reads the mcp_config.json file on startup and connects to each configured server via Streamable HTTP. Tools are listed in the MCP panel and available to Cascade automatically.
Can Cascade chain multiple MCP tool calls?
Yes. Cascade autonomously chains multiple tool calls in sequence, so it can query data, analyze results, and act on them in a single agentic session.
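Under the hood, that discovery step is a small JSON-RPC exchange. A sketch, with the response truncated to a single tool and field values borrowed from the tool list above:

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "generate_key",
        "description": "Generate a new proxy API key to isolate distinct microservices or teams",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}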
Does Windsurf support multiple MCP servers?
Yes. Add each server as its own entry under "mcpServers" in mcp_config.json. Each server's tools appear in the MCP panel, and Cascade can use tools from different servers in a single flow.
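A sketch of what a two-server mcp_config.json could look like; the second entry ("another-mcp-server" and its URL) is a placeholder for whatever other server you configure:

{
  "mcpServers": {
    "litellm-llm-proxy-spend-tracking": {
      "url": "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"
    },
    "another-mcp-server": {
      "url": "https://example.com/mcp"
    }
  }
}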
Connect LiteLLM (LLM Proxy & Spend Tracking) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Claude Desktop: Anthropic's native desktop app for Claude with built-in MCP support.
Cursor: AI-first code editor with integrated LLM-powered coding assistance.
VS Code (GitHub Copilot): GitHub Copilot in VS Code with Agent mode and MCP support.
Windsurf: Purpose-built IDE for agentic AI coding workflows.
Cline: Autonomous AI coding agent that runs inside VS Code.
Claude Code: Anthropic's agentic CLI for terminal-first development.
OpenAI Agents SDK: Python SDK for building production-grade OpenAI agent workflows.
Google ADK: Google's framework for building production AI agents.
Pydantic AI: Type-safe agent development for Python with first-class MCP support.
Vercel AI SDK: TypeScript toolkit for building AI-powered web applications.
Mastra: TypeScript-native agent framework for modern web stacks.
CrewAI: Python framework for orchestrating collaborative AI agent crews.
LangChain: Leading Python framework for composable LLM applications.
LlamaIndex: Data-aware AI agent framework for structured and unstructured sources.
AutoGen: Microsoft's framework for multi-agent collaborative conversations.
Connect LiteLLM (LLM Proxy & Spend Tracking) to Windsurf
Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.
