LiteLLM (LLM Proxy & Spend Tracking) MCP Server for Mastra AI
10 tools — connect in under 2 minutes
Mastra AI is a TypeScript-native agent framework built for modern web stacks. Connect LiteLLM (LLM Proxy & Spend Tracking) through Vinkius and Mastra agents discover all tools automatically: type-safe, streaming-ready, and deployable anywhere Node.js runs.
Vinkius supports streamable HTTP and SSE.
import { Agent } from "@mastra/core/agent";
import { createMCPClient } from "@mastra/mcp";
import { openai } from "@ai-sdk/openai";

async function main() {
  // Your Vinkius token: get it at cloud.vinkius.com
  const mcpClient = await createMCPClient({
    servers: {
      "litellm-llm-proxy-spend-tracking": {
        url: "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp",
      },
    },
  });

  const tools = await mcpClient.getTools();

  const agent = new Agent({
    name: "LiteLLM (LLM Proxy & Spend Tracking) Agent",
    instructions:
      "You help users interact with LiteLLM (LLM Proxy & Spend Tracking) " +
      "using 10 tools.",
    model: openai("gpt-4o"),
    tools,
  });

  const result = await agent.generate(
    "What can I do with LiteLLM (LLM Proxy & Spend Tracking)?"
  );
  console.log(result.text);
}

main();

Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.
About LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Connect your LiteLLM Proxy instance to any AI agent and take full control of your LLM infrastructure, load balancing, and spend management through natural conversation.
Mastra's agent abstraction provides a clean separation between LLM logic and LiteLLM (LLM Proxy & Spend Tracking) tool infrastructure. Connect 10 tools through Vinkius, use Mastra's built-in workflow engine to chain tool calls with conditional logic, retries, and parallel execution, and deploy to any Node.js host with a single command.
What you can do
- Key Orchestration — Generate and manage proxy API keys to isolate distinct microservices or teams, including precise budget and rate limit constraints directly from your agent
- Model Routing Intelligence — Get detailed info on fallback paths (e.g., OpenAI -> Anthropic -> Groq) and verify exact routing endpoints assigned to your models
- Real-time Spend Audit — Track total USD consumed by specific end-users or teams and monitor budget ceilings to ensure cost-effective AI deployments
- Dynamic Model Control — Inject fresh routing endpoints (e.g., new AWS Bedrock or Azure OpenAI deployments) into your proxy runtime with zero downtime
- Team & Organizational Isolation — Create and manage team profiles to track exact cost limits and operational boundaries per organizational division
- Infrastructure Security — Instantly revoke malicious or leaked keys and remove broken LLM deployments before they cause downstream 500 errors
The LiteLLM (LLM Proxy & Spend Tracking) MCP Server exposes 10 tools through Vinkius. Connect it to Mastra AI in under two minutes: no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
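Each capability above is driven by a plain prompt to the connected agent. Here is a minimal sketch reusing the agent from the quick-start; the team name, budget, and rate limit are illustrative values, not fixed parameters:

// Ask the agent to perform a key-orchestration task in natural language.
// It selects the matching MCP tool (here, generate_key) on its own.
const keyResult = await agent.generate(
  "Generate a new proxy API key for the 'checkout-service' team " +
    "with a $25 monthly budget and a 60 requests/minute rate limit."
);
console.log(keyResult.text);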
How to Connect LiteLLM (LLM Proxy & Spend Tracking) to Mastra AI via MCP
Follow these steps to integrate the LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Mastra AI.
Install dependencies
Run npm install @mastra/core @mastra/mcp @ai-sdk/openai
Replace the token
Replace [YOUR_TOKEN_HERE] with your Vinkius token
Run the agent
Save to agent.ts and run with npx tsx agent.ts
Explore tools
Mastra discovers 10 tools from LiteLLM (LLM Proxy & Spend Tracking) via MCP
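To confirm discovery worked, log the tool names returned by getTools(), using the mcpClient from the quick-start:

// Should print the 10 namespaced LiteLLM tool names once connected.
const tools = await mcpClient.getTools();
console.log(Object.keys(tools));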
Why Use Mastra AI with the LiteLLM (LLM Proxy & Spend Tracking) MCP Server
Mastra AI provides unique advantages when paired with LiteLLM (LLM Proxy & Spend Tracking) through the Model Context Protocol.
Mastra's agent abstraction provides a clean separation between LLM logic and tool infrastructure, so you can add LiteLLM (LLM Proxy & Spend Tracking) without touching business code
Built-in workflow engine chains MCP tool calls with conditional logic, retries, and parallel execution for complex automation (see the sketch after this list)
TypeScript-native: full type inference for every LiteLLM (LLM Proxy & Spend Tracking) tool response with IDE autocomplete and compile-time checks
One-command deployment to any Node.js host: Vercel, Railway, Fly.io, or your own infrastructure
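To make the workflow point concrete, here is a minimal two-step sketch. It assumes Mastra's createWorkflow/createStep API from @mastra/core/workflows and reuses the agent from the quick-start; the step ids, schemas, and prompt wording are illustrative:

import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

// Step 1: ask the connected agent for a spend report.
const fetchSpend = createStep({
  id: "fetch-spend",
  inputSchema: z.object({ userId: z.string() }),
  outputSchema: z.object({ report: z.string() }),
  execute: async ({ inputData }) => {
    const result = await agent.generate(
      `How much has user '${inputData.userId}' spent on LLM tokens today?`
    );
    return { report: result.text };
  },
});

// Step 2: trim the report into a short digest for downstream systems.
const summarize = createStep({
  id: "summarize",
  inputSchema: z.object({ report: z.string() }),
  outputSchema: z.object({ digest: z.string() }),
  execute: async ({ inputData }) => ({
    digest: inputData.report.slice(0, 280),
  }),
});

// Chain the steps; each step's output schema must match the next step's input.
export const spendAudit = createWorkflow({
  id: "spend-audit",
  inputSchema: z.object({ userId: z.string() }),
  outputSchema: z.object({ digest: z.string() }),
})
  .then(fetchSpend)
  .then(summarize)
  .commit();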
LiteLLM (LLM Proxy & Spend Tracking) + Mastra AI Use Cases
Practical scenarios where Mastra AI combined with the LiteLLM (LLM Proxy & Spend Tracking) MCP Server delivers measurable value.
Automated workflows: build multi-step agents that query LiteLLM (LLM Proxy & Spend Tracking), process results, and trigger downstream actions in a typed pipeline
SaaS integrations: embed LiteLLM (LLM Proxy & Spend Tracking) as a first-class tool in your product's AI features with Mastra's clean agent API
Background jobs: schedule Mastra agents to query LiteLLM (LLM Proxy & Spend Tracking) on a cron and store results in your database automatically (see the sketch after this list)
Multi-agent systems: create specialist agents that collaborate using LiteLLM (LLM Proxy & Spend Tracking) tools alongside other MCP servers
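A minimal background-job sketch for the cron use case, assuming the node-cron package for scheduling; saveReport is a hypothetical stand-in for your own database write:

import cron from "node-cron"; // assumption: node-cron handles the schedule

// Hypothetical persistence helper; replace with your own database call.
async function saveReport(text: string): Promise<void> {
  console.log(`[${new Date().toISOString()}] ${text}`);
}

// Every day at 09:00, ask the agent from the quick-start for a spend summary.
cron.schedule("0 9 * * *", async () => {
  const result = await agent.generate(
    "Summarize total LLM spend per team for the last 24 hours."
  );
  await saveReport(result.text);
});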
LiteLLM (LLM Proxy & Spend Tracking) MCP Tools for Mastra AI (10)
These 10 tools become available when you connect LiteLLM (LLM Proxy & Spend Tracking) to Mastra AI via MCP (a sketch after the list shows how to hand an agent just a subset):
create_model
Add fresh routing endpoints (e.g., new Bedrock Llama 4 deployments) to the proxy at runtime
create_team
Create a team profile with its own cost limits and tracking per organizational division
create_user
Create end-user identities that link requests to the proxy's spend logs
delete_key
Delete an existing LLM proxy key entirely
delete_model
Remove broken or retired LLM deployments before they cause downstream 500 errors
generate_key
Generate a new proxy API key to isolate distinct microservices or teams
get_key_info
Get configuration and budget bounds for a specific LiteLLM API key
get_model_info
Get model endpoints and fallback paths (e.g., OpenAI -> Anthropic)
get_team_info
Look up a team's configuration and budget bounds by Team UUID
get_user_info
Get an end-user's details, including total USD consumed
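Because getTools() returns a plain name-to-tool map, you can hand an agent just a subset of these. A minimal sketch that builds a read-only auditor from the five get_* tools; the exact namespaced key format is an assumption based on the server id in the quick-start config:

const tools = await mcpClient.getTools();

// Keep only the read-only get_* tools. Mastra namespaces MCP tool names by
// server, so the exact keys depend on the server id used in the config.
const readOnlyTools = Object.fromEntries(
  Object.entries(tools).filter(([name]) => name.includes("get_"))
);

const auditor = new Agent({
  name: "LiteLLM Auditor",
  instructions: "Answer read-only questions about LiteLLM spend and routing.",
  model: openai("gpt-4o"),
  tools: readOnlyTools,
});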
Example Prompts for LiteLLM (LLM Proxy & Spend Tracking) in Mastra AI
Ready-to-use prompts you can give your Mastra AI agent to start working with LiteLLM (LLM Proxy & Spend Tracking) immediately.
"List all active model fallback paths in LiteLLM"
"Generate a new API key for the 'Customer-Service' team with a $50 monthly budget"
"How much has user 'alex_dev' spent on LLM tokens today?"
Troubleshooting LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Mastra AI
Common issues when connecting LiteLLM (LLM Proxy & Spend Tracking) to Mastra AI through Vinkius, and how to resolve them.
createMCPClient not exported
Run npm install @mastra/mcp to install the package that provides the MCP client.
LiteLLM (LLM Proxy & Spend Tracking) + Mastra AI FAQ
Common questions about integrating LiteLLM (LLM Proxy & Spend Tracking) MCP Server with Mastra AI.
How does Mastra AI connect to MCP servers?
Create an MCP client with the server URL and pass its tools to your agent. Mastra discovers all tools and makes them available with full TypeScript types.
Can Mastra agents use tools from multiple servers?
Yes. The servers map in the client configuration accepts multiple entries, and every server's tools are merged into one namespaced set (see the sketch after this FAQ).
Does Mastra support workflow orchestration?
Yes. Mastra ships a built-in workflow engine that chains tool calls with conditional logic, retries, and parallel execution.
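A minimal multi-server sketch for the second question above; the second server id and URL are placeholders, not real endpoints:

// Tools from both servers are discovered together and namespaced per server.
const mcpClient = await createMCPClient({
  servers: {
    "litellm-llm-proxy-spend-tracking": {
      url: "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp",
    },
    // Hypothetical second server, shown only to illustrate the shape.
    "another-mcp-server": {
      url: "https://example.com/mcp",
    },
  },
});
const tools = await mcpClient.getTools(); // merged, namespaced tool map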
Connect LiteLLM (LLM Proxy & Spend Tracking) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Claude Desktop — Anthropic's native desktop app for Claude with built-in MCP support.
Cursor — AI-first code editor with integrated LLM-powered coding assistance.
VS Code — GitHub Copilot in VS Code with Agent mode and MCP support.
Windsurf — Purpose-built IDE for agentic AI coding workflows.
Cline — Autonomous AI coding agent that runs inside VS Code.
Claude Code — Anthropic's agentic CLI for terminal-first development.
OpenAI Agents SDK — Python SDK for building production-grade OpenAI agent workflows.
Google ADK — Google's framework for building production AI agents.
Pydantic AI — Type-safe agent development for Python with first-class MCP support.
Vercel AI SDK — TypeScript toolkit for building AI-powered web applications.
Mastra AI — TypeScript-native agent framework for modern web stacks.
CrewAI — Python framework for orchestrating collaborative AI agent crews.
LangChain — Leading Python framework for composable LLM applications.
LlamaIndex — Data-aware AI agent framework for structured and unstructured sources.
AutoGen — Microsoft's framework for multi-agent collaborative conversations.
Connect LiteLLM (LLM Proxy & Spend Tracking) to Mastra AI
Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.
