Bring LLM Tracing to Mastra AI
Learn how to connect Langfuse (LLM Tracing & Evals) to Mastra AI and start using 10 AI agent tools in minutes. Fully managed, enterprise-secure, and ready to use without writing a single line of code.
What is the Langfuse (LLM Tracing & Evals) MCP Server?
Connect your Langfuse account to any AI agent and take full control of your LLM observability, prompt management, and quality evaluation through natural conversation.
What you can do
- Trace Orchestration — List and retrieve detailed traces of LLM API sessions, exposing latencies, token counts, and exact chained payloads directly from your agent
- Prompt Vault Access — Query actively managed prompt templates and versions to inspect system instructions and expected input variables
- Observation Analysis — Deep-dive into individual spans, events, and generations within a trace to pinpoint failures or performance bottlenecks securely
- Evaluation & Scoring — Attach structured human feedback or automated evaluation metrics to specific traces to monitor model grounding and accuracy
- Usage Metrics — Generate aggregated daily reports on USD costs and average latency to track your AI infrastructure spending in real-time
- Session Monitoring — Extract correlated user sessions to understand multi-turn interaction boundaries and improve long-term agentic workflows
How it works
1. Subscribe to this server
2. Enter your Langfuse API URL, Public Key, and Secret Key
3. Start monitoring your LLM application from Claude, Cursor, or any MCP-compatible client
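In Mastra, step 3 comes down to a few lines of TypeScript. Below is a minimal sketch: the endpoint URL is a placeholder for the one your Vinkius dashboard shows after subscribing, and the agent name, instructions, and model choice are illustrative.

```typescript
import { Agent } from "@mastra/core/agent";
import { MCPClient } from "@mastra/mcp";
import { openai } from "@ai-sdk/openai";

// Placeholder endpoint -- use the URL from your Vinkius dashboard.
const mcp = new MCPClient({
  servers: {
    langfuse: {
      url: new URL("https://mcp.example.com/langfuse"),
    },
  },
});

// Mastra discovers the Langfuse tools and types their schemas.
const agent = new Agent({
  name: "langfuse-observer",
  instructions: "You monitor and debug our LLM application via Langfuse.",
  model: openai("gpt-4o"),
  tools: await mcp.getTools(),
});
```

The same hosted endpoint also works directly in Claude, Cursor, or any other MCP-compatible client, no Mastra code required.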
Who is this for?
- LLM Engineers — debug complex AI chains and measure exact token latencies through natural conversation without manual dashboard searching
- Product Owners — monitor daily AI costs and user satisfaction scores across multiple production environments
- Data Scientists — manage prompt templates and audit evaluation metrics to improve model response quality and grounding efficiently
Built-in capabilities (10)
- Create a new LLM observation (span, event, generation) inside a trace
- Attach human feedback (e.g. 1-5 stars) or automated evaluation metrics to a specific trace or observation
- Generate rolled-up USD cost and aggregated latency statistics
- Retrieve explicit span or generation context within a trace
- Get complete telemetry and the nested graph for a single trace
- List raw observation objects across traces
- Extract actively managed prompt templates and versions
- List all scores recording quality or cost evaluations
- List high-level user session entities encapsulating multiple traces
- List all traces tracking LLM API sessions
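With an agent wired up as in the connection sketch above, these capabilities are invoked through plain language. The request below is illustrative; the agent routes it to whichever tool matches (here, trace listing):

```typescript
// Plain-language request; the agent picks the matching Langfuse tool
// (e.g. the trace-listing tool) and summarizes the result.
const result = await agent.generate(
  "List today's five slowest traces with their token counts and latencies."
);
console.log(result.text);
```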
Why Mastra AI?
Mastra's agent abstraction provides a clean separation between LLM logic and Langfuse (LLM Tracing & Evals) tool infrastructure. Connect 10 tools through Vinkius and use Mastra's built-in workflow engine to chain tool calls with conditional logic, retries, and parallel execution, then deploy to any Node.js host in one command.
- Clean separation between LLM logic and tool infrastructure: add Langfuse (LLM Tracing & Evals) without touching business code
- Built-in workflow engine chains MCP tool calls with conditional logic, retries, and parallel execution for complex automation
- TypeScript-native: full type inference for every Langfuse (LLM Tracing & Evals) tool response with IDE autocomplete and compile-time checks
- One-command deployment to any Node.js host: Vercel, Railway, Fly.io, or your own infrastructure
Langfuse (LLM Tracing & Evals) in Mastra AI
Langfuse (LLM Tracing & Evals) and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect Langfuse (LLM Tracing & Evals) to Mastra AI through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for Langfuse (LLM Tracing & Evals) in Mastra AI
The Langfuse (LLM Tracing & Evals) MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 10 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in Mastra AI only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures Langfuse (LLM Tracing & Evals) for Mastra AI
Every tool call from Mastra AI to the Langfuse (LLM Tracing & Evals) MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.
Frequently asked questions
Can I see the exact system instruction for a specific prompt version?
Yes. Use the list_prompts tool to browse your managed templates. Your agent can retrieve the exact text and variables for any deployed prompt version, making it easy to audit AI logic through natural conversation.
How do I log human feedback for a specific trace?
Use the create_score tool by providing the Trace ID and a JSON payload defining the score name (e.g. 'user-satisfaction') and value. Your agent will attach this structured data directly to the Langfuse record.
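As a rough sketch, that payload might look like the object below. The field names mirror Langfuse's public score API (traceId, name, value, comment), but treat the exact schema expected by create_score as an assumption and verify it against the tool's definition:

```typescript
// Illustrative create_score payload -- field names follow Langfuse's
// score API, but check the tool's schema before relying on them.
const scorePayload = {
  traceId: "trace_abc123",   // the trace to annotate (hypothetical ID)
  name: "user-satisfaction", // score label
  value: 4,                  // e.g. 4 out of 5 stars
  comment: "Resolved the user's issue on the first reply",
};
```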
Can my agent report on my LLM spending for the current day?
Absolutely. The get_daily_metrics tool retrieves aggregated USD costs and average latency metrics from Langfuse. Your agent can summarize these statistics to help you monitor your infrastructure budget in real-time.
How does Mastra AI connect to MCP servers?
Create an MCPClient with the server URL and pass it to your agent. Mastra discovers all tools and makes them available with full TypeScript types.
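The setup sketch under "How it works" shows the static pattern. If you need per-request configuration instead (e.g. different credentials per user), Mastra's MCPClient also exposes getToolsets(), which you pass at call time; the option shape below follows the Mastra docs, but verify it against your installed version:

```typescript
// Per-request tool wiring: pass toolsets at call time instead of
// fixing them in the Agent constructor.
const response = await agent.generate(
  "Summarize the latest trace for user 42.",
  { toolsets: await mcp.getToolsets() }
);
```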
Can Mastra agents use tools from multiple servers?
Yes. Register multiple servers on one MCPClient, or merge the tools from several clients when constructing the agent. Mastra merges all tool schemas and the agent can call any tool from any server.
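One way to sketch this is a single MCPClient with several servers registered; Mastra namespaces each server's tools (e.g. a tool may surface as langfuse_list_traces) and merges them into one typed map. The endpoints below are placeholders:

```typescript
// Two servers behind one client; tools are namespaced per server
// and merged into a single typed map for the agent.
const mcp = new MCPClient({
  servers: {
    langfuse: { url: new URL("https://mcp.example.com/langfuse") },
    github: { url: new URL("https://mcp.example.com/github") },
  },
});

const tools = await mcp.getTools(); // e.g. langfuse_list_traces, github_...
```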
Does Mastra support workflow orchestration?
Yes. Mastra has a built-in workflow engine that lets you chain MCP tool calls with branching logic, error handling, and parallel execution.
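A rough sketch of that engine, assuming Mastra's createWorkflow/createStep API. The step bodies are placeholders for real MCP tool calls (e.g. routing get_daily_metrics through your Langfuse-connected agent), and the budget threshold is invented:

```typescript
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

// Placeholder step: in practice you would ask the Langfuse-connected
// agent for today's spend via the get_daily_metrics tool.
const fetchSpend = createStep({
  id: "fetch-spend",
  inputSchema: z.object({}),
  outputSchema: z.object({ usd: z.number() }),
  execute: async () => ({ usd: 42.5 }), // stubbed value
});

// Placeholder step: flag the day once spend crosses a budget line.
const checkBudget = createStep({
  id: "check-budget",
  inputSchema: z.object({ usd: z.number() }),
  outputSchema: z.object({ overBudget: z.boolean() }),
  execute: async ({ inputData }) => ({ overBudget: inputData.usd > 100 }),
});

export const costWatch = createWorkflow({
  id: "cost-watch",
  inputSchema: z.object({}),
  outputSchema: z.object({ overBudget: z.boolean() }),
})
  .then(fetchSpend)
  .then(checkBudget)
  .commit();
```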
Seeing "createMCPClient is not exported"?
Install the MCP package: npm install @mastra/mcp
