Langfuse (LLM Tracing & Evals) MCP Server
Monitor LLM apps via Langfuse — track traces, manage prompt templates, and audit evaluation scores.
Vinkius AI Gateway supports streamable HTTP and SSE.

Works with all the AI agents you already use
…and any MCP-compatible client

Langfuse MCP Server: see your AI agent in action
Built-in capabilities (10)
create_observation
Create a new LLM observation (span, event, generation) inside a trace
create_score
Attach human feedback (e.g. 1-5 star ratings) or automated pipeline evaluation metrics to a specific trace or observation
get_daily_metrics
Generate rolled-up USD cost and aggregated latency statistics
get_observation
Retrieve a specific span or generation within a trace
get_trace
Get complete telemetry and nested graph for a single trace
list_observations
List raw observation objects across traces
list_prompts
Extract actively managed prompt templates and versions
list_scores
List all scores recording quality or cost evaluations
list_sessions
List high-level user session entities encapsulating multiple traces
list_traces
List all traces tracking LLM API sessions
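Under the hood, these tools map onto Langfuse's public REST API, authenticated with HTTP Basic auth (public key as username, secret key as password). A minimal sketch of the kind of request `get_daily_metrics` would issue — the path and auth scheme follow the Langfuse public API docs, but verify them against your deployment:

```python
import base64

def build_daily_metrics_request(host: str, public_key: str, secret_key: str) -> dict:
    """Build the URL and headers for a Langfuse daily-metrics call.

    Assumes the documented `/api/public/metrics/daily` endpoint and
    Basic-auth scheme; confirm both for your Langfuse version.
    """
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return {
        "url": f"{host}/api/public/metrics/daily",
        "headers": {"Authorization": f"Basic {token}"},
    }

req = build_daily_metrics_request(
    "https://cloud.langfuse.com", "pk-lf-...", "sk-lf-...")
```

An HTTP client (e.g. `requests.get(req["url"], headers=req["headers"])`) would then return the rolled-up cost and latency statistics the tool exposes.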
What this connector unlocks
Connect your Langfuse account to any AI agent and take full control of your LLM observability, prompt management, and quality evaluation through natural conversation.
What you can do
- Trace Orchestration — List and retrieve detailed traces of LLM API sessions, exposing latencies, token counts, and exact chained payloads directly from your agent
- Prompt Vault Access — Query actively managed prompt templates and versions to inspect system instructions and expected input variables
- Observation Analysis — Deep-dive into individual spans, events, and generations within a trace to pinpoint failures or performance bottlenecks securely
- Evaluation & Scoring — Attach structured human feedback or automated evaluation metrics to specific traces to monitor model grounding and accuracy
- Usage Metrics — Generate aggregated daily reports on USD costs and average latency to track your AI infrastructure spending in real-time
- Session Monitoring — Extract correlated user sessions to understand multi-turn interaction boundaries and improve long-term agentic workflows
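For example, the Evaluation & Scoring capability corresponds to posting a score object against a trace. A hedged sketch of the payload shape — field names follow the Langfuse public scores API, but check the docs for your version:

```python
def build_score_payload(trace_id: str, name: str, value, comment=None) -> dict:
    """Assemble a score payload for a Langfuse trace.

    `value` can be numeric (e.g. a 1-5 rating) or categorical,
    depending on the score's data type. Field names here are taken
    from the Langfuse public scores API and should be confirmed
    against your deployment.
    """
    payload = {"traceId": trace_id, "name": name, "value": value}
    if comment is not None:
        payload["comment"] = comment
    return payload

p = build_score_payload("trace-123", "user-feedback", 4, comment="helpful answer")
```

The `create_score` tool handles this assembly and the authenticated POST for you; the sketch just shows what the agent is asking Langfuse to record.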
How it works
1. Subscribe to this server
2. Enter your Langfuse API URL, Public Key, and Secret Key
3. Start monitoring your LLM application from Claude, Cursor, or any MCP-compatible client
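In practice, step 3 usually means pointing your MCP client at the gateway endpoint. A hypothetical Claude Desktop-style configuration fragment — the URL and field names below are illustrative placeholders, not the actual Vinkius values:

```json
{
  "mcpServers": {
    "langfuse": {
      "url": "https://gateway.example.com/mcp/langfuse",
      "transport": "streamable-http"
    }
  }
}
```

Consult your MCP client's documentation for the exact configuration keys it expects; SSE is also supported as a transport.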
Who is this for?
- LLM Engineers — debug complex AI chains and measure exact token latencies through natural conversation without manual dashboard searching
- Product Owners — monitor daily AI costs and user satisfaction scores across multiple production environments
- Data Scientists — manage prompt templates and audit evaluation metrics to improve model response quality and grounding efficiently
Frequently asked questions
Give your AI agents the power of Langfuse
Access Langfuse and 2,000+ MCP servers — ready for your agents to use, right now. No glue code. No custom integrations. Just plug in the Vinkius AI Gateway and let your agents get to work.
More in this category

Apify
10 tools · Command Apify scrapers from your AI agent — run actors, extract web data, poll datasets, and automate browser tasks seamlessly.

Context7
2 tools · Empower AI agents via Context7 — pull up-to-date documentation and code examples for any library or framework directly into your workspace.

Runlayer
27 tools · AI enterprise control plane: manage MCP servers, skills, agents, and security policies via agents.
You might also like

Twilio
10 tools · Automate communication workflows via Twilio — manage SMS messaging, voice calls, call recordings, and account usage directly from any AI agent.

Internet Archive Metadata
10 tools · Get detailed metadata, files, reviews, and stats for any Internet Archive item.

Metabase (Business Intelligence & Analytics)
7 tools · Manage your BI environment via Metabase — list dashboards, retrieve visual questions (cards), and search data entities.
