MCP VERIFIED · PRODUCTION READY · VINKIUS GUARANTEED

LangSmith MCP Server

Built by Vinkius · GDPR Tools · Free for Subscribers

Observability and evaluation platform for LLM applications — monitor traces, debug agent runs, and track performance metrics across your AI stack.

Vinkius supports streamable HTTP and SSE.

AI Agent → Vinkius → LangSmith

High Security · Kill Switch · Plug and Play

  • Fully managed Vinkius servers
  • 60% token savings
  • Enterprise-grade high security
  • IAM access control
  • EU AI Act compliant
  • DLP data protection
  • V8 isolate sandboxing
  • Ed25519 audit chain
  • <40ms kill switch
Stream every event to Splunk, Datadog, or your own webhook in real-time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure

What is the LangSmith MCP Server?

The LangSmith MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to LangSmith, an observability and evaluation platform for LLM applications, via 3 tools: monitor traces, debug agent runs, and track performance metrics across your AI stack. Powered by Vinkius: no API keys, no infrastructure, connect in under 2 minutes.

Built-in capabilities (3)

langsmith_get_run · langsmith_list_projects · langsmith_list_runs

Tools for your AI Agents to operate LangSmith

Ask your AI agent "List all my LangSmith projects and show their metrics." and get the answer without opening a single dashboard. With 3 tools connected to real LangSmith data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.

Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.

Why teams choose Vinkius

One subscription gives you access to thousands of MCP servers - and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.

Build your own MCP Server with our secure development framework →

Vinkius works with every AI agent you already use

…and any MCP-compatible client

Cursor · Claude · OpenAI · VS Code · Copilot · Google · Lovable · Mistral · AWS

LangSmith MCP Server capabilities

3 tools
langsmith_get_run

Get detailed information about a specific run/trace by its ID. Useful for debugging specific LLM calls or agent actions.

langsmith_list_projects

List all tracing projects in your LangSmith account with run counts, latency stats, and feedback metrics. Each project groups related traces together and shows aggregate metrics like total runs, median latency, and feedback counts.

langsmith_list_runs

List recent traces/runs in a specific LangSmith project, showing run names, types, status, token usage, and timing. Each run represents a single LLM call, chain execution, or agent action, with status (success/error), latency, and token consumption.
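Under the hood, an MCP client invokes tools like these with a standard JSON-RPC 2.0 `tools/call` request. The sketch below shows what such a message might look like for `langsmith_list_runs`; the `tools/call` method comes from the MCP specification, but the argument names (`project_name`, `limit`) are assumptions for illustration, not this server's documented schema.

```python
import json

# Build the JSON-RPC 2.0 message an MCP client would send to invoke a tool.
# "tools/call" is the standard MCP method; the tool arguments below are
# hypothetical and shown only to illustrate the request shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "langsmith_list_runs",
        "arguments": {"project_name": "my-agent", "limit": 20},
    },
}

payload = json.dumps(request)
print(payload)
```

Your AI agent constructs these requests for you; the point is that every call is a plain, auditable message rather than an opaque SDK invocation.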

What the LangSmith MCP Server unlocks

Connect your AI agent to LangSmith — the observability platform from the LangChain team that gives you complete visibility into your LLM applications.

What you can do

  • List Projects — View all tracing projects with aggregate metrics: total runs, median latency, feedback scores, and creation dates
  • List Runs — Browse recent traces in any project. See run names, types (LLM, chain, tool), status (success/error), token usage, and timing
  • Run Details — Deep-dive into any specific run to see its full execution trace, inputs, outputs, and associated feedback
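Once an agent has run data in hand, it can aggregate it directly. A minimal sketch, assuming hypothetical run records with illustrative field names (`status`, `latency_ms`, `tokens`) rather than the server's actual response schema:

```python
from statistics import median

# Hypothetical run records shaped like the information langsmith_list_runs
# reports (status, latency, token usage); field names are illustrative only.
runs = [
    {"name": "plan", "status": "success", "latency_ms": 420, "tokens": 1100},
    {"name": "search", "status": "error", "latency_ms": 95, "tokens": 300},
    {"name": "answer", "status": "success", "latency_ms": 880, "tokens": 2400},
]

error_rate = sum(r["status"] == "error" for r in runs) / len(runs)
p50_latency = median(r["latency_ms"] for r in runs)
total_tokens = sum(r["tokens"] for r in runs)

print(error_rate, p50_latency, total_tokens)
```

This is exactly the kind of cross-run summary an agent can assemble in one step instead of you paging through a dashboard.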

How it works

1. Subscribe to this server
2. Enter your LangSmith API key (5,000 free traces/month)
3. Your agent can now monitor and debug LLM applications
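Behind step 3, an MCP-compatible client performs the protocol's `initialize` handshake and then lists the available tools. The sketch below shows the handshake message only; the endpoint URL is a placeholder (Vinkius supplies the real one after you subscribe), and the protocol version shown is one published revision of the MCP spec.

```python
import json

# Placeholder endpoint -- not a real URL; your Vinkius dashboard provides it.
ENDPOINT = "https://example.invalid/mcp/langsmith"

# Standard MCP "initialize" request a client sends before calling any tools.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "my-agent", "version": "0.1.0"},
    },
}

# A streamable-HTTP client would POST this body to ENDPOINT, then send
# "tools/list" to discover the three LangSmith tools. Network I/O omitted.
print(json.dumps(initialize))
```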

Who is this for?

  • AI Engineers — monitor LLM calls, chains, and agent actions in production
  • ML Teams — track experiment performance, compare model outputs, and identify regressions
  • DevOps — set up alerts for error rates, latency spikes, and cost anomalies in AI workloads

Frequently asked questions about the LangSmith MCP Server

01

What is LangSmith and why do I need it?

LangSmith is the 'Datadog for LLM applications'. Without observability, AI agents in production are black boxes — you can't see what they're doing, why they fail, or how much they cost. LangSmith traces every LLM call, chain execution, and tool use, giving you complete visibility into inputs, outputs, latency, token usage, and error rates.

02

Does LangSmith work only with LangChain?

No! While LangSmith is built by the LangChain team and has native LangChain/LangGraph integration, it works with any LLM application. You can trace OpenAI, Anthropic, or any LLM provider directly using the REST API. It also integrates with CrewAI, AutoGen, and other frameworks.
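For a concrete sense of provider-agnostic tracing, here is a sketch of posting a run record to the LangSmith REST API directly, with no LangChain involved. The base URL and `x-api-key` header follow LangSmith's public documentation, but the run fields below should be treated as illustrative; verify them against the current API reference before relying on this.

```python
import json
import uuid
from datetime import datetime, timezone

# LangSmith REST API endpoint and auth header per the public docs; the run
# payload fields below are a sketch of the run schema, not a verified contract.
API_URL = "https://api.smith.langchain.com/runs"
headers = {"x-api-key": "<your-langsmith-api-key>"}

run = {
    "id": str(uuid.uuid4()),
    "name": "chat-completion",
    "run_type": "llm",
    "start_time": datetime.now(timezone.utc).isoformat(),
    "inputs": {"prompt": "Summarize this ticket"},
    "session_name": "my-project",  # the tracing project to file the run under
}

# An HTTP client would POST json.dumps(run) to API_URL with the headers above;
# the actual request is omitted in this sketch.
print(run["run_type"])
```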

03

How much does LangSmith cost?

LangSmith offers a generous free tier with 5,000 traces per month — no credit card required. The Developer plan is $39/month with 50,000 traces. Enterprise plans include SSO, RBAC, dedicated support, and unlimited traces with volume discounts.


Give your AI agents the power of LangSmith MCP Server

Production-grade LangSmith MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.