LangSmith (LLM Observability & Hub) MCP Server for OpenAI Agents SDK: 6 tools, connect in under 2 minutes
The OpenAI Agents SDK enables production-grade agent workflows in Python. Connect LangSmith (LLM Observability & Hub) through Vinkius, and your agents gain typed, auto-discovered tools with built-in guardrails; no manual schema definitions required.
Vinkius supports streamable HTTP and SSE.
```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp


async def main():
    # Your Vinkius token: get it at cloud.vinkius.com
    async with MCPServerStreamableHttp(
        params={"url": "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"}
    ) as mcp_server:
        agent = Agent(
            name="LangSmith (LLM Observability & Hub) Assistant",
            instructions=(
                "You help users interact with LangSmith (LLM Observability & Hub). "
                "You have access to 6 tools."
            ),
            mcp_servers=[mcp_server],
        )
        result = await Runner.run(
            agent, "List all available tools from LangSmith (LLM Observability & Hub)"
        )
        print(result.final_output)


asyncio.run(main())
```
* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure page for details.
About LangSmith (LLM Observability & Hub) MCP Server
Connect your LangSmith account to any AI agent and take full control of your LLM observability, tracing, and prompt management through natural conversation.
The OpenAI Agents SDK auto-discovers all 6 tools from LangSmith (LLM Observability & Hub) through native MCP integration. Build agents with built-in guardrails, tracing, and handoff patterns: chain multiple agents where one queries LangSmith (LLM Observability & Hub), another analyzes results, and a third generates reports, all orchestrated through Vinkius.
What you can do
- Trace Orchestration — List active tracing projects and retrieve detailed execution logs for specific LLM invocation runs directly from your agent
- Performance Telemetry — Extract precise metrics including token consumption, prompt latency, and exact error strings from your AI pipelines
- Prompt Hub Access — Navigate and retrieve managed prompt templates, variable definitions, and version histories hosted in the LangChain Hub
- Evaluation Datasets — Enumerate curated 'golden' datasets used for automated evaluation of prompt logic or few-shot injection models
- Human-in-the-Loop Audit — Monitor active annotation queues where human reviewers assess the alignment, accuracy, and safety of generated LLM traces
- Agentic Step Analysis — Deep-dive into multi-turn agentic workflows to understand nested tool calls and internal reasoning paths securely
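As a concrete example of the telemetry step, an agent that retrieves a run via `get_run` will often summarize the payload before reporting back. A minimal sketch, assuming a hypothetical run-payload shape (the field names here are illustrative, not the LangSmith schema):

```python
def summarize_run(run: dict) -> str:
    """Condense a hypothetical run-telemetry payload into a one-line report."""
    prompt_tokens = run.get("prompt_tokens", 0)
    completion_tokens = run.get("completion_tokens", 0)
    latency_ms = run.get("latency_ms", 0)
    error = run.get("error")
    status = f"error: {error}" if error else "ok"
    return (
        f"{run.get('name', 'unknown run')}: "
        f"{prompt_tokens + completion_tokens} tokens "
        f"({prompt_tokens} prompt / {completion_tokens} completion), "
        f"{latency_ms} ms, {status}"
    )

run = {
    "name": "Production-Bot-V2/turn-7",
    "prompt_tokens": 412,
    "completion_tokens": 128,
    "latency_ms": 930,
    "error": None,
}
print(summarize_run(run))
# → Production-Bot-V2/turn-7: 540 tokens (412 prompt / 128 completion), 930 ms, ok
```

The agent can return a line like this directly as its final output, keeping raw payloads out of the conversation.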
The LangSmith (LLM Observability & Hub) MCP Server exposes 6 tools through Vinkius. Connect it to the OpenAI Agents SDK in under two minutes: no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
How to Connect LangSmith (LLM Observability & Hub) to OpenAI Agents SDK via MCP
Follow these steps to integrate the LangSmith (LLM Observability & Hub) MCP Server with OpenAI Agents SDK.
Install the SDK
Run `pip install openai-agents` in your Python environment
Replace the token
Replace `[YOUR_TOKEN_HERE]` with your Vinkius token from cloud.vinkius.com
Run the script
Save the code above and run it: `python agent.py`
Explore tools
The agent will automatically discover 6 tools from LangSmith (LLM Observability & Hub)
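The token step is the one place where a typo silently breaks the connection, so it can help to build the endpoint URL in a single guarded place. A small sketch using the URL pattern from the snippet above; `mcp_url` is a hypothetical helper, not part of the SDK:

```python
def mcp_url(token: str) -> str:
    """Build the Vinkius MCP endpoint from a token, rejecting the placeholder."""
    if not token or token.startswith("[") or token.endswith("]"):
        raise ValueError(
            "replace [YOUR_TOKEN_HERE] with your real token from cloud.vinkius.com"
        )
    return f"https://edge.vinkius.com/{token}/mcp"

print(mcp_url("abc123"))
# → https://edge.vinkius.com/abc123/mcp
```

Failing fast here produces a clear error instead of an opaque HTTP failure when the agent first connects.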
Why Use OpenAI Agents SDK with the LangSmith (LLM Observability & Hub) MCP Server
OpenAI Agents SDK provides unique advantages when paired with LangSmith (LLM Observability & Hub) through the Model Context Protocol.
Native MCP integration via `MCPServerStreamableHttp` (or `MCPServerSse`) — pass the URL and the SDK auto-discovers all tools with full type safety
Built-in guardrails, tracing, and handoff patterns let you build production-grade agents without reinventing safety infrastructure
Lightweight and composable: chain multiple agents and MCP servers in a single pipeline with minimal boilerplate
First-party OpenAI support ensures optimal compatibility with GPT models for tool calling and structured output
LangSmith (LLM Observability & Hub) + OpenAI Agents SDK Use Cases
Practical scenarios where OpenAI Agents SDK combined with the LangSmith (LLM Observability & Hub) MCP Server delivers measurable value.
Automated workflows: build agents that query LangSmith (LLM Observability & Hub), process the data, and trigger follow-up actions autonomously
Multi-agent orchestration: create specialist agents — one queries LangSmith (LLM Observability & Hub), another analyzes results, a third generates reports
Data enrichment pipelines: stream data through LangSmith (LLM Observability & Hub) tools and transform it with OpenAI models in a single async loop
Customer support bots: agents query LangSmith (LLM Observability & Hub) to resolve tickets, look up records, and update statuses without human intervention
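The enrichment-pipeline pattern above can be illustrated without the SDK at all: stream records through an async transform stage, just as the real agent loop would with MCP tool results. A minimal sketch with a stub standing in for the OpenAI model call:

```python
import asyncio

async def enrich(record: dict) -> dict:
    """Stub transform standing in for an OpenAI model call on a tool result."""
    await asyncio.sleep(0)  # yield control, as a real network call would
    return {**record, "summary": f"{record['project']} had {record['runs']} runs"}

async def pipeline(records: list[dict]) -> list[dict]:
    # Process records concurrently; gather preserves input order in the result.
    return await asyncio.gather(*(enrich(r) for r in records))

records = [
    {"project": "Production-Bot-V2", "runs": 42},
    {"project": "Eval-Harness", "runs": 7},
]
for enriched in asyncio.run(pipeline(records)):
    print(enriched["summary"])
```

Swapping the stub for a real model call (and the records for `list_runs` output) turns this into the single async loop the use case describes.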
LangSmith (LLM Observability & Hub) MCP Tools for OpenAI Agents SDK (6)
These 6 tools become available when you connect LangSmith (LLM Observability & Hub) to OpenAI Agents SDK via MCP:
get_run
Get precise telemetry for a single LLM invocation run
list_annotation_queues
List active human-in-the-loop annotation queues
list_datasets
List all evaluation and fine-tuning datasets mapped in LangSmith
list_projects
List all active LangSmith tracing projects/sessions, mapping out the boundaries of the distinct AI pipelines LangSmith currently monitors
list_prompts
Extract prompt templates hosted in the LangChain Hub
list_runs
List the LLM invocation runs within a specific project, isolating the raw prompts sent to and responses received from the AI models
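When an agent combines `list_runs` with `get_run`, a common follow-up is to rank runs client-side, for example to surface latency outliers. A sketch assuming a hypothetical list-of-runs payload (the field names are illustrative, not the LangSmith schema):

```python
def slowest_runs(runs: list[dict], project: str, top_n: int = 3) -> list[str]:
    """Filter runs to one project and return the slowest run IDs first."""
    in_project = [r for r in runs if r.get("project") == project]
    in_project.sort(key=lambda r: r.get("latency_ms", 0), reverse=True)
    return [r["id"] for r in in_project[:top_n]]

runs = [
    {"id": "run-a", "project": "Production-Bot-V2", "latency_ms": 310},
    {"id": "run-b", "project": "Eval-Harness", "latency_ms": 950},
    {"id": "run-c", "project": "Production-Bot-V2", "latency_ms": 1200},
]
print(slowest_runs(runs, "Production-Bot-V2"))
# → ['run-c', 'run-a']
```

The agent can then call `get_run` on just the top IDs instead of fetching full telemetry for every run.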
Example Prompts for LangSmith (LLM Observability & Hub) in OpenAI Agents SDK
Ready-to-use prompts you can give your OpenAI Agents SDK agent to start working with LangSmith (LLM Observability & Hub) immediately.
"List all active tracing projects in LangSmith"
"Show me the telemetry for the last run in the 'Production-Bot-V2' project"
"List all prompts hosted in our Hub repository"
Troubleshooting LangSmith (LLM Observability & Hub) MCP Server with OpenAI Agents SDK
Common issues when connecting LangSmith (LLM Observability & Hub) to OpenAI Agents SDK through the Vinkius, and how to resolve them.
MCPServerStreamableHttp not found
Upgrade the SDK: `pip install --upgrade openai-agents`
Agent not calling tools
Confirm the server is passed via `mcp_servers=[...]` and that your instructions explicitly tell the agent when to use the LangSmith tools
LangSmith (LLM Observability & Hub) + OpenAI Agents SDK FAQ
Common questions about integrating LangSmith (LLM Observability & Hub) MCP Server with OpenAI Agents SDK.
How does the OpenAI Agents SDK connect to MCP?
Create a server connection with `MCPServerStreamableHttp` (or `MCPServerSse`) and pass it to the agent. The SDK auto-discovers all tools and makes them available to your agent with full type information.
Can I use multiple MCP servers in one agent?
Yes. Pass multiple server instances in `mcp_servers=[...]` on the agent constructor. The agent can use tools from all connected servers within a single run.
Does the SDK support streaming responses?
Yes. Use `Runner.run_streamed(...)` to receive events incrementally instead of waiting for the final output.
Connect LangSmith (LLM Observability & Hub) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
- Anthropic's native desktop app for Claude with built-in MCP support.
- AI-first code editor with integrated LLM-powered coding assistance.
- GitHub Copilot in VS Code with Agent mode and MCP support.
- Purpose-built IDE for agentic AI coding workflows.
- Autonomous AI coding agent that runs inside VS Code.
- Anthropic's agentic CLI for terminal-first development.
- Python SDK for building production-grade OpenAI agent workflows.
- Google's framework for building production AI agents.
- Type-safe agent development for Python with first-class MCP support.
- TypeScript toolkit for building AI-powered web applications.
- TypeScript-native agent framework for modern web stacks.
- Python framework for orchestrating collaborative AI agent crews.
- Leading Python framework for composable LLM applications.
- Data-aware AI agent framework for structured and unstructured sources.
- Microsoft's framework for multi-agent collaborative conversations.
Connect LangSmith (LLM Observability & Hub) to OpenAI Agents SDK
Get your token, paste the configuration, and start using 6 tools in under 2 minutes. No API key management needed.
