New Relic AI (LLM Observability) MCP Server for CrewAI
10 tools — connect in under 2 minutes
Connect your CrewAI agents to New Relic AI (LLM Observability) through the Vinkius Edge — pass the Edge URL in the `mcps` parameter and every New Relic AI (LLM Observability) tool is auto-discovered at runtime. No credentials to manage, no infrastructure to maintain.
Vinkius supports streamable HTTP and SSE.
from crewai import Agent, Task, Crew

agent = Agent(
    role="New Relic AI (LLM Observability) Specialist",
    goal="Help users interact with New Relic AI (LLM Observability) effectively",
    backstory=(
        "You are an expert at leveraging New Relic AI (LLM Observability) tools "
        "for automation and data analysis."
    ),
    # Your Vinkius token — get it at cloud.vinkius.com
    mcps=["https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"],
)

task = Task(
    description=(
        "Explore all available tools in New Relic AI (LLM Observability) "
        "and summarize their capabilities."
    ),
    agent=agent,
    expected_output=(
        "A detailed summary of 10 available tools "
        "and what they can do."
    ),
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure page for details.
About New Relic AI (LLM Observability) MCP Server
Connect your New Relic AI account to any AI agent and take full control of your LLM observability, token cost tracking, and performance analytics through natural conversation.
When paired with CrewAI, New Relic AI (LLM Observability) becomes a first-class tool in your multi-agent workflows. Each agent in the crew can call New Relic AI (LLM Observability) tools autonomously — one agent queries data, another analyzes results, a third compiles reports — all orchestrated through the Vinkius Edge with zero configuration overhead.
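A minimal sketch of that division of labor follows. The roles, goals, and task wording are illustrative and not prescribed by the server; only the Edge URL in `mcps` comes from the quickstart above.

from crewai import Agent, Crew, Task

# Both agents share the same Vinkius Edge URL and discover the same tools at runtime.
EDGE_URL = "https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp"

collector = Agent(
    role="Telemetry Collector",
    goal="Pull raw LLM events, token costs, and latency data from New Relic",
    backstory="You gather LLM observability data and hand it off for analysis.",
    mcps=[EDGE_URL],
)

analyst = Agent(
    role="Observability Analyst",
    goal="Summarize trends and flag anomalies in the collected telemetry",
    backstory="You turn raw telemetry into concise, actionable findings.",
    mcps=[EDGE_URL],
)

collect = Task(
    description="Retrieve recent LLM events, token costs, and latency figures.",
    agent=collector,
    expected_output="A raw data summary covering events, costs, and latency.",
)

analyze = Task(
    description="Analyze the collected telemetry and flag anomalies or cost spikes.",
    agent=analyst,
    expected_output="A short report highlighting anomalies and recommended actions.",
)

crew = Crew(agents=[collector, analyst], tasks=[collect, analyze])
print(crew.kickoff())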
What you can do
- LLM Telemetry Audit — Retrieve detailed LLM chat completion messages and prompt inputs directly from your agent to understand model behavior in real time
- Token Cost Tracking — Query model costs to calculate exact USD token consumption across your entire AI infrastructure
- Performance Monitoring — Extract p95 latency and average response times to keep your LLM text generation performant and sub-second
- User Feedback Loop — Retrieve chronological feedback messages and 1-5 rating scores submitted by human reviewers to identify quality regressions
- Custom NRQL Execution — Run sophisticated read-only queries in the New Relic Query Language (NRQL) to extract rich insights from your AI datasets (see the sketch after this list)
- Custom Event Injection — Post generic telemetry rows to track internal agent states and custom behavioral markers across your observability pipeline
- Resource Discovery — Enumerate active APM apps, dashboards, and alert policies to audit your AI environment's structural health and PagerDuty configurations
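As a concrete illustration of the NRQL bullet, a task can hand the agent a specific read-only query to run. The query text below is illustrative (the `LlmEvent` event type is taken from the example prompts later on), and it assumes the quickstart `agent` is already defined; adjust event types and time windows to your account.

from crewai import Task

nrql_task = Task(
    description=(
        "Run this read-only NRQL query and summarize the results: "
        "SELECT average(duration), percentile(duration, 95) "
        "FROM LlmEvent FACET model SINCE 1 day ago"
    ),
    agent=agent,  # the agent from the quickstart above
    expected_output="A per-model summary of average and p95 duration for the last day.",
)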
The New Relic AI (LLM Observability) MCP Server exposes 10 tools through the Vinkius Edge. Connect it to CrewAI in under two minutes — no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.
How to Connect New Relic AI (LLM Observability) to CrewAI via MCP
Follow these steps to integrate the New Relic AI (LLM Observability) MCP Server with CrewAI.
Install CrewAI
Run `pip install crewai`
Replace the token
Replace `[YOUR_TOKEN_HERE]` with your Vinkius token from cloud.vinkius.com
Customize the agent
Adjust the role, goal, and backstory to fit your use case
Run the crew
Run `python crew.py` — CrewAI auto-discovers 10 tools from New Relic AI (LLM Observability)
Why Use CrewAI with the New Relic AI (LLM Observability) MCP Server
CrewAI's multi-agent orchestration framework provides unique advantages when paired with New Relic AI (LLM Observability) through the Model Context Protocol.
Multi-agent collaboration lets you decompose complex workflows into specialized roles — one agent researches, another analyzes, a third generates reports — each with access to MCP tools
CrewAI's native MCP integration requires zero adapter code: pass the Vinkius Edge URL directly in the `mcps` parameter and agents auto-discover every available tool at runtime
Built-in task delegation and shared memory mean agents can pass context between steps without manual state management, enabling multi-hop reasoning across tool calls (see the sketch after this list)
Sequential and hierarchical crew patterns map naturally to real-world workflows: pull recent LLM events → analyze latency and cost trends → check error rates → compile findings into actionable reports
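A compact sketch of that context hand-off, reusing the quickstart `agent`; `context` on a Task and `Process.sequential` are standard CrewAI features, while the task wording here is illustrative.

from crewai import Crew, Process, Task

gather = Task(
    description="Query New Relic AI (LLM Observability) for the last hour of LLM events.",
    agent=agent,  # the quickstart agent defined earlier
    expected_output="A list of recent LLM events with model, latency, and token counts.",
)

report = Task(
    description="Compile the gathered events into an actionable findings report.",
    agent=agent,
    expected_output="A markdown report summarizing notable events and trends.",
    context=[gather],  # the output of `gather` is injected into this task's context
)

crew = Crew(agents=[agent], tasks=[gather, report], process=Process.sequential)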
New Relic AI (LLM Observability) + CrewAI Use Cases
Practical scenarios where CrewAI combined with the New Relic AI (LLM Observability) MCP Server delivers measurable value.
Automated multi-step research: a data-collection agent queries New Relic AI (LLM Observability) for raw data, then a second analyst agent cross-references findings and flags anomalies — all without human handoff
Scheduled intelligence reports: set up a crew that periodically queries New Relic AI (LLM Observability), analyzes trends over time, and generates executive briefings in markdown or PDF format (see the sketch after this list)
Multi-source enrichment pipelines: chain New Relic AI (LLM Observability) tools with other MCP servers in the same crew, letting agents correlate data across multiple providers in a single workflow
Compliance and audit automation: a compliance agent queries New Relic AI (LLM Observability) against predefined policy rules, generates deviation reports, and routes findings to the appropriate team
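A minimal sketch of the scheduled-report idea, using only the Python standard library; in practice you would more likely let cron or a task scheduler invoke the script, and the interval and output file name below are placeholders.

import time
from datetime import datetime

# `crew` is the Crew from the quickstart; re-run it on a fixed interval.
REPORT_INTERVAL_SECONDS = 24 * 60 * 60  # once a day

while True:
    print(f"[{datetime.now().isoformat()}] starting scheduled observability report")
    result = crew.kickoff()
    with open("llm_observability_report.md", "a", encoding="utf-8") as f:
        f.write(f"\n\n## Report {datetime.now():%Y-%m-%d %H:%M}\n{result}\n")
    time.sleep(REPORT_INTERVAL_SECONDS)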
New Relic AI (LLM Observability) MCP Tools for CrewAI (10)
These 10 tools become available when you connect New Relic AI (LLM Observability) to CrewAI via MCP:
custom_nrql
Run read-only NRQL queries against your account to extract insights from your AI telemetry. Note that NRQL execution through this tool is read-only.
list_alert_policies
List the alert policies configured in your New Relic account.
list_apm_apps
List the active APM applications reporting to your account.
list_dashboards
List the dashboards available in your account.
post_custom_event
Post generic `CustomAITelemetry` rows to the events endpoint to track internal agent state and custom behavioral markers.
query_llm_costs
Query model costs to calculate token consumption in USD across your AI infrastructure.
query_llm_errors
Query recent LLM error events recorded in your account.
query_llm_events
Retrieve LLM chat completion messages and prompt inputs to audit model behavior.
query_llm_feedback
Retrieve chronological feedback messages and 1-5 rating scores submitted by human reviewers.
query_llm_latency
Extract p95 latency and average response times for LLM text generation.
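Tool names can be referenced directly in a task description. A hedged example that asks the agent to record a run marker with post_custom_event and then audit errors with query_llm_errors; the event attributes are illustrative, and the agent still decides how to call each tool.

from crewai import Task

mark_and_audit = Task(
    description=(
        "First call post_custom_event to record a CustomAITelemetry row with "
        "attributes run_id='nightly-audit' and stage='start'. Then call "
        "query_llm_errors and summarize any LLM errors from the last 6 hours."
    ),
    agent=agent,  # the quickstart agent with the Vinkius Edge URL in `mcps`
    expected_output="Confirmation of the posted event plus a summary of recent LLM errors.",
)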
Example Prompts for New Relic AI (LLM Observability) in CrewAI
Ready-to-use prompts you can give your CrewAI agent to start working with New Relic AI (LLM Observability) immediately.
"Show me the last 5 LLM events for the 'OpenAI' vendor"
"What is my total LLM token cost for the last 24 hours?"
"Run NRQL: SELECT count(*) FROM LlmEvent WHERE duration > 2 SINCE 1 hour ago"
Troubleshooting New Relic AI (LLM Observability) MCP Server with CrewAI
Common issues when connecting New Relic AI (LLM Observability) to CrewAI through the Vinkius Edge, and how to resolve them.
MCP tools not discovered
Agent not using tools
Timeout errors
Rate limiting or 429 errors
New Relic AI (LLM Observability) + CrewAI FAQ
Common questions about integrating New Relic AI (LLM Observability) MCP Server with CrewAI.
How does CrewAI discover and connect to MCP tools?
At runtime, CrewAI calls each MCP server's `tools/list` method, so tools are always fresh and reflect the server's current capabilities. No tool schemas need to be hardcoded.
Can different agents in the same crew use different MCP servers?
Yes. Each agent has its own `mcps` list, so you can assign specific servers to specific roles. For example, a reconnaissance agent might use a domain intelligence server while an analysis agent uses a vulnerability database server.
What happens when an MCP tool call fails during a crew run?
Can CrewAI agents call multiple MCP tools in parallel?
Yes. Multiple agents can run with `process=Process.parallel`, each calling different MCP tools concurrently. This is ideal for workflows where separate data sources need to be queried simultaneously.
Can I run CrewAI crews on a schedule (cron)?
Yes. The `crew.kickoff()` method runs synchronously by default, making it straightforward to integrate into existing pipelines.
Connect New Relic AI (LLM Observability) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Anthropic's native desktop app for Claude with built-in MCP support.
AI-first code editor with integrated LLM-powered coding assistance.
GitHub Copilot in VS Code with Agent mode and MCP support.
Purpose-built IDE for agentic AI coding workflows.
Autonomous AI coding agent that runs inside VS Code.
Anthropic's agentic CLI for terminal-first development.
Python SDK for building production-grade OpenAI agent workflows.
Google's framework for building production AI agents.
Type-safe agent development for Python with first-class MCP support.
TypeScript toolkit for building AI-powered web applications.
TypeScript-native agent framework for modern web stacks.
Python framework for orchestrating collaborative AI agent crews.
Leading Python framework for composable LLM applications.
Data-aware AI agent framework for structured and unstructured sources.
Microsoft's framework for multi-agent collaborative conversations.
Connect New Relic AI (LLM Observability) to CrewAI
Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.
