
New Relic AI (LLM Observability) MCP Server

Built by Vinkius · GDPR Tools · Free for Subscribers

Monitor and audit LLM telemetry via New Relic AI — track token costs, p95 latency, and user feedback.

Vinkius supports streamable HTTP and SSE.

High Security · Kill Switch · Plug and Play

  • Fully managed Vinkius servers
  • Up to 60% token savings
  • Enterprise-grade security
  • IAM access control
  • EU AI Act compliant
  • DLP data protection
  • V8 isolate sandboxing
  • Ed25519-signed audit chain
  • Sub-40 ms kill switch
Stream every event to Splunk, Datadog, or your own webhook in real-time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40 ms cold starts optimized for native MCP execution. See our infrastructure.

What is the New Relic AI MCP Server?

The New Relic AI MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to New Relic AI via 10 tools. Monitor and audit LLM telemetry: track token costs, p95 latency, and user feedback. Powered by Vinkius: no API keys to manage, no infrastructure to run, and you can connect in under two minutes.

Built-in capabilities (10)

custom_nrql · list_alert_policies · list_apm_apps · list_dashboards · post_custom_event · query_llm_costs · query_llm_errors · query_llm_events · query_llm_feedback · query_llm_latency

Tools for your AI Agents to operate New Relic AI

Ask your AI agent "Show me the last 5 LLM events for the 'OpenAI' vendor" and get the answer without opening a single dashboard. With 10 tools connected to real New Relic AI data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.

Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.

Why teams choose Vinkius

One subscription gives you access to thousands of MCP servers, and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.

Build your own MCP Server with our secure development framework →

Vinkius works with every AI agent you already use

…and any MCP-compatible client

Cursor · Claude · OpenAI · VS Code · Copilot · Google · Lovable · Mistral · AWS

New Relic AI (LLM Observability) MCP Server capabilities

10 tools
custom_nrql

Run a custom NRQL query against your account. Note that NRQL access is read-only.

list_alert_policies

List the alert policies configured in your New Relic account.

list_apm_apps

List the APM applications reporting to your account.

list_dashboards

List the dashboards available in your account.

post_custom_event

Post a custom `CustomAITelemetry` event to the Events API (`/events`) to track internal agent state.

query_llm_costs

Aggregate token costs from your LLM events, faceted by model, to report USD spend.

query_llm_errors

Retrieve recent LLM error events to surface failing completions.

query_llm_events

Retrieve recent LLM chat completion events, filterable by vendor.

query_llm_feedback

Retrieve user feedback messages and 1-5 rating scores attached to LLM responses.

query_llm_latency

Retrieve average duration and p95 latency for your LLM generations.
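To illustrate what an event posted through post_custom_event could look like, here is a minimal sketch. The array-of-objects payload shape and the required `eventType` field follow New Relic's Event API conventions; the `agentState` and `toolName` attribute names are illustrative assumptions, not part of the documented tool.

```python
import json

def build_custom_event(agent_state: str, tool_name: str) -> list:
    """Build an Event API payload: a JSON array of event objects,
    each carrying a mandatory eventType attribute."""
    return [{
        "eventType": "CustomAITelemetry",  # event type named in the tool description
        "agentState": agent_state,         # illustrative attribute
        "toolName": tool_name,             # illustrative attribute
    }]

# Serialize for an HTTP POST body
payload = json.dumps(build_custom_event("planning", "query_llm_costs"))
```

The payload would then be POSTed to your account's Event API endpoint with your insert key; error handling and transport are omitted here.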

What the New Relic AI (LLM Observability) MCP Server unlocks

Connect your New Relic AI account to any AI agent and take full control of your LLM observability, token cost tracking, and performance analytics through natural conversation.

What you can do

  • LLM Telemetry Audit — Retrieve detailed LLM chat completion messages and prompt inputs directly from your agent to understand model behavior in real time
  • Token Cost Tracking — Aggregate per-model costs to calculate exact USD token spend across your entire AI infrastructure
  • Performance Monitoring — Extract p95 latency metrics and average response times to keep your LLM text generation performant and sub-second
  • User Feedback Loop — Retrieve chronological feedback messages and 1-5 rating scores submitted by human reviewers to identify quality regressions
  • Custom NRQL Execution — Run sophisticated read-only queries in the New Relic Query Language (NRQL) to extract insights from your AI datasets instantly
  • Custom Event Injection — Post custom telemetry events to track internal agent states and behavioral markers across your observability pipeline
  • Resource Discovery — Enumerate active APM apps, dashboards, and alert policies to audit your AI environment's health and alerting configuration

How it works

1. Subscribe to this server
2. Enter your New Relic API Key and Account ID
3. Start monitoring your AI stack from Claude, Cursor, or any MCP-compatible client

Who is this for?

  • AI Engineers — monitor LLM prompt performance and verify model accuracy through natural conversation without manual dashboard navigation
  • Observability Leads — track global AI token costs and p95 latency benchmarks directly from your workspace to optimize infrastructure spend
  • DevOps Teams — audit APM app health and verify alert policy triggers across multiple AI environments efficiently

Frequently asked questions about the New Relic AI (LLM Observability) MCP Server

01

Can I check my total AI token costs through my agent?

Yes. Use the query_llm_costs tool. Your agent will execute an NRQL aggregation summing the tokenSpanCost property from your LLM events over the last 24 hours, faceted by model, to provide a clear financial breakdown.
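As a rough sketch, the NRQL string behind such an aggregation might look like the following. The `tokenSpanCost` attribute, the per-model facet, and the 24-hour window come from the answer above; the event type name `LlmCompletion` is an assumption for illustration.

```python
def build_cost_query(event_type: str = "LlmCompletion") -> str:
    """Build a read-only NRQL aggregation: total token cost per model
    over the trailing 24 hours."""
    return (
        f"SELECT sum(tokenSpanCost) FROM {event_type} "
        "FACET model SINCE 24 hours ago"
    )
```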

02

How do I monitor the p95 latency of my LLM generations?

The query_llm_latency tool retrieves the average duration and p95 latency metrics for your AI providers. Your agent will report the results as a table or summary, helping you identify performance bottlenecks instantly.
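A comparable latency query can be sketched in NRQL, using its `average()` and `percentile()` aggregators. The event type name and the `vendor` facet are assumptions for illustration; the p95 target comes from the tool's description.

```python
def build_latency_query(event_type: str = "LlmCompletion") -> str:
    """Build a read-only NRQL query for average and p95 generation latency,
    broken down by provider."""
    return (
        "SELECT average(duration), percentile(duration, 95) "
        f"FROM {event_type} FACET vendor SINCE 1 hour ago"
    )
```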

03

Can my agent run custom NRQL queries against my telemetry data?

Absolutely. Use the custom_nrql tool to provide any valid read-only NRQL string. Your agent will query New Relic's NerdGraph API and return the resulting dataset, allowing for complete flexibility in how you analyze your AI operations.
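For readers curious what a direct NerdGraph call looks like outside the agent, here is a minimal sketch using only the standard library. The endpoint URL, the `API-Key` header, and the `actor.account.nrql` query shape follow New Relic's NerdGraph API; retries and error handling are omitted, and the account ID and NRQL string are placeholders.

```python
import json
import urllib.request

NERDGRAPH_URL = "https://api.newrelic.com/graphql"

def build_nerdgraph_payload(account_id: int, nrql: str) -> dict:
    """Wrap a read-only NRQL string in NerdGraph's GraphQL query shape.
    json.dumps() safely quotes/escapes the NRQL string."""
    gql = (
        "{ actor { account(id: %d) { nrql(query: %s) { results } } } }"
        % (account_id, json.dumps(nrql))
    )
    return {"query": gql}

def run_nrql(api_key: str, account_id: int, nrql: str) -> list:
    """POST the wrapped query to NerdGraph and return the result rows."""
    req = urllib.request.Request(
        NERDGRAPH_URL,
        data=json.dumps(build_nerdgraph_payload(account_id, nrql)).encode(),
        headers={"Content-Type": "application/json", "API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["data"]["actor"]["account"]["nrql"]["results"]
```

With the MCP server, the agent performs the equivalent of `run_nrql` for you, with credentials held by Vinkius rather than passed to the model.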

More in this category

You might also like

Give your AI agents the power of the New Relic AI MCP Server

Production-grade New Relic AI (LLM Observability) MCP Server: verified, monitored, and maintained by Vinkius. Ready for your AI agents; connect and start using it immediately.