Arize AI MCP Server
Automate LLM and ML observability via Arize — monitor models, track telemetry, run evaluations, and analyze data drift directly from any AI agent.
Vinkius AI Gateway supports streamable HTTP and SSE.

Works with every AI agent you already use
…and any MCP-compatible client
Arize AI MCP Server: see your AI Agent in action
Built-in capabilities (10)
get_dataset
Get a specific evaluation dataset
get_metrics
Fetch observability metrics for an ML model
get_model
Get details and metadata for a specific tracked model. A model defines its inputs, outputs, and features
ingest_log
Ingest raw telemetry logs into Arize; payload_json must contain valid Arize payload structures
list_datasets
List static evaluation datasets
list_environments
List configured environments within Arize (e.g., Production, Training, Verification) used to segregate model inferences and baseline datasets
list_evals
List automated evaluation runs (e.g., Toxicity, Hallucination, PII filtering)
list_models
List tracked ML models or LLMs
list_spaces
List accessible workspaces within the Arize platform. Spaces separate different models and telemetry datasets
run_eval
Trigger a custom LLM evaluation run
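As an illustrative sketch, each capability above is exposed as an MCP tool and invoked with a standard JSON-RPC `tools/call` message. The snippet below shows what a client request to `get_metrics` might look like; the argument names (`model_id`, `metric`) are assumptions for illustration, not the server's documented schema.

```python
import json

# Hypothetical MCP tools/call request for the get_metrics tool.
# The tool name comes from the capability list above; the argument
# keys and values are illustrative assumptions only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_metrics",
        "arguments": {
            "model_id": "fraud-detection-v2",  # assumed identifier
            "metric": "prediction_drift",      # assumed metric name
        },
    },
}

# Serialize to the wire format the MCP transport would carry.
print(json.dumps(request, indent=2))
```

Any MCP-compatible client assembles messages of this shape for you; you only ever describe what you want in natural language.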
What this connector unlocks
Connect your Arize AI observability platform to any AI agent and take full control of your Machine Learning and LLM telemetry workflows through natural conversation.
What you can do
- Model Monitoring & Metrics — List all tracked ML models, extract deep configuration schemas, and fetch real-time metrics (performance, data quality, and prediction drift)
- Evaluation & Alignment — Launch and list automated LLM evaluation runs (e.g., Toxicity, Hallucination, PII filtering) against static datasets and ground truth baselines
- Telemetry Ingestion — Push programmatic raw logs, predictions, and inferences straight into Arize for immediate visualization and tracking
- Space & Environment Management — Browse organizational spaces and segregated deployment environments (Production, Training, Verification)
How it works
1. Subscribe to this server
2. Enter your Arize API Key and Space ID
3. Start monitoring your prediction health from Claude, Cursor, or any MCP-compatible client
No more context-switching into graphical dashboards to figure out why an LLM response hallucinated. Your AI agent acts as a dedicated MLOps engineer.
Who is this for?
- Machine Learning Engineers — rapidly push inference telemetry and query performance degradation flags without leaving your terminal
- AI Product Managers — instantly monitor output toxicity, drift rates, and usage metrics across multiple LLM integrations
- Data Scientists — manage baseline evaluation datasets and trigger custom scoring loops asynchronously
Give your AI agents the power of Arize AI
Access Arize AI and 2,000+ MCP servers — ready for your agents to use, right now. No glue code. No custom integrations. Just plug in Vinkius AI Gateway and let your agents work.
More in this category

Retell AI
10 tools
Empower your conversational AI to orchestrate, analyze, and automate phone calls or web-based voice agent interactions via Retell.

Amazon Bedrock KB
6 tools
Connect your AI agent to AWS Bedrock Knowledge Bases — execute semantic searches, managed RAG, and sync vector datasources natively.
fal.ai 3D
12 tools
Generate 3D models via fal.ai — convert images and text to 3D assets using Rodin, TripoSR, Trellis, and 9+ AI models from any AI agent.
You might also like

Channable
8 tools
Manage marketplace orders and stock via Channable — track sales, update shipments, and monitor returns directly from any AI agent.

Azure Synapse Analytics
7 tools
Manage your Azure Synapse data pipelines seamlessly — audit Spark pools, SQL pools, datasets, and integration pipelines via your AI agent.

IgnitePOST
10 tools
Manage hand-written note orders and outreach campaigns via IgnitePOST API.
