Bring LLM Observability to LlamaIndex
Learn how to connect Keywords AI to LlamaIndex and start using 11 AI agent tools in minutes. Fully managed, enterprise secure, and ready to use without writing a single line of code.
What is the Keywords AI MCP Server?
Connect your Keywords AI account to any AI agent and monitor LLM performance.
What you can do
- Request Logs — List and filter all LLM API calls by model
- Cost Tracking — Monitor credit balance and usage statistics
- Analytics — View cost trends, latency metrics, and error rates
- Model Catalog — Browse available LLM models
- Team Management — List users and view activity
- Alerts — Review monitoring thresholds
Built-in capabilities (11)
Verify API connectivity
Get analytics dashboard
Get credit balance
Get request details
Get usage statistics
Get user details
List monitoring alerts
List available models
List API request logs
List requests by model
List team users
Why LlamaIndex?
LlamaIndex agents combine Keywords AI tool responses with indexed documents for comprehensive, grounded answers. Connect 11 tools through Vinkius and query live data alongside vector stores and SQL databases in a single turn, ideal for hybrid search, data enrichment, and analytical workflows.
- Data-first architecture: LlamaIndex agents combine Keywords AI tool responses with indexed documents for comprehensive, grounded answers
- Query pipeline framework lets you chain Keywords AI tool calls with transformations, filters, and re-rankers in a typed pipeline (see the sketch after this list)
- Multi-source reasoning: agents can query Keywords AI, a vector store, and a SQL database in a single turn and synthesize results
- Observability integrations show exactly what Keywords AI tools were called, what data was returned, and how it influenced the final answer
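The sketch below illustrates the pipeline bullet above under a few stated assumptions: the llama-index-tools-mcp adapter is installed, an OpenAI API key is configured, the server URL is a placeholder, and the period argument passed to get_usage_stats is illustrative rather than the tool's real signature.

```python
# A hedged sketch of chaining a Keywords AI tool call into a typed pipeline.
# Assumes the llama-index-tools-mcp adapter, an OpenAI API key, and a
# placeholder server URL; the get_usage_stats argument is illustrative only.
from llama_index.core.prompts import PromptTemplate
from llama_index.core.query_pipeline import FnComponent, QueryPipeline
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

client = BasicMCPClient("https://example-vinkius-endpoint/mcp")  # placeholder URL
tools_by_name = {t.metadata.name: t for t in McpToolSpec(client=client).to_tool_list()}


def fetch_usage_stats(period: str) -> str:
    # Invoke the get_usage_stats MCP tool and return its text output
    return tools_by_name["get_usage_stats"].call(period=period).content


summarize = PromptTemplate(
    "Summarize the cost and latency trends in this usage report:\n{report}"
)

# Tool call -> prompt -> LLM: each step's output feeds the next
pipeline = QueryPipeline(chain=[FnComponent(fn=fetch_usage_stats), summarize, OpenAI()])
print(pipeline.run(period="last_7_days"))
```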
Keywords AI in LlamaIndex
Keywords AI and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect Keywords AI to LlamaIndex through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for Keywords AI in LlamaIndex
The Keywords AI MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 11 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in LlamaIndex only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures Keywords AI for LlamaIndex
Every tool call from LlamaIndex to the Keywords AI MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, a kill switch, and financial circuit breakers.
Frequently asked questions
Can my AI track LLM costs?
Yes. get_credits shows your balance, and get_usage_stats breaks down costs by model and time period.
Can I filter request logs by model?
Yes. list_requests_by_model returns only requests made to a specific LLM.
What analytics are available?
get_analytics provides cost trends, latency percentiles, error rates, and token usage over time.
How does LlamaIndex connect to MCP servers?
Use the MCP client adapter to create a connection. LlamaIndex discovers every tool the server exposes and wraps each one as a tool that any LlamaIndex agent can call.
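A minimal connection sketch, assuming the llama-index-tools-mcp package is installed; the server URL is a placeholder for your Vinkius-hosted Keywords AI endpoint.

```python
# Connect LlamaIndex to the Keywords AI MCP Server and discover its tools.
# The URL below is a placeholder; substitute your Vinkius-hosted endpoint.
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

mcp_client = BasicMCPClient("https://example-vinkius-endpoint/mcp")

# Wrap every tool the server exposes so any LlamaIndex agent can call it
tool_spec = McpToolSpec(client=mcp_client)
tools = tool_spec.to_tool_list()

print([t.metadata.name for t in tools])
```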
Can I combine MCP tools with vector stores?
Yes. LlamaIndex agents can query Keywords AI tools and vector store indexes in the same turn, combining real-time and embedded data for grounded responses.
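A rough sketch of that hybrid setup, reusing the tools list from the connection example above and assuming a configured OpenAI key and a local ./docs folder; the agent class and constructor vary somewhat across LlamaIndex versions.

```python
# One agent over Keywords AI MCP tools plus a local vector index (sketch).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool

# Index local documents and expose the index as another tool
docs = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(docs)
docs_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="internal_docs",
    description="Search internal runbooks and documentation",
)

# `tools` comes from the MCP connection example above
agent = ReActAgent.from_tools(tools + [docs_tool], verbose=True)
print(agent.chat("Summarize yesterday's LLM error rate and cite the relevant runbook."))
```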
Does LlamaIndex support async MCP calls?
Yes. LlamaIndex's async agent framework supports concurrent MCP tool calls for high-throughput data processing pipelines.
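A sketch of concurrent calls with the same placeholder URL; it assumes get_credits and get_usage_stats can be called without arguments, which may not match the live tool schemas.

```python
# Discover tools asynchronously and run two Keywords AI calls concurrently.
import asyncio

from llama_index.tools.mcp import BasicMCPClient, McpToolSpec


async def main():
    client = BasicMCPClient("https://example-vinkius-endpoint/mcp")  # placeholder URL
    tools = await McpToolSpec(client=client).to_tool_list_async()
    by_name = {t.metadata.name: t for t in tools}

    # Fire both tool calls at once instead of waiting on each in turn
    credits, usage = await asyncio.gather(
        by_name["get_credits"].acall(),
        by_name["get_usage_stats"].acall(),
    )
    print(credits.content, usage.content)


asyncio.run(main())
```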
BasicMCPClient not found?
Install the MCP tools adapter: pip install llama-index-tools-mcp
