
Portkey MCP Server for LlamaIndex: 10 tools — connect in under 2 minutes

Built by Vinkius · GDPR · 10 Tools · Framework

LlamaIndex specializes in data-aware AI agents that connect LLMs to structured and unstructured sources. Add Portkey as an MCP tool provider through Vinkius and your agents can query, analyze, and act on live data alongside your existing indexes.

Vinkius supports the Streamable HTTP and SSE transports.

Python example:
import asyncio
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

async def main():
    # Your Vinkius token — get it at cloud.vinkius.com
    mcp_client = BasicMCPClient("https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")
    mcp_tool_spec = McpToolSpec(client=mcp_client)
    # Discover Portkey's MCP tools and wrap them as LlamaIndex function tools
    tools = await mcp_tool_spec.to_tool_list_async()

    agent = FunctionAgent(
        tools=tools,
        llm=OpenAI(model="gpt-4o"),
        system_prompt=(
            "You are an assistant with access to Portkey. "
            "You have 10 tools available."
        ),
    )

    response = await agent.run(
        "What tools are available in Portkey?"
    )
    print(response)

asyncio.run(main())
Portkey at a glance:

  • Fully managed Vinkius servers
  • 60% token savings
  • Enterprise-grade security
  • IAM access control
  • EU AI Act compliant
  • DLP data protection
  • V8 isolate sandboxing
  • Ed25519 audit chain
  • <40ms kill switch
  • Stream every event to Splunk, Datadog, or your own webhook in real time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.

About Portkey MCP Server

What you can do

LlamaIndex agents combine Portkey tool responses with indexed documents for comprehensive, grounded answers. Connect 10 tools through Vinkius and query live data alongside vector stores and SQL databases in a single turn — ideal for hybrid search, data enrichment, and analytical workflows.

Connect AI agents to the Portkey AI Gateway for enterprise-grade observability and management:

  • Monitor logs and traces of all LLM calls passing through your gateway
  • Analyze token usage, latency, and costs across models and teams
  • Submit feedback (Likes/Dislikes) to improve model quality and agent performance
  • Export logs for audit trails, compliance, and offline cost analysis
  • Review gateway configurations including retry policies, fallbacks, and cache settings
  • Manage virtual keys to track provider API key usage and limits
  • Discover supported models from 1,600+ LLMs available via Portkey
  • Enforce budget policies to prevent runaway AI costs per team or project

The Portkey MCP Server exposes 10 tools through Vinkius. Connect it to LlamaIndex in under two minutes — no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.

How to Connect Portkey to LlamaIndex via MCP

Follow these steps to integrate the Portkey MCP Server with LlamaIndex.

01

Install dependencies

Run pip install llama-index-tools-mcp llama-index-llms-openai

02

Replace the token

Replace [YOUR_TOKEN_HERE] with your Vinkius token

03

Run the agent

Save to agent.py and run: python agent.py

04

Explore tools

The agent discovers 10 tools from Portkey
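
To confirm the discovery step before handing the tools to an agent, a small sketch like this prints each tool's name and description. It reuses the BasicMCPClient and McpToolSpec setup from the example above, with the same Vinkius endpoint and token placeholder:

import asyncio
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def list_portkey_tools():
    # Same Vinkius endpoint as the main example; replace the token placeholder
    client = BasicMCPClient("https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")
    tools = await McpToolSpec(client=client).to_tool_list_async()
    for tool in tools:
        # Each entry is a LlamaIndex FunctionTool carrying the MCP tool's metadata
        print(tool.metadata.name, "-", tool.metadata.description)

asyncio.run(list_portkey_tools())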

Why Use LlamaIndex with the Portkey MCP Server

LlamaIndex provides unique advantages when paired with Portkey through the Model Context Protocol.

01

Data-first architecture: LlamaIndex agents combine Portkey tool responses with indexed documents for comprehensive, grounded answers

02

Query pipeline framework: chain Portkey tool calls with transformations, filters, and re-rankers in a typed pipeline

03

Multi-source reasoning: agents can query Portkey, a vector store, and a SQL database in a single turn and synthesize results (a sketch follows this list)

04

Observability integrations: see exactly which Portkey tools were called, what data was returned, and how it influenced the final answer
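
A minimal sketch of that multi-source pattern, assuming some local documents live in a hypothetical ./data folder and reusing the Vinkius endpoint from the main example; the tool name, description, and prompt are illustrative only:

import asyncio
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def main():
    # Portkey tools exposed over MCP via Vinkius (same endpoint as the main example)
    mcp_client = BasicMCPClient("https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")
    portkey_tools = await McpToolSpec(client=mcp_client).to_tool_list_async()

    # A local document index wrapped as a tool (hypothetical ./data folder)
    docs = SimpleDirectoryReader("./data").load_data()
    index_tool = QueryEngineTool.from_defaults(
        VectorStoreIndex.from_documents(docs).as_query_engine(),
        name="internal_docs",
        description="Search our internal documentation",
    )

    agent = FunctionAgent(
        tools=portkey_tools + [index_tool],
        llm=OpenAI(model="gpt-4o"),
        system_prompt="Answer using Portkey gateway data and internal docs together.",
    )
    print(await agent.run(
        "Summarize our LLM spend this week and relate it to the cost policy in our docs."
    ))

asyncio.run(main())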

Portkey + LlamaIndex Use Cases

Practical scenarios where LlamaIndex combined with the Portkey MCP Server delivers measurable value.

01

Hybrid search: combine Portkey real-time data with embedded document indexes for answers that are both current and comprehensive

02

Data enrichment: query Portkey to augment indexed data with live information before generating user-facing responses

03

Knowledge base agents: build agents that maintain and update knowledge bases by periodically querying Portkey for fresh data (a periodic-refresh sketch follows this list)

04

Analytical workflows: chain Portkey queries with LlamaIndex's data connectors to build multi-source analytical reports
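
One way to sketch the knowledge-base refresh from use case 03, assuming a simple periodic job is acceptable and treating the agent's answer as plain text; the prompt and the hourly interval are illustrative, and `agent` is the Portkey-connected FunctionAgent from the main example:

import asyncio
from llama_index.core import Document, VectorStoreIndex

async def refresh_knowledge_base(agent, index: VectorStoreIndex) -> None:
    # `agent` is the Portkey-connected FunctionAgent from the main example
    answer = await agent.run(
        "Summarize today's gateway usage, costs, and any policy breaches."
    )
    # Persist the summary so later index queries reflect fresh gateway data
    index.insert(Document(text=str(answer)))

async def refresh_forever(agent, index: VectorStoreIndex, interval: int = 3600) -> None:
    # Hypothetical hourly cadence; adjust to your freshness requirements
    while True:
        await refresh_knowledge_base(agent, index)
        await asyncio.sleep(interval)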

Portkey MCP Tools for LlamaIndex (10)

These 10 tools become available when you connect Portkey to LlamaIndex via MCP (a direct tool-call sketch follows the list):

01

create_policy

Create a new budget or usage policy for AI gateway access. Requires a policy name, a budget limit (USD or token count), and optionally the target users or virtual keys to restrict. Returns the created policy details. Use this to enforce cost controls on specific teams or projects using the gateway.

02

delete_policy

Remove a budget or usage policy from Portkey. Requires the policy ID. Use this when a project ends or budget constraints are no longer needed.

03

export_logs

Export AI gateway logs for external analysis or compliance reporting. Optionally filters by date range, model, or user. Returns an export ID or download URL. Use this for audit trails, cost reporting, or offline analysis of AI usage patterns.

04

get_log_details

Get detailed information about a specific AI gateway log entry. Requires the log ID from list_logs results. Use this for deep debugging of specific AI interactions.

05

get_virtual_keys

List all virtual API keys managed by Portkey. Virtual keys map to underlying provider keys (OpenAI, Anthropic, etc.) with metadata, usage limits, and policy associations. Returns key IDs, names, provider targets, current usage, and status. Use this to audit API key usage or identify keys approaching limits.

06

list_configs

List all gateway configurations stored in Portkey. Returns config IDs, names, creation dates, and associated virtual keys. Use this to review how LLM requests are routed or to audit gateway behavior.

07

list_logs

List recent AI gateway logs and traces from Portkey. Returns log IDs, timestamps, model names, token usage, latency, costs, and status codes. Supports pagination via limit/offset. Use this to monitor AI usage, identify expensive calls, or debug latency issues.

08

list_models

List all LLM models supported by the Portkey gateway. Returns model names, provider names, supported endpoints (chat, embeddings, etc.), and capabilities. Use this to discover which models are routable via your gateway.

09

list_policies

List all budget and usage policies defined in Portkey. Returns policy names, limits, current consumption, and affected users/keys. Use this to review guardrails preventing runaway AI costs.

10

submit_feedback

Submit user feedback (Like/Dislike) for a specific AI response log. Requires the log ID, a rating (LIKE, DISLIKE, or UNLIKE to remove), and optional text feedback. Use this to build RLHF datasets or monitor user satisfaction with AI outputs.
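
For one-off checks outside an agent, a single tool can also be invoked through the MCP client directly. The sketch below calls list_logs with a small page size; it assumes BasicMCPClient exposes an async call_tool(tool_name, arguments) method, so verify against your installed llama-index-tools-mcp version:

import asyncio
from llama_index.tools.mcp import BasicMCPClient

async def main():
    client = BasicMCPClient("https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")
    # Assumed call_tool signature; list_logs supports limit/offset pagination
    # per the tool description above
    result = await client.call_tool("list_logs", {"limit": 5})
    print(result)

asyncio.run(main())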

Example Prompts for Portkey in LlamaIndex

Ready-to-use prompts you can give your LlamaIndex agent to start working with Portkey immediately; a loop that runs them follows the list.

01

"Show me the most expensive LLM calls from the last 24 hours"

02

"Create a budget policy limiting the Marketing team to $500/month on LLM usage"

03

"Export all logs from last week for our compliance audit"

Troubleshooting Portkey MCP Server with LlamaIndex

Common issues when connecting Portkey to LlamaIndex through Vinkius, and how to resolve them.

01

BasicMCPClient not found

Install: pip install llama-index-tools-mcp

Portkey + LlamaIndex FAQ

Common questions about integrating Portkey MCP Server with LlamaIndex.

01

How does LlamaIndex connect to MCP servers?

Use BasicMCPClient together with McpToolSpec to create a connection. LlamaIndex discovers all tools from the server and wraps them as function tools compatible with any LlamaIndex agent.
02

Can I combine MCP tools with vector stores?

Yes. LlamaIndex agents can query Portkey tools and vector store indexes in the same turn, combining real-time and embedded data for grounded responses.
03

Does LlamaIndex support async MCP calls?

Yes. LlamaIndex's async agent framework supports concurrent MCP tool calls for high-throughput data processing pipelines.
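
A minimal concurrency sketch along those lines, dispatching several prompts at once with asyncio.gather against the agent from the main example (treat it as illustrative; batching and state handling depend on your LlamaIndex version):

import asyncio

async def run_concurrently(agent, prompts: list[str]) -> None:
    # Each agent.run call may fan out into one or more MCP tool calls
    results = await asyncio.gather(*(agent.run(p) for p in prompts))
    for prompt, result in zip(prompts, results):
        print(f"{prompt}\n{result}\n")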

Connect Portkey to LlamaIndex

Get your token, paste the configuration, and start using 10 tools in under 2 minutes. No API key management needed.