Verba MCP Server
Connect your Verba RAG platform to your AI agent. Search your documents, retrieve semantic answers, and manage your Weaviate knowledge base directly.
Vinkius supports streamable HTTP and SSE.

* Every MCP server runs on Vinkius-managed infrastructure inside AWS - a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40 ms cold starts optimized for native MCP execution. See our infrastructure page for details.
What is the Verba MCP Server?
The Verba MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to Verba via 6 tools: search your documents, retrieve semantic answers, and manage your Weaviate knowledge base directly from your agent. Powered by the Vinkius platform - no API keys, no infrastructure, connect in under 2 minutes.
Built-in capabilities (6)
Tools for your AI Agents to operate Verba
Ask your AI agent "Check Verba's configuration to see which embedding model it is currently using" and get the answer without opening a single dashboard. With 6 tools connected to real Verba data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.
Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by the Vinkius platform: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
Why teams choose Vinkius
One subscription gives you access to thousands of MCP servers - and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.
Build your own MCP Server with our secure development framework →
Vinkius works with every AI agent you already use
…and any MCP-compatible client


















Verba MCP Server capabilities
6 tools:
- Ingests a new document into the Verba knowledge base. Provide the document content and optional metadata JSON.
- Permanently removes a document from the knowledge base. This action is irreversible.
- Retrieves the full content and metadata of a specific document.
- Retrieves the current Verba system configuration.
- Lists all documents indexed in the Verba knowledge base.
- Executes a RAG (Retrieval Augmented Generation) query against the Verba knowledge base. Returns summarized answers with citations.
What the Verba MCP Server unlocks
Bring the open-source Verba (by Weaviate) ecosystem natively into your conversational AI IDE. Run powerful Retrieval-Augmented Generation workflows and manage your local knowledge bases simply by chatting.
What you can do
- Augmented Queries — Ask your agent a question and have it retrieve fully synthesized answers from the Verba engine, backed by exact document citations.
- Knowledge Management — Insert new context, list all ingested documents, retrieve the raw content and metadata of any document by ID, or remove stale knowledge on the fly, no web UI required.
- Health Checks — Request the system configuration directly via chat to verify your local LLM connections, embedding models, and cluster health.
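As a concrete illustration of the augmented-query flow, here is a minimal sketch of the payload an agent might assemble for the RAG query tool. The field names (`query`, `max_chunks`, `with_citations`) are assumptions for illustration, not Verba's documented request schema:

```python
def build_rag_query(question: str, max_chunks: int = 5) -> dict:
    """Assemble a hypothetical perform_rag_query payload.

    Field names are illustrative assumptions, not Verba's documented API.
    """
    if not question.strip():
        raise ValueError("question must be non-empty")
    return {
        "query": question,
        "max_chunks": max_chunks,   # how many retrieved snippets to synthesize over
        "with_citations": True,     # request document IDs alongside the answer
    }
```

In practice the agent handles this serialization for you; the sketch only shows the kind of structured call that happens behind a conversational prompt.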
How it works
1. Ensure your local or cloud Verba instance is running
2. Supply your Verba API URL and API Key (if authenticated)
3. Ask Claude or Cursor to query, retrieve, or insert documents conversationally.
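Steps 1-2 above can be captured in a small connection helper. `VERBA_API_URL` is the variable named on this page; the `VERBA_API_KEY` variable name and the Bearer-token header format are assumptions for illustration:

```python
import os

def verba_connection() -> tuple:
    """Read the Verba endpoint and optional API key from the environment.

    VERBA_API_URL comes from this page's setup steps; VERBA_API_KEY and
    the Authorization header format are assumptions, not a documented API.
    """
    base_url = os.environ.get("VERBA_API_URL", "http://localhost:8000").rstrip("/")
    headers = {"Content-Type": "application/json"}
    api_key = os.environ.get("VERBA_API_KEY")  # optional for unauthenticated local setups
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return base_url, headers
```

With nothing configured, the helper falls back to the local default, which matches the localhost setup described in the FAQ below.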
Who is this for?
- RAG Developers — quickly add or delete text chunks to evaluate changes in embedding fidelity directly inside your IDE coding session.
- Knowledge Managers — query your dense technical manuals using semantic search and receive the verified text snippets instantly.
- Open Source Hobbyists — orchestrate your personal Weaviate/Verba RAG stacks entirely through a conversational interface.
Frequently asked questions about the Verba MCP Server
Can I query my local Verba instance directly through Cursor?
Yes! Once you configure VERBA_API_URL to point to http://localhost:8000 (or your host port), you can prompt your AI assistant to run perform_rag_query calls without ever leaving your editor.
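For reference, a client-side MCP configuration for a local instance might look like the sketch below. The server entry name, the placeholder endpoint URL, and the exact field layout are assumptions; follow your client's setup guide for the authoritative format:

```json
{
  "mcpServers": {
    "verba": {
      "url": "https://<your-vinkius-endpoint>",
      "env": {
        "VERBA_API_URL": "http://localhost:8000"
      }
    }
  }
}
```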
How do I insert fresh text data into Verba using conversational chat?
Provide the agent with your desired content directly. For example: "Add this chunk of markdown as a new document to Verba: '# Title
Content...'". The agent invokes the addDocument tool, serializes the payload, and commits it into Verba's vector store.
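Behind that prompt, the serialization step might look like the following sketch. The `content`/`metadata` key names are assumptions for illustration, not Verba's documented schema:

```python
import json

def build_add_document(content: str, metadata: dict = None) -> str:
    """Return the JSON string an agent might pass to the addDocument tool.

    Key names here are illustrative assumptions, not a documented schema.
    """
    payload = {"content": content, "metadata": metadata or {}}
    return json.dumps(payload)
```

The agent performs this automatically; you only supply the text in chat.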
Are the query answers backed by citations from the embedded documents?
Absolutely. That's the primary benefit of the integration. When you run perform_rag_query, Verba utilizes Weaviate's hybrid search mechanics. The output explicitly includes natural language synthesis backed by the unique document IDs and snippet texts it referenced.
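To make the citation claim concrete, here is a sketch of how a client could collect the referenced document IDs from a query response. The response shape (`chunks` list with `doc_id` keys) is an assumption, not Verba's documented output format:

```python
def extract_citations(response: dict) -> list:
    """Collect unique document IDs referenced by a hypothetical RAG response.

    The response shape used here is an illustrative assumption.
    """
    seen = []
    for chunk in response.get("chunks", []):
        doc_id = chunk.get("doc_id")
        if doc_id and doc_id not in seen:
            seen.append(doc_id)
    return seen
```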
More in this category
You might also like
Connect Verba with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Anthropic's native desktop app for Claude with built-in MCP support.
AI-first code editor with integrated LLM-powered coding assistance.
GitHub Copilot in VS Code with Agent mode and MCP support.
Purpose-built IDE for agentic AI coding workflows.
Autonomous AI coding agent that runs inside VS Code.
Anthropic's agentic CLI for terminal-first development.
Python SDK for building production-grade OpenAI agent workflows.
Google's framework for building production AI agents.
Type-safe agent development for Python with first-class MCP support.
TypeScript toolkit for building AI-powered web applications.
TypeScript-native agent framework for modern web stacks.
Python framework for orchestrating collaborative AI agent crews.
Leading Python framework for composable LLM applications.
Data-aware AI agent framework for structured and unstructured sources.
Microsoft's framework for multi-agent collaborative conversations.
Give your AI agents the power of Verba MCP Server
Production-grade Verba MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.






