Bring LLM Workflows to Vercel AI SDK
Learn how to connect FlowiseAI to Vercel AI SDK and start using 12 AI agent tools in minutes. Fully managed, enterprise secure, and ready to use without writing a single line of code.
What is the FlowiseAI MCP Server?
Connect your FlowiseAI (self-hosted) instance to any AI agent and take full control of your LLM orchestration and RAG workflows through natural conversation.
What you can do
- Prediction Orchestration — Trigger specific chatflows and retrieve LLM-generated responses programmatically using natural language inputs (see the sketch after this list)
- Chatflow Management — List all orchestration flows and retrieve detailed technical structures and metadata to monitor your AI agents
- Vector Intelligence — Programmatically upsert documents or raw data into the vector stores linked to your chatflows to ensure high-fidelity context
- Component Oversight — Access server-wide credentials, custom tools, and global variables to manage your complete Flowise ecosystem
- Operational Visibility — Monitor user feedback, leads, and assistant profiles directly through your agent for instant reporting
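For a concrete sense of what the prediction capability wraps, the sketch below calls Flowise's standard REST prediction endpoint directly, which is presumably what the prediction tool invokes under the hood. The env var names and chatflow ID are placeholders:

```typescript
// Hedged sketch of the Flowise prediction call the prediction tool wraps.
// FLOWISE_URL, FLOWISE_API_KEY, and the chatflow ID are placeholders.
const chatflowId = 'your-chatflow-id';

const res = await fetch(
  `${process.env.FLOWISE_URL}/api/v1/prediction/${chatflowId}`,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.FLOWISE_API_KEY}`,
    },
    body: JSON.stringify({ question: 'Summarize our refund policy.' }),
  },
);

const { text } = await res.json(); // the chatflow's LLM-generated answer
console.log(text);
```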
How it works
1. Subscribe to this server
2. Enter your Flowise Instance URL and API Key
3. Start orchestrating your LLM flows from Claude, Cursor, or any MCP client
No more manual testing in the Flowise UI for every prediction. Your AI acts as your dedicated LLM operations and orchestration coordinator.
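A minimal connection sketch in TypeScript, assuming the `createMCPClient` import described in the FAQ below (exact export names vary by SDK version) and a placeholder env var for the endpoint Vinkius gives you after subscribing:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createMCPClient } from '@ai-sdk/mcp';

// Placeholder URL; use the endpoint provided after you subscribe.
const mcp = await createMCPClient({
  transport: { type: 'sse', url: process.env.VINKIUS_FLOWISE_MCP_URL! },
});

const { text } = await generateText({
  model: openai('gpt-4o'),
  tools: await mcp.tools(), // all 12 FlowiseAI tools, discovered automatically
  maxSteps: 2, // allow one tool call plus a final answer (AI SDK 4 style)
  prompt: 'List my LLM orchestration flows and summarize what each one does.',
});

console.log(text);
await mcp.close();
```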
Who is this for?
- AI Developers — instantly test and trigger complex orchestration flows using natural language queries
- Data Engineers — automate document ingestion into vector stores without leaving your workspace
- Product Managers — monitor chatflow performance and review captured leads through simple AI commands
Built-in capabilities (12)
Trigger an LLM flow prediction
Get details for a specific chatflow
Get Flowise server version
List OpenAI-style assistants
List user feedback for a chatflow
List all LLM orchestration flows
List custom tools
List captured leads
List global variables
List configured credentials
List chatflow templates
Push data into a vector store
Why Vercel AI SDK?
The Vercel AI SDK gives every FlowiseAI tool full TypeScript type inference, IDE autocomplete, and compile-time error checking. Connect 12 tools through Vinkius and stream results progressively to React, Svelte, or Vue components. The SDK runs on Edge Functions, Cloudflare Workers, and any Node.js runtime.
- TypeScript-first: every MCP tool gets full type inference, IDE autocomplete, and compile-time error checking out of the box
- Framework-agnostic: the core works with Next.js, Nuxt, SvelteKit, or any Node.js runtime, with the same FlowiseAI integration everywhere
- Built-in streaming UI primitives let you display FlowiseAI tool results progressively in React, Svelte, or Vue components
- Edge-compatible: the AI SDK runs on Vercel Edge Functions, Cloudflare Workers, and other edge runtimes for minimal latency
FlowiseAI in Vercel AI SDK
FlowiseAI and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect FlowiseAI to Vercel AI SDK through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| Capability | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for FlowiseAI in Vercel AI SDK
The FlowiseAI MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 12 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in Vercel AI SDK only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures FlowiseAI for Vercel AI SDK
Every tool call from Vercel AI SDK to the FlowiseAI MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.
Frequently asked questions
How do I find my API Key in Flowise?
Log in to your Flowise dashboard and click on the API Keys tab in the sidebar to generate or copy your unique token.
Does this support multi-tenant instances?
Yes! Ensure you provide the full Instance URL and the API Key corresponding to the specific environment you want to manage.
Can I push documents to vector stores via AI?
Absolutely. Use the upsert_vector_data tool by providing the chatflow_id and the JSON payload containing your document data.
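A hedged sketch of an agent-driven upsert: the model selects the tool and fills in the arguments itself. The argument names follow this answer, but the exact schema comes from the server's tool definition, and the chatflow ID is a placeholder:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createMCPClient } from '@ai-sdk/mcp';

const mcp = await createMCPClient({
  transport: { type: 'sse', url: process.env.VINKIUS_FLOWISE_MCP_URL! },
});

// The model picks upsert_vector_data and supplies chatflow_id + payload.
await generateText({
  model: openai('gpt-4o'),
  tools: await mcp.tools(),
  maxSteps: 2, // tool call, then confirmation (AI SDK 4 style)
  prompt:
    'Upsert this document into the vector store for chatflow abc123: ' +
    '"Refunds are available within 30 days of purchase."',
});

await mcp.close();
```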
How does the Vercel AI SDK connect to MCP servers?
Import createMCPClient from @ai-sdk/mcp and pass the server URL. The SDK discovers all tools and provides typed TypeScript interfaces for each one.
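In practice, discovery looks like this minimal sketch (the server URL is a placeholder, and the import path is the one stated above):

```typescript
import { createMCPClient } from '@ai-sdk/mcp';

const mcp = await createMCPClient({
  transport: { type: 'sse', url: 'https://your-server.example/mcp' },
});

const tools = await mcp.tools(); // typed tool set, discovered at runtime
console.log(Object.keys(tools)); // the 12 FlowiseAI tool names
await mcp.close();
```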
Can I use MCP tools in Edge Functions?
Yes. The AI SDK is fully edge-compatible. MCP connections work on Vercel Edge Functions, Cloudflare Workers, and similar runtimes.
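For example, a Next.js route can opt into the edge runtime with the standard `export const runtime = 'edge'` flag; the rest of this sketch reuses the hypothetical client setup from earlier:

```typescript
// app/api/tools/route.ts — hypothetical edge route
import { createMCPClient } from '@ai-sdk/mcp';

export const runtime = 'edge'; // standard Next.js flag: run on the edge runtime

export async function GET() {
  const mcp = await createMCPClient({
    transport: { type: 'sse', url: process.env.VINKIUS_FLOWISE_MCP_URL! },
  });
  const tools = await mcp.tools();
  await mcp.close();
  return Response.json({ toolCount: Object.keys(tools).length });
}
```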
Does it support streaming tool results?
Yes. The SDK provides streaming primitives like useChat and streamText that handle tool calls and display results progressively in the UI.
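On the client, an AI SDK 4-style `useChat` component might look like the sketch below; the `/api/chat` path matches the hypothetical route handler sketched earlier:

```tsx
'use client';
// Hypothetical client component using the AI SDK 4-style useChat hook.
import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat', // the streaming route handler sketched earlier
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Ask FlowiseAI…"
      />
    </form>
  );
}
```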
What if I get "createMCPClient is not a function"?
Install the package with npm install @ai-sdk/mcp and make sure you import createMCPClient from it.
