
Bring LLM Workflows to CrewAI

Learn how to connect FlowiseAI to CrewAI and start using 12 AI agent tools in minutes. Fully managed, enterprise-secure, and ready to use without writing a single line of code.


What is the FlowiseAI MCP Server?

Connect your FlowiseAI (self-hosted) instance to any AI agent and take full control of your LLM orchestration and RAG workflows through natural conversation.

What you can do

  • Prediction Orchestration — Trigger specific chatflows and retrieve LLM-generated responses programmatically using natural language inputs
  • Chatflow Management — List all orchestration flows and retrieve detailed technical structures and metadata to monitor your AI agents
  • Vector Intelligence — Programmatically upsert documents or raw data into the vector stores linked to your chatflows to ensure high-fidelity context
  • Component Oversight — Access server-wide credentials, custom tools, and global variables to manage your complete Flowise ecosystem
  • Operational Visibility — Monitor user feedback, leads, and assistant profiles directly through your agent for instant reporting

How it works

1. Subscribe to this server
2. Enter your Flowise Instance URL and API Key
3. Start orchestrating your LLM flows from Claude, Cursor, or any MCP client

No more manual testing in the Flowise UI for every prediction. Your AI acts as your dedicated LLM operations and orchestration coordinator.

Who is this for?

  • AI Developers — instantly test and trigger complex orchestration flows using natural language queries
  • Data Engineers — automate document ingestion into vector stores without leaving your workspace
  • Product Managers — monitor chatflow performance and review captured leads through simple AI commands

Built-in capabilities (12)

  • execute_chatflow_prediction — Trigger an LLM flow prediction
  • get_chatflow_details — Get details for a specific chatflow
  • get_server_version — Get the Flowise server version
  • list_ai_assistants — List OpenAI-style assistants
  • list_chat_feedback — List user feedback for a chatflow
  • list_chatflows — List all LLM orchestration flows
  • list_external_tools — List custom tools
  • list_flow_leads — List captured leads
  • list_flow_variables — List global variables
  • list_flowise_credentials — List configured credentials
  • list_marketplace_templates — List chatflow templates
  • upsert_vector_data — Push data into a vector store

Why CrewAI?

When paired with CrewAI, FlowiseAI becomes a first-class tool in your multi-agent workflows. Each agent in the crew can call FlowiseAI tools autonomously: one agent queries data, another analyzes results, a third compiles reports, all orchestrated through Vinkius with zero configuration overhead.

  • Multi-agent collaboration lets you decompose complex workflows into specialized roles: one agent researches, another analyzes, a third generates reports, each with access to MCP tools

  • CrewAI's native MCP integration requires zero adapter code: pass the Vinkius Edge URL directly in the mcps parameter and agents auto-discover every available tool at runtime (see the sketch after this list)

  • Built-in task delegation and shared memory mean agents can pass context between steps without manual state management, enabling multi-hop reasoning across tool calls

  • Sequential and hierarchical crew patterns map naturally to real-world workflows: list chatflows → trigger predictions → review user feedback → compile findings into actionable reports
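
Here is a minimal sketch of that pattern, assuming the mcps parameter described above. The Edge URL is a placeholder for the one shown in your Vinkius dashboard, and the underlying LLM is whatever your CrewAI environment is configured to use:

```python
from crewai import Agent, Crew, Process, Task

EDGE_URL = "https://edge.vinkius.example/mcp/flowiseai"  # placeholder

# One agent per role; each discovers the 12 FlowiseAI tools at runtime.
operator = Agent(
    role="LLM Operations Coordinator",
    goal="Trigger Flowise chatflows and collect their responses",
    backstory="You manage FlowiseAI orchestration flows through MCP tools.",
    mcps=[EDGE_URL],
)
analyst = Agent(
    role="Operations Analyst",
    goal="Summarize chatflow activity for stakeholders",
    backstory="You turn raw orchestration data into concise reports.",
    mcps=[EDGE_URL],
)

inventory = Task(
    description="Use the available tools to list all chatflows and note their IDs.",
    expected_output="A list of chatflow names and IDs.",
    agent=operator,
)
report = Task(
    description="Summarize the chatflow inventory into a short operations report.",
    expected_output="A three-paragraph report.",
    agent=analyst,
    context=[inventory],  # pass the inventory output to the analyst
)

crew = Crew(
    agents=[operator, analyst],
    tasks=[inventory, report],
    process=Process.sequential,
)
print(crew.kickoff())
```

Because tool discovery happens when the crew starts (see the FAQ below), adding a new tool on the server side requires no code changes here.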

See it in action

FlowiseAI in CrewAI

[Diagram: AI Agent ↔ Vinkius · High Security · Kill Switch · Plug and Play]
Why Vinkius

FlowiseAI and 3,400+ other MCP servers. One platform. One governance layer.

Teams that connect FlowiseAI to CrewAI through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.

3,400+ MCP servers ready · <40ms cold start · 60% token savings
                             Raw MCP                   Vinkius
Server catalog               Find and host yourself    3,400+ managed
Infrastructure               Self-hosted               Sandboxed V8 isolates
Credential handling          Plaintext in config       Vault + runtime injection
Data loss prevention         None                      Configurable DLP policies
Kill switch                  None                      Global instant shutdown
Financial circuit breakers   None                      Per-server limits + alerts
Audit trail                  None                      Ed25519 signed logs
SIEM log streaming           None                      Splunk, Datadog, Webhook
Honeytokens                  None                      Canary alerts on leak
Custom domains               Not applicable            DNS challenge verified
GDPR compliance              Manual effort             Automated purge + export
Enterprise Security

Why teams choose Vinkius for FlowiseAI in CrewAI

The FlowiseAI MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 12 tools execute in hardened sandboxes optimized for native MCP execution.

Your AI agents in CrewAI only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

FlowiseAI on fully managed Vinkius servers: 60% token savings · enterprise-grade security · IAM access control · EU AI Act compliant · DLP data protection · sandboxed V8 isolates · Ed25519 audit chain · <40ms kill switch
Stream every event to Splunk, Datadog, or your own webhook in real time.

* Every MCP server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.

The Vinkius Advantage

How Vinkius secures FlowiseAI for CrewAI

Every tool call from CrewAI to the FlowiseAI MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.

<40ms cold start · Ed25519 signed audit chain · 60% token savings
FAQ

Frequently asked questions

01

How do I find my API Key in Flowise?

Log in to your Flowise dashboard and click on the API Keys tab in the sidebar to generate or copy your unique token.

02

Does this support multi-tenant instances?

Yes! Ensure you provide the full Instance URL and the API Key corresponding to the specific environment you want to manage.

03

Can I push documents to vector stores via AI?

Absolutely. Use the upsert_vector_data tool by providing the chatflow_id and the JSON payload containing your document data.
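
If you prefer to trigger the upsert outside of an agent run, here is a minimal sketch using the official mcp Python SDK. The Edge URL, the streamable-HTTP transport, and the payload shape are assumptions; match the payload to the upsert schema of your chatflow's vector store node:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

EDGE_URL = "https://edge.vinkius.example/mcp/flowiseai"  # placeholder

async def main() -> None:
    async with streamablehttp_client(EDGE_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "upsert_vector_data",
                {
                    "chatflow_id": "YOUR_CHATFLOW_ID",  # placeholder
                    # Hypothetical payload -- check your vector store node's schema.
                    "payload": {"docs": [{"pageContent": "Hello world", "metadata": {}}]},
                },
            )
            print(result.content)

asyncio.run(main())
```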

04

How does CrewAI discover and connect to MCP tools?

CrewAI connects to MCP servers lazily: when the crew starts, each agent resolves its MCP URLs and fetches the tool catalog via the standard tools/list method. This means tools are always fresh and reflect the server's current capabilities, and no tool schemas need to be hardcoded.
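
To verify discovery by hand, you can issue the same tools/list call yourself. A minimal sketch with the official mcp Python SDK (the Edge URL and transport are placeholders):

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

EDGE_URL = "https://edge.vinkius.example/mcp/flowiseai"  # placeholder

async def main() -> None:
    async with streamablehttp_client(EDGE_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:  # expect the 12 FlowiseAI tools
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```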

05

Can different agents in the same crew use different MCP servers?

Yes. Each agent has its own mcps list, so you can assign specific servers to specific roles. For example, a reconnaissance agent might use a domain intelligence server while an analysis agent uses a vulnerability database server.
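
A minimal sketch of role-scoped assignment, following the per-agent mcps list described above (both URLs are placeholders, and the second server is purely illustrative):

```python
from crewai import Agent

flowise_ops = Agent(
    role="LLM Operations Coordinator",
    goal="Manage Flowise chatflows",
    backstory="You operate the FlowiseAI instance.",
    mcps=["https://edge.vinkius.example/mcp/flowiseai"],  # placeholder
)
researcher = Agent(
    role="Research Analyst",
    goal="Gather supporting data from external sources",
    backstory="You enrich reports with third-party data.",
    mcps=["https://edge.vinkius.example/mcp/another-server"],  # placeholder
)
# Each agent only sees the tool catalog of the servers in its own mcps list.
```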

06

What happens when an MCP tool call fails during a crew run?

CrewAI wraps tool failures as context for the agent. The LLM receives the error message and can decide to retry with different parameters, fall back to a different tool, or mark the task as partially complete. This resilience is critical for production workflows.

07

Can CrewAI agents call multiple MCP tools in parallel?

CrewAI agents execute tool calls sequentially within a single reasoning step. However, you can run multiple tasks concurrently by marking them async_execution=True, with each agent calling different MCP tools at the same time. This is ideal for workflows where separate data sources need to be queried simultaneously.
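
A minimal sketch of that concurrency pattern, again assuming the mcps parameter and a placeholder Edge URL. The two async tasks run concurrently; the final task waits on both via context:

```python
from crewai import Agent, Crew, Process, Task

EDGE_URL = "https://edge.vinkius.example/mcp/flowiseai"  # placeholder

flows_agent = Agent(
    role="Flow Auditor",
    goal="Inventory chatflows",
    backstory="You catalog Flowise orchestration flows.",
    mcps=[EDGE_URL],
)
feedback_agent = Agent(
    role="Feedback Analyst",
    goal="Summarize user feedback",
    backstory="You review chat feedback for quality signals.",
    mcps=[EDGE_URL],
)

list_flows = Task(
    description="Use the available tools to list all chatflows.",
    expected_output="A list of chatflow names and IDs.",
    agent=flows_agent,
    async_execution=True,  # runs concurrently with the task below
)
review_feedback = Task(
    description="Use the available tools to list recent chat feedback.",
    expected_output="A summary of user feedback.",
    agent=feedback_agent,
    async_execution=True,
)
compile_report = Task(
    description="Combine the inventory and feedback summary into one report.",
    expected_output="A short operations report.",
    agent=flows_agent,
    context=[list_flows, review_feedback],  # waits for both async tasks
)

crew = Crew(
    agents=[flows_agent, feedback_agent],
    tasks=[list_flows, review_feedback, compile_report],
    process=Process.sequential,
)
print(crew.kickoff())
```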

08

Can I run CrewAI crews on a schedule (cron)?

Yes. CrewAI crews are standard Python scripts, so you can invoke them via cron, Airflow, Celery, or any task scheduler. The crew.kickoff() method runs synchronously by default, making it straightforward to integrate into existing pipelines.
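
A minimal sketch of a schedulable entry point; build_crew is a hypothetical factory that assembles a crew like the one shown earlier:

```python
# run_crew.py -- invoke via cron, e.g.:
#   0 6 * * * /usr/bin/python3 /opt/crews/run_crew.py >> /var/log/crew.log 2>&1
from my_crews import build_crew  # hypothetical module and factory

if __name__ == "__main__":
    crew = build_crew()
    result = crew.kickoff()  # synchronous: the script exits when the run ends
    print(result)
```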

09

MCP tools not discovered

Ensure the Edge URL is correct. CrewAI connects lazily when the crew starts, so check the console output for connection errors.

10

Agent not using tools

Make the task description specific. Instead of "do something", say "Use the available tools to list all chatflows".

11

Timeout errors

CrewAI has a 10s connection timeout by default. Ensure your network can reach the Edge URL.

12

Rate limiting or 429 errors

Vinkius enforces per-token rate limits. Check your subscription tier and request quota in the dashboard. Upgrade if you need higher throughput.