Lindy (Autonomous AI Employees) MCP Server
Manage autonomous AI employees via Lindy — trigger task runs, monitor reasoning logs, and audit app integrations.
Vinkius supports streamable HTTP and SSE.

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure →
What is the Lindy MCP Server?
The Lindy MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to Lindy via 10 tools. Manage autonomous AI employees via Lindy: trigger task runs, monitor reasoning logs, and audit app integrations. Powered by Vinkius: no API keys, no infrastructure, connect in under 2 minutes.
Built-in capabilities (10)
Tools for your AI Agents to operate Lindy
Ask your AI agent "List all active Lindies in my workspace" and get the answer without opening a single dashboard. With 10 tools connected to real Lindy data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.
Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
Why teams choose Vinkius
One subscription gives you access to thousands of MCP servers, and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.
Build your own MCP Server with our secure development framework →
Vinkius works with every AI agent you already use
…and any MCP-compatible client
Lindy (Autonomous AI Employees) MCP Server capabilities
10 tools
- Cancel a running execution, dispatching a hard stop to interrupt a trapped context loop
- Get configuration mappings, including standard tools and prompts, for a specific Lindy
- Get the specific state of a run blocked on human input or external APIs
- Dump literal LLM reasoning logs, isolating a specific run loop
- List bounded third-party app connections securely connected to the workspace (e.g. Slack, Gmail)
- List all custom autonomous AI assistants (Lindies) built in the workspace
- List recent runs, validating the full execution graph and isolating active Lindy instances
- List how autonomous AI agents are woken up (cron, webhook, API)
- List all explicit organizational boundaries structuring isolated teams
- Trigger a Lindy to start an asynchronous task run, parsing a JSON payload
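Under the hood, an MCP client invokes any of these tools with a JSON-RPC 2.0 `tools/call` request. As a minimal sketch (the helper function is ours for illustration; the tool name `list_integrations` comes from the FAQ below, and the real argument schema is whatever the server advertises via `tools/list`):

```python
import json

def make_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body as used by the MCP protocol."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# e.g. ask the server which third-party apps are connected
payload = make_tools_call("list_integrations", {})
print(payload)
```

Your AI agent assembles requests like this for you; the sketch just shows what travels over the wire.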
What the Lindy (Autonomous AI Employees) MCP Server unlocks
Connect your Lindy.ai account to any AI agent and take full control of your autonomous AI workforce and automated business processes through natural conversation.
What you can do
- Lindy Orchestration — List all custom autonomous assistants (Lindies) built in your workspace and retrieve their core configurations and prompt instructions directly from your agent
- Task Execution — Trigger specific Lindies to start asynchronous task runs using dynamic JSON payloads to automate complex business workflows
- Reasoning Audit — Dump literal LLM reasoning logs for specific run loops to understand how your autonomous agents make decisions and which steps they take
- Run Monitoring — Track the state of active executions and manage lifecycle controls, including the ability to cancel runs stuck in context loops securely
- Integration Visibility — Enumerate secure connections to third-party apps like Slack, Gmail, and CRM systems to manage your AI's reach across your software stack
- Workspace Management — Navigate organizational boundaries and team structures to understand how Lindies are distributed across your company
How it works
1. Subscribe to this server
2. Enter your Lindy API Token
3. Start managing your autonomous AI workforce from Claude, Cursor, or any MCP-compatible client
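Step 2 usually means adding the server to your client's MCP configuration. A minimal sketch in the Claude Desktop-style `mcpServers` format, assuming a placeholder Vinkius endpoint and passing your Lindy API Token as a bearer credential (the exact URL and header come from your Vinkius dashboard):

```json
{
  "mcpServers": {
    "lindy": {
      "url": "https://mcp.vinkius.example/lindy",
      "headers": {
        "Authorization": "Bearer <YOUR_LINDY_API_TOKEN>"
      }
    }
  }
}
```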
Who is this for?
- Operations Managers — automate repetitive workflows by triggering specialized Lindies and monitoring their execution history through natural conversation
- Developers — debug autonomous agent logic and inspect reasoning logs directly from your workspace without manual API testing
- Founders & Leaders — audit AI integrations and monitor the performance of your automated workforce across different team boundaries efficiently
Frequently asked questions about the Lindy (Autonomous AI Employees) MCP Server
Can I see exactly how my Lindy made a specific decision?
Yes. Use the get_run_logs tool with a specific Run ID. Your agent will retrieve the literal LLM reasoning loops and step-by-step validations, giving you full transparency into the autonomous agent's logic.
How do I trigger an autonomous task through a conversation?
The trigger_lindy tool allows you to start an asynchronous task run. You just need to provide the Lindy ID and a JSON payload defining the inputs for the task. Your agent will fire the job and return a Run ID for status tracking.
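As an illustrative sketch, here is how those inputs might be assembled and sanity-checked before the call. The argument keys (`lindy_id`, `payload`) and the invoice example are assumptions for illustration; the authoritative schema is the one the server publishes for `trigger_lindy`:

```python
import json

def build_trigger_arguments(lindy_id: str, payload: dict) -> dict:
    """Assemble arguments for a `trigger_lindy` tool call: the target Lindy
    plus a JSON payload of task inputs. Key names are hypothetical."""
    if not lindy_id:
        raise ValueError("lindy_id is required")
    # Confirm the payload is JSON-serializable before handing it to the agent.
    json.dumps(payload)
    return {"lindy_id": lindy_id, "payload": payload}

# Hypothetical example: kick off an invoice-processing run
args = build_trigger_arguments(
    "lindy_123",
    {"invoice_url": "https://example.com/inv-42.pdf", "priority": "high"},
)
print(args["lindy_id"])
```

The server replies with a Run ID, which you can then feed into the run-state and reasoning-log tools to track the job.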
Can my agent list which third-party apps my Lindies are connected to?
Absolutely. Use the list_integrations tool to retrieve all active third-party app connections. Your agent will report which channels (like Slack, Gmail, or HubSpot) are securely connected to your workspace.
More in this category
You might also like
Connect Lindy (Autonomous AI Employees) with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
Anthropic's native desktop app for Claude with built-in MCP support.
AI-first code editor with integrated LLM-powered coding assistance.
GitHub Copilot in VS Code with Agent mode and MCP support.
Purpose-built IDE for agentic AI coding workflows.
Autonomous AI coding agent that runs inside VS Code.
Anthropic's agentic CLI for terminal-first development.
Python SDK for building production-grade OpenAI agent workflows.
Google's framework for building production AI agents.
Type-safe agent development for Python with first-class MCP support.
TypeScript toolkit for building AI-powered web applications.
TypeScript-native agent framework for modern web stacks.
Python framework for orchestrating collaborative AI agent crews.
Leading Python framework for composable LLM applications.
Data-aware AI agent framework for structured and unstructured sources.
Microsoft's framework for multi-agent collaborative conversations.
Give your AI agents the power of Lindy MCP Server
Production-grade Lindy (Autonomous AI Employees) MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents: connect and start using immediately.