Replicate MCP Server
Equip your AI to dynamically search, run, and monitor thousands of open-source machine learning models hosted on Replicate via simple text commands.
Vinkius supports streamable HTTP and SSE.

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.
What is the Replicate API MCP Server?
The Replicate API MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to the Replicate API via 12 tools. Equip your AI to dynamically search, run, and monitor thousands of open-source machine learning models hosted on Replicate via simple text commands. Powered by the Vinkius platform: no API keys, no infrastructure, connect in under 2 minutes.
Built-in capabilities (12)
Tools for your AI Agents to operate Replicate API
Ask your AI agent "List my recent predictions" and get the answer without opening a single dashboard. With 12 tools connected to real Replicate API data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.
Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by the Vinkius platform: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
Why teams choose Vinkius
One subscription gives you access to thousands of MCP servers, and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.
Build your own MCP Server with our secure development framework →
Vinkius works with every AI agent you already use
…and any MCP-compatible client
Replicate MCP Server capabilities
12 tools:
- Cancels a prediction that is currently running.
- Starts a new model prediction on Replicate (e.g., image generation, LLMs). Provide the model version ID and inputs as a JSON object.
- Retrieves the authenticated Replicate account details.
- Retrieves a specific collection of models by its slug. Provide the collection slug (e.g., "text-to-image").
- Retrieves details for a specific model.
- Retrieves the status and output of a prediction.
- Lists curated collections of models (e.g., "Image-to-Text", "Audio Generation").
- Lists your active model deployments on Replicate.
- Lists available GPU hardware options for running models.
- Lists public models available on Replicate.
- Lists recent predictions made by the user.
- Searches for public models on Replicate.
What the Replicate MCP Server unlocks
Connect your conversational assistant directly to the Replicate ecosystem. This integration grants your AI the ability to interact programmatically with a vast library of open-source machine learning models without running them on your local hardware. From orchestrating complex image generations to spinning up specialized language models, you can command AI workflows directly from your chat.
What you can do
- Execute Predictions — Command the assistant to execute specific model versions on your behalf (create_prediction) by supplying a payload of variables. Monitor long-running processes by retrieving outputs and execution status (get_prediction), or cancel them at will (cancel_prediction); see the sketch after this list.
- Discover Models — Instruct the AI to scan the Replicate platform for models matching a specific use case using search_models. You can also explore trending and categorized models by leveraging the list_collections action.
- Analyze Model Metadata — Whenever you discover a new model, query its precise owner and name (get_model) to extract the exact schema and parameter requirements necessary for a successful execution. You can also view a log of your own executed tasks (list_predictions).
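Under the hood, these tools map onto Replicate's public REST endpoints. As a rough sketch of what create_prediction, get_prediction, and cancel_prediction correspond to, here is a minimal example against Replicate's HTTP API directly; the version ID is a placeholder, and REPLICATE_API_TOKEN is assumed to be set in your environment:

```python
import os
import requests

API = "https://api.replicate.com/v1"
# Assumes REPLICATE_API_TOKEN is set in your environment.
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# Start a prediction (what create_prediction does): supply a model
# version ID and an input payload. The version ID below is a placeholder.
resp = requests.post(
    f"{API}/predictions",
    headers=HEADERS,
    json={
        "version": "MODEL_VERSION_ID",  # placeholder for a real version hash
        "input": {"prompt": "a cat walking on Mars"},
    },
)
resp.raise_for_status()
prediction = resp.json()

# Check status and output later (what get_prediction does).
status = requests.get(f"{API}/predictions/{prediction['id']}", headers=HEADERS).json()
print(status["status"], status.get("output"))

# Cancel if needed (what cancel_prediction does):
# requests.post(f"{API}/predictions/{prediction['id']}/cancel", headers=HEADERS)
```

When you work through the MCP server instead, your agent issues the equivalent calls for you; the sketch is only meant to show the shape of the underlying requests.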
How it works
1. Install the Replicate extension module in your MCP client.
2. Provide your personal Replicate API Token, copied from your Replicate Account Settings panel, and store it securely in the configuration variables below (a quick token check is sketched after these steps).
3. Prompt your assistant organically: "Search for a high-quality video generation model, evaluate its parameter schema, and start generating a clip using the prompt 'a cat walking on Mars'."
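If you want to confirm the token is valid before wiring it into the configuration, a minimal sanity check against Replicate's account endpoint (the same data the account-details tool surfaces) might look like this sketch; it assumes the token is exported as REPLICATE_API_TOKEN:

```python
import os
import requests

# Assumes REPLICATE_API_TOKEN holds the token copied from
# your Replicate Account Settings panel.
resp = requests.get(
    "https://api.replicate.com/v1/account",
    headers={"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"},
)
resp.raise_for_status()
print(resp.json())  # account type, username, etc.
```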
Who is this for?
- AI Developers & Researchers — Explore and test novel open-source algorithms by generating quick predictions without modifying Python notebook code.
- Content Creators — Execute specialized image, audio, or video generation tasks by directly delegating workflows to your conversational assistant.
- Builders — Mix and match the output of various specialized models intelligently using natural language instructions.
Frequently asked questions about the Replicate MCP Server
Can the agent pass a JSON payload directly into a Replicate model?
Yes. Use the create_prediction action and attach a payload containing the model's required inputs (e.g., a specific prompt, num_inference_steps). Since models change their inputs frequently, you should always ask your assistant to fetch the schema details first via get_model to verify the keys.
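For illustration, a payload for a typical image-generation model might look like the sketch below; every key name here is a model-specific example, which is exactly why fetching the schema via get_model first is the safe move:

```python
# Hypothetical input payload for an image-generation model.
# Key names vary per model -- verify them against the schema
# returned by get_model before calling create_prediction.
payload = {
    "prompt": "a cat walking on Mars",
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
    "width": 1024,
    "height": 1024,
}
```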
Does the prediction command return results instantly?
No, Replicate's API operates asynchronously. The initial command gives your assistant a prediction ID. You must then ask your AI to query the get_prediction tool periodically with that ID until it reports a completed status along with the output URLs or strings.
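The polling pattern itself is simple. Here is a minimal sketch of what the assistant is doing with get_prediction, expressed against Replicate's REST API (succeeded, failed, and canceled are the terminal statuses):

```python
import os
import time
import requests

API = "https://api.replicate.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

def wait_for_prediction(prediction_id: str, interval: float = 2.0) -> dict:
    """Poll a prediction until it reaches a terminal status."""
    while True:
        pred = requests.get(
            f"{API}/predictions/{prediction_id}", headers=HEADERS
        ).json()
        if pred["status"] in ("succeeded", "failed", "canceled"):
            return pred
        time.sleep(interval)

# result = wait_for_prediction("PREDICTION_ID")  # placeholder ID
# print(result.get("output"))
```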
Can the AI browse trending or curated model collections?
Yes. Use the list_collections tool to browse curated groups of models organized by category — such as image generation, text-to-speech, or video. Each collection includes a slug and description so you can quickly identify the right set of models for your use case.
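For reference, here is a sketch of the REST calls the list_collections tool (and the collection-by-slug lookup) corresponds to; the response shape shown is an assumption based on Replicate's paginated list endpoints:

```python
import os
import requests

API = "https://api.replicate.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# List curated collections (what list_collections does).
collections = requests.get(f"{API}/collections", headers=HEADERS).json()
for c in collections["results"]:
    print(c["slug"], "-", c["name"])

# Fetch one collection by slug, e.g. "text-to-image".
detail = requests.get(f"{API}/collections/text-to-image", headers=HEADERS).json()
```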
Connect Replicate with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
- Anthropic's native desktop app for Claude with built-in MCP support.
- AI-first code editor with integrated LLM-powered coding assistance.
- GitHub Copilot in VS Code with Agent mode and MCP support.
- Purpose-built IDE for agentic AI coding workflows.
- Autonomous AI coding agent that runs inside VS Code.
- Anthropic's agentic CLI for terminal-first development.
- Python SDK for building production-grade OpenAI agent workflows.
- Google's framework for building production AI agents.
- Type-safe agent development for Python with first-class MCP support.
- TypeScript toolkit for building AI-powered web applications.
- TypeScript-native agent framework for modern web stacks.
- Python framework for orchestrating collaborative AI agent crews.
- Leading Python framework for composable LLM applications.
- Data-aware AI agent framework for structured and unstructured sources.
- Microsoft's framework for multi-agent collaborative conversations.
Give your AI agents the power of Replicate API MCP Server
Production-grade Replicate MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.