2,500+ MCP servers ready to use
Vinkius
MCP VERIFIED · PRODUCTION READY · VINKIUS GUARANTEED
Replicate

Replicate MCP Server

Built by Vinkius · GDPR Tools · Free for Subscribers

Equip your AI to dynamically search, run, and monitor thousands of open-source machine learning models hosted on Replicate via simple text commands.

Vinkius supports streamable HTTP and SSE.

AI Agent · Vinkius
High Security · Kill Switch · Plug and Play
Replicate
Fully Managed · Vinkius Servers
60% · Token savings
High Security · Enterprise-grade
IAM · Access control
EU AI Act · Compliant
DLP · Data protection
V8 Isolate · Sandboxed
Ed25519 · Audit chain
<40ms · Kill switch
Stream every event to Splunk, Datadog, or your own webhook in real-time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS - a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure

What is the Replicate API MCP Server?

The Replicate API MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to the Replicate API via 12 tools. Equip your AI to dynamically search, run, and monitor thousands of open-source machine learning models hosted on Replicate via simple text commands. Powered by Vinkius: no API keys, no infrastructure, connect in under 2 minutes.

Built-in capabilities (12)

cancel_prediction · create_prediction · get_account · get_collection · get_model · get_prediction · list_collections · list_deployments · list_hardware · list_models · list_predictions · search_models

Tools for your AI Agents to operate Replicate API

Ask your AI agent "List my recent predictions" and get the answer without opening a single dashboard. With 12 tools connected to real Replicate API data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.

Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.

Why teams choose Vinkius

One subscription gives you access to thousands of MCP servers - and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.

Build your own MCP Server with our secure development framework →

Vinkius works with every AI agent you already use

…and any MCP-compatible client

Cursor · Claude · OpenAI · VS Code · Copilot · Google · Lovable · Mistral · AWS

Replicate MCP Server capabilities

12 tools
cancel_prediction

Cancels a prediction that is currently running

create_prediction

Starts a new model prediction on Replicate (e.g., image generation, LLMs). Provide the model version ID and inputs as a JSON object.

get_account

Retrieves the authenticated Replicate account details

get_collection

Retrieves a specific collection of models by its slug. Provide the collection slug (e.g., "text-to-image").

get_model

Retrieves details for a specific model

get_prediction

Retrieves the status and output of a prediction by its ID

list_collections

Lists curated collections of models (e.g., "Image-to-Text", "Audio Generation")

list_deployments

Lists your active model deployments on Replicate

list_hardware

Lists available GPU hardware options for running models

list_models

Lists public models available on Replicate

list_predictions

Lists recent predictions made by the user

search_models

Searches for public models on Replicate

What the Replicate MCP Server unlocks

Connect your conversational assistant directly to the Replicate ecosystem. This integration grants your AI the ability to interact programmatically with a vast library of open-source machine learning models without running them on your local hardware. From orchestrating complex image generations to spinning up specialized language models, you can command AI workflows directly from your chat.

What you can do

  • Execute Predictions — Command the assistant to execute specific model versions on your behalf (create_prediction) by supplying a payload of variables. Monitor long-running processes by retrieving outputs and execution status reliably (get_prediction) or cancel them at will (cancel_prediction).
  • Discover Models — Instruct the AI to intelligently scan the Replicate platform for models matching a specific use case using search_models. You can also explore trending and categorized models by leveraging the list_collections action.
  • Analyze Model Metadata — Whenever you discover a new model, query its precise owner and name (get_model) to extract the exact schema and parameter requirements necessary for a successful execution. You can also view a log of your own executed tasks (list_predictions).
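The discover → inspect → run sequence above can be sketched in a few lines. This is a minimal illustration, not Vinkius code: the three callables stand in for the MCP tools of the same names, and the return shapes shown in the comments are assumptions about what an agent would receive.

```python
def find_and_run(search_models, get_model, create_prediction, query, inputs):
    """Search for a model, look up its latest version, and start a prediction.

    The three callables are stand-ins for the MCP tools of the same names;
    the dict shapes used here are illustrative assumptions.
    """
    results = search_models(query)  # assumed: [{"owner": ..., "name": ...}, ...]
    if not results:
        raise LookupError(f"no models match {query!r}")
    top = results[0]
    model = get_model(top["owner"], top["name"])
    version_id = model["latest_version"]["id"]
    prediction = create_prediction(version_id, inputs)
    return prediction["id"]
```

In practice the agent performs each step as a separate tool call; the function above just makes the ordering and data flow between search_models, get_model, and create_prediction explicit.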

How it works

1. Install the Replicate MCP Server in your MCP client.
2. Provide your personal Replicate API token, found in your Replicate account settings. Store it securely in the configuration variables below.
3. Prompt your assistant organically: "Search for a high-quality video generation model, evaluate its parameter schema, and start generating a clip using the prompt 'a cat walking on Mars'."

Who is this for?

  • AI Developers & Researchers — Explore and test novel open-source algorithms by generating quick predictions without modifying Python notebook code.
  • Content Creators — Execute specialized image, audio, or video generation tasks by directly delegating workflows to your conversational assistant.
  • Builders — Mix and match the output of various specialized models intelligently using natural language instructions.

Frequently asked questions about the Replicate MCP Server

01

Can the agent pass a JSON payload directly into a Replicate model?

Yes. Use the create_prediction tool with an input payload containing whatever fields the model's schema requires (e.g., prompt, num_inference_steps). Since model inputs change frequently, ask your assistant to fetch the schema first via get_model to verify the keys.
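A quick way to see why the schema check matters: before calling create_prediction, the payload can be compared against the model's required input keys. The schema shape below is a toy example modeled on the OpenAPI-style input schema Replicate publishes for model versions; the helper name is hypothetical.

```python
def missing_inputs(input_schema, payload):
    """Return required input keys absent from the payload.

    `input_schema` is an OpenAPI-style object schema for a model
    version's inputs (as surfaced by get_model); only its `required`
    list is consulted here.
    """
    required = input_schema.get("required", [])
    return [key for key in required if key not in payload]

# Toy schema resembling a text-to-image model's inputs:
schema = {
    "type": "object",
    "required": ["prompt"],
    "properties": {
        "prompt": {"type": "string"},
        "num_inference_steps": {"type": "integer", "default": 50},
    },
}
```

An agent that fetches the schema via get_model and runs a check like this before create_prediction avoids failed runs caused by renamed or newly required inputs.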

02

Does the prediction command return results instantly?

No, Replicate's API operates asynchronously. The initial call returns a prediction ID. Ask your assistant to poll the get_prediction tool with that ID until it reports a completed status along with the output URLs or generated text.
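The polling pattern the assistant follows looks roughly like this. A minimal sketch: `get_prediction` here is an injected callable standing in for the MCP tool, assumed to return a dict with at least a "status" key; "succeeded", "failed", and "canceled" are Replicate's terminal prediction statuses.

```python
import time

def wait_for_prediction(get_prediction, prediction_id, poll_s=1.0, timeout_s=600.0):
    """Poll until the prediction reaches a terminal status or the timeout expires.

    `get_prediction` stands in for the MCP tool of the same name and is
    assumed to return a dict with at least a "status" key.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        prediction = get_prediction(prediction_id)
        if prediction["status"] in ("succeeded", "failed", "canceled"):
            return prediction
        time.sleep(poll_s)  # back off between polls
    raise TimeoutError(f"prediction {prediction_id} still running after {timeout_s}s")
```

When you drive this through chat, the same loop happens conversationally: you ask the assistant to check the prediction again until it reports a terminal status.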

03

Can the AI browse trending or curated model collections?

Yes. Use the list_collections tool to browse curated groups of models organized by category — such as image generation, text-to-speech, or video. Each collection includes a slug and description so you can quickly identify the right set of models for your use case.

More in this category

You might also like

Give your AI agents the power of Replicate API MCP Server

Production-grade Replicate MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.