Replicate Alternative MCP Server
Run ML models via Replicate — generate images, text, audio and video from community models, track predictions and explore collections from any AI agent.
Vinkius supports streamable HTTP and SSE.

Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure docs for details.
What is the Replicate MCP Server?
The Replicate MCP Server gives AI agents like Claude, ChatGPT, and Cursor direct access to Replicate through 12 tools: run community ML models to generate images, text, audio and video, track predictions, and explore collections. Powered by Vinkius, with no API keys in your client config and no infrastructure to run; connect in under two minutes.
Built-in capabilities (12)
Tools for your AI Agents to operate Replicate
Ask your AI agent "List all text-to-image collections on Replicate" and get the answer without opening a single dashboard. With 12 tools connected to real Replicate data, your agents reason over live information, cross-reference it with other MCP servers, and deliver insights you would spend hours assembling manually.
Works with Claude, ChatGPT, Cursor, and any MCP-compatible client. Powered by Vinkius: your credentials never touch the AI model, and every request is auditable. Connect in under two minutes.
Why teams choose Vinkius
One subscription gives you access to thousands of MCP servers, and you can deploy your own to the Vinkius Edge. Your AI agents only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure and security, zero maintenance.
Build your own MCP Server with our secure development framework →
Vinkius works with every AI agent you already use
…and any MCP-compatible client
Replicate Alternative MCP Server capabilities
12 tools

- Cancel a running prediction: provide the prediction ID. The prediction status will change to "canceled".
- Run a model prediction on Replicate: requires the model slug in "owner/name" format and an input object matching the model's schema. Optionally specify a version ID and webhook URL. Returns the prediction object with its ID, status (starting, processing, succeeded, failed, canceled) and output. Use get_prediction to check status and retrieve results.
- Get the authenticated Replicate account info: returns account type, username and usage info. Use this to verify your API token is working correctly.
- Get details for a specific model collection: provide the collection slug (e.g. "text-to-image", "large-language-models").
- Get details for a specific Replicate model: provide the model slug in "owner/name" format (e.g. "stability-ai/sdxl" or "meta/meta-llama-3-70b-instruct").
- Get all versions of a Replicate model: each version includes its ID (64-char hash), creation date, input/output schema and cog version. Use this to find the correct version ID when creating predictions for models that require a specific version (see the sketch after this list).
- Get the status and result of a prediction: use the prediction ID returned from create_prediction. Returns the prediction ID, status (starting, processing, succeeded, failed, canceled), input, output URLs, creation time and logs.
- List model collections on Replicate: collections group related models by category (e.g. "text-to-image", "large-language-models", "audio-to-audio", "image-to-video"). Each collection includes its slug, name, description and featured models.
- List available GPU hardware on Replicate: each hardware option includes its SKU name, pricing and specifications. Useful for choosing the right GPU for your prediction workload.
- List available ML models on Replicate: each model includes its name, owner, description, run count, hardware requirements and cover image URL. Use this to discover available models for running predictions.
- List recent predictions on Replicate: each prediction includes its ID, model, status, creation time and output URLs. Useful for tracking prediction history and monitoring model usage.
- Search for models on Replicate by query: returns models with their name, owner, description, run count and hardware. Useful for finding specific types of models (e.g. "text-to-image", "llm", "music-generation").
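These capabilities correspond to Replicate's public REST API. As a rough illustration of the discovery step, here is a minimal Python sketch (assuming Replicate's documented endpoints and Bearer token auth; the stability-ai/sdxl slug is just the example used above) that fetches a model's metadata and resolves the 64-char version ID that create_prediction needs:

```python
import os
import requests

API = "https://api.replicate.com/v1"
# Assumes your Replicate token is exported as REPLICATE_API_TOKEN.
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# get_model: look up a model by its "owner/name" slug.
model = requests.get(f"{API}/models/stability-ai/sdxl", headers=HEADERS).json()
print(model["name"], "runs:", model["run_count"])

# Get all versions: the most recent version's 64-char ID is what
# create_prediction needs for models that require a specific version.
versions = requests.get(f"{API}/models/stability-ai/sdxl/versions", headers=HEADERS).json()
print(versions["results"][0]["id"])
```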
What the Replicate Alternative MCP Server unlocks
Connect your Replicate account to any AI agent and run thousands of open-source ML models through natural conversation.
What you can do
- Model Discovery — Browse, search and inspect thousands of ML models with their descriptions, run counts and hardware requirements
- Predictions — Run models by creating predictions and tracking their status (starting, processing, succeeded, failed)
- Collections — Explore curated collections of models by category (text-to-image, LLMs, audio, video)
- Hardware Options — View available GPU types and pricing for model inference
- Account Info — Check your account details and usage (see the sketch after this list)
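For the last two items, the equivalent raw calls are tiny. A hedged sketch against Replicate's documented /account and /hardware endpoints, with the same token assumption as above:

```python
import os
import requests

API = "https://api.replicate.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# Account Info: also a quick way to verify the token works.
account = requests.get(f"{API}/account", headers=HEADERS).json()
print(account["username"], account["type"])

# Hardware Options: each entry has a human-readable name and a SKU.
for hw in requests.get(f"{API}/hardware", headers=HEADERS).json():
    print(hw["sku"], "->", hw["name"])
```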
How it works
1. Subscribe to this server
2. Enter your Replicate API Token
3. Start running ML models from Claude, Cursor, or any MCP-compatible client
No more navigating the Replicate website to find models or check prediction status. Your AI acts as a dedicated ML operations assistant.
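Beyond chat clients, you can also script against the server. A minimal sketch using the official MCP Python SDK over streamable HTTP; the server URL is a hypothetical placeholder for the endpoint Vinkius issues when you subscribe:

```python
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://mcp.vinkius.example/replicate"  # hypothetical endpoint

async def main() -> None:
    # Open a streamable HTTP transport, then an MCP session on top of it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # the 12 Replicate tools

asyncio.run(main())
```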
Who is this for?
- Developers — quickly run image generation, text models and other ML predictions via conversation
- ML Engineers — discover models, compare hardware requirements and track prediction history
- Researchers — explore model collections, inspect version schemas and test models before deployment
Frequently asked questions about the Replicate Alternative MCP Server
How do I get a Replicate API token?
Sign in to Replicate, open the API Tokens page, and click Create API Token. Copy the token immediately; it starts with r8_ and won't be shown again.
How do I run a model prediction?
Use create_prediction with the model slug (e.g. "stability-ai/sdxl") and an input JSON object matching the model's schema. The prediction starts as 'starting', then 'processing', and finally 'succeeded' with output URLs. Use get_prediction to check status and retrieve results.
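In raw API terms, that flow looks roughly like the following sketch (assuming Replicate's documented /predictions endpoint; "<version-id>" is a placeholder for the 64-char version hash of your chosen model):

```python
import os
import time
import requests

API = "https://api.replicate.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# create_prediction: submit the version hash and a schema-matching input.
resp = requests.post(
    f"{API}/predictions",
    headers=HEADERS,
    json={"version": "<version-id>", "input": {"prompt": "a watercolor fox"}},
)
resp.raise_for_status()
prediction = resp.json()

# get_prediction: poll until the status leaves "starting"/"processing".
while prediction["status"] in ("starting", "processing"):
    time.sleep(2)
    prediction = requests.get(
        f"{API}/predictions/{prediction['id']}", headers=HEADERS
    ).json()

print(prediction["status"], prediction.get("output"))
```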
How do I find models for specific tasks?
Use search_models with a query like 'text-to-image', 'llm', 'music-generation' or 'video-generation'. You can also use list_collections to browse curated collections by category, and get_collection to see featured models in each collection.
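The collection lookups are plain GETs against Replicate's documented /collections endpoints; a quick sketch:

```python
import os
import requests

API = "https://api.replicate.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# list_collections: browse the curated categories.
collections = requests.get(f"{API}/collections", headers=HEADERS).json()
print([c["slug"] for c in collections["results"]])

# get_collection: featured models for one category.
ti = requests.get(f"{API}/collections/text-to-image", headers=HEADERS).json()
print(ti["name"], "-", len(ti["models"]), "models")
```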
Can I cancel a running prediction?
Yes! Use cancel_prediction with the prediction ID. This works for predictions that are 'starting' or 'processing'. The status will change to 'canceled' and you won't be charged for the full compute time.
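Under the hood this is a single POST to Replicate's documented cancel endpoint; a sketch with a placeholder prediction ID:

```python
import os
import requests

API = "https://api.replicate.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

# cancel_prediction: works while the status is "starting" or "processing".
resp = requests.post(f"{API}/predictions/<prediction-id>/cancel", headers=HEADERS)
print(resp.json()["status"])  # expected: "canceled"
```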
Connect Replicate Alternative with your favorite client
Step-by-step setup guides for every MCP-compatible client and framework:
- Anthropic's native desktop app for Claude with built-in MCP support.
- AI-first code editor with integrated LLM-powered coding assistance.
- GitHub Copilot in VS Code with Agent mode and MCP support.
- Purpose-built IDE for agentic AI coding workflows.
- Autonomous AI coding agent that runs inside VS Code.
- Anthropic's agentic CLI for terminal-first development.
- Python SDK for building production-grade OpenAI agent workflows.
- Google's framework for building production AI agents.
- Type-safe agent development for Python with first-class MCP support.
- TypeScript toolkit for building AI-powered web applications.
- TypeScript-native agent framework for modern web stacks.
- Python framework for orchestrating collaborative AI agent crews.
- Leading Python framework for composable LLM applications.
- Data-aware AI agent framework for structured and unstructured sources.
- Microsoft's framework for multi-agent collaborative conversations.
Give your AI agents the power of Replicate MCP Server
Production-grade Replicate Alternative MCP Server. Verified, monitored, and maintained by Vinkius. Ready for your AI agents — connect and start using immediately.