
Bring Cohere to Mastra AI

Learn how to connect Cohere to Mastra AI and start using 6 AI agent tools in minutes. Fully managed, enterprise secure, and ready to use without writing a single line of code.

MCP Inspector · GDPR · Free for Subscribers
Chat · Detokenize · Embed · List Models · Rerank · Tokenize
Cohere

What is the Cohere MCP Server?

Connect your Cohere account to any AI agent and leverage enterprise-grade AI models through natural conversation.

What you can do

  • Model Discovery — List all available Cohere models with their names, capabilities and context lengths
  • Chat API — Send conversations to Command models (command-r-plus, command-r, command-r7b) and receive responses with citations and tool call support
  • Embeddings — Generate vector embeddings for semantic search with multiple embedding types (float, int8, uint8, binary)
  • Reranking — Rerank documents by relevance to a search query using Cohere's industry-leading reranking models
  • Tokenization — Tokenize and detokenize text for estimating token counts and debugging

How it works

1. Subscribe to this server
2. Enter your Cohere API Key
3. Start using Cohere models from Claude, Cursor, or any MCP-compatible client

No more switching between API tools to interact with Cohere. Your AI agent becomes the orchestration layer, calling Cohere's chat, embedding, and rerank capabilities through natural conversation.
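
Once you have the server URL from your Vinkius subscription, wiring it into Mastra takes a few lines of configuration. The sketch below assumes the MCPClient API from the @mastra/mcp package; the server URL is a placeholder, not a real endpoint.

```ts
import { MCPClient } from "@mastra/mcp";

// The URL below is a placeholder; use the endpoint shown on your Vinkius dashboard
const mcp = new MCPClient({
  servers: {
    cohere: {
      url: new URL("https://your-vinkius-endpoint.example.com/mcp"),
    },
  },
});

// Exposes all six Cohere tools (chat, embed, rerank, tokenize, detokenize, list_models)
// so they can be handed to any Mastra agent
const tools = await mcp.getTools();
```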

Who is this for?

  • Developers — quickly send messages to Command models, generate embeddings and rerank search results without writing HTTP code
  • ML Engineers — discover available models, compare capabilities and generate embeddings with multiple types (float, int8, binary)
  • Search Teams — rerank documents by relevance, tokenize text and generate embeddings for search index building

Built-in capabilities (6)

chat

Requires the model ID (e.g. "command-r-plus", "command-r", "command-r7b") and messages array in JSON format. Each message must have a "role" ("user", "assistant", "system" or "tool") and "content" (text or array of content blocks). Optionally set max_tokens, temperature (0-1), p (nucleus sampling 0-1) and tools array for function calling. Returns the model's response with text, citations and tool calls. Send a chat message to a Cohere model

detokenize

Detokenize token IDs back to text using Cohere. Requires the token IDs array. Returns the reconstructed text. Useful for debugging and verifying tokenization.
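
A call might look like the following; the parameter name and token IDs are illustrative placeholders.

```ts
// Hypothetical arguments for the detokenize tool; token IDs are placeholders
const detokenizeArgs = {
  tokens: [8466, 5169, 2112],
};
```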

embed

Requires the model ID (e.g. "embed-v4", "embed-v3"), texts array and input_type ("search_document", "search_query", "classification", "clustering"). Returns embedding vectors for each input text. Useful for semantic search, similarity comparison and vector database storage. Generate embeddings using Cohere

list_models

List all available Cohere models. Each model returns its name (e.g. "command-r-plus", "command-r", "embed-v4", "rerank-v3.5"), endpoint compatibility, context length and tokenization info. Use this to discover which models are available and their capabilities.

rerank

Requires the model ID (e.g. "rerank-v3.5", "rerank-english-v3.0"), query text and documents array. Optionally set top_n to return only the top N results. Returns ranked documents with relevance scores. Rerank documents by relevance to a query

tokenize

Tokenize text using Cohere. Requires the text to tokenize and optionally the model. Returns the list of token IDs and token strings. Useful for estimating token counts before sending to chat or embed endpoints.
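
For instance, estimating the length of a prompt before sending it to chat could look like this; the text is a placeholder.

```ts
// Illustrative arguments for the tokenize tool
const tokenizeArgs = {
  text: "Tokenize this sentence to estimate its token count.",
  model: "command-r", // optional, per the description above
};
```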

Why Mastra AI?

Mastra's agent abstraction provides a clean separation between LLM logic and Cohere tool infrastructure. Connect all 6 tools through Vinkius, chain tool calls with Mastra's built-in workflow engine using conditional logic, retries, and parallel execution, and deploy to any Node.js host in one command.

  • Mastra's agent abstraction provides a clean separation between LLM logic and tool infrastructure: add Cohere without touching business code

  • Built-in workflow engine chains MCP tool calls with conditional logic, retries, and parallel execution for complex automation

  • TypeScript-native: full type inference for every Cohere tool response with IDE autocomplete and compile-time checks

  • One-command deployment to any Node.js host: Vercel, Railway, Fly.io, or your own infrastructure
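
Putting it together, a minimal agent with the Cohere tools attached might look like the sketch below. It assumes the MCPClient and Agent APIs from @mastra/mcp and @mastra/core, an @ai-sdk model provider, and a placeholder server URL.

```ts
import { MCPClient } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai"; // any Mastra-supported model provider works

// Placeholder URL: use the endpoint from your Vinkius dashboard
const mcp = new MCPClient({
  servers: {
    cohere: { url: new URL("https://your-vinkius-endpoint.example.com/mcp") },
  },
});

const searchAgent = new Agent({
  name: "cohere-search-agent",
  instructions:
    "Use the Cohere embed and rerank tools to answer questions about the indexed documents.",
  model: openai("gpt-4o-mini"),
  tools: await mcp.getTools(),
});
```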

See it in action

Cohere in Mastra AI

AI Agent · Vinkius
High Security · Kill Switch · Plug and Play
Why Vinkius

Cohere and 3,400+ other MCP servers. One platform. One governance layer.

Teams that connect Cohere to Mastra AI through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.

3,400+ MCP servers ready
<40ms cold start
60% token savings
Capability | Raw MCP | Vinkius
Server catalog | Find and host yourself | 3,400+ managed
Infrastructure | Self-hosted | Sandboxed V8 isolates
Credential handling | Plaintext in config | Vault + runtime injection
Data loss prevention | None | Configurable DLP policies
Kill switch | None | Global instant shutdown
Financial circuit breakers | None | Per-server limits + alerts
Audit trail | None | Ed25519 signed logs
SIEM log streaming | None | Splunk, Datadog, Webhook
Honeytokens | None | Canary alerts on leak
Custom domains | Not applicable | DNS challenge verified
GDPR compliance | Manual effort | Automated purge + export
Enterprise Security

Why teams choose Vinkius for Cohere in Mastra AI

The Cohere MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 6 tools execute in hardened sandboxes optimized for native MCP execution.

Your AI agents in Mastra AI only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

Cohere
Fully Managed · Vinkius Servers
60% · Token savings
High Security · Enterprise-grade
IAM · Access control
EU AI Act · Compliant
DLP · Data protection
V8 Isolate · Sandboxed
Ed25519 · Audit chain
<40ms · Kill switch
Stream every event to Splunk, Datadog, or your own webhook in real-time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS - a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure

The Vinkius Advantage

How Vinkius secures Cohere for Mastra AI

Every tool call from Mastra AI to the Cohere MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.

<40ms cold start
Ed25519 signed audit chain
60% token savings
FAQ

Frequently asked questions

01

How do I get a Cohere API Key?

Log in to the Cohere Dashboard, go to API Keys and click Create API Key. Copy the key immediately; it won't be shown again. The free tier includes trial access with rate limits.

02

What models are available?

Use the list_models tool to see all available Cohere models. Key models include command-r-plus (most capable, 128K context), command-r (efficient, 128K context), command-r7b (lightweight, 128K context), embed-v4 (embeddings) and rerank-v3.5 (reranking).

03

Can I send multi-turn conversations?

Yes. Pass a messages array containing 'system', 'user' and 'assistant' roles in conversation order. Each message has a 'role' and 'content' field. Command models support function calling and will return tool_calls when appropriate.
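
For example, a short multi-turn exchange passed to the chat tool could look like this; the contents are illustrative.

```ts
// Illustrative multi-turn messages array for the chat tool
const messages = [
  { role: "system", content: "You are a support assistant." },
  { role: "user", content: "My export job failed last night." },
  { role: "assistant", content: "Which format were you exporting to?" },
  { role: "user", content: "CSV, around two million rows." },
];
```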

04

What is reranking and when should I use it?

Reranking reorders a set of documents by their relevance to a query. Use it after an initial search to improve result quality. The rerank tool takes a query, list of documents and returns them ranked by relevance score. Cohere's rerank models are industry-leading for search applications.

05

How does Mastra AI connect to MCP servers?

Create an MCPClient with the server URL and pass it to your agent. Mastra discovers all tools and makes them available with full TypeScript types.

06

Can Mastra agents use tools from multiple servers?

Yes. Pass multiple MCP clients to the agent constructor. Mastra merges all tool schemas and the agent can call any tool from any server.
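
A sketch of that pattern, under the same assumptions as the examples above: both URLs are placeholders, the second server ("github") is purely illustrative, and merging tools with an object spread is an assumption about how getTools() results combine.

```ts
import { MCPClient } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Two separate MCP clients, each pointing at a placeholder Vinkius endpoint
const cohereMcp = new MCPClient({
  servers: { cohere: { url: new URL("https://your-vinkius-endpoint.example.com/cohere/mcp") } },
});
const githubMcp = new MCPClient({
  servers: { github: { url: new URL("https://your-vinkius-endpoint.example.com/github/mcp") } },
});

// The agent receives the merged tool set from both servers
const agent = new Agent({
  name: "multi-server-agent",
  instructions: "Use whichever tool best fits the request.",
  model: openai("gpt-4o-mini"),
  tools: { ...(await cohereMcp.getTools()), ...(await githubMcp.getTools()) },
});
```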

07

Does Mastra support workflow orchestration?

Yes. Mastra has a built-in workflow engine that lets you chain MCP tool calls with branching logic, error handling, and parallel execution.

08

Why do I get an error that createMCPClient is not exported?

Install the MCP package: npm install @mastra/mcp. The MCP client is exported from @mastra/mcp, so make sure the package is installed and that your import points at it.
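
After installing, the import usually looks like the line below (assuming the MCPClient export used in the examples above).

```ts
// Assumes the MCPClient export from @mastra/mcp
import { MCPClient } from "@mastra/mcp";
```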