Bring DORA Metrics
to CrewAI
Learn how to connect LinearB to CrewAI and start using 7 AI agent tools in minutes. Fully managed, enterprise secure, and ready to use without writing a single line of code.
What is the LinearB MCP Server?
Connect your LinearB account to any AI agent to automate your engineering intelligence and DORA metrics reporting. This MCP server enables your agent to query cycle time, track deployments, and report incidents directly from natural language interfaces.
What you can do
- Metric Ingestion — Query complex engineering metrics including cycle time, coding time, and pickup time across teams
- Deployment Management — Inform LinearB of new software releases by reporting Git refs (SHAs or tags) programmatically
- Incident Tracking — Report and list engineering incidents to maintain accurate Change Failure Rate and MTTR metrics
- Metadata Oversight — List teams and connected repositories to map technical IDs to organizational structures
- DORA Analytics — Retrieve aggregated performance data to identify bottlenecks in your delivery pipeline
How it works
1. Subscribe to this server
2. Enter your LinearB Public API Key
3. Start managing your engineering metrics from Claude, Cursor, or any MCP-compatible client (a minimal CrewAI sketch follows below)
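Here is a minimal sketch of those three steps in code. It assumes the `mcps` parameter described later on this page; the Edge URL is a placeholder, and exact parameter names may vary between CrewAI versions.

```python
from crewai import Agent, Crew, Task

# Placeholder Vinkius Edge URL for the LinearB MCP server; use the URL from your subscription.
LINEARB_MCP_URL = "https://edge.vinkius.example/linearb/mcp"

metrics_agent = Agent(
    role="Engineering Metrics Analyst",
    goal="Answer questions about cycle time and delivery health",
    backstory="You use LinearB data to report on engineering performance.",
    mcps=[LINEARB_MCP_URL],  # assumption: mcps accepts a list of MCP server URLs, per this page
)

weekly_report = Task(
    description="List all teams, then summarize last week's cycle time for each of them.",
    expected_output="A short cycle-time summary per team.",
    agent=metrics_agent,
)

crew = Crew(agents=[metrics_agent], tasks=[weekly_report])
print(crew.kickoff())
```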
Who is this for?
- Engineering Managers — Monitor team cycle times and delivery health via simple natural language commands
- DevOps Engineers — Automate the reporting of deployments and incidents directly from CI/CD pipelines or IDEs
- CTOs — Quickly audit organizational performance and DORA metrics without opening the dashboard
Built-in capabilities (7)
List all connected repositories
List all teams defined in LinearB
List recent deployments
List engineering incidents
Query software engineering metrics (v2). Requires a JSON body with requested_metrics and time_ranges (an illustrative request body follows this list)
Report a new deployment to LinearB. Requires repo_id and ref
Report a new incident. Requires provider_id and started_at
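For the metrics query tool, the request body might look like the sketch below. The requested_metrics and time_ranges keys come from the capability description above, and group_by from the FAQ further down; the specific metric names and date formats shown are placeholders, so check the LinearB API documentation for the exact identifiers your account supports.

```python
# Illustrative body for the "Query software engineering metrics (v2)" tool.
# Metric names and date formats are placeholders; consult the LinearB API docs.
metrics_query = {
    "group_by": "team",  # group results per team
    "requested_metrics": [
        {"name": "cycle_time"},   # placeholder metric name
        {"name": "coding_time"},  # placeholder metric name
    ],
    "time_ranges": [
        {"after": "2024-06-01", "before": "2024-06-30"},
    ],
}
```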
Why CrewAI?
When paired with CrewAI, LinearB becomes a first-class tool in your multi-agent workflows. Each agent in the crew can call LinearB tools autonomously: one agent queries data, another analyzes results, and a third compiles reports, all orchestrated through Vinkius with zero configuration overhead.
- Multi-agent collaboration lets you decompose complex workflows into specialized roles: one agent researches, another analyzes, and a third generates reports, each with access to MCP tools
- CrewAI's native MCP integration requires zero adapter code: pass the Vinkius Edge URL directly in the mcps parameter and agents auto-discover every available tool at runtime
- Built-in task delegation and shared memory mean agents can pass context between steps without manual state management, enabling multi-hop reasoning across tool calls
- Sequential and hierarchical crew patterns map naturally to real-world workflows: query metrics → identify bottlenecks → compile findings into an actionable report (a sketch of this pattern follows the list)
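Here is a sketch of that three-role pattern, again assuming the `mcps` parameter described above and a placeholder Edge URL.

```python
from crewai import Agent, Crew, Process, Task

LINEARB_MCP_URL = "https://edge.vinkius.example/linearb/mcp"  # placeholder Edge URL

researcher = Agent(
    role="Metrics Researcher",
    goal="Pull raw cycle time and deployment data from LinearB",
    backstory="You gather engineering data and pass it on unmodified.",
    mcps=[LINEARB_MCP_URL],  # assumption: mcps accepts a list of MCP server URLs, per this page
)
analyst = Agent(
    role="Delivery Analyst",
    goal="Identify the slowest stage of delivery for each team",
    backstory="You turn raw metrics into findings.",
)
writer = Agent(
    role="Report Writer",
    goal="Compile findings into a short report for engineering leadership",
    backstory="You write crisp, actionable summaries.",
)

tasks = [
    Task(description="Query last month's cycle time, coding time, and pickup time per team.",
         expected_output="Raw figures per team.", agent=researcher),
    Task(description="Analyze the figures and identify the biggest bottleneck per team.",
         expected_output="A list of bottlenecks with supporting numbers.", agent=analyst),
    Task(description="Compile the findings into a one-page report.",
         expected_output="A markdown report.", agent=writer),
]

crew = Crew(agents=[researcher, analyst, writer], tasks=tasks, process=Process.sequential)
print(crew.kickoff())
```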
LinearB in CrewAI
LinearB and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect LinearB to CrewAI through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
|  | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for LinearB in CrewAI
The LinearB MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 7 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in CrewAI access only the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures
LinearB for CrewAI
Every tool call from CrewAI to the LinearB MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.
Frequently asked questions
How do I query cycle time for a specific team?
Use the query_software_metrics tool and include the team name or ID in the group_by parameter of your JSON query.
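As a rough illustration of how that request might be phrased to a CrewAI agent (the team name below is a placeholder):

```python
from crewai import Task

# Placeholder team name; the agent can resolve it to a LinearB team ID via the teams list tool.
cycle_time_task = Task(
    description=(
        "List all LinearB teams, find the 'Platform' team, then use the metrics query tool "
        "to fetch its cycle time for the last 30 days, grouped by team."
    ),
    expected_output="The Platform team's cycle time with a one-sentence interpretation.",
)
```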
What is the difference between coding_time and pickup_time?
Coding time is the duration from the first commit to the PR creation. Pickup time is the duration from the PR creation to the first review activity.
Can I report a release from the agent?
Absolutely. Use the record_new_deployment tool with the Git SHA or tag and the repository ID to inform LinearB that a deployment has occurred.
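As a sketch, the same action expressed as a CrewAI task (the repository name and tag are placeholders):

```python
from crewai import Task

# Placeholder repository name and tag; the agent looks up the numeric repo_id itself.
report_release = Task(
    description=(
        "List the connected repositories to find the ID for 'acme/web-app', "
        "then report a deployment of Git tag 'v2.3.1' to LinearB."
    ),
    expected_output="Confirmation that the deployment was recorded, including the repo_id and ref used.",
)
```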
How does CrewAI discover and connect to MCP tools?
CrewAI connects to MCP servers lazily: when the crew starts, each agent resolves its MCP URLs and fetches the tool catalog via the standard tools/list method. This means tools are always fresh and reflect the server's current capabilities, and no tool schemas need to be hardcoded.
Can different agents in the same crew use different MCP servers?
Yes. Each agent has its own mcps list, so you can assign specific servers to specific roles. For example, a reconnaissance agent might use a domain intelligence server while an analysis agent uses a vulnerability database server.
What happens when an MCP tool call fails during a crew run?
CrewAI wraps tool failures as context for the agent. The LLM receives the error message and can decide to retry with different parameters, fall back to a different tool, or mark the task as partially complete. This resilience is critical for production workflows.
Can CrewAI agents call multiple MCP tools in parallel?
CrewAI agents execute tool calls sequentially within a single reasoning step. However, you can run multiple agents in parallel using process=Process.parallel, each calling different MCP tools concurrently. This is ideal for workflows where separate data sources need to be queried simultaneously.
Can I run CrewAI crews on a schedule (cron)?
Yes. CrewAI crews are standard Python scripts, so you can invoke them via cron, Airflow, Celery, or any task scheduler. The crew.kickoff() method runs synchronously by default, making it straightforward to integrate into existing pipelines.
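A compact sketch of that pattern, reusing the single-agent crew from earlier (the script path, schedule, and Edge URL are placeholders):

```python
# weekly_dora_report.py -- schedule with cron, e.g.:
#   0 8 * * MON /usr/bin/python3 /opt/reports/weekly_dora_report.py
from crewai import Agent, Crew, Task

LINEARB_MCP_URL = "https://edge.vinkius.example/linearb/mcp"  # placeholder Edge URL

reporter = Agent(
    role="DORA Reporter",
    goal="Summarize last week's delivery performance",
    backstory="You produce the weekly DORA report for engineering leadership.",
    mcps=[LINEARB_MCP_URL],  # assumption: mcps accepts a list of MCP server URLs, per this page
)

weekly = Task(
    description=(
        "Query last week's DORA metrics and summarize deployment frequency, "
        "cycle time, change failure rate, and MTTR."
    ),
    expected_output="A weekly DORA summary in markdown.",
    agent=reporter,
)

if __name__ == "__main__":
    print(Crew(agents=[reporter], tasks=[weekly]).kickoff())  # kickoff() runs synchronously
```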
MCP tools not discovered
Ensure the Edge URL is correct. CrewAI connects lazily when the crew starts, so check the console output for connection errors.
Agent not using tools
Make the task description specific. Instead of "do something", say "Use the available tools to list all teams in LinearB".
Timeout errors
CrewAI has a 10s connection timeout by default. Ensure your network can reach the Edge URL.
Rate limiting or 429 errors
Vinkius enforces per-token rate limits. Check your subscription tier and request quota in the dashboard. Upgrade if you need higher throughput.
