The Model Context Protocol changed everything. For the first time, AI agents can read your databases, call your APIs, execute arbitrary code, modify files on disk, and manage cloud infrastructure — all through a single, standardized interface. Anthropic open-sourced MCP in late 2024. Within months, it was everywhere: Microsoft, Google, Amazon, hundreds of developer tools. The adoption curve was vertical.
And that's exactly the problem.
MCP servers are rapidly becoming the new API layer — the gatekeepers between AI agents and everything that matters in your organization. Your data. Your infrastructure. Your business logic. If you're building one, deploying one, or relying on one, you need to understand how MCP servers actually work — and how wide open most of them are.
Because the answer is uncomfortable.
The numbers tell a troubling story
In 2025, Astrix Security analyzed over 5,200 open-source MCP server implementations. The findings were sobering: 53% rely on long-lived static secrets — API keys, personal access tokens, credentials that never rotate. Only 8.5% use modern authentication like OAuth.
HiveTrail's research painted an even bleaker picture. 43% of the open-source MCP servers they tested had command injection flaws. A third permitted unrestricted URL fetches. Nearly a quarter leaked files outside their intended directories through path traversal bugs.
Equixly independently confirmed these numbers in their March 2025 audit, finding 30% of popular MCP implementations vulnerable to SSRF attacks. And Palo Alto Networks flagged MCP security as one of the most pressing concerns in their 2025 threat landscape report, calling out insufficient sandboxing and excessive permissions as systemic issues across the ecosystem.
Nearly 2,000 MCP servers sit on the public internet right now with no authentication whatsoever. Not misconfigured. Unprotected by design.
The MCP spec initially offered almost no guidance on authentication, and the OAuth support added later remains optional. The result is exactly what you'd expect: a fragmented, dangerously inconsistent patchwork of security practices — or the absence of them entirely.
This isn't theoretical. In 2025 alone, critical remote code execution vulnerabilities were disclosed in production MCP servers. CVE-2025-6514 in mcp-remote. CVE-2025-53967 in the Framelink Figma MCP Server. CVE-2025-49596 in Anthropic's own MCP Inspector. CVE-2025-6515 — a prompt hijacking flaw in the oatpp-mcp framework that let attackers take over sessions. Cyata and BlueRock Research found severe RCE vulnerabilities in Anthropic's Git and Filesystem MCP servers (CVE-2025-68145, CVE-2025-68143, CVE-2025-68144), plus an SSRF bug in Microsoft's MarkItDown MCP server.
These aren't obscure projects. These are the biggest names in the space.
Five attack vectors every team should understand
The OWASP MCP Top 10 framework lays out the threat categories. If you're deploying MCP servers — or even thinking about it — these are the risks sitting on your doorstep right now.
1. Prompt injection
OWASP's number-one risk for LLM applications (LLM01:2025), and it's especially dangerous in the MCP context. Crafted inputs manipulate AI agents into executing unintended actions through their connected tools — exfiltrating sensitive data, escalating privileges, making unauthorized changes. The indirect variant is worse: malicious instructions hide inside external content the AI ingests. The human operator never sees them. The model executes them faithfully.
2. Tool poisoning
This one's unique to MCP and wildly underestimated. Attackers tamper with a tool's metadata — its description, parameter schema, return type documentation — to embed hidden instructions the model follows. The unsettling part: the poisoned tool doesn't even need to be invoked. Its mere presence in the model's context window can alter how the agent interacts with every other tool in the session.
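One partial mitigation is to scan tool metadata before it ever reaches the model's context window. The sketch below is a hypothetical heuristic — the pattern list is illustrative, not exhaustive, and a determined attacker can phrase around any fixed list:

```typescript
// Hypothetical heuristic scanner for poisoned tool metadata.
// Flags descriptions containing instruction-like phrases aimed at
// the model rather than the user. Illustrative only — not a complete defense.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /ignore (all |any )?(previous|prior) instructions/i,
  /do not (tell|inform|mention to) the user/i,
  /before (calling|using) (any|this) tool/i,
  /<(system|important|secret)>/i,
];

interface ToolMetadata {
  name: string;
  description: string;
}

function flagPoisonedMetadata(tool: ToolMetadata): string[] {
  return SUSPICIOUS_PATTERNS
    .filter((pattern) => pattern.test(tool.description))
    .map((pattern) => `${tool.name}: matched ${pattern}`);
}
```

A registry or client could run a check like this at install time and again whenever a server's advertised metadata changes — the silent-mutation case is exactly how rug-pull poisoning works.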
3. Supply chain attacks
MCP servers follow an npm-style distribution model. Developers install packages from registries, configure them through JSON files, run them with broad system access. Sound familiar? It should. One compromised package cascades through thousands of deployments. But unlike traditional npm packages, MCP servers often run with elevated privileges — direct access to databases, file systems, network infrastructure. The blast radius is enormous.
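To make the risk concrete: in the config shape used by common MCP clients, a single JSON entry is enough to fetch and execute an arbitrary npm package with the user's full privileges (server and path names below are illustrative):

```
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me"]
    }
  }
}
```

The `-y` flag means npx installs and runs the package without prompting. If that package — or any of its transitive dependencies — is compromised, the attacker's code runs on every machine with this config, with direct access to whatever the server was granted.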
4. Shadow MCP
Shadow IT's younger, more dangerous sibling. An engineer spins up an unauthorized MCP server to prototype something quickly. Maybe integrates it with a personal AI client. That server runs outside every security perimeter you've built — no governance, no monitoring, no oversight. It's hitting production databases with dev credentials and making API calls with personal tokens.
How many shadow MCP servers are running in your org right now? Your security team almost certainly doesn't know.
5. Authentication and authorization gaps
MCP's original HTTP+SSE transport put session identifiers directly in URLs. Think about that for a second. Session IDs in URLs — which means they show up in server logs, browser histories, and HTTP referrer headers. Most servers implement no SSO, no RBAC, no token rotation. One leaked URL gives someone complete, persistent access to an entire server, with no way to revoke it short of redeploying.
We've seen this exact pattern before
REST APIs launching without rate limiting in 2010. MongoDB instances bound to 0.0.0.0 with default credentials, wide open to the internet for years. Docker containers running as root with host filesystem mounts — for the better part of a decade — before the industry finally course-corrected.
Every time, the arc is the same. A new technology ships with incredible capability and zero guard rails. Adoption outpaces security awareness by an order of magnitude. The industry spends years — sometimes a full decade — retroactively patching what should have been secure from the start.
Expensive. Painful. Entirely preventable.
MCP is at that exact inflection point right now. The ecosystem is young enough to get this right. But the window won't stay open forever.
If an MCP server can execute code, access data, and make network calls — then it must be sandboxed, authenticated, and monitored from the moment it first runs. Not after the first breach. Not after the first compliance audit. From birth.
What security built into the runtime actually looks like
We built Vinkius to answer a simple question: what if the secure path was also the easy path? What if you didn't have to choose between shipping fast and shipping safe?
Every MCP server deployed on Vinkius runs inside its own V8 isolate — the same sandboxing technology behind Cloudflare Workers and Chrome's tab isolation. Not as an option. Not as a premium tier. As the architecture. The isolate enforces 34 engineering rules governing everything about execution: memory boundaries, timer limits, network access, cryptographic operations, resource cleanup.
V8 isolate sandboxing
Each MCP server gets its own isolated memory space — 32 MB heap by default, configurable upward. No filesystem access. No cross-server memory contamination. The isolate is a sealed environment: globalThis.console routes to a black-hole proxy, process.env returns an empty object, setInterval is intentionally disabled. Every fetch call goes through the host with a 10 MB response cap enforced via streaming byte counter.
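The "sealed globals" pattern described above can be sketched in a few lines. This is an illustration of the technique, not Vinkius's actual implementation — a black-hole proxy absorbs any property access or call without side effects, so sandboxed code that touches console or process.env gets inert values instead of the host's:

```typescript
// Black-hole proxy: any property access or call resolves back to itself,
// so chained calls like console.log("...").toString() are all no-ops.
const blackHole: any = new Proxy(function () {}, {
  get: () => blackHole,
  apply: () => blackHole,
});

// Globals handed to sandboxed code in place of the host's real ones.
const sandboxGlobals = {
  console: blackHole,                   // logs go nowhere
  process: { env: Object.freeze({}) },  // no host environment variables leak in
  setInterval: undefined,               // long-lived timers disabled
};
```

The appeal of the pattern is that sandboxed code doesn't crash when it calls console.log — it just accomplishes nothing, which keeps well-behaved servers working while denying exfiltration channels.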
If a server gets compromised? The blast radius is zero. The attacker can't reach adjacent servers, the host filesystem, or any network destination the SSRF guard doesn't explicitly allow.
SSRF proxy protection
Every outbound HTTP request from an MCP server passes through an IP-pinned SSRF guard. DNS is resolved to an IPv4 address and validated against a comprehensive blocklist before anything leaves the platform: RFC 1918 private ranges, loopback addresses, link-local ranges — including the infamous AWS metadata endpoint at 169.254.169.254 — IPv6 loopback, and unique-local addresses. The resolved IP is pinned via a pooled undici Agent that maintains TLS/SNI integrity while eliminating DNS rebinding attacks.
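The pre-flight check such a guard performs on a resolved address looks roughly like this — a minimal IPv4-only sketch covering the RFC 1918, loopback, and link-local ranges named above. A production guard also handles IPv6 and, crucially, pins the vetted IP for the actual connection so a second DNS lookup can't swap in a different answer:

```typescript
// Returns true if the resolved IPv4 address must never be dialed.
// Fails closed on malformed input.
function isBlockedIpv4(ip: string): boolean {
  const octets = ip.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => !Number.isInteger(o) || o < 0 || o > 255)) {
    return true; // not a valid dotted quad: reject
  }
  const [a, b] = octets;
  return (
    a === 10 ||                          // 10.0.0.0/8      (RFC 1918)
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12   (RFC 1918)
    (a === 192 && b === 168) ||          // 192.168.0.0/16  (RFC 1918)
    a === 127 ||                         // 127.0.0.0/8     loopback
    (a === 169 && b === 254) ||          // 169.254.0.0/16  link-local, incl. AWS metadata
    a === 0                              // 0.0.0.0/8       "this network"
  );
}
```

The check runs after DNS resolution, not on the hostname — otherwise an attacker-controlled domain can resolve to 169.254.169.254 and sail straight past a hostname blocklist.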
Data loss prevention
Configurable DLP controls inspect all outbound data for sensitive patterns before it leaves the platform — API keys, auth tokens, PII, credit card numbers, plus whatever custom patterns the operator defines. When a match hits, the data is redacted in-flight. This isn't a premium add-on or something you remember to enable. It's on by default, every deployment, every server.
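In-flight redaction of this kind reduces to pattern matching over outbound payloads. The sketch below uses simplified example patterns — not the platform's actual rule set, and real DLP engines combine regexes with validation like Luhn checks to cut false positives:

```typescript
// Illustrative DLP pass: scan outbound text for common secret shapes
// and replace each match with a labeled redaction marker.
const DLP_RULES: { name: string; pattern: RegExp }[] = [
  { name: "aws-access-key", pattern: /\bAKIA[0-9A-Z]{16}\b/g },
  { name: "bearer-token", pattern: /\bBearer\s+[\w.~+-]+/g },
  { name: "credit-card", pattern: /\b(?:\d[ -]?){13,16}\b/g },
];

function redact(payload: string): string {
  return DLP_RULES.reduce(
    (text, rule) => text.replace(rule.pattern, `[REDACTED:${rule.name}]`),
    payload,
  );
}
```

Because the scan happens at the egress boundary rather than inside the server, a compromised or poisoned server can't opt out of it.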
HMAC authentication
Every request to a Vinkius-hosted MCP server is cryptographically signed with HMAC-SHA256. No static API keys to leak. No URLs to guess. The platform's Web Crypto bridge delegates signing to Node.js's native crypto on the host side — key material never enters the V8 sandbox. Verification uses constant-time comparison, so timing side-channel attacks can't determine partial key matches.
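The signing-and-verification flow can be sketched with Node's native crypto. The canonical string below (method, path, body) is illustrative, not Vinkius's actual wire format; the two load-bearing details are that the key never leaves the host side and that comparison is constant-time:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign a request by HMAC-SHA256 over a canonical string.
// The canonical-string layout here is an illustrative assumption.
function sign(secret: string, method: string, path: string, body: string): string {
  return createHmac("sha256", secret)
    .update(`${method}\n${path}\n${body}`)
    .digest("hex");
}

// Verify using timingSafeEqual so comparison time doesn't leak
// how many leading bytes of the signature were correct.
function verify(secret: string, method: string, path: string, body: string, signature: string): boolean {
  const expected = Buffer.from(sign(secret, method, path, body), "hex");
  const given = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so check length first.
  return given.length === expected.length && timingSafeEqual(expected, given);
}
```

A naive string comparison (`sig === expected`) short-circuits at the first differing character, which is exactly the timing side channel constant-time comparison exists to close.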
The path forward
MCP servers will be as ubiquitous as REST APIs within two years. Every enterprise will run them. Every AI agent framework — LangChain, CrewAI, AutoGen, the ones that haven't been built yet — will depend on them. The question isn't whether the ecosystem will scale. It's whether it'll scale securely. That's why we built Vinkius on an open-source MCP framework designed for production from day one.
The OWASP MCP Top 10 is a start. Microsoft's security guidance and Anthropic's evolving spec updates point in the right direction. But documents don't protect infrastructure. Defaults do.
The secure path has to be the default path. Security has to live in the runtime itself — not in a best-practices PDF that nobody reads after the first week.
When you deploy an MCP server on Vinkius, V8 isolation isn't optional. SSRF protection isn't something you configure. DLP isn't a premium tier. HMAC auth isn't a flag you remember to enable. This is how every secure MCP server should work. We just built the platform that makes it happen by default.
This is day one
The ecosystem is still young. The standards are still being written. The infrastructure is still being built.
There's a narrow window — right now — to bake security into the foundation of AI agent infrastructure instead of spending the next decade retrofitting it. If you're building MCP servers, if you're deploying AI agents into production, if you care about the integrity of the infrastructure your models depend on — there has never been a more critical moment to get this right.
