Bring Generative Video to Vercel AI SDK
Learn how to connect Luma AI (Generative Video & Creative) to Vercel AI SDK and start using 10 AI agent tools in minutes. Fully managed, enterprise-secure, and ready to use without writing a single line of code.
What is the Luma AI (Generative Video & Creative) MCP Server?
Connect your Luma AI account to any AI agent and take full control of state-of-the-art generative video production and professional creative tools through natural conversation.
What you can do
- Cinematic Text-to-Video — Generate high-fidelity AI videos from scenic descriptions using Luma Dream Machine (Ray-2 model) directly from your agent
- Image Animation — Transform static frames into dynamic videos (image-to-video) with industry-leading motion coherence and photorealism
- Professional Camera Control — Direct your AI shots with specific movements including pan, tilt, dolly, and orbit using structured movement parameters
- Video Extension & Looping — Seamlessly continue existing scenes with additional footage or create perfect looping videos for social media and backgrounds
- Keyframe Interpolation — Create smooth, high-quality video transitions between two distinct keyframe images to bridge visual concepts effectively
- Photorealistic Text-to-Image — Generate stunning high-resolution images using the Luma Photon-1 model for rapid visual iteration and design
- Task Orchestration — Manage asynchronous generation jobs, poll for status updates (queued, dreaming, completed), and monitor your API credit balance securely
How it works
1. Subscribe to this server
2. Enter your Luma AI API Key
3. Start generating cinematic media from Claude, Cursor, or any MCP-compatible client
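The setup steps above can be sketched in code. This is a minimal illustration of step 2 (credential wiring): the options shape, function name, and endpoint URL below are placeholders for illustration, not the actual Vinkius API. The key point is that the API key is injected at runtime (for example from an environment variable) rather than hard-coded in config.

```typescript
// Hypothetical client-options shape; not the real Vinkius API.
interface MCPClientOptions {
  url: string;
  headers: Record<string, string>;
}

// Build the options an MCP client would be constructed with.
// The API key is passed in at runtime, never stored in plaintext config.
function buildLumaClientOptions(apiKey: string): MCPClientOptions {
  return {
    url: "https://example.com/mcp/luma", // placeholder server URL
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

const options = buildLumaClientOptions(process.env.LUMA_API_KEY ?? "");
```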
Who is this for?
- Video Editors & Creators — generate high-quality B-roll and cinematic sequences through natural conversation without manual rendering
- Creative Directors — rapid-prototype visual concepts and storyboards by commanding your agent to generate varied styles and camera paths
- AI Artists & Designers — iterate on photorealistic imagery and complex video transitions directly from your workspace
Built-in capabilities (10)
Generate video with specific camera movements using Luma Dream Machine. Supports pan, tilt, dolly, orbit
Delete a Luma Dream Machine generation and its video
Extend an existing Luma video with additional footage. Seamlessly continues the scene
Get current Luma Dream Machine credit balance
Get the status and result of a Luma Dream Machine generation. Returns state (queued/dreaming/completed/failed) and video URL
Animate a still image into video using Luma Dream Machine. Image becomes the first frame
Create smooth video transition between two keyframe images using Luma Dream Machine
List recent Luma Dream Machine generations. Returns generation IDs, prompts, states, and timestamps
Generate photorealistic images using Luma Photon-1 model
Generate cinematic AI video from a text prompt using Luma Dream Machine (Ray-2 model). Industry-leading motion coherence and photorealism. Supports loop (true/false); poll get_generation for results
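As a rough illustration of how an agent parameterizes the text-to-video tool above, here is a typed request payload. The field names (`prompt`, `model`, `loop`) are assumptions made for illustration, not the documented Luma API schema.

```typescript
// Hypothetical shape of a text-to-video request; field names are
// illustrative, not the official Luma Dream Machine schema.
interface TextToVideoRequest {
  prompt: string;
  model: "ray-2";
  loop: boolean; // true requests a seamlessly looping clip
}

// Validate and assemble the payload an agent would submit.
function buildTextToVideoRequest(prompt: string, loop = false): TextToVideoRequest {
  if (prompt.trim().length === 0) {
    throw new Error("prompt must not be empty");
  }
  return { prompt, model: "ray-2", loop };
}

const request = buildTextToVideoRequest("a foggy harbor at dawn", true);
```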
Why Vercel AI SDK?
The Vercel AI SDK gives every Luma AI (Generative Video & Creative) tool full TypeScript type inference, IDE autocomplete, and compile-time error checking. Connect all 10 tools through Vinkius and stream results progressively to React, Svelte, or Vue components. It works on Edge Functions, Cloudflare Workers, and any Node.js runtime.
- TypeScript-first: every MCP tool gets full type inference, IDE autocomplete, and compile-time error checking out of the box
- Framework-agnostic: the core works with Next.js, Nuxt, SvelteKit, or any Node.js runtime, with the same Luma AI (Generative Video & Creative) integration everywhere
- Built-in streaming UI primitives let you display Luma AI (Generative Video & Creative) tool results progressively in React, Svelte, or Vue components
- Edge-compatible: the AI SDK runs on Vercel Edge Functions, Cloudflare Workers, and other edge runtimes for minimal latency
Luma AI (Generative Video & Creative) in Vercel AI SDK
Luma AI (Generative Video & Creative) and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect Luma AI (Generative Video & Creative) to Vercel AI SDK through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for Luma AI (Generative Video & Creative) in Vercel AI SDK
The Luma AI (Generative Video & Creative) MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 10 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in Vercel AI SDK only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.
How Vinkius secures Luma AI (Generative Video & Creative) for Vercel AI SDK
Every tool call from Vercel AI SDK to the Luma AI (Generative Video & Creative) MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.
Frequently asked questions
How do I check if my video generation is finished?
Use the lm.get_generation tool with the Generation ID provided. Your agent will poll the Luma API and report the current state (queued, dreaming, or completed). Once finished, it will return the final MP4 video URL.
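The polling loop described above can be sketched as follows. The status-fetcher callback stands in for an lm.get_generation tool call; its exact call shape, and the status field names, are assumptions for illustration.

```typescript
// Generation states as described above.
type GenerationState = "queued" | "dreaming" | "completed" | "failed";

interface GenerationStatus {
  state: GenerationState;
  videoUrl?: string; // present once the state is "completed"
}

// Poll an injected status fetcher until a terminal state is reached.
// The fetcher stands in for the lm.get_generation tool call.
async function pollGeneration(
  getStatus: () => Promise<GenerationStatus>,
  maxAttempts = 10,
  delayMs = 0,
): Promise<GenerationStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await getStatus();
    if (status.state === "completed" || status.state === "failed") {
      return status;
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("generation did not finish within maxAttempts");
}
```

Injecting the fetcher keeps the loop testable and decoupled from any particular MCP client.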
Can I control the camera movement in my AI-generated video?
Absolutely. Use the lm.camera_control tool. You can provide a scene prompt and a JSON block defining the movement type (e.g., orbit, pan, tilt) and magnitude, allowing for professional cinematographic directing.
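A typed sketch of the movement parameters described above. The exact JSON fields lm.camera_control expects are an assumption for illustration; only the movement names (pan, tilt, dolly, orbit) come from this page.

```typescript
// Movement types supported per the tool description above.
type CameraMovement = "pan" | "tilt" | "dolly" | "orbit";

// Hypothetical structured parameter block; field names are illustrative.
interface CameraControlParams {
  prompt: string;
  movement: CameraMovement;
  magnitude: number; // assumed relative strength of the move, 0..1
}

// Validate and assemble the parameter block an agent would pass.
function buildCameraControl(
  prompt: string,
  movement: CameraMovement,
  magnitude: number,
): CameraControlParams {
  if (magnitude < 0 || magnitude > 1) {
    throw new Error("magnitude must be between 0 and 1");
  }
  return { prompt, movement, magnitude };
}

const shot = buildCameraControl("slow orbit around a lighthouse", "orbit", 0.5);
```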
Can my agent extend an existing Luma video with more footage?
Yes. The lm.extend_video tool allows you to provide a continuation prompt and a previous Generation ID. Your agent will trigger Luma to seamlessly expand the scene, maintaining visual and structural consistency.
How does the Vercel AI SDK connect to MCP servers?
Import createMCPClient from @ai-sdk/mcp and pass the server URL. The SDK discovers all tools and provides typed TypeScript interfaces for each one.
Can I use MCP tools in Edge Functions?
Yes. The AI SDK is fully edge-compatible. MCP connections work on Vercel Edge Functions, Cloudflare Workers, and similar runtimes.
Does it support streaming tool results?
Yes. The SDK provides streaming primitives like useChat and streamText that handle tool calls and display results progressively in the UI.
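Progressive display boils down to appending streamed text deltas to UI state as chunks arrive, the way streaming chat hooks update a message in place. This sketch models that reduction in isolation; the chunk shape is an assumption, not the SDK's actual stream protocol.

```typescript
// Illustrative chunk shape; not the SDK's actual stream part type.
interface TextDelta {
  type: "text-delta";
  text: string;
}

// Apply deltas one at a time and return every intermediate UI state,
// showing how a component would render progressively.
function applyDeltas(initial: string, deltas: TextDelta[]): string[] {
  const frames: string[] = [];
  let current = initial;
  for (const delta of deltas) {
    current += delta.text;
    frames.push(current);
  }
  return frames;
}

const frames = applyDeltas("", [
  { type: "text-delta", text: "Gen" },
  { type: "text-delta", text: "erating video..." },
]);
```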
What if I get "createMCPClient is not a function"?
The package is likely missing. Install it with: npm install @ai-sdk/mcp
