Bring Generative Video
to OpenAI Agents SDK
Learn how to connect Luma AI (Generative Video & Creative) to OpenAI Agents SDK and start using 10 AI agent tools in minutes. Fully managed, enterprise secure, and ready to use without writing a single line of code.
What is the Luma AI (Generative Video & Creative) MCP Server?
Connect your Luma AI account to any AI agent and take full control of state-of-the-art generative video production and professional creative tools through natural conversation.
What you can do
- Cinematic Text-to-Video — Generate high-fidelity AI videos from scene descriptions using Luma Dream Machine (Ray-2 model) directly from your agent
- Image Animation — Transform static frames into dynamic videos (image-to-video) with industry-leading motion coherence and photorealism
- Professional Camera Control — Direct your AI shots with specific movements including pan, tilt, dolly, and orbit using structured movement parameters
- Video Extension & Looping — Seamlessly continue existing scenes with additional footage or create perfect looping videos for social media and backgrounds
- Keyframe Interpolation — Create smooth, high-quality video transitions between two distinct keyframe images to bridge visual concepts effectively
- Photorealistic Text-to-Image — Generate stunning high-resolution images using the Luma Photon-1 model for rapid visual iteration and design
- Task Orchestration — Manage asynchronous generation jobs, poll for status updates (queued, dreaming, completed), and monitor your API credit balance securely
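Generation jobs are asynchronous: the agent submits a prompt, then polls until the job leaves the queued/dreaming states. A minimal sketch of that polling loop, with a stubbed status function standing in for a real Luma API call (the state names match the lifecycle above; everything else is illustrative):

```python
import time

def wait_for_video(poll_state, poll_seconds: float = 0.0) -> str:
    """Call poll_state() until the generation completes or fails."""
    while True:
        state = poll_state()
        if state in ("completed", "failed"):
            return state
        time.sleep(poll_seconds)

# Stub standing in for a real status call; actual calls return
# queued -> dreaming -> completed and, once done, the MP4 URL.
states = iter(["queued", "dreaming", "completed"])
print(wait_for_video(lambda: next(states)))  # -> completed
```

In practice the agent performs this loop for you when asked to wait for a result; the sketch only shows the state machine it follows.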
How it works
1. Subscribe to this server
2. Enter your Luma AI API Key
3. Start generating cinematic media from Claude, Cursor, or any MCP-compatible client
Who is this for?
- Video Editors & Creators — generate high-quality B-roll and cinematic sequences through natural conversation without manual rendering
- Creative Directors — rapid-prototype visual concepts and storyboards by commanding your agent to generate varied styles and camera paths
- AI Artists & Designers — iterate on photorealistic imagery and complex video transitions directly from your workspace
Built-in capabilities (10)
- Generate video with specific camera movements using Luma Dream Machine. Supports pan, tilt, dolly, orbit
- Delete a Luma Dream Machine generation and its video
- Extend an existing Luma video with additional footage. Seamlessly continues the scene
- Get current Luma Dream Machine credit balance
- Get the status and result of a Luma Dream Machine generation. Returns state (queued/dreaming/completed/failed) and video URL
- Animate a still image into video using Luma Dream Machine. Image becomes the first frame
- Create smooth video transition between two keyframe images using Luma Dream Machine
- List recent Luma Dream Machine generations. Returns generation IDs, prompts, states, and timestamps
- Generate photorealistic images using Luma Photon-1 model
- Generate cinematic AI video from a text prompt using Luma Dream Machine (Ray-2 model). Industry-leading motion coherence and photorealism; supports loop (true/false). Poll get_generation for results
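The camera-control capability takes a scene prompt plus structured movement parameters (the FAQ below notes a JSON block defining movement type and magnitude). A sketch of the kind of payload an agent might assemble; the field names here are assumptions for illustration, not the exact tool schema:

```python
import json

# Illustrative camera-control request; field names are assumptions,
# not the tool's exact schema.
request = {
    "prompt": "A lighthouse on a rocky coast at dusk",
    "camera": {"movement": "orbit", "magnitude": "medium"},
    "loop": False,
}

payload = json.dumps(request)
print(payload)
```

In conversation you would simply describe the shot ("slow orbit around the lighthouse") and the agent builds the structured parameters for you.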
Why OpenAI Agents SDK?
The OpenAI Agents SDK auto-discovers all 10 tools from Luma AI (Generative Video & Creative) through native MCP integration. Build agents with built-in guardrails, tracing, and handoff patterns, or chain multiple agents where one queries Luma AI (Generative Video & Creative), another analyzes the results, and a third generates reports, all orchestrated through Vinkius.
- Native MCP integration via MCPServerSse: pass the URL and the SDK auto-discovers all tools with full type safety
- Built-in guardrails, tracing, and handoff patterns let you build production-grade agents without reinventing safety infrastructure
- Lightweight and composable: chain multiple agents and MCP servers in a single pipeline with minimal boilerplate
- First-party OpenAI support ensures optimal compatibility with GPT models for tool calling and structured output
Luma AI (Generative Video & Creative) in OpenAI Agents SDK
Luma AI (Generative Video & Creative) and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect Luma AI (Generative Video & Creative) to OpenAI Agents SDK through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for Luma AI (Generative Video & Creative) in OpenAI Agents SDK
The Luma AI (Generative Video & Creative) MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 10 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in OpenAI Agents SDK only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures
Luma AI (Generative Video & Creative) for OpenAI Agents SDK
Every tool call from OpenAI Agents SDK to the Luma AI (Generative Video & Creative) MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.
Frequently asked questions
How do I check if my video generation is finished?
Use the lm.get_generation tool with the Generation ID provided. Your agent will poll the Luma API and report the current state (queued, dreaming, or completed). Once finished, it will return the final MP4 video URL.
Can I control the camera movement in my AI-generated video?
Absolutely. Use the lm.camera_control tool. You can provide a scene prompt and a JSON block defining the movement type (e.g., orbit, pan, tilt) and magnitude, allowing for professional cinematographic directing.
Can my agent extend an existing Luma video with more footage?
Yes. The lm.extend_video tool allows you to provide a continuation prompt and a previous Generation ID. Your agent will trigger Luma to seamlessly expand the scene, maintaining visual and structural consistency.
How does the OpenAI Agents SDK connect to MCP?
Use MCPServerSse(url=...) to create a server connection. The SDK auto-discovers all tools and makes them available to your agent with full type information.
Can I use multiple MCP servers in one agent?
Yes. Pass a list of MCPServerSse instances to the agent constructor. The agent can use tools from all connected servers within a single run.
Does the SDK support streaming responses?
Yes. The SDK supports SSE and Streamable HTTP transports, both of which work natively with Vinkius.
Troubleshooting
MCPServerStreamableHttp not found
Ensure you have the latest version: pip install --upgrade openai-agents
Agent not calling tools
Make sure your prompt explicitly references the task the tools can help with.
