VIBEFRAME(1)

NAME

vibeframe — AI-Native Video Editor. CLI-first, MCP-ready. Generate, edit, and ship videos from your terminal.

SYNOPSIS

vibe <group> <command> [arguments] [options]

INFO

95 stars
15 forks

DESCRIPTION

AI-Native Video Editor — CLI-first, MCP-ready. Generate, edit, and ship videos from your terminal.

README

VibeFrame

AI-native video editing. CLI-first. MCP-ready.


Edit videos with natural language. Every edit is a command. Every workflow is scriptable.


Demo

CLI walkthrough: https://youtu.be/EJUUpPp2d_8

Claude Code integration: https://youtu.be/sdf930sZ7co


Why VibeFrame?

Traditional video editors are built for clicking buttons. VibeFrame is built for AI-powered workflows.

Traditional Editor            | VibeFrame
------------------------------|--------------------------------------------------
Import → Drag → Trim → Export | vibe edit silence-cut interview.mp4 -o clean.mp4
Manual scene detection        | vibe detect scenes video.mp4
Export for each platform      | vibe pipeline viral project.vibe.json
Click through menus           | Natural language → CLI → done

Design Principles: Headless First — AI-Native — MCP Compatible — Provider Agnostic


Quick Start (CLI)

Prerequisites: Node.js 20+, FFmpeg

CLI-first. Every video edit is a command.

# Install
curl -fsSL https://vibeframe.ai/install.sh | bash

# Remove silence from an interview
vibe edit silence-cut interview.mp4 -o clean.mp4

# Add captions with auto-transcription
vibe edit caption video.mp4 -o captioned.mp4

# Create a TikTok from a script
vibe pipeline script-to-video "A day in the life of a developer..." -a 9:16 -o ./tiktok/

# Export to MP4
vibe export project.vibe.json -o output.mp4

For development:

git clone https://github.com/vericontext/vibeframe.git
cd vibeframe
pnpm install && pnpm build

Use with Claude Code

Already have the CLI installed? Claude Code runs vibe commands for you — just describe what you want in natural language.

You say                                   | Claude Code runs
------------------------------------------|--------------------------------------------------
"Remove silence from interview.mp4"       | vibe edit silence-cut interview.mp4 -o clean.mp4
"Extract 3 best moments from podcast.mp4" | vibe pipeline highlights podcast.mp4 -c 3
"Add Korean subtitles to video.mp4"       | vibe edit caption video.mp4 -o captioned.mp4
"Create a TikTok from this script"        | vibe pipeline script-to-video "..." -a 9:16
"Remove background noise"                 | vibe edit noise-reduce noisy.mp4 -o clean.mp4
"Make a 60-second highlight reel"         | vibe pipeline highlights long-video.mp4 -d 60

No setup needed beyond installing the CLI. Claude Code discovers and runs vibe commands directly.


MCP Integration

Works with Claude Desktop and Cursor via MCP. No clone needed — just add to your config and restart:

{
  "mcpServers": {
    "vibeframe": {
      "command": "npx",
      "args": ["-y", "@vibeframe/mcp-server"]
    }
  }
}

Config file locations:

  • Claude Desktop (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json
  • Claude Desktop (Windows): %APPDATA%\Claude\claude_desktop_config.json
  • Cursor: .cursor/mcp.json in your workspace

See packages/mcp-server/README.md for full tool, resource, and prompt reference.
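If you already have a config file with other MCP servers registered, the entry above can be merged in without hand-editing JSON. A minimal Python sketch (stdlib only; the merge logic and missing-file handling are our assumptions, not part of VibeFrame):

```python
# Merge the VibeFrame MCP server entry into an existing config file
# without clobbering other registered servers. The entry itself matches
# the JSON above; pass whichever config path your client uses (see the
# locations listed above).
import json
from pathlib import Path

VIBEFRAME_ENTRY = {"command": "npx", "args": ["-y", "@vibeframe/mcp-server"]}

def add_vibeframe(config_path):
    path = Path(config_path)
    # Start from the existing config if present, else an empty one.
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["vibeframe"] = VIBEFRAME_ENTRY
    path.write_text(json.dumps(config, indent=2))
    return config
```

Restart the client afterwards so it picks up the new server.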


AI Pipelines

End-to-end workflows powered by multiple AI providers (Claude + ElevenLabs + Gemini + Kling/Runway):

vibe pipeline script-to-video "A morning routine of a startup founder..." \
  -d 60 -a 9:16 -g kling -o startup.vibe.json

vibe pipeline highlights interview.mp4 -d 90 --criteria emotional
vibe pipeline auto-shorts podcast.mp4
vibe pipeline animated-caption video.mp4 -s bounce -o captioned.mp4
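Because every pipeline is a command, fanning one script out to several platforms is a short loop. A hypothetical sketch (the -d, -a, and -o flags come from the examples above; the platform-to-ratio mapping is illustrative, not part of VibeFrame):

```python
# Generate one script-to-video command per target platform.
PLATFORM_RATIOS = {"tiktok": "9:16", "shorts": "9:16", "youtube": "16:9"}

def platform_commands(script, duration=60):
    return [
        f'vibe pipeline script-to-video "{script}" -d {duration} -a {ratio} -o ./{name}/'
        for name, ratio in PLATFORM_RATIOS.items()
    ]
```

Each returned string can be run directly in a shell or handed to a job queue.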


CLI Reference

Run vibe --help for a full list of commands.

Group         | Commands | Example
--------------|----------|--------
vibe generate | image, video, speech, sound-effect, music, motion, storyboard, thumbnail, background | vibe generate image "prompt" -o img.png
vibe edit     | silence-cut, jump-cut, caption, grade, reframe, speed-ramp, text-overlay, fade, noise-reduce, image, fill-gaps | vibe edit caption video.mp4 -o out.mp4
vibe analyze  | media, video, review, suggest | vibe analyze media video.mp4 "summarize"
vibe audio    | transcribe, voices, isolate, voice-clone, dub, duck | vibe audio transcribe audio.mp3
vibe pipeline | script-to-video, highlights, auto-shorts, regenerate-scene, animated-caption | vibe pipeline script-to-video "..." -a 9:16
vibe project  | create, info, set | vibe project create "name"
vibe timeline | add-source, add-clip, split, trim, move, delete, list | vibe timeline add-source project file
vibe batch    | import, concat, apply-effect | vibe batch import project dir/
vibe detect   | scenes, silence, beats | vibe detect scenes video.mp4
vibe export   | - | vibe export project.vibe.json -o out.mp4

Every command supports --help, --json, --dry-run, and --stdin. Run vibe schema --list for a full machine-readable command index.
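The --json flag makes the CLI easy to drive from other programs. A minimal scripting sketch (assumes vibe is on PATH and that --json writes a JSON document to stdout; the exact output shape is not specified here):

```python
# Build an argv list for subprocess.run() from vibe command parts, then
# run it and parse the JSON output.
import json
import subprocess

def vibe_argv(*parts, json_out=True, dry_run=False):
    argv = ["vibe", *parts]
    if json_out:
        argv.append("--json")
    if dry_run:
        argv.append("--dry-run")  # preview without touching files
    return argv

def run_vibe(*parts, **flags):
    result = subprocess.run(vibe_argv(*parts, **flags),
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)
```

For example, run_vibe("detect", "scenes", "video.mp4") would return the parsed scene list rather than text to scrape.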


Agent Mode (Standalone)

For environments without Claude Code, Codex, or MCP — a built-in interactive session:

vibe agent                     # Start (default: OpenAI)
vibe agent -p claude           # Use Claude
vibe agent -p ollama           # Free, local, no API key

Best used for onboarding and quick experiments. For production workflows, use CLI commands directly or via Claude Code / MCP.


AI Providers

See MODELS.md for detailed model information (SSOT).

Category  | Providers                                       | Default
----------|-------------------------------------------------|--------------------
Agent LLM | OpenAI, Claude, Gemini, xAI, OpenRouter, Ollama | GPT-5-mini
Image     | Gemini, OpenAI, xAI Grok                        | Gemini Nano Banana
Video     | xAI Grok, Kling, Runway, Veo                    | Grok Imagine
Audio     | ElevenLabs, Whisper                             | -

Required API Keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, ELEVENLABS_API_KEY, RUNWAY_API_SECRET, KLING_API_KEY, XAI_API_KEY
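A quick preflight check saves a failed run halfway through a pipeline. This sketch reports which keys from the list above are unset (you only need keys for the providers you actually use; vibe's own error reporting may differ):

```python
# Report which provider API keys are missing from the environment.
import os

REQUIRED_KEYS = [
    "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY",
    "ELEVENLABS_API_KEY", "RUNWAY_API_SECRET", "KLING_API_KEY", "XAI_API_KEY",
]

def missing_keys(env=None):
    env = os.environ if env is None else env
    return [key for key in REQUIRED_KEYS if not env.get(key)]
```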


Project Structure

vibeframe/
├── packages/
│   ├── cli/               # CLI + Agent mode
│   ├── core/              # Timeline engine (Zustand + Immer + FFmpeg)
│   ├── ai-providers/      # Pluggable AI providers
│   ├── mcp-server/        # MCP server (npm: @vibeframe/mcp-server)
│   └── ui/                # Shared React components
├── apps/web/              # Next.js landing & preview UI
├── MODELS.md              # AI models reference (SSOT)
└── ROADMAP.md             # Development roadmap

Roadmap

Phase                    | Status  | Description
-------------------------|---------|--------------------------------------------
1. Foundation            | Done    | Core CLI, FFmpeg.wasm export
2. AI Providers          | Done    | Multi-provider integration
3. MCP Integration       | Done    | Claude Desktop & Cursor support
4. AI Pipelines          | Done    | Script-to-Video, Highlights, B-Roll, Viral
5. Server Infrastructure | Planned | Hybrid rendering, chunked uploads
6. Collaboration         | Planned | CRDT-based local-first sync

See ROADMAP.md for details.


Open Core Model

VibeFrame Core is 100% open source (MIT License). Core features will always remain free and open source.


Contributing

pnpm build     # Build all packages
pnpm test      # Run all tests
pnpm lint      # Lint code

Contributions welcome — AI provider integrations, CLI improvements, docs, bug fixes & tests. See CONTRIBUTING.md.


License

MIT - see LICENSE


Built for the AI age. Ship videos, not clicks.
