UVR-HEADLESS-RUNNER(1)

NAME

uvr-headless-runner - Production-ready UVR5 CLI & Docker image. Run SOTA separation models (Roformer, SCNet, MDX, Demucs, VR Architecture) on headless GPU servers without dependency hell.

SYNOPSIS

$ pip install uvr-headless-runner

INFO

86 stars
1 fork
0 views

DESCRIPTION

Production-ready UVR5 CLI & Docker image. Run SOTA separation models (Roformer, SCNet, MDX, Demucs, VR Architecture) on headless GPU servers without dependency hell.

README

UVR Headless Runner

🎧 Separate vocals, instruments, drums, bass & more from any audio

Command-line audio source separation powered by UVR

License: MIT | Python 3.9+ | PyTorch | PyPI

🇨🇳 中文 | 🇬🇧 English | 🐳 Docker


✨ Features

🎸 MDX-Net Runner

  • MDX-Net / MDX-C models
  • Roformer (MelBandRoformer, BSRoformer)
  • SCNet (Sparse Compression Network)
  • ONNX & PyTorch checkpoints

🥁 Demucs Runner

  • Demucs v1 / v2 / v3 / v4
  • htdemucs / htdemucs_ft
  • 6-stem separation (Guitar, Piano)
  • Auto model download

🎤 VR Runner

  • VR Architecture models
  • VR 5.1 model support
  • Window size / Aggression tuning
  • TTA & Post-processing

🚀 Highlights

Feature                     Description
🎯 GUI-Identical            Exactly replicates UVR GUI behavior
GPU Accelerated             NVIDIA CUDA & AMD DirectML support
🔧 Zero Config              Auto-detect model parameters
📦 Batch Ready              Perfect for automation & pipelines
🎚️ Bit Depth Control        16/24/32-bit PCM, 32/64-bit float
📥 Auto Download            Official UVR model registry with auto-download
🛡️ Robust Error Handling    GPU fallback, retry, fuzzy matching
🔗 Unified CLI              uvr mdx / uvr demucs / uvr vr — one command for all
📦 PyPI Ready               pip install uvr-headless-runner — instant setup

📖 Design Philosophy

Important

This project is a headless automation layer for Ultimate Vocal Remover.

It does NOT reimplement any separation logic.
It EXACTLY REPLICATES UVR GUI behavior — model loading, parameter fallback, and auto-detection.

✅ If a model works in UVR GUI, it works here — no extra config needed.


🤔 Why uvr-headless-runner?

Built for maximum flexibility. Load any custom model without waiting for upstream updates.

🎨 Full Custom Model Support

Directly load any .pth or .ckpt file.
Perfect for testing new finetunes or experimental models immediately.

🖥️ Headless & Remote Ready

Built for seamless integration into
web services or automation scripts.

👥 By Users, For Users

Designed by audio enthusiasts who
prioritize complete control and native UVR compatibility.


📋 Requirements

Component    Requirement
Python       3.9.x (3.10+ not fully tested)
GPU          NVIDIA CUDA or AMD DirectML (optional)
OS           Windows / Linux / macOS

🔧 Installation

🚀 Option 1: pip install from PyPI (Recommended)
# Install from PyPI
pip install uvr-headless-runner

# GPU support (NVIDIA)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# ONNX GPU (optional)
pip install onnxruntime-gpu

After installation, you get the uvr unified CLI — no need to clone the repo!

uvr mdx -m "UVR-MDX-NET Inst HQ 3" -i song.wav -o output/
uvr demucs -m htdemucs -i song.wav -o output/
uvr vr -m "UVR-De-Echo-Normal" -i song.wav -o output/
📦 Option 2: Poetry (from source)
# Clone repository
git clone https://github.com/chyinan/uvr-headless-runner.git
cd uvr-headless-runner

# Install dependencies
poetry install

# GPU support (NVIDIA)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# ONNX GPU (optional)
pip install onnxruntime-gpu
📦 Option 3: pip + venv (from source)
# Clone repository
git clone https://github.com/chyinan/uvr-headless-runner.git
cd uvr-headless-runner

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Linux/macOS
# venv\Scripts\activate   # Windows

# Install dependencies
pip install -r requirements.txt

# GPU support (NVIDIA)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
🔴 AMD GPU (DirectML)
# Install DirectML support
pip install torch-directml

# Use with --directml flag
python mdx_headless_runner.py -m model.ckpt -i song.wav -o output/ --directml

⚠️ DirectML is experimental. NVIDIA CUDA recommended for best performance.

✅ Verify Installation (Native Python Only)

python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA: {torch.cuda.is_available()}')"

💡 Skip this if using Docker - the container includes all dependencies.

🐳 Option 4: Docker Hub (No Build Required!)

Fastest way to get started - just pull and run!

# Pull pre-built image from Docker Hub
docker pull chyinan/uvr-headless-runner:latest

Run directly (GPU mode)

docker run --rm --gpus all \
  -v ~/.uvr_models:/models \
  -v $(pwd):/data \
  chyinan/uvr-headless-runner:latest \
  uvr-mdx -m "UVR-MDX-NET Inst HQ 3" -i /data/song.wav -o /data/output/

Run directly (CPU mode)

docker run --rm \
  -v ~/.uvr_models:/models \
  -v $(pwd):/data \
  chyinan/uvr-headless-runner:latest \
  uvr-mdx -m "UVR-MDX-NET Inst HQ 3" -i /data/song.wav -o /data/output/ --cpu

Or install CLI wrappers for native experience:

# One-click install (auto-detects GPU)
./docker/install.sh      # Linux/macOS
.\docker\install.ps1     # Windows

# Then use like native commands
uvr-mdx -m "UVR-MDX-NET Inst HQ 3" -i song.wav -o output/
uvr-demucs -m htdemucs -i song.wav -o output/
uvr-vr -m "UVR-De-Echo-Normal" -i song.wav -o output/

📖 Full Docker Guide →


🎼 Quick Start

Unified CLI (pip install / Docker)

After installing via pip install uvr-headless-runner or Docker, you can use the short commands:

# MDX-Net / Roformer separation
uvr mdx -m "UVR-MDX-NET Inst HQ 3" -i song.wav -o output/ --gpu

# Demucs separation
uvr demucs -m htdemucs -i song.wav -o output/ --gpu

# VR Architecture separation
uvr vr -m "UVR-De-Echo-Normal" -i song.wav -o output/ --gpu

# List all available models
uvr list all

# Download a model
uvr download "UVR-MDX-NET Inst HQ 3" --arch mdx

# Show system info
uvr info

💡 You can also use standalone commands: uvr-mdx, uvr-demucs, uvr-vr

MDX-Net / Roformer / SCNet

# Basic separation
python mdx_headless_runner.py -m "model.ckpt" -i "song.flac" -o "output/" --gpu

# Vocals only (24-bit)
python mdx_headless_runner.py -m "model.ckpt" -i "song.flac" -o "output/" --gpu --vocals-only --wav-type PCM_24

Demucs

# All 4 stems
python demucs_headless_runner.py --model htdemucs --input "song.flac" --output "output/" --gpu

# Vocals only
python demucs_headless_runner.py --model htdemucs --input "song.flac" --output "output/" --gpu --stem Vocals --primary-only

VR Architecture

# Basic separation (model in database)
python vr_headless_runner.py -m "model.pth" -i "song.flac" -o "output/" --gpu

# Custom model (not in database)
python vr_headless_runner.py -m "model.pth" -i "song.flac" -o "output/" --gpu \
    --param 4band_v3 --primary-stem Vocals


📥 Model Download Center

All runners now include automatic model downloading from official UVR sources - just like the GUI!

List Available Models

# List all MDX-Net models
python mdx_headless_runner.py --list

# List only installed models
python mdx_headless_runner.py --list-installed

# List models not yet downloaded
python mdx_headless_runner.py --list-uninstalled

# Same for Demucs and VR
python demucs_headless_runner.py --list
python vr_headless_runner.py --list

Download Models

# Download a specific model (without running inference)
python mdx_headless_runner.py --download "UVR-MDX-NET Inst HQ 3"
python demucs_headless_runner.py --download "htdemucs_ft"
python vr_headless_runner.py --download "UVR-De-Echo-Normal by FoxJoy"

Auto-Download on Inference

# Just use the model name - it will download automatically if not installed!
python mdx_headless_runner.py -m "UVR-MDX-NET Inst HQ 3" -i "song.flac" -o "output/" --gpu

# Demucs models auto-download too
python demucs_headless_runner.py --model htdemucs_ft --input "song.flac" --output "output/" --gpu

Model Info & Fuzzy Matching

# Get detailed info about a model
python mdx_headless_runner.py --model-info "UVR-MDX-NET Inst HQ 3"

# Typo? Get suggestions!
python mdx_headless_runner.py --model-info "UVR-MDX Inst HQ"
# Output: Did you mean: UVR-MDX-NET Inst HQ 1, UVR-MDX-NET Inst HQ 2, ...
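
The "did you mean" suggestions can be reproduced with the standard library's `difflib`; a minimal sketch of the idea (the project's actual matcher and model registry may differ, and the model list below is illustrative):

```python
import difflib

def suggest_models(query, known_models, n=3, cutoff=0.5):
    """Return up to n model names that closely match a (possibly misspelled) query."""
    return difflib.get_close_matches(query, known_models, n=n, cutoff=cutoff)

models = [
    "UVR-MDX-NET Inst HQ 1",
    "UVR-MDX-NET Inst HQ 2",
    "UVR-MDX-NET Inst HQ 3",
    "UVR-De-Echo-Normal",
]
print(suggest_models("UVR-MDX Inst HQ", models))  # the three "Inst HQ" variants
```

`cutoff` controls how similar a candidate must be before it is suggested; a lower value yields more (looser) suggestions.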

Features

Feature                  Description
🌐 Official Registry     Syncs with UVR's official model list
🔄 Resume Downloads      Interrupted downloads can be resumed
⏱️ Retry with Backoff    Automatic retry on network errors
💾 Disk Space Check      Pre-checks available space before download
🔍 Fuzzy Matching        Suggests similar model names on typos
Integrity Check          Validates downloaded files
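
A disk-space pre-check like the one in the table can be done with `shutil.disk_usage`; a hedched sketch of the idea (the function name and safety margin here are illustrative, not the project's API):

```python
import shutil

def has_space_for(path, download_bytes, margin_bytes=256 * 1024 * 1024):
    """Check that the filesystem holding `path` has room for a download
    plus a safety margin (for temp files and decompression)."""
    free = shutil.disk_usage(path).free
    return free >= download_bytes + margin_bytes

# Example: can the current directory hold a ~250 MB model file?
print(has_space_for(".", 250 * 1024 * 1024))
```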

🛡️ Error Handling & GPU Fallback

All runners include robust error handling with automatic GPU-to-CPU fallback:

# If GPU runs out of memory, automatically falls back to CPU
python mdx_headless_runner.py -m "model.ckpt" -i "song.flac" -o "output/" --gpu

Output on GPU error:

============================================================
ERROR: GPU memory exhausted
============================================================
Suggestion: Try: (1) Use --cpu flag, (2) Reduce --batch-size...
Attempting to fall back to CPU mode...
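
At its core, the fallback is a retry of the same inference call on a different device; a simplified sketch of the pattern (names here are illustrative — `run_inference` stands in for the model's actual inference call):

```python
def separate_with_fallback(run_inference, prefer_gpu=True):
    """Try GPU first; on an out-of-memory error, retry the same call on CPU."""
    if prefer_gpu:
        try:
            return run_inference("cuda")
        except RuntimeError as exc:
            if "out of memory" not in str(exc).lower():
                raise  # unrelated error: do not mask it
            print("GPU memory exhausted, falling back to CPU...")
    return run_inference("cpu")

# Simulated inference that only fits on CPU:
def fake_inference(device):
    if device == "cuda":
        raise RuntimeError("CUDA out of memory")
    return f"done on {device}"

print(separate_with_fallback(fake_inference))  # prints the notice, then "done on cpu"
```

Re-raising unrelated `RuntimeError`s matters: silently retrying on CPU would hide real bugs such as shape mismatches.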

Error Messages

Errors now include clear explanations and suggestions:

Before                After
FileNotFoundError     Audio file not found: song.wav
CUDA out of memory    GPU memory exhausted. Try: --cpu or reduce --batch-size
Model not found       Model 'xyz' not found. Did you mean: UVR-MDX-NET...?
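
One common way to implement such translation is a pattern-to-message table; a minimal sketch (the patterns and wording below are illustrative, not the project's exact strings):

```python
FRIENDLY_ERRORS = [
    ("no such file", "Audio file not found: {detail}"),
    ("out of memory", "GPU memory exhausted. Try: --cpu or reduce --batch-size"),
    ("not found", "Model not found. Run --list to see available models."),
]

def friendly_message(exc, detail=""):
    """Map a raw exception string to a user-facing explanation."""
    text = str(exc)
    for pattern, message in FRIENDLY_ERRORS:
        if pattern in text.lower():
            return message.format(detail=detail)
    return text  # no mapping known: show the raw error unchanged

print(friendly_message(RuntimeError("CUDA out of memory")))
```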

📊 CLI Progress Display

All runners feature a professional CLI progress system with real-time feedback:

╭──────────────────────────────────────────────────────────────────────────╮
│                          UVR Audio Separation                            │
├──────────────────────────────────────────────────────────────────────────┤
│  Model         │ UVR-MDX-NET Inst HQ 3                                   │
│  Input         │ song.flac                                               │
│  Output        │ ./output/                                               │
│  Device        │ CUDA:0                                                  │
│  Architecture  │ MDX-Net                                                 │
╰──────────────────────────────────────────────────────────────────────────╯

⠹ Downloading model: UVR-MDX-NET Inst HQ 3 ████████████████████████████████████████ 100% • 245.3 MB • 12.5 MB/s • 0:00:00

✓ Model downloaded

⠹ Running inference ████████████████░░░░░░░░░░░░░░░░░░░░░░░░ 42% • 0:01:23 • 0:01:52

✓ Inference complete

╭──────────────────────────────────────────────────────────────────────────╮
│ ✓ Processing completed in 3:15                                           │
╰──────────────────────────────────────────────────────────────────────────╯

Output files:
  • output/song_(Vocals).wav
  • output/song_(Instrumental).wav

Features

Feature                 Description
📥 Download Progress    Real-time speed, ETA, and transfer stats for model downloads
🎯 Inference Progress   Chunk-based progress tracking during audio processing
⏱️ Time Estimates       Elapsed time and remaining time (ETA) display
🎨 Rich Output          Beautiful terminal UI with the rich library
🐳 Docker Compatible    Works seamlessly inside containers
📉 Graceful Fallback    Falls back to basic output if rich is unavailable

Progress Library Support

The system automatically selects the best available library:

  1. rich (preferred) - Full-featured progress bars with colors
  2. tqdm (fallback) - Standard progress bars
  3. Basic (no deps) - Simple text-based progress
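
The three-tier selection above can be implemented with feature detection rather than try/except imports; a sketch of the fallback chain (the project's actual module layout may differ):

```python
import importlib.util

def pick_progress_backend():
    """Return the name of the best available progress library, in preference order."""
    for candidate in ("rich", "tqdm"):
        if importlib.util.find_spec(candidate) is not None:
            return candidate
    return "basic"  # plain text progress, no third-party dependency

print(pick_progress_backend())
```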

Install rich for the best experience:

pip install rich

Quiet Mode

Disable progress output for scripting:

python mdx_headless_runner.py -m model.ckpt -i song.wav -o output/ --quiet

🎛️ MDX-Net Runner

Command Line Arguments

Argument             Short  Default   Description
--model              -m     Required  Model file path (.ckpt/.onnx)
--input              -i     Required  Input audio file
--output             -o     Required  Output directory
--gpu                       Auto      Use NVIDIA CUDA
--directml                            Use AMD DirectML
--overlap                   0.25      MDX overlap (0.25-0.99)
--overlap-mdxc              2         MDX-C/Roformer overlap (2-50)
--wav-type                  PCM_24    Output: PCM_16/24/32, FLOAT, DOUBLE
--vocals-only                         Output vocals only
--instrumental-only                   Output instrumental only
📋 All Arguments

Argument          Short  Description
--name            -n     Output filename base
--json                   Model JSON config
--cpu                    Force CPU
--device          -d     GPU device ID
--segment-size           Segment size (default: 256)
--batch-size             Batch size (default: 1)
--primary-only           Save primary stem only
--secondary-only         Save secondary stem only
--stem                   MDX-C stem select
--quiet           -q     Quiet mode

Examples

# Roformer with custom overlap
python mdx_headless_runner.py \
    -m "MDX23C-8KFFT-InstVoc_HQ.ckpt" \
    -i "song.flac" -o "output/" \
    --gpu --overlap-mdxc 8

# 32-bit float output
python mdx_headless_runner.py \
    -m "model.ckpt" -i "song.flac" -o "output/" \
    --gpu --wav-type FLOAT


🥁 Demucs Runner

Supported Models

Model        Version  Stems  Quality
htdemucs     v4       4      ⭐⭐⭐
htdemucs_ft  v4       4      ⭐⭐⭐⭐ Fine-tuned
htdemucs_6s  v4       6      ⭐⭐⭐⭐ +Guitar/Piano
hdemucs_mmi  v4       4      ⭐⭐⭐
mdx_extra_q  v3       4      ⭐⭐⭐

Command Line Arguments

Argument        Short  Default   Description
--model         -m     Required  Model name or path
--input         -i     Required  Input audio file
--output        -o     Required  Output directory
--gpu                  Auto      Use NVIDIA CUDA
--segment              Default   Segment size (1-100+)
--shifts               2         Time shifts
--stem                           Vocals/Drums/Bass/Other/Guitar/Piano
--wav-type             PCM_24    Output bit depth
--primary-only                   Output primary stem only

Stem Selection

GUI Action         CLI Command
All Stems          (no --stem)
Vocals only        --stem Vocals --primary-only
Instrumental only  --stem Vocals --secondary-only

Examples

# 6-stem separation
python demucs_headless_runner.py \
    --model htdemucs_6s \
    --input "song.flac" --output "output/" \
    --gpu

# High quality with custom segment
python demucs_headless_runner.py \
    --model htdemucs_ft \
    --input "song.flac" --output "output/" \
    --gpu --segment 85


🎤 VR Architecture Runner

Command Line Arguments

Argument          Short  Default   Description
--model           -m     Required  Model file path (.pth)
--input           -i     Required  Input audio file
--output          -o     Required  Output directory
--gpu                    Auto      Use NVIDIA CUDA
--directml                         Use AMD DirectML
--window-size            512       Window size (320/512/1024)
--aggression             5         Aggression setting (0-50+)
--wav-type               PCM_16    Output: PCM_16/24/32, FLOAT, DOUBLE
--primary-only                     Output primary stem only
--secondary-only                   Output secondary stem only
📋 All Arguments

Argument                  Short  Description
--name                    -n     Output filename base
--param                          Model param name (e.g., 4band_v3)
--primary-stem                   Primary stem name (Vocals/Instrumental)
--nout                           VR 5.1 nout parameter
--nout-lstm                      VR 5.1 nout_lstm parameter
--cpu                            Force CPU
--device                  -d     GPU device ID
--batch-size                     Batch size (default: 1)
--tta                            Enable Test-Time Augmentation
--post-process                   Enable post-processing
--post-process-threshold         Post-process threshold (default: 0.2)
--high-end-process               Enable high-end mirroring
--list-params                    List available model params

Model Parameters

When the model hash is not found in the database, you need to provide parameters manually:

# List available params
python vr_headless_runner.py --list-params

# Use custom params
python vr_headless_runner.py -m "model.pth" -i "song.flac" -o "output/" \
    --param 4band_v3 --primary-stem Vocals

# VR 5.1 model with nout/nout_lstm
python vr_headless_runner.py -m "model.pth" -i "song.flac" -o "output/" \
    --param 4band_v3 --primary-stem Vocals --nout 48 --nout-lstm 128

Examples

# High quality with TTA
python vr_headless_runner.py \
    -m "UVR-MDX-NET-Voc_FT.pth" \
    -i "song.flac" -o "output/" \
    --gpu --tta --window-size 1024

# Aggressive separation
python vr_headless_runner.py \
    -m "model.pth" -i "song.flac" -o "output/" \
    --gpu --aggression 15 --post-process

# 24-bit output
python vr_headless_runner.py \
    -m "model.pth" -i "song.flac" -o "output/" \
    --gpu --wav-type PCM_24


📁 Output Structure

output/
├── song_(Vocals).wav        # Vocals
├── song_(Instrumental).wav  # Instrumental (MDX)
├── song_(Drums).wav         # Drums (Demucs)
├── song_(Bass).wav          # Bass (Demucs)
├── song_(Other).wav         # Other (Demucs)
├── song_(Guitar).wav        # Guitar (6-stem)
└── song_(Piano).wav         # Piano (6-stem)
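
Since the stem name is embedded in every output filename as a `_(Stem)` suffix, downstream scripts can recover it with a small regex; a sketch:

```python
import re
from pathlib import Path

STEM_RE = re.compile(r"_\(([^)]+)\)$")

def stem_of(path):
    """Extract the stem name from an output filename like song_(Vocals).wav."""
    match = STEM_RE.search(Path(path).stem)
    return match.group(1) if match else None

print(stem_of("output/song_(Vocals).wav"))        # → Vocals
print(stem_of("output/song_(Instrumental).wav"))  # → Instrumental
```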

🐍 Python API

from mdx_headless_runner import run_mdx_headless
from demucs_headless_runner import run_demucs_headless
from vr_headless_runner import run_vr_headless

# MDX separation
run_mdx_headless(
    model_path='model.ckpt',
    audio_file='song.wav',
    export_path='output',
    use_gpu=True,
    verbose=True  # Print progress
)
# Output: output/song_(Vocals).wav, output/song_(Instrumental).wav

# Demucs separation (vocals only)
run_demucs_headless(
    model_path='htdemucs',
    audio_file='song.wav',
    export_path='output',
    use_gpu=True,
    demucs_stems='Vocals',  # or 'All Stems' for all
    primary_only=True,
    verbose=True
)
# Output: output/song_(Vocals).wav

# VR Architecture separation
run_vr_headless(
    model_path='model.pth',
    audio_file='song.wav',
    export_path='output',
    use_gpu=True,
    window_size=512,
    aggression_setting=5,
    is_tta=False,
    # For unknown models, provide params manually:
    # user_vr_model_param='4band_v3',
    # user_primary_stem='Vocals'
)
# Output: output/song_(Vocals).wav, output/song_(Instrumental).wav

💡 Note: Functions process audio and save to export_path. Check output directory for results.
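
For batch pipelines, a thin wrapper loop over these functions keeps one bad file from aborting the run; a hedged sketch (the wrapper and `fake_runner` below are illustrative, not part of the package — pass `run_mdx_headless` etc. as `runner` in practice):

```python
def batch_separate(runner, audio_files, **runner_kwargs):
    """Run a separation function over many files, collecting per-file status.

    `runner` is passed in so the same loop works for MDX, Demucs, or VR.
    """
    results = {}
    for audio_file in audio_files:
        try:
            runner(audio_file=str(audio_file), **runner_kwargs)
            results[str(audio_file)] = "ok"
        except Exception as exc:
            results[str(audio_file)] = f"failed: {exc}"
    return results

# Stand-in runner for demonstration:
def fake_runner(audio_file, **kwargs):
    if audio_file.endswith(".txt"):
        raise ValueError("not an audio file")

print(batch_separate(fake_runner, ["a.wav", "b.wav", "notes.txt"]))
```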


🔍 Troubleshooting

❌ GPU not detected
# Check CUDA
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall PyTorch with CUDA
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
❌ Model not found

Option 1: Use automatic download (recommended)

# List available models
python mdx_headless_runner.py --list

# Download the model
python mdx_headless_runner.py --download "UVR-MDX-NET Inst HQ 3"

# Or just use it - auto-downloads!
python mdx_headless_runner.py -m "UVR-MDX-NET Inst HQ 3" -i song.wav -o output/

Option 2: Manual download

Default locations:

  • MDX: ./models/MDX_Net_Models/
  • Demucs: ./models/Demucs_Models/v3_v4_repo/
  • VR: ./models/VR_Models/
❌ Network/Download errors
# Force refresh model registry
python model_downloader.py --sync

# Check network connectivity
python -c "import urllib.request; urllib.request.urlopen('https://github.com')"

The downloader includes:

  • Automatic retry (3 attempts with exponential backoff)
  • Resume interrupted downloads
  • Fallback to cached registry
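
The "3 attempts with exponential backoff" policy follows a standard pattern; a sketch of the idea (`fetch` stands in for the actual HTTP download call, and delays here are illustrative):

```python
import time

def download_with_retry(fetch, attempts=3, base_delay=1.0):
    """Call `fetch()` up to `attempts` times, doubling the wait between tries."""
    for attempt in range(attempts):
        try:
            return fetch()
        except OSError as exc:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            print(f"Download failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

# Simulated flaky network: fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection reset")
    return b"model bytes"

print(download_with_retry(flaky_fetch, base_delay=0.01))
```
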
❌ VR model hash not found

If your VR model isn't in the database, provide parameters manually:

# List available params
python vr_headless_runner.py --list-params

# Specify param and primary stem
python vr_headless_runner.py -m "model.pth" -i "song.wav" -o "output/" \
    --param 4band_v3 --primary-stem Vocals

Common params: 4band_v3, 4band_v2, 1band_sr44100_hl512, 3band_44100

❌ Poor output quality
  • Try increasing --overlap or --overlap-mdxc
  • For Demucs, increase --segment (e.g., 85)
  • Ensure correct model config with --json

🙏 Acknowledgments

  • UVR by Anjok07 & aufr33
  • Demucs by Facebook Research
  • MDX-Net by Woosung Choi
  • VR Architecture by tsurumeso

Special thanks to ZFTurbo for MDX23C & SCNet.


📄 License

MIT License

Copyright (c) 2022 Anjok07 (Ultimate Vocal Remover)
Copyright (c) 2026 UVR Headless Runner Contributors

View Full License


Contributing & Support

Pull Requests and Issues are welcome! Whether it's bug reports, feature suggestions, or code contributions, we greatly appreciate them all.

If you find this project helpful, please give us a Star ⭐ - it's the best support for us!


Made with ❤️ for the audio separation community

SEE ALSO

clihub                              3/4/2026               UVR-HEADLESS-RUNNER(1)