MODELS(1)

NAME

models - TUI and CLI for browsing AI models, benchmarks, and coding agents

SYNOPSIS

$ brew install models

INFO

372 stars
15 forks

DESCRIPTION

TUI and CLI for browsing AI models, benchmarks, and coding agents

README

models

Version Benchmarks License: MIT

TUI and CLI for browsing AI models, benchmarks, and coding agents.

  • Models Tab: Browse 3,000+ models across 85+ providers from models.dev with capability indicators, adaptive layouts, and provider categorization
  • Benchmarks Tab: Compare model performance across 15+ benchmarks from Artificial Analysis, with head-to-head tables, scatter plots, radar charts, and creator filtering
  • Agents Tab: Track AI coding assistants (Claude Code, Aider, Cursor, etc.) with version detection, changelogs, and GitHub integration

The video (and the screenshots below) are out of sync with the current state of the app; I've been moving fast on changes, so I'll have to record a new one!

What's New

  • Models tab redesign — capability indicators, adaptive provider panel, and detailed model info at a glance
  • Benchmark compare mode — head-to-head tables, scatter plots, and radar charts for selected models
  • Benchmarks CLI — list and inspect benchmark data directly from the terminal
  • Linux packages — native .deb and .rpm packages for x86_64 and aarch64
  • Agents CLI — track agent releases, view changelogs, and compare versions from the terminal

Features

Models Tab

  • Capability indicators — see Reasoning, Tools, Files, and Open/Closed status at a glance in the model list
  • Provider categories — filter and group providers by type (Origin, Cloud, Inference, Gateway, Dev Tool)
  • Detail panel — capabilities, pricing, modalities, and metadata for the selected model
  • Cross-provider search to compare the same model across different providers
  • Copy to clipboard with a single keypress
  • CLI commands and JSON output for scripting and automation

Agents Tab

  • Curated catalog of 12+ AI coding assistants
  • Version detection — automatically detects installed agents
  • GitHub integration — stars, releases, changelogs, update availability
  • Styled changelogs — markdown rendering with syntax highlighting in the detail pane
  • Changelog search — search across changelogs with highlighted matches and n/N jump-to-match
  • Persistent cache — instant startup with ETag-based conditional fetching
  • Customizable tracking — choose which agents to monitor

Benchmarks Tab

  • ~400 benchmark entries from Artificial Analysis with quality, speed, and pricing scores
  • Compare mode — select models for head-to-head tables, scatter plots, and radar charts
  • Auto-updating — benchmark data refreshed automatically every 30 minutes
  • Creator sidebar with 40+ creators — filter by region, type, or open/closed source
  • Sort & filter — sort by any metric, filter by reasoning capability, source type, and more
  • Detail panel — full benchmark breakdown with indexes, scores, performance, and pricing

Agents CLI

  • Status table — see installed vs latest version, 24h release indicator, and release frequency at a glance
  • Inline release browser — agents <tool> opens an interactive version browser with changelog preview
  • Changelogs — view release notes for any agent by name, latest version, or explicit version
  • Tracked-agent manager — agents list-sources manages which curated agents are tracked from the CLI
  • Dual entry point — use as models agents or create an agents symlink for standalone usage
  • Fast — concurrent GitHub fetching and version detection

Benchmarks CLI

  • Live benchmark queries — fetch the current benchmark dataset without launching the TUI
  • Interactive list picker — use models benchmarks list to open a filtered benchmark selector, then inspect the selected model immediately
  • Detail views — use models benchmarks show for a direct model breakdown, with interactive disambiguation when a query matches multiple variants
  • Filtering — narrow by search text, creator, open/closed source, and reasoning status
  • Sorting — sort by any supported metric, including intelligence, coding, math, GPQA, speed, pricing, and release date
  • JSON output — pipe structured benchmark data into shell scripts and other tools

Installation

Cargo (from crates.io)

cargo install modelsdev

Homebrew (macOS/Linux)

brew install models

Migrating from the tap? Run brew untap arimxyer/tap — updates now land through homebrew-core bump PRs and may take a bit to merge.

Scoop (Windows)

scoop install extras/models

Migrating from the custom bucket? Run scoop bucket rm arimxyer — Scoop Extras handles updates automatically.

Arch Linux (AUR)

paru -S models-bin   # or: yay -S models-bin

Maintained by @Dominiquini

Debian / Ubuntu

Download the .deb from GitHub Releases and install:

# Download the latest .deb for your architecture (amd64 or arm64)
sudo dpkg -i modelsdev_*_amd64.deb

Fedora / RHEL

Download the .rpm from GitHub Releases and install:

# Download the latest .rpm for your architecture (x86_64 or aarch64)
sudo rpm -i modelsdev-*.x86_64.rpm

Verifying downloads: Each GitHub Release includes a SHA256SUMS file. After downloading, verify with: sha256sum -c SHA256SUMS --ignore-missing
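
The check can be rehearsed end to end without a real release; a self-contained sketch (the filenames below are placeholders, not actual release artifacts):

```shell
# Rehearse the release check: write a file, record its checksum, then
# verify it the same way you would verify a downloaded .deb/.rpm.
# `modelsdev_demo.deb` is a placeholder name, not a real release file.
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf 'demo payload\n' > modelsdev_demo.deb
sha256sum modelsdev_demo.deb > SHA256SUMS
sha256sum -c SHA256SUMS --ignore-missing   # prints: modelsdev_demo.deb: OK
```

For a real download, run the same `sha256sum -c` in the directory containing both the release file and the SHA256SUMS from the same GitHub Release.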

Pre-built binaries

Download the latest release for your platform from GitHub Releases.

Build from source

git clone https://github.com/arimxyer/models
cd models
cargo build --release
./target/release/models

TUI Usage

Interactive Browser

Run models with no arguments to launch the interactive TUI:

models

Models tab screenshot

Keybindings

Global

Key        Action
] / [      Switch tabs (Models / Agents / Benchmarks)
?          Show context-aware help
q          Quit

Navigation

Key                  Action
j / ↓                Move down
k / ↑                Move up
g                    Jump to first item
G                    Jump to last item
Ctrl+d / PageDown    Page down
Ctrl+u / PageUp      Page up
Tab / Shift+Tab      Switch panels
← / →                Switch panels

Search

Key            Action
/              Enter search mode
Enter / Esc    Exit search mode
Esc            Clear search (in normal mode)

Models Tab

Filters & Sort

Key    Action
s      Cycle sort (name → date → cost → context)
S      Toggle sort direction (asc/desc)
1      Toggle reasoning filter
2      Toggle tools filter
3      Toggle open weights filter
4      Toggle free models filter
5      Cycle provider category filter (All → Origin → Cloud → Inference → Gateway → Tool)
6      Toggle category grouping

Copy & Open

Key    Action
c      Copy provider/model-id
C      Copy model-id only
o      Open provider docs in browser
D      Copy provider docs URL
A      Copy provider API URL

Agents Tab

Agents tab screenshot

Filters & Sort

Key    Action
s      Cycle sort (name → updated → stars → status)
1      Toggle installed filter
2      Toggle CLI tools filter
3      Toggle open source filter

Search

Key    Action
/      Search agents and changelogs
n      Jump to next match
N      Jump to previous match

Actions

Key    Action
a      Open tracked agents picker
o      Open docs in browser
r      Open GitHub repo
c      Copy agent name

Customizing Tracked Agents

By default, models tracks 4 popular agents: Claude Code, Codex, Gemini CLI, and OpenCode.

Press a in the Agents tab to open the picker and customize which agents you track. Your preferences are saved to ~/.config/models/config.toml.

You can also add custom agents not in the catalog:

# ~/.config/models/config.toml
[[agents.custom]]
name = "My Agent"
repo = "owner/repo"
binary = "my-agent"
version_command = ["--version"]

See Custom Agents for the full reference.

Benchmarks Tab

Benchmarks tab screenshot

Quick Sort (press again to toggle direction)

Key    Action
1      Sort by Intelligence index
2      Sort by Release date
3      Sort by Speed (tok/s)

Filters

Key    Action
4      Cycle source filter (All / Open / Closed)
5      Cycle region filter (US / China / Europe / ...)
6      Cycle type filter (Startup / Big Tech / Research)
7      Cycle reasoning filter (All / Reasoning / Non-reasoning)

Sort

Key    Action
s      Open sort picker popup
S      Toggle sort direction (asc/desc)

Compare Mode

Key      Action
Space    Toggle model selection (max 8)
v        Cycle view (H2H table → Scatter → Radar)
t        Toggle left panel (Models / Creators)
d        Show detail overlay (H2H view)
c        Clear all selections
h / l    Switch focus (List / Compare)
j / k    Scroll H2H table (when Compare focused)
x / y    Cycle scatter plot axes
a        Cycle radar chart preset

Actions

Key    Action
o      Open Artificial Analysis page

CLI Usage

Benchmarks CLI

Query benchmark data from the command line using the same live benchmark feed as the Benchmarks tab.

Interactive benchmark picker

models benchmarks list
models benchmarks list --sort speed --limit 10
models benchmarks list --creator openai --reasoning
models benchmarks list --open --sort price-input --asc

models benchmarks list opens the inline picker in an interactive terminal and uses the same filters/sorting to narrow the candidate set before you pick a model.

Once the picker is open:

  • / starts a live text filter over name, slug, and creator
  • s cycles sort metrics
  • S reverses the current sort
  • Enter prints the selected model's normal show output

Show benchmark details

models benchmarks show gpt-4o
models benchmarks show "Claude Sonnet 4"

If show matches multiple benchmark variants in an interactive terminal, the CLI reopens the picker with just the matching candidates so you can choose the exact row you want.

JSON output

models benchmarks list --creator anthropic --json
models benchmarks show gpt-4o --json

Agents CLI

Track AI coding agent releases from the command line. Install the agents alias during setup, or use models agents as a fallback.

# Create the agents alias (one-time setup)
mkdir -p ~/.local/bin
ln -s $(which models) ~/.local/bin/agents

Note: Make sure ~/.local/bin is in your PATH. For example, in bash/zsh add export PATH="$HOME/.local/bin:$PATH" to your shell config, or in fish run fish_add_path ~/.local/bin.

Status table

agents status
┌──────────────┬─────┬───────────┬──────────┬─────────┬───────────────┐
│ Tool         │ 24h │ Installed │ Latest   │ Updated │ Freq.         │
├──────────────┼─────┼───────────┼──────────┼─────────┼───────────────┤
│ Claude Code  │ ✓   │ 2.1.42    │ 2.1.42   │ 1d ago  │ ~1d           │
│ OpenAI Codex │ ✓   │ 0.92.0    │ 0.92.0   │ 6h ago  │ ~3h           │
│ Goose        │     │ —         │ 1.0.20   │ 3d ago  │ ~2d           │
└──────────────┴─────┴───────────┴──────────┴─────────┴───────────────┘

View changelogs

agents claude              # Interactive release browser (by CLI binary name)
agents claude-code         # By agent ID
agents claude --latest     # Latest release directly
agents claude --version 1.0.170  # Specific version

Browse versions

agents claude --list       # List all versions
agents claude --pick       # Alias for the interactive release browser

In the release browser:

  • ↑/↓ or j/k moves between releases
  • the lower pane previews the selected release notes
  • Enter prints the full changelog for the selected release

Other commands

agents latest              # Interactive picker for releases from the last 24 hours
agents list-sources        # Interactive tracked-agent manager
agents claude --web        # Open GitHub releases in browser

Models CLI

Interactive model picker

models list
models list anthropic

models list opens the inline picker in an interactive terminal. Use a provider argument to prefilter the picker before it opens.

Once the picker is open:

  • / starts a live filter over model id, name, and provider
  • s cycles sort modes
  • S reverses the current sort
  • Enter prints the selected model's normal show output

Providers

models providers
models providers --json

Show model details

models show claude-opus-4-5-20251101
Claude Opus 4.5
===============

ID: claude-opus-4-5-20251101 Provider: Anthropic (anthropic) Family: claude-opus

Limits

Context: 200k tokens Max Output: 64k tokens

Pricing (per million tokens)

Input: $5.00 Output: $25.00 Cache Read: $0.50 Cache Write: $6.25

Capabilities

Reasoning: Yes Tool Use: Yes Attachments: Yes Modalities: text, image, pdf -> text

Metadata

Released: 2025-11-01 Updated: 2025-11-01 Knowledge: 2025-03-31 Open Weights: No

If show matches multiple providers or model variants in an interactive terminal, the CLI reopens the picker with the matching candidates so you can choose the exact row.

Search models

models search "gpt-4"
models search "claude opus"

models search currently reuses the same matcher and interactive picker flow as models list, so it remains available as a compatibility command.

JSON output

All models and benchmarks commands support --json for scripting:

models benchmarks list --json
models benchmarks show gpt-4o --json
models list --json
models providers --json
models show claude-opus-4-5 --json
models search "llama" --json
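
To make the pipelines concrete, here is a jq sketch run against an inline sample rather than the live command, so the filter itself can be tried anywhere; the field names (id, cost.input) are assumptions about the real schema, so adjust them to match the actual --json output:

```shell
# Apply the kind of jq filter you would use on `models list --json`,
# shown here against an inline sample so it runs without the binary.
# The `id` and `cost.input` field names are assumed, not confirmed.
sample='[{"id":"cheap-model","cost":{"input":0.5}},
         {"id":"pricey-model","cost":{"input":3.0}}]'
printf '%s' "$sample" | jq -r '.[] | select(.cost.input < 1) | .id'
# prints: cheap-model
```

In practice you would pipe the real command instead, e.g. models list --json | jq -r '<filter>'.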

Data Sources

Lots of gratitude to the companies who do all the hard work! Shout out to the sources:

  • Model data: Fetched from models.dev, an open-source database of AI models maintained by SST
  • Benchmark data: Fetched from Artificial Analysis — quality indexes, benchmark scores, speed, and pricing for ~400 model entries
  • Agent data: Curated catalog in data/agents.json — contributions welcome!
  • GitHub data: Fetched from GitHub API (stars, releases, changelogs)

Roadmap

  • Nix flake — Nix packaging with a proper flake.lock for reproducible builds (PRs welcome!)

Contributing

Contributions are welcome! Please read the Contributing Guide before submitting a PR.

This project follows the Contributor Covenant Code of Conduct.

License

MIT

SEE ALSO

cli, hub

3/18/2026                                                            MODELS(1)