OPENHARNESS(1)

NAME

openharness - open-source local terminal CLI for any LLM

SYNOPSIS

$ npm install -g @zhijiewang/openharness

INFO

88 stars
18 forks
0 views

DESCRIPTION

Open-source local terminal CLI that works with any LLM.

README


OpenHarness

        ___
       /   \
      (     )        ___  ___  ___ _  _ _  _   _ ___ _  _ ___ ___ ___
       `~w~`        / _ \| _ \| __| \| | || | /_\ | _ \ \| | __/ __/ __|
       (( ))       | (_) |  _/| _|| .` | __ |/ _ \|   / .` | _|\__ \__ \
        ))((        \___/|_|  |___|_|\_|_||_/_/ \_\_|_\_|\_|___|___/___/
       ((  ))
        `--`

AI coding agent in your terminal. Works with any LLM -- free local models or cloud APIs.




Quick Start

npm install -g @zhijiewang/openharness
oh

That's it. OpenHarness auto-detects Ollama and starts chatting. No API key needed.

Python SDK: there's also an official Python SDK for driving oh from Python programs (notebooks, batch scripts, ML pipelines). Install with pip install openharness after the npm install, then from openharness import query. See python/README.md.

oh init                               # interactive setup wizard (provider + cybergotchi)
oh                                    # auto-detect local model
oh --model ollama/qwen2.5:7b         # specific model
oh --model gpt-4o                     # cloud model (needs OPENAI_API_KEY)
oh --trust                            # auto-approve all tool calls
oh --auto                             # auto-approve, block dangerous bash
oh -p "fix the tests" --trust         # headless mode (single prompt, exit)
oh run "review code" --json           # CI/CD with JSON output

In-session commands:

/rewind                               # undo last AI file change (checkpoint restore)
/roles                                # list agent specializations
/vim                                  # toggle vim mode
Ctrl+O                                # flush transcript to scrollback for review

Why OpenHarness?

Most AI coding agents are locked to one provider or cost $20+/month. OpenHarness works with any LLM -- run it free with Ollama on your own machine, or connect to any cloud API. Every AI edit is git-committed and reversible with /undo.

Feature           | OpenHarness                                                        | Claude Code   | Aider        | OpenCode
Any LLM           | Yes (Ollama, OpenAI, Anthropic, OpenRouter, any OpenAI-compatible) | Anthropic only| Yes          | Yes
Free local models | Ollama native                                                      | No            | Yes          | Yes
Tools             | 41 with permission gates                                           | 43+           | File-focused | 20+
Permission modes  | 7 (ask, trust, deny, acceptEdits, plan, auto, bypass)              | 7             | Basic        | Basic
Git integration   | Auto-commit + /undo + /rewind checkpoints                          | Yes           | Deep git     | Basic
Slash commands    | 42+ built-in                                                       | 80+           | Some         | Some
Headless/CI mode  | oh -p "prompt" or oh run --json                                    | Yes           | Yes          | Yes
GitHub Action     | Built-in PR review action                                          | Yes           | No           | No
Agent roles       | 6 specializations (reviewer, tester, debugger...)                  | Yes           | No           | No
Vim mode          | hjkl, w/b/e, 0/$, x, d, i/a/I/A/o                                  | Full vim      | No           | No
Prompt caching    | Anthropic cache_control                                            | Yes           | No           | No
Bash security     | AST-based command analysis                                         | AST analysis  | No           | No
Companion         | Cybergotchi virtual pet                                            | Basic         | No           | No
Terminal UI       | Sequential renderer (Ink pattern)                                  | React + Ink   | Basic        | BubbleTea
Language          | TypeScript                                                         | TypeScript    | Python       | Go
License           | MIT                                                                | Proprietary   | Apache 2.0   | MIT
Price             | Free (BYOK)                                                        | $20+/month    | Free (BYOK)  | Free (BYOK)

Terminal UI

OpenHarness features a sequential terminal renderer inspired by Ink/Claude Code's default mode. Completed messages flush to native scrollback (scrollable), while the live area (streaming, spinner, input) rewrites in-place using relative cursor movement.
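The flush-to-scrollback / rewrite-in-place pattern can be sketched with plain ANSI escape sequences. This is an illustrative sketch, not OpenHarness's actual renderer; the `LiveArea` class and its methods are made up for this example:

```typescript
// Minimal sketch of a sequential renderer: completed messages are printed
// normally (entering native scrollback), while the live area is redrawn
// in place by moving the cursor up over its previous frame.
class LiveArea {
  private prevHeight = 0;

  // Redraw the live region (spinner, streaming text, input prompt).
  render(lines: string[], write: (s: string) => void): void {
    if (this.prevHeight > 0) {
      write(`\x1b[${this.prevHeight}A`); // cursor up over previous frame
    }
    for (const line of lines) {
      write(`\x1b[2K${line}\n`); // clear the line, then redraw it
    }
    this.prevHeight = lines.length;
  }

  // A finished message is flushed above the live area: erase the live
  // frame, print the message (it scrolls into native scrollback), and
  // let the next render() repaint the live area below it.
  flush(message: string, write: (s: string) => void): void {
    if (this.prevHeight > 0) write(`\x1b[${this.prevHeight}A\x1b[J`);
    write(message + "\n");
    this.prevHeight = 0;
  }
}
```

Because only relative cursor movement is used, the terminal's own scrollback and scrollbar keep working, which is what makes completed messages searchable with the terminal's native tools.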

Keybindings

Key             | Action
Enter           | Submit prompt
Alt+Enter       | Insert newline (multi-line input)
↑ / ↓           | Navigate input history
Ctrl+C          | Cancel current request / exit
Ctrl+A / Ctrl+E | Jump to start / end of input
Ctrl+O          | Toggle thinking block expansion
Ctrl+K          | Toggle code block expansion in messages
Tab             | Autocomplete slash commands / file paths / cycle tool outputs
/vim            | Toggle Vim mode (normal/insert)

Scrolling is handled by the terminal's native scrollbar. Completed messages flow into the terminal scrollback buffer. Use your terminal's search (e.g., Ctrl+Shift+F in VS Code) to search conversation history.

Features

  • Markdown rendering — headings, code blocks, bold, italic, lists, tables, blockquotes, links
  • Syntax highlighting — keywords, strings, comments, numbers, types (JS/TS/Python/Rust/Go and 20+ languages)
  • Collapsible code blocks — blocks over 8 lines auto-collapse; Ctrl+K to expand all
  • Collapsible thinking — thinking blocks collapse to a one-line summary after completion; Ctrl+O to expand
  • Shimmer spinner — animated "Thinking" indicator with color transitions (magenta → yellow at 30s → red at 60s)
  • Tool call display — args preview, live streaming output, result summaries (line counts, elapsed time), expand/collapse with Tab
  • Permission prompts — bordered box with risk coloring, bold colored Yes/No/Diff keys, syntax-highlighted inline diffs
  • Status line — model name, token count, cost, context usage bar (customizable via config)
  • Context warning — yellow alert when context window exceeds 75%
  • Native terminal scrollbar — completed messages flow into scrollback; use your terminal's scrollbar and search
  • Multi-line input — Alt+Enter for newlines; paste detection auto-inserts newlines
  • Autocomplete — slash commands and file paths with descriptions; Tab to cycle
  • File path autocomplete — Tab-completes paths with [dir]/[file] indicators
  • Session browser — /browse to interactively browse and resume past sessions
  • Companion mascot — animated Cybergotchi in the footer (toggle with /companion off|on)

Themes

oh --light                    # light theme for bright terminals
/theme light                  # switch mid-session (saved automatically)
/theme dark                   # switch back

Theme preference is saved to .oh/config.yaml and persists across sessions.

Custom Status Line

Customize the status bar format in .oh/config.yaml:

statusLineFormat: '{model} │ {tokens} │ {cost} │ {ctx}'

Available variables: {model}, {tokens} (input↑ output↓), {cost} ($X.XXXX), {ctx} (context usage bar). Empty sections are automatically collapsed.
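The substitution-plus-collapse behavior can be sketched in a few lines. This is a hypothetical re-implementation of what the config docs describe, not the actual formatter:

```typescript
// Substitute {model}, {tokens}, {cost}, {ctx} into the status-line format,
// then drop empty sections so separators don't dangle.
function formatStatusLine(
  format: string,
  values: Record<string, string>
): string {
  const filled = format.replace(/\{(\w+)\}/g, (_, key) => values[key] ?? "");
  return filled
    .split("│")
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .join(" │ ");
}
```

With only `{model}` and `{tokens}` set, the `{cost}` and `{ctx}` sections disappear along with their separators.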

Tools (35)

Tool | Risk | Description

Core
Bash       | high   | Execute shell commands with live streaming output (AST safety analysis)
Read       | low    | Read files with line ranges, PDF support
ImageRead  | low    | Read images/PDFs for multimodal analysis
Write      | medium | Create or overwrite files
Edit       | medium | Search-and-replace edits
MultiEdit  | medium | Atomic multi-file edits (all succeed or none)
Glob       | low    | Find files by pattern
Grep       | low    | Regex content search with context lines
LS         | low    | List directory contents with sizes

Web
WebFetch      | medium | Fetch URL content (SSRF-protected)
WebSearch     | medium | Search the web
RemoteTrigger | high   | HTTP requests to webhooks/APIs

Tasks
TaskCreate | low | Create structured tasks
TaskUpdate | low | Update task status
TaskList   | low | List all tasks
TaskGet    | low | Get task details
TaskStop   | low | Stop a running task
TaskOutput | low | Get task output

Agents
Agent         | medium | Spawn a sub-agent (with role specialization)
ParallelAgent | medium | Dispatch multiple agents with DAG dependencies
SendMessage   | low    | Agent-to-agent peer messaging
AskUser       | low    | Ask user a question with options

Scheduling
CronCreate | medium | Schedule recurring tasks
CronDelete | medium | Remove scheduled tasks
CronList   | low    | List all scheduled tasks

Planning
EnterPlanMode | low | Enter structured planning mode
ExitPlanMode  | low | Exit planning mode

Code Intelligence
Diagnostics  | low    | LSP-based code diagnostics
NotebookEdit | medium | Edit Jupyter notebooks

Memory & Discovery
Memory     | low | Save/list/search persistent memories
Skill      | low | Invoke a skill from .oh/skills/
ToolSearch | low | Find tools by description

Git Worktrees
EnterWorktree | medium | Create isolated git worktree
ExitWorktree  | medium | Remove a git worktree

Process
KillProcess | high | Stop processes by PID or name

Low-risk read-only tools auto-approve. Medium- and high-risk tools require confirmation in ask mode. Use --trust or --auto to skip prompts.
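The approval logic just described can be condensed into a small decision function. This is an illustrative sketch covering three of the seven modes (the type and function names are not OpenHarness internals):

```typescript
type Risk = "low" | "medium" | "high";
type Mode = "ask" | "trust" | "auto";
type Decision = "approve" | "prompt" | "deny";

// ask: only low-risk read-only tools skip the prompt.
// trust: approve everything.
// auto: approve everything except bash the analyzer flags as dangerous.
function decide(mode: Mode, risk: Risk, dangerousBash = false): Decision {
  if (mode === "trust") return "approve";
  if (mode === "auto") return dangerousBash ? "deny" : "approve";
  return risk === "low" ? "approve" : "prompt";
}
```

The remaining modes (deny, acceptEdits, plan, bypassPermissions) would add further branches keyed on tool name and risk.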

Slash Commands (33)

Type these during a chat session. Aliases: /q → /exit, /h → /help, /c → /commit, /m → /model, /s → /status.

Session:

Command      | Description
/clear       | Clear conversation history
/compact     | Compress conversation to free context
/export      | Export conversation to markdown
/history [n] | List recent sessions; /history search <term> to search
/browse      | Interactive session browser with preview
/resume <id> | Resume a saved session
/fork        | Fork current session

Git:

Command       | Description
/diff         | Show uncommitted git changes
/undo         | Undo last AI commit
/commit [msg] | Create a git commit
/log          | Show recent git commits

Info:

Command       | Description
/help         | Show all available commands (categorized)
/cost         | Show session cost and token usage
/status       | Show model, mode, git branch, MCP servers
/config       | Show configuration
/files        | List files in context
/model <name> | Switch model mid-session
/memory       | View and search memories

Settings:

Command           | Description
/theme dark|light | Switch theme (saved to config)
/vim              | Toggle Vim mode
/companion off|on | Toggle companion visibility

AI:

Command      | Description
/plan <task> | Enter plan mode
/review      | Review recent code changes

Pet:

Command      | Description
/cybergotchi | Feed, pet, rest, status, rename, or reset your companion

Permission Modes

Control how aggressively OpenHarness auto-approves tool calls:

Mode              | Flag                                | Behavior
ask               | --permission-mode ask               | Prompt for medium/high risk operations (default)
trust             | --trust                             | Auto-approve everything
deny              | --deny                              | Only allow low-risk read-only operations
acceptEdits       | --permission-mode acceptEdits       | Auto-approve file edits, ask for Bash/WebFetch/Agent
plan              | --permission-mode plan              | Read-only mode: block all write operations
auto              | --auto                              | Auto-approve all, block dangerous bash (AST-analyzed)
bypassPermissions | --permission-mode bypassPermissions | Approve everything unconditionally (CI only)

Bash commands are analyzed by a lightweight AST parser that detects destructive patterns (rm -rf, git push --force, curl | bash, etc.) and adjusts risk level accordingly.
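The real analyzer is AST-based; the regex approximation below only illustrates the kinds of patterns it flags, and is not OpenHarness's implementation:

```typescript
// Illustrative danger patterns (a real AST analyzer parses the command
// instead of pattern-matching the raw string).
const DANGEROUS_PATTERNS: [RegExp, string][] = [
  [/\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\b/, "recursive force delete"],
  [/\bgit\s+push\s+.*--force\b/, "force push"],
  [/\b(curl|wget)\b[^|]*\|\s*(ba)?sh\b/, "pipe download to shell"],
];

function bashRisk(command: string): { risk: "medium" | "high"; reason?: string } {
  for (const [pattern, reason] of DANGEROUS_PATTERNS) {
    if (pattern.test(command)) return { risk: "high", reason };
  }
  return { risk: "medium" }; // base risk for bash is illustrative here
}
```

String matching like this is easily defeated by quoting and variable expansion, which is exactly why an AST-based approach is stronger.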

Set permanently in .oh/config.yaml: permissionMode: 'acceptEdits'

Hooks

Run shell scripts automatically at key session events by adding a hooks block to .oh/config.yaml:

hooks:
  - event: sessionStart
    command: "echo 'Session started' >> ~/.oh/session.log"
  - event: preToolUse
    command: "scripts/check-tool.sh"
    match: Bash        # optional: only trigger for this tool name
  - event: postToolUse
    command: "scripts/after-tool.sh"
  - event: sessionEnd
    command: "scripts/cleanup.sh"

Event types:

  • sessionStart — fires once when the session begins
  • preToolUse — fires before each tool call; exit code 1 blocks the tool and returns an error to the model
  • postToolUse — fires after each tool call completes
  • sessionEnd — fires when the session ends

Environment variables available to hook scripts:

Variable       | Description
OH_EVENT       | Event type (sessionStart, preToolUse, etc.)
OH_TOOL_NAME   | Name of the tool being called (tool events only)
OH_TOOL_ARGS   | JSON-encoded tool arguments (tool events only)
OH_TOOL_OUTPUT | JSON-encoded tool output (postToolUse only)

Use match to restrict a hook to a specific tool name (e.g., match: Bash only triggers for the Bash tool).
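Putting the event filter, the match restriction, the OH_* environment variables, and the blocking exit code together, a hook dispatcher might look like this sketch (names and structure are assumptions, not OpenHarness's code):

```typescript
import { spawnSync } from "node:child_process";

interface Hook {
  event: string;
  command: string;
  match?: string; // optional tool-name restriction
}

// Run each hook registered for `event`; for preToolUse, a non-zero exit
// status blocks the tool (returns false).
function runHooks(
  hooks: Hook[],
  event: string,
  toolName?: string,
  toolArgs?: unknown
): boolean {
  for (const hook of hooks) {
    if (hook.event !== event) continue;
    if (hook.match && hook.match !== toolName) continue;
    const result = spawnSync("sh", ["-c", hook.command], {
      env: {
        ...process.env,
        OH_EVENT: event,
        ...(toolName ? { OH_TOOL_NAME: toolName } : {}),
        ...(toolArgs ? { OH_TOOL_ARGS: JSON.stringify(toolArgs) } : {}),
      },
    });
    if (event === "preToolUse" && result.status !== 0) return false;
  }
  return true;
}
```

A `preToolUse` hook that exits 1 therefore vetoes the tool call before it runs, while `postToolUse` and session hooks are fire-and-forget.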

See docs/hooks.md for the full event reference including the new userPromptSubmit, permissionRequest, and postToolUseFailure events.

Cybergotchi

OpenHarness ships with a Tamagotchi-style companion that lives in the side panel. It reacts to your session in real time — celebrating streaks, complaining when tools fail, and getting hungry if you ignore it.

Hatch one:

oh init        # wizard includes cybergotchi setup
/cybergotchi   # or hatch mid-session

Commands:

/cybergotchi feed      # +30 hunger
/cybergotchi pet       # +20 happiness
/cybergotchi rest      # +40 energy
/cybergotchi status    # show needs + lifetime stats
/cybergotchi rename    # give it a new name
/cybergotchi reset     # start over with a new species

Needs decay over time (hunger fastest, happiness slowest). Feed and pet your gotchi to keep it happy.
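A linear decay model captures this behavior. The per-hour rates below are invented for the sketch; the README only states that hunger decays fastest and happiness slowest:

```typescript
// Illustrative decay rates per hour (hunger fastest, happiness slowest).
const DECAY_PER_HOUR = { hunger: 6, energy: 4, happiness: 2 };
type Need = keyof typeof DECAY_PER_HOUR;

// Linearly decay a need over elapsed hours, clamped at zero.
function decayNeed(need: Need, value: number, hours: number): number {
  return Math.max(0, value - DECAY_PER_HOUR[need] * hours);
}
```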

Evolution — your gotchi evolves based on lifetime milestones:

  • Stage 1 (✦ magenta): 10 sessions or 50 commits
  • Stage 2 (★ yellow + crown): 100 tasks completed or a 25-tool streak

18 species to choose from: duck, cat, owl, penguin, rabbit, turtle, snail, octopus, axolotl, cactus, mushroom, chonk, capybara, goose, and more.

MCP Servers

Connect any MCP (Model Context Protocol) server by editing .oh/config.yaml:

provider: anthropic
model: claude-sonnet-4-6
permissionMode: ask
mcpServers:
  - name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
  - name: github
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ghp_...

MCP tools appear alongside built-in tools. /status shows connected servers.

Remote MCP servers (HTTP / SSE)

mcpServers:
  - name: linear
    type: http
    url: https://mcp.linear.app/mcp
    headers:
      Authorization: "Bearer ${LINEAR_API_KEY}"

See docs/mcp-servers.md for the full reference, including OAuth 2.1 setup (auto-triggered on 401; /mcp-login and /mcp-logout commands are available).

MCP Server Registry — browse and install from a curated catalog:

/mcp-registry              # browse all available servers
/mcp-registry github       # show install config for a specific server
/mcp-registry database     # search by category

Categories: filesystem, git, database, api, search, productivity, dev-tools, ai.

Git Integration

OpenHarness auto-commits AI edits in git repos:

oh: Edit src/app.ts                    # auto-committed with "oh:" prefix
oh: Write tests/app.test.ts
  • Every AI file change is committed automatically
  • /undo reverts the last AI commit (only OH commits, never yours)
  • /diff shows what changed
  • Your dirty files are safe — committed separately before AI edits

Checkpoints & Rewind

Every file modification is automatically checkpointed before execution. If something goes wrong:

/rewind           # restore files from the last checkpoint
/undo             # revert the last AI git commit

Checkpoints are stored in .oh/checkpoints/ and cover FileWrite, FileEdit, and Bash commands that modify files.

Verification Loops

After every file edit (Edit, Write, MultiEdit), openHarness automatically runs language-appropriate lint/typecheck commands and feeds the results back into the agent context. This is the single highest-impact harness engineering pattern — research shows 2-3x quality improvement from automated feedback.

Auto-detection — if your project has tsconfig.json, .eslintrc*, pyproject.toml, go.mod, or Cargo.toml, verification rules are detected automatically. No configuration needed.

Custom rules via .oh/config.yaml:

verification:
  enabled: true       # default: true (auto-detect)
  mode: warn          # 'warn' appends to output, 'block' marks as error
  rules:
    - extensions: [".ts", ".tsx"]
      lint: "npx tsc --noEmit 2>&1 | head -20"
      timeout: 15000
    - extensions: [".py"]
      lint: "ruff check {file} 2>&1 | head -10"

The agent sees [Verification passed] or [Verification FAILED] with the linter output after each edit, enabling self-correction.

Memory Consolidation

On session exit, openHarness automatically prunes stale memories using temporal decay:

  • Memories not accessed in 30+ days lose 0.1 relevance per 30-day period
  • Memories below 0.1 relevance are permanently deleted
  • Updated relevance scores are persisted to memory files

This keeps the memory system lean and relevant. Configure in .oh/config.yaml:

memory:
  consolidateOnExit: true   # default: true
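The decay rule described above (0.1 relevance lost per full 30-day period since last access; prune below 0.1) is simple enough to state as a function. A minimal sketch, with hypothetical field names:

```typescript
interface Memory {
  relevance: number;
  daysSinceAccess: number;
}

// Apply temporal decay, then drop memories that fall below the threshold.
function consolidate(memories: Memory[]): Memory[] {
  return memories
    .map((m) => ({
      ...m,
      relevance: m.relevance - 0.1 * Math.floor(m.daysSinceAccess / 30),
    }))
    .filter((m) => m.relevance >= 0.1);
}
```

A memory untouched for 90 days loses 0.3 relevance; one accessed within the last 30 days is unchanged.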

Scheduled Tasks (Cron)

Create recurring tasks that run automatically in the background:

# Via slash commands
/cron list                    # show all scheduled tasks
/cron create "check-tests"    # create a new task (interactive)
/cron delete <id>             # remove a task

Schedule syntax: every 5m, every 2h, every 1d

The cron executor checks every 60 seconds for due tasks and runs them via sub-queries. Results are stored in ~/.oh/crons/history/.
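The "every Nm / Nh / Nd" syntax maps cleanly to an interval in milliseconds; a sketch of such a parser (not the actual implementation):

```typescript
// Parse "every 5m", "every 2h", "every 1d" into milliseconds.
function parseSchedule(schedule: string): number {
  const match = /^every\s+(\d+)([mhd])$/.exec(schedule.trim());
  if (!match) throw new Error(`unrecognized schedule: ${schedule}`);
  const units: Record<string, number> = {
    m: 60_000, // minutes
    h: 3_600_000, // hours
    d: 86_400_000, // days
  };
  return Number(match[1]) * units[match[2]];
}
```

A 60-second polling executor would then fire any task whose last run is at least this interval in the past.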

Agent Roles

Dispatch specialized sub-agents for focused tasks:

/roles            # list all available roles
Role             | Description                                        | Tools
code-reviewer    | Find bugs, security issues, style problems         | Read-only
test-writer      | Generate unit and integration tests                | Read + Write
docs-writer      | Write documentation and comments                   | Read + Write + Edit
debugger         | Systematic bug investigation                       | Read-only + Bash
refactorer       | Simplify code without changing behavior            | All file tools + Bash
security-auditor | OWASP, injection, secrets, CVE scanning            | Read-only + Bash
evaluator        | Evaluate code quality and run tests (read-only)    | Read-only + Bash + Diagnostics
planner          | Design step-by-step implementation plans           | Read-only + Bash
architect        | Analyze architecture and design structural changes | Read-only
migrator         | Systematic codebase migrations and upgrades        | All file tools + Bash

Each role restricts the sub-agent to only its suggested tools. You can also pass allowed_tools explicitly:

Agent({ subagent_type: 'evaluator', prompt: 'Run all tests and report results' })
Agent({ allowed_tools: ['Read', 'Grep'], prompt: 'Search for all TODO comments' })

Headless Mode

Run a single prompt without interactive UI — perfect for CI/CD and scripting:

# Chat command with -p flag (recommended)
oh -p "fix the failing tests" --model ollama/llama3 --trust
oh -p "review src/query.ts" --auto --output-format json

# Run command (alternative)
oh run "fix the failing tests" --model ollama/llama3 --trust
oh run "add error handling to api.ts" --json    # JSON output

# Pipe stdin
cat error.log | oh run "what's wrong here?"
git diff | oh run "review these changes"

GitHub Action for PR Review

OpenHarness includes a built-in GitHub Action for automated code review:

# .github/workflows/ai-review.yml
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: ./.github/actions/review
        with:
          model: 'claude-sonnet-4-6'
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

Exit code 0 on success, 1 on failure.

Providers

# Local (free, no API key needed)
oh --model ollama/llama3
oh --model ollama/qwen2.5:7b

# Cloud
OPENAI_API_KEY=sk-... oh --model gpt-4o
ANTHROPIC_API_KEY=sk-ant-... oh --model claude-sonnet-4-6
OPENROUTER_API_KEY=sk-or-... oh --model openrouter/meta-llama/llama-3-70b

# llama.cpp / GGUF
oh --model llamacpp/my-model

# LM Studio
oh --model lmstudio/my-model

llama.cpp / GGUF (local, no Ollama needed)

For direct GGUF support via llama-server, without the overhead of Ollama. Often faster for large models.

Start llama-server:

llama-server --model ./your-model.gguf --port 8080 --alias my-model

Configure via oh init:

  • Run oh init and select "llama.cpp / GGUF" when prompted

Or configure manually in .oh/config.yaml:

provider: llamacpp
model: my-model
baseUrl: http://localhost:8080
permissionMode: ask

Run:

oh
oh --model llamacpp/my-model
oh models                    # list available models

Configuration Hierarchy

Config is loaded in layers (later overrides earlier):

  1. Global ~/.oh/config.yaml — default provider, model, theme for all projects
  2. Project .oh/config.yaml — project-specific settings
  3. Local .oh/config.local.yaml — personal overrides (gitignored)

Set your default provider once globally:

# ~/.oh/config.yaml
provider: ollama
model: llama3
permissionMode: ask
theme: dark

Then per-project configs only need what's different:

# .oh/config.yaml
model: codellama   # override just the model
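The layering boils down to a shallow key-by-key merge where later (more local) layers win. A sketch under that assumption (the real loader may merge nested keys differently):

```typescript
type Config = Record<string, unknown>;

// Merge config layers left to right; later layers override earlier ones.
// A null layer stands in for a missing config file.
function mergeConfigs(...layers: (Config | null)[]): Config {
  return layers.reduce<Config>(
    (acc, layer) => ({ ...acc, ...(layer ?? {}) }),
    {}
  );
}
```

So a project config containing only `model: codellama` inherits `provider`, `permissionMode`, and `theme` from the global layer.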

Project Rules

Create .oh/RULES.md in any repo (or run oh init):

- Always run tests after changes
- Use strict TypeScript
- Never commit to main directly

Rules load automatically into every session.

Skills & Plugins

Skills

Skills are markdown files with YAML frontmatter that add reusable behaviors:

---
name: deploy
description: Deploy the application to production
trigger: deploy
tools: [Bash, Read]
---

Run the deploy script with health checks...

Locations (searched in order):

  1. .oh/skills/ — project-level skills
  2. ~/.oh/skills/ — global skills (available in all projects)

Skills auto-trigger when the user's message contains the trigger keyword, or can be invoked explicitly with /skill deploy.
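Trigger matching could be as simple as a case-insensitive word-boundary check against the message — a sketch of the idea, not the actual matcher:

```typescript
interface Skill {
  name: string;
  trigger: string;
}

// A skill fires when the user's message contains its trigger keyword
// as a whole word (case-insensitive).
function matchSkills(skills: Skill[], message: string): Skill[] {
  return skills.filter((s) =>
    new RegExp(`\\b${s.trigger}\\b`, "i").test(message)
  );
}
```

The word-boundary anchor keeps "deploy" from firing on "redeployment".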

Plugins

Plugins are npm packages that bundle skills, hooks, and MCP servers:

{
  "name": "my-openharness-plugin",
  "version": "1.0.0",
  "skills": ["skills/deploy.md", "skills/review.md"],
  "hooks": {
    "sessionStart": "scripts/setup.sh"
  },
  "mcpServers": [
    { "name": "my-api", "command": "npx", "args": ["-y", "@my-org/mcp-server"] }
  ]
}

Save as openharness-plugin.json in your npm package root. Install with npm install, and openHarness discovers it automatically from node_modules/.
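Discovery amounts to scanning node_modules for packages that ship the manifest at their root. A hedged sketch (scoped @org/pkg directories are omitted for brevity, and the real scanner may differ):

```typescript
import { existsSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface DiscoveredPlugin {
  dir: string;
  manifest: Record<string, unknown>;
}

// Find packages in node_modules that contain openharness-plugin.json.
function discoverPlugins(nodeModules: string): DiscoveredPlugin[] {
  if (!existsSync(nodeModules)) return [];
  const plugins: DiscoveredPlugin[] = [];
  for (const entry of readdirSync(nodeModules)) {
    const manifestPath = join(nodeModules, entry, "openharness-plugin.json");
    if (existsSync(manifestPath)) {
      plugins.push({
        dir: entry,
        manifest: JSON.parse(readFileSync(manifestPath, "utf8")),
      });
    }
  }
  return plugins;
}
```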

How It Works

graph LR
    User[User Input] --> REPL[REPL Loop]
    REPL --> Query[Query Engine]
    Query --> Provider[LLM Provider]
    Provider --> LLM[Ollama / OpenAI / Anthropic]
    LLM --> Tools[Tool Execution]
    Tools --> Permissions{Permission Check}
    Permissions -->|Approved| Execute[Run Tool]
    Permissions -->|Blocked| Deny[Deny & Report]
    Execute --> Response[Stream Response]
    Response --> REPL

FAQ

Does it work offline? Yes. Use Ollama with a local model — no internet or API key needed.

How much does it cost? Free. OpenHarness is MIT licensed. You bring your own API key (BYOK) for cloud models, or use Ollama for free.

Is it safe? Yes. 7 permission modes control what tools can do. Bash commands are analyzed by an AST parser that blocks destructive patterns (rm -rf, curl | bash, etc.). Every file change is checkpointed and reversible with /rewind.

Can I use it in CI/CD? Yes. Use oh -p "prompt" --auto for headless execution, or the built-in GitHub Action for PR reviews.

Does it support my language/framework? Yes. OpenHarness is language-agnostic — it reads, writes, and executes code in any language. Syntax highlighting covers 20+ languages.

How does it compare to Claude Code? ~95% feature parity for CLI use cases. Main advantage: works with ANY LLM (not just Anthropic). See the comparison table above.

Install

Requires Node.js 18+.

# From npm
npm install -g @zhijiewang/openharness

# From source
git clone https://github.com/zhijiewong/openharness.git
cd openharness
npm install && npm install -g .

Development

npm install
npx tsx src/main.tsx              # run in dev mode
npx tsc --noEmit                  # type check
npm test                          # run tests

Adding a tool

Create src/tools/YourTool/index.ts implementing the Tool interface with a Zod input schema, register it in src/tools.ts.

Adding a provider

Create src/providers/yourprovider.ts implementing the Provider interface, add a case in src/providers/index.ts.

Contributing

See CONTRIBUTING.md.

Community

Join the OpenHarness community to get help, share your workflows, and discuss the future of AI coding agents!

Platform         | Details & Links
🟣 Discord       | Join our Discord Server to chat with developers and get real-time support.
🔵 Feishu / Lark | Scan the Feishu group QR code to collaborate with the community.
🟢 WeChat        | Scan the WeChat group QR code to join our WeChat group.

License

MIT

SEE ALSO

clihub                            4/24/2026                      OPENHARNESS(1)