MOAI-ADK(1)

NAME

moai-adk — SPEC-First Agentic Development Kit for Claude Code — 24 AI agents + 52 skills with TDD/DDD quality gates, 16-language…

SYNOPSIS

https://github.com/modu-ai/moai-adk/releases

INFO

952 stars
173 forks

DESCRIPTION

SPEC-First Agentic Development Kit for Claude Code — 24 AI agents + 52 skills with TDD/DDD quality gates, 16-language projects, 4-language docs. Go CLI, zero deps.

README

MoAI-ADK

Agentic Development Kit for Claude Code

English · 한국어 · 日本語 · 中文

CI CodeQL Codecov
Go Release License: Apache-2.0

📚 Official Documentation


"The purpose of vibe coding is not rapid productivity but code quality."

MoAI-ADK is a high-performance AI development environment for Claude Code. 24 specialized AI agents and 52 skills collaborate to produce quality code. It automatically applies TDD (default) for new projects and feature development, or DDD for existing projects with minimal test coverage, and supports dual execution modes with Sub-Agent and Agent Teams.

A single binary written in Go — runs instantly on any platform with zero dependencies.


What's New in v2.12.0

MoAI-ADK v2.12.0 introduces major upgrades to the design system, Claude Code native integration, and Opus 4.7 support.

Key Milestones

| Version | Highlights |
|---|---|
| v2.9.0 | Claude Code v2.1.89-90 native skill integration (Opus 4.6) |
| v2.10.x | LSP suite expansion, SPEC-CC297-001 permissionMode attribute support, Opus 4.7 preview |
| v2.11.x | Self-Research System integration, multi-source documentation loading, enhanced memory management |
| v2.12.0 | [SPEC-AGENCY-ABSORB-001] /agency → /moai design absorption, full Opus 4.7 support, Adaptive Thinking native integration |

Major Changes

Design System Absorption (SPEC-AGENCY-ABSORB-001)

The legacy /agency command has been fully absorbed into /moai design. Existing /agency/ projects migrate automatically via:

moai migrate agency

Benefits:

  • Single unified design workflow instead of dual /moai + /agency commands
  • Improved integration with MoAI core (brand context, quality gates, SPEC-driven workflows)
  • Enhanced documentation at adk.mo.ai.kr/design

Opus 4.7 Native Support

MoAI-ADK now targets Claude Opus 4.7 with native Adaptive Thinking:

  • Automatic dynamic token allocation for reasoning (no fixed budgets)
  • Faster inference through streamlined prompt phrasing
  • Better cost efficiency on complex tasks

Self-Research & Memory Evolution

v2.11+ self-research system now integrated with agent learnings:

  • Agents auto-capture lessons from corrections
  • Memory system persists across sessions (.claude/agent-memory/)
  • Documentation loads just-in-time based on task context

Why MoAI-ADK?

We completely rewrote the Python-based MoAI-ADK (~73,000 lines) in Go.

| Aspect | Python Edition | Go Edition |
|---|---|---|
| Distribution | pip + venv + dependencies | Single binary, zero dependencies |
| Startup time | ~800ms interpreter boot | ~5ms native execution |
| Concurrency | asyncio / threading | Native goroutines |
| Type safety | Runtime (mypy optional) | Compile-time enforced |
| Cross-platform | Python runtime required | Prebuilt binaries (macOS, Linux, Windows) |
| Hook execution | Shell wrapper + Python | Compiled binary, JSON protocol |

Key Numbers

  • 38,700+ lines of Go code, 38 packages
  • 85-100% test coverage
  • 26 specialized AI agents + 47 skills
  • 18 programming languages supported
  • 27 Claude Code hook events

Harness Engineering Architecture

MoAI-ADK implements the Harness Engineering paradigm — designing the environment for AI agents rather than writing code directly.

| Component | Description | Command |
|---|---|---|
| Self-Verify Loop | Agents write code → test → fail → fix → pass cycle autonomously | /moai loop |
| Context Map | Codebase architecture maps and documentation always available to agents | /moai codemaps |
| Session Persistence | progress.md tracks completed phases across sessions; interrupted runs resume automatically | /moai run SPEC-XXX |
| Failing Checklist | All acceptance criteria registered as pending tasks at run start; marked complete as implemented | /moai run SPEC-XXX |
| Language-Agnostic | 16 languages supported: auto-detects language, selects correct LSP/linter/test/coverage tools | All workflows |
| Garbage Collection | Periodic scan and removal of dead code, AI slop, and unused imports | /moai clean |
| Scaffolding First | Empty file stubs created before implementation to prevent entropy | /moai run SPEC-XXX |

"Human steers, agents execute." — The engineer's role shifts from writing code to designing the harness: SPECs, quality gates, and feedback loops.


System Requirements

| Platform | Supported Environments | Notes |
|---|---|---|
| macOS | Terminal, iTerm2 | Fully supported |
| Linux | Bash, Zsh | Fully supported |
| Windows | WSL (recommended), PowerShell 7.x+ | Native cmd.exe is not supported |

Prerequisites:

  • Git must be installed on all platforms
  • Windows users: Git for Windows is required (includes Git Bash)
    • Use WSL (Windows Subsystem for Linux) for the best experience
    • PowerShell 7.x or later is supported as an alternative
    • Legacy Windows PowerShell 5.x and cmd.exe are not supported

Quick Start

1. Installation

macOS / Linux / WSL

curl -fsSL https://raw.githubusercontent.com/modu-ai/moai-adk/main/install.sh | bash

Windows (PowerShell 7.x+)

Recommended: Use WSL with the Linux installation command above for the best experience.

irm https://raw.githubusercontent.com/modu-ai/moai-adk/main/install.ps1 | iex

Requires Git for Windows to be installed first.

Build from Source (Go 1.26+)

git clone https://github.com/modu-ai/moai-adk.git
cd moai-adk && make build

Prebuilt binaries are available on the Releases page.

2. Windows-Specific Issues

Korean Username Path Errors

If your Windows username contains non-ASCII characters (Korean, Chinese, etc.), you may encounter EINVAL errors due to Windows 8.3 short filename conversion.

Workaround 1: Set an alternative temp directory:

# Command Prompt
set MOAI_TEMP_DIR=C:\temp
mkdir C:\temp 2>nul

# PowerShell
$env:MOAI_TEMP_DIR = "C:\temp"
New-Item -ItemType Directory -Path "C:\temp" -Force

Workaround 2: Disable 8.3 filename generation (requires admin):

fsutil 8dot3name set 1

Workaround 3: Create a new Windows user account with ASCII-only username.

3. Initialize a Project

moai init my-project

An interactive wizard auto-detects your language, framework, and methodology, then generates Claude Code integration files.

4. Start Developing with Claude Code

# After launching Claude Code
/moai project                            # Generate project docs (product.md, structure.md, tech.md)
/moai plan "Add user authentication"     # Create a SPEC document
/moai run SPEC-AUTH-001                   # DDD/TDD implementation
/moai sync SPEC-AUTH-001                  # Sync docs & create PR
/moai github issues                      # Fix GitHub issues with Agent Teams
/moai github pr 123                       # Review PR with multi-perspective analysis
graph LR
    A["🔍 /moai project"] --> B["📋 /moai plan"]
    B -->|"SPEC Document"| C["🔨 /moai run"]
    C -->|"Implementation Complete"| D["📄 /moai sync"]
    D -->|"PR Created"| E["✅ Done"]

MoAI Development Methodology

MoAI-ADK automatically selects the optimal development methodology based on your project's state.

flowchart TD
    A["🔍 Project Analysis"] --> B{"New Project or<br/>10%+ Test Coverage?"}
    B -->|"Yes"| C["TDD (default)"]
    B -->|"No"| D{"Existing Project<br/>< 10% Coverage?"}
    D -->|"Yes"| E["DDD"]
    C --> F["RED → GREEN → REFACTOR"]
    E --> G["ANALYZE → PRESERVE → IMPROVE"]
style C fill:#4CAF50,color:#fff
style E fill:#2196F3,color:#fff

TDD Methodology (Default)

The default methodology for new projects and feature development. Write tests first, then implement.

| Phase | Description |
|---|---|
| RED | Write a failing test that defines expected behavior |
| GREEN | Write minimal code to make the test pass |
| REFACTOR | Improve code quality while keeping tests green. /simplify runs automatically after REFACTOR completes. |

For brownfield projects (existing codebases), TDD is enhanced with a pre-RED analysis step: read existing code to understand current behavior before writing tests.

DDD Methodology (Existing Projects with < 10% Coverage)

A methodology for safely refactoring existing projects with minimal test coverage.

ANALYZE   → Analyze existing code and dependencies, identify domain boundaries
PRESERVE  → Write characterization tests, capture current behavior snapshots
IMPROVE   → Improve incrementally under test protection. /simplify runs automatically after IMPROVE completes.

The methodology is automatically selected during moai init (--mode <ddd|tdd>, default: tdd) and can be changed via development_mode in .moai/config/sections/quality.yaml.

Note: MoAI-ADK v2.5.0+ uses binary methodology selection (TDD or DDD only). The hybrid mode has been removed for clarity and consistency.
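Switching an initialized project from TDD to DDD is therefore a one-line config change. A minimal sketch of the relevant section — the `development_mode` key is documented above, but any surrounding fields in your generated file may differ:

```yaml
# .moai/config/sections/quality.yaml (sketch)
development_mode: ddd   # "tdd" (default) or "ddd"
```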

Auto Quality & Scale-Out Layer

MoAI-ADK v2.6.0+ integrates two Claude Code native skills that MoAI invokes autonomously — no flags or manual commands required.

| Skill | Role | Trigger |
|---|---|---|
| /simplify | Quality enforcement | Always runs after every TDD REFACTOR and DDD IMPROVE phase |
| /batch | Scale-out execution | Auto-triggered when task complexity exceeds thresholds |

/simplify — Automatic Quality Pass

Uses parallel agents to review changed code for reuse opportunities, quality issues, efficiency, and CLAUDE.md compliance, then auto-fixes findings. MoAI calls this directly after every implementation cycle — no configuration needed.

/batch — Parallel Scale-Out

Spawns dozens of agents in isolated git worktrees for large-scale parallel work. Each agent runs tests and reports results; MoAI merges them. Auto-triggered per workflow:

| Workflow | Trigger Condition |
|---|---|
| run | tasks ≥ 5, OR predicted file changes ≥ 10, OR independent tasks ≥ 3 |
| mx | source files ≥ 50 |
| coverage | P1+P2 coverage gaps ≥ 10 |
| clean | confirmed dead code items ≥ 20 |
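The `run` trigger conditions reduce to a simple any-threshold predicate. This is an illustrative sketch of the documented rule, not MoAI-ADK's actual implementation:

```go
package main

import "fmt"

// shouldBatchRun mirrors the `run` row of the trigger table:
// /batch scale-out kicks in when any one threshold is crossed.
func shouldBatchRun(tasks, predictedFileChanges, independentTasks int) bool {
	return tasks >= 5 || predictedFileChanges >= 10 || independentTasks >= 3
}

func main() {
	fmt.Println(shouldBatchRun(6, 0, 0)) // crosses the task-count threshold
	fmt.Println(shouldBatchRun(2, 3, 1)) // below every threshold
}
```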

AI Agent Orchestration

MoAI is a strategic orchestrator. Rather than writing code directly, it delegates tasks to 24 specialized agents.

graph LR
    U["👤 User Request"] --> M["🗿 MoAI Orchestrator"]
    M --> MG["📋 Manager (8)"]
    M --> EX["⚡ Expert (8)"]
    M --> BL["🔧 Builder (3)"]
    M --> EV["🔍 Evaluator (2)"]
    M --> AG["🎨 Design System (4+1)"]

    MG --> MG1["spec · ddd · tdd · docs<br/>quality · project · strategy · git"]
    EX --> EX1["backend · frontend · security · devops<br/>performance · debug · testing · refactoring"]
    BL --> BL1["agent · skill · plugin"]
    EV --> EV1["evaluator-active · plan-auditor"]
    AG --> AG1["planner · copywriter · designer<br/>builder · evaluator · learner"]

style M fill:#FF6B35,color:#fff
style MG fill:#4CAF50,color:#fff
style EX fill:#2196F3,color:#fff
style BL fill:#9C27B0,color:#fff
style EV fill:#FF5722,color:#fff
style AG fill:#FF9800,color:#fff

Agent Categories

| Category | Count | Agents | Role |
|---|---|---|---|
| Manager | 8 | spec, ddd, tdd, docs, quality, project, strategy, git | Workflow coordination, SPEC creation, quality management |
| Expert | 8 | backend, frontend, security, devops, performance, debug, testing, refactoring | Domain-specific implementation, analysis, optimization |
| Builder | 3 | agent, skill, plugin | Creating new MoAI components |
| Evaluator | 2 | evaluator-active, plan-auditor | Independent quality assessment, plan-phase document audit |
| Design System | 4 (+ evaluator) | moai-domain-copywriting, moai-domain-brand-design, moai-workflow-design-import, moai-workflow-gan-loop | Hybrid creative + code production |

Total: 27 agents

Note: Dynamic team teammates (researcher, analyst, architect, implementer, tester, designer, reviewer) are spawned at runtime via role profiles, not as static agent definitions.

47 Skills (Progressive Disclosure)

Managed through a 3-level progressive disclosure system for token efficiency:

| Category | Count | Examples |
|---|---|---|
| Foundation | 6 | core, cc, philosopher, quality, context, thinking |
| Workflow | 12 | spec, project, ddd, tdd, testing, worktree, loop, research, jit-docs... |
| Domain | 4 | backend, frontend, database, uiux |
| Format | 1 | data-formats |
| Platform | 4 | auth, chrome-extension, database-cloud, deployment |
| Library | 3 | shadcn, nextra, mermaid |
| Reference | 5 | api-patterns, git-workflow, owasp, react-patterns, testing-pyramid |
| Tool | 2 | ast-grep, svg |
| Design | 2 | design-tools, design-craft |
| Framework | 1 | electron |
| Design System | 4 | moai-domain-copywriting, moai-domain-brand-design, moai-workflow-design-import, moai-workflow-gan-loop |
| Docs | 1 | docs-generation |
| Language Rules | 16 | Go, Python, TypeScript, Rust, Java... (path-based rules, not skills) |

Model Policy (Token Optimization)

MoAI-ADK assigns optimal AI models to each of 24 agents based on your Claude Code subscription plan. This maximizes quality within your plan's rate limits.

| Policy | Plan | 🟣 Opus | 🔵 Sonnet | 🟡 Haiku | Best For |
|---|---|---|---|---|---|
| High | Max $200/mo | 16 | 5 | 3 | Maximum quality, highest throughput |
| Medium | Max $100/mo | 3 | 17 | 4 | Balanced quality and cost |
| Low | Plus $20/mo | 0 | 13 | 11 | Budget-friendly, no Opus access |

Why does this matter? The Plus $20 plan does not include Opus access. Setting Low ensures all agents use only Sonnet and Haiku, preventing rate limit errors. Higher plans benefit from Opus on critical agents (security, strategy, architecture) while using Sonnet/Haiku for routine tasks.

Agent Model Assignment by Tier

Manager Agents

| Agent | High | Medium | Low |
|---|---|---|---|
| manager-spec | 🟣 opus | 🟣 opus | 🔵 sonnet |
| manager-strategy | 🟣 opus | 🟣 opus | 🔵 sonnet |
| manager-ddd | 🟣 opus | 🔵 sonnet | 🔵 sonnet |
| manager-tdd | 🟣 opus | 🔵 sonnet | 🔵 sonnet |
| manager-project | 🟣 opus | 🔵 sonnet | 🟡 haiku |
| manager-docs | 🔵 sonnet | 🟡 haiku | 🟡 haiku |
| manager-quality | 🟡 haiku | 🟡 haiku | 🟡 haiku |
| manager-git | 🟡 haiku | 🟡 haiku | 🟡 haiku |

Expert Agents

| Agent | High | Medium | Low |
|---|---|---|---|
| expert-backend | 🟣 opus | 🔵 sonnet | 🔵 sonnet |
| expert-frontend | 🟣 opus | 🔵 sonnet | 🔵 sonnet |
| expert-security | 🟣 opus | 🟣 opus | 🔵 sonnet |
| expert-debug | 🟣 opus | 🔵 sonnet | 🔵 sonnet |
| expert-refactoring | 🟣 opus | 🔵 sonnet | 🔵 sonnet |
| expert-devops | 🟣 opus | 🔵 sonnet | 🟡 haiku |
| expert-performance | 🟣 opus | 🔵 sonnet | 🟡 haiku |
| expert-testing | 🟣 opus | 🔵 sonnet | 🟡 haiku |

Builder Agents

| Agent | High | Medium | Low |
|---|---|---|---|
| builder-agent | 🟣 opus | 🔵 sonnet | 🟡 haiku |
| builder-skill | 🟣 opus | 🔵 sonnet | 🟡 haiku |
| builder-plugin | 🟣 opus | 🔵 sonnet | 🟡 haiku |

Team Agents

| Agent | High | Medium | Low |
|---|---|---|---|
| team-reader | 🔵 sonnet | 🔵 sonnet | 🔵 sonnet |
| team-coder | 🔵 sonnet | 🔵 sonnet | 🔵 sonnet |
| team-tester | 🔵 sonnet | 🔵 sonnet | 🔵 sonnet |
| team-designer | 🔵 sonnet | 🔵 sonnet | 🔵 sonnet |
| team-validator | 🟡 haiku | 🟡 haiku | 🟡 haiku |

Configuration

# During project initialization
moai init my-project          # Interactive wizard includes model policy selection

Reconfigure existing project

moai update # Interactive prompts for each configuration step

During moai update, you'll be asked:

  • Reset model policy? (y/n) - Re-run model policy configuration wizard
  • Update GLM settings? (y/n) - Configure GLM environment variables in settings.local.json

Default policy is High. GLM settings are isolated in settings.local.json (not committed to Git).


Dual Execution Modes

MoAI-ADK provides both Sub-Agent and Agent Teams execution modes supported by Claude Code.

graph TD
    A["🗿 MoAI Orchestrator"] --> B{"Select Execution Mode"}
    B -->|"--solo"| C["Sub-Agent Mode"]
    B -->|"--team"| D["Agent Teams Mode"]
    B -->|"Default (Auto)"| E["Auto Selection"]
    C --> F["Sequential Expert Delegation<br/>Task() → Expert Agent"]
    D --> G["Parallel Team Collaboration<br/>TeamCreate → SendMessage"]
    E -->|"High Complexity"| D
    E -->|"Low Complexity"| C

style C fill:#2196F3,color:#fff
style D fill:#FF9800,color:#fff
style E fill:#4CAF50,color:#fff

Agent Teams Mode (Default)

MoAI-ADK automatically analyzes project complexity and selects the optimal execution mode:

| Condition | Selected Mode | Reason |
|---|---|---|
| 3+ domains | Agent Teams | Multi-domain coordination |
| 10+ affected files | Agent Teams | Large-scale changes |
| Complexity score 7+ | Agent Teams | High complexity |
| Otherwise | Sub-Agent | Simple, predictable workflow |
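These selection conditions reduce to a small function. The following is an illustrative sketch of the documented thresholds, not the engine's real code:

```go
package main

import "fmt"

// selectMode applies the documented auto-selection thresholds:
// any single condition pushes the workflow into Agent Teams mode.
func selectMode(domains, affectedFiles, complexityScore int) string {
	if domains >= 3 || affectedFiles >= 10 || complexityScore >= 7 {
		return "agent-teams"
	}
	return "sub-agent"
}

func main() {
	fmt.Println(selectMode(3, 2, 1)) // multi-domain work → agent-teams
	fmt.Println(selectMode(1, 4, 2)) // simple change → sub-agent
}
```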

Agent Teams Mode uses parallel team-based development:

  • Multiple agents work simultaneously, collaborating through a shared task list
  • Real-time coordination via TeamCreate, SendMessage, and TaskList
  • Best suited for large-scale feature development and multi-domain tasks
/moai plan "large feature"          # Auto: researcher + analyst + architect in parallel
/moai run SPEC-XXX                  # Auto: backend-dev + frontend-dev + tester in parallel
/moai run SPEC-XXX --team           # Force Agent Teams mode

Quality Hooks for Agent Teams:

  • TeammateIdle Hook: Validates LSP quality gates before teammate goes idle (errors, type errors, lint errors)
  • TaskCompleted Hook: Verifies SPEC document exists when task references SPEC-XXX patterns
  • All validation uses graceful degradation: warnings are logged, but work continues

Sub-Agent Mode (--solo)

A sequential agent delegation approach using Claude Code's Task() API.

  • Delegates a task to a single specialized agent and receives the result
  • Progresses step by step: Manager → Expert → Quality
  • Best suited for simple and predictable workflows
/moai run SPEC-AUTH-001 --solo      # Force Sub-Agent mode

MoAI Workflow

Plan → Run → Sync Pipeline

MoAI's core workflow consists of three phases:

graph TB
    subgraph Plan ["📋 Plan Phase"]
        P1["Explore Codebase"] --> P2["Analyze Requirements"]
        P2 --> P3["Generate SPEC Document (EARS Format)"]
    end
    subgraph Run ["🔨 Run Phase"]
        R1["Analyze SPEC & Create Execution Plan"] --> R2["DDD/TDD Implementation"]
        R2 --> R3["TRUST 5 Quality Validation"]
    end

    subgraph Sync ["📄 Sync Phase"]
        S1["Generate Documentation"] --> S2["Update README/CHANGELOG"]
        S2 --> S3["Create Pull Request"]
    end

    Plan --> Run
    Run --> Sync

style Plan fill:#E3F2FD,stroke:#1565C0
style Run fill:#E8F5E9,stroke:#2E7D32
style Sync fill:#FFF3E0,stroke:#E65100

Execution Mode Selection Gate

When transitioning from Plan to Run phase, MoAI automatically detects the current execution environment (cc/glm/cg) and presents a selection UI for the user to confirm or change the mode before implementation begins.

graph LR
    A["Plan Complete"] --> B["Detect Environment"]
    B --> C{"Mode Selection UI"}
    C -->|"CC"| D["Claude-only Execution"]
    C -->|"GLM"| E["GLM-only Execution"]
    C -->|"CG"| F["Claude Leader + GLM Workers"]

This gate ensures the correct execution mode is used regardless of the environment state, preventing mode mismatches during implementation.

/moai Subcommands

All subcommands are invoked within Claude Code as /moai <subcommand>.

Core Workflow

| Subcommand | Aliases | Purpose | Key Flags |
|---|---|---|---|
| plan | spec | Create SPEC document (EARS format) | --worktree, --branch, --resume SPEC-XXX, --team |
| run | impl | DDD/TDD implementation of a SPEC | --resume SPEC-XXX, --team |
| sync | docs, pr | Sync documentation, codemaps, and create PR | --merge, --skip-mx |

Quality & Testing

| Subcommand | Aliases | Purpose | Key Flags |
|---|---|---|---|
| fix | | Auto-fix LSP errors, linting, type errors (single pass) | --dry, --seq, --level N, --resume, --team |
| loop | | Iterative auto-fix until completion (max 100 iterations) | --max N, --auto-fix, --seq |
| review | code-review | Code review with security and @MX tag compliance check | --staged, --branch, --security |
| coverage | test-coverage | Test coverage analysis and gap filling (16 languages) | --target N, --file PATH, --report |
| e2e | | E2E testing (Claude-in-Chrome, Playwright CLI, or Agent Browser) | --record, --url URL, --journey NAME |
| clean | refactor-clean | Dead code identification and safe removal | --dry, --safe-only, --file PATH |

Documentation & Codebase

| Subcommand | Aliases | Purpose | Key Flags |
|---|---|---|---|
| project | init | Generate project docs (product.md, structure.md, tech.md, .moai/project/codemaps/) | |
| mx | | Scan codebase and add @MX code-level annotations | --all, --dry, --priority P1-P4, --force, --team |
| codemaps | update-codemaps | Generate architecture docs in .moai/project/codemaps/ | --force, --area AREA |
| feedback | fb, bug, issue | Collect user feedback and create GitHub issues | |

Default Workflow

| Subcommand | Purpose | Key Flags |
|---|---|---|
| (none) | Full autonomous plan → run → sync pipeline. Auto-generates SPEC when complexity score >= 5. | --loop, --max N, --branch, --pr, --resume SPEC-XXX, --team, --solo |

Execution Mode Flags

Control how agents are dispatched during workflow execution:

| Flag | Mode | Description |
|---|---|---|
| --team | Agent Teams | Parallel team-based execution. Multiple agents work simultaneously. |
| --solo | Sub-Agent | Sequential single-agent delegation per phase. |
| (default) | Auto | System auto-selects based on complexity (domains >= 3, files >= 10, or score >= 7). |

--team supports three execution environments:

| Environment | Command | Leader | Workers | Best For |
|---|---|---|---|---|
| Claude-only | moai cc | Claude | Claude | Maximum quality |
| GLM-only | moai glm | GLM | GLM | Maximum cost savings |
| CG (Claude+GLM) | moai cg | Claude | GLM | Quality + cost balance |

New in v2.7.1: CG mode is now the default team mode. When using --team, the system runs in CG mode unless explicitly changed with moai cc or moai glm.

Note: moai cg uses tmux pane-level env isolation to separate Claude leader from GLM workers. If switching from moai glm, moai cg automatically resets GLM settings first — no need to run moai cc in between.

Autonomous Development Loop (Ralph Engine)

An autonomous error-fixing engine that combines LSP diagnostics with AST-grep:

/moai fix       # Single pass: scan → classify → fix → verify
/moai loop      # Iterative fix: repeats until completion marker detected (max 100 iterations)

How the Ralph Engine works:

  1. Parallel Scan: Runs LSP diagnostics + AST-grep + linters simultaneously
  2. Auto-Classification: Classifies errors from Level 1 (auto-fix) to Level 4 (user intervention)
  3. Convergence Detection: Applies alternative strategies when the same error repeats
  4. Completion Criteria: 0 errors, 0 type errors, 85%+ coverage
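The scan → fix → converge cycle can be sketched in a few lines of Go. This is a simplified illustration of the loop shape described above — the `Diagnostic` type and function signatures are assumptions, not MoAI-ADK's internal API:

```go
package main

import "fmt"

// Diagnostic is a simplified error record; the real Ralph Engine's
// types are internal to MoAI-ADK, so this shape is an assumption.
type Diagnostic struct {
	File    string
	Message string
	Level   int // 1 = auto-fixable … 4 = needs user intervention
}

// runLoop sketches the iterative cycle: scan, auto-fix Level 1 findings,
// and stop on zero errors, on the iteration cap, or when the same error
// set repeats (no progress → switch strategy or escalate to the user).
func runLoop(scan func() []Diagnostic, fix func(Diagnostic) bool, maxIter int) (fixed int, clean bool) {
	prev := ""
	for i := 0; i < maxIter; i++ {
		diags := scan()
		if len(diags) == 0 {
			return fixed, true // completion criterion: 0 errors
		}
		sig := fmt.Sprint(diags)
		if sig == prev {
			return fixed, false // convergence failure: identical error set
		}
		prev = sig
		for _, d := range diags {
			if d.Level == 1 && fix(d) {
				fixed++
			}
		}
	}
	return fixed, false
}

func main() {
	pending := []Diagnostic{{"a.go", "unused import", 1}, {"b.go", "type mismatch", 1}}
	scan := func() []Diagnostic { return pending }
	fix := func(Diagnostic) bool { pending = pending[1:]; return true }
	fixed, clean := runLoop(scan, fix, 100)
	fmt.Println(fixed, clean)
}
```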

Recommended Workflow Chains

New Feature Development:

/moai plan → /moai run SPEC-XXX → /moai review → /moai coverage → /moai sync SPEC-XXX

Bug Fix:

/moai fix (or /moai loop) → /moai review → /moai sync

Refactoring:

/moai plan → /moai clean → /moai run SPEC-XXX → /moai review → /moai coverage → /moai codemaps

Documentation Update:

/moai codemaps → /moai sync

TRUST 5 Quality Framework

Every code change is validated against five quality criteria:

| Criterion | Validation |
|---|---|
| Tested | 85%+ coverage, characterization tests, unit tests passing |
| Readable | Clear naming conventions, consistent code style, 0 lint errors |
| Unified | Consistent formatting, import ordering, project structure adherence |
| Secured | OWASP compliance, input validation, 0 security warnings |
| Trackable | Conventional commits, issue references, structured logging |

Task Metrics Logging

MoAI-ADK automatically captures Task tool metrics during development sessions:

  • Location: .moai/logs/task-metrics.jsonl
  • Captured Metrics: Token usage, tool calls, duration, agent type
  • Purpose: Session analytics, performance optimization, cost tracking

Metrics are logged by the PostToolUse hook when Task tool completes. Use this data to analyze agent efficiency and optimize token consumption.

Hook Protocol (v2.10.1)

All hook events follow the Claude Code hooks protocol with JSON stdin/stdout communication:

  • 27 event types: SessionStart, PreToolUse, PostToolUse, SessionEnd, Stop, SubagentStop, PreCompact, PostCompact, PostToolUseFailure, Notification, SubagentStart, UserPromptSubmit, PermissionRequest, PermissionDenied, TeammateIdle, TaskCompleted, TaskCreated, WorktreeCreate, WorktreeRemove, InstructionsLoaded, StopFailure, ConfigChange, CwdChanged, FileChanged, Elicitation, ElicitationResult, Setup
  • 4 hook types: command (shell scripts), prompt (LLM evaluation), agent (subagent verification), http (webhook endpoints)
  • Smart behaviors: PermissionDenied auto-retry for read-only tools, StopFailure error-type responses, PostCompact session memo restoration, SubagentStart context injection
  • Matchers: Event-specific filtering (tool name, session source, error type, config source)
  • CLAUDE_ENV_FILE: Environment variable persistence via CwdChanged/FileChanged hooks

CLI Commands

| Command | Description |
|---|---|
| moai init | Interactive project setup (auto-detects language/framework/methodology) |
| moai doctor | System health diagnosis and environment verification |
| moai status | Project status summary including Git branch, quality metrics, etc. |
| moai update | Update to the latest version (with automatic rollback support) |
| moai update --check | Check for updates without installing |
| moai update --project | Sync project templates only |
| moai worktree new <name> | Create a new Git worktree (parallel branch development) |
| moai worktree list | List active worktrees |
| moai worktree switch <name> | Switch to a worktree |
| moai worktree sync | Sync with upstream |
| moai worktree remove <name> | Remove a worktree |
| moai worktree clean | Clean up stale worktrees |
| moai worktree go <name> | Navigate to worktree directory in current shell |
| moai hook <event> | Claude Code hook dispatcher |
| moai glm | Start Claude Code with GLM 5 API (cost-effective alternative) |
| moai cc | Start Claude Code without GLM settings (Claude-only mode) |
| moai cg | Launch CG mode — Claude leader + GLM teammates (auto-starts Claude Code, tmux required) |
| moai version | Display version, commit hash, and build date |

Claude x GLM Multi-LLM

MoAI-ADK supports z.ai GLM as an alternative AI backend for Claude Code, enabling multi-LLM development workflows.

| Item | Details |
|---|---|
| GLM Coding Plan | From $10/month (z.ai) |
| Compatibility | Works with Claude Code — no code changes needed |
| Models | GLM-5.1, GLM-4.7, GLM-4.5-Air, and free models |

Default Model Mapping:

| Claude Tier | GLM Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|---|
| Opus | GLM-5.1 | $2.00 | $8.00 |
| Sonnet | GLM-4.7 | $0.60 | $2.20 |
| Haiku | GLM-4.5-Air | $0.20 | $1.10 |

Free models also available: GLM-4.7-Flash, GLM-4.5-Flash. See z.ai Pricing for full details.

Sign up for GLM Coding Plan

CG Mode (Claude + GLM Hybrid)

CG Mode is a hybrid mode where the Leader uses Claude API while Workers use GLM API. It's implemented via tmux session-level environment variable isolation.

How It Works

moai cg execution
    │
    ├── 1. Inject GLM config into tmux session env
    │      (ANTHROPIC_AUTH_TOKEN, BASE_URL, MODEL_* vars)
    │
    ├── 2. Remove GLM env from settings.local.json
    │      → Leader pane uses Claude API
    │
    ├── 3. Set CLAUDE_CODE_TEAMMATE_DISPLAY=tmux
    │      → Workers inherit GLM env in new panes
    │
    └── 4. Launch Claude Code (replaces current process)

┌─────────────────────────────────────────────────────────────┐
│ LEADER (current tmux pane, Claude API)                      │
│ - Orchestrates workflow when /moai --team runs              │
│ - Handles plan, quality, sync phases                        │
│ - No GLM env → uses Claude API                              │
└──────────────────────┬──────────────────────────────────────┘
                       │ Agent Teams (new tmux panes)
                       ▼
┌─────────────────────────────────────────────────────────────┐
│ TEAMMATES (new tmux panes, GLM API)                         │
│ - Inherit tmux session env → use GLM API                    │
│ - Execute implementation tasks in run phase                 │
│ - Communicate with leader via SendMessage                   │
└─────────────────────────────────────────────────────────────┘

Usage

# 1. Save GLM API key (once)
moai glm sk-your-glm-api-key

# 2. Verify tmux environment (skip if already in tmux);
#    if you need a new tmux session:
tmux new -s moai

# TIP: Set VS Code's default terminal to tmux to skip this step entirely.

# 3. Launch CG mode (automatically starts Claude Code)
moai cg

# 4. Run team workflow
/moai --team "your task description"

Important Notes

| Item | Description |
|---|---|
| tmux Environment | If already using tmux, no need to create a new session. Set VS Code's default terminal to tmux for convenience. |
| Auto Launch | moai cg automatically launches Claude Code in the current pane. No need to run claude separately. |
| Session End | session_end hook automatically clears tmux session env → next session uses Claude |
| Agent Teams Communication | SendMessage tool enables Leader↔Workers communication |

Mode Comparison

| Command | Leader | Workers | tmux Required | Cost Savings | Use Case |
|---|---|---|---|---|---|
| moai cc | Claude | Claude | No | - | Complex work, maximum quality |
| moai glm | GLM | GLM | Recommended | ~70% | Cost optimization |
| moai cg | Claude | GLM | Required | ~60% | Quality + cost balance |

Display Modes

Agent Teams supports two display modes:

| Mode | Description | Communication | Leader/Worker Separation |
|---|---|---|---|
| in-process | Default mode, all terminals | ✅ SendMessage | ❌ Same env |
| tmux | Split-pane display | ✅ SendMessage | ✅ Session env isolation |

CG Mode only supports Leader/Worker API separation in tmux display mode.


@MX Tag System

MoAI-ADK uses the @MX code-level annotation system to communicate context, invariants, and danger zones between AI agents.

What are @MX Tags?

@MX tags are inline code annotations that help AI agents understand your codebase faster and more accurately.

// @MX:ANCHOR: [AUTO] Hook registry dispatch - 5+ callers
// @MX:REASON: [AUTO] Central entry point for all hook events, changes have wide impact
func DispatchHook(event string, data []byte) error {
    // ...
}

// @MX:WARN: [AUTO] Goroutine executes without context.Context
// @MX:REASON: [AUTO] Cannot cancel goroutine, potential resource leak
func processAsync() {
    go func() {
        // ...
    }()
}

Tag Types

| Tag Type | Purpose | Description |
|---|---|---|
| @MX:ANCHOR | Important contracts | Functions with fan_in >= 3, changes have wide impact |
| @MX:WARN | Danger zones | Goroutines, complexity >= 15, global state mutation |
| @MX:NOTE | Context | Magic constants, missing godoc, business rules |
| @MX:TODO | Incomplete work | Missing tests, unimplemented features |

Why doesn't all code have @MX tags?

The @MX tag system is NOT designed to add tags to all code. The core principle is to "mark only the most dangerous/important code that AI needs to notice first."

| Priority | Condition | Tag Type |
|---|---|---|
| P1 (Critical) | fan_in >= 3 | @MX:ANCHOR |
| P2 (Danger) | goroutine, complexity >= 15 | @MX:WARN |
| P3 (Context) | magic constant, no godoc | @MX:NOTE |
| P4 (Missing) | no test file | @MX:TODO |

Most code doesn't meet any criteria, so it has no tags. This is normal.

Example: Tag Decision

// ❌ No tag (fan_in = 1, low complexity)
func calculateTotal(items []Item) int {
    total := 0
    for _, item := range items {
        total += item.Price
    }
    return total
}

// ✅ @MX:ANCHOR added (fan_in = 5)
// @MX:ANCHOR: [AUTO] Config manager load - 5+ callers
// @MX:REASON: [AUTO] Entry point for all CLI commands
func LoadConfig() (*Config, error) {
    // ...
}
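The decision itself is just the priority table applied in order. A sketch of that rule — the function and its inputs are illustrative assumptions, not the scanner's real API (P3 is approximated here by a missing-godoc flag):

```go
package main

import "fmt"

// chooseTag applies the P1→P4 priority rules in order: the first
// threshold a piece of code crosses determines its single tag.
func chooseTag(fanIn, complexity int, hasGoroutine, hasGodoc, hasTest bool) string {
	switch {
	case fanIn >= 3:
		return "@MX:ANCHOR" // P1: widely-called contract
	case hasGoroutine || complexity >= 15:
		return "@MX:WARN" // P2: danger zone
	case !hasGodoc:
		return "@MX:NOTE" // P3: missing context
	case !hasTest:
		return "@MX:TODO" // P4: missing tests
	}
	return "" // most code crosses no threshold: no tag, by design
}

func main() {
	fmt.Println(chooseTag(5, 3, false, true, true)) // heavily-called → ANCHOR
	fmt.Println(chooseTag(1, 2, false, true, true)) // plain helper → no tag
}
```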

Configuration (.moai/config/sections/mx.yaml)

thresholds:
  fan_in_anchor: 3        # < 3 callers = no ANCHOR
  complexity_warn: 15     # < 15 complexity = no WARN
  branch_warn: 8          # < 8 branches = no WARN

limits:
  anchor_per_file: 3      # Max 3 ANCHOR tags per file
  warn_per_file: 5        # Max 5 WARN tags per file

exclude:
  - "**/*_generated.go"   # Exclude generated files
  - "/vendor/"            # Exclude external libraries
  - "**/mock_*.go"        # Exclude mock files

Running MX Tag Scan

# Scan entire codebase (Go projects)
/moai mx --all

# Preview only (no file modifications)
/moai mx --dry

# Scan by priority (P1 only)
/moai mx --priority P1

# Scan specific languages only
/moai mx --all --lang go,python

Why Other Projects Also Have Few MX Tags

| Situation | Reason |
|---|---|
| New projects | Most functions have fan_in = 0 → no tags (normal) |
| Small projects | Few functions = simple call graph = fewer tags |
| High-quality code | Low complexity, no goroutines → no WARN tags |
| High thresholds | fan_in_anchor: 5 = even fewer tags |

Core Principle

The @MX tag system optimizes "Signal-to-Noise Ratio":

  • Mark only truly important code → AI quickly identifies core areas
  • Tag all code → Increases noise, makes important tags harder to find

Design System: Hybrid Web & App Production (v3.2, SPEC-AGENCY-ABSORB-001)

Just describe what you want. The Design System interviews you, designs, builds, tests, and learns — autonomously.

MoAI-ADK includes an integrated Design System — a specialized harness for autonomous website and web application production. Just as /moai "description" runs the full development workflow, /moai design "description" runs the full creative production pipeline from brief to deployed code.

Why Design? — /moai vs /moai design

flowchart TB
    subgraph MOAI["/moai — General Software Development"]
        direction LR
        M1["📋 Plan<br>(SPEC)"] --> M2["⚙️ Run<br>(DDD/TDD)"] --> M3["📦 Sync<br>(Docs + PR)"]
    end
    subgraph DESIGN["/moai design — Creative Web Production"]
        direction LR
        D1["📋 Manager-Spec<br>(BRIEF)"] --> D2["✍️ Copywriting"]
        D1 --> D3["🎨 Brand Design"]
        D2 --> D4["🔨 Builder"]
        D3 --> D4
        D4 --> D5["🔍 Evaluator"]
        D5 -->|"FAIL"| D4
        D5 -->|"PASS"| D6["🧠 Learner"]
    end

style MOAI fill:#e8f5e9,stroke:#4caf50
style DESIGN fill:#fff3e0,stroke:#ff9800

| Aspect | /moai | /moai design |
|---|---|---|
| Purpose | Any software (backend, CLI, library, API) | Websites, landing pages, web apps |
| Input | Feature description → SPEC | Business goal → BRIEF |
| Unique Phase | DDD/TDD implementation cycle | Copywriting + Design System → Code |
| Quality | Single manager-quality pass | GAN Loop (Builder↔Evaluator, max 5 rounds) |
| Self-Learning | None | Learner detects patterns → proposes skill evolution |
| Brand | None | Brand context as constitutional constraint |
| Implementation | 20 agents (manager/expert/builder) | 4 skills (copywriting, brand-design, design-import, gan-loop) + evaluator-active |

When to use which:

  • Building a REST API, CLI tool, or library? → /moai
  • Building a marketing website, SaaS landing page, or web app with design? → /moai design
  • Need copy, design tokens, and code as separate artifacts? → /moai design

Quick Start: One Command, Full Pipeline

/moai design "SaaS landing page for my AI developer tools startup"

This single command triggers the entire autonomous workflow:

  1. Client Interview — Manager-spec asks 9 structured questions about your business, brand, and tech preferences (skipped if already configured)
  2. BRIEF Generation — Manager-spec expands your request into a comprehensive project brief
  3. Copy + Design — moai-domain-copywriting produces brand-aligned marketing copy; moai-domain-brand-design creates a full design system with tokens (Path B). Alternative Path A: moai-workflow-design-import parses Claude Design handoff bundles.
  4. Code Implementation — expert-frontend implements production code using TDD (Next.js + Tailwind by default)
  5. Quality Assurance — evaluator-active runs Playwright tests, Lighthouse audits, and 4-dimension scoring with Sprint Contract protocol
  6. GAN Loop — If quality fails, expert-frontend and evaluator-active iterate via moai-workflow-gan-loop (up to 5 rounds) until threshold is met
  7. Self-Learning — (Optional) Learner detects patterns from the session and proposes skill improvements

Typical duration: 15-45 minutes for a complete landing page, fully autonomous.

Pipeline Architecture

flowchart LR
    REQ["🎯 /moai design 'request'"] --> INT["📋 Client Interview"]
    INT --> P["📝 Manager-Spec (BRIEF)"]
    P --> C["✍️ Copywriting"]
    P --> D["🎨 Brand Design"]
    C --> B["🔨 Builder (TDD)"]
    D --> B
    B --> E["🔍 Evaluator"]
    E -->|"FAIL (max 5 rounds)"| B
    E -->|"PASS (score ≥ 0.75)"| L["🧠 Learner (optional)"]

What Each Skill Does

| Skill | Purpose |
|---|---|
| manager-spec | Conducts client interview, generates structured BRIEF document |
| moai-domain-copywriting | Writes marketing copy as structured JSON — headlines, body, CTAs — following brand voice rules |
| moai-domain-brand-design | Creates complete design system — color tokens, typography scale, spacing, component specs (Path B) |
| moai-workflow-design-import | Parses Claude Design handoff bundles (ZIP/HTML) for design tokens and components (Path A) |
| expert-frontend | Implements production code with TDD (RED-GREEN-REFACTOR). Default stack: Next.js, TypeScript, Tailwind, shadcn/ui |
| evaluator-active | Runs Playwright visual tests + Lighthouse audits. Scores 4 dimensions with Sprint Contract protocol and must-pass criteria validation |
| moai-workflow-gan-loop | Manages GAN Loop iteration: Builder-Evaluator negotiates Sprint Contract, implements, scores, escalates on stagnation |

The GAN Loop: Adversarial Quality Assurance

The Evaluator is skeptical by default — tuned to find defects, not rationalize acceptance.

sequenceDiagram
    participant B as 🔨 Builder
    participant E as 🔍 Evaluator
    participant U as 👤 User
    B->>E: Submit code (iteration 1)
    E->>E: Score 4 dimensions
    E-->>B: ❌ FAIL (0.58) — feedback with file:line refs

    B->>E: Revised code (iteration 2)
    E->>E: Score 4 dimensions
    E-->>B: ❌ FAIL (0.67) — mobile viewport + copy mismatch

    B->>E: Revised code (iteration 3)
    E->>E: Score 4 dimensions
    Note over E: Stagnation detected (improvement < 0.05)
    E-->>U: ⚠️ Escalation — 3 rounds without pass

    alt User adjusts criteria
        U-->>E: Lower threshold to 0.65
        E-->>B: ✅ PASS (0.67)
    else User provides guidance
        U-->>B: Fix specific layout issue
        B->>E: Revised code (iteration 4)
        E-->>B: ✅ PASS (0.78)
    end

Scoring dimensions (must-pass threshold: 0.75):

| Dimension | Weight | What It Measures | Auto-FAIL Triggers |
|---|---|---|---|
| Design Quality | 30% | Visual polish, spacing, typography, color harmony | AI cliches (purple gradients + white cards + generic icons) |
| Originality | 25% | Unique brand expression, non-template feel | Copy differs from Copywriter output |
| Completeness | 25% | All sections, responsive, interactive elements | Mobile viewport broken, any 404 link |
| Functionality | 20% | Working links, forms, animations, Lighthouse score | Lighthouse Accessibility < 80 |

Iteration flow: the Evaluator provides specific feedback with file:line references → the Builder fixes → re-evaluation. After 3 failed iterations, the loop escalates to the user with three options: adjust criteria, provide guidance, or force-pass.
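
The weighted score described above can be sketched in a few lines. This is a simplified, hypothetical Python illustration of the arithmetic only (weights and the 0.75 threshold come from the table; the auto-FAIL handling is reduced to a flag); the real evaluator-active logic is internal to MoAI-ADK.

```python
# Illustrative 4-dimension weighted scoring per the table above.
# An auto-FAIL trigger vetoes a pass regardless of the total score.

WEIGHTS = {"design": 0.30, "originality": 0.25,
           "completeness": 0.25, "functionality": 0.20}

def evaluate(scores: dict[str, float], auto_fail: bool = False,
             threshold: float = 0.75) -> tuple[float, bool]:
    total = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    return round(total, 2), (not auto_fail and total >= threshold)

score, passed = evaluate({"design": 0.8, "originality": 0.7,
                          "completeness": 0.75, "functionality": 0.8})
print(score, passed)  # 0.76 True
```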

Brand Context: Your Creative Constitution

On first run, the Design System conducts a structured client interview (9 questions across 4 phases):

| Phase | Questions | Populates |
|---|---|---|
| Business Context | Objective, target customer, success KPIs | .moai/project/brand/target-audience.md |
| Brand Identity | Voice adjectives, reference sites, design preferences | .moai/project/brand/brand-voice.md, visual-identity.md |
| Technical Scope | Pages needed, tech requirements | .moai/project/tech.md |
| Quality Expectations | Priority factors | .moai/config/sections/design.yaml |

Brand context flows through every skill as an immutable constraint. The evaluator-active scores brand consistency as a must-pass criterion. After 5+ projects, the interview adapts to ask only 3 key questions.

Self-Evolution with Safety

Every skill has Static + Dynamic zones:

  • Static Zone: Core principles (never auto-modified)
  • Dynamic Zone: Rules, heuristics, anti-patterns (evolved via Learner)
flowchart LR
    subgraph Observation["📊 Pattern Detection"]
        O1["1x seen"] -->|"Logged"| O2["3x seen"]
        O2 -->|"Promoted"| O3["5x seen"]
    end
    subgraph Graduation["🎓 Knowledge Graduation"]
        O3 -->|"confidence ≥ 0.80"| G1["Canary Check"]
        G1 -->|"No score drop"| G2["Contradiction Check"]
        G2 -->|"No conflicts"| G3["👤 Human Review"]
        G3 -->|"Approved"| G4["✅ Graduated"]
    end

    subgraph Safety["🛡️ Safety Gates"]
        G4 --> S1["Verify in next project"]
        S1 -->|"Score drops > 0.10"| S2["🔄 Auto-Rollback"]
    end

style Observation fill:#e3f2fd,stroke:#1976d2
style Graduation fill:#f3e5f5,stroke:#7b1fa2
style Safety fill:#fce4ec,stroke:#c62828

Knowledge Graduation lifecycle: observation (1x) → heuristic (3x) → rule (5x, confidence ≥ 0.80) → graduated (applied with user approval)

5-Layer Safety Architecture:

  1. Frozen Guard — Blocks modification of identity, safety rails, and ethical boundaries
  2. Canary Check — Shadow-evaluates last 3 projects; rejects if any score drops > 0.10
  3. Contradiction Detector — Flags rules that conflict with existing ones
  4. Rate Limiter — Max 3 evolutions/week, 24h cooldown, max 50 active learnings
  5. Human Oversight — Presents before/after diff with evidence; requires user approval

Anti-Pattern Protection: A single critical failure (score drop > 0.20) triggers immediate Anti-Pattern classification — the pattern is FROZEN and can never be evolved away. Only human intervention can reclassify.
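
The Knowledge Graduation lifecycle above can be sketched as a small state function. This is a hypothetical Python illustration of the staging rules only (1x → observation, 3x → heuristic, 5x with confidence ≥ 0.80 → rule); the function name is invented, and the real Learner, with its canary, contradiction, and human-review gates, is internal to MoAI-ADK.

```python
# Illustrative staging of a detected pattern per the lifecycle above.
# A "rule" is only *eligible* for graduation; the safety gates and
# human approval still apply before it takes effect.

def lifecycle_stage(times_seen: int, confidence: float) -> str:
    if times_seen >= 5 and confidence >= 0.80:
        return "rule"          # eligible for graduation via safety gates
    if times_seen >= 3:
        return "heuristic"
    if times_seen >= 1:
        return "observation"
    return "unseen"

print(lifecycle_stage(1, 0.50))   # observation
print(lifecycle_stage(5, 0.85))   # rule
print(lifecycle_stage(5, 0.50))   # heuristic — confidence too low
```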

Commands

# Autonomous workflow (recommended)
/moai design "SaaS landing page for my AI startup"  # Full pipeline: interview → build → test → learn

# Alternative paths
/moai design brief "landing page for dev tools"   # Interview + BRIEF only (review before building)
/moai design build BRIEF-001                      # Run full pipeline from existing BRIEF
/moai design import /path/to/design.zip           # Import Claude Design handoff bundle (Path A)

# Legacy Agency commands (deprecated, redirect to /moai design)
/agency "..."          # Redirects to /moai design with deprecation warning
/agency brief "..."    # Not supported; use /moai design brief

Default Tech Stack (configurable)

| Layer | Default | Configured via |
|---|---|---|
| Framework | Next.js + App Router | .moai/project/tech.md |
| Language | TypeScript (strict) | .moai/project/tech.md |
| Styling | Tailwind CSS v4 | .moai/project/tech.md |
| Components | shadcn/ui | .moai/project/tech.md |
| Testing | Vitest + Playwright | .moai/config/sections/design.yaml |
| Hosting | Vercel | .moai/project/tech.md |

Migration from /agency

Existing projects using /agency can migrate to /moai design via:

moai migrate agency

This command safely moves .agency/ data to .moai/project/brand/ and .moai/config/sections/design.yaml. Data is preserved as .agency.archived/ for recovery if needed.

Design System Documentation


Database Workflow: /moai db

Database metadata management system for MoAI projects. Manages schema documentation, migrations, ERD diagrams, and seeds through four subcommands: init, refresh, verify, and list.

Quick Start

# Initialize database metadata (interactive interview)
/moai db init

# Rescan migrations and update schema documentation
/moai db refresh

# Check for drift between schema.md and migration files
/moai db verify

# Display all tables from schema.md
/moai db list

Subcommands

| Command | Purpose | When to Use |
|---|---|---|
| init | Interactive setup of database engine, ORM, multi-tenant strategy, and migration tool. Scaffolds .moai/project/db/ with a 7-file template set | New project initialization, before any database work |
| refresh | Scans migration files and regenerates schema.md, erd.mmd (Mermaid ERD), and migrations.md from the current migration state | After adding/modifying migrations, milestone sync |
| verify | Read-only drift detection: compares the schema.md table set against actual migration files; exits non-zero if drift is detected | Before PR submission, in CI/CD pipelines |
| list | Read-only table listing: displays all tables from schema.md in aligned Markdown table format | Quick project overview, documentation review |
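
The drift check that `verify` performs can be sketched as a set comparison. This is a deliberately naive Python illustration of the idea only (the function name and return shape are invented); the real implementation parses actual schema.md and migration files inside the Go CLI.

```python
# Minimal sketch of drift detection: tables documented in schema.md
# versus tables created by migration files. Any difference in either
# direction counts as drift.

def drift(schema_tables: set[str], migration_tables: set[str]) -> dict:
    return {
        "missing_in_schema": sorted(migration_tables - schema_tables),
        "stale_in_schema": sorted(schema_tables - migration_tables),
    }

report = drift({"users", "orders"}, {"users", "orders", "invoices"})
print(report)  # {'missing_in_schema': ['invoices'], 'stale_in_schema': []}
```

A CI step would exit non-zero whenever either list is non-empty, mirroring the documented `verify` behavior.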

Directory Structure

/moai db init creates the following structure in .moai/project/db/:

.moai/project/db/
├── README.md              # Database overview and setup instructions
├── schema.md              # Table schema documentation (auto-generated)
├── erd.mmd                # Entity-Relationship Diagram in Mermaid format
├── migrations.md          # Migration history and sequencing
├── rls-policies.md        # Row-level security policies (PostgreSQL)
├── queries.md             # Important queries and performance notes
└── seed-data.md           # Sample data and seeding instructions

Supported Database Technologies

Auto-detects and supports 6 migration file patterns:

| Migration Type | File Pattern | Example |
|---|---|---|
| Prisma | prisma/migrations/*/migration.sql | 20260401120000_add_users_table/migration.sql |
| Alembic | alembic/versions/*.py | a1b2c3d4e5f6_add_users_table.py |
| Rails | db/migrate/*.rb | 20260401120000_add_users_table.rb |
| Raw SQL | db/migrations/*.sql | 001_add_users_table.sql |
| Supabase | supabase/migrations/*.sql | 20260401120000_initial_schema.sql |
| Generic | migrations/*.sql or db/*.sql | Custom patterns supported |

Supports 16 programming language ecosystems (Go, Python, TypeScript, Java, etc.) through common package paths.

Integrations

  • PostToolUse Hook: Auto-refreshes schema.md, erd.mmd, migrations.md when migration files are edited
  • Drift Detection: Prevents schema documentation from drifting out of sync with actual migrations
  • Mermaid Diagrams: Generates ERD diagrams automatically for documentation and design reviews
  • Phase 4.1a DB Detection: /moai project automatically surfaces /moai db recommendations based on detected database technology

Configuration

Database settings are stored in .moai/config/sections/db.yaml:

db:
  enabled: true
  dir: ".moai/project/db"
  auto_sync: true
  migration_patterns:
    - "prisma/migrations/*/migration.sql"
    - "alembic/versions/*.py"
    - "db/migrate/*.rb"
  engine: ""  # Populated during init interview
  orm: ""     # Populated during init interview
  multi_tenant: false
  migration_tool: ""

Workflow Example

  1. New Project: Run /moai db init, answer 4 questions about your database setup
  2. During Development: Create migrations as usual; /moai db auto-syncs documentation
  3. Before PR: Run /moai db verify to check for schema drift
  4. Review: Reference .moai/project/db/erd.mmd in PRs for visual schema review

When to Use

  • Always on: Enable during moai init for any project with a database
  • Init: New projects, database architecture changes
  • Refresh: After significant migration work, before major commits
  • Verify: Part of CI/CD pipeline, pre-PR checks
  • List: Quick reference, documentation generation

Frequently Asked Questions

Q: Why doesn't all of my Go code have @MX tags?

A: This is normal. @MX tags are added "only where needed." Most code is simple and safe enough that tags aren't required.

| Question | Answer |
|---|---|
| Is having no tags a problem? | No. Most code doesn't need tags. |
| When are tags added? | High fan_in, complex logic, danger patterns only |
| Are all projects similar? | Yes. Most code in every project has no tags. |

See the "@MX Tag System" section above for details.


Q: How do I customize which statusline segments are displayed?

The statusline v3 features a multi-line layout with real-time API usage monitoring:

Full mode (5 lines — 40-block individual bars):

🤖 Opus 4.6 │ 🔅 v2.1.74 │ 🗿 v2.7.12 │ ⏳ 5h 32m │ 💬 MoAI
CW: 🔋 █████████████████████░░░░░░░░░░░░░░░░░░░ 52%
5H: 🔋 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 4%
7D: 🔋 ██████████████████████░░░░░░░░░░░░░░░░░░░ 56%
📁 moai-adk-go │ 🔀 main │ 📊 +0 M38 ?2

Default mode (3 lines — 10-block inline bars):

🤖 Opus 4.6 │ 🔅 v2.1.74 │ 🗿 v2.7.12 │ ⏳ 16m │ 💬 MoAI
CW: 🔋 ██░░░░░░░░ 25% │ 5H: 🔋 █░░░░░░░░░ 12% │ 7D: 🔋 ░░░░░░░░░░ 3%
📁 moai-adk-go │ 🔀 fix/my-feature │ 📊 +0 M38 ?2

Two display modes are available:

  • Full (5 lines): All segments with individual 40-block usage bars per line (model, context, usage bars, git, version, output style, directory)
  • Default (3 lines): Core segments with inline 10-block usage bars (model, context, usage bars, git status, branch, version)
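
The block-style usage bars shown above follow a simple fill ratio. This is a hypothetical Python sketch of the rendering idea (the function name is invented, and the real statusline's rounding may differ):

```python
# Render an inline usage bar like the statusline's default mode,
# e.g. usage_bar(25) -> "██░░░░░░░░ 25%" with 10 blocks.

def usage_bar(percent: int, blocks: int = 10) -> str:
    filled = round(percent / 100 * blocks)
    return "█" * filled + "░" * (blocks - filled) + f" {percent}%"

print(usage_bar(25))   # ██░░░░░░░░ 25%
print(usage_bar(100))  # ██████████ 100%
```

The full mode would simply call the same logic with `blocks=40`.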

Edit .moai/config/sections/statusline.yaml directly:

statusline:
  preset: default  # or full
  segments:
    model: true
    context: true
    usage_5h: true    # 5-hour API usage bar
    usage_7d: true    # 7-day API usage bar
    output_style: true
    directory: true
    git_status: true
    claude_version: true
    moai_version: true
    git_branch: true

Note: As of v2.7.8, segment preset selection has been removed from the moai init/moai update wizard. Configure segments directly in the YAML file above.


Q: What does the version indicator in statusline mean?

The MoAI statusline shows version information with update notifications:

🗿 v2.2.2 ⬆️ v2.2.5
  • v2.2.2: Currently installed version
  • ⬆️ v2.2.5: New version available for update

When you're on the latest version, only the version number is displayed:

🗿 v2.2.5

To update: Run moai update and the update notification will disappear.

Note: This is different from Claude Code's built-in version indicator (🔅 v2.1.38). The MoAI indicator tracks MoAI-ADK versions, while Claude Code shows its own version separately.


Q: "Allow external CLAUDE.md file imports?" warning appears

When opening a project, Claude Code may show a security prompt about external file imports:

External imports:
  /Users/<user>/.moai/config/sections/quality.yaml
  /Users/<user>/.moai/config/sections/user.yaml
  /Users/<user>/.moai/config/sections/language.yaml

Recommended action: Select "No, disable external imports"

Why?

  • Your project's .moai/config/sections/ already contains these files
  • Project-specific settings take precedence over global settings
  • The essential configuration is already embedded in CLAUDE.md text
  • Disabling external imports is more secure and doesn't affect functionality

What are these files?

  • quality.yaml: TRUST 5 framework and development methodology settings
  • language.yaml: Language preferences (conversation, comments, commits)
  • user.yaml: User name (optional, for Co-Authored-By attribution)

Contributing

Contributions are welcome! See CONTRIBUTING.md for detailed guidelines.

Quick Start

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-feature
  3. Write tests (TDD for new code, characterization tests for existing code)
  4. Ensure all tests pass: make test
  5. Ensure linting passes: make lint
  6. Format code: make fmt
  7. Commit with conventional commit messages
  8. Open a pull request

Code quality requirements: 85%+ coverage · 0 lint errors · 0 type errors · Conventional commits

Community

  • Issues -- Bug reports, feature requests

Star History

Star History Chart


License

Apache License 2.0 -- See the LICENSE file for details.

Links
