NAME
clawcode — a Claude Code inspired coding-agent CLI, implemented in Python and Rust, focused on agents and experience-based evolution
SYNOPSIS
pip install -e .
DESCRIPTION
ClawCode is a Claude Code inspired implementation in Python and Rust focused on agents and experience-based evolution. It is an open-source coding-agent CLI for Anthropic, OpenAI, Gemini, DeepSeek, GLM, Kimi, Ollama, Codex, GitHub Models, and 200+ models via OpenAI-compatible APIs.
README
ClawCode
Your creative dev tool — AI coding Swiss Army knife
Quick Start • Why ClawCode • Features • Research • Documentation • Contributing
ClawCode is an open-source coding-agent CLI for Anthropic, OpenAI, Gemini, DeepSeek, GLM, Kimi, Ollama, Codex, GitHub Models, and 200+ models via OpenAI-compatible APIs. It goes beyond code generation — it's a self-improving engineering partner.
Why ClawCode
| Typical AI Coding Tool | ClawCode |
|---|---|
| Suggestion-only chat | Terminal-native execution |
| One-shot answers | Self-improving learning loop |
| Single model, single thread | 14-role virtual R&D team |
| No memory | Persistent sessions + experience capsules |
| Vendor lock-in | 200+ models, fully configurable |
Idea → Memory → Plan → Code → Verify → Review → Learned Experience
Features
⚡ Terminal-Native Execution
Analyze, code, verify, and review — all in one surface. No IDE overhead, no context switching.
clawcode # Interactive TUI
clawcode -p "Refactor this API" # Non-interactive
clawcode -p "Summarize changes" -f json # JSON output
🧠 Self-Improving Learning
ClawCode features ECAP (Experience Capsule) and TECAP (Team Experience Capsule) — a closed-loop learning system that turns every task into reusable knowledge:
- Instinct → Experience → Skill evolution chain
- Automatic write-back from /clawteam --deep_loop
- Portable, feedback-scored, privacy-controlled capsules
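The README does not document the capsule schema, so here is a purely hypothetical sketch of what an ECAP write-back could contain, invented to illustrate the properties named above (an evolution stage, a feedback score, and a privacy scope); none of these field names are confirmed by the source:

```json
{
  "capsule_id": "ecap-example-0001",
  "stage": "experience",
  "task": "Build a REST API with auth",
  "learned": "Prefer token-refresh middleware over per-route checks",
  "feedback_score": 0.82,
  "privacy": "project-local",
  "source": "/clawteam --deep_loop"
}
```

Inspect the actual write-back after a `/clawteam --deep_loop` run to see the real schema.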
🎨 Design Team (/designteam)
Spin up specialist design agents (research, IXD, UI, product, visual) and ship structured design specs — not just "chatty UI suggestions."
👥 Virtual R&D Team (/clawteam)
Orchestrate 14 professional roles in one command:
| Role | Focus |
|---|---|
| Product Manager | Priorities, roadmap |
| System Architect | Architecture, tech choices |
| Backend / Frontend / Mobile | Implementation |
| QA / SRE | Quality, reliability |
| DevOps / Team Lead | CI/CD, decisions |
/clawteam "Build a REST API with auth" # Auto-assign roles
/clawteam --deep_loop "Design microservice arch" # Convergent iteration
🔬 Research Mode (clawcode research)
Multi-phase investigation workflows with tool-backed evidence collection:
| Workflow | Purpose |
|---|---|
| deepresearch | Template-driven: plan → research → verify → deliver |
| peerreview | Critical review with verification |
| lit | Literature survey |
| audit | Inspect URL/repo/artifact |
Research tools: research_web_search (Firecrawl/Tavily/Parallel), research_paper_search (arXiv/Semantic Scholar), research_fetch_url, research_sandbox_exec, research_code_audit.
clawcode research start "Quantum error correction" --workflow deepresearch -o ./outputs/qec
clawcode research list-prompts # View available templates
🔧 44 Built-in Tools
| Category | Examples |
|---|---|
| File I/O | view, write, edit, patch, grep |
| Shell | bash, terminal, execute_code |
| Browser | browser_* (×11 automation tools) |
| Agent | Subagent spawning with isolation |
| Integration | MCP, Sourcegraph, Desktop automation |
| Research | research_* (web, papers, audit) |
🔄 Claude Code Compatible
Migration-friendly: supports .claude/agents/, Claude-style tool names, plugin/skill systems, and familiar slash workflows.
Quick Start
1. Install
cd clawcode
python -m venv .venv
.\.venv\Scripts\Activate.ps1 # Windows
source .venv/bin/activate # macOS/Linux
pip install -e ".[dev]"
Requirements: Python >=3.12
2. Configure
Create .clawcode.json in your project root:
{
"providers": {
"openai": {
"api_key": "sk-...",
"disabled": false
}
},
"agents": {
"coder": {
"model": "gpt-4o",
"provider_key": "openai"
}
}
}
Or use environment variables:
export CLAWCODE_OPENAI__API_KEY="sk-..."
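The double-underscore form suggests a nested-key mapping (as used by libraries like pydantic-settings): `CLAWCODE_<SECTION>__<KEY>` maps onto the JSON structure shown above. A minimal sketch of that mapping, assuming this convention holds; the `env_to_config` helper is an illustration, not part of ClawCode's API:

```python
PREFIX = "CLAWCODE_"

def env_to_config(environ: dict[str, str]) -> dict:
    """Map CLAWCODE_SECTION__KEY variables onto nested config keys."""
    config: dict = {}
    for name, value in environ.items():
        if not name.startswith(PREFIX):
            continue
        # "OPENAI__API_KEY" -> ["openai", "api_key"]
        path = [part.lower() for part in name[len(PREFIX):].split("__")]
        node = config
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value
    return config

print(env_to_config({"CLAWCODE_OPENAI__API_KEY": "sk-..."}))
# -> {'openai': {'api_key': 'sk-...'}}
```

Under this convention, the exported variable above would land at `providers → openai → api_key` equivalently to the `.clawcode.json` entry.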
3. Run
clawcode -c "/path/to/project" # Interactive TUI
clawcode -p "Refactor this API" # Non-interactive
Documentation
| Topic | Link |
|---|---|
| Architecture | docs/architecture.md |
| Agent & Team Orchestration | docs/agent-team-orchestration.md |
| ECAP/TECAP Learning System | docs/ecap-learning.md |
| Slash Commands Reference | docs/slash-commands.md |
| Configuration Guide | docs/clawcode-configuration.md |
| Performance & Testing | docs/clawcode-performance.md |
| Research Mode | docs/research_mode.md |
Research Mode
ClawCode includes an independent research subcommand for multi-phase investigation workflows with tool-backed evidence collection.
Workflows
| Workflow | Command | Description |
|---|---|---|
| deepresearch | clawcode research start "topic" -w deepresearch | 4-phase template: plan → research → verify → deliver |
| peerreview | clawcode research start "topic" -w peerreview | Critical review: review → verify → deliver |
| lit | clawcode research start "topic" -w lit | Literature survey |
| audit | clawcode research audit <url> | Inspect URL/repo/artifact |
| compare | clawcode research start "topic" -w compare | Side-by-side comparison |
Research Tools
- research_web_search — Web search with Firecrawl/Tavily/Parallel (DuckDuckGo fallback)
- research_paper_search — arXiv + optional Semantic Scholar
- research_fetch_url — Fetch and extract page content
- research_sandbox_exec — Shell execution in sandbox
- research_code_audit — Compare claims against repo code
Quick Examples
# Deep research with Markdown template phases
clawcode research start "Quantum error correction" --workflow deepresearch -o ./outputs/qec

# Peer review a proposal
clawcode research start "Review: LLM scaling laws" --workflow peerreview -o ./outputs/review

# Audit a repository
clawcode research audit https://github.com/example/repo -o ./outputs/audit

# List available templates
clawcode research list-prompts

# Validate config without calling LLM
clawcode research start "Test" --dry-run
Configuration
Add to .clawcode.json:
{
"web": {
"backend": "firecrawl",
"firecrawl_api_key": "YOUR_KEY",
"tavily_api_key": "YOUR_KEY",
"parallel_api_key": ""
},
"research": {
"enabled": true,
"s2_api_key": "YOUR_SEMANTIC_SCHOLAR_KEY",
"subagents": { "max_concurrent": 3 }
}
}
Optional API keys:
- Firecrawl — Enhanced web search/extraction
- Tavily — Research-optimized search
- Parallel — Alternative search backend
- Semantic Scholar — Higher rate limits for papers
Without API keys, research falls back to DuckDuckGo (web) and arXiv (papers, no key required).
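The fallback behavior described above can be pictured as an ordered backend chain: try each configured backend, skip missing or failing ones, and end at the keyless DuckDuckGo default. The backend names follow the README, but `search_with_fallback` and its callables are a hypothetical sketch, not ClawCode's actual internals:

```python
from typing import Callable

SearchFn = Callable[[str], list[str]]

DEFAULT_ORDER = ("firecrawl", "tavily", "parallel", "duckduckgo")

def search_with_fallback(query: str, backends: dict[str, SearchFn],
                         order: tuple[str, ...] = DEFAULT_ORDER) -> list[str]:
    """Try each backend in order; fall through on absence or failure.

    DuckDuckGo sits last as the keyless fallback, mirroring the README.
    """
    for name in order:
        backend = backends.get(name)
        if backend is None:
            continue  # backend not configured (e.g. missing API key)
        try:
            return backend(query)
        except Exception:
            continue  # backend errored; try the next one
    return []

# Only the keyless fallback is configured here:
results = search_with_fallback("quantum error correction",
                               {"duckduckgo": lambda q: [f"ddg:{q}"]})
print(results)  # -> ['ddg:quantum error correction']
```

The same shape applies to papers: Semantic Scholar when `s2_api_key` is set, otherwise arXiv alone.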
Test Results
| Suite | Tests | Status |
|---|---|---|
| Unit + Integration | 833 | ✅ |
| CLI Flags | 22 | ✅ |
| TUI Interactions | 27 | ✅ |
| Real Skills + Plugins | 53 | ✅ |
Total: 944 items. 935 passed, 9 skipped, 0 failed.
Tiered Onboarding
| Level | Time | Steps |
|---|---|---|
| Run it | ~5 min | Install → clawcode -p "..." → try /clawteam |
| Close the loop | ~30 min | Real task → /clawteam --deep_loop → inspect write-back |
| Team rollout | Repeatable | Align model → inventory skills → wire ECAP feedback |
Contributing
pytest
ruff check .
mypy .
For larger design changes, open an issue first.
Security
AI tooling may run commands and modify files. Use ClawCode in a controlled environment, review outputs, and apply least privilege.
License
GPL-3.0 license.
Built by DeepElementLab