LLM-WIKI(1)

NAME

llm-wiki - LLM-powered knowledge base from your Claude Code, Codex CLI, Copilot, Cursor & Gemini sessions

SYNOPSIS

$ pip install -e .

INFO

128 stars
17 forks
0 views

DESCRIPTION

LLM-powered knowledge base from your Claude Code, Codex CLI, Copilot, Cursor & Gemini sessions. Karpathy's LLM Wiki pattern — implemented and shipped.

README

llmwiki

LLM-powered knowledge base from your Claude Code, Codex CLI, Cursor, Gemini CLI, and Obsidian sessions. Built on Andrej Karpathy's LLM Wiki pattern.

👉 Live demo: pratiyush.github.io/llm-wiki

The demo site is rebuilt on every master push from the synthetic sessions in examples/demo-sessions/. It contains no personal data and shows every feature of the real tool (activity heatmap, tool charts, token usage, model info cards, vs-comparisons, project topics) running against safe reference data.



Every Claude Code, Codex CLI, Copilot, Cursor, and Gemini CLI session writes a full transcript to disk. You already have hundreds of them and never look at them again.

llmwiki turns that dormant history into a beautiful, searchable, interlinked knowledge base — locally, in two commands. Plus, it produces AI-consumable exports (llms.txt, llms-full.txt, JSON-LD graph, per-page .txt + .json siblings) so other AI agents can query your wiki directly.

./setup.sh                         # one-time install
./build.sh && ./serve.sh           # build + serve at http://127.0.0.1:8765

llm-wiki demo

Contributing in one line: read CONTRIBUTING.md, keep PRs focused (one concern each), use feat: / fix: / docs: / chore: / test: commit prefixes, never commit real session data (raw/ is gitignored), no new runtime deps. CI must be green to merge.

Screenshots

All screenshots below are from the public demo site which is built on every master push from the dummy example sessions. Your own wiki will look identical — just with your real work.

Home — projects overview with activity heatmap

llmwiki home page — LLM Wiki header, activity heatmap, and a grid of three demo projects (demo-blog-engine, demo-ml-pipeline, demo-todo-api)

All sessions — filterable table across every project

llmwiki sessions index — activity timeline above a table of eight demo sessions with project, model, date, message count, and tool-call columns

Session detail — full conversation + tool calls

llmwiki session detail — Rust blog engine scaffolding session showing summary, breadcrumbs, a TOML Cargo.toml block and a Rust main.rs block, both highlighted by highlight.js

Changelog — renders CHANGELOG.md as a first-class page

llmwiki changelog page — keep-a-changelog format with colored headings for Added / Fixed / Changed and auto-linked PR references

Projects index — freshness badges + per-project stats

llmwiki projects index — three demo project cards with green/yellow/red freshness badges showing how recently each project was touched

What you get

Human-readable

  • All your sessions, converted from .jsonl to clean, redacted markdown
  • A Karpathy-style wiki — sources/, entities/, concepts/, syntheses/, comparisons/, questions/ — linked with [[wikilinks]]
  • A beautiful static site you can browse locally or deploy to GitHub Pages
    • Global search (Cmd+K command palette with fuzzy match over pre-built index)
    • highlight.js client-side syntax highlighting (light + dark themes)
    • Dark mode (system-aware + manual toggle with data-theme)
    • Keyboard shortcuts: / search · g h/p/s nav · j/k rows · ? help
    • Collapsible tool-result sections (auto-expand > 500 chars)
    • Copy-as-markdown + copy-code buttons
    • Breadcrumbs + reading progress bar
    • Filter bar on sessions table (project/model/date/text)
    • Reading time estimates (X min read)
    • Related pages panel at the bottom of every session
    • Activity heatmap on the home page
    • Model info cards with structured schema (provider, pricing, benchmarks)
    • Auto-generated vs-comparison pages between AI models
    • Append-only changelog timeline with pricing sparkline
    • Project topic chips (GitHub-style tags on project cards)
    • Agent labels (colored badges: Claude/Codex/Copilot/Cursor/Gemini)
    • Recently-updated card on the home page
    • Dataview-style structured queries in the command palette
    • Hover-to-preview wikilinks
    • Deep-link icons next to every heading
    • Mobile-responsive + print-friendly

AI-consumable (v0.4)

Every HTML page has sibling machine-readable files at the same URL:

  • <page>.html — human HTML with schema.org microdata
  • <page>.txt — plain text version (no HTML tags)
  • <page>.json — structured metadata + body
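Since the siblings live at the same URL with only the extension changed, an agent can derive them with a few lines of string handling. A minimal sketch (the helper name is illustrative, not part of llmwiki's API):

```python
from pathlib import PurePosixPath

def sibling_urls(html_url: str) -> dict:
    """Given a page's .html URL, derive its machine-readable siblings."""
    stem = str(PurePosixPath(html_url).with_suffix(""))  # drop ".html"
    return {"txt": stem + ".txt", "json": stem + ".json"}

print(sibling_urls("/sessions/demo-todo-api/2026-04-08-add-auth.html"))
```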

Site-level AI-agent entry points:

File              What
/llms.txt         Short index per llmstxt.org spec
/llms-full.txt    Flattened plain-text dump (~5 MB cap) — paste into any LLM's context
/graph.jsonld     Schema.org JSON-LD entity/concept/source graph
/sitemap.xml      Standard sitemap with lastmod
/rss.xml          RSS 2.0 feed of newest sessions
/robots.txt       AI-friendly robots.txt with llms.txt reference
/ai-readme.md     AI-specific navigation instructions
/manifest.json    Build manifest with SHA-256 hashes + perf budget

Every page also includes an <!-- llmwiki:metadata --> HTML comment that AI agents can parse without fetching the separate .json sibling.
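An agent could pull that inline metadata out with a single regex. The payload format is an assumption here (a JSON object inside the comment, which is one plausible shape — llmwiki's actual format may differ):

```python
import json
import re

# Assumed shape: <!-- llmwiki:metadata {"title": ...} -->
# The JSON-inside-comment layout is an assumption, not taken from llmwiki's docs.
META_RE = re.compile(r"<!--\s*llmwiki:metadata\s*(\{.*?\})\s*-->", re.DOTALL)

def extract_metadata(html: str):
    m = META_RE.search(html)
    return json.loads(m.group(1)) if m else None

page = '<html><!-- llmwiki:metadata {"title": "demo", "model": "claude"} --></html>'
print(extract_metadata(page))
```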

Automation

  • SessionStart hook — auto-syncs new sessions in the background on every Claude Code launch
  • File watcher — llmwiki watch polls agent stores with debounce and runs sync on change
  • MCP server — 7 production tools (wiki_query, wiki_search, wiki_list_sources, wiki_read_page, wiki_lint, wiki_sync, wiki_export) queryable from any MCP client (Claude Desktop, Cline, Cursor, ChatGPT desktop)
  • No servers, no database, no npm — Python stdlib + markdown. Syntax highlighting loads from a highlight.js CDN at view time.

How it works

┌─────────────────────────────────────┐
│  ~/.claude/projects/*/*.jsonl       │  ← Claude Code sessions
│  ~/.codex/sessions/**/*.jsonl       │  ← Codex CLI sessions
│  ~/Library/.../Cursor/workspaceS…   │  ← Cursor
│  ~/Documents/Obsidian Vault/        │  ← Obsidian
│  ~/.gemini/                         │  ← Gemini CLI
└──────────────┬──────────────────────┘
               │
               ▼   python3 -m llmwiki sync
┌─────────────────────────────────────┐
│  raw/sessions/<project>/            │  ← immutable markdown (Karpathy layer 1)
│     2026-04-08-<slug>.md            │
└──────────────┬──────────────────────┘
               │
               ▼   /wiki-ingest  (your coding agent)
┌─────────────────────────────────────┐
│  wiki/sources/<slug>.md             │  ← LLM-generated wiki (Karpathy layer 2)
│  wiki/entities/<Name>.md            │
│  wiki/concepts/<Name>.md            │
│  wiki/syntheses/<Name>.md           │
│  wiki/comparisons/<Name>.md         │
│  wiki/questions/<Name>.md           │
│  wiki/index.md, overview.md, log.md │
└──────────────┬──────────────────────┘
               │
               ▼   python3 -m llmwiki build
┌─────────────────────────────────────┐
│  site/                              │  ← static HTML + AI exports
│  ├── index.html, style.css, ...     │
│  ├── sessions/<project>/<slug>.html │
│  ├── sessions/<project>/<slug>.txt  │  (AI sibling)
│  ├── sessions/<project>/<slug>.json │  (AI sibling)
│  ├── llms.txt, llms-full.txt        │
│  ├── graph.jsonld                   │
│  ├── sitemap.xml, rss.xml           │
│  ├── robots.txt, ai-readme.md       │
│  ├── manifest.json                  │
│  └── search-index.json              │
└─────────────────────────────────────┘

See docs/architecture.md for the full 3-layer Karpathy + 8-layer build breakdown.

Install

macOS / Linux

git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
./setup.sh

Windows

git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
setup.bat

With pip (v0.3+)

pip install -e .                # basic — everything you need
pip install -e '.[pdf]'         # + PDF ingestion
pip install -e '.[dev]'         # + pytest + ruff
pip install -e '.[all]'         # all of the above

Syntax highlighting is now powered by highlight.js, loaded from a CDN at view time — no optional deps required.

What setup does

  1. Creates raw/, wiki/, site/ data directories
  2. Installs the llmwiki Python package in-place
  3. Detects your coding agents and enables matching adapters
  4. Optionally offers to install the SessionStart hook into ~/.claude/settings.json for auto-sync
  5. Runs a first sync so you see output immediately

For maintainers

Running the project? The governance scaffold lives under docs/maintainers/ and is loaded by a dedicated skill:

File                                   What it's for
CONTRIBUTING.md                        Short rules for contributors — read this first
CODE_OF_CONDUCT.md                     Contributor Covenant 2.1
SECURITY.md                            Disclosure process for redaction bugs, XSS, data leaks
docs/maintainers/ARCHITECTURE.md       One-page system diagram + layer boundaries + what NOT to add
docs/maintainers/REVIEW_CHECKLIST.md   Canonical code-review criteria
docs/maintainers/RELEASE_PROCESS.md    Version bump → CHANGELOG → tag → build → publish
docs/maintainers/TRIAGE.md             Label taxonomy + stale-issue policy
docs/maintainers/ROADMAP.md            Near-term plan + release themes
docs/maintainers/DECLINED.md           Graveyard of declined ideas with reasons

Four Claude Code slash commands automate the common ops:

  • /review-pr <N> — apply the REVIEW_CHECKLIST to a PR and post findings
  • /triage-issue <N> — label + milestone + priority a new issue
  • /release <version> — walk the release process step by step
  • /maintainer — meta-skill that loads every governance doc as context

Running E2E tests

The unit suite (pytest tests/ — 472 tests) runs in milliseconds and covers every module. The end-to-end suite under tests/e2e/ is separate: it builds a minimal demo site, serves it on a random port, drives a real browser via Playwright, and runs scenarios written in Gherkin via pytest-bdd.

Why both? Unit tests lock the contract at the module boundary; E2E locks the contract at the user's browser. A diff that passes unit tests but breaks the Cmd+K palette will fail E2E.

Install the extras (one-time, ~300 MB for Chromium):

pip install -e '.[e2e]'
python -m playwright install chromium

Run the suite:

pytest tests/e2e/ --browser=chromium

Run a single feature:

pytest tests/e2e/test_command_palette.py --browser=chromium -v

The E2E suite is excluded from the default pytest tests/ run (see the --ignore=tests/e2e addopt in pyproject.toml) so you can iterate on the unit suite without waiting for browser installs. CI runs the E2E job as a separate workflow (.github/workflows/e2e.yml) that only fires on PRs touching build.py, the viz modules, or tests/e2e/**.

Feature files live under tests/e2e/features/ — one per UI area (homepage, session page, command palette, keyboard nav, mobile nav, theme toggle, copy-as-markdown, responsive breakpoints, edge cases, accessibility, visual regression). Step definitions are all in tests/e2e/steps/ui_steps.py. Adding a new scenario is usually a 2-line change to a .feature file plus maybe one new step.

Run locally with an HTML report:

pytest tests/e2e/ --browser=chromium \
  --html=e2e-report/index.html --self-contained-html
open e2e-report/index.html     # macOS — opens the browseable report

Where to see test reports:

What                                  Where
Unit test results                     GitHub Actions → ci.yml → latest run → lint-and-test job logs
E2E HTML report                       GitHub Actions → e2e.yml → latest run → Artifacts → e2e-html-report (14-day retention)
Visual regression screenshots         Same run → Artifacts → e2e-screenshots
Playwright traces (failed runs only)  Same run → Artifacts → playwright-traces (open with playwright show-trace <zip>)
Demo site deploy status               GitHub Actions → pages.yml → latest run

Locally, the HTML report is one file (e2e-report/index.html) that you can open in any browser — pass/fail per scenario, duration, stdout/stderr, screenshot on failure.

Scheduled sync

Templates for running llmwiki sync automatically on a daily schedule:

OS        Template                    Install guide
macOS     launchd.plist               docs/scheduled-sync.md
Linux     systemd.timer + .service    docs/scheduled-sync.md
Windows   task.xml                    docs/scheduled-sync.md

See docs/scheduled-sync.md for full instructions.

CLI reference

llmwiki init                    # scaffold raw/ wiki/ site/
llmwiki sync                    # convert .jsonl → markdown
llmwiki build                   # compile static HTML + AI exports
llmwiki serve                   # local HTTP server on 127.0.0.1:8765
llmwiki adapters                # list available adapters
llmwiki graph                   # build knowledge graph (v0.2)
llmwiki watch                   # file watcher with debounce (v0.2)
llmwiki export-obsidian         # write wiki to Obsidian vault (v0.2)
llmwiki export-qmd              # export wiki as a qmd collection (v0.6)
llmwiki eval                    # 7-check structural quality score /100 (v0.3)
llmwiki check-links             # verify internal links in site/ (v0.4)
llmwiki export <format>         # AI-consumable exports (v0.4)
llmwiki synthesize              # auto-ingest synthesis pipeline (v0.5)
llmwiki manifest                # build site manifest + perf budget (v0.4)
llmwiki version

Each subcommand has its own --help. All commands are also wrapped in one-click shell/batch scripts: sync.sh/.bat, build.sh/.bat, serve.sh/.bat, upgrade.sh/.bat.

Works with

Agent                 Adapter                          Status          Added in
Claude Code           llmwiki.adapters.claude_code     ✅ Production   v0.1
Obsidian (input)      llmwiki.adapters.obsidian        ✅ Production   v0.1
Obsidian (output)     llmwiki.obsidian_output          ✅ Production   v0.2
Codex CLI             llmwiki.adapters.codex_cli       ✅ Production   v0.3
Cursor                llmwiki.adapters.cursor          ✅ Production   v0.5
Gemini CLI            llmwiki.adapters.gemini_cli      ✅ Production   v0.5
PDF files             llmwiki.adapters.pdf             ✅ Production   v0.5
Copilot Chat          llmwiki.adapters.copilot_chat    ✅ Production   v0.9
Copilot CLI           llmwiki.adapters.copilot_cli     ✅ Production   v0.9
OpenCode / OpenClaw   —                                ⏸ Deferred      —

Adding a new agent is one small file — subclass BaseAdapter, declare SUPPORTED_SCHEMA_VERSIONS, ship a fixture + snapshot test.
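The adapter contract can be sketched like this. BaseAdapter's real interface lives inside llmwiki, so the stand-in base class and the "foo" CLI names below are illustrative only:

```python
import json

class BaseAdapter:
    """Stand-in for llmwiki's real base class (interface assumed)."""
    SUPPORTED_SCHEMA_VERSIONS: tuple = ()

class FooCliAdapter(BaseAdapter):
    """Hypothetical adapter for a 'foo' CLI that logs JSONL sessions."""
    SUPPORTED_SCHEMA_VERSIONS = ("1",)
    SESSION_GLOB = "~/.foo/sessions/**/*.jsonl"

    def parse_line(self, line: str) -> dict:
        rec = json.loads(line)
        # Normalize to a common message shape for the agent-agnostic converter.
        return {"role": rec.get("role", "user"), "text": rec.get("content", "")}

adapter = FooCliAdapter()
print(adapter.parse_line('{"role": "assistant", "content": "hi"}'))
```

The point of the pattern is that the converter never sees agent-specific JSONL — only the normalized shape the adapter emits.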

MCP server

llmwiki ships its own MCP server (stdio transport, no SDK dependency) so any MCP client can query your wiki directly.

python3 -m llmwiki.mcp   # runs on stdin/stdout

Seven production tools:

Tool                              What
wiki_query(question, max_pages)   Keyword search + page content (no LLM synthesis)
wiki_search(term, include_raw)    Raw grep over wiki/ (+ optional raw/)
wiki_list_sources(project)        List raw source files with metadata
wiki_read_page(path)              Read one page (path-traversal guarded)
wiki_lint()                       Orphans + broken-wikilinks report
wiki_sync(dry_run)                Trigger the converter
wiki_export(format)               Return any AI-consumable export (llms.txt, jsonld, sitemap, rss, manifest)

Register in your MCP client's config — e.g. for Claude Desktop, add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "llmwiki": {
      "command": "python3",
      "args": ["-m", "llmwiki.mcp"]
    }
  }
}
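Under the hood, MCP clients speak JSON-RPC 2.0 over the server's stdin/stdout. A sketch of what a tools/call request for wiki_query looks like (exact framing and capability negotiation are handled by the client, so treat this as illustrative, not wire-exact):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request as MCP clients send it."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

req = make_tool_call(1, "wiki_query", {"question": "auth flow", "max_pages": 3})
print(req)
```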

Configuration

Single JSON config at examples/sessions_config.json. Copy to config.json and edit:

{
  "filters": {
    "live_session_minutes": 60,
    "exclude_projects": []
  },
  "redaction": {
    "real_username": "YOUR_USERNAME",
    "replacement_username": "USER",
    "extra_patterns": [
      "(?i)(api[_-]?key|secret|token|bearer|password)...",
      "sk-[A-Za-z0-9]{20,}"
    ]
  },
  "truncation": {
    "tool_result_chars": 500,
    "bash_stdout_lines": 5
  },
  "adapters": {
    "obsidian": {
      "vault_paths": ["~/Documents/Obsidian Vault"]
    }
  }
}
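The redaction pass implied by this config amounts to a username substitution plus a sweep of regex patterns. A minimal sketch (the "[REDACTED]" replacement token and the exact pattern handling are assumptions, not llmwiki's documented behavior):

```python
import re

# Mirrors the "extra_patterns" list from the config above.
PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|bearer|password)\S*"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
]

def redact(text: str, username: str = "YOUR_USERNAME") -> str:
    # Replace the real username first, then mask secret-shaped tokens.
    text = text.replace(username, "USER")
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

print(redact("export OPENAI_API_KEY=sk-abcdefghijklmnopqrstuv"))
```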

All paths, regexes, truncation limits, and per-adapter settings are tunable. See docs/configuration.md.

.llmwikiignore

Gitignore-style pattern file at the repo root. Skip entire projects, dates, or specific sessions without touching config:

# Skip a whole project
confidential-client/
# Skip anything before a date
*2025-*
# Keep exception
!confidential-client/public-*
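Gitignore-style semantics — later rules win, "!" re-includes, a trailing slash matches everything under that prefix — can be sketched with fnmatch. llmwiki's actual matcher may differ in details; this just illustrates how the three rules above interact:

```python
from fnmatch import fnmatch

# (pattern, keep) pairs in file order; later matches override earlier ones.
RULES = [
    ("confidential-client/", False),            # skip a whole project
    ("*2025-*", False),                         # skip anything matching a date
    ("!confidential-client/public-*", True),    # keep exception
]

def ignored(path: str) -> bool:
    verdict = False
    for raw, keep in RULES:
        pat = raw.lstrip("!")
        if pat.endswith("/"):
            hit = path.startswith(pat)          # directory prefix rule
        else:
            hit = fnmatch(path, pat)            # glob rule
        if hit:
            verdict = not keep                  # last matching rule wins
    return verdict

print(ignored("confidential-client/2026-01-internal.md"))  # True
print(ignored("confidential-client/public-notes.md"))      # False
```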

Karpathy's LLM Wiki pattern

This project follows the three-layer structure described in Karpathy's gist:

  1. Raw sources (raw/) — immutable. Session transcripts converted from .jsonl.
  2. The wiki (wiki/) — LLM-generated. One page per entity, concept, source. Interlinked via [[wikilinks]].
  3. The schema (CLAUDE.md, AGENTS.md) — tells your agent how to ingest and query.

See docs/architecture.md for the full breakdown and how it maps to the file tree.

Design principles

  • Stdlib first — only mandatory runtime dep is markdown. pypdf is an optional extra for PDF ingestion.
  • Works offline — no Google fonts, no external CSS. Syntax highlighting loads from a highlight.js CDN but degrades gracefully without it.
  • Redact by default — username, API keys, tokens, emails all get redacted before entering the wiki.
  • Idempotent everything — re-running any command is safe and cheap.
  • Agent-agnostic core — the converter doesn't know which agent produced the .jsonl; adapters translate.
  • Privacy by default — localhost-only binding, no telemetry, no cloud calls.
  • Dual-format output (v0.4) — every page ships both for humans (HTML) and AI agents (TXT + JSON + JSON-LD + sitemap + llms.txt).

Docs

Per-adapter docs:

Releases

Version   Focus
v0.1.0    Core release — Claude Code adapter, god-level HTML UI, schema, CI, plugin scaffolding
v0.2.0    Extensions — 3 new slash commands, 3 new adapters, Obsidian bidirectional, full MCP server
v0.3.0    PyPI packaging, eval framework, i18n scaffold
v0.4.0    AI + human dual format — per-page .txt/.json siblings, llms.txt, JSON-LD graph, sitemap, RSS, schema.org microdata, reading time, related pages, activity heatmap, deep-link anchors, build manifest, link checker, wiki_export MCP tool
v0.5.0    Folder-level _context.md, auto-ingest, 3 adapter graduations (Cursor, Gemini CLI, PDF), lazy search index, scheduled sync templates, WCAG accessibility, Playwright E2E tests
v0.6.0    qmd export, GitLab Pages CI, PyPI release automation, maintainer governance scaffold
v0.7.0    Structured model-profile schema, auto-generated vs-comparison pages, append-only changelog timeline
v0.8.0    365-day activity heatmap, tool-calling bar chart, token usage card, session metrics frontmatter
v0.9.0    Project topics, agent labels (Claude/Codex/Copilot/Cursor/Gemini badges), Copilot adapters, image pipeline, highlight.js, public demo deployment

Each release is published under a git tag matching its version number.

Roadmap

Shipped milestones:

  • v0.5.0 — Folder-level _context.md, auto-ingest, adapter graduations, lazy search index, scheduled sync, WCAG, E2E tests (milestone)
  • v0.6.0 — qmd export, GitLab Pages CI, PyPI release automation, maintainer governance scaffold (milestone)
  • v0.7.0 — Structured model-profile schema, vs-comparison pages, append-only changelog timeline (milestone)
  • v0.8.0 — 365-day activity heatmap, tool-calling bar chart, token usage card, session metrics frontmatter (milestone)
  • v0.9.0 — Project topics, agent labels, Copilot adapters, image pipeline, highlight.js, public demo deployment

Active milestone:

Milestone   Focus
v1.0.0      Knowledge graph explorer, light mode polish, interactive graph visualization

Deployment targets

  • GitHub Pages — shipped in v0.1 via .github/workflows/pages.yml (triggers on push to master).
  • GitLab Pages — copy .gitlab-ci.yml.example to .gitlab-ci.yml. See docs/deploy/gitlab-pages.md for full setup.
  • Any static host — llmwiki build writes to site/, which you can rsync/scp anywhere.

Acknowledgements

License

MIT © Pratiyush

SEE ALSO

clihub                              4/16/2026                              LLM-WIKI(1)