YALC — The Open-Source GTM Operating System
AI plans your campaigns, qualifies your leads, and learns from every interaction.
YALC is an open-source, AI-native operating system for running any GTM campaign, and a CLI-first, open-source alternative to Clay. Intelligence compounds from every interaction.
Quick Start
```sh
git clone https://github.com/Othmane-Khadri/YALC-the-GTM-operating-system.git
cd YALC-the-GTM-operating-system
pnpm install

# Make the CLI available
pnpm link --global

# One command to set up everything
yalc-gtm start
```
The `start` command walks you through four steps:
- Environment — Collects API keys. Only an Anthropic key is required to begin; other providers (Crustdata, Unipile, Firecrawl, Notion) are optional and unlock additional capabilities.
- Company Context — Interactive interview about your company, ICP, pain points, competitors, and voice. Optionally scrapes your website for additional context.
- Framework — Claude synthesizes everything into a structured GTM framework (segments, signals, positioning, competitors).
- Goals & Config — Claude recommends goals and generates qualification rules, outreach templates, and search queries.
You'll end with a readiness report showing what's unlocked and a suggested first command.
Updating
Already set up? One command to pull the latest:
yalc-gtm update
This stashes any local changes, pulls from origin, reinstalls deps, and restores your changes. Your ~/.gtm-os/ config is never touched.
After Setup
```sh
# Run your first qualification (dry-run)
yalc-gtm leads:qualify --source csv --input data/leads/sample.csv --dry-run

# Create a campaign
yalc-gtm campaign:create --title "Q2 Outbound" --hypothesis "VP Eng responds to pain-point messaging"

# Track campaign progress
yalc-gtm campaign:track --dry-run

# Or just describe what you want in natural language
yalc-gtm orchestrate "find 10 companies matching my ICP"
```
Non-Interactive Setup
For CI or automation, set your keys in .env.local (see .env.example) and run:
yalc-gtm start --non-interactive
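For reference, a minimal `.env.local` might look like the following. Only `ANTHROPIC_API_KEY` is required; the other keys come from the Providers table below, and all values here are illustrative placeholders:

```shell
ANTHROPIC_API_KEY=sk-ant-placeholder

# Optional providers (unlock additional capabilities)
UNIPILE_API_KEY=placeholder
UNIPILE_DSN=placeholder
CRUSTDATA_API_KEY=placeholder
FIRECRAWL_API_KEY=placeholder
NOTION_API_KEY=placeholder
FULLENRICH_API_KEY=placeholder
```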
Features at a Glance
- 16 built-in skills — qualify, scrape, campaign, orchestrate, personalize, competitive-intel, and more
- 7 providers — Unipile, Crustdata, Firecrawl, Notion, FullEnrich, Instantly, Mock
- Multi-channel campaigns — LinkedIn + Email with A/B variant testing
- Intelligence store — learns from every campaign outcome (hypothesis → validated → proven)
- Statistical significance — chi-squared testing to pick variant winners
- Campaign dashboard — real-time analytics, funnel views, Claude-powered Q&A
- Rate limiting — DB-backed token bucket on all external sends
- Outbound validation — every message checked before send, hard blocks on violations
- Background agents — launchd-integrated for automated campaign tracking
- Natural language orchestration — describe what you want, YALC plans the workflow
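As an illustration of the variant-winner check above, a chi-squared test on a 2×2 contingency table can be sketched as follows. This is a minimal sketch; the function names and the exact test YALC runs are assumptions, not its actual API:

```typescript
// Chi-squared statistic for a 2x2 contingency table:
// variant A with aWins successes / aLosses failures, variant B likewise.
function chiSquared(aWins: number, aLosses: number, bWins: number, bLosses: number): number {
  const n = aWins + aLosses + bWins + bLosses;
  const numerator = n * (aWins * bLosses - aLosses * bWins) ** 2;
  const denominator =
    (aWins + aLosses) * (bWins + bLosses) * (aWins + bWins) * (aLosses + bLosses);
  return numerator / denominator;
}

// Critical value for 1 degree of freedom at p < 0.05.
const CRITICAL_95 = 3.841;

// A variant "wins" only when the difference is statistically significant.
function hasWinner(aWins: number, aLosses: number, bWins: number, bLosses: number): boolean {
  return chiSquared(aWins, aLosses, bWins, bLosses) > CRITICAL_95;
}
```

For example, 30/100 replies on variant A versus 45/100 on variant B gives a statistic of 4.8, which clears the 3.841 threshold, so B would be declared the winner.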
Using YALC from Claude Code (IDE or Terminal)
YALC works the same whether you run it from a coding IDE (VS Code, Cursor) or a standalone terminal. The CLI uses the same interactive prompts in both.
IDE (VS Code / Cursor with Claude Code extension):
You can ask Claude Code to run commands for you. For the initial setup, it's better to run yalc-gtm start yourself in the integrated terminal so you can answer the interactive prompts. After that, Claude Code can run any YALC command on your behalf — qualifying leads, creating campaigns, tracking results.
If your ANTHROPIC_API_KEY is already in your environment (common in Claude Code sessions), the start command detects it automatically and skips the prompt.
Terminal (standalone): Run commands directly. The interactive prompts work as expected in any terminal emulator.
File Structure — Where Things Live:
```
~/.gtm-os/                    Your GTM brain (persists across projects)
├── config.yaml               Provider settings, Notion IDs, rate limits
├── framework.yaml            GTM framework — ICP, positioning, signals
├── qualification_rules.md    Lead qualification patterns (auto-generated)
├── campaign_templates.yaml   Outreach copy templates (auto-generated)
├── search_queries.txt        Monitoring keywords (auto-generated)
└── tenants/<slug>/           Per-tenant overrides (multi-company mode)

./data/                       Working data (in your project directory)
├── leads/                    CSV/JSON lead lists for qualification
├── intelligence/             Campaign learnings and insights
└── campaigns/                Campaign exports and reports
```
When talking to Claude Code, reference these locations directly:
- "Update my qualification rules" → edits `~/.gtm-os/qualification_rules.md`
- "Add a segment to my framework" → edits `~/.gtm-os/framework.yaml`
- "Qualify leads from this CSV" → reads from `./data/leads/`
Architecture
```
┌──────────────────────────────────────────────────────────┐
│                        CLI Layer                         │
│  campaign:track · campaign:create · leads:qualify · ...  │
├──────────────────────────────────────────────────────────┤
│                       Skills Layer                       │
│  qualify · scrape-linkedin · answer-comments · email ·   │
│  orchestrate · visualize · monthly-report                │
├──────────────────────────────────────────────────────────┤
│                      Providers Layer                     │
│  Unipile · Crustdata · Firecrawl · Notion · FullEnrich   │
├──────────────────────────────────────────────────────────┤
│                      Services Layer                      │
│  API wrappers · Rate limiter · Outbound validator        │
├──────────────────────────────────────────────────────────┤
│                        Data Layer                        │
│  Drizzle ORM · SQLite/Turso · Intelligence Store         │
└──────────────────────────────────────────────────────────┘
```
Three-layer pattern: Service (API wrapper) → Provider (StepExecutor) → Skill (user-facing operation). Never skip layers.
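The three-layer pattern might be sketched like this. All names here are hypothetical illustrations of the layering, not the real interfaces in the codebase:

```typescript
// Layer 1 (Service): a thin wrapper around an external API.
class SearchService {
  searchCompanies(query: string): string[] {
    // In reality this would issue an HTTP request to the provider's API.
    return [`${query}-corp`];
  }
}

// Layer 2 (Provider): adapts the service to a uniform step-execution interface.
interface StepExecutor {
  execute(input: Record<string, unknown>): unknown;
}

class SearchProvider implements StepExecutor {
  constructor(private service: SearchService) {}
  execute(input: Record<string, unknown>): string[] {
    return this.service.searchCompanies(String(input.query));
  }
}

// Layer 3 (Skill): the user-facing operation that composes providers.
class FindCompaniesSkill {
  constructor(private provider: StepExecutor) {}
  run(query: string): string[] {
    return this.provider.execute({ query }) as string[];
  }
}
```

The skill never talks to the HTTP wrapper directly, and the service never knows it is part of a skill; each layer depends only on the one directly beneath it.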
Providers
| Provider | Capabilities | Env Var |
|---|---|---|
| Unipile | LinkedIn search, connections, DMs, scraping | UNIPILE_API_KEY, UNIPILE_DSN |
| Crustdata | Company/people search, enrichment | CRUSTDATA_API_KEY |
| Firecrawl | Web scraping, search | FIRECRAWL_API_KEY |
| Notion | Database sync, page management | NOTION_API_KEY |
| FullEnrich | Email/phone enrichment | FULLENRICH_API_KEY |
| Anthropic | AI planning, qualification, personalization | ANTHROPIC_API_KEY |
Skills
| Skill | Category | Description |
|---|---|---|
| qualify-leads | data | 7-gate lead qualification pipeline |
| scrape-linkedin | data | Scrape post engagers (likers/commenters) |
| answer-comments | outreach | Reply to LinkedIn post comments |
| email-sequence | content | Generate email drip sequences |
| visualize-campaigns | analysis | Campaign dashboards |
| monthly-campaign-report | analysis | Cross-campaign intelligence report |
| orchestrate | integration | Multi-step workflow from natural language |
CLI Commands
| Command | Description |
|---|---|
| `start` | Guided onboarding — keys, context, framework, goals in one flow |
| `setup` | Check API keys and provider connectivity |
| `onboard` | Build GTM framework from profile/website |
| `campaign:track` | Poll Unipile, advance sequences, sync Notion |
| `campaign:create` | Create campaign with A/B variant testing |
| `campaign:report` | Generate weekly intelligence report |
| `campaign:monthly-report` | Cross-campaign monthly report |
| `campaign:dashboard` | Open visualization dashboard |
| `leads:qualify` | Run 7-gate qualification pipeline |
| `leads:scrape-post` | Scrape LinkedIn post engagers |
| `leads:import` | Import leads from CSV/JSON/Notion |
| `linkedin:answer-comments` | Reply to LinkedIn post comments |
| `email:create-sequence` | Generate email drip sequence |
| `notion:sync` | Bidirectional SQLite ↔ Notion sync |
| `notion:bootstrap` | Import existing Notion data to SQLite |
| `orchestrate` | Natural language → phased skill execution |
| `agent:run` | Run background agent immediately |
| `agent:install` | Install agent as launchd service |
| `agent:list` | List agents with last run status |
All commands that send or write support `--dry-run`. See the Command Reference for full details, flags, and examples.
Documentation
| Guide | What it covers |
|---|---|
| First Run Tutorial | Step-by-step walkthrough of start, plus 3 mini-tutorials |
| Provider Setup | How to get and configure API keys for each provider |
| Command Reference | Every CLI command with flags, examples, and expected output |
| Skills Catalog | All 17 built-in skills with scenarios and decision tree |
| MCP Integration | How MCP works with GTM-OS, current status, and roadmap |
| Troubleshooting | Common errors and fixes, organized by layer |
| Background Agents | Agent architecture, creation, scheduling |
| Intelligence Store | Intelligence schema, categories, confidence lifecycle |
| Architecture | High-level project map |
| Systems Architecture | Deep dive into 8 core systems |
Configuration
YALC uses `~/.gtm-os/config.yaml` for persistent configuration:
```yaml
notion:
  campaigns_ds: ""
  leads_ds: ""
  variants_ds: ""
  parent_page: ""
unipile:
  daily_connect_limit: 30
  sequence_timing:
    connect_to_dm1_days: 2
    dm1_to_dm2_days: 3
  rate_limit_ms: 3000
qualification:
  rules_path: ~/.gtm-os/qualification_rules.md
  cache_ttl_days: 30
```
Key Design Decisions
- Intelligence everywhere: Every campaign outcome feeds the intelligence store. The system learns what works per segment/channel.
- Outbound validation: Every human-facing message passes through `validateMessage()`. Hard violations block sends.
- Rate limiting: DB-backed token bucket rate limiter on all external sends (LinkedIn connects, DMs, emails).
- No silent mocks: Provider registry throws `ProviderNotFoundError` with suggestions instead of silently falling back to mock data.
- Transactions: All campaign tracker DB writes are wrapped in Drizzle transactions.
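The token-bucket decision can be illustrated with a minimal in-memory sketch. The real limiter is DB-backed and shared across processes; the class and method names below are assumptions for illustration only:

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,             // maximum burst size
    private refillPerMs: number,          // tokens added per millisecond
    private now: () => number = Date.now, // injectable clock, useful for testing
  ) {
    this.tokens = capacity;
    this.lastRefill = this.now();
  }

  // Consumes one token and returns true if one is available; otherwise the
  // caller should hold the send and retry later.
  tryConsume(): boolean {
    const t = this.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (t - this.lastRefill) * this.refillPerMs,
    );
    this.lastRefill = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With the `rate_limit_ms: 3000` setting shown in the Configuration section, the refill rate would correspond to one token every three seconds, so sends beyond the initial burst are spaced at least that far apart.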
Contributing
- Follow the three-layer pattern: Service → Provider → Skill
- Run `pnpm typecheck` after every file change
- Support `--dry-run` on any command that sends or writes
- Never log API keys — use the `sk-...redacted` pattern
- Wire campaign outcomes to the intelligence store
License
MIT