Your Claude Code insurance policy — best tool for the job up front, code that connects before it breaks.
Claude Code has six problems that cost you time and money on every session. The Xpansion Framework covers all six.
| Problem | Module | Available |
|---|---|---|
| Claude picks from defaults — the best tool for your task is one you've never heard of | XFTD (Tool Dispatch) | All tiers |
| Claude renames a function and misses three callers — silent until runtime | XFBA | Free (basic) / Pro / Team |
| You can't see what else breaks when a change lands — cascade failures hide until production | XSIA | Pro / Team |
| Token hogs and peak hours drain your budget invisibly — you only notice at the end | XFTC | All tiers |
| Sessions compact and context is lost — decisions, files changed, open questions — gone | XF-MEM | Pro / Team |
| Implementation drifts from spec — terminals unmet, no one notices until it ships | XFDB | Pro / Team |
One install. Six modules. Every decision logged.
The Claude Code tool ecosystem has 50,000+ options across plugins, skills, and MCPs. Claude picks from defaults. The best tool for what you're building right now — you've probably never heard of it.
XF Tool Dispatch runs as three hooks wired together:
On every message (Hook 1): Sends your last few messages to a small model in ~100ms. If it detects a task shift — you moved from debugging a Flutter widget to writing a test suite, say — it maps the shift to a category and immediately surfaces tool recommendations grouped by type: Plugins, Skills, and MCPs. You see them once per topic per session.
Example proactive output (on task shift):
```
[XFTD] Recommended tools for this flutter-building task:

Plugins:
• flutter-mobile-app-dev — Expert Flutter agent for widgets, state, iOS/Android.
  Install: claude install plugin:anthropic:flutter-mobile-app-dev

Skills:
• VisionAIrySE/flutter@flutter-dev — Flutter dev skill for widget building.
  Install: claude install VisionAIrySE/flutter@flutter-dev

MCPs:
• fluttermcp — Dart analysis and widget tree inspection server.
  Install: claude mcp add fluttermcp npx -y @fluttermcp/server

Not sure which to pick? Ask me — I can explain the differences.
```
If no task shift is detected, Hook 1 exits silently.
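The decision Hook 1 makes can be sketched in a few lines. Everything here is illustrative — the real classifier is a small-model call, not a keyword table, and every name below is hypothetical:

```python
def detect_task_shift(recent_messages, last_category):
    """Toy stand-in for the ~100ms small-model call: guess a
    (category, confidence) from keyword overlap. The real classifier
    is an LLM; this keyword table is purely illustrative."""
    keywords = {
        "flutter-building": {"flutter", "widget", "dart"},
        "test-writing": {"test", "pytest", "assert"},
    }
    words = set(" ".join(recent_messages).lower().split())
    best, score = None, 0
    for category, vocab in keywords.items():
        overlap = len(words & vocab)
        if overlap > score:
            best, score = category, overlap
    confidence = min(1.0, score / 3)
    # Only a confirmed shift to a *new* category produces output;
    # otherwise the hook exits silently.
    if best and best != last_category and confidence >= 0.7:
        return best, confidence
    return None, 0.0
```

Same-topic messages return `(None, 0.0)`, which is the silent-exit path the paragraph above describes.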
Before every tool call (Hook 2): When Claude is about to invoke a Skill, Agent, or MCP tool, XFTD intercepts it. It searches the marketplace for tools relevant to your current task, scores them against what Claude was about to use, and if a marketplace tool scores 10+ points higher — it blocks the call and surfaces the comparison:
```
[XFTD] Intercepted: CC is about to use 'superpowers:systematic-debugging' (Skill) for Flutter Fixing.
CC confidence score: 62/100

── Plugins ──
1. flutter-mobile-app-dev
   Relevance 91 · Signal 78 · Velocity 62   installs:12,400 stars:340 forks:28
   Purpose-built Flutter/Dart agent — widget tree inspection, state, iOS/Android builds.
   Install: claude install plugin:anthropic:flutter-mobile-app-dev && claude

── Skills ──
1. VisionAIrySE/flutter@flutter-dev
   Relevance 84 · Signal 65 · Velocity 55   installs:2,100 stars:88 forks:14
   Flutter dev workflow — widget builds, golden tests, pub dependencies.
   Install: npx skills add VisionAIrySE/flutter@flutter-dev -y && claude
2. superpowers/flutter@flutter-expert
   ⚠ no description — install at your own risk
   Relevance 0 · Signal 42 · Velocity 30   installs:890 stars:12 forks:3

── MCP Servers ──
1. dart-mcp
   Relevance 79 · Signal 58 · Velocity 48   installs:4,200 stars:120 forks:9
   Dart analysis server — static analysis, pub resolve, widget inspection.
   More info: https://github.com/dart-lang/dart-mcp

⚠ Marketplace tools score higher than 'superpowers:systematic-debugging' (Skill) for this task.

Options:
1. Say 'proceed' to continue with 'superpowers:systematic-debugging' (one-time bypass, no restart needed)
2. Install flutter-mobile-app-dev plugin — run /compact first, then install and restart CC
3. Ignore XFTD for this task — say 'skip dispatch'

Note: Review before installing. XFTD surfaces tools based on community signals and task context — not a security audit.
Present these options to the user. Wait for their response before taking any action.
```
If no marketplace tool beats Claude's choice by 10+ points, Hook 2 exits silently.
On session end (Hook 3): Prints a one-line digest so you know XFTD was running, even when it correctly stayed quiet:
```
[XFTD] Session: 12 tool calls audited · 0 blocked (all optimal) · 1 recommendation shown
```
XF Boundary Auditor fires on every Edit and Write. Most of the time, Stage 1 completes in ~200ms and you see a green stamp:
```
◈ XFBA 47 modules · 203 edges ✓ 0 violations
```
When something is actually wrong, Claude sees this before the write lands:
```
◈ XFBA This edit will break at runtime.

evaluator.py:203 — calls rank_tools() with 3 arguments, but it only accepts 2.
This will throw a TypeError when that code runs.

[Fix problem] Type "Fix problem" — I'll apply the repair, re-audit, and promise clean
[Show diff]   Type "Show diff" — show me the exact change before deciding
```
Supported languages: Python, TypeScript, TSX, Dart, and Bash. XFBA indexes your entire project, walks the cross-file call graph, and applies the same contract checks regardless of language. Full-stack coverage — Python backend, TypeScript frontend, Flutter mobile, and Bash scripts, all in one pass.
What it catches:
- Arity mismatches — function called with the wrong number of arguments, across files
- Broken imports — symbol renamed, moved, or deleted while callers still reference the old name
- Missing env vars — hard-coded `os.environ["KEY"]` access where the var isn't confirmed set
- Consumed stubs — functions marked as not implemented that have active callers
The four stages:
- Stage 1 (~200ms, always): Cross-language AST scan. Blocks immediately on violations.
- Stage 2 (on escalation): Xpansion cascade analysis — maps the full caller chain using the MECE boundary framework (DATA, NODES, FLOW, ERRORS). Shows consequence-first output.
- Stage 3: Concrete repair plan — each violation gets one specific file-and-line fix.
- Stage 4: Graduated consent — "show me the diff first" until two verified repairs this session, then "apply all" unlocks. Resets each session.
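The Stage 1 arity check can be sketched with Python's `ast` module. This single-file, positional-only version is a toy illustration of the idea — the real scanner is cross-file, multi-language, and handles keywords and `*args`:

```python
import ast

def find_arity_mismatches(source):
    """Flag calls whose positional-argument count falls outside the
    (required, maximum) bounds of a function defined in the same file."""
    tree = ast.parse(source)
    arities = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = node.args
            required = len(args.args) - len(args.defaults)
            arities[node.name] = (required, len(args.args))
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            bounds = arities.get(node.func.id)
            if bounds and not (bounds[0] <= len(node.args) <= bounds[1]):
                # (function name, call-site line, argument count given)
                violations.append((node.func.id, node.lineno, len(node.args)))
    return violations
```

A call like `rank_tools(1, 2, 3)` against `def rank_tools(a, b)` is exactly the class of silent-until-runtime TypeError the stamp example above shows.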
Refactor Mode: /xfa-refactor start "description" — XFBA shifts from blocking to tracking. Violations accumulate without interrupting your work. Run /xfa-refactor end when done to get the consolidated repair list. Useful when you're mid-refactor and know the code is temporarily broken across files.
Every scan leaves a record in .xf/boundary_violations.json. Every repair is logged to .xf/repair_log.json with timestamp and session ID.
XF System Impact Analyzer runs after XFBA. Where XFBA answers "does this edit break a contract?", XSIA answers "what breaks downstream when this change lands?"
A function signature changes. XFBA catches the arity mismatch at the call site. XSIA maps the full blast radius: all callers across all files, any data flows that depend on the return type, any side effects on shared state.
```
◈ XSIA 3 concerns

HIGH   evaluator.py:rank_recommendations() — 4 callers in 3 files will receive wrong
       argument count. interceptor.py:82, classifier.py:44, dispatcher.py:119, test_evaluator.py:31.
       Fix XFBA violation first — XSIA impact is accurate only on clean code.
MEDIUM category_mapper.py — imports rank_tools from evaluator. If rank_tools changes
       signature, this import chain breaks silently at call time, not import time.
LOW    stack_scanner.py:detect_stack() — reads shared state modified by rank_recommendations.
       Concurrent sessions could see inconsistent state during the edit window.
```
Six analysis dimensions (Pro): call graph impact, data flow consequences, shared state mutations, error propagation paths, import chain fragility, test coverage gaps.
XFBA is the gate. XSIA is the consequence map. XFBA runs first — if it finds a violation, XSIA still runs but flags that impact analysis is provisional until the violation is fixed. You cannot meaningfully analyze cascade impact on broken code.
XF Token Controller runs on every message you send and fires nudges when it detects invisible drains on your context budget. Most of the time it's silent.
What it watches:
- CLAUDE.md size — every line reloads on every message. 300-line config files burn thousands of tokens per session without anyone noticing.
- MCP overhead — each active MCP server adds ~18K tokens of schema overhead per message. Three idle servers = ~54K tokens gone before Claude reads your first word.
- Sub-agent model selection — Claude reaching for Opus when Haiku handles the task.
- Peak hours — 8am–2pm ET weekdays, session token budgets drain faster.
- Context fill — estimates how full your context window is using actual transcript size plus loaded files.
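The context-fill estimate and the two nudge thresholds might look roughly like this. The 4-characters-per-token heuristic and the 200K window are assumptions for illustration, not XFTC's actual constants:

```python
import os

CONTEXT_WINDOW_TOKENS = 200_000  # assumed window size
CHARS_PER_TOKEN = 4              # rough heuristic, not XFTC's real estimator

def estimate_context_fill(transcript_chars, loaded_file_paths):
    """Estimate how full the context window is from the transcript size
    plus the on-disk size of files loaded into context."""
    file_chars = sum(
        os.path.getsize(p) for p in loaded_file_paths if os.path.exists(p)
    )
    tokens = (transcript_chars + file_chars) / CHARS_PER_TOKEN
    return min(1.0, tokens / CONTEXT_WINDOW_TOKENS)

def nudge(fill):
    """Two-tier protection: nudge at 50%, hook-driven failsafe at 90%."""
    if fill >= 0.90:
        return "failsafe: write snapshot directly from the hook"
    if fill >= 0.50:
        return "nudge: /xf-mem snapshot then /compact"
    return None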
Free tier: One ghost notification per session showing what Pro would have flagged:
[XFTC] Pro would have flagged context usage here — xpansion.dev/pro
Pro tier: Full nudges, enforcement blocks, and the two-tier context protection system:
```
[XFTC] Context estimated ~52% full — snapshot + compact now to preserve everything.
Run: /xf-mem snapshot then /compact — full session saved to searchable memory, zero loss
```
At 90%, XFTC writes the snapshot directly from the hook — no Claude required. Even if you ignored the 50% nudge.
Sessions end and context is lost. Decisions you made, files you changed, open questions you were tracking — gone. The next session starts from scratch.
XF-MEM is persistent semantic memory for Claude Code. Every session is snapshotted to disk automatically. A Stop hook runs after every response. For semantic search, connect your own pgvector-compatible database (Supabase, Neon, or any Postgres+pgvector host) — your data stays in your infrastructure.
At session start: The last snapshot loads automatically via the Build State Protocol. Zero re-explanation needed.
For older sessions: /xf-mem search "query" does semantic retrieval across all stored snapshots — natural language, not keyword matching.
Snapshots are global — stored across all directories, tagged by project, searchable from anywhere.
Two-tier protection (triggered by XFTC):
- 50% nudge — Claude is prompted to run `/xf-mem snapshot` then `/compact`. Produces a curated summary: what was built, decisions made, files changed, open questions.
- 90% failsafe — XFTC writes the snapshot directly from the hook, without waiting for Claude. Extracts the last 20 transcript messages automatically. Fires even if the 50% nudge was ignored.
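The 90% failsafe's transcript extraction can be sketched as below, assuming a JSONL transcript with one message per line — the real hook's file layout and snapshot fields may differ:

```python
import json
from pathlib import Path

def write_failsafe_snapshot(transcript_path, snapshot_dir, keep=20):
    """Read the session transcript, keep the last `keep` messages, and
    write them to a snapshot file — no Claude involvement required,
    which is the point of the failsafe."""
    lines = Path(transcript_path).read_text().splitlines()
    messages = [json.loads(line) for line in lines if line.strip()]
    snapshot = {"messages": messages[-keep:], "count": len(messages)}
    out = Path(snapshot_dir) / "failsafe_snapshot.json"
    out.write_text(json.dumps(snapshot, indent=2))
    return out
```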
| Before XF-MEM | With XF-MEM (Pro) |
|---|---|
| Manual warm-start at session end | Auto-snapshot at 50% + failsafe at 90% |
| Notes stored in MEMORY.md | Disk snapshots + optional BYODB semantic index |
| Single project, manual retrieval | Cross-directory, semantic search |
| Compact = context loss | Compact = zero loss |
Code ships. Specs drift. The gap between what the spec says and what the code does is invisible until something fails in production — or until a user discovers that a promised feature doesn't work.
XF De-Bug is a spec verifier. You define what your implementation must do — each requirement expressed as a binary terminal: met or unmet. XFDB reads your spec and walks the codebase to verify every terminal independently. It doesn't summarize or estimate. Each terminal either passes or it doesn't.
Run /xfdb at any point during development:
```
[XFDB] Spec verification — 14 terminals checked

✓ XFTD recommendation surfaces on task shift
✓ Stamp protocol fires on every hook invocation (positive + negative)
✓ PreToolUse block routes through pending file, not stdout-only
✗ /dispatch pause mid-session does not interrupt active work
    Status: NOT MET
    Missing: dispatch-preuse.sh pause-state check not implemented
    Options: [Fix it] [Update spec] [Abandon terminal]
✗ Weekly new-tool digest email for Pro users
    Status: NOT MET
    Missing: no cron job or email template found
    Options: [Fix it] [Update spec] [Abandon terminal]

12 met · 2 unmet · 0 blocked
```
What XFDB is not: a test runner. It doesn't execute code. It reads spec terminals against static analysis of your codebase — checking for the presence of implementations, hook registrations, database migrations, and documented behaviors. Think of it as a completeness audit, not a correctness audit. XFBA handles correctness. XFDB handles completeness.
The spec is the source of truth. XFDB verifies the codebase against it — not the other way around. When a terminal is unmet, the options are explicit: fix the code, update the spec to reflect reality, or consciously abandon the requirement. No silent drift.
Every terminal is logged. Met terminals record the file and line where verification passed. Unmet terminals record what was searched and what was missing. The full report lives in .xf/xfdb_report.json.
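A minimal sketch of the terminal-verification loop: each terminal pairs a requirement with a static evidence pattern, and the report records where evidence was found (or that it was missing). The single-regex evidence model is a simplification of what XFDB actually checks:

```python
import re
from pathlib import Path

def verify_terminals(spec_terminals, project_root):
    """Binary met/unmet per terminal, no estimation: a terminal is met
    only if its evidence pattern appears somewhere in the codebase.
    Met terminals record file and line; unmet ones record nothing found."""
    report = []
    files = list(Path(project_root).rglob("*.py"))
    for name, pattern in spec_terminals.items():
        hit = None
        for f in files:
            for lineno, line in enumerate(f.read_text().splitlines(), 1):
                if re.search(pattern, line):
                    hit = (str(f), lineno)
                    break
            if hit:
                break
        report.append({"terminal": name, "met": hit is not None, "evidence": hit})
    return report
```

This is a completeness audit in miniature: it checks that an implementation exists, not that it is correct.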
| | Free | Pro | Team |
|---|---|---|---|
| XFTD — proactive recs | ✓ | ✓ | ✓ |
| XFTD — interceptions | Unlimited | Unlimited | Unlimited |
| XFTD — ranking quality | Good | Best (Sonnet) | Best (Sonnet) |
| XFTD — catalog | Live search | Pre-ranked, 6 sources | Pre-ranked, 6 sources |
| XFBA (Boundary Auditor) | Basic | ✓ Unlimited | ✓ Unlimited |
| XSIA (Impact Analyzer) | — | ✓ Unlimited, 6 dimensions | ✓ Unlimited, 6 dimensions |
| XFTC — CLAUDE.md check | ✓ | ✓ | ✓ |
| XFTC — nudges (MCP, context, model) | — | ✓ Unlimited | ✓ Unlimited |
| XFTC — enforcement blocks | — | ✓ | ✓ |
| XF-MEM — 50% compact nudge | — | ✓ | ✓ |
| XF-MEM — 90% auto-snapshot failsafe | — | ✓ | ✓ |
| XF-MEM — `/xf-mem search` semantic retrieval | — | ✓ (BYODB) | ✓ (BYODB) |
| XFDB — spec verification | — | ✓ | ✓ |
| `/dispatch-compact-md` skill | ✓ | ✓ | ✓ |
| Dashboard | — | ✓ | ✓ Team view |
| Admin controls | — | — | ✓ |
| Network intelligence | — | ✓ | ✓ |
| Cost | Free | $15/mo (Founder: $6) | $12/seat/mo |
| Data sharing | Task labels only | Task labels only | Task labels only |
```bash
git clone https://github.com/XpansionFramework/XF-Dispatch.git
cd XF-Dispatch
chmod +x install.sh
./install.sh
```
install.sh walks you through three things: checking dependencies, registering hooks in ~/.claude/settings.json, and connecting to the hosted endpoint (or using your own API key). Takes about two minutes.
Start a new Claude Code session after install — hooks load at session startup.
Sign up with GitHub — no API key, no card required. install.sh will ask for your token. Takes 30 seconds.
First 10 days: full Pro access — all 6 modules, unlimited detections, XF-MEM, XFDB. No card required.
After 10 days, your account stays free permanently with:
- XFTD: unlimited routing + proactive recommendations on every task shift
- XFBA: basic contract checks — catches broken imports, arity mismatches, and missing env vars
- XFTC: CLAUDE.md size check on every session
Upgrade to Pro anytime to keep the full suite.
What leaves your machine: your last ~3 messages and working directory path, sent to classify the task and then discarded. We store your GitHub username, usage count, and task type labels (e.g., flutter-fixing) — no conversation content.
Founding offer: First 250 subscribers lock in $6/month for life. After 250, standard rate of $15/month applies.
- XFTD: unlimited turns, Sonnet ranking, pre-ranked catalog from 16,000+ tools
- XFBA: full Stages 1–4 (AST scan → cascade analysis → repair plan → graduated consent), unlimited
- XSIA: full 6-dimension impact analysis on every edit, unlimited
- XFTC: all nudges, enforcement blocks, context protection
- XF-MEM: auto-snapshot at 50%, failsafe at 90%, semantic search across all sessions
- XFDB: spec verification on demand — terminal-by-terminal completeness audit
- Dashboard: interception history, contract repair history, provenance log
The catalog is the compounding advantage. The hosted version sees what thousands of developers actually installed after an XFTD suggestion, which tools they bypassed, and which ones stuck. That signal builds over time and no local setup can replicate it.
Contact us — minimum 3 seats.
Everything in Pro, plus:
- Team dashboard: usage and interception analytics across all seats
- Admin controls: manage seats, view team-wide block/bypass patterns
- Priority support
Self-hosting: Prefer to run on your own infrastructure? BYOK is free and open source →
- Claude Code (hooks support required — v1.x+)
- Python 3.8+
- Node.js + npx — nodejs.org
- One of: an XF account (free) or an Anthropic API key
The anthropic Python package installs automatically via install.sh.
Most of the time, the Xpansion Framework is invisible. Hook 1 runs on every message and exits silently unless it detects a shift. Hook 2 runs on every tool call but exits silently unless it finds something meaningfully better. XFBA/XSIA stamp ✓ and pass through. XFTC stays quiet when the session is clean.
When Hook 1 fires (on task shift): A proactive list of recommended tools appears in Claude's context, grouped by Plugins, Skills, and MCPs. Ask Claude to explain the differences, paste an install command, or ignore the list and keep working. XFTD won't show the same category again this session.
When Hook 2 fires: Claude pauses and shows you the comparison. Three options:
- Say `proceed` — Claude uses its original tool choice, one-time bypass, no restart needed
- Install the top pick — run `/compact` to save session context, paste the install command, restart CC
- Say `skip dispatch` — XFTD ignores this task type going forward in the session
The threshold is a 10-point gap. If the best marketplace alternative scores 75 and Claude's tool scores 64, XFTD blocks. A 9-point gap passes through silently.
When XFBA blocks: Claude shows you the violation in plain English, explains whether it looks like a real bug or a false positive, and asks: "Fix it, suppress it, or proceed anyway?" — it waits for your answer.
When XSIA flags: Claude surfaces concerns in plain English, explains whether each looks routine or substantive, and asks: "Fix impact issues or let it ride?" — it waits for your answer.
When XFTC fires: A nudge appears at the top of Claude's next response. Surface-level notices (CLAUDE.md size, MCP overhead) need no action — just awareness. Context fill nudges are actionable: run /xf-mem snapshot then /compact to preserve everything before the window fills.
When XFDB runs: Claude presents the terminal report. For each unmet terminal, it waits for your decision — fix the code, update the spec, or consciously abandon the requirement.
| Command | How to use | What it does |
|---|---|---|
| `proceed` | Say it conversationally | One-time bypass — XFTD lets the current tool call through, no restart needed |
| `skip dispatch` | Say it conversationally | Ignore XFTD for this task type for the rest of the session |
| `/dispatch status` | Slash command | Show session stats — tool calls audited, blocks, recommendations shown |
Coming soon:
| Command | What it will do |
|---|---|
| `/dispatch pause` | Disable both hooks for this session without uninstalling |
| `/dispatch resume` | Re-enable after a pause |
| `/dispatch stack` | Show what stack_scanner detected for the current project |
| `/dispatch why` | Explain the last block — task type, category, top tool score vs CC score |
| `/dispatch ignore [tool]` | Permanently exclude a specific tool from all recommendations |
| `/dispatch feedback good` | Mark the last recommendation as correct (strong positive signal) |
| `/dispatch feedback bad` | Mark the last recommendation as wrong |
| Command | How to use | What it does |
|---|---|---|
| `/xfa-refactor start "description"` | Slash command | Enter Refactor Mode — violations accumulate without blocking |
| `/xfa-refactor end` | Slash command | Exit Refactor Mode — presents consolidated repair list |
When XFBA blocks an edit, Claude reads the options and acts:
- Say `Fix problem` — Claude applies the repair, re-audits, outputs `<promise>XFBA_CLEAN</promise>` when clean
- Say `Show diff` — Claude shows exactly what the repair changes before applying it
- After Show diff, say `Apply fix` — apply the shown change, re-audit, promise clean
- Say `I'll handle it` — allow the edit through; violation logged to `.xf/repair_log.json`
Coming soon:
| Command | What it will do |
|---|---|
| `/xfa pause` | Disable XF Audit blocking for this session (violations still logged) |
| `/xfa resume` | Re-enable after a pause |
| `/xfa report` | Show repair_log.json summary — violations caught, files touched |
| `/xfa clear` | Clear open violations in .xf/boundary_violations.json |
| Command | How to use | What it does |
|---|---|---|
| `/xf-mem snapshot` | Slash command | Write a snapshot of the current session to persistent memory |
| `/xf-mem search "query"` | Slash command | Semantic search across all stored session snapshots |
| Command | How to use | What it does |
|---|---|---|
| `/xfdb` | Slash command | Run spec verification — checks all terminals, reports met/unmet |
Each recommended tool shows three components so you can judge it yourself:
- Relevance — how well the tool's description matches your specific task, scored by a fast LLM pass. Tools with no description score zero and get a visible warning.
- Signal — popularity as a quality proxy, weighted across installs, stars, and forks. Log-scaled so a newer tool with 500 installs isn't buried by one with 50,000.
- Velocity — install momentum relative to how long the tool has existed. A tool gaining traction fast ranks higher than one that peaked years ago.
All three factors contribute to the final score. XFTD blocks when the top marketplace score beats CC's confidence by a meaningful margin.
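One plausible way the three components could combine, with the 10-point gap rule from the interception flow. The weights and log scaling below are assumptions — the document states only that all three components contribute and that signal is log-scaled:

```python
import math

def signal_score(installs, stars, forks):
    """Log-scaled popularity, so a newer tool with 500 installs isn't
    buried by one with 50,000. Weights and base are illustrative."""
    raw = (3 * math.log10(installs + 1)
           + 2 * math.log10(stars + 1)
           + math.log10(forks + 1))
    return min(100.0, raw * 5)

def final_score(relevance, signal, velocity):
    """Weighted blend of the three components (assumed weights)."""
    return 0.5 * relevance + 0.3 * signal + 0.2 * velocity

def should_block(best_marketplace_score, cc_confidence, gap=10):
    """XFTD blocks only when the best alternative beats CC's choice by 10+."""
    return best_marketplace_score - cc_confidence >= gap
```

Under these assumed weights, the interception example's numbers (relevance 91, signal 78, velocity 62 against a CC confidence of 62) clear the 10-point gap and trigger a block.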
Tools are grouped by type (Plugins / Skills / MCPs), up to 3 per group. Raw installs, stars, and forks are shown so you can verify the signal yourself.
No description = relevance 0. If a tool has no README or description, it can't score on relevance — only signal and velocity. It'll still appear if community adoption is strong, but with a ⚠ warning. XFTD sends outreach to undescribed tool authors automatically.
Caveat: XFTD surfaces tools based on community signals and task context — not a security audit. Review any tool before installing.
Free — hits the live skills.sh marketplace and glama.ai MCP registry on each intercept (~2–4s). Relevance is scored by an LLM using the tool description.
Pro — pulls from a pre-ranked catalog built by a daily crawl across npm, skills.sh, glama.ai, and the Claude plugin registries. Tools are scored during the crawl — all three components pre-computed. At intercept time, XFTD maps your task to the closest taxonomy leaf and returns a pure catalog query. Intercept response is <200ms, no LLM call at hook time.
XFTD recommends from the full marketplace — installed or not. But its scores improve with better tool descriptions. Add the official marketplaces to give it more signal:
```
/plugins add anthropics/claude-plugins-official
/plugins add ananddtyagi/claude-code-marketplace
```
Browse for skills relevant to your stack:
```bash
npx skills find flutter
npx skills find supabase
npx skills find react
```
The more skills in the registry that match your work, the more often XFTD has something useful to surface.
XFTD uses a hierarchical MECE taxonomy with 16 top-level categories: source-control, data-storage, search-discovery, ai-ml, frontend, mobile, backend, infrastructure, delivery, integrations, identity-security, observability, testing, data-engineering, payments, documentation. Each category breaks down into subcategories and leaf nodes (e.g. data-storage → relational → postgresql).
When Haiku detects a task shift, it generates a specific label like flutter-fixing or postgres-rls-query. XFTD maps that label to the closest taxonomy leaf — scoring token overlap against 100+ leaf nodes and their tags. The leaf drives marketplace search with precise vocabulary (e.g. postgresql maps to postgres/rls/migration/query terms), more targeted than keyword-splitting the task label directly.
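The label-to-leaf mapping by token overlap can be sketched like this, with a three-leaf toy table standing in for the 100+-leaf taxonomy:

```python
# Hypothetical leaf table; the real taxonomy has 100+ leaves with tags.
TAXONOMY_LEAVES = {
    "mobile/flutter": {"flutter", "dart", "widget", "pub"},
    "data-storage/relational/postgresql": {"postgres", "rls", "migration", "query", "sql"},
    "testing/unit": {"test", "pytest", "assert", "coverage"},
}

def map_to_leaf(task_label):
    """Map a classifier label like 'postgres-rls-query' to the taxonomy
    leaf whose tags share the most tokens with the label."""
    tokens = set(task_label.lower().replace("_", "-").split("-"))
    best_leaf, best_overlap = None, 0
    for leaf, tags in TAXONOMY_LEAVES.items():
        overlap = len(tokens & tags)
        if overlap > best_overlap:
            best_leaf, best_overlap = leaf, overlap
    return best_leaf
```

The leaf, not the raw label, then drives the marketplace query with the leaf's precise vocabulary.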
Pro users get the full taxonomy path sent to the catalog — results filtered by leaf node and matching tags, sorted by pre-computed signal scores with no LLM involved.
Unknown task types are logged to unknown_categories.jsonl in the dispatch directory — if you're working in a niche stack and XFTD consistently misses, that file tells you why.
On install, and again whenever you change working directories, XFTD scans your project's manifest files (package.json, requirements.txt, go.mod, Cargo.toml, pubspec.yaml, etc.) to build a stack profile. Pro users' catalog results are reranked using this profile — a Flutter project gets flutter-mobile-app-dev ranked higher than a generic mobile tool even if their base scores are similar.
The stack profile lives at ~/.claude/dispatch/stack_profile.json and updates automatically.
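The manifest scan behind the stack profile reduces to checking the project root for known files. The manifest-to-stack table below is illustrative, covering the files the section lists:

```python
from pathlib import Path

# Illustrative mapping from manifest file to stack label.
MANIFEST_SIGNALS = {
    "package.json": "node",
    "requirements.txt": "python",
    "go.mod": "go",
    "Cargo.toml": "rust",
    "pubspec.yaml": "flutter",
}

def build_stack_profile(project_root):
    """Return the detected stack labels for a project — roughly the kind
    of data stack_scanner writes to ~/.claude/dispatch/stack_profile.json."""
    root = Path(project_root)
    return sorted(
        stack for manifest, stack in MANIFEST_SIGNALS.items()
        if (root / manifest).exists()
    )
```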
XFTD isn't intercepting anything
- Start a new Claude Code session after install — hooks load at startup
- Check both hooks are registered: look for `UserPromptSubmit` and `PreToolUse` entries in `~/.claude/settings.json`
- Verify your key or token: `cat ~/.claude/dispatch/config.json`
XFTD fires but passes everything through
- This is correct behavior most of the time — it only blocks when the gap is 10+ points
- If marketplace search returns nothing, there's nothing to compare against
Proactive recommendations aren't appearing
- Start a new Claude Code session after install — hooks load at startup
- Check that Hook 1 is registered: look for `UserPromptSubmit` in `~/.claude/settings.json`
- Proactive recommendations fire only on a confirmed task shift with confidence ≥ 0.7 — if you're continuing the same topic, no output is expected
XFBA isn't catching anything
- XFBA runs on Edit and Write tool calls. It won't fire on Bash commands or file reads.
- Check that the XF Audit hook is registered: look for `PreToolUse` entries with `xf-boundary-auditor.sh`
Hook is slow
- 10s hard timeout — Claude proceeds normally if exceeded
- Pro/Team catalog responses are <200ms; Free tier search takes 2–4s
"Degraded mode" warning during install
- The `anthropic` package installed but Python can't import it (common on system Python with PEP 668 restrictions)
- Fix: `pip3 install anthropic --break-system-packages` or use a virtualenv
```bash
bash uninstall.sh
```
Removes all installed files, hook scripts, and settings.json entries automatically. Also cleans up pre-v0.9.2 installs if present.
- No `~/.claude/CLAUDE.md` modification — the Xpansion Framework doesn't touch your global Claude instructions
- No credential harvesting — reads only `ANTHROPIC_API_KEY` from your environment
- No shell injection — task type labels always passed as `sys.argv`, never interpolated into shell strings
- Open source — every line of every hook and Python module is in this repo; verify before installing
- 10-second hard timeout — enforced by Claude Code; the Xpansion Framework cannot hang your session
BYOK: Haiku calls go directly from your machine to Anthropic. Nothing passes through our servers.
Hosted (Free, Pro, and Team): The following data is sent to and stored at xpansion.dev:
| Data | Stored? | Notes |
|---|---|---|
| Last ~3 messages | No | Sent for classification, discarded immediately |
| Working directory path | No | Sent for context, not stored |
| GitHub username + email | Yes | Collected via GitHub OAuth at signup |
| Task type label (e.g. flutter-fixing) | Yes | Stored per interception event |
| Tool intercepted + relevance scores | Yes | Tool name, CC score, marketplace score |
| Blocked / bypassed / installed | Yes | Powers your Pro dashboard |
| Stack profile (languages/frameworks) | Local only | Stored in ~/.claude/dispatch/stack_profile.json |
| XF-MEM snapshots | Local only | Pro only; stored on your machine, never sent to our servers. Semantic search requires your own BYODB connection. |
We don't store conversation content. We don't sell individual user data. Aggregate, anonymized patterns (e.g. what percentage of mobile developers install Flutter skills after an XFTD suggestion) improve catalog rankings network-wide.
Creator outreach: When the daily catalog crawl finds a skill with install activity but no description, XFTD may open a GitHub issue on that repo asking the creator to add a description. At most once per repo per 30 days.
To delete your account and all stored data, email hello@xpansion.dev. To stop all data sharing immediately, switch to BYOK mode.
Open source, MIT licensed. The classifier taxonomy and category mapping are the most impactful places to contribute — better category coverage means better marketplace routing for everyone.
Open an issue with:
- What task type XFTD detected
- Whether the recommendations were relevant
- Stack you were working in
Pull requests welcome.
Every Claude Code session has the same six problems. Claude picks tools from defaults while 50,000+ purpose-built options exist. It produces code that doesn't connect — renames a function and misses every caller. It can't show you what breaks downstream when a change lands. Token hogs and peak hours drain your budget invisibly. Compaction loses the decisions, files, and open questions you were tracking. And implementation drifts silently from spec until something ships broken.
The Xpansion Framework covers all six. XFTD is the runtime layer that ensures Claude reaches for the best tool. XFBA closes the contract gap at the edit boundary, where context is still live and the fix is near-zero cost. XSIA maps the blast radius before the change lands. XFTC keeps the session lean. XF-MEM makes compacting lossless. XFDB keeps implementation honest against the spec.
One install. Everything logged. The hosted version compounds over time — it knows what tools thousands of developers actually reached for when they were doing exactly what you're doing now, and which ones they kept. Start free.
Built by Visionairy.
Every XF module is built on the same foundation: recursive MECE boundary discovery, developed by Visionairy and applied across code contracts, system impact analysis, token budgets, session memory, and spec verification.
The core idea: every system has boundaries. Every boundary has callers. Every caller is a branch. Discovery terminates when the graph is exhausted or the use case is satisfied — not before, not after. The framework enforces this discipline systematically across four boundary types: DATA (what flows), NODES (what processes), FLOW (how it moves), ERRORS (what breaks it).
| XF concept | Applied in |
|---|---|
| Boundary definition | XFBA — function signatures, import contracts, env vars, stubs |
| Recursive branch discovery | XSIA — traces every caller of every broken boundary |
| MECE termination | Cascade stops when the call graph is exhausted, no gaps, no overlaps |
| Appropriate depth | Stage 1 always runs; Stages 2–4 escalate only when violations exist |
| Terminal verification | XFDB — each spec requirement is binary met/unmet, no estimation |
The Xpansion Framework is the first public application of MECE boundary methodology to AI-generated code. The same methodology powers Visionairy's system analysis, process design, and debugging tools across all projects.
claude-code-hooks — the most complete public reference for Claude Code hook events. Documents 26 distinct hook types including several that most developers don't know exist: PostToolUseFailure, PreCompact/PostCompact, WorktreeCreate/WorktreeRemove, TaskCreated/TaskCompleted, CwdChanged, FileChanged. The Xpansion Framework currently uses 3 of these (UserPromptSubmit, PreToolUse, Stop). If you're building hooks, start here.
There is no dedicated hook registry today — no glama.ai or Smithery equivalent for hook-based tools. Skills have skills.sh. MCPs have glama and Smithery. Hooks have nothing. The Xpansion Framework plans to be the first catalog to index hook-based tools as the pattern grows.
The XF codebase is monitored by XFBA during development. Every edit Claude makes to any XF module is checked for contract breaks before it lands.
In practice, this meant:
- The arity checker caught 12 real violations during an eng review pass — functions being called with the wrong number of arguments across the codebase, all silently waiting to throw TypeErrors at runtime.
- The silent exception checker (added after a production incident) caught the pattern that caused 99 minutes of cron work to go to /dev/null — a bare `except Exception` that printed a warning but reported success regardless.
- The stub checker surfaced unimplemented functions with active callers before they ever reached a user session.
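That silent-exception pattern is mechanically detectable. A sketch of the check — flagging broad `except` handlers that never re-raise — as a single-file illustration, not XFBA's actual implementation:

```python
import ast

def find_silent_exceptions(source):
    """Flag bare `except:` and `except Exception:` handlers whose body
    never re-raises — the pattern that swallowed 99 minutes of cron work
    while reporting success."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            broad = (
                node.type is None
                or (isinstance(node.type, ast.Name) and node.type.id == "Exception")
            )
            reraises = any(isinstance(n, ast.Raise) for n in ast.walk(node))
            if broad and not reraises:
                violations.append(node.lineno)
    return violations
```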
We eat our own cooking. The tool that ships with the Xpansion Framework is the tool we use to build it.
With TypeScript and Dart scanner support, XFBA also monitors LC-Access (React Native — 28 TS modules indexed) and Perimeter (Flutter — 49 Dart modules indexed) during development. Every edit Claude makes across all three codebases is checked before it lands.
- Hosted endpoint (xpansion.dev)
- PreToolUse interception — blocks on 10+ point gap
- Category-first routing — 16 MECE categories
- Pre-ranked catalog — daily cron, signal-scored (installs/stars/forks/freshness)
- Stack detection — auto-detects languages/frameworks from manifest files
- Pro dashboard — interception history, block rate, install conversions, quota
- Install conversion tracking — detects when users install suggested tools
- Creator outreach — GitHub issues for undescribed skills (max 1/repo/month)
- Slack notifications — signup, upgrade, conversion, daily digest, cron completion
- `/dispatch status` command
- Proactive recommendations — grouped by type (Plugins/Skills/MCPs) at task shift
- Session digest — Stop hook shows what XFTD did each session
- `/xfa-refactor start/end` — Refactor Mode for XF Audit
- TypeScript, TSX, and Dart scanner support — XFBA/XSIA cover React Native and Flutter
- XFTC — token control (MCP overhead, peak hours, context fill, model coaching)
- XF-MEM — session memory (auto-snapshot, semantic search, 90% failsafe)
- `/dispatch pause/resume` — disable hooks mid-session without uninstalling
- `/dispatch stack` — show detected project stack
- `/dispatch why` — explain last block decision
- `/dispatch ignore [tool]` — permanent per-tool exclusion
- `/dispatch feedback good/bad` — explicit recommendation signal
- `/xfa pause/resume` — disable XF Audit blocking mid-session
- `/xfa report` — session repair summary
- `/xfa clear` — clear stale violations
- skills.sh distribution (`npx skills add XpansionFramework/XF-Dispatch`)
- CC marketplace submission
- Weekly new-tool digest email for Pro users
- Aggregate insights API (category trends, CC gap analysis)