Turn your organization's markdown docs into a queryable knowledge graph.
Point OrgIntel at any folder of .md files. It builds a knowledge graph, indexes everything for semantic search, and lets you chat with your docs using local AI. No API keys required.
- Ingest — Point to a folder. OrgIntel discovers all markdown files, classifies them by type (projects, backlogs, roles, stories...), and indexes them.
- Explore — An interactive D3.js graph shows your organization: projects, services, technologies, and how they connect.
- Chat — Ask questions in natural language. A hybrid RAG pipeline searches both vectors and graph structure to ground answers.
- Simulate — Run what-if scenarios: "What if we drop a project?", "What if we lose 50% capacity?"
- Python 3.12+
- uv — Python package manager
- Ollama — Local LLM (free, no API key)
- Docker (optional) — Only needed for Neo4j graph database
```bash
# Install tools
curl -LsSf https://astral.sh/uv/install.sh | sh
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2

# Clone and install
git clone https://github.com/iMark21/orgintel.git
cd orgintel
uv sync

# Start
uv run uvicorn orgintel.main:app --port 8000
```

Open http://localhost:8000 — the onboarding screen will guide you:
- Browse to select your workspace folder (native OS file picker)
- Or click "Try with demo data" to explore immediately
```
LLM_PROVIDER=anthropic
LLM_MODEL=claude-sonnet-4-20250514
ANTHROPIC_API_KEY=sk-ant-api03-...
```

```bash
cp .env.example .env
docker compose up --build
# Open http://localhost:3000 (frontend) or http://localhost:8000 (API + Swagger)
```

- First-run welcome screen detects empty state
- Browse button opens native file picker (macOS / Linux / Windows)
- "Try with demo data" seeds 10 sample documents in one click
- Or paste any path manually
- Discovers markdown files recursively
- Classifies by type: CONTEXT, BACKLOG, AGENT, ROLE, STORY, PROPOSAL, FISCAL, and more
- Builds relationships: BELONGS_TO, USES, HAS_TASK, SUPPORTS...
- Works in vector-only mode (no Neo4j needed) or with Neo4j for full graph queries
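The recursive discovery step can be sketched in a few lines; this is a minimal illustration, not OrgIntel's actual API (`discover_markdown` is a hypothetical name):

```python
from pathlib import Path

def discover_markdown(workspace: str) -> list[Path]:
    """Recursively collect every .md file under the workspace root."""
    return sorted(Path(workspace).rglob("*.md"))
```

Classification and relationship extraction then run over the discovered paths.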
- D3.js force-directed layout with zoom and pan
- Color-coded nodes (projects, services, technologies, roles, fiscal)
- Directional arrows on relationships
- Node sizing by connection count
- Hover tooltips, click for detail panel
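Sizing nodes by connection count is typically a logarithmic scale, so hubs stand out without dwarfing leaf nodes. A sketch of the idea (hypothetical helper, not the actual frontend code):

```python
import math

def node_radius(degree: int, base: float = 6.0, scale: float = 4.0) -> float:
    """Radius grows with the log of the connection count, so a node
    with 50 edges is larger than one with 5, but not 10x larger."""
    return base + scale * math.log1p(degree)
```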
- Hybrid retrieval: vector similarity (ChromaDB) + graph traversal
- Local LLM via Ollama (free) or Claude API
- Source citations in every answer
- Suggested starter questions
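Hybrid retrieval can be pictured as merging two candidate sets: direct vector hits, plus graph neighbors of those hits at a discounted score. A minimal sketch under assumed data shapes (function and field names are illustrative, not OrgIntel's real retriever):

```python
def hybrid_retrieve(query_hits, graph_neighbors, k=5):
    """Merge vector-search hits with graph-expanded context.

    query_hits: list of (doc_id, similarity) from the vector store.
    graph_neighbors: dict mapping doc_id -> related doc_ids from the graph.
    """
    scores = dict(query_hits)
    for doc_id, _ in query_hits:
        for neighbor in graph_neighbors.get(doc_id, []):
            # Related documents inherit a discounted score so that
            # direct matches still rank first.
            scores[neighbor] = max(scores.get(neighbor, 0.0),
                                   0.5 * scores[doc_id])
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The top-k documents then ground the LLM's answer, with their paths emitted as citations.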
- 5 card-based simulation templates:
- Drop a project — impact on resources and services
- Lose capacity — which P0 tasks still fit
- Migrate tech — effort estimation per project
- Cut a service — orphaned projects
- Add a project — resource feasibility
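The "lose capacity" scenario reduces to a greedy priority filter: given a reduced effort budget, which highest-priority tasks still fit. A minimal sketch (the task shape is an assumption, not the engine's actual schema):

```python
def tasks_that_fit(tasks, capacity_pct):
    """Greedy pass: sort by priority (P0 first), keep tasks while the
    reduced capacity lasts. Each task carries an 'effort' in points."""
    budget = sum(t["effort"] for t in tasks) * capacity_pct
    kept = []
    for task in sorted(tasks, key=lambda t: t["priority"]):
        if task["effort"] <= budget:
            kept.append(task["id"])
            budget -= task["effort"]
    return kept
```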
- Ollama (default): free, local, no API key — llama3.2, mistral, phi3...
- Anthropic: Claude API with streaming
- Switch with one env var
OrgIntel auto-detects structure. The more organized your workspace, the richer the graph:
| Type | Pattern | What gets extracted |
|---|---|---|
| Project | */CONTEXT.md | Name, vertical, tech stack, version |
| Backlog | */BACKLOG.md | Tasks with ID, priority, status |
| Service | */AGENT.md | Service name, scope |
| Role | */PM.md, TL.md, PO.md | Role type, vertical |
| Story | */stories/*.md | ID, title, status (from frontmatter) |
| Proposal | */proposals/*.md | Title, status |
| Convention | CONVENTIONS.md, STACK.md | Standards, tech stack tables |
| Document | Any .md | Everything else; full-text indexed for search |
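Pattern-based classification like the table above can be sketched with the standard library's `fnmatch`; this is an illustrative mapping, not OrgIntel's real classifier:

```python
from fnmatch import fnmatch

PATTERNS = [
    ("*/CONTEXT.md", "PROJECT"),
    ("*/BACKLOG.md", "BACKLOG"),
    ("*/AGENT.md", "SERVICE"),
    ("*/stories/*.md", "STORY"),
    ("*/proposals/*.md", "PROPOSAL"),
]

def classify(path: str) -> str:
    """First matching pattern wins; anything unmatched is a plain
    document, full-text indexed only."""
    for pattern, doc_type in PATTERNS:
        if fnmatch(path, pattern):
            return doc_type
    return "DOCUMENT"
```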
| Layer | Technology |
|---|---|
| Language | Python 3.12 |
| Package Manager | uv |
| API | FastAPI + uvicorn |
| LLM | Ollama (local, free) or Claude API |
| Agents | LangGraph + langchain |
| Knowledge Graph | Neo4j 5 (optional) |
| Vector Store | ChromaDB (in-process) |
| Embeddings | sentence-transformers (all-MiniLM-L6-v2, local) |
| Frontend | HTML/CSS/JS + D3.js v7 |
| Infrastructure | Docker Compose |
| Tests | pytest (59 tests) |
| Linting | ruff |
| Method | Path | Description |
|---|---|---|
| GET | / | Health check |
| GET | /health | Health + LLM config |
| GET | /docs | Swagger UI |
| POST | /api/ingest | Ingest a workspace |
| POST | /api/seed-demo | Load demo data |
| GET | /api/browse | Native folder picker |
| POST | /api/chat | Chat with RAG |
| POST | /api/chat/stream | Chat (SSE streaming) |
| GET | /api/search?q= | Semantic search |
| GET | /api/scenarios/templates | List scenario templates |
| POST | /api/scenarios/run | Run a scenario |
| GET | /api/graph/data | Graph data for D3.js |
| GET | /api/llm/config | Current LLM provider |
```
orgintel/
├── pyproject.toml
├── docker-compose.yml
├── Dockerfile
├── src/orgintel/
│   ├── main.py        # FastAPI app
│   ├── config.py      # Settings from .env
│   ├── models/        # Pydantic models
│   ├── ingestion/     # Classifier, parser, extractor, pipeline
│   ├── graph/         # Neo4j client, schema, queries
│   ├── rag/           # Embeddings, vectorstore, retriever, generator
│   ├── agents/        # LangGraph multi-agent system
│   ├── scenarios/     # What-if simulation engine
│   └── api/           # REST endpoints
├── frontend/          # HTML/CSS/JS + D3.js
├── tests/             # 59 tests
└── scripts/           # CLI tools
```
## Technical Decisions
| Decision | Rationale |
|---|---|
| Ollama as default | Free, local, zero friction |
| ChromaDB over Qdrant | In-process like SQLite, no Docker needed |
| Local embeddings | all-MiniLM-L6-v2 on CPU, no API cost |
| Neo4j optional | Vector-only mode works without Docker |
| Hybrid retrieval | Vectors find semantics, graph finds structure |
| LangGraph | Typed state machines over free-form agents |
| Section-based chunking | Markdown sections are natural semantic units |
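Section-based chunking amounts to splitting at markdown heading lines, so each chunk keeps a heading with the body that follows it. A minimal sketch of the idea (not the exact implementation):

```python
import re

def chunk_by_section(markdown: str) -> list[str]:
    """Split a markdown document at heading lines, keeping each
    heading attached to its body."""
    chunks, current = [], []
    for line in markdown.splitlines():
        if re.match(r"^#{1,6} ", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]
```

Each chunk is then embedded independently, so a retrieved passage is a coherent semantic unit rather than an arbitrary token window.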
## Knowledge Graph Schema
Nodes: Project, Vertical, Service, Role, Task, Story, Technology, FiscalDomain, Proposal, Document
Relationships: BELONGS_TO, USES, HAS_TASK, HAS_STORY, OWNS, DOCUMENTS, SUPPORTS, MANAGES, PROPOSED
MIT
Built with FastAPI, LangGraph, ChromaDB, D3.js, and Ollama.
