diff --git a/.claude/hooks/session-start.sh b/.claude/hooks/session-start.sh
new file mode 100755
index 0000000..7574fff
--- /dev/null
+++ b/.claude/hooks/session-start.sh
@@ -0,0 +1,5 @@
+#!/bin/bash
+# Wrapper for the Python SessionStart hook
+# $CLAUDE_PROJECT_DIR is provided by Claude, but we can fallback to PWD
+PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$(pwd)}"
+python3 "$PROJECT_DIR/scripts/hooks/session_start.py"
diff --git a/.claude/settings.json b/.claude/settings.json
new file mode 100644
index 0000000..241121b
--- /dev/null
+++ b/.claude/settings.json
@@ -0,0 +1,15 @@
+{
+  "hooks": {
+    "SessionStart": [
+      {
+        "matcher": ".*",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/session-start.sh"
+          }
+        ]
+      }
+    ]
+  }
+}
diff --git a/.claude/settings.local.json b/.claude/settings.local.json
new file mode 100644
index 0000000..b12585e
--- /dev/null
+++ b/.claude/settings.local.json
@@ -0,0 +1,9 @@
+{
+  "permissions": {
+    "allow": [
+      "Bash(python3:*)"
+    ],
+    "deny": [],
+    "ask": []
+  }
+}
diff --git a/.gitignore b/.gitignore
index cbf76c1..a458482 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,3 @@
-secrets.yaml
 __pycache__/
 *.pyc
 *.pyo
@@ -8,3 +7,9 @@ env/
 venv/
 .env
 .DS_Store
+
+.agents/registry/*.json
+.agents/messages/**/*.json
+!.agents/**/.keep
+logs/
+.agents/
diff --git a/AGENTS.md b/AGENTS.md
index 9d77104..2120b4b 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -1,101 +1,13 @@
 # AI Agent Instructions
 
-You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**.
+You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the Agent Harness workflows.
 
-## Core Philosophy
-**"If it's not documented in `docs/tasks/`, it didn't happen."**
-
-## Workflow
-1. **Pick a Task**: Run `python3 scripts/tasks.py next` to find the best task, `context` to see active tasks, or `list` to see pending ones.
-2. **Plan & Document**:
-    * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information.
-    * **Security Check**: Ask the user about specific security considerations for this task.
-    * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file.
-    * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`.
-3. **Implement**: Write code, run tests.
-4. **Update Documentation Loop**:
-    * As you complete sub-tasks, check them off in the task document.
-    * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file.
-    * Record key architectural decisions in the task document.
-    * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it.
-5. **Review & Verify**:
-    * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`.
-    * Ask a human or another agent to review the code.
-    * Once approved and tested, update status to `verified`.
-6. **Finalize**:
-    * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`.
-    * Record actual effort in the file.
-    * Ensure all acceptance criteria are met.
-
-## Tools
-* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended).
-* **Next**: `./scripts/tasks next` (Finds the best task to work on).
-* **Create**: `./scripts/tasks create [category] "Title"`
-* **List**: `./scripts/tasks list [--status pending]`
-* **Context**: `./scripts/tasks context`
-* **Update**: `./scripts/tasks update [ID] [status]`
-* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format)
-* **Link**: `./scripts/tasks link [ID] [DEP_ID]` (Add dependency).
-* **Unlink**: `./scripts/tasks unlink [ID] [DEP_ID]` (Remove dependency).
-* **Index**: `./scripts/tasks index` (Generate INDEX.yaml).
-* **Graph**: `./scripts/tasks graph` (Visualize dependencies).
-* **Validate**: `./scripts/tasks validate` (Check task files).
-* **Memory**: `./scripts/memory.py [create|list|read]`
-* **JSON Output**: Add `--format json` to any command for machine parsing.
-
-## Documentation Reference
-* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules.
-* **Architecture**: Refer to `docs/architecture/` for system design.
-* **Features**: Refer to `docs/features/` for feature specifications.
-* **Security**: Refer to `docs/security/` for risk assessments and mitigations.
-* **Memories**: Refer to `docs/memories/` for long-term project context.
-
-## Code Style & Standards
-* Follow the existing patterns in the codebase.
-* Ensure all new code is covered by tests (if testing infrastructure exists).
-
-## PR Review Methodology
-When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency.
-
-### 1. Preparation
-1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"`
-2. **Fetch Details**: Use `gh` to get the PR context.
-    * `gh pr view <N>`
-    * `gh pr diff <N>`
-
-### 2. Analysis & Planning (The "Review Plan")
-**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval).
-
-Your plan must include:
-* **High-Level Summary**: Purpose, new APIs, breaking changes.
-* **Dependency Check**: New libraries, maintenance status, security.
-* **Impact Assessment**: Effect on existing code/docs.
-* **Focus Areas**: Prioritized list of files/modules to check.
-* **Suggested Comments**: Draft comments for specific lines.
-    * Format: `File: <path> | Line: <N> | Comment: <suggestion>`
-    * Tone: Friendly, suggestion-based ("Consider...", "Nit: ...").
-
-### 3. Execution
-Once the human approves the plan and comments:
-1. **Pending Review**: Create a pending review using `gh`.
-    * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)`
-    * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"`
-2. **Batch Comments**: Add comments to the pending review.
-    * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"`
-3. **Submit**:
-    * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`).
-
-### 4. Close Task
-* Update task status to `completed`.
-
-## Project Specific Instructions
-
-### Core Directives
+## Core Directives
 - **API First**: The Bible AI API is the primary source for data. Scraping (`pkg/app/passage.go` fallback) is deprecated and should be avoided for new features.
 - **Secrets**: Do not commit secrets. Use `pkg/secrets` to retrieve them from Environment or Google Secret Manager.
 - **Testing**: Run tests from the root using `go test ./pkg/...`.
 
-### Code Guidelines
+## Code Guidelines
 - **Go Version**: 1.24+
 - **Naming**:
   - Variables: `camelCase`
@@ -106,12 +18,7 @@ Once the human approves the plan and comments:
 - `pkg/bot`: Platform integration.
 - `pkg/utils`: Shared utilities.
 
-### Local Development
+## Local Development
 - **Setup**: Create a `.env` file with `TELEGRAM_ID` and `TELEGRAM_ADMIN_ID`.
 - **Run**: `go run main.go`
 - **Testing**: Use `ngrok` to tunnel webhooks or send mock HTTP requests.
-
-## Agent Interoperability
-- **Task Manager Skill**: `.claude/skills/task_manager/`
-- **Memory Skill**: `.claude/skills/memory/`
-- **Tool Definitions**: `docs/interop/tool_definitions.json`
diff --git a/AGENTS.md.bak b/AGENTS.md.bak
deleted file mode 100644
index 8c00bf1..0000000
--- a/AGENTS.md.bak
+++ /dev/null
@@ -1,117 +0,0 @@
-# AI Agent Instructions
-
-You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**.
-
-## Core Philosophy
-**"If it's not documented in `docs/tasks/`, it didn't happen."**
-
-## Workflow
-1. **Pick a Task**: Run `python3 scripts/tasks.py context` to see active tasks, or `list` to see pending ones.
-2. **Plan & Document**:
-    * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information.
-    * **Security Check**: Ask the user about specific security considerations for this task.
-    * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file.
-    * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`.
-3. **Implement**: Write code, run tests.
-4. **Update Documentation Loop**:
-    * As you complete sub-tasks, check them off in the task document.
-    * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file.
-    * Record key architectural decisions in the task document.
-    * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it.
-5. **Review & Verify**:
-    * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`.
-    * Ask a human or another agent to review the code.
-    * Once approved and tested, update status to `verified`.
-6. **Finalize**:
-    * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`.
-    * Record actual effort in the file.
-    * Ensure all acceptance criteria are met.
-
-## Tools
-* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended).
-* **Next**: `./scripts/tasks next` (Finds the best task to work on).
-* **Create**: `./scripts/tasks create [category] "Title"`
-* **List**: `./scripts/tasks list [--status pending]`
-* **Context**: `./scripts/tasks context`
-* **Update**: `./scripts/tasks update [ID] [status]`
-* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format)
-* **Link**: `./scripts/tasks link [ID] [DEP_ID]` (Add dependency).
-* **Unlink**: `./scripts/tasks unlink [ID] [DEP_ID]` (Remove dependency).
-* **Index**: `./scripts/tasks index` (Generate INDEX.yaml).
-* **Graph**: `./scripts/tasks graph` (Visualize dependencies).
-* **Validate**: `./scripts/tasks validate` (Check task files).
-* **Memory**: `./scripts/memory.py [create|list|read]`
-* **JSON Output**: Add `--format json` to any command for machine parsing.
-
-## Documentation Reference
-* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules.
-* **Architecture**: Refer to `docs/architecture/` for system design.
-* **Features**: Refer to `docs/features/` for feature specifications.
-* **Security**: Refer to `docs/security/` for risk assessments and mitigations.
-* **Memories**: Refer to `docs/memories/` for long-term project context.
-
-## Code Style & Standards
-* Follow the existing patterns in the codebase.
-* Ensure all new code is covered by tests (if testing infrastructure exists).
-
-## PR Review Methodology
-When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency.
-
-### 1. Preparation
-1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"`
-2. **Fetch Details**: Use `gh` to get the PR context.
-    * `gh pr view <N>`
-    * `gh pr diff <N>`
-
-### 2. Analysis & Planning (The "Review Plan")
-**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval).
-
-Your plan must include:
-* **High-Level Summary**: Purpose, new APIs, breaking changes.
-* **Dependency Check**: New libraries, maintenance status, security.
-* **Impact Assessment**: Effect on existing code/docs.
-* **Focus Areas**: Prioritized list of files/modules to check.
-* **Suggested Comments**: Draft comments for specific lines.
-    * Format: `File: <path> | Line: <N> | Comment: <suggestion>`
-    * Tone: Friendly, suggestion-based ("Consider...", "Nit: ...").
-
-### 3. Execution
-Once the human approves the plan and comments:
-1. **Pending Review**: Create a pending review using `gh`.
-    * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)`
-    * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"`
-2. **Batch Comments**: Add comments to the pending review.
-    * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"`
-3. **Submit**:
-    * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`).
-
-### 4. Close Task
-* Update task status to `completed`.
-
-## Project Specific Instructions
-
-### Core Directives
-- **API First**: The Bible AI API is the primary source for data. Scraping (`pkg/app/passage.go` fallback) is deprecated and should be avoided for new features.
-- **Secrets**: Do not commit secrets. Use `pkg/secrets` to retrieve them from Environment or Google Secret Manager.
-- **Testing**: Run tests from the root using `go test ./pkg/...`.
-
-### Code Guidelines
-- **Go Version**: 1.24+
-- **Naming**:
-  - Variables: `camelCase`
-  - Functions: `PascalCase` (exported), `camelCase` (internal)
-  - Packages: `underscore_case`
-- **Structure**:
-  - `pkg/app`: Business logic.
-  - `pkg/bot`: Platform integration.
-  - `pkg/utils`: Shared utilities.
-
-### Local Development
-- **Setup**: Create a `.env` file with `TELEGRAM_ID` and `TELEGRAM_ADMIN_ID`.
-- **Run**: `go run main.go`
-- **Testing**: Use `ngrok` to tunnel webhooks or send mock HTTP requests.
-
-## Agent Interoperability
-- **Task Manager Skill**: `.claude/skills/task_manager/`
-- **Memory Skill**: `.claude/skills/memory/`
-- **Tool Definitions**: `docs/interop/tool_definitions.json`
diff --git a/CLAUDE.md b/CLAUDE.md
deleted file mode 120000
index 47dc3e3..0000000
--- a/CLAUDE.md
+++ /dev/null
@@ -1 +0,0 @@
-AGENTS.md
\ No newline at end of file
diff --git a/docs/interop/tool_definitions.json b/docs/interop/tool_definitions.json
deleted file mode 100644
index f9c4572..0000000
--- a/docs/interop/tool_definitions.json
+++ /dev/null
@@ -1,185 +0,0 @@
-{
-  "tools": [
-    {
-      "type": "function",
-      "function": {
-        "name": "task_create",
-        "description": "Create a new development task.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "category": {
-              "type": "string",
-              "enum": ["foundation", "infrastructure", "domain", "presentation", "migration", "features", "testing"],
-              "description": "The category of the task."
-            },
-            "title": {
-              "type": "string",
-              "description": "The title of the task."
-            },
-            "description": {
-              "type": "string",
-              "description": "Detailed description of the task."
-            }
-          },
-          "required": ["category", "title"]
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "task_list",
-        "description": "List existing tasks, optionally filtered by status or category.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "status": {
-              "type": "string",
-              "enum": ["pending", "in_progress", "wip_blocked", "review_requested", "verified", "completed", "blocked", "cancelled", "deferred"],
-              "description": "Filter by task status."
-            },
-            "category": {
-              "type": "string",
-              "enum": ["foundation", "infrastructure", "domain", "presentation", "migration", "features", "testing"],
-              "description": "Filter by task category."
-            },
-            "archived": {
-              "type": "boolean",
-              "description": "Include archived tasks in the list."
-            }
-          }
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "task_update",
-        "description": "Update the status of an existing task.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "task_id": {
-              "type": "string",
-              "description": "The ID of the task (e.g., FOUNDATION-20230521-120000)."
-            },
-            "status": {
-              "type": "string",
-              "enum": ["pending", "in_progress", "wip_blocked", "review_requested", "verified", "completed", "blocked", "cancelled", "deferred"],
-              "description": "The new status of the task."
-            }
-          },
-          "required": ["task_id", "status"]
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "task_show",
-        "description": "Show the details of a specific task.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "task_id": {
-              "type": "string",
-              "description": "The ID of the task."
-            }
-          },
-          "required": ["task_id"]
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "task_context",
-        "description": "Show tasks that are currently in progress.",
-        "parameters": {
-          "type": "object",
-          "properties": {}
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "task_archive",
-        "description": "Archive a completed task.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "task_id": {
-              "type": "string",
-              "description": "The ID of the task to archive."
-            }
-          },
-          "required": ["task_id"]
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "memory_create",
-        "description": "Create a new long-term memory.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "title": {
-              "type": "string",
-              "description": "The title of the memory."
-            },
-            "content": {
-              "type": "string",
-              "description": "The content of the memory."
-            },
-            "tags": {
-              "type": "string",
-              "description": "Comma-separated tags for the memory."
-            }
-          },
-          "required": ["title", "content"]
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "memory_list",
-        "description": "List existing memories, optionally filtered by tag.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "tag": {
-              "type": "string",
-              "description": "Filter by tag."
-            },
-            "limit": {
-              "type": "integer",
-              "description": "Limit the number of results."
-            }
-          }
-        }
-      }
-    },
-    {
-      "type": "function",
-      "function": {
-        "name": "memory_read",
-        "description": "Read a specific memory.",
-        "parameters": {
-          "type": "object",
-          "properties": {
-            "filename": {
-              "type": "string",
-              "description": "The filename or slug of the memory to read."
-            }
-          },
-          "required": ["filename"]
-        }
-      }
-    }
-  ]
-}
diff --git a/docs/security/PERMISSIONS.md b/docs/security/PERMISSIONS.md
new file mode 100644
index 0000000..e6b3db3
--- /dev/null
+++ b/docs/security/PERMISSIONS.md
@@ -0,0 +1,29 @@
+# Agent Permission Governance Model
+
+This document defines the **Rules of Engagement** for AI agents working on this project. Agents must read and adhere to these boundaries based on their assigned role.
+
+## Authorization Matrix
+
+| Level | Name | Description | Examples |
+| :--- | :--- | :--- | :--- |
+| **L0** | **Viewer** | Read-only access to files, logs, and metadata. | `task_list`, `task_show`, `memory_list`, `ls`, `grep` |
+| **L1** | **Contributor** | Can create and edit documentation, tasks, and non-critical assets. | `task_create`, `task_update`, `memory_create`, `agent_send` |
+| **L2** | **Developer** | Can modify source code and execute tests. | `write_file (src/*)`, `run_shell_command (pytest)` |
+| **L3** | **Admin** | Irreversible or dangerous operations. | `git push`, `rm -rf`, `curl`, `deployment_trigger` |
+
+## Enforcement Protocol
+
+Agents are expected to **self-regulate** by following these steps before executing any tool:
+
+1. **Identify Tool Risk**: Check the `risk_level` of the tool in `docs/interop/TOOLS.md`.
+2. **Verify Level**: Compare the tool's risk level with the agent's assigned level (Default: **L2** for Developer agents, unless otherwise specified).
+3. **Handle Escalation**:
+    * If **Risk <= Level**: Execute autonomously.
+    * If **Risk > Level**: STOP and request explicit human confirmation.
+
+## Project Specific Policy
+
+*Default Policy for this repository:*
+- Agents are assigned **L2 (Developer)** by default.
+- **L3 (Admin)** actions ALWAYS require a human-in-the-loop.
+- Modification of `AGENTS.md` or `PERMISSIONS.md` is considered an **L3** action.
diff --git a/docs/tasks/GUIDE.md b/docs/tasks/GUIDE.md
deleted file mode 100644
index 77b8b86..0000000
--- a/docs/tasks/GUIDE.md
+++ /dev/null
@@ -1,131 +0,0 @@
-# Task Documentation System Guide
-
-This guide explains how to create, maintain, and update task documentation. It provides a reusable system for tracking implementation work, decisions, and progress.
-
-## Core Philosophy
-**"If it's not documented in `docs/tasks/`, it didn't happen."**
-
-## Directory Structure
-Tasks are organized by category in `docs/tasks/`:
-- `foundation/`: Core architecture and setup
-- `infrastructure/`: Services, adapters, platform code
-- `domain/`: Business logic, use cases
-- `presentation/`: UI, state management
-- `features/`: End-to-end feature implementation
-- `migration/`: Refactoring, upgrades
-- `testing/`: Testing infrastructure
-- `review/`: Code reviews and PR analysis
-
-## Task Document Format
-
-We use **YAML Frontmatter** for metadata and **Markdown** for content.
-
-### Frontmatter (Required)
-```yaml
----
-id: FOUNDATION-20250521-103000 # Auto-generated Timestamp ID
-status: pending # Current status
-title: Initial Project Setup # Task Title
-priority: medium # high, medium, low
-created: 2025-05-21 10:30:00 # Creation timestamp
-category: foundation # Category
-type: task # task, story, bug, epic (Optional)
-sprint: Sprint 1 # Iteration identifier (Optional)
-estimate: 3 # Story points / T-shirt size (Optional)
-dependencies: TASK-001, TASK-002 # Comma separated list of IDs (Optional)
----
-```
-
-### Status Workflow
-1. `pending`: Created but not started.
-2. `in_progress`: Active development.
-3. `review_requested`: Implementation done, awaiting code review.
-4. `verified`: Reviewed and approved.
-5. `completed`: Merged and finalized.
-6. `wip_blocked` / `blocked`: Development halted.
-7. `cancelled` / `deferred`: Stopped or postponed.
-
-### Content Template
-```markdown
-# [Task Title]
-
-## Task Information
-- **Dependencies**: [List IDs]
-
-## Task Details
-[Description of what needs to be done]
-
-### Acceptance Criteria
-- [ ] Criterion 1
-- [ ] Criterion 2
-
-## Implementation Status
-### Completed Work
-- ✅ Implemented X (file.py)
-
-### Blockers
-[Describe blockers if any]
-```
-
-## Tools
-
-Use the `scripts/tasks` wrapper to manage tasks.
-
-```bash
-# Create a new task (standard)
-./scripts/tasks create foundation "Task Title"
-
-# Create an Agile Story in a Sprint
-./scripts/tasks create features "User Login" --type story --sprint "Sprint 1" --estimate 5
-
-# List tasks (can filter by sprint)
-./scripts/tasks list
-./scripts/tasks list --sprint "Sprint 1"
-
-# Find the next best task to work on (Smart Agent Mode)
-./scripts/tasks next
-
-# Update status
-./scripts/tasks update [TASK_ID] in_progress
-./scripts/tasks update [TASK_ID] review_requested
-./scripts/tasks update [TASK_ID] verified
-./scripts/tasks update [TASK_ID] completed
-
-# Manage Dependencies
-./scripts/tasks link [TASK_ID] [DEP_ID]
-./scripts/tasks unlink [TASK_ID] [DEP_ID]
-
-# Visualization & Analysis
-./scripts/tasks graph # Show dependency graph
-./scripts/tasks index # Generate INDEX.yaml
-./scripts/tasks validate # Check for errors
-
-# Migrate legacy tasks (if updating from older version)
-./scripts/tasks migrate
-```
-
-## Agile Methodology
-
-This system supports Agile/Scrum workflows for LLM-Human collaboration.
-
-### Sprints
-- Tag tasks with `sprint: [Name]` to group them into iterations.
-- Use `./scripts/tasks list --sprint [Name]` to view the sprint backlog.
-
-### Estimation
-- Use `estimate: [Value]` (e.g., Fibonacci numbers 1, 2, 3, 5, 8) to size tasks.
-
-### Auto-Pilot
-- The `./scripts/tasks next` command uses an algorithm to determine the optimal next task based on:
-  1. Status (In Progress > Pending)
-  2. Dependencies (Unblocked > Blocked)
-  3. Sprint (Current Sprint > Backlog)
-  4. Priority (High > Low)
-  5. Type (Stories/Bugs > Tasks)
-
-## Agent Integration
-
-Agents (Claude, etc.) use this system to track their work.
-- Always check `./scripts/tasks context` or use `./scripts/tasks next` before starting.
-- Keep the task file updated with your progress.
-- Use `review_requested` when you need human feedback.
diff --git a/docs/tasks/INDEX.yaml b/docs/tasks/INDEX.yaml
deleted file mode 100644
index 0d41258..0000000
--- a/docs/tasks/INDEX.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-# Task Dependency Index
-# Generated by scripts/tasks.py index
-
-docs/tasks/domain/DOMAIN-20251207-145243-TRR-remove-legacy-scraping.md:
-
-docs/tasks/features/FEATURES-20251207-145243-YWP-expand-ask-feature.md:
-
-docs/tasks/foundation/FOUNDATION-20251207-145244-BLD-update-documentation.md:
-
-docs/tasks/migration/MIGRATION-20251207-145243-IMK-complete-gcp-migration.md:
-
-docs/tasks/migration/MIGRATION-20251212-034235-XNE-update-harness-to-latest-version.md:
diff --git a/docs/tasks/domain/DOMAIN-20251207-145243-TRR-remove-legacy-scraping.md b/docs/tasks/domain/DOMAIN-20251207-145243-TRR-remove-legacy-scraping.md
deleted file mode 100644
index 2d42fd2..0000000
--- a/docs/tasks/domain/DOMAIN-20251207-145243-TRR-remove-legacy-scraping.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: DOMAIN-20251207-145243-TRR
-status: pending
-title: Remove Legacy Scraping
-priority: medium
-created: 2025-12-07 14:52:43
-category: domain
----
-
-# Remove Legacy Scraping
-
-Deprecate and remove scraping logic in favor of Bible AI API.
diff --git a/docs/tasks/features/.keep b/docs/tasks/features/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/features/FEATURES-20251207-145243-YWP-expand-ask-feature.md b/docs/tasks/features/FEATURES-20251207-145243-YWP-expand-ask-feature.md
deleted file mode 100644
index 4f73362..0000000
--- a/docs/tasks/features/FEATURES-20251207-145243-YWP-expand-ask-feature.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: FEATURES-20251207-145243-YWP
-status: pending
-title: Expand Ask Feature
-priority: medium
-created: 2025-12-07 14:52:43
-category: features
----
-
-# Expand Ask Feature
-
-Enable /ask command for all users after verifying API stability and costs.
diff --git a/docs/tasks/foundation/.keep b/docs/tasks/foundation/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/foundation/FOUNDATION-20251207-145244-BLD-update-documentation.md b/docs/tasks/foundation/FOUNDATION-20251207-145244-BLD-update-documentation.md
deleted file mode 100644
index 5ee2e45..0000000
--- a/docs/tasks/foundation/FOUNDATION-20251207-145244-BLD-update-documentation.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-id: FOUNDATION-20251207-145244-BLD
-status: completed
-title: Update Documentation
-priority: medium
-created: 2025-12-07 14:52:44
-category: foundation
----
-
-# Update Documentation
-
-Align documentation with vibe-bootstrapper standards and document new task management capabilities.
-
-## Task Details
-The `scripts/tasks.py` utility has been updated with new "skills": `index`, `link`, `unlink`, `graph`, and `validate`. The documentation in `AGENTS.md` and `docs/tasks/GUIDE.md` needs to be updated to reflect these changes.
-
-### Acceptance Criteria
-- [x] `docs/tasks/GUIDE.md` updated with new commands.
-- [x] `AGENTS.md` updated with new commands.
-- [x] `docs/tasks/INDEX.yaml` generated and verified.
-
-## Implementation Status
-### Completed Work
-- [x] Update GUIDE.md
-- [x] Update AGENTS.md
diff --git a/docs/tasks/infrastructure/.keep b/docs/tasks/infrastructure/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/migration/.keep b/docs/tasks/migration/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/migration/MIGRATION-20251207-145243-IMK-complete-gcp-migration.md b/docs/tasks/migration/MIGRATION-20251207-145243-IMK-complete-gcp-migration.md
deleted file mode 100644
index 860b8b6..0000000
--- a/docs/tasks/migration/MIGRATION-20251207-145243-IMK-complete-gcp-migration.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-id: MIGRATION-20251207-145243-IMK
-status: in_progress
-title: Complete GCP Migration
-priority: medium
-created: 2025-12-07 14:52:43
-category: migration
----
-
-# Complete GCP Migration
-
-Finish moving resources to asia-southeast1 and verify all services.
diff --git a/docs/tasks/migration/MIGRATION-20251212-034235-XNE-update-harness-to-latest-version.md b/docs/tasks/migration/MIGRATION-20251212-034235-XNE-update-harness-to-latest-version.md
deleted file mode 100644
index dccd0c6..0000000
--- a/docs/tasks/migration/MIGRATION-20251212-034235-XNE-update-harness-to-latest-version.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-id: MIGRATION-20251212-034235-XNE
-status: completed
-title: Update Harness to Latest Version
-priority: medium
-created: 2025-12-12 03:42:35
-category: migration
-dependencies:
-type: task
----
-
-# Update Harness to Latest Version
-
-To be determined
diff --git a/docs/tasks/migration/MIGRATION-20251229-060122-RTG-update-agent-harness.md b/docs/tasks/migration/MIGRATION-20251229-060122-RTG-update-agent-harness.md
deleted file mode 100644
index 04111e8..0000000
--- a/docs/tasks/migration/MIGRATION-20251229-060122-RTG-update-agent-harness.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-id: MIGRATION-20251229-060122-RTG
-status: completed
-title: Update Agent Harness
-priority: medium
-created: 2025-12-29 06:01:22
-category: migration
-dependencies:
-type: task
----
-
-# Update Agent Harness
-
-To be determined
diff --git a/docs/tasks/presentation/.keep b/docs/tasks/presentation/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/research/.keep b/docs/tasks/research/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/review/.keep b/docs/tasks/review/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/security/.keep b/docs/tasks/security/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/tasks/testing/.keep b/docs/tasks/testing/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/memories/.keep b/scripts/__init__.py
similarity index 100%
rename from docs/memories/.keep
rename to scripts/__init__.py
diff --git a/scripts/bootstrap.py b/scripts/bootstrap.py
deleted file mode 100644
index e180f12..0000000
--- a/scripts/bootstrap.py
+++ /dev/null
@@ -1,230 +0,0 @@
-#!/usr/bin/env python3
-import os
-import sys
-import shutil
-import subprocess
-
-SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
-REPO_ROOT = os.path.dirname(SCRIPT_DIR)
-AGENTS_FILE = os.path.join(REPO_ROOT, "AGENTS.md")
-CLAUDE_FILE = os.path.join(REPO_ROOT, "CLAUDE.md")
-TEMPLATE_MAINTENANCE = os.path.join(REPO_ROOT, "templates", "maintenance_mode.md")
-
-STANDARD_HEADERS = [
-    "Helper Scripts",
-    "Agent Interoperability",
-    "Step 1: Detect Repository State",
-    "Step 2: Execution Strategy",
-    "Step 3: Finalize & Switch to Maintenance Mode"
-]
-
-PREAMBLE_IGNORE_PATTERNS = [
-    "# AI Agent Bootstrap Instructions",
-    "# AI Agent Instructions",
-    "**CURRENT STATUS: BOOTSTRAPPING MODE**",
-    "You are an expert Software Architect",
-    "Your current goal is to bootstrap",
-]
-
-def is_ignored_preamble_line(line):
-    l = line.strip()
-    # Keep empty lines to preserve spacing in custom content,
-    # but we will strip the final result to remove excess whitespace.
-    if not l:
-        return False
-
-    for p in PREAMBLE_IGNORE_PATTERNS:
-        if p in l:
-            return True
-    return False
-
-def extract_custom_content(content):
-    lines = content.splitlines()
-    custom_sections = []
-    preamble_lines = []
-    current_header = None
-    current_lines = []
-
-    for line in lines:
-        if line.startswith("## "):
-            header = line[3:].strip()
-
-            # Flush previous section
-            if current_header:
-                if current_header not in STANDARD_HEADERS:
-                    custom_sections.append((current_header, "\n".join(current_lines)))
-            else:
-                # Capture preamble (lines before first header)
-                for l in current_lines:
-                    if not is_ignored_preamble_line(l):
-                        preamble_lines.append(l)
-
-            current_header = header
-            current_lines = []
-        else:
-            current_lines.append(line)
-
-    # Flush last section
-    if current_header:
-        if current_header not in STANDARD_HEADERS:
-            custom_sections.append((current_header, "\n".join(current_lines)))
-    else:
-        # If no headers found, everything is preamble
-        for l in current_lines:
-            if not is_ignored_preamble_line(l):
-                preamble_lines.append(l)
-
-    return "\n".join(preamble_lines).strip(), custom_sections
-
-def check_state():
-    print("Repository Analysis:")
-
-    # Check if already in maintenance mode
-    if os.path.exists(AGENTS_FILE):
-        with open(AGENTS_FILE, "r") as f:
-            content = f.read()
-        if "BOOTSTRAPPING MODE" not in content:
-            print("Status: MAINTENANCE MODE (AGENTS.md is already updated)")
-            print("To list tasks: python3 scripts/tasks.py list")
-            return
-
-    files = [f for f in os.listdir(REPO_ROOT) if not f.startswith(".")]
-    print(f"Files in root: {len(files)}")
-
-    if os.path.exists(os.path.join(REPO_ROOT, "src")) or os.path.exists(os.path.join(REPO_ROOT, "lib")) or os.path.exists(os.path.join(REPO_ROOT, ".git")):
-        print("Status: EXISTING REPOSITORY (Found src/, lib/, or .git/)")
-    else:
-        print("Status: NEW REPOSITORY (Likely)")
-
-    # Check for hooks
-    hook_path = os.path.join(REPO_ROOT, ".git", "hooks", "pre-commit")
-    if not os.path.exists(hook_path):
-        print("\nTip: Run 'python3 scripts/tasks.py install-hooks' to enable safety checks.")
-
-    print("\nNext Steps:")
-    print("1. Run 'python3 scripts/tasks.py init' to scaffold directories.")
-    print("2. Run 'python3 scripts/tasks.py create foundation \"Initial Setup\"' to track your work.")
-    print("3. Explore docs/architecture/ and docs/features/.")
-    print("4. When ready to switch to maintenance mode, run: python3 scripts/bootstrap.py finalize --interactive")
-
-def finalize():
-    interactive = "--interactive" in sys.argv
-    print("Finalizing setup...")
-    if not os.path.exists(TEMPLATE_MAINTENANCE):
-        print(f"Error: Template {TEMPLATE_MAINTENANCE} not found.")
-        sys.exit(1)
-
-    # Safety check
-    if os.path.exists(AGENTS_FILE):
-        with open(AGENTS_FILE, "r") as f:
-            content = f.read()
-        if "BOOTSTRAPPING MODE" not in content and "--force" not in sys.argv:
-            print("Error: AGENTS.md does not appear to be in bootstrapping mode.")
-            print("Use --force to overwrite anyway.")
-            sys.exit(1)
-
-    # Ensure init is run
-    print("Ensuring directory structure...")
-    tasks_script = os.path.join(SCRIPT_DIR, "tasks.py")
-    try:
-        subprocess.check_call([sys.executable, tasks_script, "init"])
-    except subprocess.CalledProcessError:
-        print("Error: Failed to initialize directories.")
-        sys.exit(1)
-
-    # Analyze AGENTS.md for custom sections
-    custom_sections = []
-    custom_preamble = ""
-    if os.path.exists(AGENTS_FILE):
-        try:
-            with open(AGENTS_FILE, "r") as f:
-                current_content = f.read()
-            custom_preamble, custom_sections = extract_custom_content(current_content)
-        except Exception as e:
-            print(f"Warning: Failed to parse AGENTS.md for custom sections: {e}")
-
-    if interactive:
-        print("\n--- Merge Analysis ---")
-        if custom_preamble:
-            print("[PRESERVED] Custom Preamble (lines before first header)")
-            print(f"  Snippet: {custom_preamble.splitlines()[0][:60]}...")
-        else:
-            print("[INFO] No custom preamble found.")
-
-        if custom_sections:
-            print(f"[PRESERVED] {len(custom_sections)} Custom
Sections:") - for header, _ in custom_sections: - print(f" - {header}") - else: - print("[INFO] No custom sections found.") - - print("\n[REPLACED] The following standard bootstrapping sections will be replaced by Maintenance Mode instructions:") - for header in STANDARD_HEADERS: - print(f" - {header}") - - print(f"\n[ACTION] AGENTS.md will be backed up to AGENTS.md.bak") - - try: - # Use input if available, but handle non-interactive environments - response = input("\nProceed with finalization? [y/N] ") - except EOFError: - response = "n" - - if response.lower() not in ["y", "yes"]: - print("Aborting.") - sys.exit(0) - - # Backup AGENTS.md - if os.path.exists(AGENTS_FILE): - backup_file = AGENTS_FILE + ".bak" - try: - shutil.copy2(AGENTS_FILE, backup_file) - print(f"Backed up AGENTS.md to {backup_file}") - if not custom_sections and not custom_preamble and not interactive: - print("IMPORTANT: If you added custom instructions to AGENTS.md, they are now in .bak") - print("Please review AGENTS.md.bak and merge any custom context into the new AGENTS.md manually.") - elif not interactive: - print(f"NOTE: Custom sections/preamble were preserved in the new AGENTS.md.") - print("Please review AGENTS.md.bak to ensure no other context was lost.") - except Exception as e: - print(f"Warning: Failed to backup AGENTS.md: {e}") - - # Read template - with open(TEMPLATE_MAINTENANCE, "r") as f: - content = f.read() - - # Prepend custom preamble - if custom_preamble: - content = custom_preamble + "\n\n" + content - - # Append custom sections - if custom_sections: - content += "\n" - for header, body in custom_sections: - content += f"\n## {header}\n{body}" - if not interactive: - print(f"Appended {len(custom_sections)} custom sections to new AGENTS.md") - - # Overwrite AGENTS.md - with open(AGENTS_FILE, "w") as f: - f.write(content) - - print(f"Updated {AGENTS_FILE} with maintenance instructions.") - - # Check CLAUDE.md symlink - if os.path.islink(CLAUDE_FILE): - 
print(f"{CLAUDE_FILE} is a symlink. Verified.") - else: - print(f"{CLAUDE_FILE} is NOT a symlink. Recreating it...") - if os.path.exists(CLAUDE_FILE): - os.remove(CLAUDE_FILE) - os.symlink("AGENTS.md", CLAUDE_FILE) - print("Symlink created.") - - print("\nBootstrapping Complete! The agent is now in Maintenance Mode.") - -if __name__ == "__main__": - if len(sys.argv) > 1 and sys.argv[1] == "finalize": - finalize() - else: - check_state() diff --git a/scripts/hooks/session_start.py b/scripts/hooks/session_start.py new file mode 100644 index 0000000..f1ec4f9 --- /dev/null +++ b/scripts/hooks/session_start.py @@ -0,0 +1,64 @@ +#!/usr/bin/env python3 +""" +Session Start Hook for Claude. +Loads the latest Ledger and Handoff to restore context. +Input: JSON from Claude (source: startup|resume|clear) +Output: JSON to Claude (hookSpecificOutput: { additionalContext: ... }) +""" +import sys +import os +import json +import glob + +# Determine root +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +# scripts/hooks/ -> scripts/ -> root +REPO_ROOT = os.path.dirname(os.path.dirname(SCRIPT_DIR)) +CONTINUITY_DIR = os.path.join(REPO_ROOT, "docs", "continuity") +LEDGERS_DIR = os.path.join(CONTINUITY_DIR, "ledgers") +HANDOFFS_DIR = os.path.join(CONTINUITY_DIR, "handoffs") + +def get_latest_content(directory, name): + pattern = os.path.join(directory, "*.md") + files = glob.glob(pattern) + if not files: + return f"No {name} found." + files.sort(reverse=True) + with open(files[0], "r") as f: + return f.read() + +def main(): + # Read input JSON from stdin + try: + input_data = json.load(sys.stdin) + except json.JSONDecodeError: + # Fallback if no input (e.g. 
manual test) + input_data = {} + + # Logic: Fetch latest ledger and handoff + ledger_content = get_latest_content(LEDGERS_DIR, "Ledger") + handoff_content = get_latest_content(HANDOFFS_DIR, "Handoff") + + context_message = f""" +=== CONTINUITY RESTORED === + +[LATEST LEDGER] +{ledger_content} + +[LATEST HANDOFF] +{handoff_content} + +=========================== +""" + + # Output JSON + output = { + "continue": True, + "hookSpecificOutput": { + "additionalContext": context_message + } + } + print(json.dumps(output)) + +if __name__ == "__main__": + main() diff --git a/docs/tasks/domain/.keep b/scripts/lib/__init__.py similarity index 100% rename from docs/tasks/domain/.keep rename to scripts/lib/__init__.py diff --git a/scripts/lib/concurrency.py b/scripts/lib/concurrency.py new file mode 100644 index 0000000..8a8dd79 --- /dev/null +++ b/scripts/lib/concurrency.py @@ -0,0 +1,55 @@ +import os +import time +import fcntl +import errno +from contextlib import contextmanager + +class FileLockException(Exception): + pass + +class FileLock: + """ + A file locking mechanism using fcntl.flock for POSIX systems. 
+ """ + def __init__(self, lock_file, timeout=5, delay=0.1): + self.lock_file = lock_file + self.timeout = timeout + self.delay = delay + self.fd = None + + def acquire(self): + start_time = time.time() + # Ensure directory exists + os.makedirs(os.path.dirname(os.path.abspath(self.lock_file)), exist_ok=True) + + self.fd = open(self.lock_file, 'w') + + while True: + try: + # LOCK_EX: Exclusive lock + # LOCK_NB: Non-blocking + fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB) + return + except (IOError, OSError) as e: + if e.errno != errno.EAGAIN: + raise + if (time.time() - start_time) >= self.timeout: + self.fd.close() + raise FileLockException(f"Could not acquire lock on {self.lock_file} within {self.timeout}s") + time.sleep(self.delay) + + def release(self): + if self.fd: + try: + fcntl.flock(self.fd, fcntl.LOCK_UN) + except: + pass + self.fd.close() + self.fd = None + + def __enter__(self): + self.acquire() + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.release() diff --git a/scripts/lib/io.py b/scripts/lib/io.py new file mode 100644 index 0000000..66b4133 --- /dev/null +++ b/scripts/lib/io.py @@ -0,0 +1,94 @@ +import os +import tempfile +import json +import time +import gzip + +def write_atomic(filepath, content, mode='w', encoding='utf-8'): + """ + Writes content to a file atomically. + 1. Writes to a temporary file in the same directory. + 2. Flushes and fsyncs. + 3. Renames the temporary file to the target filepath. 
+ """ + directory = os.path.dirname(os.path.abspath(filepath)) + if not os.path.exists(directory): + os.makedirs(directory, exist_ok=True) + + # Create temp file in same directory to ensure atomic rename + # mkstemp returns a low-level file handle (int) and the absolute path + fd, tmp_path = tempfile.mkstemp(dir=directory, text=(mode == 'w')) + + try: + with os.fdopen(fd, mode, encoding=encoding if mode == 'w' else None) as f: + f.write(content) + f.flush() + os.fsync(f.fileno()) + + # Atomic rename + os.replace(tmp_path, filepath) + except Exception as e: + # Cleanup temp file on failure + if os.path.exists(tmp_path): + os.remove(tmp_path) + raise e + +def write_atomic_gzip(filepath, content_str, encoding='utf-8'): + """Atomic write for GZIP files.""" + directory = os.path.dirname(os.path.abspath(filepath)) + if not os.path.exists(directory): + os.makedirs(directory, exist_ok=True) + + fd, tmp_path = tempfile.mkstemp(dir=directory, suffix='.gz') + os.close(fd) + + try: + with gzip.open(tmp_path, 'wt', encoding=encoding) as f: + f.write(content_str) + f.flush() + # Try to fsync underlying file + if hasattr(f, 'fileobj') and f.fileobj: + f.fileobj.flush() + os.fsync(f.fileobj.fileno()) + + os.replace(tmp_path, filepath) + except Exception as e: + if os.path.exists(tmp_path): + os.remove(tmp_path) + raise e + +def read_text(filepath, encoding='utf-8', retries=3, delay=0.1): + """ + Reads text from a file with simple retry logic for transient errors. + Supports .gz files transparently. 
+ """ + if not os.path.exists(filepath): + raise FileNotFoundError(f"File not found: {filepath}") + + is_gzip = filepath.endswith('.gz') + + for attempt in range(retries): + try: + if is_gzip: + with gzip.open(filepath, 'rt', encoding=encoding) as f: + return f.read() + else: + with open(filepath, 'r', encoding=encoding) as f: + return f.read() + except OSError: + if attempt == retries - 1: + raise + time.sleep(delay) + +def write_json(filepath, data, indent=2): + """Atomic write for JSON (supports .gz).""" + content = json.dumps(data, indent=indent, sort_keys=True) + if filepath.endswith('.gz'): + write_atomic_gzip(filepath, content) + else: + write_atomic(filepath, content + "\n") + +def read_json(filepath): + """Read JSON from file (supports .gz).""" + content = read_text(filepath) + return json.loads(content) \ No newline at end of file diff --git a/scripts/lib/yaml.py b/scripts/lib/yaml.py new file mode 100644 index 0000000..4728eff --- /dev/null +++ b/scripts/lib/yaml.py @@ -0,0 +1,92 @@ +import os + +class SimpleYaml: + """ + A simple, dependency-free YAML parser/dumper. + Optimized for the project's config schema (max 2 levels of nesting + lists). 
+ """ + + @staticmethod + def load(filepath): + if not os.path.exists(filepath): + return {} + + with open(filepath, 'r') as f: + lines = f.readlines() + + return SimpleYaml._parse(lines) + + @staticmethod + def _parse(lines): + data = {} + current_section = None # Top level key + current_subsection = None # Second level key (if it holds a list) + + def clean(v): + v = v.strip() + if (v.startswith('"') and v.endswith('"')) or (v.startswith("'" ) and v.endswith("'")): + return v[1:-1] + return v + + for line in lines: + line_content = line.strip() + if not line_content or line_content.startswith('#'): + continue + + indent = len(line) - len(line.lstrip()) + + if indent == 0: + if ':' in line_content: + key = line_content.split(':', 1)[0].strip() + data[key] = {} + current_section = key + current_subsection = None + + elif indent == 2 and current_section: + if line_content.startswith('- '): + # List under section + if isinstance(data[current_section], dict) and not data[current_section]: + data[current_section] = [] + + if isinstance(data[current_section], list): + data[current_section].append(clean(line_content[2:])) + + elif ':' in line_content: + parts = line_content.split(':', 1) + key = parts[0].strip() + val = parts[1].strip() + + if val: + # Key: Value + data[current_section][key] = clean(val) + else: + # Key: ... 
(implies list or dict following) + # For our config, it's usually a list + data[current_section][key] = [] + current_subsection = key + + elif indent >= 4 and current_section and current_subsection: + if line_content.startswith('- '): + # List item under subsection + if isinstance(data[current_section][current_subsection], list): + data[current_section][current_subsection].append(clean(line_content[2:])) + + return data + + @staticmethod + def save(filepath, data): + with open(filepath, 'w') as f: + for key, val in data.items(): + f.write(f"{key}:\n") + if isinstance(val, dict): + for subkey, subval in val.items(): + if isinstance(subval, list): + f.write(f" {subkey}:\n") + for item in subval: + f.write(f" - {item}\n") + else: + f.write(f" {subkey}: {subval}\n") + elif isinstance(val, list): + for item in val: + f.write(f" - {item}\n") + f.write("\n") \ No newline at end of file diff --git a/scripts/llm.py b/scripts/llm.py new file mode 100755 index 0000000..c569f1d --- /dev/null +++ b/scripts/llm.py @@ -0,0 +1,105 @@ +#!/usr/bin/env python3 +import os +import sys +import json +import urllib.request +import urllib.error + +# Unified LLM Client for Agent Harness + +def call_anthropic(prompt, system=None, model="claude-3-5-sonnet-20240620", api_key=None): + """Calls Anthropic's Messages API.""" + api_key = api_key or os.getenv("ANTHROPIC_API_KEY") + if not api_key: + raise ValueError("ANTHROPIC_API_KEY not set") + + url = "https://api.anthropic.com/v1/messages" + headers = { + "x-api-key": api_key, + "anthropic-version": "2023-06-01", + "content-type": "application/json" + } + + messages = [{"role": "user", "content": prompt}] + + data = { + "model": model, + "max_tokens": 4096, + "messages": messages + } + + if system: + data["system"] = system + + req = urllib.request.Request(url, json.dumps(data).encode("utf-8"), headers) + + try: + with urllib.request.urlopen(req) as response: + result = json.loads(response.read().decode("utf-8")) + return 
result["content"][0]["text"] + except urllib.error.HTTPError as e: + err_body = e.read().decode("utf-8") + raise Exception(f"Anthropic API Error: {e.code} - {err_body}") + +def call_openai(prompt, system=None, model="gpt-4o", api_key=None): + """Calls OpenAI's Chat Completion API.""" + api_key = api_key or os.getenv("OPENAI_API_KEY") + if not api_key: + raise ValueError("OPENAI_API_KEY not set") + + url = "https://api.openai.com/v1/chat/completions" + headers = { + "Authorization": f"Bearer {api_key}", + "Content-Type": "application/json" + } + + messages = [] + if system: + messages.append({"role": "system", "content": system}) + messages.append({"role": "user", "content": prompt}) + + data = { + "model": model, + "messages": messages + } + + req = urllib.request.Request(url, json.dumps(data).encode("utf-8"), headers) + + try: + with urllib.request.urlopen(req) as response: + result = json.loads(response.read().decode("utf-8")) + return result["choices"][0]["message"]["content"] + except urllib.error.HTTPError as e: + err_body = e.read().decode("utf-8") + raise Exception(f"OpenAI API Error: {e.code} - {err_body}") + +def complete(prompt, provider="anthropic", system=None, model=None): + """Unified completion function.""" + + # Provider selection logic + if provider == "anthropic": + return call_anthropic(prompt, system=system, model=model or "claude-3-5-sonnet-20240620") + elif provider == "openai": + return call_openai(prompt, system=system, model=model or "gpt-4o") + else: + raise ValueError(f"Unknown provider: {provider}") + +def main(): + import argparse + parser = argparse.ArgumentParser(description="Simple LLM Client") + parser.add_argument("prompt", help="The user prompt") + parser.add_argument("--system", help="System prompt") + parser.add_argument("--provider", default="anthropic", choices=["anthropic", "openai"], help="LLM Provider") + parser.add_argument("--model", help="Specific model name") + + args = parser.parse_args() + + try: + result = 
complete(args.prompt, provider=args.provider, system=args.system, model=args.model) + print(result) + except Exception as e: + print(f"Error: {e}", file=sys.stderr) + sys.exit(1) + +if __name__ == "__main__": + main() diff --git a/scripts/memory.py b/scripts/memory.py deleted file mode 100644 index f82fef4..0000000 --- a/scripts/memory.py +++ /dev/null @@ -1,239 +0,0 @@ -#!/usr/bin/env python3 -import os -import sys -import argparse -import json -import datetime -import re - -# Determine the root directory of the repo -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -# Allow overriding root for testing, similar to tasks.py -REPO_ROOT = os.getenv("TASKS_REPO_ROOT", os.path.dirname(SCRIPT_DIR)) -MEMORY_DIR = os.path.join(REPO_ROOT, "docs", "memories") - -def init_memory(): - """Ensures the memory directory exists.""" - os.makedirs(MEMORY_DIR, exist_ok=True) - if not os.path.exists(os.path.join(MEMORY_DIR, ".keep")): - with open(os.path.join(MEMORY_DIR, ".keep"), "w") as f: - pass - -def slugify(text): - """Creates a URL-safe slug from text.""" - text = text.lower().strip() - return re.sub(r'[^a-z0-9-]', '-', text).strip('-') - -def create_memory(title, content, tags=None, output_format="text"): - init_memory() - tags = tags or [] - if isinstance(tags, str): - tags = [t.strip() for t in tags.split(",") if t.strip()] - - date_str = datetime.date.today().isoformat() - slug = slugify(title) - if not slug: - slug = "untitled" - - filename = f"{date_str}-{slug}.md" - filepath = os.path.join(MEMORY_DIR, filename) - - # Handle duplicates by appending counter - counter = 1 - while os.path.exists(filepath): - filename = f"{date_str}-{slug}-{counter}.md" - filepath = os.path.join(MEMORY_DIR, filename) - counter += 1 - - # Create Frontmatter - fm = f"""--- -date: {date_str} -title: "{title}" -tags: {json.dumps(tags)} -created: {datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")} ---- -""" - - full_content = fm + "\n" + content + "\n" - - try: - with open(filepath, 
"w") as f: - f.write(full_content) - - if output_format == "json": - print(json.dumps({ - "success": True, - "filepath": filepath, - "title": title, - "date": date_str - })) - else: - print(f"Created memory: {filepath}") - except Exception as e: - msg = f"Error creating memory: {e}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - -def list_memories(tag=None, limit=20, output_format="text"): - if not os.path.exists(MEMORY_DIR): - if output_format == "json": - print(json.dumps([])) - else: - print("No memories found.") - return - - memories = [] - try: - files = [f for f in os.listdir(MEMORY_DIR) if f.endswith(".md") and f != ".keep"] - except FileNotFoundError: - files = [] - - for f in files: - path = os.path.join(MEMORY_DIR, f) - try: - with open(path, "r") as file: - content = file.read() - - # Extract basic info from frontmatter - title = "Unknown" - date = "Unknown" - tags = [] - - # Simple regex parsing to avoid YAML dependency - m_title = re.search(r'^title:\s*"(.*)"', content, re.MULTILINE) - if m_title: - title = m_title.group(1) - else: - # Fallback: unquoted title - m_title_uq = re.search(r'^title:\s*(.*)', content, re.MULTILINE) - if m_title_uq: title = m_title_uq.group(1).strip() - - m_date = re.search(r'^date:\s*(.*)', content, re.MULTILINE) - if m_date: date = m_date.group(1).strip() - - m_tags = re.search(r'^tags:\s*(\[.*\])', content, re.MULTILINE) - if m_tags: - try: - tags = json.loads(m_tags.group(1)) - except: - pass - - if tag and tag not in tags: - continue - - memories.append({ - "filename": f, - "title": title, - "date": date, - "tags": tags, - "path": path - }) - except Exception: - # Skip unreadable files - pass - - # Sort by date desc (filename usually works for YYYY-MM-DD prefix) - memories.sort(key=lambda x: x["filename"], reverse=True) - memories = memories[:limit] - - if output_format == "json": - print(json.dumps(memories)) - else: - if not memories: - print("No memories found.") 
- return - - print(f"{'Date':<12} {'Title'}") - print("-" * 50) - for m in memories: - print(f"{m['date']:<12} {m['title']}") - -def read_memory(filename, output_format="text"): - path = os.path.join(MEMORY_DIR, filename) - if not os.path.exists(path): - # Try finding by partial match if not exact - if os.path.exists(MEMORY_DIR): - matches = [f for f in os.listdir(MEMORY_DIR) if filename in f and f.endswith(".md")] - if len(matches) == 1: - path = os.path.join(MEMORY_DIR, matches[0]) - elif len(matches) > 1: - msg = f"Error: Ambiguous memory identifier '{filename}'. Matches: {', '.join(matches)}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - else: - msg = f"Error: Memory file '{filename}' not found." - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - else: - msg = f"Error: Memory directory does not exist." - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - try: - with open(path, "r") as f: - content = f.read() - - if output_format == "json": - print(json.dumps({"filename": os.path.basename(path), "content": content})) - else: - print(content) - except Exception as e: - msg = f"Error reading file: {e}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - -def main(): - # Common argument for format - parent_parser = argparse.ArgumentParser(add_help=False) - parent_parser.add_argument("--format", choices=["text", "json"], default="text", help="Output format") - - parser = argparse.ArgumentParser(description="Manage long-term memories") - - subparsers = parser.add_subparsers(dest="command") - - # Create - create_parser = subparsers.add_parser("create", parents=[parent_parser], help="Create a new memory") - create_parser.add_argument("title", help="Title of the memory") - create_parser.add_argument("content", help="Content of the memory") - 
create_parser.add_argument("--tags", help="Comma-separated tags") - - # List - list_parser = subparsers.add_parser("list", parents=[parent_parser], help="List memories") - list_parser.add_argument("--tag", help="Filter by tag") - list_parser.add_argument("--limit", type=int, default=20, help="Max results") - - # Read - read_parser = subparsers.add_parser("read", parents=[parent_parser], help="Read a memory") - read_parser.add_argument("filename", help="Filename or slug part") - - args = parser.parse_args() - - # Default format to text if not present (though parents default handles it) - fmt = getattr(args, "format", "text") - - if args.command == "create": - create_memory(args.title, args.content, args.tags, fmt) - elif args.command == "list": - list_memories(args.tag, args.limit, fmt) - elif args.command == "read": - read_memory(args.filename, fmt) - else: - parser.print_help() - -if __name__ == "__main__": - main() diff --git a/scripts/tasks b/scripts/tasks deleted file mode 100755 index 9c4d703..0000000 --- a/scripts/tasks +++ /dev/null @@ -1,15 +0,0 @@ -#!/bin/bash - -# Wrapper for tasks.py to ensure Python 3 is available - -if ! command -v python3 &> /dev/null; then - echo "Error: Python 3 is not installed or not in PATH." - echo "Please install Python 3 to use the task manager." 
- exit 1 -fi - -# Get the directory of this script -SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -# Execute tasks.py -exec python3 "$SCRIPT_DIR/tasks.py" "$@" diff --git a/scripts/tasks.py b/scripts/tasks.py deleted file mode 100755 index 5ac5261..0000000 --- a/scripts/tasks.py +++ /dev/null @@ -1,1179 +0,0 @@ -#!/usr/bin/env python3 -import os -import sys -import shutil -import argparse -import re -import json -import random -import string -from datetime import datetime - -# Determine the root directory of the repo -# Assumes this script is in scripts/ -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -REPO_ROOT = os.getenv("TASKS_REPO_ROOT", os.path.dirname(SCRIPT_DIR)) -DOCS_DIR = os.path.join(REPO_ROOT, "docs", "tasks") -TEMPLATES_DIR = os.path.join(REPO_ROOT, "templates") - -CATEGORIES = [ - "foundation", - "infrastructure", - "domain", - "presentation", - "migration", - "features", - "testing", - "review", - "security", - "research", -] - -VALID_STATUSES = [ - "pending", - "in_progress", - "wip_blocked", - "review_requested", - "verified", - "completed", - "blocked", - "cancelled", - "deferred" -] - -VALID_TYPES = [ - "epic", - "story", - "task", - "bug" -] - -ARCHIVE_DIR_NAME = "archive" - -def init_docs(): - """Scaffolds the documentation directory structure.""" - print("Initializing documentation structure...") - - # Create docs/tasks/ directories - for category in CATEGORIES: - path = os.path.join(DOCS_DIR, category) - os.makedirs(path, exist_ok=True) - # Create .keep file to ensure git tracks the directory - with open(os.path.join(path, ".keep"), "w") as f: - pass - - # Copy GUIDE.md if missing - guide_path = os.path.join(DOCS_DIR, "GUIDE.md") - guide_template = os.path.join(TEMPLATES_DIR, "GUIDE.md") - if not os.path.exists(guide_path) and os.path.exists(guide_template): - shutil.copy(guide_template, guide_path) - print(f"Created {guide_path}") - - # Create other doc directories - for doc_type in ["architecture", "features", 
"security"]: - path = os.path.join(REPO_ROOT, "docs", doc_type) - os.makedirs(path, exist_ok=True) - readme_path = os.path.join(path, "README.md") - if not os.path.exists(readme_path): - if doc_type == "security": - content = """# Security Documentation - -Use this section to document security considerations, risks, and mitigations. - -## Risk Assessment -* [ ] Threat Model -* [ ] Data Privacy - -## Compliance -* [ ] Requirements - -## Secrets Management -* [ ] Policy -""" - else: - content = f"# {doc_type.capitalize()} Documentation\n\nAdd {doc_type} documentation here.\n" - - with open(readme_path, "w") as f: - f.write(content) - - # Create memories directory - memories_path = os.path.join(REPO_ROOT, "docs", "memories") - os.makedirs(memories_path, exist_ok=True) - if not os.path.exists(os.path.join(memories_path, ".keep")): - with open(os.path.join(memories_path, ".keep"), "w") as f: - pass - - print(f"Created directories in {os.path.join(REPO_ROOT, 'docs')}") - -def generate_task_id(category): - """Generates a timestamp-based ID to avoid collisions.""" - timestamp = datetime.now().strftime("%Y%m%d-%H%M%S") - suffix = ''.join(random.choices(string.ascii_uppercase, k=3)) - return f"{category.upper()}-{timestamp}-{suffix}" - -def extract_frontmatter(content): - """Extracts YAML frontmatter if present.""" - # Check if it starts with --- - if not re.match(r"^\s*---\s*(\n|$)", content): - return None, content - - # Find the second --- - lines = content.splitlines(keepends=True) - if not lines: - return None, content - - yaml_lines = [] - body_start_idx = -1 - - # Skip the first line (delimiter) - for i, line in enumerate(lines[1:], 1): - if re.match(r"^\s*---\s*(\n|$)", line): - body_start_idx = i + 1 - break - yaml_lines.append(line) - - if body_start_idx == -1: - # No closing delimiter found - return None, content - - yaml_block = "".join(yaml_lines) - body = "".join(lines[body_start_idx:]) - - data = {} - for line in yaml_block.splitlines(): - line = line.strip() 
- if not line or line.startswith("#"): - continue - if ":" in line: - key, val = line.split(":", 1) - data[key.strip()] = val.strip() - - return data, body - -def parse_task_content(content, filepath=None): - """Parses task markdown content into a dictionary.""" - - # Try Frontmatter first - frontmatter, body = extract_frontmatter(content) - if frontmatter: - deps_val = frontmatter.get("dependencies") or "" - deps = [] - if deps_val: - # Handle both string list "[a, b]" and plain string "a, b" - cleaned = deps_val.strip(" []") - if cleaned: - deps = [d.strip() for d in cleaned.split(",") if d.strip()] - - return { - "id": frontmatter.get("id", "unknown"), - "status": frontmatter.get("status", "unknown"), - "title": frontmatter.get("title", "No Title"), - "priority": frontmatter.get("priority", "medium"), - "type": frontmatter.get("type", "task"), - "sprint": frontmatter.get("sprint", ""), - "estimate": frontmatter.get("estimate", ""), - "dependencies": deps, - "filepath": filepath, - "content": content - } - - # Fallback to Legacy Regex Parsing - id_match = re.search(r"\*\*Task ID\*\*: ([\w-]+)", content) - status_match = re.search(r"\*\*Status\*\*: ([\w_]+)", content) - title_match = re.search(r"# Task: (.+)", content) - priority_match = re.search(r"\*\*Priority\*\*: ([\w]+)", content) - - task_id = id_match.group(1) if id_match else "unknown" - status = status_match.group(1) if status_match else "unknown" - title = title_match.group(1).strip() if title_match else "No Title" - priority = priority_match.group(1) if priority_match else "unknown" - - return { - "id": task_id, - "status": status, - "title": title, - "priority": priority, - "type": "task", - "sprint": "", - "estimate": "", - "dependencies": [], - "filepath": filepath, - "content": content - } - -def create_task(category, title, description, priority="medium", status="pending", dependencies=None, task_type="task", sprint="", estimate="", output_format="text"): - if category not in CATEGORIES: - msg = 
f"Error: Category '{category}' not found. Available: {', '.join(CATEGORIES)}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - task_id = generate_task_id(category) - - slug = title.lower().replace(" ", "-") - # Sanitize slug - slug = re.sub(r'[^a-z0-9-]', '', slug) - filename = f"{task_id}-{slug}.md" - filepath = os.path.join(DOCS_DIR, category, filename) - - # New YAML Frontmatter Format - deps_str = "" - if dependencies: - # Use Flow style list - deps_str = "[" + ", ".join(dependencies) + "]" - - extra_fm = "" - if task_type: - extra_fm += f"type: {task_type}\n" - if sprint: - extra_fm += f"sprint: {sprint}\n" - if estimate: - extra_fm += f"estimate: {estimate}\n" - - content = f"""--- -id: {task_id} -status: {status} -title: {title} -priority: {priority} -created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} -category: {category} -dependencies: {deps_str} -{extra_fm}--- - -# {title} - -{description} -""" - - os.makedirs(os.path.dirname(filepath), exist_ok=True) - with open(filepath, "w") as f: - f.write(content) - - if output_format == "json": - print(json.dumps({ - "id": task_id, - "title": title, - "filepath": filepath, - "status": status, - "priority": priority, - "type": task_type - })) - else: - print(f"Created task: {filepath}") - -def find_task_file(task_id): - """Finds the file path for a given task ID.""" - task_id = task_id.upper() - - # Optimization: Check if ID starts with a known category - parts = task_id.split('-') - if len(parts) > 1: - category = parts[0].lower() - if category in CATEGORIES: - category_dir = os.path.join(DOCS_DIR, category) - if os.path.exists(category_dir): - for file in os.listdir(category_dir): - if file.startswith(task_id) and file.endswith(".md"): - return os.path.join(category_dir, file) - # Fallback to full search if not found in expected category (e.g. 
moved to archive) - - for root, _, files in os.walk(DOCS_DIR): - for file in files: - # Match strictly on ID at start of filename or substring - # New ID: FOUNDATION-2023... - # Old ID: FOUNDATION-001 - if file.startswith(task_id) and file.endswith(".md"): - return os.path.join(root, file) - return None - -def show_task(task_id, output_format="text"): - filepath = find_task_file(task_id) - if not filepath: - msg = f"Error: Task ID {task_id} not found." - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - try: - with open(filepath, "r") as f: - content = f.read() - - if output_format == "json": - task_data = parse_task_content(content, filepath) - print(json.dumps(task_data)) - else: - print(content) - except Exception as e: - msg = f"Error reading file: {e}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - -def delete_task(task_id, output_format="text"): - filepath = find_task_file(task_id) - if not filepath: - msg = f"Error: Task ID {task_id} not found." - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - try: - os.remove(filepath) - if output_format == "json": - print(json.dumps({"success": True, "id": task_id, "message": "Deleted task"})) - else: - print(f"Deleted task: {task_id}") - except Exception as e: - msg = f"Error deleting file: {e}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - -def archive_task(task_id, output_format="text"): - filepath = find_task_file(task_id) - if not filepath: - msg = f"Error: Task ID {task_id} not found." 
- if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - try: - archive_dir = os.path.join(DOCS_DIR, ARCHIVE_DIR_NAME) - os.makedirs(archive_dir, exist_ok=True) - filename = os.path.basename(filepath) - new_filepath = os.path.join(archive_dir, filename) - - os.rename(filepath, new_filepath) - - if output_format == "json": - print(json.dumps({"success": True, "id": task_id, "message": "Archived task", "new_path": new_filepath})) - else: - print(f"Archived task: {task_id} -> {new_filepath}") - - except Exception as e: - msg = f"Error archiving task: {e}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - -def migrate_to_frontmatter(content, task_data): - """Converts legacy content to Frontmatter format.""" - # Strip the header section from legacy content - - body = content - if "## Task Details" in content: - parts = content.split("## Task Details") - if len(parts) > 1: - body = parts[1].strip() - - description = body - # Remove footer - if "*Created:" in description: - description = description.split("---")[0].strip() - - # Check for extra keys in task_data that might need preservation - extra_fm = "" - if task_data.get("type"): extra_fm += f"type: {task_data['type']}\n" - if task_data.get("sprint"): extra_fm += f"sprint: {task_data['sprint']}\n" - if task_data.get("estimate"): extra_fm += f"estimate: {task_data['estimate']}\n" - - deps = task_data.get("dependencies", []) - if deps: - if isinstance(deps, list): - deps_str = "[" + ", ".join(deps) + "]" - else: - deps_str = str(deps) - extra_fm += f"dependencies: {deps_str}\n" - - new_content = f"""--- -id: {task_data['id']} -status: {task_data['status']} -title: {task_data['title']} -priority: {task_data['priority']} -created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} -category: unknown -{extra_fm}--- - -# {task_data['title']} - -{description} -""" - return new_content - -def update_task_status(task_id, 
new_status, output_format="text"): - if new_status not in VALID_STATUSES: - msg = f"Error: Invalid status '{new_status}'. Valid statuses: {', '.join(VALID_STATUSES)}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - filepath = find_task_file(task_id) - if not filepath: - msg = f"Error: Task ID {task_id} not found." - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - with open(filepath, "r") as f: - content = f.read() - - # Check dependencies if moving to active status - if new_status in ["in_progress", "review_requested", "verified", "completed"]: - task_data = parse_task_content(content, filepath) - deps = task_data.get("dependencies", []) - if deps: - blocked_by = [] - for dep_id in deps: - # Resolve dependency file - dep_path = find_task_file(dep_id) - if not dep_path: - blocked_by.append(f"{dep_id} (missing)") - continue - - try: - with open(dep_path, "r") as df: - dep_content = df.read() - dep_data = parse_task_content(dep_content, dep_path) - - if dep_data["status"] not in ["completed", "verified"]: - blocked_by.append(f"{dep_id} ({dep_data['status']})") - except Exception: - blocked_by.append(f"{dep_id} (error reading)") - - if blocked_by: - msg = f"Error: Cannot move to '{new_status}' because task is blocked by dependencies: {', '.join(blocked_by)}" - if output_format == "json": - print(json.dumps({"error": msg})) - else: - print(msg) - sys.exit(1) - - frontmatter, body = extract_frontmatter(content) - - if frontmatter: - # Update Frontmatter - lines = content.splitlines() - new_lines = [] - in_fm = False - updated = False - - # Simple finite state machine for update - for line in lines: - if re.match(r"^\s*---\s*$", line): - if not in_fm: - in_fm = True - new_lines.append(line) - continue - else: - in_fm = False - new_lines.append(line) - continue - - match = re.match(r"^(\s*)status:", line) - if in_fm and match: - indent = match.group(1) - 
new_lines.append(f"{indent}status: {new_status}") - updated = True - else: - new_lines.append(line) - - new_content = "\n".join(new_lines) + "\n" - - else: - # Legacy Format: Migrate on Update - task_data = parse_task_content(content, filepath) - task_data['status'] = new_status # Set new status - new_content = migrate_to_frontmatter(content, task_data) - if output_format == "text": - print(f"Migrated task {task_id} to new format.") - - with open(filepath, "w") as f: - f.write(new_content) - - if output_format == "json": - print(json.dumps({"success": True, "id": task_id, "status": new_status})) - else: - print(f"Updated {task_id} status to {new_status}") - -def update_frontmatter_field(filepath, field, value): - """Updates a specific field in the frontmatter.""" - with open(filepath, "r") as f: - content = f.read() - - frontmatter, body = extract_frontmatter(content) - if not frontmatter: - # Fallback for legacy: migrate first - task_data = parse_task_content(content, filepath) - task_data[field] = value - new_content = migrate_to_frontmatter(content, task_data) - with open(filepath, "w") as f: - f.write(new_content) - return True - - # Update Frontmatter line-by-line to preserve comments/order - lines = content.splitlines() - new_lines = [] - in_fm = False - updated = False - - # Handle list values (like dependencies) - if isinstance(value, list): - # Serialize as Flow-style list [a, b] for valid YAML and easier regex - val_str = "[" + ", ".join(value) + "]" - else: - val_str = str(value) - - for line in lines: - if re.match(r"^\s*---\s*$", line): - if not in_fm: - in_fm = True - new_lines.append(line) - continue - else: - if in_fm and not updated: - # Field not found, add it before close - new_lines.append(f"{field}: {val_str}") - in_fm = False - new_lines.append(line) - continue - - match = re.match(rf"^(\s*){field}:", line) - if in_fm and match: - indent = match.group(1) - new_lines.append(f"{indent}{field}: {val_str}") - updated = True - else: - 
new_lines.append(line) - - new_content = "\n".join(new_lines) + "\n" - with open(filepath, "w") as f: - f.write(new_content) - return True - -def add_dependency(task_id, dep_id, output_format="text"): - filepath = find_task_file(task_id) - if not filepath: - msg = f"Error: Task ID {task_id} not found." - print(json.dumps({"error": msg}) if output_format == "json" else msg) - sys.exit(1) - - # Verify dep exists - if not find_task_file(dep_id): - msg = f"Error: Dependency Task ID {dep_id} not found." - print(json.dumps({"error": msg}) if output_format == "json" else msg) - sys.exit(1) - - with open(filepath, "r") as f: - content = f.read() - - task_data = parse_task_content(content, filepath) - deps = task_data.get("dependencies", []) - - if dep_id in deps: - msg = f"Task {task_id} already depends on {dep_id}." - print(json.dumps({"message": msg}) if output_format == "json" else msg) - return - - deps.append(dep_id) - update_frontmatter_field(filepath, "dependencies", deps) - - msg = f"Added dependency: {task_id} -> {dep_id}" - print(json.dumps({"success": True, "message": msg}) if output_format == "json" else msg) - -def remove_dependency(task_id, dep_id, output_format="text"): - filepath = find_task_file(task_id) - if not filepath: - msg = f"Error: Task ID {task_id} not found." - print(json.dumps({"error": msg}) if output_format == "json" else msg) - sys.exit(1) - - with open(filepath, "r") as f: - content = f.read() - - task_data = parse_task_content(content, filepath) - deps = task_data.get("dependencies", []) - - if dep_id not in deps: - msg = f"Task {task_id} does not depend on {dep_id}." 
- print(json.dumps({"message": msg}) if output_format == "json" else msg) - return - - deps.remove(dep_id) - update_frontmatter_field(filepath, "dependencies", deps) - - msg = f"Removed dependency: {task_id} -x-> {dep_id}" - print(json.dumps({"success": True, "message": msg}) if output_format == "json" else msg) - -def generate_index(output_format="text"): - """Generates docs/tasks/INDEX.yaml reflecting task dependencies.""" - index_path = os.path.join(DOCS_DIR, "INDEX.yaml") - - all_tasks = {} # id -> filepath - task_deps = {} # id -> [deps] - - for root, _, files in os.walk(DOCS_DIR): - for file in files: - if not file.endswith(".md") or file in ["GUIDE.md", "README.md", "INDEX.yaml"]: - continue - path = os.path.join(root, file) - try: - with open(path, "r") as f: - content = f.read() - task = parse_task_content(content, path) - if task["id"] != "unknown": - all_tasks[task["id"]] = path - task_deps[task["id"]] = task.get("dependencies", []) - except: - pass - - # Build YAML content - yaml_lines = ["# Task Dependency Index", "# Generated by scripts/tasks.py index", ""] - - for tid, path in sorted(all_tasks.items()): - rel_path = os.path.relpath(path, REPO_ROOT) - yaml_lines.append(f"{rel_path}:") - - deps = task_deps.get(tid, []) - if deps: - yaml_lines.append(" depends_on:") - for dep_id in sorted(deps): - dep_path = all_tasks.get(dep_id) - if dep_path: - dep_rel_path = os.path.relpath(dep_path, REPO_ROOT) - yaml_lines.append(f" - {dep_rel_path}") - else: - # Dependency not found (maybe archived or missing) - yaml_lines.append(f" - {dep_id} # Missing") - - yaml_lines.append("") - - with open(index_path, "w") as f: - f.write("\n".join(yaml_lines)) - - msg = f"Generated index at {index_path}" - print(json.dumps({"success": True, "path": index_path}) if output_format == "json" else msg) - -def list_tasks(status=None, category=None, sprint=None, include_archived=False, output_format="text"): - tasks = [] - - for root, dirs, files in os.walk(DOCS_DIR): - rel_path = 
os.path.relpath(root, DOCS_DIR) - - # Exclude archive unless requested - if not include_archived: - if rel_path == ARCHIVE_DIR_NAME or rel_path.startswith(ARCHIVE_DIR_NAME + os.sep): - continue - - # Filter by category if provided - if category: - if rel_path != category and not rel_path.startswith(category + os.sep): - continue - - for file in files: - if not file.endswith(".md") or file in ["GUIDE.md", "README.md", "INDEX.yaml"]: - continue - - path = os.path.join(root, file) - try: - with open(path, "r") as f: - content = f.read() - except Exception as e: - if output_format == "text": - print(f"Error reading {path}: {e}") - continue - - # Parse content - task = parse_task_content(content, path) - - # Skip files that don't look like tasks (no ID) - if task["id"] == "unknown": - continue - - if status and status.lower() != task["status"].lower(): - continue - - if sprint and sprint != task.get("sprint"): - continue - - tasks.append(task) - - if output_format == "json": - summary = [{k: v for k, v in t.items() if k != 'content'} for t in tasks] - print(json.dumps(summary)) - else: - # Adjust width for ID to handle longer IDs - print(f"{'ID':<25} {'Status':<20} {'Type':<8} {'Title'}") - print("-" * 85) - for t in tasks: - t_type = t.get("type", "task")[:8] - print(f"{t['id']:<25} {t['status']:<20} {t_type:<8} {t['title']}") - -def get_context(output_format="text"): - """Lists tasks that are currently in progress.""" - if output_format == "text": - print("Current Context (in_progress):") - list_tasks(status="in_progress", output_format=output_format) - -def migrate_all(): - """Migrates all legacy tasks to Frontmatter format.""" - print("Migrating tasks to Frontmatter format...") - count = 0 - for root, dirs, files in os.walk(DOCS_DIR): - for file in files: - if not file.endswith(".md") or file in ["GUIDE.md", "README.md", "INDEX.yaml"]: - continue - - path = os.path.join(root, file) - with open(path, "r") as f: - content = f.read() - - if content.startswith("---\n") 
or content.startswith("--- "): - continue # Already migrated (simple check) - - task_data = parse_task_content(content, path) - if task_data['id'] == "unknown": - continue - - new_content = migrate_to_frontmatter(content, task_data) - with open(path, "w") as f: - f.write(new_content) - - print(f"Migrated {task_data['id']}") - count += 1 - - print(f"Migration complete. {count} tasks updated.") - -def validate_all(output_format="text"): - """Validates all task files.""" - errors = [] - all_tasks = {} # id -> {path, deps} - - # Pass 1: Parse and Basic Validation - for root, dirs, files in os.walk(DOCS_DIR): - for file in files: - if not file.endswith(".md") or file in ["GUIDE.md", "README.md", "INDEX.yaml"]: - continue - path = os.path.join(root, file) - try: - with open(path, "r") as f: - content = f.read() - - # Check 1: Frontmatter exists - frontmatter, body = extract_frontmatter(content) - if not frontmatter: - errors.append(f"{file}: Missing valid frontmatter") - continue - - # Check 2: Required fields - required_fields = ["id", "status", "title", "created"] - missing = [field for field in required_fields if field not in frontmatter] - if missing: - errors.append(f"{file}: Missing required fields: {', '.join(missing)}") - continue - - task_id = frontmatter["id"] - - # Check 3: Valid Status - if "status" in frontmatter and frontmatter["status"] not in VALID_STATUSES: - errors.append(f"{file}: Invalid status '{frontmatter['status']}'") - - # Check 4: Valid Type - if "type" in frontmatter and frontmatter["type"] not in VALID_TYPES: - errors.append(f"{file}: Invalid type '{frontmatter['type']}'") - - # Parse dependencies - deps_str = frontmatter.get("dependencies") or "" - # Use shared parsing logic - deps = [] - if deps_str: - cleaned = deps_str.strip(" []") - if cleaned: - deps = [d.strip() for d in cleaned.split(",") if d.strip()] - - # Check for Duplicate IDs - if task_id in all_tasks: - errors.append(f"{file}: Duplicate Task ID '{task_id}' (also in 
{all_tasks[task_id]['path']})") - - all_tasks[task_id] = {"path": path, "deps": deps} - - except Exception as e: - errors.append(f"{file}: Error reading/parsing: {str(e)}") - - # Pass 2: Dependency Validation & Cycle Detection - visited = set() - recursion_stack = set() - - def detect_cycle(curr_id, path): - visited.add(curr_id) - recursion_stack.add(curr_id) - - if curr_id in all_tasks: - for dep_id in all_tasks[curr_id]["deps"]: - # Dependency Existence Check - if dep_id not in all_tasks: - # This will be caught in the loop below, but we need to handle it here to avoid error - continue - - if dep_id not in visited: - if detect_cycle(dep_id, path + [dep_id]): - return True - elif dep_id in recursion_stack: - path.append(dep_id) - return True - - recursion_stack.remove(curr_id) - return False - - for task_id, info in all_tasks.items(): - # Check dependencies exist - for dep_id in info["deps"]: - if dep_id not in all_tasks: - errors.append(f"{os.path.basename(info['path'])}: Invalid dependency '{dep_id}' (task not found)") - - # Check cycles - if task_id not in visited: - cycle_path = [task_id] - if detect_cycle(task_id, cycle_path): - errors.append(f"Circular dependency detected: {' -> '.join(cycle_path)}") - - if output_format == "json": - print(json.dumps({"valid": len(errors) == 0, "errors": errors})) - else: - if not errors: - print("All tasks validated successfully.") - else: - print(f"Found {len(errors)} errors:") - for err in errors: - print(f" - {err}") - sys.exit(1) - -def visualize_tasks(output_format="text"): - """Generates a Mermaid diagram of task dependencies.""" - tasks = [] - # Collect all tasks - for root, dirs, files in os.walk(DOCS_DIR): - for file in files: - if not file.endswith(".md") or file in ["GUIDE.md", "README.md", "INDEX.yaml"]: - continue - path = os.path.join(root, file) - try: - with open(path, "r") as f: - content = f.read() - task = parse_task_content(content, path) - if task["id"] != "unknown": - tasks.append(task) - except: - 
pass - - if output_format == "json": - nodes = [{"id": t["id"], "title": t["title"], "status": t["status"]} for t in tasks] - edges = [] - for t in tasks: - for dep in t.get("dependencies", []): - edges.append({"from": dep, "to": t["id"]}) - print(json.dumps({"nodes": nodes, "edges": edges})) - return - - # Mermaid Output - print("graph TD") - - status_colors = { - "completed": "#90EE90", - "verified": "#90EE90", - "in_progress": "#ADD8E6", - "review_requested": "#FFFACD", - "wip_blocked": "#FFB6C1", - "blocked": "#FF7F7F", - "pending": "#D3D3D3", - "deferred": "#A9A9A9", - "cancelled": "#696969" - } - - # Nodes - for t in tasks: - # Sanitize title for label - safe_title = t["title"].replace('"', '').replace('[', '').replace(']', '') - print(f' {t["id"]}["{t["id"]}: {safe_title}"]') - - # Style - color = status_colors.get(t["status"], "#FFFFFF") - print(f" style {t['id']} fill:{color},stroke:#333,stroke-width:2px") - - # Edges - for t in tasks: - deps = t.get("dependencies", []) - for dep in deps: - print(f" {dep} --> {t['id']}") - -def get_next_task(output_format="text"): - """Identifies the next best task to work on.""" - # 1. 
Collect all tasks - all_tasks = {} - for root, _, files in os.walk(DOCS_DIR): - for file in files: - if not file.endswith(".md") or file in ["GUIDE.md", "README.md", "INDEX.yaml"]: - continue - path = os.path.join(root, file) - try: - with open(path, "r") as f: - content = f.read() - task = parse_task_content(content, path) - if task["id"] != "unknown": - all_tasks[task["id"]] = task - except: - pass - - candidates = [] - - # Priority mapping - prio_score = {"high": 3, "medium": 2, "low": 1, "unknown": 1} - - for tid, task in all_tasks.items(): - # Filter completed - if task["status"] in ["completed", "verified", "cancelled", "deferred", "blocked"]: - continue - - # Check dependencies - deps = task.get("dependencies", []) - blocked = False - for dep_id in deps: - if dep_id not in all_tasks: - blocked = True # Missing dependency - break - - dep_status = all_tasks[dep_id]["status"] - if dep_status not in ["completed", "verified"]: - blocked = True - break - - if blocked: - continue - - # Calculate Score - score = 0 - - # Status Bonus - if task["status"] == "in_progress": - score += 1000 - elif task["status"] == "pending": - score += 100 - elif task["status"] == "wip_blocked": - # Unblocked now - score += 500 - - # Priority - score += prio_score.get(task.get("priority", "medium"), 1) * 10 - - # Sprint Bonus - if task.get("sprint"): - score += 50 - - # Type Bonus (Stories/Bugs > Tasks > Epics) - t_type = task.get("type", "task") - if t_type in ["story", "bug"]: - score += 20 - elif t_type == "task": - score += 10 - - candidates.append((score, task)) - - candidates.sort(key=lambda x: x[0], reverse=True) - - if not candidates: - msg = "No suitable tasks found (all completed or blocked)." 
- if output_format == "json": - print(json.dumps({"message": msg})) - else: - print(msg) - return - - best = candidates[0][1] - - if output_format == "json": - print(json.dumps(best)) - else: - print(f"Recommended Next Task (Score: {candidates[0][0]}):") - print(f"ID: {best['id']}") - print(f"Title: {best['title']}") - print(f"Status: {best['status']}") - print(f"Priority: {best['priority']}") - print(f"Type: {best.get('type', 'task')}") - if best.get("sprint"): - print(f"Sprint: {best.get('sprint')}") - print(f"\nRun: scripts/tasks show {best['id']}") - -def install_hooks(): - """Installs the git pre-commit hook.""" - hook_path = os.path.join(REPO_ROOT, ".git", "hooks", "pre-commit") - if not os.path.exists(os.path.join(REPO_ROOT, ".git")): - print("Error: Not a git repository.") - sys.exit(1) - - script_path = os.path.relpath(os.path.abspath(__file__), REPO_ROOT) - - hook_content = f"""#!/bin/sh -# Auto-generated by scripts/tasks.py -echo "Running task validation..." -python3 {script_path} validate --format text -""" - - try: - with open(hook_path, "w") as f: - f.write(hook_content) - os.chmod(hook_path, 0o755) - print(f"Installed pre-commit hook at {hook_path}") - except Exception as e: - print(f"Error installing hook: {e}") - sys.exit(1) - -def main(): - parser = argparse.ArgumentParser(description="Manage development tasks") - - # Common argument for format - parent_parser = argparse.ArgumentParser(add_help=False) - parent_parser.add_argument("--format", choices=["text", "json"], default="text", help="Output format") - - subparsers = parser.add_subparsers(dest="command", help="Command to run") - - # Init - subparsers.add_parser("init", help="Initialize documentation structure") - - # Create - create_parser = subparsers.add_parser("create", parents=[parent_parser], help="Create a new task") - create_parser.add_argument("category", choices=CATEGORIES, help="Task category") - create_parser.add_argument("title", help="Task title") - 
create_parser.add_argument("--desc", default="To be determined", help="Task description") - create_parser.add_argument("--priority", default="medium", help="Task priority") - create_parser.add_argument("--status", choices=VALID_STATUSES, default="pending", help="Task status") - create_parser.add_argument("--dependencies", help="Comma-separated list of task IDs this task depends on") - create_parser.add_argument("--type", choices=VALID_TYPES, default="task", help="Task type") - create_parser.add_argument("--sprint", default="", help="Sprint name/ID") - create_parser.add_argument("--estimate", default="", help="Estimate (points/size)") - - # List - list_parser = subparsers.add_parser("list", parents=[parent_parser], help="List tasks") - list_parser.add_argument("--status", help="Filter by status") - list_parser.add_argument("--category", choices=CATEGORIES, help="Filter by category") - list_parser.add_argument("--sprint", help="Filter by sprint") - list_parser.add_argument("--archived", action="store_true", help="Include archived tasks") - - # Show - show_parser = subparsers.add_parser("show", parents=[parent_parser], help="Show task details") - show_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") - - # Update - update_parser = subparsers.add_parser("update", parents=[parent_parser], help="Update task status") - update_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") - update_parser.add_argument("status", help=f"New status: {', '.join(VALID_STATUSES)}") - - # Delete - delete_parser = subparsers.add_parser("delete", parents=[parent_parser], help="Delete a task") - delete_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") - - # Archive - archive_parser = subparsers.add_parser("archive", parents=[parent_parser], help="Archive a task") - archive_parser.add_argument("task_id", help="Task ID") - - # Context - subparsers.add_parser("context", parents=[parent_parser], help="Show current context (in_progress 
tasks)") - - # Next - subparsers.add_parser("next", parents=[parent_parser], help="Suggest the next task to work on") - - # Migrate - subparsers.add_parser("migrate", parents=[parent_parser], help="Migrate legacy tasks to new format") - - # Complete - complete_parser = subparsers.add_parser("complete", parents=[parent_parser], help="Mark a task as completed") - complete_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)") - - # Validate - subparsers.add_parser("validate", parents=[parent_parser], help="Validate task files") - - # Visualize - subparsers.add_parser("visualize", parents=[parent_parser], help="Visualize task dependencies (Mermaid)") - - # Graph (Alias to Visualize) - subparsers.add_parser("graph", parents=[parent_parser], help="Graph task dependencies (Alias for visualize)") - - # Install Hooks - subparsers.add_parser("install-hooks", parents=[parent_parser], help="Install git hooks") - - # Index - subparsers.add_parser("index", parents=[parent_parser], help="Generate task dependency index") - - # Link (Add Dependency) - link_parser = subparsers.add_parser("link", parents=[parent_parser], help="Add a dependency") - link_parser.add_argument("task_id", help="Task ID") - link_parser.add_argument("dep_id", help="Dependency Task ID") - - # Unlink (Remove Dependency) - unlink_parser = subparsers.add_parser("unlink", parents=[parent_parser], help="Remove a dependency") - unlink_parser.add_argument("task_id", help="Task ID") - unlink_parser.add_argument("dep_id", help="Dependency Task ID") - - args = parser.parse_args() - - # Default format to text if not present (e.g. 
init doesn't have it) - fmt = getattr(args, "format", "text") - - if args.command == "create": - deps = [] - if args.dependencies: - deps = [d.strip() for d in args.dependencies.split(",") if d.strip()] - create_task(args.category, args.title, args.desc, priority=args.priority, status=args.status, dependencies=deps, task_type=args.type, sprint=args.sprint, estimate=args.estimate, output_format=fmt) - elif args.command == "list": - list_tasks(args.status, args.category, sprint=args.sprint, include_archived=args.archived, output_format=fmt) - elif args.command == "init": - init_docs() - elif args.command == "show": - show_task(args.task_id, output_format=fmt) - elif args.command == "delete": - delete_task(args.task_id, output_format=fmt) - elif args.command == "archive": - archive_task(args.task_id, output_format=fmt) - elif args.command == "update": - update_task_status(args.task_id, args.status, output_format=fmt) - elif args.command == "context": - get_context(output_format=fmt) - elif args.command == "next": - get_next_task(output_format=fmt) - elif args.command == "migrate": - migrate_all() - elif args.command == "complete": - update_task_status(args.task_id, "completed", output_format=fmt) - elif args.command == "validate": - validate_all(output_format=fmt) - elif args.command == "visualize" or args.command == "graph": - visualize_tasks(output_format=fmt) - elif args.command == "install-hooks": - install_hooks() - elif args.command == "index": - generate_index(output_format=fmt) - elif args.command == "link": - add_dependency(args.task_id, args.dep_id, output_format=fmt) - elif args.command == "unlink": - remove_dependency(args.task_id, args.dep_id, output_format=fmt) - else: - parser.print_help() - -if __name__ == "__main__": - main() diff --git a/templates/GUIDE.md b/templates/GUIDE.md deleted file mode 100644 index 77b8b86..0000000 --- a/templates/GUIDE.md +++ /dev/null @@ -1,131 +0,0 @@ -# Task Documentation System Guide - -This guide explains how to 
create, maintain, and update task documentation. It provides a reusable system for tracking implementation work, decisions, and progress. - -## Core Philosophy -**"If it's not documented in `docs/tasks/`, it didn't happen."** - -## Directory Structure -Tasks are organized by category in `docs/tasks/`: -- `foundation/`: Core architecture and setup -- `infrastructure/`: Services, adapters, platform code -- `domain/`: Business logic, use cases -- `presentation/`: UI, state management -- `features/`: End-to-end feature implementation -- `migration/`: Refactoring, upgrades -- `testing/`: Testing infrastructure -- `review/`: Code reviews and PR analysis - -## Task Document Format - -We use **YAML Frontmatter** for metadata and **Markdown** for content. - -### Frontmatter (Required) -```yaml ---- -id: FOUNDATION-20250521-103000 # Auto-generated Timestamp ID -status: pending # Current status -title: Initial Project Setup # Task Title -priority: medium # high, medium, low -created: 2025-05-21 10:30:00 # Creation timestamp -category: foundation # Category -type: task # task, story, bug, epic (Optional) -sprint: Sprint 1 # Iteration identifier (Optional) -estimate: 3 # Story points / T-shirt size (Optional) -dependencies: TASK-001, TASK-002 # Comma separated list of IDs (Optional) ---- -``` - -### Status Workflow -1. `pending`: Created but not started. -2. `in_progress`: Active development. -3. `review_requested`: Implementation done, awaiting code review. -4. `verified`: Reviewed and approved. -5. `completed`: Merged and finalized. -6. `wip_blocked` / `blocked`: Development halted. -7. `cancelled` / `deferred`: Stopped or postponed. 
- -### Content Template -```markdown -# [Task Title] - -## Task Information -- **Dependencies**: [List IDs] - -## Task Details -[Description of what needs to be done] - -### Acceptance Criteria -- [ ] Criterion 1 -- [ ] Criterion 2 - -## Implementation Status -### Completed Work -- ✅ Implemented X (file.py) - -### Blockers -[Describe blockers if any] -``` - -## Tools - -Use the `scripts/tasks` wrapper to manage tasks. - -```bash -# Create a new task (standard) -./scripts/tasks create foundation "Task Title" - -# Create an Agile Story in a Sprint -./scripts/tasks create features "User Login" --type story --sprint "Sprint 1" --estimate 5 - -# List tasks (can filter by sprint) -./scripts/tasks list -./scripts/tasks list --sprint "Sprint 1" - -# Find the next best task to work on (Smart Agent Mode) -./scripts/tasks next - -# Update status -./scripts/tasks update [TASK_ID] in_progress -./scripts/tasks update [TASK_ID] review_requested -./scripts/tasks update [TASK_ID] verified -./scripts/tasks update [TASK_ID] completed - -# Manage Dependencies -./scripts/tasks link [TASK_ID] [DEP_ID] -./scripts/tasks unlink [TASK_ID] [DEP_ID] - -# Visualization & Analysis -./scripts/tasks graph # Show dependency graph -./scripts/tasks index # Generate INDEX.yaml -./scripts/tasks validate # Check for errors - -# Migrate legacy tasks (if updating from older version) -./scripts/tasks migrate -``` - -## Agile Methodology - -This system supports Agile/Scrum workflows for LLM-Human collaboration. - -### Sprints -- Tag tasks with `sprint: [Name]` to group them into iterations. -- Use `./scripts/tasks list --sprint [Name]` to view the sprint backlog. - -### Estimation -- Use `estimate: [Value]` (e.g., Fibonacci numbers 1, 2, 3, 5, 8) to size tasks. - -### Auto-Pilot -- The `./scripts/tasks next` command uses an algorithm to determine the optimal next task based on: - 1. Status (In Progress > Pending) - 2. Dependencies (Unblocked > Blocked) - 3. Sprint (Current Sprint > Backlog) - 4. 
Priority (High > Low) - 5. Type (Stories/Bugs > Tasks) - -## Agent Integration - -Agents (Claude, etc.) use this system to track their work. -- Always check `./scripts/tasks context` or use `./scripts/tasks next` before starting. -- Keep the task file updated with your progress. -- Use `review_requested` when you need human feedback. diff --git a/templates/maintenance_mode.md b/templates/maintenance_mode.md deleted file mode 100644 index 3d53c80..0000000 --- a/templates/maintenance_mode.md +++ /dev/null @@ -1,88 +0,0 @@ -# AI Agent Instructions - -You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**. - -## Core Philosophy -**"If it's not documented in `docs/tasks/`, it didn't happen."** - -## Workflow -1. **Pick a Task**: Run `python3 scripts/tasks.py context` to see active tasks, or `list` to see pending ones. -2. **Plan & Document**: - * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information. - * **Security Check**: Ask the user about specific security considerations for this task. - * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file. - * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`. -3. **Implement**: Write code, run tests. -4. **Update Documentation Loop**: - * As you complete sub-tasks, check them off in the task document. - * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file. - * Record key architectural decisions in the task document. - * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it. -5. **Review & Verify**: - * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`. 
- * Ask a human or another agent to review the code. - * Once approved and tested, update status to `verified`. -6. **Finalize**: - * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`. - * Record actual effort in the file. - * Ensure all acceptance criteria are met. - -## Tools -* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended). -* **Create**: `./scripts/tasks create [category] "Title"` -* **List**: `./scripts/tasks list [--status pending]` -* **Context**: `./scripts/tasks context` -* **Update**: `./scripts/tasks update [ID] [status]` -* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format) -* **Memory**: `./scripts/memory.py [create|list|read]` -* **JSON Output**: Add `--format json` to any command for machine parsing. - -## Documentation Reference -* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules. -* **Architecture**: Refer to `docs/architecture/` for system design. -* **Features**: Refer to `docs/features/` for feature specifications. -* **Security**: Refer to `docs/security/` for risk assessments and mitigations. -* **Memories**: Refer to `docs/memories/` for long-term project context. - -## Code Style & Standards -* Follow the existing patterns in the codebase. -* Ensure all new code is covered by tests (if testing infrastructure exists). - -## PR Review Methodology -When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency. - -### 1. Preparation -1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"` -2. **Fetch Details**: Use `gh` to get the PR context. - * `gh pr view <N>` - * `gh pr diff <N>` - -### 2. Analysis & Planning (The "Review Plan") -**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval). - -Your plan must include: -* **High-Level Summary**: Purpose, new APIs, breaking changes. 
-* **Dependency Check**: New libraries, maintenance status, security. -* **Impact Assessment**: Effect on existing code/docs. -* **Focus Areas**: Prioritized list of files/modules to check. -* **Suggested Comments**: Draft comments for specific lines. - * Format: `File: <path> | Line: <N> | Comment: <suggestion>` - * Tone: Friendly, suggestion-based ("Consider...", "Nit: ..."). - -### 3. Execution -Once the human approves the plan and comments: -1. **Pending Review**: Create a pending review using `gh`. - * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)` - * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"` -2. **Batch Comments**: Add comments to the pending review. - * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"` -3. **Submit**: - * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`). - -### 4. Close Task -* Update task status to `completed`. - -## Agent Interoperability -- **Task Manager Skill**: `.claude/skills/task_manager/` -- **Memory Skill**: `.claude/skills/memory/` -- **Tool Definitions**: `docs/interop/tool_definitions.json`
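The deleted instructions above note that `--format json` can be added to any `scripts/tasks` command for machine parsing. A hedged sketch of an agent-side consumer, assuming the shape emitted by the script's `list` command (a JSON array of task objects with `id`, `status`, and `title` keys, as produced by `list_tasks`); the `fetch_tasks` helper is illustrative and assumes the script path shown in the docs:

```python
import json
import subprocess

def fetch_tasks(status=None):
    """Run `scripts/tasks.py list --format json` and return parsed tasks.
    Illustrative only: assumes the script exists at the documented path."""
    cmd = ["python3", "scripts/tasks.py", "list", "--format", "json"]
    if status:
        cmd += ["--status", status]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# Offline parsing of the same assumed shape, e.g. from a captured run:
sample = '[{"id": "FOUNDATION-001", "status": "pending", "title": "Setup"}]'
tasks = json.loads(sample)
pending = [t["id"] for t in tasks if t["status"] == "pending"]
print(pending)  # → ['FOUNDATION-001']
```

Consuming the JSON form rather than scraping the text table keeps agents insulated from column-width changes in the human-readable `list` output.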