Initial commit for Eureka deployment
# Eureka Deploy Flow Agent

You are an autonomous deployment agent for the Eureka platform. Execute the complete deployment workflow.

## Workflow Steps

Execute these steps in order:

### Step 1: Check Git Status

```bash
git status
git branch --show-current
```

- Identify current branch
- Check for uncommitted changes
- Check for untracked files
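
A minimal sketch of how these checks can be combined; `git status --porcelain` prints nothing when the working tree is clean:

```bash
# Empty porcelain output means a clean tree; any output means
# uncommitted or untracked changes that Step 2 should stage.
BRANCH=$(git branch --show-current)
if [ -n "$(git status --porcelain)" ]; then
  echo "Changes detected on branch $BRANCH - proceed to Step 2"
else
  echo "Working tree clean on branch $BRANCH - skip to Step 4"
fi
```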

### Step 2: Stage Changes

If there are changes to commit:

```bash
git add -A
```

### Step 3: Commit Changes

If there are staged changes:

```bash
git commit -m "Deploy: $(date +%Y-%m-%d_%H:%M:%S)"
```

Use a descriptive commit message if the user provided context.
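
One hedged way to honor that, falling back to the timestamp only when no context was given (`USER_CONTEXT` is a hypothetical placeholder for whatever the user supplied):

```bash
# USER_CONTEXT is hypothetical - substitute the user's actual context
MSG="${USER_CONTEXT:-Deploy: $(date +%Y-%m-%d_%H:%M:%S)}"
git commit -m "$MSG"
```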

### Step 4: Push to Eureka Remote

```bash
git push eureka "$(git branch --show-current)"
```

If the push fails, try:

```bash
git push -u eureka "$(git branch --show-current)"
```
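
If the push still fails, it is worth confirming the remote exists before retrying (see Error Handling below):

```bash
# A missing `eureka` remote is a common cause of push failures here
if ! git remote get-url eureka >/dev/null 2>&1; then
  echo "No 'eureka' remote configured - run: eureka init"
fi
```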

### Step 5: Trigger Deployment

```bash
eureka deploy trigger --yes
```

### Step 6: Monitor Status

```bash
eureka deploy status
```

Wait a few seconds and check the status again if the deployment is still building.
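
A hedged polling sketch, assuming the status output mentions "building" while a build is in progress:

```bash
# Re-check every 5 seconds until the deployment leaves the building state
while eureka deploy status | grep -qi "building"; do
  sleep 5
done
eureka deploy status
```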

## Error Handling

- **No eureka remote**: Run `eureka init` first
- **Push rejected**: Check if the remote has changes; pull first if needed (see the sketch after this list)
- **Deploy failed**: Check `eureka deploy logs` for details
- **No app_id**: Run `eureka setup` to configure
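
For the push-rejected case, a minimal recovery sketch, assuming a rebase onto the remote branch is acceptable:

```bash
# Integrate remote changes, then retry the push
BRANCH=$(git branch --show-current)
git pull --rebase eureka "$BRANCH" && git push eureka "$BRANCH"
```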

## Success Criteria

- All changes committed and pushed
- Deployment triggered successfully
- Status shows "building" or "deployed"

## Output

Report:

1. Files changed/committed
2. Push result
3. Deployment status
4. Deployed URL (when available)
---
description: View deployment logs from Eureka platform
allowed-tools: Read, Bash, Glob
---

# Eureka Deploy Logs

**Input**: "$ARGUMENTS"

---

## PURPOSE

View the deployment logs from the Eureka platform to debug issues or monitor progress.

---

## EXECUTION FLOW

### ═══════════════════════════════════════════════════════════════
### PHASE 1: Fetch Logs
### ═══════════════════════════════════════════════════════════════

#### 1.1: Run Logs Command

```bash
# Default: last 100 lines
eureka deploy logs

# Custom tail count
eureka deploy logs --tail 200
```
#### 1.2: Parse Arguments

If `$ARGUMENTS` contains a number, use it as the tail count; otherwise fall back to the default of 100:

```bash
# Treat $ARGUMENTS as a tail count only when it is purely numeric
[[ "$ARGUMENTS" =~ ^[0-9]+$ ]] && TAIL_COUNT="$ARGUMENTS" || TAIL_COUNT=100
eureka deploy logs --tail "$TAIL_COUNT"
```

---

## ARGUMENTS

| Argument | Default | Description |
|----------|---------|-------------|
| `[tail]` | `100` | Number of log lines to show |
| `--id <deploymentId>` | Latest | Specific deployment ID |
| `--follow` | `false` | Follow logs in real-time |

## EXAMPLES

```bash
# View last 100 lines
/eureka:deploy-logs

# View last 500 lines
/eureka:deploy-logs 500

# View specific deployment
/eureka:deploy-logs --id dep_abc123
```
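
The `--follow` flag from the table above combines the same way; a hedged example, assuming it streams new lines until interrupted:

```bash
# Stream logs in real time (Ctrl+C to stop)
/eureka:deploy-logs --follow
```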
---
description: Check deployment status on Eureka platform
allowed-tools: Read, Bash, Glob
---

# Eureka Deploy Status

**Input**: "$ARGUMENTS"

---

## PURPOSE

Check the current deployment status of the application on the Eureka platform.

---

## EXECUTION FLOW

### ═══════════════════════════════════════════════════════════════
### PHASE 1: Check Status
### ═══════════════════════════════════════════════════════════════

#### 1.1: Run Status Command

```bash
eureka deploy status --verbose
```

#### 1.2: Display Results

The command will show:

- Current deployment status (pending, building, deploying, deployed, failed)
- Version information
- Environment
- Timestamps
- Deployment URL (if deployed)

---

## ARGUMENTS

| Argument | Default | Description |
|----------|---------|-------------|
| `--verbose` | `false` | Show detailed logs |

## EXAMPLES

```bash
# Check current deployment status
/eureka:deploy-status

# Check with verbose output
/eureka:deploy-status --verbose
```
---
description: Deploy application to Eureka platform (creates app if needed)
allowed-tools: Read, Write, Edit, Bash, Glob, Grep
---

# Eureka Deploy

**Input**: "$ARGUMENTS"

---

## PURPOSE

Deploy the current project to the Eureka platform. If no `app_id` is configured, automatically creates a new directory app first, then triggers the deployment.

---

## ⛔ CRITICAL RULES

### MUST DO
1. **MUST** check for existing `app_id` in `.claude/eureka-factory.yaml` first
2. **MUST** create a new app via API if no `app_id` exists
3. **MUST** save the new `app_id` to config after creation
4. **MUST** display deployment status after triggering

### CANNOT DO
1. **CANNOT** deploy without valid API key
2. **CANNOT** skip app creation if `app_id` is missing
3. **CANNOT** proceed if API calls fail

---

## EXECUTION FLOW

### ═══════════════════════════════════════════════════════════════
### PHASE 1: Configuration Check
### ═══════════════════════════════════════════════════════════════

#### 1.1: Display Start Banner

```
╔══════════════════════════════════════════════════════════════╗
║  🚀 EUREKA DEPLOY                                            ║
╠══════════════════════════════════════════════════════════════╣
║  Deploying to Eureka Platform...                             ║
╚══════════════════════════════════════════════════════════════╝
```

#### 1.2: Check Configuration

Read the configuration file:

```bash
# Check if config exists
cat .claude/eureka-factory.yaml 2>/dev/null || cat .claude/eureka-factory.yml 2>/dev/null || echo "NO_CONFIG"
```

**Extract from config:**
- `api_key` - Required for all operations
- `app_id` - If exists, skip app creation
- `api_endpoint` - Optional custom endpoint
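
A hedged extraction sketch, assuming the flat `key: value` layout shown in section 2.4:

```bash
CONFIG=.claude/eureka-factory.yaml
[ -f "$CONFIG" ] || CONFIG=.claude/eureka-factory.yml
# Pull simple top-level keys out of the flat YAML config
API_KEY=$(sed -n 's/^api_key:[[:space:]]*//p' "$CONFIG")
APP_ID=$(sed -n 's/^app_id:[[:space:]]*//p' "$CONFIG")
API_ENDPOINT=$(sed -n 's/^api_endpoint:[[:space:]]*//p' "$CONFIG")
```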

#### 1.3: Validate API Key

If no `api_key` found:

```
╔══════════════════════════════════════════════════════════════╗
║  ❌ NO API KEY CONFIGURED                                    ║
╠══════════════════════════════════════════════════════════════╣
║  Run `eureka setup` to configure your credentials.           ║
╚══════════════════════════════════════════════════════════════╝
```

**STOP EXECUTION**

---

### ═══════════════════════════════════════════════════════════════
### PHASE 2: App Creation (if needed)
### ═══════════════════════════════════════════════════════════════

#### 2.1: Check for app_id

If `app_id` exists in config → **SKIP TO PHASE 3**

If `app_id` is missing:

```
╔══════════════════════════════════════════════════════════════╗
║  📁 CREATING DIRECTORY APP                                   ║
╠══════════════════════════════════════════════════════════════╣
║  No app_id found. Creating new app on Eureka...              ║
╚══════════════════════════════════════════════════════════════╝
```

#### 2.2: Determine App Name

Use the project directory name as the default app name:

```bash
APP_NAME=$(basename "$(pwd)")
echo "App name: $APP_NAME"
```

Or use the argument if provided: `$ARGUMENTS` as the app name (see the sketch below).
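
A one-line sketch combining both, assuming `$ARGUMENTS` holds only the optional app name:

```bash
# Prefer a user-supplied name; fall back to the directory name
APP_NAME="${ARGUMENTS:-$(basename "$(pwd)")}"
```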

#### 2.3: Create App via API

```bash
# Create app using eureka CLI
eureka deploy trigger --name "$APP_NAME" --type other --yes
```

**If the command is not available, use direct API call:**

```bash
API_KEY="<from config>"
API_ENDPOINT="<from config or default>"

curl -X POST "${API_ENDPOINT}/v1/apps" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ${API_KEY}" \
  -d "{\"name\": \"${APP_NAME}\", \"type\": \"other\"}"
```

#### 2.4: Save app_id to Config

Extract `app_id` from API response and update config:

```yaml
# .claude/eureka-factory.yaml
api_key: <existing>
project_id: <existing>
repo_id: <existing>
app_id: <NEW_APP_ID>  # Add this line
```
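
A hedged sketch of that update, assuming the creation response is JSON with a top-level `app_id` field (the actual response schema is not specified here):

```bash
# Capture the creation response and append app_id to the config once
RESPONSE=$(curl -s -X POST "${API_ENDPOINT}/v1/apps" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ${API_KEY}" \
  -d "{\"name\": \"${APP_NAME}\", \"type\": \"other\"}")
APP_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"app_id"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
grep -q '^app_id:' .claude/eureka-factory.yaml || \
  echo "app_id: $APP_ID" >> .claude/eureka-factory.yaml
```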

```
╔══════════════════════════════════════════════════════════════╗
║  ✅ APP CREATED                                              ║
╠══════════════════════════════════════════════════════════════╣
║  App ID: <app_id>                                            ║
║  Saved to: .claude/eureka-factory.yaml                       ║
╚══════════════════════════════════════════════════════════════╝
```

---

### ═══════════════════════════════════════════════════════════════
### PHASE 3: Trigger Deployment
### ═══════════════════════════════════════════════════════════════

#### 3.1: Trigger Deploy

```bash
# Using eureka CLI
eureka deploy trigger --yes

# Or direct API call
curl -X POST "${API_ENDPOINT}/v1/apps/${APP_ID}/deployments" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ${API_KEY}" \
  -d '{"environment": "production"}'
```

#### 3.2: Display Deployment Status

```
╔══════════════════════════════════════════════════════════════╗
║  ✅ DEPLOYMENT TRIGGERED                                     ║
╠══════════════════════════════════════════════════════════════╣
║  Deployment ID: <deployment_id>                              ║
║  Status: PENDING                                             ║
║  Environment: production                                     ║
║  Version: <version>                                          ║
╠══════════════════════════════════════════════════════════════╣
║  Use `/eureka:deploy-status` to check progress               ║
║  Use `/eureka:deploy-logs` to view logs                      ║
╚══════════════════════════════════════════════════════════════╝
```

---

## ARGUMENTS

| Argument | Default | Description |
|----------|---------|-------------|
| `[app-name]` | Directory name | Name for new app (only used if creating) |
| `--env <environment>` | `production` | Deployment environment |
| `--branch <branch>` | Current branch | Git branch to deploy |
| `--force` | `false` | Force deploy even if already deploying |

## EXAMPLES

```bash
# Deploy current project (creates app if needed)
/eureka:deploy

# Deploy with custom app name
/eureka:deploy my-awesome-app

# Deploy specific branch to staging
/eureka:deploy --env staging --branch develop

# Force redeploy
/eureka:deploy --force
```

---

## ERROR HANDLING

### No Configuration
```
❌ No configuration found.
Run `eureka setup` to configure credentials.
```

### App Creation Failed
```
❌ Failed to create app: <error message>
Check your API key and try again.
```

### Deployment Failed
```
❌ Deployment failed: <error message>
Use `/eureka:deploy-logs` to see details.
```

---

## FLOW DIAGRAM

```
┌─────────────────────────────────────────────────────────────────┐
│                         /eureka:deploy                          │
├─────────────────────────────────────────────────────────────────┤
│                               │                                 │
│                               ▼                                 │
│                      ┌─────────────────┐                        │
│                      │  Check Config   │                        │
│                      └────────┬────────┘                        │
│                               │                                 │
│                     ┌─────────▼─────────┐                       │
│                     │   Has API Key?    │                       │
│                     └─────────┬─────────┘                       │
│                   NO          │          YES                    │
│        ┌──────────────────────┴───────────────┐                 │
│        ▼                                      ▼                 │
│  ┌───────────┐                      ┌─────────────────┐         │
│  │   ERROR   │                      │   Has app_id?   │         │
│  │  No Key   │                      └────────┬────────┘         │
│  └───────────┘                    NO         │         YES      │
│                     ┌────────────────────────┴────────┐         │
│                     ▼                                 │         │
│            ┌─────────────────┐                        │         │
│            │   Create App    │                        │         │
│            │    via API      │                        │         │
│            └────────┬────────┘                        │         │
│                     ▼                                 │         │
│            ┌─────────────────┐                        │         │
│            │  Save app_id    │                        │         │
│            │   to Config     │                        │         │
│            └────────┬────────┘                        │         │
│                     └────────────────┬────────────────┘         │
│                                      ▼                          │
│                             ┌─────────────────┐                 │
│                             │ Trigger Deploy  │                 │
│                             └────────┬────────┘                 │
│                                      ▼                          │
│                             ┌─────────────────┐                 │
│                             │   Show Status   │                 │
│                             └─────────────────┘                 │
└─────────────────────────────────────────────────────────────────┘
```
---
description: Generate comprehensive project documentation for engineers and non-engineers
allowed-tools: Read, Write, Edit, Bash, Task, TodoWrite, Glob, Grep
---

# Eureka Index - Project Documentation Generator

**Input**: "$ARGUMENTS"

---

## PURPOSE

Generate comprehensive, dual-audience documentation by analyzing the current project structure using **parallel agent execution**. The output is designed to be understandable for **both engineers and non-engineers**.

### Documentation Layers

| Layer | Audience | Content |
|-------|----------|---------|
| Executive Summary | Everyone | Project purpose, value, capabilities |
| Architecture Overview | Everyone | Visual diagrams, technology stack |
| Getting Started | Semi-technical | Setup, basic usage, configuration |
| Feature Guide | Non-engineers | Plain-language feature descriptions |
| API Reference | Engineers | Endpoints, schemas, authentication |
| Component Catalog | Engineers | Props, interfaces, usage examples |
| Data Models | Both | ER diagrams + plain descriptions |
| Glossary | Non-engineers | Technical terms explained |

---

## EXECUTION ARCHITECTURE

```
┌─────────────────────────────────────────────────────────────────────┐
│                    PARALLEL EXECUTION PIPELINE                      │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  PHASE 1: PARALLEL ANALYSIS (run_in_background: true)               │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌────────────┐  │
│  │  Structure   │ │     API      │ │  Components  │ │   Models   │  │
│  │  Analyzer    │ │  Analyzer    │ │  Analyzer    │ │  Analyzer  │  │
│  └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ └─────┬──────┘  │
│         │                │                │               │         │
│         ▼                ▼                ▼               ▼         │
│  PHASE 2: SYNCHRONIZATION                                           │
│  ┌──────────────────────────────────────────────────────────────┐   │
│  │              Merge & Create Unified Analysis                 │   │
│  └──────────────────────────────────────────────────────────────┘   │
│                               │                                     │
│                               ▼                                     │
│  PHASE 3: PARALLEL DOCUMENTATION (run_in_background: true)          │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌────────────┐  │
│  │   Main Doc   │ │   API Docs   │ │  Components  │ │   Quick    │  │
│  │  Generator   │ │  Generator   │ │  Generator   │ │  Reference │  │
│  └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ └─────┬──────┘  │
│         │                │                │               │         │
│         ▼                ▼                ▼               ▼         │
│  PHASE 4: FINALIZATION                                              │
│  ┌──────────────────────────────────────────────────────────────┐   │
│  │          HTML Generation + Validation + Summary              │   │
│  └──────────────────────────────────────────────────────────────┘   │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

---

## ⛔ CRITICAL RULES

### MUST DO
1. **MUST** launch analysis agents in parallel using `run_in_background: true`
2. **MUST** wait for all analysis agents before synchronization
3. **MUST** launch documentation agents in parallel after synchronization
4. **MUST** include both technical and non-technical descriptions
5. **MUST** validate generated documentation against actual code

### CANNOT DO
1. **CANNOT** make up features that don't exist
2. **CANNOT** skip the parallel analysis phase
3. **CANNOT** generate docs without synchronizing analysis results

---

## EXECUTION FLOW

### ═══════════════════════════════════════════════════════════════
### PHASE 1: Parallel Analysis
### ═══════════════════════════════════════════════════════════════

#### 1.1: Display Start Banner & Setup
```
╔══════════════════════════════════════════════════════════════╗
║  📚 EUREKA INDEX - Parallel Documentation Generator          ║
╠══════════════════════════════════════════════════════════════╣
║  Launching parallel analysis agents...                       ║
║  Output: Dual-audience documentation (Engineer + Non-Engineer)║
╚══════════════════════════════════════════════════════════════╝
```

```bash
OUTPUT_DIR="${ARGUMENTS:-docs}"
mkdir -p "$OUTPUT_DIR"
echo "📁 Output directory: $OUTPUT_DIR"
```

#### 1.2: Launch Parallel Analysis Agents

**CRITICAL: Launch ALL four agents in a SINGLE message with multiple Task tool calls:**

```
Launch these 4 Task agents IN PARALLEL (single message, multiple tool calls):

┌─────────────────────────────────────────────────────────────────┐
│ AGENT 1: Structure Analyzer                                     │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "Explore"                                      │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # PROJECT STRUCTURE ANALYSIS                                │
│                                                                 │
│     Analyze the project structure and return findings.         │
│                                                                 │
│     ## Tasks                                                    │
│     1. Identify project type (package.json, requirements.txt,  │
│        Cargo.toml, go.mod, pom.xml)                             │
│     2. Extract metadata (name, version, description)            │
│     3. Map directory structure with purposes                    │
│     4. Identify tech stack (language, framework, database)      │
│     5. List key dependencies with plain English purposes        │
│                                                                 │
│     ## Output Format (YAML)                                     │
│     ```yaml                                                     │
│     project:                                                    │
│       name: "..."                                               │
│       version: "..."                                            │
│       description: "..."                                        │
│       type: "node|python|rust|go|java|other"                    │
│     tech_stack:                                                 │
│       language: "..."                                           │
│       framework: "..."                                          │
│       database: "..."                                           │
│     structure:                                                  │
│       directories:                                              │
│         - path: "..."                                           │
│           purpose: "..."                                        │
│           file_count: N                                         │
│     dependencies:                                               │
│       - name: "..."                                             │
│         purpose: "plain English"                                │
│     ```                                                         │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ AGENT 2: API Analyzer                                           │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "Explore"                                      │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # API ENDPOINTS ANALYSIS                                    │
│                                                                 │
│     Find and analyze all API endpoints in the project.          │
│                                                                 │
│     ## Search Patterns                                          │
│     - Next.js App Router: app/api/**/route.ts                   │
│     - Next.js Pages: pages/api/**/*.ts                          │
│     - Express: router.get/post/put/delete                       │
│     - FastAPI: @app.get/post/put/delete                         │
│     - GraphQL: Query/Mutation resolvers                         │
│                                                                 │
│     ## Output Format (YAML)                                     │
│     ```yaml                                                     │
│     api_endpoints:                                              │
│       - method: "GET|POST|PUT|DELETE"                           │
│         path: "/api/..."                                        │
│         handler_file: "path/to/file.ts"                         │
│         description: "plain English"                            │
│         request_body: "schema if POST/PUT"                      │
│         response: "schema summary"                              │
│         auth_required: true|false                               │
│     ```                                                         │
│                                                                 │
│     If no APIs found, return: api_endpoints: []                 │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ AGENT 3: Components Analyzer                                    │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "Explore"                                      │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # UI COMPONENTS ANALYSIS                                    │
│                                                                 │
│     Find and analyze all UI components in the project.          │
│                                                                 │
│     ## Search Patterns                                          │
│     - React: components/**/*.tsx, function Component()          │
│     - Vue: components/**/*.vue, <script setup>                  │
│     - Angular: *.component.ts, @Component                       │
│     - Svelte: **/*.svelte                                       │
│                                                                 │
│     ## Output Format (YAML)                                     │
│     ```yaml                                                     │
│     components:                                                 │
│       - id: "component_name"                                    │
│         name: "ComponentName"                                   │
│         path: "path/to/Component.tsx"                           │
│         description: "what it does in plain English"            │
│         props:                                                  │
│           - name: "propName"                                    │
│             type: "string|number|boolean|..."                   │
│             required: true|false                                │
│             description: "what it controls"                     │
│         events: ["onClick", "onChange"]                         │
│         dependencies: ["OtherComponent"]                        │
│     ```                                                         │
│                                                                 │
│     If no components found, return: components: []              │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ AGENT 4: Data Models Analyzer                                   │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "Explore"                                      │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # DATA MODELS ANALYSIS                                      │
│                                                                 │
│     Find and analyze all data models in the project.            │
│                                                                 │
│     ## Search Patterns                                          │
│     - Prisma: prisma/schema.prisma, model X {}                  │
│     - TypeORM: @Entity(), entities/**/*.ts                      │
│     - Mongoose: new Schema(), models/**/*.ts                    │
│     - SQLAlchemy: class X(Base), models/**/*.py                 │
│     - TypeScript: interface/type definitions                    │
│                                                                 │
│     ## Output Format (YAML)                                     │
│     ```yaml                                                     │
│     data_models:                                                │
│       - name: "ModelName"                                       │
│         source: "prisma|typeorm|mongoose|typescript"            │
│         file_path: "path/to/model"                              │
│         description: "what data it represents"                  │
│         fields:                                                 │
│           - name: "fieldName"                                   │
│             type: "String|Int|Boolean|..."                      │
│             description: "plain English"                        │
│             constraints: "unique|optional|default"              │
│         relations:                                              │
│           - type: "hasMany|belongsTo|hasOne"                    │
│             target: "OtherModel"                                │
│     glossary_terms:                                             │
│       - term: "technical term found"                            │
│         definition: "plain English definition"                  │
│     ```                                                         │
│                                                                 │
│     If no models found, return: data_models: []                 │
└─────────────────────────────────────────────────────────────────┘
```

#### 1.3: Wait for All Analysis Agents

```
Use TaskOutput tool to wait for each agent:
- TaskOutput with task_id from Agent 1, block: true
- TaskOutput with task_id from Agent 2, block: true
- TaskOutput with task_id from Agent 3, block: true
- TaskOutput with task_id from Agent 4, block: true

Collect all results for synchronization.
```

---

### ═══════════════════════════════════════════════════════════════
### PHASE 2: Synchronization
### ═══════════════════════════════════════════════════════════════

#### 2.1: Merge Analysis Results

Combine outputs from all 4 agents into a unified analysis:

```yaml
# $OUTPUT_DIR/analysis.yml - Merged from parallel agents

project:
  # From Agent 1: Structure Analyzer
  name: "..."
  version: "..."
  description: "..."
  type: "..."

tech_stack:
  # From Agent 1: Structure Analyzer
  language: "..."
  framework: "..."
  database: "..."
  key_dependencies: [...]

structure:
  # From Agent 1: Structure Analyzer
  directories: [...]

api_endpoints:
  # From Agent 2: API Analyzer
  [...]

components:
  # From Agent 3: Components Analyzer
  [...]

data_models:
  # From Agent 4: Data Models Analyzer
  [...]

glossary_terms:
  # Merged from all agents
  [...]
```

#### 2.2: Write Unified Analysis File

```
Write the merged analysis (the unified YAML above) to: $OUTPUT_DIR/analysis.yml
```

---

### ═══════════════════════════════════════════════════════════════
### PHASE 3: Parallel Documentation Generation
### ═══════════════════════════════════════════════════════════════

#### 3.1: Launch Parallel Documentation Agents

**CRITICAL: Launch ALL four doc agents in a SINGLE message with multiple Task tool calls:**

```
Launch these 4 Task agents IN PARALLEL (single message, multiple tool calls):

┌─────────────────────────────────────────────────────────────────┐
│ DOC AGENT 1: Main Documentation                                 │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "technical-writer"                             │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # Generate PROJECT_DOCUMENTATION.md                         │
│                                                                 │
│     Using the analysis from $OUTPUT_DIR/analysis.yml,           │
│     generate comprehensive main documentation.                  │
│                                                                 │
│     ## Sections Required                                        │
│     1. Executive Summary (plain English, no jargon)             │
│     2. Quick Start (installation, basic usage)                  │
│     3. Architecture Overview (ASCII diagrams)                   │
│     4. Features (dual-audience: plain + technical details)      │
│     5. Glossary (all technical terms explained)                 │
│                                                                 │
│     ## Rules                                                    │
│     - Plain English FIRST, technical in <details> tags          │
│     - Include ASCII architecture diagrams                       │
│     - Use tables for structured data                            │
│     - Code blocks with language hints                           │
│                                                                 │
│     Write to: $OUTPUT_DIR/PROJECT_DOCUMENTATION.md              │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ DOC AGENT 2: API Reference                                      │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "technical-writer"                             │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # Generate API_REFERENCE.md                                 │
│                                                                 │
│     Using api_endpoints from $OUTPUT_DIR/analysis.yml,          │
│     generate detailed API documentation.                        │
│                                                                 │
│     ## Format per Endpoint                                      │
│     ### [METHOD] /path                                          │
│     **Description**: Plain English                              │
│     **Authentication**: Required/Optional                       │
│                                                                 │
│     <details>                                                   │
│     <summary>Technical Details</summary>                        │
│     Request body, response schema, example                      │
│     </details>                                                  │
│                                                                 │
│     If no APIs exist, write brief note explaining this.         │
│                                                                 │
│     Write to: $OUTPUT_DIR/API_REFERENCE.md                      │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ DOC AGENT 3: Components Catalog                                 │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "technical-writer"                             │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # Generate COMPONENTS.md                                    │
│                                                                 │
│     Using components from $OUTPUT_DIR/analysis.yml,             │
│     generate component catalog documentation.                   │
│                                                                 │
│     ## Format per Component                                     │
│     ### ComponentName                                           │
│     **Purpose**: Plain English description                      │
│     **Location**: `path/to/file`                                │
│                                                                 │
│     <details>                                                   │
│     <summary>Props & Usage</summary>                            │
│     Props table, usage example, dependencies                    │
│     </details>                                                  │
│                                                                 │
│     If no components exist, write brief note explaining this.   │
│                                                                 │
│     Write to: $OUTPUT_DIR/COMPONENTS.md                         │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│ DOC AGENT 4: Quick Reference                                    │
├─────────────────────────────────────────────────────────────────┤
│ Task tool with:                                                 │
│   subagent_type: "technical-writer"                             │
│   run_in_background: true                                       │
│   prompt: |                                                     │
│     # Generate QUICK_REFERENCE.md                               │
│                                                                 │
│     Using $OUTPUT_DIR/analysis.yml, create a one-page           │
│     quick reference card.                                       │
│                                                                 │
│     ## Sections (tables only, minimal text)                     │
│     - Commands (npm scripts, make targets)                      │
│     - Key Files (important files and purposes)                  │
│     - API Endpoints (method, path, purpose)                     │
│     - Environment Variables                                     │
│     - Common Patterns                                           │
│                                                                 │
│     Keep it to ONE PAGE - scannable, dense, useful.             │
│                                                                 │
│     Write to: $OUTPUT_DIR/QUICK_REFERENCE.md                    │
└─────────────────────────────────────────────────────────────────┘
```

#### 3.2: Wait for All Documentation Agents

```
Use TaskOutput tool to wait for each doc agent:
- TaskOutput with task_id from Doc Agent 1, block: true
- TaskOutput with task_id from Doc Agent 2, block: true
- TaskOutput with task_id from Doc Agent 3, block: true
- TaskOutput with task_id from Doc Agent 4, block: true
```

---

### ═══════════════════════════════════════════════════════════════
### PHASE 4: Finalization
### ═══════════════════════════════════════════════════════════════

#### 4.1: Generate HTML Documentation (Optional)

```bash
# If Python scripts exist, generate HTML
if [ -f "skills/documentation-generator/scripts/generate_html.py" ]; then
  python3 skills/documentation-generator/scripts/generate_html.py \
    "$OUTPUT_DIR/analysis.yml" \
    skills/documentation-generator/templates/documentation.html \
    "$OUTPUT_DIR/index.html"
fi
```

#### 4.2: Validate Generated Documentation

Verify all documentation files exist and are non-empty:
- `$OUTPUT_DIR/analysis.yml`
- `$OUTPUT_DIR/PROJECT_DOCUMENTATION.md`
- `$OUTPUT_DIR/API_REFERENCE.md`
- `$OUTPUT_DIR/COMPONENTS.md`
- `$OUTPUT_DIR/QUICK_REFERENCE.md`
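
A minimal shell check along those lines (`-s` is true only for files that exist and are non-empty):

```bash
for f in analysis.yml PROJECT_DOCUMENTATION.md API_REFERENCE.md \
         COMPONENTS.md QUICK_REFERENCE.md; do
  [ -s "$OUTPUT_DIR/$f" ] || echo "⚠️ Missing or empty: $OUTPUT_DIR/$f"
done
```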

#### 4.3: Display Completion Banner

```
╔══════════════════════════════════════════════════════════════╗
║  ✅ PARALLEL DOCUMENTATION COMPLETE                          ║
╠══════════════════════════════════════════════════════════════╣
║  Execution: 4 analysis agents → sync → 4 doc agents          ║
╠══════════════════════════════════════════════════════════════╣
║  Output Directory: $OUTPUT_DIR                               ║
╠══════════════════════════════════════════════════════════════╣
║  Files Created:                                              ║
║  📊 analysis.yml (Merged analysis data)                      ║
║  📄 PROJECT_DOCUMENTATION.md (Main documentation)            ║
║  📄 API_REFERENCE.md (API details)                           ║
║  📄 COMPONENTS.md (Component catalog)                        ║
║  📄 QUICK_REFERENCE.md (One-page reference)                  ║
║  🌐 index.html (HTML version - if generated)                 ║
╠══════════════════════════════════════════════════════════════╣
║  Performance:                                                ║
║  ⚡ Analysis: 4 agents in parallel                           ║
║  ⚡ Documentation: 4 agents in parallel                      ║
║  ⚡ Total: ~4x faster than sequential execution              ║
╚══════════════════════════════════════════════════════════════╝
```

---

## ARGUMENTS

| Argument | Default | Description |
|----------|---------|-------------|
| `[output_dir]` | `docs` | Output directory for documentation |
| `--format` | `markdown` | Output format: markdown, html |
| `--sections` | `all` | Sections to include: all, api, components, models |
| `--audience` | `both` | Target: both, technical, non-technical |

## EXAMPLES

```bash
# Generate full documentation with parallel agents
/eureka:index

# Generate in custom directory
/eureka:index my-docs

# Generate API-only documentation
/eureka:index --sections api

# Generate non-technical documentation only
/eureka:index --audience non-technical
```

---

## PARALLEL EXECUTION BENEFITS

| Metric | Sequential | Parallel | Improvement |
|--------|-----------|----------|-------------|
| Analysis Phase | 4x agent time | 1x agent time | 75% faster |
| Doc Generation | 4x agent time | 1x agent time | 75% faster |
| Total Time | ~8 units | ~2 units | **4x faster** |

---

## DUAL-AUDIENCE WRITING GUIDELINES

### For Non-Engineers (Primary)
- Lead with "What" and "Why", not "How"
- Use analogies and real-world comparisons
- Avoid acronyms; spell them out first time
- Use bullet points over paragraphs
- Include visual diagrams

### For Engineers (Secondary)
- Include in collapsible `<details>` sections
- Provide code examples
- Reference file paths and line numbers
- Include type definitions
- Link to source files

### Balance Example

```markdown
## User Authentication

**What it does**: Allows users to securely log into the application
using their email and password.

**How it works** (simplified):
1. User enters credentials
2. System verifies them
3. User receives access

<details>
<summary>Technical Implementation</summary>

- **Strategy**: JWT-based authentication
- **Token Storage**: HTTP-only cookies
- **Session Duration**: 24 hours
- **Refresh Logic**: Automatic refresh 1 hour before expiry

**Key Files**:
- `src/auth/jwt.service.ts` - Token generation
- `src/auth/auth.guard.ts` - Route protection

</details>
```
---
|
||||||
|
description: Generate a designer-quality landing page from project documentation with AI-generated images
|
||||||
|
allowed-tools: Read, Write, Edit, Bash, Task, TodoWrite, Glob, Grep, mcp__eureka-imagen__generate_hero_image, mcp__eureka-imagen__generate_feature_icon, mcp__eureka-imagen__generate_illustration, mcp__eureka-imagen__generate_og_image, mcp__eureka-imagen__generate_logo_concept, mcp__eureka-imagen__list_available_models, mcp__eureka-imagen__check_status
|
||||||
|
---
|
||||||
|
|
||||||
|
# Eureka Landing - Landing Page Generator
|
||||||
|
|
||||||
|
**Input**: "$ARGUMENTS"
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## PURPOSE
|
||||||
|
|
||||||
|
Generate a **designer-quality landing page** with concept branding and Q&A section based on existing project documentation. This command requires documentation to be generated first via `/eureka:index`.
|
||||||
|
|
||||||
|
### Output Features
|
||||||
|
|
||||||
|
| Feature | Description |
|
||||||
|
|---------|-------------|
|
||||||
|
| Hero Section | Compelling headline, tagline, CTA buttons |
|
||||||
|
| Features Grid | Visual feature cards with icons |
|
||||||
|
| How It Works | Step-by-step visual flow |
|
||||||
|
| Screenshots/Demo | Placeholder for app visuals |
|
||||||
|
| Q&A/FAQ | Common questions answered |
|
||||||
|
| Concept Branding | Colors, typography, visual style |
|
||||||
|
| Responsive Design | Mobile-first, works on all devices |
|
||||||
|
| Dark Mode | Automatic system preference detection |
|
||||||
|
| **AI Images** | AI-generated hero, icons, illustrations (ImageRouter) |
|
||||||
|
|
||||||
|
### Image Generation (Optional)
|
||||||
|
|
||||||
|
When `--with-images` flag is used and IMAGEROUTER_API_KEY is set, the command will:
|
||||||
|
- Generate a hero banner image
|
||||||
|
- Generate feature icons
|
||||||
|
- Generate how-it-works illustrations
|
||||||
|
- Generate OG image for social sharing
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# With AI-generated images
|
||||||
|
/eureka:landing --with-images
|
||||||
|
|
||||||
|
# Without images (default)
|
||||||
|
/eureka:landing
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## PREREQUISITES
|
||||||
|
|
||||||
|
```
|
||||||
|
╔══════════════════════════════════════════════════════════════╗
|
||||||
|
║ ⚠️ REQUIRES DOCUMENTATION FIRST ║
|
||||||
|
╠══════════════════════════════════════════════════════════════╣
|
||||||
|
║ ║
|
||||||
|
║ This command uses data from /eureka:index output. ║
|
||||||
|
║ ║
|
||||||
|
║ Required files: ║
|
||||||
|
║ ✓ docs/analysis.yml (or custom output dir) ║
|
||||||
|
║ ✓ docs/PROJECT_DOCUMENTATION.md ║
|
||||||
|
║ ║
|
||||||
|
║ If missing, run first: ║
|
||||||
|
║ /eureka:index ║
|
||||||
|
║ ║
|
||||||
|
╚══════════════════════════════════════════════════════════════╝
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## EXECUTION FLOW
|
||||||
|
|
||||||
|
### ═══════════════════════════════════════════════════════════════
|
||||||
|
### PHASE 1: Validate Prerequisites
|
||||||
|
### ═══════════════════════════════════════════════════════════════
|
||||||
|
|
||||||
|
#### 1.1: Check for Documentation
|
||||||
|
|
||||||
|
```bash
|
||||||
|
DOCS_DIR="${ARGUMENTS:-docs}"
|
||||||
|
|
||||||
|
# Check if documentation exists
|
||||||
|
if [ ! -f "$DOCS_DIR/analysis.yml" ] && [ ! -f "$DOCS_DIR/PROJECT_DOCUMENTATION.md" ]; then
|
||||||
|
echo "❌ ERROR: Documentation not found!"
|
||||||
|
echo ""
|
||||||
|
echo "Required: $DOCS_DIR/analysis.yml or $DOCS_DIR/PROJECT_DOCUMENTATION.md"
|
||||||
|
echo ""
|
||||||
|
echo "Run first: /eureka:index"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "✅ Documentation found in $DOCS_DIR"
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 1.2: Display Start Banner
|
||||||
|
|
||||||
|
```
|
||||||
|
╔══════════════════════════════════════════════════════════════╗
|
||||||
|
║ 🎨 EUREKA LANDING - Designer Landing Page Generator ║
|
||||||
|
╠══════════════════════════════════════════════════════════════╣
|
||||||
|
║ Creating: Hero + Features + How It Works + Q&A ║
|
||||||
|
║ Style: Modern, professional, conversion-optimized ║
|
||||||
|
╚══════════════════════════════════════════════════════════════╝
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### ═══════════════════════════════════════════════════════════════
|
||||||
|
### PHASE 2: Parallel Content Generation
|
||||||
|
### ═══════════════════════════════════════════════════════════════
|
||||||
|
|
||||||
|
#### 2.1: Launch Content Generation Agents in Parallel
|
||||||
|
|
||||||
|
**CRITICAL: Launch ALL agents in a SINGLE message:**
|
||||||
|
|
||||||
|
```
|
||||||
|
Launch these 4 Task agents IN PARALLEL:
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────────────────────────┐
|
||||||
|
│ AGENT 1: Branding Concept Generator │
|
||||||
|
├─────────────────────────────────────────────────────────────────┤
|
||||||
|
│ Task tool with: │
|
||||||
|
│ subagent_type: "frontend-architect" │
|
||||||
|
│ run_in_background: true │
|
||||||
|
│ prompt: | │
|
||||||
|
│ # CONCEPT BRANDING GENERATION │
|
||||||
|
│ │
|
||||||
|
│ Read $DOCS_DIR/analysis.yml and create a branding concept. │
|
||||||
|
│ │
|
||||||
|
│ ## Output: branding.json │
|
||||||
|
│ ```json │
|
||||||
|
│ { │
|
||||||
|
│ "brand": { │
|
||||||
|
│ "name": "Project Name", │
|
||||||
|
│ "tagline": "Compelling one-liner", │
|
||||||
|
│ "value_proposition": "What makes it special" │
|
||||||
|
│ }, │
|
||||||
|
│ "colors": { │
|
||||||
|
│ "primary": "#hex - main brand color", │
|
||||||
|
│ "secondary": "#hex - accent color", │
|
||||||
|
│ "accent": "#hex - CTA/highlight color", │
|
||||||
|
│ "background": "#hex - light bg", │
|
||||||
|
│ "background_dark": "#hex - dark mode bg", │
|
||||||
|
│ "text": "#hex - primary text", │
|
||||||
|
│ "text_muted": "#hex - secondary text" │
|
||||||
|
│ }, │
|
||||||
|
│ "typography": { │
|
||||||
|
│ "heading_font": "Inter, system-ui, sans-serif", │
|
||||||
|
│ "body_font": "Inter, system-ui, sans-serif", │
|
||||||
|
│ "mono_font": "JetBrains Mono, monospace" │
|
||||||
|
│ }, │
|
||||||
|
│ "style": { │
|
||||||
|
│ "border_radius": "12px", │
|
||||||
|
│ "shadow": "0 4px 6px -1px rgba(0,0,0,0.1)", │
|
||||||
|
│ "gradient": "linear-gradient(...)" │
|
||||||
|
│ }, │
|
||||||
|
│ "icons": { │
|
||||||
|
│ "style": "lucide|heroicons|phosphor", │
|
||||||
|
│ "feature_icons": ["icon1", "icon2", ...] │
|
||||||
|
│ } │
|
||||||
|
│ } │
|
||||||
|
│ ``` │
|
||||||
|
│ │
|
||||||
|
│ Base colors on project type: │
|
||||||
|
│ - Developer tools: Blues, purples │
|
||||||
|
│ - Business apps: Blues, greens │
|
||||||
|
│ - Creative tools: Vibrant, gradients │
|
||||||
|
│ - Data/Analytics: Teals, blues │
|
||||||
|
│ │
|
||||||
|
│ Write to: $DOCS_DIR/branding.json │
|
||||||
|
└─────────────────────────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────────────────────────┐
|
||||||
|
│ AGENT 2: Hero & Features Content │
|
||||||
|
├─────────────────────────────────────────────────────────────────┤
|
||||||
|
│ Task tool with: │
|
||||||
|
│ subagent_type: "technical-writer" │
|
||||||
|
│ run_in_background: true │
|
||||||
|
│ prompt: | │
|
||||||
|
│ # HERO & FEATURES CONTENT │
|
||||||
|
│ │
|
||||||
|
│ Read $DOCS_DIR/analysis.yml and create marketing content. │
|
||||||
|
│ │
|
||||||
|
│ ## Output: content.json │
|
||||||
|
│ ```json │
|
||||||
|
│ { │
|
||||||
|
│ "hero": { │
|
||||||
|
│ "headline": "Powerful, benefit-focused headline", │
|
||||||
|
│ "subheadline": "Explain the value in one sentence", │
|
||||||
|
│ "cta_primary": "Get Started", │
|
||||||
|
│ "cta_secondary": "Learn More", │
|
||||||
|
│ "social_proof": "Used by X developers" │
|
||||||
|
│ }, │
|
||||||
|
│ "features": [ │
|
||||||
|
│ { │
|
||||||
|
│ "title": "Feature Name", │
|
||||||
|
│ "description": "Benefit-focused description", │
|
||||||
|
│ "icon": "suggested-icon-name" │
|
||||||
|
│ } │
|
||||||
|
│ ], │
|
||||||
|
│ "how_it_works": [ │
|
||||||
|
│ { │
|
||||||
|
│ "step": 1, │
|
||||||
|
│ "title": "Step Title", │
|
||||||
|
│ "description": "What happens" │
|
||||||
|
│ } │
|
||||||
|
│ ], │
|
||||||
|
│ "stats": [ │
|
||||||
|
│ { "value": "10x", "label": "Faster" } │
|
||||||
|
│ ] │
|
||||||
|
│ } │
|
||||||
|
│ ``` │
|
||||||
|
│ │
|
||||||
|
│ Writing Rules: │
|
||||||
|
│ - Headlines: Benefit-focused, action-oriented │
|
||||||
|
│ - Features: Max 6, each with clear benefit │
|
||||||
|
│ - How It Works: 3-4 steps maximum │
|
||||||
|
│ - Use numbers and specifics when possible │
|
||||||
|
│ │
|
||||||
|
│ Write to: $DOCS_DIR/content.json │
|
||||||
|
└─────────────────────────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────────────────────────┐
|
||||||
|
│ AGENT 3: Q&A / FAQ Generator │
|
||||||
|
├─────────────────────────────────────────────────────────────────┤
|
||||||
|
│ Task tool with: │
|
||||||
|
│ subagent_type: "technical-writer" │
|
||||||
|
│ run_in_background: true │
|
||||||
|
│ prompt: | │
|
||||||
|
│ # Q&A / FAQ GENERATION │
|
||||||
|
│ │
|
||||||
|
│ Read $DOCS_DIR/analysis.yml and PROJECT_DOCUMENTATION.md │
|
||||||
|
│ to generate comprehensive Q&A. │
|
||||||
|
│ │
|
||||||
|
│ ## Output: faq.json │
|
||||||
|
│ ```json │
|
||||||
|
│ { │
|
||||||
|
│ "categories": [ │
|
||||||
|
│ { │
|
||||||
|
│ "name": "Getting Started", │
|
||||||
|
│ "questions": [ │
|
||||||
|
│ { │
|
||||||
|
│ "q": "How do I install [Project]?", │
|
||||||
|
│ "a": "Clear, step-by-step answer" │
|
||||||
|
│ } │
|
||||||
|
│ ] │
|
||||||
|
│ }, │
|
||||||
|
│ { │
|
||||||
|
│ "name": "Features", │
|
||||||
|
│ "questions": [...] │
|
||||||
|
│ }, │
|
||||||
|
│ { │
|
||||||
|
│ "name": "Technical", │
|
||||||
|
│ "questions": [...] │
|
||||||
|
│ }, │
|
||||||
|
│ { │
|
||||||
|
│ "name": "Pricing & Support", │
|
||||||
|
│ "questions": [...] │
|
||||||
|
│ } │
|
||||||
|
│ ] │
|
||||||
|
│ } │
|
||||||
|
│ ``` │
|
||||||
|
│ │
|
||||||
|
│ Q&A Guidelines: │
|
||||||
|
│ - 3-5 questions per category │
|
||||||
|
│ - Anticipate real user questions │
|
||||||
|
│ - Answers should be concise but complete │
|
||||||
|
│ - Include code snippets where helpful │
|
||||||
|
│ - Address common concerns and objections │
|
||||||
|
│ │
|
||||||
|
│ Write to: $DOCS_DIR/faq.json │
|
||||||
|
└─────────────────────────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────────────────────────┐
|
||||||
|
│ AGENT 4: SEO & Meta Content │
|
||||||
|
├─────────────────────────────────────────────────────────────────┤
|
||||||
|
│ Task tool with: │
|
||||||
|
│ subagent_type: "technical-writer" │
|
||||||
|
│ run_in_background: true │
|
||||||
|
│ prompt: | │
|
||||||
|
│ # SEO & META CONTENT │
|
||||||
|
│ │
|
||||||
|
│ Read $DOCS_DIR/analysis.yml and create SEO content. │
|
||||||
|
│ │
|
||||||
|
│ ## Output: seo.json │
|
||||||
|
│ ```json │
|
||||||
|
│ { │
|
||||||
|
│ "title": "Project Name - Tagline | Category", │
|
||||||
|
│ "description": "150-160 char meta description", │
|
||||||
|
│ "keywords": ["keyword1", "keyword2"], │
|
||||||
|
│ "og": { │
|
||||||
|
│ "title": "Open Graph title", │
|
||||||
|
│ "description": "OG description", │
|
||||||
|
│ "type": "website" │
|
||||||
|
│ }, │
|
||||||
|
│ "twitter": { │
|
||||||
|
│ "card": "summary_large_image", │
|
||||||
|
│ "title": "Twitter title", │
|
||||||
|
│ "description": "Twitter description" │
|
||||||
|
│ }, │
|
||||||
|
│ "structured_data": { │
|
||||||
|
│ "@type": "SoftwareApplication", │
|
||||||
|
│ "name": "...", │
|
||||||
|
│ "description": "..." │
|
||||||
|
│ } │
|
||||||
|
│ } │
|
||||||
|
│ ``` │
|
||||||
|
│ │
|
||||||
|
│ Write to: $DOCS_DIR/seo.json │
|
||||||
|
└─────────────────────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 2.2: Wait for All Content Agents
|
||||||
|
|
||||||
|
```
|
||||||
|
Use TaskOutput tool to wait for each agent:
|
||||||
|
- TaskOutput with task_id from Agent 1 (branding), block: true
|
||||||
|
- TaskOutput with task_id from Agent 2 (content), block: true
|
||||||
|
- TaskOutput with task_id from Agent 3 (faq), block: true
|
||||||
|
- TaskOutput with task_id from Agent 4 (seo), block: true
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### ═══════════════════════════════════════════════════════════════
|
||||||
|
### PHASE 3: Generate Landing Page HTML
|
||||||
|
### ═══════════════════════════════════════════════════════════════
|
||||||
|
|
||||||
|
#### 3.1: Generate Designer-Quality Landing Page
|
||||||
|
|
||||||
|
**Use Task tool with frontend-architect agent:**
|
||||||
|
|
||||||
|
```
|
||||||
|
Task tool with:
|
||||||
|
subagent_type: "frontend-architect"
|
||||||
|
prompt: |
|
||||||
|
# GENERATE LANDING PAGE HTML
|
||||||
|
|
||||||
|
Read these files and generate a stunning landing page:
|
||||||
|
- $DOCS_DIR/branding.json (colors, typography, style)
|
||||||
|
- $DOCS_DIR/content.json (hero, features, how-it-works)
|
||||||
|
- $DOCS_DIR/faq.json (Q&A sections)
|
||||||
|
- $DOCS_DIR/seo.json (meta tags)
|
||||||
|
|
||||||
|
## Output: $DOCS_DIR/landing.html
|
||||||
|
|
## Design Requirements

### Structure
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <!-- SEO meta tags from seo.json -->
  <!-- Fonts: Google Fonts or system fonts -->
  <!-- Inline critical CSS -->
</head>
<body>
  <!-- Navigation (sticky) -->
  <nav>Logo | Links | CTA Button</nav>

  <!-- Hero Section -->
  <section class="hero">
    <h1>Headline</h1>
    <p>Subheadline</p>
    <div class="cta-buttons">Primary | Secondary</div>
    <!-- Optional: Animated gradient background -->
  </section>

  <!-- Social Proof / Stats -->
  <section class="stats">
    <div class="stat">Value + Label</div>
  </section>

  <!-- Features Grid -->
  <section class="features">
    <h2>Features</h2>
    <div class="feature-grid">
      <!-- 3-column grid of feature cards -->
    </div>
  </section>

  <!-- How It Works -->
  <section class="how-it-works">
    <h2>How It Works</h2>
    <div class="steps">
      <!-- Numbered steps with visual flow -->
    </div>
  </section>

  <!-- Q&A / FAQ -->
  <section class="faq">
    <h2>Frequently Asked Questions</h2>
    <div class="faq-accordion">
      <!-- Expandable Q&A items -->
    </div>
  </section>

  <!-- CTA Section -->
  <section class="cta-final">
    <h2>Ready to Get Started?</h2>
    <button>Primary CTA</button>
  </section>

  <!-- Footer -->
  <footer>
    Links | Copyright | Social
  </footer>

  <!-- Inline JavaScript for interactions -->
  <script>
    // FAQ accordion
    // Smooth scroll
    // Dark mode toggle
  </script>
</body>
</html>
```

### Visual Design Standards

**Hero Section**:
- Full viewport height or 80vh minimum
- Gradient or subtle pattern background
- Large, bold headline (48-72px)
- Clear visual hierarchy
- Floating elements or subtle animation

**Feature Cards**:
- Icon + Title + Description
- Subtle hover effects (scale, shadow)
- Consistent spacing (24-32px gaps)
- 3 columns on desktop, 1 column on mobile

**How It Works**:
- Visual step indicators (1, 2, 3)
- Connecting lines or arrows
- Icons or illustrations per step
- Horizontal on desktop, vertical on mobile

**FAQ Accordion**:
- Click to expand/collapse
- Smooth animation (max-height transition)
- Plus/minus or chevron indicator
- Category grouping

**Micro-interactions**:
- Button hover: scale(1.02) + shadow
- Card hover: translateY(-4px) + shadow
- Smooth scroll for anchor links
- Fade-in on scroll (intersection observer)

### CSS Requirements

```css
/* CSS Custom Properties from branding.json */
:root {
  --color-primary: ...;
  --color-secondary: ...;
  --font-heading: ...;
  --radius: ...;
}

/* Dark mode */
@media (prefers-color-scheme: dark) {
  :root {
    --color-bg: var(--color-bg-dark);
    ...
  }
}

/* Responsive breakpoints */
/* Mobile: < 640px */
/* Tablet: 640-1024px */
/* Desktop: > 1024px */
```

### JavaScript Requirements
- FAQ accordion functionality
- Smooth scroll for navigation
- Optional: Scroll-triggered animations
- Dark mode toggle (optional)
- Mobile menu toggle

### Performance
- Single HTML file (no external dependencies)
- Inline critical CSS
- Minimal JavaScript
- Optimized for Core Web Vitals
- < 100KB total size

Write complete HTML to: $DOCS_DIR/landing.html
```

---

### ═══════════════════════════════════════════════════════════════
### PHASE 4: Finalization
### ═══════════════════════════════════════════════════════════════

#### 4.1: Validate Generated Files

Verify all files exist (a scripted check is sketched after the list):
- `$DOCS_DIR/branding.json`
- `$DOCS_DIR/content.json`
- `$DOCS_DIR/faq.json`
- `$DOCS_DIR/seo.json`
- `$DOCS_DIR/landing.html`
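
One way this check could be scripted — a minimal sketch, not part of the command itself; the `validate_outputs` helper and the default `docs` directory are illustrative assumptions:

```python
# Sketch: verify that all Phase 4.1 outputs exist before declaring success.
import os
import sys

EXPECTED = ["branding.json", "content.json", "faq.json", "seo.json", "landing.html"]

def validate_outputs(docs_dir: str = "docs") -> bool:
    # Report every missing file rather than stopping at the first one.
    missing = [f for f in EXPECTED if not os.path.isfile(os.path.join(docs_dir, f))]
    for name in missing:
        print(f"MISSING: {os.path.join(docs_dir, name)}")
    return not missing

if __name__ == "__main__":
    docs_dir = sys.argv[1] if len(sys.argv) > 1 else "docs"
    sys.exit(0 if validate_outputs(docs_dir) else 1)
```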

#### 4.2: Display Completion Banner

```
╔══════════════════════════════════════════════════════════════╗
║ ✅ LANDING PAGE GENERATED SUCCESSFULLY ║
╠══════════════════════════════════════════════════════════════╣
║ Output Directory: $DOCS_DIR ║
╠══════════════════════════════════════════════════════════════╣
║ Files Created: ║
║ 🎨 branding.json (Colors, typography, style guide) ║
║ 📝 content.json (Hero, features, how-it-works) ║
║ ❓ faq.json (Q&A content by category) ║
║ 🔍 seo.json (Meta tags, Open Graph, Schema) ║
║ 🌐 landing.html (Designer-quality landing page) ║
╠══════════════════════════════════════════════════════════════╣
║ Landing Page Features: ║
║ ✅ Hero with compelling headline + CTAs ║
║ ✅ Feature grid with icons ║
║ ✅ How It Works visual flow ║
║ ✅ Interactive FAQ accordion ║
║ ✅ Responsive (mobile-first) ║
║ ✅ Dark mode support ║
║ ✅ SEO optimized ║
║ ✅ Single file, no dependencies ║
╠══════════════════════════════════════════════════════════════╣
║ Next Steps: ║
║ → Open landing.html in browser to preview ║
║ → Customize colors in branding.json ║
║ → Add real screenshots/images ║
║ → Deploy to your hosting ║
╚══════════════════════════════════════════════════════════════╝
```

---

## ARGUMENTS

| Argument | Default | Description |
|----------|---------|-------------|
| `[docs_dir]` | `docs` | Directory containing documentation from /eureka:index |
| `--style` | `modern` | Design style: modern, minimal, bold, corporate |
| `--theme` | `auto` | Color theme: auto, light, dark |

## EXAMPLES

```bash
# Generate landing page from default docs directory
/eureka:landing

# Generate from custom documentation directory
/eureka:landing my-docs

# Generate with specific style
/eureka:landing --style minimal

# Generate dark-only theme
/eureka:landing --theme dark
```

---

## DESIGN STYLE OPTIONS

### Modern (Default)
- Gradient backgrounds
- Rounded corners (12-16px)
- Soft shadows
- Vibrant accent colors
- Floating elements

### Minimal
- Clean white space
- Thin borders
- Muted colors
- Typography-focused
- Subtle interactions

### Bold
- Strong colors
- Large typography
- High contrast
- Geometric shapes
- Impactful CTAs

### Corporate
- Professional blues/grays
- Structured layout
- Trust indicators
- Clean lines
- Conservative animations

---

## OUTPUT STRUCTURE

```
docs/
├── analysis.yml (from /eureka:index)
├── PROJECT_DOCUMENTATION.md
├── API_REFERENCE.md
├── COMPONENTS.md
├── QUICK_REFERENCE.md
├── index.html (documentation HTML)
│
├── branding.json (NEW - concept branding)
├── content.json (NEW - marketing content)
├── faq.json (NEW - Q&A content)
├── seo.json (NEW - SEO metadata)
└── landing.html (NEW - landing page)
```

---

## BRANDING SYSTEM

The generated branding.json provides a complete design system:

```json
{
  "brand": {
    "name": "Project Name",
    "tagline": "Your compelling tagline",
    "value_proposition": "What makes it unique"
  },
  "colors": {
    "primary": "#6366f1",
    "secondary": "#8b5cf6",
    "accent": "#f59e0b",
    "background": "#ffffff",
    "background_dark": "#0f172a",
    "text": "#1e293b",
    "text_muted": "#64748b"
  },
  "typography": {
    "heading_font": "Inter, system-ui, sans-serif",
    "body_font": "Inter, system-ui, sans-serif",
    "mono_font": "JetBrains Mono, Consolas, monospace"
  },
  "style": {
    "border_radius": "12px",
    "shadow_sm": "0 1px 2px rgba(0,0,0,0.05)",
    "shadow_md": "0 4px 6px -1px rgba(0,0,0,0.1)",
    "shadow_lg": "0 10px 15px -3px rgba(0,0,0,0.1)",
    "gradient": "linear-gradient(135deg, #6366f1 0%, #8b5cf6 100%)"
  }
}
```

This can be used to:
- Maintain consistent branding across all pages
- Generate CSS custom properties (see the sketch below)
- Create Figma/design tool exports
- Build component libraries
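
A minimal sketch of the CSS-custom-properties case — the `emit_css` helper and `--color-*` naming mirror the CSS Requirements section above but are illustrative, not a shipped script:

```python
# Sketch: turn branding.json colors/style values into CSS custom properties.
import json

def emit_css(branding_path: str = "docs/branding.json") -> str:
    with open(branding_path) as fh:
        branding = json.load(fh)
    lines = [":root {"]
    # e.g. "background_dark" -> "--color-background-dark"
    for key, value in branding.get("colors", {}).items():
        lines.append(f"  --color-{key.replace('_', '-')}: {value};")
    # e.g. "border_radius" -> "--border-radius"
    for key, value in branding.get("style", {}).items():
        lines.append(f"  --{key.replace('_', '-')}: {value};")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(emit_css())
```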
@ -0,0 +1,243 @@
---
description: Analyze codebase and generate project manifest from existing code
allowed-tools: Read, Write, Bash, Glob, Grep, Task, AskUserQuestion
---

# Analyze Codebase and Generate Manifest

Analyze the current codebase and generate all files needed for the guardrail workflow system.

## Arguments

- `$ARGUMENTS` - Optional flags:
  - `--force` - Overwrite existing manifest without asking
  - `--dry-run` - Preview manifest without writing
  - `--deep` - Use AI agent for deeper analysis
  - `<name>` - Custom project name

## Generated Files

This command creates:
1. `project_manifest.json` - Entity definitions and dependencies
2. `.workflow/index.yml` - Version tracking index
3. `.workflow/versions/` - Directory for version snapshots

## Quick Execution (Default)

### Step 1: Run the Python analyzer script
```bash
python3 skills/guardrail-orchestrator/scripts/analyze_codebase.py --path . $ARGUMENTS
```

If the script succeeds, continue to Step 2.

### Step 2: Initialize Workflow Directory Structure [MANDATORY]
```bash
# Create workflow directory structure
mkdir -p .workflow/versions

# Create index.yml if it doesn't exist
if [ ! -f .workflow/index.yml ]; then
  cat > .workflow/index.yml << 'EOF'
versions: []
latest_version: null
total_versions: 0
EOF
  echo "Created .workflow/index.yml"
fi
```

### Step 3: Display Summary
```
╔══════════════════════════════════════════════════════════════╗
║ ✅ GUARDRAIL INITIALIZED ║
╠══════════════════════════════════════════════════════════════╣
║ Files Created: ║
║ ✓ project_manifest.json ║
║ ✓ .workflow/index.yml ║
║ ✓ .workflow/versions/ ║
╠══════════════════════════════════════════════════════════════╣
║ Ready to use: ║
║ /workflow:spawn <feature> Start a new feature ║
║ /guardrail:status Check project status ║
╚══════════════════════════════════════════════════════════════╝
```

## Deep Analysis Mode (--deep flag)

**Use the Task tool to spawn an Explore agent for comprehensive codebase analysis**

### Phase 1: Explore Codebase

Use Task tool with:
  subagent_type: "Explore"
  prompt: |
    Analyze this codebase thoroughly and return structured information about:

    1. **Pages** (Next.js App Router):
       - Find all page.tsx files in app/ directory
       - Extract route paths from file locations
       - Identify components imported/used
       - Identify API dependencies (fetch calls)

    2. **Components**:
       - Find all .tsx files in app/components/
       - Extract component names and exports
       - Extract prop interfaces/types
       - Identify child component dependencies

    3. **API Routes**:
       - Find all route.ts files in app/api/
       - Extract HTTP methods (GET, POST, PUT, DELETE, PATCH)
       - Identify request/response types
       - Extract path parameters

    4. **Database/Types**:
       - Find type definitions in app/lib/
       - Extract interfaces and type aliases
       - Identify database schemas/tables

    5. **Dependencies**:
       - Which components are used by which pages
       - Which APIs are called by which components
       - Which database tables are used by which APIs

    Return the analysis as structured JSON sections.

### Phase 2: Generate Manifest

Based on the analysis, create `project_manifest.json` with this structure:

```json
{
  "project": {
    "name": "<project-name>",
    "version": "1.0.0",
    "created_at": "<ISO timestamp>",
    "description": "<inferred from package.json or README>"
  },
  "state": {
    "current_phase": "IMPLEMENTATION_PHASE",
    "approval_status": {
      "manifest_approved": true,
      "approved_by": "analyzer",
      "approved_at": "<ISO timestamp>"
    },
    "revision_history": [
      {
        "action": "MANIFEST_GENERATED",
        "timestamp": "<ISO timestamp>",
        "details": "Generated from existing codebase analysis"
      }
    ]
  },
  "entities": {
    "pages": [
      {
        "id": "page_<name>",
        "path": "/<route>",
        "file_path": "app/<path>/page.tsx",
        "status": "IMPLEMENTED",
        "description": "<inferred>",
        "components": ["comp_<name>", ...],
        "data_dependencies": ["api_<name>", ...]
      }
    ],
    "components": [
      {
        "id": "comp_<name>",
        "name": "<ComponentName>",
        "file_path": "app/components/<Name>.tsx",
        "status": "IMPLEMENTED",
        "description": "<inferred from component>",
        "props": {
          "<propName>": {
            "type": "<type>",
            "optional": true|false,
            "description": "<if available>"
          }
        }
      }
    ],
    "api_endpoints": [
      {
        "id": "api_<action>_<resource>",
        "path": "/api/<path>",
        "method": "GET|POST|PUT|DELETE|PATCH",
        "file_path": "app/api/<path>/route.ts",
        "status": "IMPLEMENTED",
        "description": "<inferred>",
        "request": {
          "params": {},
          "query": {},
          "body": {}
        },
        "response": {
          "type": "<type>",
          "description": "<description>"
        }
      }
    ],
    "database_tables": [
      {
        "id": "table_<name>",
        "name": "<tableName>",
        "file_path": "app/lib/db.ts",
        "status": "IMPLEMENTED",
        "description": "<description>",
        "columns": {}
      }
    ]
  },
  "dependencies": {
    "component_to_page": {},
    "api_to_component": {},
    "table_to_api": {}
  },
  "types": {}
}
```

### Phase 3: Entity Naming Conventions

Use these ID formats:
- **Pages**: `page_<name>` (e.g., `page_home`, `page_tasks`, `page_task_detail`)
- **Components**: `comp_<snake_case>` (e.g., `comp_task_list`, `comp_filter_bar`)
- **APIs**: `api_<action>_<resource>` (e.g., `api_list_tasks`, `api_create_task`)
- **Tables**: `table_<name>` (e.g., `table_tasks`, `table_users`)

### Phase 4: Write Manifest

1. Write the generated manifest to `project_manifest.json`
2. Validate JSON syntax
3. Display summary:

```
╔══════════════════════════════════════════════════════════════╗
║ 📊 MANIFEST GENERATED ║
╠══════════════════════════════════════════════════════════════╣
║ Project: <name> ║
║ Generated: <timestamp> ║
╠══════════════════════════════════════════════════════════════╣
║ ENTITIES DISCOVERED ║
║ 📄 Pages: X ║
║ 🧩 Components: X ║
║ 🔌 APIs: X ║
║ 🗄️ Tables: X ║
╠══════════════════════════════════════════════════════════════╣
║ Status: All entities marked as IMPLEMENTED ║
║ Phase: IMPLEMENTATION_PHASE ║
╚══════════════════════════════════════════════════════════════╝
```

## Options

If manifest already exists, ask user:
1. **Overwrite** - Replace existing manifest
2. **Merge** - Add new entities, keep existing (a sketch follows this list)
3. **Cancel** - Abort operation
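
A minimal sketch of what the Merge option could do, assuming entities are keyed on `id`; `merge_manifests` is an illustrative helper, not a shipped script:

```python
# Sketch: add newly discovered entities to an existing manifest without
# touching entities that are already defined.
import json

def merge_manifests(existing_path: str, generated_path: str) -> dict:
    with open(existing_path) as fh:
        existing = json.load(fh)
    with open(generated_path) as fh:
        generated = json.load(fh)
    for kind, new_entities in generated.get("entities", {}).items():
        current = existing.setdefault("entities", {}).setdefault(kind, [])
        known_ids = {e["id"] for e in current}
        # Keep existing entries; append only entities with unseen IDs.
        current.extend(e for e in new_entities if e["id"] not in known_ids)
    return existing
```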

## Notes

- All discovered entities are marked as `IMPLEMENTED` since they already exist
- Project starts in `IMPLEMENTATION_PHASE` since code exists
- Use this command to bring existing projects under guardrail management
- After generation, use `/guardrail:validate` to verify manifest accuracy
@ -0,0 +1,47 @@
---
description: Approve design and transition to IMPLEMENTATION_PHASE (Reviewer mode)
allowed-tools: Read, Write, Bash
---

# Approve Design (Reviewer Mode)

✅ **REVIEWER MODE ACTIVATED**

Approve the current design and enable implementation.

## CRITICAL RULES

You are acting as the **REVIEWER AGENT**.

✅ **ALLOWED**:
- Read any file
- Update approval status in manifest
- Transition phases

❌ **BLOCKED**:
- Write ANY code files
- You cannot implement anything

## Steps

1. **Verify Phase**: Must be in `DESIGN_REVIEW`

2. **Run Full Validation**:
```bash
python3 "$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_manifest.py" --strict
```

3. **If Valid**, update manifest (a sketch follows these bullets):
- Set `state.approval_status.manifest_approved = true`
- Set `state.approval_status.approved_by = "reviewer"`
- Set `state.approval_status.approved_at = <current timestamp>`
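
A minimal sketch of that update; the field names come from the manifest structure used throughout these commands, while the `approve_manifest` helper itself is illustrative:

```python
# Sketch: set the approval fields in project_manifest.json.
import json
from datetime import datetime, timezone

def approve_manifest(path: str = "project_manifest.json") -> None:
    with open(path) as fh:
        manifest = json.load(fh)
    status = manifest.setdefault("state", {}).setdefault("approval_status", {})
    status["manifest_approved"] = True
    status["approved_by"] = "reviewer"
    status["approved_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "w") as fh:
        json.dump(manifest, fh, indent=2)
```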

4. **Transition to Implementation**:
```bash
python3 "$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/transition_phase.py" --to IMPLEMENTATION_PHASE
```

5. **Show Results**:
- List all entities now with status `APPROVED`
- Explain that code can now be written for these entities
- Suggest `/guardrail:implement` to start
@ -0,0 +1,81 @@
---
description: Design a new feature by adding entities to manifest (Architect mode)
allowed-tools: Read, Write, Bash
---

# Design Feature (Architect Mode)

🏗️ **ARCHITECT MODE ACTIVATED**

Design the feature: "$ARGUMENTS"

## CRITICAL RULES

You are now acting as the **ARCHITECT AGENT**.

✅ **ALLOWED**:
- Read any file
- Write to `project_manifest.json` ONLY
- Run validation scripts

❌ **BLOCKED**:
- Write ANY code files (.ts, .tsx, .css, .sql, .js, .jsx)
- You CANNOT write implementation code yet

## Workflow

1. **Verify Phase**: Check `project_manifest.json` - must be in `DESIGN_PHASE`

2. **Analyze Requirements**: Break down "$ARGUMENTS" into:
   - Pages needed (routes/screens)
   - Components needed (UI elements)
   - API endpoints needed (backend routes)
   - Database tables needed (data storage)

3. **Define Each Entity** in manifest with:
   - Unique ID following naming convention
   - Complete specification (props, schemas, columns)
   - `status: "DEFINED"`
   - File path where it will be implemented

4. **Update Manifest**: Add all entities to `project_manifest.json`

5. **Validate**: Run `python3 skills/guardrail-orchestrator/scripts/validate_manifest.py`

6. **Summarize**: List what was added and suggest `/guardrail:review`

## Entity Templates

### Page
```json
{
  "id": "page_[name]",
  "path": "/[route]",
  "file_path": "src/pages/[name]/index.tsx",
  "status": "DEFINED",
  "components": [],
  "data_dependencies": []
}
```

### Component
```json
{
  "id": "comp_[name]",
  "name": "[PascalCase]",
  "file_path": "src/components/[Name]/index.tsx",
  "status": "DEFINED",
  "props": {}
}
```

### API Endpoint
```json
{
  "id": "api_[action]_[resource]",
  "path": "/api/v1/[resource]",
  "method": "GET|POST|PUT|DELETE",
  "file_path": "src/api/[resource]/[action].ts",
  "status": "DEFINED"
}
```
@ -0,0 +1,74 @@
---
description: Implement an approved entity from the manifest
allowed-tools: Read, Write, Bash
---

# Implement Entity

Implement the entity: "$ARGUMENTS"

## CRITICAL RULES

⚠️ **GUARDRAIL ENFORCEMENT ACTIVE**

You can ONLY write to files that:
1. Are defined in `project_manifest.json`
2. Have status = `APPROVED`
3. Match the `file_path` in the manifest EXACTLY (a sketch of this check follows the list)
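
A minimal sketch of the guardrail check itself; `is_write_allowed` is an illustrative helper — the actual enforcement is handled by hooks, not by this snippet:

```python
# Sketch: a write is allowed only when the target path appears as an
# APPROVED entity's file_path in the manifest.
import json

def is_write_allowed(path: str, manifest_path: str = "project_manifest.json") -> bool:
    with open(manifest_path) as fh:
        manifest = json.load(fh)
    for entities in manifest.get("entities", {}).values():
        for entity in entities:
            if entity.get("file_path") == path:
                return entity.get("status") == "APPROVED"
    return False  # Paths unknown to the manifest are blocked outright.
```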

## Steps

1. **Verify Phase**: Must be in `IMPLEMENTATION_PHASE`

2. **Find Entity** in manifest:
   - If "$ARGUMENTS" is `--all`: implement all APPROVED entities
   - Otherwise: find the specific entity by ID

3. **For Each Entity**:

   a. **Load Definition** from manifest

   b. **Verify Status** is `APPROVED`

   c. **Generate Code** matching the specification:
      - Props must match manifest exactly
      - Types must match manifest exactly
      - File path must match `file_path` in manifest

   d. **Write File** to the exact path in manifest

   e. **Run Validations**:
      ```bash
      npm run lint --if-present
      npm run type-check --if-present
      ```

4. **Status Updates** (handled by post-hook):
   - Entity status changes to `IMPLEMENTED`
   - Timestamp recorded

## Code Templates

### Component (Frontend)
```tsx
import React from 'react';

interface [Name]Props {
  // From manifest.props
}

export const [Name]: React.FC<[Name]Props> = (props) => {
  return (
    // Implementation
  );
};
```

### API Endpoint (Backend)
```typescript
import { Request, Response } from 'express';

export async function handler(req: Request, res: Response) {
  // From manifest.request/response schemas
}
```
@ -0,0 +1,67 @@
---
description: Initialize a new guardrailed project with manifest
allowed-tools: Bash, Write, Read
---

# Initialize Guardrailed Project

Initialize a new project called "$ARGUMENTS" with guardrail enforcement and workflow system.

## Generated Files

This command creates:
1. `project_manifest.json` - Entity definitions and dependencies
2. `.workflow/index.yml` - Version tracking index
3. `.workflow/versions/` - Directory for version snapshots

## Steps

### Step 1: Run the initialization script
```bash
python3 skills/guardrail-orchestrator/scripts/init_project.py --name "$ARGUMENTS" --path .
```

### Step 2: Initialize Workflow Directory Structure [MANDATORY]
```bash
# Create workflow directory structure
mkdir -p .workflow/versions

# Create index.yml if it doesn't exist
if [ ! -f .workflow/index.yml ]; then
  cat > .workflow/index.yml << 'EOF'
versions: []
latest_version: null
total_versions: 0
EOF
fi
```

### Step 3: Verify and Display Summary
```bash
# Verify files exist
ls project_manifest.json .workflow/index.yml
```

Display:
```
╔══════════════════════════════════════════════════════════════╗
║ ✅ PROJECT INITIALIZED ║
╠══════════════════════════════════════════════════════════════╣
║ Project: $ARGUMENTS ║
║ Phase: DESIGN_PHASE ║
╠══════════════════════════════════════════════════════════════╣
║ Files Created: ║
║ ✓ project_manifest.json ║
║ ✓ .workflow/index.yml ║
║ ✓ .workflow/versions/ ║
╠══════════════════════════════════════════════════════════════╣
║ Next Steps: ║
║ /guardrail:design Design features in manifest ║
║ /workflow:spawn <feat> Start automated workflow ║
╚══════════════════════════════════════════════════════════════╝
```

## Notes
- Project starts in **DESIGN_PHASE** (manifest edits only)
- Use `/guardrail:design` for manual design workflow
- Use `/workflow:spawn` for automated design + implementation
@ -0,0 +1,32 @@
---
description: Request design review and transition to DESIGN_REVIEW phase
allowed-tools: Read, Write, Bash
---

# Request Design Review

Transition the project from DESIGN_PHASE to DESIGN_REVIEW.

## Steps

1. **Validate Manifest**:
```bash
python3 "$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_manifest.py" --strict
```

2. **If Valid**, transition phase:
```bash
python3 "$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/transition_phase.py" --to DESIGN_REVIEW
```

3. **Show Review Checklist**:
- [ ] All pages have at least one component
- [ ] All components have defined props with types
- [ ] All APIs have request/response schemas
- [ ] All database tables have primary keys
- [ ] No orphan components
- [ ] No circular dependencies

4. **Explain Next Steps**:
- Use `/guardrail:approve` to approve and move to implementation
- Use `/guardrail:reject <feedback>` to send back for fixes
@ -0,0 +1,27 @@
---
description: Show current project phase, entity counts, and pending work
allowed-tools: Read, Bash
---

# Guardrail Status

Display the current guardrail project status.

## Steps

1. Read `project_manifest.json`

2. Display (a counting sketch follows the list):
   - **Current Phase**: `state.current_phase`
   - **Approval Status**: `state.approval_status.manifest_approved`
   - **Entity Counts**:
     - Pages: count by status (DEFINED/APPROVED/IMPLEMENTED)
     - Components: count by status
     - API Endpoints: count by status
     - Database Tables: count by status
   - **Recent History**: last 5 items from `state.revision_history`
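
A minimal sketch of the entity-count tally; `summarize_entities` is an illustrative helper and the output formatting is left to the agent:

```python
# Sketch: tally each entity type in the manifest by status.
import json
from collections import Counter

def summarize_entities(path: str = "project_manifest.json") -> None:
    with open(path) as fh:
        manifest = json.load(fh)
    for kind, entities in manifest.get("entities", {}).items():
        counts = Counter(e.get("status", "UNKNOWN") for e in entities)
        breakdown = ", ".join(f"{status}: {n}" for status, n in sorted(counts.items()))
        print(f"{kind}: {len(entities)} ({breakdown})")
```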

3. Show available actions for current phase:
   - DESIGN_PHASE: Can use `/guardrail:design`, then `/guardrail:review`
   - DESIGN_REVIEW: Can use `/guardrail:approve` or `/guardrail:reject`
   - IMPLEMENTATION_PHASE: Can use `/guardrail:implement`
@ -0,0 +1,29 @@
---
description: Validate manifest integrity and completeness
allowed-tools: Bash, Read
---

# Validate Manifest

Run validation checks on `project_manifest.json`.

## Command

```bash
python3 "$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_manifest.py" $ARGUMENTS
```

## Options

- No arguments: Basic validation
- `--strict`: Treat warnings as errors

## What It Checks

1. **Structure**: Required top-level keys exist
2. **Pages**: Have paths, components, file_paths
3. **Components**: Have props with types, valid dependencies
4. **APIs**: Have methods, paths, request/response schemas
5. **Database**: Tables have primary keys, valid foreign keys
6. **Dependencies**: No orphans, no circular references
7. **Naming**: Follows conventions
@ -0,0 +1,57 @@
---
description: Verify implementation matches manifest specifications
allowed-tools: Bash, Read
---

# Verify Implementation

Run verification to ensure all implemented code matches the manifest specifications.

## Command

```bash
python3 "$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/verify_implementation.py" --project-root "$CLAUDE_PROJECT_DIR" $ARGUMENTS
```

## Options

- No arguments: Basic verification
- `--verbose` or `-v`: Include warnings
- `--json`: Output as JSON for programmatic use

## What It Checks

For each entity in the manifest:

### Components
- File exists at specified `file_path`
- Component name is exported (see the sketch after these lists)
- Props interface matches manifest definition

### Pages
- File exists at specified `file_path`
- Has `export default` (Next.js requirement)
- Uses specified component dependencies

### API Endpoints
- File exists at specified `file_path`
- HTTP method handler exists (GET, POST, PUT, DELETE)
- Request parameters are handled

### Database Tables
- File exists at specified `file_path`
- Column definitions present
- CRUD operations implemented
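
A rough sketch of two of the component checks; the regex only covers common export forms and is an approximation of what verify_implementation.py might do, not its actual logic:

```python
# Sketch: verify the file exists and the component name is exported.
import os
import re

def check_component(file_path: str, name: str) -> list:
    if not os.path.isfile(file_path):
        return [f"File not found: {file_path}"]
    with open(file_path) as fh:
        source = fh.read()
    # Matches e.g. "export const Button", "export function Button",
    # "export default function Button", "export class Button".
    pattern = rf"export\s+(?:default\s+)?(?:const|function|class)\s+{re.escape(name)}\b"
    if not re.search(pattern, source):
        return [f"{name} is not exported from {file_path}"]
    return []
```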

## Example Output

```
✅ [component] comp_button
   File: app/components/Button.tsx

❌ [component] comp_missing
   File: app/components/Missing.tsx
   ❌ ERROR: File not found

SUMMARY: 17/18 passed, 1 failed, 3 warnings
```
@ -0,0 +1,87 @@
---
description: Approve a workflow gate (design or implementation)
allowed-tools: Read, Write, Bash
---

# Approve Workflow Gate

Approve gate: "$ARGUMENTS"

## Valid Gates
- `design` - Approve the design phase (entities + tasks)
- `implementation` - Approve the implementation phase (all code)

## Steps

### 1. Validate Gate
- If "$ARGUMENTS" is not `design` or `implementation`: STOP and show usage

### 2. Check Workflow State
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py exists
```

If no active workflow:
```
❌ No active workflow found.
Start a new workflow with: /workflow:spawn "feature name"
```

### 3. Verify Current Phase

**For design approval**:
- Current phase must be `AWAITING_DESIGN_APPROVAL`
- If not: Report error with current phase

**For implementation approval**:
- Current phase must be `AWAITING_IMPL_APPROVAL`
- If not: Report error with current phase

### 4. Execute Approval

```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py approve $ARGUMENTS
```

### 5. Transition to Next Phase

**If design approved**:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition DESIGN_APPROVED
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING
```

**If implementation approved**:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPL_APPROVED
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition COMPLETING
```

### 6. Report

**Design Approved**:
```
╔══════════════════════════════════════════════════════════════╗
║ ✅ DESIGN APPROVED ║
╠══════════════════════════════════════════════════════════════╣
║ The design has been approved. Implementation will begin. ║
║ ║
║ Next steps: ║
║ /workflow:frontend --next Start frontend tasks ║
║ /workflow:backend --next Start backend tasks ║
║ /workflow:resume Auto-continue workflow ║
╚══════════════════════════════════════════════════════════════╝
```

**Implementation Approved**:
```
╔══════════════════════════════════════════════════════════════╗
║ ✅ IMPLEMENTATION APPROVED ║
╠══════════════════════════════════════════════════════════════╣
║ All implementations have been approved. ║
║ ║
║ Next steps: ║
║ /workflow:complete --all Mark all tasks as done ║
║ /workflow:resume Auto-complete workflow ║
╚══════════════════════════════════════════════════════════════╝
```
@ -0,0 +1,85 @@
---
description: Implement backend tasks (Backend agent)
allowed-tools: Read, Write, Edit, Bash
---

# Backend Agent - Implementation Mode

⚙️ **BACKEND AGENT ACTIVATED**

Implement task: "$ARGUMENTS"

## CRITICAL RULES

You are now the **BACKEND AGENT**.

✅ **ALLOWED**:
- Read any file
- Write new files (API routes, DB)
- Edit existing backend files
- Run Bash (build, lint, type-check, tests)

✅ **ALLOWED FILES**:
- `app/api/**/*`
- `app/lib/**/*`
- `prisma/**/*`
- `db/**/*`

## Workflow

### Step 1: Load Task
First, get the version-specific tasks directory:
```bash
TASKS_DIR=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py tasks-dir)
```

Read the task file: `$TASKS_DIR/$ARGUMENTS.yml`
- If "$ARGUMENTS" is `--next`: find first task with `agent: backend` and `status: pending` (see the sketch below)
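
A minimal sketch of the `--next` lookup, assuming PyYAML is available; `next_task` is an illustrative helper, not a shipped script:

```python
# Sketch: scan the tasks directory for the first task file with
# agent: backend and status: pending.
import glob
import os
from typing import Optional

import yaml

def next_task(tasks_dir: str, agent: str = "backend") -> Optional[str]:
    # Sorted for a deterministic "first" task.
    for path in sorted(glob.glob(os.path.join(tasks_dir, "*.yml"))):
        with open(path) as fh:
            task = yaml.safe_load(fh) or {}
        if task.get("agent") == agent and task.get("status") == "pending":
            return path
    return None
```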

### Step 2: Update Workflow State
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> in_progress
```

### Step 3: Verify Prerequisites
- Check entity is `APPROVED` in `project_manifest.json`
- Check that all tasks listed in `dependencies` are `completed`
- If blocked:
  ```bash
  python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> blocked
  ```
  Stop and report blocker.

### Step 4: Implement
For each file in `file_paths`:
1. Read manifest entity specification
2. Generate code matching spec exactly:
   - HTTP methods must match manifest
   - Request params must match manifest
   - Response types must match manifest
3. Follow existing project patterns

### Step 5: Validate
Run validations:
```bash
npm run lint
npm run build
```

### Step 6: Update Task Status
Update the task file:
```yaml
status: review
completed_at: <current timestamp>
```

Update workflow state:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> review
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py progress --tasks-impl <count>
```

### Step 7: Report
- List implemented files
- Show validation results
- Suggest: `/workflow:review $ARGUMENTS`
@ -0,0 +1,66 @@
---
description: Mark approved task as completed
allowed-tools: Read, Write, Bash
---

# Complete Task

Mark task "$ARGUMENTS" as completed.

## Prerequisites
- Task must have `status: approved`
- All acceptance criteria verified by reviewer

## Steps

### 1. Read Task
First, get the version-specific tasks directory:
```bash
TASKS_DIR=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py tasks-dir)
```

Read `$TASKS_DIR/$ARGUMENTS.yml`

### 2. Verify Status
- If `status` is NOT `approved`: STOP and report error
- If `status` is `approved`: proceed

### 3. Update Task
Update the task file with:
```yaml
status: completed
completed_at: <current timestamp>
```

### 4. Update Workflow State
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> completed
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py progress --tasks-completed <count>
```

### 5. Update Manifest (if applicable)
For each entity in `entity_ids`:
- Update entity status to `IMPLEMENTED` in `project_manifest.json` (see the sketch below)
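
A minimal sketch of that manifest update; `mark_implemented` is an illustrative helper name:

```python
# Sketch: flip each entity listed in the task's entity_ids to IMPLEMENTED.
import json

def mark_implemented(entity_ids, path="project_manifest.json"):
    with open(path) as fh:
        manifest = json.load(fh)
    targets = set(entity_ids)
    for entities in manifest.get("entities", {}).values():
        for entity in entities:
            if entity.get("id") in targets:
                entity["status"] = "IMPLEMENTED"
    with open(path, "w") as fh:
        json.dump(manifest, fh, indent=2)
```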

### 6. Check Workflow Completion
Check if all tasks are now completed:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status
```

If all tasks completed, transition to implementation approval:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition AWAITING_IMPL_APPROVAL
```

### 7. Report
```
✅ Task completed: $ARGUMENTS

Entities implemented:
- <entity_id_1>
- <entity_id_2>

Next: /workflow:status to see remaining tasks
      /workflow:complete --all to complete all approved tasks
```
@ -0,0 +1,476 @@
---
description: Design system architecture with ER diagram, API contracts, and UI structure
allowed-tools: Read, Write, Edit, Bash, Task, TodoWrite
---

# Workflow Design - System Architecture Phase

**Input**: "$ARGUMENTS"

---

## PURPOSE

This command creates a comprehensive **design document** that serves as the source of truth for implementation. It defines:

1. **Data Layer** - ER diagram with models, fields, relations
2. **API Layer** - REST endpoints with request/response contracts
3. **UI Layer** - Pages and components with data requirements
4. **Dependency Graph** - Layered execution order for parallel tasks

---

## ⛔ CRITICAL RULES

### MUST DO
1. **MUST** create `design_document.yml` with ALL layers defined
2. **MUST** run `validate_design.py` to generate dependency graph
3. **MUST** verify no circular dependencies before proceeding
4. **MUST** show layered execution plan to user

### CANNOT DO
1. **CANNOT** create tasks without design document
2. **CANNOT** skip validation step
3. **CANNOT** proceed if validation fails

---

## EXECUTION FLOW

### ═══════════════════════════════════════════════════════════════
### STEP 1: Initialize Design Session
### ═══════════════════════════════════════════════════════════════

#### 1.1: Get Current Version
```bash
# Get active version from workflow
VERSION_ID=$(cat .workflow/current.yml 2>/dev/null | grep "active_version:" | awk '{print $2}')
if [ -z "$VERSION_ID" ]; then
  echo "ERROR: No active workflow. Run /workflow:spawn first"
  exit 1
fi
echo "VERSION_ID=$VERSION_ID"
```

#### 1.2: Create Design Directory
```bash
mkdir -p .workflow/versions/$VERSION_ID/design
mkdir -p .workflow/versions/$VERSION_ID/contexts
mkdir -p .workflow/versions/$VERSION_ID/tasks
```

#### 1.3: Display Design Start Banner
```
╔══════════════════════════════════════════════════════════════╗
║ 📐 SYSTEM DESIGN PHASE ║
╠══════════════════════════════════════════════════════════════╣
║ Feature: $ARGUMENTS ║
║ Version: $VERSION_ID ║
╠══════════════════════════════════════════════════════════════╣
║ You will define: ║
║ Layer 1: Data Models (ER Diagram) ║
║ Layer 2: API Endpoints (REST Contracts) ║
║ Layer 3: Pages & Components (UI Structure) ║
╚══════════════════════════════════════════════════════════════╝
```

---

### ═══════════════════════════════════════════════════════════════
### STEP 2: Analyze Requirements & Design System
### ═══════════════════════════════════════════════════════════════

**Use Task tool with system-architect agent:**

```
Use Task tool with:
subagent_type: "system-architect"
prompt: |
  # SYSTEM ARCHITECT - Design Document Creation

  ## INPUT
  Feature: "$ARGUMENTS"
  Version: $VERSION_ID
  Output: .workflow/versions/$VERSION_ID/design/design_document.yml

  ## YOUR MISSION
  Create a comprehensive design document following the schema in:
  skills/guardrail-orchestrator/schemas/design_document.yml

  ## ANALYSIS PROCESS

  ### Phase A: Understand Requirements
  1. Read the feature description carefully
  2. Identify the core user flows
  3. Determine what data needs to be stored
  4. Identify what APIs are needed
  5. Plan the UI structure

  ### Phase B: Design Data Layer (ER Diagram)
  For each entity needed:
  - Define fields with types and constraints
  - Define relations to other entities
  - Define validations
  - Consider indexes for performance

  ### Phase C: Design API Layer
  For each endpoint needed:
  - Define HTTP method and path
  - Define request body schema (for POST/PUT/PATCH)
  - Define response schemas for all status codes
  - Define authentication requirements
  - Link to data models used

  ### Phase D: Design UI Layer
  For each page needed:
  - Define route path
  - List data requirements (which APIs)
  - List components used
  - Define auth requirements

  For each component needed:
  - Define props interface
  - Define events emitted
  - List child components
  - List APIs called directly (if any)

  ## OUTPUT FORMAT

  Create `.workflow/versions/$VERSION_ID/design/design_document.yml`:

  ```yaml
  # Design Document
  workflow_version: "$VERSION_ID"
  feature: "$ARGUMENTS"
  created_at: <timestamp>
  status: draft
  revision: 1

  # LAYER 1: DATA MODELS
  data_models:
    - id: model_<name>
      name: <PascalCase>
      description: "<what this model represents>"
      table_name: <snake_case>
      fields:
        - name: id
          type: uuid
          constraints: [primary_key]
        - name: <field_name>
          type: <string|integer|boolean|datetime|uuid|json|text|float|enum>
          constraints: [<unique|not_null|indexed|default>]
          # If enum:
          enum_values: [<value1>, <value2>]
      relations:
        - type: <has_one|has_many|belongs_to|many_to_many>
          target: model_<other>
          foreign_key: <fk_field>
          on_delete: <cascade|set_null|restrict>
      timestamps: true
      validations:
        - field: <field_name>
          rule: "<validation_rule>"
          message: "<error message>"

  # LAYER 2: API ENDPOINTS
  api_endpoints:
    - id: api_<verb>_<resource>
      method: <GET|POST|PUT|PATCH|DELETE>
      path: /api/<path>
      summary: "<short description>"
      description: "<detailed description>"
      # For POST/PUT/PATCH:
      request_body:
        content_type: application/json
        schema:
          type: object
          properties:
            - name: <field>
              type: <type>
              required: <true|false>
              validations: [<rules>]
      responses:
        - status: 200
          description: "Success"
          schema:
            type: object
            properties:
              - name: <field>
                type: <type>
        - status: 400
          description: "Validation error"
          schema:
            type: object
            properties:
              - name: error
                type: string
      depends_on_models: [model_<name>]
      auth:
        required: <true|false>
        roles: [<role1>, <role2>]

  # LAYER 3: PAGES
  pages:
    - id: page_<name>
      name: "<Human Name>"
      path: /<route>
      layout: <layout_component>
      data_needs:
        - api_id: api_<name>
          purpose: "<why needed>"
          on_load: <true|false>
      components: [component_<name1>, component_<name2>]
      auth:
        required: <true|false>
        roles: []
        redirect: /login

  # LAYER 3: COMPONENTS
  components:
    - id: component_<name>
      name: <PascalCaseName>
      props:
        - name: <propName>
          type: <TypeScript type>
          required: <true|false>
          description: "<what this prop does>"
      events:
        - name: <onEventName>
          payload: "<payload type>"
          description: "<when this fires>"
      uses_apis: []
      uses_components: [component_<child>]
      variants: [<variant1>, <variant2>]
  ```

  ## DESIGN PRINCIPLES

  1. **Start with Data**: What data is needed? Design models first.
  2. **APIs Serve UI**: What operations does UI need? Design APIs next.
  3. **UI Consumes APIs**: Pages/Components use APIs. Design UI last.
  4. **Explicit Dependencies**: Every relation must be clearly defined.
  5. **Contracts First**: API request/response schemas are contracts.

  ## VERIFICATION

  After creating the design document, verify:
  1. Every API references existing models
  2. Every page references existing APIs and components
  3. Every component references existing child components
  4. No circular dependencies

  ## OUTPUT

  After creating the file, output:
  ```
  === DESIGN DOCUMENT CREATED ===

  Data Models: X
  API Endpoints: X
  Pages: X
  Components: X

  File: .workflow/versions/$VERSION_ID/design/design_document.yml
  ```
```

---

### ═══════════════════════════════════════════════════════════════
### STEP 3: Validate Design & Generate Dependency Graph
### ═══════════════════════════════════════════════════════════════

#### 3.1: Run Design Validation [MANDATORY]
```bash
python3 skills/guardrail-orchestrator/scripts/validate_design.py \
  .workflow/versions/$VERSION_ID/design/design_document.yml \
  --output-dir .workflow/versions/$VERSION_ID
```

**This generates:**
- `.workflow/versions/$VERSION_ID/dependency_graph.yml` - Layered execution order (a layering sketch follows this list)
- `.workflow/versions/$VERSION_ID/contexts/*.yml` - Per-entity context snapshots
- `.workflow/versions/$VERSION_ID/tasks/*.yml` - Tasks with full context
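
For intuition, here is how a layered order like the one in `dependency_graph.yml` can be derived: repeatedly peel off entities whose dependencies are already placed (Kahn-style). This is a sketch only — the `deps` mapping `{entity_id: [dependency_ids]}` is an illustrative input shape, not the script's actual file format:

```python
# Sketch: group entities into parallel-executable layers.
def layer_entities(deps: dict) -> list:
    layers, placed = [], set()
    remaining = dict(deps)
    while remaining:
        # An entity is ready once all of its dependencies are placed.
        ready = [e for e, d in remaining.items() if set(d) <= placed]
        if not ready:
            raise ValueError(f"Circular dependency among: {sorted(remaining)}")
        layers.append(sorted(ready))
        placed.update(ready)
        for e in ready:
            remaining.pop(e)
    return layers

# Example mirroring the plan shown in STEP 4:
print(layer_entities({
    "model_user": [], "model_post": [],
    "api_list_users": ["model_user"],
    "page_users": ["api_list_users"],
}))
# -> [['model_post', 'model_user'], ['api_list_users'], ['page_users']]
```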

#### 3.2: Check Validation Result
```bash
VALIDATION_EXIT=$?
if [ $VALIDATION_EXIT -ne 0 ]; then
  echo "❌ Design validation failed. Fix errors and re-run."
  exit 1
fi
```

**BLOCK IF**: Validation fails → Display errors, do not proceed

---

### ═══════════════════════════════════════════════════════════════
### STEP 4: Display Layered Execution Plan
### ═══════════════════════════════════════════════════════════════

Read the generated dependency graph and display:

```
╔══════════════════════════════════════════════════════════════╗
║ 📊 DEPENDENCY GRAPH - EXECUTION LAYERS ║
╠══════════════════════════════════════════════════════════════╣
║ ║
║ Layer 1: DATA MODELS (Parallel) ║
║ ───────────────────────────────────────────── ║
║ 📦 model_user → backend ║
║ 📦 model_post → backend ║
║ ║
║ Layer 2: API ENDPOINTS (Parallel, after Layer 1) ║
║ ───────────────────────────────────────────── ║
║ 🔌 api_create_user → backend (needs: model_user) ║
║ 🔌 api_list_users → backend (needs: model_user) ║
║ 🔌 api_create_post → backend (needs: model_user, model_post)║
║ ║
║ Layer 3: UI (Parallel, after Layer 2) ║
║ ───────────────────────────────────────────── ║
║ 🧩 component_user_card → frontend ║
║ 📄 page_users → frontend (needs: api_list_users) ║
║ ║
╠══════════════════════════════════════════════════════════════╣
║ EXECUTION SUMMARY ║
║ Total entities: X ║
║ Total layers: X ║
║ Max parallelism: X (tasks can run simultaneously) ║
║ Critical path: X layers deep ║
╚══════════════════════════════════════════════════════════════╝
```

---

### ═══════════════════════════════════════════════════════════════
### STEP 5: Display Design Summary for Approval
### ═══════════════════════════════════════════════════════════════

```
╔══════════════════════════════════════════════════════════════╗
║ 🛑 DESIGN APPROVAL REQUIRED ║
╠══════════════════════════════════════════════════════════════╣
║ Feature: $ARGUMENTS ║
║ Version: $VERSION_ID ║
╠══════════════════════════════════════════════════════════════╣
║ DESIGN DOCUMENT ║
║ 📦 Data Models: X ║
║ 🔌 API Endpoints: X ║
║ 📄 Pages: X ║
║ 🧩 Components: X ║
╠══════════════════════════════════════════════════════════════╣
║ GENERATED ARTIFACTS ║
║ ✅ Dependency graph calculated ║
║ ✅ Context snapshots created (X files) ║
║ ✅ Implementation tasks created (X tasks) ║
╠══════════════════════════════════════════════════════════════╣
║ EXECUTION PLAN ║
║ Layer 1: X tasks (parallel) → backend ║
║ Layer 2: X tasks (parallel) → backend ║
║ Layer 3: X tasks (parallel) → frontend ║
╠══════════════════════════════════════════════════════════════╣
║ FILES CREATED ║
║ .workflow/versions/$VERSION_ID/design/design_document.yml ║
║ .workflow/versions/$VERSION_ID/dependency_graph.yml ║
║ .workflow/versions/$VERSION_ID/contexts/*.yml ║
║ .workflow/versions/$VERSION_ID/tasks/*.yml ║
╠══════════════════════════════════════════════════════════════╣
║ NEXT STEPS ║
║ Review the design above, then: ║
║ /workflow:approve - Proceed to implementation ║
║ /workflow:reject - Request design changes ║
╚══════════════════════════════════════════════════════════════╝
```

---

### ═══════════════════════════════════════════════════════════════
### STEP 6: Transition Workflow State
### ═══════════════════════════════════════════════════════════════

```bash
# Update progress
TASK_COUNT=$(ls .workflow/versions/$VERSION_ID/tasks/*.yml 2>/dev/null | wc -l)
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py progress \
  --tasks-created $TASK_COUNT

# Transition to awaiting approval
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition AWAITING_DESIGN_APPROVAL
```

---

## CONTEXT SNAPSHOT EXAMPLE

Each task gets a context file like `.workflow/versions/$VERSION_ID/contexts/api_create_user.yml`:

```yaml
task_id: task_create_api_create_user
entity_id: api_create_user
workflow_version: v001

target:
  type: api
  definition:
    method: POST
    path: /api/users
    request_body:
      properties:
        - name: email
          type: string
          required: true
          validations: [email]
        - name: password
          type: string
          required: true
          validations: [min:8]
    responses:
      - status: 201
        schema: { id, email, name, created_at }
      - status: 400
        schema: { error, details }
      - status: 409
        schema: { error }

related:
  models:
    - id: model_user
      definition:
        name: User
        fields:
          - { name: id, type: uuid }
          - { name: email, type: string }
          - { name: password_hash, type: string }

dependencies:
  entity_ids: [model_user]

files:
  to_create:
    - app/api/users/route.ts
  reference:
    - path: app/api/health/route.ts
      purpose: "API route pattern"

acceptance:
  - criterion: "POST /api/users returns 201 on success"
    verification: "curl -X POST /api/users with valid data"
  - criterion: "Returns 409 if email exists"
    verification: "Test with duplicate email"
```

---

## USAGE

```bash
# After /workflow:spawn, run design:
/workflow:design

# This will:
# 1. Create comprehensive design document
# 2. Validate and generate dependency graph
# 3. Create tasks with full context
# 4. Wait for approval before implementation
```
@@ -0,0 +1,193 @@
---
description: Compare workflow versions and show manifest changes
allowed-tools: Read, Bash
---

# Workflow Version Diff

Compare workflow versions to see what changed in the project manifest.

## EXECUTION PROTOCOL

### Step 1: Parse Arguments

```
IF "$ARGUMENTS" = "":
    MODE = "current" (diff latest version with current)
ELSE IF "$ARGUMENTS" matches "v\d+ v\d+":
    MODE = "versions" (diff two specific versions)
ELSE IF "$ARGUMENTS" matches "v\d+":
    MODE = "single" (diff specific version with current)
ELSE IF "$ARGUMENTS" = "--changelog" or "--log":
    MODE = "changelog" (show all version changelogs)

IF "$ARGUMENTS" contains "--json":
    OUTPUT = "json"
```
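
As a rough bash illustration of the mode selection above (a minimal sketch; the variable names and version patterns are assumptions, not the command's actual implementation):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the mode/output selection described above.
ARGS="$*"
MODE="current"
OUTPUT="text"

case "$ARGS" in
  --changelog*|--log*)                  MODE="changelog" ;;
  v[0-9][0-9][0-9]\ v[0-9][0-9][0-9]*)  MODE="versions" ;;
  v[0-9][0-9][0-9]*)                    MODE="single" ;;
esac

# --json can combine with any mode for programmatic output
[[ "$ARGS" == *--json* ]] && OUTPUT="json"

echo "MODE=$MODE OUTPUT=$OUTPUT"
```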

### Step 2: Get Available Versions

```bash
python3 skills/guardrail-orchestrator/scripts/manifest_diff.py versions
```

### Step 3: Execute Diff Based on Mode

**MODE: current (default)**
```bash
# Get latest version
LATEST=$(ls -1 .workflow/versions/ 2>/dev/null | tail -1)

# Diff with current manifest
python3 skills/guardrail-orchestrator/scripts/manifest_diff.py diff $LATEST current
```

**MODE: versions (e.g., "v001 v002")**
```bash
# Diff two specific versions
python3 skills/guardrail-orchestrator/scripts/manifest_diff.py diff v001 v002
```

**MODE: single (e.g., "v001")**
```bash
# Diff specific version with current
python3 skills/guardrail-orchestrator/scripts/manifest_diff.py diff v001
```

**MODE: changelog**
```bash
# Show all changelogs
python3 skills/guardrail-orchestrator/scripts/manifest_diff.py changelog
```

**JSON output**
```bash
# Add --json for programmatic use
python3 skills/guardrail-orchestrator/scripts/manifest_diff.py diff v001 --json
```

### Step 4: Display Results

The script outputs:

```
╔══════════════════════════════════════════════════════════════════════╗
║ MANIFEST DIFF: v001 → v002 ║
╠══════════════════════════════════════════════════════════════════════╣
║ SUMMARY ║
║ + Added: 3 ║
║ ~ Modified: 2 ║
║ - Removed: 1 ║
║ = Unchanged: 5 ║
╠══════════════════════════════════════════════════════════════════════╣
║ BY TYPE ║
║ pages: +1 ║
║ components: +2 ~1 ║
║ api_endpoints: ~1 -1 ║
╠══════════════════════════════════════════════════════════════════════╣
║ ➕ ADDED ║
║ + 📄 Profile (app/profile/page.tsx) ║
║ + 🧩 Button (app/components/Button.tsx) ║
║ + 🧩 Modal (app/components/Modal.tsx) ║
╠══════════════════════════════════════════════════════════════════════╣
║ 📝 MODIFIED ║
║ ~ 🧩 Header (app/components/Header.tsx) ║
║ dependencies: [] → ['Button'] ║
║ ~ 🔌 users (app/api/users/route.ts) ║
║ status: PENDING → IMPLEMENTED ║
╠══════════════════════════════════════════════════════════════════════╣
║ ➖ REMOVED ║
║ - 🔌 legacy (app/api/legacy/route.ts) ║
╚══════════════════════════════════════════════════════════════════════╝
```

---

## USAGE EXAMPLES

```bash
# Compare latest version with current manifest
/workflow:diff

# Compare two specific versions
/workflow:diff v001 v002

# Compare specific version with current
/workflow:diff v003

# Show all version changelogs
/workflow:diff --changelog

# Output as JSON
/workflow:diff v001 --json
```

---

## WHAT IT SHOWS

### Entity Changes
- **Added**: New pages, components, API endpoints, etc.
- **Modified**: Status changes, dependency updates, path changes
- **Removed**: Deleted entities from manifest

### Entity Type Icons
- 📄 page
- 🧩 component
- 🔌 api_endpoint
- 📚 lib
- 🪝 hook
- 📝 type
- ⚙️ config

### Change Details
- Entity name and file path
- Specific field changes with before/after values
- Summary statistics by type

---

## CHANGELOG MODE

Show version history with changes:

```bash
/workflow:diff --changelog
```

Output:
```
╔══════════════════════════════════════════════════════════════════════╗
║ CHANGELOG: v001 ║
╠══════════════════════════════════════════════════════════════════════╣
║ Feature: User authentication ║
║ Status: completed ║
║ Started: 2025-01-15 10:30:00 ║
║ Completed: 2025-01-15 14:45:00 ║
╠══════════════════════════════════════════════════════════════════════╣
║ CHANGES ║
║ + Added page: Login ║
║ + Added page: Register ║
║ + Added component: AuthForm ║
║ + Added api_endpoint: auth ║
╚══════════════════════════════════════════════════════════════════════╝
```

---

## INTEGRATION

### Uses Version Snapshots

The diff tool uses snapshots created by version_manager.py:
- `snapshot_before/manifest.json` - Manifest at version start
- `snapshot_after/manifest.json` - Manifest at version completion

These are automatically created when:
- `/workflow:spawn` initializes a new version
- `/workflow:complete` marks a version as done
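
These snapshots can also be inspected by hand. A sketch (the combined paths assume the snapshots live under each version directory as described above):

```bash
# Compare a version's before/after manifest snapshots directly (hypothetical paths).
VERSION=v001
diff <(python3 -m json.tool ".workflow/versions/$VERSION/snapshot_before/manifest.json") \
     <(python3 -m json.tool ".workflow/versions/$VERSION/snapshot_after/manifest.json")
```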

### Related Commands

- `/workflow:history` - List all workflow versions
- `/workflow:status` - Show current workflow state
- `/workflow:changelog <version>` - Alias for `--changelog`

@@ -0,0 +1,85 @@
---
description: Implement frontend tasks (Frontend agent)
allowed-tools: Read, Write, Edit, Bash
---

# Frontend Agent - Implementation Mode

🎨 **FRONTEND AGENT ACTIVATED**

Implement task: "$ARGUMENTS"

## CRITICAL RULES

You are now the **FRONTEND AGENT**.

✅ **ALLOWED**:
- Read any file
- Write new files (components, pages)
- Edit existing UI files
- Run Bash (build, lint, type-check)

✅ **ALLOWED FILES**:
- `app/components/**/*`
- `app/**/page.tsx`
- `app/**/layout.tsx`
- `app/globals.css`

## Workflow

### Step 1: Load Task
First, get the version-specific tasks directory:
```bash
TASKS_DIR=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py tasks-dir)
```

Read the task file: `$TASKS_DIR/$ARGUMENTS.yml`
- If "$ARGUMENTS" is `--next`: find the first task with `agent: frontend` and `status: pending` (see the sketch below)
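
One way to locate that next pending frontend task is a simple scan over the task files, mirroring the reviewer command's `--next` lookup later in this document (a sketch; flat `agent:` and `status:` fields in each task YAML are assumed):

```bash
# Find the first pending task assigned to the frontend agent.
TASK_FILE=""
for f in "$TASKS_DIR"/*.yml; do
  [ -e "$f" ] || continue
  if grep -q "agent: frontend" "$f" && grep -q "status: pending" "$f"; then
    TASK_FILE="$f"
    break
  fi
done
echo "Next frontend task: ${TASK_FILE:-none}"
```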

### Step 2: Update Workflow State
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> in_progress
```

### Step 3: Verify Prerequisites
- Check entity is `APPROVED` in `project_manifest.json` (see the sketch after this step)
- Check all `dependencies` tasks are `completed`
- If blocked:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> blocked
```
Stop and report blocker.
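
A quick way to perform the manifest check with python3 (a sketch only; the manifest schema assumed here — an `entities` map keyed by id with a `status` field — is an assumption, not a documented contract):

```bash
# Check whether an entity is APPROVED in the manifest (hypothetical schema).
ENTITY_ID="api_create_user"
python3 - "$ENTITY_ID" <<'PY'
import json, sys
manifest = json.load(open("project_manifest.json"))
entity = manifest.get("entities", {}).get(sys.argv[1], {})
print("APPROVED" if entity.get("status") == "APPROVED" else "NOT APPROVED")
PY
```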

### Step 4: Implement
For each file in `file_paths`:
1. Read manifest entity specification
2. Generate code matching spec exactly:
   - Props must match manifest
   - Types must match manifest
   - File path must match manifest
3. Follow existing project patterns

### Step 5: Validate
Run validations:
```bash
npm run lint
npm run build
```

### Step 6: Update Task Status
Update the task file:
```yaml
status: review
completed_at: <current timestamp>
```

Update workflow state:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> review
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py progress --tasks-impl <count>
```

### Step 7: Report
- List implemented files
- Show validation results
- Suggest: `/workflow:review $ARGUMENTS`

@@ -0,0 +1,94 @@
---
description: Show workflow version history
allowed-tools: Read, Bash
---

# Workflow History

Display version history of all workflow sessions.

## Usage
```
/workflow:history              # List all versions
/workflow:history v001         # Show details for specific version
/workflow:history --changelog  # Show changelog for current version
```

## Steps

### 1. List All Versions (default)
```bash
python3 skills/guardrail-orchestrator/scripts/version_manager.py history
```

### 2. Show Version Details
If "$ARGUMENTS" is a version (e.g., `v001`):
```bash
python3 skills/guardrail-orchestrator/scripts/version_manager.py changelog $ARGUMENTS
```

### 3. Display Format

**Version List**:
```
╔══════════════════════════════════════════════════════════════════════╗
║ WORKFLOW VERSION HISTORY ║
╠══════════════════════════════════════════════════════════════════════╣
║ ✅ v003: Dashboard with analytics ║
║ Started: 2025-01-16T16:00:00 | Tasks: 12 | Ops: 45 ║
║ ────────────────────────────────────────────────────────────────── ║
║ ✅ v002: Task filters and search ║
║ Started: 2025-01-16T14:00:00 | Tasks: 8 | Ops: 28 ║
║ ────────────────────────────────────────────────────────────────── ║
║ ✅ v001: User authentication ║
║ Started: 2025-01-16T10:00:00 | Tasks: 5 | Ops: 18 ║
╚══════════════════════════════════════════════════════════════════════╝
```

**Version Changelog**:
```
╔══════════════════════════════════════════════════════════════════════╗
║ CHANGELOG: v001 ║
╠══════════════════════════════════════════════════════════════════════╣
║ Feature: User authentication ║
║ Status: completed ║
╠══════════════════════════════════════════════════════════════════════╣
║ CREATED ║
║ + [page] page_login ║
║ app/login/page.tsx ║
║ + [component] component_LoginForm ║
║ app/components/LoginForm.tsx ║
║ + [api] api_auth ║
║ app/api/auth/route.ts ║
║ UPDATED ║
║ ~ [component] component_Header ║
║ DELETED ║
║ (none) ║
╠══════════════════════════════════════════════════════════════════════╣
║ SUMMARY ║
║ Entities: +3 ~1 -0 ║
║ Files: +4 ~2 -0 ║
╚══════════════════════════════════════════════════════════════════════╝
```

### 4. Show Task Sessions
If `$ARGUMENTS` includes `--tasks`:
List all task sessions for the version with their operations:

```
Task Sessions for v001:
─────────────────────────────────────────────────
🎨 task_create_LoginPage (frontend)
   Status: completed | Duration: 5m 32s
   Operations:
     + CREATE file: app/login/page.tsx
     ~ UPDATE manifest: project_manifest.json
   Review: ✅ approved

⚙️ task_create_AuthAPI (backend)
   Status: completed | Duration: 8m 15s
   Operations:
     + CREATE file: app/api/auth/route.ts
     + CREATE file: app/lib/auth.ts
   Review: ✅ approved
```

@@ -0,0 +1,113 @@
---
description: Reject a workflow gate and request changes
allowed-tools: Read, Write, Bash
---

# Reject Workflow Gate

Reject gate with reason: "$ARGUMENTS"

## Usage
```
/workflow:reject design "Need more API endpoints for authentication"
/workflow:reject implementation "Login form missing validation"
```

## Steps

### 1. Parse Arguments
Extract (a parsing sketch follows this step):
- `gate`: First word (design | implementation)
- `reason`: Remaining text in quotes

If invalid format:
```
❌ Usage: /workflow:reject <gate> "reason"

Examples:
  /workflow:reject design "Need user profile page"
  /workflow:reject implementation "Missing error handling"
```
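
A minimal bash sketch of the gate/reason split (illustrative; it assumes `$ARGUMENTS` arrives as a single string):

```bash
# Split "$ARGUMENTS" into gate and quoted reason (hypothetical parsing).
GATE="${ARGUMENTS%% *}"     # first word
REASON="${ARGUMENTS#* }"    # remainder of the string
REASON="${REASON%\"}"       # strip surrounding quotes, if present
REASON="${REASON#\"}"

case "$GATE" in
  design|implementation) ;;
  *) echo '❌ Usage: /workflow:reject <gate> "reason"'; exit 1 ;;
esac
```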

### 2. Check Workflow State
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py exists
```

If no active workflow:
```
❌ No active workflow found.
```

### 3. Verify Current Phase

**For design rejection**:
- Current phase must be `AWAITING_DESIGN_APPROVAL`

**For implementation rejection**:
- Current phase must be `AWAITING_IMPL_APPROVAL`

### 4. Execute Rejection

```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py reject <gate> "<reason>"
```

### 5. Transition to Revision Phase

**If design rejected**:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition DESIGN_REJECTED
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition DESIGNING
```

**If implementation rejected**:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPL_REJECTED
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING
```

### 6. Report

**Design Rejected**:
```
╔══════════════════════════════════════════════════════════════╗
║ ❌ DESIGN REJECTED ║
╠══════════════════════════════════════════════════════════════╣
║ Reason: <rejection_reason> ║
║ ║
║ The workflow has returned to the DESIGNING phase. ║
║ Revision count: X ║
║ ║
║ Next steps: ║
║ /workflow:design --revise Revise the design ║
║ /workflow:resume Auto-revise and continue ║
╚══════════════════════════════════════════════════════════════╝
```

**Implementation Rejected**:
```
╔══════════════════════════════════════════════════════════════╗
║ ❌ IMPLEMENTATION REJECTED ║
╠══════════════════════════════════════════════════════════════╣
║ Reason: <rejection_reason> ║
║ ║
║ The workflow has returned to the IMPLEMENTING phase. ║
║ Revision count: X ║
║ ║
║ Tasks requiring fixes will be marked as 'pending'. ║
║ ║
║ Next steps: ║
║ /workflow:frontend --next Fix frontend tasks ║
║ /workflow:backend --next Fix backend tasks ║
║ /workflow:resume Auto-fix and continue ║
╚══════════════════════════════════════════════════════════════╝
```

### 7. Update Related Tasks (Implementation Rejection)

If implementation was rejected, identify tasks related to the rejection reason and mark them as pending:

```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <affected_task_id> pending
```

@@ -0,0 +1,159 @@
---
description: Resume an interrupted workflow from saved state
allowed-tools: Read, Write, Edit, Bash, AskUserQuestion, TodoWrite
---

# Workflow Orchestrator - Resume

Resume a previously interrupted or paused workflow.

## EXECUTION PROTOCOL

### Step 1: Load Workflow State

Read `.workflow/current.yml` (an existence check is sketched below):
- If not found: Report "No workflow to resume" and exit
- If found: Load state and continue
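
For instance, a trivial sketch:

```bash
# Bail out early when there is no saved workflow state.
if [ ! -f .workflow/current.yml ]; then
  echo "No workflow to resume"
  exit 0
fi
```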

### Step 2: Display Resume Summary

```
╔══════════════════════════════════════════════════════════════╗
║ 🔄 RESUMING WORKFLOW ║
╠══════════════════════════════════════════════════════════════╣
║ Workflow ID: <id> ║
║ Feature: <feature> ║
║ Started: <started_at> ║
║ Last Updated: <updated_at> ║
╠══════════════════════════════════════════════════════════════╣
║ CURRENT STATE ║
║ Phase: <current_phase> ║
║ Resume Point: <resume_point.action> ║
╠══════════════════════════════════════════════════════════════╣
║ PROGRESS ║
║ Entities Designed: <progress.entities_designed> ║
║ Tasks Created: <progress.tasks_created> ║
║ Tasks Implemented: <progress.tasks_implemented> ║
║ Tasks Reviewed: <progress.tasks_reviewed> ║
║ Tasks Completed: <progress.tasks_completed> ║
╠══════════════════════════════════════════════════════════════╣
║ LAST ERROR (if any): <last_error> ║
╚══════════════════════════════════════════════════════════════╝
```

### Step 3: Confirm Resume

**Ask user**:
- Option 1: "Continue - Resume from current point"
- Option 2: "Restart Phase - Redo current phase from beginning"
- Option 3: "Abort - Cancel workflow entirely"

### Step 4: Resume Based on Phase

**INITIALIZING**:
→ Continue to DESIGNING phase

**DESIGNING**:
→ Continue architect work
→ Resume creating entities/tasks

**AWAITING_DESIGN_APPROVAL**:
→ Present design summary again
→ Ask for approval

**DESIGN_APPROVED**:
→ Continue to IMPLEMENTING phase

**DESIGN_REJECTED**:
→ Show rejection reason
→ Return to DESIGNING with feedback

**IMPLEMENTING**:
→ Find incomplete tasks
→ Continue implementation from next pending task

**REVIEWING**:
→ Find tasks awaiting review
→ Continue review process

**SECURITY_REVIEW**:
→ Continue security scanning
→ Run: `python3 skills/guardrail-orchestrator/scripts/security_scan.py --project-dir . --severity HIGH`
→ Run: `python3 skills/guardrail-orchestrator/scripts/validate_api_contract.py --project-dir .`
→ If passed: Transition to AWAITING_IMPL_APPROVAL
→ If critical issues: Return to IMPLEMENTING with security feedback

**AWAITING_IMPL_APPROVAL**:
→ Present implementation summary again
→ Ask for approval

**IMPL_APPROVED**:
→ Continue to COMPLETING phase

**IMPL_REJECTED**:
→ Show rejection reason
→ Return to IMPLEMENTING with feedback

**COMPLETING**:
→ Continue marking tasks complete

**PAUSED**:
→ Resume from `resume_point.phase`

**FAILED**:
→ Show error details
→ Ask user how to proceed:
  - Retry failed operation
  - Skip and continue
  - Abort workflow

### Step 5: Continue Workflow

Execute remaining phases following `/workflow:spawn` protocol.

## TASK-LEVEL RESUME

If resuming during IMPLEMENTING phase:

1. **Identify incomplete tasks** (a bash sketch follows this list):
```yaml
# Resume from first task not in 'completed' or 'approved'
resume_task: tasks.pending[0] || tasks.in_progress[0] || tasks.review[0]
```

2. **Skip completed work**:
   - Don't recreate files that exist and are valid
   - Don't re-run validations that passed

3. **Continue from failure point**:
   - If task failed mid-implementation, restart that task
   - If validation failed, show error and retry
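
The task selection from step 1, translated into a shell sketch (illustrative; it assumes flat `status:` fields in the task YAML and that `TASKS_DIR` points at the active version's tasks directory, per `version_manager.py tasks-dir`):

```bash
# Pick the resume task: first pending, else in_progress, else review.
for state in pending in_progress review; do
  RESUME_TASK=$(grep -l "status: $state" "$TASKS_DIR"/*.yml 2>/dev/null | head -1)
  [ -n "$RESUME_TASK" ] && break
done
echo "Resuming from: ${RESUME_TASK:-nothing to resume}"
```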

## STATE RECOVERY

If `.workflow/current.yml` is corrupted:

1. **Check for backup**: `.workflow/current.yml.bak` (restore sketch below)
2. **Attempt recovery from manifest**:
   - Read `project_manifest.json` for entity status
   - Scan version-specific tasks directory for task status
   - Reconstruct workflow state
3. **If unrecoverable**:
   - Report error
   - Suggest starting fresh with `/workflow:spawn`
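
The backup restore in step 1 is straightforward (a sketch; it assumes the `.bak` file is a usable copy of the state file, which this document implies but does not specify):

```bash
# Restore workflow state from backup when the primary file is corrupted.
if [ -f .workflow/current.yml.bak ]; then
  cp .workflow/current.yml.bak .workflow/current.yml
  echo "Restored workflow state from backup"
fi
```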

## ABORT WORKFLOW

If user chooses to abort:

1. **Confirm abort**:
   "This will cancel the workflow. Files already created will remain. Continue?"

2. **If confirmed**:
   - Archive state to `.workflow/history/<id>_aborted.yml`
   - Clear `.workflow/current.yml`
   - Report: "Workflow aborted. Created files remain in place."

3. **Cleanup options**:
   - Offer to roll back created files (if git available)
   - Offer to keep partial implementation

@@ -0,0 +1,526 @@
---
description: Review implementation (Reviewer agent)
allowed-tools: Read, Bash
---

# Reviewer Agent - Review Mode

**Input**: "$ARGUMENTS"

---

## CRITICAL ENFORCEMENT RULES

**YOU ARE IN READ-ONLY MODE. VIOLATIONS WILL BE BLOCKED.**

### MUST DO (Non-Negotiable)
1. **MUST** run all validation checks (build, typecheck, lint, test, API contract)
2. **MUST** verify every file in task's `file_paths` exists
3. **MUST** read and analyze each implemented file
4. **MUST** check against acceptance_criteria in task file
5. **MUST** output structured review report (format below)
6. **MUST** run workflow_manager.py to update task status

### CANNOT DO (Strictly Forbidden)
1. **CANNOT** create files (Write tool blocked)
2. **CANNOT** modify files (Edit tool blocked)
3. **CANNOT** fix issues yourself - only report them
4. **CANNOT** approve tasks with missing files
5. **CANNOT** approve if ANY validation check fails
6. **CANNOT** skip any validation check

### ALLOWED ACTIONS
- Read any file
- Run Bash commands (build, lint, test, typecheck, ls, cat, grep)
- Output review reports
- Update task status via workflow_manager.py

---

## VALIDATION CHECKS MATRIX

| Check | Command | Blocks Approval | When |
|-------|---------|-----------------|------|
| Build | `npm run build` | YES | Always |
| TypeScript | `npx tsc --noEmit` | YES | Always |
| Async/Await | `python3 verify_async.py` | YES | Always |
| Lint | `npm run lint` | YES (if --strict) | --strict mode |
| Unit Tests | `npm test -- --passWithNoTests` | YES (if --strict) | --strict mode |
| API Contract | `python3 validate_api_contract.py` | YES | Always |
| Security Scan | `python3 security_scan.py` | YES (critical) | Always |
| Files Exist | `ls -la` each file | YES | Always |

**Note:** For a comprehensive security audit, use `/workflow:security --full`

---

## ARGUMENT PARSING

```
IF "$ARGUMENTS" contains "--auto":
    MODE = AUTO (batch review all tasks)
    STRICT = "$ARGUMENTS" contains "--strict"
    FULL = "$ARGUMENTS" contains "--full"
ELSE IF "$ARGUMENTS" = "--next":
    MODE = SINGLE (next pending task)
ELSE:
    MODE = SINGLE (specific task: "$ARGUMENTS")
```
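
As a bash sketch (illustrative only; the actual command's flag handling may differ):

```bash
# Hypothetical sketch of the review-mode selection above.
ARGS="$*"
if [[ "$ARGS" == *--auto* ]]; then
  MODE=AUTO
  [[ "$ARGS" == *--strict* ]] && STRICT=1
  [[ "$ARGS" == *--full* ]]   && FULL=1
elif [[ "$ARGS" == "--next" ]]; then
  MODE=SINGLE_NEXT
else
  MODE=SINGLE
  TASK_ID="$ARGS"
fi
echo "MODE=$MODE"
```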

---

## MODE: AUTO REVIEW (--auto)

### Step A1: Get Active Version [MANDATORY]
```bash
VERSION_ID=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py current)
TASKS_DIR=".workflow/versions/$VERSION_ID/tasks"
```

### Step A2: Run Global Validations [MANDATORY]

#### 2.1 Build Check
```bash
npm run build 2>&1
BUILD_EXIT=$?
echo "BUILD_EXIT=$BUILD_EXIT"
```

#### 2.2 TypeScript Strict Check
```bash
npx tsc --noEmit 2>&1
TS_EXIT=$?
echo "TS_EXIT=$TS_EXIT"
```

#### 2.3 Async/Await Check [MANDATORY]
```bash
python3 skills/guardrail-orchestrator/scripts/verify_async.py --path . 2>&1
ASYNC_EXIT=$?
echo "ASYNC_EXIT=$ASYNC_EXIT"
```

This catches runtime errors at build time:
- `fetch()` without `await`
- `.json()` without `await`
- `Promise.all()` without `await`
- Floating promises (unawaited async calls)

**Exit codes:**
- 0 = PASS (no high-severity issues)
- 1 = HIGH severity issues found (blocks approval)

#### 2.4 Lint Check (if --strict or --full)
```bash
npm run lint 2>&1
LINT_EXIT=$?
echo "LINT_EXIT=$LINT_EXIT"
```

#### 2.5 Unit Tests (if --strict or --full)
```bash
npm test -- --passWithNoTests 2>&1
TEST_EXIT=$?
echo "TEST_EXIT=$TEST_EXIT"
```

#### 2.6 API Contract Validation [MANDATORY]
```bash
python3 skills/guardrail-orchestrator/scripts/validate_api_contract.py --project-dir . 2>&1
API_EXIT=$?
echo "API_EXIT=$API_EXIT"
```

This validates:
- Frontend API calls have matching backend endpoints
- HTTP methods match (GET, POST, PUT, DELETE)
- Request bodies are sent where expected
- Response handling matches backend output

#### 2.7 Security Scan [MANDATORY]
```bash
# Run comprehensive security scanner
python3 skills/guardrail-orchestrator/scripts/security_scan.py \
  --project-dir . \
  --severity HIGH
SECURITY_EXIT=$?
echo "SECURITY_EXIT=$SECURITY_EXIT"
```

**Security scan checks:**
- Hardcoded secrets (API keys, passwords, tokens)
- SQL injection vulnerabilities
- XSS risks (dangerouslySetInnerHTML, innerHTML)
- Command injection patterns
- Path traversal vulnerabilities
- NoSQL injection risks
- SSRF vulnerabilities
- Prototype pollution
- Insecure authentication patterns
- CORS misconfigurations
- Sensitive data exposure
- Debug code in production

**Exit codes:**
- 0 = PASS (no critical/high issues)
- 1 = HIGH issues found (warning)
- 2 = CRITICAL issues found (blocks approval)

**For a full security audit, run:** `/workflow:security --full`

### Step A3: Gather All Tasks [MANDATORY]
```bash
ls $TASKS_DIR/*.yml 2>/dev/null
```
**MUST process ALL task files found**

### Step A4: Review Each Task [MANDATORY]

For EACH task file:

```bash
# Extract file_paths from task
TASK_FILES=$(grep -A 20 "file_paths:" "$TASK_FILE" | grep -E "^\s+-" | sed 's/.*- //')
```

**Check each file exists**:
```bash
for file in $TASK_FILES; do
  if [ -f "$file" ]; then
    echo "EXISTS: $file"
  else
    echo "MISSING: $file"
    MISSING_COUNT=$((MISSING_COUNT + 1))
  fi
done
```

**Determine task verdict**:
```
IF all files exist
AND BUILD_EXIT = 0
AND TS_EXIT = 0
AND ASYNC_EXIT = 0
AND API_EXIT = 0
AND SECURITY_EXIT != 2 (no critical security issues)
AND (not --strict OR (LINT_EXIT = 0 AND TEST_EXIT = 0)):
    -> TASK_VERDICT = APPROVED
ELSE:
    -> TASK_VERDICT = REJECTED
    -> Record reason (missing files / build failure / type error / async issue / API mismatch / security issue)
```
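
The same verdict logic as a bash sketch (hypothetical; `STRICT` comes from the argument parsing earlier, and the exit-code variables from Step A2):

```bash
# Combine validation exit codes into a per-task verdict (sketch).
TASK_VERDICT=REJECTED
if [ "${MISSING_COUNT:-0}" -eq 0 ] && [ "$BUILD_EXIT" -eq 0 ] && \
   [ "$TS_EXIT" -eq 0 ] && [ "$ASYNC_EXIT" -eq 0 ] && \
   [ "$API_EXIT" -eq 0 ] && [ "$SECURITY_EXIT" -ne 2 ]; then
  if [ -z "$STRICT" ] || { [ "$LINT_EXIT" -eq 0 ] && [ "$TEST_EXIT" -eq 0 ]; }; then
    TASK_VERDICT=APPROVED
  fi
fi
echo "TASK_VERDICT=$TASK_VERDICT"
```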

**Security exit codes:**
- 0 = PASS
- 1 = HIGH issues (warning, doesn't block unless --strict)
- 2 = CRITICAL issues (always blocks)

### Step A5: Batch Update [MANDATORY]

For APPROVED tasks:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> approved
```

For REJECTED tasks:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task <task_id> pending
```

### Step A6: Auto Review Report [MANDATORY]

**MUST output this exact format**:
```
+======================================================================+
| REVIEW COMPLETE |
+======================================================================+
| Version: $VERSION_ID |
| Mode: AUTO [STRICT if --strict] [FULL if --full] |
+======================================================================+
| VALIDATION RESULTS |
+----------------------------------------------------------------------+
| Build: PASS (exit 0) / FAIL (exit $BUILD_EXIT) |
| TypeScript: PASS (exit 0) / FAIL (exit $TS_EXIT) |
| Async/Await: PASS / FAIL (X high, Y medium issues) |
| Lint: PASS / FAIL / SKIPPED |
| Tests: PASS / FAIL / SKIPPED |
| API Contract: PASS / FAIL (X errors, Y warnings) |
| Security: PASS / WARNING / CRITICAL |
| (C:X H:X M:X L:X issues) |
+======================================================================+
| API CONTRACT DETAILS |
+----------------------------------------------------------------------+
| Frontend calls: X matched, Y unmatched |
| Backend endpoints: X defined, Y unused |
| Method mismatches: X |
| Body mismatches: X |
+======================================================================+
| TASK RESULTS |
+----------------------------------------------------------------------+
| Total: X tasks |
| Approved: X tasks |
| Rejected: X tasks |
| Skipped: X tasks (already completed) |
+======================================================================+
| APPROVED TASKS |
| - task_create_Button |
| - task_create_Form |
+----------------------------------------------------------------------+
| REJECTED TASKS |
| - task_create_Modal |
| Reason: Missing file app/components/Modal.tsx |
| - task_update_API |
| Reason: API contract error - endpoint not found |
+======================================================================+
| SECURITY WARNINGS |
| - src/lib/api.ts:15 - Possible hardcoded API key |
| - app/page.tsx:42 - dangerouslySetInnerHTML usage |
+======================================================================+
```

### Step A7: Next Steps [MANDATORY]

**IF all approved**:
```
All tasks approved.
Next: Run `/workflow:approve implementation` to continue.
```

**IF any rejected**:
```
Some tasks need fixes.

API Contract Issues:
  For frontend issues: Fix the API call URL or method
  For backend issues: Create or fix the API endpoint

For each rejected task, run:
  /workflow:frontend <task_id>  (for frontend tasks)
  /workflow:backend <task_id>   (for backend tasks)

Then re-run: /workflow:review --auto
```

---

## MODE: SINGLE TASK REVIEW (--next or task_id)

### Step S1: Get Task [MANDATORY]
```bash
VERSION_ID=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py current)
TASKS_DIR=".workflow/versions/$VERSION_ID/tasks"
```

**IF --next**:
```bash
# Find first task with status: pending or status: implemented
TASK_FILE=$(grep -l "status: pending\|status: implemented" $TASKS_DIR/*.yml 2>/dev/null | head -1)
```

**IF specific task_id**:
```bash
TASK_FILE="$TASKS_DIR/$ARGUMENTS.yml"
```

**BLOCK IF**: Task file does not exist -> Error: "Task not found: $ARGUMENTS"

### Step S2: Read Task Spec [MANDATORY]
```bash
cat "$TASK_FILE"
```

Extract:
- `id`: Task identifier
- `file_paths`: List of files to verify
- `acceptance_criteria`: List of requirements to check

### Step S3: Run All Validations [MANDATORY]

```bash
# Build
npm run build 2>&1
BUILD_EXIT=$?

# TypeScript
npx tsc --noEmit 2>&1
TS_EXIT=$?

# API Contract
python3 skills/guardrail-orchestrator/scripts/validate_api_contract.py --project-dir . 2>&1
API_EXIT=$?
```

**MUST capture and report all exit codes**

### Step S4: Verify Files Exist [MANDATORY]

For each path in `file_paths`:
```bash
ls -la "$path" 2>/dev/null && echo "EXISTS" || echo "MISSING"
```

**Record**:
- FILES_EXIST = true/false
- MISSING_FILES = list of missing paths

### Step S5: Read and Analyze Files [MANDATORY]

For each EXISTING file:
1. Read file content
2. Check against acceptance_criteria:
   - [ ] File exports correct components/functions
   - [ ] Props/types match manifest spec
   - [ ] Code follows project patterns
   - [ ] No obvious bugs or issues
3. Check API contract compliance:
   - [ ] Frontend calls use correct endpoints
   - [ ] HTTP methods are appropriate
   - [ ] Request bodies are properly structured
   - [ ] Response handling is correct

### Step S6: Determine Verdict [MANDATORY]

```
IF BUILD_EXIT = 0
AND TS_EXIT = 0
AND API_EXIT = 0
AND FILES_EXIST = true
AND acceptance_criteria met:
    -> VERDICT = APPROVED
ELSE:
    -> VERDICT = REQUEST_CHANGES
    -> Record all issues found
```

### Step S7: Update Task Status [MANDATORY]

**IF APPROVED**:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task $TASK_ID approved
```

**IF REQUEST_CHANGES**:
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py task $TASK_ID pending
```

### Step S8: Output Review Report [MANDATORY]

**MUST output this exact format**:
```
+======================================================================+
| TASK REVIEW |
+======================================================================+
| Task: $TASK_ID |
| Version: $VERSION_ID |
+======================================================================+
| VALIDATION CHECKS |
+----------------------------------------------------------------------+
| Build: PASS / FAIL |
| TypeScript: PASS / FAIL |
| API Contract: PASS / FAIL |
| Files exist: PASS / FAIL (X/Y files) |
| Acceptance criteria: PASS / PARTIAL / FAIL |
| Code quality: PASS / ISSUES |
+======================================================================+
| API CONTRACT STATUS |
+----------------------------------------------------------------------+
| Endpoint calls: X found, Y matched |
| Method correctness: PASS / X mismatches |
| Request bodies: PASS / X issues |
| Response handling: PASS / ISSUES |
+======================================================================+
| VERDICT: APPROVED / REQUEST_CHANGES |
+======================================================================+
| [If REQUEST_CHANGES, list all issues:] |
| 1. Missing file: app/components/Button.tsx |
| 2. TypeScript error: Type 'string' not assignable to 'number' |
| 3. API contract: POST /api/users called but endpoint expects GET |
| 4. API contract: Frontend sends body but backend ignores it |
| 5. Acceptance criterion not met: "Must support dark mode" |
+======================================================================+
```

### Step S9: Next Steps [MANDATORY]

**IF APPROVED**:
```
Task approved.
Next: Run `/workflow:complete $TASK_ID` to mark as done.
Or run `/workflow:review --next` to review the next task.
```

**IF REQUEST_CHANGES**:
```
Changes requested.

Issues to fix:
[List specific issues with file locations]

For API contract issues:
- If frontend issue: Fix the fetch/axios call in [file:line]
- If backend issue: Update the API route handler in [file]

Fix issues and re-run:
  /workflow:frontend $TASK_ID  (for frontend tasks)
  /workflow:backend $TASK_ID   (for backend tasks)
Then: /workflow:review $TASK_ID
```

---

## USAGE EXAMPLES

```bash
# Review specific task
/workflow:review task_create_Button

# Review next pending task
/workflow:review --next

# Auto-review all tasks (standard - build + types + API)
/workflow:review --auto

# Auto-review all tasks (strict - includes lint + tests)
/workflow:review --auto --strict

# Full review with all checks
/workflow:review --auto --full
```

---

## API CONTRACT VALIDATION DETAILS

The API contract validator checks:

### Frontend Analysis
- **fetch()** calls with `/api/` paths
- **axios** requests (get, post, put, delete)
- **useSWR** data fetching hooks
- **Custom API clients** (api.get, api.post, etc.)

### Backend Analysis
- **Next.js App Router**: `app/api/*/route.ts` exports (GET, POST, PUT, DELETE)
- **Next.js Pages Router**: `pages/api/*.ts` with req.method checks
- **Express-style**: router.get/post/etc. patterns

### Validation Rules
1. **Endpoint Existence**: Every frontend call must have a matching backend route
2. **Method Match**: GET calls must hit GET endpoints, POST to POST, etc.
3. **Body Alignment**: POST/PUT calls should send bodies, GET should not
4. **Unused Endpoints**: Backend routes not called by frontend (warnings)

---

## ENFORCEMENT CHECKLIST

Before completing this command, verify:
- [ ] Build command executed and exit code captured
- [ ] TypeScript check executed and exit code captured
- [ ] API contract validation executed and exit code captured
- [ ] All file_paths verified with ls command
- [ ] Security scan completed
- [ ] Structured review report output (exact format above)
- [ ] Task status updated via workflow_manager.py
- [ ] Next steps clearly stated

@@ -0,0 +1,342 @@
---
description: Run comprehensive security audit (Security Reviewer agent)
allowed-tools: Read, Bash, Grep, Task
---

# Security Reviewer Agent - Security Audit Mode

**Input**: "$ARGUMENTS"

---

## CRITICAL CONSTRAINTS

**YOU ARE IN READ-ONLY MODE FOR ANALYSIS.**

### MUST DO (Non-Negotiable)
1. **MUST** run automated security scanner
2. **MUST** analyze all CRITICAL and HIGH findings
3. **MUST** check dependency vulnerabilities
4. **MUST** review security configurations
5. **MUST** output structured security report
6. **MUST** provide remediation guidance

### CANNOT DO (Strictly Forbidden)
1. **CANNOT** modify source files
2. **CANNOT** fix issues directly
3. **CANNOT** approve with CRITICAL issues
4. **CANNOT** skip any security category

---

## ARGUMENT PARSING

```
IF "$ARGUMENTS" contains "--quick":
    MODE = QUICK (scanner only)
ELSE IF "$ARGUMENTS" contains "--full":
    MODE = FULL (scanner + deep analysis + deps + config)
ELSE:
    MODE = STANDARD (scanner + deps)

SEVERITY = extract from --severity [critical|high|medium|low]
OUTPUT = extract from --json (JSON output) or text
```
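
A bash sketch of that parsing (illustrative; the actual command's flag handling may differ):

```bash
# Hypothetical sketch of the audit-mode and flag selection above.
ARGS="$*"
MODE=STANDARD
[[ "$ARGS" == *--quick* ]] && MODE=QUICK
[[ "$ARGS" == *--full* ]]  && MODE=FULL

# Optional severity filter, e.g. "--severity high"
SEVERITY=$(sed -n 's/.*--severity \([a-z]*\).*/\1/p' <<<"$ARGS")

# Output format
OUTPUT=text
[[ "$ARGS" == *--json* ]] && OUTPUT=json

echo "MODE=$MODE SEVERITY=${SEVERITY:-low} OUTPUT=$OUTPUT"
```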

---

## EXECUTION FLOW

### Step 1: Run Automated Security Scanner [MANDATORY]

```bash
python3 skills/guardrail-orchestrator/scripts/security_scan.py \
  --project-dir . \
  --severity ${SEVERITY:-LOW} \
  ${OUTPUT:+--json}
```

**Capture output and exit code:**
```bash
SCAN_EXIT=$?
echo "SCAN_EXIT=$SCAN_EXIT"
```

**Exit codes:**
- 0 = PASS (no critical/high issues)
- 1 = HIGH issues found
- 2 = CRITICAL issues found

### Step 2: Dependency Audit [MANDATORY unless --quick]

```bash
echo "=== Dependency Audit ==="
npm audit --json 2>/dev/null || echo '{"vulnerabilities":{}}'
```

**Parse npm audit results:**
- Count critical, high, moderate, low vulnerabilities
- List affected packages and versions
- Note if fixes are available (`npm audit fix`)

### Step 3: Deep Analysis [FULL mode only]

For each CRITICAL/HIGH finding from scanner:

#### 3.1 Data Flow Tracing
Use Task agent with security-engineer subagent:
```
Analyze data flow for vulnerability at [file:line].
Trace user input from source to sink.
Identify all potential attack vectors.
Assess exploitability and impact.
```

#### 3.2 Attack Vector Analysis
For each vulnerability type:
- SQL Injection → Check if input reaches query without sanitization
- XSS → Check if input reaches DOM without encoding
- Command Injection → Check if input reaches shell without escaping
- Path Traversal → Check if input reaches file system without validation

### Step 4: Configuration Review [FULL mode only]

#### 4.1 CORS Configuration
```bash
grep -rn "cors\|Access-Control" app/ src/ pages/ --include="*.ts" --include="*.tsx" --include="*.js"
```

Check for:
- Wildcard origins (`*`)
- Credentials with permissive origins
- Missing CORS on sensitive endpoints

#### 4.2 Security Headers
```bash
grep -rn "helmet\|Content-Security-Policy\|X-Frame-Options\|X-XSS-Protection" . --include="*.ts" --include="*.js"
```

Check for:
- Helmet middleware usage
- CSP configuration
- X-Frame-Options
- X-Content-Type-Options

#### 4.3 Authentication Configuration
```bash
grep -rn "jwt\|session\|auth\|cookie" app/ src/ pages/ --include="*.ts" --include="*.tsx"
```

Check for:
- JWT algorithm (avoid 'none', prefer RS256)
- Session configuration
- Cookie flags (httpOnly, secure, sameSite)

#### 4.4 Environment Variables
```bash
# Check .env files are gitignored
cat .gitignore 2>/dev/null | grep -E "\.env"

# Check for env var usage
grep -rn "process\.env\." app/ src/ --include="*.ts" --include="*.tsx" | head -20
```

### Step 5: Manual Review Checklist [FULL mode only]

Read each file modified in the current workflow and verify:

**Input Validation**
- [ ] All user inputs validated
- [ ] Type checking enforced
- [ ] Length limits applied
- [ ] Format validation (email, URL, etc.)

**Output Encoding**
- [ ] HTML encoding for DOM insertion
- [ ] URL encoding for URLs
- [ ] JSON encoding for API responses

**Database Security**
- [ ] Parameterized queries used
- [ ] No string concatenation in queries
- [ ] Proper ORM usage

**Authentication/Authorization**
- [ ] Auth checks on protected routes
- [ ] Role-based access control
- [ ] Session validation

**Error Handling**
- [ ] Generic error messages to users
- [ ] No stack traces in production
- [ ] No sensitive data in logs

### Step 6: Generate Security Report [MANDATORY]

**MUST output this exact format:**

```
+======================================================================+
| SECURITY AUDIT REPORT |
+======================================================================+
| Mode: QUICK / STANDARD / FULL |
| Date: [current date] |
| Project: [project name from package.json] |
+======================================================================+
| RISK ASSESSMENT |
+----------------------------------------------------------------------+
| Overall Risk: CRITICAL / HIGH / MEDIUM / LOW / PASS |
| |
| Static Analysis: X issues (C:X H:X M:X L:X) |
| Dependencies: X vulnerabilities |
| Configuration: X concerns |
+======================================================================+
| CRITICAL ISSUES (Immediate Action Required) |
+----------------------------------------------------------------------+
| [1] [CATEGORY] Title |
| Location: file:line |
| CWE: CWE-XXX |
| OWASP: A0X:2021-Category |
| Evidence: [code snippet] |
| Impact: [description of potential attack] |
| Fix: [specific remediation steps] |
| |
| [2] ... |
+======================================================================+
| HIGH ISSUES (Fix Before Production) |
+----------------------------------------------------------------------+
| [3] ... |
+======================================================================+
| MEDIUM ISSUES (Should Fix) |
+----------------------------------------------------------------------+
| [4] ... |
+======================================================================+
| DEPENDENCY VULNERABILITIES |
+----------------------------------------------------------------------+
| Package Version Severity Fix Available |
| lodash 4.17.20 HIGH npm audit fix |
| axios 0.21.0 MEDIUM npm audit fix |
+======================================================================+
| CONFIGURATION CONCERNS |
+----------------------------------------------------------------------+
| - CORS: Wildcard origin detected in src/middleware.ts |
| - Session: Missing httpOnly flag on auth cookie |
| - Headers: No CSP header configured |
+======================================================================+
| REMEDIATION PRIORITY |
+----------------------------------------------------------------------+
| 1. [CRITICAL] Rotate exposed API key in src/lib/api.ts |
| 2. [CRITICAL] Fix SQL injection in app/api/users/route.ts |
| 3. [HIGH] Update lodash to 4.17.21 |
| 4. [HIGH] Add input validation to user registration |
| 5. [MEDIUM] Configure CSP headers |
+======================================================================+
| VERDICT |
+----------------------------------------------------------------------+
| FAIL - X critical issues must be fixed before deployment |
| or |
| PASS - No blocking security issues found |
+======================================================================+
```

---

## VERDICT DETERMINATION

### FAIL Conditions
- Any CRITICAL issue found
- 3+ HIGH issues found
- Critical npm vulnerabilities without a fix
- Exposed secrets or credentials

### PASS WITH WARNINGS
- Only MEDIUM/LOW issues
- All HIGH issues have accepted risk
- Dependencies have fixes available

### PASS
- No CRITICAL/HIGH issues
- Dependencies up to date
- Configurations reviewed

---

## POST-AUDIT ACTIONS

### If FAIL:
```
SECURITY AUDIT FAILED

Blocking issues must be fixed:
1. [List critical issues]

For each issue:
  /workflow:frontend <task_id>  - if frontend issue
  /workflow:backend <task_id>   - if backend issue

Then re-run: /workflow:security
```

### If PASS:
```
SECURITY AUDIT PASSED

Proceed with: /workflow:review --auto
```

---

## USAGE EXAMPLES

```bash
# Quick scan (automated scanner only)
/workflow:security --quick

# Standard scan (scanner + dependencies)
/workflow:security

# Full audit (all checks)
/workflow:security --full

# Filter by severity
/workflow:security --severity high

# JSON output for CI/CD
/workflow:security --json
```

---

## INTEGRATION WITH CI/CD

### Pre-commit Hook
```bash
python3 skills/guardrail-orchestrator/scripts/security_scan.py \
  --project-dir . --severity HIGH --strict
```

### GitHub Actions
```yaml
- name: Security Scan
  run: |
    python3 skills/guardrail-orchestrator/scripts/security_scan.py \
      --project-dir . --json > security-report.json

    if [ $? -ne 0 ]; then
      echo "Security issues found!"
      cat security-report.json
      exit 1
    fi
```

---

## ENFORCEMENT CHECKLIST

Before completing this command, verify:
- [ ] Automated scanner executed
- [ ] All categories analyzed
- [ ] Dependencies audited (unless --quick)
- [ ] Structured report output
- [ ] Remediation guidance provided
- [ ] Clear verdict stated
File diff suppressed because it is too large
Load Diff
|
|
@ -0,0 +1,119 @@
---
description: Show workflow status and task summary
allowed-tools: Read, Bash
---

# Workflow Status

Display current workflow status and task breakdown.

## Steps

### 1. Check Active Workflow
```bash
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status
```

If an active workflow exists, display the workflow state.
If there is no workflow, continue with a manual task scan.

### 2. Read Project Manifest
Check `project_manifest.json` for:
- Current phase
- Entity counts by status

### 3. Scan Tasks
Get the version-specific tasks directory:
```bash
TASKS_DIR=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py tasks-dir)
```

Read all `$TASKS_DIR/*.yml` files and count by:
- Status (pending, in_progress, review, approved, completed, blocked)
- Agent (frontend, backend, reviewer)
- Type (create, update, delete, review)
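
A minimal sketch of this counting step, assuming each task file is YAML with top-level `status`, `agent`, and `type` keys (the field names mirror the lists above but are not guaranteed by the task schema):

```python
from collections import Counter
from pathlib import Path

import yaml  # PyYAML

def count_tasks(tasks_dir: str) -> dict:
    """Tally task files by status, agent, and type."""
    by_status, by_agent, by_type = Counter(), Counter(), Counter()
    for path in Path(tasks_dir).glob("*.yml"):
        task = yaml.safe_load(path.read_text()) or {}
        by_status[task.get("status", "unknown")] += 1
        by_agent[task.get("agent", "unknown")] += 1
        by_type[task.get("type", "unknown")] += 1
    return {"status": by_status, "agent": by_agent, "type": by_type}
```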
### 4. Display Summary

```
╔══════════════════════════════════════════════════════════════╗
║                       WORKFLOW STATUS                         ║
╠══════════════════════════════════════════════════════════════╣
║ Active Workflow: <workflow_id> | None                         ║
║ Feature: <feature_name>                                       ║
║ Phase: <current_phase>                                        ║
╠══════════════════════════════════════════════════════════════╣
║ APPROVAL GATES                                                ║
║ 🛑 Design:         <pending|approved|rejected>                ║
║ 🛑 Implementation: <pending|approved|rejected>                ║
╠══════════════════════════════════════════════════════════════╣
║ TASKS BY STATUS                                               ║
║ ⏳ Pending:     X                                             ║
║ 🔄 In Progress: X                                             ║
║ 🔍 Review:      X                                             ║
║ ✅ Approved:    X                                             ║
║ ✓  Completed:   X                                             ║
║ 🚫 Blocked:     X                                             ║
╠══════════════════════════════════════════════════════════════╣
║ TASKS BY AGENT                                                ║
║ 🎨 Frontend: X pending, X completed                           ║
║ ⚙️ Backend:  X pending, X completed                           ║
║ 🔍 Reviewer: X pending                                        ║
╠══════════════════════════════════════════════════════════════╣
║ NEXT ACTIONS                                                  ║
║ /workflow:frontend --next  (X tasks available)                ║
║ /workflow:backend --next   (X tasks available)                ║
║ /workflow:review --next    (X tasks to review)                ║
║ /workflow:resume           (continue workflow)                ║
╚══════════════════════════════════════════════════════════════╝
```
### 5. Show Design Visualization
**If in DESIGNING or AWAITING_DESIGN_APPROVAL phase**, display the visual design:

```bash
python3 skills/guardrail-orchestrator/scripts/visualize_design.py --manifest project_manifest.json
```

This shows:
- 📱 Page flow diagram
- 📄 Page details with components
- 🧩 Component hierarchy
- 🔌 API endpoints
- 🔄 Data flow architecture

### 5b. Show Implementation Visualization
**If in REVIEWING, SECURITY_REVIEW, or AWAITING_IMPL_APPROVAL phase**, display what was built:

```bash
python3 skills/guardrail-orchestrator/scripts/visualize_implementation.py --manifest project_manifest.json
```

This shows:
- 📱 Page structure with routes
- 🧩 Component hierarchy and relationships
- 🔌 API endpoints with HTTP methods
- 📊 Implementation statistics (lines, hooks, types)
- 🌳 Component tree view

### 6. List Pending Tasks
Show a table of tasks ready to work on:

| Task ID | Type | Agent | Priority | Dependencies |
|---------|------|-------|----------|--------------|

### 7. Show Approval Instructions

**If AWAITING_DESIGN_APPROVAL**:
```
🛑 Design approval required. Review the entities and tasks, then:
- Approve: /workflow:approve design
- Reject:  /workflow:reject design "reason"
```

**If AWAITING_IMPL_APPROVAL**:
```
🛑 Implementation approval required. Review the code, then:
- Approve: /workflow:approve implementation
- Reject:  /workflow:reject implementation "reason"
```
@@ -0,0 +1,4 @@
api_key: pk_user_8d080a1a699dc2a1769ca99ded0ca39fa80324b8713cf55ea7fecc1c372379a6
project_id: ""
repo_id: ""
app_id: cmjb04ana0001qp0tijyy9emq

@@ -0,0 +1,225 @@
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_bash.py\" --command \"$TOOL_INPUT_COMMAND\""
          }
        ]
      },
      {
        "matcher": "Task",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation task --input \"$TOOL_INPUT\""
          }
        ]
      },
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation write --file \"$TOOL_INPUT_FILE_PATH\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "Edit",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --file \"$TOOL_INPUT_FILE_PATH\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --file \"$TOOL_INPUT_FILE_PATH\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "NotebookEdit",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --file \"$TOOL_INPUT_NOTEBOOK_PATH\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_NOTEBOOK_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__serena__create_text_file",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation write --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__serena__replace_content",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__serena__replace_symbol_body",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__morphllm-fast-apply__write_file",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation write --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__morphllm-fast-apply__tiny_edit_file",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__filesystem__write_file",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation write --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__filesystem__edit_file",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation edit --input \"$TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_PATH\""
          }
        ]
      },
      {
        "matcher": "mcp__filesystem__create_directory",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation write --input \"$TOOL_INPUT\""
          }
        ]
      },
      {
        "matcher": "mcp__filesystem__move_file",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/validate_workflow.py\" --operation write --input \"$TOOL_INPUT\""
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/post_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "Edit",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/post_write.py\" --manifest \"$CLAUDE_PROJECT_DIR/project_manifest.json\" --file \"$TOOL_INPUT_FILE_PATH\""
          }
        ]
      },
      {
        "matcher": "Task",
        "hooks": [
          {
            "type": "command",
            "command": "echo '🔄 Agent task completed. Verify outputs before proceeding.'"
          }
        ]
      }
    ],
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$CLAUDE_PROJECT_DIR/skills/guardrail-orchestrator/scripts/workflow_manager.py\" status 2>/dev/null || echo '🛡️ Session complete (no active workflow)'"
          }
        ]
      }
    ]
  }
}
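
Each `PreToolUse` entry above delegates to a repo validation script. As a rough sketch of the contract such a validator might implement — assuming the hook runner treats a nonzero exit as "block this tool call" — something like the following (hypothetical; the repo's actual `validate_bash.py` is not shown here, and the deny-list is purely illustrative):

```python
#!/usr/bin/env python3
# Hypothetical sketch of validate_bash.py's contract, not the real script.
import argparse
import sys

# Illustrative deny-list; the real policy is defined elsewhere in the repo.
FORBIDDEN = ["rm -rf /", "git push --force", "curl | sh"]

def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--command", required=True)
    args = parser.parse_args()

    for pattern in FORBIDDEN:
        if pattern in args.command:
            print(f"Blocked: '{pattern}' is not allowed.", file=sys.stderr)
            return 2  # nonzero exit signals the hook runner to block
    return 0

if __name__ == "__main__":
    sys.exit(main())
```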
@@ -0,0 +1,16 @@
{
  "mcpServers": {
    "eureka-docs": {
      "command": "npx",
      "args": ["eureka-docs-server"],
      "env": {}
    },
    "eureka-imagen": {
      "command": "npx",
      "args": ["eureka-imagen-server"],
      "env": {
        "IMAGEROUTER_API_KEY": "${IMAGEROUTER_API_KEY}"
      }
    }
  }
}
@@ -0,0 +1,152 @@
# Documentation Writer Agent
# Specialized agent for generating dual-audience documentation

name: doc-writer
role: Documentation Specialist
description: |
  Expert in creating comprehensive documentation that serves both technical
  and non-technical audiences. Specializes in translating complex technical
  concepts into accessible language while maintaining technical accuracy.

capabilities:
  - Analyze project structure and extract key information
  - Generate visual ASCII diagrams for architecture
  - Write plain-language descriptions of technical features
  - Create technical reference documentation
  - Build glossaries for technical terms
  - Structure documentation for multiple audience levels

allowed_tools:
  - Read
  - Write
  - Edit
  - Glob
  - Grep
  - Bash

blocked_tools:
  - Task  # Should not spawn sub-agents

allowed_files:
  - "docs/**/*"
  - "*.md"
  - "package.json"
  - "project_manifest.json"
  - "tsconfig.json"
  - "requirements.txt"
  - "pyproject.toml"
  - "Cargo.toml"
  - "go.mod"

responsibilities:
  - Analyze source code to understand functionality
  - Extract API endpoints and document them
  - Document components with props and usage
  - Create ER diagrams for data models
  - Write executive summaries for stakeholders
  - Build glossaries for technical terms
  - Generate quick reference cards

outputs:
  - PROJECT_DOCUMENTATION.md (main documentation)
  - QUICK_REFERENCE.md (one-page summary)
  - API_REFERENCE.md (detailed API docs)
  - COMPONENTS.md (component catalog)
  - GLOSSARY.md (term definitions)

cannot_do:
  - Modify source code
  - Change project configuration
  - Run tests or builds
  - Deploy or publish

writing_principles:
  non_technical:
    - Lead with "What" and "Why", not "How"
    - Use analogies and real-world comparisons
    - Avoid acronyms; spell them out the first time
    - Use bullet points over paragraphs
    - Include visual diagrams
    - Focus on value and outcomes

  technical:
    - Include in collapsible <details> sections
    - Provide code examples with syntax highlighting
    - Reference file paths and line numbers
    - Include type definitions and interfaces
    - Link to source files
    - Document edge cases and error handling

documentation_sections:
  executive_summary:
    audience: everyone
    purpose: Project purpose, value proposition, key capabilities
    format: Plain English, no jargon

  architecture_overview:
    audience: everyone
    purpose: Visual system understanding
    format: ASCII diagrams, technology tables

  getting_started:
    audience: semi-technical
    purpose: Quick onboarding
    format: Step-by-step with explanations

  feature_guide:
    audience: non-technical
    purpose: Feature documentation
    format: What/Why/How (simplified)

  api_reference:
    audience: developers
    purpose: API documentation
    format: Endpoints, schemas, examples

  component_catalog:
    audience: developers
    purpose: UI component documentation
    format: Props, events, usage examples

  data_models:
    audience: both
    purpose: Data structure documentation
    format: ER diagrams + plain descriptions

  glossary:
    audience: non-technical
    purpose: Term definitions
    format: Term -> Plain English definition

ascii_diagram_templates:
  system_architecture: |
    ┌─────────────────────────────────────────────────────────┐
    │                      [System Name]                      │
    ├─────────────────────────────────────────────────────────┤
    │  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   │
    │  │   [Layer]   │──▶│   [Layer]   │──▶│   [Layer]   │   │
    │  └─────────────┘   └─────────────┘   └─────────────┘   │
    └─────────────────────────────────────────────────────────┘

  entity_relationship: |
    ┌──────────────┐       ┌──────────────┐
    │   [Entity]   │       │   [Entity]   │
    ├──────────────┤       ├──────────────┤
    │ id (PK)      │──────▶│ id (PK)      │
    │ field        │       │ foreign_key  │
    └──────────────┘       └──────────────┘

  data_flow: |
    [Source] ──▶ [Process] ──▶ [Output]
        │            │             │
        ▼            ▼             ▼
    [Storage]   [Transform]   [Display]

quality_checklist:
  - All referenced files exist
  - All code examples are syntactically correct
  - No broken internal links
  - Technical details wrapped in <details>
  - Plain English explanations for all features
  - Glossary includes all technical terms used
  - ASCII diagrams render correctly in markdown
@@ -0,0 +1,274 @@
# Documentation Output Schema
# Defines the structure for generated documentation

version: "1.0"
description: Schema for dual-audience project documentation

output_files:
  main_documentation:
    filename: PROJECT_DOCUMENTATION.md
    required: true
    sections:
      - executive_summary
      - quick_start
      - architecture_overview
      - features
      - for_developers
      - glossary

  quick_reference:
    filename: QUICK_REFERENCE.md
    required: true
    sections:
      - commands
      - key_files
      - api_endpoints
      - environment_variables

  api_reference:
    filename: API_REFERENCE.md
    required: false
    condition: has_api_endpoints
    sections:
      - authentication
      - endpoints_by_resource
      - error_codes
      - rate_limiting

  components:
    filename: COMPONENTS.md
    required: false
    condition: has_ui_components
    sections:
      - component_index
      - component_details
      - usage_examples

section_schemas:
  executive_summary:
    description: High-level project overview for all audiences
    fields:
      project_name:
        type: string
        required: true
      tagline:
        type: string
        required: true
        max_length: 100
        description: One-line description in plain English
      what_it_does:
        type: string
        required: true
        description: 2-3 sentences, no technical jargon
      who_its_for:
        type: string
        required: true
        description: Target audience in plain English
      key_capabilities:
        type: array
        items:
          capability: string
          description: string
        min_items: 3
        max_items: 8

  quick_start:
    description: Getting started guide for new users
    fields:
      prerequisites:
        type: array
        items:
          tool: string
          purpose: string  # Plain English explanation
          install_command: string
      installation_steps:
        type: array
        items:
          step: integer
          command: string
          explanation: string  # What this does
      basic_usage:
        type: string
        description: Simple example of how to use

  architecture_overview:
    description: Visual system architecture
    fields:
      system_diagram:
        type: string
        format: ascii_art
        required: true
      technology_stack:
        type: array
        items:
          layer: string
          technology: string
          purpose: string  # Plain English
      directory_structure:
        type: string
        format: tree
        required: true

  features:
    description: Feature documentation for all audiences
    fields:
      features:
        type: array
        items:
          name: string
          what_it_does: string     # Plain English
          how_to_use: string       # Simple instructions
          example: string          # Code or usage example
          technical_notes: string  # For engineers, optional

  api_endpoint:
    description: Single API endpoint documentation
    fields:
      method:
        type: enum
        values: [GET, POST, PUT, PATCH, DELETE]
      path:
        type: string
        pattern: "^/api/"
      summary:
        type: string
        description: Plain English description
      description:
        type: string
        description: Detailed explanation
      authentication:
        type: object
        fields:
          required: boolean
          type: string  # bearer, api_key, session
      request:
        type: object
        fields:
          content_type: string
          body_schema: object
          query_params: array
          path_params: array
      responses:
        type: array
        items:
          status: integer
          description: string
          schema: object
      examples:
        type: array
        items:
          name: string
          request: object
          response: object

  component:
    description: UI component documentation
    fields:
      name:
        type: string
        pattern: "^[A-Z][a-zA-Z]*$"  # PascalCase
      path:
        type: string
      description:
        type: string
        description: Plain English purpose
      props:
        type: array
        items:
          name: string
          type: string
          required: boolean
          default: any
          description: string
      events:
        type: array
        items:
          name: string
          payload: string
          description: string
      usage_example:
        type: string
        format: code
      dependencies:
        type: array
        items: string

  data_model:
    description: Data model documentation
    fields:
      name:
        type: string
      description:
        type: string
        description: What data it represents (plain English)
      table_name:
        type: string
      fields:
        type: array
        items:
          name: string
          type: string
          description: string  # Plain English
          constraints: array
      relations:
        type: array
        items:
          type: enum
          values: [has_one, has_many, belongs_to, many_to_many]
          target: string
          description: string

  glossary_term:
    description: Technical term definition
    fields:
      term:
        type: string
        required: true
      definition:
        type: string
        required: true
        description: Plain English definition
      see_also:
        type: array
        items: string
        description: Related terms

audience_markers:
  non_technical:
    indicator: "📖"
    description: "For all readers"
  technical:
    indicator: "🔧"
    description: "For developers"
    wrapper: "<details><summary>🔧 Technical Details</summary>...content...</details>"

formatting_rules:
  headings:
    h1: "# Title"
    h2: "## Section"
    h3: "### Subsection"
  code_blocks:
    language_hints: required
    max_lines: 30
  tables:
    alignment: left
    max_columns: 5
  diagrams:
    format: ascii_art
    max_width: 80
  links:
    internal: "[text](#anchor)"
    external: "[text](url)"
    file_reference: "`path/to/file`"

validation_rules:
  - name: no_broken_links
    description: All internal links must resolve
  - name: code_syntax
    description: All code blocks must be syntactically valid
  - name: file_references
    description: All referenced files must exist
  - name: glossary_coverage
    description: All technical terms must be in glossary
  - name: diagram_rendering
    description: ASCII diagrams must render correctly
@@ -0,0 +1,272 @@
# Project Analysis Schema
# Defines the structure for project analysis output

version: "1.0"
description: Schema for analyzing project structure before documentation generation

project_analysis:
  project:
    type: object
    required: true
    fields:
      name:
        type: string
        required: true
        source: package.json/name or directory name
      version:
        type: string
        required: false
        source: package.json/version
      description:
        type: string
        required: false
        source: package.json/description or README.md first paragraph
      type:
        type: enum
        values: [node, python, rust, go, java, dotnet, ruby, php, other]
        detection:
          node: package.json
          python: requirements.txt, pyproject.toml, setup.py
          rust: Cargo.toml
          go: go.mod
          java: pom.xml, build.gradle
          dotnet: "*.csproj, *.sln"
          ruby: Gemfile
          php: composer.json
      repository:
        type: string
        source: package.json/repository or .git/config

  tech_stack:
    type: object
    required: true
    fields:
      language:
        type: string
        description: Primary programming language
      framework:
        type: string
        description: Main application framework
        detection:
          next: "next" in dependencies
          react: "react" in dependencies without "next"
          vue: "vue" in dependencies
          angular: "@angular/core" in dependencies
          express: "express" in dependencies
          fastapi: "fastapi" in requirements
          django: "django" in requirements
          flask: "flask" in requirements
          rails: "rails" in Gemfile
      database:
        type: string
        description: Database system if any
        detection:
          prisma: "@prisma/client" in dependencies
          mongoose: "mongoose" in dependencies
          typeorm: "typeorm" in dependencies
          sequelize: "sequelize" in dependencies
          sqlalchemy: "sqlalchemy" in requirements
      ui_framework:
        type: string
        description: UI component framework if any
        detection:
          tailwind: "tailwindcss" in devDependencies
          mui: "@mui/material" in dependencies
          chakra: "@chakra-ui/react" in dependencies
          shadcn: "shadcn" patterns in components
      key_dependencies:
        type: array
        items:
          name: string
          version: string
          purpose: string  # Plain English explanation
        categorization:
          core: Framework, runtime dependencies
          database: ORM, database clients
          auth: Authentication libraries
          ui: UI component libraries
          testing: Test frameworks
          build: Build tools, bundlers
          utility: Helper libraries

  structure:
    type: object
    required: true
    fields:
      source_dir:
        type: string
        description: Main source code directory
        detection:
          - src/
          - app/
          - lib/
          - source/
      directories:
        type: array
        items:
          path: string
          purpose: string  # Plain English description
          file_count: integer
          key_files: array
        common_mappings:
          src/components: "UI components"
          src/pages: "Application pages/routes"
          src/api: "API route handlers"
          src/lib: "Utility functions and shared code"
          src/hooks: "Custom React hooks"
          src/context: "React context providers"
          src/store: "State management"
          src/types: "TypeScript type definitions"
          src/styles: "Global styles and themes"
          prisma/: "Database schema and migrations"
          public/: "Static assets"
          tests/: "Test files"
          __tests__/: "Test files (Jest convention)"

  features:
    type: array
    description: Main features/capabilities of the project
    items:
      name: string
      description: string       # Plain English
      technical_notes: string   # For engineers
      files: array              # Key file paths
    detection_patterns:
      authentication:
        keywords: [auth, login, logout, session, jwt, oauth]
        files: ["**/auth/**", "**/login/**"]
      user_management:
        keywords: [user, profile, account, register, signup]
        files: ["**/user/**", "**/users/**"]
      api:
        keywords: [api, endpoint, route, handler]
        files: ["**/api/**", "**/routes/**"]
      database:
        keywords: [model, entity, schema, migration, prisma]
        files: ["**/models/**", "**/prisma/**"]
      file_upload:
        keywords: [upload, file, storage, s3, blob]
        files: ["**/upload/**", "**/storage/**"]
      search:
        keywords: [search, filter, query]
        files: ["**/search/**"]
      notifications:
        keywords: [notification, email, sms, push]
        files: ["**/notification/**", "**/email/**"]

  components:
    type: array
    description: UI components found in the project
    items:
      id: string    # component_<name>
      name: string  # PascalCase
      path: string
      description: string
      props: string        # Props summary
      dependencies: array  # Imported components
    detection:
      react: "export (default )?(function|const) [A-Z]"
      vue: "<template>.*<script>"
      angular: "@Component"

  api_endpoints:
    type: array
    description: API endpoints found in the project
    items:
      method: enum [GET, POST, PUT, PATCH, DELETE]
      path: string
      handler_file: string
      description: string
      technical_notes: string
    detection:
      next_app_router: "app/api/**/route.ts exports GET, POST, etc."
      next_pages_router: "pages/api/**/*.ts"
      express: "router.get/post/put/delete"
      fastapi: "@app.get/post/put/delete"

  data_models:
    type: array
    description: Data models/entities found
    items:
      name: string
      description: string  # Plain English
      fields: array
      relations: array
    detection:
      prisma: "model [A-Z][a-z]+ {"
      typeorm: "@Entity"
      mongoose: "new Schema"
      sqlalchemy: "class.*Base"

  glossary_terms:
    type: array
    description: Technical terms that need definitions
    items:
      term: string
      definition: string  # Plain English
    auto_detection:
      - Acronyms (all caps words)
      - Framework-specific terms
      - Domain-specific terminology
      - Technical jargon from code comments

analysis_process:
  steps:
    1_identify_type:
      description: Determine project type from config files
      files_to_check:
        - package.json
        - requirements.txt
        - pyproject.toml
        - Cargo.toml
        - go.mod
        - pom.xml

    2_scan_structure:
      description: Map directory structure
      actions:
        - List all directories
        - Count files per directory
        - Identify purpose from names

    3_extract_metadata:
      description: Get project metadata
      sources:
        - package.json (name, version, description, dependencies)
        - README.md (description, usage)
        - project_manifest.json (if exists)

    4_identify_features:
      description: Detect main features
      methods:
        - Keyword scanning in file names
        - Pattern matching in code
        - Directory structure analysis

    5_map_components:
      description: Catalog UI components
      methods:
        - Scan component directories
        - Extract props from TypeScript
        - Find usage patterns

    6_document_apis:
      description: Document API endpoints
      methods:
        - Scan API routes
        - Extract request/response schemas
        - Find authentication requirements

    7_model_data:
      description: Document data models
      methods:
        - Parse Prisma schema
        - Extract TypeORM entities
        - Find Mongoose schemas

    8_collect_terms:
      description: Build glossary
      methods:
        - Extract acronyms
        - Find domain terms
        - Look for jargon in comments
@ -0,0 +1,489 @@
|
||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Project Analyzer for Documentation Generation
|
||||||
|
Analyzes project structure and outputs YAML for documentation generation.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import json
|
||||||
|
import re
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Dict, List, Any, Optional
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
# Try to import yaml, but provide fallback
|
||||||
|
try:
|
||||||
|
import yaml
|
||||||
|
except ImportError:
|
||||||
|
yaml = None
|
||||||
|
|
||||||
|
|
||||||
|
def detect_project_type(root_path: Path) -> Dict[str, Any]:
|
||||||
|
"""Detect project type from config files."""
|
||||||
|
indicators = {
|
||||||
|
'node': ['package.json'],
|
||||||
|
'python': ['requirements.txt', 'pyproject.toml', 'setup.py', 'Pipfile'],
|
||||||
|
'rust': ['Cargo.toml'],
|
||||||
|
'go': ['go.mod'],
|
||||||
|
'java': ['pom.xml', 'build.gradle', 'build.gradle.kts'],
|
||||||
|
'dotnet': list(root_path.glob('*.csproj')) + list(root_path.glob('*.sln')),
|
||||||
|
'ruby': ['Gemfile'],
|
||||||
|
'php': ['composer.json'],
|
||||||
|
}
|
||||||
|
|
||||||
|
for lang, files in indicators.items():
|
||||||
|
if isinstance(files, list) and isinstance(files[0], str):
|
||||||
|
for f in files:
|
||||||
|
if (root_path / f).exists():
|
||||||
|
return {'type': lang, 'config_file': f}
|
||||||
|
elif files: # Already Path objects from glob
|
||||||
|
return {'type': lang, 'config_file': str(files[0].name)}
|
||||||
|
|
||||||
|
return {'type': 'other', 'config_file': None}
|
||||||
|
|
||||||
|
|
||||||
|
def parse_package_json(root_path: Path) -> Dict[str, Any]:
|
||||||
|
"""Parse package.json for Node.js projects."""
|
||||||
|
pkg_path = root_path / 'package.json'
|
||||||
|
if not pkg_path.exists():
|
||||||
|
return {}
|
||||||
|
|
||||||
|
with open(pkg_path, 'r') as f:
|
||||||
|
data = json.load(f)
|
||||||
|
|
||||||
|
deps = data.get('dependencies', {})
|
||||||
|
dev_deps = data.get('devDependencies', {})
|
||||||
|
|
||||||
|
# Detect framework
|
||||||
|
framework = None
|
||||||
|
if 'next' in deps:
|
||||||
|
framework = 'Next.js'
|
||||||
|
elif 'react' in deps:
|
||||||
|
framework = 'React'
|
||||||
|
elif 'vue' in deps:
|
||||||
|
framework = 'Vue.js'
|
||||||
|
elif '@angular/core' in deps:
|
||||||
|
framework = 'Angular'
|
||||||
|
elif 'express' in deps:
|
||||||
|
framework = 'Express'
|
||||||
|
elif 'fastify' in deps:
|
||||||
|
framework = 'Fastify'
|
||||||
|
|
||||||
|
# Detect database
|
||||||
|
database = None
|
||||||
|
if '@prisma/client' in deps:
|
||||||
|
database = 'Prisma (PostgreSQL/MySQL/SQLite)'
|
||||||
|
elif 'mongoose' in deps:
|
||||||
|
database = 'MongoDB (Mongoose)'
|
||||||
|
elif 'typeorm' in deps:
|
||||||
|
database = 'TypeORM'
|
||||||
|
elif 'sequelize' in deps:
|
||||||
|
database = 'Sequelize'
|
||||||
|
|
||||||
|
# Detect UI framework
|
||||||
|
ui_framework = None
|
||||||
|
if 'tailwindcss' in dev_deps or 'tailwindcss' in deps:
|
||||||
|
ui_framework = 'Tailwind CSS'
|
||||||
|
if '@mui/material' in deps:
|
||||||
|
ui_framework = 'Material UI'
|
||||||
|
elif '@chakra-ui/react' in deps:
|
||||||
|
ui_framework = 'Chakra UI'
|
||||||
|
|
||||||
|
# Categorize dependencies
|
||||||
|
key_deps = []
|
||||||
|
dep_categories = {
|
||||||
|
'core': ['react', 'next', 'vue', 'angular', 'express', 'fastify'],
|
||||||
|
'database': ['@prisma/client', 'mongoose', 'typeorm', 'sequelize', 'pg', 'mysql2'],
|
||||||
|
'auth': ['next-auth', 'passport', 'jsonwebtoken', '@auth0/nextjs-auth0'],
|
||||||
|
'ui': ['@mui/material', '@chakra-ui/react', 'antd', '@radix-ui'],
|
||||||
|
'state': ['zustand', 'redux', '@reduxjs/toolkit', 'recoil', 'jotai'],
|
||||||
|
'testing': ['jest', 'vitest', '@testing-library/react', 'cypress'],
|
||||||
|
}
|
||||||
|
|
||||||
|
for dep, version in {**deps, **dev_deps}.items():
|
||||||
|
category = 'utility'
|
||||||
|
for cat, patterns in dep_categories.items():
|
||||||
|
if any(p in dep for p in patterns):
|
||||||
|
category = cat
|
||||||
|
break
|
||||||
|
|
||||||
|
if category != 'utility' or dep in ['axios', 'zod', 'date-fns', 'lodash']:
|
||||||
|
key_deps.append({
|
||||||
|
'name': dep,
|
||||||
|
'version': version.replace('^', '').replace('~', ''),
|
||||||
|
'category': category,
|
||||||
|
'purpose': get_dep_purpose(dep)
|
||||||
|
})
|
||||||
|
|
||||||
|
return {
|
||||||
|
'name': data.get('name', 'Unknown'),
|
||||||
|
'version': data.get('version', '0.0.0'),
|
||||||
|
'description': data.get('description', ''),
|
||||||
|
'framework': framework,
|
||||||
|
'database': database,
|
||||||
|
'ui_framework': ui_framework,
|
||||||
|
'key_dependencies': key_deps[:15], # Limit to 15 most important
|
||||||
|
'scripts': data.get('scripts', {})
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def get_dep_purpose(dep_name: str) -> str:
|
||||||
|
"""Get plain English purpose for common dependencies."""
|
||||||
|
purposes = {
|
||||||
|
'react': 'UI component library',
|
||||||
|
'next': 'Full-stack React framework',
|
||||||
|
'vue': 'Progressive UI framework',
|
||||||
|
'express': 'Web server framework',
|
||||||
|
'fastify': 'High-performance web framework',
|
||||||
|
'@prisma/client': 'Database ORM and query builder',
|
||||||
|
'mongoose': 'MongoDB object modeling',
|
||||||
|
'typeorm': 'TypeScript ORM',
|
||||||
|
'sequelize': 'SQL ORM',
|
||||||
|
'next-auth': 'Authentication for Next.js',
|
||||||
|
'passport': 'Authentication middleware',
|
||||||
|
'jsonwebtoken': 'JWT token handling',
|
||||||
|
'@mui/material': 'Material Design components',
|
||||||
|
'@chakra-ui/react': 'Accessible component library',
|
||||||
|
'tailwindcss': 'Utility-first CSS framework',
|
||||||
|
'zustand': 'State management',
|
||||||
|
'redux': 'Predictable state container',
|
||||||
|
'@reduxjs/toolkit': 'Redux development toolkit',
|
||||||
|
'axios': 'HTTP client',
|
||||||
|
'zod': 'Schema validation',
|
||||||
|
'date-fns': 'Date utility functions',
|
||||||
|
'lodash': 'Utility functions',
|
||||||
|
'jest': 'Testing framework',
|
||||||
|
'vitest': 'Fast unit testing',
|
||||||
|
'@testing-library/react': 'React component testing',
|
||||||
|
'cypress': 'End-to-end testing',
|
||||||
|
}
|
||||||
|
return purposes.get(dep_name, 'Utility library')
|
||||||
|
|
||||||
|
|
||||||
|
def scan_directory_structure(root_path: Path) -> Dict[str, Any]:
|
||||||
|
"""Scan and categorize directory structure."""
|
||||||
|
ignore_dirs = {
|
||||||
|
'node_modules', '.git', '.next', '__pycache__', 'venv',
|
||||||
|
'.venv', 'dist', 'build', '.cache', 'coverage', '.turbo'
|
||||||
|
}
|
||||||
|
|
||||||
|
common_purposes = {
|
||||||
|
'src': 'Main source code directory',
|
||||||
|
'app': 'Application code (Next.js App Router)',
|
||||||
|
'pages': 'Page components (Next.js Pages Router)',
|
||||||
|
'components': 'Reusable UI components',
|
||||||
|
'lib': 'Shared utilities and libraries',
|
||||||
|
'utils': 'Utility functions',
|
||||||
|
'hooks': 'Custom React hooks',
|
||||||
|
'context': 'React context providers',
|
||||||
|
'store': 'State management',
|
||||||
|
'styles': 'CSS and styling',
|
||||||
|
'types': 'TypeScript type definitions',
|
||||||
|
'api': 'API route handlers',
|
||||||
|
'services': 'Business logic services',
|
||||||
|
'models': 'Data models/entities',
|
||||||
|
'prisma': 'Database schema and migrations',
|
||||||
|
'public': 'Static assets',
|
||||||
|
'tests': 'Test files',
|
||||||
|
'__tests__': 'Jest test files',
|
||||||
|
'test': 'Test files',
|
||||||
|
'spec': 'Test specifications',
|
||||||
|
'docs': 'Documentation',
|
||||||
|
'scripts': 'Build and utility scripts',
|
||||||
|
'config': 'Configuration files',
|
||||||
|
}
|
||||||
|
|
||||||
|
directories = []
|
||||||
|
source_dir = None
|
||||||
|
|
||||||
|
# Find main source directory
|
||||||
|
for candidate in ['src', 'app', 'lib', 'source']:
|
||||||
|
if (root_path / candidate).is_dir():
|
||||||
|
source_dir = candidate
|
||||||
|
break
|
||||||
|
|
||||||
|
# Scan directories
|
||||||
|
for item in sorted(root_path.iterdir()):
|
||||||
|
if item.is_dir() and item.name not in ignore_dirs and not item.name.startswith('.'):
|
||||||
|
file_count = sum(1 for _ in item.rglob('*') if _.is_file())
|
||||||
|
key_files = [
|
||||||
|
f.name for f in item.iterdir()
|
||||||
|
if f.is_file() and f.suffix in ['.ts', '.tsx', '.js', '.jsx', '.py', '.rs', '.go']
|
||||||
|
][:5]
|
||||||
|
|
||||||
|
directories.append({
|
||||||
|
'path': item.name,
|
||||||
|
'purpose': common_purposes.get(item.name, 'Project directory'),
|
||||||
|
'file_count': file_count,
|
||||||
|
'key_files': key_files
|
||||||
|
})
|
||||||
|
|
||||||
|
return {
|
||||||
|
'source_dir': source_dir or '.',
|
||||||
|
'directories': directories
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def detect_features(root_path: Path) -> List[Dict[str, Any]]:
|
||||||
|
"""Detect main features from code patterns."""
|
||||||
|
features = []
|
||||||
|
|
||||||
|
feature_patterns = {
|
||||||
|
'authentication': {
|
||||||
|
'keywords': ['auth', 'login', 'logout', 'session', 'jwt', 'oauth'],
|
||||||
|
'description': 'User authentication and session management',
|
||||||
|
'technical_notes': 'Handles user login, logout, and session tokens'
|
||||||
|
},
|
||||||
|
'user_management': {
|
||||||
|
'keywords': ['user', 'profile', 'account', 'register', 'signup'],
|
||||||
|
'description': 'User account creation and profile management',
|
||||||
|
'technical_notes': 'CRUD operations for user data'
|
||||||
|
},
|
||||||
|
'api': {
|
||||||
|
'keywords': ['api', 'endpoint', 'route'],
|
||||||
|
'description': 'REST API endpoints for data operations',
|
||||||
|
'technical_notes': 'HTTP handlers for client-server communication'
|
||||||
|
},
|
||||||
|
'database': {
|
||||||
|
'keywords': ['prisma', 'model', 'entity', 'schema', 'migration'],
|
||||||
|
'description': 'Database storage and data persistence',
|
||||||
|
'technical_notes': 'ORM-based data layer with migrations'
|
||||||
|
},
|
||||||
|
'file_upload': {
|
||||||
|
'keywords': ['upload', 'file', 'storage', 's3', 'blob'],
|
||||||
|
'description': 'File upload and storage functionality',
|
||||||
|
'technical_notes': 'Handles file uploads and cloud storage'
|
||||||
|
},
|
||||||
|
'search': {
|
||||||
|
'keywords': ['search', 'filter', 'query'],
|
||||||
|
'description': 'Search and filtering capabilities',
|
||||||
|
'technical_notes': 'Full-text search or database queries'
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
# Scan for features
|
||||||
|
all_files = list(root_path.rglob('*.ts')) + list(root_path.rglob('*.tsx')) + \
|
||||||
|
list(root_path.rglob('*.js')) + list(root_path.rglob('*.jsx'))
|
||||||
|
|
||||||
|
file_names = [f.stem.lower() for f in all_files]
|
||||||
|
file_paths = [str(f.relative_to(root_path)).lower() for f in all_files]
|
||||||
|
|
||||||
|
for feature_name, config in feature_patterns.items():
|
||||||
|
found_files = []
|
||||||
|
for keyword in config['keywords']:
|
||||||
|
found_files.extend([
|
||||||
|
str(f.relative_to(root_path)) for f in all_files
|
||||||
|
if keyword in str(f).lower()
|
||||||
|
])
|
||||||
|
|
||||||
|
if found_files:
|
||||||
|
features.append({
|
||||||
|
'name': feature_name.replace('_', ' ').title(),
|
||||||
|
'description': config['description'],
|
||||||
|
'technical_notes': config['technical_notes'],
|
||||||
|
'files': list(set(found_files))[:5]
|
||||||
|
})
|
||||||
|
|
||||||
|
return features
|
||||||
|
|
||||||
|
|
||||||
|
def find_components(root_path: Path) -> List[Dict[str, Any]]:
|
||||||
|
"""Find UI components in the project."""
|
||||||
|
components = []
|
||||||
|
component_dirs = ['components', 'src/components', 'app/components']
|
||||||
|
|
||||||
|
for comp_dir in component_dirs:
|
||||||
|
comp_path = root_path / comp_dir
|
||||||
|
if comp_path.exists():
|
||||||
|
for file in comp_path.rglob('*.tsx'):
|
||||||
|
if file.name.startswith('_') or file.name == 'index.tsx':
|
||||||
|
continue
|
||||||
|
|
||||||
|
name = file.stem
|
||||||
|
if name[0].isupper(): # Component names are PascalCase
|
||||||
|
components.append({
|
||||||
|
'id': f'component_{name.lower()}',
|
||||||
|
'name': name,
|
||||||
|
'path': str(file.relative_to(root_path)),
|
||||||
|
'description': f'{name} component',
|
||||||
|
'props': 'See source file'
|
||||||
|
})
|
||||||
|
|
||||||
|
return components[:20] # Limit to 20 components
|
||||||
|
|
||||||
|
|
||||||
|
def find_api_endpoints(root_path: Path) -> List[Dict[str, Any]]:
|
||||||
|
"""Find API endpoints in the project."""
|
||||||
|
endpoints = []
|
||||||
|
|
||||||
|
# Next.js App Router: app/api/**/route.ts
|
||||||
|
api_dir = root_path / 'app' / 'api'
|
||||||
|
if api_dir.exists():
|
||||||
|
for route_file in api_dir.rglob('route.ts'):
|
||||||
|
path_parts = route_file.parent.relative_to(api_dir).parts
|
||||||
|
api_path = '/api/' + '/'.join(path_parts)
|
||||||
|
|
||||||
|
# Read file to detect methods
|
||||||
|
content = route_file.read_text()
|
||||||
|
methods = []
|
||||||
|
for method in ['GET', 'POST', 'PUT', 'PATCH', 'DELETE']:
|
||||||
|
if f'export async function {method}' in content or f'export function {method}' in content:
|
||||||
|
methods.append(method)
|
||||||
|
|
||||||
|
for method in methods:
|
||||||
|
endpoints.append({
|
||||||
|
'method': method,
|
||||||
|
'path': api_path.replace('[', ':').replace(']', ''),
|
||||||
|
'handler_file': str(route_file.relative_to(root_path)),
|
||||||
|
'description': f'{method} {api_path}',
|
||||||
|
'technical_notes': 'Next.js App Router endpoint'
|
||||||
|
})
|
||||||
|
|
||||||
|
# Next.js Pages Router: pages/api/**/*.ts
|
||||||
|
pages_api = root_path / 'pages' / 'api'
|
||||||
|
if pages_api.exists():
|
||||||
|
for api_file in pages_api.rglob('*.ts'):
|
||||||
|
path_parts = api_file.relative_to(pages_api).with_suffix('').parts
|
||||||
|
api_path = '/api/' + '/'.join(path_parts)
|
||||||
|
|
||||||
|
endpoints.append({
|
||||||
|
'method': 'MULTIPLE',
|
||||||
|
'path': api_path.replace('[', ':').replace(']', ''),
|
||||||
|
'handler_file': str(api_file.relative_to(root_path)),
|
||||||
|
'description': f'API endpoint at {api_path}',
|
||||||
|
'technical_notes': 'Next.js Pages Router endpoint'
|
||||||
|
})
|
||||||
|
|
||||||
|
return endpoints
|
||||||
|
|
||||||
|
|
||||||
|
def find_data_models(root_path: Path) -> List[Dict[str, Any]]:
|
||||||
|
"""Find data models in the project."""
|
||||||
|
models = []
|
||||||
|
|
||||||
|
# Prisma schema
|
||||||
|
prisma_schema = root_path / 'prisma' / 'schema.prisma'
|
||||||
|
if prisma_schema.exists():
|
||||||
|
content = prisma_schema.read_text()
|
||||||
|
model_pattern = re.compile(r'model\s+(\w+)\s*\{([^}]+)\}', re.MULTILINE)
|
||||||
|
|
||||||
|
for match in model_pattern.finditer(content):
|
||||||
|
model_name = match.group(1)
|
||||||
|
model_body = match.group(2)
|
||||||
|
|
||||||
|
# Extract fields
|
||||||
|
fields = []
|
||||||
|
for line in model_body.strip().split('\n'):
|
||||||
|
line = line.strip()
|
||||||
|
if line and not line.startswith('@@') and not line.startswith('//'):
|
||||||
|
parts = line.split()
|
||||||
|
if len(parts) >= 2:
|
||||||
|
fields.append({
|
||||||
|
'name': parts[0],
|
||||||
|
'type': parts[1],
|
||||||
|
'description': f'{parts[0]} field'
|
            })

        models.append({
            'name': model_name,
            'description': f'{model_name} data model',
            'fields': fields[:10]  # Limit fields
        })

    return models


def collect_glossary_terms(features: List, components: List, endpoints: List) -> List[Dict[str, str]]:
    """Collect technical terms that need definitions."""
    common_terms = {
        'API': 'Application Programming Interface - a way for different software to communicate',
        'REST': 'Representational State Transfer - a standard way to design web APIs',
        'Component': 'A reusable piece of the user interface',
        'Endpoint': 'A specific URL that the application responds to',
        'ORM': 'Object-Relational Mapping - connects code to database tables',
        'JWT': 'JSON Web Token - a secure way to transmit user identity',
        'CRUD': 'Create, Read, Update, Delete - basic data operations',
        'Props': 'Properties passed to a component to customize it',
        'State': 'Data that can change and affects what users see',
        'Hook': 'A way to add features to React components',
        'Migration': 'A controlled change to database structure',
        'Schema': 'The structure/shape of data',
        'Route': 'A URL path that maps to specific functionality',
        'Handler': 'Code that responds to a specific request',
    }

    return [{'term': k, 'definition': v} for k, v in common_terms.items()]


def generate_analysis(root_path: Path) -> Dict[str, Any]:
    """Generate complete project analysis."""
    project_info = detect_project_type(root_path)
    pkg_info = parse_package_json(root_path) if project_info['type'] == 'node' else {}
    structure = scan_directory_structure(root_path)
    features = detect_features(root_path)
    components = find_components(root_path)
    endpoints = find_api_endpoints(root_path)
    models = find_data_models(root_path)
    glossary = collect_glossary_terms(features, components, endpoints)

    return {
        'analysis_timestamp': datetime.now().isoformat(),
        'project': {
            'name': pkg_info.get('name', root_path.name),
            'version': pkg_info.get('version', '0.0.0'),
            'description': pkg_info.get('description', ''),
            'type': project_info['type'],
        },
        'tech_stack': {
            'language': 'TypeScript' if project_info['type'] == 'node' else project_info['type'],
            'framework': pkg_info.get('framework'),
            'database': pkg_info.get('database'),
            'ui_framework': pkg_info.get('ui_framework'),
            'key_dependencies': pkg_info.get('key_dependencies', []),
        },
        'structure': structure,
        'features': features,
        'components': components,
        'api_endpoints': endpoints,
        'data_models': models,
        'glossary_terms': glossary,
    }


def output_yaml(data: Dict[str, Any], output_path: Optional[Path] = None):
    """Output analysis as YAML."""
    if yaml:
        output = yaml.dump(data, default_flow_style=False, allow_unicode=True, sort_keys=False)
    else:
        # Fallback to JSON if yaml not available
        output = json.dumps(data, indent=2)

    if output_path:
        output_path.write_text(output)
        print(f"Analysis written to: {output_path}")
    else:
        print(output)


def main():
    """Main entry point."""
    root_path = Path.cwd()

    if len(sys.argv) > 1:
        root_path = Path(sys.argv[1])

    if not root_path.exists():
        print(f"Error: Path does not exist: {root_path}", file=sys.stderr)
        sys.exit(1)

    output_path = None
    if len(sys.argv) > 2:
        output_path = Path(sys.argv[2])

    analysis = generate_analysis(root_path)
    output_yaml(analysis, output_path)


if __name__ == '__main__':
    main()
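For reference, a minimal sketch of driving the analyzer programmatically rather than via the CLI. It assumes the script above is importable as `analyze_project`; the `analysis.yml` output name matches what the skill configuration below declares, but the target directory is illustrative.

```python
# Minimal sketch: call the analyzer directly (assumes this file is
# importable as analyze_project).
from pathlib import Path

from analyze_project import generate_analysis, output_yaml

analysis = generate_analysis(Path('.'))       # analyze the current directory
output_yaml(analysis, Path('analysis.yml'))   # pass None to print to stdout instead
```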
@ -0,0 +1,491 @@
#!/usr/bin/env python3
"""
HTML Documentation Generator
Generates beautiful HTML documentation from project analysis.
"""

import os
import sys
import json
import re
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Any, Optional

# Try to import yaml
try:
    import yaml
except ImportError:
    yaml = None


def load_template(template_path: Path) -> str:
    """Load the HTML template."""
    with open(template_path, 'r', encoding='utf-8') as f:
        return f.read()


def load_analysis(analysis_path: Path) -> Dict[str, Any]:
    """Load project analysis from YAML or JSON."""
    with open(analysis_path, 'r', encoding='utf-8') as f:
        content = f.read()
    if yaml and (analysis_path.suffix in ['.yml', '.yaml']):
        return yaml.safe_load(content)
    return json.loads(content)


def escape_html(text: str) -> str:
    """Escape HTML special characters."""
    if not text:
        return ''
    return (str(text)
            .replace('&', '&amp;')
            .replace('<', '&lt;')
            .replace('>', '&gt;')
            .replace('"', '&quot;')
            .replace("'", '&#39;'))


def generate_capabilities_html(capabilities: List[Dict]) -> str:
    """Generate HTML for capabilities cards."""
    icons = ['✨', '⚡', '🔐', '📊', '🚀', '💡', '🎯', '🔧']
    html_parts = []

    for i, cap in enumerate(capabilities[:8]):
        icon = icons[i % len(icons)]
        html_parts.append(f'''
            <div class="card">
                <div class="card-header">
                    <div class="card-icon">{icon}</div>
                    <div class="card-title">{escape_html(cap.get('capability', cap.get('name', 'Feature')))}</div>
                </div>
                <p>{escape_html(cap.get('description', ''))}</p>
            </div>''')

    return '\n'.join(html_parts)


def generate_prerequisites_html(prerequisites: List[Dict]) -> str:
    """Generate HTML for prerequisites table rows."""
    html_parts = []

    for prereq in prerequisites:
        tool = prereq.get('tool', prereq.get('name', ''))
        purpose = prereq.get('purpose', prereq.get('description', ''))
        html_parts.append(f'''
            <tr>
                <td><code>{escape_html(tool)}</code></td>
                <td>{escape_html(purpose)}</td>
            </tr>''')

    return '\n'.join(html_parts) if html_parts else '''
            <tr>
                <td><code>Node.js</code></td>
                <td>JavaScript runtime environment</td>
            </tr>'''


def generate_tech_stack_html(tech_stack: Dict) -> str:
    """Generate HTML for technology stack table rows."""
    html_parts = []

    stack_items = [
        ('Language', tech_stack.get('language')),
        ('Framework', tech_stack.get('framework')),
        ('Database', tech_stack.get('database')),
        ('UI Framework', tech_stack.get('ui_framework')),
    ]

    purposes = {
        'TypeScript': 'Type-safe JavaScript for better code quality',
        'JavaScript': 'Programming language for web applications',
        'Python': 'General-purpose programming language',
        'Next.js': 'Full-stack React framework with SSR',
        'React': 'Component-based UI library',
        'Vue.js': 'Progressive JavaScript framework',
        'Express': 'Minimal web server framework',
        'Prisma': 'Type-safe database ORM',
        'MongoDB': 'NoSQL document database',
        'PostgreSQL': 'Relational database',
        'Tailwind CSS': 'Utility-first CSS framework',
        'Material UI': 'React component library',
    }

    for layer, tech in stack_items:
        if tech:
            purpose = purposes.get(tech, f'{tech} for {layer.lower()}')
            html_parts.append(f'''
            <tr>
                <td>{escape_html(layer)}</td>
                <td><span class="badge badge-primary">{escape_html(tech)}</span></td>
                <td>{escape_html(purpose)}</td>
            </tr>''')

    # Add key dependencies
    for dep in tech_stack.get('key_dependencies', [])[:5]:
        html_parts.append(f'''
            <tr>
                <td>Dependency</td>
                <td><span class="badge badge-info">{escape_html(dep.get('name', ''))}</span></td>
                <td>{escape_html(dep.get('purpose', ''))}</td>
            </tr>''')

    return '\n'.join(html_parts)


def generate_directory_structure(structure: Dict) -> str:
    """Generate directory structure text."""
    lines = ['project/']
    directories = structure.get('directories', [])[:10]

    for i, dir_info in enumerate(directories):
        # Compare against the truncated list so the closing '└──' branch
        # is drawn even when more than 10 directories were detected.
        prefix = '└── ' if i == len(directories) - 1 else '├── '
        path = dir_info.get('path', '')
        purpose = dir_info.get('purpose', '')
        lines.append(f"{prefix}{path}/  # {purpose}")

    return '\n'.join(lines)


def generate_features_html(features: List[Dict]) -> str:
    """Generate HTML for features section."""
    icons = ['🔐', '👤', '🔌', '💾', '📁', '🔍', '📧', '⚙️']
    html_parts = []

    for i, feature in enumerate(features[:8]):
        icon = icons[i % len(icons)]
        name = feature.get('name', 'Feature')
        description = feature.get('description', '')
        technical_notes = feature.get('technical_notes', '')
        files = feature.get('files', [])

        files_html = '\n'.join([f'<li><code>{escape_html(f)}</code></li>' for f in files[:3]])

        html_parts.append(f'''
            <div class="feature-item">
                <div class="feature-icon">{icon}</div>
                <div class="feature-content">
                    <h4>{escape_html(name)}</h4>
                    <p>{escape_html(description)}</p>

                    <details>
                        <summary>
                            🔧 Technical Details
                            <span class="tech-badge">For Engineers</span>
                        </summary>
                        <div>
                            <p>{escape_html(technical_notes)}</p>
                            <p><strong>Key Files:</strong></p>
                            <ul>
                                {files_html}
                            </ul>
                        </div>
                    </details>
                </div>
            </div>''')

    return '\n'.join(html_parts) if html_parts else '''
            <div class="feature-item">
                <div class="feature-icon">✨</div>
                <div class="feature-content">
                    <h4>Core Functionality</h4>
                    <p>Main features of the application.</p>
                </div>
            </div>'''


def generate_api_endpoints_html(endpoints: List[Dict]) -> str:
    """Generate HTML for API endpoints."""
    if not endpoints:
        return '''
            <p>No API endpoints detected. This project may use a different API pattern or may not have an API layer.</p>'''

    html_parts = ['<table>', '<thead>', '<tr>', '<th>Method</th>', '<th>Endpoint</th>', '<th>Description</th>', '</tr>', '</thead>', '<tbody>']

    for endpoint in endpoints[:15]:
        method = endpoint.get('method', 'GET')
        method_class = f'method-{method.lower()}'
        path = endpoint.get('path', '')
        description = endpoint.get('description', '')

        html_parts.append(f'''
            <tr>
                <td><span class="method {method_class}">{escape_html(method)}</span></td>
                <td><code>{escape_html(path)}</code></td>
                <td>{escape_html(description)}</td>
            </tr>''')

    html_parts.extend(['</tbody>', '</table>'])
    return '\n'.join(html_parts)


def generate_components_html(components: List[Dict]) -> str:
    """Generate HTML for component catalog."""
    if not components:
        return '''
            <p>No UI components detected. This project may not have a frontend layer or uses a different component pattern.</p>'''

    html_parts = []

    for comp in components[:10]:
        name = comp.get('name', 'Component')
        description = comp.get('description', f'{name} component')
        path = comp.get('path', '')
        props = comp.get('props', 'See source file')

        html_parts.append(f'''
            <div class="card">
                <div class="card-header">
                    <div class="card-icon">🧩</div>
                    <div class="card-title">{escape_html(name)}</div>
                </div>
                <p>{escape_html(description)}</p>
                <p><code>{escape_html(path)}</code></p>

                <details>
                    <summary>
                        🔧 Props & Usage
                        <span class="tech-badge">Technical</span>
                    </summary>
                    <div>
                        <p><strong>Props:</strong> {escape_html(props)}</p>

                        <h4>Usage Example</h4>
                        <pre><code>&lt;{escape_html(name)} /&gt;</code></pre>
                    </div>
                </details>
            </div>''')

    return '\n'.join(html_parts)


def generate_data_models_html(models: List[Dict]) -> str:
    """Generate HTML for data models."""
    if not models:
        return '''
            <p>No data models detected. This project may not use a database or uses a different data pattern.</p>'''

    html_parts = []

    for model in models[:10]:
        name = model.get('name', 'Model')
        description = model.get('description', f'{name} data model')
        fields = model.get('fields', [])

        fields_html = ''
        if fields:
            fields_html = '<table><thead><tr><th>Field</th><th>Type</th><th>Description</th></tr></thead><tbody>'
            for field in fields[:10]:
                field_name = field.get('name', '')
                field_type = field.get('type', 'unknown')
                field_desc = field.get('description', '')
                fields_html += f'''
                <tr>
                    <td><code>{escape_html(field_name)}</code></td>
                    <td><code>{escape_html(field_type)}</code></td>
                    <td>{escape_html(field_desc)}</td>
                </tr>'''
            fields_html += '</tbody></table>'

        html_parts.append(f'''
            <h3>{escape_html(name)}</h3>
            <p><strong>What it represents:</strong> {escape_html(description)}</p>
            {fields_html}''')

    return '\n'.join(html_parts)


def generate_er_diagram(models: List[Dict]) -> str:
    """Generate ASCII ER diagram."""
    if not models:
        return '''┌─────────────────────────────────────┐
│       No data models detected       │
└─────────────────────────────────────┘'''

    lines = []
    for model in models[:4]:
        name = model.get('name', 'Model')
        fields = model.get('fields', [])[:4]

        width = max(len(name) + 4, max([len(f.get('name', '')) + len(f.get('type', '')) + 5 for f in fields] or [20]))

        lines.append('┌' + '─' * width + '┐')
        lines.append('│' + f' {name} '.center(width) + '│')
        lines.append('├' + '─' * width + '┤')

        for field in fields:
            field_str = f" {field.get('name', '')} : {field.get('type', '')}"
            lines.append('│' + field_str.ljust(width) + '│')

        lines.append('└' + '─' * width + '┘')
        lines.append('')

    return '\n'.join(lines)


def generate_glossary_html(terms: List[Dict]) -> str:
    """Generate HTML for glossary."""
    html_parts = []

    for term in terms:
        word = term.get('term', '')
        definition = term.get('definition', '')

        html_parts.append(f'''
            <div class="glossary-term">
                <span class="glossary-word">{escape_html(word)}</span>
                <span class="glossary-definition">{escape_html(definition)}</span>
            </div>''')

    return '\n'.join(html_parts)


def generate_system_diagram(tech_stack: Dict, structure: Dict) -> str:
    """Generate ASCII system architecture diagram."""
    framework = tech_stack.get('framework', 'Application')
    database = tech_stack.get('database', '')
    ui = tech_stack.get('ui_framework', 'UI')

    diagram = f'''┌─────────────────────────────────────────────────────────────┐
│                  Application Architecture                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐     │
│   │   Client    │───▶│     API     │───▶│  Database   │     │
│   │ ({ui or 'UI'}) │    │ ({framework or 'Server'}) │    │ ({database or 'Storage'}) │     │
│   └─────────────┘    └─────────────┘    └─────────────┘     │
│                                                             │
└─────────────────────────────────────────────────────────────┘'''
    return diagram


def generate_html(analysis: Dict, template: str) -> str:
    """Generate final HTML from analysis and template."""
    project = analysis.get('project', {})
    tech_stack = analysis.get('tech_stack', {})
    structure = analysis.get('structure', {})
    features = analysis.get('features', [])
    components = analysis.get('components', [])
    endpoints = analysis.get('api_endpoints', [])
    models = analysis.get('data_models', [])
    glossary = analysis.get('glossary_terms', [])

    # Basic replacements
    replacements = {
        '{{PROJECT_NAME}}': escape_html(project.get('name', 'Project')),
        '{{VERSION}}': escape_html(project.get('version', '1.0.0')),
        '{{TAGLINE}}': escape_html(project.get('description', 'Project documentation')),
        '{{DESCRIPTION}}': escape_html(project.get('description', 'This project provides various features and capabilities.')),
        '{{AUDIENCE}}': 'Developers, stakeholders, and anyone interested in understanding this project.',
        '{{GENERATED_DATE}}': datetime.now().strftime('%Y-%m-%d'),
    }

    # Generate complex sections
    html = template

    # Replace simple placeholders
    for key, value in replacements.items():
        html = html.replace(key, value)

    # Replace capabilities section
    capabilities = [{'capability': cap.get('name'), 'description': cap.get('description')}
                    for cap in features[:4]] if features else [
        {'capability': 'Core Features', 'description': 'Main application functionality'},
        {'capability': 'Easy Integration', 'description': 'Simple setup and configuration'}
    ]

    # Find and replace the capabilities placeholder section
    cap_html = generate_capabilities_html(capabilities)
    html = re.sub(
        r'<!-- CAPABILITIES_PLACEHOLDER -->.*?</div>\s*</div>',
        f'<!-- Generated Capabilities -->\n{cap_html}',
        html,
        flags=re.DOTALL
    )

    # Replace technology stack
    html = re.sub(
        r'<!-- TECH_STACK_PLACEHOLDER -->.*?</tr>',
        generate_tech_stack_html(tech_stack),
        html,
        flags=re.DOTALL
    )

    # Generate and replace diagrams
    html = html.replace('{{SYSTEM_DIAGRAM}}', generate_system_diagram(tech_stack, structure))
    html = html.replace('{{DIRECTORY_STRUCTURE}}', generate_directory_structure(structure))
    html = html.replace('{{ER_DIAGRAM}}', generate_er_diagram(models))

    # Replace features section
    html = re.sub(
        r'<!-- FEATURES_PLACEHOLDER -->.*?</div>\s*</div>\s*</div>',
        f'<!-- Generated Features -->\n{generate_features_html(features)}',
        html,
        flags=re.DOTALL
    )

    # Replace API endpoints
    html = re.sub(
        r'<!-- API_ENDPOINTS_PLACEHOLDER -->.*?</details>',
        f'<h3>API Endpoints</h3>\n{generate_api_endpoints_html(endpoints)}',
        html,
        flags=re.DOTALL
    )

    # Replace components
    html = re.sub(
        r'<!-- COMPONENTS_PLACEHOLDER -->.*?</div>\s*</div>',
        f'<!-- Generated Components -->\n{generate_components_html(components)}',
        html,
        flags=re.DOTALL
    )

    # Replace data models
    html = re.sub(
        r'<!-- DATA_MODELS_PLACEHOLDER -->.*?</table>',
        f'<!-- Generated Data Models -->\n{generate_data_models_html(models)}',
        html,
        flags=re.DOTALL
    )

    # Replace glossary
    html = re.sub(
        r'<!-- GLOSSARY_PLACEHOLDER -->.*?</div>',
        f'<!-- Generated Glossary -->\n{generate_glossary_html(glossary)}\n</div>',
        html,
        flags=re.DOTALL
    )

    # Clean up remaining placeholders (allow digits so names like
    # {{CAPABILITY_1_NAME}} are also stripped)
    html = re.sub(r'\{\{[A-Z0-9_]+\}\}', '', html)

    return html


def main():
    """Main entry point."""
    if len(sys.argv) < 3:
        print("Usage: generate_html.py <analysis.yml> <template.html> [output.html]")
        sys.exit(1)

    analysis_path = Path(sys.argv[1])
    template_path = Path(sys.argv[2])
    output_path = Path(sys.argv[3]) if len(sys.argv) > 3 else Path('documentation.html')

    if not analysis_path.exists():
        print(f"Error: Analysis file not found: {analysis_path}", file=sys.stderr)
        sys.exit(1)

    if not template_path.exists():
        print(f"Error: Template file not found: {template_path}", file=sys.stderr)
        sys.exit(1)

    analysis = load_analysis(analysis_path)
    template = load_template(template_path)
    html = generate_html(analysis, template)

    output_path.write_text(html, encoding='utf-8')
    print(f"HTML documentation generated: {output_path}")


if __name__ == '__main__':
    main()
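The two scripts are designed to chain: the analyzer writes `analysis.yml`, which the generator merges into the template. A minimal end-to-end sketch, assuming the scripts live under a `scripts/` directory and the template under `templates/` (the exact layout is an assumption; only the file names come from the skill configuration):

```python
# Sketch of the analyze -> generate pipeline (paths are illustrative).
import subprocess
from pathlib import Path

Path('docs').mkdir(exist_ok=True)  # generate_html.py writes files but does not create directories
subprocess.run(['python3', 'scripts/analyze_project.py', '.', 'docs/analysis.yml'], check=True)
subprocess.run(['python3', 'scripts/generate_html.py', 'docs/analysis.yml',
                'templates/documentation.html', 'docs/index.html'], check=True)
```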
@ -0,0 +1,97 @@
# Documentation Generator Skill
# Generates comprehensive dual-audience documentation

name: documentation-generator
version: "1.0.0"
description: |
  Analyzes project structure and generates comprehensive documentation
  that serves both technical (engineers) and non-technical audiences.

triggers:
  commands:
    - "/eureka:index"
    - "/eureka:docs"
  keywords:
    - "generate documentation"
    - "create docs"
    - "document project"
    - "project documentation"
    - "index project"

agents:
  - doc-writer

schemas:
  - documentation_output.yml
  - project_analysis.yml

scripts:
  - analyze_project.py
  - generate_html.py

templates:
  - documentation.html

capabilities:
  - Project structure analysis
  - Dual-audience documentation generation
  - ASCII diagram creation
  - API documentation
  - Component cataloging
  - Glossary generation

outputs:
  primary:
    - index.html              # Beautiful HTML for non-engineers
    - PROJECT_DOCUMENTATION.md
    - QUICK_REFERENCE.md
  optional:
    - API_REFERENCE.md
    - COMPONENTS.md
    - GLOSSARY.md
  data:
    - analysis.yml            # Project analysis data

audience_support:
  non_technical:
    - Executive Summary
    - Feature Guide
    - Glossary
    - Visual Diagrams
  technical:
    - API Reference
    - Component Catalog
    - Data Models
    - Code Examples

configuration:
  default_output_dir: docs
  supported_formats:
    - markdown
    - html
  default_sections:
    - executive_summary
    - architecture_overview
    - getting_started
    - features
    - api_reference
    - component_catalog
    - data_models
    - glossary

dependencies:
  required:
    - Read tool (file access)
    - Write tool (file creation)
    - Glob tool (file discovery)
    - Grep tool (pattern search)
  optional:
    - Bash tool (script execution)
    - Task tool (agent delegation)

quality_gates:
  - All referenced files must exist
  - All code examples must be syntactically valid
  - All internal links must resolve
  - Technical details wrapped in collapsible sections
  - Glossary covers all technical terms used
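Nothing in the skill file mandates a particular loader, but a sketch of how the `triggers` block could be consumed by a dispatcher may help; the file name `skill.yml` and the matching rules are assumptions, not part of the spec:

```python
# Illustrative only: match a user request against this skill's triggers.
import yaml

def matches_skill(user_input: str, skill_path: str = 'skill.yml') -> bool:
    with open(skill_path, 'r', encoding='utf-8') as f:
        skill = yaml.safe_load(f)
    triggers = skill.get('triggers', {})
    text = user_input.lower()
    # Commands must lead the input; keywords may appear anywhere.
    if any(text.startswith(cmd) for cmd in triggers.get('commands', [])):
        return True
    return any(kw in text for kw in triggers.get('keywords', []))
```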
@ -0,0 +1,962 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{{PROJECT_NAME}} - Documentation</title>
    <style>
        /* ===== CSS Variables ===== */
        :root {
            --color-primary: #2563eb;
            --color-primary-light: #3b82f6;
            --color-primary-dark: #1d4ed8;
            --color-secondary: #7c3aed;
            --color-success: #10b981;
            --color-warning: #f59e0b;
            --color-danger: #ef4444;
            --color-info: #06b6d4;

            --color-bg: #ffffff;
            --color-bg-alt: #f8fafc;
            --color-bg-code: #f1f5f9;
            --color-text: #1e293b;
            --color-text-light: #64748b;
            --color-text-muted: #94a3b8;
            --color-border: #e2e8f0;

            --font-sans: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
            --font-mono: 'JetBrains Mono', 'Fira Code', Consolas, monospace;

            --shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 0.05);
            --shadow-md: 0 4px 6px -1px rgb(0 0 0 / 0.1);
            --shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 0.1);

            --radius-sm: 0.375rem;
            --radius-md: 0.5rem;
            --radius-lg: 0.75rem;
        }

        /* Dark mode */
        @media (prefers-color-scheme: dark) {
            :root {
                --color-bg: #0f172a;
                --color-bg-alt: #1e293b;
                --color-bg-code: #334155;
                --color-text: #f1f5f9;
                --color-text-light: #94a3b8;
                --color-text-muted: #64748b;
                --color-border: #334155;
            }
        }

        /* ===== Reset & Base ===== */
        *, *::before, *::after {
            box-sizing: border-box;
            margin: 0;
            padding: 0;
        }

        html {
            scroll-behavior: smooth;
            font-size: 16px;
        }

        body {
            font-family: var(--font-sans);
            background: var(--color-bg);
            color: var(--color-text);
            line-height: 1.7;
            min-height: 100vh;
        }

        /* ===== Layout ===== */
        .layout {
            display: flex;
            min-height: 100vh;
        }

        /* Sidebar */
        .sidebar {
            position: fixed;
            top: 0;
            left: 0;
            width: 280px;
            height: 100vh;
            background: var(--color-bg-alt);
            border-right: 1px solid var(--color-border);
            overflow-y: auto;
            padding: 2rem 0;
            z-index: 100;
        }

        .sidebar-header {
            padding: 0 1.5rem 1.5rem;
            border-bottom: 1px solid var(--color-border);
            margin-bottom: 1rem;
        }

        .sidebar-logo {
            font-size: 1.25rem;
            font-weight: 700;
            color: var(--color-primary);
            text-decoration: none;
            display: flex;
            align-items: center;
            gap: 0.5rem;
        }

        .sidebar-version {
            font-size: 0.75rem;
            color: var(--color-text-muted);
            margin-top: 0.25rem;
        }

        .sidebar-nav {
            padding: 0 1rem;
        }

        .nav-section {
            margin-bottom: 1.5rem;
        }

        .nav-section-title {
            font-size: 0.75rem;
            font-weight: 600;
            text-transform: uppercase;
            letter-spacing: 0.05em;
            color: var(--color-text-muted);
            padding: 0 0.5rem;
            margin-bottom: 0.5rem;
        }

        .nav-link {
            display: block;
            padding: 0.5rem;
            color: var(--color-text-light);
            text-decoration: none;
            border-radius: var(--radius-sm);
            font-size: 0.9rem;
            transition: all 0.15s ease;
        }

        .nav-link:hover {
            background: var(--color-border);
            color: var(--color-text);
        }

        .nav-link.active {
            background: var(--color-primary);
            color: white;
        }

        .nav-link-icon {
            margin-right: 0.5rem;
        }

        /* Main content */
        .main {
            flex: 1;
            margin-left: 280px;
            padding: 3rem 4rem;
            max-width: 900px;
        }

        /* ===== Typography ===== */
        h1, h2, h3, h4, h5, h6 {
            font-weight: 600;
            line-height: 1.3;
            margin-top: 2rem;
            margin-bottom: 1rem;
        }

        h1 {
            font-size: 2.5rem;
            margin-top: 0;
            padding-bottom: 1rem;
            border-bottom: 2px solid var(--color-border);
        }

        h2 {
            font-size: 1.75rem;
            color: var(--color-primary);
        }

        h3 {
            font-size: 1.25rem;
        }

        p {
            margin-bottom: 1rem;
        }

        a {
            color: var(--color-primary);
            text-decoration: none;
        }

        a:hover {
            text-decoration: underline;
        }

        /* ===== Components ===== */

        /* Hero section */
        .hero {
            background: linear-gradient(135deg, var(--color-primary) 0%, var(--color-secondary) 100%);
            color: white;
            padding: 3rem;
            border-radius: var(--radius-lg);
            margin-bottom: 3rem;
        }

        .hero h1 {
            color: white;
            border-bottom: none;
            padding-bottom: 0;
            margin-bottom: 0.5rem;
        }

        .hero-tagline {
            font-size: 1.25rem;
            opacity: 0.9;
        }

        /* Cards */
        .card {
            background: var(--color-bg);
            border: 1px solid var(--color-border);
            border-radius: var(--radius-lg);
            padding: 1.5rem;
            margin-bottom: 1.5rem;
            box-shadow: var(--shadow-sm);
        }

        .card-header {
            display: flex;
            align-items: center;
            gap: 0.75rem;
            margin-bottom: 1rem;
        }

        .card-icon {
            width: 2.5rem;
            height: 2.5rem;
            background: var(--color-primary);
            color: white;
            border-radius: var(--radius-md);
            display: flex;
            align-items: center;
            justify-content: center;
            font-size: 1.25rem;
        }

        .card-title {
            font-size: 1.1rem;
            font-weight: 600;
            margin: 0;
        }

        /* Grid */
        .grid {
            display: grid;
            gap: 1.5rem;
        }

        .grid-2 {
            grid-template-columns: repeat(2, 1fr);
        }

        .grid-3 {
            grid-template-columns: repeat(3, 1fr);
        }

        @media (max-width: 768px) {
            .grid-2, .grid-3 {
                grid-template-columns: 1fr;
            }
        }

        /* Tables */
        table {
            width: 100%;
            border-collapse: collapse;
            margin: 1.5rem 0;
            font-size: 0.9rem;
        }

        th, td {
            padding: 0.75rem 1rem;
            text-align: left;
            border-bottom: 1px solid var(--color-border);
        }

        th {
            background: var(--color-bg-alt);
            font-weight: 600;
            color: var(--color-text-light);
            text-transform: uppercase;
            font-size: 0.75rem;
            letter-spacing: 0.05em;
        }

        tr:hover {
            background: var(--color-bg-alt);
        }

        /* Code blocks */
        code {
            font-family: var(--font-mono);
            font-size: 0.875em;
            background: var(--color-bg-code);
            padding: 0.2em 0.4em;
            border-radius: var(--radius-sm);
        }

        pre {
            background: var(--color-bg-code);
            border-radius: var(--radius-md);
            padding: 1.25rem;
            overflow-x: auto;
            margin: 1.5rem 0;
            border: 1px solid var(--color-border);
        }

        pre code {
            background: none;
            padding: 0;
            font-size: 0.875rem;
            line-height: 1.6;
        }

        /* Diagrams */
        .diagram {
            background: var(--color-bg-alt);
            border: 1px solid var(--color-border);
            border-radius: var(--radius-md);
            padding: 1.5rem;
            overflow-x: auto;
            margin: 1.5rem 0;
            font-family: var(--font-mono);
            font-size: 0.8rem;
            line-height: 1.4;
            white-space: pre;
        }

        /* Badges */
        .badge {
            display: inline-flex;
            align-items: center;
            padding: 0.25rem 0.75rem;
            font-size: 0.75rem;
            font-weight: 500;
            border-radius: 9999px;
            text-transform: uppercase;
            letter-spacing: 0.025em;
        }

        .badge-primary {
            background: rgba(37, 99, 235, 0.1);
            color: var(--color-primary);
        }

        .badge-success {
            background: rgba(16, 185, 129, 0.1);
            color: var(--color-success);
        }

        .badge-warning {
            background: rgba(245, 158, 11, 0.1);
            color: var(--color-warning);
        }

        .badge-info {
            background: rgba(6, 182, 212, 0.1);
            color: var(--color-info);
        }

        /* Collapsible technical details */
        details {
            background: var(--color-bg-alt);
            border: 1px solid var(--color-border);
            border-radius: var(--radius-md);
            margin: 1rem 0;
        }

        summary {
            padding: 1rem 1.25rem;
            cursor: pointer;
            font-weight: 500;
            display: flex;
            align-items: center;
            gap: 0.5rem;
            color: var(--color-text-light);
        }

        summary:hover {
            color: var(--color-text);
        }

        summary::marker {
            content: '';
        }

        summary::before {
            content: '▶';
            font-size: 0.75rem;
            transition: transform 0.2s ease;
        }

        details[open] summary::before {
            transform: rotate(90deg);
        }

        details > div {
            padding: 0 1.25rem 1.25rem;
            border-top: 1px solid var(--color-border);
        }

        .tech-badge {
            background: var(--color-primary);
            color: white;
            padding: 0.125rem 0.5rem;
            border-radius: var(--radius-sm);
            font-size: 0.7rem;
            margin-left: auto;
        }

        /* API Methods */
        .method {
            display: inline-block;
            padding: 0.25rem 0.5rem;
            border-radius: var(--radius-sm);
            font-size: 0.75rem;
            font-weight: 600;
            font-family: var(--font-mono);
        }

        .method-get { background: #dcfce7; color: #166534; }
        .method-post { background: #dbeafe; color: #1e40af; }
        .method-put { background: #fef3c7; color: #92400e; }
        .method-patch { background: #fce7f3; color: #9d174d; }
        .method-delete { background: #fee2e2; color: #991b1b; }

        /* Alerts / Callouts */
        .callout {
            padding: 1rem 1.25rem;
            border-radius: var(--radius-md);
            margin: 1.5rem 0;
            border-left: 4px solid;
        }

        .callout-info {
            background: rgba(6, 182, 212, 0.1);
            border-color: var(--color-info);
        }

        .callout-tip {
            background: rgba(16, 185, 129, 0.1);
            border-color: var(--color-success);
        }

        .callout-warning {
            background: rgba(245, 158, 11, 0.1);
            border-color: var(--color-warning);
        }

        .callout-title {
            font-weight: 600;
            margin-bottom: 0.5rem;
            display: flex;
            align-items: center;
            gap: 0.5rem;
        }

        /* Lists */
        ul, ol {
            margin: 1rem 0;
            padding-left: 1.5rem;
        }

        li {
            margin-bottom: 0.5rem;
        }

        /* Glossary */
        .glossary-term {
            display: flex;
            padding: 1rem 0;
            border-bottom: 1px solid var(--color-border);
        }

        .glossary-term:last-child {
            border-bottom: none;
        }

        .glossary-word {
            font-weight: 600;
            min-width: 150px;
            color: var(--color-primary);
        }

        .glossary-definition {
            color: var(--color-text-light);
        }

        /* Feature list */
        .feature-item {
            display: flex;
            gap: 1rem;
            padding: 1.25rem;
            border: 1px solid var(--color-border);
            border-radius: var(--radius-md);
            margin-bottom: 1rem;
            transition: all 0.2s ease;
        }

        .feature-item:hover {
            border-color: var(--color-primary);
            box-shadow: var(--shadow-md);
        }

        .feature-icon {
            width: 3rem;
            height: 3rem;
            background: linear-gradient(135deg, var(--color-primary) 0%, var(--color-secondary) 100%);
            color: white;
            border-radius: var(--radius-md);
            display: flex;
            align-items: center;
            justify-content: center;
            font-size: 1.25rem;
            flex-shrink: 0;
        }

        .feature-content h4 {
            margin: 0 0 0.5rem 0;
            font-size: 1rem;
        }

        .feature-content p {
            margin: 0;
            color: var(--color-text-light);
            font-size: 0.9rem;
        }

        /* Responsive */
        @media (max-width: 1024px) {
            .sidebar {
                transform: translateX(-100%);
                transition: transform 0.3s ease;
            }

            .sidebar.open {
                transform: translateX(0);
            }

            .main {
                margin-left: 0;
                padding: 2rem 1.5rem;
            }

            .mobile-menu-btn {
                display: block;
                position: fixed;
                top: 1rem;
                left: 1rem;
                z-index: 101;
                background: var(--color-primary);
                color: white;
                border: none;
                padding: 0.75rem;
                border-radius: var(--radius-md);
                cursor: pointer;
            }
        }

        @media (min-width: 1025px) {
            .mobile-menu-btn {
                display: none;
            }
        }

        /* Print styles */
        @media print {
            .sidebar {
                display: none;
            }

            .main {
                margin-left: 0;
                max-width: 100%;
            }

            details {
                display: block !important;
            }

            details > div {
                display: block !important;
            }
        }
    </style>
</head>
<body>
    <button class="mobile-menu-btn" onclick="toggleSidebar()">☰</button>

    <div class="layout">
        <!-- Sidebar Navigation -->
        <aside class="sidebar" id="sidebar">
            <div class="sidebar-header">
                <a href="#" class="sidebar-logo">
                    📚 {{PROJECT_NAME}}
                </a>
                <div class="sidebar-version">Version {{VERSION}}</div>
            </div>

            <nav class="sidebar-nav">
                <div class="nav-section">
                    <div class="nav-section-title">Overview</div>
                    <a href="#executive-summary" class="nav-link">
                        <span class="nav-link-icon">📋</span> Executive Summary
                    </a>
                    <a href="#quick-start" class="nav-link">
                        <span class="nav-link-icon">🚀</span> Quick Start
                    </a>
                    <a href="#architecture" class="nav-link">
                        <span class="nav-link-icon">🏗️</span> Architecture
                    </a>
                </div>

                <div class="nav-section">
                    <div class="nav-section-title">Features</div>
                    <a href="#features" class="nav-link">
                        <span class="nav-link-icon">✨</span> Feature Guide
                    </a>
                </div>

                <div class="nav-section">
                    <div class="nav-section-title">For Developers</div>
                    <a href="#api-reference" class="nav-link">
                        <span class="nav-link-icon">🔌</span> API Reference
                    </a>
                    <a href="#components" class="nav-link">
                        <span class="nav-link-icon">🧩</span> Components
                    </a>
                    <a href="#data-models" class="nav-link">
                        <span class="nav-link-icon">💾</span> Data Models
                    </a>
                </div>

                <div class="nav-section">
                    <div class="nav-section-title">Reference</div>
                    <a href="#glossary" class="nav-link">
                        <span class="nav-link-icon">📖</span> Glossary
                    </a>
                </div>
            </nav>
        </aside>

        <!-- Main Content -->
        <main class="main">
            <!-- Hero Section -->
            <section class="hero">
                <h1>{{PROJECT_NAME}}</h1>
                <p class="hero-tagline">{{TAGLINE}}</p>
            </section>

            <!-- Executive Summary -->
            <section id="executive-summary">
                <h2>📋 Executive Summary</h2>

                <h3>What is {{PROJECT_NAME}}?</h3>
                <p>{{DESCRIPTION}}</p>

                <h3>Who is it for?</h3>
                <p>{{AUDIENCE}}</p>

                <h3>Key Capabilities</h3>
                <div class="grid grid-2">
                    <!-- CAPABILITIES_PLACEHOLDER -->
                    <div class="card">
                        <div class="card-header">
                            <div class="card-icon">✨</div>
                            <div class="card-title">{{CAPABILITY_1_NAME}}</div>
                        </div>
                        <p>{{CAPABILITY_1_DESCRIPTION}}</p>
                    </div>
                    <div class="card">
                        <div class="card-header">
                            <div class="card-icon">⚡</div>
                            <div class="card-title">{{CAPABILITY_2_NAME}}</div>
                        </div>
                        <p>{{CAPABILITY_2_DESCRIPTION}}</p>
                    </div>
                </div>
            </section>

            <!-- Quick Start -->
            <section id="quick-start">
                <h2>🚀 Quick Start</h2>

                <h3>Prerequisites</h3>
                <table>
                    <thead>
                        <tr>
                            <th>Tool</th>
                            <th>Purpose</th>
                        </tr>
                    </thead>
                    <tbody>
                        <!-- PREREQUISITES_PLACEHOLDER -->
                        <tr>
                            <td><code>{{PREREQ_1_TOOL}}</code></td>
                            <td>{{PREREQ_1_PURPOSE}}</td>
                        </tr>
                    </tbody>
                </table>

                <h3>Installation</h3>
                <pre><code>{{INSTALLATION_COMMANDS}}</code></pre>

                <h3>Basic Usage</h3>
                <pre><code>{{BASIC_USAGE}}</code></pre>
            </section>

            <!-- Architecture -->
            <section id="architecture">
                <h2>🏗️ Architecture Overview</h2>

                <h3>System Diagram</h3>
                <div class="diagram">{{SYSTEM_DIAGRAM}}</div>

                <h3>Technology Stack</h3>
                <table>
                    <thead>
                        <tr>
                            <th>Layer</th>
                            <th>Technology</th>
                            <th>Purpose</th>
                        </tr>
                    </thead>
                    <tbody>
                        <!-- TECH_STACK_PLACEHOLDER -->
                        <tr>
                            <td>{{TECH_LAYER}}</td>
                            <td><span class="badge badge-primary">{{TECH_NAME}}</span></td>
                            <td>{{TECH_PURPOSE}}</td>
                        </tr>
                    </tbody>
                </table>

                <h3>Directory Structure</h3>
                <pre><code>{{DIRECTORY_STRUCTURE}}</code></pre>
            </section>

            <!-- Features -->
            <section id="features">
                <h2>✨ Features</h2>

                <!-- FEATURES_PLACEHOLDER -->
                <div class="feature-item">
                    <div class="feature-icon">🔐</div>
                    <div class="feature-content">
                        <h4>{{FEATURE_NAME}}</h4>
                        <p>{{FEATURE_DESCRIPTION}}</p>

                        <details>
                            <summary>
                                🔧 Technical Details
                                <span class="tech-badge">For Engineers</span>
                            </summary>
                            <div>
                                <p>{{FEATURE_TECHNICAL_NOTES}}</p>
                                <p><strong>Key Files:</strong></p>
                                <ul>
                                    <li><code>{{FEATURE_FILE_1}}</code></li>
                                </ul>
                            </div>
                        </details>
                    </div>
                </div>
            </section>

            <!-- API Reference -->
            <section id="api-reference">
                <h2>🔌 API Reference</h2>

                <div class="callout callout-info">
                    <div class="callout-title">ℹ️ About the API</div>
                    <p>This section is primarily for developers who need to integrate with or extend the application.</p>
                </div>

                <!-- API_ENDPOINTS_PLACEHOLDER -->
                <h3>{{API_GROUP_NAME}}</h3>
                <table>
                    <thead>
                        <tr>
                            <th>Method</th>
                            <th>Endpoint</th>
                            <th>Description</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr>
                            <td><span class="method method-get">GET</span></td>
                            <td><code>{{API_PATH}}</code></td>
                            <td>{{API_DESCRIPTION}}</td>
                        </tr>
                    </tbody>
                </table>

                <details>
                    <summary>
                        📖 Request & Response Details
                        <span class="tech-badge">Technical</span>
                    </summary>
                    <div>
                        <h4>Request</h4>
                        <pre><code>{{API_REQUEST_EXAMPLE}}</code></pre>

                        <h4>Response</h4>
                        <pre><code>{{API_RESPONSE_EXAMPLE}}</code></pre>
                    </div>
                </details>
            </section>

            <!-- Components -->
            <section id="components">
                <h2>🧩 Component Catalog</h2>

                <!-- COMPONENTS_PLACEHOLDER -->
                <div class="card">
                    <div class="card-header">
                        <div class="card-icon">🧩</div>
                        <div class="card-title">{{COMPONENT_NAME}}</div>
                    </div>
                    <p>{{COMPONENT_DESCRIPTION}}</p>
                    <p><code>{{COMPONENT_PATH}}</code></p>

                    <details>
                        <summary>
                            🔧 Props & Usage
                            <span class="tech-badge">Technical</span>
                        </summary>
                        <div>
                            <table>
                                <thead>
                                    <tr>
                                        <th>Prop</th>
                                        <th>Type</th>
                                        <th>Required</th>
                                        <th>Description</th>
                                    </tr>
                                </thead>
                                <tbody>
                                    <tr>
                                        <td><code>{{PROP_NAME}}</code></td>
                                        <td><code>{{PROP_TYPE}}</code></td>
                                        <td>{{PROP_REQUIRED}}</td>
                                        <td>{{PROP_DESCRIPTION}}</td>
                                    </tr>
                                </tbody>
                            </table>

                            <h4>Usage Example</h4>
                            <pre><code>{{COMPONENT_USAGE_EXAMPLE}}</code></pre>
                        </div>
                    </details>
                </div>
            </section>

            <!-- Data Models -->
            <section id="data-models">
                <h2>💾 Data Models</h2>

                <h3>Entity Relationship Diagram</h3>
                <div class="diagram">{{ER_DIAGRAM}}</div>

                <!-- DATA_MODELS_PLACEHOLDER -->
                <h3>{{MODEL_NAME}}</h3>
                <p><strong>What it represents:</strong> {{MODEL_DESCRIPTION}}</p>

                <table>
                    <thead>
                        <tr>
                            <th>Field</th>
                            <th>Type</th>
                            <th>Description</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr>
                            <td><code>{{FIELD_NAME}}</code></td>
                            <td><code>{{FIELD_TYPE}}</code></td>
                            <td>{{FIELD_DESCRIPTION}}</td>
                        </tr>
                    </tbody>
                </table>
            </section>

            <!-- Glossary -->
            <section id="glossary">
                <h2>📖 Glossary</h2>

                <div class="callout callout-tip">
                    <div class="callout-title">💡 Tip</div>
                    <p>This glossary explains technical terms in plain English. Perfect for non-technical stakeholders!</p>
                </div>

                <div class="card">
                    <!-- GLOSSARY_PLACEHOLDER -->
                    <div class="glossary-term">
                        <span class="glossary-word">{{TERM}}</span>
                        <span class="glossary-definition">{{DEFINITION}}</span>
                    </div>
                </div>
            </section>

            <!-- Footer -->
            <footer style="margin-top: 4rem; padding-top: 2rem; border-top: 1px solid var(--color-border); color: var(--color-text-muted); text-align: center;">
                <p>Generated by <strong>Eureka Index</strong> · {{GENERATED_DATE}}</p>
            </footer>
        </main>
    </div>

    <script>
        // Mobile menu toggle
        function toggleSidebar() {
            document.getElementById('sidebar').classList.toggle('open');
        }

        // Active nav highlighting
        const sections = document.querySelectorAll('section[id]');
        const navLinks = document.querySelectorAll('.nav-link');

        window.addEventListener('scroll', () => {
            let current = '';
            sections.forEach(section => {
                const sectionTop = section.offsetTop;
                if (scrollY >= sectionTop - 100) {
                    current = section.getAttribute('id');
                }
            });

            navLinks.forEach(link => {
                link.classList.remove('active');
                if (link.getAttribute('href') === '#' + current) {
                    link.classList.add('active');
                }
            });
        });

        // Close sidebar when clicking a link (mobile)
        navLinks.forEach(link => {
            link.addEventListener('click', () => {
                if (window.innerWidth < 1025) {
                    document.getElementById('sidebar').classList.remove('open');
                }
            });
        });
    </script>
</body>
</html>
@ -0,0 +1,38 @@
# Architect Agent Definition
# Responsible for manifest design and task creation

name: architect
role: System Designer & Task Planner

description: |
  The Architect designs the system by defining entities in the manifest
  and breaking down implementation into discrete tasks for other agents.

allowed_tools:
  - Read   # Read any file for context
  - Write  # Write to manifest and task files ONLY

blocked_tools:
  - Bash  # Cannot execute commands
  - Edit  # Cannot modify existing code

allowed_files:
  - project_manifest.json
  - "tasks/*.yml"
  - "tasks/**/*.yml"

responsibilities:
  - Design system architecture in manifest
  - Define entities (pages, components, APIs, tables)
  - Create implementation tasks for frontend/backend agents
  - Set task priorities and dependencies
  - Ensure no orphan entities or circular dependencies

outputs:
  - Updated project_manifest.json with new entities
  - Task files in tasks/ directory

cannot_do:
  - Implement any code
  - Run build/test commands
  - Modify existing source files
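How the `allowed_files` globs get enforced is left to the agent runner; a minimal sketch, assuming the patterns are read from the YAML above and checked with Python's standard `fnmatch`:

```python
# Illustrative enforcement of the architect's write allowlist.
from fnmatch import fnmatch

ALLOWED_FILES = ['project_manifest.json', 'tasks/*.yml', 'tasks/**/*.yml']

def write_allowed(path: str) -> bool:
    """Return True if the architect agent may write to this path."""
    return any(fnmatch(path, pattern) for pattern in ALLOWED_FILES)

assert write_allowed('project_manifest.json')
assert write_allowed('tasks/setup-db.yml')
assert not write_allowed('app/api/route.ts')
```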
@ -0,0 +1,45 @@
# Backend Agent Definition
# Responsible for API endpoints and database implementation

name: backend
role: Backend Developer

description: |
  The Backend agent implements API endpoints, database schemas,
  and server-side logic based on approved entities and assigned tasks.

allowed_tools:
  - Read   # Read files for context
  - Write  # Create new files
  - Edit   # Modify existing files
  - Bash   # Run build, lint, type-check, tests

blocked_tools: []  # Full access for implementation

allowed_files:
  - "app/api/**/*"
  - "app/lib/**/*"
  - "prisma/**/*"
  - "db/**/*"
  - "*.config.*"

responsibilities:
  - Implement API route handlers (GET, POST, PUT, DELETE)
  - Create database schemas and migrations
  - Implement data access layer (CRUD operations)
  - Ensure request/response match manifest specs
  - Handle errors appropriately
  - Run lint/type-check before marking complete

task_types:
  - create    # New API/DB entity
  - update    # Modify existing backend
  - refactor  # Improve code quality
  - delete    # Remove deprecated endpoints

workflow: |
  1. Read assigned task from tasks/*.yml
  2. Verify entity is APPROVED in manifest
  3. Implement code matching manifest spec
  4. Run validation (lint, type-check)
  5. Update task status to "review"
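A sketch of workflow steps 1-2 for an implementation agent follows. The task file name and the manifest's `entities`/`status` fields are assumptions for illustration; the actual manifest schema is defined elsewhere in the project:

```python
# Hypothetical reading of an assigned task and its approval check.
import json
from pathlib import Path

import yaml

task = yaml.safe_load(Path('tasks/create-user-api.yml').read_text(encoding='utf-8'))
manifest = json.loads(Path('project_manifest.json').read_text(encoding='utf-8'))

# 'entity' on the task and 'status' on the manifest entry are assumed conventions.
entity = manifest['entities'][task['entity']]
if entity['status'] != 'APPROVED':
    raise SystemExit(f"Entity {task['entity']} is not approved; task stays blocked.")
```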
@ -0,0 +1,44 @@
# Frontend Agent Definition
# Responsible for UI component and page implementation

name: frontend
role: Frontend Developer

description: |
  The Frontend agent implements UI components and pages based on
  approved entities in the manifest and assigned tasks.

allowed_tools:
  - Read   # Read files for context
  - Write  # Create new files
  - Edit   # Modify existing files
  - Bash   # Run build, lint, type-check

blocked_tools: []  # Full access for implementation

allowed_files:
  - "app/components/**/*"
  - "app/**/page.tsx"
  - "app/**/layout.tsx"
  - "app/globals.css"
  - "*.config.*"

responsibilities:
  - Implement UI components matching manifest specs
  - Create pages with correct routing
  - Ensure props match manifest definitions
  - Follow existing code patterns and styles
  - Run lint/type-check before marking complete

task_types:
  - create    # New component/page
  - update    # Modify existing UI
  - refactor  # Improve code quality
  - delete    # Remove deprecated UI

workflow: |
  1. Read assigned task from tasks/*.yml
  2. Verify entity is APPROVED in manifest
  3. Implement code matching manifest spec
  4. Run validation (lint, type-check)
  5. Update task status to "review"
@@ -0,0 +1,85 @@
# Orchestrator Agent Definition
# Coordinates the entire workflow and delegates to specialized agents

name: orchestrator
role: Workflow Coordinator

description: |
  The Orchestrator manages the end-to-end workflow, delegating tasks
  to specialized agents based on task type and current phase.

workflow_phases:
  1_design:
    description: Design system entities in manifest
    agent: architect
    inputs: Feature requirements
    outputs: Updated manifest with DEFINED entities

  2_plan:
    description: Create implementation tasks
    agent: architect
    inputs: Approved manifest entities
    outputs: Task files in tasks/*.yml

  3_implement:
    description: Implement tasks by type
    agents:
      frontend: UI components, pages
      backend: API endpoints, database
    inputs: Tasks with status "pending"
    outputs: Implemented code, tasks with status "review"

  4_review:
    description: Review implementations
    agent: reviewer
    inputs: Tasks with status "review"
    outputs: Approved tasks or change requests

  5_complete:
    description: Mark tasks as done
    agent: orchestrator
    inputs: Tasks with status "approved"
    outputs: Tasks with status "completed"

delegation_rules:
  # Task assignment by entity type
  entity_routing:
    pages: frontend
    components: frontend
    api_endpoints: backend
    database_tables: backend

  # Task assignment by task type
  task_routing:
    create: frontend | backend    # Based on entity type
    update: frontend | backend    # Based on entity type
    delete: frontend | backend    # Based on entity type
    refactor: frontend | backend  # Based on entity type
    review: reviewer
    test: reviewer

status_transitions:
  pending:
    - in_progress  # When agent starts work
    - blocked      # If dependencies not met

  in_progress:
    - review   # When implementation complete
    - blocked  # If blocked by issue

  review:
    - approved     # Reviewer accepts
    - in_progress  # Reviewer requests changes

  approved:
    - completed  # Final state

  blocked:
    - pending  # When blocker resolved

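These transitions form a small state machine. A minimal TypeScript sketch of how an orchestrator might enforce them (the function name is illustrative, not part of this spec):

```typescript
type Status =
  | "pending" | "in_progress" | "review"
  | "approved" | "completed" | "blocked";

// Allowed next states, mirroring status_transitions above.
const TRANSITIONS: Record<Status, Status[]> = {
  pending: ["in_progress", "blocked"],
  in_progress: ["review", "blocked"],
  review: ["approved", "in_progress"],
  approved: ["completed"],
  completed: [], // terminal state
  blocked: ["pending"],
};

// Returns the new status, or throws on an illegal transition.
function transition(current: Status, next: Status): Status {
  if (!TRANSITIONS[current].includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```
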
commands:
  - /workflow:start <feature>  # Start new feature workflow
  - /workflow:plan             # Create tasks from manifest
  - /workflow:assign           # Assign tasks to agents
  - /workflow:status           # Show workflow status
  - /workflow:next             # Process next available task

@@ -0,0 +1,52 @@
# Reviewer Agent Definition
# Responsible for code review and quality assurance

name: reviewer
role: Code Reviewer & QA

description: |
  The Reviewer agent reviews implementations, runs tests,
  and approves or requests changes. Cannot modify code directly.

allowed_tools:
  - Read  # Read any file for review
  - Bash  # Run tests, lint, type-check, verify

blocked_tools:
  - Write  # Cannot create files
  - Edit   # Cannot modify files

allowed_files:
  - "*"  # Can read everything

responsibilities:
  - Verify implementations match manifest specs
  - Verify acceptance criteria are met
  - Run tests and validation commands
  - Check code quality and patterns
  - Approve or request changes with feedback

task_types:
  - review  # Review completed implementation

review_checklist:
  - File exists at manifest file_path
  - Exports match manifest definitions
  - Props/types match manifest specs
  - Follows project code patterns
  - Lint passes
  - Type-check passes
  - Tests pass (if applicable)

workflow:
  1. Read task with status "review"
  2. Read implementation files
  3. Run verification commands
  4. Compare against manifest specs
  5. Either:
     - APPROVE: Update task status to "approved"
     - REQUEST_CHANGES: Add review_notes, set status to "in_progress"

outputs:
  - review_notes in task file
  - status update (approved | in_progress)

@@ -0,0 +1,216 @@
# Security Reviewer Agent

**Role**: Security-focused code review and vulnerability assessment

**Trigger**: `/workflow:security` command or security review phase

---

## Agent Capabilities

### Primary Functions
1. **Static Security Analysis**: Pattern-based vulnerability detection
2. **OWASP Top 10 Assessment**: Check for common web vulnerabilities
3. **Dependency Audit**: Identify vulnerable packages
4. **Configuration Review**: Check security settings and configurations
5. **Secret Detection**: Find hardcoded credentials and sensitive data

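A minimal sketch of the pattern-based detection behind functions 1 and 5 — the regexes are illustrative, not the rule set of the actual `security_scan.py` scanner:

```typescript
// Sketch: naive line-by-line scan for a few high-signal patterns.
const PATTERNS = [
  { name: "Hardcoded API key", cwe: "CWE-798", re: /api[_-]?key\s*[:=]\s*["'][\w-]{16,}["']/i },
  { name: "AWS access key", cwe: "CWE-798", re: /AKIA[0-9A-Z]{16}/ },
  { name: "SQL template interpolation", cwe: "CWE-89", re: /query\(\s*`[^`]*\$\{/ },
];

function scanSource(file: string, source: string): string[] {
  const findings: string[] = [];
  source.split("\n").forEach((line, i) => {
    for (const { name, cwe, re } of PATTERNS) {
      if (re.test(line)) findings.push(`${file}:${i + 1} ${name} (${cwe})`);
    }
  });
  return findings;
}
```
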
### Security Categories Analyzed

| Category | CWE | OWASP | Severity |
|----------|-----|-------|----------|
| Hardcoded Secrets | CWE-798 | A07 | CRITICAL |
| SQL Injection | CWE-89 | A03 | CRITICAL |
| Command Injection | CWE-78 | A03 | CRITICAL |
| XSS | CWE-79 | A03 | HIGH |
| Path Traversal | CWE-22 | A01 | HIGH |
| NoSQL Injection | CWE-943 | A03 | HIGH |
| SSRF | CWE-918 | A10 | HIGH |
| Prototype Pollution | CWE-1321 | A03 | HIGH |
| Insecure Auth | CWE-287 | A07 | HIGH |
| CORS Misconfiguration | CWE-942 | A01 | MEDIUM |
| Sensitive Data Exposure | CWE-200 | A02 | MEDIUM |
| Insecure Dependencies | CWE-1104 | A06 | MEDIUM |
| Insecure Randomness | CWE-330 | A02 | LOW |
| Debug Code | CWE-489 | A05 | LOW |

---

## Agent Constraints

### READ-ONLY MODE
- **CANNOT** modify files
- **CANNOT** fix issues directly
- **CAN** only read, analyze, and report

### Output Requirements
- Must produce structured security report
- Must categorize issues by severity
- Must provide remediation guidance
- Must reference CWE/OWASP standards

---

## Execution Flow

### Step 1: Run Automated Scanner
```bash
python3 skills/guardrail-orchestrator/scripts/security_scan.py --project-dir . --json
```

### Step 2: Deep Analysis (Task Agent)
For each CRITICAL/HIGH issue, perform deeper analysis:
- Trace data flow from source to sink
- Identify attack vectors
- Assess exploitability
- Check for existing mitigations

### Step 3: Dependency Audit
```bash
npm audit --json 2>/dev/null || echo "{}"
```

### Step 4: Configuration Review
Check security-relevant configurations:
- CORS settings
- CSP headers
- Authentication configuration
- Session management
- Cookie settings

### Step 5: Manual Code Review Checklist
For implemented features, verify:
- [ ] Input validation on all user inputs
- [ ] Output encoding for XSS prevention
- [ ] Parameterized queries for database access
- [ ] Proper error handling (no sensitive data in errors)
- [ ] Authentication/authorization checks
- [ ] HTTPS enforcement
- [ ] Secure cookie flags
- [ ] Rate limiting on sensitive endpoints

### Step 6: Generate Report
Output comprehensive security report with:
- Executive summary
- Issue breakdown by severity
- Detailed findings with code locations
- Remediation recommendations
- Risk assessment

---

## Report Format

```
+======================================================================+
|                        SECURITY REVIEW REPORT                        |
+======================================================================+
| Project: $PROJECT_NAME                                               |
| Scan Date: $DATE                                                     |
| Agent: security-reviewer                                             |
+======================================================================+
| EXECUTIVE SUMMARY                                                    |
+----------------------------------------------------------------------+
| Risk Level: CRITICAL / HIGH / MEDIUM / LOW / PASS                    |
| Total Issues: X                                                      |
| Critical: X (immediate action required)                              |
| High: X (fix before production)                                      |
| Medium: X (should fix)                                               |
| Low: X (consider fixing)                                             |
+======================================================================+
| CRITICAL FINDINGS                                                    |
+----------------------------------------------------------------------+
| [1] Hardcoded API Key                                                |
|     File: src/lib/api.ts:15                                          |
|     CWE: CWE-798                                                     |
|     Code: apiKey = "sk-..."                                          |
|     Risk: Credentials can be extracted from source                   |
|     Fix: Use environment variable: process.env.API_KEY               |
+----------------------------------------------------------------------+
| [2] SQL Injection                                                    |
|     File: app/api/users/route.ts:42                                  |
|     CWE: CWE-89                                                      |
|     Code: query(`SELECT * FROM users WHERE id = ${userId}`)          |
|     Risk: Attacker can manipulate database queries                   |
|     Fix: Use parameterized query: query($1, [userId])                |
+======================================================================+
| HIGH FINDINGS                                                        |
+----------------------------------------------------------------------+
| [3] XSS Vulnerability                                                |
|     File: app/components/Comment.tsx:28                              |
|     ...                                                              |
+======================================================================+
| DEPENDENCY VULNERABILITIES                                           |
+----------------------------------------------------------------------+
| lodash@4.17.20 - Prototype Pollution (HIGH)                          |
| axios@0.21.0 - SSRF Risk (MEDIUM)                                    |
| Fix: npm audit fix                                                   |
+======================================================================+
| RECOMMENDATIONS                                                      |
+----------------------------------------------------------------------+
| 1. Immediately rotate any exposed credentials                        |
| 2. Fix SQL injection before deploying                                |
| 3. Add input validation layer                                        |
| 4. Update vulnerable dependencies                                    |
| 5. Add security headers middleware                                   |
+======================================================================+
| VERDICT: FAIL - X critical issues must be fixed                      |
+======================================================================+
```

---

## Integration with Workflow

### In Review Phase
The security agent is automatically invoked during `/workflow:review`:
1. Review command runs security_scan.py
2. If CRITICAL issues found → blocks approval
3. Report included in review output

### Standalone Security Audit
Use `/workflow:security` for dedicated security review:
- More thorough analysis
- Deep code inspection
- Dependency audit
- Configuration review

### Remediation Flow
After security issues are identified:
1. Issues added to task queue as blockers
2. Implementation agents fix issues
3. Security agent re-validates fixes
4. Approval only after clean scan

---

## Tool Usage

### Primary Tools
- `Bash`: Run security_scan.py, npm audit
- `Read`: Analyze suspicious code patterns
- `Grep`: Search for vulnerability patterns

### Blocked Tools
- `Write`: Cannot create files
- `Edit`: Cannot modify files
- `Task`: Cannot delegate to other agents

---

## Exit Conditions

### PASS
- No CRITICAL or HIGH issues
- All dependencies up to date or acknowledged
- Security configurations reviewed

### FAIL
- Any CRITICAL issue present
- Multiple HIGH issues present
- Critical dependencies vulnerable

### WARNING
- Only MEDIUM/LOW issues
- Some dependencies outdated
- Minor configuration concerns

@@ -0,0 +1,347 @@
# API Contract Schema
# The binding agreement between frontend and backend implementations
# Generated during design phase, validated during review phase
#
# This contract ensures:
# 1. Backend implements exactly the endpoints frontend expects
# 2. Frontend calls endpoints with correct methods/bodies
# 3. Both use the same TypeScript types from shared file

# ============================================================================
# CONTRACT METADATA
# ============================================================================
api_contract:
  # Links to workflow
  workflow_version: string  # e.g., v001
  design_document_revision: integer

  # Timestamps
  generated_at: timestamp
  validated_at: timestamp | null

  # Contract status
  status: draft | active | violated

# ============================================================================
# SHARED TYPES (Source of truth for both agents)
# ============================================================================
# These types are generated into app/types/api.ts
# Both frontend and backend MUST import from this file
types:
  description: "TypeScript interfaces shared between frontend and backend"

  type_schema:
    # Identity
    id: string    # type_<Name> (e.g., type_User, type_CreateUserRequest)
    name: string  # PascalCase type name (exported interface name)

    # Type definition
    definition:
      type: object | array | enum | union

      # For object types
      properties:
        - name: string        # Property name (camelCase)
          type: string        # TypeScript type (string, number, boolean, other type name)
          required: boolean
          description: string
          validation: string  # Optional validation rule

      # For enum types
      enum_values: [string]

      # For union types
      union_members: [string]  # Array of type names or literal types

      # For array types
      array_item_type: string  # Type of array items

    # Usage tracking
    used_by:
      requests: [string]   # endpoint_ids that use this as request body
      responses: [string]  # endpoint_ids that use this as response
      models: [string]     # model_ids this type represents

  # Example
  example_type:
    id: type_User
    name: User
    definition:
      type: object
      properties:
        - name: id
          type: string
          required: true
          description: "Unique user identifier"
        - name: email
          type: string
          required: true
          description: "User email address"
          validation: email
        - name: name
          type: string
          required: true
          description: "Display name"
        - name: createdAt
          type: Date
          required: true
          description: "Account creation timestamp"
    used_by:
      responses: [api_get_user, api_create_user, api_list_users]
      models: [model_user]

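For instance, the `example_type` above would be emitted into `app/types/api.ts` roughly as the following interface (a sketch of expected generator output, not the generator itself):

```typescript
// Sketch: generated shape for type_User.
export interface User {
  id: string;      // Unique user identifier
  email: string;   // User email address (validation: email)
  name: string;    // Display name
  createdAt: Date; // Account creation timestamp
}
```
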
# ============================================================================
# ENDPOINT CONTRACTS (Binding specifications)
# ============================================================================
endpoints:
  description: "API endpoint contracts with strict request/response typing"

  endpoint_schema:
    # Identity
    id: string  # api_<verb>_<resource> from design_document

    # HTTP Specification
    method: GET | POST | PUT | PATCH | DELETE
    path: string  # Exact path with params (e.g., /api/users/:id)

    # Path parameters (extracted from path)
    path_params:
      - name: string
        type: string  # TypeScript type
        description: string

    # Query parameters (for GET requests)
    query_params:
      - name: string
        type: string
        required: boolean
        default: any
        description: string

    # Request body (for POST/PUT/PATCH)
    request_body:
      type_id: string  # Reference to types section (e.g., type_CreateUserRequest)
      content_type: application/json

    # Response specification
    response:
      # Success response
      success:
        status: integer    # 200, 201, 204
        type_id: string    # Reference to types section
        is_array: boolean  # If response is array of type

      # Error responses
      errors:
        - status: integer  # 400, 401, 403, 404, 500
          type_id: string  # Error response type (usually type_ApiError)
          description: string

    # Authentication
    auth:
      required: boolean
      roles: [string]  # Required roles (empty = any authenticated)

    # Contract version for compatibility
    version: string  # Semantic version of this endpoint spec

  # Example
  example_endpoint:
    id: api_create_user
    method: POST
    path: /api/users
    path_params: []
    query_params: []
    request_body:
      type_id: type_CreateUserRequest
      content_type: application/json
    response:
      success:
        status: 201
        type_id: type_User
        is_array: false
      errors:
        - status: 400
          type_id: type_ValidationError
          description: "Invalid request body"
        - status: 409
          type_id: type_ApiError
          description: "Email already exists"
    auth:
      required: false
      roles: []
    version: "1.0.0"

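A backend implementation that satisfies this example contract might look like the following Next.js-style route handler; this is a hedged sketch — `createUser`, its module path, and the imported types are assumptions, not part of the contract file:

```typescript
// app/api/users/route.ts — sketch of a handler satisfying api_create_user.
import { NextResponse } from "next/server";
import type { CreateUserRequest, User } from "@/types/api";
import { createUser } from "@/lib/users"; // hypothetical service

export async function POST(req: Request) {
  const body = (await req.json()) as CreateUserRequest;

  // 400 with type_ValidationError per the contract's errors list
  if (!body.email || !body.name || !body.password || body.password.length < 8) {
    return NextResponse.json({ error: "Validation failed" }, { status: 400 });
  }

  // 201 with a User body, per response.success in the contract
  const user: User = await createUser(body);
  return NextResponse.json(user, { status: 201 });
}
```
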
# ============================================================================
# FRONTEND USAGE CONTRACTS (What frontend expects to call)
# ============================================================================
frontend_calls:
  description: "Expected API calls from frontend components/pages"

  call_schema:
    # Identity
    id: string  # call_<component>_<action>

    # Source
    source:
      entity_id: string  # page_xxx or component_xxx
      file_path: string  # Expected file location

    # Target endpoint
    endpoint_id: string  # Reference to endpoints section

    # Call context
    purpose: string  # Why this call is made
    trigger: string  # What triggers this call (onLoad, onClick, onSubmit)

    # Data mapping
    request_mapping:
      # How component data maps to request
      from_props: [string]  # Props used in request
      from_state: [string]  # State used in request
      from_form: [string]   # Form fields used in request

    response_handling:
      # How response is handled
      success_action: string  # What happens on success
      error_action: string    # What happens on error

  # Example
  example_call:
    id: call_signup_form_submit
    source:
      entity_id: component_signup_form
      file_path: app/components/SignupForm.tsx
    endpoint_id: api_create_user
    purpose: "Submit registration form"
    trigger: onSubmit
    request_mapping:
      from_form: [email, name, password]
    response_handling:
      success_action: "Redirect to dashboard"
      error_action: "Display error message"

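On the frontend side, `example_call` corresponds to a submit handler along these lines (a sketch; the redirect target and error display are stand-ins for the actions named above):

```typescript
// app/components/SignupForm.tsx — sketch of call_signup_form_submit.
import type { CreateUserRequest, User } from "@/types/api";

async function handleSubmit(form: CreateUserRequest): Promise<void> {
  // from_form: [email, name, password] maps directly onto the request body
  const res = await fetch("/api/users", {
    method: "POST", // must match the endpoint contract's method
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(form),
  });

  if (res.status === 201) {
    const user: User = await res.json();
    window.location.assign("/dashboard"); // success_action: redirect
  } else {
    const { error } = await res.json();
    console.error(error); // error_action: display error message
  }
}
```
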
# ============================================================================
# BACKEND IMPLEMENTATION CONTRACTS (What backend must provide)
# ============================================================================
backend_routes:
  description: "Required backend route implementations"

  route_schema:
    # Identity
    id: string  # route_<verb>_<path>

    # Target endpoint
    endpoint_id: string  # Reference to endpoints section

    # Implementation location
    file_path: string    # Expected file (e.g., app/api/users/route.ts)
    export_name: string  # Exported function name (GET, POST, etc.)

    # Dependencies
    uses_models: [string]    # model_ids this route uses
    uses_services: [string]  # Service files this route depends on

    # Implementation requirements
    must_validate:
      - field: string
        rule: string
    must_authenticate: boolean
    must_authorize: [string]  # Role checks required

  # Example
  example_route:
    id: route_post_users
    endpoint_id: api_create_user
    file_path: app/api/users/route.ts
    export_name: POST
    uses_models: [model_user]
    uses_services: [lib/auth.ts, lib/db.ts]
    must_validate:
      - field: email
        rule: email
      - field: password
        rule: min:8
    must_authenticate: false
    must_authorize: []

# ============================================================================
# VALIDATION RULES
# ============================================================================
validation_rules:
  contracts:
    - "Every frontend_call must reference existing endpoint_id"
    - "Every backend_route must reference existing endpoint_id"
    - "Request body type_id must exist in types section"
    - "Response type_id must exist in types section"
    - "Path params in endpoint must match :param patterns in path"

  types:
    - "Every type must have unique id"
    - "Type references (nested types) must exist"
    - "Required properties cannot have default values"

  implementation:
    - "Frontend must import types from shared types file"
    - "Backend must import types from shared types file"
    - "HTTP methods must match contract specification"
    - "Response shapes must conform to type definitions"

# ============================================================================
# GENERATED FILES
# ============================================================================
generated_files:
  shared_types:
    path: app/types/api.ts
    description: "TypeScript interfaces for all API types"
    template: |
      // AUTO-GENERATED - DO NOT EDIT
      // Source: .workflow/versions/vXXX/contracts/api_contract.yml
      // Generated: {timestamp}

      // === Types ===
      {type_definitions}

      // === API Paths (for type-safe fetch calls) ===
      export const API_PATHS = {
        {path_constants}
      } as const;

      // === API Response Types ===
      {response_type_helpers}

  api_client:
    path: app/lib/api-client.ts
    description: "Type-safe API client (optional)"
    template: |
      // AUTO-GENERATED - DO NOT EDIT
      // Type-safe API client generated from contract

      import type * as Api from '@/types/api';

      {api_client_methods}

# ============================================================================
# CONTRACT VIOLATION HANDLING
# ============================================================================
violations:
  severity_levels:
    critical:
      - "Endpoint exists in frontend but not backend"
      - "Method mismatch (frontend calls POST, backend has GET)"
      - "Required field missing in implementation"
    high:
      - "Response type mismatch"
      - "Missing error handling for documented errors"
    medium:
      - "Extra undocumented endpoint in backend"
      - "Type property order differs"
    low:
      - "Description mismatch"
      - "Optional field handling differs"

  on_violation:
    critical: "Block deployment, require immediate fix"
    high: "Warn in review, require acknowledgment"
    medium: "Report in review, fix recommended"
    low: "Log for tracking"

@@ -0,0 +1,340 @@
# Dependency Graph Schema
# Auto-generated from design_document.yml
# Determines execution order for parallel task distribution

# ============================================================================
# GRAPH METADATA
# ============================================================================
dependency_graph:
  # Links to design document
  design_version: string    # design_document revision this was generated from
  workflow_version: string  # v001, v002, etc.

  # Generation info
  generated_at: timestamp
  generator: string  # Script that generated this

  # Statistics
  stats:
    total_entities: integer
    total_layers: integer
    max_parallelism: integer       # Max items that can run in parallel
    critical_path_length: integer  # Longest dependency chain

# ============================================================================
# EXECUTION LAYERS
# ============================================================================
layers:
  description: "Ordered layers for parallel execution within each layer"

  layer_schema:
    layer: integer       # 1, 2, 3...
    name: string         # Human-readable name
    description: string  # What this layer contains

    # Items in this layer (can run in parallel)
    items:
      - id: string    # Entity ID (model_*, api_*, page_*, component_*)
        type: enum    # model | api | page | component
        name: string  # Human-readable name

        # Dependencies (all must be in lower layers)
        depends_on: [string]  # Entity IDs this depends on

        # Task mapping
        task_id: string  # task_* ID for implementation
        agent: enum      # frontend | backend

        # Estimated complexity
        complexity: enum  # low | medium | high

    # Layer constraints
    requires_layers: [integer]  # Layer numbers that must complete first
    parallel_count: integer     # Number of items that can run in parallel

  # Example layers
  example:
    - layer: 1
      name: "Data Layer"
      description: "Database models - no external dependencies"
      items:
        - id: model_user
          type: model
          name: User
          depends_on: []
          task_id: task_create_model_user
          agent: backend
          complexity: medium
        - id: model_post
          type: model
          name: Post
          depends_on: []
          task_id: task_create_model_post
          agent: backend
          complexity: low
      requires_layers: []
      parallel_count: 2

    - layer: 2
      name: "API Layer"
      description: "REST endpoints - depend on models"
      items:
        - id: api_create_user
          type: api
          name: "Create User"
          depends_on: [model_user]
          task_id: task_create_api_create_user
          agent: backend
          complexity: medium
        - id: api_list_users
          type: api
          name: "List Users"
          depends_on: [model_user]
          task_id: task_create_api_list_users
          agent: backend
          complexity: low
      requires_layers: [1]
      parallel_count: 2

    - layer: 3
      name: "UI Layer"
      description: "Pages and components - depend on APIs"
      items:
        - id: component_user_card
          type: component
          name: UserCard
          depends_on: []
          task_id: task_create_component_user_card
          agent: frontend
          complexity: low
        - id: page_users
          type: page
          name: "Users Page"
          depends_on: [api_list_users, component_user_card]
          task_id: task_create_page_users
          agent: frontend
          complexity: medium
      requires_layers: [2]
      parallel_count: 2

# ============================================================================
# FULL DEPENDENCY MAP
# ============================================================================
dependency_map:
  description: "Complete dependency relationships for visualization"

  entry_schema:
    entity_id:
      type: enum             # model | api | page | component
      layer: integer         # Which layer this belongs to
      depends_on: [string]   # What this entity needs
      depended_by: [string]  # What entities need this

  # Example
  example:
    model_user:
      type: model
      layer: 1
      depends_on: []
      depended_by: [model_post, api_create_user, api_list_users, api_get_user]

    api_create_user:
      type: api
      layer: 2
      depends_on: [model_user]
      depended_by: [page_user_create, component_user_form]

    page_users:
      type: page
      layer: 3
      depends_on: [api_list_users, component_user_card]
      depended_by: []

# ============================================================================
# TASK GENERATION MAP
# ============================================================================
task_map:
  description: "Maps entities to implementation tasks with context"

  task_entry_schema:
    entity_id: string  # model_user, api_create_user, etc.
    task_id: string    # task_create_model_user
    layer: integer     # Execution layer
    agent: enum        # frontend | backend

    # Context to pass to subagent (snapshot from design_document)
    context:
      # For models
      model_definition:
        fields: [object]
        relations: [object]
        validations: [object]

      # For APIs
      api_contract:
        method: string
        path: string
        request_body: object
        responses: [object]
        auth: object

      # For pages
      page_definition:
        path: string
        data_needs: [object]
        components: [string]
        auth: object

      # For components
      component_definition:
        props: [object]
        events: [object]
        uses_apis: [string]

      # Shared context
      related_models: [object]  # Models this entity interacts with
      related_apis: [object]    # APIs this entity needs/provides

    # Dependencies as task IDs
    depends_on_tasks: [string]  # Task IDs that must complete first

    # Output definition
    outputs:
      files: [string]     # Files this task will create
      provides: [string]  # Entity IDs this task provides

# ============================================================================
# EXECUTION PLAN
# ============================================================================
execution_plan:
  description: "Concrete execution order for workflow orchestrator"

  phase_schema:
    phase: integer  # 1, 2, 3... (maps to layers)

    # Parallel batch within phase
    parallel_batch:
      - task_id: string
        entity_id: string
        agent: enum

        # Full context blob for subagent
        context_file: string  # Path to context snapshot file

        # Expected outputs
        expected_files: [string]

        # Validation to run after completion
        validation:
          - type: enum  # file_exists | lint | typecheck | test
            target: string

  # Example
  example:
    - phase: 1
      parallel_batch:
        - task_id: task_create_model_user
          entity_id: model_user
          agent: backend
          context_file: .workflow/versions/v001/contexts/model_user.yml
          expected_files: [prisma/schema.prisma, app/models/user.ts]
          validation:
            - type: typecheck
              target: app/models/user.ts

        - task_id: task_create_model_post
          entity_id: model_post
          agent: backend
          context_file: .workflow/versions/v001/contexts/model_post.yml
          expected_files: [prisma/schema.prisma, app/models/post.ts]
          validation:
            - type: typecheck
              target: app/models/post.ts

    - phase: 2
      parallel_batch:
        - task_id: task_create_api_create_user
          entity_id: api_create_user
          agent: backend
          context_file: .workflow/versions/v001/contexts/api_create_user.yml
          expected_files: [app/api/users/route.ts]
          validation:
            - type: lint
              target: app/api/users/route.ts
            - type: typecheck
              target: app/api/users/route.ts

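An orchestrator can consume this plan phase by phase, running each parallel batch concurrently. A minimal sketch — `runAgentTask` stands in for a real subagent dispatcher, which this schema does not define:

```typescript
// Sketch: execute the plan one phase at a time, batches in parallel.
interface PlannedTask {
  task_id: string;
  entity_id: string;
  agent: "frontend" | "backend";
  context_file: string;
  expected_files: string[];
}

interface Phase {
  phase: number;
  parallel_batch: PlannedTask[];
}

// Hypothetical dispatcher that hands a task to the right subagent.
async function runAgentTask(task: PlannedTask): Promise<void> {
  console.log(`[${task.agent}] ${task.task_id} (context: ${task.context_file})`);
}

async function executePlan(plan: Phase[]): Promise<void> {
  for (const { phase, parallel_batch } of plan) {
    console.log(`Phase ${phase}: ${parallel_batch.length} task(s)`);
    // Tasks in a batch are independent, so they can run concurrently;
    // the next phase starts only after every task in this one finishes.
    await Promise.all(parallel_batch.map(runAgentTask));
  }
}
```
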
# ============================================================================
# CONTEXT SNAPSHOT SCHEMA
# ============================================================================
context_snapshot:
  description: "Schema for per-task context files passed to subagents"

  snapshot_schema:
    # Metadata
    task_id: string
    entity_id: string
    generated_at: timestamp
    workflow_version: string

    # The entity being implemented
    target:
      type: enum          # model | api | page | component
      definition: object  # Full definition from design_document

    # Related entities (for reference)
    related:
      models: [object]      # Model definitions this task needs to know about
      apis: [object]        # API contracts this task needs to know about
      components: [object]  # Component definitions this task needs

    # Dependency chain
    dependencies:
      completed: [string]  # Entity IDs already implemented
      pending: [string]    # Entity IDs not yet implemented (shouldn't depend on)

    # File context
    files:
      to_create: [string]  # Files this task should create
      to_modify: [string]  # Files this task may modify
      reference: [string]  # Files to read for context

    # Acceptance criteria
    acceptance:
      - criterion: string   # What must be true
        validation: string  # How to verify

    # Implementation hints
    hints:
      patterns: [string]  # Patterns to follow (from existing codebase)
      avoid: [string]     # Anti-patterns to avoid
      examples: [string]  # Example file paths to reference

# ============================================================================
# GRAPH GENERATION RULES
# ============================================================================
generation_rules:
  layer_assignment:
    - "Models with no relations → Layer 1"
    - "Models with relations to Layer 1 models → Layer 1 (parallel)"
    - "APIs depending only on models → Layer 2"
    - "Components with no API deps → Layer 3 (parallel with pages)"
    - "Pages and components with API deps → Layer 3+"
    - "Recursive: if all deps in Layer N, assign to Layer N+1"

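The layer_assignment rules amount to longest-path layering of the dependency graph. A minimal depth-first sketch of the recursive rule (the entity names in the usage example come from this schema's own examples):

```typescript
// Sketch: assign each entity to layer = 1 + max(layer of its dependencies).
function assignLayers(dependsOn: Map<string, string[]>): Map<string, number> {
  const layers = new Map<string, number>();

  const layerOf = (id: string, seen: Set<string> = new Set()): number => {
    if (layers.has(id)) return layers.get(id)!;
    if (seen.has(id)) throw new Error(`Circular dependency at ${id}`);
    seen.add(id);
    const deps = dependsOn.get(id) ?? [];
    const layer = deps.length === 0
      ? 1
      : 1 + Math.max(...deps.map((d) => layerOf(d, seen)));
    layers.set(id, layer);
    return layer;
  };

  for (const id of dependsOn.keys()) layerOf(id);
  return layers;
}

// Usage: model_user -> 1, api_list_users -> 2, page_users -> 3.
const layers = assignLayers(new Map([
  ["model_user", []],
  ["api_list_users", ["model_user"]],
  ["component_user_card", []],
  ["page_users", ["api_list_users", "component_user_card"]],
]));
```
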
  parallelism:
    - "Items in same layer with no inter-dependencies can run in parallel"
    - "Max parallelism = min(layer_item_count, configured_max_agents)"
    - "Group by agent type for efficient batching"

  context_generation:
    - "Include full definition of target entity"
    - "Include definitions of all direct dependencies"
    - "Include one level of indirect dependencies for context"
    - "Exclude unrelated entities to minimize context size"

  validation:
    - "No circular dependencies (would prevent layer assignment)"
    - "All dependency targets must exist in design_document"
    - "Each entity must be in exactly one layer"
    - "Layer numbers must be consecutive starting from 1"

@@ -0,0 +1,463 @@
# Design Document Schema
# The source of truth for system design - all tasks derive from this
# Created during DESIGNING phase, approved before IMPLEMENTING

# ============================================================================
# DOCUMENT METADATA
# ============================================================================
design_document:
  # Links to workflow
  workflow_version: string  # e.g., v001
  feature: string           # Feature being implemented

  # Timestamps
  created_at: timestamp
  updated_at: timestamp
  approved_at: timestamp | null

  # Design status
  status: draft | review | approved | rejected

  # Revision tracking
  revision: integer       # Increments on changes
  revision_notes: string  # What changed in this revision

# ============================================================================
# LAYER 1: DATA MODELS (ER Diagram)
# ============================================================================
data_models:
  description: "Database entities and their relationships"

  model_schema:
    # Identity
    id: string           # model_<name> (e.g., model_user, model_post)
    name: string         # PascalCase entity name (e.g., User, Post)
    description: string  # What this model represents

    # Table/Collection info
    table_name: string  # snake_case (e.g., users, posts)

    # Fields
    fields:
      - name: string           # snake_case field name
        type: enum             # string | integer | boolean | datetime | uuid | json | text | float | decimal | enum
        constraints: [enum]    # primary_key | foreign_key | unique | not_null | indexed | auto_increment | default
        default: any           # Default value if constraint includes 'default'
        enum_values: [string]  # If type is 'enum', list valid values
        description: string    # Field purpose

    # Relations to other models
    relations:
      - type: enum           # has_one | has_many | belongs_to | many_to_many
        target: string       # Target model_id (e.g., model_post)
        foreign_key: string  # FK field name
        through: string      # Junction table for many_to_many
        on_delete: enum      # cascade | set_null | restrict | no_action

    # Indexes
    indexes:
      - fields: [string]  # Fields in index
        unique: boolean   # Is unique index
        name: string      # Index name

    # Timestamps (common pattern)
    timestamps: boolean   # Auto-add created_at, updated_at
    soft_delete: boolean  # Add deleted_at for soft deletes

    # Validation rules (business logic)
    validations:
      - field: string    # Field to validate
        rule: string     # Validation rule (e.g., "email", "min:8", "max:100")
        message: string  # Error message

  # Example
  example_model:
    id: model_user
    name: User
    description: "Application user account"
    table_name: users
    fields:
      - name: id
        type: uuid
        constraints: [primary_key]
        description: "Unique identifier"
      - name: email
        type: string
        constraints: [unique, not_null, indexed]
        description: "User email address"
      - name: name
        type: string
        constraints: [not_null]
        description: "Display name"
      - name: password_hash
        type: string
        constraints: [not_null]
        description: "Bcrypt hashed password"
      - name: role
        type: enum
        enum_values: [user, admin, moderator]
        constraints: [not_null, default]
        default: user
        description: "User role for authorization"
    relations:
      - type: has_many
        target: model_post
        foreign_key: user_id
        on_delete: cascade
    timestamps: true
    soft_delete: false
    validations:
      - field: email
        rule: email
        message: "Invalid email format"
      - field: password_hash
        rule: min:60
        message: "Invalid password hash"

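A model's validations block maps naturally onto a runtime validator. A sketch using zod — an assumption, since this schema does not prescribe a validation library:

```typescript
import { z } from "zod";

// Sketch: runtime validation derived from example_model's rules.
const UserRecord = z.object({
  email: z.string().email("Invalid email format"),             // rule: email
  name: z.string().min(1),                                     // not_null
  password_hash: z.string().min(60, "Invalid password hash"),  // rule: min:60
});

type UserRecord = z.infer<typeof UserRecord>;

// Usage: parse() throws with the configured message on bad input.
const ok = UserRecord.parse({
  email: "user@example.com",
  name: "John Doe",
  password_hash: "x".repeat(60),
});
```
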
# ============================================================================
# LAYER 2: API ENDPOINTS
# ============================================================================
api_endpoints:
  description: "REST API endpoints with request/response contracts"

  endpoint_schema:
    # Identity
    id: string  # api_<verb>_<resource> (e.g., api_create_user)

    # HTTP
    method: enum  # GET | POST | PUT | PATCH | DELETE
    path: string  # URL path (e.g., /api/users/:id)

    # Description
    summary: string      # Short description
    description: string  # Detailed description

    # Tags for grouping
    tags: [string]  # e.g., [users, authentication]

    # Path parameters
    path_params:
      - name: string  # Parameter name (e.g., id)
        type: string  # Data type
        description: string

    # Query parameters (for GET)
    query_params:
      - name: string  # Parameter name
        type: string  # Data type
        required: boolean
        default: any
        description: string

    # Request body (for POST/PUT/PATCH)
    request_body:
      content_type: string  # application/json
      schema:
        type: object | array
        properties:
          - name: string
            type: string
            required: boolean
            validations: [string]  # Validation rules
            description: string
      example: object  # Example request body

    # Response schemas by status code
    responses:
      - status: integer  # HTTP status code
        description: string
        schema:
          type: object | array
          properties:
            - name: string
              type: string
        example: object

    # Dependencies
    depends_on_models: [string]  # model_ids this endpoint uses
    depends_on_apis: [string]    # api_ids this endpoint calls (internal)

    # Authentication/Authorization
    auth:
      required: boolean
      roles: [string]  # Required roles (empty = any authenticated)

    # Rate limiting
    rate_limit:
      requests: integer  # Max requests
      window: string     # Time window (e.g., "1m", "1h")

  # Example
  example_endpoint:
    id: api_create_user
    method: POST
    path: /api/users
    summary: "Create a new user"
    description: "Register a new user account with email and password"
    tags: [users, authentication]
    request_body:
      content_type: application/json
      schema:
        type: object
        properties:
          - name: email
            type: string
            required: true
            validations: [email]
            description: "User email address"
          - name: name
            type: string
            required: true
            validations: [min:1, max:100]
            description: "Display name"
          - name: password
            type: string
            required: true
            validations: [min:8]
            description: "Password (will be hashed)"
      example:
        email: "user@example.com"
        name: "John Doe"
        password: "securepass123"
    responses:
      - status: 201
        description: "User created successfully"
        schema:
          type: object
          properties:
            - name: id
              type: uuid
            - name: email
              type: string
            - name: name
              type: string
            - name: created_at
              type: datetime
        example:
          id: "550e8400-e29b-41d4-a716-446655440000"
          email: "user@example.com"
          name: "John Doe"
          created_at: "2025-01-16T10:00:00Z"
      - status: 400
        description: "Validation error"
        schema:
          type: object
          properties:
            - name: error
              type: string
            - name: details
              type: array
        example:
          error: "Validation failed"
          details: ["Email is invalid", "Password too short"]
      - status: 409
        description: "Email already exists"
        schema:
          type: object
          properties:
            - name: error
              type: string
        example:
          error: "Email already registered"
    depends_on_models: [model_user]
    depends_on_apis: []
    auth:
      required: false
      roles: []

# ============================================================================
# LAYER 3: UI PAGES
# ============================================================================
pages:
  description: "Application pages/routes"

  page_schema:
    # Identity
    id: string    # page_<name> (e.g., page_users, page_user_detail)
    name: string  # Human-readable name

    # Routing
    path: string    # URL path (e.g., /users, /users/[id])
    layout: string  # Layout component to use

    # Data requirements
    data_needs:
      - api_id: string    # API endpoint to call
        purpose: string   # Why this data is needed
        on_load: boolean  # Fetch on page load

    # Components used
    components: [string]  # component_ids used on this page

    # SEO
    seo:
      title: string
      description: string

    # Auth requirements
    auth:
      required: boolean
      roles: [string]
      redirect: string  # Where to redirect if not authorized

    # State management
    state:
      local: [string]   # Local state variables
      global: [string]  # Global state dependencies

  # Example
  example_page:
    id: page_users
    name: "Users List"
    path: /users
    layout: layout_dashboard
    data_needs:
      - api_id: api_list_users
        purpose: "Display user list"
        on_load: true
    components: [component_user_list, component_user_card, component_pagination]
    seo:
      title: "Users"
      description: "View all users"
    auth:
      required: true
      roles: [admin]
      redirect: /login

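Rendered as code, `example_page` suggests a server component along these lines (a hedged sketch: `getSession` is a hypothetical auth helper, and a real implementation would resolve the fetch URL and layout wiring properly):

```typescript
// app/users/page.tsx — sketch of page_users.
import { redirect } from "next/navigation";
import type { User } from "@/types/api";
import { getSession } from "@/lib/auth"; // hypothetical auth helper

export default async function UsersPage() {
  // auth: required, roles [admin], redirect /login
  const session = await getSession();
  if (!session || session.role !== "admin") redirect("/login");

  // data_needs: api_list_users, fetched on page load
  const res = await fetch("/api/users", { cache: "no-store" });
  const users: User[] = await res.json();

  return (
    <ul>
      {users.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}
```
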
# ============================================================================
# LAYER 3: UI COMPONENTS
# ============================================================================
components:
  description: "Reusable UI components"

  component_schema:
    # Identity
    id: string    # component_<name> (e.g., component_user_card)
    name: string  # PascalCase component name

    # Props (input)
    props:
      - name: string  # Prop name
        type: string  # TypeScript type
        required: boolean
        default: any
        description: string

    # Events (output)
    events:
      - name: string     # Event name (e.g., onClick, onSubmit)
        payload: string  # Payload type
        description: string

    # API calls (if component fetches data)
    uses_apis: [string]  # api_ids this component calls directly

    # Child components
    uses_components: [string]  # component_ids used inside this component

    # State
    internal_state: [string]  # Internal state variables

    # Styling
    variants: [string]  # Style variants (e.g., primary, secondary)

  # Example
  example_component:
    id: component_user_card
    name: UserCard
    props:
      - name: user
        type: User
        required: true
        description: "User object to display"
      - name: showActions
        type: boolean
        required: false
        default: true
        description: "Show edit/delete buttons"
    events:
      - name: onEdit
        payload: "User"
        description: "Fired when edit button clicked"
      - name: onDelete
        payload: "string"
        description: "Fired when delete confirmed, payload is user ID"
    uses_apis: []
    uses_components: [component_avatar, component_button]
    internal_state: [isDeleting]
    variants: [default, compact]

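A component honoring `example_component`'s contract might be sketched as follows (markup is illustrative; only the props/events surface comes from the schema above):

```typescript
// app/components/UserCard.tsx — sketch matching example_component.
import type { User } from "@/types/api";

interface UserCardProps {
  user: User;                           // required prop
  showActions?: boolean;                // optional, default true
  onEdit?: (user: User) => void;        // event payload: User
  onDelete?: (userId: string) => void;  // event payload: user ID
}

export function UserCard({ user, showActions = true, onEdit, onDelete }: UserCardProps) {
  return (
    <div>
      <span>{user.name}</span>
      {showActions && (
        <>
          <button onClick={() => onEdit?.(user)}>Edit</button>
          <button onClick={() => onDelete?.(user.id)}>Delete</button>
        </>
      )}
    </div>
  );
}
```
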
# ============================================================================
# DEPENDENCY GRAPH (Auto-generated from above)
# ============================================================================
dependency_graph:
  description: "Execution order based on dependencies - auto-generated"

  # Layers for parallel execution
  layers:
    - layer: 1
      name: "Data Models"
      description: "Database schema - no dependencies"
      items:
        - id: string        # Entity ID
          type: model       # model | api | page | component
          dependencies: []  # Empty for layer 1

    - layer: 2
      name: "API Endpoints"
      description: "Backend APIs - depend on models"
      items:
        - id: string
          type: api
          dependencies: [string]  # model_ids

    - layer: 3
      name: "UI Layer"
      description: "Pages and components - depend on APIs"
      items:
        - id: string
          type: page | component
          dependencies: [string]  # api_ids, component_ids

  # Full dependency map for visualization
  dependency_map:
    model_user:
      depends_on: []
      depended_by: [api_create_user, api_list_users, api_get_user]
    api_create_user:
      depends_on: [model_user]
      depended_by: [page_user_create, component_user_form]
    page_users:
      depends_on: [api_list_users, component_user_list]
      depended_by: []

# ============================================================================
# DESIGN VALIDATION RULES
# ============================================================================
validation_rules:
  models:
    - "Every model must have a primary_key field"
    - "Foreign keys must reference existing models"
    - "Relation targets must exist in data_models"
    - "Enum types must have enum_values defined"

  apis:
    - "Every API must have at least one response defined"
    - "POST/PUT/PATCH must have request_body"
    - "depends_on_models must reference existing models"
    - "Path params must match :param patterns in path"

  pages:
    - "data_needs must reference existing api_ids"
    - "components must reference existing component_ids"
    - "auth.redirect must be a valid path"

  components:
    - "uses_apis must reference existing api_ids"
    - "uses_components must reference existing component_ids"
    - "No circular component dependencies"

  graph:
    - "No circular dependencies in dependency_graph"
    - "All entities must be assigned to a layer"
    - "Layer N items can only depend on Layer < N items"

@ -0,0 +1,364 @@
# Implementation Task Template Schema
# Used by the Architect agent to create implementation tasks
# Tasks are generated from design_document.yml with full context

# ============================================================================
# TASK DEFINITION
# ============================================================================
task:
  # Required fields
  id: task_<type>_<entity>          # e.g., task_create_model_user, task_create_api_users
  type: create | update | delete | refactor | test | review
  title: string                     # Human-readable title
  agent: frontend | backend         # Which agent implements this
  entity_id: string                 # Primary entity ID from design_document
  entity_ids: [string]              # All entity IDs this task covers (for multi-entity tasks)
  status: pending | in_progress | review | approved | completed | blocked

  # Execution layer (from dependency_graph)
  layer: integer                    # Which layer this task belongs to (1, 2, 3...)
  parallel_group: string            # Group ID for parallel execution

  # Optional fields
  description: string               # Detailed implementation notes
  file_paths: [string]              # Files to create/modify
  dependencies: [string]            # Task IDs that must complete first
  acceptance_criteria: [string]     # Checklist for completion
  priority: low | medium | high     # Task priority
  complexity: low | medium | high   # Estimated complexity

  # Tracking (set by system)
  created_at: datetime
  assigned_at: datetime
  completed_at: datetime
  reviewed_by: string
  review_notes: string

# ============================================================================
# CONTEXT SECTION (Passed to subagent)
# ============================================================================
# This is the critical section that provides full context to subagents
# Generated from design_document.yml during task creation

context:
  # Source reference
  design_version: string            # design_document revision
  workflow_version: string          # Workflow version (v001, etc.)
  context_snapshot_path: string     # Path to full context file

  # TARGET: What this task implements
  target:
    entity_id: string               # model_user, api_create_user, etc.
    entity_type: model | api | page | component
    definition: object              # Full definition from design_document

    # For models
    model:
      name: string
      table_name: string
      fields: [field_definition]
      relations: [relation_definition]
      validations: [validation_rule]
      indexes: [index_definition]

    # For APIs
    api:
      method: string
      path: string
      summary: string
      path_params: [param_definition]
      query_params: [param_definition]
      request_body: object
      responses: [response_definition]
      auth: object

    # For pages
    page:
      path: string
      layout: string
      data_needs: [data_requirement]
      components: [string]
      seo: object
      auth: object

    # For components
    component:
      name: string
      props: [prop_definition]
      events: [event_definition]
      uses_apis: [string]
      uses_components: [string]
      variants: [string]

  # DEPENDENCIES: What this task needs
  dependencies:
    # Models this task interacts with
    models:
      - id: string                  # model_user
        definition:
          name: string
          fields: [field_definition]
          relations: [relation_definition]

    # APIs this task needs
    apis:
      - id: string                  # api_get_user
        definition:
          method: string
          path: string
          request_body: object
          responses: [object]

    # Components this task uses
    components:
      - id: string                  # component_button
        definition:
          props: [prop_definition]
          events: [event_definition]

  # CONTRACTS: Input/Output specifications
  contracts:
    # What this task receives from previous tasks
    inputs:
      - from_task: string           # task_create_model_user
        provides: string            # model_user
        type: model | api | component | file

    # What this task provides to later tasks
    outputs:
      - entity_id: string           # api_create_user
        type: model | api | component | file
        consumers: [string]         # [page_user_create, component_user_form]

  # FILES: File operations
  files:
    # Files to create
    create: [string]

    # Files to modify
    modify: [string]

    # Files to read for patterns/context
    reference:
      - path: string
        purpose: string             # "Similar component pattern", "API route pattern"

  # VALIDATION: How to verify completion
  validation:
    # Required checks
    checks:
      - type: file_exists | lint | typecheck | test | build
        target: string              # File or test pattern
        required: boolean

    # Acceptance criteria (human-readable)
    criteria:
      - criterion: string
        verification: string        # How to verify this

  # HINTS: Implementation guidance
  hints:
    # Patterns to follow
    patterns:
      - pattern: string             # "Use existing API route pattern"
        reference: string           # "app/api/health/route.ts"

    # Things to avoid
    avoid:
      - issue: string
        reason: string

    # Code examples
    examples:
      - description: string
        file: string

# ============================================================================
# TASK GENERATION RULES
# ============================================================================
generation_rules:
  from_model:
    task_id: "task_create_model_{model_name}"
    type: create
    agent: backend
    file_paths:
      - "prisma/schema.prisma"              # Add model to schema
      - "app/models/{model_name}.ts"        # TypeScript types
    acceptance_criteria:
      - "Model defined in Prisma schema"
      - "TypeScript types exported"
      - "Relations properly configured"
      - "Migrations generated"

  from_api:
    task_id: "task_create_api_{endpoint_name}"
    type: create
    agent: backend
    file_paths:
      - "app/api/{path}/route.ts"
    acceptance_criteria:
      - "Endpoint responds to {method} requests"
      - "Request validation implemented"
      - "Response matches contract"
      - "Auth requirements enforced"
      - "Error handling complete"

  from_page:
    task_id: "task_create_page_{page_name}"
    type: create
    agent: frontend
    file_paths:
      - "app/{path}/page.tsx"
    acceptance_criteria:
      - "Page renders at {path}"
      - "Data fetching implemented"
      - "Components integrated"
      - "Auth protection active"
      - "SEO metadata set"

  from_component:
    task_id: "task_create_component_{component_name}"
    type: create
    agent: frontend
    file_paths:
      - "app/components/{ComponentName}.tsx"
    acceptance_criteria:
      - "Component renders correctly"
      - "Props typed and documented"
      - "Events emitted properly"
      - "Variants implemented"
      - "Accessible (a11y)"

# ============================================================================
# VALID STATUS TRANSITIONS
# ============================================================================
status_transitions:
  pending:
    - in_progress                   # Start work
    - blocked                       # Dependencies not met
  in_progress:
    - review                        # Ready for review
    - blocked                       # Hit blocker
  review:
    - approved                      # Review passed
    - in_progress                   # Changes requested
  approved:
    - completed                     # Final completion
  blocked:
    - pending                       # Blocker resolved
    - in_progress                   # Resume after unblock
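These transitions are easy to enforce mechanically. A minimal sketch, assuming tasks are plain dicts with a `status` key (the table simply transcribes `status_transitions` above; none of this is an existing module):

```python
# Allowed moves, taken directly from status_transitions above.
TRANSITIONS = {
    "pending":     {"in_progress", "blocked"},
    "in_progress": {"review", "blocked"},
    "review":      {"approved", "in_progress"},
    "approved":    {"completed"},
    "blocked":     {"pending", "in_progress"},
}

def transition(task: dict, new_status: str) -> None:
    """Mutate task['status'], refusing moves the schema does not allow."""
    current = task["status"]
    if new_status not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new_status}")
    task["status"] = new_status
```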
# ============================================================================
# EXAMPLE: Complete Task with Context
# ============================================================================
example_task:
  id: task_create_api_create_user
  type: create
  title: "Create User API Endpoint"
  agent: backend
  entity_id: api_create_user
  entity_ids: [api_create_user]
  status: pending
  layer: 2
  parallel_group: "layer_2_apis"
  description: "Implement POST /api/users endpoint for user registration"
  file_paths:
    - app/api/users/route.ts
  dependencies:
    - task_create_model_user
  acceptance_criteria:
    - "POST /api/users returns 201 on success"
    - "Validates email format"
    - "Returns 409 if email exists"
    - "Hashes password before storage"
    - "Returns user object without password"
  priority: high
  complexity: medium

  context:
    design_version: "rev_3"
    workflow_version: "v001"
    context_snapshot_path: ".workflow/versions/v001/contexts/api_create_user.yml"

    target:
      entity_id: api_create_user
      entity_type: api
      api:
        method: POST
        path: /api/users
        summary: "Create a new user"
        request_body:
          type: object
          properties:
            email: { type: string, required: true, validation: email }
            name: { type: string, required: true, validation: "min:1,max:100" }
            password: { type: string, required: true, validation: "min:8" }
        responses:
          - status: 201
            schema: { id: uuid, email: string, name: string, created_at: datetime }
          - status: 400
            schema: { error: string, details: array }
          - status: 409
            schema: { error: string }
        auth:
          required: false

    dependencies:
      models:
        - id: model_user
          definition:
            name: User
            table_name: users
            fields:
              - { name: id, type: uuid, constraints: [primary_key] }
              - { name: email, type: string, constraints: [unique, not_null] }
              - { name: name, type: string, constraints: [not_null] }
              - { name: password_hash, type: string, constraints: [not_null] }
              - { name: created_at, type: datetime, constraints: [not_null] }

    contracts:
      inputs:
        - from_task: task_create_model_user
          provides: model_user
          type: model
      outputs:
        - entity_id: api_create_user
          type: api
          consumers: [page_signup, component_signup_form]

    files:
      create:
        - app/api/users/route.ts
      reference:
        - path: app/api/health/route.ts
          purpose: "API route pattern"
        - path: app/lib/db.ts
          purpose: "Database connection"
        - path: app/lib/auth.ts
          purpose: "Password hashing"

    validation:
      checks:
        - { type: typecheck, target: "app/api/users/route.ts", required: true }
        - { type: lint, target: "app/api/users/route.ts", required: true }
        - { type: test, target: "app/api/users/*.test.ts", required: false }
      criteria:
        - criterion: "Returns 201 with user object on success"
          verification: "curl -X POST /api/users with valid data"
        - criterion: "Returns 409 if email exists"
          verification: "curl -X POST /api/users with duplicate email"

    hints:
      patterns:
        - pattern: "Use NextResponse for responses"
          reference: "app/api/health/route.ts"
        - pattern: "Use Prisma for database operations"
          reference: "app/lib/db.ts"
      avoid:
        - issue: "Don't store plain text passwords"
          reason: "Security vulnerability - always hash with bcrypt"
        - issue: "Don't return password_hash in response"
          reason: "Sensitive data exposure"
      examples:
        - description: "Similar API endpoint"
          file: "app/api/health/route.ts"
@ -0,0 +1,116 @@
# Workflow State Schema
# Tracks automated workflow progress with approval gates

workflow_state:
  # Unique workflow run ID
  id: string                        # workflow_<timestamp>

  # Feature/task being implemented
  feature: string

  # Current phase in the workflow
  current_phase:
    enum:
      - INITIALIZING              # Starting workflow
      - DESIGNING                 # Architect creating entities/tasks
      - AWAITING_DESIGN_APPROVAL  # Gate 1: User approval needed
      - DESIGN_APPROVED           # User approved design
      - DESIGN_REJECTED           # User rejected, needs revision
      - IMPLEMENTING              # Frontend/Backend working
      - REVIEWING                 # Reviewer checking implementation
      - AWAITING_IMPL_APPROVAL    # Gate 2: User approval needed
      - IMPL_APPROVED             # User approved implementation
      - IMPL_REJECTED             # User rejected, needs fixes
      - COMPLETING                # Marking tasks as done
      - COMPLETED                 # Workflow finished
      - PAUSED                    # User paused workflow
      - FAILED                    # Workflow encountered error

  # Approval gates status
  gates:
    design_approval:
      status: pending | approved | rejected
      approved_at: timestamp | null
      approved_by: string | null
      rejection_reason: string | null
      revision_count: integer

    implementation_approval:
      status: pending | approved | rejected
      approved_at: timestamp | null
      approved_by: string | null
      rejection_reason: string | null
      revision_count: integer

  # Progress tracking
  progress:
    entities_designed: integer
    tasks_created: integer
    tasks_implemented: integer
    tasks_reviewed: integer
    tasks_approved: integer
    tasks_completed: integer

  # Task tracking
  tasks:
    pending: [task_id]
    in_progress: [task_id]
    review: [task_id]
    approved: [task_id]
    completed: [task_id]
    blocked: [task_id]

  # Timestamps
  started_at: timestamp
  updated_at: timestamp
  completed_at: timestamp | null

  # Error tracking
  last_error: string | null

  # Resumability
  resume_point:
    phase: string
    task_id: string | null
    action: string                  # What to do when resuming
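As an illustration, resuming a paused run only needs the `resume_point` block. A minimal sketch, assuming the state file is YAML on disk and PyYAML is available; the branch bodies are placeholders for the real workflow-manager handlers:

```python
import yaml  # assumes PyYAML is installed

def resume_workflow(state_path: str) -> None:
    """Re-enter a workflow at its recorded resume_point (illustrative only)."""
    with open(state_path) as f:
        state = yaml.safe_load(f)

    point = state["resume_point"]
    if point["action"] == "await_user_approval":
        print(f"Waiting for approval in phase {point['phase']}")
    elif point["task_id"]:
        print(f"Resuming task {point['task_id']} in phase {point['phase']}")
    else:
        print(f"Re-entering phase {point['phase']}")
```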
# Example workflow state file
example:
  id: workflow_20250116_143022
  feature: "User authentication with OAuth"
  current_phase: AWAITING_DESIGN_APPROVAL
  gates:
    design_approval:
      status: pending
      approved_at: null
      approved_by: null
      rejection_reason: null
      revision_count: 0
    implementation_approval:
      status: pending
      approved_at: null
      approved_by: null
      rejection_reason: null
      revision_count: 0
  progress:
    entities_designed: 5
    tasks_created: 8
    tasks_implemented: 0
    tasks_reviewed: 0
    tasks_approved: 0
    tasks_completed: 0
  tasks:
    pending: [task_create_LoginPage, task_create_AuthAPI]
    in_progress: []
    review: []
    approved: []
    completed: []
    blocked: []
  started_at: "2025-01-16T14:30:22Z"
  updated_at: "2025-01-16T14:35:00Z"
  completed_at: null
  last_error: null
  resume_point:
    phase: AWAITING_DESIGN_APPROVAL
    task_id: null
    action: "await_user_approval"
@ -0,0 +1,323 @@
# Workflow Versioning Schema
# Links workflow sessions with task sessions and operations

# ============================================================================
# WORKFLOW SESSION (Top Level)
# ============================================================================
workflow_session:
  # Unique version identifier
  version: string                   # v001, v002, v003...

  # Feature being implemented
  feature: string

  # Session metadata
  session_id: string                # workflow_<timestamp>
  parent_version: string | null     # If this is a continuation/fix

  # Status
  status: pending | in_progress | completed | failed | rolled_back

  # Timestamps
  started_at: timestamp
  completed_at: timestamp | null

  # Approval records
  approvals:
    design:
      status: pending | approved | rejected
      approved_by: string | null
      approved_at: timestamp | null
      rejection_reason: string | null
    implementation:
      status: pending | approved | rejected
      approved_by: string | null
      approved_at: timestamp | null
      rejection_reason: string | null

  # Linked task sessions
  task_sessions: [task_session_id]

  # Aggregate summary
  summary:
    total_tasks: integer
    tasks_completed: integer
    entities_created: integer
    entities_updated: integer
    entities_deleted: integer
    files_created: integer
    files_updated: integer
    files_deleted: integer

# ============================================================================
# TASK SESSION (Per Task)
# ============================================================================
task_session:
  # Unique identifier
  session_id: string                # tasksession_<task_id>_<timestamp>

  # Link to parent workflow
  workflow_version: string          # v001

  # Task reference
  task_id: string
  task_type: create | update | delete | refactor | test

  # Agent info
  agent: frontend | backend | reviewer | architect

  # Timestamps
  started_at: timestamp
  completed_at: timestamp | null
  duration_ms: integer | null

  # Status
  status: pending | in_progress | review | approved | completed | failed | blocked

  # Operations performed in this session
  operations: [operation]

  # Review link (if reviewed)
  review_session: review_session | null

  # Error tracking
  errors: [error_record]

  # Retry info
  attempt_number: integer           # 1, 2, 3...
  previous_attempts: [session_id]

# ============================================================================
# OPERATION (Atomic Change)
# ============================================================================
operation:
  # Unique operation ID
  id: string                        # op_<timestamp>_<sequence>

  # Operation type
  type: CREATE | UPDATE | DELETE | RENAME | MOVE

  # Target
  target_type: file | entity | task | manifest
  target_id: string                 # entity_id or file path
  target_path: string | null        # file path if applicable

  # Change details
  changes:
    before: string | null           # Previous state/content hash
    after: string | null            # New state/content hash
    diff_summary: string            # Human-readable summary

  # Timestamp
  performed_at: timestamp

  # Reversibility
  reversible: boolean
  rollback_data: object | null      # Data needed to reverse
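The `reversible`/`rollback_data` pair is what makes an operation individually undoable. A minimal sketch of a rollback dispatcher, assuming the two `action` values used in the examples later in this schema (`delete_file` and `restore_content`); `load_by_hash` is a hypothetical hook into a content store:

```python
import os

def rollback(operation: dict, load_by_hash) -> None:
    """Undo a single operation using its rollback_data (illustrative sketch)."""
    if not operation.get("reversible"):
        raise ValueError(f"operation {operation['id']} is not reversible")

    data = operation["rollback_data"]
    if data["action"] == "delete_file":
        # A CREATE is undone by removing the file it created.
        os.remove(data["path"])
    elif data["action"] == "restore_content":
        # An UPDATE is undone by restoring the prior content;
        # load_by_hash is an assumed lookup from hash to stored content.
        content = load_by_hash(data["content_hash"])
        with open(operation["target_path"], "w") as f:
            f.write(content)
    else:
        raise ValueError(f"unknown rollback action {data['action']}")
```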
# ============================================================================
# REVIEW SESSION
# ============================================================================
review_session:
  session_id: string                # review_<task_id>_<timestamp>

  # Links
  task_session_id: string
  workflow_version: string

  # Reviewer
  reviewer: string                  # "reviewer" agent or user

  # Timing
  started_at: timestamp
  completed_at: timestamp

  # Decision
  decision: approved | rejected | needs_changes

  # Checks performed
  checks:
    file_exists: pass | fail | skip
    manifest_compliance: pass | fail | skip
    code_quality: pass | fail | skip
    lint: pass | fail | skip
    build: pass | fail | skip
    tests: pass | fail | skip

  # Feedback
  notes: string
  issues_found: [string]
  suggestions: [string]

# ============================================================================
# ERROR RECORD
# ============================================================================
error_record:
  timestamp: timestamp
  phase: string                     # Which step failed
  error_type: string
  message: string
  stack_trace: string | null
  resolved: boolean
  resolution: string | null

# ============================================================================
# VERSION INDEX (Quick Lookup)
# ============================================================================
version_index:
  versions:
    - version: v001
      feature: "User authentication"
      status: completed
      started_at: timestamp
      completed_at: timestamp
      tasks_count: 8
      operations_count: 15
    - version: v002
      feature: "Task filters"
      status: in_progress
      started_at: timestamp
      completed_at: null
      tasks_count: 5
      operations_count: 7

  latest_version: v002
  total_versions: 2
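Registering a finished run is just a matter of keeping the index's lookup fields consistent. A minimal sketch, assuming the index lives at `.workflow/index.yml` (listed under Additional Resources in the migration guide) and PyYAML is available:

```python
import yaml  # assumes PyYAML is installed

def register_version(index_path: str, entry: dict) -> None:
    """Append a version entry and update the lookup fields (illustrative)."""
    with open(index_path) as f:
        index = yaml.safe_load(f) or {"versions": []}

    index["versions"].append(entry)
    index["latest_version"] = entry["version"]
    index["total_versions"] = len(index["versions"])

    with open(index_path, "w") as f:
        yaml.dump(index, f, sort_keys=False)
```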
# ============================================================================
# TASK SESSION DIRECTORY STRUCTURE
# ============================================================================
task_session_directory:
  description: "Each task session has its own directory with full context"
  path_pattern: ".workflow/versions/{version}/task_sessions/{task_id}/"

  files:
    session.yml:
      description: "Task session metadata (existing schema)"
      schema: task_session

    task.yml:
      description: "Snapshot of task definition at execution time"
      fields:
        id: string
        type: create | update | delete | refactor | test
        title: string
        agent: frontend | backend | reviewer | architect
        status_at_snapshot: string
        entity_ids: [string]
        file_paths: [string]
        dependencies: [string]
        description: string
        acceptance_criteria: [string]
        snapshotted_at: timestamp
        source_path: string

    operations.log:
      description: "Chronological audit trail of all operations"
      format: text
      entry_pattern: "[{timestamp}] {operation_type} {target_type}: {target_id} ({path})"

# ============================================================================
# EXAMPLE: Complete Workflow Session
# ============================================================================
example_workflow_session:
  version: v001
  feature: "User authentication with OAuth"
  session_id: workflow_20250116_143022
  parent_version: null
  status: completed
  started_at: "2025-01-16T14:30:22Z"
  completed_at: "2025-01-16T15:45:00Z"

  approvals:
    design:
      status: approved
      approved_by: user
      approved_at: "2025-01-16T14:45:00Z"
      rejection_reason: null
    implementation:
      status: approved
      approved_by: user
      approved_at: "2025-01-16T15:40:00Z"
      rejection_reason: null

  task_sessions:
    - tasksession_task_create_LoginPage_20250116_144501
    - tasksession_task_create_AuthAPI_20250116_145001
    - tasksession_task_update_Header_20250116_150001

  summary:
    total_tasks: 3
    tasks_completed: 3
    entities_created: 2
    entities_updated: 1
    entities_deleted: 0
    files_created: 3
    files_updated: 2
    files_deleted: 0

example_task_session:
  session_id: tasksession_task_create_LoginPage_20250116_144501
  workflow_version: v001
  task_id: task_create_LoginPage
  task_type: create
  agent: frontend
  started_at: "2025-01-16T14:45:01Z"
  completed_at: "2025-01-16T14:55:00Z"
  duration_ms: 599000
  status: completed

  operations:
    - id: op_20250116_144502_001
      type: CREATE
      target_type: file
      target_id: page_login
      target_path: app/login/page.tsx
      changes:
        before: null
        after: "sha256:abc123..."
        diff_summary: "Created login page with email/password form"
      performed_at: "2025-01-16T14:45:02Z"
      reversible: true
      rollback_data:
        action: delete_file
        path: app/login/page.tsx

    - id: op_20250116_144503_002
      type: UPDATE
      target_type: manifest
      target_id: project_manifest
      target_path: project_manifest.json
      changes:
        before: "sha256:def456..."
        after: "sha256:ghi789..."
        diff_summary: "Added page_login entity, set status to IMPLEMENTED"
      performed_at: "2025-01-16T14:45:03Z"
      reversible: true
      rollback_data:
        action: restore_content
        content_hash: "sha256:def456..."

  review_session:
    session_id: review_task_create_LoginPage_20250116_145501
    task_session_id: tasksession_task_create_LoginPage_20250116_144501
    workflow_version: v001
    reviewer: reviewer
    started_at: "2025-01-16T14:55:01Z"
    completed_at: "2025-01-16T14:58:00Z"
    decision: approved
    checks:
      file_exists: pass
      manifest_compliance: pass
      code_quality: pass
      lint: pass
      build: pass
      tests: skip
    notes: "Login page implementation matches manifest spec"
    issues_found: []
    suggestions:
      - "Consider adding loading state for form submission"

  errors: []
  attempt_number: 1
  previous_attempts: []
@ -0,0 +1,274 @@
# Task Session Migration Guide

## Overview

This guide explains how to migrate task sessions from the old flat-file structure to the new directory-based structure.

## Background

### Old Structure (Flat Files)
```
.workflow/versions/v001/task_sessions/
├── task_design.yml
├── task_implementation.yml
└── task_review.yml
```

### New Structure (Directories)
```
.workflow/versions/v001/task_sessions/
├── task_design/
│   ├── session.yml        # Session data (execution info)
│   ├── task.yml           # Task snapshot (definition at execution time)
│   └── operations.log     # Human-readable operation log
├── task_implementation/
│   ├── session.yml
│   ├── task.yml
│   └── operations.log
└── task_review/
    ├── session.yml
    ├── task.yml
    └── operations.log
```

## Benefits of New Structure

1. **Better Organization**: Each task session has its own directory
2. **Snapshot Preservation**: Task definitions are captured at execution time
3. **Human-Readable Logs**: The operations log provides an easy-to-read history
4. **Extensibility**: Easy to add attachments, artifacts, or outputs per task
5. **Backwards Compatible**: Old code can still read from either structure

## Migration Script

### Location
```
skills/guardrail-orchestrator/scripts/migrate_task_sessions.py
```

### Usage

#### Dry Run (Recommended First Step)
```bash
python3 skills/guardrail-orchestrator/scripts/migrate_task_sessions.py --dry-run
```

This will:
- Find all flat task session files
- Report what would be migrated
- Show the actions that would be taken
- **NOT make any changes**

#### Live Migration
```bash
python3 skills/guardrail-orchestrator/scripts/migrate_task_sessions.py
```

This will:
- Create a directory for each task session
- Move session data to `session.yml`
- Create a `task.yml` snapshot
- Generate `operations.log`
- Delete the original flat files

## Migration Process

### What the Script Does

For each flat task session file (e.g., `task_design.yml`):

1. **Create Directory**: `task_sessions/task_design/`

2. **Move Session Data**:
   - Read the original `task_design.yml`
   - Save it to `task_design/session.yml`
   - Delete the original file

3. **Create Task Snapshot**:
   - Look for `tasks/task_design.yml`
   - If found: copy it and add snapshot metadata
   - If not found: create a minimal `task.yml` from session data
   - Save to `task_design/task.yml`

4. **Create Operations Log**:
   - Initialize `task_design/operations.log`
   - Add a migration note
   - If the session has an operations array, convert it to log format (see the sketch below)
   - Human-readable format with timestamps
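A minimal sketch of that conversion step, assuming each entry follows the `operation` schema from the versioning document; this approximates what the script does rather than quoting it:

```python
def operations_to_log_lines(operations: list[dict]) -> list[str]:
    """Render a session's operations array in the operations.log entry format."""
    lines = []
    for op in operations:
        path = op.get("target_path") or "-"
        lines.append(
            f"[{op['performed_at']}] {op['type']} {op['target_type']}: "
            f"{op['target_id']} ({path})"
        )
        summary = op.get("changes", {}).get("diff_summary")
        if summary:
            lines.append(f"  Summary: {summary}")
    return lines
```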
### Task Snapshot Metadata

When a task definition is found, these fields are added:
```yaml
snapshotted_at: '2025-12-16T12:00:00'
source_path: 'tasks/task_design.yml'
status_at_snapshot: 'completed'
migration_note: 'Created during migration from flat file structure'
```

### Operations Log Format

```
# Operations Log for task_design
# Migrated: 2025-12-16T12:00:00
# Format: [timestamp] OPERATION target_type: target_id (path)
======================================================================

[2025-12-16T12:00:00] MIGRATION: Converted from flat file structure

# Historical operations from session data:
[2025-12-16T11:00:00] CREATE file: auth.ts (app/lib/auth.ts)
  Summary: Created authentication module

[2025-12-16T11:15:00] UPDATE entity: User (app/lib/types.ts)
  Summary: Added email field to User type
```

## Migration Results

### Success Output
```
======================================================================
Migration Summary
======================================================================
Total files processed: 3
Successful migrations: 3
Failed migrations: 0

Migration completed successfully!

Next steps:
1. Verify migrated files in .workflow/versions/*/task_sessions/
2. Check that each task has session.yml, task.yml, and operations.log
3. Test the system to ensure compatibility
```

### Dry Run Output
```
Processing: v001/task_design.yml
----------------------------------------------------------------------
Would create directory: .workflow/versions/v001/task_sessions/task_design
Would move task_design.yml to .workflow/versions/v001/task_sessions/task_design/session.yml
Would create .workflow/versions/v001/task_sessions/task_design/task.yml (if source exists)
Would create .workflow/versions/v001/task_sessions/task_design/operations.log

This was a DRY RUN. No files were modified.
Run without --dry-run to perform the migration.
```

## Verification Steps

After migration, verify the structure:

```bash
# Check directory structure
ls -la .workflow/versions/v001/task_sessions/task_design/

# Should show:
# session.yml
# task.yml
# operations.log

# Verify session data
cat .workflow/versions/v001/task_sessions/task_design/session.yml

# Verify task snapshot
cat .workflow/versions/v001/task_sessions/task_design/task.yml

# Check operations log
cat .workflow/versions/v001/task_sessions/task_design/operations.log
```

## Backwards Compatibility

The `version_manager.py` module includes backwards-compatible loading:

```python
def load_task_session(version: str, task_id: str) -> Optional[dict]:
    """Load a task session from directory or flat file (backwards compatible)."""
    # Try the new directory structure first
    session_dir = get_version_dir(version) / 'task_sessions' / task_id
    session_path = session_dir / 'session.yml'

    if session_path.exists():
        return load_yaml(str(session_path))

    # Fall back to the old flat-file structure
    old_path = get_version_dir(version) / 'task_sessions' / f'{task_id}.yml'
    if old_path.exists():
        return load_yaml(str(old_path))

    return None
```

This means:
- New code works with both structures
- No breaking changes for existing workflows
- Migration can be done gradually
- Rollback is possible if needed

## Troubleshooting

### No Files Found
If the script reports "No flat task session files found":
- Check that `.workflow/versions/` exists
- Verify that task sessions are in the expected location
- Confirm files have a `.yml` or `.yaml` extension
- It may also mean all sessions are already migrated

### Task File Not Found
If `tasks/task_id.yml` doesn't exist:
- The script creates a minimal task.yml from session data
- A warning is logged, but migration continues
- Check that `task.yml` has a `migration_note` field

### Migration Errors
If migration fails:
- Review the error message in the output
- Check file permissions
- Verify disk space
- Try dry-run mode to diagnose

### Rollback (If Needed)
To roll back a migration:
1. Stop any running workflows
2. For each migrated directory:
```bash
# Copy session.yml back to a flat file
cp .workflow/versions/v001/task_sessions/task_design/session.yml \
   .workflow/versions/v001/task_sessions/task_design.yml

# Remove the directory
rm -rf .workflow/versions/v001/task_sessions/task_design/
```

## Best Practices

1. **Always dry-run first**: Use `--dry-run` to preview changes
2. **Backup before migration**: Copy the `.workflow/` directory
3. **Migrate per version**: Test one version before migrating all
4. **Verify after migration**: Check the files and run system tests
5. **Keep old backups**: Don't delete backups immediately

## Integration with Workflow System

After migration, all workflow operations work seamlessly:

```python
# Start a task session (creates the directory structure)
session = create_workflow_session("new feature", None)
task_session = create_task_session(session, "task_api", "create", "backend")

# Load a task session (works with both structures)
task = load_task_session("v001", "task_design")

# Log operations (appends to operations.log)
log_operation(task, "CREATE", "file", "api.ts", target_path="app/api/api.ts")
```

## Additional Resources

- `version_manager.py`: Core versioning system
- `workflow_manager.py`: Workflow orchestration
- `.workflow/operations.log`: Global operations log
- `.workflow/index.yml`: Version index
@ -0,0 +1,486 @@
#!/usr/bin/env python3
"""Analyze codebase and generate project manifest from existing code."""

import argparse
import json
import os
import re
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional


def find_files(base_path: str, pattern: str) -> list[str]:
    """Find files matching a glob pattern."""
    base = Path(base_path)
    return [str(p.relative_to(base)) for p in base.glob(pattern)]


def read_file(filepath: str) -> str:
    """Read file contents."""
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            return f.read()
    except Exception:
        return ""


def extract_component_name(filepath: str) -> str:
    """Extract the component name from a file path."""
    name = Path(filepath).stem
    return name


def to_snake_case(name: str) -> str:
    """Convert PascalCase to snake_case."""
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()


def extract_props_from_content(content: str) -> dict:
    """Extract the props interface from component content."""
    props = {}

    # Look for `interface Props` or `type Props`
    interface_match = re.search(
        r'(?:interface|type)\s+\w*Props\w*\s*(?:=\s*)?\{([^}]+)\}',
        content,
        re.DOTALL
    )

    if interface_match:
        props_block = interface_match.group(1)
        # Parse individual props
        prop_matches = re.findall(
            r'(\w+)(\?)?:\s*([^;,\n]+)',
            props_block
        )
        for name, optional, prop_type in prop_matches:
            props[name] = {
                "type": prop_type.strip(),
                "optional": bool(optional)
            }

    return props


def extract_imports(content: str) -> list[str]:
    """Extract component imports from a file."""
    imports = []

    # Look for named imports from the components directory
    import_matches = re.findall(
        r"import\s+\{?\s*([^}]+)\s*\}?\s+from\s+['\"]\.\.?/components/(\w+)['\"]",
        content
    )

    for _imported, component in import_matches:
        imports.append(component)

    # Also check for default component imports
    direct_imports = re.findall(
        r"import\s+(\w+)\s+from\s+['\"]\.\.?/components/(\w+)['\"]",
        content
    )

    for _imported, component in direct_imports:
        imports.append(component)

    return list(set(imports))


def extract_api_methods(content: str) -> list[str]:
    """Extract HTTP methods from an API route file."""
    methods = []
    method_patterns = ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'HEAD', 'OPTIONS']

    for method in method_patterns:
        if re.search(rf'export\s+(?:async\s+)?function\s+{method}\s*\(', content):
            methods.append(method)

    return methods


def extract_fetch_calls(content: str) -> list[str]:
    """Extract API fetch calls from content."""
    apis = []

    # Look for fetch('/api/...') patterns - handles static paths
    fetch_matches = re.findall(
        r"fetch\s*\(\s*['\"`]/api/([^'\"`\?\$\{]+)",
        content
    )
    apis.extend(fetch_matches)

    # Look for fetch(`/api/tasks`) or similar template literals with static paths
    template_matches = re.findall(
        r"fetch\s*\(\s*`/api/(\w+)`",
        content
    )
    apis.extend(template_matches)

    # Clean up: remove trailing slashes and normalize
    cleaned = []
    for api in apis:
        api = api.rstrip('/')
        if api and not api.startswith('$'):
            cleaned.append(api)

    return list(set(cleaned))


def extract_types_from_db(content: str) -> dict:
    """Extract type definitions from db.ts or similar."""
    types = {}

    # Extract interfaces
    interface_matches = re.findall(
        r'export\s+interface\s+(\w+)\s*\{([^}]+)\}',
        content,
        re.DOTALL
    )

    for name, body in interface_matches:
        fields = {}
        field_matches = re.findall(r'(\w+)(\?)?:\s*([^;,\n]+)', body)
        for field_name, _optional, field_type in field_matches:
            fields[field_name] = field_type.strip()
        types[name] = fields

    # Extract type aliases
    type_matches = re.findall(
        r"export\s+type\s+(\w+)\s*=\s*([^;]+);",
        content
    )

    for name, type_def in type_matches:
        types[name] = type_def.strip()

    return types


def path_to_route(filepath: str) -> str:
    """Convert a file path to a route path."""
    # Remove the app/ prefix and the page.tsx suffix
    route = filepath.replace('app/', '').replace('/page.tsx', '').replace('page.tsx', '')

    if route == '' or route == '/':
        return '/'

    # Dynamic segments like [id] are kept as-is in the route

    # Ensure the route starts with /
    if not route.startswith('/'):
        route = '/' + route

    return route


def analyze_pages(base_path: str) -> list[dict]:
    """Analyze all page files."""
    pages = []
    page_files = find_files(base_path, 'app/**/page.tsx')

    for filepath in page_files:
        full_path = os.path.join(base_path, filepath)
        content = read_file(full_path)

        route = path_to_route(filepath)

        # Generate the page ID
        if route == '/' or filepath == 'app/page.tsx':
            page_id = 'page_home'
            name = 'Home'
            route = '/'
        else:
            name = route.strip('/').replace('/', '_').replace('[', '').replace(']', '')
            page_id = f'page_{name}'

        # Extract component imports
        components = extract_imports(content)
        comp_ids = [f"comp_{to_snake_case(c)}" for c in components]

        # Extract API dependencies
        api_calls = extract_fetch_calls(content)
        api_ids = [f"api_{a.replace('/', '_')}" for a in api_calls]

        pages.append({
            "id": page_id,
            "path": route,
            "file_path": filepath,
            "status": "IMPLEMENTED",
            "description": f"Page at {route}",
            "components": comp_ids,
            "data_dependencies": api_ids
        })

    return pages


def analyze_components(base_path: str) -> list[dict]:
    """Analyze all component files."""
    components = []
    component_files = find_files(base_path, 'app/components/*.tsx')

    for filepath in component_files:
        full_path = os.path.join(base_path, filepath)
        content = read_file(full_path)

        name = extract_component_name(filepath)
        comp_id = f"comp_{to_snake_case(name)}"

        # Extract props
        props = extract_props_from_content(content)

        components.append({
            "id": comp_id,
            "name": name,
            "file_path": filepath,
            "status": "IMPLEMENTED",
            "description": f"{name} component",
            "props": props
        })

    return components


def analyze_apis(base_path: str) -> list[dict]:
    """Analyze all API route files."""
    apis = []
    api_files = find_files(base_path, 'app/api/**/route.ts')

    for filepath in api_files:
        full_path = os.path.join(base_path, filepath)
        content = read_file(full_path)

        # Derive the URL path from the file location
        path = '/' + filepath.replace('app/', '').replace('/route.ts', '')

        # Extract HTTP methods
        methods = extract_api_methods(content)

        for method in methods:
            # Generate an action name from the method
            action_map = {
                'GET': 'list' if '[' not in path else 'get',
                'POST': 'create',
                'PUT': 'update',
                'DELETE': 'delete',
                'PATCH': 'patch'
            }
            action = action_map.get(method, method.lower())

            # Generate a resource name from the path
            resource = path.replace('/api/', '').replace('/', '_').replace('[', '').replace(']', '')
            if not resource:
                resource = 'root'

            api_id = f"api_{action}_{resource}"

            apis.append({
                "id": api_id,
                "path": path,
                "method": method,
                "file_path": filepath,
                "status": "IMPLEMENTED",
                "description": f"{method} {path}",
                "request": {},
                "response": {
                    "type": "object",
                    "description": "Response data"
                }
            })

    return apis


def analyze_database(base_path: str) -> tuple[list[dict], dict]:
    """Analyze database/type files."""
    tables = []
    types = {}

    # Check for a db.ts file
    db_path = os.path.join(base_path, 'app/lib/db.ts')
    if os.path.exists(db_path):
        content = read_file(db_path)
        types = extract_types_from_db(content)

        # Look for table/collection definitions
        if 'tasks' in content.lower():
            tables.append({
                "id": "table_tasks",
                "name": "tasks",
                "file_path": "app/lib/db.ts",
                "status": "IMPLEMENTED",
                "description": "Tasks storage",
                "columns": types.get('Task', {})
            })

    return tables, types


def build_dependencies(pages: list, components: list, apis: list) -> dict:
    """Build dependency mappings."""
    component_to_page = {}

    # Map each component to the pages that use it
    for page in pages:
        for comp_id in page.get('components', []):
            if comp_id not in component_to_page:
                component_to_page[comp_id] = []
            component_to_page[comp_id].append(page['id'])

    # API-to-component mapping would require deeper analysis;
    # for now it is left empty and implied by page dependencies.

    return {
        "component_to_page": component_to_page,
        "api_to_component": {},
        "table_to_api": {}
    }


def generate_manifest(
    base_path: str,
    project_name: Optional[str] = None
) -> dict:
    """Generate the complete project manifest."""

    # Determine the project name
    if not project_name:
        # Try to get it from package.json
        pkg_path = os.path.join(base_path, 'package.json')
        if os.path.exists(pkg_path):
            try:
                with open(pkg_path) as f:
                    pkg = json.load(f)
                project_name = pkg.get('name', Path(base_path).name)
            except Exception:
                project_name = Path(base_path).name
        else:
            project_name = Path(base_path).name

    # Analyze the codebase
    pages = analyze_pages(base_path)
    components = analyze_components(base_path)
    apis = analyze_apis(base_path)
    tables, types = analyze_database(base_path)
    dependencies = build_dependencies(pages, components, apis)

    now = datetime.now().isoformat()

    manifest = {
        "project": {
            "name": project_name,
            "version": "1.0.0",
            "created_at": now,
            "description": f"Project manifest for {project_name}"
        },
        "state": {
            "current_phase": "IMPLEMENTATION_PHASE",
            "approval_status": {
                "manifest_approved": True,
                "approved_by": "analyzer",
                "approved_at": now
            },
            "revision_history": [
                {
                    "action": "MANIFEST_GENERATED",
                    "timestamp": now,
                    "details": "Generated from existing codebase analysis"
                }
            ]
        },
        "entities": {
            "pages": pages,
            "components": components,
            "api_endpoints": apis,
            "database_tables": tables
        },
        "dependencies": dependencies,
        "types": types
    }

    return manifest


def main():
    parser = argparse.ArgumentParser(
        description='Analyze codebase and generate project manifest'
    )
    parser.add_argument(
        '--path',
        default='.',
        help='Path to project root'
    )
    parser.add_argument(
        '--name',
        help='Project name (defaults to package.json name or directory name)'
    )
    parser.add_argument(
        '--output',
        default='project_manifest.json',
        help='Output file path'
    )
    parser.add_argument(
        '--dry-run',
        action='store_true',
        help='Print manifest without writing to file'
    )
    parser.add_argument(
        '--force',
        action='store_true',
        help='Overwrite existing manifest'
    )

    args = parser.parse_args()

    base_path = os.path.abspath(args.path)
    output_path = os.path.join(base_path, args.output)

    # Refuse to clobber an existing manifest
    if os.path.exists(output_path) and not args.force and not args.dry_run:
        print(f"Error: {args.output} already exists. Use --force to overwrite.")
        sys.exit(1)

    print(f"Analyzing codebase at: {base_path}")
    print()

    # Generate the manifest
    manifest = generate_manifest(base_path, args.name)

    # Count entities
    pages = len(manifest['entities']['pages'])
    components = len(manifest['entities']['components'])
    apis = len(manifest['entities']['api_endpoints'])
    tables = len(manifest['entities']['database_tables'])

    if args.dry_run:
        print(json.dumps(manifest, indent=2))
    else:
        with open(output_path, 'w') as f:
            json.dump(manifest, f, indent=2)
        print(f"Manifest written to: {output_path}")

    print()
    print("╔══════════════════════════════════════════════════════════════╗")
    print("║ 📊 MANIFEST GENERATED                                        ║")
    print("╠══════════════════════════════════════════════════════════════╣")
    print(f"║ Project: {manifest['project']['name']:<51} ║")
    print("╠══════════════════════════════════════════════════════════════╣")
    print("║ ENTITIES DISCOVERED                                          ║")
    print(f"║ 📄 Pages:       {pages:<45} ║")
    print(f"║ 🧩 Components:  {components:<45} ║")
    print(f"║ 🔌 APIs:        {apis:<45} ║")
    print(f"║ 🗄️  Tables:      {tables:<45} ║")
    print("╠══════════════════════════════════════════════════════════════╣")
    print("║ Status: All entities marked as IMPLEMENTED                  ║")
    print("║ Phase:  IMPLEMENTATION_PHASE                                 ║")
    print("╚══════════════════════════════════════════════════════════════╝")


if __name__ == '__main__':
    main()
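A few illustrative sanity checks for the naming and route helpers above; the expected values follow from the regexes as written, assuming the snippet runs in the same module:

```python
# Quick checks of the helpers (illustrative, not part of the script):
assert to_snake_case('TaskList') == 'task_list'
assert path_to_route('app/page.tsx') == '/'
assert path_to_route('app/tasks/[id]/page.tsx') == '/tasks/[id]'
```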
@ -0,0 +1,615 @@
|
||||||
|
#!/usr/bin/env python3
"""
Generate API Contract and Shared Types from Design Document

This script:
1. Reads the design_document.yml
2. Extracts all API endpoints and their types
3. Generates api_contract.yml with strict typing
4. Generates app/types/api.ts with shared TypeScript interfaces

Both frontend and backend agents MUST use these generated files
to ensure contract compliance.
"""

import os
import sys
import json
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Any, Optional, Set

try:
    import yaml
except ImportError:
    yaml = None


def load_yaml(path: Path) -> Dict:
    """Load YAML file."""
    if yaml:
        with open(path) as f:
            return yaml.safe_load(f)
    else:
        # Fallback: simple YAML parser for basic cases
        with open(path) as f:
            content = f.read()
        # Try JSON first (YAML is a superset of JSON)
        try:
            return json.loads(content)
        except (json.JSONDecodeError, ValueError):
            print("Warning: yaml module not available, using basic parser", file=sys.stderr)
            return {}


def save_yaml(data: Dict, path: Path) -> None:
    """Save data as YAML."""
    if yaml:
        with open(path, 'w') as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
    else:
        # Fallback: JSON format
        with open(path, 'w') as f:
            json.dump(data, f, indent=2)


def ts_type_from_field(field: Dict) -> str:
    """Convert design document field type to TypeScript type."""
    type_map = {
        'string': 'string',
        'text': 'string',
        'integer': 'number',
        'float': 'number',
        'decimal': 'number',
        'boolean': 'boolean',
        'datetime': 'Date',
        'date': 'Date',
        'uuid': 'string',
        'json': 'Record<string, unknown>',
        'array': 'unknown[]',
    }

    field_type = field.get('type', 'string')

    # Handle enum type
    if field_type == 'enum':
        enum_values = field.get('enum_values', [])
        if enum_values:
            return ' | '.join([f"'{v}'" for v in enum_values])
        return 'string'

    return type_map.get(field_type, 'unknown')
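
# Illustrative behavior (added note, derived directly from the mapping above):
#   ts_type_from_field({'type': 'integer'})                          -> 'number'
#   ts_type_from_field({'type': 'enum', 'enum_values': ['a', 'b']})  -> "'a' | 'b'"
#   ts_type_from_field({'type': 'no_such_type'})                     -> 'unknown'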


def generate_type_from_model(model: Dict) -> Dict:
    """Generate TypeScript type definition from model."""
    type_id = f"type_{model['name']}"
    properties = []

    for field in model.get('fields', []):
        # Skip internal fields like password_hash
        if field['name'].endswith('_hash'):
            continue

        constraints = field.get('constraints', [])
        required = 'not_null' in constraints or 'primary_key' in constraints

        properties.append({
            'name': to_camel_case(field['name']),
            'type': ts_type_from_field(field),
            'required': required,
            'description': field.get('description', ''),
        })

    return {
        'id': type_id,
        'name': model['name'],
        'definition': {
            'type': 'object',
            'properties': properties,
        },
        'used_by': {
            'models': [model['id']],
            'responses': [],
            'requests': [],
        }
    }


def generate_request_type(endpoint: Dict) -> Optional[Dict]:
    """Generate request body type from endpoint definition."""
    request_body = endpoint.get('request_body', {})
    if not request_body:
        return None

    schema = request_body.get('schema', {})
    if not schema:
        return None

    # Generate type name from endpoint
    parts = endpoint['id'].replace('api_', '').split('_')
    type_name = ''.join([p.capitalize() for p in parts]) + 'Request'
    type_id = f"type_{type_name}"

    properties = []
    for prop in schema.get('properties', []):
        properties.append({
            'name': to_camel_case(prop['name']),
            'type': ts_type_from_field(prop),
            'required': prop.get('required', False),
            'description': prop.get('description', ''),
            'validation': ','.join(prop.get('validations', [])) if prop.get('validations') else None,
        })

    return {
        'id': type_id,
        'name': type_name,
        'definition': {
            'type': 'object',
            'properties': properties,
        },
        'used_by': {
            'models': [],
            'responses': [],
            'requests': [endpoint['id']],
        }
    }


def generate_response_type(endpoint: Dict, models: Dict[str, Dict]) -> Optional[Dict]:
    """Generate response type from endpoint definition - may reference model types."""
    responses = endpoint.get('responses', [])
    success_response = None

    for resp in responses:
        status = resp.get('status', 0)
        if 200 <= status < 300:
            success_response = resp
            break

    if not success_response:
        return None

    # Check if this response references a model
    depends_on = endpoint.get('depends_on_models', [])
    if depends_on:
        # Response likely uses model type
        primary_model = depends_on[0]
        if primary_model in models:
            return None  # Will use model type directly

    # Generate custom response type
    schema = success_response.get('schema', {})
    if not schema or schema.get('type') != 'object':
        return None

    parts = endpoint['id'].replace('api_', '').split('_')
    type_name = ''.join([p.capitalize() for p in parts]) + 'Response'
    type_id = f"type_{type_name}"

    properties = []
    for prop in schema.get('properties', []):
        properties.append({
            'name': to_camel_case(prop['name']),
            'type': ts_type_from_field(prop),
            'required': True,
            'description': '',
        })

    return {
        'id': type_id,
        'name': type_name,
        'definition': {
            'type': 'object',
            'properties': properties,
        },
        'used_by': {
            'models': [],
            'responses': [endpoint['id']],
            'requests': [],
        }
    }


def to_camel_case(snake_str: str) -> str:
    """Convert snake_case to camelCase."""
    components = snake_str.split('_')
    return components[0] + ''.join(x.capitalize() for x in components[1:])
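
# Example: to_camel_case('created_at') -> 'createdAt'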


def generate_endpoint_contract(endpoint: Dict, types: Dict[str, Dict], models: Dict[str, Dict]) -> Dict:
    """Generate endpoint contract from design document endpoint."""
    # Determine request body type
    request_body = None
    if endpoint.get('request_body'):
        # Generate request type name
        parts = endpoint['id'].replace('api_', '').split('_')
        type_name = ''.join([p.capitalize() for p in parts]) + 'Request'
        request_body = {
            'type_id': f"type_{type_name}",
            'content_type': 'application/json',
        }

    # Determine response type
    response_type_id = None
    is_array = False

    responses = endpoint.get('responses', [])
    success_response = None
    for resp in responses:
        if 200 <= resp.get('status', 0) < 300:
            success_response = resp
            break

    if success_response:
        # Check if referencing a model
        depends_on = endpoint.get('depends_on_models', [])
        if depends_on:
            model_id = depends_on[0]
            if model_id in models:
                model_name = models[model_id].get('name', model_id.replace('model_', '').capitalize())
                response_type_id = f"type_{model_name}"

        # Check if response is array
        schema = success_response.get('schema', {})
        if schema.get('type') == 'array':
            is_array = True

    if not response_type_id:
        parts = endpoint['id'].replace('api_', '').split('_')
        type_name = ''.join([p.capitalize() for p in parts]) + 'Response'
        response_type_id = f"type_{type_name}"

    # Extract path params
    path_params = []
    for param in endpoint.get('path_params', []):
        path_params.append({
            'name': param['name'],
            'type': ts_type_from_field(param),
            'description': param.get('description', ''),
        })

    # Extract query params
    query_params = []
    for param in endpoint.get('query_params', []):
        query_params.append({
            'name': param['name'],
            'type': ts_type_from_field(param),
            'required': param.get('required', False),
            'default': param.get('default'),
            'description': param.get('description', ''),
        })

    # Build error responses
    error_responses = []
    for resp in responses:
        status = resp.get('status', 0)
        if status >= 400:
            error_responses.append({
                'status': status,
                'type_id': 'type_ApiError',
                'description': resp.get('description', ''),
            })

    return {
        'id': endpoint['id'],
        'method': endpoint['method'],
        'path': endpoint['path'],
        'path_params': path_params,
        'query_params': query_params,
        'request_body': request_body,
        'response': {
            'success': {
                'status': success_response.get('status', 200) if success_response else 200,
                'type_id': response_type_id,
                'is_array': is_array,
            },
            'errors': error_responses,
        },
        'auth': endpoint.get('auth', {'required': False, 'roles': []}),
        'version': '1.0.0',
    }


def generate_frontend_calls(pages: List[Dict], components: List[Dict], endpoints: Dict[str, Dict]) -> List[Dict]:
    """Generate frontend call contracts from pages and components."""
    calls = []

    # From pages
    for page in pages:
        for data_need in page.get('data_needs', []):
            api_id = data_need.get('api_id')
            if api_id and api_id in endpoints:
                calls.append({
                    'id': f"call_{page['id']}_{api_id}",
                    'source': {
                        'entity_id': page['id'],
                        'file_path': f"app{page['path']}/page.tsx",
                    },
                    'endpoint_id': api_id,
                    'purpose': data_need.get('purpose', 'Load data'),
                    'trigger': 'onLoad' if data_need.get('on_load') else 'onDemand',
                    'request_mapping': {
                        'from_props': [],
                        'from_state': [],
                        'from_form': [],
                    },
                    'response_handling': {
                        'success_action': 'Update state',
                        'error_action': 'Show error',
                    },
                })

    # From components
    for component in components:
        for api_id in component.get('uses_apis', []):
            if api_id in endpoints:
                endpoint = endpoints[api_id]
                method = endpoint.get('method', 'GET')
                trigger = 'onSubmit' if method in ['POST', 'PUT', 'PATCH'] else 'onDemand'

                calls.append({
                    'id': f"call_{component['id']}_{api_id}",
                    'source': {
                        'entity_id': component['id'],
                        'file_path': f"app/components/{component['name']}.tsx",
                    },
                    'endpoint_id': api_id,
                    'purpose': f"Call {api_id}",
                    'trigger': trigger,
                    'request_mapping': {
                        'from_props': [],
                        'from_state': [],
                        'from_form': [],
                    },
                    'response_handling': {
                        'success_action': 'Handle response',
                        'error_action': 'Show error',
                    },
                })

    return calls


def generate_backend_routes(endpoints: List[Dict]) -> List[Dict]:
    """Generate backend route contracts from endpoints."""
    routes = []

    for endpoint in endpoints:
        # Determine file path from endpoint path
        api_path = endpoint['path'].replace('/api/', '')
        # Handle dynamic segments like /users/:id
        parts = api_path.split('/')
        file_parts = []
        for part in parts:
            if part.startswith(':'):
                file_parts.append(f"[{part[1:]}]")
            else:
                file_parts.append(part)

        file_path = f"app/api/{'/'.join(file_parts)}/route.ts"

        routes.append({
            'id': f"route_{endpoint['method'].lower()}_{api_path.replace('/', '_')}",
            'endpoint_id': endpoint['id'],
            'file_path': file_path,
            'export_name': endpoint['method'],
            'uses_models': endpoint.get('depends_on_models', []),
            'uses_services': [],
            'must_validate': [],
            'must_authenticate': endpoint.get('auth', {}).get('required', False),
            'must_authorize': endpoint.get('auth', {}).get('roles', []),
        })

    return routes
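
# Illustrative mapping (derived from the logic above, hypothetical endpoint):
#   {'method': 'GET', 'path': '/api/users/:id'} -> file_path 'app/api/users/[id]/route.ts',
#   export_name 'GET', id 'route_get_users_:id' (the ':' of the dynamic segment is kept as-is).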


def generate_typescript_types(types: List[Dict]) -> str:
    """Generate TypeScript type definitions."""
    lines = [
        "// AUTO-GENERATED - DO NOT EDIT",
        "// Source: .workflow/versions/vXXX/contracts/api_contract.yml",
        f"// Generated: {datetime.now().isoformat()}",
        "",
        "// ============================================================================",
        "// Shared API Types",
        "// Both frontend and backend MUST import from this file",
        "// ============================================================================",
        "",
    ]

    # Add standard error types
    lines.extend([
        "// === Error Types ===",
        "",
        "export interface ApiError {",
        "  error: string;",
        "  message?: string;",
        "  code?: string;",
        "}",
        "",
        "export interface ValidationError {",
        "  error: string;",
        "  details: string[];",
        "}",
        "",
    ])

    # Generate interfaces for each type
    lines.append("// === Domain Types ===")
    lines.append("")

    for type_def in types:
        name = type_def['name']
        definition = type_def['definition']

        if definition['type'] == 'object':
            lines.append(f"export interface {name} {{")
            for prop in definition.get('properties', []):
                optional = '' if prop.get('required') else '?'
                desc = prop.get('description', '')
                if desc:
                    lines.append(f"  /** {desc} */")
                lines.append(f"  {prop['name']}{optional}: {prop['type']};")
            lines.append("}")
            lines.append("")

        elif definition['type'] == 'enum':
            values = definition.get('enum_values', [])
            quoted_values = [f"'{v}'" for v in values]
            lines.append(f"export type {name} = {' | '.join(quoted_values)};")
            lines.append("")

        elif definition['type'] == 'union':
            members = definition.get('union_members', [])
            lines.append(f"export type {name} = {' | '.join(members)};")
            lines.append("")

    return '\n'.join(lines)


def generate_api_paths(endpoints: List[Dict]) -> str:
    """Generate API path constants for type-safe calls."""
    lines = [
        "",
        "// === API Paths ===",
        "",
        "export const API_PATHS = {",
    ]

    for endpoint in endpoints:
        # Generate constant name: api_get_users -> GET_USERS
        const_name = endpoint['id'].replace('api_', '').upper()
        lines.append(f"  {const_name}: '{endpoint['path']}' as const,")

    lines.append("} as const;")
    lines.append("")

    return '\n'.join(lines)
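
# For a hypothetical endpoint {'id': 'api_get_users', 'path': '/api/users'},
# the emitted block looks like:
#   export const API_PATHS = {
#     GET_USERS: '/api/users' as const,
#   } as const;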


def main():
    """Main entry point."""
    if len(sys.argv) < 2:
        print("Usage: generate_api_contract.py <design_document.yml> [--output-dir <dir>]", file=sys.stderr)
        sys.exit(1)

    design_doc_path = Path(sys.argv[1])

    # Parse output directory
    output_dir = design_doc_path.parent.parent  # .workflow/versions/vXXX/
    if '--output-dir' in sys.argv:
        idx = sys.argv.index('--output-dir')
        output_dir = Path(sys.argv[idx + 1])

    if not design_doc_path.exists():
        print(f"Error: Design document not found: {design_doc_path}", file=sys.stderr)
        sys.exit(1)

    # Load design document
    design_doc = load_yaml(design_doc_path)

    if not design_doc:
        print("Error: Failed to load design document", file=sys.stderr)
        sys.exit(1)

    # Extract entities
    models = {m['id']: m for m in design_doc.get('data_models', [])}
    endpoints = design_doc.get('api_endpoints', [])
    pages = design_doc.get('pages', [])
    components = design_doc.get('components', [])

    workflow_version = design_doc.get('workflow_version', 'unknown')
    revision = design_doc.get('revision', 1)

    # Generate types from models
    types = []
    for model in models.values():
        type_def = generate_type_from_model(model)
        types.append(type_def)

    # Generate request/response types from endpoints
    endpoints_dict = {e['id']: e for e in endpoints}
    for endpoint in endpoints:
        req_type = generate_request_type(endpoint)
        if req_type:
            types.append(req_type)

        resp_type = generate_response_type(endpoint, models)
        if resp_type:
            types.append(resp_type)

    # Generate types dictionary for lookup
    types_dict = {t['id']: t for t in types}

    # Generate endpoint contracts
    endpoint_contracts = []
    for endpoint in endpoints:
        contract = generate_endpoint_contract(endpoint, types_dict, models)
        endpoint_contracts.append(contract)

    # Generate frontend calls
    frontend_calls = generate_frontend_calls(pages, components, endpoints_dict)

    # Generate backend routes
    backend_routes = generate_backend_routes(endpoints)

    # Build API contract
    api_contract = {
        'api_contract': {
            'workflow_version': workflow_version,
            'design_document_revision': revision,
            'generated_at': datetime.now().isoformat(),
            'validated_at': None,
            'status': 'draft',
        },
        'types': types,
        'endpoints': endpoint_contracts,
        'frontend_calls': frontend_calls,
        'backend_routes': backend_routes,
    }

    # Create output directories
    contracts_dir = output_dir / 'contracts'
    contracts_dir.mkdir(parents=True, exist_ok=True)

    # Save API contract
    contract_path = contracts_dir / 'api_contract.yml'
    save_yaml(api_contract, contract_path)
    print(f"Generated: {contract_path}")

    # Generate TypeScript types
    ts_types = generate_typescript_types(types)
    ts_paths = generate_api_paths(endpoint_contracts)

    # Find project root (look for package.json)
    project_root = output_dir
    while project_root != project_root.parent:
        if (project_root / 'package.json').exists():
            break
        project_root = project_root.parent

    if not (project_root / 'package.json').exists():
        project_root = output_dir.parent.parent.parent  # Assume .workflow is in project root

    # Create types directory and file
    types_dir = project_root / 'app' / 'types'
    types_dir.mkdir(parents=True, exist_ok=True)

    types_file = types_dir / 'api.ts'
    types_file.write_text(ts_types + ts_paths)
    print(f"Generated: {types_file}")

    # Summary
    print("\n=== API CONTRACT GENERATED ===")
    print(f"Types: {len(types)}")
    print(f"Endpoints: {len(endpoint_contracts)}")
    print(f"Frontend calls: {len(frontend_calls)}")
    print(f"Backend routes: {len(backend_routes)}")
    print(f"\nContract file: {contract_path}")
    print(f"Types file: {types_file}")
    print("\nBoth agents MUST import from app/types/api.ts")


if __name__ == '__main__':
    main()

@ -0,0 +1,73 @@

#!/usr/bin/env python3
"""Initialize a guardrailed project with manifest."""

import argparse
import json
import os
from datetime import datetime


def create_manifest(name: str, path: str) -> dict:
    """Create initial project manifest structure."""
    return {
        "project": {
            "name": name,
            "version": "0.1.0",
            "created_at": datetime.now().isoformat(),
            "description": f"{name} - A guardrailed project"
        },
        "state": {
            "current_phase": "DESIGN_PHASE",
            "approval_status": {
                "manifest_approved": False,
                "approved_by": None,
                "approved_at": None
            },
            "revision_history": [
                {
                    "action": "PROJECT_INITIALIZED",
                    "timestamp": datetime.now().isoformat(),
                    "details": f"Project {name} created"
                }
            ]
        },
        "entities": {
            "pages": [],
            "components": [],
            "api_endpoints": [],
            "database_tables": []
        },
        "dependencies": {
            "component_to_page": {},
            "api_to_component": {},
            "table_to_api": {}
        }
    }


def main():
    parser = argparse.ArgumentParser(description="Initialize guardrailed project")
    parser.add_argument("--name", required=True, help="Project name")
    parser.add_argument("--path", required=True, help="Project path")
    args = parser.parse_args()

    manifest_path = os.path.join(args.path, "project_manifest.json")

    if os.path.exists(manifest_path):
        print(f"Warning: Manifest already exists at {manifest_path}")
        print("Use --force to overwrite (not implemented)")
        return 1

    manifest = create_manifest(args.name, args.path)

    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

    print(f"Initialized guardrailed project: {args.name}")
    print(f"Manifest created at: {manifest_path}")
    print("Current phase: DESIGN_PHASE")
    return 0


if __name__ == "__main__":
    exit(main())
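
# Illustrative invocation (script filename assumed):
#   python3 init_project.py --name demo-app --path .
# This writes ./project_manifest.json with the DESIGN_PHASE skeleton above.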

@ -0,0 +1,530 @@

#!/usr/bin/env python3
"""
Manifest diffing and changelog generation between workflow versions.

Compares project_manifest.json snapshots to show:
- Added entities (pages, components, API endpoints)
- Removed entities
- Modified entities (status changes, path changes)
- Dependency changes
"""

import argparse
import json
import os
import sys
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple, Any

# Try to import yaml
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


# ============================================================================
# File Helpers
# ============================================================================

def load_json(filepath: str) -> dict:
    """Load JSON file."""
    if not os.path.exists(filepath):
        return {}
    with open(filepath, 'r') as f:
        return json.load(f)


def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    if not os.path.exists(filepath):
        return {}
    with open(filepath, 'r') as f:
        content = f.read()
    if not content.strip():
        return {}
    if HAS_YAML:
        return yaml.safe_load(content) or {}
    return {}


# ============================================================================
# Path Helpers
# ============================================================================

def get_workflow_dir() -> Path:
    return Path('.workflow')


def get_version_dir(version: str) -> Path:
    return get_workflow_dir() / 'versions' / version


def get_snapshot_path(version: str, snapshot_type: str) -> Path:
    """Get path to manifest snapshot for a version."""
    return get_version_dir(version) / f'snapshot_{snapshot_type}' / 'manifest.json'
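
# e.g. get_snapshot_path('v001', 'before') -> .workflow/versions/v001/snapshot_before/manifest.json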


def get_current_manifest_path() -> Path:
    return Path('project_manifest.json')


def get_versions_list() -> List[str]:
    """Get list of all versions."""
    versions_dir = get_workflow_dir() / 'versions'
    if not versions_dir.exists():
        return []
    return sorted([d.name for d in versions_dir.iterdir() if d.is_dir()])


# ============================================================================
# Entity Extraction
# ============================================================================

def extract_entities(manifest: dict) -> Dict[str, Dict[str, Any]]:
    """
    Extract all entities from manifest into a flat dict keyed by ID.

    Returns dict like:
    {
        "page_home": {"type": "page", "name": "Home", "status": "APPROVED", ...},
        "component_Button": {"type": "component", "name": "Button", ...},
        ...
    }
    """
    entities = {}

    entity_types = manifest.get('entities', {})

    for entity_type, entity_list in entity_types.items():
        if not isinstance(entity_list, list):
            continue

        for entity in entity_list:
            entity_id = entity.get('id')
            if entity_id:
                entities[entity_id] = {
                    'type': entity_type.rstrip('s'),  # pages -> page
                    **entity
                }

    return entities


# ============================================================================
# Diff Computation
# ============================================================================

def compute_diff(before: dict, after: dict) -> dict:
    """
    Compute the difference between two manifests.

    Returns:
        {
            "added": [list of added entities],
            "removed": [list of removed entities],
            "modified": [list of modified entities with changes],
            "unchanged": [list of unchanged entity IDs]
        }
    """
    before_entities = extract_entities(before)
    after_entities = extract_entities(after)

    before_ids = set(before_entities.keys())
    after_ids = set(after_entities.keys())

    added_ids = after_ids - before_ids
    removed_ids = before_ids - after_ids
    common_ids = before_ids & after_ids

    diff = {
        'added': [],
        'removed': [],
        'modified': [],
        'unchanged': []
    }

    # Added entities
    for entity_id in sorted(added_ids):
        entity = after_entities[entity_id]
        diff['added'].append({
            'id': entity_id,
            'type': entity.get('type'),
            'name': entity.get('name'),
            'file_path': entity.get('file_path'),
            'status': entity.get('status')
        })

    # Removed entities
    for entity_id in sorted(removed_ids):
        entity = before_entities[entity_id]
        diff['removed'].append({
            'id': entity_id,
            'type': entity.get('type'),
            'name': entity.get('name'),
            'file_path': entity.get('file_path'),
            'status': entity.get('status')
        })

    # Modified entities
    for entity_id in sorted(common_ids):
        before_entity = before_entities[entity_id]
        after_entity = after_entities[entity_id]

        changes = []

        # Check each field for changes
        for field in ['name', 'file_path', 'status', 'description', 'dependencies']:
            before_val = before_entity.get(field)
            after_val = after_entity.get(field)

            if before_val != after_val:
                changes.append({
                    'field': field,
                    'before': before_val,
                    'after': after_val
                })

        if changes:
            diff['modified'].append({
                'id': entity_id,
                'type': before_entity.get('type'),
                'name': after_entity.get('name'),
                'file_path': after_entity.get('file_path'),
                'changes': changes
            })
        else:
            diff['unchanged'].append(entity_id)

    return diff


def compute_summary(diff: dict) -> dict:
    """Compute summary statistics from diff."""
    return {
        'total_added': len(diff['added']),
        'total_removed': len(diff['removed']),
        'total_modified': len(diff['modified']),
        'total_unchanged': len(diff['unchanged']),
        'by_type': {
            'pages': {
                'added': len([e for e in diff['added'] if e['type'] == 'page']),
                'removed': len([e for e in diff['removed'] if e['type'] == 'page']),
                'modified': len([e for e in diff['modified'] if e['type'] == 'page'])
            },
            'components': {
                'added': len([e for e in diff['added'] if e['type'] == 'component']),
                'removed': len([e for e in diff['removed'] if e['type'] == 'component']),
                'modified': len([e for e in diff['modified'] if e['type'] == 'component'])
            },
            'api_endpoints': {
                'added': len([e for e in diff['added'] if e['type'] == 'api_endpoint']),
                'removed': len([e for e in diff['removed'] if e['type'] == 'api_endpoint']),
                'modified': len([e for e in diff['modified'] if e['type'] == 'api_endpoint'])
            }
        }
    }


# ============================================================================
# Display Functions
# ============================================================================

def format_entity(entity: dict, prefix: str = '') -> str:
    """Format an entity for display."""
    type_icon = {
        'page': '📄',
        'component': '🧩',
        'api_endpoint': '🔌',
        'lib': '📚',
        'hook': '🪝',
        'type': '📝',
        'config': '⚙️'
    }.get(entity.get('type', ''), '•')

    name = entity.get('name', entity.get('id', 'Unknown'))
    file_path = entity.get('file_path', '')

    return f"{prefix}{type_icon} {name} ({file_path})"


def display_diff(diff: dict, summary: dict, from_version: str, to_version: str):
    """Display diff in a formatted way."""
    print()
    print("╔" + "═" * 70 + "╗")
    print("║" + f" MANIFEST DIFF: {from_version} → {to_version}".ljust(70) + "║")
    print("╠" + "═" * 70 + "╣")

    # Summary
    print("║" + " SUMMARY".ljust(70) + "║")
    print("║" + f" + Added: {summary['total_added']}".ljust(70) + "║")
    print("║" + f" ~ Modified: {summary['total_modified']}".ljust(70) + "║")
    print("║" + f" - Removed: {summary['total_removed']}".ljust(70) + "║")
    print("║" + f" = Unchanged: {summary['total_unchanged']}".ljust(70) + "║")

    # By type
    print("╠" + "═" * 70 + "╣")
    print("║" + " BY TYPE".ljust(70) + "║")
    for type_name, counts in summary['by_type'].items():
        changes = []
        if counts['added'] > 0:
            changes.append(f"+{counts['added']}")
        if counts['modified'] > 0:
            changes.append(f"~{counts['modified']}")
        if counts['removed'] > 0:
            changes.append(f"-{counts['removed']}")
        if changes:
            print("║" + f" {type_name}: {' '.join(changes)}".ljust(70) + "║")

    # Added
    if diff['added']:
        print("╠" + "═" * 70 + "╣")
        print("║" + " ➕ ADDED".ljust(70) + "║")
        for entity in diff['added']:
            line = format_entity(entity, ' + ')
            print("║" + line[:70].ljust(70) + "║")

    # Modified
    if diff['modified']:
        print("╠" + "═" * 70 + "╣")
        print("║" + " 📝 MODIFIED".ljust(70) + "║")
        for entity in diff['modified']:
            line = format_entity(entity, ' ~ ')
            print("║" + line[:70].ljust(70) + "║")
            for change in entity['changes']:
                field = change['field']
                before = str(change['before'])[:20] if change['before'] else '(none)'
                after = str(change['after'])[:20] if change['after'] else '(none)'
                change_line = f" {field}: {before} → {after}"
                print("║" + change_line[:70].ljust(70) + "║")

    # Removed
    if diff['removed']:
        print("╠" + "═" * 70 + "╣")
        print("║" + " ➖ REMOVED".ljust(70) + "║")
        for entity in diff['removed']:
            line = format_entity(entity, ' - ')
            print("║" + line[:70].ljust(70) + "║")

    print("╚" + "═" * 70 + "╝")


def display_changelog(version: str, session: dict, diff: dict, summary: dict):
    """Display changelog for a single version."""
    print()
    print("╔" + "═" * 70 + "╗")
    print("║" + f" CHANGELOG: {version}".ljust(70) + "║")
    print("╠" + "═" * 70 + "╣")
    print("║" + f" Feature: {session.get('feature', 'Unknown')[:55]}".ljust(70) + "║")
    print("║" + f" Status: {session.get('status', 'unknown')}".ljust(70) + "║")

    if session.get('started_at'):
        print("║" + f" Started: {session['started_at'][:19]}".ljust(70) + "║")

    if session.get('completed_at'):
        print("║" + f" Completed: {session['completed_at'][:19]}".ljust(70) + "║")

    print("╠" + "═" * 70 + "╣")
    print("║" + " CHANGES".ljust(70) + "║")

    if not diff['added'] and not diff['modified'] and not diff['removed']:
        print("║" + " No entity changes".ljust(70) + "║")
    else:
        for entity in diff['added']:
            line = f" + Added {entity['type']}: {entity['name']}"
            print("║" + line[:70].ljust(70) + "║")

        for entity in diff['modified']:
            line = f" ~ Modified {entity['type']}: {entity['name']}"
            print("║" + line[:70].ljust(70) + "║")

        for entity in diff['removed']:
            line = f" - Removed {entity['type']}: {entity['name']}"
            print("║" + line[:70].ljust(70) + "║")

    print("╚" + "═" * 70 + "╝")


def output_json(data: dict):
    """Output data as JSON."""
    print(json.dumps(data, indent=2))


# ============================================================================
# Commands
# ============================================================================

def diff_versions(version1: str, version2: str, output_format: str = 'text') -> int:
    """Diff two specific versions."""
    # Load snapshots, preferring the post-run ("after") snapshot of each version
    before_path = get_snapshot_path(version1, 'after')
    if not before_path.exists():
        before_path = get_snapshot_path(version1, 'before')

    after_path = get_snapshot_path(version2, 'after')
    if not after_path.exists():
        after_path = get_snapshot_path(version2, 'before')

    if not before_path.exists():
        print(f"Error: No snapshot found for version {version1}")
        return 1

    if not after_path.exists():
        print(f"Error: No snapshot found for version {version2}")
        return 1

    before = load_json(str(before_path))
    after = load_json(str(after_path))

    diff = compute_diff(before, after)
    summary = compute_summary(diff)

    if output_format == 'json':
        output_json({
            'from_version': version1,
            'to_version': version2,
            'diff': diff,
            'summary': summary
        })
    else:
        display_diff(diff, summary, version1, version2)

    return 0


def diff_with_current(version: str, output_format: str = 'text') -> int:
    """Diff a version with current manifest."""
    # Load version snapshot
    snapshot_path = get_snapshot_path(version, 'before')
    if not snapshot_path.exists():
        print(f"Error: No snapshot found for version {version}")
        return 1

    before = load_json(str(snapshot_path))

    # Load current manifest
    current_path = get_current_manifest_path()
    if not current_path.exists():
        print("Error: No current manifest found")
        return 1

    after = load_json(str(current_path))

    diff = compute_diff(before, after)
    summary = compute_summary(diff)

    if output_format == 'json':
        output_json({
            'from_version': version,
            'to_version': 'current',
            'diff': diff,
            'summary': summary
        })
    else:
        display_diff(diff, summary, version, 'current')

    return 0


def show_changelog(version: Optional[str] = None, output_format: str = 'text') -> int:
    """Show changelog for a version or all versions."""
    versions = get_versions_list()

    if not versions:
        print("No workflow versions found.")
        return 1

    if version:
        versions = [v for v in versions if v == version]
        if not versions:
            print(f"Version {version} not found.")
            return 1

    for v in versions:
        # Load session info
        session_path = get_version_dir(v) / 'session.yml'
        session = load_yaml(str(session_path)) if session_path.exists() else {}

        # Get before/after snapshots
        before_path = get_snapshot_path(v, 'before')
        after_path = get_snapshot_path(v, 'after')

        before = load_json(str(before_path)) if before_path.exists() else {}
        after = load_json(str(after_path)) if after_path.exists() else {}

        if not after:
            after = before  # Use before if no after exists

        diff = compute_diff(before, after)
        summary = compute_summary(diff)

        if output_format == 'json':
            output_json({
                'version': v,
                'session': session,
                'diff': diff,
                'summary': summary
            })
        else:
            display_changelog(v, session, diff, summary)

    return 0


# ============================================================================
# CLI Interface
# ============================================================================

def main():
    parser = argparse.ArgumentParser(description="Manifest diffing and changelog generation")
    subparsers = parser.add_subparsers(dest='command', help='Commands')

    # diff command
    diff_parser = subparsers.add_parser('diff', help='Diff two versions')
    diff_parser.add_argument('version1', help='First version')
    diff_parser.add_argument('version2', nargs='?', help='Second version (or "current")')
    diff_parser.add_argument('--json', action='store_true', help='Output as JSON')

    # changelog command
    changelog_parser = subparsers.add_parser('changelog', help='Show version changelog')
    changelog_parser.add_argument('version', nargs='?', help='Specific version (or all)')
    changelog_parser.add_argument('--json', action='store_true', help='Output as JSON')

    # versions command
    subparsers.add_parser('versions', help='List all versions')

    args = parser.parse_args()

    if args.command == 'diff':
        output_format = 'json' if args.json else 'text'

        if args.version2:
            if args.version2 == 'current':
                sys.exit(diff_with_current(args.version1, output_format))
            else:
                sys.exit(diff_versions(args.version1, args.version2, output_format))
        else:
            # Diff with current by default
            sys.exit(diff_with_current(args.version1, output_format))

    elif args.command == 'changelog':
        output_format = 'json' if args.json else 'text'
        sys.exit(show_changelog(args.version, output_format))

    elif args.command == 'versions':
        versions = get_versions_list()
        if versions:
            print("\nAvailable versions:")
            for v in versions:
                print(f"  - {v}")
        else:
            print("No versions found.")

    else:
        parser.print_help()


if __name__ == "__main__":
    main()
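
# CLI sketch (script filename assumed; subcommands match the argparse setup above):
#   python3 manifest_diff.py diff v001 v002    # diff two version snapshots
#   python3 manifest_diff.py diff v001 current # diff a snapshot against the live manifest
#   python3 manifest_diff.py changelog --json  # changelogs for all versions, as JSON
#   python3 manifest_diff.py versions          # list available versions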

@ -0,0 +1,265 @@

#!/usr/bin/env python3
"""
Migration script to convert flat task session files to directory structure.

This script migrates task sessions from the old flat file structure:
    .workflow/versions/v001/task_sessions/task_design.yml

To the new directory structure:
    .workflow/versions/v001/task_sessions/task_design/
        session.yml
        task.yml
        operations.log

Usage:
    python3 migrate_task_sessions.py [--dry-run]

Options:
    --dry-run    Show what would be done without making changes
"""

from __future__ import annotations

import sys
import argparse
from pathlib import Path
from datetime import datetime
from typing import Optional

# Add parent to path for imports
sys.path.insert(0, str(Path(__file__).parent))
from version_manager import load_yaml, save_yaml, get_workflow_dir


# ============================================================================
# Discovery Functions
# ============================================================================

def find_flat_task_sessions() -> list[tuple[Path, str]]:
    """
    Find all flat task session YAML files.

    Returns:
        List of tuples: (file_path, version_name)
    """
    workflow_dir = get_workflow_dir()
    versions_dir = workflow_dir / 'versions'

    flat_sessions = []
    if versions_dir.exists():
        for version_dir in versions_dir.iterdir():
            if version_dir.is_dir():
                task_sessions_dir = version_dir / 'task_sessions'
                if task_sessions_dir.exists():
                    for item in task_sessions_dir.iterdir():
                        # Check if it's a YAML file (not a directory)
                        if item.is_file() and item.suffix in ['.yml', '.yaml']:
                            flat_sessions.append((item, version_dir.name))

    return flat_sessions
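
# Example return value (hypothetical project layout):
#   [(Path('.workflow/versions/v001/task_sessions/task_design.yml'), 'v001')]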


# ============================================================================
# Migration Functions
# ============================================================================

def migrate_task_session(file_path: Path, version: str, dry_run: bool = False) -> dict:
    """
    Migrate a single flat task session to directory structure.

    Args:
        file_path: Path to the flat YAML file
        version: Version identifier (e.g., 'v001')
        dry_run: If True, only report what would be done

    Returns:
        Dictionary with migration results and actions taken
    """
    task_id = file_path.stem  # e.g., "task_design" from "task_design.yml"
    parent_dir = file_path.parent
    new_dir = parent_dir / task_id

    result = {
        'task_id': task_id,
        'version': version,
        'original_path': str(file_path),
        'new_path': str(new_dir),
        'success': False,
        'actions': []
    }

    if dry_run:
        result['actions'].append(f"Would create directory: {new_dir}")
        result['actions'].append(f"Would move {file_path.name} to {new_dir}/session.yml")
        result['actions'].append(f"Would create {new_dir}/task.yml (if source exists)")
        result['actions'].append(f"Would create {new_dir}/operations.log")
        result['success'] = True
        return result

    try:
        # Create directory
        new_dir.mkdir(exist_ok=True)
        result['actions'].append(f"Created directory: {new_dir}")

        # Move session file
        session_data = load_yaml(str(file_path))
        save_yaml(str(new_dir / 'session.yml'), session_data)
        file_path.unlink()  # Delete original
        result['actions'].append(f"Moved session data to: {new_dir}/session.yml")

        # Create task.yml snapshot (try to find original task)
        task_file = Path('tasks') / f'{task_id}.yml'
        if task_file.exists():
            task_data = load_yaml(str(task_file))
            task_data['snapshotted_at'] = datetime.now().isoformat()
            task_data['source_path'] = str(task_file)
            task_data['status_at_snapshot'] = task_data.get('status', 'migrated')
            task_data['migration_note'] = 'Created during migration from flat file structure'
            save_yaml(str(new_dir / 'task.yml'), task_data)
            result['actions'].append(f"Created task snapshot: {new_dir}/task.yml")
        else:
            # Create minimal task.yml from session data
            minimal_task = {
                'id': task_id,
                'type': session_data.get('task_type', 'unknown'),
                'agent': session_data.get('agent', 'unknown'),
                'snapshotted_at': datetime.now().isoformat(),
                'source_path': 'N/A - reconstructed from session',
                'status_at_snapshot': 'migrated',
                'migration_note': 'Task file not found - reconstructed from session data'
            }
            save_yaml(str(new_dir / 'task.yml'), minimal_task)
            result['actions'].append(f"Warning: Task file not found at {task_file}")
            result['actions'].append(f"Created minimal task snapshot: {new_dir}/task.yml")

        # Create operations.log
        log_content = f"# Operations Log for {task_id}\n"
        log_content += f"# Migrated: {datetime.now().isoformat()}\n"
        log_content += "# Format: [timestamp] OPERATION target_type: target_id (path)\n"
        log_content += "=" * 70 + "\n\n"
        log_content += f"[{datetime.now().isoformat()}] MIGRATION: Converted from flat file structure\n"

        # If session has operations, add them to the log
        if 'operations' in session_data and session_data['operations']:
            log_content += "\n# Historical operations from session data:\n"
            for op in session_data['operations']:
                timestamp = op.get('performed_at', 'unknown')
                op_type = op.get('type', 'UNKNOWN')
                target_type = op.get('target_type', 'unknown')
                target_id = op.get('target_id', 'unknown')
                target_path = op.get('target_path', '')

                entry = f"[{timestamp}] {op_type} {target_type}: {target_id}"
                if target_path:
                    entry += f" ({target_path})"

                diff_summary = op.get('changes', {}).get('diff_summary', '')
                if diff_summary:
                    entry += f"\n Summary: {diff_summary}"

                log_content += entry + "\n"

        (new_dir / 'operations.log').write_text(log_content)
        result['actions'].append(f"Created operations log: {new_dir}/operations.log")

        result['success'] = True

    except Exception as e:
        result['error'] = str(e)
        result['actions'].append(f"Error: {e}")

    return result


# ============================================================================
# Main Entry Point
# ============================================================================

def main():
    """Main migration script entry point."""
    parser = argparse.ArgumentParser(
        description='Migrate task session files from flat structure to directories',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__
    )
    parser.add_argument(
        '--dry-run',
        action='store_true',
        help='Show what would be done without making changes'
    )

    args = parser.parse_args()
    dry_run = args.dry_run

    # Header
    print("=" * 70)
    print("Task Session Migration Script".center(70))
    print(f"Mode: {'DRY RUN' if dry_run else 'LIVE MIGRATION'}".center(70))
    print("=" * 70)
    print()

    # Find flat sessions
    flat_sessions = find_flat_task_sessions()

    if not flat_sessions:
        print("No flat task session files found. Nothing to migrate.")
        print()
        print("This could mean:")
        print("  1. All task sessions are already migrated")
        print("  2. No task sessions exist yet")
        print("  3. .workflow directory doesn't exist")
        return

    print(f"Found {len(flat_sessions)} flat task session file(s) to migrate:")
    print()

    # Process each file
    results = []
    for file_path, version in flat_sessions:
        print(f"Processing: {version}/{file_path.name}")
        print("-" * 70)

        result = migrate_task_session(file_path, version, dry_run)
        results.append(result)

        for action in result['actions']:
            print(f"  {action}")

        if not result['success'] and 'error' in result:
            print(f"  ERROR: {result['error']}")

        print()

    # Summary
    successful = sum(1 for r in results if r['success'])
    failed = len(results) - successful

    print("=" * 70)
    print("Migration Summary".center(70))
    print("=" * 70)
    print(f"Total files processed: {len(results)}")
    print(f"Successful migrations: {successful}")
    print(f"Failed migrations: {failed}")
    print()

    if dry_run:
        print("This was a DRY RUN. No files were modified.")
        print("Run without --dry-run to perform the migration.")
    else:
        if successful > 0:
            print("Migration completed successfully!")
            print()
            print("Next steps:")
            print("  1. Verify migrated files in .workflow/versions/*/task_sessions/")
            print("  2. Check that each task has session.yml, task.yml, and operations.log")
            print("  3. Test the system to ensure compatibility")

    if failed > 0:
        print()
        print(f"WARNING: {failed} migration(s) failed. Review the errors above.")

    print()


if __name__ == '__main__':
    main()

@ -0,0 +1,79 @@

#!/usr/bin/env python3
"""Post-write hook to update entity status in manifest."""

import argparse
import json
import os
from datetime import datetime


def load_manifest(manifest_path: str) -> dict | None:
    """Load manifest if it exists."""
    if not os.path.exists(manifest_path):
        return None
    with open(manifest_path) as f:
        return json.load(f)


def save_manifest(manifest_path: str, manifest: dict):
    """Save manifest to file."""
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)


def find_entity_by_path(manifest: dict, file_path: str) -> tuple:
    """Find entity by file path, return (entity_type, index, entity)."""
    entities = manifest.get("entities", {})
    for entity_type in ["pages", "components", "api_endpoints", "database_tables"]:
        for idx, entity in enumerate(entities.get(entity_type, [])):
            if entity.get("file_path") == file_path:
                return (entity_type, idx, entity)
    return (None, None, None)


def main():
    parser = argparse.ArgumentParser(description="Post-write hook")
    parser.add_argument("--manifest", required=True, help="Path to manifest")
    parser.add_argument("--file", help="File that was written")
    args = parser.parse_args()

    manifest = load_manifest(args.manifest)

    if manifest is None:
        return 0

    # If file provided, update entity status
    if args.file:
        # Normalize the file path (strip a leading "./" if present;
        # lstrip('./') would also mangle paths such as "../", so slice instead)
        file_path = args.file[2:] if args.file.startswith('./') else args.file

        entity_type, idx, entity = find_entity_by_path(manifest, args.file)

        # Retry with the normalized path
        if not entity:
            entity_type, idx, entity = find_entity_by_path(manifest, file_path)

        if entity and entity.get("status") == "APPROVED":
            manifest["entities"][entity_type][idx]["status"] = "IMPLEMENTED"
            manifest["entities"][entity_type][idx]["implemented_at"] = datetime.now().isoformat()

            # Add to history (ensure it exists)
            if "state" not in manifest:
                manifest["state"] = {}
            if "revision_history" not in manifest["state"]:
                manifest["state"]["revision_history"] = []

            manifest["state"]["revision_history"].append({
                "action": "ENTITY_IMPLEMENTED",
                "timestamp": datetime.now().isoformat(),
                "details": f"Implemented {entity.get('id', 'unknown')}"
            })

            save_manifest(args.manifest, manifest)
            print(f"GUARDRAIL: Updated {entity.get('id')} to IMPLEMENTED")

    return 0


if __name__ == "__main__":
    exit(main())
|
||||||
|
|
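
For reference, a minimal sketch of the manifest shape this hook acts on, reconstructed from the fields the code reads; the entity id and file path are hypothetical:

```python
# Hypothetical manifest; only the keys the hook touches are shown.
manifest = {
    "entities": {
        "pages": [
            {
                "id": "page-home",                # hypothetical id
                "file_path": "src/app/page.tsx",  # matched against --file
                "status": "APPROVED",             # flipped to IMPLEMENTED on write
            }
        ]
    },
    "state": {"revision_history": []},  # created on demand if absent
}
```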
@ -0,0 +1,601 @@
#!/usr/bin/env python3
"""
Comprehensive Security Scanner for guardrail workflow.

Performs static security analysis on codebase:
- Hardcoded secrets and credentials
- SQL injection vulnerabilities
- XSS vulnerabilities
- Path traversal risks
- Insecure dependencies
- Authentication/Authorization issues
- OWASP Top 10 patterns

Usage:
    python3 security_scan.py --project-dir . [--severity CRITICAL|HIGH|MEDIUM|LOW]
"""

import argparse
import json
import os
import re
import sys
from dataclasses import dataclass, field


@dataclass
class SecurityIssue:
    """Security vulnerability finding."""
    severity: str  # CRITICAL, HIGH, MEDIUM, LOW, INFO
    category: str
    title: str
    description: str
    file_path: str
    line_number: int | None
    code_snippet: str
    recommendation: str
    cwe_id: str | None = None
    owasp_category: str | None = None


@dataclass
class ScanResult:
    """Complete scan results."""
    issues: list[SecurityIssue] = field(default_factory=list)
    files_scanned: int = 0
    scan_duration: float = 0.0


# Security patterns organized by category
SECURITY_PATTERNS = {
    'hardcoded_secrets': {
        'severity': 'CRITICAL',
        'cwe': 'CWE-798',
        'owasp': 'A07:2021-Identification and Authentication Failures',
        'patterns': [
            # API Keys
            (r'''(?:api[_-]?key|apikey)\s*[:=]\s*['"]((?!process\.env)[^'"]{10,})['"]''', 'Hardcoded API key'),
            (r'''(?:api[_-]?secret|apisecret)\s*[:=]\s*['"]((?!process\.env)[^'"]{10,})['"]''', 'Hardcoded API secret'),
            # Passwords
            (r'''(?:password|passwd|pwd)\s*[:=]\s*['"]([^'"]{4,})['"]''', 'Hardcoded password'),
            # Private keys
            (r'''-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----''', 'Embedded private key'),
            # AWS credentials
            (r'''(?:aws[_-]?access[_-]?key[_-]?id|aws[_-]?secret)\s*[:=]\s*['"]([A-Z0-9]{16,})['"]''', 'AWS credential'),
            (r'''AKIA[0-9A-Z]{16}''', 'AWS Access Key ID'),
            # JWT secrets
            (r'''(?:jwt[_-]?secret|token[_-]?secret)\s*[:=]\s*['"]([^'"]{8,})['"]''', 'Hardcoded JWT secret'),
            # Database connection strings
            (r'''(?:mongodb|postgres|mysql|redis):\/\/[^:]+:[^@]+@''', 'Database credentials in connection string'),
            # Generic secrets
            (r'''(?:secret|token|auth)[_-]?(?:key)?\s*[:=]\s*['"]([^'"]{8,})['"]''', 'Potential hardcoded secret'),
        ]
    },
    'sql_injection': {
        'severity': 'CRITICAL',
        'cwe': 'CWE-89',
        'owasp': 'A03:2021-Injection',
        'patterns': [
            # String concatenation in queries
            (r'''(?:query|sql|execute)\s*\(\s*[`'"].*\$\{''', 'SQL injection via template literal'),
            (r'''(?:query|sql|execute)\s*\(\s*['"].*\+\s*(?:req\.|params\.|body\.|query\.)''', 'SQL injection via concatenation'),
            (r'''(?:SELECT|INSERT|UPDATE|DELETE|FROM|WHERE).*\$\{''', 'Raw SQL with template interpolation'),
            # Raw queries
            (r'''\.raw\s*\(\s*[`'"].*\$\{''', 'Raw query with interpolation'),
            (r'''prisma\.\$queryRaw\s*`[^`]*\$\{''', 'Prisma raw query with interpolation'),
        ]
    },
    'xss': {
        'severity': 'HIGH',
        'cwe': 'CWE-79',
        'owasp': 'A03:2021-Injection',
        'patterns': [
            # React dangerouslySetInnerHTML
            (r'''dangerouslySetInnerHTML\s*=\s*\{\s*\{__html:\s*(?!DOMPurify|sanitize)''', 'Unsanitized dangerouslySetInnerHTML'),
            # innerHTML assignment
            (r'''\.innerHTML\s*=\s*(?!['"`]<)''', 'Direct innerHTML assignment'),
            # document.write
            (r'''document\.write\s*\(''', 'document.write usage'),
            # eval with user input
            (r'''eval\s*\(\s*(?:req\.|params\.|body\.|query\.|props\.)''', 'eval with user input'),
            # jQuery html() with user input
            (r'''\$\([^)]+\)\.html\s*\(\s*(?!['"`])''', 'jQuery html() with dynamic content'),
        ]
    },
    'path_traversal': {
        'severity': 'HIGH',
        'cwe': 'CWE-22',
        'owasp': 'A01:2021-Broken Access Control',
        'patterns': [
            # File operations with user input
            (r'''(?:readFile|writeFile|readFileSync|writeFileSync|createReadStream)\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'File operation with user input'),
            (r'''(?:readFile|writeFile)\s*\(\s*[`'"].*\$\{(?:req\.|params\.|body\.|query\.)''', 'File path with user input interpolation'),
            # path.join with user input (without validation)
            (r'''path\.(?:join|resolve)\s*\([^)]*(?:req\.|params\.|body\.|query\.)''', 'Path operation with user input'),
        ]
    },
    'command_injection': {
        'severity': 'CRITICAL',
        'cwe': 'CWE-78',
        'owasp': 'A03:2021-Injection',
        'patterns': [
            # exec/spawn with user input
            (r'''(?:exec|execSync|spawn|spawnSync)\s*\(\s*[`'"].*\$\{''', 'Command injection via template literal'),
            (r'''(?:exec|execSync|spawn|spawnSync)\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'Command execution with user input'),
            # child_process with concatenation
            (r'''child_process.*\(\s*['"].*\+\s*(?:req\.|params\.|body\.|query\.)''', 'Command injection via concatenation'),
        ]
    },
    'insecure_auth': {
        'severity': 'HIGH',
        'cwe': 'CWE-287',
        'owasp': 'A07:2021-Identification and Authentication Failures',
        'patterns': [
            # Weak JWT algorithms
            (r'''algorithm\s*[:=]\s*['"](?:none|HS256)['"]''', 'Weak JWT algorithm'),
            # No password hashing
            (r'''password\s*===?\s*(?:req\.|body\.|params\.)''', 'Plain text password comparison'),
            # Disabled security
            (r'''(?:verify|secure|https|ssl)\s*[:=]\s*false''', 'Security feature disabled'),
            # Cookie without security flags
            (r'''cookie\s*\([^)]*\)\s*(?!.*(?:httpOnly|secure|sameSite))''', 'Cookie without security flags'),
        ]
    },
    'sensitive_data_exposure': {
        'severity': 'MEDIUM',
        'cwe': 'CWE-200',
        'owasp': 'A02:2021-Cryptographic Failures',
        'patterns': [
            # Logging sensitive data
            (r'''console\.(?:log|info|debug)\s*\([^)]*(?:password|secret|token|key|credential)''', 'Logging sensitive data'),
            # Error messages with sensitive info
            (r'''(?:throw|Error)\s*\([^)]*(?:password|secret|token|key|sql|query)''', 'Sensitive info in error message'),
            # HTTP instead of HTTPS
            (r'''['"]http:\/\/(?!localhost|127\.0\.0\.1)''', 'HTTP URL (should be HTTPS)'),
        ]
    },
    'insecure_dependencies': {
        'severity': 'MEDIUM',
        'cwe': 'CWE-1104',
        'owasp': 'A06:2021-Vulnerable and Outdated Components',
        'patterns': [
            # Known vulnerable patterns
            (r'''require\s*\(\s*['"](?:serialize-javascript|lodash\.template|node-serialize)['"]\s*\)''', 'Known vulnerable package'),
            # Outdated crypto
            (r'''crypto\.createCipher\s*\(''', 'Deprecated crypto.createCipher'),
            (r'''md5\s*\(|createHash\s*\(\s*['"]md5['"]''', 'MD5 hash usage (weak)'),
            (r'''sha1\s*\(|createHash\s*\(\s*['"]sha1['"]''', 'SHA1 hash usage (weak)'),
        ]
    },
    'cors_misconfiguration': {
        'severity': 'MEDIUM',
        'cwe': 'CWE-942',
        'owasp': 'A01:2021-Broken Access Control',
        'patterns': [
            # Wildcard CORS
            (r'''(?:Access-Control-Allow-Origin|origin)\s*[:=]\s*['"]\*['"]''', 'Wildcard CORS origin'),
            (r'''cors\s*\(\s*\{[^}]*origin\s*:\s*true''', 'CORS allows all origins'),
            # Credentials with wildcard
            (r'''credentials\s*:\s*true[^}]*origin\s*:\s*['"]\*['"]''', 'CORS credentials with wildcard origin'),
        ]
    },
    'insecure_randomness': {
        'severity': 'LOW',
        'cwe': 'CWE-330',
        'owasp': 'A02:2021-Cryptographic Failures',
        'patterns': [
            # Math.random for security
            (r'''Math\.random\s*\(\s*\)[^;]*(?:token|secret|password|key|id|session)''', 'Math.random for security-sensitive value'),
            (r'''(?:token|secret|key|session)[^=]*=\s*Math\.random''', 'Math.random for security-sensitive value'),
        ]
    },
    'debug_code': {
        'severity': 'LOW',
        'cwe': 'CWE-489',
        'owasp': 'A05:2021-Security Misconfiguration',
        'patterns': [
            # Debug statements
            (r'''console\.(?:log|debug|info|warn)\s*\(''', 'Console statement (remove in production)'),
            (r'''debugger\s*;''', 'Debugger statement'),
            # TODO/FIXME security notes
            (r'''(?:TODO|FIXME|HACK|XXX).*(?:security|auth|password|secret|vulnerable)''', 'Security-related TODO'),
        ]
    },
    'nosql_injection': {
        'severity': 'HIGH',
        'cwe': 'CWE-943',
        'owasp': 'A03:2021-Injection',
        'patterns': [
            # MongoDB injection
            (r'''\.find\s*\(\s*\{[^}]*\$(?:where|regex|gt|lt|ne|in|nin|or|and).*(?:req\.|params\.|body\.|query\.)''', 'NoSQL injection risk'),
            (r'''\.find\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'Direct user input in query'),
        ]
    },
    'prototype_pollution': {
        'severity': 'HIGH',
        'cwe': 'CWE-1321',
        'owasp': 'A03:2021-Injection',
        'patterns': [
            # Deep merge without protection
            (r'''(?:merge|extend|assign)\s*\([^)]*(?:req\.|params\.|body\.|query\.)''', 'Potential prototype pollution via merge'),
            (r'''Object\.assign\s*\(\s*\{\}[^)]*(?:req\.|params\.|body\.|query\.)''', 'Object.assign with user input'),
            # __proto__ access
            (r'''__proto__''', 'Direct __proto__ access'),
            (r'''constructor\s*\[\s*['"]prototype['"]''', 'Prototype access via constructor'),
        ]
    },
    'ssrf': {
        'severity': 'HIGH',
        'cwe': 'CWE-918',
        'owasp': 'A10:2021-Server-Side Request Forgery',
        'patterns': [
            # Fetch/axios with user URL
            (r'''(?:fetch|axios\.get|axios\.post|http\.get|https\.get)\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'SSRF via user-controlled URL'),
            (r'''(?:fetch|axios)\s*\(\s*[`'"].*\$\{(?:req\.|params\.|body\.|query\.)''', 'SSRF via URL interpolation'),
        ]
    },
}
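
As a quick sanity check, a minimal sketch showing how one of these patterns fires under the same `re.IGNORECASE | re.MULTILINE` flags the scanner applies below; the sample line is hypothetical:

```python
import re

sample = 'const apiKey = "sk_live_abcdef1234567890";'  # hypothetical vulnerable line
pattern = r'''(?:api[_-]?key|apikey)\s*[:=]\s*['"]((?!process\.env)[^'"]{10,})['"]'''

for match in re.finditer(pattern, sample, re.IGNORECASE | re.MULTILINE):
    print(match.group(1))  # -> sk_live_abcdef1234567890
```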

# File extensions to scan
SCAN_EXTENSIONS = {'.ts', '.tsx', '.js', '.jsx', '.mjs', '.cjs'}

# Directories to skip
SKIP_DIRS = {'node_modules', '.next', 'dist', 'build', '.git', 'coverage', '__pycache__'}


def find_source_files(project_dir: str) -> list[str]:
    """Find all source files to scan."""
    files = []
    for root, dirs, filenames in os.walk(project_dir):
        # Skip excluded directories
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]

        for filename in filenames:
            ext = os.path.splitext(filename)[1]
            if ext in SCAN_EXTENSIONS:
                files.append(os.path.join(root, filename))

    return files


def scan_file(file_path: str) -> list[SecurityIssue]:
    """Scan a single file for security issues."""
    issues = []

    try:
        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
            content = f.read()
            lines = content.split('\n')
    except (IOError, OSError):
        return []

    for category, config in SECURITY_PATTERNS.items():
        for pattern, title in config['patterns']:
            try:
                for match in re.finditer(pattern, content, re.IGNORECASE | re.MULTILINE):
                    # Find line number
                    line_start = content[:match.start()].count('\n') + 1
                    line_content = lines[line_start - 1] if line_start <= len(lines) else ''

                    # Skip if in comment
                    stripped = line_content.strip()
                    if stripped.startswith('//') or stripped.startswith('*') or stripped.startswith('/*'):
                        continue

                    # Skip if looks like env var reference
                    if 'process.env' in line_content or 'import.meta.env' in line_content:
                        continue

                    issues.append(SecurityIssue(
                        severity=config['severity'],
                        category=category,
                        title=title,
                        description=get_description(category),
                        file_path=file_path,
                        line_number=line_start,
                        code_snippet=line_content.strip()[:100],
                        recommendation=get_recommendation(category),
                        cwe_id=config.get('cwe'),
                        owasp_category=config.get('owasp')
                    ))
            except re.error:
                continue

    return issues


def get_description(category: str) -> str:
    """Get detailed description for category."""
    descriptions = {
        'hardcoded_secrets': 'Credentials or secrets hardcoded in source code can be extracted by attackers.',
        'sql_injection': 'User input directly in SQL queries allows attackers to manipulate database operations.',
        'xss': 'Unsanitized user input rendered in HTML allows attackers to inject malicious scripts.',
        'path_traversal': 'User input in file paths allows attackers to access arbitrary files.',
        'command_injection': 'User input in system commands allows attackers to execute arbitrary commands.',
        'insecure_auth': 'Weak authentication mechanisms can be bypassed by attackers.',
        'sensitive_data_exposure': 'Sensitive information may be exposed through logs or errors.',
        'insecure_dependencies': 'Known vulnerable packages or weak cryptographic functions.',
        'cors_misconfiguration': 'Overly permissive CORS allows unauthorized cross-origin requests.',
        'insecure_randomness': 'Predictable random values can be guessed by attackers.',
        'debug_code': 'Debug code in production may expose sensitive information.',
        'nosql_injection': 'User input in NoSQL queries allows attackers to manipulate database operations.',
        'prototype_pollution': 'Modifying object prototypes can lead to code execution.',
        'ssrf': 'User-controlled URLs allow attackers to make requests to internal services.',
    }
    return descriptions.get(category, 'Security vulnerability detected.')


def get_recommendation(category: str) -> str:
    """Get remediation recommendation for category."""
    recommendations = {
        'hardcoded_secrets': 'Use environment variables (process.env) or a secrets manager.',
        'sql_injection': 'Use parameterized queries or ORM methods. Never concatenate user input.',
        'xss': 'Sanitize user input with DOMPurify or escape HTML entities.',
        'path_traversal': 'Validate and sanitize file paths. Use path.basename() and whitelist allowed paths.',
        'command_injection': 'Avoid shell commands with user input. Use execFile with argument arrays.',
        'insecure_auth': 'Use strong algorithms (RS256), hash passwords with bcrypt, enable all security flags.',
        'sensitive_data_exposure': 'Remove sensitive data from logs. Use generic error messages.',
        'insecure_dependencies': 'Update to latest secure versions. Use crypto.createCipheriv and SHA-256+.',
        'cors_misconfiguration': 'Specify exact allowed origins. Do not use wildcard with credentials.',
        'insecure_randomness': 'Use crypto.randomBytes() or crypto.randomUUID() for security-sensitive values.',
        'debug_code': 'Remove console statements and debugger in production builds.',
        'nosql_injection': 'Sanitize input and use schema validation. Avoid $where operators.',
        'prototype_pollution': 'Use Object.create(null) or validate/sanitize object keys.',
        'ssrf': 'Validate URLs against allowlist. Block internal IP ranges.',
    }
    return recommendations.get(category, 'Review and remediate the security issue.')


def check_package_json(project_dir: str) -> list[SecurityIssue]:
    """Check package.json for security issues."""
    issues = []
    pkg_path = os.path.join(project_dir, 'package.json')

    if not os.path.exists(pkg_path):
        return []

    try:
        with open(pkg_path, 'r') as f:
            pkg = json.load(f)
    except (json.JSONDecodeError, IOError):
        return []

    # Known vulnerable packages (simplified check)
    vulnerable_packages = {
        'lodash': '< 4.17.21',
        'axios': '< 0.21.1',
        'node-fetch': '< 2.6.1',
        'minimist': '< 1.2.6',
        'serialize-javascript': '< 3.1.0',
    }

    all_deps = {}
    all_deps.update(pkg.get('dependencies', {}))
    all_deps.update(pkg.get('devDependencies', {}))

    for pkg_name in vulnerable_packages:
        if pkg_name in all_deps:
            issues.append(SecurityIssue(
                severity='MEDIUM',
                category='insecure_dependencies',
                title=f'Potentially vulnerable package: {pkg_name}',
                description=f'Package {pkg_name} may have known vulnerabilities. Run npm audit for details.',
                file_path=pkg_path,
                line_number=None,
                code_snippet=f'"{pkg_name}": "{all_deps[pkg_name]}"',
                recommendation='Run `npm audit` and update to the latest secure version.',
                cwe_id='CWE-1104',
                owasp_category='A06:2021-Vulnerable and Outdated Components'
            ))

    return issues


def check_env_files(project_dir: str) -> list[SecurityIssue]:
    """Check for exposed environment files."""
    issues = []

    env_files = ['.env', '.env.local', '.env.production', '.env.development']

    for env_file in env_files:
        env_path = os.path.join(project_dir, env_file)
        if os.path.exists(env_path):
            # Check if in .gitignore
            gitignore_path = os.path.join(project_dir, '.gitignore')
            in_gitignore = False

            if os.path.exists(gitignore_path):
                try:
                    with open(gitignore_path, 'r') as f:
                        gitignore_content = f.read()
                    if env_file in gitignore_content or '.env*' in gitignore_content:
                        in_gitignore = True
                except IOError:
                    pass

            if not in_gitignore:
                issues.append(SecurityIssue(
                    severity='HIGH',
                    category='sensitive_data_exposure',
                    title=f'Environment file not in .gitignore: {env_file}',
                    description='Environment files containing secrets may be committed to version control.',
                    file_path=env_path,
                    line_number=None,
                    code_snippet=env_file,
                    recommendation=f'Add {env_file} to .gitignore immediately.',
                    cwe_id='CWE-200',
                    owasp_category='A02:2021-Cryptographic Failures'
                ))

    return issues


def run_scan(project_dir: str, min_severity: str = 'LOW') -> ScanResult:
    """Run full security scan."""
    import time
    start_time = time.time()

    severity_order = ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW', 'INFO']
    min_severity_index = severity_order.index(min_severity.upper()) if min_severity.upper() in severity_order else 3

    result = ScanResult()

    # Find and scan source files
    files = find_source_files(project_dir)
    result.files_scanned = len(files)

    for file_path in files:
        issues = scan_file(file_path)
        result.issues.extend(issues)

    # Additional checks
    result.issues.extend(check_package_json(project_dir))
    result.issues.extend(check_env_files(project_dir))

    # Filter by severity
    result.issues = [
        i for i in result.issues
        if severity_order.index(i.severity) <= min_severity_index
    ]

    # Sort by severity
    result.issues.sort(key=lambda x: severity_order.index(x.severity))

    result.scan_duration = time.time() - start_time

    return result
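
For programmatic use from another script, a minimal sketch; the module name `security_scan` is assumed from the usage string above:

```python
from security_scan import run_scan  # module name assumed from the Usage line

result = run_scan('.', min_severity='HIGH')  # keeps CRITICAL and HIGH only
print(f"{result.files_scanned} files scanned, {len(result.issues)} issues")
for issue in result.issues:
    print(issue.severity, f"{issue.file_path}:{issue.line_number}", issue.title)
```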


def format_report(result: ScanResult, format_type: str = 'text') -> str:
    """Format scan results."""
    if format_type == 'json':
        return json.dumps({
            'files_scanned': result.files_scanned,
            'scan_duration': result.scan_duration,
            'total_issues': len(result.issues),
            'by_severity': {
                'CRITICAL': len([i for i in result.issues if i.severity == 'CRITICAL']),
                'HIGH': len([i for i in result.issues if i.severity == 'HIGH']),
                'MEDIUM': len([i for i in result.issues if i.severity == 'MEDIUM']),
                'LOW': len([i for i in result.issues if i.severity == 'LOW']),
            },
            'issues': [
                {
                    'severity': i.severity,
                    'category': i.category,
                    'title': i.title,
                    'description': i.description,
                    'file_path': i.file_path,
                    'line_number': i.line_number,
                    'code_snippet': i.code_snippet,
                    'recommendation': i.recommendation,
                    'cwe_id': i.cwe_id,
                    'owasp_category': i.owasp_category,
                }
                for i in result.issues
            ]
        }, indent=2)

    # Text format
    lines = []

    # Header
    lines.append("")
    lines.append("=" * 80)
    lines.append(" SECURITY SCAN REPORT")
    lines.append("=" * 80)
    lines.append("")

    # Summary
    critical = len([i for i in result.issues if i.severity == 'CRITICAL'])
    high = len([i for i in result.issues if i.severity == 'HIGH'])
    medium = len([i for i in result.issues if i.severity == 'MEDIUM'])
    low = len([i for i in result.issues if i.severity == 'LOW'])

    lines.append("SUMMARY")
    lines.append("-" * 80)
    lines.append(f"  Files scanned: {result.files_scanned}")
    lines.append(f"  Scan duration: {result.scan_duration:.2f}s")
    lines.append(f"  Total issues:  {len(result.issues)}")
    lines.append("")
    lines.append("  By Severity:")
    lines.append(f"    CRITICAL: {critical}")
    lines.append(f"    HIGH:     {high}")
    lines.append(f"    MEDIUM:   {medium}")
    lines.append(f"    LOW:      {low}")
    lines.append("")

    # Issues by severity
    if result.issues:
        for severity in ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW']:
            severity_issues = [i for i in result.issues if i.severity == severity]
            if severity_issues:
                icon = {'CRITICAL': '!!!', 'HIGH': '!!', 'MEDIUM': '!', 'LOW': '.'}[severity]
                lines.append(f"{icon} {severity} SEVERITY ISSUES ({len(severity_issues)})")
                lines.append("-" * 80)

                for idx, issue in enumerate(severity_issues, 1):
                    lines.append(f"  [{idx}] {issue.title}")
                    lines.append(f"      Category: {issue.category}")
                    if issue.file_path:
                        loc = f"{issue.file_path}:{issue.line_number}" if issue.line_number else issue.file_path
                        lines.append(f"      Location: {loc}")
                    if issue.code_snippet:
                        lines.append(f"      Code: {issue.code_snippet[:60]}...")
                    if issue.cwe_id:
                        lines.append(f"      CWE: {issue.cwe_id}")
                    if issue.owasp_category:
                        lines.append(f"      OWASP: {issue.owasp_category}")
                    lines.append(f"      Fix: {issue.recommendation}")
                    lines.append("")
    else:
        lines.append("No security issues found!")
        lines.append("")

    # Result
    lines.append("=" * 80)
    if critical > 0:
        lines.append(f"  RESULT: CRITICAL ({critical} critical issues require immediate attention)")
    elif high > 0:
        lines.append(f"  RESULT: FAIL ({high} high severity issues found)")
    elif medium > 0:
        lines.append(f"  RESULT: WARNING ({medium} medium severity issues found)")
    elif low > 0:
        lines.append(f"  RESULT: PASS WITH NOTES ({low} low severity issues)")
    else:
        lines.append("  RESULT: PASS (no security issues detected)")
    lines.append("=" * 80)

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Security scanner for codebase")
    parser.add_argument("--project-dir", default=".", help="Project directory to scan")
    parser.add_argument("--severity", default="LOW",
                        choices=['CRITICAL', 'HIGH', 'MEDIUM', 'LOW'],
                        help="Minimum severity to report")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    parser.add_argument("--strict", action="store_true", help="Fail on any HIGH or above")
    args = parser.parse_args()

    result = run_scan(args.project_dir, args.severity)

    format_type = 'json' if args.json else 'text'
    print(format_report(result, format_type))

    # Exit code
    critical = len([i for i in result.issues if i.severity == 'CRITICAL'])
    high = len([i for i in result.issues if i.severity == 'HIGH'])

    if critical > 0:
        return 2  # Critical issues
    if args.strict and high > 0:
        return 1  # High issues in strict mode
    return 0


if __name__ == "__main__":
    sys.exit(main())
@ -0,0 +1,363 @@
#!/usr/bin/env python3
"""Task management utilities for the guardrail workflow."""

import argparse
import os
import sys
from datetime import datetime
from pathlib import Path

# Try to import yaml, fall back to basic parsing if not available
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


def parse_yaml_simple(content: str) -> dict:
    """Simple YAML parser for basic task files."""
    result = {}
    current_list = None

    for line in content.split('\n'):
        line = line.rstrip()
        if not line or line.startswith('#'):
            continue

        # Handle list items (two-space indent, matching the writer below)
        if line.startswith('  - '):
            if current_list is not None:
                current_list.append(line[4:].strip())
            continue

        # Handle key-value pairs
        if ':' in line and not line.startswith(' '):
            key, _, value = line.partition(':')
            key = key.strip()
            value = value.strip()

            if value:
                result[key] = value
                current_list = None
            else:
                result[key] = []
                current_list = result[key]

    return result
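
A minimal illustration of what this fallback parser accepts, using hypothetical task-file content:

```python
sample = """\
id: task-001
status: pending
dependencies:
  - task-000
"""
print(parse_yaml_simple(sample))
# -> {'id': 'task-001', 'status': 'pending', 'dependencies': ['task-000']}
```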


def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    with open(filepath, 'r') as f:
        content = f.read()

    if HAS_YAML:
        return yaml.safe_load(content) or {}
    return parse_yaml_simple(content)


def save_yaml(filepath: str, data: dict):
    """Save data to YAML file."""
    if HAS_YAML:
        with open(filepath, 'w') as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False)
    else:
        # Simple YAML writer
        lines = []
        for key, value in data.items():
            if isinstance(value, list):
                lines.append(f"{key}:")
                for item in value:
                    lines.append(f"  - {item}")
            elif isinstance(value, str) and '\n' in value:
                lines.append(f"{key}: |")
                for line in value.split('\n'):
                    lines.append(f"  {line}")
            else:
                lines.append(f"{key}: {value}")

        with open(filepath, 'w') as f:
            f.write('\n'.join(lines))


# ============================================================================
# Version-Aware Task Directory
# ============================================================================

def get_workflow_dir() -> Path:
    """Get the .workflow directory path."""
    return Path('.workflow')


def get_current_tasks_dir() -> str:
    """Get the tasks directory for the currently active workflow version.

    Returns the version-specific tasks directory if a workflow is active,
    otherwise falls back to 'tasks' for backward compatibility.
    """
    current_path = get_workflow_dir() / 'current.yml'
    if not current_path.exists():
        return 'tasks'  # Fallback for no active workflow

    current = load_yaml(str(current_path))
    version = current.get('active_version')
    if not version:
        return 'tasks'  # Fallback

    tasks_dir = get_workflow_dir() / 'versions' / version / 'tasks'
    tasks_dir.mkdir(parents=True, exist_ok=True)
    return str(tasks_dir)
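
A minimal sketch of the resolution; run it from a scratch directory since it creates `.workflow/`, and note the version id `v1` is hypothetical:

```python
from pathlib import Path

Path('.workflow').mkdir(exist_ok=True)
Path('.workflow/current.yml').write_text('active_version: v1\n')  # hypothetical version

print(get_current_tasks_dir())  # -> .workflow/versions/v1/tasks (created if missing)
```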


# ============================================================================
# Task Operations
# ============================================================================

def find_tasks(tasks_dir: str, filters: dict = None) -> list:
    """Find all task files matching filters."""
    tasks = []
    tasks_path = Path(tasks_dir)

    if not tasks_path.exists():
        return tasks

    for filepath in tasks_path.glob('**/*.yml'):
        try:
            task = load_yaml(str(filepath))
            task['_filepath'] = str(filepath)

            # Apply filters
            if filters:
                match = True
                for key, value in filters.items():
                    if task.get(key) != value:
                        match = False
                        break
                if not match:
                    continue

            tasks.append(task)
        except Exception as e:
            print(f"Warning: Could not parse {filepath}: {e}", file=sys.stderr)

    return tasks


def list_tasks(tasks_dir: str, status: str = None, agent: str = None):
    """List tasks with optional filtering."""
    filters = {}
    if status:
        filters['status'] = status
    if agent:
        filters['agent'] = agent

    tasks = find_tasks(tasks_dir, filters)

    if not tasks:
        print("No tasks found.")
        return

    # Group by status
    by_status = {}
    for task in tasks:
        s = task.get('status', 'unknown')
        if s not in by_status:
            by_status[s] = []
        by_status[s].append(task)

    print("\n" + "=" * 60)
    print("TASK LIST")
    print("=" * 60)

    status_order = ['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked']
    for s in status_order:
        if s in by_status:
            print(f"\n{s.upper()} ({len(by_status[s])})")
            print("-" * 40)
            for task in by_status[s]:
                agent = task.get('agent', '?')
                priority = task.get('priority', 'medium')
                print(f"  [{agent}] {task.get('id', 'unknown')} ({priority})")
                print(f"      {task.get('title', 'No title')}")


def get_next_task(tasks_dir: str, agent: str) -> dict:
    """Get next available task for an agent."""
    tasks = find_tasks(tasks_dir, {'agent': agent, 'status': 'pending'})

    if not tasks:
        return None

    # Sort by priority (high > medium > low)
    priority_order = {'high': 0, 'medium': 1, 'low': 2}
    tasks.sort(key=lambda t: priority_order.get(t.get('priority', 'medium'), 1))

    # Check dependencies
    for task in tasks:
        deps = task.get('dependencies', [])
        if not deps:
            return task

        # Check if all dependencies are completed
        all_deps_done = True
        for dep_id in deps:
            dep_tasks = find_tasks(tasks_dir, {'id': dep_id})
            if dep_tasks and dep_tasks[0].get('status') != 'completed':
                all_deps_done = False
                break

        if all_deps_done:
            return task

    return None
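
The selection rule in miniature, with hypothetical task ids: a high-priority task whose dependency is unfinished loses to a dependency-free medium-priority one.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
save_yaml(os.path.join(tmp, 'be-001.yml'),
          {'id': 'be-001', 'agent': 'backend', 'status': 'in_progress'})
save_yaml(os.path.join(tmp, 'be-002.yml'),
          {'id': 'be-002', 'agent': 'backend', 'status': 'pending',
           'priority': 'high', 'dependencies': ['be-001']})
save_yaml(os.path.join(tmp, 'be-003.yml'),
          {'id': 'be-003', 'agent': 'backend', 'status': 'pending',
           'priority': 'medium'})

# be-002 sorts first on priority, but be-001 is not completed,
# so the dependency-free be-003 is returned instead.
print(get_next_task(tmp, 'backend').get('id'))  # -> be-003
```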


def update_task_status(tasks_dir: str, task_id: str, new_status: str, notes: str = None):
    """Update task status."""
    tasks = find_tasks(tasks_dir, {'id': task_id})

    if not tasks:
        print(f"Error: Task {task_id} not found")
        return False

    task = tasks[0]
    filepath = task['_filepath']

    # Remove internal field
    del task['_filepath']

    # Update status
    task['status'] = new_status

    if new_status == 'completed':
        task['completed_at'] = datetime.now().isoformat()

    if notes:
        task['review_notes'] = notes

    save_yaml(filepath, task)
    print(f"Updated {task_id} to {new_status}")
    return True


def complete_all_tasks(tasks_dir: str):
    """Mark all non-completed tasks as completed."""
    tasks = find_tasks(tasks_dir)
    completed_count = 0

    for task in tasks:
        if task.get('status') != 'completed':
            filepath = task['_filepath']
            del task['_filepath']
            task['status'] = 'completed'
            task['completed_at'] = datetime.now().isoformat()
            save_yaml(filepath, task)
            completed_count += 1
            print(f"  Completed: {task.get('id', 'unknown')}")

    print(f"\nMarked {completed_count} task(s) as completed.")
    return completed_count


def show_status(tasks_dir: str, manifest_path: str):
    """Show overall workflow status."""
    tasks = find_tasks(tasks_dir)

    # Count by status
    status_counts = {}
    agent_counts = {'frontend': {'pending': 0, 'completed': 0},
                    'backend': {'pending': 0, 'completed': 0},
                    'reviewer': {'pending': 0}}

    for task in tasks:
        s = task.get('status', 'unknown')
        status_counts[s] = status_counts.get(s, 0) + 1

        agent = task.get('agent', 'unknown')
        if agent in agent_counts:
            if s == 'pending':
                agent_counts[agent]['pending'] += 1
            elif s == 'completed':
                if 'completed' in agent_counts[agent]:
                    agent_counts[agent]['completed'] += 1

    print("\n" + "╔" + "═" * 58 + "╗")
    print("║" + "WORKFLOW STATUS".center(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + "  TASKS BY STATUS".ljust(58) + "║")
    print("║" + f"    ⏳ Pending:     {status_counts.get('pending', 0)}".ljust(58) + "║")
    print("║" + f"    🔄 In Progress: {status_counts.get('in_progress', 0)}".ljust(58) + "║")
    print("║" + f"    🔍 Review:      {status_counts.get('review', 0)}".ljust(58) + "║")
    print("║" + f"    ✅ Approved:    {status_counts.get('approved', 0)}".ljust(58) + "║")
    print("║" + f"    ✓  Completed:   {status_counts.get('completed', 0)}".ljust(58) + "║")
    print("║" + f"    🚫 Blocked:     {status_counts.get('blocked', 0)}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + "  TASKS BY AGENT".ljust(58) + "║")
    print("║" + f"    🎨 Frontend: {agent_counts['frontend']['pending']} pending, {agent_counts['frontend']['completed']} completed".ljust(58) + "║")
    print("║" + f"    ⚙️ Backend:  {agent_counts['backend']['pending']} pending, {agent_counts['backend']['completed']} completed".ljust(58) + "║")
    print("║" + f"    🔍 Reviewer: {agent_counts['reviewer']['pending']} pending".ljust(58) + "║")
    print("╚" + "═" * 58 + "╝")


def main():
    parser = argparse.ArgumentParser(description="Task management for guardrail workflow")
    subparsers = parser.add_subparsers(dest='command', help='Commands')

    # list command
    list_parser = subparsers.add_parser('list', help='List tasks')
    list_parser.add_argument('--status', help='Filter by status')
    list_parser.add_argument('--agent', help='Filter by agent')
    list_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')

    # next command
    next_parser = subparsers.add_parser('next', help='Get next task for agent')
    next_parser.add_argument('agent', choices=['frontend', 'backend', 'reviewer'])
    next_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')

    # update command
    update_parser = subparsers.add_parser('update', help='Update task status')
    update_parser.add_argument('task_id', help='Task ID')
    update_parser.add_argument('status', choices=['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked'])
    update_parser.add_argument('--notes', help='Review notes')
    update_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')

    # status command
    status_parser = subparsers.add_parser('status', help='Show workflow status')
    status_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')
    status_parser.add_argument('--manifest', default='project_manifest.json', help='Manifest path')

    # complete-all command
    complete_all_parser = subparsers.add_parser('complete-all', help='Mark all tasks as completed')
    complete_all_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')

    args = parser.parse_args()

    # Resolve tasks_dir to version-specific directory if not explicitly provided
    if hasattr(args, 'tasks_dir') and args.tasks_dir is None:
        args.tasks_dir = get_current_tasks_dir()

    if args.command == 'list':
        list_tasks(args.tasks_dir, args.status, args.agent)
    elif args.command == 'next':
        task = get_next_task(args.tasks_dir, args.agent)
        if task:
            print(f"Next task for {args.agent}: {task.get('id')}")
            print(f"  Title: {task.get('title')}")
            print(f"  Files: {task.get('file_paths', [])}")
        else:
            print(f"No pending tasks for {args.agent}")
    elif args.command == 'update':
        update_task_status(args.tasks_dir, args.task_id, args.status, args.notes)
    elif args.command == 'status':
        show_status(args.tasks_dir, args.manifest)
    elif args.command == 'complete-all':
        complete_all_tasks(args.tasks_dir)
    else:
        parser.print_help()


if __name__ == "__main__":
    main()
@ -0,0 +1,715 @@
#!/usr/bin/env python3
"""
Task State Manager for parallel execution and dependency tracking.

Manages task-level states independently from workflow phase, enabling:
- Multiple tasks in_progress simultaneously (if no blocking dependencies)
- Dependency validation before task execution
- Task grouping by agent type for parallel frontend/backend work
"""

import argparse
import json
import os
import sys
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple

# Try to import yaml
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


# ============================================================================
# YAML Helpers
# ============================================================================

def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    if not os.path.exists(filepath):
        return {}
    with open(filepath, 'r') as f:
        content = f.read()
    if not content.strip():
        return {}
    if HAS_YAML:
        return yaml.safe_load(content) or {}
    return parse_simple_yaml(content)


def parse_simple_yaml(content: str) -> dict:
    """Parse simple YAML without PyYAML dependency."""
    result = {}
    current_list = None

    for line in content.split('\n'):
        stripped = line.strip()

        if not stripped or stripped.startswith('#'):
            continue

        if stripped.startswith('- '):
            if current_list is not None:
                value = stripped[2:].strip()
                if (value.startswith('"') and value.endswith('"')) or \
                   (value.startswith("'") and value.endswith("'")):
                    value = value[1:-1]
                current_list.append(value)
            continue

        if ':' in stripped:
            key, _, value = stripped.partition(':')
            key = key.strip()
            value = value.strip()

            if value == '' or value == '[]':
                current_list = []
                result[key] = current_list
            elif value == '{}':
                result[key] = {}
                current_list = None
            elif value == 'null' or value == '~':
                result[key] = None
                current_list = None
            elif value == 'true':
                result[key] = True
                current_list = None
            elif value == 'false':
                result[key] = False
                current_list = None
            elif value.isdigit():
                result[key] = int(value)
                current_list = None
            else:
                if (value.startswith('"') and value.endswith('"')) or \
                   (value.startswith("'") and value.endswith("'")):
                    value = value[1:-1]
                result[key] = value
                current_list = None

    return result


def save_yaml(filepath: str, data: dict):
    """Save data to YAML file."""
    # Guard against bare filenames, where dirname('') would raise
    os.makedirs(os.path.dirname(filepath) or '.', exist_ok=True)
    if HAS_YAML:
        with open(filepath, 'w') as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
    else:
        with open(filepath, 'w') as f:
            json.dump(data, f, indent=2)


# ============================================================================
# Path Helpers
# ============================================================================

def get_workflow_dir() -> Path:
    return Path('.workflow')


def get_current_state_path() -> Path:
    return get_workflow_dir() / 'current.yml'


def get_active_version() -> Optional[str]:
    """Get the currently active workflow version."""
    current_path = get_current_state_path()
    if not current_path.exists():
        return None
    current = load_yaml(str(current_path))
    return current.get('active_version')


def get_tasks_dir() -> Optional[Path]:
    """Get the tasks directory for the active version."""
    version = get_active_version()
    if not version:
        return None
    tasks_dir = get_workflow_dir() / 'versions' / version / 'tasks'
    tasks_dir.mkdir(parents=True, exist_ok=True)
    return tasks_dir


# ============================================================================
# Task State Constants
# ============================================================================

TASK_STATES = ['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked']

VALID_TASK_TRANSITIONS = {
    'pending': ['in_progress', 'blocked'],
    'in_progress': ['review', 'blocked', 'pending'],  # Can go back if paused
    'review': ['approved', 'in_progress'],  # Can go back if changes needed
    'approved': ['completed'],
    'completed': [],  # Terminal state
    'blocked': ['pending']  # Unblocked when dependencies resolve
}
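
The same lookup `transition_task` performs further down, in isolation with hypothetical statuses:

```python
current, requested = 'review', 'completed'
valid_next = VALID_TASK_TRANSITIONS.get(current, [])
if requested not in valid_next:
    print(f"Invalid transition: {current} -> {requested}. Valid: {valid_next}")
# -> Invalid transition: review -> completed. Valid: ['approved', 'in_progress']
```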


# ============================================================================
# Task Loading
# ============================================================================

def load_all_tasks() -> Dict[str, dict]:
    """Load all tasks from the current version's tasks directory."""
    tasks_dir = get_tasks_dir()
    if not tasks_dir or not tasks_dir.exists():
        return {}

    tasks = {}
    for task_file in tasks_dir.glob('*.yml'):
        task_id = task_file.stem
        task = load_yaml(str(task_file))
        if task:
            tasks[task_id] = task

    return tasks


def load_task(task_id: str) -> Optional[dict]:
    """Load a single task by ID."""
    tasks_dir = get_tasks_dir()
    if not tasks_dir:
        return None

    task_path = tasks_dir / f"{task_id}.yml"
    if not task_path.exists():
        return None

    return load_yaml(str(task_path))


def save_task(task: dict):
    """Save a task to the tasks directory."""
    tasks_dir = get_tasks_dir()
    if not tasks_dir:
        print("Error: No active workflow")
        return

    task_id = task.get('id', task.get('task_id'))
    if not task_id:
        print("Error: Task has no ID")
        return

    task['updated_at'] = datetime.now().isoformat()
    save_yaml(str(tasks_dir / f"{task_id}.yml"), task)


# ============================================================================
# Dependency Resolution
# ============================================================================

def get_task_dependencies(task: dict) -> List[str]:
    """Get the list of task IDs that this task depends on."""
    return task.get('dependencies', []) or []


def check_dependencies_met(task_id: str, all_tasks: Dict[str, dict]) -> Tuple[bool, List[str]]:
    """
    Check if all dependencies for a task are completed.

    Returns:
        Tuple of (all_met, unmet_dependency_ids)
    """
    task = all_tasks.get(task_id)
    if not task:
        return False, [f"Task {task_id} not found"]

    dependencies = get_task_dependencies(task)
    unmet = []

    for dep_id in dependencies:
        dep_task = all_tasks.get(dep_id)
        if not dep_task:
            unmet.append(f"{dep_id} (not found)")
        elif dep_task.get('status') not in ['completed', 'approved']:
            unmet.append(f"{dep_id} (status: {dep_task.get('status', 'unknown')})")

    return len(unmet) == 0, unmet
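
A small in-memory check with hypothetical tasks; no files are involved because `all_tasks` is passed in directly:

```python
all_tasks = {
    'api': {'status': 'completed'},
    'ui': {'status': 'pending', 'dependencies': ['api', 'schema']},
}
ok, unmet = check_dependencies_met('ui', all_tasks)
print(ok, unmet)  # -> False ['schema (not found)']
```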


def get_dependency_graph(all_tasks: Dict[str, dict]) -> Dict[str, Set[str]]:
    """Build a dependency graph for all tasks."""
    graph = {}
    for task_id, task in all_tasks.items():
        deps = get_task_dependencies(task)
        graph[task_id] = set(deps)
    return graph


def detect_circular_dependencies(all_tasks: Dict[str, dict]) -> List[List[str]]:
    """Detect circular dependencies using DFS."""
    graph = get_dependency_graph(all_tasks)
    cycles = []
    visited = set()
    rec_stack = set()

    def dfs(node: str, path: List[str]) -> bool:
        visited.add(node)
        rec_stack.add(node)
        path.append(node)

        for neighbor in graph.get(node, set()):
            if neighbor not in visited:
                if dfs(neighbor, path):
                    return True
            elif neighbor in rec_stack:
                # Found cycle
                cycle_start = path.index(neighbor)
                cycles.append(path[cycle_start:] + [neighbor])
                return True

        path.pop()
        rec_stack.remove(node)
        return False

    for node in graph:
        if node not in visited:
            dfs(node, [])

    return cycles
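
Two mutually dependent tasks are enough to trigger detection (hypothetical ids; the exact reported path depends on iteration order):

```python
looped = {
    'a': {'dependencies': ['b']},
    'b': {'dependencies': ['a']},
}
print(detect_circular_dependencies(looped))  # e.g. [['a', 'b', 'a']]
```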


def get_execution_order(all_tasks: Dict[str, dict]) -> List[str]:
    """Get topologically sorted execution order respecting dependencies."""
    graph = get_dependency_graph(all_tasks)

    # Kahn's algorithm for topological sort. Edges point from a task to its
    # dependencies, so in_degree counts how many tasks depend on each node;
    # we repeatedly peel off tasks nobody depends on and reverse at the end.
    in_degree = {task_id: 0 for task_id in all_tasks}
    for deps in graph.values():
        for dep in deps:
            if dep in in_degree:
                in_degree[dep] += 1

    queue = [t for t, d in in_degree.items() if d == 0]
    result = []

    while queue:
        node = queue.pop(0)
        result.append(node)

        # Removing node releases its dependencies (not its dependents)
        for dep in graph.get(node, set()):
            if dep in in_degree:
                in_degree[dep] -= 1
                if in_degree[dep] == 0:
                    queue.append(dep)

    # Reverse since we want dependencies first
    return list(reversed(result))
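
With the in-degree bookkeeping above, a three-task chain comes out dependencies-first (hypothetical ids):

```python
chain = {
    'schema': {},
    'api': {'dependencies': ['schema']},
    'ui': {'dependencies': ['api']},
}
print(get_execution_order(chain))  # -> ['schema', 'api', 'ui']
```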


# ============================================================================
# Parallel Execution Support
# ============================================================================

def get_parallel_candidates(all_tasks: Dict[str, dict]) -> Dict[str, List[str]]:
    """
    Get tasks that can be executed in parallel, grouped by agent.

    Returns:
        Dict mapping agent type to list of task IDs ready for parallel execution
    """
    candidates = {'frontend': [], 'backend': [], 'other': []}

    for task_id, task in all_tasks.items():
        status = task.get('status', 'pending')

        # Only consider pending tasks
        if status != 'pending':
            continue

        # Check if dependencies are met
        deps_met, _ = check_dependencies_met(task_id, all_tasks)
        if not deps_met:
            continue

        # Group by agent
        agent = task.get('agent', 'other')
        if agent in candidates:
            candidates[agent].append(task_id)
        else:
            candidates['other'].append(task_id)

    return candidates
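
Grouping in action with hypothetical tasks: only pending tasks whose dependencies are met appear, bucketed by agent.

```python
tasks = {
    'fe-1': {'agent': 'frontend', 'status': 'pending'},
    'be-1': {'agent': 'backend', 'status': 'pending'},
    'be-2': {'agent': 'backend', 'status': 'pending', 'dependencies': ['be-1']},
}
print(get_parallel_candidates(tasks))
# -> {'frontend': ['fe-1'], 'backend': ['be-1'], 'other': []}
```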


def get_active_tasks() -> Dict[str, List[str]]:
    """
    Get currently active (in_progress) tasks grouped by agent.

    Returns:
        Dict mapping agent type to list of active task IDs
    """
    all_tasks = load_all_tasks()
    active = {'frontend': [], 'backend': [], 'other': []}

    for task_id, task in all_tasks.items():
        if task.get('status') == 'in_progress':
            agent = task.get('agent', 'other')
            if agent in active:
                active[agent].append(task_id)
            else:
                active['other'].append(task_id)

    return active


def can_start_task(task_id: str, max_per_agent: int = 1) -> Tuple[bool, str]:
    """
    Check if a task can be started given current active tasks.

    Args:
        task_id: Task to check
        max_per_agent: Maximum concurrent tasks per agent type

    Returns:
        Tuple of (can_start, reason)
    """
    all_tasks = load_all_tasks()
    task = all_tasks.get(task_id)

    if not task:
        return False, f"Task {task_id} not found"

    status = task.get('status', 'pending')
    if status != 'pending':
        return False, f"Task is not pending (status: {status})"

    # Check dependencies
    deps_met, unmet = check_dependencies_met(task_id, all_tasks)
    if not deps_met:
        return False, f"Dependencies not met: {', '.join(unmet)}"

    # Check concurrent task limit per agent
    agent = task.get('agent', 'other')
    active = get_active_tasks()
    if len(active.get(agent, [])) >= max_per_agent:
        return False, f"Max concurrent {agent} tasks reached ({max_per_agent})"

    return True, "Ready to start"


# ============================================================================
# State Transitions
# ============================================================================
def transition_task(task_id: str, new_status: str) -> Tuple[bool, str]:
    """
    Transition a task to a new status with validation.

    Returns:
        Tuple of (success, message)
    """
    task = load_task(task_id)
    if not task:
        return False, f"Task {task_id} not found"

    current_status = task.get('status', 'pending')

    # Validate transition
    valid_next = VALID_TASK_TRANSITIONS.get(current_status, [])
    if new_status not in valid_next:
        return False, f"Invalid transition: {current_status} → {new_status}. Valid: {valid_next}"

    # For in_progress, check dependencies
    if new_status == 'in_progress':
        all_tasks = load_all_tasks()
        deps_met, unmet = check_dependencies_met(task_id, all_tasks)
        if not deps_met:
            # Block instead
            task['status'] = 'blocked'
            task['blocked_by'] = unmet
            task['blocked_at'] = datetime.now().isoformat()
            save_task(task)
            return False, f"Dependencies not met, task blocked: {', '.join(unmet)}"

    # Perform transition
    task['status'] = new_status
    task[f'{new_status}_at'] = datetime.now().isoformat()

    # Clear blocked info if unblocking
    if current_status == 'blocked' and new_status == 'pending':
        task.pop('blocked_by', None)
        task.pop('blocked_at', None)

    save_task(task)
    return True, f"Task {task_id}: {current_status} → {new_status}"

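# Illustrative calls (hypothetical task ID; the allowed moves come from
# VALID_TASK_TRANSITIONS, defined earlier in this file):
#
#   transition_task('T42', 'in_progress')  # ok if deps met; otherwise auto-blocks
#   transition_task('T42', 'completed')    # rejected unless valid from current status
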
def update_blocked_tasks():
    """Check and unblock tasks whose dependencies are now met."""
    all_tasks = load_all_tasks()
    unblocked = []

    for task_id, task in all_tasks.items():
        if task.get('status') != 'blocked':
            continue

        deps_met, _ = check_dependencies_met(task_id, all_tasks)
        if deps_met:
            success, msg = transition_task(task_id, 'pending')
            if success:
                unblocked.append(task_id)

    return unblocked


# ============================================================================
# Status Report
# ============================================================================

def get_status_summary() -> dict:
    """Get summary of task statuses."""
    all_tasks = load_all_tasks()

    summary = {
        'total': len(all_tasks),
        'by_status': {status: 0 for status in TASK_STATES},
        'by_agent': {},
        'blocked_details': [],
        'ready_for_parallel': get_parallel_candidates(all_tasks)
    }

    for task_id, task in all_tasks.items():
        status = task.get('status', 'pending')
        agent = task.get('agent', 'other')

        summary['by_status'][status] = summary['by_status'].get(status, 0) + 1

        if agent not in summary['by_agent']:
            summary['by_agent'][agent] = {'total': 0, 'by_status': {}}
        summary['by_agent'][agent]['total'] += 1
        summary['by_agent'][agent]['by_status'][status] = \
            summary['by_agent'][agent]['by_status'].get(status, 0) + 1

        if status == 'blocked':
            summary['blocked_details'].append({
                'task_id': task_id,
                'blocked_by': task.get('blocked_by', []),
                'blocked_at': task.get('blocked_at')
            })

    return summary

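# Shape of the returned summary (illustrative values):
#
#   {'total': 5,
#    'by_status': {'pending': 2, 'in_progress': 1, 'blocked': 1, ...},
#    'by_agent': {'backend': {'total': 3, 'by_status': {'pending': 2, ...}}},
#    'blocked_details': [{'task_id': 'T7', 'blocked_by': ['T3'], 'blocked_at': '...'}],
#    'ready_for_parallel': {'frontend': [...], 'backend': [...], 'other': [...]}}
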
def show_status():
    """Display task status summary."""
    summary = get_status_summary()

    print()
    print("╔" + "═" * 60 + "╗")
    print("║" + "TASK STATE MANAGER STATUS".center(60) + "║")
    print("╠" + "═" * 60 + "╣")
    print("║" + f" Total Tasks: {summary['total']}".ljust(60) + "║")
    print("╠" + "═" * 60 + "╣")
    print("║" + " BY STATUS".ljust(60) + "║")

    status_icons = {
        'pending': '⏳', 'in_progress': '🔄', 'review': '🔍',
        'approved': '✅', 'completed': '✓', 'blocked': '🚫'
    }

    for status, count in summary['by_status'].items():
        icon = status_icons.get(status, '•')
        print("║" + f"   {icon} {status}: {count}".ljust(60) + "║")

    print("╠" + "═" * 60 + "╣")
    print("║" + " BY AGENT".ljust(60) + "║")

    for agent, data in summary['by_agent'].items():
        print("║" + f"   {agent}: {data['total']} tasks".ljust(60) + "║")
        for status, count in data['by_status'].items():
            if count > 0:
                print("║" + f"      └─ {status}: {count}".ljust(60) + "║")

    # Show parallel candidates
    parallel = summary['ready_for_parallel']
    has_parallel = any(len(v) > 0 for v in parallel.values())

    if has_parallel:
        print("╠" + "═" * 60 + "╣")
        print("║" + " 🔀 READY FOR PARALLEL EXECUTION".ljust(60) + "║")
        for agent, tasks in parallel.items():
            if tasks:
                print("║" + f"   {agent}: {', '.join(tasks[:3])}".ljust(60) + "║")
                if len(tasks) > 3:
                    print("║" + f"      (+{len(tasks) - 3} more)".ljust(60) + "║")

    # Show blocked tasks
    if summary['blocked_details']:
        print("╠" + "═" * 60 + "╣")
        print("║" + " 🚫 BLOCKED TASKS".ljust(60) + "║")
        for blocked in summary['blocked_details'][:5]:
            deps = ', '.join(blocked['blocked_by'][:2])
            if len(blocked['blocked_by']) > 2:
                deps += f" (+{len(blocked['blocked_by']) - 2})"
            print("║" + f"   {blocked['task_id']}".ljust(60) + "║")
            print("║" + f"      Blocked by: {deps}".ljust(60) + "║")

    print("╚" + "═" * 60 + "╝")


# ============================================================================
# CLI Interface
# ============================================================================

def main():
    parser = argparse.ArgumentParser(description="Task state management for parallel execution")
    subparsers = parser.add_subparsers(dest='command', help='Commands')

    # status command
    subparsers.add_parser('status', help='Show task status summary')

    # transition command
    trans_parser = subparsers.add_parser('transition', help='Transition task status')
    trans_parser.add_argument('task_id', help='Task ID')
    trans_parser.add_argument('status', choices=TASK_STATES, help='New status')

    # can-start command
    can_start_parser = subparsers.add_parser('can-start', help='Check if task can start')
    can_start_parser.add_argument('task_id', help='Task ID')
    can_start_parser.add_argument('--max-per-agent', type=int, default=1,
                                  help='Max concurrent tasks per agent')

    # parallel command
    subparsers.add_parser('parallel', help='Show tasks ready for parallel execution')

    # deps command
    deps_parser = subparsers.add_parser('deps', help='Show task dependencies')
    deps_parser.add_argument('task_id', nargs='?', help='Task ID (optional)')

    # check-deps command
    check_deps_parser = subparsers.add_parser('check-deps', help='Check if dependencies are met')
    check_deps_parser.add_argument('task_id', help='Task ID')

    # unblock command
    subparsers.add_parser('unblock', help='Update blocked tasks whose deps are now met')

    # order command
    subparsers.add_parser('order', help='Show execution order respecting dependencies')

    # cycles command
    subparsers.add_parser('cycles', help='Detect circular dependencies')

    args = parser.parse_args()

    if args.command == 'status':
        show_status()

    elif args.command == 'transition':
        success, msg = transition_task(args.task_id, args.status)
        print(msg)
        if not success:
            sys.exit(1)

    elif args.command == 'can-start':
        can_start, reason = can_start_task(args.task_id, args.max_per_agent)
        print(f"{'✅ Yes' if can_start else '❌ No'}: {reason}")
        sys.exit(0 if can_start else 1)

    elif args.command == 'parallel':
        all_tasks = load_all_tasks()
        candidates = get_parallel_candidates(all_tasks)

        print("\n🔀 Tasks Ready for Parallel Execution:\n")
        for agent, tasks in candidates.items():
            if tasks:
                print(f"  {agent}:")
                for task_id in tasks:
                    task = all_tasks.get(task_id, {})
                    print(f"    - {task_id}: {task.get('title', 'No title')}")

        if not any(candidates.values()):
            print("  No tasks ready for parallel execution")

    elif args.command == 'deps':
        all_tasks = load_all_tasks()

        if args.task_id:
            task = all_tasks.get(args.task_id)
            if task:
                deps = get_task_dependencies(task)
                print(f"\n{args.task_id} depends on:")
                if deps:
                    for dep_id in deps:
                        dep = all_tasks.get(dep_id, {})
                        status = dep.get('status', 'unknown')
                        print(f"  - {dep_id} ({status})")
                else:
                    print("  (no dependencies)")
            else:
                print(f"Task {args.task_id} not found")
        else:
            # Show all dependencies
            graph = get_dependency_graph(all_tasks)
            print("\nDependency Graph:\n")
            for task_id, deps in graph.items():
                if deps:
                    print(f"  {task_id} ← {', '.join(deps)}")

    elif args.command == 'check-deps':
        all_tasks = load_all_tasks()
        deps_met, unmet = check_dependencies_met(args.task_id, all_tasks)

        if deps_met:
            print(f"✅ All dependencies met for {args.task_id}")
        else:
            print(f"❌ Unmet dependencies for {args.task_id}:")
            for dep in unmet:
                print(f"  - {dep}")

        sys.exit(0 if deps_met else 1)

    elif args.command == 'unblock':
        unblocked = update_blocked_tasks()
        if unblocked:
            print(f"✅ Unblocked {len(unblocked)} tasks:")
            for task_id in unblocked:
                print(f"  - {task_id}")
        else:
            print("No tasks to unblock")

    elif args.command == 'order':
        all_tasks = load_all_tasks()

        # Check for cycles first
        cycles = detect_circular_dependencies(all_tasks)
        if cycles:
            print("⚠️ Cannot determine order - circular dependencies detected!")
            for cycle in cycles:
                print(f"  Cycle: {' → '.join(cycle)}")
            sys.exit(1)

        order = get_execution_order(all_tasks)
        print("\n📋 Execution Order (respecting dependencies):\n")
        for i, task_id in enumerate(order, 1):
            task = all_tasks.get(task_id, {})
            status = task.get('status', 'pending')
            agent = task.get('agent', '?')
            print(f"  {i}. [{agent}] {task_id} ({status})")

    elif args.command == 'cycles':
        all_tasks = load_all_tasks()
        cycles = detect_circular_dependencies(all_tasks)

        if cycles:
            print("⚠️ Circular dependencies detected:\n")
            for cycle in cycles:
                print(f"  {' → '.join(cycle)}")
        else:
            print("✅ No circular dependencies detected")

    else:
        parser.print_help()


if __name__ == "__main__":
    main()
@ -0,0 +1,75 @@
#!/usr/bin/env python3
"""Transition project between phases."""

import argparse
import json
import os
from datetime import datetime

VALID_PHASES = ["DESIGN_PHASE", "DESIGN_REVIEW", "IMPLEMENTATION_PHASE"]
VALID_TRANSITIONS = {
    "DESIGN_PHASE": ["DESIGN_REVIEW"],
    "DESIGN_REVIEW": ["DESIGN_PHASE", "IMPLEMENTATION_PHASE"],
    "IMPLEMENTATION_PHASE": ["DESIGN_PHASE"]
}

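# The state machine above permits, for example:
#
#   DESIGN_PHASE -> DESIGN_REVIEW -> IMPLEMENTATION_PHASE
#   DESIGN_REVIEW -> DESIGN_PHASE           (revision requested)
#   IMPLEMENTATION_PHASE -> DESIGN_PHASE    (back to design)
#
# Illustrative invocation (script filename assumed):
#   $ python3 transition_phase.py --to DESIGN_REVIEW
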
def load_manifest(manifest_path: str) -> dict:
    """Load manifest."""
    with open(manifest_path) as f:
        return json.load(f)


def save_manifest(manifest_path: str, manifest: dict):
    """Save manifest."""
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

def main():
    parser = argparse.ArgumentParser(description="Transition project phase")
    parser.add_argument("--to", required=True, choices=VALID_PHASES, help="Target phase")
    parser.add_argument("--manifest", default="project_manifest.json", help="Manifest path")
    args = parser.parse_args()

    manifest_path = args.manifest
    if not os.path.isabs(manifest_path):
        manifest_path = os.path.join(os.getcwd(), manifest_path)

    if not os.path.exists(manifest_path):
        print(f"Error: Manifest not found at {manifest_path}")
        return 1

    manifest = load_manifest(manifest_path)
    current_phase = manifest["state"]["current_phase"]
    target_phase = args.to

    if target_phase not in VALID_TRANSITIONS.get(current_phase, []):
        print(f"Error: Cannot transition from {current_phase} to {target_phase}")
        print(f"Valid transitions: {VALID_TRANSITIONS.get(current_phase, [])}")
        return 1

    # Update phase
    manifest["state"]["current_phase"] = target_phase

    # Add to history
    manifest["state"]["revision_history"].append({
        "action": "PHASE_TRANSITION",
        "timestamp": datetime.now().isoformat(),
        "details": f"Transitioned from {current_phase} to {target_phase}"
    })

    # If transitioning to implementation, mark entities as approved
    if target_phase == "IMPLEMENTATION_PHASE":
        for entity_type in ["pages", "components", "api_endpoints", "database_tables"]:
            for entity in manifest["entities"].get(entity_type, []):
                if entity.get("status") == "DEFINED":
                    entity["status"] = "APPROVED"

    save_manifest(manifest_path, manifest)
    print(f"Transitioned to {target_phase}")
    return 0


if __name__ == "__main__":
    exit(main())
@ -0,0 +1,381 @@
#!/usr/bin/env python3
"""
Validate Implementation Against API Contract

This script verifies that both backend and frontend implementations
comply with the generated API contract.

Checks performed:
1. Backend routes exist and export correct HTTP methods
2. Frontend components import from shared types file
3. API calls use correct paths and methods
4. Types are properly imported (not recreated locally)

Exit codes:
    0 = All validations pass
    1 = Warnings found (non-critical violations)
    2 = Critical violations (missing routes, type mismatches)
"""

import os
import sys
import re
import json
from pathlib import Path
from typing import Dict, List, Any, Tuple, Optional

try:
    import yaml
except ImportError:
    yaml = None

def load_yaml(path: Path) -> Dict:
    """Load YAML file, falling back to JSON parsing when PyYAML is unavailable."""
    if yaml:
        with open(path) as f:
            return yaml.safe_load(f)
    else:
        with open(path) as f:
            content = f.read()
        try:
            return json.loads(content)
        except json.JSONDecodeError:
            return {}

def find_project_root(start_dir: Path) -> Path:
    """Find project root by looking for package.json."""
    current = start_dir.resolve()
    while current != current.parent:
        if (current / 'package.json').exists():
            return current
        current = current.parent
    return start_dir

class ContractValidator:
    """Validates implementation against API contract."""

    def __init__(self, contract_path: Path, project_dir: Path):
        self.contract_path = contract_path
        self.project_dir = project_dir
        self.contract = load_yaml(contract_path)
        self.violations: List[Dict[str, Any]] = []
        self.warnings: List[Dict[str, Any]] = []

    def validate_all(self) -> Tuple[int, List[Dict], List[Dict]]:
        """
        Run all validations.

        Returns:
            Tuple of (exit_code, violations, warnings)
        """
        # Validate backend
        self.validate_backend_routes()
        self.validate_backend_type_imports()

        # Validate frontend
        self.validate_frontend_type_imports()
        self.validate_frontend_api_calls()

        # Determine exit code
        critical_count = len([v for v in self.violations if v.get('severity') == 'critical'])

        if critical_count > 0:
            return 2, self.violations, self.warnings
        elif len(self.violations) > 0:
            return 1, self.violations, self.warnings
        else:
            return 0, self.violations, self.warnings

    def validate_backend_routes(self) -> None:
        """Validate that all backend routes from contract exist."""
        backend_routes = self.contract.get('backend_routes', [])

        for route in backend_routes:
            file_path = self.project_dir / route['file_path']
            endpoint_id = route.get('endpoint_id', 'unknown')
            export_name = route.get('export_name', 'GET')

            if not file_path.exists():
                self.violations.append({
                    'type': 'missing_route',
                    'severity': 'critical',
                    'endpoint_id': endpoint_id,
                    'expected_file': str(route['file_path']),
                    'message': f"Backend route file missing: {route['file_path']}",
                })
                continue

            # Check if file exports the correct HTTP method
            content = file_path.read_text()

            # Check for Next.js App Router pattern: export async function GET/POST/etc.
            export_pattern = rf'export\s+(async\s+)?function\s+{export_name}\s*\('
            if not re.search(export_pattern, content):
                # Also check for const exports: export const GET = ...
                const_pattern = rf'export\s+const\s+{export_name}\s*='
                if not re.search(const_pattern, content):
                    self.violations.append({
                        'type': 'missing_export',
                        'severity': 'critical',
                        'endpoint_id': endpoint_id,
                        'file': str(route['file_path']),
                        'expected_export': export_name,
                        'message': f"Route {route['file_path']} missing {export_name} export",
                    })

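    # A route file satisfying both checks above might look like this
    # (illustrative TypeScript, not part of the contract):
    #
    #   // app/api/users/route.ts
    #   import type { User } from '@/types/api';
    #   export async function GET(request: Request) { ... }
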
    def validate_backend_type_imports(self) -> None:
        """Validate backend files import from shared types."""
        backend_routes = self.contract.get('backend_routes', [])

        for route in backend_routes:
            file_path = self.project_dir / route['file_path']
            if not file_path.exists():
                continue  # Already reported as missing

            content = file_path.read_text()

            # Check for import from @/types/api or ./types/api or ../types/api
            import_patterns = [
                r"import\s+.*from\s+['\"]@/types/api['\"]",
                r"import\s+.*from\s+['\"]\.+/types/api['\"]",
                r"import\s+type\s+.*from\s+['\"]@/types/api['\"]",
            ]

            has_import = any(re.search(p, content) for p in import_patterns)

            if not has_import:
                self.warnings.append({
                    'type': 'missing_type_import',
                    'severity': 'warning',
                    'file': str(route['file_path']),
                    'message': f"Backend route {route['file_path']} should import types from @/types/api",
                })

            # Check for local type declarations that might conflict
            local_type_patterns = [
                r'(interface|type)\s+User\s*[={]',
                r'(interface|type)\s+.*Request\s*[={]',
                r'(interface|type)\s+.*Response\s*[={]',
            ]

            for pattern in local_type_patterns:
                match = re.search(pattern, content)
                if match and 'import' not in content[:match.start()].split('\n')[-1]:
                    self.warnings.append({
                        'type': 'local_type_definition',
                        'severity': 'warning',
                        'file': str(route['file_path']),
                        'message': "Backend route defines local types. Should import from @/types/api instead.",
                    })
                    break

    def validate_frontend_type_imports(self) -> None:
        """Validate frontend files import from shared types."""
        frontend_calls = self.contract.get('frontend_calls', [])

        checked_files = set()

        for call in frontend_calls:
            file_path_str = call.get('source', {}).get('file_path', '')
            if not file_path_str or file_path_str in checked_files:
                continue

            checked_files.add(file_path_str)
            file_path = self.project_dir / file_path_str

            if not file_path.exists():
                # Check alternate paths (page vs component)
                if '/components/' in file_path_str:
                    alt_path = file_path_str.replace('/components/', '/app/components/')
                    file_path = self.project_dir / alt_path
                    if not file_path.exists():
                        self.violations.append({
                            'type': 'missing_frontend_file',
                            'severity': 'high',
                            'expected_file': file_path_str,
                            'message': f"Frontend file missing: {file_path_str}",
                        })
                        continue
                else:
                    self.violations.append({
                        'type': 'missing_frontend_file',
                        'severity': 'high',
                        'expected_file': file_path_str,
                        'message': f"Frontend file missing: {file_path_str}",
                    })
                    continue

            content = file_path.read_text()

            # Check for import from @/types/api
            import_patterns = [
                r"import\s+.*from\s+['\"]@/types/api['\"]",
                r"import\s+.*from\s+['\"]\.+/types/api['\"]",
                r"import\s+type\s+.*from\s+['\"]@/types/api['\"]",
            ]

            has_import = any(re.search(p, content) for p in import_patterns)

            if not has_import:
                self.warnings.append({
                    'type': 'missing_type_import',
                    'severity': 'warning',
                    'file': file_path_str,
                    'message': f"Frontend file {file_path_str} should import types from @/types/api",
                })

    def validate_frontend_api_calls(self) -> None:
        """Validate frontend API calls match contract."""
        frontend_calls = self.contract.get('frontend_calls', [])
        endpoints = {e['id']: e for e in self.contract.get('endpoints', [])}

        for call in frontend_calls:
            file_path_str = call.get('source', {}).get('file_path', '')
            endpoint_id = call.get('endpoint_id', '')

            if not file_path_str or endpoint_id not in endpoints:
                continue

            file_path = self.project_dir / file_path_str

            # Try alternate paths
            if not file_path.exists():
                if '/components/' in file_path_str:
                    alt_path = file_path_str.replace('/components/', '/app/components/')
                    file_path = self.project_dir / alt_path

            if not file_path.exists():
                continue  # Already reported

            content = file_path.read_text()
            endpoint = endpoints[endpoint_id]
            expected_method = endpoint.get('method', 'GET')
            expected_path = endpoint.get('path', '')

            # Check for API call to this endpoint
            # Look for fetch calls or axios calls
            fetch_patterns = [
                rf"fetch\s*\(\s*['\"`][^'\"]*{re.escape(expected_path)}",
                rf"fetch\s*\(\s*API_PATHS\.",
                rf"axios\.{expected_method.lower()}\s*\(",
            ]

            has_call = any(re.search(p, content, re.IGNORECASE) for p in fetch_patterns)

            # If component is supposed to call this API but doesn't, it might be a dynamic call
            # or using a different pattern - this is a soft warning.
            # The important validation is that when they DO call, they use correct types.

    def validate_types_file_exists(self) -> bool:
        """Check if shared types file exists."""
        types_file = self.project_dir / 'app' / 'types' / 'api.ts'
        if not types_file.exists():
            self.violations.append({
                'type': 'missing_types_file',
                'severity': 'critical',
                'expected_file': 'app/types/api.ts',
                'message': "Shared types file missing: app/types/api.ts",
            })
            return False
        return True


def print_report(violations: List[Dict], warnings: List[Dict]) -> None:
    """Print validation report."""
    print("\n" + "=" * 60)
    print("API CONTRACT VALIDATION REPORT")
    print("=" * 60)

    if not violations and not warnings:
        print("\n✅ ALL VALIDATIONS PASSED")
        print("\nBoth frontend and backend implementations comply with the API contract.")
        return

    if violations:
        print(f"\n❌ VIOLATIONS FOUND: {len(violations)}")
        print("-" * 40)

        critical = [v for v in violations if v.get('severity') == 'critical']
        high = [v for v in violations if v.get('severity') == 'high']
        other = [v for v in violations if v.get('severity') not in ['critical', 'high']]

        if critical:
            print("\n🔴 CRITICAL (Must fix):")
            for v in critical:
                print(f"  • {v['message']}")
                if 'expected_file' in v:
                    print(f"    Expected: {v['expected_file']}")

        if high:
            print("\n🟠 HIGH (Should fix):")
            for v in high:
                print(f"  • {v['message']}")

        if other:
            print("\n🟡 OTHER:")
            for v in other:
                print(f"  • {v['message']}")

    if warnings:
        print(f"\n⚠️ WARNINGS: {len(warnings)}")
        print("-" * 40)
        for w in warnings:
            print(f"  • {w['message']}")

    print("\n" + "=" * 60)

    if any(v.get('severity') == 'critical' for v in violations):
        print("VERDICT: ❌ FAILED - Critical violations must be fixed")
    elif violations:
        print("VERDICT: ⚠️ WARNINGS - Review and fix if possible")
    else:
        print("VERDICT: ✅ PASSED with warnings")

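# Illustrative run (contract path assumed; see the usage string in main() below):
#
#   $ python3 validate_against_contract.py api_contract.yml --project-dir ./my-app
#
# Prints the report above and exits 0/1/2 as described in the module docstring.
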
def main():
    """Main entry point."""
    if len(sys.argv) < 2:
        print("Usage: validate_against_contract.py <api_contract.yml> [--project-dir <dir>]", file=sys.stderr)
        sys.exit(1)

    contract_path = Path(sys.argv[1])

    # Parse project directory
    project_dir = Path('.')
    if '--project-dir' in sys.argv:
        idx = sys.argv.index('--project-dir')
        project_dir = Path(sys.argv[idx + 1])

    project_dir = find_project_root(project_dir)

    if not contract_path.exists():
        print(f"Error: Contract file not found: {contract_path}", file=sys.stderr)
        sys.exit(2)

    # Run validation
    validator = ContractValidator(contract_path, project_dir)

    # First check types file exists
    validator.validate_types_file_exists()

    # Run all validations
    exit_code, violations, warnings = validator.validate_all()

    # Print report
    print_report(violations, warnings)

    # Summary stats
    print("\nValidation complete:")
    print(f"  Backend routes checked: {len(validator.contract.get('backend_routes', []))}")
    print(f"  Frontend calls checked: {len(validator.contract.get('frontend_calls', []))}")
    print(f"  Types defined: {len(validator.contract.get('types', []))}")

    sys.exit(exit_code)


if __name__ == '__main__':
    main()
@ -0,0 +1,536 @@
#!/usr/bin/env python3
"""
API Contract Validator for guardrail workflow.

Validates that frontend API calls match backend endpoint definitions:
- Endpoints exist
- HTTP methods match
- Request/response structures align

Usage:
    python3 validate_api_contract.py --manifest project_manifest.json --project-dir .
"""

import argparse
import json
import os
import re
import sys
from pathlib import Path
from typing import NamedTuple

class APICall(NamedTuple):
    """Frontend API call."""
    file_path: str
    line_number: int
    endpoint: str
    method: str
    has_body: bool
    raw_line: str


class APIEndpoint(NamedTuple):
    """Backend API endpoint."""
    file_path: str
    endpoint: str
    method: str
    has_request_body: bool
    response_type: str


class ContractIssue(NamedTuple):
    """API contract violation."""
    severity: str  # ERROR, WARNING
    category: str
    message: str
    file_path: str
    line_number: int | None
    suggestion: str

def load_manifest(manifest_path: str) -> dict | None:
    """Load manifest if exists."""
    if not os.path.exists(manifest_path):
        return None
    try:
        with open(manifest_path) as f:
            return json.load(f)
    except (json.JSONDecodeError, IOError):
        return None

def find_frontend_files(project_dir: str) -> list[str]:
    """Find frontend source files."""
    frontend_patterns = [
        'app/**/*.tsx', 'app/**/*.ts',
        'src/**/*.tsx', 'src/**/*.ts',
        'pages/**/*.tsx', 'pages/**/*.ts',
        'components/**/*.tsx', 'components/**/*.ts',
        'hooks/**/*.ts', 'hooks/**/*.tsx',
        'lib/**/*.ts', 'lib/**/*.tsx',
        'services/**/*.ts', 'services/**/*.tsx',
    ]

    # Exclude patterns
    exclude_patterns = ['node_modules', '.next', 'dist', 'build', 'api']

    files = []
    for pattern in frontend_patterns:
        base_dir = pattern.split('/')[0]
        search_dir = Path(project_dir) / base_dir
        if search_dir.exists():
            for file_path in search_dir.rglob('*.ts*'):
                path_str = str(file_path)
                if not any(ex in path_str for ex in exclude_patterns):
                    # Skip API route files
                    if '/api/' not in path_str:
                        files.append(path_str)

    return list(set(files))

def find_backend_files(project_dir: str) -> list[str]:
    """Find backend API route files."""
    backend_patterns = [
        'app/api/**/*.ts', 'app/api/**/*.tsx',
        'pages/api/**/*.ts', 'pages/api/**/*.tsx',
        'api/**/*.ts',
        'src/api/**/*.ts',
        'server/**/*.ts',
        'routes/**/*.ts',
    ]

    files = []
    for pattern in backend_patterns:
        base_parts = pattern.split('/')
        search_dir = Path(project_dir)
        for part in base_parts[:-1]:
            if '*' not in part:
                search_dir = search_dir / part

        if search_dir.exists():
            for file_path in search_dir.rglob('*.ts*'):
                path_str = str(file_path)
                if 'node_modules' not in path_str:
                    files.append(path_str)

    return list(set(files))

def extract_frontend_api_calls(file_path: str) -> list[APICall]:
    """Extract API calls from frontend file."""
    calls = []

    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()
            lines = content.split('\n')
    except (IOError, UnicodeDecodeError):
        return []

    # Patterns for API calls
    patterns = [
        # fetch('/api/...', { method: 'POST', body: ... })
        (r'''fetch\s*\(\s*['"](/api/[^'"]+)['"]''', 'fetch'),
        # axios.get('/api/...'), axios.post('/api/...', data)
        (r'''axios\.(get|post|put|patch|delete)\s*\(\s*['"](/api/[^'"]+)['"]''', 'axios'),
        # api.get('/users'), api.post('/users', data)
        (r'''api\.(get|post|put|patch|delete)\s*\(\s*['"]([^'"]+)['"]''', 'api_client'),
        # useSWR('/api/...'), useSWR(() => '/api/...')
        (r'''useSWR\s*\(\s*['"](/api/[^'"]+)['"]''', 'swr'),
        # useQuery(['key'], () => fetch('/api/...'))
        (r'''fetch\s*\(\s*[`'"](/api/[^`'"]+)[`'"]''', 'fetch_template'),
    ]

    for line_num, line in enumerate(lines, 1):
        for pattern, call_type in patterns:
            matches = re.finditer(pattern, line, re.IGNORECASE)
            for match in matches:
                groups = match.groups()

                if call_type == 'fetch' or call_type == 'swr' or call_type == 'fetch_template':
                    endpoint = groups[0]
                    # Try to detect method from options
                    method = 'GET'
                    if 'method' in line.lower():
                        method_match = re.search(r'''method:\s*['"](\w+)['"]''', line, re.IGNORECASE)
                        if method_match:
                            method = method_match.group(1).upper()
                    has_body = 'body:' in line.lower() or 'body=' in line.lower()

                elif call_type == 'axios' or call_type == 'api_client':
                    method = groups[0].upper()
                    endpoint = groups[1]
                    # POST, PUT, PATCH typically have body
                    has_body = method in ['POST', 'PUT', 'PATCH']
                else:
                    continue

                # Normalize endpoint
                if not endpoint.startswith('/api/'):
                    endpoint = f'/api/{endpoint.lstrip("/")}'

                calls.append(APICall(
                    file_path=file_path,
                    line_number=line_num,
                    endpoint=endpoint,
                    method=method,
                    has_body=has_body,
                    raw_line=line.strip()
                ))

    return calls

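# Illustrative extraction (hypothetical source line):
#
#   fetch('/api/users', { method: 'POST', body: JSON.stringify(data) })
#
# yields APICall(endpoint='/api/users', method='POST', has_body=True, ...).
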
def extract_backend_endpoints(file_path: str) -> list[APIEndpoint]:
    """Extract API endpoints from backend file."""
    endpoints = []

    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()
    except (IOError, UnicodeDecodeError):
        return []

    # Determine endpoint from file path (Next.js App Router / Pages Router)
    if '/app/api/' in file_path:
        # App Router: app/api/users/route.ts -> /api/users
        api_path = re.search(r'/app/api/(.+?)/(route|page)\.(ts|tsx|js|jsx)', file_path)
        if api_path:
            endpoint = f'/api/{api_path.group(1)}'
        else:
            api_path = re.search(r'/app/api/(.+?)\.(ts|tsx|js|jsx)', file_path)
            if api_path:
                endpoint = f'/api/{api_path.group(1)}'
            else:
                endpoint = '/api/unknown'
    elif '/pages/api/' in file_path:
        # Pages Router: pages/api/users.ts -> /api/users
        api_path = re.search(r'/pages/api/(.+?)\.(ts|tsx|js|jsx)', file_path)
        if api_path:
            endpoint = f'/api/{api_path.group(1)}'
        else:
            endpoint = '/api/unknown'
    else:
        endpoint = '/api/unknown'

    # Clean up dynamic segments: [id] -> :id
    endpoint = re.sub(r'\[(\w+)\]', r':\1', endpoint)

    # Detect HTTP methods
    # Next.js App Router exports: GET, POST, PUT, DELETE, PATCH
    app_router_methods = re.findall(
        r'export\s+(?:async\s+)?function\s+(GET|POST|PUT|DELETE|PATCH|HEAD|OPTIONS)',
        content
    )

    # Pages Router: req.method checks
    pages_router_methods = re.findall(
        r'''req\.method\s*===?\s*['"](\w+)['"]''',
        content
    )

    # Express-style: router.get, router.post, app.get, app.post
    express_methods = re.findall(
        r'''(?:router|app)\.(get|post|put|patch|delete)\s*\(''',
        content,
        re.IGNORECASE
    )

    methods = set()
    methods.update(m.upper() for m in app_router_methods)
    methods.update(m.upper() for m in pages_router_methods)
    methods.update(m.upper() for m in express_methods)

    # Default to GET if no methods detected
    if not methods:
        methods = {'GET'}

    # Detect request body handling
    has_body_patterns = [
        r'request\.json\(\)',
        r'req\.body',
        r'await\s+request\.json',
        r'JSON\.parse',
        r'body\s*:',
    ]
    has_request_body = any(re.search(p, content) for p in has_body_patterns)

    # Detect response type
    response_type = 'json'  # default
    if 'NextResponse.json' in content or 'res.json' in content:
        response_type = 'json'
    elif 'new Response(' in content:
        response_type = 'response'
    elif 'res.send' in content:
        response_type = 'text'

    for method in methods:
        endpoints.append(APIEndpoint(
            file_path=file_path,
            endpoint=endpoint,
            method=method,
            has_request_body=has_request_body,
            response_type=response_type
        ))

    return endpoints

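# Path-to-endpoint mapping performed above, for example:
#
#   app/api/users/route.ts       -> /api/users        (App Router)
#   app/api/users/[id]/route.ts  -> /api/users/:id    (dynamic segment)
#   pages/api/login.ts           -> /api/login        (Pages Router)
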
def normalize_endpoint(endpoint: str) -> str:
    """Normalize endpoint for comparison."""
    # Remove query params
    endpoint = endpoint.split('?')[0]
    # Normalize dynamic segments
    endpoint = re.sub(r':\w+', ':param', endpoint)
    endpoint = re.sub(r'\$\{[^}]+\}', ':param', endpoint)
    # Remove trailing slash
    endpoint = endpoint.rstrip('/')
    return endpoint.lower()

def match_endpoints(call_endpoint: str, api_endpoint: str) -> bool:
    """Check if frontend call matches backend endpoint."""
    norm_call = normalize_endpoint(call_endpoint)
    norm_api = normalize_endpoint(api_endpoint)

    # Exact match
    if norm_call == norm_api:
        return True

    # Pattern match with dynamic segments
    api_pattern = re.sub(r':param', r'[^/]+', norm_api)
    if re.match(f'^{api_pattern}$', norm_call):
        return True

    return False

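# Matching behavior, for example:
#
#   match_endpoints('/api/users/123', '/api/users/:id')   -> True
#   match_endpoints('/api/users?page=2', '/api/users')    -> True  (query stripped)
#   match_endpoints('/api/users', '/api/orders')          -> False
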
def validate_api_contract(
    project_dir: str,
    manifest: dict | None = None
) -> tuple[list[ContractIssue], dict]:
    """Validate API contract between frontend and backend."""
    issues = []
    stats = {
        'frontend_calls': 0,
        'backend_endpoints': 0,
        'matched': 0,
        'unmatched_calls': 0,
        'method_mismatches': 0,
        'body_mismatches': 0,
    }

    # Find files
    frontend_files = find_frontend_files(project_dir)
    backend_files = find_backend_files(project_dir)

    # Extract API calls and endpoints
    all_calls: list[APICall] = []
    all_endpoints: list[APIEndpoint] = []

    for file in frontend_files:
        all_calls.extend(extract_frontend_api_calls(file))

    for file in backend_files:
        all_endpoints.extend(extract_backend_endpoints(file))

    stats['frontend_calls'] = len(all_calls)
    stats['backend_endpoints'] = len(all_endpoints)

    # Build endpoint lookup
    endpoint_map: dict[str, list[APIEndpoint]] = {}
    for ep in all_endpoints:
        key = normalize_endpoint(ep.endpoint)
        if key not in endpoint_map:
            endpoint_map[key] = []
        endpoint_map[key].append(ep)

    # Validate each frontend call
    for call in all_calls:
        matched = False

        for ep in all_endpoints:
            if match_endpoints(call.endpoint, ep.endpoint):
                matched = True

                # Check method match
                if call.method != ep.method:
                    # Check if endpoint supports this method
                    endpoint_methods = [e.method for e in all_endpoints
                                        if match_endpoints(call.endpoint, e.endpoint)]
                    if call.method not in endpoint_methods:
                        issues.append(ContractIssue(
                            severity='ERROR',
                            category='METHOD_MISMATCH',
                            message=f"Frontend calls {call.method} {call.endpoint} but backend only supports {endpoint_methods}",
                            file_path=call.file_path,
                            line_number=call.line_number,
                            suggestion=f"Change method to one of: {', '.join(endpoint_methods)}"
                        ))
                        stats['method_mismatches'] += 1
                    continue

                # Check body requirements
                if call.has_body and not ep.has_request_body:
                    issues.append(ContractIssue(
                        severity='WARNING',
                        category='BODY_MISMATCH',
                        message=f"Frontend sends body to {call.endpoint} but backend may not process it",
                        file_path=call.file_path,
                        line_number=call.line_number,
                        suggestion="Verify backend handles request body or remove body from frontend call"
                    ))
                    stats['body_mismatches'] += 1

                if not call.has_body and ep.has_request_body and ep.method in ['POST', 'PUT', 'PATCH']:
                    issues.append(ContractIssue(
                        severity='WARNING',
                        category='MISSING_BODY',
                        message=f"Backend expects body for {call.method} {call.endpoint} but frontend may not send it",
                        file_path=call.file_path,
                        line_number=call.line_number,
                        suggestion="Add request body to frontend call"
                    ))

                stats['matched'] += 1
                break

        if not matched:
            issues.append(ContractIssue(
                severity='ERROR',
                category='ENDPOINT_NOT_FOUND',
                message=f"Frontend calls {call.method} {call.endpoint} but no matching backend endpoint found",
                file_path=call.file_path,
                line_number=call.line_number,
                suggestion=f"Create backend endpoint at {call.endpoint} or fix the frontend URL"
            ))
            stats['unmatched_calls'] += 1

    # Check for unused backend endpoints
    called_endpoints = set()
    for call in all_calls:
        called_endpoints.add((normalize_endpoint(call.endpoint), call.method))

    for ep in all_endpoints:
        key = (normalize_endpoint(ep.endpoint), ep.method)
        if key not in called_endpoints:
            # Check if any call matches with different method
            matching_calls = [c for c in all_calls
                              if match_endpoints(c.endpoint, ep.endpoint)]
            if not matching_calls:
                issues.append(ContractIssue(
                    severity='WARNING',
                    category='UNUSED_ENDPOINT',
                    message=f"Backend endpoint {ep.method} {ep.endpoint} is not called by frontend",
                    file_path=ep.file_path,
                    line_number=None,
                    suggestion="Verify endpoint is needed or remove unused code"
                ))

    return issues, stats

def format_report(issues: list[ContractIssue], stats: dict) -> str:
    """Format validation report."""
    lines = []

    lines.append("")
    lines.append("=" * 70)
    lines.append("  API CONTRACT VALIDATION REPORT")
    lines.append("=" * 70)
    lines.append("")

    # Stats
    lines.append("SUMMARY")
    lines.append("-" * 70)
    lines.append(f"  Frontend API calls found: {stats['frontend_calls']}")
    lines.append(f"  Backend endpoints found: {stats['backend_endpoints']}")
    lines.append(f"  Matched calls: {stats['matched']}")
    lines.append(f"  Unmatched calls: {stats['unmatched_calls']}")
    lines.append(f"  Method mismatches: {stats['method_mismatches']}")
    lines.append(f"  Body mismatches: {stats['body_mismatches']}")
    lines.append("")

    # Issues by severity
    errors = [i for i in issues if i.severity == 'ERROR']
    warnings = [i for i in issues if i.severity == 'WARNING']

    if errors:
        lines.append("ERRORS (must fix)")
        lines.append("-" * 70)
        for i, issue in enumerate(errors, 1):
            lines.append(f"  {i}. [{issue.category}] {issue.message}")
            lines.append(f"     File: {issue.file_path}:{issue.line_number or '?'}")
            lines.append(f"     Fix: {issue.suggestion}")
            lines.append("")

    if warnings:
        lines.append("WARNINGS (review)")
        lines.append("-" * 70)
        for i, issue in enumerate(warnings, 1):
            lines.append(f"  {i}. [{issue.category}] {issue.message}")
            lines.append(f"     File: {issue.file_path}:{issue.line_number or '?'}")
            lines.append(f"     Fix: {issue.suggestion}")
            lines.append("")

    # Result
    lines.append("=" * 70)
    if not errors:
        lines.append("  RESULT: PASS (no errors)")
    else:
        lines.append(f"  RESULT: FAIL ({len(errors)} errors)")
    lines.append("=" * 70)

    return "\n".join(lines)

def main():
    parser = argparse.ArgumentParser(description="Validate API contract")
    parser.add_argument("--manifest", help="Path to project_manifest.json")
    parser.add_argument("--project-dir", default=".", help="Project directory")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    parser.add_argument("--strict", action="store_true", help="Fail on warnings too")
    args = parser.parse_args()

    manifest = None
    if args.manifest:
        manifest = load_manifest(args.manifest)

    issues, stats = validate_api_contract(args.project_dir, manifest)

    if args.json:
        output = {
            'stats': stats,
            'issues': [
                {
                    'severity': i.severity,
                    'category': i.category,
                    'message': i.message,
                    'file_path': i.file_path,
                    'line_number': i.line_number,
                    'suggestion': i.suggestion
                }
                for i in issues
            ],
            'result': 'PASS' if not any(i.severity == 'ERROR' for i in issues) else 'FAIL'
        }
        print(json.dumps(output, indent=2))
    else:
        print(format_report(issues, stats))

    # Exit code
    errors = [i for i in issues if i.severity == 'ERROR']
    warnings = [i for i in issues if i.severity == 'WARNING']

    if errors:
        return 1
    if args.strict and warnings:
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
@ -0,0 +1,271 @@
#!/usr/bin/env python3
"""
Bash command validator for guardrail enforcement.

Blocks shell commands that could write files outside the workflow.

Exit codes:
    0 = Command allowed
    1 = Command blocked (with message)
"""

import argparse
import re
import sys

# Patterns that indicate file writing
WRITE_PATTERNS = [
    # Redirections
    r'\s*>\s*["\']?([^"\'&|;\s]+)',   # > file
    r'\s*>>\s*["\']?([^"\'&|;\s]+)',  # >> file
    r'\s*2>\s*["\']?([^"\'&|;\s]+)',  # 2> file
    r'\s*&>\s*["\']?([^"\'&|;\s]+)',  # &> file

    # tee command
    r'\btee\s+(?:-a\s+)?["\']?([^"\'&|;\s]+)',

    # Direct file creation
    r'\btouch\s+["\']?([^"\'&|;\s]+)',

    # Copy/Move operations
    r'\bcp\s+.*\s+["\']?([^"\'&|;\s]+)',
    r'\bmv\s+.*\s+["\']?([^"\'&|;\s]+)',

    # In-place editing
    r'\bsed\s+-i',
    r'\bawk\s+-i\s+inplace',
    r'\bperl\s+-i',

    # Here documents
    r'<<\s*["\']?EOF',
    r'<<\s*["\']?END',
    r"cat\s*<<",

    # mkdir (could be prep for writing)
    r'\bmkdir\s+(?:-p\s+)?["\']?([^"\'&|;\s]+)',

    # rm (destructive)
    r'\brm\s+(?:-rf?\s+)?["\']?([^"\'&|;\s]+)',

    # chmod/chown
    r'\bchmod\s+',
    r'\bchown\s+',

    # curl/wget writing to file
    r'\bcurl\s+.*-o\s+["\']?([^"\'&|;\s]+)',
    r'\bwget\s+.*-O\s+["\']?([^"\'&|;\s]+)',

    # git operations that modify files
    r'\bgit\s+checkout\s+',
    r'\bgit\s+reset\s+--hard',
    r'\bgit\s+clean\s+',
    r'\bgit\s+stash\s+pop',

    # npm/yarn install (modifies node_modules)
    r'\bnpm\s+install\b',
    r'\byarn\s+add\b',
    r'\bpnpm\s+add\b',

    # dd command
    r'\bdd\s+',

    # patch command
    r'\bpatch\s+',

    # ln (symlinks)
    r'\bln\s+',
]

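# Commands these patterns catch, for example:
#
#   echo "x" > src/index.ts              (redirection)
#   sed -i 's/a/b/' next.config.js       (in-place edit)
#   curl https://example.com -o out.bin  (download to file)
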
# Commands that are always allowed
ALWAYS_ALLOWED = [
    r'^ls\b',
    r'^cat\s+[^>]+$',   # cat without redirect
    r'^head\b',
    r'^tail\b',
    r'^grep\b',
    r'^find\b',
    r'^wc\b',
    r'^echo\s+[^>]+$',  # echo without redirect
    r'^pwd$',
    r'^cd\b',
    r'^which\b',
    r'^type\b',
    r'^file\b',
    r'^stat\b',
    r'^du\b',
    r'^df\b',
    r'^ps\b',
    r'^env$',
    r'^printenv',
    r'^date$',
    r'^whoami$',
    r'^hostname$',
    r'^uname\b',
    r'^git\s+status',
    r'^git\s+log',
    r'^git\s+diff',
    r'^git\s+branch',
    r'^git\s+show',
    r'^git\s+remote',
    r'^npm\s+run\b',
    r'^npm\s+test\b',
    r'^npm\s+start\b',
    r'^npx\b',
    r'^node\b',
    r'^python3?\b(?!.*>)',  # python without redirect
    r'^pip\s+list',
    r'^pip\s+show',
    r'^tree\b',
    r'^jq\b',
    r'^curl\s+(?!.*-o)',    # curl without -o
    r'^wget\s+(?!.*-O)',    # wget without -O
]

# Paths that are always allowed for writing
ALLOWED_PATHS = [
    '.workflow/',
    '.claude/',
    'skills/',
    'project_manifest.json',
    '/tmp/',
    '/var/tmp/',
    'node_modules/',  # npm install
    '.git/',          # git operations
]


def is_always_allowed(command: str) -> bool:
    """Check if command matches always-allowed patterns."""
    command = command.strip()
    for pattern in ALWAYS_ALLOWED:
        if re.match(pattern, command, re.IGNORECASE):
            return True
    return False

def extract_target_paths(command: str) -> list:
    """Extract potential file paths being written to."""
    paths = []

    for pattern in WRITE_PATTERNS:
        matches = re.findall(pattern, command)
        for match in matches:
            if isinstance(match, tuple):
                paths.extend(match)
            elif match:
                paths.append(match)

    return [p for p in paths if p and not p.startswith('-')]

def is_path_allowed(path: str) -> bool:
    """Check if path is in allowed list."""
    # Strip only a leading "./" prefix; str.lstrip('./') would also eat
    # leading ".." segments and misclassify paths outside the project.
    if path.startswith('./'):
        path = path[2:]

    for allowed in ALLOWED_PATHS:
        if path.startswith(allowed) or path == allowed.rstrip('/'):
            return True

    return False

def has_write_operation(command: str) -> tuple[bool, list]:
    """
    Check if command contains write operations.

    Returns (has_write, target_paths)
    """
    for pattern in WRITE_PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            paths = extract_target_paths(command)
            return True, paths

    return False, []

def validate_bash_command(command: str) -> tuple[bool, str]:
    """
    Validate a bash command for guardrail compliance.

    Returns (allowed, message)
    """
    if not command or not command.strip():
        return True, "✓ GUARDRAIL: Empty command"

    command = command.strip()

    # Check if always allowed
    if is_always_allowed(command):
        return True, "✓ GUARDRAIL: Safe command allowed"

    # Check for write operations
    has_write, target_paths = has_write_operation(command)

    if not has_write:
        return True, "✓ GUARDRAIL: No write operations detected"

    # Check if all target paths are allowed
    blocked_paths = []
    for path in target_paths:
        if not is_path_allowed(path):
            blocked_paths.append(path)

    if not blocked_paths:
        return True, "✓ GUARDRAIL: Write to allowed paths"

    # Block the command
    suggested_feature = "modify files via bash"

    error_msg = f"""
⛔ GUARDRAIL VIOLATION: Bash command blocked

Command: {command[:100]}{'...' if len(command) > 100 else ''}

Detected write operation to unauthorized paths:
{chr(10).join(f' - {p}' for p in blocked_paths)}

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Use the workflow instead of bash              ║
║                                                                    ║
║  Run this command:                                                 ║
║     /workflow:spawn {suggested_feature}                            ║
║                                                                    ║
║  Then use Write/Edit tools (not bash) to modify files.             ║
║                                                                    ║
║  Bash is for reading/running, not writing files.                   ║
╚══════════════════════════════════════════════════════════════════╝

Allowed bash write targets:
 - .workflow/*, .claude/*, skills/*
 - project_manifest.json
 - /tmp/*, node_modules/
"""

    return False, error_msg
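

# Illustrative outcomes, assuming WRITE_PATTERNS (defined above) matches shell
# redirection and in-place edit commands:
#   validate_bash_command("jq . data.json")         -> allowed (read-only command)
#   validate_bash_command("echo hi > src/app.ts")   -> blocked (write outside allowed paths)
#   validate_bash_command("echo hi > /tmp/out.txt") -> allowed (/tmp/ is in ALLOWED_PATHS)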


def main():
    parser = argparse.ArgumentParser(description="Validate bash command for guardrails")
    parser.add_argument("--command", help="Bash command to validate")
    args = parser.parse_args()

    command = args.command or ""

    # Also try reading from stdin if no command provided
    if not command and not sys.stdin.isatty():
        command = sys.stdin.read().strip()

    allowed, message = validate_bash_command(command)

    if allowed:
        print(message)
        return 0
    else:
        print(message, file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(main())
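
# Example invocations (sketch; the script filename here is assumed):
#   python3 check_bash_command.py --command 'sed -i s/a/b/ src/main.ts'
#   echo 'ls -la' | python3 check_bash_command.py
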
@ -0,0 +1,868 @@
#!/usr/bin/env python3
"""
Design Document Validator and Dependency Graph Generator

Validates design_document.yml and generates:
1. dependency_graph.yml - Layered execution order
2. Context snapshots for each task
3. Tasks with full context
"""

import argparse
import json
import os
import sys
from collections import defaultdict
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional, Set

# Try to import yaml
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False
    print("Warning: PyYAML not installed. Using basic parser.", file=sys.stderr)


# ============================================================================
# YAML Helpers
# ============================================================================

def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    if not os.path.exists(filepath):
        return {}
    with open(filepath, 'r') as f:
        content = f.read()
    if not content.strip():
        return {}

    if HAS_YAML:
        return yaml.safe_load(content) or {}

    # Basic fallback parser (limited)
    print(f"Warning: Using basic YAML parser for {filepath}", file=sys.stderr)
    return {}


def save_yaml(filepath: str, data: dict):
    """Save data to YAML file."""
    dirname = os.path.dirname(filepath)
    if dirname:  # guard: os.makedirs('') raises FileNotFoundError
        os.makedirs(dirname, exist_ok=True)

    if HAS_YAML:
        with open(filepath, 'w') as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
    else:
        # Simple JSON fallback
        with open(filepath, 'w') as f:
            json.dump(data, f, indent=2)


# ============================================================================
# Validation Classes
# ============================================================================

class ValidationError:
    """Represents a validation error."""

    def __init__(self, category: str, entity_id: str, message: str, severity: str = "error"):
        self.category = category
        self.entity_id = entity_id
        self.message = message
        self.severity = severity  # error, warning

    def __str__(self):
        icon = "❌" if self.severity == "error" else "⚠️"
        return f"{icon} [{self.category}] {self.entity_id}: {self.message}"


class DesignValidator:
    """Validates design document structure and relationships."""

    def __init__(self, design_doc: dict):
        self.design = design_doc
        self.errors: List[ValidationError] = []
        self.warnings: List[ValidationError] = []

        # Collected entity IDs
        self.model_ids: Set[str] = set()
        self.api_ids: Set[str] = set()
        self.page_ids: Set[str] = set()
        self.component_ids: Set[str] = set()
        self.all_ids: Set[str] = set()

    def validate(self) -> bool:
        """Run all validations. Returns True if no errors."""
        self._collect_ids()
        self._validate_models()
        self._validate_apis()
        self._validate_pages()
        self._validate_components()
        self._validate_no_circular_deps()

        return len(self.errors) == 0

    def _collect_ids(self):
        """Collect all entity IDs."""
        for model in self.design.get('data_models', []):
            self.model_ids.add(model.get('id', ''))
        for api in self.design.get('api_endpoints', []):
            self.api_ids.add(api.get('id', ''))
        for page in self.design.get('pages', []):
            self.page_ids.add(page.get('id', ''))
        for comp in self.design.get('components', []):
            self.component_ids.add(comp.get('id', ''))

        self.all_ids = self.model_ids | self.api_ids | self.page_ids | self.component_ids

    def _validate_models(self):
        """Validate data models."""
        for model in self.design.get('data_models', []):
            model_id = model.get('id', 'unknown')

            # Check required fields
            if not model.get('id'):
                self.errors.append(ValidationError('model', model_id, "Missing 'id' field"))
            if not model.get('name'):
                self.errors.append(ValidationError('model', model_id, "Missing 'name' field"))
            if not model.get('fields'):
                self.errors.append(ValidationError('model', model_id, "Missing 'fields' - model has no fields"))

            # Check for primary key
            fields = model.get('fields', [])
            has_pk = any('primary_key' in f.get('constraints', []) for f in fields)
            if not has_pk:
                self.errors.append(ValidationError('model', model_id, "No primary_key field defined"))

            # Check relations reference existing models
            for relation in model.get('relations', []):
                target = relation.get('target', '')
                if target and target not in self.model_ids:
                    self.errors.append(ValidationError(
                        'model', model_id,
                        f"Relation target '{target}' does not exist"
                    ))

            # Check enum fields have values
            for field in fields:
                if field.get('type') == 'enum' and not field.get('enum_values'):
                    self.errors.append(ValidationError(
                        'model', model_id,
                        f"Enum field '{field.get('name')}' missing enum_values"
                    ))

    def _validate_apis(self):
        """Validate API endpoints."""
        for api in self.design.get('api_endpoints', []):
            api_id = api.get('id', 'unknown')

            # Check required fields
            if not api.get('id'):
                self.errors.append(ValidationError('api', api_id, "Missing 'id' field"))
            if not api.get('method'):
                self.errors.append(ValidationError('api', api_id, "Missing 'method' field"))
            if not api.get('path'):
                self.errors.append(ValidationError('api', api_id, "Missing 'path' field"))

            # POST/PUT/PATCH should have request_body
            method = api.get('method', '').upper()
            if method in ['POST', 'PUT', 'PATCH'] and not api.get('request_body'):
                self.warnings.append(ValidationError(
                    'api', api_id,
                    f"{method} endpoint should have request_body",
                    severity="warning"
                ))

            # Check at least one response defined
            if not api.get('responses'):
                self.errors.append(ValidationError('api', api_id, "No responses defined"))

            # Check model dependencies exist
            for model_id in api.get('depends_on_models', []):
                if model_id not in self.model_ids:
                    self.errors.append(ValidationError(
                        'api', api_id,
                        f"depends_on_models references non-existent model '{model_id}'"
                    ))

            # Check API dependencies exist
            for dep_api_id in api.get('depends_on_apis', []):
                if dep_api_id not in self.api_ids:
                    self.errors.append(ValidationError(
                        'api', api_id,
                        f"depends_on_apis references non-existent API '{dep_api_id}'"
                    ))

    def _validate_pages(self):
        """Validate pages."""
        for page in self.design.get('pages', []):
            page_id = page.get('id', 'unknown')

            # Check required fields
            if not page.get('id'):
                self.errors.append(ValidationError('page', page_id, "Missing 'id' field"))
            if not page.get('path'):
                self.errors.append(ValidationError('page', page_id, "Missing 'path' field"))

            # Check data_needs reference existing APIs
            for data_need in page.get('data_needs', []):
                api_id = data_need.get('api_id', '')
                if api_id and api_id not in self.api_ids:
                    self.errors.append(ValidationError(
                        'page', page_id,
                        f"data_needs references non-existent API '{api_id}'"
                    ))

            # Check components exist
            for comp_id in page.get('components', []):
                if comp_id not in self.component_ids:
                    self.errors.append(ValidationError(
                        'page', page_id,
                        f"References non-existent component '{comp_id}'"
                    ))

    def _validate_components(self):
        """Validate components."""
        for comp in self.design.get('components', []):
            comp_id = comp.get('id', 'unknown')

            # Check required fields
            if not comp.get('id'):
                self.errors.append(ValidationError('component', comp_id, "Missing 'id' field"))
            if not comp.get('name'):
                self.errors.append(ValidationError('component', comp_id, "Missing 'name' field"))

            # Check uses_apis reference existing APIs
            for api_id in comp.get('uses_apis', []):
                if api_id not in self.api_ids:
                    self.errors.append(ValidationError(
                        'component', comp_id,
                        f"uses_apis references non-existent API '{api_id}'"
                    ))

            # Check uses_components reference existing components
            for child_id in comp.get('uses_components', []):
                if child_id not in self.component_ids:
                    self.errors.append(ValidationError(
                        'component', comp_id,
                        f"uses_components references non-existent component '{child_id}'"
                    ))

    def _validate_no_circular_deps(self):
        """Check for circular dependencies."""
        # Build dependency graph
        deps: Dict[str, Set[str]] = defaultdict(set)

        # Model relations
        for model in self.design.get('data_models', []):
            model_id = model.get('id', '')
            for relation in model.get('relations', []):
                target = relation.get('target', '')
                if target:
                    deps[model_id].add(target)

        # API dependencies
        for api in self.design.get('api_endpoints', []):
            api_id = api.get('id', '')
            for model_id in api.get('depends_on_models', []):
                deps[api_id].add(model_id)
            for dep_api_id in api.get('depends_on_apis', []):
                deps[api_id].add(dep_api_id)

        # Page dependencies
        for page in self.design.get('pages', []):
            page_id = page.get('id', '')
            for data_need in page.get('data_needs', []):
                api_id = data_need.get('api_id', '')
                if api_id:
                    deps[page_id].add(api_id)
            for comp_id in page.get('components', []):
                deps[page_id].add(comp_id)

        # Component dependencies
        for comp in self.design.get('components', []):
            comp_id = comp.get('id', '')
            for api_id in comp.get('uses_apis', []):
                deps[comp_id].add(api_id)
            for child_id in comp.get('uses_components', []):
                deps[comp_id].add(child_id)

        # Detect cycles using DFS
        visited = set()
        rec_stack = set()

        def has_cycle(node: str, path: List[str]) -> Optional[List[str]]:
            visited.add(node)
            rec_stack.add(node)
            path.append(node)

            for neighbor in deps.get(node, []):
                if neighbor not in visited:
                    result = has_cycle(neighbor, path)
                    if result:
                        return result
                elif neighbor in rec_stack:
                    # Found cycle
                    cycle_start = path.index(neighbor)
                    return path[cycle_start:] + [neighbor]

            path.pop()
            rec_stack.remove(node)
            return None

        for entity_id in self.all_ids:
            if entity_id not in visited:
                cycle = has_cycle(entity_id, [])
                if cycle:
                    self.errors.append(ValidationError(
                        'dependency', entity_id,
                        f"Circular dependency detected: {' → '.join(cycle)}"
                    ))
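
    # Illustrative failure case: two models whose relations point at each
    # other (A -> B and B -> A) are reported as
    #   "Circular dependency detected: A → B → A"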

    def print_report(self):
        """Print validation report."""
        print()
        print("=" * 60)
        print("DESIGN VALIDATION REPORT".center(60))
        print("=" * 60)

        # Summary
        print()
        print(f" Models:     {len(self.model_ids)}")
        print(f" APIs:       {len(self.api_ids)}")
        print(f" Pages:      {len(self.page_ids)}")
        print(f" Components: {len(self.component_ids)}")
        print(f" Total:      {len(self.all_ids)}")

        # Errors
        if self.errors:
            print()
            print("-" * 60)
            print(f"ERRORS ({len(self.errors)})")
            print("-" * 60)
            for error in self.errors:
                print(f"  {error}")

        # Warnings
        if self.warnings:
            print()
            print("-" * 60)
            print(f"WARNINGS ({len(self.warnings)})")
            print("-" * 60)
            for warning in self.warnings:
                print(f"  {warning}")

        # Result
        print()
        print("=" * 60)
        if self.errors:
            print("❌ VALIDATION FAILED".center(60))
        else:
            print("✅ VALIDATION PASSED".center(60))
        print("=" * 60)


# ============================================================================
# Dependency Graph Generator
# ============================================================================

class DependencyGraphGenerator:
    """Generates dependency graph and execution layers from design document."""

    def __init__(self, design_doc: dict):
        self.design = design_doc
        self.deps: Dict[str, Set[str]] = defaultdict(set)
        self.reverse_deps: Dict[str, Set[str]] = defaultdict(set)
        self.entity_types: Dict[str, str] = {}
        self.entity_names: Dict[str, str] = {}
        self.layers: List[List[str]] = []

    def generate(self) -> dict:
        """Generate the full dependency graph."""
        self._build_dependency_map()
        self._calculate_layers()
        return self._build_graph_document()

    def _build_dependency_map(self):
        """Build forward and reverse dependency maps."""
        # Models
        for model in self.design.get('data_models', []):
            model_id = model.get('id', '')
            self.entity_types[model_id] = 'model'
            self.entity_names[model_id] = model.get('name', model_id)

            for relation in model.get('relations', []):
                target = relation.get('target', '')
                if target:
                    self.deps[model_id].add(target)
                    self.reverse_deps[target].add(model_id)

        # APIs
        for api in self.design.get('api_endpoints', []):
            api_id = api.get('id', '')
            self.entity_types[api_id] = 'api'
            self.entity_names[api_id] = api.get('summary', api_id)

            for model_id in api.get('depends_on_models', []):
                self.deps[api_id].add(model_id)
                self.reverse_deps[model_id].add(api_id)

            for dep_api_id in api.get('depends_on_apis', []):
                self.deps[api_id].add(dep_api_id)
                self.reverse_deps[dep_api_id].add(api_id)

        # Pages
        for page in self.design.get('pages', []):
            page_id = page.get('id', '')
            self.entity_types[page_id] = 'page'
            self.entity_names[page_id] = page.get('name', page_id)

            for data_need in page.get('data_needs', []):
                api_id = data_need.get('api_id', '')
                if api_id:
                    self.deps[page_id].add(api_id)
                    self.reverse_deps[api_id].add(page_id)

            for comp_id in page.get('components', []):
                self.deps[page_id].add(comp_id)
                self.reverse_deps[comp_id].add(page_id)

        # Components
        for comp in self.design.get('components', []):
            comp_id = comp.get('id', '')
            self.entity_types[comp_id] = 'component'
            self.entity_names[comp_id] = comp.get('name', comp_id)

            for api_id in comp.get('uses_apis', []):
                self.deps[comp_id].add(api_id)
                self.reverse_deps[api_id].add(comp_id)

            for child_id in comp.get('uses_components', []):
                self.deps[comp_id].add(child_id)
                self.reverse_deps[child_id].add(comp_id)

    def _calculate_layers(self):
        """Calculate execution layers using topological sort."""
        # Find all entities with no dependencies (Layer 1)
        all_entities = set(self.entity_types.keys())
        remaining = all_entities.copy()
        assigned = set()

        while remaining:
            # Find entities whose dependencies are all assigned
            layer = []
            for entity_id in remaining:
                deps = self.deps.get(entity_id, set())
                if deps.issubset(assigned):
                    layer.append(entity_id)

            if not layer:
                # Shouldn't happen if no circular deps, but safety check
                print(f"Warning: Could not assign remaining entities: {remaining}", file=sys.stderr)
                break

            self.layers.append(sorted(layer))
            for entity_id in layer:
                remaining.remove(entity_id)
                assigned.add(entity_id)
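
    # Layering sketch: given model_user (no deps), api_get_user -> model_user,
    # and page_profile -> api_get_user, the loop above yields
    #   Layer 1: [model_user]   Layer 2: [api_get_user]   Layer 3: [page_profile]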

    def _build_graph_document(self) -> dict:
        """Build the dependency graph document."""
        # Calculate stats
        max_parallelism = max(len(layer) for layer in self.layers) if self.layers else 0
        critical_path = len(self.layers)

        graph = {
            'dependency_graph': {
                'design_version': self.design.get('revision', 1),
                'workflow_version': self.design.get('workflow_version', 'v001'),
                'generated_at': datetime.now().isoformat(),
                'generator': 'validate_design.py',
                'stats': {
                    'total_entities': len(self.entity_types),
                    'total_layers': len(self.layers),
                    'max_parallelism': max_parallelism,
                    'critical_path_length': critical_path
                }
            },
            'layers': [],
            'dependency_map': {},
            'task_map': []
        }

        # Build layers
        layer_names = {
            1: ("Data Layer", "Database models - no external dependencies"),
            2: ("API Layer", "REST endpoints - depend on models"),
            3: ("UI Layer", "Pages and components - depend on APIs"),
        }

        for i, layer_entities in enumerate(self.layers, 1):
            name, desc = layer_names.get(i, (f"Layer {i}", f"Entities with {i-1} levels of dependencies"))

            layer_items = []
            for entity_id in layer_entities:
                entity_type = self.entity_types.get(entity_id, 'unknown')
                agent = 'backend' if entity_type in ['model', 'api'] else 'frontend'

                layer_items.append({
                    'id': entity_id,
                    'type': entity_type,
                    'name': self.entity_names.get(entity_id, entity_id),
                    'depends_on': list(self.deps.get(entity_id, [])),
                    'task_id': f"task_create_{entity_id}",
                    'agent': agent,
                    'complexity': 'medium'  # Could be calculated
                })

            graph['layers'].append({
                'layer': i,
                'name': name,
                'description': desc,
                'items': layer_items,
                'requires_layers': list(range(1, i)) if i > 1 else [],
                'parallel_count': len(layer_items)
            })

        # Build dependency map
        for entity_id in self.entity_types:
            graph['dependency_map'][entity_id] = {
                'type': self.entity_types.get(entity_id),
                'layer': self._get_layer_number(entity_id),
                'depends_on': list(self.deps.get(entity_id, [])),
                'depended_by': list(self.reverse_deps.get(entity_id, []))
            }

        return graph

    def _get_layer_number(self, entity_id: str) -> int:
        """Get the layer number for an entity."""
        for i, layer in enumerate(self.layers, 1):
            if entity_id in layer:
                return i
        return 0

    def print_layers(self):
        """Print layer visualization."""
        print()
        print("=" * 60)
        print("EXECUTION LAYERS".center(60))
        print("=" * 60)

        for i, layer_entities in enumerate(self.layers, 1):
            print()
            print(f"Layer {i}: ({len(layer_entities)} items - parallel)")
            print("-" * 40)

            for entity_id in layer_entities:
                entity_type = self.entity_types.get(entity_id, '?')
                icon = {'model': '📦', 'api': '🔌', 'page': '📄', 'component': '🧩'}.get(entity_type, '❓')
                deps = self.deps.get(entity_id, set())
                deps_str = f" ← [{', '.join(deps)}]" if deps else ""
                print(f" {icon} {entity_id}{deps_str}")

        print()
        print("=" * 60)


# ============================================================================
# Context Generator
# ============================================================================

class ContextGenerator:
    """Generates context snapshots for tasks."""

    def __init__(self, design_doc: dict, graph: dict, output_dir: str):
        self.design = design_doc
        self.graph = graph
        self.output_dir = output_dir

        # Index design entities by ID for quick lookup
        self.models: Dict[str, dict] = {}
        self.apis: Dict[str, dict] = {}
        self.pages: Dict[str, dict] = {}
        self.components: Dict[str, dict] = {}

        self._index_entities()

    def _index_entities(self):
        """Index all entities by ID."""
        for model in self.design.get('data_models', []):
            self.models[model.get('id', '')] = model
        for api in self.design.get('api_endpoints', []):
            self.apis[api.get('id', '')] = api
        for page in self.design.get('pages', []):
            self.pages[page.get('id', '')] = page
        for comp in self.design.get('components', []):
            self.components[comp.get('id', '')] = comp

    def generate_all_contexts(self):
        """Generate context files for all entities."""
        contexts_dir = Path(self.output_dir) / 'contexts'
        contexts_dir.mkdir(parents=True, exist_ok=True)

        for entity_id, entity_info in self.graph.get('dependency_map', {}).items():
            context = self._generate_context(entity_id, entity_info)
            context_path = contexts_dir / f"{entity_id}.yml"
            save_yaml(str(context_path), context)

        print(f"Generated {len(self.graph.get('dependency_map', {}))} context files in {contexts_dir}")

    def _generate_context(self, entity_id: str, entity_info: dict) -> dict:
        """Generate context for a single entity."""
        entity_type = entity_info.get('type', '')
        deps = entity_info.get('depends_on', [])

        context = {
            'task_id': f"task_create_{entity_id}",
            'entity_id': entity_id,
            'generated_at': datetime.now().isoformat(),
            'workflow_version': self.graph.get('dependency_graph', {}).get('workflow_version', 'v001'),
            'target': {
                'type': entity_type,
                'definition': self._get_entity_definition(entity_id, entity_type)
            },
            'related': {
                'models': [],
                'apis': [],
                'components': []
            },
            'dependencies': {
                'entity_ids': deps,
                'definitions': []
            },
            'files': {
                'to_create': self._get_files_to_create(entity_id, entity_type),
                'reference': []
            },
            'acceptance': self._get_acceptance_criteria(entity_id, entity_type)
        }

        # Add related entity definitions
        for dep_id in deps:
            dep_info = self.graph.get('dependency_map', {}).get(dep_id, {})
            dep_type = dep_info.get('type', '')
            dep_def = self._get_entity_definition(dep_id, dep_type)

            if dep_type == 'model':
                context['related']['models'].append({'id': dep_id, 'definition': dep_def})
            elif dep_type == 'api':
                context['related']['apis'].append({'id': dep_id, 'definition': dep_def})
            elif dep_type == 'component':
                context['related']['components'].append({'id': dep_id, 'definition': dep_def})

            context['dependencies']['definitions'].append({
                'id': dep_id,
                'type': dep_type,
                'definition': dep_def
            })

        return context

    def _get_entity_definition(self, entity_id: str, entity_type: str) -> dict:
        """Get the full definition for an entity."""
        if entity_type == 'model':
            return self.models.get(entity_id, {})
        elif entity_type == 'api':
            return self.apis.get(entity_id, {})
        elif entity_type == 'page':
            return self.pages.get(entity_id, {})
        elif entity_type == 'component':
            return self.components.get(entity_id, {})
        return {}

    def _get_files_to_create(self, entity_id: str, entity_type: str) -> List[str]:
        """Get list of files to create for an entity."""
        if entity_type == 'model':
            name = self.models.get(entity_id, {}).get('name', entity_id)
            return [
                'prisma/schema.prisma',
                f'app/models/{name.lower()}.ts'
            ]
        elif entity_type == 'api':
            path = self.apis.get(entity_id, {}).get('path', '/api/unknown')
            route_path = path.replace('/api/', '').replace(':', '')
            return [f'app/api/{route_path}/route.ts']
        elif entity_type == 'page':
            path = self.pages.get(entity_id, {}).get('path', '/unknown')
            return [f'app{path}/page.tsx']
        elif entity_type == 'component':
            name = self.components.get(entity_id, {}).get('name', 'Unknown')
            return [f'app/components/{name}.tsx']
        return []
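
    # Example mapping (illustrative): an API with path '/api/users/:id' yields
    #   _get_files_to_create('api_get_user', 'api') -> ['app/api/users/id/route.ts']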

    def _get_acceptance_criteria(self, entity_id: str, entity_type: str) -> List[dict]:
        """Get acceptance criteria for an entity."""
        criteria = []

        if entity_type == 'model':
            criteria = [
                {'criterion': 'Model defined in Prisma schema', 'verification': 'Check prisma/schema.prisma'},
                {'criterion': 'TypeScript types exported', 'verification': 'Import type in test file'},
                {'criterion': 'Relations properly configured', 'verification': 'Check Prisma relations'},
            ]
        elif entity_type == 'api':
            api = self.apis.get(entity_id, {})
            method = api.get('method', 'GET')
            path = api.get('path', '/api/unknown')
            criteria = [
                {'criterion': f'{method} {path} returns success response', 'verification': f'curl -X {method} {path}'},
                {'criterion': 'Request validation implemented', 'verification': 'Test with invalid data'},
                {'criterion': 'Error responses match contract', 'verification': 'Test error scenarios'},
            ]
        elif entity_type == 'page':
            page = self.pages.get(entity_id, {})
            path = page.get('path', '/unknown')
            criteria = [
                {'criterion': f'Page renders at {path}', 'verification': f'Navigate to {path}'},
                {'criterion': 'Data fetching works', 'verification': 'Check network tab'},
                {'criterion': 'Components render correctly', 'verification': 'Visual inspection'},
            ]
        elif entity_type == 'component':
            criteria = [
                {'criterion': 'Component renders without errors', 'verification': 'Import and render in test'},
                {'criterion': 'Props are typed correctly', 'verification': 'TypeScript compilation'},
                {'criterion': 'Events fire correctly', 'verification': 'Test event handlers'},
            ]

        return criteria


# ============================================================================
# Task Generator
# ============================================================================

class TaskGenerator:
    """Generates task files with full context."""

    def __init__(self, design_doc: dict, graph: dict, output_dir: str):
        self.design = design_doc
        self.graph = graph
        self.output_dir = output_dir

    def generate_all_tasks(self):
        """Generate task files for all entities."""
        tasks_dir = Path(self.output_dir) / 'tasks'
        tasks_dir.mkdir(parents=True, exist_ok=True)

        task_count = 0
        for layer in self.graph.get('layers', []):
            for item in layer.get('items', []):
                task = self._generate_task(item, layer.get('layer', 1))
                task_path = tasks_dir / f"{task['id']}.yml"
                save_yaml(str(task_path), task)
                task_count += 1

        print(f"Generated {task_count} task files in {tasks_dir}")

    def _generate_task(self, item: dict, layer_num: int) -> dict:
        """Generate a task for an entity."""
        entity_id = item.get('id', '')
        workflow_version = self.graph.get('dependency_graph', {}).get('workflow_version', 'v001')

        task = {
            'id': item.get('task_id', f'task_create_{entity_id}'),
            'type': 'create',
            'title': f"Create {item.get('name', entity_id)}",
            'agent': item.get('agent', 'backend'),
            'entity_id': entity_id,
            'entity_ids': [entity_id],
            'status': 'pending',
            'layer': layer_num,
            'parallel_group': f"layer_{layer_num}",
            'complexity': item.get('complexity', 'medium'),
            'dependencies': [f"task_create_{dep}" for dep in item.get('depends_on', [])],
            'context': {
                'design_version': self.graph.get('dependency_graph', {}).get('design_version', 1),
                'workflow_version': workflow_version,
                # Use the graph's version rather than a hardcoded 'v001'
                'context_snapshot_path': f".workflow/versions/{workflow_version}/contexts/{entity_id}.yml"
            },
            'created_at': datetime.now().isoformat()
        }

        return task


# ============================================================================
# Main CLI
# ============================================================================

def main():
    parser = argparse.ArgumentParser(description="Validate design document and generate dependency graph")
    parser.add_argument('design_file', help='Path to design_document.yml')
    parser.add_argument('--output-dir', '-o', default='.workflow/versions/v001',
                        help='Output directory for generated files')
    parser.add_argument('--validate-only', '-v', action='store_true',
                        help='Only validate, do not generate files')
    parser.add_argument('--quiet', '-q', action='store_true',
                        help='Suppress output except errors')
    parser.add_argument('--json', action='store_true',
                        help='Output validation result as JSON')

    args = parser.parse_args()

    # Load design document
    design = load_yaml(args.design_file)
    if not design:
        print(f"Error: Could not load design document: {args.design_file}", file=sys.stderr)
        sys.exit(1)

    # Validate
    validator = DesignValidator(design)
    is_valid = validator.validate()

    if args.json:
        result = {
            'valid': is_valid,
            'errors': [str(e) for e in validator.errors],
            'warnings': [str(w) for w in validator.warnings],
            'stats': {
                'models': len(validator.model_ids),
                'apis': len(validator.api_ids),
                'pages': len(validator.page_ids),
                'components': len(validator.component_ids)
            }
        }
        print(json.dumps(result, indent=2))
        sys.exit(0 if is_valid else 1)

    if not args.quiet:
        validator.print_report()

    if not is_valid:
        sys.exit(1)

    if args.validate_only:
        sys.exit(0)

    # Generate dependency graph
    generator = DependencyGraphGenerator(design)
    graph = generator.generate()

    if not args.quiet:
        generator.print_layers()

    # Save dependency graph
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    graph_path = output_dir / 'dependency_graph.yml'
    save_yaml(str(graph_path), graph)
    print(f"Saved dependency graph to: {graph_path}")

    # Generate context files
    context_gen = ContextGenerator(design, graph, str(output_dir))
    context_gen.generate_all_contexts()

    # Generate task files
    task_gen = TaskGenerator(design, graph, str(output_dir))
    task_gen.generate_all_tasks()

    print()
    print("✅ Design validation and generation complete!")
    print(f"   Output directory: {output_dir}")


if __name__ == "__main__":
    main()
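
# Example invocations (sketch):
#   python3 validate_design.py design_document.yml --output-dir .workflow/versions/v001
#   python3 validate_design.py design_document.yml --validate-only --json
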
@ -0,0 +1,129 @@
#!/usr/bin/env python3
"""Validate project manifest integrity."""

import argparse
import json
import os
import sys


def validate_structure(manifest: dict) -> list:
    """Validate manifest has required structure."""
    errors = []
    required_keys = ["project", "state", "entities", "dependencies"]

    for key in required_keys:
        if key not in manifest:
            errors.append(f"Missing required key: {key}")

    if "entities" in manifest:
        entity_types = ["pages", "components", "api_endpoints", "database_tables"]
        for etype in entity_types:
            if etype not in manifest["entities"]:
                errors.append(f"Missing entity type: {etype}")

    return errors
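
# Minimal manifest shape these checks expect (sketch):
#   {
#     "project": {...}, "state": {...}, "dependencies": {...},
#     "entities": {"pages": [], "components": [], "api_endpoints": [], "database_tables": []}
#   }

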
def validate_pages(pages: list) -> list:
    """Validate page entities."""
    errors = []
    for page in pages:
        if "id" not in page:
            errors.append(f"Page missing id: {page}")
        if "path" not in page:
            errors.append(f"Page {page.get('id', 'unknown')} missing path")
        if "file_path" not in page:
            errors.append(f"Page {page.get('id', 'unknown')} missing file_path")
    return errors


def validate_components(components: list) -> list:
    """Validate component entities."""
    errors = []
    for comp in components:
        if "id" not in comp:
            errors.append(f"Component missing id: {comp}")
        if "name" not in comp:
            errors.append(f"Component {comp.get('id', 'unknown')} missing name")
        if "file_path" not in comp:
            errors.append(f"Component {comp.get('id', 'unknown')} missing file_path")
    return errors


def validate_apis(apis: list) -> list:
    """Validate API endpoint entities."""
    errors = []
    for api in apis:
        if "id" not in api:
            errors.append(f"API missing id: {api}")
        if "method" not in api:
            errors.append(f"API {api.get('id', 'unknown')} missing method")
        if "path" not in api:
            errors.append(f"API {api.get('id', 'unknown')} missing path")
    return errors


def validate_tables(tables: list) -> list:
    """Validate database table entities."""
    errors = []
    for table in tables:
        if "id" not in table:
            errors.append(f"Table missing id: {table}")
        if "name" not in table:
            errors.append(f"Table {table.get('id', 'unknown')} missing name")
        if "columns" not in table:
            errors.append(f"Table {table.get('id', 'unknown')} missing columns")
    return errors


def main():
    parser = argparse.ArgumentParser(description="Validate project manifest")
    parser.add_argument("--strict", action="store_true", help="Treat warnings as errors")
    parser.add_argument("--manifest", default="project_manifest.json", help="Path to manifest")
    args = parser.parse_args()

    manifest_path = args.manifest
    if not os.path.isabs(manifest_path):
        manifest_path = os.path.join(os.getcwd(), manifest_path)

    if not os.path.exists(manifest_path):
        print(f"Error: Manifest not found at {manifest_path}", file=sys.stderr)
        return 1

    with open(manifest_path) as f:
        manifest = json.load(f)

    errors = []
    warnings = []

    # Structure validation
    errors.extend(validate_structure(manifest))

    if "entities" in manifest:
        errors.extend(validate_pages(manifest["entities"].get("pages", [])))
        errors.extend(validate_components(manifest["entities"].get("components", [])))
        errors.extend(validate_apis(manifest["entities"].get("api_endpoints", [])))
        errors.extend(validate_tables(manifest["entities"].get("database_tables", [])))

    # Report results
    if errors:
        print("VALIDATION FAILED")
        for error in errors:
            print(f"  ERROR: {error}")
        return 1

    if warnings:
        print("VALIDATION PASSED WITH WARNINGS")
        for warning in warnings:
            print(f"  WARNING: {warning}")
        if args.strict:
            return 1
        return 0

    print("VALIDATION PASSED")
    return 0


if __name__ == "__main__":
    sys.exit(main())

@ -0,0 +1,478 @@
#!/usr/bin/env python3
"""
Workflow enforcement hook for Claude Code.
Validates that operations comply with the current workflow phase.

When blocked, instructs the AI to run /workflow:spawn to start a proper workflow.

Exit codes:
  0 = Operation allowed
  1 = Operation blocked (with message)
"""

import argparse
import json
import os
import sys
from pathlib import Path
from typing import Optional

# Try to import yaml
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    if not os.path.exists(filepath):
        return {}

    with open(filepath, 'r') as f:
        content = f.read()

    if not content.strip():
        return {}

    if HAS_YAML:
        return yaml.safe_load(content) or {}

    # Simple fallback parser
    result = {}
    current_list = None
    for line in content.split('\n'):
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue
        # Handle list items
        if stripped.startswith('- '):
            if current_list is not None:
                value = stripped[2:].strip()
                if (value.startswith('"') and value.endswith('"')) or \
                   (value.startswith("'") and value.endswith("'")):
                    value = value[1:-1]
                current_list.append(value)
            continue
        if ':' in stripped:
            key, _, value = stripped.partition(':')
            key = key.strip()
            value = value.strip()
            if value == '' or value == '[]':
                result[key] = []
                current_list = result[key]
            elif value == 'null' or value == '~':
                result[key] = None
                current_list = None
            elif value == 'true':
                result[key] = True
                current_list = None
            elif value == 'false':
                result[key] = False
                current_list = None
            elif value.isdigit():
                result[key] = int(value)
                current_list = None
            else:
                if (value.startswith('"') and value.endswith('"')) or \
                   (value.startswith("'") and value.endswith("'")):
                    value = value[1:-1]
                result[key] = value
                current_list = None
    return result
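
# The fallback parser above handles only flat "key: value" pairs and simple
# top-level lists, e.g.:
#   active_version: v001
#   completed_tasks:
#     - "task_a"
# Nested mappings are not supported; install PyYAML for full documents.

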
def get_current_phase() -> str:
    """Get current workflow phase from version session."""
    workflow_dir = Path('.workflow')
    current_path = workflow_dir / 'current.yml'

    if not current_path.exists():
        return 'NO_WORKFLOW'

    current = load_yaml(str(current_path))
    active_version = current.get('active_version')

    if not active_version:
        return 'NO_WORKFLOW'

    session_path = workflow_dir / 'versions' / active_version / 'session.yml'
    if not session_path.exists():
        return 'NO_WORKFLOW'

    session = load_yaml(str(session_path))
    return session.get('current_phase', 'UNKNOWN')
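
# Expected on-disk layout, inferred from the lookups above (sketch):
#   .workflow/current.yml                  -> active_version: v001
#   .workflow/versions/v001/session.yml    -> current_phase: IMPLEMENTING, feature: ...

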
def get_active_version() -> Optional[str]:
    """Get active workflow version."""
    workflow_dir = Path('.workflow')
    current_path = workflow_dir / 'current.yml'

    if not current_path.exists():
        return None

    current = load_yaml(str(current_path))
    return current.get('active_version')


def get_workflow_feature() -> Optional[str]:
    """Get the feature name from the current workflow."""
    workflow_dir = Path('.workflow')
    current_path = workflow_dir / 'current.yml'

    if not current_path.exists():
        return None

    current = load_yaml(str(current_path))
    active_version = current.get('active_version')

    if not active_version:
        return None

    session_path = workflow_dir / 'versions' / active_version / 'session.yml'
    if not session_path.exists():
        return None

    session = load_yaml(str(session_path))
    return session.get('feature', 'unknown feature')


def count_task_files(version: str) -> int:
    """Count task files in the version directory."""
    tasks_dir = Path('.workflow') / 'versions' / version / 'tasks'
    if not tasks_dir.exists():
        return 0
    return len(list(tasks_dir.glob('task_*.yml')))


def extract_feature_from_file(file_path: str) -> str:
    """Extract a feature description from the file path."""
    # Convert path to a human-readable feature description
    parts = Path(file_path).parts

    # Remove common prefixes
    skip = {'src', 'app', 'lib', 'components', 'pages', 'api', 'utils', 'hooks'}
    meaningful = [p for p in parts if p not in skip and not p.startswith('.')]

    if meaningful:
        # Get the file name without extension
        name = Path(file_path).stem
        return f"update {name}"

    return f"modify {file_path}"
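
# Examples (illustrative):
#   extract_feature_from_file('src/components/UserCard.tsx') -> "update UserCard"
#   extract_feature_from_file('.env')                        -> "modify .env"

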
def validate_task_spawn(tool_input: dict) -> tuple[bool, str]:
    """
    Validate Task tool spawning for workflow compliance.
    """
    phase = get_current_phase()
    prompt = tool_input.get('prompt', '')
    subagent_type = tool_input.get('subagent_type', '')

    agent_type = subagent_type.lower()

    # Check architect agent
    if 'system-architect' in agent_type or 'ARCHITECT AGENT' in prompt.upper():
        if phase not in ['DESIGNING', 'NO_WORKFLOW']:
            return False, f"""
⛔ WORKFLOW VIOLATION: Cannot spawn Architect agent

Current Phase: {phase}
Required Phase: DESIGNING

The Architect agent can only be spawned during the DESIGNING phase.

👉 REQUIRED ACTION: Run /workflow:status to check current state.
"""

    # Check frontend agent
    if 'frontend' in agent_type or 'FRONTEND AGENT' in prompt.upper():
        if phase not in ['IMPLEMENTING', 'IMPL_REJECTED']:
            return False, f"""
⛔ WORKFLOW VIOLATION: Cannot spawn Frontend agent

Current Phase: {phase}
Required Phase: IMPLEMENTING

👉 REQUIRED ACTION: Complete the design phase first, then run /workflow:approve
"""

        # Implementation agents need design tasks to work from
        version = get_active_version()
        if version and count_task_files(version) == 0:
            return False, f"""
⛔ WORKFLOW VIOLATION: No task files found

Cannot start implementation without design tasks.

👉 REQUIRED ACTION: Ensure Architect agent created task files in:
   .workflow/versions/{version}/tasks/
"""

    # Check backend agent
    if 'backend' in agent_type or 'BACKEND AGENT' in prompt.upper():
        if phase not in ['IMPLEMENTING', 'IMPL_REJECTED']:
            return False, f"""
⛔ WORKFLOW VIOLATION: Cannot spawn Backend agent

Current Phase: {phase}
Required Phase: IMPLEMENTING

👉 REQUIRED ACTION: Complete the design phase first, then run /workflow:approve
"""

    # Check reviewer agent
    if 'quality' in agent_type or 'REVIEWER AGENT' in prompt.upper():
        if phase not in ['REVIEWING', 'AWAITING_IMPL_APPROVAL']:
            return False, f"""
⛔ WORKFLOW VIOLATION: Cannot spawn Reviewer agent

Current Phase: {phase}
Required Phase: REVIEWING

👉 REQUIRED ACTION: Complete implementation first.
"""

    # Check security agent
    if 'security' in agent_type or 'SECURITY AGENT' in prompt.upper():
        if phase not in ['SECURITY_REVIEW', 'REVIEWING']:
            return False, f"""
⛔ WORKFLOW VIOLATION: Cannot spawn Security agent

Current Phase: {phase}
Required Phase: SECURITY_REVIEW

👉 REQUIRED ACTION: Complete code review first, then security review runs.
"""

    return True, ""


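# Phase gating summary (from the checks above):
#   system-architect -> DESIGNING (or NO_WORKFLOW)
#   frontend/backend -> IMPLEMENTING, IMPL_REJECTED (frontend also requires task files)
#   quality reviewer -> REVIEWING, AWAITING_IMPL_APPROVAL
#   security         -> SECURITY_REVIEW, REVIEWING

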
def validate_write_operation(tool_input: dict) -> tuple[bool, str]:
    """
    Validate Write/Edit operations for workflow compliance.
    """
    phase = get_current_phase()
    file_path = tool_input.get('file_path', tool_input.get('path', ''))

    if not file_path:
        return True, ""

    # Normalize path
    try:
        abs_file_path = str(Path(file_path).resolve())
        project_dir = str(Path.cwd().resolve())

        if abs_file_path.startswith(project_dir):
            rel_path = abs_file_path[len(project_dir):].lstrip('/')
        else:
            rel_path = file_path
    except Exception:  # fall back to the raw path on resolution errors
        rel_path = file_path

    # Always allow these
    always_allowed = [
        'project_manifest.json',
        '.workflow/',
        'skills/',
        '.claude/',
        'CLAUDE.md',
        'package.json',
        'package-lock.json',
        'docs/',        # Documentation generation (/eureka:index, /eureka:landing)
        'claudedocs/',  # Claude-specific documentation
        'public/',      # Public assets (landing pages, images)
    ]

    for allowed in always_allowed:
        if rel_path.startswith(allowed) or rel_path == allowed.rstrip('/'):
            return True, ""

    # Extract feature suggestion from file path
    suggested_feature = extract_feature_from_file(rel_path)

    # NO_WORKFLOW - Must start a workflow first
    if phase == 'NO_WORKFLOW':
        return False, f"""
⛔ WORKFLOW REQUIRED: No active workflow

You are trying to modify: {rel_path}

This project uses guardrail workflows. You cannot directly edit files.

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Start a workflow first!                       ║
║                                                                    ║
║  Run this command:                                                 ║
║     /workflow:spawn {suggested_feature}                            ║
║                                                                    ║
║  This will:                                                        ║
║   1. Create a design for your changes                              ║
║   2. Get approval                                                  ║
║   3. Then allow you to implement                                   ║
╚══════════════════════════════════════════════════════════════════╝
"""

    # DESIGNING phase - can't write implementation files
    if phase == 'DESIGNING':
        return False, f"""
⛔ WORKFLOW VIOLATION: Cannot write implementation files during DESIGNING

Current Phase: DESIGNING
File: {rel_path}

During DESIGNING phase, only these files can be modified:
  - project_manifest.json
  - .workflow/versions/*/tasks/*.yml

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Complete design and get approval              ║
║                                                                    ║
║  1. Finish adding entities to project_manifest.json                ║
║  2. Create task files in .workflow/versions/*/tasks/               ║
║  3. Run: /workflow:approve                                         ║
╚══════════════════════════════════════════════════════════════════╝
"""

    # REVIEWING phase - read only
    if phase == 'REVIEWING':
        return False, f"""
⛔ WORKFLOW VIOLATION: Cannot write files during REVIEWING

Current Phase: REVIEWING
File: {rel_path}

During REVIEWING phase, files are READ-ONLY.

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Complete the review                           ║
║                                                                    ║
║  If changes are needed:                                            ║
║   - Run: /workflow:reject "reason for changes"                     ║
║   - This returns to IMPLEMENTING phase                             ║
║                                                                    ║
║  If review passes:                                                 ║
║   - Run: /workflow:approve                                         ║
╚══════════════════════════════════════════════════════════════════╝
"""

    # SECURITY_REVIEW phase - read only
    if phase == 'SECURITY_REVIEW':
        return False, f"""
⛔ WORKFLOW VIOLATION: Cannot write files during SECURITY_REVIEW

Current Phase: SECURITY_REVIEW
File: {rel_path}

During SECURITY_REVIEW phase, files are READ-ONLY.
Security scan is running to check for vulnerabilities.

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Wait for security scan to complete            ║
║                                                                    ║
║  If security issues found:                                         ║
║   - Workflow returns to IMPLEMENTING phase to fix issues           ║
║                                                                    ║
║  If security passes:                                               ║
║   - Workflow proceeds to AWAITING_IMPL_APPROVAL                    ║
║                                                                    ║
║  For full audit: /workflow:security --full                         ║
╚══════════════════════════════════════════════════════════════════╝
"""

    # AWAITING approval phases
    if phase in ['AWAITING_DESIGN_APPROVAL', 'AWAITING_IMPL_APPROVAL']:
        gate_type = "design" if "DESIGN" in phase else "implementation"
        return False, f"""
⛔ WORKFLOW VIOLATION: Cannot write files while awaiting approval

Current Phase: {phase}
File: {rel_path}

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Get user approval                             ║
║                                                                    ║
║  Waiting for {gate_type} approval. Ask the user to run:            ║
║   - /workflow:approve (to proceed)                                 ║
║   - /workflow:reject (to revise)                                   ║
╚══════════════════════════════════════════════════════════════════╝
"""

    # COMPLETED - need new workflow
    if phase == 'COMPLETED':
        return False, f"""
⛔ WORKFLOW VIOLATION: Workflow already completed

Current Phase: COMPLETED
File: {rel_path}

This workflow version is complete.

╔══════════════════════════════════════════════════════════════════╗
║  👉 REQUIRED ACTION: Start a new workflow                          ║
║                                                                    ║
║  Run: /workflow:spawn {suggested_feature}                          ║
╚══════════════════════════════════════════════════════════════════╝
"""

    return True, ""


def validate_transition(tool_input: dict) -> tuple[bool, str]:
|
||||||
|
"""Validate phase transitions for proper sequencing."""
|
||||||
|
return True, ""
|
||||||
|
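# NOTE: validate_transition is currently a pass-through stub: every phase
# transition is accepted. If sequencing rules are needed later (for example,
# DESIGNING may only advance to AWAITING_DESIGN_APPROVAL), they belong here
# so that --operation transition calls are actually enforced.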

def main():
    parser = argparse.ArgumentParser(description="Workflow enforcement hook")
    parser.add_argument('--operation', required=True,
                        choices=['task', 'write', 'edit', 'transition', 'build'],
                        help='Operation type being validated')
    parser.add_argument('--input', help='JSON input from tool call')
    parser.add_argument('--file', help='File path (for write/edit operations)')

    args = parser.parse_args()

    # Parse input
    tool_input = {}
    if args.input:
        try:
            tool_input = json.loads(args.input)
        except json.JSONDecodeError:
            tool_input = {'raw': args.input}

    if args.file:
        tool_input['file_path'] = args.file

    # Route to appropriate validator
    allowed = True
    message = ""

    if args.operation == 'task':
        allowed, message = validate_task_spawn(tool_input)

    elif args.operation in ['write', 'edit']:
        allowed, message = validate_write_operation(tool_input)

    elif args.operation == 'transition':
        allowed, message = validate_transition(tool_input)

    elif args.operation == 'build':
        phase = get_current_phase()
        print(f"BUILD: Running in phase {phase}")
        allowed = True

    # Output result
    if not allowed:
        print(message, file=sys.stderr)
        sys.exit(1)
    else:
        phase = get_current_phase()
        version = get_active_version() or 'N/A'
        print(f"✓ WORKFLOW: {args.operation.upper()} allowed in {phase} (v{version})")
        sys.exit(0)


if __name__ == "__main__":
    main()
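The script above is meant to be driven as a subprocess by an editor or agent hook: exit code 0 means the operation may proceed, exit code 1 means it is blocked and stderr carries the violation banner. A minimal caller might look like the sketch below; the hook path is an assumption, while the flags match the argparse definition above.

```python
import subprocess

HOOK = ".claude/hooks/enforce_workflow.py"  # hypothetical install location

def write_allowed(file_path: str) -> bool:
    """Return True when the enforcement hook permits writing file_path."""
    result = subprocess.run(
        ["python3", HOOK, "--operation", "write", "--file", file_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stderr)  # the ⛔ WORKFLOW VIOLATION banner
    return result.returncode == 0
```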
@ -0,0 +1,282 @@
#!/usr/bin/env python3
"""
Pre-write validation hook for guardrail enforcement.

Validates that file writes are allowed based on:
1. Current workflow phase
2. Manifest-defined allowed paths
3. Always-allowed system paths

Exit codes:
0 = Write allowed
1 = Write blocked (with error message)
"""

import argparse
import json
import os
import sys
from pathlib import Path


# Always allowed paths (relative to project root)
ALWAYS_ALLOWED_PATTERNS = [
    "project_manifest.json",
    ".workflow/",
    ".claude/",
    "skills/",
    "CLAUDE.md",
    "package.json",
    "package-lock.json",
    "tsconfig.json",
    ".gitignore",
    ".env.local",
    ".env.example",
    "docs/",        # Documentation generation (/eureka:index, /eureka:landing)
    "claudedocs/",  # Claude-specific documentation
    "public/",      # Public assets (landing pages, images)
]


def load_manifest(manifest_path: str) -> dict | None:
    """Load manifest if it exists."""
    if not os.path.exists(manifest_path):
        return None
    try:
        with open(manifest_path) as f:
            return json.load(f)
    except (json.JSONDecodeError, IOError):
        return None


def normalize_path(file_path: str, project_dir: str) -> str:
    """Normalize file path to relative path from project root."""
    try:
        abs_path = Path(file_path).resolve()
        proj_path = Path(project_dir).resolve()

        # Make relative if under project
        if str(abs_path).startswith(str(proj_path)):
            return str(abs_path.relative_to(proj_path))
        return str(abs_path)
    except (ValueError, OSError):
        return file_path


def is_always_allowed(rel_path: str) -> bool:
    """Check if path is in always-allowed list."""
    for pattern in ALWAYS_ALLOWED_PATTERNS:
        if pattern.endswith('/'):
            # Directory pattern
            if rel_path.startswith(pattern) or rel_path == pattern.rstrip('/'):
                return True
        else:
            # Exact file match
            if rel_path == pattern:
                return True
    return False


def get_allowed_paths_from_manifest(manifest: dict) -> set:
    """Extract all allowed file paths from manifest entities."""
    allowed = set()

    entities = manifest.get("entities", {})
    entity_types = ["pages", "components", "api_endpoints", "database_tables",
                    "services", "utils", "hooks", "types"]

    for entity_type in entity_types:
        for entity in entities.get(entity_type, []):
            status = entity.get("status", "")
            # Allow APPROVED, IMPLEMENTED, or PENDING (for design phase updates)
            if status in ["APPROVED", "IMPLEMENTED", "PENDING", "IN_PROGRESS"]:
                if "file_path" in entity:
                    allowed.add(entity["file_path"])
                # Also check for multiple file paths
                if "file_paths" in entity:
                    for fp in entity.get("file_paths", []):
                        allowed.add(fp)

    return allowed


def get_allowed_paths_from_tasks(project_dir: str) -> set:
    """Extract allowed file paths from task files in active workflow version."""
    allowed = set()

    # Try to import yaml
    try:
        import yaml
        has_yaml = True
    except ImportError:
        has_yaml = False

    # Find active version
    current_path = Path(project_dir) / ".workflow" / "current.yml"
    if not current_path.exists():
        return allowed

    try:
        with open(current_path) as f:
            content = f.read()

        if has_yaml:
            current = yaml.safe_load(content) or {}
        else:
            # Simple fallback parser
            current = {}
            for line in content.split('\n'):
                if ':' in line and not line.startswith(' '):
                    key, _, value = line.partition(':')
                    current[key.strip()] = value.strip()

        active_version = current.get('active_version')
        if not active_version:
            return allowed

        # Read task files
        tasks_dir = Path(project_dir) / ".workflow" / "versions" / active_version / "tasks"
        if not tasks_dir.exists():
            return allowed

        for task_file in tasks_dir.glob("*.yml"):
            try:
                with open(task_file) as f:
                    task_content = f.read()

                if has_yaml:
                    task = yaml.safe_load(task_content) or {}
                    file_paths = task.get('file_paths', [])
                    for fp in file_paths:
                        allowed.add(fp)
                else:
                    # Simple extraction for file_paths
                    in_file_paths = False
                    for line in task_content.split('\n'):
                        if line.strip().startswith('file_paths:'):
                            in_file_paths = True
                            continue
                        if in_file_paths:
                            if line.strip().startswith('- '):
                                fp = line.strip()[2:].strip()
                                allowed.add(fp)
                            elif not line.startswith(' '):
                                in_file_paths = False
            except Exception:
                continue

    except Exception:
        pass

    return allowed


def validate_write(file_path: str, manifest_path: str) -> tuple[bool, str]:
    """
    Validate if a write operation is allowed.

    Returns:
        (allowed: bool, message: str)
    """
    project_dir = os.path.dirname(manifest_path) or os.getcwd()
    rel_path = normalize_path(file_path, project_dir)

    # Check always-allowed paths first
    if is_always_allowed(rel_path):
        return True, f"✓ GUARDRAIL: Always-allowed path: {rel_path}"

    # Load manifest
    manifest = load_manifest(manifest_path)

    # If no manifest exists, guardrails not active
    if manifest is None:
        return True, "✓ GUARDRAIL: No manifest found, allowing write"

    # Get current phase
    phase = manifest.get("state", {}).get("current_phase", "UNKNOWN")

    # Collect all allowed paths
    allowed_from_manifest = get_allowed_paths_from_manifest(manifest)
    allowed_from_tasks = get_allowed_paths_from_tasks(project_dir)
    all_allowed = allowed_from_manifest | allowed_from_tasks

    # Check if file is in allowed paths
    if rel_path in all_allowed:
        return True, f"✓ GUARDRAIL: Allowed in manifest/tasks: {rel_path}"

    # Also check with leading ./ removed (removeprefix, not lstrip, so that
    # dotfiles and paths starting with those characters are not mangled)
    clean_path = rel_path.removeprefix('./')
    if clean_path in all_allowed:
        return True, f"✓ GUARDRAIL: Allowed in manifest/tasks: {clean_path}"

    # Check if any allowed path matches (handle path variations)
    for allowed_path in all_allowed:
        allowed_clean = allowed_path.removeprefix('./')
        if clean_path == allowed_clean:
            return True, f"✓ GUARDRAIL: Allowed (path match): {rel_path}"

    # Extract suggested feature from file path
    name = Path(rel_path).stem
    suggested_feature = f"update {name}"

    # Not allowed - generate helpful error message with actionable instructions
    error_msg = f"""
⛔ GUARDRAIL VIOLATION: Unauthorized file write

File: {rel_path}
Phase: {phase}

This file is not in the approved manifest or task files.

Allowed paths from manifest: {len(allowed_from_manifest)}
Allowed paths from tasks: {len(allowed_from_tasks)}

╔══════════════════════════════════════════════════════════════════╗
║ 👉 REQUIRED ACTION: Start a workflow to modify this file ║
║ ║
║ Run this command: ║
║ /workflow:spawn {suggested_feature} ║
║ ║
║ This will: ║
║ 1. Design what changes are needed ║
║ 2. Add this file to approved paths ║
║ 3. Get approval, then implement ║
╚══════════════════════════════════════════════════════════════════╝

Alternative: If workflow exists, add this file to:
- project_manifest.json (entities.*.file_path)
- .workflow/versions/*/tasks/*.yml (file_paths list)
"""

    return False, error_msg


def main():
    parser = argparse.ArgumentParser(description="Validate write operation against guardrails")
    parser.add_argument("--manifest", required=True, help="Path to project_manifest.json")
    parser.add_argument("--file", help="File path being written")
    args = parser.parse_args()

    # Get file path from argument or environment
    file_path = args.file or os.environ.get('TOOL_INPUT_FILE_PATH', '')

    if not file_path:
        # Try reading from stdin
        if not sys.stdin.isatty():
            file_path = sys.stdin.read().strip()

    if not file_path:
        print("✓ GUARDRAIL: No file path provided, allowing (hook misconfiguration?)")
        return 0

    allowed, message = validate_write(file_path, args.manifest)

    if allowed:
        print(message)
        return 0
    else:
        print(message, file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(main())
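For reference, this is how the two matching layers above behave; a minimal sketch assuming the script is importable as `validate_write_hook` (the module name is an assumption, the function names come from the script).

```python
from validate_write_hook import is_always_allowed, validate_write  # assumed module name

# Always-allowed layer: directory patterns and exact file matches.
assert is_always_allowed("docs/architecture.md")    # matches the "docs/" pattern
assert is_always_allowed("project_manifest.json")   # exact file match
assert not is_always_allowed("src/app/page.tsx")    # must be granted by manifest/tasks

# Manifest/tasks layer: returns the ✓ or ⛔ message the hook prints.
allowed, message = validate_write("src/app/page.tsx", "project_manifest.json")
print(allowed, message)
```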
@ -0,0 +1,237 @@
#!/usr/bin/env python3
"""
Static analysis for async/await issues in TypeScript/JavaScript.

Catches common mistakes:
- fetch() without await
- .json() without await
- Async function calls without await
- Floating promises (promise not handled)
"""

import argparse
import json
import re
import sys
from pathlib import Path
from typing import Dict, List

# ============================================================================
# Async Pattern Detection
# ============================================================================

ASYNC_ISSUES = [
    # fetch without await - but allow .then() chains
    {
        "pattern": r"(?<!await\s)(?<!return\s)fetch\s*\([^)]*\)(?!\s*\.then)(?!\s*\))",
        "severity": "HIGH",
        "message": "fetch() without await or .then()",
        "fix": "Add 'await' before fetch() or use .then().catch()"
    },
    # .json() without await
    {
        "pattern": r"(?<!await\s)\.json\s*\(\s*\)(?!\s*\.then)",
        "severity": "HIGH",
        "message": ".json() without await",
        "fix": "Add 'await' before .json() call"
    },
    # .text() without await
    {
        "pattern": r"(?<!await\s)\.text\s*\(\s*\)(?!\s*\.then)",
        "severity": "MEDIUM",
        "message": ".text() without await",
        "fix": "Add 'await' before .text() call"
    },
    # axios/fetch response access without await
    {
        "pattern": r"(?<!await\s)(axios\.(get|post|put|delete|patch))\s*\([^)]*\)\.data",
        "severity": "HIGH",
        "message": "Accessing .data on unawaited axios call",
        "fix": "Add 'await' or use (await axios.get(...)).data"
    },
    # Promise.all without await
    {
        "pattern": r"(?<!await\s)(?<!return\s)Promise\.(all|allSettled|race|any)\s*\(",
        "severity": "HIGH",
        "message": "Promise.all/race without await",
        "fix": "Add 'await' before Promise.all()"
    },
    # Async function call patterns (common API functions)
    {
        "pattern": r"(?<!await\s)(?<!return\s)(createUser|updateUser|deleteUser|getUser|saveData|loadData|fetchData|submitForm|handleSubmit)\s*\([^)]*\)\s*;",
        "severity": "MEDIUM",
        "message": "Async function call may need await",
        "fix": "Check if this function is async and add 'await' if needed"
    },
    # setState with async value without await
    {
        "pattern": r"set\w+\s*\(\s*(?:await\s+)?fetch\s*\(",
        "severity": "HIGH",
        "message": "Setting state with fetch result - ensure await is used",
        "fix": "Use: const data = await fetch(...); setData(data)"
    },
    # useEffect with async but no await
    {
        "pattern": r"useEffect\s*\(\s*\(\s*\)\s*=>\s*\{[^}]*fetch\s*\([^}]*\}\s*,",
        "severity": "MEDIUM",
        "message": "useEffect with fetch - check async handling",
        "fix": "Create inner async function: useEffect(() => { const load = async () => {...}; load(); }, [])"
    },
]
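# Examples of lines the rules above flag (illustrative TypeScript):
#   const res = fetch('/api/users');      -> HIGH: fetch() without await or .then()
#   const data = res.json();              -> HIGH: .json() without await
#   Promise.all([a(), b()]);              -> HIGH: Promise.all/race without await
# Lines whose surrounding context already contains both .then( and .catch(
# are skipped by the context check in analyze_file below.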

# Files/patterns to skip
SKIP_PATTERNS = [
    r"node_modules",
    r"\.next",
    r"dist",
    r"build",
    r"\.test\.",
    r"\.spec\.",
    r"__tests__",
    r"__mocks__",
]


def should_skip_file(file_path: str) -> bool:
    """Check if file should be skipped."""
    for pattern in SKIP_PATTERNS:
        if re.search(pattern, file_path):
            return True
    return False


def analyze_file(file_path: Path) -> List[Dict]:
    """Analyze a single file for async issues."""
    issues = []

    try:
        content = file_path.read_text(encoding='utf-8')
    except Exception:
        return issues

    lines = content.split('\n')

    for line_num, line in enumerate(lines, 1):
        # Skip comments
        stripped = line.strip()
        if stripped.startswith('//') or stripped.startswith('*'):
            continue

        for rule in ASYNC_ISSUES:
            if re.search(rule["pattern"], line):
                # Additional context check - skip if line has .then or .catch nearby
                context = '\n'.join(lines[max(0, line_num - 2):min(len(lines), line_num + 2)])
                if '.then(' in context and '.catch(' in context:
                    continue

                issues.append({
                    "file": str(file_path),
                    "line": line_num,
                    "severity": rule["severity"],
                    "message": rule["message"],
                    "fix": rule["fix"],
                    "code": line.strip()[:80]
                })

    return issues


def analyze_project(root_dir: str = ".") -> List[Dict]:
    """Analyze all TypeScript/JavaScript files in project."""
    all_issues = []

    extensions = [".ts", ".tsx", ".js", ".jsx"]
    root = Path(root_dir)

    for ext in extensions:
        for file_path in root.rglob(f"*{ext}"):
            if should_skip_file(str(file_path)):
                continue
            issues = analyze_file(file_path)
            all_issues.extend(issues)

    return all_issues


# ============================================================================
# Output Formatting
# ============================================================================

def format_text(issues: List[Dict]) -> str:
    """Format issues as readable text."""
    if not issues:
        return "\n✅ No async/await issues found.\n"

    lines = []
    lines.append("")
    lines.append("╔" + "═" * 70 + "╗")
    lines.append("║" + " ASYNC/AWAIT VERIFICATION".ljust(70) + "║")
    lines.append("╠" + "═" * 70 + "╣")

    high = [i for i in issues if i["severity"] == "HIGH"]
    medium = [i for i in issues if i["severity"] == "MEDIUM"]

    lines.append("║" + f" 🔴 High: {len(high)} issues".ljust(70) + "║")
    lines.append("║" + f" 🟡 Medium: {len(medium)} issues".ljust(70) + "║")

    if high:
        lines.append("╠" + "═" * 70 + "╣")
        lines.append("║" + " 🔴 HIGH SEVERITY".ljust(70) + "║")
        for issue in high:
            loc = f"{issue['file']}:{issue['line']}"
            lines.append("║" + f" {loc}".ljust(70) + "║")
            lines.append("║" + f" ❌ {issue['message']}".ljust(70) + "║")
            lines.append("║" + f" 💡 {issue['fix']}".ljust(70) + "║")
            code = issue['code'][:55]
            lines.append("║" + f" 📝 {code}".ljust(70) + "║")

    if medium:
        lines.append("╠" + "═" * 70 + "╣")
        lines.append("║" + " 🟡 MEDIUM SEVERITY".ljust(70) + "║")
        for issue in medium[:5]:  # Limit display
            loc = f"{issue['file']}:{issue['line']}"
            lines.append("║" + f" {loc}".ljust(70) + "║")
            lines.append("║" + f" ⚠️ {issue['message']}".ljust(70) + "║")

    lines.append("╚" + "═" * 70 + "╝")

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Check for async/await issues")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    parser.add_argument("--path", default=".", help="Project path to analyze")
    parser.add_argument("--strict", action="store_true", help="Fail on any issue")

    args = parser.parse_args()

    print("Scanning for async/await issues...")
    issues = analyze_project(args.path)

    if args.json:
        print(json.dumps({
            "issues": issues,
            "summary": {
                "high": len([i for i in issues if i["severity"] == "HIGH"]),
                "medium": len([i for i in issues if i["severity"] == "MEDIUM"]),
                "total": len(issues)
            }
        }, indent=2))
    else:
        print(format_text(issues))

    # Exit codes
    high_count = len([i for i in issues if i["severity"] == "HIGH"])

    if high_count > 0:
        sys.exit(1)  # High severity issues found
    elif args.strict and issues:
        sys.exit(1)  # Any issues in strict mode
    else:
        sys.exit(0)


if __name__ == "__main__":
    main()
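The `--json` mode emits the structure below (reconstructed from `main()` above; the issue values are illustrative). Exit status follows the code above: 1 when any HIGH issue is found, 1 for any issue under `--strict`, otherwise 0.

```python
example_output = {
    "issues": [
        {
            "file": "src/app/page.tsx",  # illustrative
            "line": 42,
            "severity": "HIGH",
            "message": "fetch() without await or .then()",
            "fix": "Add 'await' before fetch() or use .then().catch()",
            "code": "const res = fetch('/api/users');",
        }
    ],
    "summary": {"high": 1, "medium": 0, "total": 1},
}
```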
@ -0,0 +1,376 @@
#!/usr/bin/env python3
"""Verify implementation matches manifest specifications."""

import argparse
import json
import os
import re
import sys
from dataclasses import dataclass
from typing import Optional


@dataclass
class VerificationResult:
    entity_id: str
    entity_type: str
    file_path: str
    exists: bool
    issues: list[str]
    warnings: list[str]


def load_manifest(manifest_path: str) -> dict:
    """Load manifest from file."""
    with open(manifest_path) as f:
        return json.load(f)


def check_file_exists(project_root: str, file_path: str) -> bool:
    """Check if implementation file exists."""
    full_path = os.path.join(project_root, file_path)
    return os.path.exists(full_path)


def read_file_content(project_root: str, file_path: str) -> Optional[str]:
    """Read file content if it exists."""
    full_path = os.path.join(project_root, file_path)
    if not os.path.exists(full_path):
        return None
    with open(full_path, 'r') as f:
        return f.read()


def verify_component(project_root: str, component: dict) -> VerificationResult:
    """Verify a component implementation matches manifest."""
    issues = []
    warnings = []
    file_path = component.get("file_path", "")

    exists = check_file_exists(project_root, file_path)

    if not exists:
        issues.append(f"File not found: {file_path}")
        return VerificationResult(
            entity_id=component.get("id", "unknown"),
            entity_type="component",
            file_path=file_path,
            exists=False,
            issues=issues,
            warnings=warnings
        )

    content = read_file_content(project_root, file_path)
    if not content:
        issues.append("Could not read file content")
        return VerificationResult(
            entity_id=component.get("id", "unknown"),
            entity_type="component",
            file_path=file_path,
            exists=True,
            issues=issues,
            warnings=warnings
        )

    # Check component name exists
    name = component.get("name", "")
    if name:
        # Check for function/const declaration or export
        patterns = [
            rf"export\s+(const|function)\s+{name}",
            rf"(const|function)\s+{name}",
            rf"export\s+\{{\s*{name}\s*\}}",
        ]
        found = any(re.search(p, content) for p in patterns)
        if not found:
            issues.append(f"Component '{name}' not found in file")

    # Check props interface
    props = component.get("props", {})
    if props:
        # Check if props interface exists
        interface_pattern = rf"interface\s+{name}Props"
        if not re.search(interface_pattern, content):
            warnings.append(f"Props interface '{name}Props' not found")

        # Check each prop exists in the file
        for prop_name in props:
            if prop_name not in content:
                warnings.append(f"Prop '{prop_name}' may not be implemented")

    return VerificationResult(
        entity_id=component.get("id", "unknown"),
        entity_type="component",
        file_path=file_path,
        exists=True,
        issues=issues,
        warnings=warnings
    )
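# Example: given a manifest entity like
#   {"id": "comp_header", "name": "Header",
#    "file_path": "components/Header.tsx",
#    "props": {"title": {"type": "string"}}}
# verify_component reports an issue if components/Header.tsx is missing or
# never declares/exports Header, and warnings if "interface HeaderProps" or
# the prop name "title" never appear in the file. (Values are illustrative.)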

def verify_page(project_root: str, page: dict) -> VerificationResult:
    """Verify a page implementation matches manifest."""
    issues = []
    warnings = []
    file_path = page.get("file_path", "")

    exists = check_file_exists(project_root, file_path)

    if not exists:
        issues.append(f"File not found: {file_path}")
        return VerificationResult(
            entity_id=page.get("id", "unknown"),
            entity_type="page",
            file_path=file_path,
            exists=False,
            issues=issues,
            warnings=warnings
        )

    content = read_file_content(project_root, file_path)
    if not content:
        issues.append("Could not read file content")
        return VerificationResult(
            entity_id=page.get("id", "unknown"),
            entity_type="page",
            file_path=file_path,
            exists=True,
            issues=issues,
            warnings=warnings
        )

    # Check for default export (Next.js page requirement)
    if "export default" not in content:
        issues.append("Missing 'export default' (required for Next.js pages)")

    # Check component dependencies
    components = page.get("components", [])
    for comp_id in components:
        # Extract component name from ID (e.g., comp_header -> Header)
        comp_name = comp_id.replace("comp_", "").title().replace("_", "")
        if comp_name not in content:
            warnings.append(f"Component '{comp_name}' (from {comp_id}) may not be used")

    return VerificationResult(
        entity_id=page.get("id", "unknown"),
        entity_type="page",
        file_path=file_path,
        exists=True,
        issues=issues,
        warnings=warnings
    )


def verify_api_endpoint(project_root: str, endpoint: dict) -> VerificationResult:
    """Verify an API endpoint implementation matches manifest."""
    issues = []
    warnings = []
    file_path = endpoint.get("file_path", "")

    exists = check_file_exists(project_root, file_path)

    if not exists:
        issues.append(f"File not found: {file_path}")
        return VerificationResult(
            entity_id=endpoint.get("id", "unknown"),
            entity_type="api_endpoint",
            file_path=file_path,
            exists=False,
            issues=issues,
            warnings=warnings
        )

    content = read_file_content(project_root, file_path)
    if not content:
        issues.append("Could not read file content")
        return VerificationResult(
            entity_id=endpoint.get("id", "unknown"),
            entity_type="api_endpoint",
            file_path=file_path,
            exists=True,
            issues=issues,
            warnings=warnings
        )

    # Check HTTP method handler exists
    method = endpoint.get("method", "").upper()
    method_patterns = [
        rf"export\s+async\s+function\s+{method}\s*\(",
        rf"export\s+function\s+{method}\s*\(",
        rf"export\s+const\s+{method}\s*=",
    ]
    found = any(re.search(p, content) for p in method_patterns)
    if not found:
        issues.append(f"HTTP method handler '{method}' not found")

    # Check request body params if defined
    request = endpoint.get("request", {})
    if request.get("body"):
        for param in request["body"].keys():
            if param not in content:
                warnings.append(f"Request param '{param}' may not be handled")

    return VerificationResult(
        entity_id=endpoint.get("id", "unknown"),
        entity_type="api_endpoint",
        file_path=file_path,
        exists=True,
        issues=issues,
        warnings=warnings
    )


def verify_database_table(project_root: str, table: dict) -> VerificationResult:
    """Verify a database table implementation matches manifest."""
    issues = []
    warnings = []
    file_path = table.get("file_path", "")

    exists = check_file_exists(project_root, file_path)

    if not exists:
        issues.append(f"File not found: {file_path}")
        return VerificationResult(
            entity_id=table.get("id", "unknown"),
            entity_type="database_table",
            file_path=file_path,
            exists=False,
            issues=issues,
            warnings=warnings
        )

    content = read_file_content(project_root, file_path)
    if not content:
        issues.append("Could not read file content")
        return VerificationResult(
            entity_id=table.get("id", "unknown"),
            entity_type="database_table",
            file_path=file_path,
            exists=True,
            issues=issues,
            warnings=warnings
        )

    # Check columns/fields are defined
    columns = table.get("columns", {})
    for col_name in columns.keys():
        if col_name not in content:
            warnings.append(f"Column '{col_name}' may not be defined")

    # Check for CRUD operations
    crud_ops = ["create", "get", "update", "delete", "find", "all"]
    found_ops = [op for op in crud_ops if op.lower() in content.lower()]
    if len(found_ops) < 2:
        warnings.append("May be missing CRUD operations")

    return VerificationResult(
        entity_id=table.get("id", "unknown"),
        entity_type="database_table",
        file_path=file_path,
        exists=True,
        issues=issues,
        warnings=warnings
    )


def print_result(result: VerificationResult, verbose: bool = False):
    """Print verification result."""
    status = "✅" if result.exists and not result.issues else "❌"
    print(f"{status} [{result.entity_type}] {result.entity_id}")
    print(f"   File: {result.file_path}")

    if result.issues:
        for issue in result.issues:
            print(f"   ❌ ERROR: {issue}")

    if verbose and result.warnings:
        for warning in result.warnings:
            print(f"   ⚠️ WARN: {warning}")


def main():
    parser = argparse.ArgumentParser(description="Verify implementation against manifest")
    parser.add_argument("--manifest", default="project_manifest.json", help="Path to manifest")
    parser.add_argument("--project-root", default=".", help="Project root directory")
    parser.add_argument("--verbose", "-v", action="store_true", help="Show warnings")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    manifest_path = args.manifest
    if not os.path.isabs(manifest_path):
        manifest_path = os.path.join(args.project_root, manifest_path)

    if not os.path.exists(manifest_path):
        print(f"Error: Manifest not found at {manifest_path}")
        return 1

    manifest = load_manifest(manifest_path)
    entities = manifest.get("entities", {})

    results = []

    # Verify components
    for component in entities.get("components", []):
        result = verify_component(args.project_root, component)
        results.append(result)

    # Verify pages
    for page in entities.get("pages", []):
        result = verify_page(args.project_root, page)
        results.append(result)

    # Verify API endpoints
    for endpoint in entities.get("api_endpoints", []):
        result = verify_api_endpoint(args.project_root, endpoint)
        results.append(result)

    # Verify database tables
    for table in entities.get("database_tables", []):
        result = verify_database_table(args.project_root, table)
        results.append(result)

    # Compute summary counts up front so both output modes can use them
    passed = sum(1 for r in results if r.exists and not r.issues)
    failed = sum(1 for r in results if not r.exists or r.issues)
    warnings = sum(len(r.warnings) for r in results)

    # Output results
    if args.json:
        output = {
            "total": len(results),
            "passed": passed,
            "failed": failed,
            "results": [
                {
                    "entity_id": r.entity_id,
                    "entity_type": r.entity_type,
                    "file_path": r.file_path,
                    "exists": r.exists,
                    "issues": r.issues,
                    "warnings": r.warnings,
                }
                for r in results
            ]
        }
        print(json.dumps(output, indent=2))
    else:
        print("\n" + "=" * 60)
        print("IMPLEMENTATION VERIFICATION REPORT")
        print("=" * 60 + "\n")

        for result in results:
            print_result(result, args.verbose)
            print()

        # Summary
        print("=" * 60)
        print(f"SUMMARY: {passed}/{len(results)} passed, {failed} failed, {warnings} warnings")
        print("=" * 60)

    if failed > 0:
        return 1

    return 0


if __name__ == "__main__":
    sys.exit(main())
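The handler check in `verify_api_endpoint` is a plain regex match against the route file. A small self-contained illustration follows; the file content and method are made up:

```python
import re

content = "export async function POST(request: Request) { /* ... */ }"
method = "POST"
patterns = [
    rf"export\s+async\s+function\s+{method}\s*\(",
    rf"export\s+function\s+{method}\s*\(",
    rf"export\s+const\s+{method}\s*=",
]
# A match on any pattern means the handler exists and no issue is recorded.
assert any(re.search(p, content) for p in patterns)
```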
@ -0,0 +1,986 @@
#!/usr/bin/env python3
"""
Workflow versioning system with task session tracking.
Links workflow sessions with task sessions and individual operations.
"""

from __future__ import annotations

import argparse
import hashlib
import json
import os
import shutil
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional

# Try to import yaml
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


# ============================================================================
# YAML/JSON Helpers
# ============================================================================

def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    if not os.path.exists(filepath):
        return {}
    with open(filepath, 'r') as f:
        content = f.read()
    if not content.strip():
        return {}
    if HAS_YAML:
        return yaml.safe_load(content) or {}
    # Simple YAML fallback parser for basic key: value structures
    return parse_simple_yaml(content)


def parse_simple_yaml(content: str) -> dict:
    """Parse simple YAML without PyYAML dependency."""
    result = {}
    current_list = None

    for line in content.split('\n'):
        stripped = line.strip()

        # Skip empty lines and comments
        if not stripped or stripped.startswith('#'):
            continue

        # Handle list items
        if stripped.startswith('- '):
            if current_list is not None:
                value = stripped[2:].strip()
                # Handle quoted strings
                if (value.startswith('"') and value.endswith('"')) or \
                   (value.startswith("'") and value.endswith("'")):
                    value = value[1:-1]
                current_list.append(value)
            continue

        # Handle key: value
        if ':' in stripped:
            key, _, value = stripped.partition(':')
            key = key.strip()
            value = value.strip()

            # Check if this is a list start
            if value == '' or value == '[]':
                current_list = []
                result[key] = current_list
            elif value == '{}':
                result[key] = {}
                current_list = None
            elif value == 'null' or value == '~':
                result[key] = None
                current_list = None
            elif value == 'true':
                result[key] = True
                current_list = None
            elif value == 'false':
                result[key] = False
                current_list = None
            elif value.isdigit():
                result[key] = int(value)
                current_list = None
            else:
                # Handle quoted strings
                if (value.startswith('"') and value.endswith('"')) or \
                   (value.startswith("'") and value.endswith("'")):
                    value = value[1:-1]
                result[key] = value
                current_list = None

    return result
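# Example of what the fallback parser handles (flat keys and simple lists):
#   parse_simple_yaml("active_version: v001\nfile_paths:\n  - src/a.ts")
#   -> {'active_version': 'v001', 'file_paths': ['src/a.ts']}
# Nested mappings and multi-line values are out of scope; PyYAML is used
# for those whenever it is installed.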

def save_yaml(filepath: str, data: dict):
    """Save data to YAML file."""
    os.makedirs(os.path.dirname(filepath), exist_ok=True)
    if HAS_YAML:
        with open(filepath, 'w') as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
    else:
        with open(filepath, 'w') as f:
            json.dump(data, f, indent=2)


def file_hash(filepath: str) -> Optional[str]:
    """Get SHA256 hash of file content."""
    if not os.path.exists(filepath):
        return None
    with open(filepath, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()[:16]


# ============================================================================
# Path Helpers
# ============================================================================

def get_workflow_dir() -> Path:
    return Path('.workflow')


def get_versions_dir() -> Path:
    return get_workflow_dir() / 'versions'


def get_index_path() -> Path:
    return get_workflow_dir() / 'index.yml'


def get_operations_log_path() -> Path:
    return get_workflow_dir() / 'operations.log'


def get_version_dir(version: str) -> Path:
    return get_versions_dir() / version


def get_current_state_path() -> Path:
    return get_workflow_dir() / 'current.yml'


def get_version_tasks_dir(version: str) -> Path:
    """Get the tasks directory for a specific version."""
    return get_version_dir(version) / 'tasks'


def get_current_tasks_dir() -> Optional[Path]:
    """Get the tasks directory for the currently active version."""
    current_path = get_current_state_path()
    if not current_path.exists():
        return None
    current = load_yaml(str(current_path))
    version = current.get('active_version')
    if not version:
        return None
    tasks_dir = get_version_tasks_dir(version)
    tasks_dir.mkdir(parents=True, exist_ok=True)
    return tasks_dir


# ============================================================================
# Version Index Management
# ============================================================================

def load_index() -> dict:
    """Load or create version index."""
    index_path = get_index_path()
    if index_path.exists():
        return load_yaml(str(index_path))
    return {
        'versions': [],
        'latest_version': None,
        'total_versions': 0
    }


def save_index(index: dict):
    """Save version index."""
    save_yaml(str(get_index_path()), index)


def get_next_version() -> str:
    """Get next version number."""
    index = load_index()
    return f"v{index['total_versions'] + 1:03d}"


# ============================================================================
# Workflow Session Management
# ============================================================================

def create_workflow_session(feature: str, parent_version: Optional[str] = None) -> dict:
    """Create a new workflow session with version tracking."""
    now = datetime.now()
    version = get_next_version()
    session_id = f"workflow_{now.strftime('%Y%m%d_%H%M%S')}"

    # Create version directory and tasks subdirectory
    version_dir = get_version_dir(version)
    version_dir.mkdir(parents=True, exist_ok=True)
    (version_dir / 'tasks').mkdir(exist_ok=True)

    # Create workflow session
    session = {
        'version': version,
        'feature': feature,
        'session_id': session_id,
        'parent_version': parent_version,
        'status': 'pending',
        'started_at': now.isoformat(),
        'completed_at': None,
        'current_phase': 'INITIALIZING',
        'approvals': {
            'design': {
                'status': 'pending',
                'approved_by': None,
                'approved_at': None,
                'rejection_reason': None
            },
            'implementation': {
                'status': 'pending',
                'approved_by': None,
                'approved_at': None,
                'rejection_reason': None
            }
        },
        'task_sessions': [],
        'summary': {
            'total_tasks': 0,
            'tasks_completed': 0,
            'entities_created': 0,
            'entities_updated': 0,
            'entities_deleted': 0,
            'files_created': 0,
            'files_updated': 0,
            'files_deleted': 0
        }
    }

    # Save session to version directory
    save_yaml(str(version_dir / 'session.yml'), session)

    # Update current state pointer
    get_workflow_dir().mkdir(exist_ok=True)
    save_yaml(str(get_current_state_path()), {
        'active_version': version,
        'session_id': session_id
    })

    # Update index
    index = load_index()
    index['versions'].append({
        'version': version,
        'feature': feature,
        'status': 'pending',
        'started_at': now.isoformat(),
        'completed_at': None,
        'tasks_count': 0,
        'operations_count': 0
    })
    index['latest_version'] = version
    index['total_versions'] += 1
    save_index(index)

    # Take snapshot of current state (manifest, tasks)
    take_snapshot(version, 'before')

    return session
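# After create_workflow_session("add login page") the on-disk layout is
# (feature name and version number illustrative):
#   .workflow/
#     current.yml          <- {active_version: v001, session_id: workflow_...}
#     index.yml            <- version index maintained by load_index/save_index
#     versions/v001/
#       session.yml        <- the session dict built above
#       tasks/             <- task definitions for this version
#       snapshot_before/   <- manifest + tasks snapshot via take_snapshot()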

def load_current_session() -> Optional[dict]:
    """Load the current active workflow session."""
    current_path = get_current_state_path()
    if not current_path.exists():
        return None

    current = load_yaml(str(current_path))
    version = current.get('active_version')
    if not version:
        return None

    session_path = get_version_dir(version) / 'session.yml'
    if not session_path.exists():
        return None

    return load_yaml(str(session_path))


def save_current_session(session: dict):
    """Save the current workflow session."""
    version = session['version']
    session['updated_at'] = datetime.now().isoformat()
    save_yaml(str(get_version_dir(version) / 'session.yml'), session)

    # Update index
    index = load_index()
    for v in index['versions']:
        if v['version'] == version:
            v['status'] = session['status']
            v['tasks_count'] = session['summary']['total_tasks']
            break
    save_index(index)


def complete_workflow_session(session: dict):
    """Mark workflow session as completed."""
    now = datetime.now()
    session['status'] = 'completed'
    session['completed_at'] = now.isoformat()
    save_current_session(session)

    # Take final snapshot
    take_snapshot(session['version'], 'after')

    # Update index
    index = load_index()
    for v in index['versions']:
        if v['version'] == session['version']:
            v['status'] = 'completed'
            v['completed_at'] = now.isoformat()
            break
    save_index(index)

    # Clear current pointer
    current_path = get_current_state_path()
    if current_path.exists():
        current_path.unlink()


# ============================================================================
# Task Session Management
# ============================================================================

def create_task_session(workflow_session: dict, task_id: str, task_type: str, agent: str) -> dict:
    """Create a new task session with full directory structure."""
    now = datetime.now()
    session_id = f"tasksession_{task_id}_{now.strftime('%Y%m%d_%H%M%S')}"

    # Create task session DIRECTORY (not file)
    version_dir = get_version_dir(workflow_session['version'])
    task_session_dir = version_dir / 'task_sessions' / task_id
    task_session_dir.mkdir(parents=True, exist_ok=True)

    task_session = {
        'session_id': session_id,
        'workflow_version': workflow_session['version'],
        'task_id': task_id,
        'task_type': task_type,
        'agent': agent,
        'started_at': now.isoformat(),
        'completed_at': None,
        'duration_ms': None,
        'status': 'in_progress',
        'operations': [],
        'review_session': None,
        'errors': [],
        'attempt_number': 1,
        'previous_attempts': []
    }

    # Save session.yml
    save_yaml(str(task_session_dir / 'session.yml'), task_session)

    # Snapshot task definition
    snapshot_task_definition(task_id, task_session_dir)

    # Initialize operations.log
    init_operations_log(task_session_dir, task_id, now)

    # Link to workflow
    workflow_session['task_sessions'].append(session_id)
    workflow_session['summary']['total_tasks'] += 1
    save_current_session(workflow_session)

    return task_session


def snapshot_task_definition(task_id: str, task_session_dir: Path):
    """Snapshot the task definition at execution time."""
    task_file = Path('tasks') / f'{task_id}.yml'

    if task_file.exists():
        task_data = load_yaml(str(task_file))
        task_data['snapshotted_at'] = datetime.now().isoformat()
        task_data['source_path'] = str(task_file)
        task_data['status_at_snapshot'] = task_data.get('status', 'unknown')
        save_yaml(str(task_session_dir / 'task.yml'), task_data)


def init_operations_log(task_session_dir: Path, task_id: str, start_time: datetime):
    """Initialize the operations log file."""
    log_path = task_session_dir / 'operations.log'
    header = f"# Operations Log for {task_id}\n"
    header += f"# Started: {start_time.isoformat()}\n"
    header += "# Format: [timestamp] OPERATION target_type: target_id (path)\n"
    header += "=" * 70 + "\n\n"
    with open(log_path, 'w') as f:
        f.write(header)


def log_to_task_operations_log(task_session: dict, operation: dict):
    """Append operation to task-specific operations log."""
    version = task_session['workflow_version']
    task_id = task_session['task_id']
    log_path = get_version_dir(version) / 'task_sessions' / task_id / 'operations.log'

    if not log_path.exists():
        return

    entry = (
        f"[{operation['performed_at']}] "
        f"{operation['type']} {operation['target_type']}: {operation['target_id']}"
    )
    if operation.get('target_path'):
        entry += f" ({operation['target_path']})"
    entry += f"\n  Summary: {operation['changes']['diff_summary']}\n"

    with open(log_path, 'a') as f:
        f.write(entry + "\n")


def load_task_session(version: str, task_id: str) -> Optional[dict]:
    """Load a task session from directory or flat file (backwards compatible)."""
    # Try new directory structure first
    session_dir = get_version_dir(version) / 'task_sessions' / task_id
    session_path = session_dir / 'session.yml'

    if session_path.exists():
        return load_yaml(str(session_path))

    # Fallback to old flat file structure
    old_path = get_version_dir(version) / 'task_sessions' / f'{task_id}.yml'
    if old_path.exists():
        return load_yaml(str(old_path))

    return None


def save_task_session(task_session: dict):
    """Save a task session to directory structure."""
    version = task_session['workflow_version']
    task_id = task_session['task_id']
    session_dir = get_version_dir(version) / 'task_sessions' / task_id
    session_dir.mkdir(parents=True, exist_ok=True)
    save_yaml(str(session_dir / 'session.yml'), task_session)


def complete_task_session(task_session: dict, status: str = 'completed'):
    """Mark task session as completed."""
    now = datetime.now()
    started = datetime.fromisoformat(task_session['started_at'])
    task_session['completed_at'] = now.isoformat()
    task_session['duration_ms'] = int((now - started).total_seconds() * 1000)
    task_session['status'] = status
    save_task_session(task_session)

    # Update workflow summary
    session = load_current_session()
    if session and status == 'completed':
        session['summary']['tasks_completed'] += 1
        save_current_session(session)


# ============================================================================
# Operation Logging
# ============================================================================

def log_operation(
    task_session: dict,
    op_type: str,      # CREATE, UPDATE, DELETE, RENAME, MOVE
    target_type: str,  # file, entity, task, manifest
    target_id: str,
    target_path: Optional[str] = None,
    before_state: Optional[str] = None,
    after_state: Optional[str] = None,
    diff_summary: Optional[str] = None,
    rollback_data: Optional[dict] = None
) -> dict:
    """Log an operation within a task session."""
    now = datetime.now()
    seq = len(task_session['operations']) + 1
    op_id = f"op_{now.strftime('%Y%m%d_%H%M%S')}_{seq:03d}"

    operation = {
        'id': op_id,
        'type': op_type,
        'target_type': target_type,
        'target_id': target_id,
        'target_path': target_path,
        'changes': {
            'before': before_state,
            'after': after_state,
            'diff_summary': diff_summary or f"{op_type} {target_type}: {target_id}"
        },
        'performed_at': now.isoformat(),
        'reversible': rollback_data is not None,
        'rollback_data': rollback_data
    }

    task_session['operations'].append(operation)
    save_task_session(task_session)

    # Update workflow summary
    session = load_current_session()
    if session:
        if op_type == 'CREATE':
            if target_type == 'file':
                session['summary']['files_created'] += 1
            elif target_type == 'entity':
                session['summary']['entities_created'] += 1
        elif op_type == 'UPDATE':
            if target_type == 'file':
                session['summary']['files_updated'] += 1
            elif target_type == 'entity':
                session['summary']['entities_updated'] += 1
        elif op_type == 'DELETE':
            if target_type == 'file':
                session['summary']['files_deleted'] += 1
            elif target_type == 'entity':
                session['summary']['entities_deleted'] += 1
        save_current_session(session)

    # Also log to operations log
    log_to_file(operation, task_session)

    # Also log to task-specific operations log
    log_to_task_operations_log(task_session, operation)

    # Update index operations count
    index = load_index()
    for v in index['versions']:
        if v['version'] == task_session['workflow_version']:
            v['operations_count'] = v.get('operations_count', 0) + 1
            break
    save_index(index)

    return operation
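# A logged operation record looks like this (values illustrative):
#   {'id': 'op_20250101_120000_001', 'type': 'CREATE', 'target_type': 'file',
#    'target_id': 'comp_header', 'target_path': 'components/Header.tsx',
#    'changes': {'before': None, 'after': None,
#                'diff_summary': 'CREATE file: comp_header'},
#    'performed_at': '2025-01-01T12:00:00', 'reversible': False,
#    'rollback_data': None}
# Each record is appended to the task session, mirrored into both operations
# logs, and bumps the matching counter in the workflow summary.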
|
||||||
|
|
||||||
|
def log_to_file(operation: dict, task_session: dict):
    """Append operation to global operations log."""
    log_path = get_operations_log_path()
    log_entry = (
        f"[{operation['performed_at']}] "
        f"v{task_session['workflow_version']} | "
        f"{task_session['task_id']} | "
        f"{operation['type']} {operation['target_type']}: {operation['target_id']}"
    )
    if operation['target_path']:
        log_entry += f" ({operation['target_path']})"
    log_entry += "\n"

    with open(log_path, 'a') as f:
        f.write(log_entry)


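# For reference, log_to_file appends one line per operation, e.g.
# (all values illustrative):
#   [2025-01-01T12:00:00] v001 | task_001 | CREATE file: comp_button (src/components/Button.tsx)

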
# ============================================================================
# Review Session Management
# ============================================================================

def create_review_session(task_session: dict, reviewer: str = 'reviewer') -> dict:
    """Create a review session for a task."""
    now = datetime.now()
    session_id = f"review_{task_session['task_id']}_{now.strftime('%Y%m%d_%H%M%S')}"

    review = {
        'session_id': session_id,
        'task_session_id': task_session['session_id'],
        'workflow_version': task_session['workflow_version'],
        'reviewer': reviewer,
        'started_at': now.isoformat(),
        'completed_at': None,
        'decision': None,
        'checks': {
            'file_exists': None,
            'manifest_compliance': None,
            'code_quality': None,
            'lint': None,
            'build': None,
            'tests': None
        },
        'notes': '',
        'issues_found': [],
        'suggestions': []
    }

    task_session['review_session'] = review
    save_task_session(task_session)

    return review


def complete_review_session(
    task_session: dict,
    decision: str,
    checks: dict,
    notes: str = '',
    issues: Optional[list] = None,
    suggestions: Optional[list] = None
):
    """Complete a review session."""
    now = datetime.now()
    review = task_session['review_session']
    review['completed_at'] = now.isoformat()
    review['decision'] = decision
    review['checks'].update(checks)
    review['notes'] = notes
    review['issues_found'] = issues or []
    review['suggestions'] = suggestions or []

    save_task_session(task_session)


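# Typical review lifecycle (a sketch; both calls persist via save_task_session,
# and the decision/check values shown are illustrative):
#
#   review = create_review_session(task_session, reviewer='reviewer')
#   complete_review_session(
#       task_session,
#       decision='approved',
#       checks={'lint': True, 'build': True},
#       notes='Looks good',
#   )

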
# ============================================================================
# Snapshots
# ============================================================================

def take_snapshot(version: str, snapshot_type: str):
    """Take a snapshot of current state (before/after)."""
    snapshot_dir = get_version_dir(version) / f'snapshot_{snapshot_type}'
    snapshot_dir.mkdir(exist_ok=True)

    # Snapshot manifest
    if os.path.exists('project_manifest.json'):
        shutil.copy('project_manifest.json', snapshot_dir / 'manifest.json')

    # Snapshot tasks directory
    if os.path.exists('tasks'):
        tasks_snapshot = snapshot_dir / 'tasks'
        if tasks_snapshot.exists():
            shutil.rmtree(tasks_snapshot)
        shutil.copytree('tasks', tasks_snapshot)


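# Illustrative calls (commented out): bracket a workflow run with snapshots.
# The target directories live under whatever get_version_dir(version) returns.
#   take_snapshot('v001', 'before')   # -> <version dir>/snapshot_before/
#   take_snapshot('v001', 'after')    # -> <version dir>/snapshot_after/

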
# ============================================================================
# History & Diff
# ============================================================================

def list_versions() -> list:
    """List all workflow versions."""
    index = load_index()
    return index['versions']


def get_version_details(version: str) -> Optional[dict]:
    """Get detailed info about a version."""
    session_path = get_version_dir(version) / 'session.yml'
    if not session_path.exists():
        return None
    return load_yaml(str(session_path))


def get_changelog(version: str) -> Optional[dict]:
    """Generate changelog for a version."""
    session = get_version_details(version)
    if not session:
        return None

    changelog = {
        'version': version,
        'feature': session['feature'],
        'status': session['status'],
        'started_at': session['started_at'],
        'completed_at': session['completed_at'],
        'operations': {
            'created': [],
            'updated': [],
            'deleted': []
        },
        'summary': session['summary']
    }

    # Collect operations from all task sessions
    tasks_dir = get_version_dir(version) / 'task_sessions'
    if tasks_dir.exists():
        for task_file in tasks_dir.glob('*.yml'):
            task = load_yaml(str(task_file))
            for op in task.get('operations', []):
                entry = {
                    'type': op['target_type'],
                    'id': op['target_id'],
                    'path': op['target_path'],
                    'task': task['task_id'],
                    'agent': task['agent']
                }
                if op['type'] == 'CREATE':
                    changelog['operations']['created'].append(entry)
                elif op['type'] == 'UPDATE':
                    changelog['operations']['updated'].append(entry)
                elif op['type'] == 'DELETE':
                    changelog['operations']['deleted'].append(entry)

    return changelog


def diff_versions(version1: str, version2: str) -> Optional[dict]:
    """Compare two versions."""
    v1 = get_version_details(version1)
    v2 = get_version_details(version2)

    if not v1 or not v2:
        return None

    return {
        'from_version': version1,
        'to_version': version2,
        'from_feature': v1['feature'],
        'to_feature': v2['feature'],
        'summary_diff': {
            'entities_created': v2['summary']['entities_created'] - v1['summary']['entities_created'],
            'entities_updated': v2['summary']['entities_updated'] - v1['summary']['entities_updated'],
            'files_created': v2['summary']['files_created'] - v1['summary']['files_created'],
            'files_updated': v2['summary']['files_updated'] - v1['summary']['files_updated']
        }
    }


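# Shape of the dict diff_versions returns (values illustrative):
#   {
#     'from_version': 'v001', 'to_version': 'v002',
#     'from_feature': '...', 'to_feature': '...',
#     'summary_diff': {'entities_created': 2, 'entities_updated': 1,
#                      'files_created': 3, 'files_updated': 4}
#   }

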
# ============================================================================
# Display Functions
# ============================================================================

def show_history():
    """Display version history."""
    versions = list_versions()

    print()
    print("╔" + "═" * 70 + "╗")
    print("║" + "WORKFLOW VERSION HISTORY".center(70) + "║")
    print("╠" + "═" * 70 + "╣")

    if not versions:
        print("║" + " No workflow versions found.".ljust(70) + "║")
    else:
        for v in versions:
            status_icon = "✅" if v['status'] == 'completed' else "🔄" if v['status'] == 'in_progress' else "⏳"
            line1 = f" {status_icon} {v['version']}: {v['feature'][:45]}"
            print("║" + line1.ljust(70) + "║")
            line2 = f" Started: {v['started_at'][:19]} | Tasks: {v['tasks_count']} | Ops: {v.get('operations_count', 0)}"
            print("║" + line2.ljust(70) + "║")
            print("║" + "─" * 70 + "║")

    print("╚" + "═" * 70 + "╝")


def show_changelog(version: str):
    """Display changelog for a version."""
    changelog = get_changelog(version)
    if not changelog:
        print(f"Version {version} not found.")
        return

    print()
    print("╔" + "═" * 70 + "╗")
    print("║" + f"CHANGELOG: {version}".center(70) + "║")
    print("╠" + "═" * 70 + "╣")
    print("║" + f" Feature: {changelog['feature'][:55]}".ljust(70) + "║")
    print("║" + f" Status: {changelog['status']}".ljust(70) + "║")
    print("╠" + "═" * 70 + "╣")

    ops = changelog['operations']
    print("║" + " CREATED".ljust(70) + "║")
    for item in ops['created']:
        print("║" + f" + [{item['type']}] {item['id']}".ljust(70) + "║")
        if item['path']:
            print("║" + f" {item['path']}".ljust(70) + "║")

    print("║" + " UPDATED".ljust(70) + "║")
    for item in ops['updated']:
        print("║" + f" ~ [{item['type']}] {item['id']}".ljust(70) + "║")

    print("║" + " DELETED".ljust(70) + "║")
    for item in ops['deleted']:
        print("║" + f" - [{item['type']}] {item['id']}".ljust(70) + "║")

    print("╠" + "═" * 70 + "╣")
    s = changelog['summary']
    print("║" + " SUMMARY".ljust(70) + "║")
    print("║" + f" Entities: +{s['entities_created']} ~{s['entities_updated']} -{s['entities_deleted']}".ljust(70) + "║")
    print("║" + f" Files: +{s['files_created']} ~{s['files_updated']} -{s['files_deleted']}".ljust(70) + "║")
    print("╚" + "═" * 70 + "╝")


def show_current():
    """Show current active workflow."""
    session = load_current_session()
    if not session:
        print("No active workflow.")
        print("Start one with: /workflow:spawn 'feature name'")
        return

    print()
    print("╔" + "═" * 70 + "╗")
    print("║" + "CURRENT WORKFLOW SESSION".center(70) + "║")
    print("╠" + "═" * 70 + "╣")
    print("║" + f" Version: {session['version']}".ljust(70) + "║")
    print("║" + f" Feature: {session['feature'][:55]}".ljust(70) + "║")
    print("║" + f" Phase: {session['current_phase']}".ljust(70) + "║")
    print("║" + f" Status: {session['status']}".ljust(70) + "║")
    print("╠" + "═" * 70 + "╣")
    print("║" + " APPROVALS".ljust(70) + "║")
    d = session['approvals']['design']
    i = session['approvals']['implementation']
    d_icon = "✅" if d['status'] == 'approved' else "❌" if d['status'] == 'rejected' else "⏳"
    i_icon = "✅" if i['status'] == 'approved' else "❌" if i['status'] == 'rejected' else "⏳"
    print("║" + f" {d_icon} Design: {d['status']}".ljust(70) + "║")
    print("║" + f" {i_icon} Implementation: {i['status']}".ljust(70) + "║")
    print("╠" + "═" * 70 + "╣")
    s = session['summary']
    print("║" + " PROGRESS".ljust(70) + "║")
    print("║" + f" Tasks: {s['tasks_completed']}/{s['total_tasks']} completed".ljust(70) + "║")
    print("║" + f" Entities: +{s['entities_created']} ~{s['entities_updated']} -{s['entities_deleted']}".ljust(70) + "║")
    print("║" + f" Files: +{s['files_created']} ~{s['files_updated']} -{s['files_deleted']}".ljust(70) + "║")
    print("╚" + "═" * 70 + "╝")


# ============================================================================
# CLI Interface
# ============================================================================

def main():
    parser = argparse.ArgumentParser(description="Workflow versioning system")
    subparsers = parser.add_subparsers(dest='command', help='Commands')

    # create command
    create_parser = subparsers.add_parser('create', help='Create new workflow version')
    create_parser.add_argument('feature', help='Feature description')
    create_parser.add_argument('--parent', help='Parent version (for fixes)')

    # current command
    subparsers.add_parser('current', help='Show current workflow')

    # history command
    subparsers.add_parser('history', help='Show version history')

    # changelog command
    changelog_parser = subparsers.add_parser('changelog', help='Show version changelog')
    changelog_parser.add_argument('version', help='Version to show')

    # diff command
    diff_parser = subparsers.add_parser('diff', help='Compare two versions')
    diff_parser.add_argument('version1', help='First version')
    diff_parser.add_argument('version2', help='Second version')

    # task-start command
    task_start = subparsers.add_parser('task-start', help='Start a task session')
    task_start.add_argument('task_id', help='Task ID')
    task_start.add_argument('--type', default='create', help='Task type')
    task_start.add_argument('--agent', required=True, help='Agent performing task')

    # task-complete command
    task_complete = subparsers.add_parser('task-complete', help='Complete a task session')
    task_complete.add_argument('task_id', help='Task ID')
    task_complete.add_argument('--status', default='completed', help='Final status')

    # log-op command
    log_op = subparsers.add_parser('log-op', help='Log an operation')
    log_op.add_argument('task_id', help='Task ID')
    log_op.add_argument('op_type', choices=['CREATE', 'UPDATE', 'DELETE'])
    log_op.add_argument('target_type', choices=['file', 'entity', 'task', 'manifest'])
    log_op.add_argument('target_id', help='Target ID')
    log_op.add_argument('--path', help='File path if applicable')
    log_op.add_argument('--summary', help='Change summary')

    # complete command
    subparsers.add_parser('complete', help='Complete current workflow')

    # update-phase command
    phase_parser = subparsers.add_parser('update-phase', help='Update workflow phase')
    phase_parser.add_argument('phase', help='New phase')

    # tasks-dir command
    tasks_dir_parser = subparsers.add_parser('tasks-dir', help='Get tasks directory for current or specific version')
    tasks_dir_parser.add_argument('--version', help='Specific version (defaults to current)')

    args = parser.parse_args()

    if args.command == 'create':
        session = create_workflow_session(args.feature, args.parent)
        print(f"Created workflow version: {session['version']}")
        print(f"Feature: {args.feature}")
        print(f"Session ID: {session['session_id']}")

    elif args.command == 'current':
        show_current()

    elif args.command == 'history':
        show_history()

    elif args.command == 'changelog':
        show_changelog(args.version)

    elif args.command == 'diff':
        result = diff_versions(args.version1, args.version2)
        if result:
            print(json.dumps(result, indent=2))
        else:
            print("Could not compare versions")

    elif args.command == 'task-start':
        session = load_current_session()
        if not session:
            print("Error: No active workflow")
            sys.exit(1)
        task = create_task_session(session, args.task_id, args.type, args.agent)
        print(f"Started task session: {task['session_id']}")

    elif args.command == 'task-complete':
        session = load_current_session()
        if not session:
            print("Error: No active workflow")
            sys.exit(1)
        task = load_task_session(session['version'], args.task_id)
        if task:
            complete_task_session(task, args.status)
            print(f"Completed task: {args.task_id}")
        else:
            print(f"Task session not found: {args.task_id}")

    elif args.command == 'log-op':
        session = load_current_session()
        if not session:
            print("Error: No active workflow")
            sys.exit(1)
        task = load_task_session(session['version'], args.task_id)
        if task:
            op = log_operation(
                task,
                args.op_type,
                args.target_type,
                args.target_id,
                target_path=args.path,
                diff_summary=args.summary
            )
            print(f"Logged operation: {op['id']}")
        else:
            print(f"Task session not found: {args.task_id}")

    elif args.command == 'complete':
        session = load_current_session()
        if not session:
            print("Error: No active workflow")
            sys.exit(1)
        complete_workflow_session(session)
        print(f"Completed workflow: {session['version']}")

    elif args.command == 'update-phase':
        session = load_current_session()
        if not session:
            print("Error: No active workflow")
            sys.exit(1)
        session['current_phase'] = args.phase
        save_current_session(session)
        print(f"Updated phase to: {args.phase}")

    elif args.command == 'tasks-dir':
        if args.version:
            # Specific version requested
            tasks_dir = get_version_tasks_dir(args.version)
            tasks_dir.mkdir(parents=True, exist_ok=True)
            print(str(tasks_dir))
        else:
            # Use current version
            tasks_dir = get_current_tasks_dir()
            if tasks_dir:
                print(str(tasks_dir))
            else:
                print("Error: No active workflow")
                sys.exit(1)

    else:
        parser.print_help()


if __name__ == "__main__":
    main()

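# Example invocations (illustrative; run the script directly by whatever
# filename it is saved under in this repo):
#   python3 <this script> create "Add login page"
#   python3 <this script> task-start task_001 --agent coder
#   python3 <this script> log-op task_001 CREATE file comp_button --path src/components/Button.tsx
#   python3 <this script> changelog v001
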
@ -0,0 +1,343 @@
#!/usr/bin/env python3
"""
Design visualization for guardrail workflow.

Generates ASCII art visualization of pages, components, and API endpoints
from the project manifest.

Usage:
    python3 visualize_design.py --manifest project_manifest.json
"""

import argparse
import json
import os
import sys


def load_manifest(manifest_path: str) -> dict | None:
    """Load manifest if it exists."""
    if not os.path.exists(manifest_path):
        return None
    try:
        with open(manifest_path) as f:
            return json.load(f)
    except (json.JSONDecodeError, IOError):
        return None


def get_status_icon(status: str) -> str:
    """Get icon for entity status."""
    icons = {
        'PENDING': '⏳',
        'APPROVED': '✅',
        'IMPLEMENTED': '🟢',
        'IN_PROGRESS': '🔄',
        'REJECTED': '❌',
    }
    return icons.get(status, '○')


def visualize_page(page: dict, components: list, indent: str = "") -> list:
    """Generate ASCII visualization for a page."""
    lines = []
    name = page.get('name', 'Unknown')
    status = page.get('status', 'PENDING')
    file_path = page.get('file_path', '')
    description = page.get('description', '')

    icon = get_status_icon(status)

    # Page header
    lines.append(f"{indent}┌{'─' * 60}┐")
    lines.append(f"{indent}│ {icon} PAGE: {name:<50} │")
    lines.append(f"{indent}│ {' ' * 3}Path: {file_path:<48} │")
    if description:
        desc_short = description[:45] + '...' if len(description) > 45 else description
        lines.append(f"{indent}│ {' ' * 3}Desc: {desc_short:<48} │")
    lines.append(f"{indent}├{'─' * 60}┤")

    # Find components used by this page
    page_components = []
    page_id = page.get('id', '')
    for comp in components:
        deps = comp.get('dependencies', [])
        used_by = comp.get('used_by', [])
        if page_id in deps or page_id in used_by or page.get('name', '').lower() in str(comp).lower():
            page_components.append(comp)

    if page_components:
        lines.append(f"{indent}│ COMPONENTS: │")
        for comp in page_components:
            comp_name = comp.get('name', 'Unknown')
            comp_status = comp.get('status', 'PENDING')
            comp_icon = get_status_icon(comp_status)
            lines.append(f"{indent}│ {comp_icon} {comp_name:<53} │")
    else:
        lines.append(f"{indent}│ (No components defined yet) │")

    lines.append(f"{indent}└{'─' * 60}┘")

    return lines


def visualize_component_tree(components: list) -> list:
    """Generate ASCII tree of components."""
    lines = []

    if not components:
        return [" (No components defined)"]

    lines.append("┌─────────────────────────────────────────────────────────────┐")
    lines.append("│ 🧩 COMPONENTS │")
    lines.append("├─────────────────────────────────────────────────────────────┤")

    for i, comp in enumerate(components):
        name = comp.get('name', 'Unknown')
        status = comp.get('status', 'PENDING')
        file_path = comp.get('file_path', '')
        icon = get_status_icon(status)

        is_last = i == len(components) - 1
        prefix = "└──" if is_last else "├──"

        lines.append(f"│ {prefix} {icon} {name:<50} │")
        lines.append(f"│ {' ' if is_last else '│ '} {file_path:<50} │")

    lines.append("└─────────────────────────────────────────────────────────────┘")

    return lines


def visualize_api_endpoints(endpoints: list) -> list:
    """Generate ASCII visualization of API endpoints."""
    lines = []

    if not endpoints:
        return []

    lines.append("┌─────────────────────────────────────────────────────────────┐")
    lines.append("│ 🔌 API ENDPOINTS │")
    lines.append("├─────────────────────────────────────────────────────────────┤")

    for endpoint in endpoints:
        name = endpoint.get('name', 'Unknown')
        method = endpoint.get('method', 'GET')
        path = endpoint.get('path', endpoint.get('file_path', ''))
        status = endpoint.get('status', 'PENDING')
        icon = get_status_icon(status)

        method_colors = {
            'GET': '🟢',
            'POST': '🟡',
            'PUT': '🟠',
            'PATCH': '🟠',
            'DELETE': '🔴',
        }
        method_icon = method_colors.get(method.upper(), '⚪')

        lines.append(f"│ {icon} {method_icon} {method.upper():<6} {name:<45} │")
        lines.append(f"│ Path: {path:<47} │")

    lines.append("└─────────────────────────────────────────────────────────────┘")

    return lines


def visualize_page_flow(pages: list) -> list:
    """Generate ASCII flow diagram of pages."""
    lines = []

    if not pages:
        return []

    lines.append("")
    lines.append("📱 PAGE FLOW DIAGRAM")
    lines.append("═" * 65)
    lines.append("")

    # Simple flow visualization
    for i, page in enumerate(pages):
        name = page.get('name', 'Unknown')
        status = page.get('status', 'PENDING')
        icon = get_status_icon(status)

        # Page box
        box_width = max(len(name) + 4, 20)
        lines.append(f" ┌{'─' * box_width}┐")
        lines.append(f" │ {icon} {name.center(box_width - 4)} │")
        lines.append(f" └{'─' * box_width}┘")

        # Arrow to next page (if not last)
        if i < len(pages) - 1:
            lines.append(f" {'│'.center(box_width + 4)}")
            lines.append(f" {'▼'.center(box_width + 4)}")
            lines.append("")

    return lines


def visualize_data_flow(manifest: dict) -> list:
    """Generate data flow visualization."""
    lines = []

    pages = manifest.get('entities', {}).get('pages', [])
    components = manifest.get('entities', {}).get('components', [])
    endpoints = manifest.get('entities', {}).get('api_endpoints', [])

    if not any([pages, components, endpoints]):
        return []

    lines.append("")
    lines.append("🔄 DATA FLOW ARCHITECTURE")
    lines.append("═" * 65)
    lines.append("")
    lines.append(" ┌─────────────────────────────────────────────────────────┐")
    lines.append(" │ FRONTEND │")
    lines.append(" │ ┌─────────┐ ┌─────────────┐ ┌───────────────┐ │")
    lines.append(" │ │ Pages │───▶│ Components │───▶│ Hooks │ │")

    page_count = len(pages)
    comp_count = len(components)
    lines.append(f" │ │ ({page_count:^3}) │ │ ({comp_count:^3}) │ │ (state) │ │")
    lines.append(" │ └─────────┘ └─────────────┘ └───────┬───────┘ │")
    lines.append(" │ │ │")
    lines.append(" └────────────────────────────────────────────┼───────────┘")
    lines.append(" │")
    lines.append(" ▼")
    lines.append(" ┌─────────────────────────────────────────────────────────┐")
    lines.append(" │ BACKEND │")
    lines.append(" │ ┌─────────────────────────────────────────────────┐ │")
    lines.append(" │ │ API Endpoints │ │")

    api_count = len(endpoints)
    lines.append(f" │ │ ({api_count:^3}) │ │")
    lines.append(" │ └──────────────────────────┬──────────────────────┘ │")
    lines.append(" │ │ │")
    lines.append(" │ ▼ │")
    lines.append(" │ ┌─────────────────────────────────────────────────┐ │")
    lines.append(" │ │ Database │ │")
    lines.append(" │ └─────────────────────────────────────────────────┘ │")
    lines.append(" └─────────────────────────────────────────────────────────┘")

    return lines


def generate_full_visualization(manifest: dict) -> str:
    """Generate complete design visualization."""
    lines = []

    project_name = manifest.get('project', {}).get('name', 'Unknown Project')
    entities = manifest.get('entities', {})

    pages = entities.get('pages', [])
    components = entities.get('components', [])
    api_endpoints = entities.get('api_endpoints', [])

    # Header
    lines.append("")
    lines.append("╔═══════════════════════════════════════════════════════════════╗")
    lines.append("║ 📐 DESIGN VISUALIZATION ║")
    lines.append(f"║ Project: {project_name:<51} ║")
    lines.append("╚═══════════════════════════════════════════════════════════════╝")
    lines.append("")

    # Summary counts
    lines.append("📊 ENTITY SUMMARY")
    lines.append("─" * 65)
    lines.append(f" Pages: {len(pages):>3} │ Components: {len(components):>3} │ API Endpoints: {len(api_endpoints):>3}")
    lines.append("")

    # Page Flow
    if pages:
        lines.extend(visualize_page_flow(pages))
        lines.append("")

    # Detailed Pages with Components
    if pages:
        lines.append("")
        lines.append("📄 PAGE DETAILS")
        lines.append("═" * 65)
        for page in pages:
            lines.extend(visualize_page(page, components))
            lines.append("")

    # Component Tree
    if components:
        lines.append("")
        lines.append("🧩 COMPONENT HIERARCHY")
        lines.append("═" * 65)
        lines.extend(visualize_component_tree(components))
        lines.append("")

    # API Endpoints
    if api_endpoints:
        lines.append("")
        lines.append("🔌 API LAYER")
        lines.append("═" * 65)
        lines.extend(visualize_api_endpoints(api_endpoints))
        lines.append("")

    # Data Flow Architecture
    lines.extend(visualize_data_flow(manifest))
    lines.append("")

    # Legend
    lines.append("")
    lines.append("📋 LEGEND")
    lines.append("─" * 65)
    lines.append(" ⏳ PENDING - Designed, awaiting approval")
    lines.append(" ✅ APPROVED - Approved, ready for implementation")
    lines.append(" 🔄 IN_PROGRESS - Currently being implemented")
    lines.append(" 🟢 IMPLEMENTED - Implementation complete")
    lines.append("")

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Visualize design from manifest")
    parser.add_argument("--manifest", required=True, help="Path to project_manifest.json")
    parser.add_argument("--format", choices=['full', 'pages', 'components', 'api', 'flow'],
                        default='full', help="Visualization format")
    args = parser.parse_args()

    manifest = load_manifest(args.manifest)

    if manifest is None:
        print("❌ Error: Could not load manifest from", args.manifest)
        return 1

    entities = manifest.get('entities', {})

    if not any([
        entities.get('pages'),
        entities.get('components'),
        entities.get('api_endpoints')
    ]):
        print("⚠️ No entities found in manifest. Design phase may not be complete.")
        return 0

    if args.format == 'full':
        print(generate_full_visualization(manifest))
    elif args.format == 'pages':
        pages = entities.get('pages', [])
        components = entities.get('components', [])
        for page in pages:
            print("\n".join(visualize_page(page, components)))
    elif args.format == 'components':
        components = entities.get('components', [])
        print("\n".join(visualize_component_tree(components)))
    elif args.format == 'api':
        endpoints = entities.get('api_endpoints', [])
        print("\n".join(visualize_api_endpoints(endpoints)))
    elif args.format == 'flow':
        pages = entities.get('pages', [])
        print("\n".join(visualize_page_flow(pages)))

    return 0


if __name__ == "__main__":
    sys.exit(main())

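# Example invocations (illustrative; the script name comes from the Usage
# section of the module docstring above):
#   python3 visualize_design.py --manifest project_manifest.json
#   python3 visualize_design.py --manifest project_manifest.json --format components
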
@ -0,0 +1,651 @@
#!/usr/bin/env python3
"""
Implementation Visualizer for guardrail workflow.

Generates visual representation of implemented pages and components:
- Component tree structure
- Props and interfaces
- Page layouts
- API endpoints
- File statistics

Usage:
    python3 visualize_implementation.py --manifest project_manifest.json
    python3 visualize_implementation.py --tasks-dir .workflow/versions/v001/tasks
"""

import argparse
import json
import os
import re
import sys
from pathlib import Path
from dataclasses import dataclass, field

# Try to import yaml
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


@dataclass
class ComponentInfo:
    """Parsed component information."""
    name: str
    file_path: str
    props: list[str] = field(default_factory=list)
    imports: list[str] = field(default_factory=list)
    exports: list[str] = field(default_factory=list)
    hooks: list[str] = field(default_factory=list)
    children: list[str] = field(default_factory=list)
    lines: int = 0
    has_types: bool = False
    status: str = "IMPLEMENTED"


@dataclass
class PageInfo:
    """Parsed page information."""
    name: str
    file_path: str
    route: str
    components: list[str] = field(default_factory=list)
    api_calls: list[str] = field(default_factory=list)
    lines: int = 0
    is_client: bool = False
    is_server: bool = True


@dataclass
class APIEndpointInfo:
    """Parsed API endpoint information."""
    name: str
    file_path: str
    route: str
    methods: list[str] = field(default_factory=list)
    has_auth: bool = False
    has_validation: bool = False
    lines: int = 0


def load_yaml(filepath: str) -> dict:
    """Load YAML file."""
    if not os.path.exists(filepath):
        return {}
    with open(filepath, 'r') as f:
        content = f.read()
    if HAS_YAML:
        return yaml.safe_load(content) or {}
    # Basic fallback
    result = {}
    for line in content.split('\n'):
        if ':' in line and not line.startswith(' '):
            key, _, value = line.partition(':')
            result[key.strip()] = value.strip()
    return result


def load_manifest(manifest_path: str) -> dict:
    """Load project manifest."""
    if not os.path.exists(manifest_path):
        return {}
    try:
        with open(manifest_path) as f:
            return json.load(f)
    except (json.JSONDecodeError, IOError):
        return {}


def parse_typescript_file(file_path: str) -> dict:
    """Parse TypeScript/TSX file for component information."""
    if not os.path.exists(file_path):
        return {'exists': False}

    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read()
        lines = content.split('\n')
    except (IOError, UnicodeDecodeError):
        return {'exists': False}

    result = {
        'exists': True,
        'lines': len(lines),
        'imports': [],
        'exports': [],
        'props': [],
        'hooks': [],
        'components_used': [],
        'api_calls': [],
        'is_client': "'use client'" in content or '"use client"' in content,
        'has_types': 'interface ' in content or 'type ' in content,
        'methods': [],
    }

    # Extract imports
    import_pattern = r"import\s+(?:{[^}]+}|\w+)\s+from\s+['\"]([^'\"]+)['\"]"
    for match in re.finditer(import_pattern, content):
        result['imports'].append(match.group(1))

    # Extract exports
    export_patterns = [
        r"export\s+(?:default\s+)?(?:function|const|class)\s+(\w+)",
        r"export\s+{\s*([^}]+)\s*}",
    ]
    for pattern in export_patterns:
        for match in re.finditer(pattern, content):
            exports = match.group(1).split(',')
            result['exports'].extend([e.strip() for e in exports if e.strip()])

    # Extract props interface
    props_pattern = r"(?:interface|type)\s+(\w*Props\w*)\s*(?:=|{)"
    for match in re.finditer(props_pattern, content):
        result['props'].append(match.group(1))

    # Extract React hooks
    hooks_pattern = r"\b(use[A-Z]\w+)\s*\("
    for match in re.finditer(hooks_pattern, content):
        hook = match.group(1)
        if hook not in result['hooks']:
            result['hooks'].append(hook)

    # Extract component usage (JSX)
    component_pattern = r"<([A-Z]\w+)(?:\s|/|>)"
    for match in re.finditer(component_pattern, content):
        comp = match.group(1)
        if comp not in result['components_used'] and comp not in ['React', 'Fragment']:
            result['components_used'].append(comp)

    # Extract API calls
    api_patterns = [
        r"fetch\s*\(\s*['\"`](/api/[^'\"`]+)['\"`]",
        r"axios\.\w+\s*\(\s*['\"`](/api/[^'\"`]+)['\"`]",
    ]
    for pattern in api_patterns:
        for match in re.finditer(pattern, content):
            result['api_calls'].append(match.group(1))

    # Extract HTTP methods (for API routes)
    method_pattern = r"export\s+(?:async\s+)?function\s+(GET|POST|PUT|DELETE|PATCH)"
    for match in re.finditer(method_pattern, content):
        result['methods'].append(match.group(1))

    return result


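# Sketch of what parse_typescript_file extracts from a component like this
# (a hypothetical file, written to match the regexes above):
#
#   'use client'
#   import { useState } from 'react'
#   interface ButtonProps { label: string }
#   export function Button({ label }: ButtonProps) {
#     const [count, setCount] = useState(0)
#     return <Badge>{label}</Badge>
#   }
#
# -> is_client: True, imports: ['react'], props: ['ButtonProps'],
#    hooks: ['useState'], exports: ['Button'], components_used: ['Badge'],
#    has_types: True

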
def get_route_from_path(file_path: str) -> str:
    """Convert file path to route."""
    # Handle Next.js App Router
    if '/app/' in file_path:
        route = file_path.split('/app/')[-1]
        route = re.sub(r'/page\.(tsx?|jsx?)$', '', route)
        route = re.sub(r'/route\.(tsx?|jsx?)$', '', route)
        route = '/' + route if route else '/'
        # Handle dynamic routes
        route = re.sub(r'\[(\w+)\]', r':\1', route)
        return route

    # Handle Pages Router
    if '/pages/' in file_path:
        route = file_path.split('/pages/')[-1]
        route = re.sub(r'\.(tsx?|jsx?)$', '', route)
        route = re.sub(r'/index$', '', route)
        route = '/' + route if route else '/'
        route = re.sub(r'\[(\w+)\]', r':\1', route)
        return route

    return file_path


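# The mapping this implements, on a few illustrative paths:
#   src/app/dashboard/page.tsx    -> /dashboard
#   src/app/users/[id]/page.tsx   -> /users/:id
#   src/app/api/users/route.ts    -> /api/users
#   src/pages/blog/[slug].tsx     -> /blog/:slug

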
def visualize_component(info: ComponentInfo, indent: str = "") -> list[str]:
    """Generate ASCII visualization for a component."""
    lines = []

    status_icon = {
        'IMPLEMENTED': '🟢',
        'PENDING': '⏳',
        'IN_PROGRESS': '🔄',
        'ERROR': '❌',
    }.get(info.status, '○')

    # Component header
    lines.append(f"{indent}┌{'─' * 60}┐")
    lines.append(f"{indent}│ {status_icon} COMPONENT: {info.name:<46} │")
    lines.append(f"{indent}│ 📁 {info.file_path:<52} │")
    lines.append(f"{indent}│ 📏 {info.lines} lines │"[:63] + "│")

    # Props
    if info.props:
        lines.append(f"{indent}├{'─' * 60}┤")
        lines.append(f"{indent}│ PROPS │")
        for prop in info.props[:3]:
            lines.append(f"{indent}│ • {prop:<54} │")

    # Hooks
    if info.hooks:
        lines.append(f"{indent}├{'─' * 60}┤")
        lines.append(f"{indent}│ HOOKS │")
        hooks_str = ', '.join(info.hooks[:5])
        if len(hooks_str) > 52:
            hooks_str = hooks_str[:49] + '...'
        lines.append(f"{indent}│ {hooks_str:<56} │")

    # Children components
    if info.children:
        lines.append(f"{indent}├{'─' * 60}┤")
        lines.append(f"{indent}│ USES COMPONENTS │")
        for child in info.children[:5]:
            lines.append(f"{indent}│ └── {child:<52} │")

    lines.append(f"{indent}└{'─' * 60}┘")

    return lines


def visualize_page(info: PageInfo, indent: str = "") -> list[str]:
    """Generate ASCII visualization for a page."""
    lines = []

    client_icon = "🖥️" if info.is_client else "🌐"

    # Page header
    lines.append(f"{indent}╔{'═' * 62}╗")
    lines.append(f"{indent}║ {client_icon} PAGE: {info.name:<52} ║")
    lines.append(f"{indent}║ Route: {info.route:<51} ║")
    lines.append(f"{indent}║ File: {info.file_path:<51} ║")
    lines.append(f"{indent}╠{'═' * 62}╣")

    # Components used
    if info.components:
        lines.append(f"{indent}║ COMPONENTS USED ║")
        for comp in info.components[:6]:
            lines.append(f"{indent}║ ├── {comp:<54} ║")
        if len(info.components) > 6:
            lines.append(f"{indent}║ └── ... and {len(info.components) - 6} more ║"[:65] + "║")
    else:
        lines.append(f"{indent}║ (No child components detected) ║")

    # API calls
    if info.api_calls:
        lines.append(f"{indent}╠{'═' * 62}╣")
        lines.append(f"{indent}║ API CALLS ║")
        for api in info.api_calls[:4]:
            api_short = api[:50] if len(api) <= 50 else api[:47] + '...'
            lines.append(f"{indent}║ 🔌 {api_short:<55} ║")

    lines.append(f"{indent}╚{'═' * 62}╝")

    return lines


def visualize_api_endpoint(info: APIEndpointInfo, indent: str = "") -> list[str]:
    """Generate ASCII visualization for an API endpoint."""
    lines = []

    method_colors = {
        'GET': '🟢',
        'POST': '🟡',
        'PUT': '🟠',
        'PATCH': '🟠',
        'DELETE': '🔴',
    }

    methods_str = ' '.join([f"{method_colors.get(m, '⚪')}{m}" for m in info.methods])

    lines.append(f"{indent}┌{'─' * 60}┐")
    lines.append(f"{indent}│ 🔌 API: {info.route:<50} │")
    lines.append(f"{indent}│ Methods: {methods_str:<47} │"[:63] + "│")
    lines.append(f"{indent}│ File: {info.file_path:<50} │")

    features = []
    if info.has_auth:
        features.append("🔐 Auth")
    if info.has_validation:
        features.append("✓ Validation")
    if features:
        features_str = ' '.join(features)
        lines.append(f"{indent}│ Features: {features_str:<46} │")

    lines.append(f"{indent}└{'─' * 60}┘")

    return lines


def generate_implementation_tree(components: list[ComponentInfo]) -> list[str]:
    """Generate a tree view of component hierarchy."""
    lines = []

    lines.append("")
    lines.append("🌳 COMPONENT HIERARCHY")
    lines.append("═" * 65)

    if not components:
        lines.append(" (No components found)")
        return lines

    # Group by directory
    by_dir: dict[str, list[ComponentInfo]] = {}
    for comp in components:
        dir_path = str(Path(comp.file_path).parent)
        if dir_path not in by_dir:
            by_dir[dir_path] = []
        by_dir[dir_path].append(comp)

    for dir_path, comps in sorted(by_dir.items()):
        lines.append(f" 📂 {dir_path}/")
        for i, comp in enumerate(comps):
            is_last = i == len(comps) - 1
            prefix = " └──" if is_last else " ├──"
            status = "🟢" if comp.status == "IMPLEMENTED" else "⏳"
            lines.append(f" {prefix} {status} {comp.name}")

            # Show props
            if comp.props:
                prop_prefix = " " if is_last else " │ "
                for prop in comp.props[:2]:
                    lines.append(f"{prop_prefix} 📋 {prop}")

    return lines


def generate_stats(
    pages: list[PageInfo],
    components: list[ComponentInfo],
    endpoints: list[APIEndpointInfo]
) -> list[str]:
    """Generate implementation statistics."""
    lines = []

    total_lines = sum(p.lines for p in pages) + sum(c.lines for c in components) + sum(e.lines for e in endpoints)
    client_pages = sum(1 for p in pages if p.is_client)
    server_pages = len(pages) - client_pages
    typed_components = sum(1 for c in components if c.has_types)

    lines.append("")
    lines.append("╔══════════════════════════════════════════════════════════════════╗")
    lines.append("║ 📊 IMPLEMENTATION STATS ║")
    lines.append("╠══════════════════════════════════════════════════════════════════╣")
    lines.append(f"║ Pages: {len(pages):<5} │ Client: {client_pages:<3} │ Server: {server_pages:<3} ║")
    lines.append(f"║ Components: {len(components):<5} │ Typed: {typed_components:<4} ║")
    lines.append(f"║ API Endpoints: {len(endpoints):<5} ║")
    lines.append(f"║ Total Lines: {total_lines:<5} ║")
    lines.append("╠══════════════════════════════════════════════════════════════════╣")

    # Hooks usage
    all_hooks = []
    for comp in components:
        all_hooks.extend(comp.hooks)
    hook_counts = {}
    for hook in all_hooks:
        hook_counts[hook] = hook_counts.get(hook, 0) + 1

    if hook_counts:
        lines.append("║ HOOKS USAGE ║")
        for hook, count in sorted(hook_counts.items(), key=lambda x: -x[1])[:5]:
            lines.append(f"║ {hook:<20} × {count:<3} ║"[:69] + "║")

    lines.append("╚══════════════════════════════════════════════════════════════════╝")

    return lines


def generate_page_flow(pages: list[PageInfo]) -> list[str]:
    """Generate page flow visualization."""
    lines = []

    if not pages:
        return lines

    lines.append("")
    lines.append("📱 PAGE STRUCTURE")
    lines.append("═" * 65)

    # Sort by route
    sorted_pages = sorted(pages, key=lambda p: p.route)

    for i, page in enumerate(sorted_pages):
        is_last = i == len(sorted_pages) - 1
        icon = "🖥️" if page.is_client else "🌐"

        # Page box
        lines.append(f" ┌{'─' * 50}┐")
        lines.append(f" │ {icon} {page.route:<47} │")
        lines.append(f" │ {page.name:<48} │")

        # Components count
        comp_count = len(page.components)
        api_count = len(page.api_calls)
        lines.append(f" │ 🧩 {comp_count} components 🔌 {api_count} API calls │"[:56] + "│")
        lines.append(f" └{'─' * 50}┘")

        if not is_last:
            lines.append(" │")
            lines.append(" ▼")

    return lines


def visualize_from_manifest(manifest_path: str) -> str:
    """Generate full visualization from manifest."""
    manifest = load_manifest(manifest_path)

    if not manifest:
        return "❌ Could not load manifest"

    entities = manifest.get('entities', {})
    project_name = manifest.get('project', {}).get('name', 'Unknown')

    lines = []

    # Header
    lines.append("")
    lines.append("╔═══════════════════════════════════════════════════════════════════╗")
    lines.append("║ 🏗️ IMPLEMENTATION VISUALIZATION ║")
    lines.append(f"║ Project: {project_name:<56} ║")
    lines.append("╚═══════════════════════════════════════════════════════════════════╝")

    pages: list[PageInfo] = []
    components: list[ComponentInfo] = []
    endpoints: list[APIEndpointInfo] = []

    # Parse pages
    for page_data in entities.get('pages', []):
        file_path = page_data.get('file_path', '')
        if file_path and os.path.exists(file_path):
            parsed = parse_typescript_file(file_path)
            page = PageInfo(
                name=page_data.get('name', 'Unknown'),
                file_path=file_path,
                route=get_route_from_path(file_path),
                components=parsed.get('components_used', []),
                api_calls=parsed.get('api_calls', []),
                lines=parsed.get('lines', 0),
                is_client=parsed.get('is_client', False),
            )
            pages.append(page)

    # Parse components
    for comp_data in entities.get('components', []):
        file_path = comp_data.get('file_path', '')
        if file_path and os.path.exists(file_path):
            parsed = parse_typescript_file(file_path)
            comp = ComponentInfo(
                name=comp_data.get('name', 'Unknown'),
                file_path=file_path,
                props=parsed.get('props', []),
                imports=parsed.get('imports', []),
                exports=parsed.get('exports', []),
                hooks=parsed.get('hooks', []),
                children=parsed.get('components_used', []),
                lines=parsed.get('lines', 0),
                has_types=parsed.get('has_types', False),
                status=comp_data.get('status', 'IMPLEMENTED'),
            )
            components.append(comp)

    # Parse API endpoints
    for api_data in entities.get('api_endpoints', []):
        file_path = api_data.get('file_path', '')
        if file_path and os.path.exists(file_path):
            parsed = parse_typescript_file(file_path)
            endpoint = APIEndpointInfo(
                name=api_data.get('name', 'Unknown'),
                file_path=file_path,
                route=get_route_from_path(file_path),
                methods=parsed.get('methods', ['GET']),
                lines=parsed.get('lines', 0),
            )
            endpoints.append(endpoint)

    # Page flow
    lines.extend(generate_page_flow(pages))

    # Detailed pages
    if pages:
        lines.append("")
        lines.append("📄 PAGE DETAILS")
        lines.append("═" * 65)
        for page in pages:
            lines.extend(visualize_page(page))
            lines.append("")

    # Component hierarchy
    lines.extend(generate_implementation_tree(components))

    # Detailed components
    if components:
        lines.append("")
        lines.append("🧩 COMPONENT DETAILS")
        lines.append("═" * 65)
        for comp in components[:10]:  # Limit to 10
            lines.extend(visualize_component(comp))
            lines.append("")
        if len(components) > 10:
            lines.append(f" ... and {len(components) - 10} more components")

    # API endpoints
    if endpoints:
        lines.append("")
        lines.append("🔌 API ENDPOINTS")
        lines.append("═" * 65)
        for endpoint in endpoints:
            lines.extend(visualize_api_endpoint(endpoint))
            lines.append("")

    # Stats
    lines.extend(generate_stats(pages, components, endpoints))

    # Legend
    lines.append("")
    lines.append("📋 LEGEND")
    lines.append("─" * 65)
    lines.append(" 🟢 Implemented ⏳ Pending 🔄 In Progress ❌ Error")
    lines.append(" 🖥️ Client Component 🌐 Server Component")
    lines.append(" 🟢 GET 🟡 POST 🟠 PUT/PATCH 🔴 DELETE")
    lines.append("")

    return "\n".join(lines)


def visualize_from_tasks(tasks_dir: str) -> str:
    """Generate visualization from task files."""
    tasks_path = Path(tasks_dir)

    if not tasks_path.exists():
        return f"❌ Tasks directory not found: {tasks_dir}"

    task_files = list(tasks_path.glob('task_*.yml'))

    if not task_files:
        return f"❌ No task files found in: {tasks_dir}"

    lines = []
    lines.append("")
    lines.append("╔═══════════════════════════════════════════════════════════════════╗")
    lines.append("║ 📋 TASK IMPLEMENTATION STATUS ║")
    lines.append("╚═══════════════════════════════════════════════════════════════════╝")
    lines.append("")

    implemented_files = []

    for task_file in sorted(task_files):
        task = load_yaml(str(task_file))
        task_id = task.get('id', task_file.stem)
        status = task.get('status', 'unknown')
        title = task.get('title', 'Unknown task')
        file_paths = task.get('file_paths', [])

        status_icon = {
            'completed': '✅',
            'approved': '✅',
            'pending': '⏳',
            'in_progress': '🔄',
            'blocked': '🚫',
        }.get(status, '○')

        lines.append(f" {status_icon} {task_id}")
        lines.append(f" {title[:55]}")

        for fp in file_paths:
            if os.path.exists(fp):
                lines.append(f" └── ✓ {fp}")
                implemented_files.append(fp)
            else:
                lines.append(f" └── ✗ {fp} (missing)")

        lines.append("")

    # Parse and visualize implemented files
    if implemented_files:
        lines.append("─" * 65)
        lines.append("")
        lines.append("🔍 IMPLEMENTED FILES ANALYSIS")
        lines.append("")

        for fp in implemented_files[:5]:
            parsed = parse_typescript_file(fp)
            if parsed.get('exists'):
                lines.append(f" 📁 {fp}")
                lines.append(f" Lines: {parsed.get('lines', 0)}")

                if parsed.get('exports'):
                    lines.append(f" Exports: {', '.join(parsed['exports'][:3])}")
                if parsed.get('hooks'):
                    lines.append(f" Hooks: {', '.join(parsed['hooks'][:3])}")
                if parsed.get('components_used'):
                    lines.append(f" Uses: {', '.join(parsed['components_used'][:3])}")

                lines.append("")

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Visualize implementation")
    parser.add_argument('--manifest', help='Path to project_manifest.json')
    parser.add_argument('--tasks-dir', help='Path to tasks directory')
    parser.add_argument('--format', choices=['full', 'tree', 'stats', 'pages'],
                        default='full', help='Output format')
    args = parser.parse_args()

    if args.manifest:
        output = visualize_from_manifest(args.manifest)
    elif args.tasks_dir:
        output = visualize_from_tasks(args.tasks_dir)
    else:
        # Auto-detect
        if os.path.exists('project_manifest.json'):
            output = visualize_from_manifest('project_manifest.json')
        else:
            output = "Usage: python3 visualize_implementation.py --manifest project_manifest.json"

    print(output)
    return 0


if __name__ == "__main__":
    sys.exit(main())
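
# Example invocations (illustrative; both flags are defined in main() above):
#   python3 visualize_implementation.py --manifest project_manifest.json
#   python3 visualize_implementation.py --tasks-dir .workflow/versions/v001/tasks
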
@ -0,0 +1,835 @@
#!/usr/bin/env python3
"""Workflow state management for automated orchestration with approval gates."""

import argparse
import json
import os
import shutil
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional

# Try to import yaml, fall back to basic parsing if not available
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False


# ============================================================================
# YAML Helpers
# ============================================================================

def load_yaml(filepath: str) -> dict:
|
||||||
|
"""Load YAML file."""
|
||||||
|
if not os.path.exists(filepath):
|
||||||
|
return {}
|
||||||
|
with open(filepath, 'r') as f:
|
||||||
|
content = f.read()
|
||||||
|
if not content.strip():
|
||||||
|
return {}
|
||||||
|
|
||||||
|
if HAS_YAML:
|
||||||
|
return yaml.safe_load(content) or {}
|
||||||
|
|
||||||
|
# Simple fallback parser
|
||||||
|
result = {}
|
||||||
|
current_key = None
|
||||||
|
current_list = None
|
||||||
|
|
||||||
|
for line in content.split('\n'):
|
||||||
|
line = line.rstrip()
|
||||||
|
if not line or line.startswith('#'):
|
||||||
|
continue
|
||||||
|
|
||||||
|
if line.startswith(' - '):
|
||||||
|
if current_list is not None:
|
||||||
|
value = line[4:].strip()
|
||||||
|
# Handle quoted strings
|
||||||
|
if (value.startswith('"') and value.endswith('"')) or \
|
||||||
|
(value.startswith("'") and value.endswith("'")):
|
||||||
|
value = value[1:-1]
|
||||||
|
current_list.append(value)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if ':' in line and not line.startswith(' '):
|
||||||
|
key, _, value = line.partition(':')
|
||||||
|
key = key.strip()
|
||||||
|
value = value.strip()
|
||||||
|
|
||||||
|
if value == '[]':
|
||||||
|
result[key] = []
|
||||||
|
current_list = result[key]
|
||||||
|
elif value == '{}':
|
||||||
|
result[key] = {}
|
||||||
|
current_list = None
|
||||||
|
elif value == 'null' or value == '~':
|
||||||
|
result[key] = None
|
||||||
|
current_list = None
|
||||||
|
elif value == 'true':
|
||||||
|
result[key] = True
|
||||||
|
current_list = None
|
||||||
|
elif value == 'false':
|
||||||
|
result[key] = False
|
||||||
|
current_list = None
|
||||||
|
elif value.isdigit():
|
||||||
|
result[key] = int(value)
|
||||||
|
current_list = None
|
||||||
|
elif value:
|
||||||
|
# Handle quoted strings
|
||||||
|
if (value.startswith('"') and value.endswith('"')) or \
|
||||||
|
(value.startswith("'") and value.endswith("'")):
|
||||||
|
value = value[1:-1]
|
||||||
|
result[key] = value
|
||||||
|
current_list = None
|
||||||
|
else:
|
||||||
|
result[key] = []
|
||||||
|
current_list = result[key]
|
||||||
|
current_key = key
|
||||||
|
|
||||||
|
return result
|
||||||
|

def save_yaml(filepath: str, data: dict):
    """Save data to YAML file."""
    dirname = os.path.dirname(filepath)
    if dirname:  # guard: makedirs('') raises for bare filenames
        os.makedirs(dirname, exist_ok=True)

    if HAS_YAML:
        with open(filepath, 'w') as f:
            yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
    else:
        # Simple YAML writer (two-space indent, matching the fallback parser)
        def write_value(value, indent=0):
            prefix = '  ' * indent
            lines = []
            if isinstance(value, dict):
                for k, v in value.items():
                    if isinstance(v, (dict, list)) and v:
                        lines.append(f"{prefix}{k}:")
                        lines.extend(write_value(v, indent + 1))
                    elif isinstance(v, list):
                        lines.append(f"{prefix}{k}: []")
                    elif v is None:
                        # write None as YAML null so the fallback parser round-trips it
                        lines.append(f"{prefix}{k}: null")
                    else:
                        lines.append(f"{prefix}{k}: {v}")
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, dict):
                        lines.append(f"{prefix}-")
                        for k, v in item.items():
                            lines.append(f"{prefix}  {k}: {v}")
                    else:
                        lines.append(f"{prefix}- {item}")
            return lines

        lines = write_value(data)
        with open(filepath, 'w') as f:
            f.write('\n'.join(lines))

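
# Illustrative sketch (not part of the original module): round-tripping a flat
# state dict through save_yaml/load_yaml above. The fallback parser only
# handles this flat subset; nested mappings require PyYAML. Helper name is
# hypothetical.
def _demo_yaml_roundtrip(tmp_path: str = '/tmp/_demo_state.yml'):
    demo = {'current_phase': 'IMPLEMENTING', 'retries': 2, 'tasks': ['task_a', 'task_b']}
    save_yaml(tmp_path, demo)
    loaded = load_yaml(tmp_path)
    assert loaded['current_phase'] == 'IMPLEMENTING'
    assert loaded['tasks'] == ['task_a', 'task_b']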

# ============================================================================
# Workflow State Management
# ============================================================================

PHASES = [
    'INITIALIZING',
    'DESIGNING',
    'AWAITING_DESIGN_APPROVAL',
    'DESIGN_APPROVED',
    'DESIGN_REJECTED',
    'IMPLEMENTING',
    'REVIEWING',
    'SECURITY_REVIEW',  # New phase for security audit
    'AWAITING_IMPL_APPROVAL',
    'IMPL_APPROVED',
    'IMPL_REJECTED',
    'COMPLETING',
    'COMPLETED',
    'PAUSED',
    'FAILED'
]

VALID_TRANSITIONS = {
    'INITIALIZING': ['DESIGNING', 'FAILED'],
    'DESIGNING': ['AWAITING_DESIGN_APPROVAL', 'FAILED'],
    'AWAITING_DESIGN_APPROVAL': ['DESIGN_APPROVED', 'DESIGN_REJECTED', 'PAUSED'],
    'DESIGN_APPROVED': ['IMPLEMENTING', 'FAILED'],
    'DESIGN_REJECTED': ['DESIGNING'],
    'IMPLEMENTING': ['REVIEWING', 'FAILED', 'PAUSED'],
    'REVIEWING': ['SECURITY_REVIEW', 'IMPLEMENTING', 'FAILED'],  # Must pass through security
    'SECURITY_REVIEW': ['AWAITING_IMPL_APPROVAL', 'IMPLEMENTING', 'FAILED'],  # Can go back to fix
    'AWAITING_IMPL_APPROVAL': ['IMPL_APPROVED', 'IMPL_REJECTED', 'PAUSED'],
    'IMPL_APPROVED': ['COMPLETING', 'FAILED'],
    'IMPL_REJECTED': ['IMPLEMENTING'],
    'COMPLETING': ['COMPLETED', 'FAILED'],
    'COMPLETED': [],
    'PAUSED': PHASES,  # Can resume to any phase
    'FAILED': ['INITIALIZING', 'DESIGNING', 'IMPLEMENTING']  # Can retry
}

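
# Illustrative consistency check (hypothetical helper, not in the original
# file): every source and target in VALID_TRANSITIONS should be a declared
# phase, so the table can be validated in one pass.
def _validate_transition_table():
    for phase, targets in VALID_TRANSITIONS.items():
        assert phase in PHASES, f"unknown source phase {phase!r}"
        for target in targets:
            assert target in PHASES, f"unknown target {target!r} from {phase!r}"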

def get_workflow_dir() -> Path:
    """Get the .workflow directory path."""
    return Path('.workflow')


def get_current_state_path() -> Path:
    """Get the current workflow state file path."""
    return get_workflow_dir() / 'current.yml'


def get_history_dir() -> Path:
    """Get the workflow history directory."""
    return get_workflow_dir() / 'history'


def create_workflow(feature: str) -> dict:
    """Create a new workflow state."""
    now = datetime.now()
    workflow_id = f"workflow_{now.strftime('%Y%m%d_%H%M%S')}"

    state = {
        'id': workflow_id,
        'feature': feature,
        'current_phase': 'INITIALIZING',
        'gates': {
            'design_approval': {
                'status': 'pending',
                'approved_at': None,
                'approved_by': None,
                'rejection_reason': None,
                'revision_count': 0
            },
            'implementation_approval': {
                'status': 'pending',
                'approved_at': None,
                'approved_by': None,
                'rejection_reason': None,
                'revision_count': 0
            }
        },
        'progress': {
            'entities_designed': 0,
            'tasks_created': 0,
            'tasks_implemented': 0,
            'tasks_reviewed': 0,
            'tasks_approved': 0,
            'tasks_completed': 0
        },
        'tasks': {
            'pending': [],
            'in_progress': [],
            'review': [],
            'approved': [],
            'completed': [],
            'blocked': []
        },
        'started_at': now.isoformat(),
        'updated_at': now.isoformat(),
        'completed_at': None,
        'last_error': None,
        'resume_point': {
            'phase': 'INITIALIZING',
            'task_id': None,
            'action': 'start_workflow'
        },
        'checkpoints': []  # List of checkpoint snapshots for recovery
    }

    # Ensure directory exists
    get_workflow_dir().mkdir(exist_ok=True)
    get_history_dir().mkdir(exist_ok=True)

    # Save state
    save_yaml(str(get_current_state_path()), state)

    return state


def load_current_workflow() -> Optional[dict]:
    """Load the current workflow state from the active version."""
    state_path = get_current_state_path()
    if not state_path.exists():
        return None

    # Read current.yml to get active version
    current = load_yaml(str(state_path))
    active_version = current.get('active_version')
    if not active_version:
        return None

    # Load the version's session.yml
    version_session_path = get_workflow_dir() / 'versions' / active_version / 'session.yml'
    if not version_session_path.exists():
        return None

    session = load_yaml(str(version_session_path))

    current_phase = session.get('current_phase', 'INITIALIZING')

    # Convert session format to state format expected by show_status
    return {
        'id': session.get('session_id', active_version),
        'feature': session.get('feature', 'Unknown'),
        'current_phase': current_phase,
        'gates': {
            'design_approval': session.get('approvals', {}).get('design', {'status': 'pending'}),
            'implementation_approval': session.get('approvals', {}).get('implementation', {'status': 'pending'})
        },
        'progress': {
            'entities_designed': session.get('summary', {}).get('entities_created', 0),
            'tasks_created': session.get('summary', {}).get('total_tasks', 0),
            'tasks_implemented': session.get('summary', {}).get('tasks_completed', 0),
            'tasks_reviewed': 0,
            'tasks_completed': session.get('summary', {}).get('tasks_completed', 0)
        },
        'tasks': {
            'pending': [],
            'in_progress': [],
            'review': [],
            'approved': [],
            'completed': session.get('task_sessions', []),
            'blocked': []
        },
        'version': active_version,
        'status': session.get('status', 'unknown'),
        'last_error': None,
        'started_at': session.get('started_at', ''),
        'updated_at': session.get('updated_at', ''),
        'completed_at': session.get('completed_at'),
        'resume_point': {
            'phase': current_phase,
            'task_id': None,
            'action': 'continue_workflow'
        }
    }

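
# File layout assumed by the loader above (inferred from its reads, not
# confirmed elsewhere in this commit): .workflow/current.yml is a pointer,
#
#   active_version: v3        # example value
#
# and .workflow/versions/v3/session.yml carries session_id, feature,
# current_phase, approvals, summary, task_sessions and the timestamps.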

def save_workflow(state: dict):
    """Save workflow state to the version's session.yml file."""
    # Get active version
    current_path = get_current_state_path()
    if not current_path.exists():
        print("Error: No current.yml found")
        return

    current = load_yaml(str(current_path))
    active_version = current.get('active_version')
    if not active_version:
        print("Error: No active version set")
        return

    # Get the version's session.yml path
    version_session_path = get_workflow_dir() / 'versions' / active_version / 'session.yml'
    if not version_session_path.exists():
        print(f"Error: Session file not found: {version_session_path}")
        return

    # Load existing session data
    session = load_yaml(str(version_session_path))

    # Create backup
    backup_path = version_session_path.with_suffix('.yml.bak')
    shutil.copy(version_session_path, backup_path)

    # Update session with state changes
    session['current_phase'] = state['current_phase']
    session['updated_at'] = datetime.now().isoformat()

    if state.get('completed_at'):
        session['completed_at'] = state['completed_at']
        session['status'] = 'completed'

    # Update approvals
    if 'gates' in state:
        if 'approvals' not in session:
            session['approvals'] = {}
        if state['gates'].get('design_approval', {}).get('status') == 'approved':
            session['approvals']['design'] = state['gates']['design_approval']
        if state['gates'].get('implementation_approval', {}).get('status') == 'approved':
            session['approvals']['implementation'] = state['gates']['implementation_approval']

    save_yaml(str(version_session_path), session)


def transition_phase(state: dict, new_phase: str, error: str = None) -> bool:
    """Transition workflow to a new phase."""
    current = state['current_phase']

    if new_phase not in PHASES:
        print(f"Error: Invalid phase '{new_phase}'")
        return False

    if new_phase not in VALID_TRANSITIONS.get(current, []):
        print(f"Error: Cannot transition from '{current}' to '{new_phase}'")
        print(f"Valid transitions: {VALID_TRANSITIONS.get(current, [])}")
        return False

    state['current_phase'] = new_phase
    state['resume_point']['phase'] = new_phase

    if new_phase == 'FAILED' and error:
        state['last_error'] = error

    if new_phase == 'COMPLETED':
        state['completed_at'] = datetime.now().isoformat()

    # Set appropriate resume action
    resume_actions = {
        'INITIALIZING': 'start_workflow',
        'DESIGNING': 'continue_design',
        'AWAITING_DESIGN_APPROVAL': 'await_user_approval',
        'DESIGN_APPROVED': 'start_implementation',
        'DESIGN_REJECTED': 'revise_design',
        'IMPLEMENTING': 'continue_implementation',
        'REVIEWING': 'continue_review',
        'SECURITY_REVIEW': 'run_security_audit',
        'AWAITING_IMPL_APPROVAL': 'await_user_approval',
        'IMPL_APPROVED': 'start_completion',
        'IMPL_REJECTED': 'fix_implementation',
        'COMPLETING': 'continue_completion',
        'COMPLETED': 'workflow_done',
        'PAUSED': 'resume_workflow',
        'FAILED': 'retry_or_abort'
    }
    state['resume_point']['action'] = resume_actions.get(new_phase, 'unknown')

    save_workflow(state)
    return True

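
# Illustrative happy path (hypothetical demo; each step persists state through
# save_workflow, so this writes under .workflow/):
def _demo_happy_path(state: dict):
    for phase in ('DESIGNING', 'AWAITING_DESIGN_APPROVAL', 'DESIGN_APPROVED',
                  'IMPLEMENTING', 'REVIEWING', 'SECURITY_REVIEW',
                  'AWAITING_IMPL_APPROVAL', 'IMPL_APPROVED',
                  'COMPLETING', 'COMPLETED'):
        if not transition_phase(state, phase):
            break  # invalid step; transition_phase already printed the reason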

def approve_gate(state: dict, gate: str, approver: str = 'user') -> bool:
    """Approve a gate."""
    if gate not in ['design_approval', 'implementation_approval']:
        print(f"Error: Invalid gate '{gate}'")
        return False

    state['gates'][gate]['status'] = 'approved'
    state['gates'][gate]['approved_at'] = datetime.now().isoformat()
    state['gates'][gate]['approved_by'] = approver

    # Transition to next phase
    if gate == 'design_approval':
        transition_phase(state, 'DESIGN_APPROVED')
    else:
        transition_phase(state, 'IMPL_APPROVED')

    return True


def reject_gate(state: dict, gate: str, reason: str) -> bool:
    """Reject a gate."""
    if gate not in ['design_approval', 'implementation_approval']:
        print(f"Error: Invalid gate '{gate}'")
        return False

    state['gates'][gate]['status'] = 'rejected'
    state['gates'][gate]['rejection_reason'] = reason
    state['gates'][gate]['revision_count'] += 1

    # Transition to rejection phase
    if gate == 'design_approval':
        transition_phase(state, 'DESIGN_REJECTED')
    else:
        transition_phase(state, 'IMPL_REJECTED')

    return True


def update_progress(state: dict, **kwargs):
    """Update progress counters."""
    for key, value in kwargs.items():
        if key in state['progress']:
            state['progress'][key] = value
    save_workflow(state)


def update_task_status(state: dict, task_id: str, new_status: str):
    """Update task status in workflow state."""
    # Remove from all status lists
    for status in state['tasks']:
        if task_id in state['tasks'][status]:
            state['tasks'][status].remove(task_id)

    # Add to new status list
    if new_status in state['tasks']:
        state['tasks'][new_status].append(task_id)

    # Update resume point if task is in progress
    if new_status == 'in_progress':
        state['resume_point']['task_id'] = task_id

    save_workflow(state)


def save_checkpoint(state: dict, description: str, data: dict = None) -> dict:
    """Save a checkpoint for recovery during long operations.

    Args:
        state: Current workflow state
        description: Human-readable description of checkpoint
        data: Optional additional data to store

    Returns:
        The checkpoint object that was created
    """
    checkpoint = {
        'id': f"checkpoint_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
        'timestamp': datetime.now().isoformat(),
        'phase': state['current_phase'],
        'description': description,
        'resume_point': state['resume_point'].copy(),
        'progress': state['progress'].copy(),
        'data': data or {}
    }

    # Keep only last 10 checkpoints to avoid bloat
    if 'checkpoints' not in state:
        state['checkpoints'] = []
    state['checkpoints'].append(checkpoint)
    if len(state['checkpoints']) > 10:
        state['checkpoints'] = state['checkpoints'][-10:]

    save_workflow(state)
    return checkpoint


def get_latest_checkpoint(state: dict) -> Optional[dict]:
    """Get the most recent checkpoint.

    Returns:
        Latest checkpoint or None if no checkpoints exist
    """
    checkpoints = state.get('checkpoints', [])
    return checkpoints[-1] if checkpoints else None


def restore_from_checkpoint(state: dict, checkpoint_id: str = None) -> bool:
    """Restore workflow state from a checkpoint.

    Args:
        state: Current workflow state
        checkpoint_id: Optional specific checkpoint ID, defaults to latest

    Returns:
        True if restoration was successful
    """
    checkpoints = state.get('checkpoints', [])
    if not checkpoints:
        print("Error: No checkpoints available")
        return False

    # Find checkpoint
    if checkpoint_id:
        checkpoint = next((c for c in checkpoints if c['id'] == checkpoint_id), None)
        if not checkpoint:
            print(f"Error: Checkpoint '{checkpoint_id}' not found")
            return False
    else:
        checkpoint = checkpoints[-1]

    # Restore state from checkpoint
    state['resume_point'] = checkpoint['resume_point'].copy()
    state['progress'] = checkpoint['progress'].copy()
    state['current_phase'] = checkpoint['phase']
    state['last_error'] = None  # Clear any error since we're recovering

    save_workflow(state)
    print(f"Restored from checkpoint: {checkpoint['description']}")
    return True


def list_checkpoints(state: dict) -> list:
    """List all available checkpoints.

    Returns:
        List of checkpoint summaries
    """
    return [
        {
            'id': c['id'],
            'timestamp': c['timestamp'],
            'phase': c['phase'],
            'description': c['description']
        }
        for c in state.get('checkpoints', [])
    ]


def clear_checkpoints(state: dict):
    """Clear all checkpoints (typically after successful completion)."""
    state['checkpoints'] = []
    save_workflow(state)

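
# Illustrative recovery pattern (hypothetical sequence): snapshot before a
# risky step, then roll back to the latest checkpoint if it fails.
def _demo_checkpoint_recovery(state: dict):
    save_checkpoint(state, "before bulk task generation", data={'batch': 1})
    try:
        raise RuntimeError("simulated failure")  # stand-in for the risky step
    except RuntimeError:
        restore_from_checkpoint(state)  # no id given, so the latest is used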

def archive_workflow(state: dict, suffix: str = ''):
    """Archive completed/aborted workflow."""
    history_dir = get_history_dir()
    history_dir.mkdir(exist_ok=True)

    filename = f"{state['id']}{suffix}.yml"
    archive_path = history_dir / filename

    save_yaml(str(archive_path), state)

    # Remove current state
    current_path = get_current_state_path()
    if current_path.exists():
        current_path.unlink()


def show_status(state: dict):
    """Display workflow status."""
    print()
    print("╔" + "═" * 58 + "╗")
    print("║" + "WORKFLOW STATUS".center(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + f"  ID: {state['id']}".ljust(58) + "║")
    print("║" + f"  Feature: {state['feature'][:45]}".ljust(58) + "║")
    print("║" + f"  Phase: {state['current_phase']}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + "  APPROVAL GATES".ljust(58) + "║")

    design_gate = state['gates']['design_approval']
    impl_gate = state['gates']['implementation_approval']

    design_icon = "✅" if design_gate['status'] == 'approved' else "❌" if design_gate['status'] == 'rejected' else "⏳"
    impl_icon = "✅" if impl_gate['status'] == 'approved' else "❌" if impl_gate['status'] == 'rejected' else "⏳"

    print("║" + f"  {design_icon} Design: {design_gate['status']}".ljust(58) + "║")
    print("║" + f"  {impl_icon} Implementation: {impl_gate['status']}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + "  PROGRESS".ljust(58) + "║")

    p = state['progress']
    print("║" + f"  Entities Designed: {p['entities_designed']}".ljust(58) + "║")
    print("║" + f"  Tasks Created: {p['tasks_created']}".ljust(58) + "║")
    print("║" + f"  Tasks Implemented: {p['tasks_implemented']}".ljust(58) + "║")
    print("║" + f"  Tasks Reviewed: {p['tasks_reviewed']}".ljust(58) + "║")
    print("║" + f"  Tasks Completed: {p['tasks_completed']}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + "  TASK BREAKDOWN".ljust(58) + "║")

    t = state['tasks']
    print("║" + f"  ⏳ Pending: {len(t['pending'])}".ljust(58) + "║")
    print("║" + f"  🔄 In Progress: {len(t['in_progress'])}".ljust(58) + "║")
    print("║" + f"  🔍 Review: {len(t['review'])}".ljust(58) + "║")
    print("║" + f"  ✅ Approved: {len(t['approved'])}".ljust(58) + "║")
    print("║" + f"  ✓ Completed: {len(t['completed'])}".ljust(58) + "║")
    print("║" + f"  🚫 Blocked: {len(t['blocked'])}".ljust(58) + "║")

    if state['last_error']:
        print("╠" + "═" * 58 + "╣")
        print("║" + "  ⚠️ LAST ERROR".ljust(58) + "║")
        print("║" + f"  {state['last_error'][:52]}".ljust(58) + "║")

    print("╠" + "═" * 58 + "╣")
    print("║" + "  TIMESTAMPS".ljust(58) + "║")
    print("║" + f"  Started: {state['started_at'][:19]}".ljust(58) + "║")
    print("║" + f"  Updated: {state['updated_at'][:19]}".ljust(58) + "║")
    if state['completed_at']:
        print("║" + f"  Completed: {state['completed_at'][:19]}".ljust(58) + "║")
    print("╚" + "═" * 58 + "╝")


# ============================================================================
# CLI Interface
# ============================================================================

def main():
    parser = argparse.ArgumentParser(description="Workflow state management")
    subparsers = parser.add_subparsers(dest='command', help='Commands')

    # create command
    create_parser = subparsers.add_parser('create', help='Create new workflow')
    create_parser.add_argument('feature', help='Feature to implement')

    # status command
    subparsers.add_parser('status', help='Show workflow status')

    # transition command
    trans_parser = subparsers.add_parser('transition', help='Transition to new phase')
    trans_parser.add_argument('phase', choices=PHASES, help='Target phase')
    trans_parser.add_argument('--error', help='Error message (for FAILED phase)')

    # approve command
    approve_parser = subparsers.add_parser('approve', help='Approve a gate')
    approve_parser.add_argument('gate', choices=['design', 'implementation'], help='Gate to approve')
    approve_parser.add_argument('--approver', default='user', help='Approver name')

    # reject command
    reject_parser = subparsers.add_parser('reject', help='Reject a gate')
    reject_parser.add_argument('gate', choices=['design', 'implementation'], help='Gate to reject')
    reject_parser.add_argument('reason', help='Rejection reason')

    # progress command
    progress_parser = subparsers.add_parser('progress', help='Update progress')
    progress_parser.add_argument('--entities', type=int, help='Entities designed')
    progress_parser.add_argument('--tasks-created', type=int, help='Tasks created')
    progress_parser.add_argument('--tasks-impl', type=int, help='Tasks implemented')
    progress_parser.add_argument('--tasks-reviewed', type=int, help='Tasks reviewed')
    progress_parser.add_argument('--tasks-completed', type=int, help='Tasks completed')

    # task command
    task_parser = subparsers.add_parser('task', help='Update task status')
    task_parser.add_argument('task_id', help='Task ID')
    task_parser.add_argument('status', choices=['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked'])

    # archive command
    archive_parser = subparsers.add_parser('archive', help='Archive workflow')
    archive_parser.add_argument('--suffix', default='', help='Filename suffix (e.g., _aborted)')

    # exists command
    subparsers.add_parser('exists', help='Check if workflow exists')

    # checkpoint command
    checkpoint_parser = subparsers.add_parser('checkpoint', help='Manage checkpoints')
    checkpoint_parser.add_argument('action', choices=['save', 'list', 'restore', 'clear'],
                                   help='Checkpoint action')
    checkpoint_parser.add_argument('--description', '-d', help='Checkpoint description (for save)')
    checkpoint_parser.add_argument('--id', help='Checkpoint ID (for restore)')
    checkpoint_parser.add_argument('--data', help='JSON data to store (for save)')

    args = parser.parse_args()

    if args.command == 'create':
        state = create_workflow(args.feature)
        print(f"Created workflow: {state['id']}")
        print(f"Feature: {args.feature}")
        print(f"State saved to: {get_current_state_path()}")

    elif args.command == 'status':
        state = load_current_workflow()
        if state:
            show_status(state)
        else:
            print("No active workflow found.")
            print("Start a new workflow with: /workflow:spawn <feature>")

    elif args.command == 'transition':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)
        if transition_phase(state, args.phase, args.error):
            print(f"Transitioned to: {args.phase}")
        else:
            sys.exit(1)

    elif args.command == 'approve':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)
        gate = f"{args.gate}_approval"
        if approve_gate(state, gate, args.approver):
            print(f"Approved: {args.gate}")

    elif args.command == 'reject':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)
        gate = f"{args.gate}_approval"
        if reject_gate(state, gate, args.reason):
            print(f"Rejected: {args.gate}")
            print(f"Reason: {args.reason}")

    elif args.command == 'progress':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)
        updates = {}
        if args.entities is not None:
            updates['entities_designed'] = args.entities
        if args.tasks_created is not None:
            updates['tasks_created'] = args.tasks_created
        if args.tasks_impl is not None:
            updates['tasks_implemented'] = args.tasks_impl
        if args.tasks_reviewed is not None:
            updates['tasks_reviewed'] = args.tasks_reviewed
        if args.tasks_completed is not None:
            updates['tasks_completed'] = args.tasks_completed
        if updates:
            update_progress(state, **updates)
            print("Progress updated")

    elif args.command == 'task':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)
        update_task_status(state, args.task_id, args.status)
        print(f"Task {args.task_id} → {args.status}")

    elif args.command == 'archive':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)
        archive_workflow(state, args.suffix)
        print(f"Workflow archived to: {get_history_dir()}/{state['id']}{args.suffix}.yml")

    elif args.command == 'exists':
        state = load_current_workflow()
        if state:
            print("true")
            sys.exit(0)
        else:
            print("false")
            sys.exit(1)

    elif args.command == 'checkpoint':
        state = load_current_workflow()
        if not state:
            print("Error: No active workflow")
            sys.exit(1)

        if args.action == 'save':
            if not args.description:
                print("Error: --description required for save")
                sys.exit(1)
            data = None
            if args.data:
                try:
                    data = json.loads(args.data)
                except json.JSONDecodeError:
                    print("Error: --data must be valid JSON")
                    sys.exit(1)
            checkpoint = save_checkpoint(state, args.description, data)
            print(f"Checkpoint saved: {checkpoint['id']}")
            print(f"Description: {args.description}")

        elif args.action == 'list':
            checkpoints = list_checkpoints(state)
            if not checkpoints:
                print("No checkpoints available")
            else:
                print("\n" + "=" * 60)
                print("CHECKPOINTS".center(60))
                print("=" * 60)
                for cp in checkpoints:
                    print(f"\n  ID: {cp['id']}")
                    print(f"  Time: {cp['timestamp'][:19]}")
                    print(f"  Phase: {cp['phase']}")
                    print(f"  Description: {cp['description']}")
                print("\n" + "=" * 60)

        elif args.action == 'restore':
            if restore_from_checkpoint(state, args.id):
                print("Workflow state restored successfully")
            else:
                sys.exit(1)

        elif args.action == 'clear':
            clear_checkpoints(state)
            print("All checkpoints cleared")

    else:
        parser.print_help()


if __name__ == "__main__":
    main()
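
# Typical CLI usage (illustrative; subcommands and flags as defined in main(),
# module filename assumed):
#   python3 workflow_state.py create "artist publishing platform"
#   python3 workflow_state.py transition DESIGNING
#   python3 workflow_state.py approve design --approver user
#   python3 workflow_state.py task task_create_button in_progress
#   python3 workflow_state.py checkpoint save -d "before review pass"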
@ -0,0 +1,99 @@
# Example Task - Created by Architect Agent
# Copy this template when creating new tasks

# ============================================
# FRONTEND TASK EXAMPLE
# ============================================
id: task_create_button
type: create
title: Create Button component
agent: frontend
status: pending
priority: high

entity_ids:
  - comp_button

file_paths:
  - app/components/Button.tsx

dependencies: []  # No dependencies, can start immediately

description: |
  Create a reusable Button component with variant support.
  Must match the manifest specification for comp_button.

acceptance_criteria:
  - Exports Button component as named export
  - Implements ButtonProps interface matching manifest
  - Supports variant prop (primary, secondary, danger)
  - Supports size prop (sm, md, lg)
  - Supports disabled state
  - Uses Tailwind CSS for styling
  - Follows existing component patterns

---
# ============================================
# BACKEND TASK EXAMPLE
# ============================================
id: task_create_api_tasks
type: create
title: Create Tasks API endpoints
agent: backend
status: pending
priority: high

entity_ids:
  - api_list_tasks
  - api_create_task

file_paths:
  - app/api/tasks/route.ts
  - app/lib/db.ts

dependencies:
  - task_create_db_tasks  # Needs database first

description: |
  Implement GET and POST handlers for /api/tasks endpoint.
  GET: List all tasks with optional filtering
  POST: Create a new task

acceptance_criteria:
  - Exports GET function for listing tasks
  - Exports POST function for creating tasks
  - GET supports ?status and ?search query params
  - POST validates required title field
  - Returns proper HTTP status codes
  - Matches manifest request/response schemas

---
# ============================================
# REVIEW TASK EXAMPLE
# ============================================
id: task_review_button
type: review
title: Review Button component implementation
agent: reviewer
status: pending
priority: medium

entity_ids:
  - comp_button

file_paths:
  - app/components/Button.tsx

dependencies:
  - task_create_button  # Must complete implementation first

description: |
  Review the Button component implementation for quality,
  correctness, and adherence to manifest specifications.

acceptance_criteria:
  - Code matches manifest spec
  - Props interface is correct
  - Follows project patterns
  - Lint passes
  - Type-check passes
@ -0,0 +1,6 @@
#!/bin/bash
# Auto-generated by Eureka Factory
# This script starts Claude Code with the workflow command

cd "$(dirname "$0")"
claude '/workflow:spawn --auto a platform where artists can publish their original songs'