feat: Add workflow enforcement system with code-reviewer subagent

- Add phase_gate.py for strict phase transition enforcement
- Add blocking_conditions.yml for checkpoint configuration
- Add code-reviewer subagent for comprehensive code review
- Integrate type-check and lint validation into REVIEWING phase
- Add fix loops that return to IMPLEMENTING when issues are found
- Add auto-fix capability for CRITICAL issues in auto mode
- Update spawn.md with gate checks for all phases (1-7)
- Add ENFORCEMENT_PLAN.md documenting the enforcement architecture

Key features:
- Blocking checkpoints that MUST pass before phase transitions
- Script-based validation gates (build, type-check, lint)
- State persistence that survives interruptions
- Code review agent with CRITICAL/WARNING/INFO categorization
- Auto-fix agent for auto_fixable issues
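
A minimal sketch of the gate idea (illustrative only; the actual phase_gate.py API and blocking_conditions.yml keys are not shown in this diff, so the names below are assumptions):

```python
# Hypothetical phase-gate sketch; names are assumptions, not the real phase_gate.py API.
import subprocess

BLOCKING_CHECKS = {
    # Checks that MUST pass before leaving the REVIEWING phase.
    "REVIEWING": ["npm run build", "npm run type-check", "npm run lint"],
}

def can_transition(phase: str) -> bool:
    """Run every blocking check for the phase; any failure blocks the transition."""
    for cmd in BLOCKING_CHECKS.get(phase, []):
        if subprocess.run(cmd, shell=True).returncode != 0:
            return False  # fail closed: drop back to IMPLEMENTING for a fix loop
    return True
```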

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: mazemaze
Date: 2025-12-18 02:52:58 +09:00
Parent: 09d49aefe7
Commit: d533768f74
48 changed files with 18607 additions and 0 deletions

File diff suppressed because it is too large.


@@ -0,0 +1,152 @@
# Documentation Writer Agent
# Specialized agent for generating dual-audience documentation
name: doc-writer
role: Documentation Specialist
description: |
Expert in creating comprehensive documentation that serves both technical
and non-technical audiences. Specializes in translating complex technical
concepts into accessible language while maintaining technical accuracy.
capabilities:
- Analyze project structure and extract key information
- Generate visual ASCII diagrams for architecture
- Write plain-language descriptions of technical features
- Create technical reference documentation
- Build glossaries for technical terms
- Structure documentation for multiple audience levels
allowed_tools:
- Read
- Write
- Edit
- Glob
- Grep
- Bash
blocked_tools:
- Task # Should not spawn sub-agents
allowed_files:
- "docs/**/*"
- "*.md"
- "package.json"
- "project_manifest.json"
- "tsconfig.json"
- "requirements.txt"
- "pyproject.toml"
- "Cargo.toml"
- "go.mod"
responsibilities:
- Analyze source code to understand functionality
- Extract API endpoints and document them
- Document components with props and usage
- Create ER diagrams for data models
- Write executive summaries for stakeholders
- Build glossaries for technical terms
- Generate quick reference cards
outputs:
- PROJECT_DOCUMENTATION.md (main documentation)
- QUICK_REFERENCE.md (one-page summary)
- API_REFERENCE.md (detailed API docs)
- COMPONENTS.md (component catalog)
- GLOSSARY.md (term definitions)
cannot_do:
- Modify source code
- Change project configuration
- Run tests or builds
- Deploy or publish
writing_principles:
non_technical:
- Lead with "What" and "Why", not "How"
- Use analogies and real-world comparisons
- Avoid acronyms; spell them out first time
- Use bullet points over paragraphs
- Include visual diagrams
- Focus on value and outcomes
technical:
- Include in collapsible <details> sections
- Provide code examples with syntax highlighting
- Reference file paths and line numbers
- Include type definitions and interfaces
- Link to source files
- Document edge cases and error handling
documentation_sections:
executive_summary:
audience: everyone
purpose: Project purpose, value proposition, key capabilities
format: Plain English, no jargon
architecture_overview:
audience: everyone
purpose: Visual system understanding
format: ASCII diagrams, technology tables
getting_started:
audience: semi-technical
purpose: Quick onboarding
format: Step-by-step with explanations
feature_guide:
audience: non-technical
purpose: Feature documentation
format: What/Why/How (simplified)
api_reference:
audience: developers
purpose: API documentation
format: Endpoints, schemas, examples
component_catalog:
audience: developers
purpose: UI component documentation
format: Props, events, usage examples
data_models:
audience: both
purpose: Data structure documentation
format: ER diagrams + plain descriptions
glossary:
audience: non-technical
purpose: Term definitions
format: Term -> Plain English definition
ascii_diagram_templates:
system_architecture: |
┌─────────────────────────────────────────────────────────┐
│ [System Name] │
├─────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ [Layer] │───▶│ [Layer] │───▶│ [Layer] │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────┘
entity_relationship: |
┌──────────────┐ ┌──────────────┐
│ [Entity] │ │ [Entity] │
├──────────────┤ ├──────────────┤
│ id (PK) │──────▶│ id (PK) │
│ field │ │ foreign_key │
└──────────────┘ └──────────────┘
data_flow: |
[Source] ──▶ [Process] ──▶ [Output]
│ │ │
▼ ▼ ▼
[Storage] [Transform] [Display]
quality_checklist:
- All referenced files exist
- All code examples are syntactically correct
- No broken internal links
- Technical details wrapped in <details>
- Plain English explanations for all features
- Glossary includes all technical terms used
- ASCII diagrams render correctly in markdown


@@ -0,0 +1,274 @@
# Documentation Output Schema
# Defines the structure for generated documentation
version: "1.0"
description: Schema for dual-audience project documentation
output_files:
main_documentation:
filename: PROJECT_DOCUMENTATION.md
required: true
sections:
- executive_summary
- quick_start
- architecture_overview
- features
- for_developers
- glossary
quick_reference:
filename: QUICK_REFERENCE.md
required: true
sections:
- commands
- key_files
- api_endpoints
- environment_variables
api_reference:
filename: API_REFERENCE.md
required: false
condition: has_api_endpoints
sections:
- authentication
- endpoints_by_resource
- error_codes
- rate_limiting
components:
filename: COMPONENTS.md
required: false
condition: has_ui_components
sections:
- component_index
- component_details
- usage_examples
section_schemas:
executive_summary:
description: High-level project overview for all audiences
fields:
project_name:
type: string
required: true
tagline:
type: string
required: true
max_length: 100
description: One-line description in plain English
what_it_does:
type: string
required: true
description: 2-3 sentences, no technical jargon
who_its_for:
type: string
required: true
description: Target audience in plain English
key_capabilities:
type: array
items:
capability: string
description: string
min_items: 3
max_items: 8
quick_start:
description: Getting started guide for new users
fields:
prerequisites:
type: array
items:
tool: string
purpose: string # Plain English explanation
install_command: string
installation_steps:
type: array
items:
step: integer
command: string
explanation: string # What this does
basic_usage:
type: string
description: Simple example of how to use
architecture_overview:
description: Visual system architecture
fields:
system_diagram:
type: string
format: ascii_art
required: true
technology_stack:
type: array
items:
layer: string
technology: string
purpose: string # Plain English
directory_structure:
type: string
format: tree
required: true
features:
description: Feature documentation for all audiences
fields:
features:
type: array
items:
name: string
what_it_does: string # Plain English
how_to_use: string # Simple instructions
example: string # Code or usage example
technical_notes: string # For engineers, optional
api_endpoint:
description: Single API endpoint documentation
fields:
method:
type: enum
values: [GET, POST, PUT, PATCH, DELETE]
path:
type: string
pattern: "^/api/"
summary:
type: string
description: Plain English description
description:
type: string
description: Detailed explanation
authentication:
type: object
fields:
required: boolean
type: string # bearer, api_key, session
request:
type: object
fields:
content_type: string
body_schema: object
query_params: array
path_params: array
responses:
type: array
items:
status: integer
description: string
schema: object
examples:
type: array
items:
name: string
request: object
response: object
component:
description: UI component documentation
fields:
name:
type: string
pattern: "^[A-Z][a-zA-Z]*$" # PascalCase
path:
type: string
description:
type: string
description: Plain English purpose
props:
type: array
items:
name: string
type: string
required: boolean
default: any
description: string
events:
type: array
items:
name: string
payload: string
description: string
usage_example:
type: string
format: code
dependencies:
type: array
items: string
data_model:
description: Data model documentation
fields:
name:
type: string
description:
type: string
description: What data it represents (plain English)
table_name:
type: string
fields:
type: array
items:
name: string
type: string
description: string # Plain English
constraints: array
relations:
type: array
items:
type: enum
values: [has_one, has_many, belongs_to, many_to_many]
target: string
description: string
glossary_term:
description: Technical term definition
fields:
term:
type: string
required: true
definition:
type: string
required: true
description: Plain English definition
see_also:
type: array
items: string
description: Related terms
audience_markers:
non_technical:
indicator: "📖"
description: "For all readers"
technical:
indicator: "🔧"
description: "For developers"
wrapper: "<details><summary>🔧 Technical Details</summary>...content...</details>"
formatting_rules:
headings:
h1: "# Title"
h2: "## Section"
h3: "### Subsection"
code_blocks:
language_hints: required
max_lines: 30
tables:
alignment: left
max_columns: 5
diagrams:
format: ascii_art
max_width: 80
links:
internal: "[text](#anchor)"
external: "[text](url)"
file_reference: "`path/to/file`"
validation_rules:
- name: no_broken_links
description: All internal links must resolve
- name: code_syntax
description: All code blocks must be syntactically valid
- name: file_references
description: All referenced files must exist
- name: glossary_coverage
description: All technical terms must be in glossary
- name: diagram_rendering
description: ASCII diagrams must render correctly
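
As an illustration of enforcing one of these rules, a no_broken_links check could look roughly like this (a sketch under assumed GitHub-style anchor conventions, not the project's actual validator):

```python
# Sketch of a no_broken_links validator (illustrative; not the project's actual code).
import re
from pathlib import Path
from typing import List

def find_broken_anchors(md_path: Path) -> List[str]:
    """Return internal #anchor links that have no matching heading."""
    text = md_path.read_text(encoding="utf-8")
    # GitHub-style anchor: lowercase the heading, strip punctuation, hyphenate spaces.
    anchors = {
        re.sub(r"[^\w\- ]", "", h.lower()).strip().replace(" ", "-")
        for h in re.findall(r"^#{1,6}\s+(.+)$", text, flags=re.MULTILINE)
    }
    links = re.findall(r"\]\(#([^)]+)\)", text)
    return [link for link in links if link not in anchors]
```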


@@ -0,0 +1,272 @@
# Project Analysis Schema
# Defines the structure for project analysis output
version: "1.0"
description: Schema for analyzing project structure before documentation generation
project_analysis:
project:
type: object
required: true
fields:
name:
type: string
required: true
source: package.json/name or directory name
version:
type: string
required: false
source: package.json/version
description:
type: string
required: false
source: package.json/description or README.md first paragraph
type:
type: enum
values: [node, python, rust, go, java, dotnet, ruby, php, other]
detection:
node: package.json
python: requirements.txt, pyproject.toml, setup.py
rust: Cargo.toml
go: go.mod
java: pom.xml, build.gradle
dotnet: "*.csproj, *.sln"
ruby: Gemfile
php: composer.json
repository:
type: string
source: package.json/repository or .git/config
tech_stack:
type: object
required: true
fields:
language:
type: string
description: Primary programming language
framework:
type: string
description: Main application framework
detection:
next: "next" in dependencies
react: "react" in dependencies without "next"
vue: "vue" in dependencies
angular: "@angular/core" in dependencies
express: "express" in dependencies
fastapi: "fastapi" in requirements
django: "django" in requirements
flask: "flask" in requirements
rails: "rails" in Gemfile
database:
type: string
description: Database system if any
detection:
prisma: "@prisma/client" in dependencies
mongoose: "mongoose" in dependencies
typeorm: "typeorm" in dependencies
sequelize: "sequelize" in dependencies
sqlalchemy: "sqlalchemy" in requirements
ui_framework:
type: string
description: UI component framework if any
detection:
tailwind: "tailwindcss" in devDependencies
mui: "@mui/material" in dependencies
chakra: "@chakra-ui/react" in dependencies
shadcn: "shadcn" patterns in components
key_dependencies:
type: array
items:
name: string
version: string
purpose: string # Plain English explanation
categorization:
core: Framework, runtime dependencies
database: ORM, database clients
auth: Authentication libraries
ui: UI component libraries
testing: Test frameworks
build: Build tools, bundlers
utility: Helper libraries
structure:
type: object
required: true
fields:
source_dir:
type: string
description: Main source code directory
detection:
- src/
- app/
- lib/
- source/
directories:
type: array
items:
path: string
purpose: string # Plain English description
file_count: integer
key_files: array
common_mappings:
src/components: "UI components"
src/pages: "Application pages/routes"
src/api: "API route handlers"
src/lib: "Utility functions and shared code"
src/hooks: "Custom React hooks"
src/context: "React context providers"
src/store: "State management"
src/types: "TypeScript type definitions"
src/styles: "Global styles and themes"
prisma/: "Database schema and migrations"
public/: "Static assets"
tests/: "Test files"
__tests__/: "Test files (Jest convention)"
features:
type: array
description: Main features/capabilities of the project
items:
name: string
description: string # Plain English
technical_notes: string # For engineers
files: array # Key file paths
detection_patterns:
authentication:
keywords: [auth, login, logout, session, jwt, oauth]
files: ["**/auth/**", "**/login/**"]
user_management:
keywords: [user, profile, account, register, signup]
files: ["**/user/**", "**/users/**"]
api:
keywords: [api, endpoint, route, handler]
files: ["**/api/**", "**/routes/**"]
database:
keywords: [model, entity, schema, migration, prisma]
files: ["**/models/**", "**/prisma/**"]
file_upload:
keywords: [upload, file, storage, s3, blob]
files: ["**/upload/**", "**/storage/**"]
search:
keywords: [search, filter, query]
files: ["**/search/**"]
notifications:
keywords: [notification, email, sms, push]
files: ["**/notification/**", "**/email/**"]
components:
type: array
description: UI components found in the project
items:
id: string # component_<name>
name: string # PascalCase
path: string
description: string
props: string # Props summary
dependencies: array # Imported components
detection:
react: "export (default )?(function|const) [A-Z]"
vue: "<template>.*<script>"
angular: "@Component"
api_endpoints:
type: array
description: API endpoints found in the project
items:
method: enum [GET, POST, PUT, PATCH, DELETE]
path: string
handler_file: string
description: string
technical_notes: string
detection:
next_app_router: "app/api/**/route.ts exports GET, POST, etc."
next_pages_router: "pages/api/**/*.ts"
express: "router.get/post/put/delete"
fastapi: "@app.get/post/put/delete"
data_models:
type: array
description: Data models/entities found
items:
name: string
description: string # Plain English
fields: array
relations: array
detection:
prisma: "model [A-Z][a-z]+ {"
typeorm: "@Entity"
mongoose: "new Schema"
sqlalchemy: "class.*Base"
glossary_terms:
type: array
description: Technical terms that need definitions
items:
term: string
definition: string # Plain English
auto_detection:
- Acronyms (all caps words)
- Framework-specific terms
- Domain-specific terminology
- Technical jargon from code comments
analysis_process:
steps:
1_identify_type:
description: Determine project type from config files
files_to_check:
- package.json
- requirements.txt
- pyproject.toml
- Cargo.toml
- go.mod
- pom.xml
2_scan_structure:
description: Map directory structure
actions:
- List all directories
- Count files per directory
- Identify purpose from names
3_extract_metadata:
description: Get project metadata
sources:
- package.json (name, version, description, dependencies)
- README.md (description, usage)
- project_manifest.json (if exists)
4_identify_features:
description: Detect main features
methods:
- Keyword scanning in file names
- Pattern matching in code
- Directory structure analysis
5_map_components:
description: Catalog UI components
methods:
- Scan component directories
- Extract props from TypeScript
- Find usage patterns
6_document_apis:
description: Document API endpoints
methods:
- Scan API routes
- Extract request/response schemas
- Find authentication requirements
7_model_data:
description: Document data models
methods:
- Parse Prisma schema
- Extract TypeORM entities
- Find Mongoose schemas
8_collect_terms:
description: Build glossary
methods:
- Extract acronyms
- Find domain terms
- Look for jargon in comments


@@ -0,0 +1,489 @@
#!/usr/bin/env python3
"""
Project Analyzer for Documentation Generation
Analyzes project structure and outputs YAML for documentation generation.
"""
import os
import sys
import json
import re
from pathlib import Path
from typing import Dict, List, Any, Optional
from datetime import datetime
# Try to import yaml, but provide fallback
try:
import yaml
except ImportError:
yaml = None
def detect_project_type(root_path: Path) -> Dict[str, Any]:
    """Detect project type from config files."""
    indicators = {
        'node': ['package.json'],
        'python': ['requirements.txt', 'pyproject.toml', 'setup.py', 'Pipfile'],
        'rust': ['Cargo.toml'],
        'go': ['go.mod'],
        'java': ['pom.xml', 'build.gradle', 'build.gradle.kts'],
        # Normalize glob hits to file names so every entry is a list of strings
        # (indexing an empty glob result previously raised IndexError).
        'dotnet': [p.name for p in list(root_path.glob('*.csproj')) + list(root_path.glob('*.sln'))],
        'ruby': ['Gemfile'],
        'php': ['composer.json'],
    }
    for lang, files in indicators.items():
        for f in files:
            if (root_path / f).exists():
                return {'type': lang, 'config_file': f}
    return {'type': 'other', 'config_file': None}
def parse_package_json(root_path: Path) -> Dict[str, Any]:
"""Parse package.json for Node.js projects."""
pkg_path = root_path / 'package.json'
if not pkg_path.exists():
return {}
with open(pkg_path, 'r') as f:
data = json.load(f)
deps = data.get('dependencies', {})
dev_deps = data.get('devDependencies', {})
# Detect framework
framework = None
if 'next' in deps:
framework = 'Next.js'
elif 'react' in deps:
framework = 'React'
elif 'vue' in deps:
framework = 'Vue.js'
elif '@angular/core' in deps:
framework = 'Angular'
elif 'express' in deps:
framework = 'Express'
elif 'fastify' in deps:
framework = 'Fastify'
# Detect database
database = None
if '@prisma/client' in deps:
database = 'Prisma (PostgreSQL/MySQL/SQLite)'
elif 'mongoose' in deps:
database = 'MongoDB (Mongoose)'
elif 'typeorm' in deps:
database = 'TypeORM'
elif 'sequelize' in deps:
database = 'Sequelize'
# Detect UI framework
ui_framework = None
if 'tailwindcss' in dev_deps or 'tailwindcss' in deps:
ui_framework = 'Tailwind CSS'
if '@mui/material' in deps:
ui_framework = 'Material UI'
elif '@chakra-ui/react' in deps:
ui_framework = 'Chakra UI'
# Categorize dependencies
key_deps = []
dep_categories = {
'core': ['react', 'next', 'vue', 'angular', 'express', 'fastify'],
'database': ['@prisma/client', 'mongoose', 'typeorm', 'sequelize', 'pg', 'mysql2'],
'auth': ['next-auth', 'passport', 'jsonwebtoken', '@auth0/nextjs-auth0'],
'ui': ['@mui/material', '@chakra-ui/react', 'antd', '@radix-ui'],
'state': ['zustand', 'redux', '@reduxjs/toolkit', 'recoil', 'jotai'],
'testing': ['jest', 'vitest', '@testing-library/react', 'cypress'],
}
for dep, version in {**deps, **dev_deps}.items():
category = 'utility'
for cat, patterns in dep_categories.items():
if any(p in dep for p in patterns):
category = cat
break
if category != 'utility' or dep in ['axios', 'zod', 'date-fns', 'lodash']:
key_deps.append({
'name': dep,
'version': version.replace('^', '').replace('~', ''),
'category': category,
'purpose': get_dep_purpose(dep)
})
return {
'name': data.get('name', 'Unknown'),
'version': data.get('version', '0.0.0'),
'description': data.get('description', ''),
'framework': framework,
'database': database,
'ui_framework': ui_framework,
'key_dependencies': key_deps[:15], # Limit to 15 most important
'scripts': data.get('scripts', {})
}
def get_dep_purpose(dep_name: str) -> str:
"""Get plain English purpose for common dependencies."""
purposes = {
'react': 'UI component library',
'next': 'Full-stack React framework',
'vue': 'Progressive UI framework',
'express': 'Web server framework',
'fastify': 'High-performance web framework',
'@prisma/client': 'Database ORM and query builder',
'mongoose': 'MongoDB object modeling',
'typeorm': 'TypeScript ORM',
'sequelize': 'SQL ORM',
'next-auth': 'Authentication for Next.js',
'passport': 'Authentication middleware',
'jsonwebtoken': 'JWT token handling',
'@mui/material': 'Material Design components',
'@chakra-ui/react': 'Accessible component library',
'tailwindcss': 'Utility-first CSS framework',
'zustand': 'State management',
'redux': 'Predictable state container',
'@reduxjs/toolkit': 'Redux development toolkit',
'axios': 'HTTP client',
'zod': 'Schema validation',
'date-fns': 'Date utility functions',
'lodash': 'Utility functions',
'jest': 'Testing framework',
'vitest': 'Fast unit testing',
'@testing-library/react': 'React component testing',
'cypress': 'End-to-end testing',
}
return purposes.get(dep_name, 'Utility library')
def scan_directory_structure(root_path: Path) -> Dict[str, Any]:
"""Scan and categorize directory structure."""
ignore_dirs = {
'node_modules', '.git', '.next', '__pycache__', 'venv',
'.venv', 'dist', 'build', '.cache', 'coverage', '.turbo'
}
common_purposes = {
'src': 'Main source code directory',
'app': 'Application code (Next.js App Router)',
'pages': 'Page components (Next.js Pages Router)',
'components': 'Reusable UI components',
'lib': 'Shared utilities and libraries',
'utils': 'Utility functions',
'hooks': 'Custom React hooks',
'context': 'React context providers',
'store': 'State management',
'styles': 'CSS and styling',
'types': 'TypeScript type definitions',
'api': 'API route handlers',
'services': 'Business logic services',
'models': 'Data models/entities',
'prisma': 'Database schema and migrations',
'public': 'Static assets',
'tests': 'Test files',
'__tests__': 'Jest test files',
'test': 'Test files',
'spec': 'Test specifications',
'docs': 'Documentation',
'scripts': 'Build and utility scripts',
'config': 'Configuration files',
}
directories = []
source_dir = None
# Find main source directory
for candidate in ['src', 'app', 'lib', 'source']:
if (root_path / candidate).is_dir():
source_dir = candidate
break
# Scan directories
for item in sorted(root_path.iterdir()):
if item.is_dir() and item.name not in ignore_dirs and not item.name.startswith('.'):
file_count = sum(1 for _ in item.rglob('*') if _.is_file())
key_files = [
f.name for f in item.iterdir()
if f.is_file() and f.suffix in ['.ts', '.tsx', '.js', '.jsx', '.py', '.rs', '.go']
][:5]
directories.append({
'path': item.name,
'purpose': common_purposes.get(item.name, 'Project directory'),
'file_count': file_count,
'key_files': key_files
})
return {
'source_dir': source_dir or '.',
'directories': directories
}
def detect_features(root_path: Path) -> List[Dict[str, Any]]:
"""Detect main features from code patterns."""
features = []
feature_patterns = {
'authentication': {
'keywords': ['auth', 'login', 'logout', 'session', 'jwt', 'oauth'],
'description': 'User authentication and session management',
'technical_notes': 'Handles user login, logout, and session tokens'
},
'user_management': {
'keywords': ['user', 'profile', 'account', 'register', 'signup'],
'description': 'User account creation and profile management',
'technical_notes': 'CRUD operations for user data'
},
'api': {
'keywords': ['api', 'endpoint', 'route'],
'description': 'REST API endpoints for data operations',
'technical_notes': 'HTTP handlers for client-server communication'
},
'database': {
'keywords': ['prisma', 'model', 'entity', 'schema', 'migration'],
'description': 'Database storage and data persistence',
'technical_notes': 'ORM-based data layer with migrations'
},
'file_upload': {
'keywords': ['upload', 'file', 'storage', 's3', 'blob'],
'description': 'File upload and storage functionality',
'technical_notes': 'Handles file uploads and cloud storage'
},
'search': {
'keywords': ['search', 'filter', 'query'],
'description': 'Search and filtering capabilities',
'technical_notes': 'Full-text search or database queries'
},
}
# Scan for features
    # Collect candidate source files, skipping vendored and build output directories.
    skip_dirs = {'node_modules', '.git', '.next', 'dist', 'build', 'coverage'}
    all_files = [
        f for ext in ('*.ts', '*.tsx', '*.js', '*.jsx')
        for f in root_path.rglob(ext)
        if not skip_dirs.intersection(f.parts)
    ]
for feature_name, config in feature_patterns.items():
found_files = []
for keyword in config['keywords']:
found_files.extend([
str(f.relative_to(root_path)) for f in all_files
if keyword in str(f).lower()
])
if found_files:
features.append({
'name': feature_name.replace('_', ' ').title(),
'description': config['description'],
'technical_notes': config['technical_notes'],
'files': list(set(found_files))[:5]
})
return features
def find_components(root_path: Path) -> List[Dict[str, Any]]:
"""Find UI components in the project."""
components = []
component_dirs = ['components', 'src/components', 'app/components']
for comp_dir in component_dirs:
comp_path = root_path / comp_dir
if comp_path.exists():
for file in comp_path.rglob('*.tsx'):
if file.name.startswith('_') or file.name == 'index.tsx':
continue
name = file.stem
if name[0].isupper(): # Component names are PascalCase
components.append({
'id': f'component_{name.lower()}',
'name': name,
'path': str(file.relative_to(root_path)),
'description': f'{name} component',
'props': 'See source file'
})
return components[:20] # Limit to 20 components
def find_api_endpoints(root_path: Path) -> List[Dict[str, Any]]:
"""Find API endpoints in the project."""
endpoints = []
# Next.js App Router: app/api/**/route.ts
api_dir = root_path / 'app' / 'api'
if api_dir.exists():
for route_file in api_dir.rglob('route.ts'):
path_parts = route_file.parent.relative_to(api_dir).parts
api_path = '/api/' + '/'.join(path_parts)
# Read file to detect methods
content = route_file.read_text()
methods = []
for method in ['GET', 'POST', 'PUT', 'PATCH', 'DELETE']:
if f'export async function {method}' in content or f'export function {method}' in content:
methods.append(method)
for method in methods:
endpoints.append({
'method': method,
'path': api_path.replace('[', ':').replace(']', ''),
'handler_file': str(route_file.relative_to(root_path)),
'description': f'{method} {api_path}',
'technical_notes': 'Next.js App Router endpoint'
})
# Next.js Pages Router: pages/api/**/*.ts
pages_api = root_path / 'pages' / 'api'
if pages_api.exists():
for api_file in pages_api.rglob('*.ts'):
path_parts = api_file.relative_to(pages_api).with_suffix('').parts
api_path = '/api/' + '/'.join(path_parts)
endpoints.append({
'method': 'MULTIPLE',
'path': api_path.replace('[', ':').replace(']', ''),
'handler_file': str(api_file.relative_to(root_path)),
'description': f'API endpoint at {api_path}',
'technical_notes': 'Next.js Pages Router endpoint'
})
return endpoints
def find_data_models(root_path: Path) -> List[Dict[str, Any]]:
"""Find data models in the project."""
models = []
# Prisma schema
prisma_schema = root_path / 'prisma' / 'schema.prisma'
if prisma_schema.exists():
content = prisma_schema.read_text()
model_pattern = re.compile(r'model\s+(\w+)\s*\{([^}]+)\}', re.MULTILINE)
for match in model_pattern.finditer(content):
model_name = match.group(1)
model_body = match.group(2)
# Extract fields
fields = []
for line in model_body.strip().split('\n'):
line = line.strip()
if line and not line.startswith('@@') and not line.startswith('//'):
parts = line.split()
if len(parts) >= 2:
fields.append({
'name': parts[0],
'type': parts[1],
'description': f'{parts[0]} field'
})
models.append({
'name': model_name,
'description': f'{model_name} data model',
'fields': fields[:10] # Limit fields
})
return models
def collect_glossary_terms(features: List, components: List, endpoints: List) -> List[Dict[str, str]]:
"""Collect technical terms that need definitions."""
common_terms = {
'API': 'Application Programming Interface - a way for different software to communicate',
'REST': 'Representational State Transfer - a standard way to design web APIs',
'Component': 'A reusable piece of the user interface',
'Endpoint': 'A specific URL that the application responds to',
'ORM': 'Object-Relational Mapping - connects code to database tables',
'JWT': 'JSON Web Token - a secure way to transmit user identity',
'CRUD': 'Create, Read, Update, Delete - basic data operations',
'Props': 'Properties passed to a component to customize it',
'State': 'Data that can change and affects what users see',
'Hook': 'A way to add features to React components',
'Migration': 'A controlled change to database structure',
'Schema': 'The structure/shape of data',
'Route': 'A URL path that maps to specific functionality',
'Handler': 'Code that responds to a specific request',
}
return [{'term': k, 'definition': v} for k, v in common_terms.items()]
def generate_analysis(root_path: Path) -> Dict[str, Any]:
"""Generate complete project analysis."""
project_info = detect_project_type(root_path)
pkg_info = parse_package_json(root_path) if project_info['type'] == 'node' else {}
structure = scan_directory_structure(root_path)
features = detect_features(root_path)
components = find_components(root_path)
endpoints = find_api_endpoints(root_path)
models = find_data_models(root_path)
glossary = collect_glossary_terms(features, components, endpoints)
return {
'analysis_timestamp': datetime.now().isoformat(),
'project': {
'name': pkg_info.get('name', root_path.name),
'version': pkg_info.get('version', '0.0.0'),
'description': pkg_info.get('description', ''),
'type': project_info['type'],
},
'tech_stack': {
'language': 'TypeScript' if project_info['type'] == 'node' else project_info['type'],
'framework': pkg_info.get('framework'),
'database': pkg_info.get('database'),
'ui_framework': pkg_info.get('ui_framework'),
'key_dependencies': pkg_info.get('key_dependencies', []),
},
'structure': structure,
'features': features,
'components': components,
'api_endpoints': endpoints,
'data_models': models,
'glossary_terms': glossary,
}
def output_yaml(data: Dict[str, Any], output_path: Optional[Path] = None):
"""Output analysis as YAML."""
if yaml:
output = yaml.dump(data, default_flow_style=False, allow_unicode=True, sort_keys=False)
else:
# Fallback to JSON if yaml not available
output = json.dumps(data, indent=2)
if output_path:
output_path.write_text(output)
print(f"Analysis written to: {output_path}")
else:
print(output)
def main():
"""Main entry point."""
root_path = Path.cwd()
if len(sys.argv) > 1:
root_path = Path(sys.argv[1])
if not root_path.exists():
print(f"Error: Path does not exist: {root_path}", file=sys.stderr)
sys.exit(1)
output_path = None
if len(sys.argv) > 2:
output_path = Path(sys.argv[2])
analysis = generate_analysis(root_path)
output_yaml(analysis, output_path)
if __name__ == '__main__':
main()
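# Usage, per main() above: python analyze_project.py [root_path] [output_path]
# Writes the analysis to the output path if given, otherwise prints to stdout
# (YAML when PyYAML is installed, JSON as a fallback).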


@@ -0,0 +1,491 @@
#!/usr/bin/env python3
"""
HTML Documentation Generator
Generates beautiful HTML documentation from project analysis.
"""
import os
import sys
import json
import re
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Any, Optional
# Try to import yaml
try:
import yaml
except ImportError:
yaml = None
def load_template(template_path: Path) -> str:
"""Load the HTML template."""
with open(template_path, 'r', encoding='utf-8') as f:
return f.read()
def load_analysis(analysis_path: Path) -> Dict[str, Any]:
    """Load project analysis from YAML or JSON."""
    with open(analysis_path, 'r', encoding='utf-8') as f:
        content = f.read()
    if analysis_path.suffix in ('.yml', '.yaml'):
        if yaml is None:
            print("Error: PyYAML is required to read YAML analysis files; "
                  "install pyyaml or supply a JSON analysis.", file=sys.stderr)
            sys.exit(1)
        return yaml.safe_load(content)
    return json.loads(content)
def escape_html(text: str) -> str:
"""Escape HTML special characters."""
if not text:
return ''
return (str(text)
.replace('&', '&amp;')
.replace('<', '&lt;')
.replace('>', '&gt;')
.replace('"', '&quot;')
.replace("'", '&#39;'))
def generate_capabilities_html(capabilities: List[Dict]) -> str:
"""Generate HTML for capabilities cards."""
    icons = ['⚡', '✨', '🔐', '📊', '🚀', '💡', '🎯', '🔧']
html_parts = []
for i, cap in enumerate(capabilities[:8]):
icon = icons[i % len(icons)]
html_parts.append(f'''
<div class="card">
<div class="card-header">
<div class="card-icon">{icon}</div>
<div class="card-title">{escape_html(cap.get('capability', cap.get('name', 'Feature')))}</div>
</div>
<p>{escape_html(cap.get('description', ''))}</p>
</div>''')
return '\n'.join(html_parts)
def generate_prerequisites_html(prerequisites: List[Dict]) -> str:
"""Generate HTML for prerequisites table rows."""
html_parts = []
for prereq in prerequisites:
tool = prereq.get('tool', prereq.get('name', ''))
purpose = prereq.get('purpose', prereq.get('description', ''))
html_parts.append(f'''
<tr>
<td><code>{escape_html(tool)}</code></td>
<td>{escape_html(purpose)}</td>
</tr>''')
return '\n'.join(html_parts) if html_parts else '''
<tr>
<td><code>Node.js</code></td>
<td>JavaScript runtime environment</td>
</tr>'''
def generate_tech_stack_html(tech_stack: Dict) -> str:
"""Generate HTML for technology stack table rows."""
html_parts = []
stack_items = [
('Language', tech_stack.get('language')),
('Framework', tech_stack.get('framework')),
('Database', tech_stack.get('database')),
('UI Framework', tech_stack.get('ui_framework')),
]
purposes = {
'TypeScript': 'Type-safe JavaScript for better code quality',
'JavaScript': 'Programming language for web applications',
'Python': 'General-purpose programming language',
'Next.js': 'Full-stack React framework with SSR',
'React': 'Component-based UI library',
'Vue.js': 'Progressive JavaScript framework',
'Express': 'Minimal web server framework',
'Prisma': 'Type-safe database ORM',
'MongoDB': 'NoSQL document database',
'PostgreSQL': 'Relational database',
'Tailwind CSS': 'Utility-first CSS framework',
'Material UI': 'React component library',
}
for layer, tech in stack_items:
if tech:
purpose = purposes.get(tech, f'{tech} for {layer.lower()}')
html_parts.append(f'''
<tr>
<td>{escape_html(layer)}</td>
<td><span class="badge badge-primary">{escape_html(tech)}</span></td>
<td>{escape_html(purpose)}</td>
</tr>''')
# Add key dependencies
for dep in tech_stack.get('key_dependencies', [])[:5]:
html_parts.append(f'''
<tr>
<td>Dependency</td>
<td><span class="badge badge-info">{escape_html(dep.get('name', ''))}</span></td>
<td>{escape_html(dep.get('purpose', ''))}</td>
</tr>''')
return '\n'.join(html_parts)
def generate_directory_structure(structure: Dict) -> str:
"""Generate directory structure text."""
lines = ['project/']
for i, dir_info in enumerate(structure.get('directories', [])[:10]):
prefix = '└── ' if i == len(structure.get('directories', [])) - 1 else '├── '
path = dir_info.get('path', '')
purpose = dir_info.get('purpose', '')
lines.append(f"{prefix}{path}/ # {purpose}")
return '\n'.join(lines)
def generate_features_html(features: List[Dict]) -> str:
"""Generate HTML for features section."""
icons = ['🔐', '👤', '🔌', '💾', '📁', '🔍', '📧', '⚙️']
html_parts = []
for i, feature in enumerate(features[:8]):
icon = icons[i % len(icons)]
name = feature.get('name', 'Feature')
description = feature.get('description', '')
technical_notes = feature.get('technical_notes', '')
files = feature.get('files', [])
files_html = '\n'.join([f'<li><code>{escape_html(f)}</code></li>' for f in files[:3]])
html_parts.append(f'''
<div class="feature-item">
<div class="feature-icon">{icon}</div>
<div class="feature-content">
<h4>{escape_html(name)}</h4>
<p>{escape_html(description)}</p>
<details>
<summary>
🔧 Technical Details
<span class="tech-badge">For Engineers</span>
</summary>
<div>
<p>{escape_html(technical_notes)}</p>
<p><strong>Key Files:</strong></p>
<ul>
{files_html}
</ul>
</div>
</details>
</div>
</div>''')
return '\n'.join(html_parts) if html_parts else '''
<div class="feature-item">
<div class="feature-icon"></div>
<div class="feature-content">
<h4>Core Functionality</h4>
<p>Main features of the application.</p>
</div>
</div>'''
def generate_api_endpoints_html(endpoints: List[Dict]) -> str:
"""Generate HTML for API endpoints."""
if not endpoints:
return '''
<p>No API endpoints detected. This project may use a different API pattern or may not have an API layer.</p>'''
html_parts = ['<table>', '<thead>', '<tr>', '<th>Method</th>', '<th>Endpoint</th>', '<th>Description</th>', '</tr>', '</thead>', '<tbody>']
for endpoint in endpoints[:15]:
method = endpoint.get('method', 'GET')
method_class = f'method-{method.lower()}'
path = endpoint.get('path', '')
description = endpoint.get('description', '')
html_parts.append(f'''
<tr>
<td><span class="method {method_class}">{escape_html(method)}</span></td>
<td><code>{escape_html(path)}</code></td>
<td>{escape_html(description)}</td>
</tr>''')
html_parts.extend(['</tbody>', '</table>'])
return '\n'.join(html_parts)
def generate_components_html(components: List[Dict]) -> str:
"""Generate HTML for component catalog."""
if not components:
return '''
    <p>No UI components detected. This project may not have a frontend layer or may use a different component pattern.</p>'''
html_parts = []
for comp in components[:10]:
name = comp.get('name', 'Component')
description = comp.get('description', f'{name} component')
path = comp.get('path', '')
props = comp.get('props', 'See source file')
html_parts.append(f'''
<div class="card">
<div class="card-header">
<div class="card-icon">🧩</div>
<div class="card-title">{escape_html(name)}</div>
</div>
<p>{escape_html(description)}</p>
<p><code>{escape_html(path)}</code></p>
<details>
<summary>
🔧 Props & Usage
<span class="tech-badge">Technical</span>
</summary>
<div>
<p><strong>Props:</strong> {escape_html(props)}</p>
<h4>Usage Example</h4>
<pre><code>&lt;{escape_html(name)} /&gt;</code></pre>
</div>
</details>
</div>''')
return '\n'.join(html_parts)
def generate_data_models_html(models: List[Dict]) -> str:
"""Generate HTML for data models."""
if not models:
return '''
    <p>No data models detected. This project may not use a database or may use a different data pattern.</p>'''
html_parts = []
for model in models[:10]:
name = model.get('name', 'Model')
description = model.get('description', f'{name} data model')
fields = model.get('fields', [])
fields_html = ''
if fields:
fields_html = '<table><thead><tr><th>Field</th><th>Type</th><th>Description</th></tr></thead><tbody>'
for field in fields[:10]:
field_name = field.get('name', '')
field_type = field.get('type', 'unknown')
field_desc = field.get('description', '')
fields_html += f'''
<tr>
<td><code>{escape_html(field_name)}</code></td>
<td><code>{escape_html(field_type)}</code></td>
<td>{escape_html(field_desc)}</td>
</tr>'''
fields_html += '</tbody></table>'
html_parts.append(f'''
<h3>{escape_html(name)}</h3>
<p><strong>What it represents:</strong> {escape_html(description)}</p>
{fields_html}''')
return '\n'.join(html_parts)
def generate_er_diagram(models: List[Dict]) -> str:
    """Generate ASCII ER diagram."""
    if not models:
        return '''┌─────────────────────────────────────┐
│       No data models detected       │
└─────────────────────────────────────┘'''
    lines = []
    for model in models[:4]:
        name = model.get('name', 'Model')
        fields = model.get('fields', [])[:4]
        width = max(len(name) + 4, max([len(f.get('name', '')) + len(f.get('type', '')) + 5 for f in fields] or [20]))
        lines.append('┌' + '─' * width + '┐')
        lines.append('│' + f' {name} '.center(width) + '│')
        lines.append('├' + '─' * width + '┤')
        for field in fields:
            field_str = f" {field.get('name', '')} : {field.get('type', '')}"
            lines.append('│' + field_str.ljust(width) + '│')
        lines.append('└' + '─' * width + '┘')
        lines.append('')
    return '\n'.join(lines)
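# Example output for a model User with fields id: String and email: String:
# ┌────────────────┐
# │      User      │
# ├────────────────┤
# │ id : String    │
# │ email : String │
# └────────────────┘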
def generate_glossary_html(terms: List[Dict]) -> str:
"""Generate HTML for glossary."""
html_parts = []
for term in terms:
word = term.get('term', '')
definition = term.get('definition', '')
html_parts.append(f'''
<div class="glossary-term">
<span class="glossary-word">{escape_html(word)}</span>
<span class="glossary-definition">{escape_html(definition)}</span>
</div>''')
return '\n'.join(html_parts)
def generate_system_diagram(tech_stack: Dict, structure: Dict) -> str:
    """Generate ASCII system architecture diagram."""
    framework = tech_stack.get('framework', 'Application')
    database = tech_stack.get('database', '')
    ui = tech_stack.get('ui_framework', 'UI')
    labels = '   Client         ──▶      API       ──▶     Database'
    techs = f"   ({ui or 'UI'})            ({framework or 'Server'})            ({database or 'Storage'})"
    width = max(len(labels), len(techs)) + 2
    lines = [
        '┌' + '─' * width + '┐',
        '│' + 'Application Architecture'.center(width) + '│',
        '├' + '─' * width + '┤',
        '│' + labels.ljust(width) + '│',
        '│' + techs.ljust(width) + '│',
        '└' + '─' * width + '┘',
    ]
    return '\n'.join(lines)
def generate_html(analysis: Dict, template: str) -> str:
"""Generate final HTML from analysis and template."""
project = analysis.get('project', {})
tech_stack = analysis.get('tech_stack', {})
structure = analysis.get('structure', {})
features = analysis.get('features', [])
components = analysis.get('components', [])
endpoints = analysis.get('api_endpoints', [])
models = analysis.get('data_models', [])
glossary = analysis.get('glossary_terms', [])
# Basic replacements
replacements = {
'{{PROJECT_NAME}}': escape_html(project.get('name', 'Project')),
'{{VERSION}}': escape_html(project.get('version', '1.0.0')),
'{{TAGLINE}}': escape_html(project.get('description', 'Project documentation')),
'{{DESCRIPTION}}': escape_html(project.get('description', 'This project provides various features and capabilities.')),
'{{AUDIENCE}}': 'Developers, stakeholders, and anyone interested in understanding this project.',
'{{GENERATED_DATE}}': datetime.now().strftime('%Y-%m-%d'),
}
# Generate complex sections
html = template
# Replace simple placeholders
for key, value in replacements.items():
html = html.replace(key, value)
# Replace capabilities section
capabilities = [{'capability': cap.get('name'), 'description': cap.get('description')}
for cap in features[:4]] if features else [
{'capability': 'Core Features', 'description': 'Main application functionality'},
{'capability': 'Easy Integration', 'description': 'Simple setup and configuration'}
]
# Find and replace the capabilities placeholder section
cap_html = generate_capabilities_html(capabilities)
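    # NOTE: cap_html is used as a re.sub replacement string, so any literal
    # backslashes in the generated HTML would be read as escape sequences;
    # a callable replacement (lambda _: cap_html) would avoid that. The same
    # caveat applies to the substitutions below.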
html = re.sub(
r'<!-- CAPABILITIES_PLACEHOLDER -->.*?</div>\s*</div>',
f'<!-- Generated Capabilities -->\n{cap_html}',
html,
flags=re.DOTALL
)
# Replace technology stack
html = re.sub(
r'<!-- TECH_STACK_PLACEHOLDER -->.*?</tr>',
generate_tech_stack_html(tech_stack),
html,
flags=re.DOTALL
)
# Generate and replace diagrams
html = html.replace('{{SYSTEM_DIAGRAM}}', generate_system_diagram(tech_stack, structure))
html = html.replace('{{DIRECTORY_STRUCTURE}}', generate_directory_structure(structure))
html = html.replace('{{ER_DIAGRAM}}', generate_er_diagram(models))
# Replace features section
html = re.sub(
r'<!-- FEATURES_PLACEHOLDER -->.*?</div>\s*</div>\s*</div>',
f'<!-- Generated Features -->\n{generate_features_html(features)}',
html,
flags=re.DOTALL
)
# Replace API endpoints
html = re.sub(
r'<!-- API_ENDPOINTS_PLACEHOLDER -->.*?</details>',
f'<h3>API Endpoints</h3>\n{generate_api_endpoints_html(endpoints)}',
html,
flags=re.DOTALL
)
# Replace components
html = re.sub(
r'<!-- COMPONENTS_PLACEHOLDER -->.*?</div>\s*</div>',
f'<!-- Generated Components -->\n{generate_components_html(components)}',
html,
flags=re.DOTALL
)
# Replace data models
html = re.sub(
r'<!-- DATA_MODELS_PLACEHOLDER -->.*?</table>',
f'<!-- Generated Data Models -->\n{generate_data_models_html(models)}',
html,
flags=re.DOTALL
)
# Replace glossary
html = re.sub(
r'<!-- GLOSSARY_PLACEHOLDER -->.*?</div>',
f'<!-- Generated Glossary -->\n{generate_glossary_html(glossary)}\n</div>',
html,
flags=re.DOTALL
)
# Clean up remaining placeholders
html = re.sub(r'\{\{[A-Z_]+\}\}', '', html)
return html
def main():
"""Main entry point."""
if len(sys.argv) < 3:
print("Usage: generate_html.py <analysis.yml> <template.html> [output.html]")
sys.exit(1)
analysis_path = Path(sys.argv[1])
template_path = Path(sys.argv[2])
output_path = Path(sys.argv[3]) if len(sys.argv) > 3 else Path('documentation.html')
if not analysis_path.exists():
print(f"Error: Analysis file not found: {analysis_path}", file=sys.stderr)
sys.exit(1)
if not template_path.exists():
print(f"Error: Template file not found: {template_path}", file=sys.stderr)
sys.exit(1)
analysis = load_analysis(analysis_path)
template = load_template(template_path)
html = generate_html(analysis, template)
output_path.write_text(html, encoding='utf-8')
print(f"HTML documentation generated: {output_path}")
if __name__ == '__main__':
main()
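# Usage, per the check in main(): python generate_html.py <analysis.yml> <template.html> [output.html]
# The output path defaults to documentation.html when omitted.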


@@ -0,0 +1,97 @@
# Documentation Generator Skill
# Generates comprehensive dual-audience documentation
name: documentation-generator
version: "1.0.0"
description: |
Analyzes project structure and generates comprehensive documentation
that serves both technical (engineers) and non-technical audiences.
triggers:
commands:
- "/eureka:index"
- "/eureka:docs"
keywords:
- "generate documentation"
- "create docs"
- "document project"
- "project documentation"
- "index project"
agents:
- doc-writer
schemas:
- documentation_output.yml
- project_analysis.yml
scripts:
- analyze_project.py
- generate_html.py
templates:
- documentation.html
capabilities:
- Project structure analysis
- Dual-audience documentation generation
- ASCII diagram creation
- API documentation
- Component cataloging
- Glossary generation
outputs:
primary:
- index.html # Beautiful HTML for non-engineers
- PROJECT_DOCUMENTATION.md
- QUICK_REFERENCE.md
optional:
- API_REFERENCE.md
- COMPONENTS.md
- GLOSSARY.md
data:
- analysis.yml # Project analysis data
audience_support:
non_technical:
- Executive Summary
- Feature Guide
- Glossary
- Visual Diagrams
technical:
- API Reference
- Component Catalog
- Data Models
- Code Examples
configuration:
default_output_dir: docs
supported_formats:
- markdown
- html
default_sections:
- executive_summary
- architecture_overview
- getting_started
- features
- api_reference
- component_catalog
- data_models
- glossary
dependencies:
required:
- Read tool (file access)
- Write tool (file creation)
- Glob tool (file discovery)
- Grep tool (pattern search)
optional:
- Bash tool (script execution)
- Task tool (agent delegation)
quality_gates:
- All referenced files must exist
- All code examples must be syntactically valid
- All internal links must resolve
- Technical details wrapped in collapsible sections
- Glossary covers all technical terms used


@@ -0,0 +1,962 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{PROJECT_NAME}} - Documentation</title>
<style>
/* ===== CSS Variables ===== */
:root {
--color-primary: #2563eb;
--color-primary-light: #3b82f6;
--color-primary-dark: #1d4ed8;
--color-secondary: #7c3aed;
--color-success: #10b981;
--color-warning: #f59e0b;
--color-danger: #ef4444;
--color-info: #06b6d4;
--color-bg: #ffffff;
--color-bg-alt: #f8fafc;
--color-bg-code: #f1f5f9;
--color-text: #1e293b;
--color-text-light: #64748b;
--color-text-muted: #94a3b8;
--color-border: #e2e8f0;
--font-sans: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
--font-mono: 'JetBrains Mono', 'Fira Code', Consolas, monospace;
--shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 0.05);
--shadow-md: 0 4px 6px -1px rgb(0 0 0 / 0.1);
--shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 0.1);
--radius-sm: 0.375rem;
--radius-md: 0.5rem;
--radius-lg: 0.75rem;
}
/* Dark mode */
@media (prefers-color-scheme: dark) {
:root {
--color-bg: #0f172a;
--color-bg-alt: #1e293b;
--color-bg-code: #334155;
--color-text: #f1f5f9;
--color-text-light: #94a3b8;
--color-text-muted: #64748b;
--color-border: #334155;
}
}
/* ===== Reset & Base ===== */
*, *::before, *::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
html {
scroll-behavior: smooth;
font-size: 16px;
}
body {
font-family: var(--font-sans);
background: var(--color-bg);
color: var(--color-text);
line-height: 1.7;
min-height: 100vh;
}
/* ===== Layout ===== */
.layout {
display: flex;
min-height: 100vh;
}
/* Sidebar */
.sidebar {
position: fixed;
top: 0;
left: 0;
width: 280px;
height: 100vh;
background: var(--color-bg-alt);
border-right: 1px solid var(--color-border);
overflow-y: auto;
padding: 2rem 0;
z-index: 100;
}
.sidebar-header {
padding: 0 1.5rem 1.5rem;
border-bottom: 1px solid var(--color-border);
margin-bottom: 1rem;
}
.sidebar-logo {
font-size: 1.25rem;
font-weight: 700;
color: var(--color-primary);
text-decoration: none;
display: flex;
align-items: center;
gap: 0.5rem;
}
.sidebar-version {
font-size: 0.75rem;
color: var(--color-text-muted);
margin-top: 0.25rem;
}
.sidebar-nav {
padding: 0 1rem;
}
.nav-section {
margin-bottom: 1.5rem;
}
.nav-section-title {
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--color-text-muted);
padding: 0 0.5rem;
margin-bottom: 0.5rem;
}
.nav-link {
display: block;
padding: 0.5rem;
color: var(--color-text-light);
text-decoration: none;
border-radius: var(--radius-sm);
font-size: 0.9rem;
transition: all 0.15s ease;
}
.nav-link:hover {
background: var(--color-border);
color: var(--color-text);
}
.nav-link.active {
background: var(--color-primary);
color: white;
}
.nav-link-icon {
margin-right: 0.5rem;
}
/* Main content */
.main {
flex: 1;
margin-left: 280px;
padding: 3rem 4rem;
max-width: 900px;
}
/* ===== Typography ===== */
h1, h2, h3, h4, h5, h6 {
font-weight: 600;
line-height: 1.3;
margin-top: 2rem;
margin-bottom: 1rem;
}
h1 {
font-size: 2.5rem;
margin-top: 0;
padding-bottom: 1rem;
border-bottom: 2px solid var(--color-border);
}
h2 {
font-size: 1.75rem;
color: var(--color-primary);
}
h3 {
font-size: 1.25rem;
}
p {
margin-bottom: 1rem;
}
a {
color: var(--color-primary);
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
/* ===== Components ===== */
/* Hero section */
.hero {
background: linear-gradient(135deg, var(--color-primary) 0%, var(--color-secondary) 100%);
color: white;
padding: 3rem;
border-radius: var(--radius-lg);
margin-bottom: 3rem;
}
.hero h1 {
color: white;
border-bottom: none;
padding-bottom: 0;
margin-bottom: 0.5rem;
}
.hero-tagline {
font-size: 1.25rem;
opacity: 0.9;
}
/* Cards */
.card {
background: var(--color-bg);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
padding: 1.5rem;
margin-bottom: 1.5rem;
box-shadow: var(--shadow-sm);
}
.card-header {
display: flex;
align-items: center;
gap: 0.75rem;
margin-bottom: 1rem;
}
.card-icon {
width: 2.5rem;
height: 2.5rem;
background: var(--color-primary);
color: white;
border-radius: var(--radius-md);
display: flex;
align-items: center;
justify-content: center;
font-size: 1.25rem;
}
.card-title {
font-size: 1.1rem;
font-weight: 600;
margin: 0;
}
/* Grid */
.grid {
display: grid;
gap: 1.5rem;
}
.grid-2 {
grid-template-columns: repeat(2, 1fr);
}
.grid-3 {
grid-template-columns: repeat(3, 1fr);
}
@media (max-width: 768px) {
.grid-2, .grid-3 {
grid-template-columns: 1fr;
}
}
/* Tables */
table {
width: 100%;
border-collapse: collapse;
margin: 1.5rem 0;
font-size: 0.9rem;
}
th, td {
padding: 0.75rem 1rem;
text-align: left;
border-bottom: 1px solid var(--color-border);
}
th {
background: var(--color-bg-alt);
font-weight: 600;
color: var(--color-text-light);
text-transform: uppercase;
font-size: 0.75rem;
letter-spacing: 0.05em;
}
tr:hover {
background: var(--color-bg-alt);
}
/* Code blocks */
code {
font-family: var(--font-mono);
font-size: 0.875em;
background: var(--color-bg-code);
padding: 0.2em 0.4em;
border-radius: var(--radius-sm);
}
pre {
background: var(--color-bg-code);
border-radius: var(--radius-md);
padding: 1.25rem;
overflow-x: auto;
margin: 1.5rem 0;
border: 1px solid var(--color-border);
}
pre code {
background: none;
padding: 0;
font-size: 0.875rem;
line-height: 1.6;
}
/* Diagrams */
.diagram {
background: var(--color-bg-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
padding: 1.5rem;
overflow-x: auto;
margin: 1.5rem 0;
font-family: var(--font-mono);
font-size: 0.8rem;
line-height: 1.4;
white-space: pre;
}
/* Badges */
.badge {
display: inline-flex;
align-items: center;
padding: 0.25rem 0.75rem;
font-size: 0.75rem;
font-weight: 500;
border-radius: 9999px;
text-transform: uppercase;
letter-spacing: 0.025em;
}
.badge-primary {
background: rgba(37, 99, 235, 0.1);
color: var(--color-primary);
}
.badge-success {
background: rgba(16, 185, 129, 0.1);
color: var(--color-success);
}
.badge-warning {
background: rgba(245, 158, 11, 0.1);
color: var(--color-warning);
}
.badge-info {
background: rgba(6, 182, 212, 0.1);
color: var(--color-info);
}
/* Collapsible technical details */
details {
background: var(--color-bg-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
margin: 1rem 0;
}
summary {
padding: 1rem 1.25rem;
cursor: pointer;
font-weight: 500;
display: flex;
align-items: center;
gap: 0.5rem;
color: var(--color-text-light);
}
summary:hover {
color: var(--color-text);
}
summary::marker {
content: '';
}
summary::before {
content: '▶';
font-size: 0.75rem;
transition: transform 0.2s ease;
}
details[open] summary::before {
transform: rotate(90deg);
}
details > div {
padding: 0 1.25rem 1.25rem;
border-top: 1px solid var(--color-border);
}
.tech-badge {
background: var(--color-primary);
color: white;
padding: 0.125rem 0.5rem;
border-radius: var(--radius-sm);
font-size: 0.7rem;
margin-left: auto;
}
/* API Methods */
.method {
display: inline-block;
padding: 0.25rem 0.5rem;
border-radius: var(--radius-sm);
font-size: 0.75rem;
font-weight: 600;
font-family: var(--font-mono);
}
.method-get { background: #dcfce7; color: #166534; }
.method-post { background: #dbeafe; color: #1e40af; }
.method-put { background: #fef3c7; color: #92400e; }
.method-patch { background: #fce7f3; color: #9d174d; }
.method-delete { background: #fee2e2; color: #991b1b; }
/* Alerts / Callouts */
.callout {
padding: 1rem 1.25rem;
border-radius: var(--radius-md);
margin: 1.5rem 0;
border-left: 4px solid;
}
.callout-info {
background: rgba(6, 182, 212, 0.1);
border-color: var(--color-info);
}
.callout-tip {
background: rgba(16, 185, 129, 0.1);
border-color: var(--color-success);
}
.callout-warning {
background: rgba(245, 158, 11, 0.1);
border-color: var(--color-warning);
}
.callout-title {
font-weight: 600;
margin-bottom: 0.5rem;
display: flex;
align-items: center;
gap: 0.5rem;
}
/* Lists */
ul, ol {
margin: 1rem 0;
padding-left: 1.5rem;
}
li {
margin-bottom: 0.5rem;
}
/* Glossary */
.glossary-term {
display: flex;
padding: 1rem 0;
border-bottom: 1px solid var(--color-border);
}
.glossary-term:last-child {
border-bottom: none;
}
.glossary-word {
font-weight: 600;
min-width: 150px;
color: var(--color-primary);
}
.glossary-definition {
color: var(--color-text-light);
}
/* Feature list */
.feature-item {
display: flex;
gap: 1rem;
padding: 1.25rem;
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
margin-bottom: 1rem;
transition: all 0.2s ease;
}
.feature-item:hover {
border-color: var(--color-primary);
box-shadow: var(--shadow-md);
}
.feature-icon {
width: 3rem;
height: 3rem;
background: linear-gradient(135deg, var(--color-primary) 0%, var(--color-secondary) 100%);
color: white;
border-radius: var(--radius-md);
display: flex;
align-items: center;
justify-content: center;
font-size: 1.25rem;
flex-shrink: 0;
}
.feature-content h4 {
margin: 0 0 0.5rem 0;
font-size: 1rem;
}
.feature-content p {
margin: 0;
color: var(--color-text-light);
font-size: 0.9rem;
}
/* Responsive */
@media (max-width: 1024px) {
.sidebar {
transform: translateX(-100%);
transition: transform 0.3s ease;
}
.sidebar.open {
transform: translateX(0);
}
.main {
margin-left: 0;
padding: 2rem 1.5rem;
}
.mobile-menu-btn {
display: block;
position: fixed;
top: 1rem;
left: 1rem;
z-index: 101;
background: var(--color-primary);
color: white;
border: none;
padding: 0.75rem;
border-radius: var(--radius-md);
cursor: pointer;
}
}
@media (min-width: 1025px) {
.mobile-menu-btn {
display: none;
}
}
/* Print styles */
@media print {
.sidebar {
display: none;
}
.main {
margin-left: 0;
max-width: 100%;
}
details {
display: block !important;
}
details > div {
display: block !important;
}
}
</style>
</head>
<body>
<button class="mobile-menu-btn" onclick="toggleSidebar()">☰</button>
<div class="layout">
<!-- Sidebar Navigation -->
<aside class="sidebar" id="sidebar">
<div class="sidebar-header">
<a href="#" class="sidebar-logo">
📚 {{PROJECT_NAME}}
</a>
<div class="sidebar-version">Version {{VERSION}}</div>
</div>
<nav class="sidebar-nav">
<div class="nav-section">
<div class="nav-section-title">Overview</div>
<a href="#executive-summary" class="nav-link">
<span class="nav-link-icon">📋</span> Executive Summary
</a>
<a href="#quick-start" class="nav-link">
<span class="nav-link-icon">🚀</span> Quick Start
</a>
<a href="#architecture" class="nav-link">
<span class="nav-link-icon">🏗️</span> Architecture
</a>
</div>
<div class="nav-section">
<div class="nav-section-title">Features</div>
<a href="#features" class="nav-link">
<span class="nav-link-icon">✨</span> Feature Guide
</a>
</div>
<div class="nav-section">
<div class="nav-section-title">For Developers</div>
<a href="#api-reference" class="nav-link">
<span class="nav-link-icon">🔌</span> API Reference
</a>
<a href="#components" class="nav-link">
<span class="nav-link-icon">🧩</span> Components
</a>
<a href="#data-models" class="nav-link">
<span class="nav-link-icon">💾</span> Data Models
</a>
</div>
<div class="nav-section">
<div class="nav-section-title">Reference</div>
<a href="#glossary" class="nav-link">
<span class="nav-link-icon">📖</span> Glossary
</a>
</div>
</nav>
</aside>
<!-- Main Content -->
<main class="main">
<!-- Hero Section -->
<section class="hero">
<h1>{{PROJECT_NAME}}</h1>
<p class="hero-tagline">{{TAGLINE}}</p>
</section>
<!-- Executive Summary -->
<section id="executive-summary">
<h2>📋 Executive Summary</h2>
<h3>What is {{PROJECT_NAME}}?</h3>
<p>{{DESCRIPTION}}</p>
<h3>Who is it for?</h3>
<p>{{AUDIENCE}}</p>
<h3>Key Capabilities</h3>
<div class="grid grid-2">
<!-- CAPABILITIES_PLACEHOLDER -->
<div class="card">
<div class="card-header">
<div class="card-icon"></div>
<div class="card-title">{{CAPABILITY_1_NAME}}</div>
</div>
<p>{{CAPABILITY_1_DESCRIPTION}}</p>
</div>
<div class="card">
<div class="card-header">
<div class="card-icon"></div>
<div class="card-title">{{CAPABILITY_2_NAME}}</div>
</div>
<p>{{CAPABILITY_2_DESCRIPTION}}</p>
</div>
</div>
</section>
<!-- Quick Start -->
<section id="quick-start">
<h2>🚀 Quick Start</h2>
<h3>Prerequisites</h3>
<table>
<thead>
<tr>
<th>Tool</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<!-- PREREQUISITES_PLACEHOLDER -->
<tr>
<td><code>{{PREREQ_1_TOOL}}</code></td>
<td>{{PREREQ_1_PURPOSE}}</td>
</tr>
</tbody>
</table>
<h3>Installation</h3>
<pre><code>{{INSTALLATION_COMMANDS}}</code></pre>
<h3>Basic Usage</h3>
<pre><code>{{BASIC_USAGE}}</code></pre>
</section>
<!-- Architecture -->
<section id="architecture">
<h2>🏗️ Architecture Overview</h2>
<h3>System Diagram</h3>
<div class="diagram">{{SYSTEM_DIAGRAM}}</div>
<h3>Technology Stack</h3>
<table>
<thead>
<tr>
<th>Layer</th>
<th>Technology</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<!-- TECH_STACK_PLACEHOLDER -->
<tr>
<td>{{TECH_LAYER}}</td>
<td><span class="badge badge-primary">{{TECH_NAME}}</span></td>
<td>{{TECH_PURPOSE}}</td>
</tr>
</tbody>
</table>
<h3>Directory Structure</h3>
<pre><code>{{DIRECTORY_STRUCTURE}}</code></pre>
</section>
<!-- Features -->
<section id="features">
<h2>✨ Features</h2>
<!-- FEATURES_PLACEHOLDER -->
<div class="feature-item">
<div class="feature-icon">🔐</div>
<div class="feature-content">
<h4>{{FEATURE_NAME}}</h4>
<p>{{FEATURE_DESCRIPTION}}</p>
<details>
<summary>
🔧 Technical Details
<span class="tech-badge">For Engineers</span>
</summary>
<div>
<p>{{FEATURE_TECHNICAL_NOTES}}</p>
<p><strong>Key Files:</strong></p>
<ul>
<li><code>{{FEATURE_FILE_1}}</code></li>
</ul>
</div>
</details>
</div>
</div>
</section>
<!-- API Reference -->
<section id="api-reference">
<h2>🔌 API Reference</h2>
<div class="callout callout-info">
<div class="callout-title">ℹ️ About the API</div>
<p>This section is primarily for developers who need to integrate with or extend the application.</p>
</div>
<!-- API_ENDPOINTS_PLACEHOLDER -->
<h3>{{API_GROUP_NAME}}</h3>
<table>
<thead>
<tr>
<th>Method</th>
<th>Endpoint</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><span class="method method-get">GET</span></td>
<td><code>{{API_PATH}}</code></td>
<td>{{API_DESCRIPTION}}</td>
</tr>
</tbody>
</table>
<details>
<summary>
📖 Request & Response Details
<span class="tech-badge">Technical</span>
</summary>
<div>
<h4>Request</h4>
<pre><code>{{API_REQUEST_EXAMPLE}}</code></pre>
<h4>Response</h4>
<pre><code>{{API_RESPONSE_EXAMPLE}}</code></pre>
</div>
</details>
</section>
<!-- Components -->
<section id="components">
<h2>🧩 Component Catalog</h2>
<!-- COMPONENTS_PLACEHOLDER -->
<div class="card">
<div class="card-header">
<div class="card-icon">🧩</div>
<div class="card-title">{{COMPONENT_NAME}}</div>
</div>
<p>{{COMPONENT_DESCRIPTION}}</p>
<p><code>{{COMPONENT_PATH}}</code></p>
<details>
<summary>
🔧 Props & Usage
<span class="tech-badge">Technical</span>
</summary>
<div>
<table>
<thead>
<tr>
<th>Prop</th>
<th>Type</th>
<th>Required</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>{{PROP_NAME}}</code></td>
<td><code>{{PROP_TYPE}}</code></td>
<td>{{PROP_REQUIRED}}</td>
<td>{{PROP_DESCRIPTION}}</td>
</tr>
</tbody>
</table>
<h4>Usage Example</h4>
<pre><code>{{COMPONENT_USAGE_EXAMPLE}}</code></pre>
</div>
</details>
</div>
</section>
<!-- Data Models -->
<section id="data-models">
<h2>💾 Data Models</h2>
<h3>Entity Relationship Diagram</h3>
<div class="diagram">{{ER_DIAGRAM}}</div>
<!-- DATA_MODELS_PLACEHOLDER -->
<h3>{{MODEL_NAME}}</h3>
<p><strong>What it represents:</strong> {{MODEL_DESCRIPTION}}</p>
<table>
<thead>
<tr>
<th>Field</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>{{FIELD_NAME}}</code></td>
<td><code>{{FIELD_TYPE}}</code></td>
<td>{{FIELD_DESCRIPTION}}</td>
</tr>
</tbody>
</table>
</section>
<!-- Glossary -->
<section id="glossary">
<h2>📖 Glossary</h2>
<div class="callout callout-tip">
<div class="callout-title">💡 Tip</div>
<p>This glossary explains technical terms in plain English. Perfect for non-technical stakeholders!</p>
</div>
<div class="card">
<!-- GLOSSARY_PLACEHOLDER -->
<div class="glossary-term">
<span class="glossary-word">{{TERM}}</span>
<span class="glossary-definition">{{DEFINITION}}</span>
</div>
</div>
</section>
<!-- Footer -->
<footer style="margin-top: 4rem; padding-top: 2rem; border-top: 1px solid var(--color-border); color: var(--color-text-muted); text-align: center;">
<p>Generated by <strong>Eureka Index</strong> · {{GENERATED_DATE}}</p>
</footer>
</main>
</div>
<script>
// Mobile menu toggle
function toggleSidebar() {
document.getElementById('sidebar').classList.toggle('open');
}
// Active nav highlighting
const sections = document.querySelectorAll('section[id]');
const navLinks = document.querySelectorAll('.nav-link');
window.addEventListener('scroll', () => {
let current = '';
sections.forEach(section => {
const sectionTop = section.offsetTop;
if (scrollY >= sectionTop - 100) {
current = section.getAttribute('id');
}
});
navLinks.forEach(link => {
link.classList.remove('active');
if (link.getAttribute('href') === '#' + current) {
link.classList.add('active');
}
});
});
// Close sidebar when clicking a link (mobile)
navLinks.forEach(link => {
link.addEventListener('click', () => {
if (window.innerWidth < 1025) {
document.getElementById('sidebar').classList.remove('open');
}
});
});
</script>
</body>
</html>


@@ -0,0 +1,429 @@
# Workflow Enforcement Implementation Plan
## Problem Statement
The current `/workflow:spawn` command documents phases and steps but lacks **strict enforcement**:
- Steps are described in markdown but rely on Claude following them voluntarily
- No blocking mechanisms prevent skipping phases
- Review and security checks can be bypassed in auto modes
- State persistence is incomplete - phases can be skipped on resume
## Solution Architecture
### Three Pillars of Enforcement
```
┌─────────────────────────────────────────────────────────────────┐
│ ENFORCEMENT SYSTEM │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ BLOCKING │ │ SCRIPT │ │ STATE │ │
│ │ CHECKPOINTS │ │ GATES │ │ PERSISTENCE │ │
│ │ │ │ │ │ │ │
│ │ - HALT until │ │ - Validation │ │ - File-based │ │
│ │ verified │ │ scripts │ │ checkpoints │ │
│ │ - Exit code │ │ - Exit 0 │ │ - Cannot skip │ │
│ │ required │ │ required │ │ on resume │ │
│ └──────────────┘ └──────────────┘ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Implementation Components
### 1. Phase Gate Validator (`phase_gate.py`)
**Purpose**: Single script that controls ALL phase transitions
**Commands**:
```bash
# Check if ready to ENTER a phase
phase_gate.py can-enter $PHASE
# Mark completion of a phase's requirements
phase_gate.py complete $PHASE
# Save a named checkpoint within a phase
phase_gate.py checkpoint $NAME --phase $PHASE
# Verify all checkpoints for a phase are complete
phase_gate.py verify-checkpoints $PHASE
# Get current blocking conditions
phase_gate.py blockers
```
**Exit Codes**:
- `0` = PASS - proceed allowed
- `1` = BLOCKED - conditions not met (with reason)
- `2` = ERROR - system error
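The commands and exit codes above suggest a thin CLI wrapper around the validators. A minimal sketch of that command surface (illustrative only; the `dispatch` stub stands in for the real validator logic driven by `blocking_conditions.yml`):

```python
#!/usr/bin/env python3
"""Sketch of the phase_gate.py command surface (not the real implementation)."""
import argparse
import sys

EXIT_PASS, EXIT_BLOCKED, EXIT_ERROR = 0, 1, 2

def dispatch(args: argparse.Namespace) -> tuple[bool, str]:
    # Placeholder: the real script runs the validators configured in
    # blocking_conditions.yml for args.command / args.phase.
    return True, ""

def main() -> int:
    parser = argparse.ArgumentParser(prog="phase_gate.py")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("can-enter").add_argument("phase")
    sub.add_parser("complete").add_argument("phase")
    checkpoint = sub.add_parser("checkpoint")
    checkpoint.add_argument("name")
    checkpoint.add_argument("--phase", required=True)
    checkpoint.add_argument("--status", default="passed")
    sub.add_parser("verify-checkpoints").add_argument("phase")
    sub.add_parser("blockers")
    args = parser.parse_args()
    try:
        ok, reason = dispatch(args)
    except Exception as exc:  # system error -> exit 2
        print(f"ERROR: {exc}", file=sys.stderr)
        return EXIT_ERROR
    if not ok:  # conditions not met -> exit 1, with reason
        print(f"BLOCKED: {reason}", file=sys.stderr)
        return EXIT_BLOCKED
    return EXIT_PASS

if __name__ == "__main__":
    sys.exit(main())
```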
### 2. Checkpoint State File
**Location**: `.workflow/versions/$VERSION/gate_state.yml`
```yaml
version: v001
phase: IMPLEMENTING
created_at: "2025-12-18T10:00:00Z"
updated_at: "2025-12-18T10:30:00Z"
phases:
INITIALIZING:
status: completed
entered_at: "2025-12-18T10:00:00Z"
completed_at: "2025-12-18T10:01:00Z"
checkpoints:
manifest_exists: { status: passed, at: "..." }
version_created: { status: passed, at: "..." }
DESIGNING:
status: completed
entered_at: "2025-12-18T10:01:00Z"
completed_at: "2025-12-18T10:10:00Z"
checkpoints:
design_document_created: { status: passed, at: "..." }
design_validated: { status: passed, at: "..." }
tasks_generated: { status: passed, at: "..." }
task_count_verified: { status: passed, at: "...", data: { count: 15 } }
AWAITING_DESIGN_APPROVAL:
status: completed
entered_at: "2025-12-18T10:10:00Z"
completed_at: "2025-12-18T10:11:00Z"
checkpoints:
approval_granted: { status: passed, at: "...", data: { approver: "auto" } }
IMPLEMENTING:
status: in_progress
entered_at: "2025-12-18T10:11:00Z"
completed_at: null
checkpoints:
layer_1_complete: { status: passed, at: "..." }
layer_2_complete: { status: pending }
layer_3_complete: { status: pending }
build_passes: { status: pending }
REVIEWING:
status: not_started
checkpoints:
review_script_run: { status: pending }
all_files_exist: { status: pending }
build_verified: { status: pending }
SECURITY_REVIEW:
status: not_started
checkpoints:
security_scan_run: { status: pending }
api_contract_validated: { status: pending }
no_critical_issues: { status: pending }
AWAITING_IMPL_APPROVAL:
status: not_started
checkpoints:
approval_granted: { status: pending }
COMPLETING:
status: not_started
checkpoints:
tasks_marked_complete: { status: pending }
manifest_updated: { status: pending }
version_completed: { status: pending }
```
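As a rough sketch of how the gate script might persist one of these checkpoints (assumes PyYAML; field names follow the example state file above):

```python
from datetime import datetime, timezone
from pathlib import Path

import yaml  # assumes PyYAML is available

def save_checkpoint(state_path: Path, phase: str, name: str, status: str) -> None:
    """Record a checkpoint result in gate_state.yml (illustrative sketch)."""
    state = yaml.safe_load(state_path.read_text())
    checkpoint = state["phases"][phase]["checkpoints"].setdefault(name, {})
    checkpoint["status"] = status
    checkpoint["at"] = datetime.now(timezone.utc).isoformat()
    state["updated_at"] = checkpoint["at"]
    state_path.write_text(yaml.safe_dump(state, sort_keys=False))

# Example:
# save_checkpoint(Path(".workflow/versions/v001/gate_state.yml"),
#                 "IMPLEMENTING", "build_passes", "passed")
```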
### 3. Blocking Conditions Configuration
**Location**: `skills/guardrail-orchestrator/config/blocking_conditions.yml`
```yaml
# Defines what MUST be true before entering each phase
# and what MUST be completed before leaving each phase
phases:
INITIALIZING:
entry_requirements: [] # No requirements to start
exit_requirements:
- check: file_exists
path: project_manifest.json
error: "Run /guardrail:init first"
- check: script_passes
script: version_manager.py create
error: "Failed to create version"
checkpoints:
- name: manifest_exists
validator: file_exists
args: { path: project_manifest.json }
- name: version_created
validator: directory_exists
args: { path: ".workflow/versions/$VERSION" }
DESIGNING:
entry_requirements:
- check: phase_completed
phase: INITIALIZING
error: "Must complete INITIALIZING first"
exit_requirements:
- check: file_exists
path: ".workflow/versions/$VERSION/design/design_document.yml"
error: "Design document not created"
- check: script_passes
script: validate_design.py
error: "Design validation failed"
- check: min_file_count
pattern: ".workflow/versions/$VERSION/tasks/*.yml"
minimum: 1
error: "No task files generated"
checkpoints:
- name: design_document_created
validator: file_exists
args: { path: ".workflow/versions/$VERSION/design/design_document.yml" }
- name: design_validated
validator: script_passes
args: { script: validate_design.py }
- name: tasks_generated
validator: min_file_count
args: { pattern: ".workflow/versions/$VERSION/tasks/*.yml", minimum: 1 }
AWAITING_DESIGN_APPROVAL:
entry_requirements:
- check: phase_completed
phase: DESIGNING
error: "Must complete DESIGNING first"
exit_requirements:
- check: approval_granted
gate: design
error: "Design approval required"
checkpoints:
- name: approval_granted
validator: approval_status
args: { gate: design, required: approved }
IMPLEMENTING:
entry_requirements:
- check: phase_completed
phase: AWAITING_DESIGN_APPROVAL
error: "Must get design approval first"
- check: approval_granted
gate: design
error: "Design not approved"
exit_requirements:
- check: script_passes
script: "npm run build"
error: "Build must pass"
- check: all_tasks_have_files
error: "Not all tasks have implementation files"
checkpoints:
- name: layer_1_complete
validator: layer_implemented
args: { layer: 1 }
- name: layer_2_complete
validator: layer_implemented
args: { layer: 2 }
- name: layer_3_complete
validator: layer_implemented
args: { layer: 3 }
- name: build_passes
validator: script_passes
args: { script: "npm run build" }
REVIEWING:
entry_requirements:
- check: phase_completed
phase: IMPLEMENTING
error: "Must complete IMPLEMENTING first"
- check: script_passes
script: "npm run build"
error: "Build must pass before review"
exit_requirements:
- check: script_passes
script: verify_implementation.py
error: "Implementation verification failed"
checkpoints:
- name: review_script_run
validator: script_passes
args: { script: verify_implementation.py }
- name: all_files_exist
validator: all_task_files_exist
args: {}
- name: build_verified
validator: script_passes
args: { script: "npm run build" }
SECURITY_REVIEW:
entry_requirements:
- check: phase_completed
phase: REVIEWING
error: "Must complete REVIEWING first"
exit_requirements:
- check: script_passes
script: "security_scan.py --severity HIGH"
error: "Security scan has CRITICAL issues"
- check: script_passes
script: validate_api_contract.py
error: "API contract validation failed"
checkpoints:
- name: security_scan_run
validator: script_passes
args: { script: "security_scan.py --project-dir . --severity HIGH" }
- name: api_contract_validated
validator: script_passes
args: { script: validate_api_contract.py }
- name: no_critical_issues
validator: security_no_critical
args: {}
AWAITING_IMPL_APPROVAL:
entry_requirements:
- check: phase_completed
phase: SECURITY_REVIEW
error: "Must complete SECURITY_REVIEW first"
exit_requirements:
- check: approval_granted
gate: implementation
error: "Implementation approval required"
checkpoints:
- name: approval_granted
validator: approval_status
args: { gate: implementation, required: approved }
COMPLETING:
entry_requirements:
- check: phase_completed
phase: AWAITING_IMPL_APPROVAL
error: "Must get implementation approval first"
exit_requirements:
- check: script_passes
script: version_manager.py complete
error: "Failed to complete version"
checkpoints:
- name: tasks_marked_complete
validator: all_tasks_status
args: { status: completed }
- name: manifest_updated
validator: manifest_entities_implemented
args: {}
- name: version_completed
validator: script_passes
args: { script: "version_manager.py complete" }
```
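The validator names in this config (`file_exists`, `script_passes`, `min_file_count`, ...) map naturally onto a small dispatch table. A hedged sketch of how phase_gate.py could resolve them:

```python
import glob
import os
import subprocess

# Illustrative subset of the validators named in blocking_conditions.yml.
def file_exists(path: str) -> bool:
    return os.path.isfile(path)

def directory_exists(path: str) -> bool:
    return os.path.isdir(path)

def script_passes(script: str, timeout: int = 300) -> bool:
    try:
        return subprocess.run(script, shell=True, timeout=timeout).returncode == 0
    except subprocess.TimeoutExpired:
        return False

def min_file_count(pattern: str, minimum: int) -> bool:
    return len(glob.glob(pattern)) >= minimum

VALIDATORS = {
    "file_exists": file_exists,
    "directory_exists": directory_exists,
    "script_passes": script_passes,
    "min_file_count": min_file_count,
}

def run_checkpoint(validator: str, args: dict) -> bool:
    return VALIDATORS[validator](**args)
```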
### 4. Updated spawn.md Phase Execution
Each phase in spawn.md will be updated to use mandatory gate calls:
````markdown
### PHASE 5: REVIEWING
#### Step 5.0: Gate Check [MANDATORY - BLOCKING]
```bash
# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter REVIEWING
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
echo "❌ BLOCKED: Cannot enter REVIEWING phase"
echo "Run /workflow:status to see blocking conditions"
# HALT - DO NOT PROCEED
exit 1
fi
```
#### Step 5.1: Run Build Validation [MANDATORY]
```bash
npm run build 2>&1
BUILD_EXIT=$?
# Save checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint build_verified \
--phase REVIEWING \
--status $([ $BUILD_EXIT -eq 0 ] && echo "passed" || echo "failed")
```
... (rest of phase)
#### Step 5.X: Complete Phase [MANDATORY]
```bash
# MUST pass before transitioning - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete REVIEWING
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
echo "❌ BLOCKED: Cannot complete REVIEWING phase"
echo "Missing checkpoints:"
python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
# HALT - DO NOT PROCEED
exit 1
fi
```
````
### 5. Auto-Mode Enforcement
Even in `--auto` and `--full-auto` modes, the following are **NEVER skipped**:
| Phase | Auto-Approved | Script Gates Still Run |
|-------|---------------|------------------------|
| DESIGNING | N/A | validate_design.py |
| AWAITING_DESIGN_APPROVAL | YES | None (approval only) |
| IMPLEMENTING | N/A | npm run build |
| REVIEWING | YES if passes | verify_implementation.py, npm run build |
| SECURITY_REVIEW | YES if passes | security_scan.py, validate_api_contract.py |
| AWAITING_IMPL_APPROVAL | YES if passes | None (approval only) |
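In other words, auto modes substitute only the approval decision; the scripts in the right-hand column always run first. A minimal sketch of the rule this table encodes (phase set and return values are illustrative):

```python
AUTO_APPROVABLE = {
    "AWAITING_DESIGN_APPROVAL",
    "REVIEWING",
    "SECURITY_REVIEW",
    "AWAITING_IMPL_APPROVAL",
}

def approval_decision(phase: str, mode: str, script_gates_passed: bool) -> str:
    if not script_gates_passed:
        return "blocked"  # no mode can bypass a failing script gate
    if mode in ("auto", "full-auto") and phase in AUTO_APPROVABLE:
        return "approved"  # the "YES if passes" column above
    return "awaiting-approval"
```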
## Implementation Order
### Phase 1: Core Infrastructure
1. Create `phase_gate.py` script
2. Create `blocking_conditions.yml` config
3. Create `gate_state.yml` schema
### Phase 2: Integration
4. Update `workflow_manager.py` to use phase_gate
5. Update `validate_workflow.py` to check gate_state
6. Create hook integration for real-time enforcement
### Phase 3: spawn.md Updates
7. Add gate checks to each phase in spawn.md
8. Add checkpoint saves after each major step
9. Add blocking conditions to phase transitions
### Phase 4: Testing
10. Test manual mode enforcement
11. Test --auto mode (gates still run)
12. Test --full-auto mode (gates still run)
13. Test resume after interruption
## File Summary
| File | Purpose |
|------|---------|
| `scripts/phase_gate.py` | Main enforcement script |
| `config/blocking_conditions.yml` | Phase requirements definition |
| `.workflow/versions/$V/gate_state.yml` | Checkpoint persistence |
| `commands/workflow/spawn.md` | Updated with gate calls |
## Success Criteria
1. **Cannot skip phases**: Attempting to transition without completing previous phase fails
2. **Cannot skip review**: REVIEWING phase always runs verification scripts
3. **Cannot skip security**: SECURITY_REVIEW always runs security scan
4. **Resume preserves state**: Interrupted workflow resumes at correct checkpoint
5. **Auto modes run gates**: --auto and --full-auto still execute all validation scripts
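Criterion 1 lends itself to a direct test. A sketch (assumes a version whose IMPLEMENTING phase is still incomplete):

```python
import subprocess

def test_cannot_skip_to_reviewing():
    """Entering REVIEWING before IMPLEMENTING completes must exit 1 (BLOCKED)."""
    result = subprocess.run(
        ["python3", "skills/guardrail-orchestrator/scripts/phase_gate.py",
         "can-enter", "REVIEWING"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 1, result.stdout + result.stderr
```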
## Command Reference
```bash
# Check current blockers
python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
# Check if can enter a phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter REVIEWING
# Save a checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint build_verified --phase REVIEWING --status passed
# Complete a phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete REVIEWING
# View gate state
cat .workflow/versions/$VERSION/gate_state.yml
```


@@ -0,0 +1,38 @@
# Architect Agent Definition
# Responsible for manifest design and task creation
name: architect
role: System Designer & Task Planner
description: |
The Architect designs the system by defining entities in the manifest
and breaking down implementation into discrete tasks for other agents.
allowed_tools:
- Read # Read any file for context
- Write # Write to manifest and task files ONLY
blocked_tools:
- Bash # Cannot execute commands
- Edit # Cannot modify existing code
allowed_files:
- project_manifest.json
- "tasks/*.yml"
- "tasks/**/*.yml"
responsibilities:
- Design system architecture in manifest
- Define entities (pages, components, APIs, tables)
- Create implementation tasks for frontend/backend agents
- Set task priorities and dependencies
- Ensure no orphan entities or circular dependencies
outputs:
- Updated project_manifest.json with new entities
- Task files in tasks/ directory
cannot_do:
- Implement any code
- Run build/test commands
- Modify existing source files


@@ -0,0 +1,45 @@
# Backend Agent Definition
# Responsible for API endpoints and database implementation
name: backend
role: Backend Developer
description: |
The Backend agent implements API endpoints, database schemas,
and server-side logic based on approved entities and assigned tasks.
allowed_tools:
- Read # Read files for context
- Write # Create new files
- Edit # Modify existing files
- Bash # Run build, lint, type-check, tests
blocked_tools: [] # Full access for implementation
allowed_files:
- "app/api/**/*"
- "app/lib/**/*"
- "prisma/**/*"
- "db/**/*"
- "*.config.*"
responsibilities:
- Implement API route handlers (GET, POST, PUT, DELETE)
- Create database schemas and migrations
- Implement data access layer (CRUD operations)
- Ensure request/response match manifest specs
- Handle errors appropriately
- Run lint/type-check before marking complete
task_types:
- create # New API/DB entity
- update # Modify existing backend
- refactor # Improve code quality
- delete # Remove deprecated endpoints
workflow:
1. Read assigned task from tasks/*.yml
2. Verify entity is APPROVED in manifest
3. Implement code matching manifest spec
4. Run validation (lint, type-check)
5. Update task status to "review"


@@ -0,0 +1,44 @@
# Frontend Agent Definition
# Responsible for UI component and page implementation
name: frontend
role: Frontend Developer
description: |
The Frontend agent implements UI components and pages based on
approved entities in the manifest and assigned tasks.
allowed_tools:
- Read # Read files for context
- Write # Create new files
- Edit # Modify existing files
- Bash # Run build, lint, type-check
blocked_tools: [] # Full access for implementation
allowed_files:
- "app/components/**/*"
- "app/**/page.tsx"
- "app/**/layout.tsx"
- "app/globals.css"
- "*.config.*"
responsibilities:
- Implement UI components matching manifest specs
- Create pages with correct routing
- Ensure props match manifest definitions
- Follow existing code patterns and styles
- Run lint/type-check before marking complete
task_types:
- create # New component/page
- update # Modify existing UI
- refactor # Improve code quality
- delete # Remove deprecated UI
workflow:
1. Read assigned task from tasks/*.yml
2. Verify entity is APPROVED in manifest
3. Implement code matching manifest spec
4. Run validation (lint, type-check)
5. Update task status to "review"


@@ -0,0 +1,85 @@
# Orchestrator Agent Definition
# Coordinates the entire workflow and delegates to specialized agents
name: orchestrator
role: Workflow Coordinator
description: |
The Orchestrator manages the end-to-end workflow, delegating tasks
to specialized agents based on task type and current phase.
workflow_phases:
1_design:
description: Design system entities in manifest
agent: architect
inputs: Feature requirements
outputs: Updated manifest with DEFINED entities
2_plan:
description: Create implementation tasks
agent: architect
inputs: Approved manifest entities
outputs: Task files in tasks/*.yml
3_implement:
description: Implement tasks by type
agents:
frontend: UI components, pages
backend: API endpoints, database
inputs: Tasks with status "pending"
outputs: Implemented code, tasks with status "review"
4_review:
description: Review implementations
agent: reviewer
inputs: Tasks with status "review"
outputs: Approved tasks or change requests
5_complete:
description: Mark tasks as done
agent: orchestrator
inputs: Tasks with status "approved"
outputs: Tasks with status "completed"
delegation_rules:
# Task assignment by entity type
entity_routing:
pages: frontend
components: frontend
api_endpoints: backend
database_tables: backend
# Task assignment by task type
task_routing:
create: frontend | backend # Based on entity type
update: frontend | backend # Based on entity type
delete: frontend | backend # Based on entity type
refactor: frontend | backend # Based on entity type
review: reviewer
test: reviewer
status_transitions:
pending:
- in_progress # When agent starts work
- blocked # If dependencies not met
in_progress:
- review # When implementation complete
- blocked # If blocked by issue
review:
- approved # Reviewer accepts
- in_progress # Reviewer requests changes
approved:
- completed # Final state
blocked:
- pending # When blocker resolved
commands:
- /workflow:start <feature> # Start new feature workflow
- /workflow:plan # Create tasks from manifest
- /workflow:assign # Assign tasks to agents
- /workflow:status # Show workflow status
- /workflow:next # Process next available task


@@ -0,0 +1,52 @@
# Reviewer Agent Definition
# Responsible for code review and quality assurance
name: reviewer
role: Code Reviewer & QA
description: |
The Reviewer agent reviews implementations, runs tests,
and approves or requests changes. Cannot modify code directly.
allowed_tools:
- Read # Read any file for review
- Bash # Run tests, lint, type-check, verify
blocked_tools:
- Write # Cannot create files
- Edit # Cannot modify files
allowed_files:
- "*" # Can read everything
responsibilities:
  - Review that implementations match manifest specs
- Verify acceptance criteria are met
- Run tests and validation commands
- Check code quality and patterns
- Approve or request changes with feedback
task_types:
- review # Review completed implementation
review_checklist:
- File exists at manifest file_path
- Exports match manifest definitions
- Props/types match manifest specs
- Follows project code patterns
- Lint passes
- Type-check passes
- Tests pass (if applicable)
workflow:
1. Read task with status "review"
2. Read implementation files
3. Run verification commands
4. Compare against manifest specs
5. Either:
- APPROVE: Update task status to "approved"
- REQUEST_CHANGES: Add review_notes, set status to "in_progress"
outputs:
- review_notes in task file
- status update (approved | in_progress)


@@ -0,0 +1,216 @@
# Security Reviewer Agent
**Role**: Security-focused code review and vulnerability assessment
**Trigger**: `/workflow:security` command or security review phase
---
## Agent Capabilities
### Primary Functions
1. **Static Security Analysis**: Pattern-based vulnerability detection
2. **OWASP Top 10 Assessment**: Check for common web vulnerabilities
3. **Dependency Audit**: Identify vulnerable packages
4. **Configuration Review**: Check security settings and configurations
5. **Secret Detection**: Find hardcoded credentials and sensitive data
### Security Categories Analyzed
| Category | CWE | OWASP | Severity |
|----------|-----|-------|----------|
| Hardcoded Secrets | CWE-798 | A07 | CRITICAL |
| SQL Injection | CWE-89 | A03 | CRITICAL |
| Command Injection | CWE-78 | A03 | CRITICAL |
| XSS | CWE-79 | A03 | HIGH |
| Path Traversal | CWE-22 | A01 | HIGH |
| NoSQL Injection | CWE-943 | A03 | HIGH |
| SSRF | CWE-918 | A10 | HIGH |
| Prototype Pollution | CWE-1321 | A03 | HIGH |
| Insecure Auth | CWE-287 | A07 | HIGH |
| CORS Misconfiguration | CWE-942 | A01 | MEDIUM |
| Sensitive Data Exposure | CWE-200 | A02 | MEDIUM |
| Insecure Dependencies | CWE-1104 | A06 | MEDIUM |
| Insecure Randomness | CWE-330 | A02 | LOW |
| Debug Code | CWE-489 | A05 | LOW |
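
As an illustration of the pattern-based approach, a minimal scanner for two of these categories might look like the sketch below (the real `security_scan.py` rules are more extensive):

```python
import re
from pathlib import Path

# Illustrative patterns for two categories; production rules are broader.
PATTERNS = {
    "CWE-798 Hardcoded Secrets": re.compile(
        r"(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
    "CWE-89 SQL Injection": re.compile(
        r"query\(\s*[`'\"].*\$\{", re.I),
}

def scan_file(path: Path) -> list[tuple[str, int, str]]:
    """Return (category, line number, offending line) findings."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((label, lineno, line.strip()))
    return findings
```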
---
## Agent Constraints
### READ-ONLY MODE
- **CANNOT** modify files
- **CANNOT** fix issues directly
- **CAN** only read, analyze, and report
### Output Requirements
- Must produce structured security report
- Must categorize issues by severity
- Must provide remediation guidance
- Must reference CWE/OWASP standards
---
## Execution Flow
### Step 1: Run Automated Scanner
```bash
python3 skills/guardrail-orchestrator/scripts/security_scan.py --project-dir . --json
```
### Step 2: Deep Analysis (Task Agent)
For each CRITICAL/HIGH issue, perform deeper analysis:
- Trace data flow from source to sink
- Identify attack vectors
- Assess exploitability
- Check for existing mitigations
### Step 3: Dependency Audit
```bash
npm audit --json 2>/dev/null || echo "{}"
```
### Step 4: Configuration Review
Check security-relevant configurations:
- CORS settings
- CSP headers
- Authentication configuration
- Session management
- Cookie settings
### Step 5: Manual Code Review Checklist
For implemented features, verify:
- [ ] Input validation on all user inputs
- [ ] Output encoding for XSS prevention
- [ ] Parameterized queries for database access
- [ ] Proper error handling (no sensitive data in errors)
- [ ] Authentication/authorization checks
- [ ] HTTPS enforcement
- [ ] Secure cookie flags
- [ ] Rate limiting on sensitive endpoints
### Step 6: Generate Report
Output comprehensive security report with:
- Executive summary
- Issue breakdown by severity
- Detailed findings with code locations
- Remediation recommendations
- Risk assessment
---
## Report Format
```
+======================================================================+
| SECURITY REVIEW REPORT |
+======================================================================+
| Project: $PROJECT_NAME |
| Scan Date: $DATE |
| Agent: security-reviewer |
+======================================================================+
| EXECUTIVE SUMMARY |
+----------------------------------------------------------------------+
| Risk Level: CRITICAL / HIGH / MEDIUM / LOW / PASS |
| Total Issues: X |
| Critical: X (immediate action required) |
| High: X (fix before production) |
| Medium: X (should fix) |
| Low: X (consider fixing) |
+======================================================================+
| CRITICAL FINDINGS |
+----------------------------------------------------------------------+
| [1] Hardcoded API Key |
| File: src/lib/api.ts:15 |
| CWE: CWE-798 |
| Code: apiKey = "sk-..." |
| Risk: Credentials can be extracted from source |
| Fix: Use environment variable: process.env.API_KEY |
+----------------------------------------------------------------------+
| [2] SQL Injection |
| File: app/api/users/route.ts:42 |
| CWE: CWE-89 |
| Code: query(`SELECT * FROM users WHERE id = ${userId}`) |
| Risk: Attacker can manipulate database queries |
| Fix: Use parameterized query: query($1, [userId]) |
+======================================================================+
| HIGH FINDINGS |
+----------------------------------------------------------------------+
| [3] XSS Vulnerability |
| File: app/components/Comment.tsx:28 |
| ... |
+======================================================================+
| DEPENDENCY VULNERABILITIES |
+----------------------------------------------------------------------+
| lodash@4.17.20 - Prototype Pollution (HIGH) |
| axios@0.21.0 - SSRF Risk (MEDIUM) |
| Fix: npm audit fix |
+======================================================================+
| RECOMMENDATIONS |
+----------------------------------------------------------------------+
| 1. Immediately rotate any exposed credentials |
| 2. Fix SQL injection before deploying |
| 3. Add input validation layer |
| 4. Update vulnerable dependencies |
| 5. Add security headers middleware |
+======================================================================+
| VERDICT: FAIL - X critical issues must be fixed |
+======================================================================+
```
---
## Integration with Workflow
### In Review Phase
The security agent is automatically invoked during `/workflow:review`:
1. Review command runs security_scan.py
2. If CRITICAL issues found → blocks approval
3. Report included in review output
### Standalone Security Audit
Use `/workflow:security` for dedicated security review:
- More thorough analysis
- Deep code inspection
- Dependency audit
- Configuration review
### Remediation Flow
After security issues are identified:
1. Issues added to task queue as blockers
2. Implementation agents fix issues
3. Security agent re-validates fixes
4. Approval only after clean scan
---
## Tool Usage
### Primary Tools
- `Bash`: Run security_scan.py, npm audit
- `Read`: Analyze suspicious code patterns
- `Grep`: Search for vulnerability patterns
### Blocked Tools
- `Write`: Cannot create files
- `Edit`: Cannot modify files
- `Task`: Cannot delegate to other agents
---
## Exit Conditions
### PASS
- No CRITICAL or HIGH issues
- All dependencies up to date or acknowledged
- Security configurations reviewed
### FAIL
- Any CRITICAL issue present
- Multiple HIGH issues present
- Critical dependencies vulnerable
### WARNING
- Only MEDIUM/LOW issues
- Some dependencies outdated
- Minor configuration concerns
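Expressed as logic (the case of exactly one HIGH issue is not spelled out above; this sketch treats it as WARNING):

```python
def verdict(critical: int, high: int, medium: int, low: int) -> str:
    """Map issue counts to the exit conditions above (illustrative)."""
    if critical > 0 or high > 1:
        return "FAIL"
    if high == 1 or medium > 0 or low > 0:
        return "WARNING"
    return "PASS"
```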


@@ -0,0 +1,339 @@
# Blocking Conditions Configuration
# Defines what MUST be true for each phase transition
# Used by phase_gate.py for strict enforcement
version: "1.0"
# Phase execution order (cannot skip)
phase_order:
- INITIALIZING
- DESIGNING
- AWAITING_DESIGN_APPROVAL
- IMPLEMENTING
- REVIEWING
- SECURITY_REVIEW
- AWAITING_IMPL_APPROVAL
- COMPLETING
- COMPLETED
# Phases that trigger fix loops on failure
fix_loop_phases:
- REVIEWING
- SECURITY_REVIEW
# Detailed phase requirements
phases:
INITIALIZING:
description: "Workflow initialization"
entry_requirements: []
checkpoints:
manifest_exists:
description: "project_manifest.json must exist"
validator: file_exists
args:
path: project_manifest.json
on_fail: "Run /guardrail:init or /guardrail:analyze first"
version_created:
description: "Workflow version directory created"
validator: directory_exists
args:
path: ".workflow/versions/{version}"
on_fail: "Version creation failed"
exit_requirements:
- all_checkpoints_passed
DESIGNING:
description: "Architecture and task design"
entry_requirements:
- phase_completed: INITIALIZING
checkpoints:
design_document_created:
description: "Design document exists"
validator: file_exists
args:
path: ".workflow/versions/{version}/design/design_document.yml"
on_fail: "Architect agent must create design document"
design_validated:
description: "Design passes validation"
validator: script_passes
args:
script: "python3 skills/guardrail-orchestrator/scripts/validate_design.py .workflow/versions/{version}/design/design_document.yml --output-dir .workflow/versions/{version}"
on_fail: "Design validation failed - review errors"
tasks_generated:
description: "Task files generated from design"
validator: min_file_count
args:
pattern: ".workflow/versions/{version}/tasks/*.yml"
minimum: 1
on_fail: "No task files generated - validate_design.py must run"
exit_requirements:
- all_checkpoints_passed
- min_task_count: 1
AWAITING_DESIGN_APPROVAL:
description: "Gate 1 - Design approval required"
entry_requirements:
- phase_completed: DESIGNING
checkpoints:
design_approved:
description: "Design approval granted"
validator: approval_status
args:
gate: design
required_status: approved
on_fail: "Design approval required - run /workflow:approve or wait for auto-approval"
exit_requirements:
- all_checkpoints_passed
auto_mode_behavior:
auto_approve: true
still_validates: false
IMPLEMENTING:
description: "Code implementation by layers"
entry_requirements:
- phase_completed: AWAITING_DESIGN_APPROVAL
- approval_granted: design
checkpoints:
all_layers_complete:
description: "All dependency layers implemented"
validator: all_layers_implemented
args: {}
on_fail: "Not all layers complete - check dependency_graph.yml"
build_passes:
description: "npm run build succeeds"
validator: script_passes
args:
script: "npm run build"
timeout: 300
on_fail: "Build failed - fix compilation errors"
type_check_passes:
description: "npx tsc --noEmit succeeds"
validator: script_passes
args:
script: "npx tsc --noEmit"
timeout: 300
on_fail: "Type check failed - fix TypeScript errors"
lint_passes:
description: "npm run lint succeeds"
validator: script_passes
args:
script: "npm run lint"
timeout: 300
on_fail: "Lint failed - fix lint errors"
exit_requirements:
- all_checkpoints_passed
- build_exit_code: 0
- type_check_exit_code: 0
- lint_exit_code: 0
REVIEWING:
description: "Code review and verification"
entry_requirements:
- phase_completed: IMPLEMENTING
- build_passes: true
- type_check_passes: true
- lint_passes: true
checkpoints:
review_script_run:
description: "Review verification script executed"
validator: script_passes
args:
script: "python3 skills/guardrail-orchestrator/scripts/verify_implementation.py --version {version}"
on_fail: "Review script failed to run"
all_files_verified:
description: "All task files have implementations"
validator: all_task_files_exist
args: {}
on_fail: "Some implementation files are missing"
code_review_passed:
description: "Code review agent found no CRITICAL issues"
validator: code_review_result
args:
report_path: ".workflow/versions/{version}/review/code_review_report.yml"
block_on_critical: true
block_on_warnings: false
on_fail: "Code review found CRITICAL issues that must be fixed"
review_passed:
description: "Review found no blocking issues (umbrella checkpoint)"
validator: review_result
args:
allow_warnings: true
block_on_errors: true
on_fail: "Review found issues that must be fixed"
exit_requirements:
- all_checkpoints_passed
fix_loop:
enabled: true
return_to: IMPLEMENTING
trigger_on:
- checkpoint_failed: review_passed
- checkpoint_failed: all_files_verified
- checkpoint_failed: code_review_passed
max_iterations: 5
on_max_iterations: "Too many fix iterations - manual intervention required"
auto_mode_behavior:
auto_approve: false # Must pass review
still_validates: true
fix_loop_enabled: true
SECURITY_REVIEW:
description: "Security scanning and API validation"
entry_requirements:
- phase_completed: REVIEWING
- checkpoint_passed: review_passed
checkpoints:
security_scan_run:
description: "Security scanner executed"
validator: script_passes
args:
script: "python3 skills/guardrail-orchestrator/scripts/security_scan.py --project-dir . --severity HIGH"
on_fail: "Security scan failed to run"
api_contract_validated:
description: "API contracts match frontend calls"
validator: script_passes
args:
script: "python3 skills/guardrail-orchestrator/scripts/validate_api_contract.py --project-dir ."
on_fail: "API contract validation failed"
security_passed:
description: "No CRITICAL security issues"
validator: security_result
args:
block_on_critical: true
block_on_high: false # Warning only
allow_medium: true
allow_low: true
on_fail: "CRITICAL security issues found - must fix before proceeding"
exit_requirements:
- all_checkpoints_passed
- no_critical_security_issues: true
fix_loop:
enabled: true
return_to: IMPLEMENTING
trigger_on:
- checkpoint_failed: security_passed
- security_critical_found: true
max_iterations: 5
on_max_iterations: "Security issues persist - manual security review required"
auto_mode_behavior:
auto_approve: false # Must pass security
still_validates: true
fix_loop_enabled: true
AWAITING_IMPL_APPROVAL:
description: "Gate 2 - Implementation approval required"
entry_requirements:
- phase_completed: SECURITY_REVIEW
- checkpoint_passed: security_passed
checkpoints:
implementation_approved:
description: "Implementation approval granted"
validator: approval_status
args:
gate: implementation
required_status: approved
on_fail: "Implementation approval required"
exit_requirements:
- all_checkpoints_passed
auto_mode_behavior:
auto_approve: true # Auto if review + security passed
still_validates: false
COMPLETING:
description: "Finalization and cleanup"
entry_requirements:
- phase_completed: AWAITING_IMPL_APPROVAL
- approval_granted: implementation
checkpoints:
tasks_marked_complete:
description: "All tasks marked as completed"
validator: all_tasks_status
args:
required_status: completed
on_fail: "Not all tasks marked complete"
version_finalized:
description: "Version marked as complete"
validator: script_passes
args:
script: "python3 skills/guardrail-orchestrator/scripts/version_manager.py complete"
on_fail: "Version finalization failed"
exit_requirements:
- all_checkpoints_passed
COMPLETED:
description: "Workflow finished"
entry_requirements:
- phase_completed: COMPLETING
checkpoints: {}
exit_requirements: []
# Global rules
global_rules:
# Cannot skip phases
strict_phase_order: true
# Must complete previous phase before entering next
require_phase_completion: true
# Fix loops are mandatory for REVIEWING and SECURITY_REVIEW
mandatory_fix_loops:
- REVIEWING
- SECURITY_REVIEW
# Maximum fix loop iterations before requiring manual intervention
max_fix_iterations: 5
# Build must pass before REVIEWING
build_required_before_review: true
# Security must pass before AWAITING_IMPL_APPROVAL
security_required_before_approval: true
# Scripts referenced by validators
scripts:
validate_design:
path: "skills/guardrail-orchestrator/scripts/validate_design.py"
required_exit_code: 0
verify_implementation:
path: "skills/guardrail-orchestrator/scripts/verify_implementation.py"
required_exit_code: 0
security_scan:
path: "skills/guardrail-orchestrator/scripts/security_scan.py"
critical_exit_code: 2
high_exit_code: 1
pass_exit_code: 0
validate_api_contract:
path: "skills/guardrail-orchestrator/scripts/validate_api_contract.py"
required_exit_code: 0
version_manager:
path: "skills/guardrail-orchestrator/scripts/version_manager.py"
required_exit_code: 0


@@ -0,0 +1,276 @@
# Context Compaction Schema
# Handles context window management with pre/post compact hooks
# Ensures AI can resume work after context compaction
# ============================================================================
# CONTEXT STATE (Saved before compaction)
# ============================================================================
context_state:
# Unique session ID
session_id: string # compact_<timestamp>
# When state was captured
captured_at: timestamp
# Context usage at capture time
context_usage:
tokens_used: integer
tokens_max: integer
percentage: float # 0.0 - 1.0
threshold_triggered: float # e.g., 0.75 for 75%
# Current workflow position
workflow_position:
workflow_id: string # Reference to workflow_state
current_phase: string # DESIGNING, IMPLEMENTING, etc.
active_task_id: string | null
layer: integer # Current execution layer
# What was being worked on
active_work:
entity_id: string # model_user, api_create_user, etc.
entity_type: string # model, api, page, component
action: string # creating, implementing, reviewing
file_path: string | null # File being edited
progress_notes: string # What was done so far
# Pending actions (what to do next)
next_actions:
- action: string # create, implement, test, review
target: string # Entity or file
priority: integer # 1 = immediate
context_needed: [string] # Files/entities needed for context
# Files that were recently modified
modified_files:
- path: string
action: string # created, modified, deleted
summary: string # What changed
# Important decisions made in this session
decisions:
- topic: string
decision: string
reasoning: string
timestamp: timestamp
# Blockers or issues encountered
blockers:
- issue: string
status: string # unresolved, workaround, resolved
notes: string
# ============================================================================
# COMPACTION TRIGGERS
# ============================================================================
compaction_triggers:
# Automatic triggers based on context usage
auto_triggers:
warning_threshold: 0.70 # Show warning at 70%
save_threshold: 0.80 # Auto-save state at 80%
compact_threshold: 0.90 # Recommend compaction at 90%
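    # A trigger fires once tokens_used / tokens_max reaches the given
    # threshold (enforced by context_monitor below).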
# Manual triggers
manual_triggers:
- command: "/compact" # User requests compaction
- command: "/save-state" # User requests state save
- command: "/resume" # User requests resume from state
# ============================================================================
# PRE-COMPACT HOOK
# ============================================================================
pre_compact_hook:
description: "Executed before context compaction"
actions:
# 1. Capture current state
- action: capture_workflow_state
output: .workflow/context_state.json
# 2. Save progress notes
- action: generate_progress_summary
include:
- completed_tasks
- current_task_status
- next_steps
- blockers
# 3. Commit any uncommitted work
- action: git_checkpoint
message: "WIP: Pre-compaction checkpoint"
# 4. Generate resume prompt
- action: generate_resume_prompt
output: .workflow/resume_prompt.md
output_files:
- .workflow/context_state.json # Full state
- .workflow/resume_prompt.md # Human-readable resume
- .workflow/modified_files.json # Recent changes
# ============================================================================
# POST-COMPACT HOOK (Resume Injector)
# ============================================================================
post_compact_hook:
description: "Executed after compaction to restore context"
# Resume prompt template (injected into context)
resume_prompt_template: |
## Context Recovery - Resuming Previous Session
### Session Info
- **Original Session**: {{session_id}}
- **Captured At**: {{captured_at}}
- **Context Usage**: {{percentage}}% (triggered at {{threshold_triggered}}%)
### Workflow Position
- **Phase**: {{current_phase}}
- **Active Task**: {{active_task_id}}
- **Layer**: {{layer}} of {{total_layers}}
### What Was Being Worked On
- **Entity**: {{entity_id}} ({{entity_type}})
- **Action**: {{action}}
- **File**: {{file_path}}
- **Progress**: {{progress_notes}}
### Next Actions (Priority Order)
{{#each next_actions}}
{{priority}}. **{{action}}** {{target}}
- Context needed: {{context_needed}}
{{/each}}
### Recent Changes
{{#each modified_files}}
- `{{path}}` - {{action}}: {{summary}}
{{/each}}
### Key Decisions Made
{{#each decisions}}
- **{{topic}}**: {{decision}}
{{/each}}
### Current Blockers
{{#each blockers}}
- {{issue}} ({{status}}): {{notes}}
{{/each}}
---
**Action Required**: Continue from the next action listed above.
# ============================================================================
# CONTEXT MONITOR
# ============================================================================
context_monitor:
description: "Monitors context usage and triggers hooks"
# Check frequency
check_interval: after_each_tool_call
# Warning messages by threshold
warnings:
- threshold: 0.70
message: "Context at 70%. Consider saving state soon."
action: log_warning
- threshold: 0.80
message: "Context at 80%. Auto-saving workflow state..."
action: auto_save_state
- threshold: 0.90
message: "Context at 90%. Compaction recommended. Run /compact to save state and clear context."
action: recommend_compact
- threshold: 0.95
message: "CRITICAL: Context at 95%. Forcing state save before potential truncation."
action: force_save_state
# ============================================================================
# RESUME WORKFLOW
# ============================================================================
resume_workflow:
description: "Steps to resume after compaction"
steps:
# 1. Check for existing state
- step: check_state_exists
file: .workflow/context_state.json
if_missing: "No previous state found. Starting fresh."
# 2. Load state
- step: load_state
parse: context_state
# 3. Validate state currency
- step: validate_state
checks:
- git_status_matches # Ensure no unexpected changes
- workflow_exists # Workflow files still present
- task_valid # Active task still exists
# 4. Inject resume context
- step: inject_resume_prompt
template: post_compact_hook.resume_prompt_template
# 5. Display summary
- step: display_summary
show:
- progress_percentage
- next_action
- files_to_read
# ============================================================================
# EXAMPLE STATE FILE
# ============================================================================
example_context_state:
session_id: compact_20250118_143022
captured_at: "2025-01-18T14:30:22Z"
context_usage:
tokens_used: 85000
tokens_max: 100000
percentage: 0.85
threshold_triggered: 0.80
workflow_position:
workflow_id: workflow_20250118_100000
current_phase: IMPLEMENTING
active_task_id: task_create_api_auth_login
layer: 2
active_work:
entity_id: api_auth_login
entity_type: api
action: implementing
file_path: app/api/auth/login/route.ts
progress_notes: "Created route handler, added validation. Need to implement JWT generation."
next_actions:
- action: implement
target: JWT token generation in login route
priority: 1
context_needed:
- app/api/auth/login/route.ts
- app/lib/auth.ts
- action: implement
target: api_auth_register
priority: 2
context_needed:
- .workflow/versions/v001/contexts/api_auth_register.yml
modified_files:
- path: app/api/auth/login/route.ts
action: created
summary: "Basic route structure with validation"
- path: prisma/schema.prisma
action: modified
summary: "Added User model"
decisions:
- topic: Authentication method
decision: "Use JWT with httpOnly cookies"
reasoning: "More secure than localStorage, works with SSR"
timestamp: "2025-01-18T14:00:00Z"
blockers: []


@@ -0,0 +1,340 @@
# Dependency Graph Schema
# Auto-generated from design_document.yml
# Determines execution order for parallel task distribution
# ============================================================================
# GRAPH METADATA
# ============================================================================
dependency_graph:
# Links to design document
design_version: string # design_document revision this was generated from
workflow_version: string # v001, v002, etc.
# Generation info
generated_at: timestamp
generator: string # Script that generated this
# Statistics
stats:
total_entities: integer
total_layers: integer
max_parallelism: integer # Max items that can run in parallel
critical_path_length: integer # Longest dependency chain
# ============================================================================
# EXECUTION LAYERS
# ============================================================================
layers:
description: "Ordered layers for parallel execution within each layer"
layer_schema:
layer: integer # 1, 2, 3...
name: string # Human-readable name
description: string # What this layer contains
# Items in this layer (can run in parallel)
items:
- id: string # Entity ID (model_*, api_*, page_*, component_*)
type: enum # model | api | page | component
name: string # Human-readable name
# Dependencies (all must be in lower layers)
depends_on: [string] # Entity IDs this depends on
# Task mapping
task_id: string # task_* ID for implementation
agent: enum # frontend | backend
# Estimated complexity
complexity: enum # low | medium | high
# Layer constraints
requires_layers: [integer] # Layer numbers that must complete first
parallel_count: integer # Number of items that can run in parallel
# Example layers
example:
- layer: 1
name: "Data Layer"
description: "Database models - no external dependencies"
items:
- id: model_user
type: model
name: User
depends_on: []
task_id: task_create_model_user
agent: backend
complexity: medium
- id: model_post
type: model
name: Post
depends_on: []
task_id: task_create_model_post
agent: backend
complexity: low
requires_layers: []
parallel_count: 2
- layer: 2
name: "API Layer"
description: "REST endpoints - depend on models"
items:
- id: api_create_user
type: api
name: "Create User"
depends_on: [model_user]
task_id: task_create_api_create_user
agent: backend
complexity: medium
- id: api_list_users
type: api
name: "List Users"
depends_on: [model_user]
task_id: task_create_api_list_users
agent: backend
complexity: low
requires_layers: [1]
parallel_count: 2
- layer: 3
name: "UI Layer"
description: "Pages and components - depend on APIs"
items:
- id: component_user_card
type: component
name: UserCard
depends_on: []
task_id: task_create_component_user_card
agent: frontend
complexity: low
- id: page_users
type: page
name: "Users Page"
depends_on: [api_list_users, component_user_card]
task_id: task_create_page_users
agent: frontend
complexity: medium
requires_layers: [2]
parallel_count: 2
# ============================================================================
# FULL DEPENDENCY MAP
# ============================================================================
dependency_map:
description: "Complete dependency relationships for visualization"
entry_schema:
entity_id:
type: enum # model | api | page | component
layer: integer # Which layer this belongs to
depends_on: [string] # What this entity needs
depended_by: [string] # What entities need this
# Example
example:
model_user:
type: model
layer: 1
depends_on: []
depended_by: [model_post, api_create_user, api_list_users, api_get_user]
api_create_user:
type: api
layer: 2
depends_on: [model_user]
depended_by: [page_user_create, component_user_form]
page_users:
type: page
layer: 3
depends_on: [api_list_users, component_user_card]
depended_by: []
# ============================================================================
# TASK GENERATION MAP
# ============================================================================
task_map:
description: "Maps entities to implementation tasks with context"
task_entry_schema:
entity_id: string # model_user, api_create_user, etc.
task_id: string # task_create_model_user
layer: integer # Execution layer
agent: enum # frontend | backend
# Context to pass to subagent (snapshot from design_document)
context:
# For models
model_definition:
fields: [object]
relations: [object]
validations: [object]
# For APIs
api_contract:
method: string
path: string
request_body: object
responses: [object]
auth: object
# For pages
page_definition:
path: string
data_needs: [object]
components: [string]
auth: object
# For components
component_definition:
props: [object]
events: [object]
uses_apis: [string]
# Shared context
related_models: [object] # Models this entity interacts with
related_apis: [object] # APIs this entity needs/provides
# Dependencies as task IDs
depends_on_tasks: [string] # Task IDs that must complete first
# Output definition
outputs:
files: [string] # Files this task will create
provides: [string] # Entity IDs this task provides
# ============================================================================
# EXECUTION PLAN
# ============================================================================
execution_plan:
description: "Concrete execution order for workflow orchestrator"
phase_schema:
phase: integer # 1, 2, 3... (maps to layers)
# Parallel batch within phase
parallel_batch:
- task_id: string
entity_id: string
agent: enum
# Full context blob for subagent
context_file: string # Path to context snapshot file
# Expected outputs
expected_files: [string]
# Validation to run after completion
validation:
- type: enum # file_exists | lint | typecheck | test
target: string
# Example
example:
- phase: 1
parallel_batch:
- task_id: task_create_model_user
entity_id: model_user
agent: backend
context_file: .workflow/versions/v001/contexts/model_user.yml
expected_files: [prisma/schema.prisma, app/models/user.ts]
validation:
- type: typecheck
target: app/models/user.ts
- task_id: task_create_model_post
entity_id: model_post
agent: backend
context_file: .workflow/versions/v001/contexts/model_post.yml
expected_files: [prisma/schema.prisma, app/models/post.ts]
validation:
- type: typecheck
target: app/models/post.ts
- phase: 2
parallel_batch:
- task_id: task_create_api_create_user
entity_id: api_create_user
agent: backend
context_file: .workflow/versions/v001/contexts/api_create_user.yml
expected_files: [app/api/users/route.ts]
validation:
- type: lint
target: app/api/users/route.ts
- type: typecheck
target: app/api/users/route.ts
# ============================================================================
# CONTEXT SNAPSHOT SCHEMA
# ============================================================================
context_snapshot:
description: "Schema for per-task context files passed to subagents"
snapshot_schema:
# Metadata
task_id: string
entity_id: string
generated_at: timestamp
workflow_version: string
# The entity being implemented
target:
type: enum # model | api | page | component
definition: object # Full definition from design_document
# Related entities (for reference)
related:
models: [object] # Model definitions this task needs to know about
apis: [object] # API contracts this task needs to know about
components: [object] # Component definitions this task needs
# Dependency chain
dependencies:
completed: [string] # Entity IDs already implemented
pending: [string] # Entity IDs not yet implemented (shouldn't depend on)
# File context
files:
to_create: [string] # Files this task should create
to_modify: [string] # Files this task may modify
reference: [string] # Files to read for context
# Acceptance criteria
acceptance:
- criterion: string # What must be true
validation: string # How to verify
# Implementation hints
hints:
patterns: [string] # Patterns to follow (from existing codebase)
avoid: [string] # Anti-patterns to avoid
examples: [string] # Example file paths to reference
# ============================================================================
# GRAPH GENERATION RULES
# ============================================================================
generation_rules:
layer_assignment:
- "Models with no relations → Layer 1"
- "Models with relations to Layer 1 models → Layer 1 (parallel)"
- "APIs depending only on models → Layer 2"
- "Components with no API deps → Layer 3 (parallel with pages)"
- "Pages and components with API deps → Layer 3+"
- "Recursive: if all deps in Layer N, assign to Layer N+1"
parallelism:
- "Items in same layer with no inter-dependencies can run in parallel"
- "Max parallelism = min(layer_item_count, configured_max_agents)"
- "Group by agent type for efficient batching"
context_generation:
- "Include full definition of target entity"
- "Include definitions of all direct dependencies"
- "Include one-level of indirect dependencies for context"
- "Exclude unrelated entities to minimize context size"
validation:
- "No circular dependencies (would prevent layer assignment)"
- "All dependency targets must exist in design_document"
- "Each entity must be in exactly one layer"
- "Layer numbers must be consecutive starting from 1"

View File

@ -0,0 +1,463 @@
# Design Document Schema
# The source of truth for system design - all tasks derive from this
# Created during DESIGNING phase, approved before IMPLEMENTING
# ============================================================================
# DOCUMENT METADATA
# ============================================================================
design_document:
# Links to workflow
workflow_version: string # e.g., v001
feature: string # Feature being implemented
# Timestamps
created_at: timestamp
updated_at: timestamp
approved_at: timestamp | null
# Design status
status: draft | review | approved | rejected
# Revision tracking
revision: integer # Increments on changes
revision_notes: string # What changed in this revision
# ============================================================================
# LAYER 1: DATA MODELS (ER Diagram)
# ============================================================================
data_models:
description: "Database entities and their relationships"
model_schema:
# Identity
id: string # model_<name> (e.g., model_user, model_post)
name: string # PascalCase entity name (e.g., User, Post)
description: string # What this model represents
# Table/Collection info
table_name: string # snake_case (e.g., users, posts)
# Fields
fields:
- name: string # snake_case field name
type: enum # string | integer | boolean | datetime | uuid | json | text | float | decimal | enum
constraints: [enum] # primary_key | foreign_key | unique | not_null | indexed | auto_increment | default
default: any # Default value if constraint includes 'default'
enum_values: [string] # If type is 'enum', list valid values
description: string # Field purpose
# Relations to other models
relations:
- type: enum # has_one | has_many | belongs_to | many_to_many
target: string # Target model_id (e.g., model_post)
foreign_key: string # FK field name
through: string # Junction table for many_to_many
on_delete: enum # cascade | set_null | restrict | no_action
# Indexes
indexes:
- fields: [string] # Fields in index
unique: boolean # Is unique index
name: string # Index name
# Timestamps (common pattern)
timestamps: boolean # Auto-add created_at, updated_at
soft_delete: boolean # Add deleted_at for soft deletes
# Validation rules (business logic)
validations:
- field: string # Field to validate
rule: string # Validation rule (e.g., "email", "min:8", "max:100")
message: string # Error message
# Example
example_model:
id: model_user
name: User
description: "Application user account"
table_name: users
fields:
- name: id
type: uuid
constraints: [primary_key]
description: "Unique identifier"
- name: email
type: string
constraints: [unique, not_null, indexed]
description: "User email address"
- name: name
type: string
constraints: [not_null]
description: "Display name"
- name: password_hash
type: string
constraints: [not_null]
description: "Bcrypt hashed password"
- name: role
type: enum
enum_values: [user, admin, moderator]
constraints: [not_null, default]
default: user
description: "User role for authorization"
relations:
- type: has_many
target: model_post
foreign_key: user_id
on_delete: cascade
timestamps: true
soft_delete: false
validations:
- field: email
rule: email
message: "Invalid email format"
- field: password_hash
rule: min:60
message: "Invalid password hash"
# ============================================================================
# LAYER 2: API ENDPOINTS
# ============================================================================
api_endpoints:
description: "REST API endpoints with request/response contracts"
endpoint_schema:
# Identity
id: string # api_<verb>_<resource> (e.g., api_create_user)
# HTTP
method: enum # GET | POST | PUT | PATCH | DELETE
path: string # URL path (e.g., /api/users/:id)
# Description
summary: string # Short description
description: string # Detailed description
# Tags for grouping
tags: [string] # e.g., [users, authentication]
# Path parameters
path_params:
- name: string # Parameter name (e.g., id)
type: string # Data type
description: string
# Query parameters (for GET)
query_params:
- name: string # Parameter name
type: string # Data type
required: boolean
default: any
description: string
# Request body (for POST/PUT/PATCH)
request_body:
content_type: string # application/json
schema:
type: object | array
properties:
- name: string
type: string
required: boolean
validations: [string] # Validation rules
description: string
example: object # Example request body
# Response schemas by status code
responses:
- status: integer # HTTP status code
description: string
schema:
type: object | array
properties:
- name: string
type: string
example: object
# Dependencies
depends_on_models: [string] # model_ids this endpoint uses
depends_on_apis: [string] # api_ids this endpoint calls (internal)
# Authentication/Authorization
auth:
required: boolean
roles: [string] # Required roles (empty = any authenticated)
# Rate limiting
rate_limit:
requests: integer # Max requests
window: string # Time window (e.g., "1m", "1h")
# Example
example_endpoint:
id: api_create_user
method: POST
path: /api/users
summary: "Create a new user"
description: "Register a new user account with email and password"
tags: [users, authentication]
request_body:
content_type: application/json
schema:
type: object
properties:
- name: email
type: string
required: true
validations: [email]
description: "User email address"
- name: name
type: string
required: true
validations: [min:1, max:100]
description: "Display name"
- name: password
type: string
required: true
validations: [min:8]
description: "Password (will be hashed)"
example:
email: "user@example.com"
name: "John Doe"
password: "securepass123"
responses:
- status: 201
description: "User created successfully"
schema:
type: object
properties:
- name: id
type: uuid
- name: email
type: string
- name: name
type: string
- name: created_at
type: datetime
example:
id: "550e8400-e29b-41d4-a716-446655440000"
email: "user@example.com"
name: "John Doe"
created_at: "2025-01-16T10:00:00Z"
- status: 400
description: "Validation error"
schema:
type: object
properties:
- name: error
type: string
- name: details
type: array
example:
error: "Validation failed"
details: ["Email is invalid", "Password too short"]
- status: 409
description: "Email already exists"
schema:
type: object
properties:
- name: error
type: string
example:
error: "Email already registered"
depends_on_models: [model_user]
depends_on_apis: []
auth:
required: false
roles: []
# ============================================================================
# LAYER 3: UI PAGES
# ============================================================================
pages:
description: "Application pages/routes"
page_schema:
# Identity
id: string # page_<name> (e.g., page_users, page_user_detail)
name: string # Human-readable name
# Routing
path: string # URL path (e.g., /users, /users/[id])
layout: string # Layout component to use
# Data requirements
data_needs:
- api_id: string # API endpoint to call
purpose: string # Why this data is needed
on_load: boolean # Fetch on page load
# Components used
components: [string] # component_ids used on this page
# SEO
seo:
title: string
description: string
# Auth requirements
auth:
required: boolean
roles: [string]
redirect: string # Where to redirect if not authorized
# State management
state:
local: [string] # Local state variables
global: [string] # Global state dependencies
# Example
example_page:
id: page_users
name: "Users List"
path: /users
layout: layout_dashboard
data_needs:
- api_id: api_list_users
purpose: "Display user list"
on_load: true
components: [component_user_list, component_user_card, component_pagination]
seo:
title: "Users"
description: "View all users"
auth:
required: true
roles: [admin]
redirect: /login
# ============================================================================
# LAYER 3: UI COMPONENTS
# ============================================================================
components:
description: "Reusable UI components"
component_schema:
# Identity
id: string # component_<name> (e.g., component_user_card)
name: string # PascalCase component name
# Props (input)
props:
- name: string # Prop name
type: string # TypeScript type
required: boolean
default: any
description: string
# Events (output)
events:
- name: string # Event name (e.g., onClick, onSubmit)
payload: string # Payload type
description: string
# API calls (if component fetches data)
uses_apis: [string] # api_ids this component calls directly
# Child components
uses_components: [string] # component_ids used inside this component
# State
internal_state: [string] # Internal state variables
# Styling
variants: [string] # Style variants (e.g., primary, secondary)
# Example
example_component:
id: component_user_card
name: UserCard
props:
- name: user
type: User
required: true
description: "User object to display"
- name: showActions
type: boolean
required: false
default: true
description: "Show edit/delete buttons"
events:
- name: onEdit
payload: "User"
description: "Fired when edit button clicked"
- name: onDelete
payload: "string"
description: "Fired when delete confirmed, payload is user ID"
uses_apis: []
uses_components: [component_avatar, component_button]
internal_state: [isDeleting]
variants: [default, compact]
# ============================================================================
# DEPENDENCY GRAPH (Auto-generated from above)
# ============================================================================
dependency_graph:
description: "Execution order based on dependencies - auto-generated"
# Layers for parallel execution
layers:
- layer: 1
name: "Data Models"
description: "Database schema - no dependencies"
items:
- id: string # Entity ID
type: model # model | api | page | component
dependencies: [] # Empty for layer 1
- layer: 2
name: "API Endpoints"
description: "Backend APIs - depend on models"
items:
- id: string
type: api
dependencies: [string] # model_ids
- layer: 3
name: "UI Layer"
description: "Pages and components - depend on APIs"
items:
- id: string
type: page | component
dependencies: [string] # api_ids, component_ids
# Full dependency map for visualization
dependency_map:
model_user:
depends_on: []
depended_by: [api_create_user, api_list_users, api_get_user]
api_create_user:
depends_on: [model_user]
depended_by: [page_user_create, component_user_form]
page_users:
depends_on: [api_list_users, component_user_list]
depended_by: []
# ============================================================================
# DESIGN VALIDATION RULES
# ============================================================================
validation_rules:
models:
- "Every model must have a primary_key field"
- "Foreign keys must reference existing models"
- "Relation targets must exist in data_models"
- "Enum types must have enum_values defined"
apis:
- "Every API must have at least one response defined"
- "POST/PUT/PATCH must have request_body"
- "depends_on_models must reference existing models"
- "Path params must match :param patterns in path"
pages:
- "data_needs must reference existing api_ids"
- "components must reference existing component_ids"
- "auth.redirect must be a valid path"
components:
- "uses_apis must reference existing api_ids"
- "uses_components must reference existing component_ids"
- "No circular component dependencies"
graph:
- "No circular dependencies in dependency_graph"
- "All entities must be assigned to a layer"
- "Layer N items can only depend on Layer < N items"

View File

@ -0,0 +1,364 @@
# Implementation Task Template Schema
# Used by the Architect agent to create implementation tasks
# Tasks are generated from design_document.yml with full context
# ============================================================================
# TASK DEFINITION
# ============================================================================
task:
# Required fields
id: task_<type>_<entity> # e.g., task_create_model_user, task_create_api_users
type: create | update | delete | refactor | test | review
title: string # Human-readable title
agent: frontend | backend # Which agent implements this
entity_id: string # Primary entity ID from design_document
entity_ids: [string] # All entity IDs this task covers (for multi-entity tasks)
status: pending | in_progress | review | approved | completed | blocked
# Execution layer (from dependency_graph)
layer: integer # Which layer this task belongs to (1, 2, 3...)
parallel_group: string # Group ID for parallel execution
# Optional fields
description: string # Detailed implementation notes
file_paths: [string] # Files to create/modify
dependencies: [string] # Task IDs that must complete first
acceptance_criteria: [string] # Checklist for completion
priority: low | medium | high # Task priority
complexity: low | medium | high # Estimated complexity
# Tracking (set by system)
created_at: datetime
assigned_at: datetime
completed_at: datetime
reviewed_by: string
review_notes: string
# ============================================================================
# CONTEXT SECTION (Passed to subagent)
# ============================================================================
# This is the critical section that provides full context to subagents
# Generated from design_document.yml during task creation
context:
# Source reference
design_version: string # design_document revision
workflow_version: string # Workflow version (v001, etc.)
context_snapshot_path: string # Path to full context file
# TARGET: What this task implements
target:
entity_id: string # model_user, api_create_user, etc.
entity_type: model | api | page | component
definition: object # Full definition from design_document
# For models
model:
name: string
table_name: string
fields: [field_definition]
relations: [relation_definition]
validations: [validation_rule]
indexes: [index_definition]
# For APIs
api:
method: string
path: string
summary: string
path_params: [param_definition]
query_params: [param_definition]
request_body: object
responses: [response_definition]
auth: object
# For pages
page:
path: string
layout: string
data_needs: [data_requirement]
components: [string]
seo: object
auth: object
# For components
component:
name: string
props: [prop_definition]
events: [event_definition]
uses_apis: [string]
uses_components: [string]
variants: [string]
# DEPENDENCIES: What this task needs
dependencies:
# Models this task interacts with
models:
- id: string # model_user
definition:
name: string
fields: [field_definition]
relations: [relation_definition]
# APIs this task needs
apis:
- id: string # api_get_user
definition:
method: string
path: string
request_body: object
responses: [object]
# Components this task uses
components:
- id: string # component_button
definition:
props: [prop_definition]
events: [event_definition]
# CONTRACTS: Input/Output specifications
contracts:
# What this task receives from previous tasks
inputs:
- from_task: string # task_create_model_user
provides: string # model_user
type: model | api | component | file
# What this task provides to later tasks
outputs:
- entity_id: string # api_create_user
type: model | api | component | file
consumers: [string] # [page_user_create, component_user_form]
# FILES: File operations
files:
# Files to create
create: [string]
# Files to modify
modify: [string]
# Files to read for patterns/context
reference:
- path: string
purpose: string # "Similar component pattern", "API route pattern"
# VALIDATION: How to verify completion
validation:
# Required checks
checks:
- type: file_exists | lint | typecheck | test | build
target: string # File or test pattern
required: boolean
# Acceptance criteria (human-readable)
criteria:
- criterion: string
verification: string # How to verify this
# HINTS: Implementation guidance
hints:
# Patterns to follow
patterns:
- pattern: string # "Use existing API route pattern"
reference: string # "app/api/health/route.ts"
# Things to avoid
avoid:
- issue: string
reason: string
# Code examples
examples:
- description: string
file: string
# ============================================================================
# TASK GENERATION RULES
# ============================================================================
generation_rules:
from_model:
task_id: "task_create_model_{model_name}"
type: create
agent: backend
file_paths:
- "prisma/schema.prisma" # Add model to schema
- "app/models/{model_name}.ts" # TypeScript types
acceptance_criteria:
- "Model defined in Prisma schema"
- "TypeScript types exported"
- "Relations properly configured"
- "Migrations generated"
from_api:
task_id: "task_create_api_{endpoint_name}"
type: create
agent: backend
file_paths:
- "app/api/{path}/route.ts"
acceptance_criteria:
- "Endpoint responds to {method} requests"
- "Request validation implemented"
- "Response matches contract"
- "Auth requirements enforced"
- "Error handling complete"
from_page:
task_id: "task_create_page_{page_name}"
type: create
agent: frontend
file_paths:
- "app/{path}/page.tsx"
acceptance_criteria:
- "Page renders at {path}"
- "Data fetching implemented"
- "Components integrated"
- "Auth protection active"
- "SEO metadata set"
from_component:
task_id: "task_create_component_{component_name}"
type: create
agent: frontend
file_paths:
- "app/components/{ComponentName}.tsx"
acceptance_criteria:
- "Component renders correctly"
- "Props typed and documented"
- "Events emitted properly"
- "Variants implemented"
- "Accessible (a11y)"
# ============================================================================
# VALID STATUS TRANSITIONS
# ============================================================================
status_transitions:
pending:
- in_progress # Start work
- blocked # Dependencies not met
in_progress:
- review # Ready for review
- blocked # Hit blocker
review:
- approved # Review passed
- in_progress # Changes requested
approved:
- completed # Final completion
blocked:
- pending # Blocker resolved
- in_progress # Resume after unblock
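A guard sketch that encodes the table above; any move not listed is rejected:
```python
VALID_TRANSITIONS = {
    "pending": {"in_progress", "blocked"},
    "in_progress": {"review", "blocked"},
    "review": {"approved", "in_progress"},
    "approved": {"completed"},
    "blocked": {"pending", "in_progress"},
}

def transition(task: dict, new_status: str) -> dict:
    """Mutate task status only along a valid edge."""
    allowed = VALID_TRANSITIONS.get(task["status"], set())
    if new_status not in allowed:
        raise ValueError(f"illegal transition: {task['status']} -> {new_status}")
    task["status"] = new_status
    return task
```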
# ============================================================================
# EXAMPLE: Complete Task with Context
# ============================================================================
example_task:
id: task_create_api_create_user
type: create
title: "Create User API Endpoint"
agent: backend
entity_id: api_create_user
entity_ids: [api_create_user]
status: pending
layer: 2
parallel_group: "layer_2_apis"
description: "Implement POST /api/users endpoint for user registration"
file_paths:
- app/api/users/route.ts
dependencies:
- task_create_model_user
acceptance_criteria:
- "POST /api/users returns 201 on success"
- "Validates email format"
- "Returns 409 if email exists"
- "Hashes password before storage"
- "Returns user object without password"
priority: high
complexity: medium
context:
design_version: "rev_3"
workflow_version: "v001"
context_snapshot_path: ".workflow/versions/v001/contexts/api_create_user.yml"
target:
entity_id: api_create_user
entity_type: api
api:
method: POST
path: /api/users
summary: "Create a new user"
request_body:
type: object
properties:
email: { type: string, required: true, validation: email }
name: { type: string, required: true, validation: "min:1,max:100" }
password: { type: string, required: true, validation: "min:8" }
responses:
- status: 201
schema: { id: uuid, email: string, name: string, created_at: datetime }
- status: 400
schema: { error: string, details: array }
- status: 409
schema: { error: string }
auth:
required: false
dependencies:
models:
- id: model_user
definition:
name: User
table_name: users
fields:
- { name: id, type: uuid, constraints: [primary_key] }
- { name: email, type: string, constraints: [unique, not_null] }
- { name: name, type: string, constraints: [not_null] }
- { name: password_hash, type: string, constraints: [not_null] }
- { name: created_at, type: datetime, constraints: [not_null] }
contracts:
inputs:
- from_task: task_create_model_user
provides: model_user
type: model
outputs:
- entity_id: api_create_user
type: api
consumers: [page_signup, component_signup_form]
files:
create:
- app/api/users/route.ts
reference:
- path: app/api/health/route.ts
purpose: "API route pattern"
- path: app/lib/db.ts
purpose: "Database connection"
- path: app/lib/auth.ts
purpose: "Password hashing"
validation:
checks:
- { type: typecheck, target: "app/api/users/route.ts", required: true }
- { type: lint, target: "app/api/users/route.ts", required: true }
- { type: test, target: "app/api/users/*.test.ts", required: false }
criteria:
- criterion: "Returns 201 with user object on success"
verification: "curl -X POST /api/users with valid data"
- criterion: "Returns 409 if email exists"
verification: "curl -X POST /api/users with duplicate email"
hints:
patterns:
- pattern: "Use NextResponse for responses"
reference: "app/api/health/route.ts"
- pattern: "Use Prisma for database operations"
reference: "app/lib/db.ts"
avoid:
- issue: "Don't store plain text passwords"
reason: "Security vulnerability - always hash with bcrypt"
- issue: "Don't return password_hash in response"
reason: "Sensitive data exposure"
examples:
- description: "Similar API endpoint"
file: "app/api/health/route.ts"

View File

@ -0,0 +1,116 @@
# Workflow State Schema
# Tracks automated workflow progress with approval gates
workflow_state:
# Unique workflow run ID
id: string # workflow_<timestamp>
# Feature/task being implemented
feature: string
# Current phase in the workflow
current_phase:
enum:
- INITIALIZING # Starting workflow
- DESIGNING # Architect creating entities/tasks
- AWAITING_DESIGN_APPROVAL # Gate 1: User approval needed
- DESIGN_APPROVED # User approved design
- DESIGN_REJECTED # User rejected, needs revision
- IMPLEMENTING # Frontend/Backend working
- REVIEWING # Reviewer checking implementation
- AWAITING_IMPL_APPROVAL # Gate 2: User approval needed
- IMPL_APPROVED # User approved implementation
- IMPL_REJECTED # User rejected, needs fixes
- COMPLETING # Marking tasks as done
- COMPLETED # Workflow finished
- PAUSED # User paused workflow
- FAILED # Workflow encountered error
# Approval gates status
gates:
design_approval:
status: pending | approved | rejected
approved_at: timestamp | null
approved_by: string | null
rejection_reason: string | null
revision_count: integer
implementation_approval:
status: pending | approved | rejected
approved_at: timestamp | null
approved_by: string | null
rejection_reason: string | null
revision_count: integer
# Progress tracking
progress:
entities_designed: integer
tasks_created: integer
tasks_implemented: integer
tasks_reviewed: integer
tasks_approved: integer
tasks_completed: integer
# Task tracking
tasks:
pending: [task_id]
in_progress: [task_id]
review: [task_id]
approved: [task_id]
completed: [task_id]
blocked: [task_id]
# Timestamps
started_at: timestamp
updated_at: timestamp
completed_at: timestamp | null
# Error tracking
last_error: string | null
# Resumability
resume_point:
phase: string
task_id: string | null
action: string # What to do when resuming
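For illustration, resolving an approval gate into its follow-up phase could look like the sketch below; the phase names come from the enum above, but the real orchestrator may route differently:
```python
def next_phase(state: dict) -> str:
    """Map a gated phase to its follow-up phase based on gate status (sketch)."""
    phase = state["current_phase"]
    gates = state["gates"]
    if phase == "AWAITING_DESIGN_APPROVAL":
        status = gates["design_approval"]["status"]
        return {"approved": "DESIGN_APPROVED", "rejected": "DESIGN_REJECTED"}.get(status, phase)
    if phase == "AWAITING_IMPL_APPROVAL":
        status = gates["implementation_approval"]["status"]
        return {"approved": "IMPL_APPROVED", "rejected": "IMPL_REJECTED"}.get(status, phase)
    return phase
```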
# Example workflow state file
example:
id: workflow_20250116_143022
feature: "User authentication with OAuth"
current_phase: AWAITING_DESIGN_APPROVAL
gates:
design_approval:
status: pending
approved_at: null
approved_by: null
rejection_reason: null
revision_count: 0
implementation_approval:
status: pending
approved_at: null
approved_by: null
rejection_reason: null
revision_count: 0
progress:
entities_designed: 5
tasks_created: 8
tasks_implemented: 0
tasks_reviewed: 0
tasks_approved: 0
tasks_completed: 0
tasks:
pending: [task_create_LoginPage, task_create_AuthAPI]
in_progress: []
review: []
approved: []
completed: []
blocked: []
started_at: "2025-01-16T14:30:22Z"
updated_at: "2025-01-16T14:35:00Z"
completed_at: null
last_error: null
resume_point:
phase: AWAITING_DESIGN_APPROVAL
task_id: null
action: "await_user_approval"

View File

@ -0,0 +1,323 @@
# Workflow Versioning Schema
# Links workflow sessions with task sessions and operations
# ============================================================================
# WORKFLOW SESSION (Top Level)
# ============================================================================
workflow_session:
# Unique version identifier
version: string # v001, v002, v003...
# Feature being implemented
feature: string
# Session metadata
session_id: string # workflow_<timestamp>
parent_version: string | null # If this is a continuation/fix
# Status
status: pending | in_progress | completed | failed | rolled_back
# Timestamps
started_at: timestamp
completed_at: timestamp | null
# Approval records
approvals:
design:
status: pending | approved | rejected
approved_by: string | null
approved_at: timestamp | null
rejection_reason: string | null
implementation:
status: pending | approved | rejected
approved_by: string | null
approved_at: timestamp | null
rejection_reason: string | null
# Linked task sessions
task_sessions: [task_session_id]
# Aggregate summary
summary:
total_tasks: integer
tasks_completed: integer
entities_created: integer
entities_updated: integer
entities_deleted: integer
files_created: integer
files_updated: integer
files_deleted: integer
# ============================================================================
# TASK SESSION (Per Task)
# ============================================================================
task_session:
# Unique identifier
session_id: string # tasksession_<task_id>_<timestamp>
# Link to parent workflow
workflow_version: string # v001
# Task reference
task_id: string
task_type: create | update | delete | refactor | test
# Agent info
agent: frontend | backend | reviewer | architect
# Timestamps
started_at: timestamp
completed_at: timestamp | null
duration_ms: integer | null
# Status
status: pending | in_progress | review | approved | completed | failed | blocked
# Operations performed in this session
operations: [operation]
# Review link (if reviewed)
review_session: review_session | null
# Error tracking
errors: [error_record]
# Retry info
attempt_number: integer # 1, 2, 3...
previous_attempts: [session_id]
# ============================================================================
# OPERATION (Atomic Change)
# ============================================================================
operation:
# Unique operation ID
id: string # op_<timestamp>_<sequence>
# Operation type
type: CREATE | UPDATE | DELETE | RENAME | MOVE
# Target
target_type: file | entity | task | manifest
target_id: string # entity_id or file path
target_path: string | null # file path if applicable
# Change details
changes:
before: string | null # Previous state/content hash
after: string | null # New state/content hash
diff_summary: string # Human-readable summary
# Timestamp
performed_at: timestamp
# Reversibility
reversible: boolean
rollback_data: object | null # Data needed to reverse
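A rollback sketch for the two `rollback_data` actions that appear in the example session below (`delete_file`, `restore_content`); content-addressed restore assumes an object store keyed by hash, which this schema implies but does not define:
```python
import os

def apply_rollback(op: dict) -> None:
    """Reverse a single operation using its rollback_data (sketch)."""
    if not op.get("reversible"):
        raise ValueError(f"operation {op.get('id')} is not reversible")
    data = op.get("rollback_data") or {}
    action = data.get("action")
    if action == "delete_file":
        os.remove(data["path"])
    elif action == "restore_content":
        # Would need a content store mapping hashes back to file contents.
        raise NotImplementedError("content-addressed restore not shown here")
    else:
        raise ValueError(f"unknown rollback action: {action!r}")
```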
# ============================================================================
# REVIEW SESSION
# ============================================================================
review_session:
session_id: string # review_<task_id>_<timestamp>
# Links
task_session_id: string
workflow_version: string
# Reviewer
reviewer: string # "reviewer" agent or user
# Timing
started_at: timestamp
completed_at: timestamp
# Decision
decision: approved | rejected | needs_changes
# Checks performed
checks:
file_exists: pass | fail | skip
manifest_compliance: pass | fail | skip
code_quality: pass | fail | skip
lint: pass | fail | skip
build: pass | fail | skip
tests: pass | fail | skip
# Feedback
notes: string
issues_found: [string]
suggestions: [string]
# ============================================================================
# ERROR RECORD
# ============================================================================
error_record:
timestamp: timestamp
phase: string # Which step failed
error_type: string
message: string
stack_trace: string | null
resolved: boolean
resolution: string | null
# ============================================================================
# VERSION INDEX (Quick Lookup)
# ============================================================================
version_index:
versions:
- version: v001
feature: "User authentication"
status: completed
started_at: timestamp
completed_at: timestamp
tasks_count: 8
operations_count: 15
- version: v002
feature: "Task filters"
status: in_progress
started_at: timestamp
completed_at: null
tasks_count: 5
operations_count: 7
latest_version: v002
total_versions: 2
# ============================================================================
# TASK SESSION DIRECTORY STRUCTURE
# ============================================================================
task_session_directory:
description: "Each task session has its own directory with full context"
path_pattern: ".workflow/versions/{version}/task_sessions/{task_id}/"
files:
session.yml:
description: "Task session metadata (existing schema)"
schema: task_session
task.yml:
description: "Snapshot of task definition at execution time"
fields:
id: string
type: create | update | delete | refactor | test
title: string
agent: frontend | backend | reviewer | architect
status_at_snapshot: string
entity_ids: [string]
file_paths: [string]
dependencies: [string]
description: string
acceptance_criteria: [string]
snapshotted_at: timestamp
source_path: string
operations.log:
description: "Chronological audit trail of all operations"
format: text
entry_pattern: "[{timestamp}] {operation_type} {target_type}: {target_id} ({path})"
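A writer sketch matching `entry_pattern`; the real `log_operation` in version_manager.py may format entries differently:
```python
from datetime import datetime
from pathlib import Path

def append_operation(session_dir: str, operation_type: str, target_type: str,
                     target_id: str, path: str = "") -> None:
    """Append one entry_pattern line to a task session's operations.log."""
    line = f"[{datetime.now().isoformat()}] {operation_type} {target_type}: {target_id} ({path})\n"
    with (Path(session_dir) / "operations.log").open("a") as f:
        f.write(line)

# append_operation(".workflow/versions/v001/task_sessions/task_api",
#                  "CREATE", "file", "api.ts", "app/api/api.ts")
```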
# ============================================================================
# EXAMPLE: Complete Workflow Session
# ============================================================================
example_workflow_session:
version: v001
feature: "User authentication with OAuth"
session_id: workflow_20250116_143022
parent_version: null
status: completed
started_at: "2025-01-16T14:30:22Z"
completed_at: "2025-01-16T15:45:00Z"
approvals:
design:
status: approved
approved_by: user
approved_at: "2025-01-16T14:45:00Z"
rejection_reason: null
implementation:
status: approved
approved_by: user
approved_at: "2025-01-16T15:40:00Z"
rejection_reason: null
task_sessions:
- tasksession_task_create_LoginPage_20250116_144501
- tasksession_task_create_AuthAPI_20250116_145001
- tasksession_task_update_Header_20250116_150001
summary:
total_tasks: 3
tasks_completed: 3
entities_created: 2
entities_updated: 1
entities_deleted: 0
files_created: 3
files_updated: 2
files_deleted: 0
example_task_session:
session_id: tasksession_task_create_LoginPage_20250116_144501
workflow_version: v001
task_id: task_create_LoginPage
task_type: create
agent: frontend
started_at: "2025-01-16T14:45:01Z"
completed_at: "2025-01-16T14:55:00Z"
duration_ms: 599000
status: completed
operations:
- id: op_20250116_144502_001
type: CREATE
target_type: file
target_id: page_login
target_path: app/login/page.tsx
changes:
before: null
after: "sha256:abc123..."
diff_summary: "Created login page with email/password form"
performed_at: "2025-01-16T14:45:02Z"
reversible: true
rollback_data:
action: delete_file
path: app/login/page.tsx
- id: op_20250116_144503_002
type: UPDATE
target_type: manifest
target_id: project_manifest
target_path: project_manifest.json
changes:
before: "sha256:def456..."
after: "sha256:ghi789..."
diff_summary: "Added page_login entity, set status to IMPLEMENTED"
performed_at: "2025-01-16T14:45:03Z"
reversible: true
rollback_data:
action: restore_content
content_hash: "sha256:def456..."
review_session:
session_id: review_task_create_LoginPage_20250116_145501
task_session_id: tasksession_task_create_LoginPage_20250116_144501
workflow_version: v001
reviewer: reviewer
started_at: "2025-01-16T14:55:01Z"
completed_at: "2025-01-16T14:58:00Z"
decision: approved
checks:
file_exists: pass
manifest_compliance: pass
code_quality: pass
lint: pass
build: pass
tests: skip
notes: "Login page implementation matches manifest spec"
issues_found: []
suggestions:
- "Consider adding loading state for form submission"
errors: []
attempt_number: 1
previous_attempts: []

View File

@ -0,0 +1,274 @@
# Task Session Migration Guide
## Overview
This guide explains how to migrate task sessions from the old flat file structure to the new directory-based structure.
## Background
### Old Structure (Flat Files)
```
.workflow/versions/v001/task_sessions/
├── task_design.yml
├── task_implementation.yml
└── task_review.yml
```
### New Structure (Directories)
```
.workflow/versions/v001/task_sessions/
├── task_design/
│ ├── session.yml # Session data (execution info)
│ ├── task.yml # Task snapshot (definition at execution time)
│ └── operations.log # Human-readable operation log
├── task_implementation/
│ ├── session.yml
│ ├── task.yml
│ └── operations.log
└── task_review/
├── session.yml
├── task.yml
└── operations.log
```
## Benefits of New Structure
1. **Better Organization**: Each task session has its own directory
2. **Snapshot Preservation**: Task definitions are captured at execution time
3. **Human-Readable Logs**: Operations log provides easy-to-read history
4. **Extensibility**: Easy to add attachments, artifacts, or outputs per task
5. **Backwards Compatible**: session loaders read either structure, so existing workflows keep working during migration
## Migration Script
### Location
```
skills/guardrail-orchestrator/scripts/migrate_task_sessions.py
```
### Usage
#### Dry Run (Recommended First Step)
```bash
python3 skills/guardrail-orchestrator/scripts/migrate_task_sessions.py --dry-run
```
This will:
- Find all flat task session files
- Report what would be migrated
- Show actions that would be taken
- **NOT make any changes**
#### Live Migration
```bash
python3 skills/guardrail-orchestrator/scripts/migrate_task_sessions.py
```
This will:
- Create directory for each task session
- Move session data to `session.yml`
- Create `task.yml` snapshot
- Generate `operations.log`
- Delete original flat files
## Migration Process
### What the Script Does
For each flat task session file (e.g., `task_design.yml`):
1. **Create Directory**: `task_sessions/task_design/`
2. **Move Session Data**:
- Read original `task_design.yml`
- Save to `task_design/session.yml`
- Delete original file
3. **Create Task Snapshot**:
- Look for `tasks/task_design.yml`
- If found: Copy and add snapshot metadata
- If not found: Create minimal task.yml from session data
- Save to `task_design/task.yml`
4. **Create Operations Log**:
- Initialize `task_design/operations.log`
- Add migration note
- If session has operations array, convert to log format
- Human-readable format with timestamps
### Task Snapshot Metadata
When a task definition is found, these fields are added:
```yaml
snapshotted_at: '2025-12-16T12:00:00'
source_path: 'tasks/task_design.yml'
status_at_snapshot: 'completed'
migration_note: 'Created during migration from flat file structure'
```
### Operations Log Format
```
# Operations Log for task_design
# Migrated: 2025-12-16T12:00:00
# Format: [timestamp] OPERATION target_type: target_id (path)
======================================================================
[2025-12-16T12:00:00] MIGRATION: Converted from flat file structure
# Historical operations from session data:
[2025-12-16T11:00:00] CREATE file: auth.ts (app/lib/auth.ts)
Summary: Created authentication module
[2025-12-16T11:15:00] UPDATE entity: User (app/lib/types.ts)
Summary: Added email field to User type
```
## Migration Results
### Success Output
```
======================================================================
Migration Summary
======================================================================
Total files processed: 3
Successful migrations: 3
Failed migrations: 0
Migration completed successfully!
Next steps:
1. Verify migrated files in .workflow/versions/*/task_sessions/
2. Check that each task has session.yml, task.yml, and operations.log
3. Test the system to ensure compatibility
```
### Dry Run Output
```
Processing: v001/task_design.yml
----------------------------------------------------------------------
Would create directory: .workflow/versions/v001/task_sessions/task_design
Would move task_design.yml to .workflow/versions/v001/task_sessions/task_design/session.yml
Would create .workflow/versions/v001/task_sessions/task_design/task.yml (if source exists)
Would create .workflow/versions/v001/task_sessions/task_design/operations.log
This was a DRY RUN. No files were modified.
Run without --dry-run to perform the migration.
```
## Verification Steps
After migration, verify the structure:
```bash
# Check directory structure
ls -la .workflow/versions/v001/task_sessions/task_design/
# Should show:
# session.yml
# task.yml
# operations.log
# Verify session data
cat .workflow/versions/v001/task_sessions/task_design/session.yml
# Verify task snapshot
cat .workflow/versions/v001/task_sessions/task_design/task.yml
# Check operations log
cat .workflow/versions/v001/task_sessions/task_design/operations.log
```
## Backwards Compatibility
The `version_manager.py` module includes backwards-compatible loading:
```python
def load_task_session(version: str, task_id: str) -> Optional[dict]:
"""Load a task session from directory or flat file (backwards compatible)."""
# Try new directory structure first
session_dir = get_version_dir(version) / 'task_sessions' / task_id
session_path = session_dir / 'session.yml'
if session_path.exists():
return load_yaml(str(session_path))
# Fallback to old flat file structure
old_path = get_version_dir(version) / 'task_sessions' / f'{task_id}.yml'
if old_path.exists():
return load_yaml(str(old_path))
return None
```
This means:
- New code works with both structures
- No breaking changes for existing workflows
- Migration can be done gradually
- Rollback is possible if needed
## Troubleshooting
### No Files Found
If the script reports "No flat task session files found":
- Check that `.workflow/versions/` exists
- Verify that task sessions are in expected location
- Confirm files have `.yml` or `.yaml` extension
- May indicate all sessions are already migrated
### Task File Not Found
If `tasks/task_id.yml` doesn't exist:
- Script creates minimal task.yml from session data
- Warning is logged but migration continues
- Check `task.yml` has `migration_note` field
### Migration Errors
If migration fails:
- Review error message in output
- Check file permissions
- Verify disk space
- Try dry-run mode to diagnose
### Rollback (If Needed)
To rollback a migration:
1. Stop any running workflows
2. For each migrated directory:
```bash
# Copy session.yml back to flat file
cp .workflow/versions/v001/task_sessions/task_design/session.yml \
.workflow/versions/v001/task_sessions/task_design.yml
# Remove directory
rm -rf .workflow/versions/v001/task_sessions/task_design/
```
## Best Practices
1. **Always dry-run first**: Use `--dry-run` to preview changes
2. **Backup before migration**: Copy `.workflow/` directory
3. **Migrate per version**: Test one version before migrating all
4. **Verify after migration**: Check files and run system tests
5. **Keep old backups**: Don't delete backups immediately
## Integration with Workflow System
After migration, all workflow operations work seamlessly:
```python
# Start task session (creates directory structure)
session = create_workflow_session("new feature", None)
task_session = create_task_session(session, "task_api", "create", "backend")
# Load task session (works with both structures)
task = load_task_session("v001", "task_design")
# Log operations (appends to operations.log)
log_operation(task, "CREATE", "file", "api.ts", target_path="app/api/api.ts")
```
## Additional Resources
- `version_manager.py`: Core versioning system
- `workflow_manager.py`: Workflow orchestration
- `.workflow/operations.log`: Global operations log
- `.workflow/index.yml`: Version index

View File

@ -0,0 +1,486 @@
#!/usr/bin/env python3
"""Analyze codebase and generate project manifest from existing code."""
import argparse
import json
import os
import re
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
def find_files(base_path: str, pattern: str) -> list[str]:
"""Find files matching a glob pattern."""
base = Path(base_path)
return [str(p.relative_to(base)) for p in base.glob(pattern)]
def read_file(filepath: str) -> str:
"""Read file contents."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
return f.read()
except Exception:
return ""
def extract_component_name(filepath: str) -> str:
"""Extract component name from file path."""
name = Path(filepath).stem
return name
def to_snake_case(name: str) -> str:
"""Convert PascalCase to snake_case."""
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()
def extract_props_from_content(content: str) -> dict:
"""Extract props interface from component content."""
props = {}
# Look for interface Props or type Props
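# Note: [^}]+ stops at the first closing brace, so props typed as inline
# object literals will be truncated; adequate for flat Props interfaces.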
interface_match = re.search(
r'(?:interface|type)\s+\w*Props\w*\s*(?:=\s*)?\{([^}]+)\}',
content,
re.DOTALL
)
if interface_match:
props_block = interface_match.group(1)
# Parse individual props
prop_matches = re.findall(
r'(\w+)(\?)?:\s*([^;,\n]+)',
props_block
)
for name, optional, prop_type in prop_matches:
props[name] = {
"type": prop_type.strip(),
"optional": bool(optional)
}
return props
def extract_imports(content: str) -> list[str]:
"""Extract component imports from file."""
imports = []
# Look for imports from components directory
import_matches = re.findall(
r"import\s+\{?\s*([^}]+)\s*\}?\s+from\s+['\"]\.\.?/components/(\w+)['\"]",
content
)
for imported, component in import_matches:
imports.append(component)
# Also check for direct component imports
direct_imports = re.findall(
r"import\s+(\w+)\s+from\s+['\"]\.\.?/components/(\w+)['\"]",
content
)
for imported, component in direct_imports:
imports.append(component)
return list(set(imports))
def extract_api_methods(content: str) -> list[str]:
"""Extract HTTP methods from API route file."""
methods = []
method_patterns = ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'HEAD', 'OPTIONS']
for method in method_patterns:
if re.search(rf'export\s+(?:async\s+)?function\s+{method}\s*\(', content):
methods.append(method)
return methods
def extract_fetch_calls(content: str) -> list[str]:
"""Extract API fetch calls from content."""
apis = []
# Look for fetch('/api/...') patterns - handle static paths
fetch_matches = re.findall(
r"fetch\s*\(\s*['\"`]/api/([^'\"`\?\$\{]+)",
content
)
apis.extend(fetch_matches)
# Look for fetch(`/api/tasks`) or similar template literals with static paths
template_matches = re.findall(
r"fetch\s*\(\s*`/api/(\w+)`",
content
)
apis.extend(template_matches)
# Clean up: remove trailing slashes and normalize
cleaned = []
for api in apis:
api = api.rstrip('/')
if api and not api.startswith('$'):
cleaned.append(api)
return list(set(cleaned))
def extract_types_from_db(content: str) -> dict:
"""Extract type definitions from db.ts or similar."""
types = {}
# Extract interfaces
interface_matches = re.findall(
r'export\s+interface\s+(\w+)\s*\{([^}]+)\}',
content,
re.DOTALL
)
for name, body in interface_matches:
fields = {}
field_matches = re.findall(r'(\w+)(\?)?:\s*([^;,\n]+)', body)
for field_name, optional, field_type in field_matches:
fields[field_name] = field_type.strip()
types[name] = fields
# Extract type aliases
type_matches = re.findall(
r"export\s+type\s+(\w+)\s*=\s*([^;]+);",
content
)
for name, type_def in type_matches:
types[name] = type_def.strip()
return types
def path_to_route(filepath: str) -> str:
"""Convert file path to route path."""
# Remove app/ prefix and page.tsx suffix
route = filepath.replace('app/', '').replace('/page.tsx', '').replace('page.tsx', '')
if route == '' or route == '/':
return '/'
# Dynamic segments like [id] are kept verbatim (Next.js app-router convention)
# Ensure starts with /
if not route.startswith('/'):
route = '/' + route
return route
def analyze_pages(base_path: str) -> list[dict]:
"""Analyze all page files."""
pages = []
page_files = find_files(base_path, 'app/**/page.tsx')
for filepath in page_files:
full_path = os.path.join(base_path, filepath)
content = read_file(full_path)
route = path_to_route(filepath)
# Generate page ID
if route == '/' or filepath == 'app/page.tsx':
page_id = 'page_home'
name = 'Home'
route = '/'
else:
name = route.strip('/').replace('/', '_').replace('[', '').replace(']', '')
page_id = f'page_{name}'
# Extract component imports
components = extract_imports(content)
comp_ids = [f"comp_{to_snake_case(c)}" for c in components]
# Extract API dependencies
api_calls = extract_fetch_calls(content)
api_ids = [f"api_{a.replace('/', '_')}" for a in api_calls]
pages.append({
"id": page_id,
"path": route,
"file_path": filepath,
"status": "IMPLEMENTED",
"description": f"Page at {route}",
"components": comp_ids,
"data_dependencies": api_ids
})
return pages
def analyze_components(base_path: str) -> list[dict]:
"""Analyze all component files."""
components = []
component_files = find_files(base_path, 'app/components/*.tsx')
for filepath in component_files:
full_path = os.path.join(base_path, filepath)
content = read_file(full_path)
name = extract_component_name(filepath)
comp_id = f"comp_{to_snake_case(name)}"
# Extract props
props = extract_props_from_content(content)
components.append({
"id": comp_id,
"name": name,
"file_path": filepath,
"status": "IMPLEMENTED",
"description": f"{name} component",
"props": props
})
return components
def analyze_apis(base_path: str) -> list[dict]:
"""Analyze all API route files."""
apis = []
api_files = find_files(base_path, 'app/api/**/route.ts')
for filepath in api_files:
full_path = os.path.join(base_path, filepath)
content = read_file(full_path)
# Extract path from file location
path = '/' + filepath.replace('app/', '').replace('/route.ts', '')
# Extract HTTP methods
methods = extract_api_methods(content)
for method in methods:
# Generate action name from method
action_map = {
'GET': 'list' if '[' not in path else 'get',
'POST': 'create',
'PUT': 'update',
'DELETE': 'delete',
'PATCH': 'patch'
}
action = action_map.get(method, method.lower())
# Generate resource name from path
resource = path.replace('/api/', '').replace('/', '_').replace('[', '').replace(']', '')
if not resource:
resource = 'root'
api_id = f"api_{action}_{resource}"
apis.append({
"id": api_id,
"path": path,
"method": method,
"file_path": filepath,
"status": "IMPLEMENTED",
"description": f"{method} {path}",
"request": {},
"response": {
"type": "object",
"description": "Response data"
}
})
return apis
def analyze_database(base_path: str) -> tuple[list[dict], dict]:
"""Analyze database/type files."""
tables = []
types = {}
# Check for db.ts file
db_path = os.path.join(base_path, 'app/lib/db.ts')
if os.path.exists(db_path):
content = read_file(db_path)
types = extract_types_from_db(content)
# Look for table/collection definitions
if 'tasks' in content.lower():
tables.append({
"id": "table_tasks",
"name": "tasks",
"file_path": "app/lib/db.ts",
"status": "IMPLEMENTED",
"description": "Tasks storage",
"columns": types.get('Task', {})
})
return tables, types
def build_dependencies(pages: list, components: list, apis: list) -> dict:
"""Build dependency mappings."""
component_to_page = {}
api_to_component = {}
# Build component to page mapping
for page in pages:
for comp_id in page.get('components', []):
if comp_id not in component_to_page:
component_to_page[comp_id] = []
component_to_page[comp_id].append(page['id'])
# Mapping APIs to components would require deeper per-component analysis;
# left empty for now (page-level data_dependencies already capture API usage)
return {
"component_to_page": component_to_page,
"api_to_component": {},
"table_to_api": {}
}
def generate_manifest(
base_path: str,
project_name: Optional[str] = None
) -> dict:
"""Generate complete project manifest."""
# Determine project name
if not project_name:
# Try to get from package.json
pkg_path = os.path.join(base_path, 'package.json')
if os.path.exists(pkg_path):
try:
with open(pkg_path) as f:
pkg = json.load(f)
project_name = pkg.get('name', Path(base_path).name)
except Exception:
project_name = Path(base_path).name
else:
project_name = Path(base_path).name
# Analyze codebase
pages = analyze_pages(base_path)
components = analyze_components(base_path)
apis = analyze_apis(base_path)
tables, types = analyze_database(base_path)
dependencies = build_dependencies(pages, components, apis)
now = datetime.now().isoformat()
manifest = {
"project": {
"name": project_name,
"version": "1.0.0",
"created_at": now,
"description": f"Project manifest for {project_name}"
},
"state": {
"current_phase": "IMPLEMENTATION_PHASE",
"approval_status": {
"manifest_approved": True,
"approved_by": "analyzer",
"approved_at": now
},
"revision_history": [
{
"action": "MANIFEST_GENERATED",
"timestamp": now,
"details": "Generated from existing codebase analysis"
}
]
},
"entities": {
"pages": pages,
"components": components,
"api_endpoints": apis,
"database_tables": tables
},
"dependencies": dependencies,
"types": types
}
return manifest
def main():
parser = argparse.ArgumentParser(
description='Analyze codebase and generate project manifest'
)
parser.add_argument(
'--path',
default='.',
help='Path to project root'
)
parser.add_argument(
'--name',
help='Project name (defaults to package.json name or directory name)'
)
parser.add_argument(
'--output',
default='project_manifest.json',
help='Output file path'
)
parser.add_argument(
'--dry-run',
action='store_true',
help='Print manifest without writing to file'
)
parser.add_argument(
'--force',
action='store_true',
help='Overwrite existing manifest'
)
args = parser.parse_args()
base_path = os.path.abspath(args.path)
output_path = os.path.join(base_path, args.output)
# Check for existing manifest
if os.path.exists(output_path) and not args.force and not args.dry_run:
print(f"Error: {args.output} already exists. Use --force to overwrite.")
sys.exit(1)
print(f"Analyzing codebase at: {base_path}")
print()
# Generate manifest
manifest = generate_manifest(base_path, args.name)
# Count entities
pages = len(manifest['entities']['pages'])
components = len(manifest['entities']['components'])
apis = len(manifest['entities']['api_endpoints'])
tables = len(manifest['entities']['database_tables'])
if args.dry_run:
print(json.dumps(manifest, indent=2))
else:
with open(output_path, 'w') as f:
json.dump(manifest, f, indent=2)
print(f"Manifest written to: {output_path}")
print()
print("╔══════════════════════════════════════════════════════════════╗")
print("║ 📊 MANIFEST GENERATED ║")
print("╠══════════════════════════════════════════════════════════════╣")
print(f"║ Project: {manifest['project']['name']:<51}")
print("╠══════════════════════════════════════════════════════════════╣")
print("║ ENTITIES DISCOVERED ║")
print(f"║ 📄 Pages: {pages:<43}")
print(f"║ 🧩 Components: {components:<43}")
print(f"║ 🔌 APIs: {apis:<43}")
print(f"║ 🗄️ Tables: {tables:<43}")
print("╠══════════════════════════════════════════════════════════════╣")
print("║ Status: All entities marked as IMPLEMENTED ║")
print("║ Phase: IMPLEMENTATION_PHASE ║")
print("╚══════════════════════════════════════════════════════════════╝")
if __name__ == '__main__':
main()

View File

@ -0,0 +1,504 @@
#!/usr/bin/env python3
"""
Context Compaction Manager
Handles pre-compact state saving and post-compact resume injection.
Monitors context usage and triggers appropriate hooks.
Usage:
python3 context_compact.py save [--workflow-dir .workflow/versions/v001]
python3 context_compact.py resume [--workflow-dir .workflow/versions/v001]
python3 context_compact.py status [--workflow-dir .workflow/versions/v001]
"""
import argparse
import json
import os
import subprocess
import sys
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional
# ============================================================================
# Configuration
# ============================================================================
DEFAULT_WORKFLOW_DIR = ".workflow/versions/v001"
STATE_FILE = "context_state.json"
RESUME_PROMPT_FILE = "resume_prompt.md"
MODIFIED_FILES_FILE = "modified_files.json"
# Context thresholds (percentage)
THRESHOLDS = {
"warning": 0.70,
"save": 0.80,
"compact": 0.90,
"critical": 0.95
}
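# Fractions of the model's context window; capture_context_state records
# THRESHOLDS['save'] as threshold_triggered when a snapshot is taken.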
# ============================================================================
# File Operations
# ============================================================================
def load_json(filepath: str) -> dict:
"""Load JSON file."""
if not os.path.exists(filepath):
return {}
try:
with open(filepath, 'r') as f:
return json.load(f)
except json.JSONDecodeError:
return {}
def save_json(filepath: str, data: dict):
"""Save data to JSON file."""
os.makedirs(os.path.dirname(filepath), exist_ok=True)
with open(filepath, 'w') as f:
json.dump(data, f, indent=2, default=str)
def save_text(filepath: str, content: str):
"""Save text to file."""
os.makedirs(os.path.dirname(filepath), exist_ok=True)
with open(filepath, 'w') as f:
f.write(content)
# ============================================================================
# Git Operations
# ============================================================================
def get_git_status() -> List[Dict[str, str]]:
"""Get list of modified files from git."""
try:
result = subprocess.run(
['git', 'status', '--porcelain'],
capture_output=True, text=True, check=True
)
files = []
for line in result.stdout.strip().split('\n'):
if line:
status = line[:2].strip()
path = line[3:]
action = {
'M': 'modified',
'A': 'added',
'D': 'deleted',
'?': 'untracked',
'R': 'renamed'
}.get(status[0] if status else '?', 'unknown')
files.append({'path': path, 'action': action, 'summary': ''})
return files
except subprocess.CalledProcessError:
return []
def get_recent_commits(count: int = 5) -> List[Dict[str, str]]:
"""Get recent commit messages."""
try:
result = subprocess.run(
['git', 'log', f'-{count}', '--oneline'],
capture_output=True, text=True, check=True
)
commits = []
for line in result.stdout.strip().split('\n'):
if line:
parts = line.split(' ', 1)
commits.append({
'hash': parts[0],
'message': parts[1] if len(parts) > 1 else ''
})
return commits
except subprocess.CalledProcessError:
return []
def create_checkpoint(message: str = "WIP: Pre-compaction checkpoint"):
"""Create a git checkpoint with uncommitted changes."""
try:
# Check if there are changes
result = subprocess.run(
['git', 'status', '--porcelain'],
capture_output=True, text=True
)
if result.stdout.strip():
# Stage all changes
subprocess.run(['git', 'add', '-A'], check=True)
# Commit
subprocess.run(['git', 'commit', '-m', message], check=True)
print(f"Created checkpoint: {message}")
return True
return False  # nothing to commit
except subprocess.CalledProcessError as e:
print(f"Warning: Could not create checkpoint: {e}")
return False
# ============================================================================
# Workflow State Operations
# ============================================================================
def load_workflow_state(workflow_dir: str) -> dict:
"""Load current workflow state."""
state_path = os.path.join(workflow_dir, 'workflow_state.json')
return load_json(state_path)
def load_active_tasks(workflow_dir: str) -> List[dict]:
"""Load tasks that are in progress."""
tasks_dir = os.path.join(workflow_dir, 'tasks')
active_tasks = []
if os.path.exists(tasks_dir):
for filename in os.listdir(tasks_dir):
# Only JSON task files are parsed here; YAML tasks would need a YAML loader
if filename.endswith('.json'):
task_path = os.path.join(tasks_dir, filename)
task = load_json(task_path)
if task.get('status') == 'in_progress':
active_tasks.append(task)
return active_tasks
def get_pending_tasks(workflow_dir: str) -> List[dict]:
"""Get pending tasks in priority order."""
tasks_dir = os.path.join(workflow_dir, 'tasks')
pending = []
if os.path.exists(tasks_dir):
for filename in os.listdir(tasks_dir):
if filename.endswith('.json'):
task_path = os.path.join(tasks_dir, filename)
task = load_json(task_path)
if task.get('status') == 'pending':
pending.append(task)
# Sort by layer, then by ID
pending.sort(key=lambda t: (t.get('layer', 999), t.get('id', '')))
return pending
# ============================================================================
# Context State Management
# ============================================================================
def capture_context_state(
workflow_dir: str,
context_percentage: float = 0.0,
active_work: Optional[dict] = None,
decisions: Optional[List[dict]] = None,
blockers: Optional[List[dict]] = None
) -> dict:
"""Capture current context state for later resume."""
workflow_state = load_workflow_state(workflow_dir)
active_tasks = load_active_tasks(workflow_dir)
pending_tasks = get_pending_tasks(workflow_dir)
modified_files = get_git_status()
# Determine active task
active_task = active_tasks[0] if active_tasks else None
# Build next actions from pending tasks
next_actions = []
for task in pending_tasks[:5]: # Top 5 pending
next_actions.append({
'action': task.get('type', 'implement'),
'target': task.get('title', task.get('id', 'unknown')),
'priority': len(next_actions) + 1,
'context_needed': [task.get('context', {}).get('context_snapshot_path', '')]
})
# Build context state
state = {
'session_id': f"compact_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
'captured_at': datetime.now().isoformat(),
'context_usage': {
'tokens_used': 0, # Would need to be passed in
'tokens_max': 0,
'percentage': context_percentage,
'threshold_triggered': THRESHOLDS['save']
},
'workflow_position': {
'workflow_id': workflow_state.get('id', 'unknown'),
'current_phase': workflow_state.get('current_phase', 'UNKNOWN'),
'active_task_id': active_task.get('id') if active_task else None,
'layer': active_task.get('layer', 1) if active_task else 1
},
'active_work': active_work or {
'entity_id': active_task.get('entity_id', '') if active_task else '',
'entity_type': active_task.get('type', '') if active_task else '',
'action': 'implementing' if active_task else 'pending',
'file_path': None,
'progress_notes': ''
},
'next_actions': next_actions,
'modified_files': modified_files,
'decisions': decisions or [],
'blockers': blockers or []
}
return state
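# Illustrative sketch (not part of the original script): capturing state with
# explicit decisions and blockers attached. The topic/issue strings are
# assumptions invented for the example.
def _demo_capture_with_notes(workflow_dir: str = DEFAULT_WORKFLOW_DIR) -> dict:
    decisions = [{'topic': 'auth', 'decision': 'session cookies over JWT'}]
    blockers = [{'issue': 'flaky e2e test', 'status': 'open', 'notes': 'retrying'}]
    # Positional 0.85 is the context_percentage argument
    return capture_context_state(workflow_dir, 0.85,
                                 decisions=decisions, blockers=blockers)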
def save_context_state(workflow_dir: str, state: dict):
"""Save context state to file."""
state_path = os.path.join(workflow_dir, STATE_FILE)
save_json(state_path, state)
print(f"Saved context state to: {state_path}")
# Also save modified files separately for quick access
modified_path = os.path.join(workflow_dir, MODIFIED_FILES_FILE)
save_json(modified_path, state.get('modified_files', []))
def generate_resume_prompt(state: dict) -> str:
"""Generate human-readable resume prompt."""
lines = [
"## Context Recovery - Resuming Previous Session",
"",
"### Session Info",
f"- **Original Session**: {state.get('session_id', 'unknown')}",
f"- **Captured At**: {state.get('captured_at', 'unknown')}",
f"- **Context Usage**: {state.get('context_usage', {}).get('percentage', 0) * 100:.1f}%",
"",
"### Workflow Position",
f"- **Phase**: {state.get('workflow_position', {}).get('current_phase', 'UNKNOWN')}",
f"- **Active Task**: {state.get('workflow_position', {}).get('active_task_id', 'None')}",
f"- **Layer**: {state.get('workflow_position', {}).get('layer', 1)}",
"",
"### What Was Being Worked On",
]
active = state.get('active_work', {})
lines.extend([
f"- **Entity**: {active.get('entity_id', 'None')} ({active.get('entity_type', '')})",
f"- **Action**: {active.get('action', 'unknown')}",
f"- **File**: {active.get('file_path', 'None')}",
f"- **Progress**: {active.get('progress_notes', 'No notes')}",
"",
"### Next Actions (Priority Order)",
])
for action in state.get('next_actions', []):
context = ', '.join(action.get('context_needed', [])) or 'None'
lines.append(f"{action.get('priority', '?')}. **{action.get('action', '')}** {action.get('target', '')}")
lines.append(f" - Context needed: {context}")
if state.get('modified_files'):
lines.extend(["", "### Recent Changes"])
for f in state.get('modified_files', []):
lines.append(f"- `{f.get('path', '')}` - {f.get('action', '')}: {f.get('summary', '')}")
if state.get('decisions'):
lines.extend(["", "### Key Decisions Made"])
for d in state.get('decisions', []):
lines.append(f"- **{d.get('topic', '')}**: {d.get('decision', '')}")
if state.get('blockers'):
lines.extend(["", "### Current Blockers"])
for b in state.get('blockers', []):
lines.append(f"- {b.get('issue', '')} ({b.get('status', '')}): {b.get('notes', '')}")
lines.extend([
"",
"---",
"**Action Required**: Continue from the next action listed above.",
"",
"To load full context, read the following files:",
])
# Add context files to read
for action in state.get('next_actions', [])[:3]:
for ctx in action.get('context_needed', []):
if ctx:
lines.append(f"- `{ctx}`")
return '\n'.join(lines)
# ============================================================================
# Commands
# ============================================================================
def cmd_save(args):
"""Save current context state (pre-compact hook)."""
workflow_dir = args.workflow_dir
print("=" * 60)
print("PRE-COMPACT: Saving Context State")
print("=" * 60)
# Capture state
state = capture_context_state(
workflow_dir,
        context_percentage=args.percentage / 100 if args.percentage is not None else 0.80
)
# Save state
save_context_state(workflow_dir, state)
# Generate resume prompt
resume_prompt = generate_resume_prompt(state)
resume_path = os.path.join(workflow_dir, RESUME_PROMPT_FILE)
save_text(resume_path, resume_prompt)
print(f"Generated resume prompt: {resume_path}")
# Create git checkpoint if requested
if args.checkpoint:
create_checkpoint("WIP: Pre-compaction checkpoint")
print()
print("=" * 60)
print("State saved successfully!")
print(f" Session ID: {state['session_id']}")
print(f" Active Task: {state['workflow_position']['active_task_id']}")
print(f" Next Actions: {len(state['next_actions'])}")
print("=" * 60)
return 0
def cmd_resume(args):
"""Display resume prompt (post-compact hook)."""
workflow_dir = args.workflow_dir
# Load state
state_path = os.path.join(workflow_dir, STATE_FILE)
state = load_json(state_path)
if not state:
print("No saved context state found.")
print(f"Expected: {state_path}")
return 1
# Generate and display resume prompt
if args.json:
print(json.dumps(state, indent=2))
else:
resume_prompt = generate_resume_prompt(state)
print(resume_prompt)
return 0
def cmd_status(args):
"""Show current context state status."""
workflow_dir = args.workflow_dir
print("=" * 60)
print("CONTEXT STATE STATUS")
print("=" * 60)
# Check for saved state
state_path = os.path.join(workflow_dir, STATE_FILE)
if os.path.exists(state_path):
state = load_json(state_path)
print(f"\nSaved State Found:")
print(f" Session ID: {state.get('session_id', 'unknown')}")
print(f" Captured At: {state.get('captured_at', 'unknown')}")
print(f" Phase: {state.get('workflow_position', {}).get('current_phase', 'UNKNOWN')}")
print(f" Active Task: {state.get('workflow_position', {}).get('active_task_id', 'None')}")
else:
print(f"\nNo saved state at: {state_path}")
# Check workflow state
workflow_state = load_workflow_state(workflow_dir)
if workflow_state:
print(f"\nCurrent Workflow:")
print(f" ID: {workflow_state.get('id', 'unknown')}")
print(f" Phase: {workflow_state.get('current_phase', 'UNKNOWN')}")
# Check git status
modified = get_git_status()
if modified:
print(f"\nModified Files: {len(modified)}")
for f in modified[:5]:
print(f" - {f['action']}: {f['path']}")
if len(modified) > 5:
print(f" ... and {len(modified) - 5} more")
return 0
def cmd_clear(args):
"""Clear saved context state."""
workflow_dir = args.workflow_dir
files_to_remove = [
os.path.join(workflow_dir, STATE_FILE),
os.path.join(workflow_dir, RESUME_PROMPT_FILE),
os.path.join(workflow_dir, MODIFIED_FILES_FILE)
]
removed = 0
for filepath in files_to_remove:
if os.path.exists(filepath):
os.remove(filepath)
print(f"Removed: {filepath}")
removed += 1
if removed == 0:
print("No state files found to remove.")
else:
print(f"\nCleared {removed} state file(s).")
return 0
# ============================================================================
# Main CLI
# ============================================================================
def main():
parser = argparse.ArgumentParser(
description="Context Compaction Manager for Guardrail Orchestrator"
)
parser.add_argument(
'--workflow-dir', '-w',
default=DEFAULT_WORKFLOW_DIR,
help='Workflow directory path'
)
subparsers = parser.add_subparsers(dest='command', help='Commands')
# Save command
save_parser = subparsers.add_parser('save', help='Save context state (pre-compact)')
save_parser.add_argument('--percentage', '-p', type=float, help='Current context percentage (0-100)')
save_parser.add_argument('--checkpoint', '-c', action='store_true', help='Create git checkpoint')
# Resume command
resume_parser = subparsers.add_parser('resume', help='Display resume prompt (post-compact)')
resume_parser.add_argument('--json', '-j', action='store_true', help='Output as JSON')
# Status command
subparsers.add_parser('status', help='Show context state status')
# Clear command
subparsers.add_parser('clear', help='Clear saved context state')
args = parser.parse_args()
if args.command == 'save':
return cmd_save(args)
elif args.command == 'resume':
return cmd_resume(args)
elif args.command == 'status':
return cmd_status(args)
elif args.command == 'clear':
return cmd_clear(args)
else:
parser.print_help()
return 1
if __name__ == "__main__":
sys.exit(main())
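# Illustrative sketch (not part of the original script): the save/resume cycle
# driven programmatically instead of through the CLI; the workflow directory
# defaults to the same constant the CLI uses.
def _demo_save_and_resume(workflow_dir: str = DEFAULT_WORKFLOW_DIR) -> str:
    # Capture state at 85% context usage, persist it, and return the prompt
    state = capture_context_state(workflow_dir, context_percentage=0.85)
    save_context_state(workflow_dir, state)
    return generate_resume_prompt(state)  # what cmd_resume would print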


@@ -0,0 +1,73 @@
#!/usr/bin/env python3
"""Initialize a guardrailed project with manifest."""
import argparse
import json
import os
from datetime import datetime
def create_manifest(name: str, path: str) -> dict:
"""Create initial project manifest structure."""
return {
"project": {
"name": name,
"version": "0.1.0",
"created_at": datetime.now().isoformat(),
"description": f"{name} - A guardrailed project"
},
"state": {
"current_phase": "DESIGN_PHASE",
"approval_status": {
"manifest_approved": False,
"approved_by": None,
"approved_at": None
},
"revision_history": [
{
"action": "PROJECT_INITIALIZED",
"timestamp": datetime.now().isoformat(),
"details": f"Project {name} created"
}
]
},
"entities": {
"pages": [],
"components": [],
"api_endpoints": [],
"database_tables": []
},
"dependencies": {
"component_to_page": {},
"api_to_component": {},
"table_to_api": {}
}
}
def main():
parser = argparse.ArgumentParser(description="Initialize guardrailed project")
parser.add_argument("--name", required=True, help="Project name")
parser.add_argument("--path", required=True, help="Project path")
args = parser.parse_args()
manifest_path = os.path.join(args.path, "project_manifest.json")
if os.path.exists(manifest_path):
print(f"Warning: Manifest already exists at {manifest_path}")
print("Use --force to overwrite (not implemented)")
return 1
manifest = create_manifest(args.name, args.path)
with open(manifest_path, "w") as f:
json.dump(manifest, f, indent=2)
print(f"Initialized guardrailed project: {args.name}")
print(f"Manifest created at: {manifest_path}")
print(f"Current phase: DESIGN_PHASE")
return 0
if __name__ == "__main__":
exit(main())
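# Illustrative sketch (not part of the original script): consuming the manifest
# this script writes, using the default location from above.
def _demo_read_phase(path: str = "project_manifest.json") -> str:
    with open(path) as f:
        manifest = json.load(f)
    return manifest["state"]["current_phase"]  # "DESIGN_PHASE" right after init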


@@ -0,0 +1,530 @@
#!/usr/bin/env python3
"""
Manifest diffing and changelog generation between workflow versions.
Compares project_manifest.json snapshots to show:
- Added entities (pages, components, API endpoints)
- Removed entities
- Modified entities (status changes, path changes)
- Dependency changes
"""
import argparse
import json
import os
import sys
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple, Any
# Try to import yaml
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
# ============================================================================
# File Helpers
# ============================================================================
def load_json(filepath: str) -> dict:
"""Load JSON file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
return json.load(f)
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
if HAS_YAML:
return yaml.safe_load(content) or {}
return {}
# ============================================================================
# Path Helpers
# ============================================================================
def get_workflow_dir() -> Path:
return Path('.workflow')
def get_version_dir(version: str) -> Path:
return get_workflow_dir() / 'versions' / version
def get_snapshot_path(version: str, snapshot_type: str) -> Path:
"""Get path to manifest snapshot for a version."""
return get_version_dir(version) / f'snapshot_{snapshot_type}' / 'manifest.json'
def get_current_manifest_path() -> Path:
return Path('project_manifest.json')
def get_versions_list() -> List[str]:
"""Get list of all versions."""
versions_dir = get_workflow_dir() / 'versions'
if not versions_dir.exists():
return []
return sorted([d.name for d in versions_dir.iterdir() if d.is_dir()])
# ============================================================================
# Entity Extraction
# ============================================================================
def extract_entities(manifest: dict) -> Dict[str, Dict[str, Any]]:
"""
Extract all entities from manifest into a flat dict keyed by ID.
Returns dict like:
{
"page_home": {"type": "page", "name": "Home", "status": "APPROVED", ...},
"component_Button": {"type": "component", "name": "Button", ...},
...
}
"""
entities = {}
entity_types = manifest.get('entities', {})
for entity_type, entity_list in entity_types.items():
if not isinstance(entity_list, list):
continue
for entity in entity_list:
entity_id = entity.get('id')
if entity_id:
entities[entity_id] = {
'type': entity_type.rstrip('s'), # pages -> page
**entity
}
return entities
# ============================================================================
# Diff Computation
# ============================================================================
def compute_diff(before: dict, after: dict) -> dict:
"""
Compute the difference between two manifests.
Returns:
{
"added": [list of added entities],
"removed": [list of removed entities],
"modified": [list of modified entities with changes],
"unchanged": [list of unchanged entity IDs]
}
"""
before_entities = extract_entities(before)
after_entities = extract_entities(after)
before_ids = set(before_entities.keys())
after_ids = set(after_entities.keys())
added_ids = after_ids - before_ids
removed_ids = before_ids - after_ids
common_ids = before_ids & after_ids
diff = {
'added': [],
'removed': [],
'modified': [],
'unchanged': []
}
# Added entities
for entity_id in sorted(added_ids):
entity = after_entities[entity_id]
diff['added'].append({
'id': entity_id,
'type': entity.get('type'),
'name': entity.get('name'),
'file_path': entity.get('file_path'),
'status': entity.get('status')
})
# Removed entities
for entity_id in sorted(removed_ids):
entity = before_entities[entity_id]
diff['removed'].append({
'id': entity_id,
'type': entity.get('type'),
'name': entity.get('name'),
'file_path': entity.get('file_path'),
'status': entity.get('status')
})
# Modified entities
for entity_id in sorted(common_ids):
before_entity = before_entities[entity_id]
after_entity = after_entities[entity_id]
changes = []
# Check each field for changes
for field in ['name', 'file_path', 'status', 'description', 'dependencies']:
before_val = before_entity.get(field)
after_val = after_entity.get(field)
if before_val != after_val:
changes.append({
'field': field,
'before': before_val,
'after': after_val
})
if changes:
diff['modified'].append({
'id': entity_id,
'type': before_entity.get('type'),
'name': after_entity.get('name'),
'file_path': after_entity.get('file_path'),
'changes': changes
})
else:
diff['unchanged'].append(entity_id)
return diff
def compute_summary(diff: dict) -> dict:
"""Compute summary statistics from diff."""
return {
'total_added': len(diff['added']),
'total_removed': len(diff['removed']),
'total_modified': len(diff['modified']),
'total_unchanged': len(diff['unchanged']),
'by_type': {
'pages': {
'added': len([e for e in diff['added'] if e['type'] == 'page']),
'removed': len([e for e in diff['removed'] if e['type'] == 'page']),
'modified': len([e for e in diff['modified'] if e['type'] == 'page'])
},
'components': {
'added': len([e for e in diff['added'] if e['type'] == 'component']),
'removed': len([e for e in diff['removed'] if e['type'] == 'component']),
'modified': len([e for e in diff['modified'] if e['type'] == 'component'])
},
'api_endpoints': {
'added': len([e for e in diff['added'] if e['type'] == 'api_endpoint']),
'removed': len([e for e in diff['removed'] if e['type'] == 'api_endpoint']),
'modified': len([e for e in diff['modified'] if e['type'] == 'api_endpoint'])
}
}
}
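# Illustrative sketch (not part of the original script): diffing two in-memory
# manifests directly. The entity shapes are assumptions mirroring what
# extract_entities() expects, not real project data.
def _demo_compute_diff() -> dict:
    before = {'entities': {'pages': [
        {'id': 'page_home', 'name': 'Home', 'status': 'APPROVED'}]}}
    after = {'entities': {'pages': [
        {'id': 'page_home', 'name': 'Home', 'status': 'IMPLEMENTED'},
        {'id': 'page_about', 'name': 'About', 'status': 'APPROVED'}]}}
    diff = compute_diff(before, after)
    return compute_summary(diff)  # 1 added, 1 modified, 0 removed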
# ============================================================================
# Display Functions
# ============================================================================
def format_entity(entity: dict, prefix: str = '') -> str:
"""Format an entity for display."""
type_icon = {
'page': '📄',
'component': '🧩',
'api_endpoint': '🔌',
'lib': '📚',
'hook': '🪝',
'type': '📝',
'config': '⚙️'
}.get(entity.get('type', ''), '')
name = entity.get('name', entity.get('id', 'Unknown'))
file_path = entity.get('file_path', '')
return f"{prefix}{type_icon} {name} ({file_path})"
def display_diff(diff: dict, summary: dict, from_version: str, to_version: str):
"""Display diff in a formatted way."""
print()
print("" + "" * 70 + "")
print("" + f" MANIFEST DIFF: {from_version}{to_version}".ljust(70) + "")
print("" + "" * 70 + "")
# Summary
print("" + " SUMMARY".ljust(70) + "")
print("" + f" + Added: {summary['total_added']}".ljust(70) + "")
print("" + f" ~ Modified: {summary['total_modified']}".ljust(70) + "")
print("" + f" - Removed: {summary['total_removed']}".ljust(70) + "")
print("" + f" = Unchanged: {summary['total_unchanged']}".ljust(70) + "")
# By type
print("" + "" * 70 + "")
print("" + " BY TYPE".ljust(70) + "")
for type_name, counts in summary['by_type'].items():
changes = []
if counts['added'] > 0:
changes.append(f"+{counts['added']}")
if counts['modified'] > 0:
changes.append(f"~{counts['modified']}")
if counts['removed'] > 0:
changes.append(f"-{counts['removed']}")
if changes:
print("" + f" {type_name}: {' '.join(changes)}".ljust(70) + "")
# Added
if diff['added']:
print("" + "" * 70 + "")
print("" + " ADDED".ljust(70) + "")
for entity in diff['added']:
line = format_entity(entity, ' + ')
print("" + line[:70].ljust(70) + "")
# Modified
if diff['modified']:
print("" + "" * 70 + "")
print("" + " 📝 MODIFIED".ljust(70) + "")
for entity in diff['modified']:
line = format_entity(entity, ' ~ ')
print("" + line[:70].ljust(70) + "")
for change in entity['changes']:
field = change['field']
before = str(change['before'])[:20] if change['before'] else '(none)'
after = str(change['after'])[:20] if change['after'] else '(none)'
change_line = f" {field}: {before}{after}"
print("" + change_line[:70].ljust(70) + "")
# Removed
if diff['removed']:
print("" + "" * 70 + "")
print("" + " REMOVED".ljust(70) + "")
for entity in diff['removed']:
line = format_entity(entity, ' - ')
print("" + line[:70].ljust(70) + "")
print("" + "" * 70 + "")
def display_changelog(version: str, session: dict, diff: dict, summary: dict):
"""Display changelog for a single version."""
print()
print("" + "" * 70 + "")
print("" + f" CHANGELOG: {version}".ljust(70) + "")
print("" + "" * 70 + "")
print("" + f" Feature: {session.get('feature', 'Unknown')[:55]}".ljust(70) + "")
print("" + f" Status: {session.get('status', 'unknown')}".ljust(70) + "")
if session.get('started_at'):
print("" + f" Started: {session['started_at'][:19]}".ljust(70) + "")
if session.get('completed_at'):
print("" + f" Completed: {session['completed_at'][:19]}".ljust(70) + "")
print("" + "" * 70 + "")
print("" + " CHANGES".ljust(70) + "")
if not diff['added'] and not diff['modified'] and not diff['removed']:
print("" + " No entity changes".ljust(70) + "")
else:
for entity in diff['added']:
line = f" + Added {entity['type']}: {entity['name']}"
print("" + line[:70].ljust(70) + "")
for entity in diff['modified']:
line = f" ~ Modified {entity['type']}: {entity['name']}"
print("" + line[:70].ljust(70) + "")
for entity in diff['removed']:
line = f" - Removed {entity['type']}: {entity['name']}"
print("" + line[:70].ljust(70) + "")
print("" + "" * 70 + "")
def output_json(data: dict):
"""Output data as JSON."""
print(json.dumps(data, indent=2))
# ============================================================================
# Commands
# ============================================================================
def diff_versions(version1: str, version2: str, output_format: str = 'text') -> int:
"""Diff two specific versions."""
# Load snapshots
before_path = get_snapshot_path(version1, 'after')
if not before_path.exists():
before_path = get_snapshot_path(version1, 'before')
after_path = get_snapshot_path(version2, 'after')
if not after_path.exists():
after_path = get_snapshot_path(version2, 'before')
if not before_path.exists():
print(f"Error: No snapshot found for version {version1}")
return 1
if not after_path.exists():
print(f"Error: No snapshot found for version {version2}")
return 1
before = load_json(str(before_path))
after = load_json(str(after_path))
diff = compute_diff(before, after)
summary = compute_summary(diff)
if output_format == 'json':
output_json({
'from_version': version1,
'to_version': version2,
'diff': diff,
'summary': summary
})
else:
display_diff(diff, summary, version1, version2)
return 0
def diff_with_current(version: str, output_format: str = 'text') -> int:
"""Diff a version with current manifest."""
# Load version snapshot
snapshot_path = get_snapshot_path(version, 'before')
if not snapshot_path.exists():
print(f"Error: No snapshot found for version {version}")
return 1
before = load_json(str(snapshot_path))
# Load current manifest
current_path = get_current_manifest_path()
if not current_path.exists():
print("Error: No current manifest found")
return 1
after = load_json(str(current_path))
diff = compute_diff(before, after)
summary = compute_summary(diff)
if output_format == 'json':
output_json({
'from_version': version,
'to_version': 'current',
'diff': diff,
'summary': summary
})
else:
display_diff(diff, summary, version, 'current')
return 0
def show_changelog(version: Optional[str] = None, output_format: str = 'text') -> int:
"""Show changelog for a version or all versions."""
versions = get_versions_list()
if not versions:
print("No workflow versions found.")
return 1
if version:
versions = [v for v in versions if v == version]
if not versions:
print(f"Version {version} not found.")
return 1
for i, v in enumerate(versions):
# Load session info
session_path = get_version_dir(v) / 'session.yml'
session = load_yaml(str(session_path)) if session_path.exists() else {}
# Get before/after snapshots
before_path = get_snapshot_path(v, 'before')
after_path = get_snapshot_path(v, 'after')
before = load_json(str(before_path)) if before_path.exists() else {}
after = load_json(str(after_path)) if after_path.exists() else {}
if not after:
after = before # Use before if no after exists
diff = compute_diff(before, after)
summary = compute_summary(diff)
if output_format == 'json':
output_json({
'version': v,
'session': session,
'diff': diff,
'summary': summary
})
else:
display_changelog(v, session, diff, summary)
return 0
# ============================================================================
# CLI Interface
# ============================================================================
def main():
parser = argparse.ArgumentParser(description="Manifest diffing and changelog generation")
subparsers = parser.add_subparsers(dest='command', help='Commands')
# diff command
diff_parser = subparsers.add_parser('diff', help='Diff two versions')
diff_parser.add_argument('version1', help='First version')
diff_parser.add_argument('version2', nargs='?', help='Second version (or "current")')
diff_parser.add_argument('--json', action='store_true', help='Output as JSON')
# changelog command
changelog_parser = subparsers.add_parser('changelog', help='Show version changelog')
changelog_parser.add_argument('version', nargs='?', help='Specific version (or all)')
changelog_parser.add_argument('--json', action='store_true', help='Output as JSON')
# versions command
subparsers.add_parser('versions', help='List all versions')
args = parser.parse_args()
if args.command == 'diff':
output_format = 'json' if args.json else 'text'
if args.version2:
if args.version2 == 'current':
sys.exit(diff_with_current(args.version1, output_format))
else:
sys.exit(diff_versions(args.version1, args.version2, output_format))
else:
# Diff with current by default
sys.exit(diff_with_current(args.version1, output_format))
elif args.command == 'changelog':
output_format = 'json' if args.json else 'text'
sys.exit(show_changelog(args.version, output_format))
elif args.command == 'versions':
versions = get_versions_list()
if versions:
print("\nAvailable versions:")
for v in versions:
print(f" - {v}")
else:
print("No versions found.")
else:
parser.print_help()
if __name__ == "__main__":
main()
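# Illustrative sketch (not part of the original script): the most common
# programmatic call, diffing the latest recorded version snapshot against the
# working manifest.
def _demo_diff_latest() -> int:
    versions = get_versions_list()
    # Fall back to exit code 1 when no versions have been recorded yet
    return diff_with_current(versions[-1]) if versions else 1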


@@ -0,0 +1,265 @@
#!/usr/bin/env python3
"""
Migration script to convert flat task session files to directory structure.
This script migrates task sessions from the old flat file structure:
.workflow/versions/v001/task_sessions/task_design.yml
To the new directory structure:
.workflow/versions/v001/task_sessions/task_design/
session.yml
task.yml
operations.log
Usage:
python3 migrate_task_sessions.py [--dry-run]
Options:
--dry-run Show what would be done without making changes
"""
from __future__ import annotations
import sys
import argparse
from pathlib import Path
from datetime import datetime
from typing import Optional
# Add parent to path for imports
sys.path.insert(0, str(Path(__file__).parent))
from version_manager import load_yaml, save_yaml, get_workflow_dir
# ============================================================================
# Discovery Functions
# ============================================================================
def find_flat_task_sessions() -> list[tuple[Path, str]]:
"""
Find all flat task session YAML files.
Returns:
List of tuples: (file_path, version_name)
"""
workflow_dir = get_workflow_dir()
versions_dir = workflow_dir / 'versions'
flat_sessions = []
if versions_dir.exists():
for version_dir in versions_dir.iterdir():
if version_dir.is_dir():
task_sessions_dir = version_dir / 'task_sessions'
if task_sessions_dir.exists():
for item in task_sessions_dir.iterdir():
# Check if it's a YAML file (not a directory)
if item.is_file() and item.suffix in ['.yml', '.yaml']:
flat_sessions.append((item, version_dir.name))
return flat_sessions
# ============================================================================
# Migration Functions
# ============================================================================
def migrate_task_session(file_path: Path, version: str, dry_run: bool = False) -> dict:
"""
Migrate a single flat task session to directory structure.
Args:
file_path: Path to the flat YAML file
version: Version identifier (e.g., 'v001')
dry_run: If True, only report what would be done
Returns:
Dictionary with migration results and actions taken
"""
task_id = file_path.stem # e.g., "task_design" from "task_design.yml"
parent_dir = file_path.parent
new_dir = parent_dir / task_id
result = {
'task_id': task_id,
'version': version,
'original_path': str(file_path),
'new_path': str(new_dir),
'success': False,
'actions': []
}
if dry_run:
result['actions'].append(f"Would create directory: {new_dir}")
result['actions'].append(f"Would move {file_path.name} to {new_dir}/session.yml")
result['actions'].append(f"Would create {new_dir}/task.yml (if source exists)")
result['actions'].append(f"Would create {new_dir}/operations.log")
result['success'] = True
return result
try:
# Create directory
new_dir.mkdir(exist_ok=True)
result['actions'].append(f"Created directory: {new_dir}")
# Move session file
session_data = load_yaml(str(file_path))
save_yaml(str(new_dir / 'session.yml'), session_data)
file_path.unlink() # Delete original
result['actions'].append(f"Moved session data to: {new_dir}/session.yml")
# Create task.yml snapshot (try to find original task)
task_file = Path('tasks') / f'{task_id}.yml'
if task_file.exists():
task_data = load_yaml(str(task_file))
task_data['snapshotted_at'] = datetime.now().isoformat()
task_data['source_path'] = str(task_file)
task_data['status_at_snapshot'] = task_data.get('status', 'migrated')
task_data['migration_note'] = 'Created during migration from flat file structure'
save_yaml(str(new_dir / 'task.yml'), task_data)
result['actions'].append(f"Created task snapshot: {new_dir}/task.yml")
else:
# Create minimal task.yml from session data
minimal_task = {
'id': task_id,
'type': session_data.get('task_type', 'unknown'),
'agent': session_data.get('agent', 'unknown'),
'snapshotted_at': datetime.now().isoformat(),
'source_path': 'N/A - reconstructed from session',
'status_at_snapshot': 'migrated',
'migration_note': 'Task file not found - reconstructed from session data'
}
save_yaml(str(new_dir / 'task.yml'), minimal_task)
result['actions'].append(f"Warning: Task file not found at {task_file}")
result['actions'].append(f"Created minimal task snapshot: {new_dir}/task.yml")
# Create operations.log
log_content = f"# Operations Log for {task_id}\n"
log_content += f"# Migrated: {datetime.now().isoformat()}\n"
log_content += "# Format: [timestamp] OPERATION target_type: target_id (path)\n"
log_content += "=" * 70 + "\n\n"
log_content += f"[{datetime.now().isoformat()}] MIGRATION: Converted from flat file structure\n"
# If session has operations, add them to the log
if 'operations' in session_data and session_data['operations']:
log_content += f"\n# Historical operations from session data:\n"
for op in session_data['operations']:
timestamp = op.get('performed_at', 'unknown')
op_type = op.get('type', 'UNKNOWN')
target_type = op.get('target_type', 'unknown')
target_id = op.get('target_id', 'unknown')
target_path = op.get('target_path', '')
entry = f"[{timestamp}] {op_type} {target_type}: {target_id}"
if target_path:
entry += f" ({target_path})"
diff_summary = op.get('changes', {}).get('diff_summary', '')
if diff_summary:
entry += f"\n Summary: {diff_summary}"
log_content += entry + "\n"
(new_dir / 'operations.log').write_text(log_content)
result['actions'].append(f"Created operations log: {new_dir}/operations.log")
result['success'] = True
except Exception as e:
result['error'] = str(e)
result['actions'].append(f"Error: {e}")
return result
# ============================================================================
# Main Entry Point
# ============================================================================
def main():
"""Main migration script entry point."""
parser = argparse.ArgumentParser(
description='Migrate task session files from flat structure to directories',
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__
)
parser.add_argument(
'--dry-run',
action='store_true',
help='Show what would be done without making changes'
)
args = parser.parse_args()
dry_run = args.dry_run
# Header
print("=" * 70)
print("Task Session Migration Script".center(70))
print(f"Mode: {'DRY RUN' if dry_run else 'LIVE MIGRATION'}".center(70))
print("=" * 70)
print()
# Find flat sessions
flat_sessions = find_flat_task_sessions()
if not flat_sessions:
print("No flat task session files found. Nothing to migrate.")
print()
print("This could mean:")
print(" 1. All task sessions are already migrated")
print(" 2. No task sessions exist yet")
print(" 3. .workflow directory doesn't exist")
return
print(f"Found {len(flat_sessions)} flat task session file(s) to migrate:")
print()
# Process each file
results = []
for file_path, version in flat_sessions:
print(f"Processing: {version}/{file_path.name}")
print("-" * 70)
result = migrate_task_session(file_path, version, dry_run)
results.append(result)
for action in result['actions']:
print(f" {action}")
if not result['success'] and 'error' in result:
print(f" ERROR: {result['error']}")
print()
# Summary
successful = sum(1 for r in results if r['success'])
failed = len(results) - successful
print("=" * 70)
print("Migration Summary".center(70))
print("=" * 70)
print(f"Total files processed: {len(results)}")
print(f"Successful migrations: {successful}")
print(f"Failed migrations: {failed}")
print()
if dry_run:
print("This was a DRY RUN. No files were modified.")
print("Run without --dry-run to perform the migration.")
else:
if successful > 0:
print("Migration completed successfully!")
print()
print("Next steps:")
print(" 1. Verify migrated files in .workflow/versions/*/task_sessions/")
print(" 2. Check that each task has session.yml, task.yml, and operations.log")
print(" 3. Test the system to ensure compatibility")
if failed > 0:
print()
print(f"WARNING: {failed} migration(s) failed. Review the errors above.")
print()
if __name__ == '__main__':
main()
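# Illustrative sketch (not part of the original script): migrating, or merely
# previewing, a single session file without the CLI. The path and version id
# are assumptions invented for the example.
def _demo_preview_one() -> dict:
    target = Path('.workflow/versions/v001/task_sessions/task_design.yml')
    return migrate_task_session(target, 'v001', dry_run=True)  # report only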


@@ -0,0 +1,715 @@
#!/usr/bin/env python3
"""
Phase Gate Enforcement System
Provides strict enforcement of workflow phases with:
- Blocking checkpoints that MUST pass before proceeding
- Script-based validation gates
- State persistence that survives interruptions
- Fix loops that return to IMPLEMENTING when issues found
Exit codes:
0 = PASS - operation allowed
1 = BLOCKED - conditions not met (with detailed reason)
2 = ERROR - system error
Usage:
phase_gate.py can-enter PHASE # Check if ready to enter phase
phase_gate.py complete PHASE # Mark phase as complete (validates all checkpoints)
phase_gate.py checkpoint NAME # Save a checkpoint
phase_gate.py verify PHASE # Verify all checkpoints for phase
phase_gate.py blockers # Show current blocking conditions
phase_gate.py fix-loop PHASE # Handle fix loop logic
phase_gate.py status # Show full gate state
"""
import argparse
import json
import os
import subprocess
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
# ============================================================================
# Configuration
# ============================================================================
PHASES_ORDER = [
'INITIALIZING',
'DESIGNING',
'AWAITING_DESIGN_APPROVAL',
'IMPLEMENTING',
'REVIEWING',
'SECURITY_REVIEW',
'AWAITING_IMPL_APPROVAL',
'COMPLETING',
'COMPLETED'
]
# Phases that can trigger a fix loop back to IMPLEMENTING
FIX_LOOP_PHASES = ['REVIEWING', 'SECURITY_REVIEW']
# Phase entry requirements - what MUST be true to enter
ENTRY_REQUIREMENTS = {
'INITIALIZING': [],
'DESIGNING': [
{'type': 'phase_completed', 'phase': 'INITIALIZING'},
],
'AWAITING_DESIGN_APPROVAL': [
{'type': 'phase_completed', 'phase': 'DESIGNING'},
{'type': 'file_exists', 'path': '.workflow/versions/{version}/design/design_document.yml'},
{'type': 'min_file_count', 'pattern': '.workflow/versions/{version}/tasks/*.yml', 'minimum': 1},
],
'IMPLEMENTING': [
{'type': 'phase_completed', 'phase': 'AWAITING_DESIGN_APPROVAL'},
{'type': 'approval_granted', 'gate': 'design'},
],
'REVIEWING': [
{'type': 'phase_completed', 'phase': 'IMPLEMENTING'},
{'type': 'script_passes', 'script': 'npm run build', 'name': 'build'},
{'type': 'script_passes', 'script': 'npx tsc --noEmit', 'name': 'type-check'},
{'type': 'script_passes', 'script': 'npm run lint', 'name': 'lint'},
],
'SECURITY_REVIEW': [
{'type': 'phase_completed', 'phase': 'REVIEWING'},
{'type': 'checkpoint_passed', 'checkpoint': 'review_passed'},
],
'AWAITING_IMPL_APPROVAL': [
{'type': 'phase_completed', 'phase': 'SECURITY_REVIEW'},
{'type': 'checkpoint_passed', 'checkpoint': 'security_passed'},
],
'COMPLETING': [
{'type': 'phase_completed', 'phase': 'AWAITING_IMPL_APPROVAL'},
{'type': 'approval_granted', 'gate': 'implementation'},
],
'COMPLETED': [
{'type': 'phase_completed', 'phase': 'COMPLETING'},
],
}
# Phase checkpoints - what MUST be completed within each phase
PHASE_CHECKPOINTS = {
'INITIALIZING': [
'manifest_exists',
'version_created',
],
'DESIGNING': [
'design_document_created',
'design_validated',
'tasks_generated',
],
'AWAITING_DESIGN_APPROVAL': [
'design_approved',
],
'IMPLEMENTING': [
'all_layers_complete',
'build_passes',
'type_check_passes',
'lint_passes',
],
'REVIEWING': [
'review_script_run',
'all_files_verified',
'code_review_passed', # Code review agent must pass
'review_passed', # Critical - umbrella checkpoint, must pass or trigger fix loop
],
'SECURITY_REVIEW': [
'security_scan_run',
'api_contract_validated',
'security_passed', # Critical - must pass or trigger fix loop
],
'AWAITING_IMPL_APPROVAL': [
'implementation_approved',
],
'COMPLETING': [
'tasks_marked_complete',
'version_finalized',
],
}
# ============================================================================
# YAML Helpers
# ============================================================================
def load_yaml(filepath: str) -> dict:
"""Load YAML file (or JSON fallback)."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
if HAS_YAML:
return yaml.safe_load(content) or {}
# Fallback: try JSON parsing (many .yml files are actually JSON)
try:
return json.loads(content) or {}
except json.JSONDecodeError:
return {}
def save_yaml(filepath: str, data: dict):
"""Save data to YAML file."""
os.makedirs(os.path.dirname(filepath), exist_ok=True)
if HAS_YAML:
with open(filepath, 'w') as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
else:
with open(filepath, 'w') as f:
json.dump(data, f, indent=2)
# ============================================================================
# State Management
# ============================================================================
def get_workflow_dir() -> Path:
return Path('.workflow')
def get_active_version() -> Optional[str]:
"""Get active workflow version."""
current_path = get_workflow_dir() / 'current.yml'
if not current_path.exists():
return None
current = load_yaml(str(current_path))
return current.get('active_version')
def get_gate_state_path(version: str) -> Path:
"""Get path to gate state file."""
return get_workflow_dir() / 'versions' / version / 'gate_state.yml'
def get_session_path(version: str) -> Path:
"""Get path to session file."""
return get_workflow_dir() / 'versions' / version / 'session.yml'
def load_gate_state(version: str) -> dict:
"""Load gate state for version."""
path = get_gate_state_path(version)
if path.exists():
return load_yaml(str(path))
# Initialize new gate state
return {
'version': version,
'current_phase': 'INITIALIZING',
'created_at': datetime.now().isoformat(),
'updated_at': datetime.now().isoformat(),
'phases': {phase: {
'status': 'not_started',
'entered_at': None,
'completed_at': None,
'checkpoints': {cp: {'status': 'pending', 'at': None, 'data': {}}
for cp in PHASE_CHECKPOINTS.get(phase, [])}
} for phase in PHASES_ORDER},
'fix_loops': [], # Track fix loop iterations
}
def save_gate_state(version: str, state: dict):
"""Save gate state."""
state['updated_at'] = datetime.now().isoformat()
save_yaml(str(get_gate_state_path(version)), state)
def get_current_phase(version: str) -> str:
"""Get current phase from session."""
session_path = get_session_path(version)
if not session_path.exists():
return 'INITIALIZING'
session = load_yaml(str(session_path))
return session.get('current_phase', 'INITIALIZING')
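# Illustrative sketch (not part of the original script): a version with no
# gate_state.yml yet gets a fully initialized state with every checkpoint
# pending. The version id is an assumption for the example.
def _demo_fresh_gate_state(version: str = 'v999') -> str:
    state = load_gate_state(version)
    # 'INITIALIZING', with phases/checkpoints pre-populated from the config above
    return state['current_phase']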
# ============================================================================
# Validation Functions
# ============================================================================
def check_file_exists(path: str, version: str) -> tuple[bool, str]:
"""Check if file exists."""
resolved_path = path.format(version=version)
if os.path.exists(resolved_path):
return True, f"File exists: {resolved_path}"
return False, f"File missing: {resolved_path}"
def check_min_file_count(pattern: str, minimum: int, version: str) -> tuple[bool, str]:
"""Check minimum file count matching pattern."""
import glob
resolved_pattern = pattern.format(version=version)
files = glob.glob(resolved_pattern)
count = len(files)
if count >= minimum:
return True, f"Found {count} files (minimum: {minimum})"
return False, f"Found only {count} files, need at least {minimum}"
def check_phase_completed(phase: str, version: str) -> tuple[bool, str]:
"""Check if a phase has been completed."""
state = load_gate_state(version)
phase_state = state['phases'].get(phase, {})
if phase_state.get('status') == 'completed':
return True, f"Phase {phase} is completed"
return False, f"Phase {phase} not completed (status: {phase_state.get('status', 'unknown')})"
def check_approval_granted(gate: str, version: str) -> tuple[bool, str]:
"""Check if approval gate has been granted."""
session_path = get_session_path(version)
if not session_path.exists():
return False, f"No session file found"
session = load_yaml(str(session_path))
approvals = session.get('approvals', {})
gate_approval = approvals.get(gate, {})
if gate_approval.get('status') == 'approved':
return True, f"{gate} approval granted"
return False, f"{gate} approval not granted (status: {gate_approval.get('status', 'pending')})"
def check_checkpoint_passed(checkpoint: str, version: str) -> tuple[bool, str]:
"""Check if a specific checkpoint has passed."""
state = load_gate_state(version)
# Search all phases for this checkpoint
for phase, phase_data in state['phases'].items():
checkpoints = phase_data.get('checkpoints', {})
if checkpoint in checkpoints:
cp_state = checkpoints[checkpoint]
if cp_state.get('status') == 'passed':
return True, f"Checkpoint {checkpoint} passed"
return False, f"Checkpoint {checkpoint} not passed (status: {cp_state.get('status', 'pending')})"
return False, f"Checkpoint {checkpoint} not found"
def check_script_passes(script: str, name: str = None) -> tuple[bool, str]:
"""Run a script and check if it passes (exit code 0)."""
script_name = name or script.split()[0]
try:
result = subprocess.run(
script,
shell=True,
capture_output=True,
text=True,
timeout=300 # 5 minute timeout
)
if result.returncode == 0:
return True, f"Script '{script_name}' passed"
return False, f"Script '{script_name}' failed (exit code: {result.returncode})\n{result.stderr[:500]}"
except subprocess.TimeoutExpired:
return False, f"Script '{script_name}' timed out"
except Exception as e:
return False, f"Script '{script_name}' error: {str(e)}"
def validate_requirement(req: dict, version: str) -> tuple[bool, str]:
"""Validate a single requirement."""
req_type = req.get('type')
if req_type == 'file_exists':
return check_file_exists(req['path'], version)
elif req_type == 'min_file_count':
return check_min_file_count(req['pattern'], req['minimum'], version)
elif req_type == 'phase_completed':
return check_phase_completed(req['phase'], version)
elif req_type == 'approval_granted':
return check_approval_granted(req['gate'], version)
elif req_type == 'checkpoint_passed':
return check_checkpoint_passed(req['checkpoint'], version)
elif req_type == 'script_passes':
return check_script_passes(req['script'], req.get('name'))
else:
return False, f"Unknown requirement type: {req_type}"
# ============================================================================
# Gate Operations
# ============================================================================
def can_enter_phase(phase: str, version: str) -> tuple[bool, list[str]]:
"""Check if all entry requirements are met for a phase."""
requirements = ENTRY_REQUIREMENTS.get(phase, [])
failures = []
for req in requirements:
passed, message = validate_requirement(req, version)
if not passed:
failures.append(message)
return len(failures) == 0, failures
def save_checkpoint(phase: str, checkpoint: str, status: str, version: str, data: dict = None):
"""Save a checkpoint status."""
state = load_gate_state(version)
if phase not in state['phases']:
state['phases'][phase] = {
'status': 'in_progress',
'entered_at': datetime.now().isoformat(),
'completed_at': None,
'checkpoints': {}
}
state['phases'][phase]['checkpoints'][checkpoint] = {
'status': status,
'at': datetime.now().isoformat(),
'data': data or {}
}
save_gate_state(version, state)
return True
def verify_phase_checkpoints(phase: str, version: str) -> tuple[bool, list[str]]:
"""Verify all checkpoints for a phase are passed."""
state = load_gate_state(version)
required_checkpoints = PHASE_CHECKPOINTS.get(phase, [])
phase_state = state['phases'].get(phase, {})
checkpoints = phase_state.get('checkpoints', {})
failures = []
for cp in required_checkpoints:
cp_state = checkpoints.get(cp, {})
if cp_state.get('status') != 'passed':
failures.append(f"Checkpoint '{cp}' not passed (status: {cp_state.get('status', 'missing')})")
return len(failures) == 0, failures
def complete_phase(phase: str, version: str) -> tuple[bool, list[str]]:
"""Mark a phase as complete after verifying all checkpoints."""
# First verify all checkpoints
passed, failures = verify_phase_checkpoints(phase, version)
if not passed:
return False, failures
# Update state
state = load_gate_state(version)
state['phases'][phase]['status'] = 'completed'
state['phases'][phase]['completed_at'] = datetime.now().isoformat()
save_gate_state(version, state)
return True, []
def enter_phase(phase: str, version: str) -> tuple[bool, list[str]]:
"""Enter a new phase after verifying entry requirements."""
# Check entry requirements
can_enter, failures = can_enter_phase(phase, version)
if not can_enter:
return False, failures
# Update state
state = load_gate_state(version)
state['current_phase'] = phase
state['phases'][phase]['status'] = 'in_progress'
state['phases'][phase]['entered_at'] = datetime.now().isoformat()
save_gate_state(version, state)
return True, []
def handle_fix_loop(phase: str, version: str, issues: list[str]) -> dict:
"""Handle fix loop - return to IMPLEMENTING to fix issues."""
state = load_gate_state(version)
# Record fix loop
fix_loop_entry = {
'from_phase': phase,
'issues': issues,
'timestamp': datetime.now().isoformat(),
'iteration': len([fl for fl in state.get('fix_loops', []) if fl['from_phase'] == phase]) + 1
}
if 'fix_loops' not in state:
state['fix_loops'] = []
state['fix_loops'].append(fix_loop_entry)
# Reset phase states for re-run
state['phases'][phase]['status'] = 'needs_fix'
state['phases'][phase]['completed_at'] = None
# Reset checkpoints that need re-verification
if phase == 'REVIEWING':
state['phases'][phase]['checkpoints']['review_passed'] = {'status': 'pending', 'at': None, 'data': {}}
elif phase == 'SECURITY_REVIEW':
state['phases'][phase]['checkpoints']['security_passed'] = {'status': 'pending', 'at': None, 'data': {}}
# Set IMPLEMENTING as needing re-entry
state['phases']['IMPLEMENTING']['status'] = 'needs_reentry'
state['current_phase'] = 'IMPLEMENTING'
save_gate_state(version, state)
return {
'action': 'FIX_REQUIRED',
'return_to': 'IMPLEMENTING',
'issues': issues,
'iteration': fix_loop_entry['iteration']
}
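# Illustrative sketch (not part of the original script): what a failed review
# looks like when routed through the fix loop. The issue strings and version
# id are assumptions invented for the example.
def _demo_fix_loop(version: str = 'v001') -> dict:
    issues = ['lint: unused variable in src/app.ts',
              'type-check: missing return type on fetchUser()']
    # Marks REVIEWING as needs_fix and moves the workflow back to IMPLEMENTING
    return handle_fix_loop('REVIEWING', version, issues)
    # -> {'action': 'FIX_REQUIRED', 'return_to': 'IMPLEMENTING', ...}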
def get_blockers(version: str) -> list[dict]:
"""Get all current blocking conditions."""
state = load_gate_state(version)
current_phase = state.get('current_phase', 'INITIALIZING')
blockers = []
# Check current phase checkpoints
phase_state = state['phases'].get(current_phase, {})
checkpoints = phase_state.get('checkpoints', {})
for cp_name, cp_state in checkpoints.items():
if cp_state.get('status') != 'passed':
blockers.append({
'type': 'checkpoint',
'phase': current_phase,
'name': cp_name,
'status': cp_state.get('status', 'pending')
})
# Check if in fix loop
fix_loops = state.get('fix_loops', [])
if fix_loops:
latest = fix_loops[-1]
if state['phases'].get(latest['from_phase'], {}).get('status') == 'needs_fix':
blockers.append({
'type': 'fix_loop',
'phase': latest['from_phase'],
'issues': latest['issues'],
'iteration': latest['iteration']
})
return blockers
def show_status(version: str):
"""Display full gate state status."""
state = load_gate_state(version)
print()
print("=" * 70)
print(" PHASE GATE STATUS".center(70))
print("=" * 70)
print(f" Version: {version}")
print(f" Current Phase: {state.get('current_phase', 'UNKNOWN')}")
print(f" Updated: {state.get('updated_at', 'N/A')}")
print("=" * 70)
for phase in PHASES_ORDER:
phase_state = state['phases'].get(phase, {})
status = phase_state.get('status', 'not_started')
        # Status icon
        if status == 'completed':
            icon = "✅"
        elif status == 'in_progress':
            icon = "🔄"
        elif status == 'needs_fix':
            icon = "🔧"
        elif status == 'needs_reentry':
            icon = "↩️ "
        else:
            icon = "⬜"
print(f"\n {icon} {phase}: {status}")
# Show checkpoints
checkpoints = phase_state.get('checkpoints', {})
if checkpoints:
for cp_name, cp_state in checkpoints.items():
cp_status = cp_state.get('status', 'pending')
cp_icon = "" if cp_status == 'passed' else "" if cp_status == 'pending' else ""
print(f" {cp_icon} {cp_name}: {cp_status}")
# Show fix loops
fix_loops = state.get('fix_loops', [])
if fix_loops:
print("\n" + "-" * 70)
print(" FIX LOOP HISTORY")
print("-" * 70)
for fl in fix_loops[-5:]: # Show last 5
print(f" [{fl['timestamp'][:19]}] {fl['from_phase']} → IMPLEMENTING (iteration {fl['iteration']})")
for issue in fl['issues'][:3]:
print(f" - {issue[:60]}")
print("\n" + "=" * 70)
# ============================================================================
# CLI Interface
# ============================================================================
def main():
parser = argparse.ArgumentParser(description="Phase gate enforcement system")
subparsers = parser.add_subparsers(dest='command', help='Commands')
# can-enter command
enter_parser = subparsers.add_parser('can-enter', help='Check if can enter a phase')
enter_parser.add_argument('phase', choices=PHASES_ORDER, help='Target phase')
# complete command
complete_parser = subparsers.add_parser('complete', help='Complete a phase')
complete_parser.add_argument('phase', choices=PHASES_ORDER, help='Phase to complete')
# checkpoint command
cp_parser = subparsers.add_parser('checkpoint', help='Save a checkpoint')
cp_parser.add_argument('name', help='Checkpoint name')
cp_parser.add_argument('--phase', required=True, help='Phase for checkpoint')
cp_parser.add_argument('--status', default='passed', choices=['passed', 'failed', 'pending'])
cp_parser.add_argument('--data', help='JSON data to store')
# verify command
verify_parser = subparsers.add_parser('verify', help='Verify phase checkpoints')
verify_parser.add_argument('phase', choices=PHASES_ORDER, help='Phase to verify')
# blockers command
subparsers.add_parser('blockers', help='Show current blockers')
# fix-loop command
fix_parser = subparsers.add_parser('fix-loop', help='Trigger fix loop')
fix_parser.add_argument('phase', choices=FIX_LOOP_PHASES, help='Phase triggering fix')
fix_parser.add_argument('--issues', nargs='+', help='Issues that need fixing')
# status command
subparsers.add_parser('status', help='Show full gate state')
# enter command (actually enter a phase)
do_enter_parser = subparsers.add_parser('enter', help='Enter a phase')
do_enter_parser.add_argument('phase', choices=PHASES_ORDER, help='Phase to enter')
args = parser.parse_args()
# Get active version
version = get_active_version()
if not version and args.command not in ['status', 'blockers']:
print("Error: No active workflow version")
print("Run /workflow:spawn to start a new workflow")
sys.exit(2)
if args.command == 'can-enter':
can_enter, failures = can_enter_phase(args.phase, version)
if can_enter:
print(f"✅ CAN ENTER: {args.phase}")
print("All entry requirements met")
sys.exit(0)
else:
print(f"❌ BLOCKED: Cannot enter {args.phase}")
print("\nBlocking conditions:")
for f in failures:
print(f" - {f}")
sys.exit(1)
elif args.command == 'enter':
success, failures = enter_phase(args.phase, version)
if success:
print(f"✅ ENTERED: {args.phase}")
sys.exit(0)
else:
print(f"❌ BLOCKED: Cannot enter {args.phase}")
for f in failures:
print(f" - {f}")
sys.exit(1)
elif args.command == 'complete':
success, failures = complete_phase(args.phase, version)
if success:
print(f"✅ COMPLETED: {args.phase}")
sys.exit(0)
else:
print(f"❌ BLOCKED: Cannot complete {args.phase}")
print("\nMissing checkpoints:")
for f in failures:
print(f" - {f}")
sys.exit(1)
elif args.command == 'checkpoint':
data = None
if args.data:
try:
data = json.loads(args.data)
except json.JSONDecodeError:
data = {'raw': args.data}
save_checkpoint(args.phase, args.name, args.status, version, data)
icon = "" if args.status == 'passed' else "" if args.status == 'failed' else ""
print(f"{icon} Checkpoint saved: {args.name} = {args.status}")
sys.exit(0)
elif args.command == 'verify':
passed, failures = verify_phase_checkpoints(args.phase, version)
if passed:
print(f"✅ VERIFIED: All checkpoints for {args.phase} passed")
sys.exit(0)
else:
print(f"❌ FAILED: {args.phase} has incomplete checkpoints")
for f in failures:
print(f" - {f}")
sys.exit(1)
elif args.command == 'blockers':
if not version:
print("No active workflow")
sys.exit(0)
blockers = get_blockers(version)
if not blockers:
print("✅ No blockers - workflow can proceed")
sys.exit(0)
print("❌ CURRENT BLOCKERS:")
print("-" * 50)
for b in blockers:
if b['type'] == 'checkpoint':
print(f" [{b['phase']}] Checkpoint '{b['name']}': {b['status']}")
elif b['type'] == 'fix_loop':
print(f" [FIX REQUIRED] From {b['phase']} (iteration {b['iteration']})")
for issue in b.get('issues', [])[:3]:
print(f" - {issue[:60]}")
sys.exit(1)
elif args.command == 'fix-loop':
issues = args.issues or ['Unspecified issues found']
result = handle_fix_loop(args.phase, version, issues)
print(f"🔧 FIX LOOP TRIGGERED")
print(f" From: {args.phase}")
print(f" Return to: {result['return_to']}")
print(f" Iteration: {result['iteration']}")
print(f"\n Issues to fix:")
for issue in result['issues']:
print(f" - {issue}")
print(f"\n 👉 Run /workflow:resume to continue fixing")
sys.exit(1) # Exit 1 to indicate fix needed
elif args.command == 'status':
if version:
show_status(version)
else:
print("No active workflow")
sys.exit(0)
else:
parser.print_help()
sys.exit(0)
if __name__ == "__main__":
main()


@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""Post-write hook to update entity status in manifest."""
import argparse
import json
import os
from datetime import datetime
def load_manifest(manifest_path: str) -> dict | None:
"""Load manifest if it exists."""
if not os.path.exists(manifest_path):
return None
with open(manifest_path) as f:
return json.load(f)
def save_manifest(manifest_path: str, manifest: dict):
"""Save manifest to file."""
with open(manifest_path, "w") as f:
json.dump(manifest, f, indent=2)
def find_entity_by_path(manifest: dict, file_path: str) -> tuple:
"""Find entity by file path, return (entity_type, index, entity)."""
entities = manifest.get("entities", {})
for entity_type in ["pages", "components", "api_endpoints", "database_tables"]:
for idx, entity in enumerate(entities.get(entity_type, [])):
if entity.get("file_path") == file_path:
return (entity_type, idx, entity)
return (None, None, None)
def main():
parser = argparse.ArgumentParser(description="Post-write hook")
parser.add_argument("--manifest", required=True, help="Path to manifest")
parser.add_argument("--file", help="File that was written")
args = parser.parse_args()
manifest = load_manifest(args.manifest)
if manifest is None:
return 0
# If file provided, update entity status
if args.file:
        # Normalize the file path: strip only a leading "./" prefix
        # (str.lstrip('./') strips characters, not a prefix, and would
        # mangle paths such as ".env")
        file_path = args.file[2:] if args.file.startswith('./') else args.file
        entity_type, idx, entity = find_entity_by_path(manifest, args.file)
        # Fall back to the normalized path
        if not entity:
            entity_type, idx, entity = find_entity_by_path(manifest, file_path)
if entity and entity.get("status") == "APPROVED":
manifest["entities"][entity_type][idx]["status"] = "IMPLEMENTED"
manifest["entities"][entity_type][idx]["implemented_at"] = datetime.now().isoformat()
# Add to history (ensure it exists)
if "state" not in manifest:
manifest["state"] = {}
if "revision_history" not in manifest["state"]:
manifest["state"]["revision_history"] = []
manifest["state"]["revision_history"].append({
"action": "ENTITY_IMPLEMENTED",
"timestamp": datetime.now().isoformat(),
"details": f"Implemented {entity.get('id', 'unknown')}"
})
save_manifest(args.manifest, manifest)
print(f"GUARDRAIL: Updated {entity.get('id')} to IMPLEMENTED")
return 0
if __name__ == "__main__":
exit(main())
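# Illustrative sketch (not part of the original script): locating an entity by
# file path in an in-memory manifest. The manifest content is an assumption
# invented for the example.
def _demo_find_entity() -> tuple:
    manifest = {'entities': {'components': [
        {'id': 'component_Button',
         'file_path': 'src/components/Button.tsx',
         'status': 'APPROVED'}]}}
    return find_entity_by_path(manifest, 'src/components/Button.tsx')
    # -> ('components', 0, {...the Button entity...})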


@@ -0,0 +1,601 @@
#!/usr/bin/env python3
"""
Comprehensive Security Scanner for guardrail workflow.
Performs static security analysis on codebase:
- Hardcoded secrets and credentials
- SQL injection vulnerabilities
- XSS vulnerabilities
- Path traversal risks
- Insecure dependencies
- Authentication/Authorization issues
- OWASP Top 10 patterns
Usage:
python3 security_scan.py --project-dir . [--severity critical|high|medium|low]
"""
import argparse
import json
import os
import re
import sys
from pathlib import Path
from typing import NamedTuple
from dataclasses import dataclass, field
@dataclass
class SecurityIssue:
"""Security vulnerability finding."""
severity: str # CRITICAL, HIGH, MEDIUM, LOW, INFO
category: str
title: str
description: str
file_path: str
line_number: int | None
code_snippet: str
recommendation: str
cwe_id: str | None = None
owasp_category: str | None = None
@dataclass
class ScanResult:
"""Complete scan results."""
issues: list[SecurityIssue] = field(default_factory=list)
files_scanned: int = 0
scan_duration: float = 0.0
# Security patterns organized by category
SECURITY_PATTERNS = {
'hardcoded_secrets': {
'severity': 'CRITICAL',
'cwe': 'CWE-798',
'owasp': 'A07:2021-Identification and Authentication Failures',
'patterns': [
# API Keys
(r'''(?:api[_-]?key|apikey)\s*[:=]\s*['"]((?!process\.env)[^'"]{10,})['"']''', 'Hardcoded API key'),
(r'''(?:api[_-]?secret|apisecret)\s*[:=]\s*['"]((?!process\.env)[^'"]{10,})['"']''', 'Hardcoded API secret'),
# Passwords
(r'''(?:password|passwd|pwd)\s*[:=]\s*['"]([^'"]{4,})['"']''', 'Hardcoded password'),
# Private keys
(r'''-----BEGIN (?:RSA |EC |DSA )?PRIVATE KEY-----''', 'Embedded private key'),
# AWS credentials
(r'''(?:aws[_-]?access[_-]?key[_-]?id|aws[_-]?secret)\s*[:=]\s*['"]([A-Z0-9]{16,})['"']''', 'AWS credential'),
(r'''AKIA[0-9A-Z]{16}''', 'AWS Access Key ID'),
# JWT secrets
(r'''(?:jwt[_-]?secret|token[_-]?secret)\s*[:=]\s*['"]([^'"]{8,})['"']''', 'Hardcoded JWT secret'),
# Database connection strings
(r'''(?:mongodb|postgres|mysql|redis):\/\/[^:]+:[^@]+@''', 'Database credentials in connection string'),
# Generic secrets
(r'''(?:secret|token|auth)[_-]?(?:key)?\s*[:=]\s*['"]([^'"]{8,})['"']''', 'Potential hardcoded secret'),
]
},
'sql_injection': {
'severity': 'CRITICAL',
'cwe': 'CWE-89',
'owasp': 'A03:2021-Injection',
'patterns': [
# String concatenation in queries
(r'''(?:query|sql|execute)\s*\(\s*[`'"].*\$\{''', 'SQL injection via template literal'),
(r'''(?:query|sql|execute)\s*\(\s*['"].*\+\s*(?:req\.|params\.|body\.|query\.)''', 'SQL injection via concatenation'),
(r'''(?:SELECT|INSERT|UPDATE|DELETE|FROM|WHERE).*\$\{''', 'Raw SQL with template interpolation'),
# Raw queries
(r'''\.raw\s*\(\s*[`'"].*\$\{''', 'Raw query with interpolation'),
(r'''prisma\.\$queryRaw\s*`[^`]*\$\{''', 'Prisma raw query with interpolation'),
]
},
'xss': {
'severity': 'HIGH',
'cwe': 'CWE-79',
'owasp': 'A03:2021-Injection',
'patterns': [
# React dangerouslySetInnerHTML
(r'''dangerouslySetInnerHTML\s*=\s*\{\s*\{__html:\s*(?!DOMPurify|sanitize)''', 'Unsanitized dangerouslySetInnerHTML'),
# innerHTML assignment
(r'''\.innerHTML\s*=\s*(?!['"`]<)''', 'Direct innerHTML assignment'),
# document.write
(r'''document\.write\s*\(''', 'document.write usage'),
# eval with user input
(r'''eval\s*\(\s*(?:req\.|params\.|body\.|query\.|props\.)''', 'eval with user input'),
# jQuery html() with user input
(r'''\$\([^)]+\)\.html\s*\(\s*(?!['"`])''', 'jQuery html() with dynamic content'),
]
},
'path_traversal': {
'severity': 'HIGH',
'cwe': 'CWE-22',
'owasp': 'A01:2021-Broken Access Control',
'patterns': [
# File operations with user input
(r'''(?:readFile|writeFile|readFileSync|writeFileSync|createReadStream)\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'File operation with user input'),
(r'''(?:readFile|writeFile)\s*\(\s*[`'"].*\$\{(?:req\.|params\.|body\.|query\.)''', 'File path with user input interpolation'),
# Path.join with user input (without validation)
(r'''path\.(?:join|resolve)\s*\([^)]*(?:req\.|params\.|body\.|query\.)''', 'Path operation with user input'),
]
},
'command_injection': {
'severity': 'CRITICAL',
'cwe': 'CWE-78',
'owasp': 'A03:2021-Injection',
'patterns': [
# exec/spawn with user input
(r'''(?:exec|execSync|spawn|spawnSync)\s*\(\s*[`'"].*\$\{''', 'Command injection via template literal'),
(r'''(?:exec|execSync|spawn|spawnSync)\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'Command execution with user input'),
# child_process with concatenation
(r'''child_process.*\(\s*['"].*\+\s*(?:req\.|params\.|body\.|query\.)''', 'Command injection via concatenation'),
]
},
'insecure_auth': {
'severity': 'HIGH',
'cwe': 'CWE-287',
'owasp': 'A07:2021-Identification and Authentication Failures',
'patterns': [
# Weak JWT algorithms
(r'''algorithm\s*[:=]\s*['"](?:none|HS256)['"']''', 'Weak JWT algorithm'),
# No password hashing
(r'''password\s*===?\s*(?:req\.|body\.|params\.)''', 'Plain text password comparison'),
# Disabled security
(r'''(?:verify|secure|https|ssl)\s*[:=]\s*false''', 'Security feature disabled'),
# Cookie without security flags
(r'''cookie\s*\([^)]*\)\s*(?!.*(?:httpOnly|secure|sameSite))''', 'Cookie without security flags'),
]
},
'sensitive_data_exposure': {
'severity': 'MEDIUM',
'cwe': 'CWE-200',
'owasp': 'A02:2021-Cryptographic Failures',
'patterns': [
# Logging sensitive data
(r'''console\.(?:log|info|debug)\s*\([^)]*(?:password|secret|token|key|credential)''', 'Logging sensitive data'),
# Error messages with sensitive info
(r'''(?:throw|Error)\s*\([^)]*(?:password|secret|token|key|sql|query)''', 'Sensitive info in error message'),
# HTTP instead of HTTPS
(r'''['"]http:\/\/(?!localhost|127\.0\.0\.1)''', 'HTTP URL (should be HTTPS)'),
]
},
'insecure_dependencies': {
'severity': 'MEDIUM',
'cwe': 'CWE-1104',
'owasp': 'A06:2021-Vulnerable and Outdated Components',
'patterns': [
# Known vulnerable patterns
(r'''require\s*\(\s*['"](?:serialize-javascript|lodash\.template|node-serialize)['"]\s*\)''', 'Known vulnerable package'),
# Outdated crypto
(r'''crypto\.createCipher\s*\(''', 'Deprecated crypto.createCipher'),
(r'''md5\s*\(|createHash\s*\(\s*['"]md5['"]''', 'MD5 hash usage (weak)'),
(r'''sha1\s*\(|createHash\s*\(\s*['"]sha1['"]''', 'SHA1 hash usage (weak)'),
]
},
'cors_misconfiguration': {
'severity': 'MEDIUM',
'cwe': 'CWE-942',
'owasp': 'A01:2021-Broken Access Control',
'patterns': [
# Wildcard CORS
(r'''(?:Access-Control-Allow-Origin|origin)\s*[:=]\s*['"]\*['"']''', 'Wildcard CORS origin'),
(r'''cors\s*\(\s*\{[^}]*origin\s*:\s*true''', 'CORS allows all origins'),
# Credentials with wildcard
(r'''credentials\s*:\s*true[^}]*origin\s*:\s*['"]\*['"']''', 'CORS credentials with wildcard origin'),
]
},
'insecure_randomness': {
'severity': 'LOW',
'cwe': 'CWE-330',
'owasp': 'A02:2021-Cryptographic Failures',
'patterns': [
# Math.random for security
(r'''Math\.random\s*\(\s*\)[^;]*(?:token|secret|password|key|id|session)''', 'Math.random for security-sensitive value'),
(r'''(?:token|secret|key|session)[^=]*=\s*Math\.random''', 'Math.random for security-sensitive value'),
]
},
'debug_code': {
'severity': 'LOW',
'cwe': 'CWE-489',
'owasp': 'A05:2021-Security Misconfiguration',
'patterns': [
# Debug statements
(r'''console\.(?:log|debug|info|warn)\s*\(''', 'Console statement (remove in production)'),
(r'''debugger\s*;''', 'Debugger statement'),
# TODO/FIXME security notes
(r'''(?:TODO|FIXME|HACK|XXX).*(?:security|auth|password|secret|vulnerable)''', 'Security-related TODO'),
]
},
'nosql_injection': {
'severity': 'HIGH',
'cwe': 'CWE-943',
'owasp': 'A03:2021-Injection',
'patterns': [
# MongoDB injection
(r'''\.find\s*\(\s*\{[^}]*\$(?:where|regex|gt|lt|ne|in|nin|or|and).*(?:req\.|params\.|body\.|query\.)''', 'NoSQL injection risk'),
(r'''\.find\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'Direct user input in query'),
]
},
'prototype_pollution': {
'severity': 'HIGH',
'cwe': 'CWE-1321',
'owasp': 'A03:2021-Injection',
'patterns': [
# Deep merge without protection
(r'''(?:merge|extend|assign)\s*\([^)]*(?:req\.|params\.|body\.|query\.)''', 'Potential prototype pollution via merge'),
(r'''Object\.assign\s*\(\s*\{\}[^)]*(?:req\.|params\.|body\.|query\.)''', 'Object.assign with user input'),
# __proto__ access
(r'''__proto__''', 'Direct __proto__ access'),
(r'''constructor\s*\[\s*['"]prototype['"]''', 'Prototype access via constructor'),
]
},
'ssrf': {
'severity': 'HIGH',
'cwe': 'CWE-918',
'owasp': 'A10:2021-Server-Side Request Forgery',
'patterns': [
# Fetch/axios with user URL
(r'''(?:fetch|axios\.get|axios\.post|http\.get|https\.get)\s*\(\s*(?:req\.|params\.|body\.|query\.)''', 'SSRF via user-controlled URL'),
(r'''(?:fetch|axios)\s*\(\s*[`'"].*\$\{(?:req\.|params\.|body\.|query\.)''', 'SSRF via URL interpolation'),
]
},
}
# File extensions to scan
SCAN_EXTENSIONS = {'.ts', '.tsx', '.js', '.jsx', '.mjs', '.cjs'}
# Directories to skip
SKIP_DIRS = {'node_modules', '.next', 'dist', 'build', '.git', 'coverage', '__pycache__'}
def find_source_files(project_dir: str) -> list[str]:
"""Find all source files to scan."""
files = []
for root, dirs, filenames in os.walk(project_dir):
# Skip excluded directories
dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
for filename in filenames:
ext = os.path.splitext(filename)[1]
if ext in SCAN_EXTENSIONS:
files.append(os.path.join(root, filename))
return files
def scan_file(file_path: str) -> list[SecurityIssue]:
"""Scan a single file for security issues."""
issues = []
try:
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
content = f.read()
lines = content.split('\n')
except (IOError, OSError):
return []
for category, config in SECURITY_PATTERNS.items():
for pattern, title in config['patterns']:
try:
for match in re.finditer(pattern, content, re.IGNORECASE | re.MULTILINE):
# Find line number
line_start = content[:match.start()].count('\n') + 1
line_content = lines[line_start - 1] if line_start <= len(lines) else ''
# Skip if in comment
stripped = line_content.strip()
if stripped.startswith('//') or stripped.startswith('*') or stripped.startswith('/*'):
continue
# Skip if looks like env var reference
if 'process.env' in line_content or 'import.meta.env' in line_content:
continue
issues.append(SecurityIssue(
severity=config['severity'],
category=category,
title=title,
description=get_description(category),
file_path=file_path,
line_number=line_start,
code_snippet=line_content.strip()[:100],
recommendation=get_recommendation(category),
cwe_id=config.get('cwe'),
owasp_category=config.get('owasp')
))
except re.error:
continue
return issues
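# Sketch (not wired into the CLI): scan a throwaway file holding a fabricated
# key to show the shape of a finding.
def _example_scan_file():
    import tempfile
    with tempfile.NamedTemporaryFile('w', suffix='.ts', delete=False) as tmp:
        tmp.write('const apiKey = "sk_live_1234567890abcdef";\n')  # made-up value
        path = tmp.name
    try:
        for issue in scan_file(path):
            # Expected: CRITICAL / 'Hardcoded API key' at line 1
            print(issue.severity, issue.title, f"line {issue.line_number}")
    finally:
        os.unlink(path)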
def get_description(category: str) -> str:
"""Get detailed description for category."""
descriptions = {
'hardcoded_secrets': 'Credentials or secrets hardcoded in source code can be extracted by attackers.',
'sql_injection': 'User input directly in SQL queries allows attackers to manipulate database operations.',
'xss': 'Unsanitized user input rendered in HTML allows attackers to inject malicious scripts.',
'path_traversal': 'User input in file paths allows attackers to access arbitrary files.',
'command_injection': 'User input in system commands allows attackers to execute arbitrary commands.',
'insecure_auth': 'Weak authentication mechanisms can be bypassed by attackers.',
'sensitive_data_exposure': 'Sensitive information may be exposed through logs or errors.',
'insecure_dependencies': 'Known vulnerable packages or weak cryptographic functions.',
'cors_misconfiguration': 'Overly permissive CORS allows unauthorized cross-origin requests.',
'insecure_randomness': 'Predictable random values can be guessed by attackers.',
'debug_code': 'Debug code in production may expose sensitive information.',
'nosql_injection': 'User input in NoSQL queries allows attackers to manipulate database operations.',
'prototype_pollution': 'Modifying object prototypes can lead to code execution.',
'ssrf': 'User-controlled URLs allow attackers to make requests to internal services.',
}
return descriptions.get(category, 'Security vulnerability detected.')
def get_recommendation(category: str) -> str:
"""Get remediation recommendation for category."""
recommendations = {
'hardcoded_secrets': 'Use environment variables (process.env) or a secrets manager.',
'sql_injection': 'Use parameterized queries or ORM methods. Never concatenate user input.',
'xss': 'Sanitize user input with DOMPurify or escape HTML entities.',
'path_traversal': 'Validate and sanitize file paths. Use path.basename() and whitelist allowed paths.',
'command_injection': 'Avoid shell commands with user input. Use execFile with argument arrays.',
'insecure_auth': 'Use strong algorithms (RS256), hash passwords with bcrypt, enable all security flags.',
'sensitive_data_exposure': 'Remove sensitive data from logs. Use generic error messages.',
'insecure_dependencies': 'Update to latest secure versions. Use crypto.createCipheriv and SHA-256+.',
'cors_misconfiguration': 'Specify exact allowed origins. Do not use wildcard with credentials.',
'insecure_randomness': 'Use crypto.randomBytes() or crypto.randomUUID() for security-sensitive values.',
'debug_code': 'Remove console statements and debugger in production builds.',
'nosql_injection': 'Sanitize input and use schema validation. Avoid $where operators.',
'prototype_pollution': 'Use Object.create(null) or validate/sanitize object keys.',
'ssrf': 'Validate URLs against allowlist. Block internal IP ranges.',
}
return recommendations.get(category, 'Review and remediate the security issue.')
def check_package_json(project_dir: str) -> list[SecurityIssue]:
"""Check package.json for security issues."""
issues = []
pkg_path = os.path.join(project_dir, 'package.json')
if not os.path.exists(pkg_path):
return []
try:
with open(pkg_path, 'r') as f:
pkg = json.load(f)
except (json.JSONDecodeError, IOError):
return []
# Known vulnerable packages (simplified check)
vulnerable_packages = {
'lodash': '< 4.17.21',
'axios': '< 0.21.1',
'node-fetch': '< 2.6.1',
'minimist': '< 1.2.6',
'serialize-javascript': '< 3.1.0',
}
all_deps = {}
all_deps.update(pkg.get('dependencies', {}))
all_deps.update(pkg.get('devDependencies', {}))
for pkg_name in vulnerable_packages:
if pkg_name in all_deps:
issues.append(SecurityIssue(
severity='MEDIUM',
category='insecure_dependencies',
title=f'Potentially vulnerable package: {pkg_name}',
description=f'Package {pkg_name} may have known vulnerabilities. Run npm audit for details.',
file_path=pkg_path,
line_number=None,
code_snippet=f'"{pkg_name}": "{all_deps[pkg_name]}"',
recommendation='Run `npm audit` and update to the latest secure version.',
cwe_id='CWE-1104',
owasp_category='A06:2021-Vulnerable and Outdated Components'
))
return issues
def check_env_files(project_dir: str) -> list[SecurityIssue]:
"""Check for exposed environment files."""
issues = []
env_files = ['.env', '.env.local', '.env.production', '.env.development']
for env_file in env_files:
env_path = os.path.join(project_dir, env_file)
if os.path.exists(env_path):
# Check if in .gitignore
gitignore_path = os.path.join(project_dir, '.gitignore')
in_gitignore = False
if os.path.exists(gitignore_path):
try:
with open(gitignore_path, 'r') as f:
gitignore_content = f.read()
if env_file in gitignore_content or '.env*' in gitignore_content:
in_gitignore = True
except IOError:
pass
if not in_gitignore:
issues.append(SecurityIssue(
severity='HIGH',
category='sensitive_data_exposure',
title=f'Environment file not in .gitignore: {env_file}',
description='Environment files containing secrets may be committed to version control.',
file_path=env_path,
line_number=None,
code_snippet=env_file,
recommendation=f'Add {env_file} to .gitignore immediately.',
cwe_id='CWE-200',
owasp_category='A02:2021-Cryptographic Failures'
))
return issues
def run_scan(project_dir: str, min_severity: str = 'LOW') -> ScanResult:
"""Run full security scan."""
import time
start_time = time.time()
severity_order = ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW', 'INFO']
min_severity_index = severity_order.index(min_severity.upper()) if min_severity.upper() in severity_order else 3
result = ScanResult()
# Find and scan source files
files = find_source_files(project_dir)
result.files_scanned = len(files)
for file_path in files:
issues = scan_file(file_path)
result.issues.extend(issues)
# Additional checks
result.issues.extend(check_package_json(project_dir))
result.issues.extend(check_env_files(project_dir))
# Filter by severity
result.issues = [
i for i in result.issues
if severity_order.index(i.severity) <= min_severity_index
]
# Sort by severity
result.issues.sort(key=lambda x: severity_order.index(x.severity))
result.scan_duration = time.time() - start_time
return result
def format_report(result: ScanResult, format_type: str = 'text') -> str:
"""Format scan results."""
if format_type == 'json':
return json.dumps({
'files_scanned': result.files_scanned,
'scan_duration': result.scan_duration,
'total_issues': len(result.issues),
'by_severity': {
'CRITICAL': len([i for i in result.issues if i.severity == 'CRITICAL']),
'HIGH': len([i for i in result.issues if i.severity == 'HIGH']),
'MEDIUM': len([i for i in result.issues if i.severity == 'MEDIUM']),
'LOW': len([i for i in result.issues if i.severity == 'LOW']),
},
'issues': [
{
'severity': i.severity,
'category': i.category,
'title': i.title,
'description': i.description,
'file_path': i.file_path,
'line_number': i.line_number,
'code_snippet': i.code_snippet,
'recommendation': i.recommendation,
'cwe_id': i.cwe_id,
'owasp_category': i.owasp_category,
}
for i in result.issues
]
}, indent=2)
# Text format
lines = []
# Header
lines.append("")
lines.append("=" * 80)
lines.append(" SECURITY SCAN REPORT")
lines.append("=" * 80)
lines.append("")
# Summary
critical = len([i for i in result.issues if i.severity == 'CRITICAL'])
high = len([i for i in result.issues if i.severity == 'HIGH'])
medium = len([i for i in result.issues if i.severity == 'MEDIUM'])
low = len([i for i in result.issues if i.severity == 'LOW'])
lines.append("SUMMARY")
lines.append("-" * 80)
lines.append(f" Files scanned: {result.files_scanned}")
lines.append(f" Scan duration: {result.scan_duration:.2f}s")
lines.append(f" Total issues: {len(result.issues)}")
lines.append("")
lines.append(" By Severity:")
lines.append(f" CRITICAL: {critical}")
lines.append(f" HIGH: {high}")
lines.append(f" MEDIUM: {medium}")
lines.append(f" LOW: {low}")
lines.append("")
# Issues by severity
if result.issues:
for severity in ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW']:
severity_issues = [i for i in result.issues if i.severity == severity]
if severity_issues:
icon = {'CRITICAL': '!!!', 'HIGH': '!!', 'MEDIUM': '!', 'LOW': '.'}[severity]
lines.append(f"{icon} {severity} SEVERITY ISSUES ({len(severity_issues)})")
lines.append("-" * 80)
for idx, issue in enumerate(severity_issues, 1):
lines.append(f" [{idx}] {issue.title}")
lines.append(f" Category: {issue.category}")
if issue.file_path:
loc = f"{issue.file_path}:{issue.line_number}" if issue.line_number else issue.file_path
lines.append(f" Location: {loc}")
if issue.code_snippet:
lines.append(f" Code: {issue.code_snippet[:60]}...")
if issue.cwe_id:
lines.append(f" CWE: {issue.cwe_id}")
if issue.owasp_category:
lines.append(f" OWASP: {issue.owasp_category}")
lines.append(f" Fix: {issue.recommendation}")
lines.append("")
else:
lines.append("No security issues found!")
lines.append("")
# Result
lines.append("=" * 80)
if critical > 0:
lines.append(f" RESULT: CRITICAL ({critical} critical issues require immediate attention)")
elif high > 0:
lines.append(f" RESULT: FAIL ({high} high severity issues found)")
elif medium > 0:
lines.append(f" RESULT: WARNING ({medium} medium severity issues found)")
elif low > 0:
lines.append(f" RESULT: PASS WITH NOTES ({low} low severity issues)")
else:
lines.append(" RESULT: PASS (no security issues detected)")
lines.append("=" * 80)
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Security scanner for codebase")
parser.add_argument("--project-dir", default=".", help="Project directory to scan")
parser.add_argument("--severity", default="LOW",
choices=['CRITICAL', 'HIGH', 'MEDIUM', 'LOW'],
help="Minimum severity to report")
parser.add_argument("--json", action="store_true", help="Output as JSON")
parser.add_argument("--strict", action="store_true", help="Fail on any HIGH or above")
args = parser.parse_args()
result = run_scan(args.project_dir, args.severity)
format_type = 'json' if args.json else 'text'
print(format_report(result, format_type))
# Exit code
critical = len([i for i in result.issues if i.severity == 'CRITICAL'])
high = len([i for i in result.issues if i.severity == 'HIGH'])
if critical > 0:
return 2 # Critical issues
if args.strict and high > 0:
return 1 # High issues in strict mode
return 0
if __name__ == "__main__":
sys.exit(main())

@@ -0,0 +1,363 @@
#!/usr/bin/env python3
"""Task management utilities for the guardrail workflow."""
import argparse
import os
import sys
from datetime import datetime
from pathlib import Path
# Try to import yaml, fall back to basic parsing if not available
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
def parse_yaml_simple(content: str) -> dict:
"""Simple YAML parser for basic task files."""
result = {}
current_key = None
current_list = None
for line in content.split('\n'):
line = line.rstrip()
if not line or line.startswith('#'):
continue
# Handle list items
if line.startswith(' - '):
if current_list is not None:
current_list.append(line[4:].strip())
continue
# Handle key-value pairs
if ':' in line and not line.startswith(' '):
key, _, value = line.partition(':')
key = key.strip()
value = value.strip()
if value:
result[key] = value
current_list = None
else:
result[key] = []
current_list = result[key]
current_key = key
return result
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
with open(filepath, 'r') as f:
content = f.read()
if HAS_YAML:
return yaml.safe_load(content) or {}
return parse_yaml_simple(content)
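# Sketch of what the fallback parser yields for a typical flat task file
# (hypothetical task IDs):
def _example_parse_yaml_simple():
    sample = "id: task_001\ntitle: Login page\ndependencies:\n  - task_000\n"
    assert parse_yaml_simple(sample) == {
        'id': 'task_001', 'title': 'Login page', 'dependencies': ['task_000']}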
def save_yaml(filepath: str, data: dict):
"""Save data to YAML file."""
if HAS_YAML:
with open(filepath, 'w') as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False)
else:
# Simple YAML writer
lines = []
for key, value in data.items():
if isinstance(value, list):
lines.append(f"{key}:")
for item in value:
lines.append(f" - {item}")
elif isinstance(value, str) and '\n' in value:
lines.append(f"{key}: |")
for line in value.split('\n'):
lines.append(f" {line}")
else:
lines.append(f"{key}: {value}")
with open(filepath, 'w') as f:
f.write('\n'.join(lines))
# ============================================================================
# Version-Aware Task Directory
# ============================================================================
def get_workflow_dir() -> Path:
"""Get the .workflow directory path."""
return Path('.workflow')
def get_current_tasks_dir() -> str:
"""Get the tasks directory for the currently active workflow version.
Returns the version-specific tasks directory if a workflow is active,
otherwise falls back to 'tasks' for backward compatibility.
"""
current_path = get_workflow_dir() / 'current.yml'
if not current_path.exists():
return 'tasks' # Fallback for no active workflow
current = load_yaml(str(current_path))
version = current.get('active_version')
if not version:
return 'tasks' # Fallback
tasks_dir = get_workflow_dir() / 'versions' / version / 'tasks'
tasks_dir.mkdir(parents=True, exist_ok=True)
return str(tasks_dir)
# ============================================================================
# Task Operations
# ============================================================================
def find_tasks(tasks_dir: str, filters: dict = None) -> list:
"""Find all task files matching filters."""
tasks = []
tasks_path = Path(tasks_dir)
if not tasks_path.exists():
return tasks
for filepath in tasks_path.glob('**/*.yml'):
try:
task = load_yaml(str(filepath))
task['_filepath'] = str(filepath)
# Apply filters
if filters:
match = True
for key, value in filters.items():
if task.get(key) != value:
match = False
break
if not match:
continue
tasks.append(task)
except Exception as e:
print(f"Warning: Could not parse {filepath}: {e}", file=sys.stderr)
return tasks
def list_tasks(tasks_dir: str, status: str = None, agent: str = None):
"""List tasks with optional filtering."""
filters = {}
if status:
filters['status'] = status
if agent:
filters['agent'] = agent
tasks = find_tasks(tasks_dir, filters)
if not tasks:
print("No tasks found.")
return
# Group by status
by_status = {}
for task in tasks:
s = task.get('status', 'unknown')
if s not in by_status:
by_status[s] = []
by_status[s].append(task)
print("\n" + "=" * 60)
print("TASK LIST")
print("=" * 60)
status_order = ['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked']
for s in status_order:
if s in by_status:
print(f"\n{s.upper()} ({len(by_status[s])})")
print("-" * 40)
for task in by_status[s]:
agent = task.get('agent', '?')
priority = task.get('priority', 'medium')
print(f" [{agent}] {task.get('id', 'unknown')} ({priority})")
print(f" {task.get('title', 'No title')}")
def get_next_task(tasks_dir: str, agent: str) -> dict:
"""Get next available task for an agent."""
tasks = find_tasks(tasks_dir, {'agent': agent, 'status': 'pending'})
if not tasks:
return None
# Sort by priority (high > medium > low)
priority_order = {'high': 0, 'medium': 1, 'low': 2}
tasks.sort(key=lambda t: priority_order.get(t.get('priority', 'medium'), 1))
# Check dependencies
for task in tasks:
deps = task.get('dependencies', [])
if not deps:
return task
# Check if all dependencies are completed
all_deps_done = True
for dep_id in deps:
dep_tasks = find_tasks(tasks_dir, {'id': dep_id})
if dep_tasks and dep_tasks[0].get('status') != 'completed':
all_deps_done = False
break
if all_deps_done:
return task
return None
def update_task_status(tasks_dir: str, task_id: str, new_status: str, notes: str = None):
"""Update task status."""
tasks = find_tasks(tasks_dir, {'id': task_id})
if not tasks:
print(f"Error: Task {task_id} not found")
return False
task = tasks[0]
filepath = task['_filepath']
# Remove internal field
del task['_filepath']
# Update status
task['status'] = new_status
if new_status == 'completed':
task['completed_at'] = datetime.now().isoformat()
if notes:
task['review_notes'] = notes
save_yaml(filepath, task)
print(f"Updated {task_id} to {new_status}")
return True
def complete_all_tasks(tasks_dir: str):
"""Mark all non-completed tasks as completed."""
tasks = find_tasks(tasks_dir)
completed_count = 0
for task in tasks:
if task.get('status') != 'completed':
filepath = task['_filepath']
del task['_filepath']
task['status'] = 'completed'
task['completed_at'] = datetime.now().isoformat()
save_yaml(filepath, task)
completed_count += 1
print(f" Completed: {task.get('id', 'unknown')}")
print(f"\nMarked {completed_count} task(s) as completed.")
return completed_count
def show_status(tasks_dir: str, manifest_path: str):
"""Show overall workflow status."""
tasks = find_tasks(tasks_dir)
# Count by status
status_counts = {}
agent_counts = {'frontend': {'pending': 0, 'completed': 0},
'backend': {'pending': 0, 'completed': 0},
'reviewer': {'pending': 0}}
for task in tasks:
s = task.get('status', 'unknown')
status_counts[s] = status_counts.get(s, 0) + 1
agent = task.get('agent', 'unknown')
if agent in agent_counts:
if s == 'pending':
agent_counts[agent]['pending'] += 1
elif s == 'completed':
if 'completed' in agent_counts[agent]:
agent_counts[agent]['completed'] += 1
print("\n" + "" + "" * 58 + "")
print("" + "WORKFLOW STATUS".center(58) + "")
print("" + "" * 58 + "")
print("" + " TASKS BY STATUS".ljust(58) + "")
print("" + f" ⏳ Pending: {status_counts.get('pending', 0)}".ljust(58) + "")
print("" + f" 🔄 In Progress: {status_counts.get('in_progress', 0)}".ljust(58) + "")
print("" + f" 🔍 Review: {status_counts.get('review', 0)}".ljust(58) + "")
print("" + f" ✅ Approved: {status_counts.get('approved', 0)}".ljust(58) + "")
print("" + f" ✓ Completed: {status_counts.get('completed', 0)}".ljust(58) + "")
print("" + f" 🚫 Blocked: {status_counts.get('blocked', 0)}".ljust(58) + "")
print("" + "" * 58 + "")
print("" + " TASKS BY AGENT".ljust(58) + "")
print("" + f" 🎨 Frontend: {agent_counts['frontend']['pending']} pending, {agent_counts['frontend']['completed']} completed".ljust(58) + "")
print("" + f" ⚙️ Backend: {agent_counts['backend']['pending']} pending, {agent_counts['backend']['completed']} completed".ljust(58) + "")
print("" + f" 🔍 Reviewer: {agent_counts['reviewer']['pending']} pending".ljust(58) + "")
print("" + "" * 58 + "")
def main():
parser = argparse.ArgumentParser(description="Task management for guardrail workflow")
subparsers = parser.add_subparsers(dest='command', help='Commands')
# list command
list_parser = subparsers.add_parser('list', help='List tasks')
list_parser.add_argument('--status', help='Filter by status')
list_parser.add_argument('--agent', help='Filter by agent')
list_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')
# next command
next_parser = subparsers.add_parser('next', help='Get next task for agent')
next_parser.add_argument('agent', choices=['frontend', 'backend', 'reviewer'])
next_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')
# update command
update_parser = subparsers.add_parser('update', help='Update task status')
update_parser.add_argument('task_id', help='Task ID')
update_parser.add_argument('status', choices=['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked'])
update_parser.add_argument('--notes', help='Review notes')
update_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')
# status command
status_parser = subparsers.add_parser('status', help='Show workflow status')
status_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')
status_parser.add_argument('--manifest', default='project_manifest.json', help='Manifest path')
# complete-all command
complete_all_parser = subparsers.add_parser('complete-all', help='Mark all tasks as completed')
complete_all_parser.add_argument('--tasks-dir', default=None, help='Tasks directory (defaults to current version)')
args = parser.parse_args()
# Resolve tasks_dir to version-specific directory if not explicitly provided
if hasattr(args, 'tasks_dir') and args.tasks_dir is None:
args.tasks_dir = get_current_tasks_dir()
if args.command == 'list':
list_tasks(args.tasks_dir, args.status, args.agent)
elif args.command == 'next':
task = get_next_task(args.tasks_dir, args.agent)
if task:
print(f"Next task for {args.agent}: {task.get('id')}")
print(f" Title: {task.get('title')}")
print(f" Files: {task.get('file_paths', [])}")
else:
print(f"No pending tasks for {args.agent}")
elif args.command == 'update':
update_task_status(args.tasks_dir, args.task_id, args.status, args.notes)
elif args.command == 'status':
show_status(args.tasks_dir, args.manifest)
elif args.command == 'complete-all':
complete_all_tasks(args.tasks_dir)
else:
parser.print_help()
if __name__ == "__main__":
main()

@@ -0,0 +1,715 @@
#!/usr/bin/env python3
"""
Task State Manager for parallel execution and dependency tracking.
Manages task-level states independently from workflow phase, enabling:
- Multiple tasks in_progress simultaneously (if no blocking dependencies)
- Dependency validation before task execution
- Task grouping by agent type for parallel frontend/backend work
"""
import argparse
import json
import os
import sys
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple
# Try to import yaml
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
# ============================================================================
# YAML Helpers
# ============================================================================
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
if HAS_YAML:
return yaml.safe_load(content) or {}
return parse_simple_yaml(content)
def parse_simple_yaml(content: str) -> dict:
"""Parse simple YAML without PyYAML dependency."""
result = {}
current_key = None
current_list = None
for line in content.split('\n'):
stripped = line.strip()
if not stripped or stripped.startswith('#'):
continue
if stripped.startswith('- '):
if current_list is not None:
value = stripped[2:].strip()
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
current_list.append(value)
continue
if ':' in stripped:
key, _, value = stripped.partition(':')
key = key.strip()
value = value.strip()
if value == '' or value == '[]':
current_key = key
current_list = []
result[key] = current_list
elif value == '{}':
result[key] = {}
current_list = None
elif value == 'null' or value == '~':
result[key] = None
current_list = None
elif value == 'true':
result[key] = True
current_list = None
elif value == 'false':
result[key] = False
current_list = None
elif value.isdigit():
result[key] = int(value)
current_list = None
else:
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
result[key] = value
current_list = None
return result
def save_yaml(filepath: str, data: dict):
"""Save data to YAML file."""
parent = os.path.dirname(filepath)
if parent: # makedirs('') raises, so only create a directory when there is one
os.makedirs(parent, exist_ok=True)
if HAS_YAML:
with open(filepath, 'w') as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
else:
# Fallback without PyYAML: JSON is a YAML 1.2 subset, so the file stays valid YAML
with open(filepath, 'w') as f:
json.dump(data, f, indent=2)
# ============================================================================
# Path Helpers
# ============================================================================
def get_workflow_dir() -> Path:
return Path('.workflow')
def get_current_state_path() -> Path:
return get_workflow_dir() / 'current.yml'
def get_active_version() -> Optional[str]:
"""Get the currently active workflow version."""
current_path = get_current_state_path()
if not current_path.exists():
return None
current = load_yaml(str(current_path))
return current.get('active_version')
def get_tasks_dir() -> Optional[Path]:
"""Get the tasks directory for the active version."""
version = get_active_version()
if not version:
return None
tasks_dir = get_workflow_dir() / 'versions' / version / 'tasks'
tasks_dir.mkdir(parents=True, exist_ok=True)
return tasks_dir
# ============================================================================
# Task State Constants
# ============================================================================
TASK_STATES = ['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked']
VALID_TASK_TRANSITIONS = {
'pending': ['in_progress', 'blocked'],
'in_progress': ['review', 'blocked', 'pending'], # Can go back if paused
'review': ['approved', 'in_progress'], # Can go back if changes needed
'approved': ['completed'],
'completed': [], # Terminal state
'blocked': ['pending'] # Unblocked when dependencies resolve
}
# ============================================================================
# Task Loading
# ============================================================================
def load_all_tasks() -> Dict[str, dict]:
"""Load all tasks from the current version's tasks directory."""
tasks_dir = get_tasks_dir()
if not tasks_dir or not tasks_dir.exists():
return {}
tasks = {}
for task_file in tasks_dir.glob('*.yml'):
task_id = task_file.stem
task = load_yaml(str(task_file))
if task:
tasks[task_id] = task
return tasks
def load_task(task_id: str) -> Optional[dict]:
"""Load a single task by ID."""
tasks_dir = get_tasks_dir()
if not tasks_dir:
return None
task_path = tasks_dir / f"{task_id}.yml"
if not task_path.exists():
return None
return load_yaml(str(task_path))
def save_task(task: dict):
"""Save a task to the tasks directory."""
tasks_dir = get_tasks_dir()
if not tasks_dir:
print("Error: No active workflow")
return
task_id = task.get('id', task.get('task_id'))
if not task_id:
print("Error: Task has no ID")
return
task['updated_at'] = datetime.now().isoformat()
save_yaml(str(tasks_dir / f"{task_id}.yml"), task)
# ============================================================================
# Dependency Resolution
# ============================================================================
def get_task_dependencies(task: dict) -> List[str]:
"""Get the list of task IDs that this task depends on."""
return task.get('dependencies', []) or []
def check_dependencies_met(task_id: str, all_tasks: Dict[str, dict]) -> Tuple[bool, List[str]]:
"""
Check if all dependencies for a task are completed.
Returns:
Tuple of (all_met, unmet_dependency_ids)
"""
task = all_tasks.get(task_id)
if not task:
return False, [f"Task {task_id} not found"]
dependencies = get_task_dependencies(task)
unmet = []
for dep_id in dependencies:
dep_task = all_tasks.get(dep_id)
if not dep_task:
unmet.append(f"{dep_id} (not found)")
elif dep_task.get('status') not in ['completed', 'approved']:
unmet.append(f"{dep_id} (status: {dep_task.get('status', 'unknown')})")
return len(unmet) == 0, unmet
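# Sketch: dependency check against an in-memory task map (hypothetical IDs);
# only 'completed'/'approved' dependencies count as met.
def _example_check_dependencies():
    tasks = {
        "task_a": {"status": "completed"},
        "task_b": {"status": "in_progress"},
        "task_c": {"status": "pending", "dependencies": ["task_a", "task_b"]},
    }
    ok, unmet = check_dependencies_met("task_c", tasks)
    assert ok is False and unmet == ["task_b (status: in_progress)"]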
def get_dependency_graph(all_tasks: Dict[str, dict]) -> Dict[str, Set[str]]:
"""Build a dependency graph for all tasks."""
graph = {}
for task_id, task in all_tasks.items():
deps = get_task_dependencies(task)
graph[task_id] = set(deps)
return graph
def detect_circular_dependencies(all_tasks: Dict[str, dict]) -> List[List[str]]:
"""Detect circular dependencies using DFS."""
graph = get_dependency_graph(all_tasks)
cycles = []
visited = set()
rec_stack = set()
def dfs(node: str, path: List[str]) -> None:
visited.add(node)
rec_stack.add(node)
path.append(node)
for neighbor in graph.get(node, set()):
if neighbor not in visited:
dfs(neighbor, path)
elif neighbor in rec_stack:
# Found cycle; record it and keep exploring
cycle_start = path.index(neighbor)
cycles.append(path[cycle_start:] + [neighbor])
# Always unwind so rec_stack and path stay consistent for later roots
path.pop()
rec_stack.remove(node)
for node in graph:
if node not in visited:
dfs(node, [])
return cycles
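# Sketch: a two-task cycle comes back as a closed path (hypothetical IDs).
def _example_detect_cycle():
    tasks = {"A": {"dependencies": ["B"]}, "B": {"dependencies": ["A"]}}
    assert detect_circular_dependencies(tasks) == [["A", "B", "A"]]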
def get_execution_order(all_tasks: Dict[str, dict]) -> List[str]:
"""Get topologically sorted execution order respecting dependencies."""
graph = get_dependency_graph(all_tasks)
# Kahn's algorithm for topological sort
in_degree = {task_id: 0 for task_id in all_tasks}
for deps in graph.values():
for dep in deps:
if dep in in_degree:
in_degree[dep] += 1
queue = [t for t, d in in_degree.items() if d == 0]
result = []
while queue:
node = queue.pop(0)
result.append(node)
# Releasing node frees one edge into each of its dependencies
for dep in graph.get(node, set()):
if dep in in_degree:
in_degree[dep] -= 1
if in_degree[dep] == 0:
queue.append(dep)
# Reverse since we want dependencies first
return list(reversed(result))
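# Sketch: with A depending on B, the reversed Kahn order puts B first.
def _example_execution_order():
    tasks = {"A": {"dependencies": ["B"]}, "B": {}}
    assert get_execution_order(tasks) == ["B", "A"]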
# ============================================================================
# Parallel Execution Support
# ============================================================================
def get_parallel_candidates(all_tasks: Dict[str, dict]) -> Dict[str, List[str]]:
"""
Get tasks that can be executed in parallel, grouped by agent.
Returns:
Dict mapping agent type to list of task IDs ready for parallel execution
"""
candidates = {'frontend': [], 'backend': [], 'other': []}
for task_id, task in all_tasks.items():
status = task.get('status', 'pending')
# Only consider pending tasks
if status != 'pending':
continue
# Check if dependencies are met
deps_met, _ = check_dependencies_met(task_id, all_tasks)
if not deps_met:
continue
# Group by agent
agent = task.get('agent', 'other')
if agent in candidates:
candidates[agent].append(task_id)
else:
candidates['other'].append(task_id)
return candidates
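# Sketch: independent pending tasks split by agent; a task whose dependency
# is still pending stays out of the candidate set (hypothetical IDs).
def _example_parallel_candidates():
    tasks = {
        "fe_1": {"status": "pending", "agent": "frontend"},
        "be_1": {"status": "pending", "agent": "backend"},
        "be_2": {"status": "pending", "agent": "backend", "dependencies": ["be_1"]},
    }
    assert get_parallel_candidates(tasks) == {
        "frontend": ["fe_1"], "backend": ["be_1"], "other": []}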
def get_active_tasks() -> Dict[str, List[str]]:
"""
Get currently active (in_progress) tasks grouped by agent.
Returns:
Dict mapping agent type to list of active task IDs
"""
all_tasks = load_all_tasks()
active = {'frontend': [], 'backend': [], 'other': []}
for task_id, task in all_tasks.items():
if task.get('status') == 'in_progress':
agent = task.get('agent', 'other')
if agent in active:
active[agent].append(task_id)
else:
active['other'].append(task_id)
return active
def can_start_task(task_id: str, max_per_agent: int = 1) -> Tuple[bool, str]:
"""
Check if a task can be started given current active tasks.
Args:
task_id: Task to check
max_per_agent: Maximum concurrent tasks per agent type
Returns:
Tuple of (can_start, reason)
"""
all_tasks = load_all_tasks()
task = all_tasks.get(task_id)
if not task:
return False, f"Task {task_id} not found"
status = task.get('status', 'pending')
if status != 'pending':
return False, f"Task is not pending (status: {status})"
# Check dependencies
deps_met, unmet = check_dependencies_met(task_id, all_tasks)
if not deps_met:
return False, f"Dependencies not met: {', '.join(unmet)}"
# Check concurrent task limit per agent
agent = task.get('agent', 'other')
active = get_active_tasks()
if len(active.get(agent, [])) >= max_per_agent:
return False, f"Max concurrent {agent} tasks reached ({max_per_agent})"
return True, "Ready to start"
# ============================================================================
# State Transitions
# ============================================================================
def transition_task(task_id: str, new_status: str) -> Tuple[bool, str]:
"""
Transition a task to a new status with validation.
Returns:
Tuple of (success, message)
"""
task = load_task(task_id)
if not task:
return False, f"Task {task_id} not found"
current_status = task.get('status', 'pending')
# Validate transition
valid_next = VALID_TASK_TRANSITIONS.get(current_status, [])
if new_status not in valid_next:
return False, f"Invalid transition: {current_status}{new_status}. Valid: {valid_next}"
# For in_progress, check dependencies
if new_status == 'in_progress':
all_tasks = load_all_tasks()
deps_met, unmet = check_dependencies_met(task_id, all_tasks)
if not deps_met:
# Block instead
task['status'] = 'blocked'
task['blocked_by'] = unmet
task['blocked_at'] = datetime.now().isoformat()
save_task(task)
return False, f"Dependencies not met, task blocked: {', '.join(unmet)}"
# Perform transition
task['status'] = new_status
task[f'{new_status}_at'] = datetime.now().isoformat()
# Clear blocked info if unblocking
if current_status == 'blocked' and new_status == 'pending':
task.pop('blocked_by', None)
task.pop('blocked_at', None)
save_task(task)
return True, f"Task {task_id}: {current_status}{new_status}"
def update_blocked_tasks():
"""Check and unblock tasks whose dependencies are now met."""
all_tasks = load_all_tasks()
unblocked = []
for task_id, task in all_tasks.items():
if task.get('status') != 'blocked':
continue
deps_met, _ = check_dependencies_met(task_id, all_tasks)
if deps_met:
success, msg = transition_task(task_id, 'pending')
if success:
unblocked.append(task_id)
return unblocked
# ============================================================================
# Status Report
# ============================================================================
def get_status_summary() -> dict:
"""Get summary of task statuses."""
all_tasks = load_all_tasks()
summary = {
'total': len(all_tasks),
'by_status': {status: 0 for status in TASK_STATES},
'by_agent': {},
'blocked_details': [],
'ready_for_parallel': get_parallel_candidates(all_tasks)
}
for task_id, task in all_tasks.items():
status = task.get('status', 'pending')
agent = task.get('agent', 'other')
summary['by_status'][status] = summary['by_status'].get(status, 0) + 1
if agent not in summary['by_agent']:
summary['by_agent'][agent] = {'total': 0, 'by_status': {}}
summary['by_agent'][agent]['total'] += 1
summary['by_agent'][agent]['by_status'][status] = \
summary['by_agent'][agent]['by_status'].get(status, 0) + 1
if status == 'blocked':
summary['blocked_details'].append({
'task_id': task_id,
'blocked_by': task.get('blocked_by', []),
'blocked_at': task.get('blocked_at')
})
return summary
def show_status():
"""Display task status summary."""
summary = get_status_summary()
print()
print("" + "" * 60 + "")
print("" + "TASK STATE MANAGER STATUS".center(60) + "")
print("" + "" * 60 + "")
print("" + f" Total Tasks: {summary['total']}".ljust(60) + "")
print("" + "" * 60 + "")
print("" + " BY STATUS".ljust(60) + "")
status_icons = {
'pending': '', 'in_progress': '🔄', 'review': '🔍',
'approved': '', 'completed': '', 'blocked': '🚫'
}
for status, count in summary['by_status'].items():
icon = status_icons.get(status, '')
print("" + f" {icon} {status}: {count}".ljust(60) + "")
print("" + "" * 60 + "")
print("" + " BY AGENT".ljust(60) + "")
for agent, data in summary['by_agent'].items():
print("" + f" {agent}: {data['total']} tasks".ljust(60) + "")
for status, count in data['by_status'].items():
if count > 0:
print("" + f" └─ {status}: {count}".ljust(60) + "")
# Show parallel candidates
parallel = summary['ready_for_parallel']
has_parallel = any(len(v) > 0 for v in parallel.values())
if has_parallel:
print("" + "" * 60 + "")
print("" + " 🔀 READY FOR PARALLEL EXECUTION".ljust(60) + "")
for agent, tasks in parallel.items():
if tasks:
print("" + f" {agent}: {', '.join(tasks[:3])}".ljust(60) + "")
if len(tasks) > 3:
print("" + f" (+{len(tasks) - 3} more)".ljust(60) + "")
# Show blocked tasks
if summary['blocked_details']:
print("" + "" * 60 + "")
print("" + " 🚫 BLOCKED TASKS".ljust(60) + "")
for blocked in summary['blocked_details'][:5]:
deps = ', '.join(blocked['blocked_by'][:2])
if len(blocked['blocked_by']) > 2:
deps += f" (+{len(blocked['blocked_by']) - 2})"
print("" + f" {blocked['task_id']}".ljust(60) + "")
print("" + f" Blocked by: {deps}".ljust(60) + "")
print("" + "" * 60 + "")
# ============================================================================
# CLI Interface
# ============================================================================
def main():
parser = argparse.ArgumentParser(description="Task state management for parallel execution")
subparsers = parser.add_subparsers(dest='command', help='Commands')
# status command
subparsers.add_parser('status', help='Show task status summary')
# transition command
trans_parser = subparsers.add_parser('transition', help='Transition task status')
trans_parser.add_argument('task_id', help='Task ID')
trans_parser.add_argument('status', choices=TASK_STATES, help='New status')
# can-start command
can_start_parser = subparsers.add_parser('can-start', help='Check if task can start')
can_start_parser.add_argument('task_id', help='Task ID')
can_start_parser.add_argument('--max-per-agent', type=int, default=1,
help='Max concurrent tasks per agent')
# parallel command
subparsers.add_parser('parallel', help='Show tasks ready for parallel execution')
# deps command
deps_parser = subparsers.add_parser('deps', help='Show task dependencies')
deps_parser.add_argument('task_id', nargs='?', help='Task ID (optional)')
# check-deps command
check_deps_parser = subparsers.add_parser('check-deps', help='Check if dependencies are met')
check_deps_parser.add_argument('task_id', help='Task ID')
# unblock command
subparsers.add_parser('unblock', help='Update blocked tasks whose deps are now met')
# order command
subparsers.add_parser('order', help='Show execution order respecting dependencies')
# cycles command
subparsers.add_parser('cycles', help='Detect circular dependencies')
args = parser.parse_args()
if args.command == 'status':
show_status()
elif args.command == 'transition':
success, msg = transition_task(args.task_id, args.status)
print(msg)
if not success:
sys.exit(1)
elif args.command == 'can-start':
can_start, reason = can_start_task(args.task_id, args.max_per_agent)
print(f"{'✅ Yes' if can_start else '❌ No'}: {reason}")
sys.exit(0 if can_start else 1)
elif args.command == 'parallel':
all_tasks = load_all_tasks()
candidates = get_parallel_candidates(all_tasks)
print("\n🔀 Tasks Ready for Parallel Execution:\n")
for agent, tasks in candidates.items():
if tasks:
print(f" {agent}:")
for task_id in tasks:
task = all_tasks.get(task_id, {})
print(f" - {task_id}: {task.get('title', 'No title')}")
if not any(candidates.values()):
print(" No tasks ready for parallel execution")
elif args.command == 'deps':
all_tasks = load_all_tasks()
if args.task_id:
task = all_tasks.get(args.task_id)
if task:
deps = get_task_dependencies(task)
print(f"\n{args.task_id} depends on:")
if deps:
for dep_id in deps:
dep = all_tasks.get(dep_id, {})
status = dep.get('status', 'unknown')
print(f" - {dep_id} ({status})")
else:
print(" (no dependencies)")
else:
print(f"Task {args.task_id} not found")
else:
# Show all dependencies
graph = get_dependency_graph(all_tasks)
print("\nDependency Graph:\n")
for task_id, deps in graph.items():
if deps:
print(f" {task_id}{', '.join(deps)}")
elif args.command == 'check-deps':
all_tasks = load_all_tasks()
deps_met, unmet = check_dependencies_met(args.task_id, all_tasks)
if deps_met:
print(f"✅ All dependencies met for {args.task_id}")
else:
print(f"❌ Unmet dependencies for {args.task_id}:")
for dep in unmet:
print(f" - {dep}")
sys.exit(0 if deps_met else 1)
elif args.command == 'unblock':
unblocked = update_blocked_tasks()
if unblocked:
print(f"✅ Unblocked {len(unblocked)} tasks:")
for task_id in unblocked:
print(f" - {task_id}")
else:
print("No tasks to unblock")
elif args.command == 'order':
all_tasks = load_all_tasks()
# Check for cycles first
cycles = detect_circular_dependencies(all_tasks)
if cycles:
print("⚠️ Cannot determine order - circular dependencies detected!")
for cycle in cycles:
print(f" Cycle: {''.join(cycle)}")
sys.exit(1)
order = get_execution_order(all_tasks)
print("\n📋 Execution Order (respecting dependencies):\n")
for i, task_id in enumerate(order, 1):
task = all_tasks.get(task_id, {})
status = task.get('status', 'pending')
agent = task.get('agent', '?')
print(f" {i}. [{agent}] {task_id} ({status})")
elif args.command == 'cycles':
all_tasks = load_all_tasks()
cycles = detect_circular_dependencies(all_tasks)
if cycles:
print("⚠️ Circular dependencies detected:\n")
for cycle in cycles:
print(f" {''.join(cycle)}")
else:
print("✅ No circular dependencies detected")
else:
parser.print_help()
if __name__ == "__main__":
main()

@@ -0,0 +1,75 @@
#!/usr/bin/env python3
"""Transition project between phases."""
import argparse
import json
import os
from datetime import datetime
VALID_PHASES = ["DESIGN_PHASE", "DESIGN_REVIEW", "IMPLEMENTATION_PHASE"]
VALID_TRANSITIONS = {
"DESIGN_PHASE": ["DESIGN_REVIEW"],
"DESIGN_REVIEW": ["DESIGN_PHASE", "IMPLEMENTATION_PHASE"],
"IMPLEMENTATION_PHASE": ["DESIGN_PHASE"]
}
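# Sketch: the table above forbids skipping review. DESIGN_PHASE can only move
# to DESIGN_REVIEW, which in turn gates IMPLEMENTATION_PHASE.
def _example_gate():
    assert "IMPLEMENTATION_PHASE" not in VALID_TRANSITIONS["DESIGN_PHASE"]
    assert "IMPLEMENTATION_PHASE" in VALID_TRANSITIONS["DESIGN_REVIEW"]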
def load_manifest(manifest_path: str) -> dict:
"""Load manifest."""
with open(manifest_path) as f:
return json.load(f)
def save_manifest(manifest_path: str, manifest: dict):
"""Save manifest."""
with open(manifest_path, "w") as f:
json.dump(manifest, f, indent=2)
def main():
parser = argparse.ArgumentParser(description="Transition project phase")
parser.add_argument("--to", required=True, choices=VALID_PHASES, help="Target phase")
parser.add_argument("--manifest", default="project_manifest.json", help="Manifest path")
args = parser.parse_args()
manifest_path = args.manifest
if not os.path.isabs(manifest_path):
manifest_path = os.path.join(os.getcwd(), manifest_path)
if not os.path.exists(manifest_path):
print(f"Error: Manifest not found at {manifest_path}")
return 1
manifest = load_manifest(manifest_path)
current_phase = manifest["state"]["current_phase"]
target_phase = args.to
if target_phase not in VALID_TRANSITIONS.get(current_phase, []):
print(f"Error: Cannot transition from {current_phase} to {target_phase}")
print(f"Valid transitions: {VALID_TRANSITIONS.get(current_phase, [])}")
return 1
# Update phase
manifest["state"]["current_phase"] = target_phase
# Add to history (create the list if it's missing)
manifest["state"].setdefault("revision_history", []).append({
"action": "PHASE_TRANSITION",
"timestamp": datetime.now().isoformat(),
"details": f"Transitioned from {current_phase} to {target_phase}"
})
# If transitioning to implementation, mark entities as approved
if target_phase == "IMPLEMENTATION_PHASE":
for entity_type in ["pages", "components", "api_endpoints", "database_tables"]:
for entity in manifest["entities"].get(entity_type, []):
if entity.get("status") == "DEFINED":
entity["status"] = "APPROVED"
save_manifest(manifest_path, manifest)
print(f"Transitioned to {target_phase}")
return 0
if __name__ == "__main__":
exit(main())

@@ -0,0 +1,536 @@
#!/usr/bin/env python3
"""
API Contract Validator for guardrail workflow.
Validates that frontend API calls match backend endpoint definitions:
- Endpoints exist
- HTTP methods match
- Request/response structures align
Usage:
python3 validate_api_contract.py --manifest project_manifest.json --project-dir .
"""
import argparse
import json
import os
import re
import sys
from pathlib import Path
from typing import NamedTuple
class APICall(NamedTuple):
"""Frontend API call."""
file_path: str
line_number: int
endpoint: str
method: str
has_body: bool
raw_line: str
class APIEndpoint(NamedTuple):
"""Backend API endpoint."""
file_path: str
endpoint: str
method: str
has_request_body: bool
response_type: str
class ContractIssue(NamedTuple):
"""API contract violation."""
severity: str # ERROR, WARNING
category: str
message: str
file_path: str
line_number: int | None
suggestion: str
def load_manifest(manifest_path: str) -> dict | None:
"""Load manifest if exists."""
if not os.path.exists(manifest_path):
return None
try:
with open(manifest_path) as f:
return json.load(f)
except (json.JSONDecodeError, IOError):
return None
def find_frontend_files(project_dir: str) -> list[str]:
"""Find frontend source files."""
frontend_patterns = [
'app/**/*.tsx', 'app/**/*.ts',
'src/**/*.tsx', 'src/**/*.ts',
'pages/**/*.tsx', 'pages/**/*.ts',
'components/**/*.tsx', 'components/**/*.ts',
'hooks/**/*.ts', 'hooks/**/*.tsx',
'lib/**/*.ts', 'lib/**/*.tsx',
'services/**/*.ts', 'services/**/*.tsx',
]
# Exclude patterns ('/api/' routes are filtered separately below)
exclude_patterns = ['node_modules', '.next', 'dist', 'build']
files = []
for pattern in frontend_patterns:
# Only each pattern's top-level directory is walked; rglob('*.ts*') matches the rest
base_dir = pattern.split('/')[0]
search_dir = Path(project_dir) / base_dir
if search_dir.exists():
for file_path in search_dir.rglob('*.ts*'):
path_str = str(file_path)
if not any(ex in path_str for ex in exclude_patterns):
# Skip API route files
if '/api/' not in path_str:
files.append(path_str)
return list(set(files))
def find_backend_files(project_dir: str) -> list[str]:
"""Find backend API route files."""
backend_patterns = [
'app/api/**/*.ts', 'app/api/**/*.tsx',
'pages/api/**/*.ts', 'pages/api/**/*.tsx',
'api/**/*.ts',
'src/api/**/*.ts',
'server/**/*.ts',
'routes/**/*.ts',
]
files = []
for pattern in backend_patterns:
base_parts = pattern.split('/')
search_dir = Path(project_dir)
for part in base_parts[:-1]:
if '*' not in part:
search_dir = search_dir / part
if search_dir.exists():
for file_path in search_dir.rglob('*.ts*'):
path_str = str(file_path)
if 'node_modules' not in path_str:
files.append(path_str)
return list(set(files))
def extract_frontend_api_calls(file_path: str) -> list[APICall]:
"""Extract API calls from frontend file."""
calls = []
try:
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
lines = content.split('\n')
except (IOError, UnicodeDecodeError):
return []
# Patterns for API calls
patterns = [
# fetch('/api/...', { method: 'POST', body: ... })
(r'''fetch\s*\(\s*['"](/api/[^'"]+)['"]''', 'fetch'),
# axios.get('/api/...'), axios.post('/api/...', data)
(r'''axios\.(get|post|put|patch|delete)\s*\(\s*['"](/api/[^'"]+)['"]''', 'axios'),
# api.get('/users'), api.post('/users', data)
(r'''api\.(get|post|put|patch|delete)\s*\(\s*['"]([^'"]+)['"]''', 'api_client'),
# useSWR('/api/...'), useSWR(() => '/api/...')
(r'''useSWR\s*\(\s*['"](/api/[^'"]+)['"]''', 'swr'),
# useQuery(['key'], () => fetch('/api/...'))
(r'''fetch\s*\(\s*[`'"](/api/[^`'"]+)[`'"]''', 'fetch_template'),
]
for line_num, line in enumerate(lines, 1):
for pattern, call_type in patterns:
matches = re.finditer(pattern, line, re.IGNORECASE)
for match in matches:
groups = match.groups()
if call_type == 'fetch' or call_type == 'swr' or call_type == 'fetch_template':
endpoint = groups[0]
# Try to detect method from options
method = 'GET'
if 'method' in line.lower():
method_match = re.search(r'''method:\s*['"](\w+)['"]''', line, re.IGNORECASE)
if method_match:
method = method_match.group(1).upper()
has_body = 'body:' in line.lower() or 'body=' in line.lower()
elif call_type == 'axios' or call_type == 'api_client':
method = groups[0].upper()
endpoint = groups[1]
# POST, PUT, PATCH typically have body
has_body = method in ['POST', 'PUT', 'PATCH']
else:
continue
# Normalize endpoint
if not endpoint.startswith('/api/'):
endpoint = f'/api/{endpoint.lstrip("/")}'
calls.append(APICall(
file_path=file_path,
line_number=line_num,
endpoint=endpoint,
method=method,
has_body=has_body,
raw_line=line.strip()
))
return calls
def extract_backend_endpoints(file_path: str) -> list[APIEndpoint]:
"""Extract API endpoints from backend file."""
endpoints = []
try:
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
except (IOError, UnicodeDecodeError):
return []
# Determine endpoint from file path (Next.js App Router / Pages Router)
rel_path = file_path
if '/app/api/' in file_path:
# App Router: app/api/users/route.ts -> /api/users
api_path = re.search(r'/app/api/(.+?)/(route|page)\.(ts|tsx|js|jsx)', file_path)
if api_path:
endpoint = f'/api/{api_path.group(1)}'
else:
api_path = re.search(r'/app/api/(.+?)\.(ts|tsx|js|jsx)', file_path)
if api_path:
endpoint = f'/api/{api_path.group(1)}'
else:
endpoint = '/api/unknown'
elif '/pages/api/' in file_path:
# Pages Router: pages/api/users.ts -> /api/users
api_path = re.search(r'/pages/api/(.+?)\.(ts|tsx|js|jsx)', file_path)
if api_path:
endpoint = f'/api/{api_path.group(1)}'
else:
endpoint = '/api/unknown'
else:
endpoint = '/api/unknown'
# Clean up dynamic segments: [id] -> :id
endpoint = re.sub(r'\[(\w+)\]', r':\1', endpoint)
# Detect HTTP methods
# Next.js App Router exports: GET, POST, PUT, DELETE, PATCH
app_router_methods = re.findall(
r'export\s+(?:async\s+)?function\s+(GET|POST|PUT|DELETE|PATCH|HEAD|OPTIONS)',
content
)
# Pages Router: req.method checks
pages_router_methods = re.findall(
r'''req\.method\s*===?\s*['"](\w+)['"]''',
content
)
# Express-style: router.get, router.post, app.get, app.post
express_methods = re.findall(
r'''(?:router|app)\.(get|post|put|patch|delete)\s*\(''',
content,
re.IGNORECASE
)
methods = set()
methods.update(m.upper() for m in app_router_methods)
methods.update(m.upper() for m in pages_router_methods)
methods.update(m.upper() for m in express_methods)
# Default to GET if no methods detected
if not methods:
methods = {'GET'}
# Detect request body handling
has_body_patterns = [
r'request\.json\(\)',
r'req\.body',
r'await\s+request\.json',
r'JSON\.parse',
r'body\s*:',
]
has_request_body = any(re.search(p, content) for p in has_body_patterns)
# Detect response type
response_type = 'json' # default
if 'NextResponse.json' in content or 'res.json' in content:
response_type = 'json'
elif 'new Response(' in content:
response_type = 'response'
elif 'res.send' in content:
response_type = 'text'
for method in methods:
endpoints.append(APIEndpoint(
file_path=file_path,
endpoint=endpoint,
method=method,
has_request_body=has_request_body,
response_type=response_type
))
return endpoints
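# Illustrative (hypothetical file): 'app/api/users/[id]/route.ts' exporting
# GET and PATCH yields two APIEndpoint records for '/api/users/:id'.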
def normalize_endpoint(endpoint: str) -> str:
"""Normalize endpoint for comparison."""
# Remove query params
endpoint = endpoint.split('?')[0]
# Normalize dynamic segments
endpoint = re.sub(r':\w+', ':param', endpoint)
endpoint = re.sub(r'\$\{[^}]+\}', ':param', endpoint)
# Remove trailing slash
endpoint = endpoint.rstrip('/')
return endpoint.lower()
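# Illustrative examples (hypothetical inputs) of the normalization above:
#   normalize_endpoint('/api/users/${userId}?tab=posts') -> '/api/users/:param'
#   normalize_endpoint('/API/Users/:id/')                -> '/api/users/:param'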
def match_endpoints(call_endpoint: str, api_endpoint: str) -> bool:
"""Check if frontend call matches backend endpoint."""
norm_call = normalize_endpoint(call_endpoint)
norm_api = normalize_endpoint(api_endpoint)
# Exact match
if norm_call == norm_api:
return True
# Pattern match with dynamic segments
api_pattern = re.sub(r':param', r'[^/]+', norm_api)
if re.match(f'^{api_pattern}$', norm_call):
return True
return False
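# Illustrative (hypothetical endpoints): a concrete call matches a
# parameterized route, but extra path segments do not:
#   match_endpoints('/api/users/123', '/api/users/:id')       -> True
#   match_endpoints('/api/users/123/posts', '/api/users/:id') -> False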
def validate_api_contract(
project_dir: str,
manifest: dict | None = None
) -> tuple[list[ContractIssue], dict]:
"""Validate API contract between frontend and backend."""
issues = []
stats = {
'frontend_calls': 0,
'backend_endpoints': 0,
'matched': 0,
'unmatched_calls': 0,
'method_mismatches': 0,
'body_mismatches': 0,
}
# Find files
frontend_files = find_frontend_files(project_dir)
backend_files = find_backend_files(project_dir)
# Extract API calls and endpoints
all_calls: list[APICall] = []
all_endpoints: list[APIEndpoint] = []
for file in frontend_files:
all_calls.extend(extract_frontend_api_calls(file))
for file in backend_files:
all_endpoints.extend(extract_backend_endpoints(file))
stats['frontend_calls'] = len(all_calls)
stats['backend_endpoints'] = len(all_endpoints)
# Validate each frontend call
for call in all_calls:
matched = False
for ep in all_endpoints:
if match_endpoints(call.endpoint, ep.endpoint):
matched = True
# Check method match
if call.method != ep.method:
# Check whether any matching endpoint supports this method
endpoint_methods = [e.method for e in all_endpoints
if match_endpoints(call.endpoint, e.endpoint)]
if call.method not in endpoint_methods:
issues.append(ContractIssue(
severity='ERROR',
category='METHOD_MISMATCH',
message=f"Frontend calls {call.method} {call.endpoint} but backend only supports {', '.join(sorted(set(endpoint_methods)))}",
file_path=call.file_path,
line_number=call.line_number,
suggestion=f"Change method to one of: {', '.join(endpoint_methods)}"
))
stats['method_mismatches'] += 1
break  # report the mismatch once, not once per candidate endpoint
continue  # another endpoint may support this method; keep searching
# Check body requirements
if call.has_body and not ep.has_request_body:
issues.append(ContractIssue(
severity='WARNING',
category='BODY_MISMATCH',
message=f"Frontend sends body to {call.endpoint} but backend may not process it",
file_path=call.file_path,
line_number=call.line_number,
suggestion="Verify backend handles request body or remove body from frontend call"
))
stats['body_mismatches'] += 1
if not call.has_body and ep.has_request_body and ep.method in ['POST', 'PUT', 'PATCH']:
issues.append(ContractIssue(
severity='WARNING',
category='MISSING_BODY',
message=f"Backend expects body for {call.method} {call.endpoint} but frontend may not send it",
file_path=call.file_path,
line_number=call.line_number,
suggestion="Add request body to frontend call"
))
stats['matched'] += 1
break
if not matched:
issues.append(ContractIssue(
severity='ERROR',
category='ENDPOINT_NOT_FOUND',
message=f"Frontend calls {call.method} {call.endpoint} but no matching backend endpoint found",
file_path=call.file_path,
line_number=call.line_number,
suggestion=f"Create backend endpoint at {call.endpoint} or fix the frontend URL"
))
stats['unmatched_calls'] += 1
# Check for unused backend endpoints
called_endpoints = set()
for call in all_calls:
called_endpoints.add((normalize_endpoint(call.endpoint), call.method))
for ep in all_endpoints:
key = (normalize_endpoint(ep.endpoint), ep.method)
if key not in called_endpoints:
# Check if any call matches with different method
matching_calls = [c for c in all_calls
if match_endpoints(c.endpoint, ep.endpoint)]
if not matching_calls:
issues.append(ContractIssue(
severity='WARNING',
category='UNUSED_ENDPOINT',
message=f"Backend endpoint {ep.method} {ep.endpoint} is not called by frontend",
file_path=ep.file_path,
line_number=None,
suggestion="Verify endpoint is needed or remove unused code"
))
return issues, stats
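# Minimal usage sketch (the project path is hypothetical):
#   issues, stats = validate_api_contract('/path/to/project')
#   errors = [i for i in issues if i.severity == 'ERROR']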
def format_report(issues: list[ContractIssue], stats: dict) -> str:
"""Format validation report."""
lines = []
lines.append("")
lines.append("=" * 70)
lines.append(" API CONTRACT VALIDATION REPORT")
lines.append("=" * 70)
lines.append("")
# Stats
lines.append("SUMMARY")
lines.append("-" * 70)
lines.append(f" Frontend API calls found: {stats['frontend_calls']}")
lines.append(f" Backend endpoints found: {stats['backend_endpoints']}")
lines.append(f" Matched calls: {stats['matched']}")
lines.append(f" Unmatched calls: {stats['unmatched_calls']}")
lines.append(f" Method mismatches: {stats['method_mismatches']}")
lines.append(f" Body mismatches: {stats['body_mismatches']}")
lines.append("")
# Issues by severity
errors = [i for i in issues if i.severity == 'ERROR']
warnings = [i for i in issues if i.severity == 'WARNING']
if errors:
lines.append("ERRORS (must fix)")
lines.append("-" * 70)
for i, issue in enumerate(errors, 1):
lines.append(f" {i}. [{issue.category}] {issue.message}")
lines.append(f" File: {issue.file_path}:{issue.line_number or '?'}")
lines.append(f" Fix: {issue.suggestion}")
lines.append("")
if warnings:
lines.append("WARNINGS (review)")
lines.append("-" * 70)
for i, issue in enumerate(warnings, 1):
lines.append(f" {i}. [{issue.category}] {issue.message}")
lines.append(f" File: {issue.file_path}:{issue.line_number or '?'}")
lines.append(f" Fix: {issue.suggestion}")
lines.append("")
# Result
lines.append("=" * 70)
if not errors:
lines.append(" RESULT: PASS (no errors)")
else:
lines.append(f" RESULT: FAIL ({len(errors)} errors)")
lines.append("=" * 70)
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Validate API contract")
parser.add_argument("--manifest", help="Path to project_manifest.json")
parser.add_argument("--project-dir", default=".", help="Project directory")
parser.add_argument("--json", action="store_true", help="Output as JSON")
parser.add_argument("--strict", action="store_true", help="Fail on warnings too")
args = parser.parse_args()
manifest = None
if args.manifest:
manifest = load_manifest(args.manifest)
issues, stats = validate_api_contract(args.project_dir, manifest)
if args.json:
output = {
'stats': stats,
'issues': [
{
'severity': i.severity,
'category': i.category,
'message': i.message,
'file_path': i.file_path,
'line_number': i.line_number,
'suggestion': i.suggestion
}
for i in issues
],
'result': 'PASS' if not any(i.severity == 'ERROR' for i in issues) else 'FAIL'
}
print(json.dumps(output, indent=2))
else:
print(format_report(issues, stats))
# Exit code
errors = [i for i in issues if i.severity == 'ERROR']
warnings = [i for i in issues if i.severity == 'WARNING']
if errors:
return 1
if args.strict and warnings:
return 1
return 0
if __name__ == "__main__":
sys.exit(main())

@@ -0,0 +1,271 @@
#!/usr/bin/env python3
"""
Bash command validator for guardrail enforcement.
Blocks shell commands that could write files outside the workflow.
Exit codes:
0 = Command allowed
1 = Command blocked (with message)
"""
import argparse
import re
import sys
from pathlib import Path
# Patterns that indicate file writing
WRITE_PATTERNS = [
# Redirections
r'\s*>\s*["\']?([^"\'&|;\s]+)', # > file
r'\s*>>\s*["\']?([^"\'&|;\s]+)', # >> file
r'\s*2>\s*["\']?([^"\'&|;\s]+)', # 2> file
r'\s*&>\s*["\']?([^"\'&|;\s]+)', # &> file
# tee command
r'\btee\s+(?:-a\s+)?["\']?([^"\'&|;\s]+)',
# Direct file creation
r'\btouch\s+["\']?([^"\'&|;\s]+)',
# Copy/Move operations
r'\bcp\s+.*\s+["\']?([^"\'&|;\s]+)',
r'\bmv\s+.*\s+["\']?([^"\'&|;\s]+)',
# In-place editing
r'\bsed\s+-i',
r'\bawk\s+-i\s+inplace',
r'\bperl\s+-i',
# Here documents
r'<<\s*["\']?EOF',
r'<<\s*["\']?END',
r"cat\s*<<",
# mkdir (could be prep for writing)
r'\bmkdir\s+(?:-p\s+)?["\']?([^"\'&|;\s]+)',
# rm (destructive)
r'\brm\s+(?:-rf?\s+)?["\']?([^"\'&|;\s]+)',
# chmod/chown
r'\bchmod\s+',
r'\bchown\s+',
# curl/wget writing to file
r'\bcurl\s+.*-o\s+["\']?([^"\'&|;\s]+)',
r'\bwget\s+.*-O\s+["\']?([^"\'&|;\s]+)',
# git operations that modify files
r'\bgit\s+checkout\s+',
r'\bgit\s+reset\s+--hard',
r'\bgit\s+clean\s+',
r'\bgit\s+stash\s+pop',
# npm/yarn install (modifies node_modules)
r'\bnpm\s+install\b',
r'\byarn\s+add\b',
r'\bpnpm\s+add\b',
# dd command
r'\bdd\s+',
# patch command
r'\bpatch\s+',
# ln (symlinks)
r'\bln\s+',
]
# Commands that are always allowed
ALWAYS_ALLOWED = [
r'^ls\b',
r'^cat\s+[^>]+$', # cat without redirect
r'^head\b',
r'^tail\b',
r'^grep\b',
r'^find\b',
r'^wc\b',
r'^echo\s+[^>]+$', # echo without redirect
r'^pwd$',
r'^cd\b',
r'^which\b',
r'^type\b',
r'^file\b',
r'^stat\b',
r'^du\b',
r'^df\b',
r'^ps\b',
r'^env$',
r'^printenv',
r'^date$',
r'^whoami$',
r'^hostname$',
r'^uname\b',
r'^git\s+status',
r'^git\s+log',
r'^git\s+diff',
r'^git\s+branch',
r'^git\s+show',
r'^git\s+remote',
r'^npm\s+run\b',
r'^npm\s+test\b',
r'^npm\s+start\b',
r'^npx\b',
r'^node\b',
r'^python3?\b(?!.*>)', # python without redirect
r'^pip\s+list',
r'^pip\s+show',
r'^tree\b',
r'^jq\b',
r'^curl\s+(?!.*-o)', # curl without -o
r'^wget\s+(?!.*-O)', # wget without -O
]
# Paths that are always allowed for writing
ALLOWED_PATHS = [
'.workflow/',
'.claude/',
'skills/',
'project_manifest.json',
'/tmp/',
'/var/tmp/',
'node_modules/', # npm install
'.git/', # git operations
]
def is_always_allowed(command: str) -> bool:
"""Check if command matches always-allowed patterns."""
command = command.strip()
for pattern in ALWAYS_ALLOWED:
if re.match(pattern, command, re.IGNORECASE):
return True
return False
def extract_target_paths(command: str) -> list:
"""Extract potential file paths being written to."""
paths = []
for pattern in WRITE_PATTERNS:
matches = re.findall(pattern, command)
for match in matches:
if isinstance(match, tuple):
paths.extend(match)
elif match:
paths.append(match)
return [p for p in paths if p and not p.startswith('-')]
def is_path_allowed(path: str) -> bool:
"""Check if path is in allowed list."""
# str.lstrip('./') strips characters, not a prefix, and would mangle
# dot-directories ('.workflow/x' -> 'workflow/x'), so strip './' explicitly.
if path.startswith('./'):
path = path[2:]
for allowed in ALLOWED_PATHS:
if path.startswith(allowed) or path == allowed.rstrip('/'):
return True
return False
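# Illustrative (hypothetical paths):
#   is_path_allowed('.workflow/state.yml') -> True   (allowed prefix)
#   is_path_allowed('src/index.ts')        -> False  (outside allowed list)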
def has_write_operation(command: str) -> tuple[bool, list]:
"""
Check if command contains write operations.
Returns (has_write, target_paths)
"""
for pattern in WRITE_PATTERNS:
if re.search(pattern, command, re.IGNORECASE):
paths = extract_target_paths(command)
return True, paths
return False, []
def validate_bash_command(command: str) -> tuple[bool, str]:
"""
Validate a bash command for guardrail compliance.
Returns (allowed, message)
"""
if not command or not command.strip():
return True, "✓ GUARDRAIL: Empty command"
command = command.strip()
# Check if always allowed
if is_always_allowed(command):
return True, "✓ GUARDRAIL: Safe command allowed"
# Check for write operations
has_write, target_paths = has_write_operation(command)
if not has_write:
return True, "✓ GUARDRAIL: No write operations detected"
# Check if all target paths are allowed
blocked_paths = []
for path in target_paths:
if not is_path_allowed(path):
blocked_paths.append(path)
if not blocked_paths:
return True, "✓ GUARDRAIL: Write to allowed paths"
# Block the command
suggested_feature = "modify files via bash"
error_msg = f"""
GUARDRAIL VIOLATION: Bash command blocked
Command: {command[:100]}{'...' if len(command) > 100 else ''}
Detected write operation to unauthorized paths:
{chr(10).join(f' - {p}' for p in blocked_paths)}
👉 REQUIRED ACTION: Use the workflow instead of bash
Run this command:
/workflow:spawn {suggested_feature}
Then use Write/Edit tools (not bash) to modify files.
Bash is for reading/running, not writing files.
Allowed bash write targets:
- .workflow/*, .claude/*, skills/*
- project_manifest.json
- /tmp/*, node_modules/
"""
return False, error_msg
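# Illustrative calls (hypothetical commands):
#   validate_bash_command('git status')           -> allowed outright
#   validate_bash_command('echo hi > src/app.ts') -> blocked (write outside
#                                                    allowed paths)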
def main():
parser = argparse.ArgumentParser(description="Validate bash command for guardrails")
parser.add_argument("--command", help="Bash command to validate")
args = parser.parse_args()
command = args.command or ""
# Also try reading from stdin if no command provided
if not command and not sys.stdin.isatty():
command = sys.stdin.read().strip()
allowed, message = validate_bash_command(command)
if allowed:
print(message)
return 0
else:
print(message, file=sys.stderr)
return 1
if __name__ == "__main__":
sys.exit(main())

@@ -0,0 +1,881 @@
#!/usr/bin/env python3
"""
Design Document Validator and Dependency Graph Generator
Validates design_document.yml and generates:
1. dependency_graph.yml - Layered execution order
2. Context snapshots for each task
3. Tasks with full context
"""
import argparse
import json
import os
import sys
from collections import defaultdict
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple
# Try to import yaml
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
print("Warning: PyYAML not installed. Using basic parser.", file=sys.stderr)
# ============================================================================
# YAML Helpers
# ============================================================================
def load_yaml(filepath: str) -> dict:
"""Load YAML or JSON file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
# Handle JSON files directly (no PyYAML needed)
if filepath.endswith('.json'):
try:
return json.loads(content) or {}
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
return {}
# Handle YAML files
if HAS_YAML:
return yaml.safe_load(content) or {}
# Basic fallback - try JSON parser for YAML (works for simple cases)
print(f"Warning: PyYAML not installed. Trying JSON parser for {filepath}", file=sys.stderr)
try:
return json.loads(content) or {}
except json.JSONDecodeError:
print(f"Error: Could not parse {filepath}. Install PyYAML: pip install pyyaml", file=sys.stderr)
return {}
def save_yaml(filepath: str, data: dict):
"""Save data to YAML file."""
os.makedirs(os.path.dirname(filepath), exist_ok=True)
if HAS_YAML:
with open(filepath, 'w') as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
else:
# Simple JSON fallback
with open(filepath, 'w') as f:
json.dump(data, f, indent=2)
# ============================================================================
# Validation Classes
# ============================================================================
class ValidationError:
"""Represents a validation error."""
def __init__(self, category: str, entity_id: str, message: str, severity: str = "error"):
self.category = category
self.entity_id = entity_id
self.message = message
self.severity = severity # error, warning
def __str__(self):
icon = "" if self.severity == "error" else "⚠️"
return f"{icon} [{self.category}] {self.entity_id}: {self.message}"
class DesignValidator:
"""Validates design document structure and relationships."""
def __init__(self, design_doc: dict):
self.design = design_doc
self.errors: List[ValidationError] = []
self.warnings: List[ValidationError] = []
# Collected entity IDs
self.model_ids: Set[str] = set()
self.api_ids: Set[str] = set()
self.page_ids: Set[str] = set()
self.component_ids: Set[str] = set()
self.all_ids: Set[str] = set()
def validate(self) -> bool:
"""Run all validations. Returns True if no errors."""
self._collect_ids()
self._validate_models()
self._validate_apis()
self._validate_pages()
self._validate_components()
self._validate_no_circular_deps()
return len(self.errors) == 0
def _collect_ids(self):
"""Collect all entity IDs."""
for model in self.design.get('data_models', []):
self.model_ids.add(model.get('id', ''))
for api in self.design.get('api_endpoints', []):
self.api_ids.add(api.get('id', ''))
for page in self.design.get('pages', []):
self.page_ids.add(page.get('id', ''))
for comp in self.design.get('components', []):
self.component_ids.add(comp.get('id', ''))
self.all_ids = self.model_ids | self.api_ids | self.page_ids | self.component_ids
def _validate_models(self):
"""Validate data models."""
for model in self.design.get('data_models', []):
model_id = model.get('id', 'unknown')
# Check required fields
if not model.get('id'):
self.errors.append(ValidationError('model', model_id, "Missing 'id' field"))
if not model.get('name'):
self.errors.append(ValidationError('model', model_id, "Missing 'name' field"))
if not model.get('fields'):
self.errors.append(ValidationError('model', model_id, "Missing 'fields' - model has no fields"))
# Check for primary key
fields = model.get('fields', [])
has_pk = any('primary_key' in f.get('constraints', []) for f in fields)
if not has_pk:
self.errors.append(ValidationError('model', model_id, "No primary_key field defined"))
# Check relations reference existing models
for relation in model.get('relations', []):
target = relation.get('target', '')
if target and target not in self.model_ids:
self.errors.append(ValidationError(
'model', model_id,
f"Relation target '{target}' does not exist"
))
# Check enum fields have values
for field in fields:
if field.get('type') == 'enum' and not field.get('enum_values'):
self.errors.append(ValidationError(
'model', model_id,
f"Enum field '{field.get('name')}' missing enum_values"
))
def _validate_apis(self):
"""Validate API endpoints."""
for api in self.design.get('api_endpoints', []):
api_id = api.get('id', 'unknown')
# Check required fields
if not api.get('id'):
self.errors.append(ValidationError('api', api_id, "Missing 'id' field"))
if not api.get('method'):
self.errors.append(ValidationError('api', api_id, "Missing 'method' field"))
if not api.get('path'):
self.errors.append(ValidationError('api', api_id, "Missing 'path' field"))
# POST/PUT/PATCH should have request_body
method = api.get('method', '').upper()
if method in ['POST', 'PUT', 'PATCH'] and not api.get('request_body'):
self.warnings.append(ValidationError(
'api', api_id,
f"{method} endpoint should have request_body",
severity="warning"
))
# Check at least one response defined
if not api.get('responses'):
self.errors.append(ValidationError('api', api_id, "No responses defined"))
# Check model dependencies exist
for model_id in api.get('depends_on_models', []):
if model_id not in self.model_ids:
self.errors.append(ValidationError(
'api', api_id,
f"depends_on_models references non-existent model '{model_id}'"
))
# Check API dependencies exist
for dep_api_id in api.get('depends_on_apis', []):
if dep_api_id not in self.api_ids:
self.errors.append(ValidationError(
'api', api_id,
f"depends_on_apis references non-existent API '{dep_api_id}'"
))
def _validate_pages(self):
"""Validate pages."""
for page in self.design.get('pages', []):
page_id = page.get('id', 'unknown')
# Check required fields
if not page.get('id'):
self.errors.append(ValidationError('page', page_id, "Missing 'id' field"))
if not page.get('path'):
self.errors.append(ValidationError('page', page_id, "Missing 'path' field"))
# Check data_needs reference existing APIs
for data_need in page.get('data_needs', []):
api_id = data_need.get('api_id', '')
if api_id and api_id not in self.api_ids:
self.errors.append(ValidationError(
'page', page_id,
f"data_needs references non-existent API '{api_id}'"
))
# Check components exist
for comp_id in page.get('components', []):
if comp_id not in self.component_ids:
self.errors.append(ValidationError(
'page', page_id,
f"References non-existent component '{comp_id}'"
))
def _validate_components(self):
"""Validate components."""
for comp in self.design.get('components', []):
comp_id = comp.get('id', 'unknown')
# Check required fields
if not comp.get('id'):
self.errors.append(ValidationError('component', comp_id, "Missing 'id' field"))
if not comp.get('name'):
self.errors.append(ValidationError('component', comp_id, "Missing 'name' field"))
# Check uses_apis reference existing APIs
for api_id in comp.get('uses_apis', []):
if api_id not in self.api_ids:
self.errors.append(ValidationError(
'component', comp_id,
f"uses_apis references non-existent API '{api_id}'"
))
# Check uses_components reference existing components
for child_id in comp.get('uses_components', []):
if child_id not in self.component_ids:
self.errors.append(ValidationError(
'component', comp_id,
f"uses_components references non-existent component '{child_id}'"
))
def _validate_no_circular_deps(self):
"""Check for circular dependencies."""
# Build dependency graph
deps: Dict[str, Set[str]] = defaultdict(set)
# Model relations
for model in self.design.get('data_models', []):
model_id = model.get('id', '')
for relation in model.get('relations', []):
target = relation.get('target', '')
if target:
deps[model_id].add(target)
# API dependencies
for api in self.design.get('api_endpoints', []):
api_id = api.get('id', '')
for model_id in api.get('depends_on_models', []):
deps[api_id].add(model_id)
for dep_api_id in api.get('depends_on_apis', []):
deps[api_id].add(dep_api_id)
# Page dependencies
for page in self.design.get('pages', []):
page_id = page.get('id', '')
for data_need in page.get('data_needs', []):
api_id = data_need.get('api_id', '')
if api_id:
deps[page_id].add(api_id)
for comp_id in page.get('components', []):
deps[page_id].add(comp_id)
# Component dependencies
for comp in self.design.get('components', []):
comp_id = comp.get('id', '')
for api_id in comp.get('uses_apis', []):
deps[comp_id].add(api_id)
for child_id in comp.get('uses_components', []):
deps[comp_id].add(child_id)
# Detect cycles using DFS
visited = set()
rec_stack = set()
def has_cycle(node: str, path: List[str]) -> Optional[List[str]]:
visited.add(node)
rec_stack.add(node)
path.append(node)
for neighbor in deps.get(node, []):
if neighbor not in visited:
result = has_cycle(neighbor, path)
if result:
return result
elif neighbor in rec_stack:
# Found cycle
cycle_start = path.index(neighbor)
return path[cycle_start:] + [neighbor]
path.pop()
rec_stack.remove(node)
return None
for entity_id in self.all_ids:
if entity_id not in visited:
cycle = has_cycle(entity_id, [])
if cycle:
self.errors.append(ValidationError(
'dependency', entity_id,
f"Circular dependency detected: {''.join(cycle)}"
))
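# Illustrative (hypothetical IDs): if model_a relates to model_b and model_b
# back to model_a, the DFS above reports something like:
#   "Circular dependency detected: model_a → model_b → model_a"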
def print_report(self):
"""Print validation report."""
print()
print("=" * 60)
print("DESIGN VALIDATION REPORT".center(60))
print("=" * 60)
# Summary
print()
print(f" Models: {len(self.model_ids)}")
print(f" APIs: {len(self.api_ids)}")
print(f" Pages: {len(self.page_ids)}")
print(f" Components: {len(self.component_ids)}")
print(f" Total: {len(self.all_ids)}")
# Errors
if self.errors:
print()
print("-" * 60)
print(f"ERRORS ({len(self.errors)})")
print("-" * 60)
for error in self.errors:
print(f" {error}")
# Warnings
if self.warnings:
print()
print("-" * 60)
print(f"WARNINGS ({len(self.warnings)})")
print("-" * 60)
for warning in self.warnings:
print(f" {warning}")
# Result
print()
print("=" * 60)
if self.errors:
print("❌ VALIDATION FAILED".center(60))
else:
print("✅ VALIDATION PASSED".center(60))
print("=" * 60)
# ============================================================================
# Dependency Graph Generator
# ============================================================================
class DependencyGraphGenerator:
"""Generates dependency graph and execution layers from design document."""
def __init__(self, design_doc: dict):
self.design = design_doc
self.deps: Dict[str, Set[str]] = defaultdict(set)
self.reverse_deps: Dict[str, Set[str]] = defaultdict(set)
self.entity_types: Dict[str, str] = {}
self.entity_names: Dict[str, str] = {}
self.layers: List[List[str]] = []
def generate(self) -> dict:
"""Generate the full dependency graph."""
self._build_dependency_map()
self._calculate_layers()
return self._build_graph_document()
def _build_dependency_map(self):
"""Build forward and reverse dependency maps."""
# Models
for model in self.design.get('data_models', []):
model_id = model.get('id', '')
self.entity_types[model_id] = 'model'
self.entity_names[model_id] = model.get('name', model_id)
for relation in model.get('relations', []):
target = relation.get('target', '')
if target:
self.deps[model_id].add(target)
self.reverse_deps[target].add(model_id)
# APIs
for api in self.design.get('api_endpoints', []):
api_id = api.get('id', '')
self.entity_types[api_id] = 'api'
self.entity_names[api_id] = api.get('summary', api_id)
for model_id in api.get('depends_on_models', []):
self.deps[api_id].add(model_id)
self.reverse_deps[model_id].add(api_id)
for dep_api_id in api.get('depends_on_apis', []):
self.deps[api_id].add(dep_api_id)
self.reverse_deps[dep_api_id].add(api_id)
# Pages
for page in self.design.get('pages', []):
page_id = page.get('id', '')
self.entity_types[page_id] = 'page'
self.entity_names[page_id] = page.get('name', page_id)
for data_need in page.get('data_needs', []):
api_id = data_need.get('api_id', '')
if api_id:
self.deps[page_id].add(api_id)
self.reverse_deps[api_id].add(page_id)
for comp_id in page.get('components', []):
self.deps[page_id].add(comp_id)
self.reverse_deps[comp_id].add(page_id)
# Components
for comp in self.design.get('components', []):
comp_id = comp.get('id', '')
self.entity_types[comp_id] = 'component'
self.entity_names[comp_id] = comp.get('name', comp_id)
for api_id in comp.get('uses_apis', []):
self.deps[comp_id].add(api_id)
self.reverse_deps[api_id].add(comp_id)
for child_id in comp.get('uses_components', []):
self.deps[comp_id].add(child_id)
self.reverse_deps[child_id].add(comp_id)
def _calculate_layers(self):
"""Calculate execution layers using topological sort."""
# Find all entities with no dependencies (Layer 1)
all_entities = set(self.entity_types.keys())
remaining = all_entities.copy()
assigned = set()
while remaining:
# Find entities whose dependencies are all assigned
layer = []
for entity_id in remaining:
deps = self.deps.get(entity_id, set())
if deps.issubset(assigned):
layer.append(entity_id)
if not layer:
# Shouldn't happen if no circular deps, but safety check
print(f"Warning: Could not assign remaining entities: {remaining}", file=sys.stderr)
break
self.layers.append(sorted(layer))
for entity_id in layer:
remaining.remove(entity_id)
assigned.add(entity_id)
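# Illustrative (hypothetical IDs): with deps user_api -> {user_model} and
# profile_page -> {user_api}, the loop above yields three layers:
#   [['user_model'], ['user_api'], ['profile_page']]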
def _build_graph_document(self) -> dict:
"""Build the dependency graph document."""
# Calculate stats
max_parallelism = max(len(layer) for layer in self.layers) if self.layers else 0
critical_path = len(self.layers)
graph = {
'dependency_graph': {
'design_version': self.design.get('revision', 1),
'workflow_version': self.design.get('workflow_version', 'v001'),
'generated_at': datetime.now().isoformat(),
'generator': 'validate_design.py',
'stats': {
'total_entities': len(self.entity_types),
'total_layers': len(self.layers),
'max_parallelism': max_parallelism,
'critical_path_length': critical_path
}
},
'layers': [],
'dependency_map': {},
'task_map': []
}
# Build layers
layer_names = {
1: ("Data Layer", "Database models - no external dependencies"),
2: ("API Layer", "REST endpoints - depend on models"),
3: ("UI Layer", "Pages and components - depend on APIs"),
}
for i, layer_entities in enumerate(self.layers, 1):
name, desc = layer_names.get(i, (f"Layer {i}", f"Entities with {i-1} levels of dependencies"))
layer_items = []
for entity_id in layer_entities:
entity_type = self.entity_types.get(entity_id, 'unknown')
agent = 'backend' if entity_type in ['model', 'api'] else 'frontend'
layer_items.append({
'id': entity_id,
'type': entity_type,
'name': self.entity_names.get(entity_id, entity_id),
'depends_on': list(self.deps.get(entity_id, [])),
'task_id': f"task_create_{entity_id}",
'agent': agent,
'complexity': 'medium' # Could be calculated
})
graph['layers'].append({
'layer': i,
'name': name,
'description': desc,
'items': layer_items,
'requires_layers': list(range(1, i)) if i > 1 else [],
'parallel_count': len(layer_items)
})
# Build dependency map
for entity_id in self.entity_types:
graph['dependency_map'][entity_id] = {
'type': self.entity_types.get(entity_id),
'layer': self._get_layer_number(entity_id),
'depends_on': list(self.deps.get(entity_id, [])),
'depended_by': list(self.reverse_deps.get(entity_id, []))
}
return graph
def _get_layer_number(self, entity_id: str) -> int:
"""Get the layer number for an entity."""
for i, layer in enumerate(self.layers, 1):
if entity_id in layer:
return i
return 0
def print_layers(self):
"""Print layer visualization."""
print()
print("=" * 60)
print("EXECUTION LAYERS".center(60))
print("=" * 60)
for i, layer_entities in enumerate(self.layers, 1):
print()
print(f"Layer {i}: ({len(layer_entities)} items - parallel)")
print("-" * 40)
for entity_id in layer_entities:
entity_type = self.entity_types.get(entity_id, '?')
icon = {'model': '📦', 'api': '🔌', 'page': '📄', 'component': '🧩'}.get(entity_type, '')
deps = self.deps.get(entity_id, set())
deps_str = f" ← [{', '.join(deps)}]" if deps else ""
print(f" {icon} {entity_id}{deps_str}")
print()
print("=" * 60)
# ============================================================================
# Context Generator
# ============================================================================
class ContextGenerator:
"""Generates context snapshots for tasks."""
def __init__(self, design_doc: dict, graph: dict, output_dir: str):
self.design = design_doc
self.graph = graph
self.output_dir = output_dir
# Index design entities by ID for quick lookup
self.models: Dict[str, dict] = {}
self.apis: Dict[str, dict] = {}
self.pages: Dict[str, dict] = {}
self.components: Dict[str, dict] = {}
self._index_entities()
def _index_entities(self):
"""Index all entities by ID."""
for model in self.design.get('data_models', []):
self.models[model.get('id', '')] = model
for api in self.design.get('api_endpoints', []):
self.apis[api.get('id', '')] = api
for page in self.design.get('pages', []):
self.pages[page.get('id', '')] = page
for comp in self.design.get('components', []):
self.components[comp.get('id', '')] = comp
def generate_all_contexts(self):
"""Generate context files for all entities."""
contexts_dir = Path(self.output_dir) / 'contexts'
contexts_dir.mkdir(parents=True, exist_ok=True)
for entity_id, entity_info in self.graph.get('dependency_map', {}).items():
context = self._generate_context(entity_id, entity_info)
context_path = contexts_dir / f"{entity_id}.yml"
save_yaml(str(context_path), context)
print(f"Generated {len(self.graph.get('dependency_map', {}))} context files in {contexts_dir}")
def _generate_context(self, entity_id: str, entity_info: dict) -> dict:
"""Generate context for a single entity."""
entity_type = entity_info.get('type', '')
deps = entity_info.get('depends_on', [])
context = {
'task_id': f"task_create_{entity_id}",
'entity_id': entity_id,
'generated_at': datetime.now().isoformat(),
'workflow_version': self.graph.get('dependency_graph', {}).get('workflow_version', 'v001'),
'target': {
'type': entity_type,
'definition': self._get_entity_definition(entity_id, entity_type)
},
'related': {
'models': [],
'apis': [],
'components': []
},
'dependencies': {
'entity_ids': deps,
'definitions': []
},
'files': {
'to_create': self._get_files_to_create(entity_id, entity_type),
'reference': []
},
'acceptance': self._get_acceptance_criteria(entity_id, entity_type)
}
# Add related entity definitions
for dep_id in deps:
dep_info = self.graph.get('dependency_map', {}).get(dep_id, {})
dep_type = dep_info.get('type', '')
dep_def = self._get_entity_definition(dep_id, dep_type)
if dep_type == 'model':
context['related']['models'].append({'id': dep_id, 'definition': dep_def})
elif dep_type == 'api':
context['related']['apis'].append({'id': dep_id, 'definition': dep_def})
elif dep_type == 'component':
context['related']['components'].append({'id': dep_id, 'definition': dep_def})
context['dependencies']['definitions'].append({
'id': dep_id,
'type': dep_type,
'definition': dep_def
})
return context
def _get_entity_definition(self, entity_id: str, entity_type: str) -> dict:
"""Get the full definition for an entity."""
if entity_type == 'model':
return self.models.get(entity_id, {})
elif entity_type == 'api':
return self.apis.get(entity_id, {})
elif entity_type == 'page':
return self.pages.get(entity_id, {})
elif entity_type == 'component':
return self.components.get(entity_id, {})
return {}
def _get_files_to_create(self, entity_id: str, entity_type: str) -> List[str]:
"""Get list of files to create for an entity."""
if entity_type == 'model':
name = self.models.get(entity_id, {}).get('name', entity_id)
return [
'prisma/schema.prisma',
f'app/models/{name.lower()}.ts'
]
elif entity_type == 'api':
path = self.apis.get(entity_id, {}).get('path', '/api/unknown')
route_path = path.replace('/api/', '').replace(':', '')
return [f'app/api/{route_path}/route.ts']
elif entity_type == 'page':
path = self.pages.get(entity_id, {}).get('path', '/unknown')
return [f'app{path}/page.tsx']
elif entity_type == 'component':
name = self.components.get(entity_id, {}).get('name', 'Unknown')
return [f'app/components/{name}.tsx']
return []
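# Illustrative (hypothetical entity): an API with path '/api/users/:id' maps
# to 'app/api/users/id/route.ts' (the ':' is stripped, the segment is kept).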
def _get_acceptance_criteria(self, entity_id: str, entity_type: str) -> List[dict]:
"""Get acceptance criteria for an entity."""
criteria = []
if entity_type == 'model':
criteria = [
{'criterion': 'Model defined in Prisma schema', 'verification': 'Check prisma/schema.prisma'},
{'criterion': 'TypeScript types exported', 'verification': 'Import type in test file'},
{'criterion': 'Relations properly configured', 'verification': 'Check Prisma relations'},
]
elif entity_type == 'api':
api = self.apis.get(entity_id, {})
method = api.get('method', 'GET')
path = api.get('path', '/api/unknown')
criteria = [
{'criterion': f'{method} {path} returns success response', 'verification': f'curl -X {method} {path}'},
{'criterion': 'Request validation implemented', 'verification': 'Test with invalid data'},
{'criterion': 'Error responses match contract', 'verification': 'Test error scenarios'},
]
elif entity_type == 'page':
page = self.pages.get(entity_id, {})
path = page.get('path', '/unknown')
criteria = [
{'criterion': f'Page renders at {path}', 'verification': f'Navigate to {path}'},
{'criterion': 'Data fetching works', 'verification': 'Check network tab'},
{'criterion': 'Components render correctly', 'verification': 'Visual inspection'},
]
elif entity_type == 'component':
criteria = [
{'criterion': 'Component renders without errors', 'verification': 'Import and render in test'},
{'criterion': 'Props are typed correctly', 'verification': 'TypeScript compilation'},
{'criterion': 'Events fire correctly', 'verification': 'Test event handlers'},
]
return criteria
# ============================================================================
# Task Generator
# ============================================================================
class TaskGenerator:
"""Generates task files with full context."""
def __init__(self, design_doc: dict, graph: dict, output_dir: str):
self.design = design_doc
self.graph = graph
self.output_dir = output_dir
def generate_all_tasks(self):
"""Generate task files for all entities."""
tasks_dir = Path(self.output_dir) / 'tasks'
tasks_dir.mkdir(parents=True, exist_ok=True)
task_count = 0
for layer in self.graph.get('layers', []):
for item in layer.get('items', []):
task = self._generate_task(item, layer.get('layer', 1))
task_path = tasks_dir / f"{task['id']}.yml"
save_yaml(str(task_path), task)
task_count += 1
print(f"Generated {task_count} task files in {tasks_dir}")
def _generate_task(self, item: dict, layer_num: int) -> dict:
"""Generate a task for an entity."""
entity_id = item.get('id', '')
entity_type = item.get('type', '')
task = {
'id': item.get('task_id', f'task_create_{entity_id}'),
'type': 'create',
'title': f"Create {item.get('name', entity_id)}",
'agent': item.get('agent', 'backend'),
'entity_id': entity_id,
'entity_ids': [entity_id],
'status': 'pending',
'layer': layer_num,
'parallel_group': f"layer_{layer_num}",
'complexity': item.get('complexity', 'medium'),
'dependencies': [f"task_create_{dep}" for dep in item.get('depends_on', [])],
'context': {
'design_version': self.graph.get('dependency_graph', {}).get('design_version', 1),
'workflow_version': self.graph.get('dependency_graph', {}).get('workflow_version', 'v001'),
'context_snapshot_path': f".workflow/versions/v001/contexts/{entity_id}.yml"
},
'created_at': datetime.now().isoformat()
}
return task
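# Illustrative (hypothetical item): {'id': 'user_model', 'depends_on': ['base_model']}
# becomes task 'task_create_user_model' with dependencies ['task_create_base_model'].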
# ============================================================================
# Main CLI
# ============================================================================
def main():
parser = argparse.ArgumentParser(description="Validate design document and generate dependency graph")
parser.add_argument('design_file', help='Path to design_document.yml')
parser.add_argument('--output-dir', '-o', default='.workflow/versions/v001',
help='Output directory for generated files')
parser.add_argument('--validate-only', '-v', action='store_true',
help='Only validate, do not generate files')
parser.add_argument('--quiet', '-q', action='store_true',
help='Suppress output except errors')
parser.add_argument('--json', action='store_true',
help='Output validation result as JSON')
args = parser.parse_args()
# Load design document
design = load_yaml(args.design_file)
if not design:
print(f"Error: Could not load design document: {args.design_file}", file=sys.stderr)
sys.exit(1)
# Validate
validator = DesignValidator(design)
is_valid = validator.validate()
if args.json:
result = {
'valid': is_valid,
'errors': [str(e) for e in validator.errors],
'warnings': [str(w) for w in validator.warnings],
'stats': {
'models': len(validator.model_ids),
'apis': len(validator.api_ids),
'pages': len(validator.page_ids),
'components': len(validator.component_ids)
}
}
print(json.dumps(result, indent=2))
sys.exit(0 if is_valid else 1)
if not args.quiet:
validator.print_report()
if not is_valid:
sys.exit(1)
if args.validate_only:
sys.exit(0)
# Generate dependency graph
generator = DependencyGraphGenerator(design)
graph = generator.generate()
if not args.quiet:
generator.print_layers()
# Save dependency graph
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
graph_path = output_dir / 'dependency_graph.yml'
save_yaml(str(graph_path), graph)
print(f"Saved dependency graph to: {graph_path}")
# Generate context files
context_gen = ContextGenerator(design, graph, str(output_dir))
context_gen.generate_all_contexts()
# Generate task files
task_gen = TaskGenerator(design, graph, str(output_dir))
task_gen.generate_all_tasks()
print()
print("✅ Design validation and generation complete!")
print(f" Output directory: {output_dir}")
if __name__ == "__main__":
main()

@@ -0,0 +1,129 @@
#!/usr/bin/env python3
"""Validate project manifest integrity."""
import argparse
import json
import os
import sys
def validate_structure(manifest: dict) -> list:
"""Validate manifest has required structure."""
errors = []
required_keys = ["project", "state", "entities", "dependencies"]
for key in required_keys:
if key not in manifest:
errors.append(f"Missing required key: {key}")
if "entities" in manifest:
entity_types = ["pages", "components", "api_endpoints", "database_tables"]
for etype in entity_types:
if etype not in manifest["entities"]:
errors.append(f"Missing entity type: {etype}")
return errors
def validate_pages(pages: list) -> list:
"""Validate page entities."""
errors = []
for page in pages:
if "id" not in page:
errors.append(f"Page missing id: {page}")
if "path" not in page:
errors.append(f"Page {page.get('id', 'unknown')} missing path")
if "file_path" not in page:
errors.append(f"Page {page.get('id', 'unknown')} missing file_path")
return errors
def validate_components(components: list) -> list:
"""Validate component entities."""
errors = []
for comp in components:
if "id" not in comp:
errors.append(f"Component missing id: {comp}")
if "name" not in comp:
errors.append(f"Component {comp.get('id', 'unknown')} missing name")
if "file_path" not in comp:
errors.append(f"Component {comp.get('id', 'unknown')} missing file_path")
return errors
def validate_apis(apis: list) -> list:
"""Validate API endpoint entities."""
errors = []
for api in apis:
if "id" not in api:
errors.append(f"API missing id: {api}")
if "method" not in api:
errors.append(f"API {api.get('id', 'unknown')} missing method")
if "path" not in api:
errors.append(f"API {api.get('id', 'unknown')} missing path")
return errors
def validate_tables(tables: list) -> list:
"""Validate database table entities."""
errors = []
for table in tables:
if "id" not in table:
errors.append(f"Table missing id: {table}")
if "name" not in table:
errors.append(f"Table {table.get('id', 'unknown')} missing name")
if "columns" not in table:
errors.append(f"Table {table.get('id', 'unknown')} missing columns")
return errors
def main():
parser = argparse.ArgumentParser(description="Validate project manifest")
parser.add_argument("--strict", action="store_true", help="Treat warnings as errors")
parser.add_argument("--manifest", default="project_manifest.json", help="Path to manifest")
args = parser.parse_args()
manifest_path = args.manifest
if not os.path.isabs(manifest_path):
manifest_path = os.path.join(os.getcwd(), manifest_path)
if not os.path.exists(manifest_path):
print(f"Error: Manifest not found at {manifest_path}")
return 1
with open(manifest_path) as f:
manifest = json.load(f)
errors = []
warnings = []
# Structure validation
errors.extend(validate_structure(manifest))
if "entities" in manifest:
errors.extend(validate_pages(manifest["entities"].get("pages", [])))
errors.extend(validate_components(manifest["entities"].get("components", [])))
errors.extend(validate_apis(manifest["entities"].get("api_endpoints", [])))
errors.extend(validate_tables(manifest["entities"].get("database_tables", [])))
# Report results
if errors:
print("VALIDATION FAILED")
for error in errors:
print(f" ERROR: {error}")
return 1
if warnings:
print("VALIDATION PASSED WITH WARNINGS")
for warning in warnings:
print(f" WARNING: {warning}")
if args.strict:
return 1
return 0
print("VALIDATION PASSED")
return 0
if __name__ == "__main__":
sys.exit(main())

@@ -0,0 +1,478 @@
#!/usr/bin/env python3
"""
Workflow enforcement hook for Claude Code.
Validates that operations comply with current workflow phase.
When blocked, instructs AI to run /workflow:spawn to start a proper workflow.
Exit codes:
0 = Operation allowed
1 = Operation blocked (with message)
"""
import argparse
import json
import os
import sys
from pathlib import Path
# Try to import yaml
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
if HAS_YAML:
return yaml.safe_load(content) or {}
# Simple fallback parser
result = {}
current_list = None
for line in content.split('\n'):
stripped = line.strip()
if not stripped or stripped.startswith('#'):
continue
# Handle list items
if stripped.startswith('- '):
if current_list is not None:
value = stripped[2:].strip()
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
current_list.append(value)
continue
if ':' in stripped:
key, _, value = stripped.partition(':')
key = key.strip()
value = value.strip()
if value == '' or value == '[]':
result[key] = []
current_list = result[key]
elif value == 'null' or value == '~':
result[key] = None
current_list = None
elif value == 'true':
result[key] = True
current_list = None
elif value == 'false':
result[key] = False
current_list = None
elif value.isdigit():
result[key] = int(value)
current_list = None
else:
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
result[key] = value
current_list = None
return result
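# Note: the fallback parser above handles only flat `key: value` pairs and
# top-level lists; nested mappings still require PyYAML.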
def get_current_phase() -> str:
"""Get current workflow phase from version session."""
workflow_dir = Path('.workflow')
current_path = workflow_dir / 'current.yml'
if not current_path.exists():
return 'NO_WORKFLOW'
current = load_yaml(str(current_path))
active_version = current.get('active_version')
if not active_version:
return 'NO_WORKFLOW'
session_path = workflow_dir / 'versions' / active_version / 'session.yml'
if not session_path.exists():
return 'NO_WORKFLOW'
session = load_yaml(str(session_path))
return session.get('current_phase', 'UNKNOWN')
def get_active_version() -> str | None:
"""Get active workflow version."""
workflow_dir = Path('.workflow')
current_path = workflow_dir / 'current.yml'
if not current_path.exists():
return None
current = load_yaml(str(current_path))
return current.get('active_version')
def get_workflow_feature() -> str | None:
"""Get the feature name from current workflow."""
workflow_dir = Path('.workflow')
current_path = workflow_dir / 'current.yml'
if not current_path.exists():
return None
current = load_yaml(str(current_path))
active_version = current.get('active_version')
if not active_version:
return None
session_path = workflow_dir / 'versions' / active_version / 'session.yml'
if not session_path.exists():
return None
session = load_yaml(str(session_path))
return session.get('feature', 'unknown feature')
def count_task_files(version: str) -> int:
"""Count task files in version directory."""
tasks_dir = Path('.workflow') / 'versions' / version / 'tasks'
if not tasks_dir.exists():
return 0
return len(list(tasks_dir.glob('task_*.yml')))
def extract_feature_from_file(file_path: str) -> str:
"""Extract a feature description from the file path."""
# Convert path to a human-readable feature description
parts = Path(file_path).parts
# Remove common prefixes
skip = {'src', 'app', 'lib', 'components', 'pages', 'api', 'utils', 'hooks'}
meaningful = [p for p in parts if p not in skip and not p.startswith('.')]
if meaningful:
# Get the file name without extension
name = Path(file_path).stem
return f"update {name}"
return f"modify {file_path}"
def validate_task_spawn(tool_input: dict) -> tuple[bool, str]:
"""
Validate Task tool spawning for workflow compliance.
"""
phase = get_current_phase()
prompt = tool_input.get('prompt', '')
subagent_type = tool_input.get('subagent_type', '')
agent_type = subagent_type.lower()
# Check architect agent
if 'system-architect' in agent_type or 'ARCHITECT AGENT' in prompt.upper():
if phase not in ['DESIGNING', 'NO_WORKFLOW']:
return False, f"""
WORKFLOW VIOLATION: Cannot spawn Architect agent
Current Phase: {phase}
Required Phase: DESIGNING
The Architect agent can only be spawned during the DESIGNING phase.
👉 REQUIRED ACTION: Run /workflow:status to check current state.
"""
# Check frontend agent
if 'frontend' in agent_type or 'FRONTEND AGENT' in prompt.upper():
if phase not in ['IMPLEMENTING', 'IMPL_REJECTED']:
return False, f"""
WORKFLOW VIOLATION: Cannot spawn Frontend agent
Current Phase: {phase}
Required Phase: IMPLEMENTING
👉 REQUIRED ACTION: Complete the design phase first, then run /workflow:approve
"""
version = get_active_version()
if version and count_task_files(version) == 0:
return False, f"""
WORKFLOW VIOLATION: No task files found
Cannot start implementation without design tasks.
👉 REQUIRED ACTION: Ensure Architect agent created task files in:
.workflow/versions/{version}/tasks/
"""
# Check backend agent
if 'backend' in agent_type or 'BACKEND AGENT' in prompt.upper():
if phase not in ['IMPLEMENTING', 'IMPL_REJECTED']:
return False, f"""
WORKFLOW VIOLATION: Cannot spawn Backend agent
Current Phase: {phase}
Required Phase: IMPLEMENTING
👉 REQUIRED ACTION: Complete the design phase first, then run /workflow:approve
"""
# Check reviewer agent
if 'quality' in agent_type or 'REVIEWER AGENT' in prompt.upper():
if phase not in ['REVIEWING', 'AWAITING_IMPL_APPROVAL']:
return False, f"""
WORKFLOW VIOLATION: Cannot spawn Reviewer agent
Current Phase: {phase}
Required Phase: REVIEWING
👉 REQUIRED ACTION: Complete implementation first.
"""
# Check security agent
if 'security' in agent_type or 'SECURITY AGENT' in prompt.upper():
if phase not in ['SECURITY_REVIEW', 'REVIEWING']:
return False, f"""
WORKFLOW VIOLATION: Cannot spawn Security agent
Current Phase: {phase}
Required Phase: SECURITY_REVIEW
👉 REQUIRED ACTION: Complete code review first, then security review runs.
"""
return True, ""
def validate_write_operation(tool_input: dict) -> tuple[bool, str]:
"""
Validate Write/Edit operations for workflow compliance.
"""
phase = get_current_phase()
file_path = tool_input.get('file_path', tool_input.get('path', ''))
if not file_path:
return True, ""
# Normalize path
try:
abs_file_path = str(Path(file_path).resolve())
project_dir = str(Path.cwd().resolve())
if abs_file_path.startswith(project_dir):
rel_path = abs_file_path[len(project_dir):].lstrip('/')
else:
rel_path = file_path
except (ValueError, OSError):
rel_path = file_path
# Always allow these
always_allowed = [
'project_manifest.json',
'.workflow/',
'skills/',
'.claude/',
'CLAUDE.md',
'package.json',
'package-lock.json',
'docs/', # Documentation generation (/eureka:index, /eureka:landing)
'claudedocs/', # Claude-specific documentation
'public/', # Public assets (landing pages, images)
]
for allowed in always_allowed:
if rel_path.startswith(allowed) or rel_path == allowed.rstrip('/'):
return True, ""
# Extract feature suggestion from file path
suggested_feature = extract_feature_from_file(rel_path)
# NO_WORKFLOW - Must start a workflow first
if phase == 'NO_WORKFLOW':
return False, f"""
WORKFLOW REQUIRED: No active workflow
You are trying to modify: {rel_path}
This project uses guardrail workflows. You cannot directly edit files.
👉 REQUIRED ACTION: Start a workflow first!
Run this command:
/workflow:spawn {suggested_feature}
This will:
1. Create a design for your changes
2. Get approval
3. Then allow you to implement
"""
# DESIGNING phase - can't write implementation files
if phase == 'DESIGNING':
return False, f"""
WORKFLOW VIOLATION: Cannot write implementation files during DESIGNING
Current Phase: DESIGNING
File: {rel_path}
During DESIGNING phase, only these files can be modified:
- project_manifest.json
- .workflow/versions/*/tasks/*.yml
👉 REQUIRED ACTION: Complete design and get approval
1. Finish adding entities to project_manifest.json
2. Create task files in .workflow/versions/*/tasks/
3. Run: /workflow:approve
"""
# REVIEWING phase - read only
if phase == 'REVIEWING':
return False, f"""
WORKFLOW VIOLATION: Cannot write files during REVIEWING
Current Phase: REVIEWING
File: {rel_path}
During REVIEWING phase, files are READ-ONLY.
👉 REQUIRED ACTION: Complete the review
If changes are needed:
- Run: /workflow:reject "reason for changes"
- This returns to IMPLEMENTING phase
If review passes:
- Run: /workflow:approve
"""
# SECURITY_REVIEW phase - read only
if phase == 'SECURITY_REVIEW':
return False, f"""
WORKFLOW VIOLATION: Cannot write files during SECURITY_REVIEW
Current Phase: SECURITY_REVIEW
File: {rel_path}
During SECURITY_REVIEW phase, files are READ-ONLY.
Security scan is running to check for vulnerabilities.
👉 REQUIRED ACTION: Wait for security scan to complete
If security issues found:
- Workflow returns to IMPLEMENTING phase to fix issues
If security passes:
- Workflow proceeds to AWAITING_IMPL_APPROVAL
For full audit: /workflow:security --full
"""
# AWAITING approval phases
if phase in ['AWAITING_DESIGN_APPROVAL', 'AWAITING_IMPL_APPROVAL']:
gate_type = "design" if "DESIGN" in phase else "implementation"
return False, f"""
WORKFLOW VIOLATION: Cannot write files while awaiting approval
Current Phase: {phase}
File: {rel_path}
👉 REQUIRED ACTION: Get user approval
Waiting for {gate_type} approval. Ask the user to run:
- /workflow:approve (to proceed)
- /workflow:reject (to revise)
"""
# COMPLETED - need new workflow
if phase == 'COMPLETED':
return False, f"""
WORKFLOW VIOLATION: Workflow already completed
Current Phase: COMPLETED
File: {rel_path}
This workflow version is complete.
👉 REQUIRED ACTION: Start a new workflow
Run: /workflow:spawn {suggested_feature}
"""
return True, ""
def validate_transition(tool_input: dict) -> tuple[bool, str]:
"""Validate phase transitions for proper sequencing."""
return True, ""
def main():
parser = argparse.ArgumentParser(description="Workflow enforcement hook")
parser.add_argument('--operation', required=True,
choices=['task', 'write', 'edit', 'transition', 'build'],
help='Operation type being validated')
parser.add_argument('--input', help='JSON input from tool call')
parser.add_argument('--file', help='File path (for write/edit operations)')
args = parser.parse_args()
# Parse input
tool_input = {}
if args.input:
try:
tool_input = json.loads(args.input)
except json.JSONDecodeError:
tool_input = {'raw': args.input}
if args.file:
tool_input['file_path'] = args.file
# Route to appropriate validator
allowed = True
message = ""
if args.operation == 'task':
allowed, message = validate_task_spawn(tool_input)
elif args.operation in ['write', 'edit']:
allowed, message = validate_write_operation(tool_input)
elif args.operation == 'transition':
allowed, message = validate_transition(tool_input)
elif args.operation == 'build':
phase = get_current_phase()
print(f"BUILD: Running in phase {phase}")
allowed = True
# Output result
if not allowed:
print(message, file=sys.stderr)
sys.exit(1)
else:
phase = get_current_phase()
version = get_active_version() or 'N/A'
print(f"✓ WORKFLOW: {args.operation.upper()} allowed in {phase} (v{version})")
sys.exit(0)
if __name__ == "__main__":
main()

@@ -0,0 +1,282 @@
#!/usr/bin/env python3
"""
Pre-write validation hook for guardrail enforcement.
Validates that file writes are allowed based on:
1. Current workflow phase
2. Manifest-defined allowed paths
3. Always-allowed system paths
Exit codes:
0 = Write allowed
1 = Write blocked (with error message)
"""
import argparse
import json
import os
import sys
from pathlib import Path
# Always allowed paths (relative to project root)
ALWAYS_ALLOWED_PATTERNS = [
"project_manifest.json",
".workflow/",
".claude/",
"skills/",
"CLAUDE.md",
"package.json",
"package-lock.json",
"tsconfig.json",
".gitignore",
".env.local",
".env.example",
"docs/", # Documentation generation (/eureka:index, /eureka:landing)
"claudedocs/", # Claude-specific documentation
"public/", # Public assets (landing pages, images)
]
def load_manifest(manifest_path: str) -> dict | None:
"""Load manifest if it exists."""
if not os.path.exists(manifest_path):
return None
try:
with open(manifest_path) as f:
return json.load(f)
except (json.JSONDecodeError, IOError):
return None
def normalize_path(file_path: str, project_dir: str) -> str:
"""Normalize file path to relative path from project root."""
try:
abs_path = Path(file_path).resolve()
proj_path = Path(project_dir).resolve()
# Make relative if under project
if str(abs_path).startswith(str(proj_path)):
return str(abs_path.relative_to(proj_path))
return str(abs_path)
except (ValueError, OSError):
return file_path
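# Illustrative (hypothetical paths): with project_dir '/repo',
#   normalize_path('/repo/src/a.ts', '/repo') -> 'src/a.ts'
# while paths outside the project stay absolute.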
def is_always_allowed(rel_path: str) -> bool:
"""Check if path is in always-allowed list."""
for pattern in ALWAYS_ALLOWED_PATTERNS:
if pattern.endswith('/'):
# Directory pattern
if rel_path.startswith(pattern) or rel_path == pattern.rstrip('/'):
return True
else:
# Exact file match
if rel_path == pattern:
return True
return False
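# Illustrative behavior (hypothetical paths):
#   is_always_allowed(".workflow/current.yml")  -> True   (".workflow/" directory prefix)
#   is_always_allowed("package.json")           -> True   (exact file match)
#   is_always_allowed("src/app/page.tsx")       -> False  (must be granted via manifest/tasks)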
def get_allowed_paths_from_manifest(manifest: dict) -> set:
"""Extract all allowed file paths from manifest entities."""
allowed = set()
entities = manifest.get("entities", {})
entity_types = ["pages", "components", "api_endpoints", "database_tables", "services", "utils", "hooks", "types"]
for entity_type in entity_types:
for entity in entities.get(entity_type, []):
status = entity.get("status", "")
# Allow APPROVED, IMPLEMENTED, or PENDING (for design phase updates)
if status in ["APPROVED", "IMPLEMENTED", "PENDING", "IN_PROGRESS"]:
if "file_path" in entity:
allowed.add(entity["file_path"])
# Also check for multiple file paths
if "file_paths" in entity:
for fp in entity.get("file_paths", []):
allowed.add(fp)
return allowed
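# Sketch of the manifest shape this reads (entity fields as used above; values hypothetical):
#   {"entities": {"components": [
#       {"id": "comp_header", "status": "APPROVED",
#        "file_path": "src/components/Header.tsx"}]}}
#   -> allowed == {"src/components/Header.tsx"}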
def get_allowed_paths_from_tasks(project_dir: str) -> set:
"""Extract allowed file paths from task files in active workflow version."""
allowed = set()
# Try to import yaml
try:
import yaml
has_yaml = True
except ImportError:
has_yaml = False
# Find active version
current_path = Path(project_dir) / ".workflow" / "current.yml"
if not current_path.exists():
return allowed
try:
with open(current_path) as f:
content = f.read()
if has_yaml:
current = yaml.safe_load(content) or {}
else:
# Simple fallback parser
current = {}
for line in content.split('\n'):
if ':' in line and not line.startswith(' '):
key, _, value = line.partition(':')
current[key.strip()] = value.strip()
active_version = current.get('active_version')
if not active_version:
return allowed
# Read task files
tasks_dir = Path(project_dir) / ".workflow" / "versions" / active_version / "tasks"
if not tasks_dir.exists():
return allowed
for task_file in tasks_dir.glob("*.yml"):
try:
with open(task_file) as f:
task_content = f.read()
if has_yaml:
task = yaml.safe_load(task_content) or {}
file_paths = task.get('file_paths', [])
for fp in file_paths:
allowed.add(fp)
else:
# Simple extraction for file_paths
in_file_paths = False
for line in task_content.split('\n'):
if line.strip().startswith('file_paths:'):
in_file_paths = True
continue
if in_file_paths:
if line.strip().startswith('- '):
fp = line.strip()[2:].strip()
allowed.add(fp)
elif not line.startswith(' '):
in_file_paths = False
            except Exception:
continue
    except Exception:
pass
return allowed
def validate_write(file_path: str, manifest_path: str) -> tuple[bool, str]:
"""
Validate if a write operation is allowed.
Returns:
(allowed: bool, message: str)
"""
project_dir = os.path.dirname(manifest_path) or os.getcwd()
rel_path = normalize_path(file_path, project_dir)
# Check always-allowed paths first
if is_always_allowed(rel_path):
return True, f"✓ GUARDRAIL: Always-allowed path: {rel_path}"
# Load manifest
manifest = load_manifest(manifest_path)
# If no manifest exists, guardrails not active
if manifest is None:
return True, "✓ GUARDRAIL: No manifest found, allowing write"
# Get current phase
phase = manifest.get("state", {}).get("current_phase", "UNKNOWN")
# Collect all allowed paths
allowed_from_manifest = get_allowed_paths_from_manifest(manifest)
allowed_from_tasks = get_allowed_paths_from_tasks(project_dir)
all_allowed = allowed_from_manifest | allowed_from_tasks
# Check if file is in allowed paths
if rel_path in all_allowed:
return True, f"✓ GUARDRAIL: Allowed in manifest/tasks: {rel_path}"
    # Also check with the leading "./" removed. Note: lstrip('./') would strip
    # every leading '.' and '/' character (mangling "../x" or ".env"), so slice instead.
    clean_path = rel_path[2:] if rel_path.startswith('./') else rel_path
    if clean_path in all_allowed:
        return True, f"✓ GUARDRAIL: Allowed in manifest/tasks: {clean_path}"
    # Check if any allowed path matches (handle "./" prefix variations)
    for allowed in all_allowed:
        allowed_clean = allowed[2:] if allowed.startswith('./') else allowed
        if clean_path == allowed_clean:
            return True, f"✓ GUARDRAIL: Allowed (path match): {rel_path}"
# Extract suggested feature from file path
name = Path(rel_path).stem
suggested_feature = f"update {name}"
# Not allowed - generate helpful error message with actionable instructions
error_msg = f"""
GUARDRAIL VIOLATION: Unauthorized file write
File: {rel_path}
Phase: {phase}
This file is not in the approved manifest or task files.
Allowed paths from manifest: {len(allowed_from_manifest)}
Allowed paths from tasks: {len(allowed_from_tasks)}
👉 REQUIRED ACTION: Start a workflow to modify this file
Run this command:
/workflow:spawn {suggested_feature}
This will:
1. Design what changes are needed
2. Add this file to approved paths
3. Get approval, then implement
Alternative: If workflow exists, add this file to:
- project_manifest.json (entities.*.file_path)
- .workflow/versions/*/tasks/*.yml (file_paths list)
"""
return False, error_msg
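# Usage sketch (hypothetical paths):
#   allowed, msg = validate_write("src/components/Header.tsx", "project_manifest.json")
#   if not allowed:
#       print(msg, file=sys.stderr)  # actionable GUARDRAIL VIOLATION message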
def main():
parser = argparse.ArgumentParser(description="Validate write operation against guardrails")
parser.add_argument("--manifest", required=True, help="Path to project_manifest.json")
parser.add_argument("--file", help="File path being written")
args = parser.parse_args()
# Get file path from argument or environment
file_path = args.file or os.environ.get('TOOL_INPUT_FILE_PATH', '')
if not file_path:
# Try reading from stdin
if not sys.stdin.isatty():
file_path = sys.stdin.read().strip()
if not file_path:
print("✓ GUARDRAIL: No file path provided, allowing (hook misconfiguration?)")
return 0
allowed, message = validate_write(file_path, args.manifest)
if allowed:
print(message)
return 0
else:
print(message, file=sys.stderr)
return 1
if __name__ == "__main__":
sys.exit(main())


@@ -0,0 +1,237 @@
#!/usr/bin/env python3
"""
Static analysis for async/await issues in TypeScript/JavaScript.
Catches common mistakes:
- fetch() without await
- .json() without await
- Async function calls without await
- Floating promises (promise not handled)
"""
import argparse
import json
import os
import re
import sys
from pathlib import Path
from typing import Dict, List
# ============================================================================
# Async Pattern Detection
# ============================================================================
ASYNC_ISSUES = [
# fetch without await - but allow .then() chains
{
"pattern": r"(?<!await\s)(?<!return\s)fetch\s*\([^)]*\)(?!\s*\.then)(?!\s*\))",
"severity": "HIGH",
"message": "fetch() without await or .then()",
"fix": "Add 'await' before fetch() or use .then().catch()"
},
# .json() without await
{
"pattern": r"(?<!await\s)\.json\s*\(\s*\)(?!\s*\.then)",
"severity": "HIGH",
"message": ".json() without await",
"fix": "Add 'await' before .json() call"
},
# .text() without await
{
"pattern": r"(?<!await\s)\.text\s*\(\s*\)(?!\s*\.then)",
"severity": "MEDIUM",
"message": ".text() without await",
"fix": "Add 'await' before .text() call"
},
# axios/fetch response access without await
{
"pattern": r"(?<!await\s)(axios\.(get|post|put|delete|patch))\s*\([^)]*\)\.data",
"severity": "HIGH",
"message": "Accessing .data on unawaited axios call",
"fix": "Add 'await' or use (await axios.get(...)).data"
},
# Promise.all without await
{
"pattern": r"(?<!await\s)(?<!return\s)Promise\.(all|allSettled|race|any)\s*\(",
"severity": "HIGH",
"message": "Promise.all/race without await",
"fix": "Add 'await' before Promise.all()"
},
# Async function call patterns (common API functions)
{
"pattern": r"(?<!await\s)(?<!return\s)(createUser|updateUser|deleteUser|getUser|saveData|loadData|fetchData|submitForm|handleSubmit)\s*\([^)]*\)\s*;",
"severity": "MEDIUM",
"message": "Async function call may need await",
"fix": "Check if this function is async and add 'await' if needed"
},
# setState with async value without await
{
"pattern": r"set\w+\s*\(\s*(?:await\s+)?fetch\s*\(",
"severity": "HIGH",
"message": "Setting state with fetch result - ensure await is used",
"fix": "Use: const data = await fetch(...); setData(data)"
},
# useEffect with async but no await
{
"pattern": r"useEffect\s*\(\s*\(\s*\)\s*=>\s*\{[^}]*fetch\s*\([^}]*\}\s*,",
"severity": "MEDIUM",
"message": "useEffect with fetch - check async handling",
"fix": "Create inner async function: useEffect(() => { const load = async () => {...}; load(); }, [])"
},
]
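# Illustrative lines these heuristics are meant to flag (TypeScript, hypothetical):
#   const res = fetch('/api/users');      // HIGH: fetch() without await or .then()
#   const data = res.json();              // HIGH: .json() without await
#   Promise.all([loadA(), loadB()]);      // HIGH: Promise.all without await
# Being regex-based, the checks can miss or over-report unusual formatting.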
# Files/patterns to skip
SKIP_PATTERNS = [
r"node_modules",
r"\.next",
r"dist",
r"build",
r"\.test\.",
r"\.spec\.",
r"__tests__",
r"__mocks__",
]
def should_skip_file(file_path: str) -> bool:
"""Check if file should be skipped."""
for pattern in SKIP_PATTERNS:
if re.search(pattern, file_path):
return True
return False
def analyze_file(file_path: Path) -> List[Dict]:
"""Analyze a single file for async issues."""
issues = []
try:
content = file_path.read_text(encoding='utf-8')
except Exception:
return issues
lines = content.split('\n')
for line_num, line in enumerate(lines, 1):
# Skip comments
stripped = line.strip()
if stripped.startswith('//') or stripped.startswith('*'):
continue
for rule in ASYNC_ISSUES:
if re.search(rule["pattern"], line):
# Additional context check - skip if line has .then or .catch nearby
context = '\n'.join(lines[max(0, line_num-2):min(len(lines), line_num+2)])
if '.then(' in context and '.catch(' in context:
continue
issues.append({
"file": str(file_path),
"line": line_num,
"severity": rule["severity"],
"message": rule["message"],
"fix": rule["fix"],
"code": line.strip()[:80]
})
return issues
def analyze_project(root_dir: str = ".") -> List[Dict]:
"""Analyze all TypeScript/JavaScript files in project."""
all_issues = []
extensions = [".ts", ".tsx", ".js", ".jsx"]
root = Path(root_dir)
for ext in extensions:
for file_path in root.rglob(f"*{ext}"):
if should_skip_file(str(file_path)):
continue
issues = analyze_file(file_path)
all_issues.extend(issues)
return all_issues
# ============================================================================
# Output Formatting
# ============================================================================
def format_text(issues: List[Dict]) -> str:
    """Format issues as readable text."""
    if not issues:
        return "\n✅ No async/await issues found.\n"
    lines = []
    lines.append("")
    lines.append("┌" + "─" * 70 + "┐")
    lines.append("│" + " ASYNC/AWAIT VERIFICATION".ljust(70) + "│")
    lines.append("├" + "─" * 70 + "┤")
    high = [i for i in issues if i["severity"] == "HIGH"]
    medium = [i for i in issues if i["severity"] == "MEDIUM"]
    lines.append("│" + f" 🔴 High:   {len(high)} issues".ljust(70) + "│")
    lines.append("│" + f" 🟡 Medium: {len(medium)} issues".ljust(70) + "│")
    if high:
        lines.append("├" + "─" * 70 + "┤")
        lines.append("│" + " 🔴 HIGH SEVERITY".ljust(70) + "│")
        for issue in high:
            loc = f"{issue['file']}:{issue['line']}"
            lines.append("│" + f" {loc}".ljust(70) + "│")
            lines.append("│" + f"    {issue['message']}".ljust(70) + "│")
            lines.append("│" + f"    💡 {issue['fix']}".ljust(70) + "│")
            code = issue['code'][:55]
            lines.append("│" + f"    📝 {code}".ljust(70) + "│")
    if medium:
        lines.append("├" + "─" * 70 + "┤")
        lines.append("│" + " 🟡 MEDIUM SEVERITY".ljust(70) + "│")
        for issue in medium[:5]:  # Limit display
            loc = f"{issue['file']}:{issue['line']}"
            lines.append("│" + f" {loc}".ljust(70) + "│")
            lines.append("│" + f"    ⚠️ {issue['message']}".ljust(70) + "│")
    lines.append("└" + "─" * 70 + "┘")
    return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Check for async/await issues")
parser.add_argument("--json", action="store_true", help="Output as JSON")
parser.add_argument("--path", default=".", help="Project path to analyze")
parser.add_argument("--strict", action="store_true", help="Fail on any issue")
args = parser.parse_args()
print("Scanning for async/await issues...")
issues = analyze_project(args.path)
if args.json:
print(json.dumps({
"issues": issues,
"summary": {
"high": len([i for i in issues if i["severity"] == "HIGH"]),
"medium": len([i for i in issues if i["severity"] == "MEDIUM"]),
"total": len(issues)
}
}, indent=2))
else:
print(format_text(issues))
# Exit codes
high_count = len([i for i in issues if i["severity"] == "HIGH"])
if high_count > 0:
sys.exit(1) # High severity issues found
elif args.strict and issues:
sys.exit(1) # Any issues in strict mode
else:
sys.exit(0)
if __name__ == "__main__":
main()


@@ -0,0 +1,376 @@
#!/usr/bin/env python3
"""Verify implementation matches manifest specifications."""
import argparse
import json
import os
import re
import sys
from dataclasses import dataclass
from typing import Optional
@dataclass
class VerificationResult:
entity_id: str
entity_type: str
file_path: str
exists: bool
issues: list[str]
warnings: list[str]
def load_manifest(manifest_path: str) -> dict:
"""Load manifest from file."""
with open(manifest_path) as f:
return json.load(f)
def check_file_exists(project_root: str, file_path: str) -> bool:
"""Check if implementation file exists."""
full_path = os.path.join(project_root, file_path)
return os.path.exists(full_path)
def read_file_content(project_root: str, file_path: str) -> Optional[str]:
"""Read file content if it exists."""
full_path = os.path.join(project_root, file_path)
if not os.path.exists(full_path):
return None
with open(full_path, 'r') as f:
return f.read()
def verify_component(project_root: str, component: dict) -> VerificationResult:
"""Verify a component implementation matches manifest."""
issues = []
warnings = []
file_path = component.get("file_path", "")
exists = check_file_exists(project_root, file_path)
if not exists:
issues.append(f"File not found: {file_path}")
return VerificationResult(
entity_id=component.get("id", "unknown"),
entity_type="component",
file_path=file_path,
exists=False,
issues=issues,
warnings=warnings
)
content = read_file_content(project_root, file_path)
if not content:
issues.append("Could not read file content")
return VerificationResult(
entity_id=component.get("id", "unknown"),
entity_type="component",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
# Check component name exists
name = component.get("name", "")
if name:
# Check for function/const declaration or export
patterns = [
rf"export\s+(const|function)\s+{name}",
rf"(const|function)\s+{name}",
rf"export\s+\{{\s*{name}\s*\}}",
]
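        # These patterns are meant to match declarations such as (illustrative):
        #   export function Header() { ... }
        #   export const Header = () => { ... }
        #   export { Header }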
found = any(re.search(p, content) for p in patterns)
if not found:
issues.append(f"Component '{name}' not found in file")
# Check props interface
props = component.get("props", {})
if props:
# Check if props interface exists
interface_pattern = rf"interface\s+{name}Props"
if not re.search(interface_pattern, content):
warnings.append(f"Props interface '{name}Props' not found")
# Check each prop exists in the file
for prop_name, prop_spec in props.items():
if prop_name not in content:
warnings.append(f"Prop '{prop_name}' may not be implemented")
return VerificationResult(
entity_id=component.get("id", "unknown"),
entity_type="component",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
def verify_page(project_root: str, page: dict) -> VerificationResult:
"""Verify a page implementation matches manifest."""
issues = []
warnings = []
file_path = page.get("file_path", "")
exists = check_file_exists(project_root, file_path)
if not exists:
issues.append(f"File not found: {file_path}")
return VerificationResult(
entity_id=page.get("id", "unknown"),
entity_type="page",
file_path=file_path,
exists=False,
issues=issues,
warnings=warnings
)
content = read_file_content(project_root, file_path)
if not content:
issues.append("Could not read file content")
return VerificationResult(
entity_id=page.get("id", "unknown"),
entity_type="page",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
# Check for default export (Next.js page requirement)
if "export default" not in content:
issues.append("Missing 'export default' (required for Next.js pages)")
# Check component dependencies
components = page.get("components", [])
for comp_id in components:
# Extract component name from ID (e.g., comp_header -> Header)
comp_name = comp_id.replace("comp_", "").title().replace("_", "")
if comp_name not in content:
warnings.append(f"Component '{comp_name}' (from {comp_id}) may not be used")
return VerificationResult(
entity_id=page.get("id", "unknown"),
entity_type="page",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
def verify_api_endpoint(project_root: str, endpoint: dict) -> VerificationResult:
"""Verify an API endpoint implementation matches manifest."""
issues = []
warnings = []
file_path = endpoint.get("file_path", "")
exists = check_file_exists(project_root, file_path)
if not exists:
issues.append(f"File not found: {file_path}")
return VerificationResult(
entity_id=endpoint.get("id", "unknown"),
entity_type="api_endpoint",
file_path=file_path,
exists=False,
issues=issues,
warnings=warnings
)
content = read_file_content(project_root, file_path)
if not content:
issues.append("Could not read file content")
return VerificationResult(
entity_id=endpoint.get("id", "unknown"),
entity_type="api_endpoint",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
# Check HTTP method handler exists
method = endpoint.get("method", "").upper()
method_patterns = [
rf"export\s+async\s+function\s+{method}\s*\(",
rf"export\s+function\s+{method}\s*\(",
rf"export\s+const\s+{method}\s*=",
]
found = any(re.search(p, content) for p in method_patterns)
if not found:
issues.append(f"HTTP method handler '{method}' not found")
# Check request body params if defined
request = endpoint.get("request", {})
if request.get("body"):
for param in request["body"].keys():
if param not in content:
warnings.append(f"Request param '{param}' may not be handled")
return VerificationResult(
entity_id=endpoint.get("id", "unknown"),
entity_type="api_endpoint",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
def verify_database_table(project_root: str, table: dict) -> VerificationResult:
"""Verify a database table implementation matches manifest."""
issues = []
warnings = []
file_path = table.get("file_path", "")
exists = check_file_exists(project_root, file_path)
if not exists:
issues.append(f"File not found: {file_path}")
return VerificationResult(
entity_id=table.get("id", "unknown"),
entity_type="database_table",
file_path=file_path,
exists=False,
issues=issues,
warnings=warnings
)
content = read_file_content(project_root, file_path)
if not content:
issues.append("Could not read file content")
return VerificationResult(
entity_id=table.get("id", "unknown"),
entity_type="database_table",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
# Check columns/fields are defined
columns = table.get("columns", {})
for col_name in columns.keys():
if col_name not in content:
warnings.append(f"Column '{col_name}' may not be defined")
# Check for CRUD operations
crud_ops = ["create", "get", "update", "delete", "find", "all"]
found_ops = [op for op in crud_ops if op.lower() in content.lower()]
if len(found_ops) < 2:
warnings.append("May be missing CRUD operations")
return VerificationResult(
entity_id=table.get("id", "unknown"),
entity_type="database_table",
file_path=file_path,
exists=True,
issues=issues,
warnings=warnings
)
def print_result(result: VerificationResult, verbose: bool = False):
"""Print verification result."""
    status = "✅" if result.exists and not result.issues else "❌"
print(f"{status} [{result.entity_type}] {result.entity_id}")
print(f" File: {result.file_path}")
if result.issues:
for issue in result.issues:
print(f" ❌ ERROR: {issue}")
if verbose and result.warnings:
for warning in result.warnings:
print(f" ⚠️ WARN: {warning}")
def main():
parser = argparse.ArgumentParser(description="Verify implementation against manifest")
parser.add_argument("--manifest", default="project_manifest.json", help="Path to manifest")
parser.add_argument("--project-root", default=".", help="Project root directory")
parser.add_argument("--verbose", "-v", action="store_true", help="Show warnings")
parser.add_argument("--json", action="store_true", help="Output as JSON")
args = parser.parse_args()
manifest_path = args.manifest
if not os.path.isabs(manifest_path):
manifest_path = os.path.join(args.project_root, manifest_path)
if not os.path.exists(manifest_path):
print(f"Error: Manifest not found at {manifest_path}")
return 1
manifest = load_manifest(manifest_path)
entities = manifest.get("entities", {})
results = []
# Verify components
for component in entities.get("components", []):
result = verify_component(args.project_root, component)
results.append(result)
# Verify pages
for page in entities.get("pages", []):
result = verify_page(args.project_root, page)
results.append(result)
# Verify API endpoints
for endpoint in entities.get("api_endpoints", []):
result = verify_api_endpoint(args.project_root, endpoint)
results.append(result)
# Verify database tables
for table in entities.get("database_tables", []):
result = verify_database_table(args.project_root, table)
results.append(result)
# Output results
if args.json:
output = {
"total": len(results),
"passed": sum(1 for r in results if r.exists and not r.issues),
"failed": sum(1 for r in results if not r.exists or r.issues),
"results": [
{
"entity_id": r.entity_id,
"entity_type": r.entity_type,
"file_path": r.file_path,
"exists": r.exists,
"issues": r.issues,
"warnings": r.warnings,
}
for r in results
]
}
print(json.dumps(output, indent=2))
else:
print("\n" + "=" * 60)
print("IMPLEMENTATION VERIFICATION REPORT")
print("=" * 60 + "\n")
for result in results:
print_result(result, args.verbose)
print()
# Summary
passed = sum(1 for r in results if r.exists and not r.issues)
failed = sum(1 for r in results if not r.exists or r.issues)
warnings = sum(len(r.warnings) for r in results)
print("=" * 60)
print(f"SUMMARY: {passed}/{len(results)} passed, {failed} failed, {warnings} warnings")
print("=" * 60)
if failed > 0:
return 1
return 0
if __name__ == "__main__":
    sys.exit(main())


@@ -0,0 +1,986 @@
#!/usr/bin/env python3
"""
Workflow versioning system with task session tracking.
Links workflow sessions with task sessions and individual operations.
"""
from __future__ import annotations
import argparse
import hashlib
import json
import os
import shutil
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
# Try to import yaml
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
# ============================================================================
# YAML/JSON Helpers
# ============================================================================
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
if HAS_YAML:
return yaml.safe_load(content) or {}
# Simple YAML fallback parser for basic key: value structures
return parse_simple_yaml(content)
def parse_simple_yaml(content: str) -> dict:
"""Parse simple YAML without PyYAML dependency."""
result = {}
current_key = None
current_list = None
for line in content.split('\n'):
stripped = line.strip()
# Skip empty lines and comments
if not stripped or stripped.startswith('#'):
continue
# Handle list items
if stripped.startswith('- '):
if current_list is not None:
value = stripped[2:].strip()
# Handle quoted strings
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
current_list.append(value)
continue
# Handle key: value
if ':' in stripped:
key, _, value = stripped.partition(':')
key = key.strip()
value = value.strip()
# Check if this is a list start
if value == '' or value == '[]':
current_key = key
current_list = []
result[key] = current_list
elif value == '{}':
result[key] = {}
current_list = None
elif value == 'null' or value == '~':
result[key] = None
current_list = None
elif value == 'true':
result[key] = True
current_list = None
elif value == 'false':
result[key] = False
current_list = None
elif value.isdigit():
result[key] = int(value)
current_list = None
else:
# Handle quoted strings
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
result[key] = value
current_list = None
return result
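# Minimal sketch of what the fallback parser handles (flat keys and simple lists):
#   parse_simple_yaml("active_version: v001\nfile_paths:\n  - src/app/page.tsx\n")
#   # -> {'active_version': 'v001', 'file_paths': ['src/app/page.tsx']}
# Nested mappings are not supported; PyYAML is needed for full documents.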
def save_yaml(filepath: str, data: dict):
"""Save data to YAML file."""
os.makedirs(os.path.dirname(filepath), exist_ok=True)
if HAS_YAML:
with open(filepath, 'w') as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
else:
with open(filepath, 'w') as f:
json.dump(data, f, indent=2)
def file_hash(filepath: str) -> Optional[str]:
    """Get the first 16 hex chars of the SHA256 hash of file content, or None if missing."""
    if not os.path.exists(filepath):
        return None
    with open(filepath, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()[:16]
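# Example (hypothetical files):
#   file_hash("project_manifest.json")  # -> e.g. "3f2a9c0d1b4e5f67" (16 hex chars)
#   file_hash("missing.json")           # -> None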
# ============================================================================
# Path Helpers
# ============================================================================
def get_workflow_dir() -> Path:
return Path('.workflow')
def get_versions_dir() -> Path:
return get_workflow_dir() / 'versions'
def get_index_path() -> Path:
return get_workflow_dir() / 'index.yml'
def get_operations_log_path() -> Path:
return get_workflow_dir() / 'operations.log'
def get_version_dir(version: str) -> Path:
return get_versions_dir() / version
def get_current_state_path() -> Path:
return get_workflow_dir() / 'current.yml'
def get_version_tasks_dir(version: str) -> Path:
"""Get the tasks directory for a specific version."""
return get_version_dir(version) / 'tasks'
def get_current_tasks_dir() -> Optional[Path]:
"""Get the tasks directory for the currently active version."""
current_path = get_current_state_path()
if not current_path.exists():
return None
current = load_yaml(str(current_path))
version = current.get('active_version')
if not version:
return None
tasks_dir = get_version_tasks_dir(version)
tasks_dir.mkdir(parents=True, exist_ok=True)
return tasks_dir
# ============================================================================
# Version Index Management
# ============================================================================
def load_index() -> dict:
"""Load or create version index."""
index_path = get_index_path()
if index_path.exists():
return load_yaml(str(index_path))
return {
'versions': [],
'latest_version': None,
'total_versions': 0
}
def save_index(index: dict):
"""Save version index."""
save_yaml(str(get_index_path()), index)
def get_next_version() -> str:
"""Get next version number."""
index = load_index()
return f"v{index['total_versions'] + 1:03d}"
# ============================================================================
# Workflow Session Management
# ============================================================================
def create_workflow_session(feature: str, parent_version: str = None) -> dict:
"""Create a new workflow session with version tracking."""
now = datetime.now()
version = get_next_version()
session_id = f"workflow_{now.strftime('%Y%m%d_%H%M%S')}"
# Create version directory and tasks subdirectory
version_dir = get_version_dir(version)
version_dir.mkdir(parents=True, exist_ok=True)
(version_dir / 'tasks').mkdir(exist_ok=True)
# Create workflow session
session = {
'version': version,
'feature': feature,
'session_id': session_id,
'parent_version': parent_version,
'status': 'pending',
'started_at': now.isoformat(),
'completed_at': None,
'current_phase': 'INITIALIZING',
'approvals': {
'design': {
'status': 'pending',
'approved_by': None,
'approved_at': None,
'rejection_reason': None
},
'implementation': {
'status': 'pending',
'approved_by': None,
'approved_at': None,
'rejection_reason': None
}
},
'task_sessions': [],
'summary': {
'total_tasks': 0,
'tasks_completed': 0,
'entities_created': 0,
'entities_updated': 0,
'entities_deleted': 0,
'files_created': 0,
'files_updated': 0,
'files_deleted': 0
}
}
# Save session to version directory
save_yaml(str(version_dir / 'session.yml'), session)
# Update current state pointer
get_workflow_dir().mkdir(exist_ok=True)
save_yaml(str(get_current_state_path()), {
'active_version': version,
'session_id': session_id
})
# Update index
index = load_index()
index['versions'].append({
'version': version,
'feature': feature,
'status': 'pending',
'started_at': now.isoformat(),
'completed_at': None,
'tasks_count': 0,
'operations_count': 0
})
index['latest_version'] = version
index['total_versions'] += 1
save_index(index)
# Take snapshot of current state (manifest, tasks)
take_snapshot(version, 'before')
return session
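# Usage sketch (assuming a fresh project):
#   session = create_workflow_session("add login page")
#   # Creates .workflow/versions/v001/{session.yml,tasks/}, points
#   # .workflow/current.yml at v001, updates index.yml, and snapshots
#   # the manifest and tasks into snapshot_before/.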
def load_current_session() -> Optional[dict]:
"""Load the current active workflow session."""
current_path = get_current_state_path()
if not current_path.exists():
return None
current = load_yaml(str(current_path))
version = current.get('active_version')
if not version:
return None
session_path = get_version_dir(version) / 'session.yml'
if not session_path.exists():
return None
return load_yaml(str(session_path))
def save_current_session(session: dict):
"""Save the current workflow session."""
version = session['version']
session['updated_at'] = datetime.now().isoformat()
save_yaml(str(get_version_dir(version) / 'session.yml'), session)
# Update index
index = load_index()
for v in index['versions']:
if v['version'] == version:
v['status'] = session['status']
v['tasks_count'] = session['summary']['total_tasks']
break
save_index(index)
def complete_workflow_session(session: dict):
"""Mark workflow session as completed."""
now = datetime.now()
session['status'] = 'completed'
session['completed_at'] = now.isoformat()
save_current_session(session)
# Take final snapshot
take_snapshot(session['version'], 'after')
# Update index
index = load_index()
for v in index['versions']:
if v['version'] == session['version']:
v['status'] = 'completed'
v['completed_at'] = now.isoformat()
break
save_index(index)
# Clear current pointer
current_path = get_current_state_path()
if current_path.exists():
current_path.unlink()
# ============================================================================
# Task Session Management
# ============================================================================
def create_task_session(workflow_session: dict, task_id: str, task_type: str, agent: str) -> dict:
"""Create a new task session with full directory structure."""
now = datetime.now()
session_id = f"tasksession_{task_id}_{now.strftime('%Y%m%d_%H%M%S')}"
# Create task session DIRECTORY (not file)
version_dir = get_version_dir(workflow_session['version'])
task_session_dir = version_dir / 'task_sessions' / task_id
task_session_dir.mkdir(parents=True, exist_ok=True)
task_session = {
'session_id': session_id,
'workflow_version': workflow_session['version'],
'task_id': task_id,
'task_type': task_type,
'agent': agent,
'started_at': now.isoformat(),
'completed_at': None,
'duration_ms': None,
'status': 'in_progress',
'operations': [],
'review_session': None,
'errors': [],
'attempt_number': 1,
'previous_attempts': []
}
# Save session.yml
save_yaml(str(task_session_dir / 'session.yml'), task_session)
# Snapshot task definition
snapshot_task_definition(task_id, task_session_dir)
# Initialize operations.log
init_operations_log(task_session_dir, task_id, now)
# Link to workflow
workflow_session['task_sessions'].append(session_id)
workflow_session['summary']['total_tasks'] += 1
save_current_session(workflow_session)
return task_session
def snapshot_task_definition(task_id: str, task_session_dir: Path):
"""Snapshot the task definition at execution time."""
task_file = Path('tasks') / f'{task_id}.yml'
if task_file.exists():
task_data = load_yaml(str(task_file))
task_data['snapshotted_at'] = datetime.now().isoformat()
task_data['source_path'] = str(task_file)
task_data['status_at_snapshot'] = task_data.get('status', 'unknown')
save_yaml(str(task_session_dir / 'task.yml'), task_data)
def init_operations_log(task_session_dir: Path, task_id: str, start_time: datetime):
"""Initialize the operations log file."""
log_path = task_session_dir / 'operations.log'
header = f"# Operations Log for {task_id}\n"
header += f"# Started: {start_time.isoformat()}\n"
header += "# Format: [timestamp] OPERATION target_type: target_id (path)\n"
header += "=" * 70 + "\n\n"
with open(log_path, 'w') as f:
f.write(header)
def log_to_task_operations_log(task_session: dict, operation: dict):
"""Append operation to task-specific operations log."""
version = task_session['workflow_version']
task_id = task_session['task_id']
log_path = get_version_dir(version) / 'task_sessions' / task_id / 'operations.log'
if not log_path.exists():
return
entry = (
f"[{operation['performed_at']}] "
f"{operation['type']} {operation['target_type']}: {operation['target_id']}"
)
if operation.get('target_path'):
entry += f" ({operation['target_path']})"
entry += f"\n Summary: {operation['changes']['diff_summary']}\n"
with open(log_path, 'a') as f:
f.write(entry + "\n")
def load_task_session(version: str, task_id: str) -> Optional[dict]:
"""Load a task session from directory or flat file (backwards compatible)."""
# Try new directory structure first
session_dir = get_version_dir(version) / 'task_sessions' / task_id
session_path = session_dir / 'session.yml'
if session_path.exists():
return load_yaml(str(session_path))
# Fallback to old flat file structure
old_path = get_version_dir(version) / 'task_sessions' / f'{task_id}.yml'
if old_path.exists():
return load_yaml(str(old_path))
return None
def save_task_session(task_session: dict):
"""Save a task session to directory structure."""
version = task_session['workflow_version']
task_id = task_session['task_id']
session_dir = get_version_dir(version) / 'task_sessions' / task_id
session_dir.mkdir(parents=True, exist_ok=True)
save_yaml(str(session_dir / 'session.yml'), task_session)
def complete_task_session(task_session: dict, status: str = 'completed'):
"""Mark task session as completed."""
now = datetime.now()
started = datetime.fromisoformat(task_session['started_at'])
task_session['completed_at'] = now.isoformat()
task_session['duration_ms'] = int((now - started).total_seconds() * 1000)
task_session['status'] = status
save_task_session(task_session)
# Update workflow summary
session = load_current_session()
if session and status == 'completed':
session['summary']['tasks_completed'] += 1
save_current_session(session)
# ============================================================================
# Operation Logging
# ============================================================================
def log_operation(
task_session: dict,
op_type: str, # CREATE, UPDATE, DELETE, RENAME, MOVE
target_type: str, # file, entity, task, manifest
target_id: str,
target_path: str = None,
before_state: str = None,
after_state: str = None,
diff_summary: str = None,
rollback_data: dict = None
) -> dict:
"""Log an operation within a task session."""
now = datetime.now()
seq = len(task_session['operations']) + 1
op_id = f"op_{now.strftime('%Y%m%d_%H%M%S')}_{seq:03d}"
operation = {
'id': op_id,
'type': op_type,
'target_type': target_type,
'target_id': target_id,
'target_path': target_path,
'changes': {
'before': before_state,
'after': after_state,
'diff_summary': diff_summary or f"{op_type} {target_type}: {target_id}"
},
'performed_at': now.isoformat(),
'reversible': rollback_data is not None,
'rollback_data': rollback_data
}
task_session['operations'].append(operation)
save_task_session(task_session)
# Update workflow summary
session = load_current_session()
if session:
if op_type == 'CREATE':
if target_type == 'file':
session['summary']['files_created'] += 1
elif target_type == 'entity':
session['summary']['entities_created'] += 1
elif op_type == 'UPDATE':
if target_type == 'file':
session['summary']['files_updated'] += 1
elif target_type == 'entity':
session['summary']['entities_updated'] += 1
elif op_type == 'DELETE':
if target_type == 'file':
session['summary']['files_deleted'] += 1
elif target_type == 'entity':
session['summary']['entities_deleted'] += 1
save_current_session(session)
# Also log to operations log
log_to_file(operation, task_session)
# Also log to task-specific operations log
log_to_task_operations_log(task_session, operation)
# Update index operations count
index = load_index()
for v in index['versions']:
if v['version'] == task_session['workflow_version']:
v['operations_count'] = v.get('operations_count', 0) + 1
break
save_index(index)
return operation
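# Usage sketch (illustrative IDs and paths):
#   op = log_operation(task_session, 'CREATE', 'file', 'comp_header',
#                      target_path='src/components/Header.tsx',
#                      diff_summary='Created Header component')
#   # Appends to the task's session.yml, both operations logs, and the
#   # per-version operations_count in index.yml.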
def log_to_file(operation: dict, task_session: dict):
"""Append operation to global operations log."""
log_path = get_operations_log_path()
log_entry = (
f"[{operation['performed_at']}] "
f"v{task_session['workflow_version']} | "
f"{task_session['task_id']} | "
f"{operation['type']} {operation['target_type']}: {operation['target_id']}"
)
if operation['target_path']:
log_entry += f" ({operation['target_path']})"
log_entry += "\n"
with open(log_path, 'a') as f:
f.write(log_entry)
# ============================================================================
# Review Session Management
# ============================================================================
def create_review_session(task_session: dict, reviewer: str = 'reviewer') -> dict:
"""Create a review session for a task."""
now = datetime.now()
session_id = f"review_{task_session['task_id']}_{now.strftime('%Y%m%d_%H%M%S')}"
review = {
'session_id': session_id,
'task_session_id': task_session['session_id'],
'workflow_version': task_session['workflow_version'],
'reviewer': reviewer,
'started_at': now.isoformat(),
'completed_at': None,
'decision': None,
'checks': {
'file_exists': None,
'manifest_compliance': None,
'code_quality': None,
'lint': None,
'build': None,
'tests': None
},
'notes': '',
'issues_found': [],
'suggestions': []
}
task_session['review_session'] = review
save_task_session(task_session)
return review
def complete_review_session(
task_session: dict,
decision: str,
checks: dict,
notes: str = '',
issues: list = None,
suggestions: list = None
):
"""Complete a review session."""
now = datetime.now()
review = task_session['review_session']
review['completed_at'] = now.isoformat()
review['decision'] = decision
review['checks'].update(checks)
review['notes'] = notes
review['issues_found'] = issues or []
review['suggestions'] = suggestions or []
save_task_session(task_session)
# ============================================================================
# Snapshots
# ============================================================================
def take_snapshot(version: str, snapshot_type: str):
"""Take a snapshot of current state (before/after)."""
snapshot_dir = get_version_dir(version) / f'snapshot_{snapshot_type}'
snapshot_dir.mkdir(exist_ok=True)
# Snapshot manifest
if os.path.exists('project_manifest.json'):
shutil.copy('project_manifest.json', snapshot_dir / 'manifest.json')
# Snapshot tasks directory
if os.path.exists('tasks'):
tasks_snapshot = snapshot_dir / 'tasks'
if tasks_snapshot.exists():
shutil.rmtree(tasks_snapshot)
shutil.copytree('tasks', tasks_snapshot)
# ============================================================================
# History & Diff
# ============================================================================
def list_versions() -> list:
"""List all workflow versions."""
index = load_index()
return index['versions']
def get_version_details(version: str) -> Optional[dict]:
"""Get detailed info about a version."""
session_path = get_version_dir(version) / 'session.yml'
if not session_path.exists():
return None
return load_yaml(str(session_path))
def get_changelog(version: str) -> dict:
"""Generate changelog for a version."""
session = get_version_details(version)
if not session:
return None
changelog = {
'version': version,
'feature': session['feature'],
'status': session['status'],
'started_at': session['started_at'],
'completed_at': session['completed_at'],
'operations': {
'created': [],
'updated': [],
'deleted': []
},
'summary': session['summary']
}
# Collect operations from all task sessions
tasks_dir = get_version_dir(version) / 'task_sessions'
if tasks_dir.exists():
        # Support both old flat files and new per-task directories
        for task_file in list(tasks_dir.glob('*.yml')) + list(tasks_dir.glob('*/session.yml')):
            task = load_yaml(str(task_file))
for op in task.get('operations', []):
entry = {
'type': op['target_type'],
'id': op['target_id'],
'path': op['target_path'],
'task': task['task_id'],
'agent': task['agent']
}
if op['type'] == 'CREATE':
changelog['operations']['created'].append(entry)
elif op['type'] == 'UPDATE':
changelog['operations']['updated'].append(entry)
elif op['type'] == 'DELETE':
changelog['operations']['deleted'].append(entry)
return changelog
def diff_versions(version1: str, version2: str) -> dict:
"""Compare two versions."""
v1 = get_version_details(version1)
v2 = get_version_details(version2)
if not v1 or not v2:
return None
return {
'from_version': version1,
'to_version': version2,
'from_feature': v1['feature'],
'to_feature': v2['feature'],
'summary_diff': {
'entities_created': v2['summary']['entities_created'] - v1['summary']['entities_created'],
'entities_updated': v2['summary']['entities_updated'] - v1['summary']['entities_updated'],
'files_created': v2['summary']['files_created'] - v1['summary']['files_created'],
'files_updated': v2['summary']['files_updated'] - v1['summary']['files_updated']
}
}
# ============================================================================
# Display Functions
# ============================================================================
def show_history():
"""Display version history."""
versions = list_versions()
    print()
    print("┌" + "─" * 70 + "┐")
    print("│" + "WORKFLOW VERSION HISTORY".center(70) + "│")
    print("├" + "─" * 70 + "┤")
    if not versions:
        print("│" + " No workflow versions found.".ljust(70) + "│")
    else:
        for v in versions:
            status_icon = "✅" if v['status'] == 'completed' else "🔄" if v['status'] == 'in_progress' else "⏳"
            line1 = f" {status_icon} {v['version']}: {v['feature'][:45]}"
            print("│" + line1.ljust(70) + "│")
            line2 = f"    Started: {v['started_at'][:19]} | Tasks: {v['tasks_count']} | Ops: {v.get('operations_count', 0)}"
            print("│" + line2.ljust(70) + "│")
            print("├" + "─" * 70 + "┤")
    print("└" + "─" * 70 + "┘")
def show_changelog(version: str):
"""Display changelog for a version."""
changelog = get_changelog(version)
if not changelog:
print(f"Version {version} not found.")
return
    print()
    print("┌" + "─" * 70 + "┐")
    print("│" + f"CHANGELOG: {version}".center(70) + "│")
    print("├" + "─" * 70 + "┤")
    print("│" + f" Feature: {changelog['feature'][:55]}".ljust(70) + "│")
    print("│" + f" Status:  {changelog['status']}".ljust(70) + "│")
    print("├" + "─" * 70 + "┤")
    ops = changelog['operations']
    print("│" + " CREATED".ljust(70) + "│")
    for item in ops['created']:
        print("│" + f"   + [{item['type']}] {item['id']}".ljust(70) + "│")
        if item['path']:
            print("│" + f"       {item['path']}".ljust(70) + "│")
    print("│" + " UPDATED".ljust(70) + "│")
    for item in ops['updated']:
        print("│" + f"   ~ [{item['type']}] {item['id']}".ljust(70) + "│")
    print("│" + " DELETED".ljust(70) + "│")
    for item in ops['deleted']:
        print("│" + f"   - [{item['type']}] {item['id']}".ljust(70) + "│")
    print("├" + "─" * 70 + "┤")
    s = changelog['summary']
    print("│" + " SUMMARY".ljust(70) + "│")
    print("│" + f"   Entities: +{s['entities_created']} ~{s['entities_updated']} -{s['entities_deleted']}".ljust(70) + "│")
    print("│" + f"   Files:    +{s['files_created']} ~{s['files_updated']} -{s['files_deleted']}".ljust(70) + "│")
    print("└" + "─" * 70 + "┘")
def show_current():
"""Show current active workflow."""
session = load_current_session()
if not session:
print("No active workflow.")
print("Start one with: /workflow:spawn 'feature name'")
return
    print()
    print("┌" + "─" * 70 + "┐")
    print("│" + "CURRENT WORKFLOW SESSION".center(70) + "│")
    print("├" + "─" * 70 + "┤")
    print("│" + f" Version: {session['version']}".ljust(70) + "│")
    print("│" + f" Feature: {session['feature'][:55]}".ljust(70) + "│")
    print("│" + f" Phase:   {session['current_phase']}".ljust(70) + "│")
    print("│" + f" Status:  {session['status']}".ljust(70) + "│")
    print("├" + "─" * 70 + "┤")
    print("│" + " APPROVALS".ljust(70) + "│")
    d = session['approvals']['design']
    i = session['approvals']['implementation']
    d_icon = "✅" if d['status'] == 'approved' else "❌" if d['status'] == 'rejected' else "⏳"
    i_icon = "✅" if i['status'] == 'approved' else "❌" if i['status'] == 'rejected' else "⏳"
    print("│" + f"   {d_icon} Design:         {d['status']}".ljust(70) + "│")
    print("│" + f"   {i_icon} Implementation: {i['status']}".ljust(70) + "│")
    print("├" + "─" * 70 + "┤")
    s = session['summary']
    print("│" + " PROGRESS".ljust(70) + "│")
    print("│" + f"   Tasks:    {s['tasks_completed']}/{s['total_tasks']} completed".ljust(70) + "│")
    print("│" + f"   Entities: +{s['entities_created']} ~{s['entities_updated']} -{s['entities_deleted']}".ljust(70) + "│")
    print("│" + f"   Files:    +{s['files_created']} ~{s['files_updated']} -{s['files_deleted']}".ljust(70) + "│")
    print("└" + "─" * 70 + "┘")
# ============================================================================
# CLI Interface
# ============================================================================
def main():
parser = argparse.ArgumentParser(description="Workflow versioning system")
subparsers = parser.add_subparsers(dest='command', help='Commands')
# create command
create_parser = subparsers.add_parser('create', help='Create new workflow version')
create_parser.add_argument('feature', help='Feature description')
create_parser.add_argument('--parent', help='Parent version (for fixes)')
# current command
subparsers.add_parser('current', help='Show current workflow')
# history command
subparsers.add_parser('history', help='Show version history')
# changelog command
changelog_parser = subparsers.add_parser('changelog', help='Show version changelog')
changelog_parser.add_argument('version', help='Version to show')
# diff command
diff_parser = subparsers.add_parser('diff', help='Compare two versions')
diff_parser.add_argument('version1', help='First version')
diff_parser.add_argument('version2', help='Second version')
# task-start command
task_start = subparsers.add_parser('task-start', help='Start a task session')
task_start.add_argument('task_id', help='Task ID')
task_start.add_argument('--type', default='create', help='Task type')
task_start.add_argument('--agent', required=True, help='Agent performing task')
# task-complete command
task_complete = subparsers.add_parser('task-complete', help='Complete a task session')
task_complete.add_argument('task_id', help='Task ID')
task_complete.add_argument('--status', default='completed', help='Final status')
# log-op command
log_op = subparsers.add_parser('log-op', help='Log an operation')
log_op.add_argument('task_id', help='Task ID')
log_op.add_argument('op_type', choices=['CREATE', 'UPDATE', 'DELETE'])
log_op.add_argument('target_type', choices=['file', 'entity', 'task', 'manifest'])
log_op.add_argument('target_id', help='Target ID')
log_op.add_argument('--path', help='File path if applicable')
log_op.add_argument('--summary', help='Change summary')
# complete command
subparsers.add_parser('complete', help='Complete current workflow')
# update-phase command
phase_parser = subparsers.add_parser('update-phase', help='Update workflow phase')
phase_parser.add_argument('phase', help='New phase')
# tasks-dir command
tasks_dir_parser = subparsers.add_parser('tasks-dir', help='Get tasks directory for current or specific version')
tasks_dir_parser.add_argument('--version', help='Specific version (defaults to current)')
args = parser.parse_args()
if args.command == 'create':
session = create_workflow_session(args.feature, args.parent)
print(f"Created workflow version: {session['version']}")
print(f"Feature: {args.feature}")
print(f"Session ID: {session['session_id']}")
elif args.command == 'current':
show_current()
elif args.command == 'history':
show_history()
elif args.command == 'changelog':
show_changelog(args.version)
elif args.command == 'diff':
result = diff_versions(args.version1, args.version2)
if result:
print(json.dumps(result, indent=2))
else:
print("Could not compare versions")
elif args.command == 'task-start':
session = load_current_session()
if not session:
print("Error: No active workflow")
sys.exit(1)
task = create_task_session(session, args.task_id, args.type, args.agent)
print(f"Started task session: {task['session_id']}")
elif args.command == 'task-complete':
session = load_current_session()
if not session:
print("Error: No active workflow")
sys.exit(1)
task = load_task_session(session['version'], args.task_id)
if task:
complete_task_session(task, args.status)
print(f"Completed task: {args.task_id}")
else:
print(f"Task session not found: {args.task_id}")
elif args.command == 'log-op':
session = load_current_session()
if not session:
print("Error: No active workflow")
sys.exit(1)
task = load_task_session(session['version'], args.task_id)
if task:
op = log_operation(
task,
args.op_type,
args.target_type,
args.target_id,
target_path=args.path,
diff_summary=args.summary
)
print(f"Logged operation: {op['id']}")
else:
print(f"Task session not found: {args.task_id}")
elif args.command == 'complete':
session = load_current_session()
if not session:
print("Error: No active workflow")
sys.exit(1)
complete_workflow_session(session)
print(f"Completed workflow: {session['version']}")
elif args.command == 'update-phase':
session = load_current_session()
if not session:
print("Error: No active workflow")
sys.exit(1)
session['current_phase'] = args.phase
save_current_session(session)
print(f"Updated phase to: {args.phase}")
elif args.command == 'tasks-dir':
if args.version:
# Specific version requested
tasks_dir = get_version_tasks_dir(args.version)
tasks_dir.mkdir(parents=True, exist_ok=True)
print(str(tasks_dir))
else:
# Use current version
tasks_dir = get_current_tasks_dir()
if tasks_dir:
print(str(tasks_dir))
else:
print("Error: No active workflow")
sys.exit(1)
else:
parser.print_help()
if __name__ == "__main__":
main()


@@ -0,0 +1,343 @@
#!/usr/bin/env python3
"""
Design visualization for guardrail workflow.
Generates ASCII art visualization of pages, components, and API endpoints
from the project manifest.
Usage:
python3 visualize_design.py --manifest project_manifest.json
"""
import argparse
import json
import os
import sys
from pathlib import Path
def load_manifest(manifest_path: str) -> dict | None:
"""Load manifest if it exists."""
if not os.path.exists(manifest_path):
return None
try:
with open(manifest_path) as f:
return json.load(f)
except (json.JSONDecodeError, IOError):
return None
def get_status_icon(status: str) -> str:
"""Get icon for entity status."""
    icons = {
        'PENDING': '⏳',
        'APPROVED': '✅',
        'IMPLEMENTED': '🟢',
        'IN_PROGRESS': '🔄',
        'REJECTED': '❌',
    }
    return icons.get(status, '❓')
def visualize_page(page: dict, components: list, indent: str = "") -> list:
"""Generate ASCII visualization for a page."""
lines = []
name = page.get('name', 'Unknown')
status = page.get('status', 'PENDING')
file_path = page.get('file_path', '')
description = page.get('description', '')
icon = get_status_icon(status)
# Page header
    lines.append(f"{indent}┌{'─' * 60}┐")
    lines.append(f"{indent}│ {icon} PAGE: {name:<50}│")
    lines.append(f"{indent}│{' ' * 3}Path: {file_path:<48}│")
    if description:
        desc_short = description[:45] + '...' if len(description) > 45 else description
        lines.append(f"{indent}│{' ' * 3}Desc: {desc_short:<48}│")
    lines.append(f"{indent}├{'─' * 60}┤")
    # Find components used by this page
    page_components = []
    page_id = page.get('id', '')
    for comp in components:
        deps = comp.get('dependencies', [])
        used_by = comp.get('used_by', [])
        if page_id in deps or page_id in used_by or page.get('name', '').lower() in str(comp).lower():
            page_components.append(comp)
    if page_components:
        lines.append(f"{indent}│ COMPONENTS:{'':<48}│")
        for comp in page_components:
            comp_name = comp.get('name', 'Unknown')
            comp_status = comp.get('status', 'PENDING')
            comp_icon = get_status_icon(comp_status)
            lines.append(f"{indent}│   {comp_icon} {comp_name:<53}│")
    else:
        lines.append(f"{indent}│ (No components defined yet){'':<31}│")
    lines.append(f"{indent}└{'─' * 60}┘")
return lines
def visualize_component_tree(components: list) -> list:
"""Generate ASCII tree of components."""
lines = []
if not components:
return [" (No components defined)"]
lines.append("┌─────────────────────────────────────────────────────────────┐")
lines.append("│ 🧩 COMPONENTS │")
lines.append("├─────────────────────────────────────────────────────────────┤")
for i, comp in enumerate(components):
name = comp.get('name', 'Unknown')
status = comp.get('status', 'PENDING')
file_path = comp.get('file_path', '')
icon = get_status_icon(status)
is_last = i == len(components) - 1
prefix = "└──" if is_last else "├──"
        lines.append(f"│ {prefix} {icon} {name:<50}│")
        lines.append(f"│ {'   ' if is_last else '│  '} {file_path:<50}│")
lines.append("└─────────────────────────────────────────────────────────────┘")
return lines
def visualize_api_endpoints(endpoints: list) -> list:
"""Generate ASCII visualization of API endpoints."""
lines = []
if not endpoints:
return []
lines.append("┌─────────────────────────────────────────────────────────────┐")
lines.append("│ 🔌 API ENDPOINTS │")
lines.append("├─────────────────────────────────────────────────────────────┤")
for endpoint in endpoints:
name = endpoint.get('name', 'Unknown')
method = endpoint.get('method', 'GET')
path = endpoint.get('path', endpoint.get('file_path', ''))
status = endpoint.get('status', 'PENDING')
icon = get_status_icon(status)
method_colors = {
'GET': '🟢',
'POST': '🟡',
'PUT': '🟠',
'PATCH': '🟠',
'DELETE': '🔴',
}
method_icon = method_colors.get(method.upper(), '')
        lines.append(f"│ {icon} {method_icon} {method.upper():<6} {name:<45}│")
        lines.append(f"│      Path: {path:<47}│")
lines.append("└─────────────────────────────────────────────────────────────┘")
return lines
def visualize_page_flow(pages: list) -> list:
"""Generate ASCII flow diagram of pages."""
lines = []
if not pages:
return []
lines.append("")
lines.append("📱 PAGE FLOW DIAGRAM")
    lines.append("─" * 65)
lines.append("")
# Simple flow visualization
for i, page in enumerate(pages):
name = page.get('name', 'Unknown')
status = page.get('status', 'PENDING')
icon = get_status_icon(status)
# Page box
box_width = max(len(name) + 4, 20)
        lines.append(f"┌{'─' * box_width}┐")
        lines.append(f"│ {icon} {name.center(box_width - 4)} │")
        lines.append(f"└{'─' * box_width}┘")
        # Arrow to next page (if not last)
        if i < len(pages) - 1:
            lines.append(f" {'│'.center(box_width + 4)}")
            lines.append(f" {'▼'.center(box_width + 4)}")
lines.append("")
return lines
def visualize_data_flow(manifest: dict) -> list:
"""Generate data flow visualization."""
lines = []
pages = manifest.get('entities', {}).get('pages', [])
components = manifest.get('entities', {}).get('components', [])
endpoints = manifest.get('entities', {}).get('api_endpoints', [])
if not any([pages, components, endpoints]):
return []
lines.append("")
lines.append("🔄 DATA FLOW ARCHITECTURE")
    lines.append("─" * 65)
lines.append("")
lines.append(" ┌─────────────────────────────────────────────────────────┐")
lines.append(" │ FRONTEND │")
lines.append(" │ ┌─────────┐ ┌─────────────┐ ┌───────────────┐ │")
lines.append(" │ │ Pages │───▶│ Components │───▶│ Hooks │ │")
page_count = len(pages)
comp_count = len(components)
lines.append(f" │ │ ({page_count:^3}) │ │ ({comp_count:^3}) │ │ (state) │ │")
lines.append(" │ └─────────┘ └─────────────┘ └───────┬───────┘ │")
lines.append(" │ │ │")
lines.append(" └────────────────────────────────────────────┼───────────┘")
    lines.append("                                              │")
    lines.append("                                              ▼")
lines.append(" ┌─────────────────────────────────────────────────────────┐")
lines.append(" │ BACKEND │")
lines.append(" │ ┌─────────────────────────────────────────────────┐ │")
lines.append(" │ │ API Endpoints │ │")
api_count = len(endpoints)
lines.append(f" │ │ ({api_count:^3}) │ │")
lines.append(" │ └──────────────────────────┬──────────────────────┘ │")
lines.append(" │ │ │")
lines.append(" │ ▼ │")
lines.append(" │ ┌─────────────────────────────────────────────────┐ │")
lines.append(" │ │ Database │ │")
lines.append(" │ └─────────────────────────────────────────────────┘ │")
lines.append(" └─────────────────────────────────────────────────────────┘")
return lines
def generate_full_visualization(manifest: dict) -> str:
"""Generate complete design visualization."""
lines = []
project_name = manifest.get('project', {}).get('name', 'Unknown Project')
entities = manifest.get('entities', {})
pages = entities.get('pages', [])
components = entities.get('components', [])
api_endpoints = entities.get('api_endpoints', [])
# Header
lines.append("")
lines.append("╔═══════════════════════════════════════════════════════════════╗")
lines.append("║ 📐 DESIGN VISUALIZATION ║")
    lines.append(f"║ Project: {project_name:<52} ║")
lines.append("╚═══════════════════════════════════════════════════════════════╝")
lines.append("")
# Summary counts
lines.append("📊 ENTITY SUMMARY")
    lines.append("─" * 65)
lines.append(f" Pages: {len(pages):>3} │ Components: {len(components):>3} │ API Endpoints: {len(api_endpoints):>3}")
lines.append("")
# Page Flow
if pages:
lines.extend(visualize_page_flow(pages))
lines.append("")
# Detailed Pages with Components
if pages:
lines.append("")
lines.append("📄 PAGE DETAILS")
        lines.append("─" * 65)
for page in pages:
lines.extend(visualize_page(page, components))
lines.append("")
# Component Tree
if components:
lines.append("")
lines.append("🧩 COMPONENT HIERARCHY")
lines.append("" * 65)
lines.extend(visualize_component_tree(components))
lines.append("")
# API Endpoints
if api_endpoints:
lines.append("")
lines.append("🔌 API LAYER")
lines.append("" * 65)
lines.extend(visualize_api_endpoints(api_endpoints))
lines.append("")
# Data Flow Architecture
lines.extend(visualize_data_flow(manifest))
lines.append("")
# Legend
lines.append("")
lines.append("📋 LEGEND")
lines.append("" * 65)
lines.append(" ⏳ PENDING - Designed, awaiting approval")
lines.append(" ✅ APPROVED - Approved, ready for implementation")
lines.append(" 🔄 IN_PROGRESS - Currently being implemented")
lines.append(" 🟢 IMPLEMENTED - Implementation complete")
lines.append("")
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Visualize design from manifest")
parser.add_argument("--manifest", required=True, help="Path to project_manifest.json")
parser.add_argument("--format", choices=['full', 'pages', 'components', 'api', 'flow'],
default='full', help="Visualization format")
args = parser.parse_args()
manifest = load_manifest(args.manifest)
if manifest is None:
print("❌ Error: Could not load manifest from", args.manifest)
return 1
entities = manifest.get('entities', {})
if not any([
entities.get('pages'),
entities.get('components'),
entities.get('api_endpoints')
]):
print("⚠️ No entities found in manifest. Design phase may not be complete.")
return 0
if args.format == 'full':
print(generate_full_visualization(manifest))
elif args.format == 'pages':
pages = entities.get('pages', [])
components = entities.get('components', [])
for page in pages:
print("\n".join(visualize_page(page, components)))
elif args.format == 'components':
components = entities.get('components', [])
print("\n".join(visualize_component_tree(components)))
elif args.format == 'api':
endpoints = entities.get('api_endpoints', [])
print("\n".join(visualize_api_endpoints(endpoints)))
elif args.format == 'flow':
pages = entities.get('pages', [])
print("\n".join(visualize_page_flow(pages)))
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@ -0,0 +1,651 @@
#!/usr/bin/env python3
"""
Implementation Visualizer for guardrail workflow.
Generates visual representation of implemented pages and components:
- Component tree structure
- Props and interfaces
- Page layouts
- API endpoints
- File statistics
Usage:
python3 visualize_implementation.py --manifest project_manifest.json
python3 visualize_implementation.py --tasks-dir .workflow/versions/v001/tasks
"""
import argparse
import json
import os
import re
import sys
from pathlib import Path
from dataclasses import dataclass, field
from typing import Optional
# Try to import yaml
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
@dataclass
class ComponentInfo:
"""Parsed component information."""
name: str
file_path: str
props: list[str] = field(default_factory=list)
imports: list[str] = field(default_factory=list)
exports: list[str] = field(default_factory=list)
hooks: list[str] = field(default_factory=list)
children: list[str] = field(default_factory=list)
lines: int = 0
has_types: bool = False
status: str = "IMPLEMENTED"
@dataclass
class PageInfo:
"""Parsed page information."""
name: str
file_path: str
route: str
components: list[str] = field(default_factory=list)
api_calls: list[str] = field(default_factory=list)
lines: int = 0
is_client: bool = False
is_server: bool = True
@dataclass
class APIEndpointInfo:
"""Parsed API endpoint information."""
name: str
file_path: str
route: str
methods: list[str] = field(default_factory=list)
has_auth: bool = False
has_validation: bool = False
lines: int = 0
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if HAS_YAML:
return yaml.safe_load(content) or {}
# Basic fallback
result = {}
for line in content.split('\n'):
if ':' in line and not line.startswith(' '):
key, _, value = line.partition(':')
result[key.strip()] = value.strip()
return result
def load_manifest(manifest_path: str) -> dict:
"""Load project manifest."""
if not os.path.exists(manifest_path):
return {}
try:
with open(manifest_path) as f:
return json.load(f)
except (json.JSONDecodeError, IOError):
return {}
def parse_typescript_file(file_path: str) -> dict:
"""Parse TypeScript/TSX file for component information."""
if not os.path.exists(file_path):
return {'exists': False}
try:
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
lines = content.split('\n')
except (IOError, UnicodeDecodeError):
return {'exists': False}
result = {
'exists': True,
'lines': len(lines),
'imports': [],
'exports': [],
'props': [],
'hooks': [],
'components_used': [],
'api_calls': [],
'is_client': "'use client'" in content or '"use client"' in content,
'has_types': 'interface ' in content or 'type ' in content,
'methods': [],
}
# Extract imports
import_pattern = r"import\s+(?:{[^}]+}|\w+)\s+from\s+['\"]([^'\"]+)['\"]"
for match in re.finditer(import_pattern, content):
result['imports'].append(match.group(1))
# Extract exports
export_patterns = [
r"export\s+(?:default\s+)?(?:function|const|class)\s+(\w+)",
r"export\s+{\s*([^}]+)\s*}",
]
for pattern in export_patterns:
for match in re.finditer(pattern, content):
exports = match.group(1).split(',')
result['exports'].extend([e.strip() for e in exports if e.strip()])
# Extract props interface
props_pattern = r"(?:interface|type)\s+(\w*Props\w*)\s*(?:=|{)"
for match in re.finditer(props_pattern, content):
result['props'].append(match.group(1))
# Extract React hooks
hooks_pattern = r"\b(use[A-Z]\w+)\s*\("
for match in re.finditer(hooks_pattern, content):
hook = match.group(1)
if hook not in result['hooks']:
result['hooks'].append(hook)
# Extract component usage (JSX)
component_pattern = r"<([A-Z]\w+)(?:\s|/|>)"
for match in re.finditer(component_pattern, content):
comp = match.group(1)
if comp not in result['components_used'] and comp not in ['React', 'Fragment']:
result['components_used'].append(comp)
# Extract API calls
api_patterns = [
r"fetch\s*\(\s*['\"`](/api/[^'\"`]+)['\"`]",
r"axios\.\w+\s*\(\s*['\"`](/api/[^'\"`]+)['\"`]",
]
for pattern in api_patterns:
for match in re.finditer(pattern, content):
result['api_calls'].append(match.group(1))
# Extract HTTP methods (for API routes)
method_pattern = r"export\s+(?:async\s+)?function\s+(GET|POST|PUT|DELETE|PATCH)"
for match in re.finditer(method_pattern, content):
result['methods'].append(match.group(1))
return result
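# Example: for a hypothetical file Button.tsx containing
#     'use client'
#     import { useState } from 'react'
#     interface ButtonProps { label: string }
#     export function Button({ label }: ButtonProps) {
#         const [pressed, setPressed] = useState(false)
#         ...
#     }
# the regexes above yield is_client=True, has_types=True, imports=['react'],
# exports=['Button'], props=['ButtonProps'], and hooks=['useState'].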
def get_route_from_path(file_path: str) -> str:
"""Convert file path to route."""
# Handle Next.js App Router
if '/app/' in file_path:
route = file_path.split('/app/')[-1]
route = re.sub(r'/page\.(tsx?|jsx?)$', '', route)
route = re.sub(r'/route\.(tsx?|jsx?)$', '', route)
route = '/' + route if route else '/'
# Handle dynamic routes
route = re.sub(r'\[(\w+)\]', r':\1', route)
return route
# Handle Pages Router
if '/pages/' in file_path:
route = file_path.split('/pages/')[-1]
route = re.sub(r'\.(tsx?|jsx?)$', '', route)
route = re.sub(r'/index$', '', route)
route = '/' + route if route else '/'
route = re.sub(r'\[(\w+)\]', r':\1', route)
return route
return file_path
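# Route mapping examples implied by the rules above:
#   src/app/tasks/[id]/page.tsx -> /tasks/:id
#   src/app/api/tasks/route.ts  -> /api/tasks
#   src/pages/posts/[slug].tsx  -> /posts/:slug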
def visualize_component(info: ComponentInfo, indent: str = "") -> list[str]:
"""Generate ASCII visualization for a component."""
lines = []
    status_icon = {
        'IMPLEMENTED': '🟢',
        'PENDING': '⏳',
        'IN_PROGRESS': '🔄',
        'ERROR': '❌',
    }.get(info.status, '❓')
    # Component header
    lines.append(f"{indent}┌{'─' * 60}┐")
    lines.append(f"{indent}│ {status_icon} COMPONENT: {info.name:<46}│")
    lines.append(f"{indent}│ 📁 {info.file_path:<52}│")
    lines.append(f"{indent}│ 📏 {info.lines} lines │"[:63] + "│")
    # Props
    if info.props:
        lines.append(f"{indent}├{'─' * 60}┤")
        lines.append(f"{indent}│ PROPS │")
        for prop in info.props[:3]:
            lines.append(f"{indent}│   • {prop:<54}│")
    # Hooks
    if info.hooks:
        lines.append(f"{indent}├{'─' * 60}┤")
        lines.append(f"{indent}│ HOOKS │")
        hooks_str = ', '.join(info.hooks[:5])
        if len(hooks_str) > 52:
            hooks_str = hooks_str[:49] + '...'
        lines.append(f"{indent}│   {hooks_str:<56}│")
    # Children components
    if info.children:
        lines.append(f"{indent}├{'─' * 60}┤")
        lines.append(f"{indent}│ USES COMPONENTS │")
        for child in info.children[:5]:
            lines.append(f"{indent}│   └── {child:<52}│")
    lines.append(f"{indent}└{'─' * 60}┘")
return lines
def visualize_page(info: PageInfo, indent: str = "") -> list[str]:
"""Generate ASCII visualization for a page."""
lines = []
client_icon = "🖥️" if info.is_client else "🌐"
# Page header
lines.append(f"{indent}{'' * 62}")
lines.append(f"{indent}{client_icon} PAGE: {info.name:<52}")
lines.append(f"{indent}║ Route: {info.route:<51}")
lines.append(f"{indent}║ File: {info.file_path:<51}")
lines.append(f"{indent}{'' * 62}")
# Components used
if info.components:
lines.append(f"{indent}║ COMPONENTS USED ║")
for comp in info.components[:6]:
lines.append(f"{indent}║ ├── {comp:<54}")
if len(info.components) > 6:
lines.append(f"{indent}║ └── ... and {len(info.components) - 6} more ║"[:65] + "")
else:
lines.append(f"{indent}║ (No child components detected) ║")
# API calls
if info.api_calls:
lines.append(f"{indent}{'' * 62}")
lines.append(f"{indent}║ API CALLS ║")
for api in info.api_calls[:4]:
api_short = api[:50] if len(api) <= 50 else api[:47] + '...'
lines.append(f"{indent}║ 🔌 {api_short:<55}")
lines.append(f"{indent}{'' * 62}")
return lines
def visualize_api_endpoint(info: APIEndpointInfo, indent: str = "") -> list[str]:
"""Generate ASCII visualization for an API endpoint."""
lines = []
method_colors = {
'GET': '🟢',
'POST': '🟡',
'PUT': '🟠',
'PATCH': '🟠',
'DELETE': '🔴',
}
    methods_str = ' '.join([f"{method_colors.get(m, '⚪')}{m}" for m in info.methods])
    lines.append(f"{indent}┌{'─' * 60}┐")
    lines.append(f"{indent}│ 🔌 API: {info.route:<50}│")
    lines.append(f"{indent}│ Methods: {methods_str:<47}"[:63] + "│")
    lines.append(f"{indent}│ File: {info.file_path:<50}│")
    features = []
    if info.has_auth:
        features.append("🔐 Auth")
    if info.has_validation:
        features.append("✓ Validation")
    if features:
        features_str = ' '.join(features)
        lines.append(f"{indent}│ Features: {features_str:<46}│")
    lines.append(f"{indent}└{'─' * 60}┘")
return lines
def generate_implementation_tree(components: list[ComponentInfo]) -> list[str]:
"""Generate a tree view of component hierarchy."""
lines = []
lines.append("")
lines.append("🌳 COMPONENT HIERARCHY")
lines.append("" * 65)
if not components:
lines.append(" (No components found)")
return lines
# Group by directory
by_dir: dict[str, list[ComponentInfo]] = {}
for comp in components:
dir_path = str(Path(comp.file_path).parent)
if dir_path not in by_dir:
by_dir[dir_path] = []
by_dir[dir_path].append(comp)
for dir_path, comps in sorted(by_dir.items()):
lines.append(f" 📂 {dir_path}/")
for i, comp in enumerate(comps):
is_last = i == len(comps) - 1
prefix = " └──" if is_last else " ├──"
status = "🟢" if comp.status == "IMPLEMENTED" else ""
lines.append(f" {prefix} {status} {comp.name}")
# Show props
if comp.props:
prop_prefix = " " if is_last else ""
for prop in comp.props[:2]:
lines.append(f"{prop_prefix} 📋 {prop}")
return lines
def generate_stats(
pages: list[PageInfo],
components: list[ComponentInfo],
endpoints: list[APIEndpointInfo]
) -> list[str]:
"""Generate implementation statistics."""
lines = []
total_lines = sum(p.lines for p in pages) + sum(c.lines for c in components) + sum(e.lines for e in endpoints)
client_pages = sum(1 for p in pages if p.is_client)
server_pages = len(pages) - client_pages
typed_components = sum(1 for c in components if c.has_types)
lines.append("")
lines.append("╔══════════════════════════════════════════════════════════════════╗")
lines.append("║ 📊 IMPLEMENTATION STATS ║")
lines.append("╠══════════════════════════════════════════════════════════════════╣")
lines.append(f"║ Pages: {len(pages):<5} │ Client: {client_pages:<3} │ Server: {server_pages:<3}")
lines.append(f"║ Components: {len(components):<5} │ Typed: {typed_components:<4}")
lines.append(f"║ API Endpoints: {len(endpoints):<5}")
lines.append(f"║ Total Lines: {total_lines:<5}")
lines.append("╠══════════════════════════════════════════════════════════════════╣")
# Hooks usage
all_hooks = []
for comp in components:
all_hooks.extend(comp.hooks)
hook_counts = {}
for hook in all_hooks:
hook_counts[hook] = hook_counts.get(hook, 0) + 1
if hook_counts:
lines.append("║ HOOKS USAGE ║")
for hook, count in sorted(hook_counts.items(), key=lambda x: -x[1])[:5]:
lines.append(f"{hook:<20} × {count:<3}"[:69] + "")
lines.append("╚══════════════════════════════════════════════════════════════════╝")
return lines
def generate_page_flow(pages: list[PageInfo]) -> list[str]:
"""Generate page flow visualization."""
lines = []
if not pages:
return lines
lines.append("")
lines.append("📱 PAGE STRUCTURE")
lines.append("" * 65)
# Sort by route
sorted_pages = sorted(pages, key=lambda p: p.route)
for i, page in enumerate(sorted_pages):
is_last = i == len(sorted_pages) - 1
icon = "🖥️" if page.is_client else "🌐"
# Page box
lines.append(f"{'' * 50}")
lines.append(f"{icon} {page.route:<47}")
lines.append(f"{page.name:<48}")
# Components count
comp_count = len(page.components)
api_count = len(page.api_calls)
lines.append(f" │ 🧩 {comp_count} components 🔌 {api_count} API calls │"[:56] + "")
lines.append(f"{'' * 50}")
if not is_last:
lines.append("")
lines.append("")
return lines
def visualize_from_manifest(manifest_path: str) -> str:
"""Generate full visualization from manifest."""
manifest = load_manifest(manifest_path)
if not manifest:
return "❌ Could not load manifest"
entities = manifest.get('entities', {})
project_name = manifest.get('project', {}).get('name', 'Unknown')
lines = []
# Header
lines.append("")
lines.append("╔═══════════════════════════════════════════════════════════════════╗")
lines.append("║ 🏗️ IMPLEMENTATION VISUALIZATION ║")
lines.append(f"║ Project: {project_name:<56}")
lines.append("╚═══════════════════════════════════════════════════════════════════╝")
pages: list[PageInfo] = []
components: list[ComponentInfo] = []
endpoints: list[APIEndpointInfo] = []
# Parse pages
for page_data in entities.get('pages', []):
file_path = page_data.get('file_path', '')
if file_path and os.path.exists(file_path):
parsed = parse_typescript_file(file_path)
page = PageInfo(
name=page_data.get('name', 'Unknown'),
file_path=file_path,
route=get_route_from_path(file_path),
components=parsed.get('components_used', []),
api_calls=parsed.get('api_calls', []),
lines=parsed.get('lines', 0),
is_client=parsed.get('is_client', False),
)
pages.append(page)
# Parse components
for comp_data in entities.get('components', []):
file_path = comp_data.get('file_path', '')
if file_path and os.path.exists(file_path):
parsed = parse_typescript_file(file_path)
comp = ComponentInfo(
name=comp_data.get('name', 'Unknown'),
file_path=file_path,
props=parsed.get('props', []),
imports=parsed.get('imports', []),
exports=parsed.get('exports', []),
hooks=parsed.get('hooks', []),
children=parsed.get('components_used', []),
lines=parsed.get('lines', 0),
has_types=parsed.get('has_types', False),
status=comp_data.get('status', 'IMPLEMENTED'),
)
components.append(comp)
# Parse API endpoints
for api_data in entities.get('api_endpoints', []):
file_path = api_data.get('file_path', '')
if file_path and os.path.exists(file_path):
parsed = parse_typescript_file(file_path)
endpoint = APIEndpointInfo(
name=api_data.get('name', 'Unknown'),
file_path=file_path,
route=get_route_from_path(file_path),
methods=parsed.get('methods', ['GET']),
lines=parsed.get('lines', 0),
)
endpoints.append(endpoint)
# Page flow
lines.extend(generate_page_flow(pages))
# Detailed pages
if pages:
lines.append("")
lines.append("📄 PAGE DETAILS")
lines.append("" * 65)
for page in pages:
lines.extend(visualize_page(page))
lines.append("")
# Component hierarchy
lines.extend(generate_implementation_tree(components))
# Detailed components
if components:
lines.append("")
lines.append("🧩 COMPONENT DETAILS")
lines.append("" * 65)
for comp in components[:10]: # Limit to 10
lines.extend(visualize_component(comp))
lines.append("")
if len(components) > 10:
lines.append(f" ... and {len(components) - 10} more components")
# API endpoints
if endpoints:
lines.append("")
lines.append("🔌 API ENDPOINTS")
lines.append("" * 65)
for endpoint in endpoints:
lines.extend(visualize_api_endpoint(endpoint))
lines.append("")
# Stats
lines.extend(generate_stats(pages, components, endpoints))
# Legend
lines.append("")
lines.append("📋 LEGEND")
lines.append("" * 65)
lines.append(" 🟢 Implemented ⏳ Pending 🔄 In Progress ❌ Error")
lines.append(" 🖥️ Client Component 🌐 Server Component")
lines.append(" 🟢 GET 🟡 POST 🟠 PUT/PATCH 🔴 DELETE")
lines.append("")
return "\n".join(lines)
def visualize_from_tasks(tasks_dir: str) -> str:
"""Generate visualization from task files."""
tasks_path = Path(tasks_dir)
if not tasks_path.exists():
return f"❌ Tasks directory not found: {tasks_dir}"
task_files = list(tasks_path.glob('task_*.yml'))
if not task_files:
return f"❌ No task files found in: {tasks_dir}"
lines = []
lines.append("")
lines.append("╔═══════════════════════════════════════════════════════════════════╗")
lines.append("║ 📋 TASK IMPLEMENTATION STATUS ║")
lines.append("╚═══════════════════════════════════════════════════════════════════╝")
lines.append("")
implemented_files = []
for task_file in sorted(task_files):
task = load_yaml(str(task_file))
task_id = task.get('id', task_file.stem)
status = task.get('status', 'unknown')
title = task.get('title', 'Unknown task')
file_paths = task.get('file_paths', [])
status_icon = {
            'completed': '✓',
            'approved': '✅',
            'pending': '⏳',
            'in_progress': '🔄',
            'blocked': '🚫',
        }.get(status, '❓')
lines.append(f" {status_icon} {task_id}")
lines.append(f" {title[:55]}")
for fp in file_paths:
if os.path.exists(fp):
lines.append(f" └── ✓ {fp}")
implemented_files.append(fp)
else:
lines.append(f" └── ✗ {fp} (missing)")
lines.append("")
# Parse and visualize implemented files
if implemented_files:
lines.append("" * 65)
lines.append("")
lines.append("🔍 IMPLEMENTED FILES ANALYSIS")
lines.append("")
for fp in implemented_files[:5]:
parsed = parse_typescript_file(fp)
if parsed.get('exists'):
name = Path(fp).stem
lines.append(f" 📁 {fp}")
lines.append(f" Lines: {parsed.get('lines', 0)}")
if parsed.get('exports'):
lines.append(f" Exports: {', '.join(parsed['exports'][:3])}")
if parsed.get('hooks'):
lines.append(f" Hooks: {', '.join(parsed['hooks'][:3])}")
if parsed.get('components_used'):
lines.append(f" Uses: {', '.join(parsed['components_used'][:3])}")
lines.append("")
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(description="Visualize implementation")
parser.add_argument('--manifest', help='Path to project_manifest.json')
parser.add_argument('--tasks-dir', help='Path to tasks directory')
parser.add_argument('--format', choices=['full', 'tree', 'stats', 'pages'],
default='full', help='Output format')
args = parser.parse_args()
if args.manifest:
output = visualize_from_manifest(args.manifest)
elif args.tasks_dir:
output = visualize_from_tasks(args.tasks_dir)
else:
# Auto-detect
if os.path.exists('project_manifest.json'):
output = visualize_from_manifest('project_manifest.json')
else:
output = "Usage: python3 visualize_implementation.py --manifest project_manifest.json"
print(output)
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@ -0,0 +1,835 @@
#!/usr/bin/env python3
"""Workflow state management for automated orchestration with approval gates."""
import argparse
import json
import os
import shutil
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
# Try to import yaml, fall back to basic parsing if not available
try:
import yaml
HAS_YAML = True
except ImportError:
HAS_YAML = False
# ============================================================================
# YAML Helpers
# ============================================================================
def load_yaml(filepath: str) -> dict:
"""Load YAML file."""
if not os.path.exists(filepath):
return {}
with open(filepath, 'r') as f:
content = f.read()
if not content.strip():
return {}
if HAS_YAML:
return yaml.safe_load(content) or {}
# Simple fallback parser
result = {}
current_key = None
current_list = None
for line in content.split('\n'):
line = line.rstrip()
if not line or line.startswith('#'):
continue
        if line.startswith('  - '):
if current_list is not None:
value = line[4:].strip()
# Handle quoted strings
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
current_list.append(value)
continue
if ':' in line and not line.startswith(' '):
key, _, value = line.partition(':')
key = key.strip()
value = value.strip()
if value == '[]':
result[key] = []
current_list = result[key]
elif value == '{}':
result[key] = {}
current_list = None
elif value == 'null' or value == '~':
result[key] = None
current_list = None
elif value == 'true':
result[key] = True
current_list = None
elif value == 'false':
result[key] = False
current_list = None
elif value.isdigit():
result[key] = int(value)
current_list = None
elif value:
# Handle quoted strings
if (value.startswith('"') and value.endswith('"')) or \
(value.startswith("'") and value.endswith("'")):
value = value[1:-1]
result[key] = value
current_list = None
else:
result[key] = []
current_list = result[key]
current_key = key
return result
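# The fallback parser above (used when PyYAML is unavailable) understands only
# flat top-level scalars and simple lists, e.g.:
#
#   current_phase: IMPLEMENTING
#   tasks:
#     - task_001
#     - task_002
#
# Indented `key: value` lines (nested mappings) are skipped entirely, so
# PyYAML is effectively required for the full state schema used below.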
def save_yaml(filepath: str, data: dict):
"""Save data to YAML file."""
os.makedirs(os.path.dirname(filepath), exist_ok=True)
if HAS_YAML:
with open(filepath, 'w') as f:
yaml.dump(data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
else:
# Simple YAML writer
def write_value(value, indent=0):
prefix = ' ' * indent
lines = []
if isinstance(value, dict):
for k, v in value.items():
if isinstance(v, (dict, list)) and v:
lines.append(f"{prefix}{k}:")
lines.extend(write_value(v, indent + 1))
elif isinstance(v, list):
lines.append(f"{prefix}{k}: []")
else:
lines.append(f"{prefix}{k}: {v}")
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
lines.append(f"{prefix}-")
for k, v in item.items():
lines.append(f"{prefix} {k}: {v}")
else:
lines.append(f"{prefix}- {item}")
return lines
lines = write_value(data)
with open(filepath, 'w') as f:
f.write('\n'.join(lines))
# ============================================================================
# Workflow State Management
# ============================================================================
PHASES = [
'INITIALIZING',
'DESIGNING',
'AWAITING_DESIGN_APPROVAL',
'DESIGN_APPROVED',
'DESIGN_REJECTED',
'IMPLEMENTING',
'REVIEWING',
'SECURITY_REVIEW', # New phase for security audit
'AWAITING_IMPL_APPROVAL',
'IMPL_APPROVED',
'IMPL_REJECTED',
'COMPLETING',
'COMPLETED',
'PAUSED',
'FAILED'
]
VALID_TRANSITIONS = {
'INITIALIZING': ['DESIGNING', 'FAILED'],
'DESIGNING': ['AWAITING_DESIGN_APPROVAL', 'FAILED'],
'AWAITING_DESIGN_APPROVAL': ['DESIGN_APPROVED', 'DESIGN_REJECTED', 'PAUSED'],
'DESIGN_APPROVED': ['IMPLEMENTING', 'FAILED'],
'DESIGN_REJECTED': ['DESIGNING'],
'IMPLEMENTING': ['REVIEWING', 'FAILED', 'PAUSED'],
'REVIEWING': ['SECURITY_REVIEW', 'IMPLEMENTING', 'FAILED'], # Must pass through security
'SECURITY_REVIEW': ['AWAITING_IMPL_APPROVAL', 'IMPLEMENTING', 'FAILED'], # Can go back to fix
'AWAITING_IMPL_APPROVAL': ['IMPL_APPROVED', 'IMPL_REJECTED', 'PAUSED'],
'IMPL_APPROVED': ['COMPLETING', 'FAILED'],
'IMPL_REJECTED': ['IMPLEMENTING'],
'COMPLETING': ['COMPLETED', 'FAILED'],
'COMPLETED': [],
'PAUSED': PHASES, # Can resume to any phase
'FAILED': ['INITIALIZING', 'DESIGNING', 'IMPLEMENTING'] # Can retry
}
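# Happy path through the state machine, for reference:
#   INITIALIZING -> DESIGNING -> AWAITING_DESIGN_APPROVAL -> DESIGN_APPROVED
#   -> IMPLEMENTING -> REVIEWING -> SECURITY_REVIEW -> AWAITING_IMPL_APPROVAL
#   -> IMPL_APPROVED -> COMPLETING -> COMPLETED
# Rejection loops: DESIGN_REJECTED -> DESIGNING, IMPL_REJECTED -> IMPLEMENTING.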
def get_workflow_dir() -> Path:
"""Get the .workflow directory path."""
return Path('.workflow')
def get_current_state_path() -> Path:
"""Get the current workflow state file path."""
return get_workflow_dir() / 'current.yml'
def get_history_dir() -> Path:
"""Get the workflow history directory."""
return get_workflow_dir() / 'history'
def create_workflow(feature: str) -> dict:
"""Create a new workflow state."""
now = datetime.now()
workflow_id = f"workflow_{now.strftime('%Y%m%d_%H%M%S')}"
state = {
'id': workflow_id,
'feature': feature,
'current_phase': 'INITIALIZING',
'gates': {
'design_approval': {
'status': 'pending',
'approved_at': None,
'approved_by': None,
'rejection_reason': None,
'revision_count': 0
},
'implementation_approval': {
'status': 'pending',
'approved_at': None,
'approved_by': None,
'rejection_reason': None,
'revision_count': 0
}
},
'progress': {
'entities_designed': 0,
'tasks_created': 0,
'tasks_implemented': 0,
'tasks_reviewed': 0,
'tasks_approved': 0,
'tasks_completed': 0
},
'tasks': {
'pending': [],
'in_progress': [],
'review': [],
'approved': [],
'completed': [],
'blocked': []
},
'started_at': now.isoformat(),
'updated_at': now.isoformat(),
'completed_at': None,
'last_error': None,
'resume_point': {
'phase': 'INITIALIZING',
'task_id': None,
'action': 'start_workflow'
},
'checkpoints': [] # List of checkpoint snapshots for recovery
}
# Ensure directory exists
get_workflow_dir().mkdir(exist_ok=True)
get_history_dir().mkdir(exist_ok=True)
# Save state
save_yaml(str(get_current_state_path()), state)
return state
def load_current_workflow() -> Optional[dict]:
"""Load the current workflow state from the active version."""
state_path = get_current_state_path()
if not state_path.exists():
return None
# Read current.yml to get active version
current = load_yaml(str(state_path))
active_version = current.get('active_version')
if not active_version:
return None
# Load the version's session.yml
version_session_path = get_workflow_dir() / 'versions' / active_version / 'session.yml'
if not version_session_path.exists():
return None
session = load_yaml(str(version_session_path))
current_phase = session.get('current_phase', 'INITIALIZING')
# Convert session format to state format expected by show_status
return {
'id': session.get('session_id', active_version),
'feature': session.get('feature', 'Unknown'),
'current_phase': current_phase,
'gates': {
'design_approval': session.get('approvals', {}).get('design', {'status': 'pending'}),
'implementation_approval': session.get('approvals', {}).get('implementation', {'status': 'pending'})
},
'progress': {
'entities_designed': session.get('summary', {}).get('entities_created', 0),
'tasks_created': session.get('summary', {}).get('total_tasks', 0),
'tasks_implemented': session.get('summary', {}).get('tasks_completed', 0),
'tasks_reviewed': 0,
'tasks_completed': session.get('summary', {}).get('tasks_completed', 0)
},
'tasks': {
'pending': [],
'in_progress': [],
'review': [],
'approved': [],
'completed': session.get('task_sessions', []),
'blocked': []
},
'version': active_version,
'status': session.get('status', 'unknown'),
'last_error': None,
'started_at': session.get('started_at', ''),
'updated_at': session.get('updated_at', ''),
'completed_at': session.get('completed_at'),
'resume_point': {
'phase': current_phase,
'task_id': None,
'action': 'continue_workflow'
}
}
def save_workflow(state: dict):
"""Save workflow state to the version's session.yml file."""
# Get active version
current_path = get_current_state_path()
if not current_path.exists():
print("Error: No current.yml found")
return
current = load_yaml(str(current_path))
active_version = current.get('active_version')
if not active_version:
print("Error: No active version set")
return
# Get the version's session.yml path
version_session_path = get_workflow_dir() / 'versions' / active_version / 'session.yml'
if not version_session_path.exists():
print(f"Error: Session file not found: {version_session_path}")
return
# Load existing session data
session = load_yaml(str(version_session_path))
# Create backup
backup_path = version_session_path.with_suffix('.yml.bak')
shutil.copy(version_session_path, backup_path)
# Update session with state changes
session['current_phase'] = state['current_phase']
session['updated_at'] = datetime.now().isoformat()
if state.get('completed_at'):
session['completed_at'] = state['completed_at']
session['status'] = 'completed'
# Update approvals
if 'gates' in state:
if 'approvals' not in session:
session['approvals'] = {}
if state['gates'].get('design_approval', {}).get('status') == 'approved':
session['approvals']['design'] = state['gates']['design_approval']
if state['gates'].get('implementation_approval', {}).get('status') == 'approved':
session['approvals']['implementation'] = state['gates']['implementation_approval']
save_yaml(str(version_session_path), session)
def transition_phase(state: dict, new_phase: str, error: str = None) -> bool:
"""Transition workflow to a new phase."""
current = state['current_phase']
if new_phase not in PHASES:
print(f"Error: Invalid phase '{new_phase}'")
return False
if new_phase not in VALID_TRANSITIONS.get(current, []):
print(f"Error: Cannot transition from '{current}' to '{new_phase}'")
print(f"Valid transitions: {VALID_TRANSITIONS.get(current, [])}")
return False
state['current_phase'] = new_phase
state['resume_point']['phase'] = new_phase
if new_phase == 'FAILED' and error:
state['last_error'] = error
if new_phase == 'COMPLETED':
state['completed_at'] = datetime.now().isoformat()
# Set appropriate resume action
resume_actions = {
'INITIALIZING': 'start_workflow',
'DESIGNING': 'continue_design',
'AWAITING_DESIGN_APPROVAL': 'await_user_approval',
'DESIGN_APPROVED': 'start_implementation',
'DESIGN_REJECTED': 'revise_design',
'IMPLEMENTING': 'continue_implementation',
'REVIEWING': 'continue_review',
'SECURITY_REVIEW': 'run_security_audit',
'AWAITING_IMPL_APPROVAL': 'await_user_approval',
'IMPL_APPROVED': 'start_completion',
'IMPL_REJECTED': 'fix_implementation',
'COMPLETING': 'continue_completion',
'COMPLETED': 'workflow_done',
'PAUSED': 'resume_workflow',
'FAILED': 'retry_or_abort'
}
state['resume_point']['action'] = resume_actions.get(new_phase, 'unknown')
save_workflow(state)
return True
def approve_gate(state: dict, gate: str, approver: str = 'user') -> bool:
"""Approve a gate."""
if gate not in ['design_approval', 'implementation_approval']:
print(f"Error: Invalid gate '{gate}'")
return False
state['gates'][gate]['status'] = 'approved'
state['gates'][gate]['approved_at'] = datetime.now().isoformat()
state['gates'][gate]['approved_by'] = approver
# Transition to next phase
if gate == 'design_approval':
transition_phase(state, 'DESIGN_APPROVED')
else:
transition_phase(state, 'IMPL_APPROVED')
return True
def reject_gate(state: dict, gate: str, reason: str) -> bool:
"""Reject a gate."""
if gate not in ['design_approval', 'implementation_approval']:
print(f"Error: Invalid gate '{gate}'")
return False
state['gates'][gate]['status'] = 'rejected'
state['gates'][gate]['rejection_reason'] = reason
state['gates'][gate]['revision_count'] += 1
# Transition to rejection phase
if gate == 'design_approval':
transition_phase(state, 'DESIGN_REJECTED')
else:
transition_phase(state, 'IMPL_REJECTED')
return True
def update_progress(state: dict, **kwargs):
"""Update progress counters."""
for key, value in kwargs.items():
if key in state['progress']:
state['progress'][key] = value
save_workflow(state)
def update_task_status(state: dict, task_id: str, new_status: str):
"""Update task status in workflow state."""
# Remove from all status lists
for status in state['tasks']:
if task_id in state['tasks'][status]:
state['tasks'][status].remove(task_id)
# Add to new status list
if new_status in state['tasks']:
state['tasks'][new_status].append(task_id)
# Update resume point if task is in progress
if new_status == 'in_progress':
state['resume_point']['task_id'] = task_id
save_workflow(state)
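# For example, update_task_status(state, 'task_001', 'in_progress') removes
# 'task_001' from every other status list, appends it to tasks['in_progress'],
# and records it as the resume point before persisting the state.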
def save_checkpoint(state: dict, description: str, data: dict = None) -> dict:
"""Save a checkpoint for recovery during long operations.
Args:
state: Current workflow state
description: Human-readable description of checkpoint
data: Optional additional data to store
Returns:
The checkpoint object that was created
"""
checkpoint = {
'id': f"checkpoint_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
'timestamp': datetime.now().isoformat(),
'phase': state['current_phase'],
'description': description,
'resume_point': state['resume_point'].copy(),
'progress': state['progress'].copy(),
'data': data or {}
}
# Keep only last 10 checkpoints to avoid bloat
if 'checkpoints' not in state:
state['checkpoints'] = []
state['checkpoints'].append(checkpoint)
if len(state['checkpoints']) > 10:
state['checkpoints'] = state['checkpoints'][-10:]
save_workflow(state)
return checkpoint
def get_latest_checkpoint(state: dict) -> Optional[dict]:
"""Get the most recent checkpoint.
Returns:
Latest checkpoint or None if no checkpoints exist
"""
checkpoints = state.get('checkpoints', [])
return checkpoints[-1] if checkpoints else None
def restore_from_checkpoint(state: dict, checkpoint_id: str = None) -> bool:
"""Restore workflow state from a checkpoint.
Args:
state: Current workflow state
checkpoint_id: Optional specific checkpoint ID, defaults to latest
Returns:
True if restoration was successful
"""
checkpoints = state.get('checkpoints', [])
if not checkpoints:
print("Error: No checkpoints available")
return False
# Find checkpoint
if checkpoint_id:
checkpoint = next((c for c in checkpoints if c['id'] == checkpoint_id), None)
if not checkpoint:
print(f"Error: Checkpoint '{checkpoint_id}' not found")
return False
else:
checkpoint = checkpoints[-1]
# Restore state from checkpoint
state['resume_point'] = checkpoint['resume_point'].copy()
state['progress'] = checkpoint['progress'].copy()
state['current_phase'] = checkpoint['phase']
state['last_error'] = None # Clear any error since we're recovering
save_workflow(state)
print(f"Restored from checkpoint: {checkpoint['description']}")
return True
def list_checkpoints(state: dict) -> list:
"""List all available checkpoints.
Returns:
List of checkpoint summaries
"""
return [
{
'id': c['id'],
'timestamp': c['timestamp'],
'phase': c['phase'],
'description': c['description']
}
for c in state.get('checkpoints', [])
]
def clear_checkpoints(state: dict):
"""Clear all checkpoints (typically after successful completion)."""
state['checkpoints'] = []
save_workflow(state)
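# Sketch of the intended checkpoint lifecycle (assuming an active workflow):
#   state = load_current_workflow()
#   save_checkpoint(state, "before implementing task_001")
#   ... long-running operation ...
#   restore_from_checkpoint(state)  # on failure, rewind to the last snapshot
#   clear_checkpoints(state)        # after successful completion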
def archive_workflow(state: dict, suffix: str = ''):
"""Archive completed/aborted workflow."""
history_dir = get_history_dir()
history_dir.mkdir(exist_ok=True)
filename = f"{state['id']}{suffix}.yml"
archive_path = history_dir / filename
save_yaml(str(archive_path), state)
# Remove current state
current_path = get_current_state_path()
if current_path.exists():
current_path.unlink()
def show_status(state: dict):
"""Display workflow status."""
    print()
    print("╔" + "═" * 58 + "╗")
    print("║" + "WORKFLOW STATUS".center(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + f" ID: {state['id']}".ljust(58) + "║")
    print("║" + f" Feature: {state['feature'][:45]}".ljust(58) + "║")
    print("║" + f" Phase: {state['current_phase']}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + " APPROVAL GATES".ljust(58) + "║")
    design_gate = state['gates']['design_approval']
    impl_gate = state['gates']['implementation_approval']
    design_icon = "✅" if design_gate['status'] == 'approved' else "❌" if design_gate['status'] == 'rejected' else "⏳"
    impl_icon = "✅" if impl_gate['status'] == 'approved' else "❌" if impl_gate['status'] == 'rejected' else "⏳"
    print("║" + f" {design_icon} Design: {design_gate['status']}".ljust(58) + "║")
    print("║" + f" {impl_icon} Implementation: {impl_gate['status']}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + " PROGRESS".ljust(58) + "║")
    p = state['progress']
    print("║" + f" Entities Designed: {p['entities_designed']}".ljust(58) + "║")
    print("║" + f" Tasks Created: {p['tasks_created']}".ljust(58) + "║")
    print("║" + f" Tasks Implemented: {p['tasks_implemented']}".ljust(58) + "║")
    print("║" + f" Tasks Reviewed: {p['tasks_reviewed']}".ljust(58) + "║")
    print("║" + f" Tasks Completed: {p['tasks_completed']}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + " TASK BREAKDOWN".ljust(58) + "║")
    t = state['tasks']
    print("║" + f" ⏳ Pending: {len(t['pending'])}".ljust(58) + "║")
    print("║" + f" 🔄 In Progress: {len(t['in_progress'])}".ljust(58) + "║")
    print("║" + f" 🔍 Review: {len(t['review'])}".ljust(58) + "║")
    print("║" + f" ✅ Approved: {len(t['approved'])}".ljust(58) + "║")
    print("║" + f" ✓ Completed: {len(t['completed'])}".ljust(58) + "║")
    print("║" + f" 🚫 Blocked: {len(t['blocked'])}".ljust(58) + "║")
    if state['last_error']:
        print("╠" + "═" * 58 + "╣")
        print("║" + " ⚠️ LAST ERROR".ljust(58) + "║")
        print("║" + f" {state['last_error'][:52]}".ljust(58) + "║")
    print("╠" + "═" * 58 + "╣")
    print("║" + " TIMESTAMPS".ljust(58) + "║")
    print("║" + f" Started: {state['started_at'][:19]}".ljust(58) + "║")
    print("║" + f" Updated: {state['updated_at'][:19]}".ljust(58) + "║")
    if state['completed_at']:
        print("║" + f" Completed: {state['completed_at'][:19]}".ljust(58) + "║")
    print("╚" + "═" * 58 + "╝")
# ============================================================================
# CLI Interface
# ============================================================================
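# Example invocations (assuming the script is saved as workflow_state.py):
#   python3 workflow_state.py create "Add user authentication"
#   python3 workflow_state.py status
#   python3 workflow_state.py transition DESIGNING
#   python3 workflow_state.py approve design --approver alice
#   python3 workflow_state.py checkpoint save -d "before implementation"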
def main():
parser = argparse.ArgumentParser(description="Workflow state management")
subparsers = parser.add_subparsers(dest='command', help='Commands')
# create command
create_parser = subparsers.add_parser('create', help='Create new workflow')
create_parser.add_argument('feature', help='Feature to implement')
# status command
subparsers.add_parser('status', help='Show workflow status')
# transition command
trans_parser = subparsers.add_parser('transition', help='Transition to new phase')
trans_parser.add_argument('phase', choices=PHASES, help='Target phase')
trans_parser.add_argument('--error', help='Error message (for FAILED phase)')
# approve command
approve_parser = subparsers.add_parser('approve', help='Approve a gate')
approve_parser.add_argument('gate', choices=['design', 'implementation'], help='Gate to approve')
approve_parser.add_argument('--approver', default='user', help='Approver name')
# reject command
reject_parser = subparsers.add_parser('reject', help='Reject a gate')
reject_parser.add_argument('gate', choices=['design', 'implementation'], help='Gate to reject')
reject_parser.add_argument('reason', help='Rejection reason')
# progress command
progress_parser = subparsers.add_parser('progress', help='Update progress')
progress_parser.add_argument('--entities', type=int, help='Entities designed')
progress_parser.add_argument('--tasks-created', type=int, help='Tasks created')
progress_parser.add_argument('--tasks-impl', type=int, help='Tasks implemented')
progress_parser.add_argument('--tasks-reviewed', type=int, help='Tasks reviewed')
progress_parser.add_argument('--tasks-completed', type=int, help='Tasks completed')
# task command
task_parser = subparsers.add_parser('task', help='Update task status')
task_parser.add_argument('task_id', help='Task ID')
task_parser.add_argument('status', choices=['pending', 'in_progress', 'review', 'approved', 'completed', 'blocked'])
# archive command
archive_parser = subparsers.add_parser('archive', help='Archive workflow')
archive_parser.add_argument('--suffix', default='', help='Filename suffix (e.g., _aborted)')
# exists command
subparsers.add_parser('exists', help='Check if workflow exists')
# checkpoint command
checkpoint_parser = subparsers.add_parser('checkpoint', help='Manage checkpoints')
checkpoint_parser.add_argument('action', choices=['save', 'list', 'restore', 'clear'],
help='Checkpoint action')
checkpoint_parser.add_argument('--description', '-d', help='Checkpoint description (for save)')
checkpoint_parser.add_argument('--id', help='Checkpoint ID (for restore)')
checkpoint_parser.add_argument('--data', help='JSON data to store (for save)')
args = parser.parse_args()
if args.command == 'create':
state = create_workflow(args.feature)
print(f"Created workflow: {state['id']}")
print(f"Feature: {args.feature}")
print(f"State saved to: {get_current_state_path()}")
elif args.command == 'status':
state = load_current_workflow()
if state:
show_status(state)
else:
print("No active workflow found.")
print("Start a new workflow with: /workflow:spawn <feature>")
elif args.command == 'transition':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
if transition_phase(state, args.phase, args.error):
print(f"Transitioned to: {args.phase}")
else:
sys.exit(1)
elif args.command == 'approve':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
gate = f"{args.gate}_approval"
if approve_gate(state, gate, args.approver):
print(f"Approved: {args.gate}")
elif args.command == 'reject':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
gate = f"{args.gate}_approval"
if reject_gate(state, gate, args.reason):
print(f"Rejected: {args.gate}")
print(f"Reason: {args.reason}")
elif args.command == 'progress':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
updates = {}
if args.entities is not None:
updates['entities_designed'] = args.entities
if args.tasks_created is not None:
updates['tasks_created'] = args.tasks_created
if args.tasks_impl is not None:
updates['tasks_implemented'] = args.tasks_impl
if args.tasks_reviewed is not None:
updates['tasks_reviewed'] = args.tasks_reviewed
if args.tasks_completed is not None:
updates['tasks_completed'] = args.tasks_completed
if updates:
update_progress(state, **updates)
print("Progress updated")
elif args.command == 'task':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
update_task_status(state, args.task_id, args.status)
print(f"Task {args.task_id}{args.status}")
elif args.command == 'archive':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
archive_workflow(state, args.suffix)
print(f"Workflow archived to: {get_history_dir()}/{state['id']}{args.suffix}.yml")
elif args.command == 'exists':
state = load_current_workflow()
if state:
print("true")
sys.exit(0)
else:
print("false")
sys.exit(1)
elif args.command == 'checkpoint':
state = load_current_workflow()
if not state:
print("Error: No active workflow")
sys.exit(1)
if args.action == 'save':
if not args.description:
print("Error: --description required for save")
sys.exit(1)
data = None
if args.data:
try:
data = json.loads(args.data)
except json.JSONDecodeError:
print("Error: --data must be valid JSON")
sys.exit(1)
checkpoint = save_checkpoint(state, args.description, data)
print(f"Checkpoint saved: {checkpoint['id']}")
print(f"Description: {args.description}")
elif args.action == 'list':
checkpoints = list_checkpoints(state)
if not checkpoints:
print("No checkpoints available")
else:
print("\n" + "=" * 60)
print("CHECKPOINTS".center(60))
print("=" * 60)
for cp in checkpoints:
print(f"\n ID: {cp['id']}")
print(f" Time: {cp['timestamp'][:19]}")
print(f" Phase: {cp['phase']}")
print(f" Description: {cp['description']}")
print("\n" + "=" * 60)
elif args.action == 'restore':
if restore_from_checkpoint(state, args.id):
print("Workflow state restored successfully")
else:
sys.exit(1)
elif args.action == 'clear':
clear_checkpoints(state)
print("All checkpoints cleared")
else:
parser.print_help()
if __name__ == "__main__":
main()

View File

@ -0,0 +1,99 @@
# Example Task - Created by Architect Agent
# Copy this template when creating new tasks
# ============================================
# FRONTEND TASK EXAMPLE
# ============================================
id: task_create_button
type: create
title: Create Button component
agent: frontend
status: pending
priority: high
entity_ids:
- comp_button
file_paths:
- app/components/Button.tsx
dependencies: [] # No dependencies, can start immediately
description: |
Create a reusable Button component with variant support.
Must match the manifest specification for comp_button.
acceptance_criteria:
- Exports Button component as named export
- Implements ButtonProps interface matching manifest
- Supports variant prop (primary, secondary, danger)
- Supports size prop (sm, md, lg)
- Supports disabled state
- Uses Tailwind CSS for styling
- Follows existing component patterns
---
# ============================================
# BACKEND TASK EXAMPLE
# ============================================
id: task_create_api_tasks
type: create
title: Create Tasks API endpoints
agent: backend
status: pending
priority: high
entity_ids:
- api_list_tasks
- api_create_task
file_paths:
- app/api/tasks/route.ts
- app/lib/db.ts
dependencies:
- task_create_db_tasks # Needs database first
description: |
Implement GET and POST handlers for /api/tasks endpoint.
GET: List all tasks with optional filtering
POST: Create a new task
acceptance_criteria:
- Exports GET function for listing tasks
- Exports POST function for creating tasks
- GET supports ?status and ?search query params
- POST validates required title field
- Returns proper HTTP status codes
- Matches manifest request/response schemas
---
# ============================================
# REVIEW TASK EXAMPLE
# ============================================
id: task_review_button
type: review
title: Review Button component implementation
agent: reviewer
status: pending
priority: medium
entity_ids:
- comp_button
file_paths:
- app/components/Button.tsx
dependencies:
- task_create_button # Must complete implementation first
description: |
Review the Button component implementation for quality,
correctness, and adherence to manifest specifications.
acceptance_criteria:
- Code matches manifest spec
- Props interface is correct
- Follows project patterns
- Lint passes
- Type-check passes