project-standalo-todo-super/.claude/commands/workflow/spawn.md


---
description: Automated workflow orchestrator with approval gates and sub-agent delegation
allowed-tools: Read, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
---

Workflow Orchestrator - Spawn

Input: "$ARGUMENTS"


CRITICAL ENFORCEMENT RULES

READ THESE RULES BEFORE ANY ACTION. VIOLATIONS WILL CORRUPT THE WORKFLOW.

🔴 MUST DO (Non-Negotiable)

  1. MUST run version_manager.py create BEFORE any design work
  2. MUST verify phase state BEFORE each transition
  3. MUST run validation script AFTER each phase completes
  4. MUST capture and display script output (never assume success)
  5. MUST create task files in .workflow/versions/vXXX/tasks/ (version-specific)
  6. MUST wait for Task agents to fully complete before proceeding
  7. MUST run npm run build and verify exit code = 0 before approval
  8. MUST run npx tsc --noEmit (type check) and verify exit code = 0
  9. MUST run npm run lint and verify exit code = 0

🚫 CANNOT DO (Strictly Forbidden)

  1. CANNOT skip phases or combine phases
  2. CANNOT proceed if any verification fails
  3. CANNOT assume task files exist - must verify with ls
  4. CANNOT assume build passes - must run and check exit code
  5. CANNOT transition without running the transition script
  6. CANNOT mark workflow complete if any task is not 'approved'
  7. CANNOT proceed to IMPLEMENTING if no task files exist

⚠️ BLOCKING CONDITIONS

These conditions HALT the workflow immediately:

| Condition | Blocked Phase | Resolution |
|-----------|---------------|------------|
| project_manifest.json missing | INITIALIZE | Create manifest first |
| No task files created | DESIGNING → IMPLEMENTING | Architect must create tasks |
| Build fails | IMPLEMENTING → REVIEWING | Fix build errors |
| Type check fails | IMPLEMENTING → REVIEWING | Fix TypeScript errors |
| Lint fails | IMPLEMENTING → REVIEWING | Fix lint errors |
| Files missing | REVIEWING | Implement missing files |
| Version mismatch | Any | Run /workflow:status |

ARGUMENT PARSING

FULL_AUTO_MODE = "$ARGUMENTS" contains "--full-auto"
AUTO_MODE = "$ARGUMENTS" contains "--auto" AND NOT "--full-auto"
MANUAL_MODE = NOT AUTO_MODE AND NOT FULL_AUTO_MODE
FEATURE = "$ARGUMENTS" with "--auto" and "--full-auto" removed and trimmed
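As a concrete illustration, here is a minimal bash sketch of this parsing. It assumes the raw `$ARGUMENTS` string is substituted into the shell verbatim; how the runtime exposes it is outside this sketch.

```bash
# Sketch only: assumes $ARGUMENTS is substituted into the shell verbatim.
ARGS="$ARGUMENTS"

FULL_AUTO_MODE=false
AUTO_MODE=false
case "$ARGS" in
  *--full-auto*) FULL_AUTO_MODE=true ;;
  *--auto*)      AUTO_MODE=true ;;
esac

if [ "$FULL_AUTO_MODE" = true ] || [ "$AUTO_MODE" = true ]; then
  MANUAL_MODE=false
else
  MANUAL_MODE=true
fi

# Remove the flags and trim surrounding whitespace to recover FEATURE
FEATURE=$(printf '%s' "$ARGS" \
  | sed -e 's/--full-auto//g' -e 's/--auto//g' \
  | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')
```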

Mode Comparison

| Aspect | Manual | --auto | --full-auto |
|--------|--------|--------|-------------|
| Requirements | User provides all | AI asks questions with options | AI expands autonomously |
| Questions | None | Until AI has enough info | Only acceptance criteria |
| Design Approval | Manual | Auto-approve | Auto-approve |
| Impl Approval | Manual | Auto if validation passes | Auto if validation passes |
| Best For | Full control | Guided discovery | Quick prototyping |

--auto Behavior (Interactive Discovery)

  • PHASE 1.5: AI asks clarifying questions with multiple-choice options
  • Questions continue until AI determines requirements are complete
  • Examples: "What auth method?", "Need password reset?", "Which OAuth providers?"
  • Gate 1 (design approval): Auto-approve
  • Gate 2 (impl approval): Auto-approve IF validation passes
  • STILL RUNS: All validation checks, build verification

--full-auto Behavior (AI-Driven Expansion)

  • PHASE 1.5: AI autonomously analyzes and expands the idea
  • AI generates comprehensive requirements from brief input
  • ONLY ASKS: Acceptance criteria ("How will you know this is done?")
  • Gate 1 (design approval): Auto-approve
  • Gate 2 (impl approval): Auto-approve IF validation passes
  • STILL RUNS: All validation checks, build verification

PHASE EXECUTION PROTOCOL

═══════════════════════════════════════════════════════════════

PHASE 1: INITIALIZE

═══════════════════════════════════════════════════════════════

Entry Condition: Command invoked
Exit Condition: Version created, session started, phase = DESIGNING

Step 1.0: Gate Entry Check [MANDATORY - BLOCKING]

# Initialize gate state for new workflow
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter INITIALIZING
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot start workflow"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter INITIALIZING

Step 1.1: Parse Arguments [MANDATORY]

Extract: FULL_AUTO_MODE, AUTO_MODE, MANUAL_MODE, FEATURE
Validate: FEATURE is not empty

BLOCK IF: FEATURE is empty → Ask user for feature description

Step 1.2: Verify Prerequisites [MANDATORY]

# MUST run this check - do not skip
ls project_manifest.json

BLOCK IF: File does not exist → Error: "Run /guardrail:init or /guardrail:analyze first"
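Spelled out as an explicit check, in the same style as the other gates in this command:

```bash
if [ ! -f project_manifest.json ]; then
  echo "❌ BLOCKED: project_manifest.json not found"
  echo "Run /guardrail:init or /guardrail:analyze first"
  exit 1
fi
```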

# Create workflow directory if missing (auto-recovery)
mkdir -p .workflow/versions
if [ ! -f .workflow/index.yml ]; then
  cat > .workflow/index.yml << 'EOF'
versions: []
latest_version: null
total_versions: 0
EOF
fi

Step 1.3: Create Version [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/version_manager.py create "$FEATURE"

MUST capture output and extract VERSION_ID (e.g., "v004")
BLOCK IF: Script fails → Error with script output
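A capture sketch for this step. The output format of `version_manager.py create` is an assumption here; the sketch expects a version identifier such as `v004` somewhere in its output.

```bash
# Sketch: capture and display the script output, then extract a vNNN id (format assumed).
CREATE_OUTPUT=$(python3 skills/guardrail-orchestrator/scripts/version_manager.py create "$FEATURE" 2>&1)
CREATE_EXIT=$?
echo "$CREATE_OUTPUT"
if [ $CREATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: version_manager.py create failed (exit $CREATE_EXIT)"
  exit 1
fi
VERSION_ID=$(printf '%s' "$CREATE_OUTPUT" | grep -oE 'v[0-9]{3,}' | head -1)
echo "Version created: $VERSION_ID"
```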

Step 1.4: Display Start Banner [MANDATORY]

╔══════════════════════════════════════════════════════════════╗
║ 🚀 WORKFLOW STARTED                                          ║
╠══════════════════════════════════════════════════════════════╣
║ Version:  $VERSION_ID                                        ║
║ Feature:  $FEATURE                                           ║
║ Mode:     $MODE (MANUAL/AUTO/FULL-AUTO)                      ║
║ Tasks:    .workflow/versions/$VERSION_ID/tasks/              ║
╚══════════════════════════════════════════════════════════════╝

Step 1.5: Complete Phase & Transition [MANDATORY]

# Save checkpoints
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint manifest_exists \
  --phase INITIALIZING --status passed
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint version_created \
  --phase INITIALIZING --status passed \
  --data "{\"version\": \"$VERSION_ID\"}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete INITIALIZING
COMPLETE_EXIT=$?
if [ $COMPLETE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot complete INITIALIZING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Transition to next phase
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition DESIGNING

VERIFY: Script exits with code 0


═══════════════════════════════════════════════════════════════

PHASE 1.5: REQUIREMENTS GATHERING (--auto and --full-auto only)

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = DESIGNING, Mode = AUTO_MODE or FULL_AUTO_MODE
Exit Condition: Requirements documented, ready for design
SKIP IF: MANUAL_MODE (proceed directly to PHASE 2)


IF AUTO_MODE: Interactive Requirements Discovery

Purpose: Ask clarifying questions until AI has enough information to design the system.

Step 1.5.1: Initialize Requirements Document
mkdir -p .workflow/versions/$VERSION_ID/requirements

Create requirements tracking:

# .workflow/versions/$VERSION_ID/requirements/discovery.yml
feature: "$FEATURE"
status: gathering
questions_asked: 0
requirements: []
ready_for_design: false
Step 1.5.2: Question Loop [MANDATORY]

REPEAT until ready_for_design = true:

Use the AskUserQuestion tool to gather requirements, asking intelligent questions based on the feature type.

Question categories to explore:
1. SCOPE: What exactly should this feature do?
2. USERS: Who will use this? What roles?
3. DATA: What information needs to be stored?
4. ACTIONS: What operations can users perform?
5. AUTH: What security/permissions are needed?
6. UI: What screens/components are needed?
7. INTEGRATIONS: Any external services?
8. EDGE CASES: What happens when X fails?

Example Question Flow for "add user authentication":

Round 1 - Authentication Type:
  Question: "What authentication method do you need?"
  Options:
    - "Email/Password (Recommended)" - Traditional login with email and password
    - "OAuth Social Login" - Login with Google, GitHub, etc.
    - "Magic Link" - Passwordless email link login
    - "Multi-factor" - 2FA with authenticator apps
  [multiSelect: true]

Round 2 - (If OAuth selected) Providers:
  Question: "Which OAuth providers should be supported?"
  Options:
    - "Google (Recommended)" - Most common, easy setup
    - "GitHub" - Popular for developer tools
    - "Apple" - Required for iOS apps
    - "Microsoft" - Common for enterprise
  [multiSelect: true]

Round 3 - User Data:
  Question: "What user information should be stored?"
  Options:
    - "Basic (name, email)" - Minimal user profile
    - "Extended (+ avatar, bio)" - Social features
    - "Professional (+ company, role)" - B2B applications
    - "Custom fields" - I'll specify additional fields
  [multiSelect: false]

Round 4 - Features:
  Question: "Which additional features do you need?"
  Options:
    - "Password reset" - Email-based password recovery
    - "Email verification" - Confirm email ownership
    - "Remember me" - Persistent sessions
    - "Account deletion" - GDPR compliance
  [multiSelect: true]

Round 5 - UI Components:
  Question: "What UI components are needed?"
  Options:
    - "Login page" - Standalone login screen
    - "Registration page" - New user signup
    - "Profile page" - View/edit user info
    - "Settings page" - Account settings
  [multiSelect: true]
Step 1.5.3: Evaluate Completeness [MANDATORY]

After each round, evaluate if requirements are sufficient:

READY_FOR_DESIGN = true IF ALL of these are answered:
  - [ ] Core functionality is clear
  - [ ] Data model requirements are known
  - [ ] API operations are identified
  - [ ] UI screens are listed
  - [ ] Authentication/authorization is defined
  - [ ] Key edge cases are addressed

If NOT ready: Generate next question based on gaps
If ready: Proceed to Step 1.5.4

Step 1.5.4: Generate Requirements Summary [MANDATORY]

Save gathered requirements:

# .workflow/versions/$VERSION_ID/requirements/summary.yml
feature: "$FEATURE"
gathered_at: <timestamp>
questions_asked: X
mode: auto

requirements:
  authentication:
    methods: [email_password, oauth_google]
    features: [password_reset, email_verification]

  data_model:
    user_fields: [name, email, avatar, bio]
    additional_entities: []

  ui_components:
    pages: [login, register, profile]
    components: [login_form, user_avatar]

  api_endpoints:
    - POST /api/auth/login
    - POST /api/auth/register
    - POST /api/auth/forgot-password
    - GET /api/users/me

acceptance_criteria:
  - User can register with email/password
  - User can login with Google OAuth
  - User can reset forgotten password
  - User profile displays correctly
Step 1.5.5: Display Requirements Summary [MANDATORY]
╔══════════════════════════════════════════════════════════════╗
║ 📋 REQUIREMENTS GATHERED                                     ║
╠══════════════════════════════════════════════════════════════╣
║ Feature:  $FEATURE                                           ║
║ Questions asked: X                                           ║
╠══════════════════════════════════════════════════════════════╣
║ SCOPE DEFINED                                                ║
║   ✅ Authentication: Email/Password + Google OAuth           ║
║   ✅ Features: Password reset, Email verification            ║
║   ✅ User Data: Name, email, avatar, bio                     ║
║   ✅ UI: Login, Register, Profile pages                      ║
╠══════════════════════════════════════════════════════════════╣
║ Proceeding to DESIGN phase...                                ║
╚══════════════════════════════════════════════════════════════╝

IF FULL_AUTO_MODE: AI-Driven Expansion

Purpose: AI autonomously expands brief input into comprehensive requirements. Only asks user for acceptance criteria.

Step 1.5.1: Autonomous Analysis [MANDATORY]

Use Task tool to expand requirements:

Use Task tool with:
  subagent_type: "requirements-analyst"
  prompt: |
    # REQUIREMENTS ANALYST - Autonomous Expansion

    ## INPUT
    Feature request: "$FEATURE"

    ## YOUR MISSION
    Expand this brief feature request into comprehensive requirements.
    Think like a senior product manager.

    ## ANALYSIS PROCESS

    1. **Understand Intent**
       - What problem is the user trying to solve?
       - What is the core value proposition?
       - Who are the target users?

    2. **Expand Scope**
       - What are the obvious features needed?
       - What are commonly expected features users don't mention?
       - What are the MVP requirements vs nice-to-haves?

    3. **Data Requirements**
       - What entities need to be stored?
       - What are the relationships between entities?
       - What fields does each entity need?

    4. **API Design**
       - What CRUD operations are needed?
       - What custom operations are needed?
       - What authentication/authorization is required?

    5. **UI Components**
       - What pages/screens are needed?
       - What reusable components are needed?
       - What is the user flow?

    6. **Edge Cases**
       - What happens on errors?
       - What are the validation rules?
       - What are the security considerations?

    ## OUTPUT FORMAT

    Create: .workflow/versions/$VERSION_ID/requirements/expanded.yml

    ```yaml
    feature: "$FEATURE"
    expanded_at: <timestamp>
    mode: full_auto

    analysis:
      problem_statement: "<what problem this solves>"
      target_users: "<who will use this>"
      core_value: "<main benefit>"

    scope:
      mvp_features:
        - <feature 1>
        - <feature 2>
      future_features:
        - <feature for later>

    data_model:
      entities:
        - name: <Entity>
          fields: [<field1>, <field2>]
          relations: [<relation>]

    api_endpoints:
      - method: POST
        path: /api/xxx
        purpose: <what it does>

    ui_structure:
      pages:
        - name: <PageName>
          route: /<path>
          purpose: <what user does here>
      components:
        - name: <ComponentName>
          purpose: <what it displays/does>

    security:
      authentication: <method>
      authorization: <rules>

    edge_cases:
      - scenario: <what could go wrong>
        handling: <how to handle it>
    ```

    ## OUTPUT
    After creating the file, output a summary of your analysis.
Step 1.5.2: Display Expanded Requirements [MANDATORY]
╔══════════════════════════════════════════════════════════════╗
║ 🤖 AI-EXPANDED REQUIREMENTS                                  ║
╠══════════════════════════════════════════════════════════════╣
║ Original: "$FEATURE"                                         ║
╠══════════════════════════════════════════════════════════════╣
║ EXPANDED SCOPE                                               ║
║   Problem: <problem statement>                               ║
║   Users: <target users>                                      ║
║   Value: <core value>                                        ║
╠══════════════════════════════════════════════════════════════╣
║ MVP FEATURES                                                 ║
║   • Feature 1                                                ║
║   • Feature 2                                                ║
║   • Feature 3                                                ║
╠══════════════════════════════════════════════════════════════╣
║ DATA MODEL                                                   ║
║   📦 Entity1 (X fields)                                      ║
║   📦 Entity2 (Y fields)                                      ║
╠══════════════════════════════════════════════════════════════╣
║ API ENDPOINTS: X                                             ║
║ UI PAGES: X                                                  ║
║ COMPONENTS: X                                                ║
╚══════════════════════════════════════════════════════════════╝
Step 1.5.3: Ask for Acceptance Criteria [MANDATORY]

This is the ONLY question in full-auto mode:

Use AskUserQuestion:
  Question: "Based on the expanded requirements above, what are your acceptance criteria? How will you know this feature is complete and working?"

  Options:
    - "Looks good - use AI-suggested criteria"
      Description: "AI will generate acceptance criteria based on the requirements"
    - "I'll specify my own criteria"
      Description: "Enter your own acceptance criteria"
    - "Add to AI criteria"
      Description: "Use AI criteria plus add my own"

If user chooses "Looks good": AI generates acceptance criteria
If user chooses "I'll specify": Prompt user for criteria
If user chooses "Add to": Combine AI + user criteria

Step 1.5.4: Finalize Requirements [MANDATORY]

Save final requirements with acceptance criteria:

# .workflow/versions/$VERSION_ID/requirements/final.yml
feature: "$FEATURE"
mode: full_auto
finalized_at: <timestamp>

# ... expanded requirements ...

acceptance_criteria:
  - criterion: "User can successfully register"
    verification: "Submit registration form, verify account created"
  - criterion: "User can login with credentials"
    verification: "Login with valid credentials, verify session created"
  - criterion: "Invalid login shows error"
    verification: "Login with wrong password, verify error message"
  # ... more criteria
Step 1.5.5: Display Final Summary [MANDATORY]
╔══════════════════════════════════════════════════════════════╗
║ ✅ REQUIREMENTS FINALIZED                                    ║
╠══════════════════════════════════════════════════════════════╣
║ Feature:  $FEATURE                                           ║
║ Mode:     Full-Auto                                          ║
╠══════════════════════════════════════════════════════════════╣
║ ACCEPTANCE CRITERIA                                          ║
║   ☐ User can successfully register                           ║
║   ☐ User can login with credentials                          ║
║   ☐ Invalid login shows error                                ║
║   ☐ Password reset works via email                           ║
╠══════════════════════════════════════════════════════════════╣
║ Proceeding to DESIGN phase (auto-approved)...                ║
╚══════════════════════════════════════════════════════════════╝

IF MANUAL_MODE: Skip to Design

No requirements gathering phase. Proceed directly to PHASE 2.


═══════════════════════════════════════════════════════════════

PHASE 2: DESIGNING (Enhanced with Design Document)

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = DESIGNING (verified via gate check)
Exit Condition: Design document validated, dependency graph generated, tasks with context created

Step 2.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter DESIGNING
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter DESIGNING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter DESIGNING

Step 2.1: Verify Phase State [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status

BLOCK IF: Current phase is not DESIGNING
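A hypothetical guard for this check; it assumes `workflow_manager.py status` prints the current phase on a line containing `phase`, which this document does not confirm.

```bash
# Hypothetical: the status output format is assumed, not documented here.
CURRENT_PHASE=$(python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status \
  | grep -i 'phase' | head -1 | awk '{print $NF}')
if [ "$CURRENT_PHASE" != "DESIGNING" ]; then
  echo "❌ BLOCKED: Expected phase DESIGNING, found ${CURRENT_PHASE:-unknown}"
  exit 1
fi
```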

Step 2.2: Create Design Directories [MANDATORY]

mkdir -p .workflow/versions/$VERSION_ID/design
mkdir -p .workflow/versions/$VERSION_ID/contexts
mkdir -p .workflow/versions/$VERSION_ID/tasks

Step 2.3: Spawn Architect Agent for Design Document [MANDATORY]

MUST use Task tool to create comprehensive design document:

Use Task tool with:
  subagent_type: "system-architect"
  prompt: |
    # SYSTEM ARCHITECT - Design Document Creation
    # VERSION $VERSION_ID

    ## STRICT REQUIREMENTS
    Create a COMPLETE design document. Partial designs = failure.

    ## INPUT
    Feature: "$FEATURE"
    Version: $VERSION_ID
    Output: .workflow/versions/$VERSION_ID/design/design_document.yml
    Schema: skills/guardrail-orchestrator/schemas/design_document.yml

    ## DESIGN PROCESS

    ### Phase A: Analyze Requirements
    1. Break down "$FEATURE" into user stories
    2. Identify data that needs to be stored
    3. Identify operations users can perform
    4. Plan the UI structure

    ### Phase B: Design Data Layer (LAYER 1)
    Create data_models section with:
    - id: model_<name>
    - name: PascalCase entity name
    - table_name: snake_case
    - fields: [name, type, constraints]
    - relations: [type, target, foreign_key, on_delete]
    - timestamps: true
    - validations: [field, rule, message]

    ### Phase C: Design API Layer (LAYER 2)
    Create api_endpoints section with:
    - id: api_<verb>_<resource>
    - method: GET|POST|PUT|PATCH|DELETE
    - path: /api/<path>
    - request_body: (for POST/PUT/PATCH)
    - responses: [{status, description, schema}]
    - depends_on_models: [model_ids]
    - auth: {required, roles}

    ### Phase D: Design UI Layer (LAYER 3)
    Create pages section with:
    - id: page_<name>
    - path: /<route>
    - data_needs: [{api_id, purpose, on_load}]
    - components: [component_ids]
    - auth: {required, roles, redirect}

    Create components section with:
    - id: component_<name>
    - name: PascalCaseName
    - props: [{name, type, required, description}]
    - events: [{name, payload, description}]
    - uses_apis: [api_ids]
    - uses_components: [component_ids]

    ## OUTPUT FORMAT
    Create file: .workflow/versions/$VERSION_ID/design/design_document.yml

    ```yaml
    workflow_version: "$VERSION_ID"
    feature: "$FEATURE"
    created_at: <timestamp>
    status: draft
    revision: 1

    data_models:
      - id: model_<name>
        name: <Name>
        # ... full model definition

    api_endpoints:
      - id: api_<verb>_<resource>
        method: <METHOD>
        path: /api/<path>
        # ... full endpoint definition

    pages:
      - id: page_<name>
        path: /<route>
        # ... full page definition

    components:
      - id: component_<name>
        name: <Name>
        # ... full component definition
    ```

    ## ALSO UPDATE project_manifest.json
    Add entities under appropriate sections with status: "PENDING"

    ## VERIFICATION
    Before finishing, verify the design document exists:
    ```bash
    cat .workflow/versions/$VERSION_ID/design/design_document.yml | head -20
    ```

    ## OUTPUT SUMMARY
    ```
    === DESIGN DOCUMENT CREATED ===
    Data Models:     X
    API Endpoints:   X
    Pages:           X
    Components:      X
    File: .workflow/versions/$VERSION_ID/design/design_document.yml
    ```

Step 2.4: Validate Design & Generate Artifacts [MANDATORY]

Run design validation to generate dependency graph, contexts, and tasks:

python3 skills/guardrail-orchestrator/scripts/validate_design.py \
  .workflow/versions/$VERSION_ID/design/design_document.yml \
  --output-dir .workflow/versions/$VERSION_ID

This generates:

  • dependency_graph.yml - Layered execution order
  • contexts/*.yml - Per-entity context snapshots for subagents
  • tasks/*.yml - Implementation tasks with full context

BLOCK IF: Validation fails (exit code != 0) → Display errors, re-run design
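To make the blocking behaviour explicit, the validation call can be wrapped with the same exit-code pattern the other gates use (repeating the command here so its exit code is captured directly):

```bash
python3 skills/guardrail-orchestrator/scripts/validate_design.py \
  .workflow/versions/$VERSION_ID/design/design_document.yml \
  --output-dir .workflow/versions/$VERSION_ID
VALIDATE_EXIT=$?
if [ $VALIDATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Design validation failed (exit $VALIDATE_EXIT)"
  echo "Review the validation errors and re-run the architect step"
  exit 1
fi
```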

Step 2.5: Verify Generated Artifacts [MANDATORY]

# Check dependency graph exists
ls .workflow/versions/$VERSION_ID/dependency_graph.yml

# Count generated tasks
TASK_COUNT=$(ls .workflow/versions/$VERSION_ID/tasks/*.yml 2>/dev/null | wc -l)
echo "Tasks generated: $TASK_COUNT"

# Count context files
CONTEXT_COUNT=$(ls .workflow/versions/$VERSION_ID/contexts/*.yml 2>/dev/null | wc -l)
echo "Context files: $CONTEXT_COUNT"

BLOCK IF: TASK_COUNT = 0 → Error: "No tasks generated from design"
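The blocking check, spelled out in the same bash style:

```bash
if [ "$TASK_COUNT" -eq 0 ]; then
  echo "❌ BLOCKED: No tasks generated from design"
  exit 1
fi
```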

Step 2.6: Display Layered Execution Plan [MANDATORY]

Read dependency_graph.yml and display:

╔══════════════════════════════════════════════════════════════╗
║ 📊 EXECUTION LAYERS (Dependency Graph)                       ║
╠══════════════════════════════════════════════════════════════╣
║                                                              ║
║ Layer 1: DATA MODELS (Parallel)                              ║
║ ─────────────────────────────────────────────                ║
║   📦 model_xxx      → backend  [no deps]                     ║
║   📦 model_yyy      → backend  [no deps]                     ║
║                                                              ║
║ Layer 2: API ENDPOINTS (After Layer 1)                       ║
║ ─────────────────────────────────────────────                ║
║   🔌 api_xxx        → backend  [needs: model_xxx]            ║
║   🔌 api_yyy        → backend  [needs: model_xxx, model_yyy] ║
║                                                              ║
║ Layer 3: UI (After Layer 2)                                  ║
║ ─────────────────────────────────────────────                ║
║   🧩 component_xxx  → frontend [no deps]                     ║
║   📄 page_xxx       → frontend [needs: api_xxx, component_xxx]║
║                                                              ║
╠══════════════════════════════════════════════════════════════╣
║ EXECUTION SUMMARY                                            ║
║   Total tasks:      X                                        ║
║   Total layers:     X                                        ║
║   Max parallelism:  X tasks can run simultaneously           ║
╚══════════════════════════════════════════════════════════════╝

Step 2.7: Complete Phase & Transition [MANDATORY]

# Save checkpoints
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint design_document_created \
  --phase DESIGNING --status passed
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint design_validated \
  --phase DESIGNING --status passed
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint tasks_generated \
  --phase DESIGNING --status passed \
  --data "{\"task_count\": $TASK_COUNT, \"context_count\": $CONTEXT_COUNT}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete DESIGNING
COMPLETE_EXIT=$?
if [ $COMPLETE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot complete DESIGNING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Update progress and transition
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py progress \
  --tasks-created $TASK_COUNT
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition AWAITING_DESIGN_APPROVAL

Step 2.8: Display Design Summary [MANDATORY]

╔══════════════════════════════════════════════════════════════╗
║ 📐 DESIGN COMPLETE - AWAITING APPROVAL                       ║
╠══════════════════════════════════════════════════════════════╣
║ Feature:  $FEATURE                                           ║
║ Version:  $VERSION_ID                                        ║
╠══════════════════════════════════════════════════════════════╣
║ DESIGN DOCUMENT                                              ║
║   📦 Data Models:     X                                      ║
║   🔌 API Endpoints:   X                                      ║
║   📄 Pages:           X                                      ║
║   🧩 Components:      X                                      ║
╠══════════════════════════════════════════════════════════════╣
║ GENERATED ARTIFACTS                                          ║
║   ✅ Design document created                                 ║
║   ✅ Dependency graph calculated                             ║
║   ✅ Context snapshots: X files                              ║
║   ✅ Implementation tasks: X tasks                           ║
╠══════════════════════════════════════════════════════════════╣
║ Each subagent will receive FULL CONTEXT including:           ║
║   - Target entity definition                                 ║
║   - Related model/API definitions                            ║
║   - Input/output contracts                                   ║
║   - Acceptance criteria                                      ║
╠══════════════════════════════════════════════════════════════╣
║ 👆 Review the execution layers above                         ║
║                                                              ║
║ If design looks correct: /workflow:approve                   ║
║ If changes needed: /workflow:reject "reason"                 ║
╚══════════════════════════════════════════════════════════════╝

═══════════════════════════════════════════════════════════════

PHASE 3: GATE 1 - Design Approval

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = AWAITING_DESIGN_APPROVAL (verified via gate check)
Exit Condition: Design approved, phase = IMPLEMENTING

Step 3.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter AWAITING_DESIGN_APPROVAL
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter AWAITING_DESIGN_APPROVAL phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter AWAITING_DESIGN_APPROVAL

IF AUTO_MODE = true:

# Save approval checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint design_approved \
  --phase AWAITING_DESIGN_APPROVAL --status passed \
  --data "{\"approver\": \"auto\", \"mode\": \"auto\"}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete AWAITING_DESIGN_APPROVAL

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py approve design
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING

Output: "✅ Design auto-approved. Proceeding to implementation."

IF AUTO_MODE = false:

Use AskUserQuestion:

Question: "Review the design. How do you want to proceed?"
Options:
  1. "Approve - Continue to implementation"
  2. "Reject - Revise design"
  3. "Pause - Save and exit"

On Approve:

# Save approval checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint design_approved \
  --phase AWAITING_DESIGN_APPROVAL --status passed \
  --data "{\"approver\": \"user\", \"mode\": \"manual\"}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete AWAITING_DESIGN_APPROVAL

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py approve design
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING

On Reject: Return to Phase 2
On Pause: Output resume command and stop


═══════════════════════════════════════════════════════════════

PHASE 4: IMPLEMENTING (Layer-Based Parallel Execution)

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = IMPLEMENTING (verified via gate check)
Exit Condition: All layers implemented in order, build passes

KEY CHANGE: Tasks are now executed LAYER BY LAYER with FULL CONTEXT. Each subagent receives a context snapshot with all dependencies.

Step 4.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter IMPLEMENTING
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter IMPLEMENTING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter IMPLEMENTING

Step 4.1: Verify Phase State [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status

BLOCK IF: Phase is not IMPLEMENTING

Step 4.2: Load Dependency Graph [MANDATORY]

# Read dependency graph to get layers
cat .workflow/versions/$VERSION_ID/dependency_graph.yml

Extract layer information:

  • Layer count
  • Tasks per layer
  • Dependencies per task
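A possible extraction sketch follows. It assumes each task file carries a `layer: N` field, which matches the grep used in Step 4.3.1 below but is otherwise an assumption about the generated YAML.

```bash
# Assumes task files contain a "layer: N" field (see the grep in Step 4.3.1).
TOTAL_LAYERS=$(grep -h 'layer:' .workflow/versions/$VERSION_ID/tasks/*.yml \
  | awk '{print $2}' | sort -n | tail -1)
echo "Total layers: $TOTAL_LAYERS"
for layer in $(seq 1 "$TOTAL_LAYERS"); do
  COUNT=$(grep -lw "layer: $layer" .workflow/versions/$VERSION_ID/tasks/*.yml 2>/dev/null | wc -l)
  echo "Layer $layer: $COUNT tasks"
done
```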

Step 4.3: Execute Layers Sequentially [MANDATORY]

FOR EACH LAYER (1, 2, 3, ...):

Step 4.3.1: Get Layer Tasks
# Get all tasks for current layer
LAYER_TASKS=$(grep -l "layer: $LAYER_NUM" .workflow/versions/$VERSION_ID/tasks/*.yml)
Step 4.3.2: Spawn Parallel Agents for Layer [MANDATORY]

Launch ALL tasks in current layer IN PARALLEL using multiple Task tool calls

For EACH task in layer, spawn agent with FULL CONTEXT:

Use Task tool with:
  subagent_type: "backend-architect" OR "frontend-architect"  # Based on task.agent
  prompt: |
    # IMPLEMENTATION AGENT - $TASK_ID
    # VERSION $VERSION_ID | LAYER $LAYER_NUM

    ## YOUR SINGLE TASK
    You are implementing ONE entity with FULL CONTEXT provided.

    ## CONTEXT (Read this first!)
    Context file: .workflow/versions/$VERSION_ID/contexts/$ENTITY_ID.yml

    Read the context file. It contains:
    - target: The entity you are implementing (full definition)
    - related: All models/APIs/components you need to know about
    - dependencies: What this entity depends on (already implemented)
    - files: Files to create and reference files for patterns
    - acceptance: Criteria that must be met

    ## TASK DETAILS
    Task file: .workflow/versions/$VERSION_ID/tasks/$TASK_ID.yml

    ## IMPLEMENTATION PROCESS

    1. **Read Context File** [MANDATORY]
       ```bash
       cat .workflow/versions/$VERSION_ID/contexts/$ENTITY_ID.yml
       ```

    2. **Read Task File** [MANDATORY]
       ```bash
       cat .workflow/versions/$VERSION_ID/tasks/$TASK_ID.yml
       ```

    3. **Read Reference Files** (from context.files.reference)
       These show existing patterns to follow.

    4. **Implement**
       Create file(s) at exact paths from context.files.to_create
       Follow patterns from reference files
       Meet all acceptance criteria

    5. **Verify**
       ```bash
       # Check file exists
       ls <created_file_path>

       # Check TypeScript
       npx tsc --noEmit <created_file_path> 2>&1 || true
       ```

    ## OUTPUT FORMAT
    ```
    === TASK COMPLETE: $TASK_ID ===
    Entity:  $ENTITY_ID
    Layer:   $LAYER_NUM
    Files created:
      - <path> ✓
    TypeScript: PASS/FAIL
    Acceptance criteria:
      - [criterion 1]: PASS/FAIL
      - [criterion 2]: PASS/FAIL
    ```

IMPORTANT: Launch ALL tasks in the same layer using PARALLEL Task tool calls in a single message.

Step 4.3.3: Wait for Layer Completion [MANDATORY]

MUST wait for ALL agents in current layer to complete before proceeding to next layer

Step 4.3.4: Verify Layer [MANDATORY]
# Check all files for this layer exist
for task in $LAYER_TASKS; do
  # Extract this task's to_create paths and verify each exists (mirrors Step 4.4)
  grep -A 10 "to_create:" "$task" | grep -E "^\s+-" | sed 's/.*- //' | while read -r path; do
    ls "$path" 2>/dev/null || echo "MISSING: $path"
  done
done

BLOCK IF: Any files missing in layer → Do not proceed to next layer
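A sketch of how this blocking condition can be enforced, building on the verification loop above:

```bash
MISSING_COUNT=$(for task in $LAYER_TASKS; do
  grep -A 10 "to_create:" "$task" | grep -E "^\s+-" | sed 's/.*- //'
done | while read -r path; do
  [ -f "$path" ] || echo "$path"
done | wc -l)

if [ "$MISSING_COUNT" -gt 0 ]; then
  echo "❌ BLOCKED: $MISSING_COUNT file(s) missing in layer $LAYER_NUM"
  exit 1
fi
```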

Step 4.3.5: Display Layer Progress [MANDATORY]
╔══════════════════════════════════════════════════════════════╗
║ ✅ LAYER $LAYER_NUM COMPLETE                                 ║
╠══════════════════════════════════════════════════════════════╣
║ Tasks completed:  X                                          ║
║ Files created:    X                                          ║
║ Proceeding to Layer $NEXT_LAYER...                           ║
╚══════════════════════════════════════════════════════════════╝

REPEAT for next layer until all layers complete

Step 4.4: Post-Implementation Verification [MANDATORY]

# Verify build passes
npm run build
BUILD_EXIT=$?
echo "Build exit code: $BUILD_EXIT"

# Verify type check passes
npx tsc --noEmit
TYPE_EXIT=$?
echo "Type check exit code: $TYPE_EXIT"

# Verify lint passes
npm run lint
LINT_EXIT=$?
echo "Lint exit code: $LINT_EXIT"

# Check all passed
if [ $BUILD_EXIT -ne 0 ] || [ $TYPE_EXIT -ne 0 ] || [ $LINT_EXIT -ne 0 ]; then
  echo "❌ VERIFICATION FAILED"
  [ $BUILD_EXIT -ne 0 ] && echo "  - Build failed"
  [ $TYPE_EXIT -ne 0 ] && echo "  - Type check failed"
  [ $LINT_EXIT -ne 0 ] && echo "  - Lint failed"
  exit 1
fi

BLOCK IF: Any exit code != 0 → Error with output

# Verify all task files have corresponding implementation files
for task in .workflow/versions/$VERSION_ID/tasks/*.yml; do
  grep "to_create:" -A 10 "$task" | grep -E "^\s+-" | sed 's/.*- //' | while read path; do
    if [ ! -f "$path" ]; then
      echo "MISSING: $path"
    fi
  done
done

BLOCK IF: Any file MISSING → List missing files, halt workflow

Step 4.5: Display Implementation Summary [MANDATORY]

╔══════════════════════════════════════════════════════════════╗
║ ✅ ALL LAYERS IMPLEMENTED                                    ║
╠══════════════════════════════════════════════════════════════╣
║ Layer 1:  X tasks (models)     ✓                             ║
║ Layer 2:  X tasks (APIs)       ✓                             ║
║ Layer 3:  X tasks (UI)         ✓                             ║
╠══════════════════════════════════════════════════════════════╣
║ Total files created:  X                                      ║
║ Build:                PASS                                   ║
╚══════════════════════════════════════════════════════════════╝

Step 4.6: Complete Phase & Transition [MANDATORY]

# Save checkpoints for each completed layer
for layer in $(seq 1 $TOTAL_LAYERS); do
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint layer_${layer}_complete \
    --phase IMPLEMENTING --status passed
done

# Save build checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint build_passes \
  --phase IMPLEMENTING --status passed \
  --data "{\"exit_code\": 0}"

# Save type-check checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint type_check_passes \
  --phase IMPLEMENTING --status passed \
  --data "{\"exit_code\": 0}"

# Save lint checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint lint_passes \
  --phase IMPLEMENTING --status passed \
  --data "{\"exit_code\": 0}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete IMPLEMENTING
COMPLETE_EXIT=$?
if [ $COMPLETE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot complete IMPLEMENTING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Transition to next phase
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition REVIEWING

═══════════════════════════════════════════════════════════════

PHASE 5: REVIEWING (With Fix Loop Enforcement)

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = REVIEWING (verified via gate check)
Exit Condition: All checks pass, review_passed checkpoint set
Fix Loop: If issues found → Return to IMPLEMENTING → Fix → Re-run review

Step 5.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter REVIEWING
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter REVIEWING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter REVIEWING

Step 5.1: Run Build, Type-Check, and Lint Validation [MANDATORY]

# Run build
npm run build 2>&1
BUILD_EXIT=$?

# Run type check
npx tsc --noEmit 2>&1
TYPE_EXIT=$?

# Run lint
npm run lint 2>&1
LINT_EXIT=$?

# Save checkpoints
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint build_verified \
  --phase REVIEWING \
  --status $([ $BUILD_EXIT -eq 0 ] && echo "passed" || echo "failed") \
  --data "{\"exit_code\": $BUILD_EXIT}"

python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint type_check_verified \
  --phase REVIEWING \
  --status $([ $TYPE_EXIT -eq 0 ] && echo "passed" || echo "failed") \
  --data "{\"exit_code\": $TYPE_EXIT}"

python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint lint_verified \
  --phase REVIEWING \
  --status $([ $LINT_EXIT -eq 0 ] && echo "passed" || echo "failed") \
  --data "{\"exit_code\": $LINT_EXIT}"

BLOCK IF: Any exit != 0 → Trigger fix loop

Step 5.2: Spawn Code Review Agent [MANDATORY]

Run comprehensive code review on all implemented files

Use Task tool with:
  subagent_type: "code-reviewer"
  prompt: |
    # CODE REVIEW AGENT - Quality Enhancement
    # VERSION $VERSION_ID

    ## YOUR MISSION
    Review ALL implemented files for this workflow version and identify issues
    that need fixing before the workflow can proceed.

    ## REVIEW SCOPE
    Task files: .workflow/versions/$VERSION_ID/tasks/*.yml

    For EACH task file:
    1. Read the task to find implementation file paths
    2. Review each implemented file

    ## REVIEW CRITERIA

    ### 1. Code Quality (CRITICAL)
    - [ ] DRY violations - duplicated code that should be abstracted
    - [ ] SOLID principle violations
    - [ ] Dead code or unused imports
    - [ ] Overly complex functions (cyclomatic complexity)
    - [ ] Missing error handling

    ### 2. TypeScript Best Practices (CRITICAL)
    - [ ] Any use of `any` type (should be properly typed)
    - [ ] Missing type annotations on function parameters/returns
    - [ ] Incorrect type assertions
    - [ ] Unused type imports

    ### 3. Security Issues (CRITICAL - BLOCKING)
    - [ ] Hardcoded secrets or API keys
    - [ ] SQL injection vulnerabilities
    - [ ] XSS vulnerabilities (unescaped user input)
    - [ ] Insecure data handling
    - [ ] Missing input validation

    ### 4. Performance Concerns (WARNING)
    - [ ] N+1 query patterns
    - [ ] Missing memoization in React components
    - [ ] Unnecessary re-renders
    - [ ] Large bundle imports that could be lazy-loaded
    - [ ] Missing pagination for lists

    ### 5. Framework Best Practices (WARNING)
    - [ ] React hooks rules violations
    - [ ] Missing cleanup in useEffect
    - [ ] Prop drilling that should use context
    - [ ] Missing loading/error states
    - [ ] Accessibility issues (missing aria labels, alt text)

    ### 6. Code Style & Maintainability (INFO)
    - [ ] Inconsistent naming conventions
    - [ ] Missing or outdated comments for complex logic
    - [ ] Overly long files that should be split
    - [ ] Magic numbers/strings that should be constants

    ## REVIEW PROCESS

    1. **List all files to review**
       ```bash
       for task in .workflow/versions/$VERSION_ID/tasks/*.yml; do
         grep -A 10 "to_create:" "$task" | grep -E "^\s+-" | sed 's/.*- //'
       done
       ```

    2. **For each file, run review**
       - Read the file
       - Check against ALL criteria above
       - Note line numbers for issues

    3. **Categorize findings by severity**
       - CRITICAL: Must fix before proceeding (security, type errors)
       - WARNING: Should fix, may cause problems
       - INFO: Suggestions for improvement

    ## OUTPUT FORMAT

    ```yaml
    # .workflow/versions/$VERSION_ID/review/code_review_report.yml
    version: $VERSION_ID
    reviewed_at: <timestamp>
    reviewer: code-review-agent

    summary:
      files_reviewed: X
      critical_issues: X
      warnings: X
      info: X
      verdict: PASS | NEEDS_FIX | BLOCKED

    files:
      - path: "src/app/api/xxx/route.ts"
        issues:
          - severity: CRITICAL
            line: 45
            category: security
            message: "Hardcoded API key found"
            suggestion: "Use environment variable"
            auto_fixable: true
          - severity: WARNING
            line: 23
            category: performance
            message: "Missing error boundary"
            suggestion: "Wrap component in ErrorBoundary"
            auto_fixable: false

    auto_fix_commands:
      - file: "src/app/api/xxx/route.ts"
        line: 45
        action: "Replace hardcoded key with process.env.API_KEY"
    ```

    ## FINAL OUTPUT

    After creating the report file, output a summary:

    ```
    === CODE REVIEW COMPLETE ===
    Files reviewed:    X
    Critical issues:   X (must fix)
    Warnings:          X (should fix)
    Info:              X (suggestions)

    VERDICT: [PASS/NEEDS_FIX/BLOCKED]

    [If NEEDS_FIX or BLOCKED, list top 5 critical issues]
    ```

Capture review results:

REVIEW_REPORT=".workflow/versions/$VERSION_ID/review/code_review_report.yml"
if [ -f "$REVIEW_REPORT" ]; then
  CRITICAL_ISSUES=$(grep "severity: CRITICAL" "$REVIEW_REPORT" | wc -l)
  WARNINGS=$(grep "severity: WARNING" "$REVIEW_REPORT" | wc -l)

  python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint code_review_run \
    --phase REVIEWING --status passed \
    --data "{\"critical\": $CRITICAL_ISSUES, \"warnings\": $WARNINGS}"
fi

Auto-Fix Critical Issues (if auto_fixable):

IF CRITICAL_ISSUES > 0 AND AUTO_MODE = true:
  Use Task tool with:
    subagent_type: "refactoring-expert"
    prompt: |
      # AUTO-FIX AGENT - Critical Issues

      ## MISSION
      Apply automatic fixes for CRITICAL issues found in code review.

      ## SOURCE
      Review report: .workflow/versions/$VERSION_ID/review/code_review_report.yml

      ## INSTRUCTIONS
      1. Read the review report
      2. For each issue with auto_fixable: true:
         - Apply the suggested fix
         - Verify the fix doesn't break anything
      3. Run: npm run build && npx tsc --noEmit && npm run lint
      4. If all pass, report success
      5. If any fail, report which issues couldn't be auto-fixed

      ## OUTPUT
      List of:
      - Successfully auto-fixed issues
      - Issues requiring manual intervention

Step 5.3: Generate Implementation Visualization [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/visualize_implementation.py --manifest project_manifest.json

Step 5.4: Verify All Task Files Exist [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/verify_implementation.py --version $VERSION_ID
VERIFY_EXIT=$?
ISSUES_FOUND=$(python3 skills/guardrail-orchestrator/scripts/verify_implementation.py --version $VERSION_ID --json | jq '.missing_files | length')

# Save checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint all_files_verified \
  --phase REVIEWING \
  --status $([ $VERIFY_EXIT -eq 0 ] && echo "passed" || echo "failed") \
  --data "{\"missing_count\": $ISSUES_FOUND}"

Step 5.5: Review Decision with Fix Loop [MANDATORY]

Collect All Issues
REVIEW_ISSUES=()

if [ $BUILD_EXIT -ne 0 ]; then
  REVIEW_ISSUES+=("Build failed with exit code $BUILD_EXIT")
fi

if [ $TYPE_EXIT -ne 0 ]; then
  REVIEW_ISSUES+=("Type check failed with exit code $TYPE_EXIT")
fi

if [ $LINT_EXIT -ne 0 ]; then
  REVIEW_ISSUES+=("Lint failed with exit code $LINT_EXIT")
fi

if [ $ISSUES_FOUND -gt 0 ]; then
  REVIEW_ISSUES+=("$ISSUES_FOUND implementation files missing")
fi

# Check code review results
REVIEW_REPORT=".workflow/versions/$VERSION_ID/review/code_review_report.yml"
if [ -f "$REVIEW_REPORT" ]; then
  CODE_CRITICAL=$(grep "severity: CRITICAL" "$REVIEW_REPORT" | wc -l | tr -d ' ')
  if [ "$CODE_CRITICAL" -gt 0 ]; then
    REVIEW_ISSUES+=("Code review found $CODE_CRITICAL CRITICAL issues")
  fi
fi
IF Issues Found → TRIGGER FIX LOOP [CRITICAL]
if [ ${#REVIEW_ISSUES[@]} -gt 0 ]; then
  echo "❌ REVIEW FAILED - FIX LOOP TRIGGERED"
  echo ""
  echo "╔══════════════════════════════════════════════════════════════╗"
  echo "║ 🔧 FIX LOOP: Returning to IMPLEMENTING                       ║"
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ Issues that MUST be fixed:                                   ║"
  for issue in "${REVIEW_ISSUES[@]}"; do
    echo "║   • $issue"
  done
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ 👉 NEXT STEPS:                                               ║"
  echo "║   1. Fix the issues listed above                             ║"
  echo "║   2. Run: npm run build (verify it passes)                   ║"
  echo "║   3. Run: npx tsc --noEmit (verify type check passes)        ║"
  echo "║   4. Run: npm run lint (verify lint passes)                  ║"
  echo "║   5. Fix any CRITICAL code review issues                     ║"
  echo "║   6. Run: /workflow:resume                                   ║"
  echo "║                                                              ║"
  echo "║ The workflow will automatically re-run REVIEWING after fix   ║"
  echo "╚══════════════════════════════════════════════════════════════╝"

  # Show code review details if issues exist
  REVIEW_REPORT=".workflow/versions/$VERSION_ID/review/code_review_report.yml"
  if [ -f "$REVIEW_REPORT" ]; then
    echo ""
    echo "📋 CODE REVIEW ISSUES:"
    echo "────────────────────────────────────────────────────────────────"
    grep -A 5 "severity: CRITICAL" "$REVIEW_REPORT" | head -30
    echo ""
    echo "Full report: $REVIEW_REPORT"
    echo "────────────────────────────────────────────────────────────────"
  fi

  # Trigger fix loop - returns to IMPLEMENTING
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py fix-loop REVIEWING \
    --issues "${REVIEW_ISSUES[@]}"

  # Transition back to IMPLEMENTING
  python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING

  # HALT - Must fix before continuing
  exit 1
fi
IF No Issues → PASS Review
# All checks passed - save code_review_passed checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint code_review_passed \
  --phase REVIEWING \
  --status passed \
  --data "{\"critical_issues\": 0}"

# Save review_passed checkpoint (umbrella checkpoint)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint review_passed \
  --phase REVIEWING \
  --status passed \
  --data "{\"build\": \"passed\", \"type_check\": \"passed\", \"lint\": \"passed\", \"files\": \"verified\", \"code_review\": \"passed\"}"

# Mark phase complete
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete REVIEWING

Step 5.6: Display Review Report [MANDATORY]

╔══════════════════════════════════════════════════════════════╗
║ 🔍 REVIEW RESULTS                                            ║
╠══════════════════════════════════════════════════════════════╣
║ Build:        ✅ PASS                                        ║
║ Type Check:   ✅ PASS                                        ║
║ Lint:         ✅ PASS                                        ║
║ Files:        ✅ All exist                                   ║
║ Code Review:  ✅ No CRITICAL issues                          ║
╠══════════════════════════════════════════════════════════════╣
║ IMPLEMENTATION SUMMARY                                       ║
║   Pages:        X implemented                                ║
║   Components:   X implemented                                ║
║   API Endpoints: X implemented                               ║
╠══════════════════════════════════════════════════════════════╣
║ ✅ Proceeding to SECURITY_REVIEW...                          ║
╚══════════════════════════════════════════════════════════════╝

Step 5.7: Transition to Security Review [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition SECURITY_REVIEW

═══════════════════════════════════════════════════════════════

PHASE 5.5: SECURITY REVIEW (With Fix Loop Enforcement)

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = SECURITY_REVIEW (verified via gate check)
Exit Condition: Security scan passes, security_passed checkpoint set
Fix Loop: If CRITICAL/HIGH issues found → Return to IMPLEMENTING → Fix → Re-run security

Step 5.5.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter SECURITY_REVIEW
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter SECURITY_REVIEW phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter SECURITY_REVIEW

Step 5.5.1: Run Security Scanner [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/security_scan.py \
  --project-dir . \
  --severity HIGH
SECURITY_EXIT=$?

# Capture security report for fix loop
SECURITY_REPORT=$(python3 skills/guardrail-orchestrator/scripts/security_scan.py \
  --project-dir . --severity HIGH --json 2>/dev/null || echo '{}')
CRITICAL_COUNT=$(echo "$SECURITY_REPORT" | jq '.by_severity.CRITICAL // 0')
HIGH_COUNT=$(echo "$SECURITY_REPORT" | jq '.by_severity.HIGH // 0')

# Save checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint security_scan_run \
  --phase SECURITY_REVIEW \
  --status $([ $SECURITY_EXIT -le 1 ] && echo "passed" || echo "failed") \
  --data "{\"exit_code\": $SECURITY_EXIT, \"critical\": $CRITICAL_COUNT, \"high\": $HIGH_COUNT}"

Exit codes:

  • 0 = PASS (no critical/high issues)
  • 1 = HIGH issues found (triggers fix loop in strict mode)
  • 2 = CRITICAL issues found (ALWAYS triggers fix loop)

Step 5.5.2: API Contract Validation [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/validate_api_contract.py \
  --project-dir .
API_EXIT=$?

# Save checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint api_contract_validated \
  --phase SECURITY_REVIEW \
  --status $([ $API_EXIT -eq 0 ] && echo "passed" || echo "failed") \
  --data "{\"exit_code\": $API_EXIT}"

Step 5.5.3: Security Decision with Fix Loop [MANDATORY - CRITICAL]

Collect All Security Issues
SECURITY_ISSUES=()

if [ $SECURITY_EXIT -eq 2 ]; then
  SECURITY_ISSUES+=("CRITICAL: $CRITICAL_COUNT critical security vulnerabilities found")
fi

if [ $SECURITY_EXIT -eq 1 ]; then
  SECURITY_ISSUES+=("HIGH: $HIGH_COUNT high severity security issues found")
fi

if [ $API_EXIT -ne 0 ]; then
  SECURITY_ISSUES+=("API Contract: Frontend-backend API mismatch detected")
fi
IF CRITICAL Issues Found → TRIGGER FIX LOOP [MANDATORY]
if [ $SECURITY_EXIT -eq 2 ]; then
  echo "❌ CRITICAL SECURITY ISSUES - FIX LOOP TRIGGERED"
  echo ""
  echo "╔══════════════════════════════════════════════════════════════╗"
  echo "║ 🚨 SECURITY FIX REQUIRED - CRITICAL ISSUES                   ║"
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ CRITICAL issues MUST be fixed before workflow can continue   ║"
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ Issues found:                                                ║"
  for issue in "${SECURITY_ISSUES[@]}"; do
    echo "║   • $issue"
  done
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ 👉 REQUIRED ACTIONS:                                         ║"
  echo "║   1. Review security report above                            ║"
  echo "║   2. Fix ALL critical vulnerabilities                        ║"
  echo "║   3. Run: /workflow:resume                                   ║"
  echo "║                                                              ║"
  echo "║ Common fixes:                                                ║"
  echo "║   - Remove hardcoded secrets → use env vars                  ║"
  echo "║   - Fix SQL injection → use parameterized queries            ║"
  echo "║   - Fix XSS → sanitize user input                            ║"
  echo "╚══════════════════════════════════════════════════════════════╝"

  # Trigger fix loop - returns to IMPLEMENTING
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py fix-loop SECURITY_REVIEW \
    --issues "${SECURITY_ISSUES[@]}"

  # Transition back to IMPLEMENTING
  python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING

  # HALT - Must fix before continuing
  exit 1
fi
IF HIGH Issues Found (--auto mode) → WARNING but allow continue
if [ $SECURITY_EXIT -eq 1 ]; then
  echo "⚠️  HIGH SEVERITY SECURITY ISSUES FOUND"
  echo ""
  echo "╔══════════════════════════════════════════════════════════════╗"
  echo "║ ⚠️  SECURITY WARNING - HIGH SEVERITY ISSUES                  ║"
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ $HIGH_COUNT high severity issues detected                    ║"
  echo "║                                                              ║"
  echo "║ In AUTO mode: Proceeding with warning                        ║"
  echo "║ Recommendation: Fix these issues before production deploy    ║"
  echo "╚══════════════════════════════════════════════════════════════╝"

  # Log warning but continue in auto mode
  # In manual mode, this would ask the user
fi
IF API Contract Failed → TRIGGER FIX LOOP [MANDATORY]
if [ $API_EXIT -ne 0 ]; then
  echo "❌ API CONTRACT VALIDATION FAILED - FIX LOOP TRIGGERED"
  echo ""
  echo "╔══════════════════════════════════════════════════════════════╗"
  echo "║ 🔌 API CONTRACT MISMATCH                                     ║"
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ Frontend API calls don't match backend endpoints             ║"
  echo "╠══════════════════════════════════════════════════════════════╣"
  echo "║ 👉 REQUIRED ACTIONS:                                         ║"
  echo "║   1. Check that all frontend fetch/axios calls exist         ║"
  echo "║   2. Verify HTTP methods match (GET/POST/PUT/DELETE)         ║"
  echo "║   3. Ensure request bodies are correct                       ║"
  echo "║   4. Run: /workflow:resume                                   ║"
  echo "╚══════════════════════════════════════════════════════════════╝"

  # Trigger fix loop
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py fix-loop SECURITY_REVIEW \
    --issues "API contract validation failed"

  python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING
  exit 1
fi
IF No Blocking Issues → Complete Security Review
# No critical findings and the API contract passed (HIGH warnings, if any, were logged above)
# Save the security_passed checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint security_passed \
  --phase SECURITY_REVIEW \
  --status passed \
  --data "{\"security\": \"passed\", \"api_contract\": \"passed\"}"

# Mark phase complete
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete SECURITY_REVIEW

Step 5.5.4: Display Security Report [MANDATORY]

╔══════════════════════════════════════════════════════════════╗
║ 🔒 SECURITY REVIEW RESULTS                                   ║
╠══════════════════════════════════════════════════════════════╣
║ Security Scan:    ✅ PASS / ⚠️ WARNING / ❌ CRITICAL         ║
║   Critical:       X issues                                   ║
║   High:           X issues                                   ║
║   Medium:         X issues                                   ║
║   Low:            X issues                                   ║
╠══════════════════════════════════════════════════════════════╣
║ API Contract:     ✅ PASS / ❌ FAIL                          ║
║   Matched calls:  X                                          ║
║   Unmatched:      X                                          ║
║   Method errors:  X                                          ║
╠══════════════════════════════════════════════════════════════╣
║ VERDICT: ✅ APPROVED / 🔧 NEEDS_FIXES                        ║
╚══════════════════════════════════════════════════════════════╝
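A minimal sketch for filling in the placeholders above, assuming the count variables ($CRITICAL_COUNT, $HIGH_COUNT, and, if the scanner reports them, $MEDIUM_COUNT/$LOW_COUNT) were set by the earlier scan steps:

# Derive the report fields from the exit codes and counts captured earlier.
# MEDIUM_COUNT/LOW_COUNT are assumed variable names - adjust to the scanner's output.
if   [ "$SECURITY_EXIT" -eq 2 ]; then SCAN_STATUS="❌ CRITICAL"
elif [ "$SECURITY_EXIT" -eq 1 ]; then SCAN_STATUS="⚠️ WARNING"
else                                  SCAN_STATUS="✅ PASS"
fi
API_STATUS=$([ "$API_EXIT" -eq 0 ] && echo "✅ PASS" || echo "❌ FAIL")
if [ "$SECURITY_EXIT" -lt 2 ] && [ "$API_EXIT" -eq 0 ]; then
  VERDICT="✅ APPROVED"
else
  VERDICT="🔧 NEEDS_FIXES"
fi

echo "Security Scan: $SCAN_STATUS (critical: $CRITICAL_COUNT, high: $HIGH_COUNT, medium: ${MEDIUM_COUNT:-0}, low: ${LOW_COUNT:-0})"
echo "API Contract:  $API_STATUS"
echo "VERDICT:       $VERDICT"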

Step 5.5.5: Transition to Approval [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition AWAITING_IMPL_APPROVAL

═══════════════════════════════════════════════════════════════

PHASE 6: GATE 2 - Implementation Approval

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = AWAITING_IMPL_APPROVAL (verified via gate check)
Exit Condition: Implementation approved, phase = COMPLETING

Step 6.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter AWAITING_IMPL_APPROVAL
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter AWAITING_IMPL_APPROVAL phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter AWAITING_IMPL_APPROVAL

IF AUTO_MODE = true:

# Auto-approve since review and security checks passed
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint implementation_approved \
  --phase AWAITING_IMPL_APPROVAL --status passed \
  --data "{\"approver\": \"auto\", \"mode\": \"auto\"}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete AWAITING_IMPL_APPROVAL

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py approve implementation
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition COMPLETING

Output: " Implementation auto-approved. Proceeding to completion."

IF AUTO_MODE = false:

Use AskUserQuestion:

Question: "Review complete. How do you want to proceed?"
Options:
  1. "Approve - Mark as complete"
  2. "Reject - Request fixes"
  3. "Pause - Save and exit"

On Approve:

# Save approval checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint implementation_approved \
  --phase AWAITING_IMPL_APPROVAL --status passed \
  --data "{\"approver\": \"user\", \"mode\": \"manual\"}"

# Complete the phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete AWAITING_IMPL_APPROVAL

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py approve implementation
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition COMPLETING

On Reject: Provide feedback, return to Phase 4
On Pause: Output the resume command and stop
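For the reject path, one way to record the rejection before looping back, mirroring the fix-loop pattern used in the security phase. A sketch only - it assumes Phase 4 is the IMPLEMENTING phase and that the same phase_gate.py / workflow_manager.py subcommands apply here:

# Hedged sketch: log the rejection, then return the workflow to IMPLEMENTING.
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint implementation_approved \
  --phase AWAITING_IMPL_APPROVAL --status failed \
  --data "{\"approver\": \"user\", \"mode\": \"manual\"}"
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py transition IMPLEMENTING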


═══════════════════════════════════════════════════════════════

PHASE 7: COMPLETING

═══════════════════════════════════════════════════════════════

Entry Condition: Phase = COMPLETING (verified via gate check)
Exit Condition: Version marked complete, success report displayed

Step 7.0: Gate Entry Check [MANDATORY - BLOCKING]

# MUST pass before proceeding - HALT if fails
python3 skills/guardrail-orchestrator/scripts/phase_gate.py can-enter COMPLETING
GATE_EXIT=$?
if [ $GATE_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Cannot enter COMPLETING phase"
  python3 skills/guardrail-orchestrator/scripts/phase_gate.py blockers
  exit 1
fi

# Enter the phase (records entry timestamp)
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter COMPLETING

Step 7.1: Verify Phase State [MANDATORY]

python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status

BLOCK IF: Phase is not COMPLETING
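A sketch of enforcing the BLOCK IF condition mechanically; it assumes the status output contains the current phase name, so the grep may need adjusting to the script's real output format:

# Halt unless the reported phase is COMPLETING (output format is assumed).
if ! python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status | grep -q "COMPLETING"; then
  echo "❌ BLOCKED: Phase is not COMPLETING - run /workflow:status"
  exit 1
fi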

Step 7.2: Update Task Statuses [MANDATORY]

# Mark all tasks as completed
for task in .workflow/versions/$VERSION_ID/tasks/*.yml; do
  # BSD/macOS sed requires an explicit empty suffix after -i; if that form
  # errors (GNU/Linux sed), fall back to the suffix-less GNU invocation.
  sed -i '' 's/status: .*/status: completed/' "$task" 2>/dev/null || \
  sed -i 's/status: .*/status: completed/' "$task"
done
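An optional follow-up check (a sketch) to confirm the loop actually updated every task file before moving on:

# grep -L lists files that do NOT contain the pattern; any hit means a task
# was left in a non-completed status.
LEFTOVER=$(grep -L "status: completed" .workflow/versions/$VERSION_ID/tasks/*.yml 2>/dev/null | wc -l | tr -d ' ')
if [ "$LEFTOVER" -ne 0 ]; then
  echo "❌ $LEFTOVER task file(s) still not marked completed"
  exit 1
fi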

Step 7.3: Update Manifest Statuses [MANDATORY]

Update all entities referenced in tasks from "PENDING" to "IMPLEMENTED"
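The manifest schema is not spelled out here; the sketch below assumes project_manifest.json holds an entities list whose items carry a status field - adjust the path and keys to the real manifest layout:

# Hedged sketch: flip PENDING → IMPLEMENTED in the manifest.
python3 - <<'EOF'
import json

path = "project_manifest.json"  # adjust if the manifest lives elsewhere
with open(path) as f:
    manifest = json.load(f)

# "entities" / "status" are assumed keys - match them to the actual schema.
for entity in manifest.get("entities", []):
    if entity.get("status") == "PENDING":
        entity["status"] = "IMPLEMENTED"

with open(path, "w") as f:
    json.dump(manifest, f, indent=2)
EOF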

Step 7.4: Complete Version & Phase [MANDATORY]

# Save checkpoints
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint tasks_marked_complete \
  --phase COMPLETING --status passed
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint manifest_updated \
  --phase COMPLETING --status passed

# Complete the version
python3 skills/guardrail-orchestrator/scripts/version_manager.py complete
VERSION_EXIT=$?

# Save version completion checkpoint
python3 skills/guardrail-orchestrator/scripts/phase_gate.py checkpoint version_finalized \
  --phase COMPLETING --status $([ $VERSION_EXIT -eq 0 ] && echo "passed" || echo "failed")

if [ $VERSION_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Version finalization failed"
  exit 1
fi

# Complete the COMPLETING phase
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete COMPLETING

# Transition to final COMPLETED state
python3 skills/guardrail-orchestrator/scripts/phase_gate.py enter COMPLETED
python3 skills/guardrail-orchestrator/scripts/phase_gate.py complete COMPLETED

VERIFY: Script exits with code 0
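One way to make that verification explicit, placed directly after the last phase_gate.py call above (a minimal sketch):

# Capture and check the exit code of the final phase_gate.py call.
FINAL_EXIT=$?
if [ $FINAL_EXIT -ne 0 ]; then
  echo "❌ BLOCKED: Final phase transition failed (exit $FINAL_EXIT)"
  exit 1
fi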

Step 7.5: Final Report [MANDATORY]

IF AUTO_MODE = true: Display completion-only report (NO next steps)

╔══════════════════════════════════════════════════════════════╗
║ ✅ WORKFLOW COMPLETED (AUTO)                                 ║
╠══════════════════════════════════════════════════════════════╣
║ Version:  $VERSION_ID                                        ║
║ Feature:  $FEATURE                                           ║
╠══════════════════════════════════════════════════════════════╣
║ SUMMARY                                                      ║
║   Tasks completed:  X                                        ║
║   Files created:    X                                        ║
║   Files modified:   X                                        ║
║   Build:            PASS                                     ║
╚══════════════════════════════════════════════════════════════╝

DO NOT include "NEXT STEPS" in AUTO mode - the workflow is complete.

IF AUTO_MODE = false: Display full report with next steps

╔══════════════════════════════════════════════════════════════╗
║ ✅ WORKFLOW COMPLETED                                        ║
╠══════════════════════════════════════════════════════════════╣
║ Version:  $VERSION_ID                                        ║
║ Feature:  $FEATURE                                           ║
║ Mode:     INTERACTIVE                                        ║
╠══════════════════════════════════════════════════════════════╣
║ SUMMARY                                                      ║
║   Tasks completed:  X                                        ║
║   Files created:    X                                        ║
║   Files modified:   X                                        ║
╠══════════════════════════════════════════════════════════════╣
║ NEXT STEPS                                                   ║
║   npm run dev              Test the feature                  ║
║   /workflow:history        View all versions                 ║
╚══════════════════════════════════════════════════════════════╝
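The X placeholders in both report templates can be computed mechanically. A sketch assuming a git working tree and the version-specific tasks directory used earlier; untracked new files are counted as created:

# Hedged sketch for the summary counters.
TASKS_COMPLETED=$(ls .workflow/versions/$VERSION_ID/tasks/*.yml 2>/dev/null | wc -l | tr -d ' ')
FILES_CREATED=$(git status --porcelain 2>/dev/null | grep -cE '^(\?\?|A )' || true)
FILES_MODIFIED=$(git status --porcelain 2>/dev/null | grep -cE '^( M|M |MM)' || true)
echo "Tasks completed: $TASKS_COMPLETED"
echo "Files created:   $FILES_CREATED"
echo "Files modified:  $FILES_MODIFIED"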

USAGE

# Manual mode (full control, stops at all gates)
/workflow:spawn add user profile page

# Auto mode (guided discovery with questions, auto-approves gates)
/workflow:spawn --auto add user authentication

# Full-auto mode (AI expands idea, only asks for acceptance criteria)
/workflow:spawn --full-auto add dark mode toggle

Mode Selection Guide

| Use Case | Recommended Mode | Why |
|----------|------------------|-----|
| Complex feature, unclear requirements | --auto | AI guides you through requirements |
| Quick prototype, trust AI judgment | --full-auto | Fast, minimal input needed |
| Specific requirements already known | Manual | Full control over every step |
| Learning the workflow | Manual | See all gates and decisions |
| Production feature | --auto | Ensures requirements are complete |

Examples

# Manual - full control
/workflow:spawn add user authentication

# Auto - AI asks questions until requirements are clear
/workflow:spawn --auto add user authentication
# AI asks: "What auth method?" → "OAuth providers?" → "Password reset?" → etc.

# Full-auto - AI expands idea, you approve criteria
/workflow:spawn --full-auto add user authentication
# AI expands: "I'll add login, register, password reset, OAuth, profile..."
# AI asks: "What are your acceptance criteria?"

ERROR RECOVERY

| Situation | Command |
|-----------|---------|
| Workflow interrupted | /workflow:resume |
| Check current state | /workflow:status |
| View history | /workflow:history |
| Skip to a specific phase | Not allowed - must follow the phase sequence |

ENFORCEMENT CHECKLIST

Before completing this command, verify (a mechanical self-check sketch follows this list):

  • Version created with version_manager.py
  • Phase transitions logged with workflow_manager.py
  • Task files exist in .workflow/versions/$VERSION_ID/tasks/
  • Build passes (exit code 0)
  • All file_paths in tasks exist
  • Final report displayed
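A self-check sketch for the items above; the script paths and the build/type-check/lint commands mirror those already required by this workflow, and it skips the per-task file_path check, which depends on the task YAML layout:

# Re-verify the enforcement checklist before declaring the run done.
set -e
python3 skills/guardrail-orchestrator/scripts/workflow_manager.py status   # phase transitions logged
ls .workflow/versions/$VERSION_ID/tasks/*.yml > /dev/null                  # task files exist
npm run build                                                              # build passes
npx tsc --noEmit                                                           # type check passes
npm run lint                                                               # lint passes
echo "✅ Enforcement checklist verified"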