docs: update all documentation and add AI tooling configs

- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 13:29:03 -04:00
parent c5b377e788
commit 647cbec54f
3246 changed files with 479789 additions and 983 deletions


@@ -0,0 +1,367 @@
# Interactive Prototypes - Getting Started Guide
**Version**: 1.0
**Last Updated**: December 10, 2025
**For**: WDS Agents (Freya, Saga)
---
## 🎯 Overview
This system creates **production-ready, self-contained interactive prototypes** using:
- **Tailwind CSS** - No separate CSS files
- **Vanilla JavaScript** - Components in shared folders
- **Section-by-section** - Approval gates prevent errors
- **Just-in-time stories** - Created as needed, not all upfront
- **Demo data auto-loading** - Works immediately
- **Self-contained** - Zip & share, works anywhere
---
## 📁 Folder Structure (Per Scenario)
```
[Scenario]/Prototype/
├── [Page-1].html              ← HTML in ROOT (double-click to open)
├── [Page-2].html              ← HTML in ROOT
├── [Page-3].html              ← HTML in ROOT
├── shared/                    ← Shared code (ONE COPY)
│   ├── prototype-api.js
│   ├── init.js
│   └── utils.js
├── components/                ← Reusable components (ONE COPY)
│   ├── image-crop.js
│   ├── toast.js
│   ├── modal.js
│   └── form-validation.js
├── pages/                     ← Page-specific scripts (only if >150 lines)
│   ├── [complex-page].js
│   └── [another-complex-page].js
├── data/                      ← Demo data (auto-loads)
│   ├── demo-data.json
│   └── [additional-data].json
├── assets/                    ← Images, icons (optional)
│   ├── images/
│   └── icons/
├── stories/                   ← Section dev files (created just-in-time)
│   ├── [Page].1-[section].md
│   ├── [Page].2-[section].md
│   └── ...
├── work/                      ← Planning files (created at start)
│   ├── [Page]-Work.yaml
│   └── ...
└── PROTOTYPE-ROADMAP.md       ← ONE document with everything
```
---
## 🔄 Complete Workflow
### Phase 1: INITIATION & PLANNING
1. **User requests** prototype for [Page]
2. **Agent asks** about device compatibility
3. **Agent creates** `work/[Page]-Work.yaml` (complete plan)
4. **User reviews** and approves plan
5. **Ready to implement** section-by-section
### Phase 2: SECTION-BY-SECTION IMPLEMENTATION
**For each section (1-N)**:
1. **Agent announces** section
2. **Agent creates** story file (just-in-time)
3. **Agent implements** in HTML (root location from start)
4. **Agent presents** for testing
5. **User tests** and gives feedback
6. **Agent fixes** any issues (loop until approved)
7. **User approves** → Move to next section
### Phase 3: FINALIZATION
1. **All sections complete**
2. **Final integration test**
3. **User approves**
4. **Prototype complete** (already in final location)
---
## 📄 Templates Available
### In `templates/` folder:
1. **`work-file-template.yaml`**
- Complete planning document
- Created ONCE at start
- High-level section breakdown
2. **`story-file-template.md`**
- Detailed section implementation guide
- Created JUST-IN-TIME before each section
- Documents what was actually built
3. **`page-template.html`**
- Complete HTML page with Tailwind
- Inline JavaScript examples
- All common patterns included
4. **`PROTOTYPE-ROADMAP-template.md`**
- Scenario overview document
- One per scenario Prototype folder
5. **`demo-data-template.json`**
- Demo data structure
- Auto-loads on first page open
---
## 🎨 Key Principles
### 1. Tailwind First
- Use Tailwind CDN
- Inline config for project colors
- Custom CSS only for what Tailwind can't do
- No separate CSS files
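The inline config lives in a `<script>` tag placed right after the Tailwind CDN `<script>` tag; this object is assigned to `tailwind.config` (the Play CDN pattern). A minimal sketch — the token names, hex values, and font are illustrative, not project spec:

```javascript
// Assumed Tailwind Play CDN pattern: assign this object to
// `tailwind.config` in an inline <script> after the CDN script tag.
// Color tokens and font choices below are examples, not project values.
const tailwindConfig = {
  theme: {
    extend: {
      colors: {
        primary: '#2563EB',  // example primary button color
        danger: '#DC2626'    // example error color
      },
      fontFamily: {
        sans: ['Inter', 'sans-serif']  // Google Fonts Inter (optional)
      }
    }
  }
};
```

With this in place, pages can use classes like `bg-primary` and `text-danger` without any separate CSS file.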
### 2. Pages in Root
- All HTML files in Prototype root
- Easy to find and open
- Simple relative paths
- No nested page folders
### 3. ONE COPY of Shared Code
- `shared/` contains ONE copy of each utility
- `components/` contains ONE copy of each component
- Update once → affects all pages
- Zero duplication
### 4. Self-Contained
- Zip entire Prototype folder
- Works on any computer
- No server needed
- No setup needed
### 5. Section-by-Section
- Break page into 4-8 sections
- Build one section at a time
- Test after each section
- Approval gate before next section
- Prevents errors from compounding
### 6. Just-in-Time Stories
- Create story file RIGHT BEFORE implementing each section
- Not all at once upfront
- Allows flexibility to adjust based on feedback
- Documents exactly what was built (including changes)
### 7. Build in Final Location
- No temp folder
- Create file in root from start
- Add sections incrementally
- Use "🚧" placeholders for upcoming sections
- File grows organically
---
## 🛠️ Tools & Technologies
**Required**:
- Tailwind CSS (via CDN)
- Vanilla JavaScript (no frameworks)
- SessionStorage (for demo data)
**Optional**:
- Google Fonts (Inter recommended)
- Custom fonts in `assets/fonts/`
**Not Needed**:
- Node.js / npm
- Build process
- CSS preprocessors
- Bundlers
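The demo-data auto-load can be sketched as follows — a minimal version of what a shared init script might do, assuming an embedded demo object so the prototype works on first open with no server. Key names, the data shape, and the injectable `storage` parameter are illustrative:

```javascript
// Sketch of demo-data auto-seeding (assumed mechanism): on first open,
// copy an embedded demo object into sessionStorage; later opens are
// no-ops. `storage` is injectable so the sketch is easy to test.
function ensureDemoData(demo, storage = sessionStorage) {
  if (storage.getItem('demoLoaded')) return false;  // already seeded
  for (const [key, value] of Object.entries(demo)) {
    storage.setItem(key, JSON.stringify(value));    // e.g. users, dogs
  }
  storage.setItem('demoLoaded', 'true');
  return true;
}
```

SessionStorage keeps the demo state per browser tab and clears it when the tab closes, which is usually what you want for a shareable prototype.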
---
## 📚 For Agents
### Freya (UX/UI Designer)
**Primary role**: Create interactive prototypes
**Read**:
1. `FREYA-WORKFLOW-INSTRUCTIONS.md` (complete step-by-step)
2. `templates/` (use these for all work)
3. Dog Week examples (reference implementations)
**Create**:
1. Work files (planning)
2. Story files (just-in-time)
3. HTML pages (section-by-section)
4. Demo data (if new data entities)
---
### Saga (Analyst)
**Role in prototypes**: Provide specifications, validate requirements
**Read**:
1. Work files (understand planned sections)
2. Story files (review implementation details)
3. Completed prototypes (validate against requirements)
**Create**:
1. Page specifications (source for work files)
2. User flow documentation
3. Success criteria definitions
---
## 🎓 Learning Path
### Week 1: Understand the System
- Read this guide
- Read `FREYA-WORKFLOW-INSTRUCTIONS.md`
- Open Dog Week prototypes
- Test in browser
- Check console logs
### Week 2: Study Examples
- Read 1.2-Sign-In.html (simple)
- Read 1.6-Add-Dog.html (medium)
- Read 3.1-Calendar.html (complex)
- Compare to their work files
- Review story files
### Week 3: Modify Example
- Copy existing prototype
- Change fields, text, colors
- Test modifications
- Understand file relationships
### Week 4: Create New Prototype
- Start with simple page
- Follow workflow exactly
- Build section-by-section
- Get feedback, iterate
---
## ✅ Quality Standards
Every prototype must have:
**Functionality**:
- [ ] All interactions work
- [ ] Form validation correct
- [ ] Loading states display
- [ ] Success/error feedback shows
- [ ] Navigation works
- [ ] Data persists
**Code Quality**:
- [ ] All Object IDs present
- [ ] Tailwind classes used properly
- [ ] Console logs helpful
- [ ] No console errors
- [ ] Inline JS < 150 lines (or external file)
- [ ] Functions documented
**Mobile**:
- [ ] Tested at target width
- [ ] Touch targets min 44px
- [ ] No horizontal scroll
- [ ] Text readable
**Documentation**:
- [ ] Work file complete
- [ ] Story files for all sections
- [ ] Changes documented
- [ ] Status updated
---
## 🚀 Benefits
| Aspect | Benefit |
|--------|---------|
| **For Designers** | No coding complexity, visual results fast |
| **For Users** | Real interactions, usable for testing |
| **For Developers** | Clear implementation reference |
| **For Stakeholders** | Works immediately, no setup |
| **For Project** | Self-contained, easy to share |
---
## 📊 Success Metrics
- **Speed**: 30-45 min per page (section-by-section)
- **Quality**: Production-ready code
- **Error Rate**: Low (approval gates prevent issues)
- **Flexibility**: High (adjust as you go)
- **Reusability**: High (shared components)
- **Maintainability**: High (ONE copy of shared code)
---
## 🆘 Need Help?
**Question**: "How do I start?"
**Answer**: Read `FREYA-WORKFLOW-INSTRUCTIONS.md` and follow step-by-step
**Question**: "Which template do I use?"
**Answer**:
- Planning → `work-file-template.yaml`
- Implementing → `story-file-template.md` (just-in-time)
- Coding → `page-template.html`
**Question**: "How do I create demo data?"
**Answer**: Copy `demo-data-template.json`, fill in values, save to `data/` folder
**Question**: "What if section needs changes?"
**Answer**: Make changes directly in HTML, document in story file, re-test, get approval
**Question**: "How do I share prototype?"
**Answer**: Zip entire Prototype folder, send to stakeholder
---
## 📝 Quick Reference
**Start new prototype**: Create work file → Get approval → Build section 1
**Add section**: Create story → Implement → Test → Get approval → Next section
**Fix issue**: Update HTML → Re-test → Get approval
**Complete prototype**: Final integration test → Update status → Done
**Share prototype**: Zip Prototype folder → Send
---
## 🎯 Remember
1. **Tailwind first** - Use classes, not custom CSS
2. **Pages in root** - Easy to find and open
3. **ONE COPY** - No duplication of shared code
4. **Section-by-section** - Approval gates prevent errors
5. **Just-in-time stories** - Create when needed, not all upfront
6. **Build in final location** - No temp folder needed
7. **Test after each section** - Don't wait until the end
8. **Object IDs always** - Every interactive element
9. **Demo data ready** - Auto-loads on first use
10. **Self-contained** - Zip & works anywhere
---
**You are ready to create production-ready interactive prototypes!** 🚀
For detailed step-by-step instructions, see: `FREYA-WORKFLOW-INSTRUCTIONS.md`


@@ -0,0 +1,75 @@
# Execution Principles
## Document Before Acting
**Every decision, action, and problem must be documented in the dialog file BEFORE acting on it.**
This ensures full traceability, clean handoff, and the dialog document is always the source of truth.
## Sketch Fidelity
**Implement code as close to the provided sketches as possible.**
Sketches are intentional design decisions, not loose suggestions:
| Element | Approach |
|---------|----------|
| **Text sizes** | Match relative sizes (headings vs body vs labels) |
| **Proportions** | Preserve ratios between elements |
| **Spacing** | Maintain visual rhythm and whitespace |
| **Layout** | Follow the arrangement precisely |
| **Component style** | Match the visual pattern (pills, cards, buttons) |
When in doubt: ask the designer. If constraints make exact matching impossible, document the deviation and explain why.
## Sub-Steps During Execution
While working on a step, add discovered tasks as sub-steps:
```markdown
| # | Section | Status | Notes |
|---|---------|--------|-------|
| 14 | Book It Button | Done | Complete |
| 14a | Fix button alignment | Done | Added during 14 |
| 14b | Add loading state | Done | Added during 14 |
| 15 | Cancel Button | In Progress | |
```
Sub-steps use letter suffixes (14a, 14b) to maintain parent position.
## Dynamic Planning After Step Completion
After completing each step, review and adjust the plan:
1. Review remaining steps — still accurate?
2. Shuffle if needed — reorder based on learnings
3. Add new steps — if implementation revealed new requirements
4. Remove steps — if no longer needed
5. Update the dialog file
**Numbering rules:** Completed steps = fixed numbering. Future steps = dynamic numbering.
## Plan-then-Execute Pattern
**Separate planning from execution into distinct sessions.**
Context windows are finite. Long sessions accumulate noise. The solution:
**Planning Session:**
1. Explore codebase and requirements
2. Discuss approach with designer
3. Write plan to dialog file
4. End with clear handoff
**Execution Session:**
1. Start fresh (new conversation)
2. Read product brief for context
3. Read page specification for requirements
4. Read dialog document for plan and progress
5. Execute steps one by one
**When to split:** After complex exploration, when plan is complete, when session is getting long, before major implementation.
## Handoff Always References Dialog
Any handoff — to a new session, agent, or human — **MUST** reference the dialog document. Never hand off verbally. Always point to the dialog.


@@ -0,0 +1,86 @@
# User Feedback Protocol
**CRITICAL: Never implement feedback without first classifying it and stating when it should be addressed.**
## Feedback Types
| Type | What It Is | When to Address |
|------|------------|-----------------|
| **Bug/Issue** | Something broken, error, not working | Now — fix immediately, iterate until resolved |
| **Quick Adjustment** | Small tweak, change X to Y | Now — implement immediately |
| **Addition** | New requirement that fits current dialog | Later step — add to plan |
| **Change Request** | Outside current dialog scope | Future session — document in Change Requests |
## The 2-Minute Rule (GTD)
**If a fix takes less than 2 minutes, do it immediately.**
From David Allen's "Getting Things Done": planning overhead should not exceed task complexity.
| Situation | Action |
|-----------|--------|
| Missing condition check | Fix now, log as sub-step |
| Wrong variable name | Fix now, log as sub-step |
| Needs new component | Add to plan |
| Architectural change | Add to plan |
**Pattern:** Do the fix → Log as sub-step (e.g., 20a-1) → Continue main task
## Response Flow
When user reports something:
1. **CLASSIFY** — What type of feedback is this?
2. **TIMING** — When should it be addressed?
3. **DOCUMENT** — For bugs, add to plan BEFORE fixing
4. **CONFIRM** — For additions and change requests, confirm before proceeding
5. **EXECUTE** — Implement or document as appropriate
### Bug/Issue (Document First, Then Fix)
**User says:** "This is broken" / "Error occurred" / "Not working"
**Agent response:**
```
This is a bug — [brief description].
I'll log it as sub-step 10a, add it to the plan, then fix it.
```
Required sequence:
1. Identify — Understand and describe the bug
2. Document — Add to dialog plan as sub-step (e.g., 21a, 21b)
3. Execute — Fix and iterate until resolved
4. Log — Record resolution in progress log
If user reports multiple issues: list each separately, add ALL to plan first, then fix one by one.
### Quick Adjustment (Fix Now)
**User says:** "Change X to Y" / "Make this button go here"
**Agent response:** "Quick adjustment — I'll implement this now." Then implement.
### Addition (Add to Plan)
**User says:** "We should also add X"
**Agent response:** "This is an addition that fits the current dialog. I'll add it to Step {N}. Confirm?"
### Change Request (Document for Later)
**User says:** "We need a settings page"
**Agent response:**
```
This is outside the current dialog scope.
It doesn't block {feature name}.
I'll add it to Change Requests for a future session. Confirm?
```
**WAIT for user confirmation.** If user says "do it now" → treat as quick adjustment.
### Anti-Pattern
**NEVER** immediately implement without classifying. **ALWAYS** classify, state timing, then confirm or act.
The extra seconds to classify and confirm build trust and ensure alignment.


@@ -0,0 +1,212 @@
# Agentic Development - File Index
**Location**: `src/workflows/5-agentic-development/`
---
## 📁 Complete File Structure
```
agentic-development/
├── AGENTIC-DEVELOPMENT-GUIDE.md       ← START HERE (overview & quick reference)
├── workflow.md                        ← Workflow overview with phase links
├── PROTOTYPE-INITIATION-DIALOG.md     ← Conversation scripts for initiation
├── CREATION-GUIDE.md                  ← Original detailed guide (reference)
├── PROTOTYPE-ANALYSIS.md              ← Dog Week analysis (examples)
├── steps-p/                           ← Micro-step workflow files
│   ├── 1-prototype-setup.md           ← Phase 1: Environment setup
│   ├── 2-scenario-analysis.md         ← Phase 2: Analyze spec & create views
│   ├── 3-logical-view-breakdown.md    ← Phase 3: Break view into sections
│   ├── 4a-announce-and-gather.md      ← Phase 4a: Announce section
│   ├── 4b-create-story-file.md        ← Phase 4b: Create story file
│   ├── 4c-implement-section.md        ← Phase 4c: Implement code
│   ├── 4d-present-for-testing.md      ← Phase 4d: Present for testing
│   ├── 4e-handle-issue.md             ← Phase 4e: Fix issues (loop)
│   ├── 4f-handle-improvement.md       ← Phase 4f: Handle improvements (loop)
│   ├── 4g-section-approved.md         ← Phase 4g: Section approved
│   └── 5-finalization.md              ← Phase 5: Integration test & approval
├── templates/
│   ├── work-file-template.yaml        ← Planning document template
│   ├── story-file-template.md         ← Section implementation template
│   ├── page-template.html             ← Complete HTML page template
│   ├── PROTOTYPE-ROADMAP-template.md  ← Scenario roadmap template
│   ├── demo-data-template.json        ← Demo data structure template
│   └── components/
│       ├── dev-mode.html              ← Dev mode toggle button
│       ├── dev-mode.js                ← Dev mode logic (Shift+Click to copy IDs)
│       ├── dev-mode.css               ← Dev mode styles
│       └── DEV-MODE-GUIDE.md          ← Dev mode usage guide
└── examples/
    └── (Dog Week prototypes as reference)
```
---
## 📚 What Each File Does
### Core Documentation
#### `AGENTIC-DEVELOPMENT-GUIDE.md`
**Purpose**: Complete system overview
**For**: All agents (Freya, Saga)
**Contains**:
- System overview
- Folder structure
- Complete workflow summary
- Key principles
- Quick reference
- Success metrics
**Read this**: To understand the complete system
---
#### `workflow.md`
**Purpose**: Workflow overview with phase navigation
**For**: Freya (primary), other agents (reference)
**Contains**:
- Overview of all phases
- Clear links to step files
- When to use each phase
- What each phase creates
**Read this**: To understand the workflow structure
---
### Step Files
#### `steps-p/1-prototype-setup.md`
**Purpose**: Environment setup instructions
**Contains**: Device compatibility, design fidelity, languages, demo data creation
**Next**: Phase 2
---
#### `steps-p/2-scenario-analysis.md`
**Purpose**: Scenario analysis and view identification
**Contains**: Spec analysis, logical view mapping
**Next**: Phase 3
---
#### `steps-p/3-logical-view-breakdown.md`
**Purpose**: Break view into implementable sections
**Contains**: Section breakdown, work file creation
**Next**: Phase 4
---
#### `steps-p/4a-4g-*.md` (Phase 4 Loop)
**Purpose**: Section-by-section implementation
**Contains**: Announce, create story, implement, test, handle feedback, approve
**Flow**: 4a → 4b → 4c → 4d → [4e/4f loop] → 4g → [next section]
---
#### `steps-p/5-finalization.md`
**Purpose**: Integration test and completion
**Contains**: Final test, quality checklist, next steps
**Next**: New page (Phase 3) or new scenario (Phase 1)
---
### Templates
#### `templates/work-file-template.yaml`
**Purpose**: Planning document
**When to use**: Start of EVERY implementation
**Created**: Once per page at beginning
**Contains**:
- Metadata (page info, device compatibility)
- Design tokens (Tailwind config)
- Page requirements (from spec)
- Demo data needs
- Object ID map
- Section breakdown (4-8 sections)
- Testing checklist
**Use this**: To create work file (plan BEFORE coding)
---
#### `templates/story-file-template.md`
**Purpose**: Section implementation guide
**When to use**: Just-in-time (right before implementing each section)
**Created**: Once per section (4-8 per page)
**Contains**:
- Section goal
- What to build (HTML/JS)
- Tailwind classes to use
- Dependencies
- Acceptance criteria
- Test instructions
- Common issues
**Use this**: To create story file before each section
---
#### `templates/page-template.html`
**Purpose**: Complete HTML page structure
**When to use**: Creating new HTML page
**Created**: Once per page (at start of Section 1)
**Contains**:
- Complete HTML structure
- Tailwind CDN setup
- Tailwind config inline
- Component examples
- Shared script includes
**Use this**: As starting point for new page HTML
---
## 🎯 Which File When?
### Starting New Scenario
1. Read: `workflow.md` (understand phases)
2. Follow: `steps-p/1-prototype-setup.md` (setup)
3. Use: `PROTOTYPE-ROADMAP-template.md` → Create roadmap
4. Use: `demo-data-template.json` → Create demo data
### Starting New Page
1. Follow: `steps-p/2-scenario-analysis.md` (analyze)
2. Follow: `steps-p/3-logical-view-breakdown.md` (break down)
3. Use: `work-file-template.yaml` → Create work file
4. Get approval
### Implementing Each Section
1. Follow: `steps-p/4a-4g-*.md` (loop)
2. Use: `story-file-template.md` → Create story file (just-in-time)
3. Implement in HTML (incrementally)
4. Test
5. Get approval
6. Repeat for next section
### Finishing Page
1. Follow: `steps-p/5-finalization.md` (integration test)
2. Get final approval
3. Choose: New page, new scenario, or done
---
## 📝 Template Usage Summary
| Template | When Created | How Many | Purpose |
|----------|--------------|----------|---------|
| work-file | Start of page | 1 per page | Complete plan |
| story-file | Before each section | 4-8 per page | Section implementation |
| page | Start of Section 1 | 1 per page | HTML structure |
| roadmap | Start of scenario | 1 per scenario | Scenario overview |
| demo-data | Setup scenario | 1 per scenario | Auto-loading data |
---
**All templates and micro-step instructions are ready!**
Next step: Activate Freya and follow `workflow.md` → `steps-p/1-prototype-setup.md`


@@ -0,0 +1,190 @@
# Inline Testing Guide
**For**: WDS Agents performing Agentic Development
**Purpose**: Self-verify implementation using Puppeteer before presenting to user
**Scope**: During-development testing (NOT Phase 7 post-development validation)
---
## Core Principle
**The agent tests its own work before presenting it to the user.**
After implementing a section, the agent uses Puppeteer to open the browser, navigate to the page, and verify all measurable acceptance criteria. Only after all measurable criteria pass does the agent present the result to the user for qualitative feedback.
---
## Responsibility Split
| Responsibility | Owner | Examples |
|---------------|-------|----------|
| **Measurable criteria** | Agent (Puppeteer) | Text content matches spec, colors match hex values, touch targets >= 44px, error states display correctly, element visibility, layout positioning |
| **Qualitative judgment** | Human | Flow feels natural, visual hierarchy works, user understands next steps, pacing feels right, overall consistency |
**The agent never asks the user to verify something it can measure itself.**
---
## When to Test
| Trigger | Action |
|---------|--------|
| Section implementation complete (4c done) | Run Puppeteer verification before presenting (4d) |
| Public page implementation complete | Run SEO validation → [SEO-VALIDATION-GUIDE.md](SEO-VALIDATION-GUIDE.md) |
| Issue fixed (4e done) | Re-verify the fix + check for regressions before re-presenting |
| Modifying existing feature | Capture baseline BEFORE making changes |
| Integration test (Phase 5) | Verify all states across all sections |
---
## Baseline Capture
When modifying an existing feature, capture current state BEFORE making changes:
1. Open browser with Puppeteer
2. Navigate to the page/component
3. Document current state:
- Screenshot the current rendering
- Key measurable values (text, colors, dimensions)
- Current behavior for each relevant interaction
4. Record as baseline in the story file under "Baseline State"
5. After implementation, compare against baseline to confirm only intended changes occurred
**Why:** Without a baseline, you can't distinguish intended changes from regressions. The agent needs to know what "before" looked like to verify "after" is correct.
---
## Puppeteer Verification Process
### Step 1: Open and Navigate
```
1. Open browser with Puppeteer
2. Navigate to [View].html or the relevant page URL
3. Wait for page to fully load
4. Set viewport to target device width if relevant (e.g., 375px for mobile)
```
### Step 2: Verify Each Criterion
For each acceptance criterion in the test plan:
```
1. Locate the element (by data-object-id, selector, or content)
2. Read the actual value (text, computed style, dimensions, visibility)
3. Compare against the spec value
4. Record result with narration
```
### Step 3: Narrate Findings
Use this narration pattern — group by category, state both actual and expected:
```
Verifying Section [N]: [Section Name]
Text Content:
Headline text is "Boka promenad" — matches spec. ✓
Subtext is "Välj tid och dag" — matches spec. ✓
Styling:
Primary button background is #2563EB — matches spec. ✓
Error text color is #EF4444 — spec says #DC2626. ✗ Mismatch.
Layout:
Touch target is 48x48px — meets minimum 44px. ✓
Input field width is 100% of container — matches spec. ✓
States:
Empty state shows placeholder text — correct. ✓
Error state displays validation message — correct. ✓
Loading state disables button and shows spinner — correct. ✓
Result: 8/9 criteria pass. 1 mismatch found.
```
**Rules:**
- Always state both actual and expected values
- Always group by category for readability
- Always end with a summary line (X/Y criteria pass)
### Step 4: Fix or Present
- **All criteria pass** — Proceed to Phase 4d (present to user for qualitative feedback)
- **Any criteria fail** — Fix the issue, then re-run verification. Do NOT present to user with known measurable failures.
---
## Test Plan Structure
Story files split acceptance criteria into two categories. This is the format:
### Agent-Verifiable (Puppeteer)
Measurable criteria the agent checks itself:
| # | Criterion | Element | Expected | How to Verify |
|---|-----------|---------|----------|---------------|
| 1 | Headline text | `[data-object-id="section-title"]` | "Boka promenad" | Read textContent |
| 2 | Button color | `[data-object-id="submit-btn"]` | bg: #2563EB | Read computed backgroundColor |
| 3 | Touch target | `[data-object-id="submit-btn"]` | >= 44x44px | Read offsetWidth, offsetHeight |
| 4 | Error display | `#emailError` | Visible when email invalid | Trigger error, check visibility |
| 5 | Loading state | `[data-object-id="submit-btn"]` | Disabled + spinner | Click submit, check disabled attr |
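The agent-verifiable rows above can be checked in one pass. A minimal sketch, assuming a Puppeteer `page` for the loaded prototype; the selectors and expected values mirror the example table (real ones come from the section's story file):

```javascript
// Sketch: check measurable criteria against a loaded page. `page` is a
// Puppeteer Page; selectors/expected values are the illustrative ones
// from the table above. Each result carries both actual and expected
// so the narration rules (state both, end with a summary) fall out.
async function verifySection(page) {
  const results = [];

  // Criterion 1: headline text matches spec
  const headline = await page.$eval(
    '[data-object-id="section-title"]', el => el.textContent.trim());
  results.push({ name: 'Headline text', actual: headline,
    expected: 'Boka promenad', pass: headline === 'Boka promenad' });

  // Criterion 3: touch target meets the 44px minimum
  const size = await page.$eval('[data-object-id="submit-btn"]',
    el => ({ w: el.offsetWidth, h: el.offsetHeight }));
  results.push({ name: 'Touch target', actual: `${size.w}x${size.h}px`,
    expected: '>= 44x44px', pass: size.w >= 44 && size.h >= 44 });

  const passed = results.filter(r => r.pass).length;
  return { results,
    summary: `Result: ${passed}/${results.length} criteria pass.` };
}
```

Because the function only depends on `page.$eval`, it works the same whether the viewport was set to 375px for mobile or left at desktop width.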
### User-Evaluable (Qualitative)
Criteria only the human can judge:
- [ ] Flow feels natural and intuitive
- [ ] Visual hierarchy guides the eye correctly
- [ ] Error messages are understandable (not just present)
- [ ] Section feels consistent with the rest of the prototype
---
## Integration with Phase 4 Flow
```
4a: Announce & Gather
        ↓
4b: Create Story File (includes split test plan)
        ↓
4c: Implement Section
        ↓
Agent runs Puppeteer verification
Agent runs SEO validation (if public page) → SEO-VALIDATION-GUIDE.md
        ↓
All pass? ── No ──→ Agent fixes, re-verifies (loop)
   │ Yes
   ↓
4d: Present for Testing (user evaluates qualitative criteria only)
        ↓
4e/4f: Handle Issue/Improvement (if needed)
        ↓
4g: Section Approved
```
---
## Distinction from Phase 7 Testing
| Aspect | Inline Testing (This Guide) | Phase 7 Testing |
|--------|----------------------------|-----------------|
| **When** | During development, per section | After development complete |
| **Who tests** | Agent (automated via Puppeteer) | Designer (manual validation) |
| **What** | Measurable spec conformity | Full design vision validation |
| **Scope** | Single section at a time | Entire feature/delivery |
| **Outcome** | Agent fixes before showing user | Issues documented for developer |
These are complementary, not competing. Inline testing catches measurable issues early. Phase 7 testing validates the complete feature against the full design vision.
---
## Anti-Patterns
- **Never present to user with known measurable failures** — Fix them first
- **Never ask user to check something Puppeteer can verify** — Colors, text, sizes are the agent's job
- **Never skip baseline capture when modifying existing features** — Prevents unintended regressions
- **Never narrate without comparison values** — Always state both actual and expected
- **Never batch all testing to the end** — Test each section as you build it
---
*Test as you build. Fix before you present. Let the human focus on what only humans can judge.*


@@ -0,0 +1,832 @@
# Interactive Prototype Analysis - Dog Week Project
**Date**: December 10, 2025
**Project**: Dog Week Mobile Web App
**Analyzed By**: WDS System
**Purpose**: Document proven interactive prototype patterns for WDS agents
---
## 🎯 Executive Summary
The Dog Week project demonstrates **production-ready interactive prototypes** that bridge the gap between design specifications and developer handoff. These prototypes are:
**Fully functional** - Real interactions, state management, data persistence
**Mobile-optimized** - Responsive design with touch interactions
**Developer-ready** - Clean code, documented patterns, easy to extract
**User-testable** - Can be used for real usability testing
**Backend-agnostic** - Uses abstraction layer for easy Supabase integration
---
## 📋 Prototype Inventory
### Analyzed Prototypes
| Page | Location | Features Demonstrated |
| ------------------------ | --------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| **1.2 Sign In** | `C-UX-Scenarios/01-Customer-Onboarding/1.2-Sign-In/Frontend/` | Google SSO, Magic Link, Multi-language, State transitions |
| **1.3 Profile Setup** | `C-UX-Scenarios/01-Customer-Onboarding/1.3-Profile-Setup/Frontend/` | Image upload/crop, Form validation, Multi-language, Terms acceptance |
| **1.6 Add Dog** | `C-UX-Scenarios/01-Customer-Onboarding/1.6-Add-Dog/Frontend/` | Image cropping, Breed search/filter, Split buttons, Character counters |
| **3.1 Calendar Booking** | `C-UX-Scenarios/03-Booking-Dog-Walks/3.1-Dog-Calendar-Booking/Frontend/` | Swedish week calendar, Leaderboard, Dev tools menu, Multi-member switching |
---
## 🏗️ Architecture Patterns
### File Structure (Per Page)
```
1.2-Sign-In/
├── Frontend/
│   ├── 1.2-Sign-In-Preview.html   ← Main HTML with structure
│   ├── 1.2-Sign-In-Preview.css    ← Page-specific styles
│   ├── 1.2-Sign-In-Preview.js     ← Page logic & interactions
│   └── prototype-api.js           ← Shared API abstraction layer
```
**Why this works:**
- **Separation of concerns** - HTML, CSS, JS clearly divided
- **Reusable API layer** - `prototype-api.js` shared across all pages
- **Easy extraction** - Developers can grab entire folder
- **Version control friendly** - Each page isolated, easy to track changes
---
## 🔧 Core Innovation: Prototype API Layer
### The `prototype-api.js` Abstraction
**Location**: `prototype-api.js` (shared across all prototypes)
**Purpose**: Simulate backend API calls using sessionStorage, with clear path to Supabase migration
### Architecture Overview
```javascript
const DogWeekAPI = {
config: {
mode: 'prototype', // Switch to 'production' later
storagePrefix: 'dogweek_'
},
// User operations
async getUser() { ... },
async createUserProfile(userData) { ... },
async signInWithEmail(email) { ... },
// Family operations
async createFamily(familyData) { ... },
async getActiveFamily() { ... },
// Dog operations
async addDog(dogData) { ... },
async getFamilyDogs() { ... },
// Utility
clearAllData() { ... },
getDebugInfo() { ... }
};
```
### Key Features
#### 1. Mode Switching
```javascript
config: {
mode: 'prototype', // or 'production'
supabaseUrl: null,
supabaseKey: null
}
```
**Benefit**: Same calling code works in prototype and production
#### 2. Async/Await Pattern
```javascript
async getUser() {
await this._delay(); // Simulate network latency
if (this.config.mode === 'prototype') {
return this._storage.get('currentUser');
} else {
// TODO: Replace with Supabase auth.getUser()
return null;
}
}
```
**Benefit**: Realistic timing, clear migration path with TODO comments
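`_delay` is referenced above but not shown in the excerpt; a minimal assumed implementation that simulates network latency:

```javascript
// Assumed helper: resolve after a short timeout to mimic network latency.
// The 300 ms default is illustrative, not taken from the project.
function _delay(ms = 300) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

Any caller that does `await this._delay()` then behaves like a real network request, which keeps loading states and spinners honest in the prototype.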
#### 3. SessionStorage Abstraction
```javascript
_storage: {
get(key) {
const prefixedKey = DogWeekAPI.config.storagePrefix + key;
return JSON.parse(sessionStorage.getItem(prefixedKey));
},
set(key, value) { ... },
remove(key) { ... }
}
```
**Benefit**: Easy to swap storage backend without changing calling code
#### 4. Console Logging
```javascript
console.log('🐕 Adding dog to family:', dog.name);
console.log('👤 Creating user profile:', user);
console.log('🔐 Signing in with email:', email);
```
**Benefit**: Developers can track data flow, test without backend
---
## 🎨 UI/UX Patterns
### 1. Multi-Language Support (1.2 Sign In)
**Implementation**:
```javascript
const translations = {
se: {
welcomeTitle: 'Välkommen tillbaka',
welcomeSubtitle: 'Logga in på ditt konto',
// ... all UI text
},
en: {
welcomeTitle: 'Welcome back',
welcomeSubtitle: 'Sign in to your account',
// ...
},
};
function applyLanguage(lang) {
document.getElementById('welcomeTitle').textContent = translations[lang].welcomeTitle;
// ... update all elements
}
```
**Why it's excellent**:
- ✅ All text centralized in one place
- ✅ Easy to add new languages
- ✅ Preserves language preference in storage
- ✅ Instant switching without reload
**Extracted Pattern**: Language selector in header + translation dictionary
---
### 2. Image Upload with Cropping (1.3 Profile Setup, 1.6 Add Dog)
**Flow**:
1. User clicks upload button → file picker
2. Image loaded → **crop modal appears**
3. User adjusts zoom/position → circle mask overlay
4. Confirm → cropped image displayed in avatar
5. Image stored as base64 in sessionStorage
**Technical Implementation**:
```javascript
function handlePictureUpload() {
  document.getElementById('pictureInput').click();
}

const pictureInput = document.getElementById('pictureInput');
pictureInput.addEventListener('change', (e) => {
  const file = e.target.files[0];
  if (file) {
    const reader = new FileReader();
    reader.onload = (loadEvent) => {
      showCropModal(loadEvent.target.result);
    };
    reader.readAsDataURL(file);
  }
});
```
**Crop Modal Features**:
- Circle mask overlay (CSS clip-path)
- Zoom slider (10-200%)
- Drag-to-reposition
- "Replace Image" and "Cancel" options
- Final confirm button
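The geometry behind the confirm step can be isolated as pure math (a sketch with assumed parameter conventions — the real modal's variable names may differ): zoom and drag offsets map to the source rectangle handed to `canvas.drawImage()` before `toDataURL()` produces the base64 result.

```javascript
// Hypothetical helper: compute the square source-image region (in source pixels)
// selected by the crop modal, clamped so it never leaves the image bounds.
function getCropRect(imgW, imgH, zoom, offsetX, offsetY) {
  // Higher zoom shows a smaller region of the source image.
  const size = Math.min(imgW, imgH) / zoom;
  const x = Math.min(Math.max(offsetX, 0), imgW - size);
  const y = Math.min(Math.max(offsetY, 0), imgH - size);
  return { x, y, size };
}
```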
**Why it's production-ready**:
- ✅ Real image manipulation (not just display)
- ✅ Mobile-touch friendly
- ✅ Stores base64 for easy API upload later
- ✅ Handles aspect ratios and constraints
---
### 3. Breed Combobox with Search (1.6 Add Dog)
**Pattern**: Custom combobox (not native select) with:
- Button trigger showing selected breed
- Popover with search input
- Filtered list of options
- "No results" state with custom option hint
**Implementation**:
```javascript
function handleBreedSearch(query) {
const filtered = dogBreeds.filter((breed) => breed.toLowerCase().includes(query.toLowerCase()));
if (filtered.length === 0) {
showNoResults();
} else {
renderBreedSuggestions(filtered);
}
}
```
**Why this pattern is superior to native `<select>`**:
- ✅ Searchable (critical for 300+ dog breeds)
- ✅ Mobile-friendly large tap targets
- ✅ Custom styling matches design system
- ✅ Keyboard navigation support
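The matching rule itself is worth keeping as a pure function, separate from DOM rendering, so it can be unit-tested (a refactoring sketch of the code above):

```javascript
// Pure filter: same matching rule as handleBreedSearch, no DOM involved.
function filterBreeds(breeds, query) {
  const q = query.trim().toLowerCase();
  return breeds.filter((breed) => breed.toLowerCase().includes(q));
}
```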
---
### 4. Split Button (Gender Selection)
**Visual**: `[ Hane | Hona ]`
**Implementation**:
```javascript
function selectGender(gender) {
// Remove active from both
document.getElementById('genderMale').classList.remove('selected');
document.getElementById('genderFemale').classList.remove('selected');
// Add to selected
if (gender === 'male') {
document.getElementById('genderMale').classList.add('selected');
} else {
document.getElementById('genderFemale').classList.add('selected');
}
selectedGender = gender;
}
```
**Why it works**:
- ✅ Clear binary choice
- ✅ Large tap targets (mobile-friendly)
- ✅ Visual feedback (selected state)
- ✅ Better than radio buttons for mobile
---
### 5. Swedish Week Calendar (3.1 Calendar Booking)
**Unique Feature**: Week-based calendar (not month) with:
- Week number display (V48, V49, etc.)
- 7-day horizontal scroll
- Today indicator
- Multi-dog leaderboard
- Per-member booking rows
**Technical Complexity**:
- ISO 8601 week calculation
- Swedish week numbering (starts Monday)
- Dynamic day generation
- Horizontal scroll with snap points
- Touch gestures for booking slots
**Implementation Highlights**:
```javascript
function getWeekNumber(date) {
const target = new Date(date.valueOf());
const dayNr = (date.getDay() + 6) % 7; // Monday = 0
target.setDate(target.getDate() - dayNr + 3);
const jan4 = new Date(target.getFullYear(), 0, 4);
const dayDiff = (target - jan4) / 86400000;
return 1 + Math.ceil(dayDiff / 7);
}
```
**Why it's impressive**:
- ✅ Culturally accurate (Swedish weeks)
- ✅ Complex date math handled correctly
- ✅ Smooth scrolling and interactions
- ✅ Multi-user state management
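The companion piece to `getWeekNumber` is building the Monday-first day strip (a sketch of an assumed helper; the prototype's actual implementation may differ):

```javascript
// Hypothetical helper: the seven Date objects (Monday first) of the week
// containing `date`, for rendering the horizontal day columns under "V48".
function getWeekDays(date) {
  const monday = new Date(date.valueOf());
  monday.setDate(monday.getDate() - ((monday.getDay() + 6) % 7)); // back to Monday
  return Array.from({ length: 7 }, (_, i) => {
    const day = new Date(monday.valueOf());
    day.setDate(monday.getDate() + i);
    return day;
  });
}
```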
---
### 6. Developer Tools Menu (3.1 Calendar)
**Purpose**: Built-in testing and debugging tools
**Features**:
- **Edit Mode**: Click any element to copy its Object ID
- **Member Switcher**: View calendar as different family members
- **Load Demo Family**: Instantly populate with test data
- **Clear All Data**: Reset sessionStorage
- **Keyboard Shortcuts**: `Ctrl+E` for edit mode
**Implementation**:
```javascript
document.addEventListener('keydown', (e) => {
if (e.ctrlKey && e.key === 'e') {
e.preventDefault();
toggleEditMode();
}
});
```
**Why this is genius**:
-**UX testing** - Switch user perspectives instantly
-**Design validation** - Copy Object IDs for specs
-**Developer handoff** - Demo data ready to explore
-**QA workflow** - Reset and test from scratch
---
## 🔄 State Management Patterns
### 1. Form Validation States
**Pattern**: Real-time validation with visual feedback
```javascript
function validateField(fieldId, value, validator) {
const errorElement = document.getElementById(`${fieldId}Error`);
if (!validator(value)) {
errorElement.textContent = 'Invalid value';
errorElement.classList.remove('hidden');
return false;
} else {
errorElement.classList.add('hidden');
return true;
}
}
```
**Visual States**:
-**Default**: Normal border, no message
- 🔴 **Error**: Red border, error message shown
-**Valid**: Subtle green indicator (optional)
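A wiring sketch (the element id `dogName` and the `wireValidation` glue are assumptions for illustration; validators stay plain predicates so they remain testable without a DOM):

```javascript
// Reusable predicate validators, independent of the DOM.
const isNonEmpty = (value) => value.trim().length > 0;
const isEmail = (value) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);

// Hypothetical glue: re-validate on every keystroke.
function wireValidation(fieldId, validator) {
  document.getElementById(fieldId).addEventListener('input', (event) => {
    validateField(fieldId, event.target.value, validator);
  });
}
```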
---
### 2. Loading States
**Pattern**: Disable form, show spinner, prevent double-submit
```javascript
async function handleSubmit(event) {
event.preventDefault();
// Show loading state
const submitBtn = document.getElementById('submitButton');
submitBtn.disabled = true;
submitBtn.querySelector('#submitButtonText').classList.add('hidden');
submitBtn.querySelector('#submitButtonSpinner').classList.remove('hidden');
try {
await DogWeekAPI.addDog(formData);
showSuccessToast();
navigateToNextPage();
} catch (error) {
showErrorBanner(error.message);
} finally {
// Reset loading state
submitBtn.disabled = false;
submitBtn.querySelector('#submitButtonText').classList.remove('hidden');
submitBtn.querySelector('#submitButtonSpinner').classList.add('hidden');
}
}
```
**Why it's production-quality**:
- ✅ Prevents double-submission
- ✅ Clear visual feedback
- ✅ Handles errors gracefully
- ✅ Always resets state (finally block)
---
### 3. Toast Notifications
**Pattern**: Non-blocking success/error messages
```javascript
function showSuccessToast(message) {
const toast = document.getElementById('successToast');
toast.querySelector('#successToastMessage').textContent = message;
toast.classList.remove('hidden');
setTimeout(() => {
toast.classList.add('hidden');
}, 3000);
}
```
**Design**: Slides in from bottom, auto-dismisses after 3s
---
## 🎓 Best Practices Demonstrated
### 1. Object ID System
**Every interactive element** has a `data-object-id` attribute:
```html
<button id="add-dog-button-submit" data-object-id="add-dog-button-submit" class="submit-button">Lägg till hund</button>
```
**Purpose**:
- Links prototype to specification document
- Enables automatic testing (Playwright, Cypress)
- Makes developer handoff crystal clear
- Supports design validation workflow
---
### 2. Semantic HTML Structure
**Pattern**: Proper landmarks and hierarchy
```html
<header class="calendar-header">...</header>
<main class="px-4 py-6">
<section class="week-overview">...</section>
<section class="leaderboard-section">...</section>
<section class="booking-calendar-section">...</section>
</main>
<nav class="bottom-nav">...</nav>
```
**Benefits**:
- ✅ Accessibility (screen readers)
- ✅ SEO-ready structure
- ✅ Easy to navigate in dev tools
- ✅ Reflects actual implementation needs
---
### 3. CSS Custom Properties
**Pattern**: Design tokens as CSS variables
```css
:root {
--dog-week-primary: #2563eb;
--dog-week-primary-hover: #1d4ed8;
--dog-week-success: #10b981;
--gray-50: #f9fafb;
--gray-900: #111827;
}
```
**Usage**:
```css
.submit-button {
background: var(--dog-week-primary);
}
.submit-button:hover {
background: var(--dog-week-primary-hover);
}
```
**Why it matters**:
- ✅ Single source of truth for colors
- ✅ Easy theme switching
- ✅ Consistent with design system
- ✅ Matches Tailwind CSS conventions
---
### 4. Mobile-First Responsive Design
**Pattern**: All prototypes start mobile, scale up
```css
/* Mobile-first (default) */
.calendar-page {
max-width: 100%;
padding: 1rem;
}
/* Tablet and up */
@media (min-width: 768px) {
.calendar-page {
max-width: 640px;
margin: 0 auto;
}
}
```
**Why mobile-first**:
- ✅ Dog Week is mobile-focused
- ✅ Forces constraint-based thinking
- ✅ Easier to scale up than down
- ✅ Matches user behavior (80%+ mobile usage expected)
---
## 📦 Reusable Components
### Components That Could Be Extracted
1. **Image Cropper** (`image-crop.js`)
- Circular mask overlay
- Zoom slider
- Drag-to-reposition
- Base64 output
2. **Language Selector** (Header component)
- Dropdown with flags
- Persistence
- Instant UI updates
3. **Breed Combobox** (Custom select with search)
- Popover trigger
- Search input
- Filtered list
- No results state
4. **Split Button** (Binary choice)
- Two-option selector
- Active state
- Mobile-optimized
5. **Toast Notification** (Success/error)
- Slide-in animation
- Auto-dismiss
- Icon + message
6. **Dev Tools Menu** (Debug panel)
- Edit mode
- Data management
- Test utilities
---
## 🚀 Migration Path to Production
### From Prototype to Supabase (Example)
**Prototype Code**:
```javascript
const user = await DogWeekAPI.createUserProfile({
firstName: 'Patrick',
lastName: 'Parent',
email: 'patrick@example.com',
});
```
**Production Code** (minimal changes):
```javascript
// In prototype-api.js, update createUserProfile:
async createUserProfile(userData) {
if (this.config.mode === 'production') {
const { data, error } = await supabase
.from('profiles')
.insert([userData])
.select()
.single();
if (error) throw error;
return data;
} else {
// ... existing prototype code
}
}
```
**Calling code stays identical!**
---
## 📊 Prototype Quality Metrics
| Metric | Dog Week Score | Notes |
| ----------------------- | -------------- | ------------------------------------------ |
| **Functionality** | 95% | All interactions work, minor polish needed |
| **Mobile UX** | 100% | Touch-optimized, smooth gestures |
| **Code Quality** | 90% | Clean, documented, follows patterns |
| **Developer Readiness** | 95% | Clear structure, easy to extract |
| **Design Fidelity** | 90% | Matches specs, minor visual refinements |
| **Testing Utility** | 100% | Can be used for real user testing |
| **Migration Path** | 95% | Clear TODOs, abstraction in place |
**Overall Assessment**: 🌟 **Production-Ready Interactive Prototypes**
---
## 🎯 Recommendations for WDS Agents
### For Freya (UX/UI Designer Agent)
When creating interactive prototypes, follow this proven structure:
#### 1. File Organization
```
Page-Name/
├── Frontend/
│ ├── Page-Name-Preview.html
│ ├── Page-Name-Preview.css
│ ├── Page-Name-Preview.js
│ ├── prototype-api.js (shared)
│ └── [specialized libs: image-crop.js, etc.]
```
#### 2. HTML Template Structure
```html
<!DOCTYPE html>
<html lang="se">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>[Page Number] [Page Name] - [Project Name]</title>
<!-- Fonts -->
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet" />
<!-- Page-specific styles -->
<link rel="stylesheet" href="[Page-Name]-Preview.css" />
</head>
<body>
<!-- Header -->
<header>...</header>
<!-- Main Content -->
<main>
<form id="mainForm" onsubmit="handleSubmit(event)">
<!-- Form fields with Object IDs -->
</form>
</main>
<!-- Modals (if needed) -->
<div id="modal" class="modal-overlay hidden">...</div>
<!-- Toast (if needed) -->
<div id="toast" class="toast hidden">...</div>
<!-- Scripts -->
<script src="prototype-api.js"></script>
<script src="[Page-Name]-Preview.js"></script>
</body>
</html>
```
#### 3. Required Object IDs
**Every interactive element** must have:
```html
<button id="page-section-action" data-object-id="page-section-action" onclick="handleAction()"></button>
```
**Naming Convention**: `[page]-[section]-[action]`
Examples:
- `add-dog-input-name`
- `profile-avatar-upload`
- `calendar-week-next`
#### 4. State Management Checklist
✅ Loading states (spinner, disabled)
✅ Error states (red border, error message)
✅ Success feedback (toast notification)
✅ Form validation (real-time)
✅ Data persistence (sessionStorage via API)
#### 5. Mobile Optimization Checklist
✅ Touch targets min 44x44px
✅ Viewport meta tag present
✅ Mobile-first CSS
✅ Touch gestures (swipe, pinch-zoom where needed)
✅ No hover-dependent interactions
#### 6. Developer Handoff Assets
Include with each prototype:
1. **README.md** - How to run, features, known issues
2. **Object ID map** - Links to specification
3. **API usage examples** - How page uses prototype-api.js
4. **Migration notes** - What needs Supabase integration
---
## 🔮 Future Enhancements
### Potential Improvements Identified
1. **Component Library**
- Extract reusable components (image cropper, breed selector, etc.)
- Create shared component library
- Reduce code duplication across pages
2. **Prototype Navigation**
- Add global navigation between prototypes
- Show current flow position
- Quick jump to any page in scenario
3. **Animation Library**
- Standardize transitions (slide-in, fade, etc.)
- Page transition animations
- Micro-interactions library
4. **Accessibility Audit**
- Keyboard navigation testing
- Screen reader testing
- ARIA labels audit
5. **Performance Optimization**
- Image compression
- Lazy loading for modals
- CSS/JS minification for production
---
## 📚 Learning Resources
### For Team Members Learning From This
**To understand the patterns**:
1. Start with simplest prototype (1.2 Sign In)
2. Study `prototype-api.js` architecture
3. Compare two similar prototypes (1.3 Profile vs 1.6 Add Dog)
4. Explore most complex (3.1 Calendar)
**To create new prototypes**:
1. Copy an existing prototype folder as template
2. Update HTML structure and content
3. Modify CSS for new design
4. Update JS for new interactions
5. Ensure all Object IDs match spec
**To test prototypes**:
1. Open in mobile viewport (375px width)
2. Complete full user flow
3. Check dev tools console for errors
4. Test data persistence (reload page)
5. Try edge cases (empty states, errors)
---
## ✅ Conclusion
The Dog Week interactive prototypes represent **the gold standard** for UX design deliverables in 2025:
🎯 **For Designers**: These are _real interfaces_, not mockups
🎯 **For Developers**: These provide _working reference implementations_
🎯 **For Users**: These enable _real usability testing_
🎯 **For Stakeholders**: These demonstrate _actual functionality_
**These prototypes prove that AI-assisted design can produce production-quality interactive prototypes that serve as both design validation tools AND developer handoff artifacts.**
---
**Document Status**: Complete
**Last Updated**: December 10, 2025
**Maintained By**: WDS System
**Next Review**: After next major prototype addition
# Prototype Initiation Dialog
**Agent**: Freya WDS Designer Agent
**Purpose**: Interactive conversation to gather all requirements before creating a prototype
**Output**: Complete Work File (YAML) ready for section-by-section implementation
---
## 🎯 Conversation Flow
### **Opening**
> "I'll create an interactive prototype for this page. Before we start coding, let's plan it out together through a few quick questions. This ensures we build exactly what you need!"
>
> **Let's start!** 🚀
---
## 📝 **Question 1: Page Context**
> "**Which page are we building?**
>
> Please provide:
> - Page number and name (e.g., "3.1 Dog Calendar Booking")
> - Link to the specification (if available)
> - Scenario name"
**Wait for response**
**Record**:
- `metadata.page_number`
- `metadata.page_name`
- `metadata.scenario`
---
## 📱 **Question 2: Device Compatibility**
> "**Which devices will users primarily use this page on?**
>
> Choose one:
>
> 1. 📱 **Mobile-Only** (375px-428px)
> - For: Personal apps, on-the-go tools
> - Testing: iPhone SE, iPhone 14 Pro, iPhone 14 Pro Max
> - No breakpoints, touch-optimized only
> - Hover: None
>
> 2. 📱💻 **Mobile + Tablet** (375px-1024px)
> - For: Content apps, casual use
> - Testing: Mobile + iPad
> - Breakpoint at 768px
> - Hover: Tablet only
>
> 3. 🌐 **Fully Responsive** (375px-1920px+)
> - For: Business apps, multi-device use
> - Testing: Mobile + Tablet + Desktop
> - Multiple breakpoints (768px, 1024px, 1280px)
> - Hover: Tablet & Desktop
>
> 4. 🖥️ **Desktop-Only** (1280px+)
> - For: Complex data entry, professional tools
> - Testing: Desktop only
> - Breakpoint: None (fixed large)
> - Hover: Always
>
> **Which option?** (1-4)"
**Wait for response**
**Record**:
- `metadata.device_compatibility.type`
- `metadata.device_compatibility.test_viewports`
- `metadata.device_compatibility.touch_optimized`
- `metadata.device_compatibility.hover_interactions`
**If Mobile-Only**, ask:
> "Perfect! **Which mobile devices should we test on?**
>
> Default is:
> - iPhone SE (375px × 667px) - Smallest common size
> - iPhone 14 Pro (393px × 852px) - Standard size
> - iPhone 14 Pro Max (428px × 926px) - Largest common size
>
> Use these defaults? (Y/N)"
---
## 🎨 **Question 3: Design Fidelity**
> "**What level of design fidelity should we use?**
>
> Choose one:
>
> 1. **Generic Gray Model** (Wireframe)
> - Grayscale placeholder design
> - Generic Tailwind defaults (grays, blues)
> - Focus on functionality first, style later
> - Fastest to build
>
> 2. **Design System Components**
> - Uses your documented Design System
> - Branded colors, typography, spacing
> - Consistent with your design tokens
> - Production-ready look and feel
>
> 3. **Full Design / Figma MCP Integration**
> - Import directly from Figma designs
> - Pixel-perfect implementation
> - All visual details, shadows, gradients
> - Highest fidelity
>
> **Which option?** (1, 2, or 3)"
**Wait for response**
**If option 2 or 3**, ask:
> "Great! Where is your Design System located? (I'll look for it in `docs/D-Design-System/` or you can specify)"
**Record**:
- `metadata.design_fidelity`
- `design_tokens` (colors, typography, spacing from Design System)
---
## 🌍 **Question 4: Languages**
**Check project brief/outline first**:
- If project defines multiple languages → Ask this question
- If project is single language → Skip this question
> "**I see your project supports [Languages from project brief].**
>
> **Should this prototype include language switching?** (Y/N)
>
> If **YES**:
> - Which languages? (e.g., Swedish, English)
> - How to switch? (Toggle button, dropdown, flag icons)
>
> If **NO**:
> - Which language to use? (Default to primary language from project)"
**Wait for response**
**Record**:
- `languages` (array: ["sv", "en"] or single: ["en"])
- `language_switcher` (boolean)
- `primary_language` (default language)
**Implementation Note**:
- Prototypes use **hardcoded translations** directly in HTML/JS
- No separate translation files (easier to read, no secrets)
- Simple language switcher if multi-language
**Example**:
```javascript
const strings = {
sv: { bookWalk: 'Boka promenad', cancel: 'Avbryt' },
en: { bookWalk: 'Book walk', cancel: 'Cancel' }
};
```
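A fallback lookup keeps a missing key from rendering `undefined` in the UI (a sketch assuming the `strings` shape above; `t` is a hypothetical helper):

```javascript
const strings = {
  sv: { bookWalk: 'Boka promenad', cancel: 'Avbryt' },
  en: { bookWalk: 'Book walk', cancel: 'Cancel' },
};

// Fall back to the primary language, then to the key itself.
function t(lang, key, primary = 'sv') {
  return (strings[lang] && strings[lang][key]) || strings[primary][key] || key;
}
```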
---
## 📦 **Question 5: Required Components**
> "**Let me identify the components needed based on your specification...**
>
> [Agent reads specification and extracts Component Dependencies section]
>
> **This page requires:**
>
> **Organism Components**:
> - [List with Object IDs from spec]
>
> **Molecular Components**:
> - [List with Object IDs from spec]
>
> **Atomic Components**:
> - [List from Design System]
>
> **Special Features**:
> - [Real-time features, persistence, etc.]
>
> **All components and Object IDs from the specification will be included.**
>
> **Does this match your expectations?** (Y/N)"
**Wait for response**
**If N**, ask:
> "What would you like to add or change?"
**Record**:
- `metadata.dependencies`
- `object_ids` (all Object IDs from spec)
- `design_tokens.components_available`
---
## 🗂️ **Question 6: Demo Data**
> "**What demo data do we need for realistic testing?**
>
> Based on your user context, I'll suggest:
> - Current user (first name, last name, email)
> - [Other entities based on page type]
>
> **Any specific demo data requirements?** (or 'use defaults')"
**Wait for response**
**Record**:
- `demo_data_needed`
---
## 📄 **Initiation Complete - Summary**
> "Perfect! Here's what we have:
>
> **📱 Scenario**: [Scenario Name]
> **📄 Pages**: [List pages in this scenario]
> **📱 Devices**: [Device type] ([Viewport range])
> **🎨 Design**: [Gray Model / Design System / Figma]
> **👥 Demo Family**: [Family members + dogs]
>
> **Files Created**:
> - ✅ `data/demo-data.json` (demo family data)
> - ✅ `PROTOTYPE-ROADMAP.md` (scenario overview)
>
> **Next: Build prototypes page by page!**
>
> **Which page should we start with?** ([Page number] or list to see all)"
**Wait for user to select first page**
---
## 🚀 **Per-Page Building Process**
**When user selects a page** (e.g., "3.1"):
> "**Building: 3.1 Dog Calendar Booking**
>
> Let me analyze the specification and break it into sections...
>
> [Agent reads spec, identifies all components and Object IDs]
>
> **Proposed sections**:
> 1. [Section name] (~X min)
> 2. [Section name] (~X min)
> 3. [Section name] (~X min)
> ...
>
> **Total**: [N] sections, ~[X] hours
>
> **Approve this breakdown?** (Y/N)"
**If Y**:
> "✅ Creating Work File: `work/3.1-Dog-Calendar-Work.yaml`
>
> [Creates complete work file with all sections]
>
> ✅ Work File created!
>
> **Ready to start Section 1?** (Y)"
**Then proceed to section-by-section building** (follow FREYA-WORKFLOW-INSTRUCTIONS.md Phase 2)
---
## 📝 **Notes for Freya**
**Scenario Initiation** creates:
-`[Scenario]-Prototype/` folder with complete structure:
- `data/` folder with `demo-data.json`
- `work/` folder (empty, for work files)
- `stories/` folder (empty, for just-in-time stories)
- `shared/` folder (empty, for shared JS)
- `components/` folder (empty, for reusable components)
- `pages/` folder (empty, for page-specific scripts)
- `assets/` folder (empty, for images/icons)
-`PROTOTYPE-ROADMAP.md` (scenario overview)
**Per-Page Building** creates:
-`work/[Page]-Work.yaml` (page-specific work file)
-`stories/[Page]-[Section].md` (just-in-time story files)
-`[Page].html` (actual prototype, built section-by-section)
**Key Principles**:
1. **Scenario init** = High-level setup (device type, demo data)
2. **Per-page** = Detailed section breakdown and building
3. **One page at a time** = Prevents overwhelming scope
4. **Just-in-time stories** = Create story right before building that section
---
## 🎓 **Tips During Conversation**
**For Freya (implementation notes)**:
1. **Be conversational** - Not a survey, a dialogue
2. **Explain options** - Help user understand choices
3. **Suggest defaults** - Based on common patterns
4. **Adapt questions** - Skip irrelevant based on answers
5. **Show examples** - Make abstract concepts concrete
6. **Confirm understanding** - Reflect back what you heard
7. **Stay encouraging** - This is collaborative, not interrogation
---
## 🔄 **After Work File Creation**
**Agent automatically**:
1. Creates `work/[Page]-Work.yaml` with all gathered info
2. Creates `data/demo-data.json` (if doesn't exist)
3. Updates `PROTOTYPE-ROADMAP.md` with this page
4. Presents Section 1 story and asks to proceed
**Then enters**: Section-by-section build mode (separate workflow)
---
## 📋 **Example Complete Exchange**
### **Part 1: Scenario Initiation** (one-time setup)
**Freya**: "Let's set up prototypes for Scenario 3: Booking Dog Walks! Which devices?"
- 1. Mobile-Only
- 2. Mobile + Tablet
- 3. Fully Responsive
- 4. Desktop-Only
**User**: "1"
**Freya**: "Mobile-only! Use default test devices? (375-428px) (Y/N)"
**User**: "Y"
**Freya**: "Design fidelity?"
- 1. Generic Gray Model
- 2. Design System Components
- 3. Full Design / Figma MCP Integration
**User**: "2"
**Freya**: "Perfect! I'll use your Design System. I see your project supports Swedish and English. Should this prototype include language switching? (Y/N)"
**User**: "Y - Swedish primary, English secondary, toggle button"
**Freya**: "Got it! Demo data?"
- Need realistic test family? (Y/N)
**User**: "Y - Familjen Svensson with 4 members and Golden Retriever Rufus, no pre-booked walks"
**Freya**: "✅ Created complete prototype environment!
-`03-Booking-Dog-Walks-Prototype/` folder
-`data/demo-data.json` with demo family
-`PROTOTYPE-ROADMAP.md` with scenario overview
- ✅ All working folders ready
**Which page should we start with?**"
---
### **Part 2: Page Selection**
**Freya**: "Which page should we start with?"
**User**: "3.1 Dog Calendar Booking"
---
### **Part 3: Per-Page Section Breakdown**
**Freya**: "Building: 3.1 Dog Calendar Booking. Let me analyze the spec...
**Proposed sections**:
1. Page Structure & Header (~10 min)
2. Week Overview Component (~20 min)
3. Leaderboard Section (~15 min)
4. Booking Calendar - Day Columns (~15 min)
5. Walk Slot Cards - States & Booking (~30 min)
6. Real-Time Updates (~20 min)
7. Bottom Tab Navigation (~10 min)
8. Dev Mode Integration (~5 min)
**Total**: 8 sections, ~2 hours
**Approve this breakdown?** (Y/N)"
**User**: "Y"
**Freya**: "✅ Creating Work File: `work/3.1-Dog-Calendar-Work.yaml`
✅ Work File created!
**Ready to start Section 1?** (Y)"
**User**: "Y"
**Freya**: "🚀 Starting Section 1: Page Structure & Header..."
---
**Scenario initiation: 3-4 questions, 2-3 minutes**
**Per-page building: Section approval, then build!** 🎯
# SEO Validation Guide
**For:** WDS Agents performing Agentic Development
**Purpose:** Verify SEO implementation against specification before presenting to user
**When:** After a public page is built and previewable (browser or deployed)
---
## Core Principle
**Every public page must pass SEO validation before approval.**
The agent verifies all measurable SEO criteria using browser tools (Puppeteer, MCP browser-tools, or manual inspection). SEO failures caught during development cost minutes to fix. SEO failures caught after deployment cost rankings and traffic.
---
## When to Run SEO Validation
| Trigger | Action |
|---------|--------|
| Public page section complete (4c/4d) | Run SEO checks before presenting |
| Full page implementation complete | Run complete SEO audit |
| Pre-deployment review | Full validation against spec + project brief |
| Post-deployment verification | Validate live URL matches specification |
---
## Reference Documents
Before running validation, gather:
1. **Page specification** — SEO & Search section (keywords, URL, headings, meta)
2. **Project brief** — SEO Strategy section (page-keyword map, structured data plan)
3. **SEO Strategy Guide**`../../data/agent-guides/saga/seo-strategy-guide.md`
---
## SEO Validation Checklist
### Level 1: Critical (Must Pass)
These are the top errors found in real-world SEO audits. Failing any of these blocks approval.
#### 1.1 Page Title Tag
```
Verify:
- Title tag exists and is not empty
- Length ≤ 60 characters (check each language)
- Contains primary keyword
- Contains brand name
- Is unique (different from other pages)
- Matches specification
Report:
"Page title is 'Bilservice på Öland | Källa Fordonservice' (51 chars)
— contains keyword 'bilservice', includes brand. ✓ Passes"
"Page title is 'Home' (4 chars)
— too short, no keyword, no brand. ✗ Fails"
```
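This check automates well. A sketch (thresholds taken from this guide; `checkTitle` is an assumed helper, applied to the value returned by Puppeteer's `page.title()`):

```javascript
// Validate rule 1.1: non-empty, at most 60 chars, contains keyword and brand.
function checkTitle(title, { keyword, brand }) {
  const t = (title || '').trim();
  const issues = [];
  if (!t) issues.push('title missing or empty');
  if (t.length > 60) issues.push(`too long (${t.length} chars, max 60)`);
  if (keyword && !t.toLowerCase().includes(keyword.toLowerCase())) {
    issues.push('primary keyword missing');
  }
  if (brand && !t.includes(brand)) issues.push('brand name missing');
  return { pass: issues.length === 0, issues };
}
```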
#### 1.2 Meta Description
```
Verify:
- Meta description tag exists and is not empty
- Length 150-160 characters
- Contains primary keyword
- Contains call-to-action
- Matches specification
Report:
"Meta description is 156 chars, contains 'bilservice Öland', ends with
'Ring oss idag!' ✓ Passes"
"Meta description is missing. ✗ Fails — 80% of audited sites miss this"
```
#### 1.3 H1 Heading
```
Verify:
- Exactly ONE <h1> tag on the page
- Contains primary keyword (natural, not stuffed)
- Is visible (not hidden)
- Matches specification
Report:
"Found 1 <h1>: 'Bilservice och reparationer på Öland'
— contains keyword 'bilservice'. ✓ Passes"
"Found 0 <h1> tags. ✗ Fails — 75% of audited sites have H1 issues"
"Found 3 <h1> tags. ✗ Fails — only one H1 allowed per page"
```
#### 1.4 Heading Hierarchy
```
Verify:
- Headings follow logical order (H1 → H2 → H3)
- No skipped levels (H1 → H3 without H2)
- No duplicate H1
Report:
"Heading hierarchy: H1 → H2 → H3 → H2 → H3 ✓ Logical flow"
"Heading hierarchy: H1 → H3 (skipped H2) ✗ Fix: Change H3 to H2"
```
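The same rule as a function over extracted heading levels (an assumed helper; the levels array can come from `page.$$eval('h1,h2,h3,h4,h5,h6', els => els.map(el => Number(el.tagName[1])))`):

```javascript
// Validate rules 1.3/1.4: exactly one H1, no skipped levels on the way down.
function checkHeadingHierarchy(levels) {
  const issues = [];
  const h1Count = levels.filter((level) => level === 1).length;
  if (h1Count !== 1) issues.push(`expected exactly one H1, found ${h1Count}`);
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) {
      issues.push(`skipped level: H${levels[i - 1]} to H${levels[i]}`);
    }
  }
  return issues;
}
```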
#### 1.5 Image Alt Text
```
Verify:
- ALL images have alt attributes
- Alt text is descriptive (not empty, not "image")
- Alt text exists in all required languages
- Decorative images have alt="" (empty, not missing)
Report:
"Found 8 images:
hero-image: alt='Källa Fordonservice verkstad...' ✓
service-ac: alt='AC-service på personbil' ✓
icon-phone: alt='' (decorative) ✓
team-photo: alt attribute MISSING ✗
Result: 7/8 images pass. 1 missing alt text."
```
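A sketch of an automated pass over extracted alt attributes (an assumed helper; note that `getAttribute('alt')` returns `null` when the attribute is missing and `''` for decorative images, which is exactly the distinction this rule needs):

```javascript
// Validate rule 1.5: every image needs an alt attribute; empty alt is allowed
// (decorative), but placeholder words like "image" are flagged as generic.
function checkAltTexts(alts) {
  const missing = alts.filter((alt) => alt === null).length;
  const generic = alts.filter(
    (alt) => alt !== null && /^(image|photo|picture|bild)$/i.test(alt.trim())
  ).length;
  return { total: alts.length, missing, generic, pass: missing === 0 && generic === 0 };
}
```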
---
### Level 2: Important (Should Pass)
#### 2.1 Open Graph / Social Sharing
```
Verify:
- og:title tag present
- og:description tag present
- og:image tag present (valid URL, image exists)
- og:type tag present
- twitter:card tag present
Report:
"Social sharing tags:
og:title: 'Bilservice Öland — Källa Fordonservice' ✓
og:description: present (148 chars) ✓
og:image: '/images/social/hem-social.jpg' ✓ (file exists)
og:type: 'website' ✓
twitter:card: 'summary_large_image' ✓
All social tags present."
"Missing: og:image ✗ — 70% of audited sites miss social tags"
```
#### 2.2 Structured Data (Schema.org)
```
Verify:
- JSON-LD script tag exists
- Schema type matches project brief plan
- Required properties present (name, address, phone for LocalBusiness)
- JSON is valid (parseable)
Report:
"Structured data found:
@type: 'AutoRepair' ✓
name: 'Källa Fordonservice' ✓
address: complete ✓
telephone: '+46485-27070' ✓
openingHours: present ✓
JSON-LD validates. ✓ Passes"
"No structured data found. ✗ Fails — spec requires LocalBusiness schema"
```
#### 2.3 Internal Links
```
Verify:
- Page has at least 2 internal links to other pages
- Links have descriptive anchor text (not "click here", "read more")
- No broken internal links (404s)
- No redirect chains (link → 301 → 301 → page)
Report:
"Internal links found: 5
'Läs mer om AC-service' → /ac-service ✓ Descriptive
'Ring oss' → tel:+46485-27070 ✓ CTA
'Klicka här' → /kontakt ✗ Non-descriptive anchor text
Result: 4/5 links pass."
```
#### 2.4 URL / Slug
```
Verify:
- URL slug matches specification
- Slug is lowercase
- Uses hyphens (not underscores or spaces)
- No special characters (ä, ö, å)
- Keyword present in slug
Report:
"URL slug: /ac-service ✓ Matches spec, lowercase, keyword present"
"URL slug: /Sida?id=42 ✗ Not descriptive, no keyword"
```
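Rule 2.4 reduces to a predicate (a sketch; this variant assumes a single path segment, so extend the pattern for nested or language-prefixed paths like `/en/ac-service`):

```javascript
// Validate rule 2.4: lowercase letters/digits separated by single hyphens,
// no underscores, spaces, query strings, or non-ASCII characters.
function isValidSlug(slug) {
  return /^\/[a-z0-9]+(?:-[a-z0-9]+)*$/.test(slug);
}
```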
#### 2.5 Canonical URL
```
Verify:
- <link rel="canonical"> tag present
- Points to the correct URL (self-referencing)
- One canonical per page
Report:
"Canonical: <link rel='canonical' href='https://kallafordon.se/ac-service'> ✓"
"Canonical tag missing. ✗ Fails"
```
---
### Level 3: Technical (Verify on Deployment)
These checks apply to the deployed/preview site, not the prototype.
#### 3.1 Performance
```
Verify:
- Total page weight < 3MB
- Largest image < 400KB (hero) / < 200KB (other)
- Time to First Byte (TTFB) < 1.5s
- No uncompressed images (should be WebP or compressed JPEG)
Report:
"Page weight: 1.8MB ✓ (target < 3MB)
hero.jpg: 380KB ✓ (target < 400KB)
team.jpg: 1.2MB ✗ (target < 200KB — compress!)
icon.svg: 3KB ✓
TTFB: 0.8s ✓ (target < 1.5s)"
```
#### 3.2 robots.txt
```
Verify:
- robots.txt exists (not 404)
- Allows crawling of public pages
- References sitemap
- Blocks admin/private pages
Report:
"robots.txt: exists ✓
Sitemap reference: present ✓
Public pages: allowed ✓
/wp-admin/: blocked ✓"
```
#### 3.3 XML Sitemap
```
Verify:
- Sitemap exists at /sitemap.xml (or referenced location)
- Contains all public pages
- All URLs return 200 (no broken links)
- Includes all language versions (if multilingual)
Report:
"Sitemap: 32 URLs, all return 200 ✓
Includes /en/ versions ✓
Includes /de/ versions ✓"
```
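For well-formed sitemaps, the URL list can be extracted without a full XML parser (a sketch; a real parser is safer for sitemap index files):

```javascript
// Pull every <loc> URL out of a sitemap.xml string. Each URL can then be
// requested to confirm it returns 200.
function sitemapUrls(xml) {
  return [...xml.matchAll(/<loc>\s*([^<]+?)\s*<\/loc>/g)].map((m) => m[1]);
}
```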
#### 3.4 hreflang Tags (Multilingual)
```
Verify:
- Each page declares all language alternates
- x-default points to primary language
- Tags are reciprocal (EN page links to SE, SE page links to EN)
Report:
"hreflang tags on /ac-service:
sv: /ac-service ✓
en: /en/ac-service ✓
de: /de/ac-service ✓
x-default: /ac-service ✓
All reciprocal. ✓ Passes"
```
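Reciprocity can be verified by comparing the alternate maps extracted from each language version of a page. A sketch (the `{ lang: url }` map shape is an assumption):

```javascript
// Two pages' hreflang declarations are reciprocal when they list the same
// language codes and the same URL for each code.
function hreflangReciprocal(pageA, pageB) {
  const langsA = Object.keys(pageA).sort();
  const langsB = Object.keys(pageB).sort();
  return (
    langsA.join(',') === langsB.join(',') &&
    langsA.every((lang) => pageA[lang] === pageB[lang])
  );
}
```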
#### 3.5 Security Headers
```
Verify:
- HSTS present
- X-Content-Type-Options present
- X-Frame-Options present
- Referrer-Policy present
Report:
"Security headers: 2/6 present ✗
HSTS: missing
CSP: missing
X-Content-Type-Options: 'nosniff' ✓
X-Frame-Options: 'DENY' ✓
Referrer-Policy: missing
Permissions-Policy: missing
Note: 95% of audited sites fail security headers."
```
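Header presence can be checked against a response's header map, comparing names case-insensitively. A sketch (the expected list mirrors the six headers in the example report above):

```javascript
// Return the expected security headers that are absent from a response.
const EXPECTED_HEADERS = [
  'strict-transport-security',
  'content-security-policy',
  'x-content-type-options',
  'x-frame-options',
  'referrer-policy',
  'permissions-policy',
];

function missingSecurityHeaders(headers) {
  const present = new Set(Object.keys(headers).map((h) => h.toLowerCase()));
  return EXPECTED_HEADERS.filter((h) => !present.has(h));
}
```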
#### 3.6 Favicon
```
Verify:
- Favicon exists (check <link rel="icon">)
- Multiple sizes available (16x16, 32x32, 180x180)
Report:
"Favicon: present ✓
16x16: ✓
32x32: ✓
apple-touch-icon (180x180): ✓"
```
---
## Verification with Puppeteer
### Automated SEO Check Script Pattern
```javascript
// Assumes an async context with a launched Puppeteer page, e.g.:
//   const browser = await puppeteer.launch();
//   const page = await browser.newPage();
// `pageUrl` is the URL of the page under test.

// Navigate to page
await page.goto(pageUrl, { waitUntil: 'networkidle0' });
// 1. Title tag
const title = await page.title();
console.log(`Title: "${title}" (${title.length} chars)`);
// 2. Meta description
const metaDesc = await page.$eval(
'meta[name="description"]',
el => el.content
).catch(() => null);
console.log(`Meta desc: "${metaDesc}" (${metaDesc?.length || 0} chars)`);
// 3. H1 count and content
const h1s = await page.$$eval('h1', els => els.map(el => el.textContent.trim()));
console.log(`H1 tags: ${h1s.length} — "${h1s.join('", "')}"`);
// 4. Heading hierarchy
const headings = await page.$$eval('h1,h2,h3,h4,h5,h6', els =>
els.map(el => ({ tag: el.tagName, text: el.textContent.trim().substring(0, 50) }))
);
console.log('Heading hierarchy:', headings.map(h => h.tag).join(' → '));
// 5. Images without alt
const imagesNoAlt = await page.$$eval('img', els =>
els.filter(el => !el.hasAttribute('alt')).map(el => el.src)
);
console.log(`Images without alt: ${imagesNoAlt.length}`);
// 6. Open Graph tags
const ogTags = await page.$$eval('meta[property^="og:"]', els =>
els.map(el => ({ property: el.getAttribute('property'), content: el.content }))
);
console.log(`OG tags: ${ogTags.length}`, ogTags);
// 7. Structured data
const jsonLd = await page.$$eval('script[type="application/ld+json"]', els =>
els.map(el => JSON.parse(el.textContent))
).catch(() => []);
console.log(`Structured data: ${jsonLd.length} blocks`, jsonLd.map(j => j['@type']));
// 8. Canonical
const canonical = await page.$eval('link[rel="canonical"]', el => el.href).catch(() => null);
console.log(`Canonical: ${canonical || 'MISSING'}`);
// 9. Internal links
const links = await page.$$eval('a[href]', els =>
els.filter(el => el.href.startsWith(window.location.origin))
.map(el => ({ text: el.textContent.trim().substring(0, 40), href: el.href }))
);
console.log(`Internal links: ${links.length}`);
```
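The hierarchy log above shows the heading sequence but does not flag skips. A small follow-up check (helper name illustrative) turns the tag list into a pass/fail signal:

```javascript
// Detect skipped heading levels in a tag sequence like ['H1', 'H2', 'H4'].
// Moving back up (H3 to H2) is fine; jumping down more than one level is not.
function skippedLevels(tags) {
  const levels = tags.map((t) => parseInt(t[1], 10));
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) skips.push(`${tags[i - 1]} → ${tags[i]}`);
  }
  return skips;
}
```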
---
## Narration Pattern
Group results by severity and narrate clearly:
```
## SEO Validation Report: {Page Name}
### Critical ✓/✗
Title tag: "Bilservice Öland | Källa Fordonservice" (51 chars) ✓
Meta description: "Komplett bilverkstad..." (156 chars) ✓
H1: 1 found — "Bilservice och reparationer på Öland" ✓
Heading hierarchy: H1 → H2 → H3 → H2 → H3 ✓
Image alt text: 7/8 images have alt ✗ (team-photo missing)
### Important ✓/✗
Open Graph: 5/5 tags present ✓
Structured data: AutoRepair schema valid ✓
Internal links: 5 found, 4/5 descriptive ✗ (1 "Klicka här")
URL slug: /ac-service ✓
Canonical: present, self-referencing ✓
### Technical (deployment only)
Page weight: 1.8MB ✓
Image sizes: 1 oversized (team.jpg 1.2MB) ✗
Security headers: 2/6 ✗
### Summary
Critical: 4/5 pass
Important: 4/5 pass
Technical: 1/3 pass
Action needed: Fix 1 missing alt text, 1 non-descriptive link,
1 oversized image, 4 security headers.
```
---
## Integration with Phase 5 Flow
```
4a: Announce & Gather
4b: Create Story File
4c: Implement Section
Agent runs Puppeteer verification (INLINE-TESTING-GUIDE)
Agent runs SEO validation (THIS GUIDE) — for public pages only
All pass? ── No ──→ Agent fixes, re-verifies (loop)
Yes
4d: Present for Testing
```
### Story File Addition
Add SEO criteria to the story file's Agent-Verifiable section:
```markdown
### SEO Criteria (Public Pages)
| # | Criterion | Expected | How to Verify |
|---|-----------|----------|---------------|
| S1 | Title tag | "Bilservice Öland \| Källa" ≤60 chars | Read document.title |
| S2 | Meta description | 150-160 chars, keyword present | Read meta[name=description] |
| S3 | H1 count | Exactly 1 | Count h1 elements |
| S4 | H1 keyword | Contains "bilservice" | Read h1 textContent |
| S5 | Heading hierarchy | H1→H2→H3, no skips | Scan all headings |
| S6 | Image alt coverage | 100% images have alt | Check img elements |
| S7 | OG tags | og:title, og:description, og:image | Check meta[property^=og:] |
| S8 | Internal links | ≥ 2, descriptive text | Count and check a[href] |
```
---
## Integration with Acceptance Testing
When creating test scenarios (Phase 4 [H] Handover / Phase 5 [T] Acceptance Testing), include SEO as a test category:
```yaml
seo_checks:
- id: 'SEO-001'
name: 'Page title correct'
verify:
- 'Title tag matches specification'
- 'Title ≤ 60 characters'
- 'Contains primary keyword'
- id: 'SEO-002'
name: 'Meta description correct'
verify:
- 'Meta description matches specification'
- 'Length 150-160 characters'
- 'Contains CTA'
- id: 'SEO-003'
name: 'Heading structure valid'
verify:
- 'Exactly one H1'
- 'No skipped heading levels'
- id: 'SEO-004'
name: 'Image alt text complete'
verify:
- 'All content images have alt text'
- 'Alt text in correct language'
- id: 'SEO-005'
name: 'Structured data valid'
verify:
- 'JSON-LD present and parseable'
- 'Schema type matches plan'
- 'Required properties present'
```
---
## Anti-Patterns
- **Never skip SEO validation on public pages** — It's not optional
- **Never approve a page with missing alt text** — 85% of real sites fail this
- **Never use "click here" or "read more" as link text** — Describe the destination
- **Never have more than one H1** — One per page, always
- **Never deploy without meta description** — 80% of sites miss this
- **Never assume SEO "can be added later"** — It's specification, not decoration
---
## Common Fixes (From 44 Real-World Audits)
| Issue | Frequency | Fix Time | Fix |
|-------|-----------|----------|-----|
| Missing alt text | 85% | 1 min/image | Add descriptive alt attribute |
| Missing meta description | 80% | 2 min/page | Add meta tag from spec |
| H1 missing or wrong | 75% | 1 min | Add/fix h1 tag |
| Missing OG tags | 70% | 3 min/page | Add og: meta tags from spec |
| Missing structured data | 65% | 5 min/page | Add JSON-LD script |
| Oversized images | 65% | 2 min/image | Compress + convert to WebP |
| Non-descriptive links | 30% | 1 min/link | Rewrite anchor text |
| Missing canonical | 40% | 1 min | Add link rel=canonical |
**Total estimated fix time for a typical page: 15-20 minutes**
These are all preventable by validating during development.
---
## Related Resources
- **Inline Testing Guide:** `INLINE-TESTING-GUIDE.md` — General Puppeteer verification
- **SEO Strategy Guide:** `../../data/agent-guides/saga/seo-strategy-guide.md` — SEO reference
- **SEO Content Instructions:** `../../4-ux-design/templates/instructions/seo-content.instructions.md` — Spec-level SEO
- **Specification Quality:** `../../data/agent-guides/freya/specification-quality.md` — Quality checklist
- **Meta Content Guide:** `../../data/agent-guides/freya/meta-content-guide.md` — Meta tag details
---
*SEO validation during development = zero SEO issues at launch. Validate as you build.*

# Session Start Protocol
When starting or resuming a session, **always follow this sequence before implementing anything:**
## 1. Read the Dialog Document
Read the dialog file completely to understand:
- What steps are done
- What steps remain
- Any blockers or change requests
- Current context and decisions
## 2. Verify Plan Against Reality
**The plan may be outdated.** Check if:
- Steps marked "To Do" have actually been implemented
- Steps marked "Done" are truly complete
- Numbering is sequential and accurate
If the plan is outdated → Update it before proceeding.
## 3. Present Current Status
Summarize for the designer:
- What's done (with step numbers)
- What's remaining (with step numbers)
- Any change requests pending
## 4. Before Implementing a Step
**Always check the specification/sketches first:**
```
Agent: "Before implementing step 20, let me check the sketches..."
Agent: "I see this requires a nested drawer pattern, not inline buttons.
Should I break this into sub-steps?"
```
This prevents building the wrong thing and wasting effort.
## Why This Matters
Sessions can be interrupted. Context can be lost. The dialog document survives — but only if it's kept accurate. This protocol ensures:
- No duplicate work (re-implementing what exists)
- No missed work (skipping what's actually needed)
- Correct understanding of requirements before implementation

# Issue Templates
Templates for creating issue tickets and test reports.
---
## Issue File Template
**File:** `issues/ISS-XXX-description.md`
```markdown
# Issue: [Short Description]
**ID:** ISS-XXX
**Severity:** [Critical | High | Medium | Low]
**Status:** Open
**Delivery:** DD-XXX
**Test:** TS-XXX, Check: [Test ID]
**Created:** [Date]
**Assigned:** BMad Developer
## Description
[Clear description of the problem]
## Expected
[What should happen according to design]
## Actual
[What actually happens]
## Impact
[Why this matters - user impact, business impact]
## Design Reference
- Design Spec: [Path to specification]
- Design Token: [Path to token if applicable]
- Component Spec: [Path to component spec if applicable]
## Steps to Reproduce
1. [Step 1]
2. [Step 2]
3. [Step 3]
## Screenshot/Video
![Issue screenshot](../testing/DD-XXX/screenshots/ISS-XXX.png)
## Recommendation
[How to fix this - be specific]
## Related Issues
- [Link to related issues if any]
---
**Priority for fix:** [Next release | This release | Future]
```
---
## Severity Levels
| Severity | Description | Fix Timeline |
|----------|-------------|--------------|
| **Critical** | App crashes, data loss, security issue | Immediate |
| **High** | Major functionality broken, blocking | This release |
| **Medium** | Feature works but wrong, confusing UX | This release |
| **Low** | Minor polish, nice to have | Future release |
---
## Test Report Template
**File:** `testing/DD-XXX/TR-XXX-[flow-name].md`
```markdown
# Test Report: DD-XXX [Flow Name]
**Report ID:** TR-XXX
**Date:** [Date]
**Tester:** [Your name]
**Build:** [Version]
**Device:** [Device/Browser]
**Status:** PASS / FAIL
## Summary
**Overall Result:** [PASS/FAIL]
**Total Issues:** [X]
**High Severity:** [X]
**Blocking:** [Yes/No]
## Test Coverage
| Category | Passed | Failed | Total |
|----------|--------|--------|-------|
| Happy Path | X | X | X |
| Error States | X | X | X |
| Edge Cases | X | X | X |
| Design System | X | X | X |
| Accessibility | X | X | X |
| **Total** | X | X | X |
## Issues Found
| ID | Severity | Description | Status |
|----|----------|-------------|--------|
| ISS-001 | High | [Description] | Open |
| ISS-002 | Medium | [Description] | Open |
## Sign-Off Recommendation
- [ ] Ready for production
- [x] Needs fixes before production
## Next Steps
1. [Next step 1]
2. [Next step 2]
## Attachments
- Screen recordings: [List]
- Screenshots: [List]
- Issue files: [List]
```
---
## Retest Report Template
```markdown
# Retest Report: DD-XXX
**Date:** [Date]
**Build:** [New version]
**Previous Build:** [Previous version]
## Fixed Issues Verification
| ID | Description | Fixed? |
|----|-------------|--------|
| ISS-001 | [Description] | ✓ Yes |
| ISS-002 | [Description] | ✓ Yes |
## Regression Check
- [ ] Happy path still works
- [ ] Error handling still works
- [ ] No new issues introduced
## Result
**Retest Status:** PASS / FAIL
## Recommendation
[Approve for production / Need more fixes]
```
---
## Sign-Off Document Template
```markdown
# Sign-Off: DD-XXX [Flow Name]
**Date:** [Date]
**Approved By:** [Your name], WDS UX Expert
## Approval Summary
I certify that Design Delivery DD-XXX has been:
- ✅ Tested against all test scenarios
- ✅ Verified against design specifications
- ✅ Validated for accessibility requirements
- ✅ Confirmed ready for production
## Test Summary
- **Total Tests:** X
- **Passed:** X/X (XX%)
- **Issues Found:** X
- **Issues Fixed:** X
- **Test Iterations:** X
## Quality Gate
- [x] All Critical issues fixed
- [x] All High severity issues fixed
- [x] Medium/Low issues accepted or deferred
- [x] Design system compliance > 95%
- [x] No accessibility blockers
## Approved
**Signature:** [Your name]
**Date:** [Date]
**Role:** WDS UX Expert
---
_This feature is approved for production deployment._
```

# Test Result Templates
Templates for documenting test execution results.
---
## Test Step Documentation Template
```markdown
## [Test-ID]: [Test Name]
### Step X: [Step Name]
- Action: [What was done]
- Expected: [What should happen]
- Actual: [What actually happened]
- Result: PASS/FAIL
- Issue: [If FAIL, describe the issue]
- Screenshot: [filename if FAIL]
```
---
## Happy Path Results Template
```markdown
# Happy Path Test Results
## HP-001: [Test Name]
- Status: PASS/FAIL
- Steps: X total
- Passed: X/X (XX%)
- Failed: X/X (XX%)
- Issues: X ([brief description])
- Duration: X minutes X seconds
- Recording: happy-path-HP-001.mov
## Summary
- Total Tests: X
- Passed: X/X (XX%)
- Failed: X/X (XX%)
- Total Issues: X
```
---
## Error State Results Template
```markdown
# Error State Test Results
## ES-001: [Error Scenario Name]
- Status: PASS/FAIL
- Issue: [Brief description if FAIL]
- Severity: Critical/High/Medium/Low
## Summary
- Total Tests: X
- Passed: X/X (XX%)
- Failed: X/X (XX%)
- Total Issues: X
```
---
## Edge Case Results Template
```markdown
# Edge Case Test Results
## EC-001: [Edge Case Name]
- Status: PASS/FAIL
- Issue: [Brief description if FAIL]
- Severity: Critical/High/Medium/Low
## Summary
- Total Tests: X
- Passed: X/X (XX%)
- Failed: X/X (XX%)
- Total Issues: X
```
---
## Design System Validation Template
```markdown
# Design System Validation Results
## DS-001: [Component Type]
### [Component Instance]: "[Label]"
- Height: Xpx ✓/✗
- Background: #XXXXXX ✓/✗ (Expected: #XXXXXX)
- Text: #XXXXXX ✓/✗
- Typography: Xpx, weight ✓/✗
- Border radius: Xpx ✓/✗
- Padding: Xpx Xpx ✓/✗
- Result: PASS/FAIL ([issue if FAIL])
## Summary
- Total Components: X types
- Compliant: X/X (XX%)
- Non-compliant: X/X (XX%)
- Target: >95% compliance
- Result: PASS/FAIL
```
---
## Accessibility Results Template
```markdown
# Accessibility Test Results
## A11Y-001: Screen Reader Navigation
- Status: PASS/PARTIAL PASS/FAIL
- Issues: X ([brief description])
- Severity: Critical/High/Medium/Low
## A11Y-002: Color Contrast
- Body text: X:1 ✓/✗ (min 4.5:1)
- Button text: X:1 ✓/✗ (min 4.5:1)
- Error text: X:1 ✓/✗ (min 4.5:1)
- Link text: X:1 ✓/✗ (min 4.5:1)
- Result: PASS/FAIL
## A11Y-003: Touch Targets
- Buttons: Xpx height ✓/✗ (min 44px)
- Input fields: Xpx height ✓/✗ (min 44px)
- Text links: Xpx height ✓/✗ (min 44px)
- Spacing: Xpx ✓/✗ (min 8px)
- Result: PASS/FAIL
## Summary
- Total Tests: X
- Passed: X/X (XX%)
- Partial: X/X (XX%)
- Failed: X/X (XX%)
- Total Issues: X
```
---
## Overall Test Summary Template
```markdown
# Test Summary: DD-XXX [Flow Name]
**Date:** [Date]
**Tester:** [Your name]
**Build:** [Version]
**Device:** [Device/Browser]
## Overall Result
**Status:** PASS/FAIL ([X] issues found, [X] high severity)
## Test Coverage
- Happy Path: X/X passed (XX%)
- Error States: X/X passed (XX%)
- Edge Cases: X/X passed (XX%)
- Design System: X/X compliant (XX%)
- Accessibility: X/X passed (XX%)
## Issues Summary
**Total Issues:** X
**By Severity:**
- Critical: X
- High: X
- Medium: X
- Low: X
**By Category:**
- Functionality: X
- Design System: X
- Accessibility: X
## Next Steps
1. Create issue tickets for all issues
2. Create detailed test report
3. Send to BMad for fixes
4. Schedule retest after fixes
```
---
## Screenshot Naming Convention
- Happy Path: `HP-XXX-step-X-FAIL.png`
- Error State: `ES-XXX-[description]-FAIL.png`
- Edge Case: `EC-XXX-[description]-FAIL.png`
- Design System: `DS-XXX-[component]-FAIL.png`
- Accessibility: `A11Y-XXX-[issue]-FAIL.png`

# Phase 5 [T] Acceptance Testing (Designer Validation)
**Validate implementation matches design vision and quality standards**
---
## Purpose
Acceptance Testing is where you:
1. Wait for BMad to notify you that a feature is complete
2. Run test scenarios to validate implementation
3. Create issues if problems are found
4. Iterate with BMad until quality meets standards
5. Sign off when approved
**This is Touch Point 3:** BMad → WDS (BMad integrates with WDS testing)
---
## When to Enter Acceptance Testing
**After BMad notifies you:**
```
BMad Developer: "Feature complete: DD-001 Login & Onboarding
Implemented:
✓ All 4 scenarios
✓ All error states
✓ All edge cases
✓ Design system components
Build: v0.1.0-beta.1
Device: Staging environment
Ready for designer validation.
Test scenario: test-scenarios/TS-001.yaml"
```
**You respond:**
```
WDS Analyst: "Received! Starting validation testing..."
```
---
## Acceptance Testing Steps
### Step 1: Prepare for Testing
**Gather materials:**
- [ ] Test scenario file (TS-XXX.yaml)
- [ ] Design Delivery file (DD-XXX.yaml)
- [ ] Scenario specifications (C-UX-Scenarios/)
- [ ] Design system specs (D-Design-System/)
**Set up environment:**
- [ ] Access to staging build
- [ ] Test devices ready (iOS, Android, etc.)
- [ ] Test data prepared
- [ ] Screen recording tools ready
- [ ] Note-taking tools ready
---
### Step 2: Run Happy Path Tests
**Follow test scenario:**
```yaml
happy_path:
- id: 'HP-001'
name: 'New User Complete Onboarding'
steps:
- action: 'Open app'
expected: 'Welcome screen appears'
design_ref: 'C-UX-Scenarios/01-welcome/Frontend/specifications.md'
```
**For each step:**
1. Perform the action
2. Observe the result
3. Compare to expected result
4. Check design reference
5. Mark as Pass or Fail
6. Take screenshots if issues found
7. Note any deviations
**Record results:**
```
HP-001: New User Complete Onboarding
✓ Step 1: Open app → Welcome screen appears (PASS)
✓ Step 2: Tap "Get Started" → Login/Signup choice (PASS)
✗ Step 3: Tap "Create Account" → Signup form (FAIL)
Issue: Transition too fast, feels jarring
Expected: 300ms smooth transition
Actual: Instant transition
Screenshot: screenshots/HP-001-step-3.png
```
---
### Step 3: Run Error State Tests
**Test error handling:**
```yaml
error_states:
- id: 'ES-001'
name: 'Email Already Exists'
steps:
- action: 'Enter existing email'
- action: "Tap 'Create Account'"
- expected: "Error message: 'This email is already registered...'"
```
**Verify:**
- Error messages are clear and helpful
- Error states are visually distinct
- Recovery options are provided
- User can retry without losing data
---
### Step 4: Run Edge Case Tests
**Test unusual scenarios:**
```yaml
edge_cases:
- id: 'EC-001'
name: 'User Closes App Mid-Onboarding'
steps:
- action: 'Start onboarding, complete signup'
- action: 'Close app (force quit)'
- action: 'Reopen app'
- expected: 'Resume at Family Setup'
```
**Verify:**
- Edge cases are handled gracefully
- No crashes or blank screens
- User experience is smooth
---
### Step 5: Validate Design System Compliance
**Check component usage:**
```yaml
design_system_checks:
- id: 'DS-001'
name: 'Button Components'
checks:
- component: 'Primary Button'
instances: ['Get Started', 'Create Account']
verify:
- 'Correct size (48px height)'
- 'Correct color (primary brand color)'
- 'Correct typography (16px, semibold)'
```
**Verify:**
- Components match design system specs
- Colors are correct
- Typography is correct
- Spacing is correct
- States work correctly (hover, active, disabled)
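These comparisons can be automated by reading computed styles in the browser context (e.g. via `page.$eval` and `getComputedStyle`) and diffing them against design-token values. A sketch of the diff step (function name and object shapes are illustrative):

```javascript
// List every property where a measured style deviates from the expected
// design-token value. An empty result means the component complies.
function styleDeviations(actual, expected) {
  return Object.keys(expected)
    .filter((prop) => actual[prop] !== expected[prop])
    .map((prop) => `${prop}: expected ${expected[prop]}, got ${actual[prop]}`);
}
```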
---
### Step 6: Validate SEO (Public Pages)
**Run SEO validation for all public pages:**
Reference: `../5-agentic-development/guides/SEO-VALIDATION-GUIDE.md`
```yaml
seo_checks:
- id: 'SEO-001'
name: 'Title tag correct'
verify:
- 'Title matches specification (≤ 60 chars)'
- 'Contains primary keyword'
- 'Contains brand name'
- id: 'SEO-002'
name: 'Meta description correct'
verify:
- 'Meta description matches specification'
- 'Length 150-160 characters'
- 'Contains CTA'
- id: 'SEO-003'
name: 'Heading structure valid'
verify:
- 'Exactly one H1'
- 'H1 contains primary keyword'
- 'No skipped heading levels'
- id: 'SEO-004'
name: 'Image alt text complete'
verify:
- 'All content images have alt text'
- 'Alt text in correct language'
- 'No images > 200KB (hero < 400KB)'
- id: 'SEO-005'
name: 'Social sharing tags'
verify:
- 'og:title, og:description, og:image present'
- 'twitter:card present'
- id: 'SEO-006'
name: 'Structured data valid'
verify:
- 'JSON-LD present and parseable'
- 'Schema type matches plan'
- id: 'SEO-007'
name: 'Technical SEO'
verify:
- 'Canonical URL present'
- 'hreflang tags present (if multilingual)'
- 'robots.txt exists and references sitemap'
- 'Security headers present'
```
**Verify:**
- All public pages have title, meta description, H1
- All images have alt text in all languages
- Social sharing tags present
- Structured data valid
- No oversized images
- Security headers configured
---
### Step 7: Test Accessibility
**Run accessibility tests:**
```yaml
accessibility:
- id: 'A11Y-001'
name: 'Screen Reader Navigation'
setup: 'Enable VoiceOver (iOS) or TalkBack (Android)'
verify:
- 'All buttons have descriptive labels'
- 'Form fields announce their purpose'
- 'Error messages are announced'
```
**Verify:**
- Screen reader can navigate
- All interactive elements are accessible
- Color contrast meets WCAG 2.1 AA
- Touch targets are 44×44px minimum
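Touch-target sizes can be measured with `getBoundingClientRect()` on each interactive element; comparing them to the minimum is then mechanical. A sketch (the `{ name, width, height }` shape is an assumption):

```javascript
// Return the interactive elements whose bounding box is below the
// 44x44px minimum touch-target size.
function undersizedTargets(boxes, min = 44) {
  return boxes
    .filter((b) => b.width < min || b.height < min)
    .map((b) => `${b.name}: ${b.width}×${b.height}px`);
}
```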
---
### Step 8: Create Issues
**If problems found, create issue tickets:**
**File:** `issues/ISS-XXX-description.md`
**Template:**
````markdown
# Issue: Button Color Incorrect
**ID:** ISS-001
**Severity:** High
**Status:** Open
**Delivery:** DD-001
**Test:** TS-001, Check: DS-001
## Description
Primary button color doesn't match design system specification.
## Expected
Primary button background: #2563EB (brand primary)
## Actual
Primary button background: #3B82F6 (lighter blue)
## Impact
Brand inconsistency, doesn't match design system
## Design Reference
- Design System: D-Design-System/03-Atomic-Components/Buttons/Button-Primary.md
- Design Token: tokens/colors.json → "button.primary.background"
## Steps to Reproduce
1. Open Login screen
2. Observe "Sign In" button color
## Screenshot
![Button color issue](screenshots/ISS-001.png)
## Recommendation
Update button background color to use design token:
```tsx
backgroundColor: tokens.button.primary.background, // #2563EB
```
````
**Severity levels:**
- **Critical:** Blocks usage, must fix immediately
- **High:** Major issue, fix before release
- **Medium:** Noticeable issue, fix soon
- **Low:** Minor issue, fix when possible
---
### Step 9: Create Test Report
**File:** `test-reports/TR-XXX-YYYY-MM-DD.md`
**Template:**
```markdown
# Test Report: TS-001 Login & Onboarding
**Date:** 2024-12-09
**Tester:** Sarah (Designer)
**Device:** iPhone 14 Pro (iOS 17)
**Build:** v0.1.0-beta.1
## Summary
**Overall Result:** FAIL (2 issues found, 1 high severity)
**Test Coverage:**
- Happy Path: 12/13 passed (92%)
- Error States: 3/3 passed (100%)
- Edge Cases: 2/2 passed (100%)
- Design System: 8/10 passed (80%)
- Accessibility: 2/2 passed (100%)
## Issues Found
### ISS-001: Button Color Incorrect (HIGH)
[Details...]
### ISS-002: Transition Too Fast (MEDIUM)
[Details...]
## Recommendations
### What Worked Well
- Error handling is clear and helpful
- Accessibility is excellent
- User flow is intuitive
### What Needs Improvement
- Design system compliance (80% → target 95%)
- Transition animations need polish
### Next Steps
1. Fix ISS-001 (button color) - CRITICAL
2. Fix ISS-002 (transition speed)
3. Retest with updated build
## Sign-off
**Status:** NOT APPROVED
**Reason:** High severity issue + design system compliance below threshold
**Retest Required:** Yes
```
---
### Step 10: Send to BMad
**Notify BMad of results:**
```
WDS Analyst: "Testing complete for DD-001.
Results: 2 issues found
- ISS-001: Button color incorrect (HIGH)
- ISS-002: Transition too fast (MEDIUM)
Test report: test-reports/TR-001-2024-12-09.md
Issues: issues/ISS-001.md, issues/ISS-002.md
Please fix and notify when ready for retest."
```
**BMad responds:**
```
BMad Developer: "Issues received. Fixing:
- ISS-001: Button color
- ISS-002: Transition speed
Will notify when ready for retest."
```
---
### Step 11: Iterate Until Approved
**BMad fixes issues:**
```
BMad Developer: "Issues fixed.
Build: v0.1.0-beta.2
Ready for retest."
```
**You retest:**
- Run test scenarios again
- Verify issues are fixed
- Check for new issues
- Update test report
**If approved:**
```
WDS Analyst: "Retest complete!
All issues resolved.
Design system compliance: 98%
✅ APPROVED - Ready to ship!
Test report: test-reports/TR-001-2024-12-15.md"
```
**If not approved:**
- Create new issues
- Send to BMad
- Repeat until approved
---
## Sign-Off Criteria
**Required for approval:**
- [ ] All critical tests pass
- [ ] No critical or high severity issues
- [ ] Design system compliance > 95%
- [ ] Accessibility tests pass
- [ ] SEO validation passes (all public pages)
- [ ] Usability metrics meet targets
- [ ] All acceptance criteria met
**Designer approval:**
```
I confirm that the implemented feature matches the design
specifications and meets the quality standards defined in
the test scenario.
Designer: ________________
Date: ________________
```
---
## Deliverables
### Test Report
**Location:** `test-reports/TR-XXX-YYYY-MM-DD.md`
**Contents:**
- Test summary (date, tester, device, build)
- Overall result (pass/fail/partial)
- Test coverage (happy path, errors, edge cases, etc.)
- Issues found (with severity and details)
- Recommendations (what worked, what needs improvement)
- Sign-off status
---
### Issue Tickets
**Location:** `issues/ISS-XXX-description.md`
**Contents:**
- Issue metadata (id, severity, status, delivery, test)
- Description
- Expected vs Actual
- Impact
- Design reference
- Steps to reproduce
- Screenshot/video
- Recommendation
---
## Common Issues
### Design System Violations
**Button color incorrect:**
- Expected: Design token color
- Actual: Hardcoded color
- Fix: Use design token
**Typography wrong:**
- Expected: 16px, Semibold
- Actual: 14px, Regular
- Fix: Use design system styles
**Spacing inconsistent:**
- Expected: 20px between elements
- Actual: 15px, 18px, 23px
- Fix: Use spacing tokens
---
### Interaction Issues
**Transition too fast:**
- Expected: 300ms smooth transition
- Actual: Instant transition
- Fix: Add transition animation
**Touch target too small:**
- Expected: 44×44px minimum
- Actual: 32×32px
- Fix: Increase button size
**No loading state:**
- Expected: Spinner during load
- Actual: Blank screen
- Fix: Add loading indicator
---
### Accessibility Issues
**Missing labels:**
- Expected: Descriptive button labels
- Actual: Generic "Button" label
- Fix: Add aria-label
**Low contrast:**
- Expected: 4.5:1 contrast ratio
- Actual: 3:1 contrast ratio
- Fix: Increase text color contrast
**Not keyboard accessible:**
- Expected: Can navigate with keyboard
- Actual: Keyboard navigation doesn't work
- Fix: Add keyboard support
---
## Tips for Success
### DO ✅
**Be thorough:**
- Test every step in test scenario
- Check all design references
- Verify all acceptance criteria
- Don't skip edge cases
**Be specific:**
- Clear issue descriptions
- Include screenshots/videos
- Reference design specs
- Provide recommendations
**Be collaborative:**
- Communicate clearly with BMad
- Answer questions promptly
- Appreciate good work
- Focus on quality, not blame
**Be iterative:**
- Expect multiple rounds
- Test quickly and provide feedback
- Don't wait for perfection
- Sign off when quality is good enough
### DON'T ❌
**Don't be vague:**
- "It doesn't look right" ❌
- "Button color is #3B82F6, should be #2563EB" ✅
**Don't be nitpicky:**
- Focus on critical issues first
- Don't block on minor details
- Remember: good enough to ship
**Don't disappear:**
- Respond to BMad questions
- Retest promptly
- Stay engaged until sign-off
**Don't skip documentation:**
- Always create test reports
- Always document issues
- Always provide clear feedback
---
## Next Steps
**After Acceptance Testing (Sign-off):**
1. **Feature ships** to production
2. **Monitor** user feedback and metrics
3. **Iterate** based on real-world usage
4. **Continue** with next delivery (return to Phase 4-5)
**If more flows in progress:**
- Test next completed flow
- Continue parallel work
- Maintain quality standards
---
## Resources
**Templates:**
- `templates/test-scenario.template.yaml`
- `templates/test-report.template.md` (to be created)
- `templates/issue.template.md` (to be created)
**Specifications:**
- `src/core/resources/wds/integration-guide.md`
- Test scenario files in `test-scenarios/`
---
**Acceptance Testing is where you ensure quality! Test thoroughly, communicate clearly, and sign off with confidence!** ✅✨

---
name: 'step-01-define-question'
description: 'Articulate exactly what you need to understand about the codebase before reading a single file'
# File References
nextStepFile: './step-02-scan-codebase.md'
---
# Step 1: Define Question
## STEP GOAL:
Articulate exactly what you need to understand about the codebase before reading a single file.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS deliver output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise; the user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on defining clear analysis questions, scope, expected output, and time box
- 🚫 FORBIDDEN to begin scanning or reading any codebase files — that is a later step
- 💬 Approach: Help user articulate their question clearly and set boundaries before any exploration
- 📋 If user provides vague questions, help them sharpen and prioritize
## EXECUTION PROTOCOLS:
- 🎯 Produce a clear, scoped analysis question with defined boundaries
- 💾 Save the question, scope, output format, and time box for subsequent steps
- 📖 Reference the category table and scope options to guide user
- 🚫 Do not open, scan, or explore any code during this step
## CONTEXT BOUNDARIES:
- Available context: User's initial request or motivation for analysis
- Focus: Defining what to analyze and how to scope it
- Limits: No codebase exploration yet — purely planning
- Dependencies: None — this is the first step
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Articulate the Question
Write down what you need to understand. Common analysis questions:
| Category | Example Questions |
|----------|-------------------|
| **Architecture** | How is this application structured? What patterns does it use? |
| **Dependencies** | What does this module depend on? What would break if I change it? |
| **Patterns** | What conventions does the team follow? Where are they inconsistent? |
| **Performance** | Where are the bottlenecks? What queries are expensive? |
| **Data flow** | How does data move from user input to database and back? |
| **Integration** | What external services does this connect to? How? |
| **Security** | Where is authentication handled? Are there exposed endpoints? |
Write the question in plain language. If you have multiple questions, prioritize them.
### 2. Define Scope
Determine what is in and out of scope:
- **Entire codebase** — Full architecture review (use for onboarding or documentation)
- **Specific module** — One feature area, service, or package
- **Cross-cutting concern** — Authentication, error handling, logging across the codebase
- **Integration boundary** — How two systems communicate
Be explicit about what you will NOT analyze. Unbounded scope leads to analysis paralysis.
### 3. Define Expected Output
Decide what the analysis should produce:
- **Architecture map** — High-level component diagram with relationships
- **Dependency graph** — Module dependencies, import chains
- **Pattern catalog** — Design patterns used, where, and how consistently
- **API map** — Endpoints, request/response shapes, authentication
- **Data model** — Entity relationships, storage patterns
- **Risk assessment** — Tech debt, security concerns, fragile areas
### 4. Set Time Box
Decide how long the analysis should take. Recommended:
| Scope | Time Box |
|-------|----------|
| Single module | 15-30 min |
| Feature area | 30-60 min |
| Full codebase | 1-2 hours |
If the time box expires, document what you found and what remains unexplored.
### 5. Verify Checklist
- [ ] Question is written in plain language
- [ ] Scope boundaries are defined (what is in, what is out)
- [ ] Expected output format is chosen
- [ ] Time box is set
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Scan Codebase"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the user has confirmed a clear question, scope, output format, and time box will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Question is written in plain language
- Scope boundaries are defined (what is in, what is out)
- Expected output format is chosen
- Time box is set
### ❌ SYSTEM FAILURE:
- Beginning codebase exploration before question is defined
- Proceeding without clear scope boundaries
- Skipping time box definition
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,179 @@
---
name: 'step-02-scan-codebase'
description: 'Build a mental model of the codebase through systematic exploration of structure, tech stack, and entry points'
# File References
nextStepFile: './step-03-map-architecture.md'
---
# Step 2: Scan Codebase
## STEP GOAL:
Build a mental model of the codebase through systematic exploration of structure, tech stack, and entry points.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on scanning structure, tech stack, entry points, configuration, and build pipeline
- 🚫 FORBIDDEN to begin deep architectural analysis — that is the next step
- 💬 Approach: Systematically explore top-level structure and record observations with user
- 📋 Document observations as you go — structure, stack, patterns
## EXECUTION PROTOCOLS:
- 🎯 Map directory structure, identify tech stack, locate entry points
- 💾 Document structure overview, tech stack, entry points, configuration, pipeline, and initial patterns
- 📖 Use the question and scope defined in Step 1 to guide scanning focus
- 🚫 Do not trace data flows or map dependencies in detail yet
## CONTEXT BOUNDARIES:
- Available context: Analysis question, scope, output format, and time box from Step 1
- Focus: High-level codebase reconnaissance
- Limits: No deep architectural mapping — stay at survey level
- Dependencies: Step 1 must be complete (question and scope defined)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Read Project Structure
Start with a directory listing of the root and first two levels:
- Note the top-level folders and what they suggest (e.g., `src/`, `lib/`, `tests/`, `docs/`)
- Identify organizational patterns: by feature, by layer, by module, flat
- Note any unusual or project-specific folders
Record observations:
```
Root structure:
- src/ — Application source code
- tests/ — Test suites
- scripts/ — Build and utility scripts
- docs/ — Documentation
- ...
```
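The two-level listing above can be generated with a short script rather than by hand. This is a minimal sketch; the depth limit and the rule for skipping hidden folders like `.git` are assumptions you may want to adjust for the project being scanned:

```python
from pathlib import Path


def list_structure(root: str, max_depth: int = 2) -> list[str]:
    """Return an indented listing of directories up to max_depth levels deep."""
    root_path = Path(root)
    entries = []
    for path in sorted(root_path.rglob("*")):
        rel = path.relative_to(root_path)
        if len(rel.parts) > max_depth:
            continue  # stay at survey level: top two levels only
        if any(part.startswith(".") for part in rel.parts):
            continue  # skip hidden folders such as .git, .venv
        if path.is_dir():
            indent = "  " * (len(rel.parts) - 1)
            entries.append(f"{indent}{rel.parts[-1]}/")
    return entries


# Run from the repo root to print the survey listing
print("\n".join(list_structure(".")))
```

Paste the output into your observations and annotate each top-level folder with what it suggests.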
### 2. Identify Tech Stack and Frameworks
Read key configuration files:
| File | Reveals |
|------|---------|
| `package.json` / `requirements.txt` / `Cargo.toml` | Dependencies, scripts, project metadata |
| `tsconfig.json` / `pyproject.toml` | Language configuration |
| `.env.example` / `.env.template` | Environment variables needed |
| `docker-compose.yml` / `Dockerfile` | Container setup, services |
| `Makefile` / `justfile` | Build commands |
Document the stack:
```
Tech stack:
- Language: TypeScript 5.x
- Framework: Next.js 14
- Database: PostgreSQL via Prisma
- Testing: Vitest + Playwright
- Deployment: Docker → AWS ECS
```
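For JavaScript/TypeScript projects, most of the stack summary above can be pulled straight from `package.json`. A minimal sketch, assuming a standard manifest layout; the field names are the npm conventions, and the fallback values are illustrative:

```python
import json
from pathlib import Path


def summarize_package_json(path: str = "package.json") -> dict:
    """Extract the dependency and script summary recorded during a scan."""
    data = json.loads(Path(path).read_text())
    return {
        "name": data.get("name", "unknown"),
        "dependencies": sorted(data.get("dependencies", {})),
        "devDependencies": sorted(data.get("devDependencies", {})),
        "scripts": sorted(data.get("scripts", {})),
    }
```

The equivalent for Python projects would read `pyproject.toml` or `requirements.txt`; the point is the same — derive the stack summary from the manifest rather than guessing.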
### 3. Map Entry Points
Find where execution starts:
- **Web app:** Main page/route, layout files, app entry
- **API:** Server entry, route definitions, middleware chain
- **CLI:** Main command file, argument parsing
- **Library:** Exported modules, public API surface
### 4. Read Configuration Files
Scan for configuration that shapes behavior:
- Environment configuration (dev/staging/prod differences)
- Feature flags or toggles
- Routing configuration
- Database connection setup
- Third-party service configuration (auth, payments, email)
### 5. Identify Build and Deploy Pipeline
Look for:
- CI/CD configuration (`.github/workflows/`, `.gitlab-ci.yml`, `Jenkinsfile`)
- Build scripts and what they produce
- Deployment targets and process
- Environment-specific builds
### 6. Note Patterns Observed
As you scan, note initial observations about:
- Code organization style (monolith, monorepo, microservices)
- Naming conventions (files, folders, variables)
- Test organization (co-located, separate folder, both)
- Documentation level (thorough, sparse, outdated)
- Apparent code quality signals (linting config, formatting config, type strictness)
### 7. Verify Checklist
- [ ] Directory structure mapped (top 2 levels)
- [ ] Tech stack and versions identified
- [ ] Entry points located
- [ ] Configuration files read
- [ ] Build/deploy pipeline identified
- [ ] Initial patterns and conventions noted
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Map Architecture"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the codebase scan is complete and observations documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Directory structure mapped (top 2 levels)
- Tech stack and versions identified
- Entry points located
- Configuration files read
- Build/deploy pipeline identified
- Initial patterns and conventions noted
### ❌ SYSTEM FAILURE:
- Jumping into deep architecture analysis before completing the scan
- Skipping configuration or build pipeline investigation
- Not documenting observations systematically
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,186 @@
---
name: 'step-03-map-architecture'
description: 'Build a detailed map of components, data flow, dependencies, and patterns as the analytical core of the workflow'
# File References
nextStepFile: './step-04-document-findings.md'
---
# Step 3: Map Architecture
## STEP GOAL:
Build a detailed map of components, data flow, dependencies, and patterns. This is the analytical core of the workflow.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on mapping components, tracing data flows, identifying layers, integrations, state, dependencies, and patterns
- 🚫 FORBIDDEN to begin writing the final architecture document — that is the next step
- 💬 Approach: Deep dive into codebase structure with user, tracing flows and mapping dependencies collaboratively
- 📋 Use the scan observations from Step 2 as your starting point
## EXECUTION PROTOCOLS:
- 🎯 Create comprehensive component inventory, data flow traces, and dependency maps
- 💾 Document component map, data flows, layers, integrations, state management, dependencies, and patterns
- 📖 Reference scan results from Step 2 and original questions from Step 1
- 🚫 Do not write the final document yet — focus on raw analysis
## CONTEXT BOUNDARIES:
- Available context: Analysis question and scope from Step 1; scan observations from Step 2
- Focus: Deep architectural analysis — components, flows, dependencies, patterns
- Limits: No final document writing — raw analysis only
- Dependencies: Steps 1 and 2 must be complete
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Map Component and Module Structure
For each major module or component area:
- What is its responsibility?
- What does it expose (public API, exports)?
- What does it encapsulate (internal details)?
- How large is it (files, lines, complexity)?
Create a component inventory:
```
Components:
- AuthModule — Handles login, registration, session management
- UserService — CRUD operations for user profiles
- PaymentGateway — Stripe integration, invoice generation
- NotificationService — Email + push notifications
- ...
```
### 2. Trace Data Flow
Pick 2-3 key user actions and trace data from start to finish:
1. **User action** (click, form submit, page load)
2. **Frontend handling** (event handler, state update, API call)
3. **API layer** (route, middleware, validation)
4. **Business logic** (service, domain logic, transformations)
5. **Data layer** (database query, cache, external API)
6. **Response path** (back through layers to UI)
Document at least one complete flow end-to-end.
### 3. Identify Layers
Determine the architectural layers and their boundaries:
| Layer | Purpose | Location |
|-------|---------|----------|
| **Presentation** | UI rendering, user interaction | `src/components/`, `src/pages/` |
| **Application** | Use cases, orchestration | `src/services/`, `src/hooks/` |
| **Domain** | Business rules, entities | `src/models/`, `src/domain/` |
| **Infrastructure** | Database, external APIs, file system | `src/db/`, `src/integrations/` |
Note: Not all codebases have clean layers. Document what you actually find, including layer violations.
### 4. Map External Integrations
List every external system the codebase talks to:
| Integration | Purpose | Protocol | Location in Code |
|-------------|---------|----------|-----------------|
| PostgreSQL | Primary data store | SQL via ORM | `src/db/` |
| Stripe | Payment processing | REST API | `src/payments/` |
| SendGrid | Email delivery | REST API | `src/notifications/` |
| Redis | Caching, sessions | Redis protocol | `src/cache/` |
### 5. Document State Management
How does the application manage state?
- **Frontend:** Global store (Redux, Zustand), context, local state, URL state
- **Backend:** Session storage, database, cache, in-memory
- **Shared:** How does state sync between client and server?
### 6. Create Dependency Graph
Map which modules depend on which:
- Identify core modules that many things depend on (high fan-in)
- Identify modules that depend on many things (high fan-out)
- Look for circular dependencies
- Note tightly coupled vs loosely coupled areas
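Fan-in and fan-out fall out directly once you have the import edges listed as (importer, imported) pairs. A minimal sketch — the module names are illustrative, and extracting the edge list from real import statements is left to your tooling:

```python
from collections import defaultdict


def fan_metrics(edges: list[tuple[str, str]]) -> dict[str, dict[str, int]]:
    """Compute fan-in (dependents) and fan-out (dependencies) per module."""
    metrics: dict[str, dict[str, int]] = defaultdict(
        lambda: {"fan_in": 0, "fan_out": 0}
    )
    for importer, imported in edges:
        metrics[importer]["fan_out"] += 1  # importer depends on one more module
        metrics[imported]["fan_in"] += 1   # imported gains one more dependent
    return dict(metrics)


# Hypothetical edge list: A -> Core, B -> Core, B -> D, D -> Core
metrics = fan_metrics([("A", "Core"), ("B", "Core"), ("B", "D"), ("D", "Core")])
```

Modules with high fan-in are your core abstractions (change them carefully); modules with high fan-out are coordination points worth reviewing for coupling.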
### 7. Identify Architectural Patterns
Document patterns you observe:
- **Structural:** MVC, MVVM, Clean Architecture, Hexagonal, Monolith, Microservices
- **Data:** Repository pattern, Active Record, Data Mapper, CQRS
- **Communication:** REST, GraphQL, WebSocket, Event-driven, Message queue
- **Error handling:** Centralized, per-module, try-catch patterns, error boundaries
- **Testing:** Unit/integration/e2e split, mocking approach, test data strategy
### 8. Verify Checklist
- [ ] Component/module inventory created
- [ ] At least one data flow traced end-to-end
- [ ] Architectural layers identified
- [ ] External integrations mapped
- [ ] State management approach documented
- [ ] Key dependencies mapped (especially high fan-in/fan-out)
- [ ] Architectural patterns identified
### 9. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Document Findings"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the architectural mapping is complete with components, flows, and dependencies documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Component/module inventory created
- At least one data flow traced end-to-end
- Architectural layers identified
- External integrations mapped
- State management approach documented
- Key dependencies mapped (especially high fan-in/fan-out)
- Architectural patterns identified
### ❌ SYSTEM FAILURE:
- Skipping data flow tracing
- Not mapping external integrations
- Beginning final document creation before analysis is complete
- Ignoring dependency relationships
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,242 @@
---
name: 'step-04-document-findings'
description: 'Create a structured architecture document that answers the original questions, includes diagrams, and provides actionable recommendations'
# File References
activityWorkflowFile: '../workflow-analysis.md'
---
# Step 4: Document Findings
## STEP GOAL:
Create a structured architecture document that answers the original questions, includes diagrams, and provides actionable recommendations.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on creating the final architecture document with diagrams and recommendations
- 🚫 FORBIDDEN to redo analysis — use findings from Steps 2 and 3
- 💬 Approach: Collaboratively structure and write the document with user, ensuring original questions are answered
- 📋 Include at least one Mermaid diagram and prioritized recommendations
## EXECUTION PROTOCOLS:
- 🎯 Produce a complete architecture document answering the original questions
- 💾 Save the document to the project's output location
- 📖 Reference all findings from Steps 1-3
- 🚫 Do not restart analysis — synthesize existing findings
## CONTEXT BOUNDARIES:
- Available context: Question and scope from Step 1; scan from Step 2; architecture map from Step 3
- Focus: Document synthesis, diagrams, risk assessment, recommendations
- Limits: No new analysis — document what was found
- Dependencies: Steps 1, 2, and 3 must be complete
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Create the Architecture Document
Create a document in the output folder that answers the questions defined in Step 01. Use this structure:
```markdown
# Architecture Analysis: [Project Name]
**Date:** [Date]
**Scope:** [What was analyzed]
**Questions:** [Original questions from Step 01]
## Summary
[2-3 sentence overview of key findings]
## Tech Stack
[Language, framework, database, infrastructure — from Step 02]
## Architecture Overview
[High-level description of how the system is structured]
### Architecture Diagram
[Mermaid diagram — see below]
## Component Map
[Module inventory with responsibilities — from Step 03]
## Data Flow
[End-to-end flow traces — from Step 03]
## Dependencies
[Key dependency relationships, high fan-in/fan-out modules]
## Patterns and Conventions
[Architectural and code patterns observed]
## Risks and Tech Debt
[Issues found during analysis]
## Recommendations
[Actionable next steps]
```
### 2. Include Diagrams
Use Mermaid syntax for diagrams that render in Markdown viewers.
**Architecture diagram example:**
```mermaid
graph TD
Client[Browser] --> API[API Server]
API --> Auth[Auth Service]
API --> Users[User Service]
API --> Payments[Payment Service]
Users --> DB[(PostgreSQL)]
Payments --> Stripe[Stripe API]
Auth --> Redis[(Redis)]
```
**Dependency diagram example:**
```mermaid
graph LR
A[Module A] --> C[Core]
B[Module B] --> C
B --> D[Module D]
D --> C
```
**Data flow example:**
```mermaid
sequenceDiagram
User->>Frontend: Click "Submit"
Frontend->>API: POST /orders
API->>Validation: Validate input
API->>OrderService: Create order
OrderService->>Database: INSERT order
OrderService->>PaymentService: Charge card
PaymentService->>Stripe: Create charge
Stripe-->>PaymentService: Charge confirmed
PaymentService-->>API: Payment success
API-->>Frontend: 201 Created
Frontend-->>User: Show confirmation
```
### 3. Document Risks and Tech Debt
For each risk or debt item found during analysis:
| Risk | Severity | Location | Impact |
|------|----------|----------|--------|
| No input validation on `/api/admin/` routes | High | `src/routes/admin.ts` | Security vulnerability |
| Circular dependency between User and Order modules | Medium | `src/services/` | Fragile, hard to test |
| No error handling in payment flow | High | `src/payments/stripe.ts` | Silent failures |
| Outdated dependencies (2 major versions behind) | Medium | `package.json` | Security + compatibility |
### 4. Write Recommendations
Each recommendation should be:
- **Specific** — Name the module, file, or pattern
- **Actionable** — Describe what to do, not just what is wrong
- **Prioritized** — Order by impact and effort
Example:
```markdown
## Recommendations
1. **Add input validation to admin routes** (High priority, low effort)
- Add Zod schemas to all `/api/admin/` endpoints
- Estimated: 2-3 hours
2. **Break circular dependency between User and Order** (Medium priority, medium effort)
- Extract shared types to a common module
- Use dependency injection for cross-service calls
- Estimated: 4-6 hours
3. **Add error handling to payment flow** (High priority, medium effort)
- Wrap Stripe calls in try-catch with structured error responses
- Add retry logic for transient failures
- Estimated: 3-4 hours
```
### 5. Save Output
Save the document to the project's output location. If no output location is defined, save to:
- `docs/architecture/` in the analyzed project, or
- The agent's experiences folder for session insights
### 6. Verify Checklist
- [ ] Document answers the original questions from Step 01
- [ ] Summary is clear and concise (2-3 sentences)
- [ ] At least one Mermaid diagram included
- [ ] Risks and tech debt listed with severity
- [ ] Recommendations are specific, actionable, and prioritized
- [ ] Document saved to output folder
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the architecture document is complete and saved will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Document answers the original questions from Step 01
- Summary is clear and concise (2-3 sentences)
- At least one Mermaid diagram included
- Risks and tech debt listed with severity
- Recommendations are specific, actionable, and prioritized
- Document saved to output folder
### ❌ SYSTEM FAILURE:
- Document does not answer original questions
- No diagrams included
- Recommendations are vague or not prioritized
- Document not saved
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,157 @@
---
name: 'step-01-scope-and-plan'
description: 'Read the approved spec, extract every deliverable, and create a sequenced implementation plan'
# File References
nextStepFile: './step-02-setup-environment.md'
---
# Step 1: Scope and Plan
## STEP GOAL:
Read the approved spec, extract every deliverable, and create a sequenced implementation plan.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on reading the spec, inventorying work items, mapping dependencies, and creating the implementation plan
- 🚫 FORBIDDEN to begin implementing or coding anything — that is a later step
- 💬 Approach: Walk through the spec with user, extract deliverables collaboratively, and sequence them
- 📋 Every acceptance criterion must be extracted into a standalone checklist
## EXECUTION PROTOCOLS:
- 🎯 Produce a complete, sequenced implementation plan with all work items inventoried
- 💾 Record inventory, implementation order, and acceptance criteria in the dialog file
- 📖 Reference the approved spec document thoroughly
- 🚫 Do not write any implementation code
## CONTEXT BOUNDARIES:
- Available context: Approved specification document
- Focus: Planning and scoping — deliverables, dependencies, order, acceptance criteria
- Limits: No implementation, no environment setup
- Dependencies: An approved spec must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Read the Approved Specification
Open the approved spec (page spec, feature spec, or component spec) and read it end to end. Do not skim. Note:
- Every page, component, and feature described
- All properties, states, and variants for each component
- Responsive behavior and breakpoints
- Data requirements (APIs, models, mock data)
- Acceptance criteria (these are your definition of done)
- Design tokens and design system references
### 2. Inventory All Work Items
Create a flat list of everything that needs to be built. Be specific — not "build the form" but each field, validation rule, and submit behavior. Include:
- Pages / routes
- Components (new and modified)
- Data models or API integrations
- State management additions
- Styling / responsive work
- Tests to write
### 3. Identify Dependencies and Determine Order
Map which items depend on which. Common patterns:
- Shared components before pages that use them
- Data layer before UI that consumes it
- Layout / structure before content sections
- Base styles / tokens before component styles
Order the work so you never build something before its dependencies exist.
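This ordering is a topological sort of the dependency map, which the standard library handles directly. A minimal sketch — the work item names are hypothetical examples, not part of any real spec:

```python
from graphlib import TopologicalSorter


def implementation_order(depends_on: dict[str, set[str]]) -> list[str]:
    """Order work items so every item comes after everything it requires.

    depends_on maps each item to the items that must exist first.
    Raises graphlib.CycleError if the plan has a circular dependency.
    """
    return list(TopologicalSorter(depends_on).static_order())


# Hypothetical plan: tokens first, then shared pieces, then the page
order = implementation_order({
    "BookingPage": {"BookingForm", "Layout"},
    "BookingForm": {"DesignTokens"},
    "Layout": {"DesignTokens"},
    "DesignTokens": set(),
})
```

A `CycleError` here is a planning signal, not a tooling problem: two work items each claim to need the other, so the plan must be restructured before implementation starts.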
### 4. Estimate Effort
For each work item, note relative size (small / medium / large). This is not for time tracking — it is for spotting items that are too large and should be broken down further. Any "large" item should be split into sub-tasks.
### 5. Create the Branch
Create a feature branch from the appropriate base branch. Use a descriptive name that ties to the spec (e.g., `feature/booking-page`, `feature/user-profile-card`).
### 6. List All Acceptance Criteria
Extract every acceptance criterion from the spec into a standalone checklist. This checklist will be used in Step 04 (Verify) to confirm completion. Format:
```markdown
## Acceptance Criteria
- [ ] Criterion from spec
- [ ] Criterion from spec
- [ ] ...
```
Record the inventory, implementation order, and acceptance criteria in the dialog file.
### 7. Verify Checklist
- [ ] Spec read in full — no sections skipped
- [ ] All pages, components, and features inventoried
- [ ] Dependencies mapped, implementation order defined
- [ ] Large items broken into sub-tasks
- [ ] Feature branch created
- [ ] All acceptance criteria extracted into a checklist
- [ ] Plan documented in dialog file
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Setup Environment"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the implementation plan is complete with inventoried work items, sequenced order, and acceptance criteria will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Spec read in full — no sections skipped
- All pages, components, and features inventoried
- Dependencies mapped, implementation order defined
- Large items broken into sub-tasks
- Feature branch created
- All acceptance criteria extracted into a checklist
- Plan documented in dialog file
### ❌ SYSTEM FAILURE:
- Beginning implementation before planning is complete
- Skipping sections of the spec
- Not extracting acceptance criteria
- Not creating a feature branch
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,167 @@
---
name: 'step-02-setup-environment'
description: 'Verify the development environment is ready, all dependencies are installed, and the project runs cleanly before writing any new code'
# File References
nextStepFile: './step-03-implement.md'
---
# Step 2: Setup Environment
## STEP GOAL:
Verify the development environment is ready, all dependencies are installed, and the project runs cleanly before writing any new code.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on verifying environment, installing dependencies, starting dev server, and establishing test baseline
- 🚫 FORBIDDEN to begin implementing features — that is the next step
- 💬 Approach: Walk through environment verification systematically with user
- 📋 Document any pre-existing issues so they are not confused with regressions later
## EXECUTION PROTOCOLS:
- 🎯 Confirm environment is clean and ready for implementation
- 💾 Document test baseline and any pre-existing issues in the dialog file
- 📖 Reference project configuration files and existing documentation
- 🚫 Do not write any feature code during this step
## CONTEXT BOUNDARIES:
- Available context: Implementation plan from Step 1; project configuration files
- Focus: Environment verification, dependency installation, baseline establishment
- Limits: No feature implementation
- Dependencies: Step 1 must be complete (plan exists, branch created)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Verify Tech Stack Requirements
Check that the project's required tooling is available:
- Runtime (Node.js version, Python version, etc.)
- Package manager (npm, yarn, pnpm, pip, etc.)
- Build tools (Vite, Webpack, Turbopack, etc.)
- Any CLI tools the project depends on
If version mismatches exist, resolve them now — not mid-implementation.
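A quick way to make "new enough" concrete is to compare version tuples rather than raw strings. The sketch below is a minimal illustration; the version numbers are example values, not requirements from any particular project:

```python
# Sketch: compare an installed runtime version against a project minimum.
# Version strings here are example values, not real project requirements.
def parse_version(version: str) -> tuple:
    """Turn 'v18.17.0' or '18.17.0' into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.lstrip("v").split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    return parse_version(installed) >= parse_version(minimum)

print(meets_minimum("v18.17.0", "18.0.0"))  # True: 18.17.0 satisfies >= 18.0.0
```

Tuple comparison avoids the classic string-comparison trap where "2.10.0" sorts before "2.9.0".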
### 2. Install Dependencies
Run the project's install command. Watch the output for:
- Deprecation warnings (note but do not fix unless blocking)
- Peer dependency conflicts (resolve if they cause build failures)
- Missing system-level dependencies
```
npm install # or yarn, pnpm install, pip install -r requirements.txt, etc.
```
### 3. Start the Development Server
Run the dev server and confirm:
- The project builds without errors
- The existing pages / routes load in the browser
- Hot reload or watch mode works
- No console errors on existing pages
### 4. Verify Design System Access
If the project uses a design system or design tokens, confirm you can access them:
- Token files (colors, spacing, typography) are present and importable
- Component library is installed and renders correctly
- Icon sets or asset libraries are available
- Any Figma-to-code output is up to date
### 5. Create Project Structure (If Needed)
If the spec requires new directories or organizational structure, create them now:
- New page directories
- New component directories
- Test file locations
- Any configuration files for new routes
Follow the existing project conventions for naming and placement.
### 6. Run Existing Tests to Establish Baseline
Run the full test suite before touching anything:
```
npm test # or the project's equivalent
```
Record the result:
- **All pass:** Good baseline. Any future failure is from your changes.
- **Some fail:** Document which tests fail BEFORE you start. These are pre-existing failures and not your responsibility, but you need to know they exist so you do not mistake them for regressions later.
- **No tests exist:** Note this. Consider whether the spec requires tests (Step 3 will address test writing).
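One way to keep the baseline honest is to record the failing test names as a set and diff against it later. The test names below are hypothetical placeholders, not from a real suite:

```python
# Sketch: separate regressions from pre-existing failures with set operations.
# Test names are hypothetical placeholders, not from a real suite.
baseline_failures = {"cart.test: applies coupon", "auth.test: expired token"}
current_failures = {"cart.test: applies coupon", "nav.test: renders menu"}

regressions = current_failures - baseline_failures    # new breakage: yours to fix
pre_existing = current_failures & baseline_failures   # known before you started
fixed = baseline_failures - current_failures          # failures that now pass

print(sorted(regressions))  # ['nav.test: renders menu']
```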
### 7. Verify Checklist
- [ ] Runtime and tooling versions verified
- [ ] Dependencies installed without blocking errors
- [ ] Dev server starts and existing pages load
- [ ] Design tokens / design system accessible
- [ ] New directories created (if needed)
- [ ] Existing test suite run, baseline documented
- [ ] Any pre-existing issues noted in dialog file
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Implement"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the environment is verified clean, dependencies installed, and test baseline documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Runtime and tooling versions verified
- Dependencies installed without blocking errors
- Dev server starts and existing pages load
- Design tokens / design system accessible
- Existing test suite run, baseline documented
- Pre-existing issues noted
### ❌ SYSTEM FAILURE:
- Starting implementation with unresolved environment issues
- Not establishing test baseline
- Not documenting pre-existing failures
- Skipping dependency installation
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-03-implement'
description: 'Build every feature in the implementation plan, one at a time, following existing codebase patterns'
# File References
nextStepFile: './step-04-verify.md'
---
# Step 3: Implement
## STEP GOAL:
Build every feature in the implementation plan, one at a time, following existing codebase patterns.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on implementing plan items one at a time in dependency order
- 🚫 FORBIDDEN to skip ahead or implement multiple items simultaneously
- 💬 Approach: Work through each item methodically — re-read spec, implement, test, commit
- 📋 Follow existing codebase patterns for consistency; document any deviations
## EXECUTION PROTOCOLS:
- 🎯 Complete every work item from the implementation plan in order
- 💾 Commit incrementally after each logical unit of work
- 📖 Re-read the relevant spec section before implementing each item
- 🚫 Do not accumulate large uncommitted changesets
## CONTEXT BOUNDARIES:
- Available context: Implementation plan from Step 1; environment baseline from Step 2; approved spec
- Focus: Building features one at a time in dependency order
- Limits: Only implement what is in the plan — no scope creep
- Dependencies: Steps 1 and 2 must be complete
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Work Through the Plan Item by Item
Take the first item from your implementation order (Step 1). For each item:
1. Re-read the relevant section of the spec
2. Identify the acceptance criteria that apply to this item
3. Implement it
4. Test it (see section 5 below)
5. Commit it
6. Move to the next item
Do not jump ahead. Do not implement multiple items simultaneously. Finishing one thing completely before starting the next prevents half-done work from accumulating.
### 2. Follow Existing Codebase Patterns
Before writing new code, study how the existing codebase handles the same concerns:
| Concern | Look At |
|---------|---------|
| **File naming** | How are existing components/pages named? |
| **Component structure** | Props interface, state management, lifecycle |
| **Styling approach** | CSS modules, Tailwind, styled-components, etc. |
| **Error handling** | How do existing features handle errors? |
| **Data fetching** | What patterns are used for API calls? |
| **State management** | Local state, context, store — what is the convention? |
| **Testing patterns** | How are existing tests structured? |
Match these patterns. Consistency with the codebase is more important than what you consider "better."
### 3. Reference the Spec for Each Component
For every component you build, check the spec for:
- **Properties:** What props/attributes does it accept?
- **States:** Default, hover, active, disabled, loading, error, empty
- **Responsive behavior:** How does it change at each breakpoint?
- **Content:** Exact text, labels, placeholder copy
- **Interactions:** Click handlers, form submissions, navigation
- **Edge cases:** What happens with long text, missing data, zero items?
Do not rely on memory. Re-read the spec section each time.
### 4. Use Design System Tokens
If the project has design tokens or a design system:
- Use token variables for colors, spacing, typography — never hardcode values
- Use existing components from the design system before creating new ones
- If you need a new component, build it using the same token system
- Check that your implementation visually matches at design-review zoom levels
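Hardcoded values that slipped past the token system can be flagged mechanically. The CSS sample and the `var(--...)` token convention below are illustrative assumptions, not a prescription for any particular design system:

```python
import re

# Sketch: flag hardcoded hex colors that should be design tokens.
# The CSS sample and the var(--...) convention are illustrative assumptions.
HEX_COLOR = re.compile(r"#[0-9a-fA-F]{3,8}\b")

sample_css = """\
.button { background: var(--color-primary); }  /* token: fine */
.badge  { background: #ff6b35; }               /* hardcoded: flag it */
"""

flagged = [(n, line.strip())
           for n, line in enumerate(sample_css.splitlines(), 1)
           if HEX_COLOR.search(line)]
print(flagged)  # only line 2 is flagged
```

The same scan idea extends to hardcoded spacing or font sizes with different patterns.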
### 5. Run Tests After Each Major Change
After completing each work item (not just at the end):
- Run the relevant test suite
- If you wrote new tests, confirm they pass
- If existing tests break, determine if it is an intentional change or a regression
- Fix regressions immediately — do not move on with broken tests
For projects using inline testing (Puppeteer), follow the INLINE-TESTING-GUIDE.md in `data/guides/`.
### 6. Commit Incrementally
Each commit should represent one logical unit of work:
- One component added
- One feature wired up
- One set of tests for a feature
Write clear commit messages that reference what was built. Do not accumulate large uncommitted changesets.
### 7. Verify Checklist
- [ ] Working through plan items in dependency order
- [ ] Each item: spec re-read before implementation
- [ ] Following existing codebase patterns (naming, structure, styling)
- [ ] Design tokens used — no hardcoded values
- [ ] All component states implemented (not just the happy path)
- [ ] Responsive behavior implemented per spec
- [ ] Tests run after each major change
- [ ] Commits are incremental and logically scoped
- [ ] Deviations from spec documented with rationale
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Verify"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all plan items are implemented, tested, and committed will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All plan items implemented in dependency order
- Spec re-read before each item implementation
- Existing codebase patterns followed
- Design tokens used consistently
- All component states implemented
- Tests run after each major change
- Commits are incremental and logically scoped
### ❌ SYSTEM FAILURE:
- Implementing items out of order or simultaneously
- Not re-reading spec before implementation
- Hardcoding values instead of using design tokens
- Accumulating large uncommitted changesets
- Moving on with broken tests
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-04-verify'
description: 'Systematically confirm that the implementation satisfies every requirement in the spec'
# File References
nextStepFile: './step-05-finalize.md'
---
# Step 4: Verify
## STEP GOAL:
Systematically confirm that the implementation satisfies every requirement in the spec.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on verifying acceptance criteria, responsive behavior, interactive states, accessibility, and visual fidelity
- 🚫 FORBIDDEN to add new features or refactor — only verify and fix issues found
- 💬 Approach: Walk through each acceptance criterion with user, testing concretely
- 📋 Fix failures immediately as they are found — do not batch them
## EXECUTION PROTOCOLS:
- 🎯 Every acceptance criterion tested and passing
- 💾 Document verification results and any fixes applied
- 📖 Reference acceptance criteria from Step 1 and the original spec
- 🚫 Do not add scope — only verify what was planned
## CONTEXT BOUNDARIES:
- Available context: Acceptance criteria from Step 1; implementation from Step 3; spec
- Focus: Systematic verification against spec requirements
- Limits: No new features, no refactoring beyond fixing issues
- Dependencies: Step 3 must be complete (implementation done)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Walk Through Every Acceptance Criterion
Open the acceptance criteria checklist from Step 1. Go through each criterion one by one:
- Does the implementation satisfy it? Test it concretely, do not assume.
- If it passes, check it off.
- If it fails, note what is wrong and fix it before continuing.
Do not batch failures for later. Fix as you find them.
### 2. Test All Responsive Breakpoints
For each page and component, test at every breakpoint defined in the spec:
- Mobile (typically 375px)
- Tablet (typically 768px)
- Desktop (typically 1024px+)
- Any custom breakpoints specified
At each breakpoint, verify:
- Layout adapts correctly (stacking, reordering, hiding/showing elements)
- Text remains readable — no overflow, no truncation unless intended
- Touch targets meet minimum size (44x44px) on touch devices
- Images and media scale appropriately
- No horizontal scroll unless intended
### 3. Test All Interactive States
For every interactive element, verify each state:
| State | Verify |
|-------|--------|
| **Default** | Renders correctly on load |
| **Hover** | Visual feedback appears |
| **Focus** | Focus ring or indicator visible (keyboard users) |
| **Active / Pressed** | Visual response on click/tap |
| **Disabled** | Visually distinct, not interactive |
| **Loading** | Spinner or skeleton shown, interactions blocked |
| **Error** | Error message displayed, field highlighted |
| **Empty** | Empty state message or placeholder shown |
| **Success** | Confirmation feedback displayed |
### 4. Test Accessibility
Minimum accessibility checks for every feature:
- **Keyboard navigation:** Can you reach and operate every interactive element using only Tab, Enter, Space, Escape, and arrow keys?
- **Screen reader:** Do headings, labels, buttons, and form fields have meaningful text? Are ARIA labels present where needed?
- **Color contrast:** Does text meet WCAG AA contrast ratios (4.5:1 for normal text, 3:1 for large text)?
- **Focus management:** After modal open/close, form submit, or route change — is focus placed logically?
- **Alt text:** Do images have descriptive alt text (or empty alt for decorative images)?
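The contrast check can be automated directly from the WCAG 2.x relative-luminance formula. The sketch assumes sRGB colors given as 0-255 tuples:

```python
# Sketch: compute a WCAG contrast ratio from two sRGB colors (0-255 tuples).
# Formula follows the WCAG 2.x relative-luminance definition.
def luminance(rgb):
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```

`#767676` on white is the classic "just barely passes AA for body text" pair, which makes it a useful sanity check for any contrast tool.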
### 5. Cross-Browser Check (If Specified)
If the spec requires specific browser support:
- Test in each listed browser
- Check for layout differences, font rendering, and JavaScript behavior
- Note any browser-specific issues and whether they are acceptable
### 6. Compare Implementation to Spec Side by Side
With the spec open next to the running implementation:
- Compare visual layout at each breakpoint
- Compare text content word for word
- Compare colors to spec hex values
- Compare spacing and proportions
- Note any discrepancies — fix or document as intentional deviations
For projects using Puppeteer, follow the verification process in INLINE-TESTING-GUIDE.md: measure what you can measure programmatically, and present only qualitative questions to the user.
### 7. Verify Checklist
- [ ] Every acceptance criterion tested and passing
- [ ] All responsive breakpoints verified
- [ ] All interactive states working (hover, focus, disabled, loading, error, empty, success)
- [ ] Keyboard navigation works for all interactive elements
- [ ] Screen reader labels and ARIA attributes present
- [ ] Color contrast meets WCAG AA
- [ ] Focus management correct after state changes
- [ ] Cross-browser tested (if required by spec)
- [ ] Visual comparison to spec completed — no unintended differences
- [ ] All found issues fixed or documented as intentional deviations
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 5: Finalize"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all acceptance criteria are verified passing and all issues fixed will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Every acceptance criterion tested and passing
- All responsive breakpoints verified
- All interactive states working
- Accessibility checks completed
- Visual comparison to spec completed
- All found issues fixed or documented
### ❌ SYSTEM FAILURE:
- Assuming criteria pass without testing concretely
- Skipping responsive or accessibility verification
- Batching failures instead of fixing immediately
- Not comparing implementation to spec visually
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-05-finalize'
description: 'Clean up, run the full test suite, document deviations, and prepare the work for review'
# File References
activityWorkflowFile: '../workflow-development.md'
---
# Step 5: Finalize
## STEP GOAL:
Clean up, run the full test suite, document deviations, and prepare the work for review.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on cleanup, full test suite, deviation documentation, and PR preparation
- 🚫 FORBIDDEN to add new features or make non-cleanup changes
- 💬 Approach: Systematically clean up, test, document, and prepare for review with user
- 📋 Ensure deviations are documented with rationale for reviewer context
## EXECUTION PROTOCOLS:
- 🎯 Code cleaned, tests passing, deviations documented, PR prepared
- 💾 Document spec deviations and PR description in the dialog file
- 📖 Reference test baseline from Step 2 to distinguish regressions from pre-existing failures
- 🚫 Do not add features or refactor beyond cleanup
## CONTEXT BOUNDARIES:
- Available context: Implementation from Step 3; verification from Step 4; test baseline from Step 2; spec
- Focus: Final cleanup, testing, documentation, and review preparation
- Limits: No new features, no major refactoring
- Dependencies: Step 4 must be complete (verification passed)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Clean Up Code
Go through all files you created or modified:
- Remove `console.log`, `debugger`, and other debug statements
- Remove commented-out code (if it was kept for reference, it belongs in a comment on the PR, not in the code)
- Remove unused imports and variables
- Verify file naming follows project conventions
- Check for any TODO comments you left — resolve them or convert to tracked issues
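Leftover debug statements are easy to catch mechanically before review. The patterns below cover common JavaScript debris and are an assumption to adapt for your stack:

```python
import re

# Sketch: find leftover debug statements before review.
# Patterns cover common JavaScript debris; adapt for your stack.
DEBUG = re.compile(r"\b(console\.(log|debug)|debugger)\b")

def find_debug_lines(source: str):
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), 1)
            if DEBUG.search(line)]

sample = "const total = price * qty;\nconsole.log(total);\ndebugger;\n"
print(find_debug_lines(sample))  # lines 2 and 3 are flagged
```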
### 2. Run the Full Test Suite
Run the complete test suite, not just the tests for your changes:
```
npm test # or the project's equivalent
```
- **All pass:** Proceed.
- **New failures:** Determine if they are regressions from your changes or pre-existing. Fix regressions. Pre-existing failures should match the baseline from Step 2.
- **Flaky tests:** Note them but do not ignore them. If your changes made a test flaky, fix it.
If the project has linting or type checking, run those too:
```
npm run lint
npm run typecheck # if applicable
```
Fix any issues your code introduced.
### 3. Document Deviations from Spec
If you deviated from the spec during implementation (discovered in Step 3 or Step 4), document each deviation:
```markdown
## Spec Deviations
### [Component/Feature Name]
- **Spec said:** [what the spec required]
- **Implementation does:** [what you built instead]
- **Reason:** [why the deviation was necessary]
```
Record this in the dialog file. These deviations become discussion points during review.
### 4. Update Affected Documentation
Check if your changes require documentation updates:
- Component API documentation (new props, changed behavior)
- Route documentation (new pages, changed URLs)
- Environment setup docs (new dependencies, new env vars)
- Storybook stories (if the project uses Storybook)
Update only what your changes affect. Do not create new documentation that was not asked for.
### 5. Prepare PR Description
Write a pull request description that helps the reviewer:
- **Summary:** What was built and why (reference the spec)
- **Changes:** Key files and what changed in each
- **Testing:** How to test the changes (steps to reproduce each feature)
- **Deviations:** Any departures from the spec, with rationale
- **Screenshots:** Before/after if visual changes were made
- **Acceptance criteria:** Copy the checklist from Step 1 with all items checked
### 6. Suggest Acceptance Testing
Based on what was built, recommend next steps:
- If the project uses formal acceptance testing, suggest triggering the [T] Acceptance Testing workflow
- If a designer needs to review visual fidelity, flag that
- If stakeholder demo is needed, note what to demonstrate
- If the feature has user-facing copy, suggest copy review
### 7. Verify Checklist
- [ ] Debug statements removed
- [ ] Unused code and imports removed
- [ ] TODOs resolved or converted to tracked issues
- [ ] Full test suite passes (or pre-existing failures match baseline)
- [ ] Linting and type checking pass
- [ ] Spec deviations documented with rationale
- [ ] Affected documentation updated
- [ ] PR description written with summary, changes, testing steps, and deviations
- [ ] Next steps recommended (acceptance testing, design review, etc.)
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all cleanup is done, tests pass, deviations are documented, and PR is prepared will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Debug statements removed
- Unused code and imports removed
- Full test suite passes (or pre-existing failures match baseline)
- Linting and type checking pass
- Spec deviations documented with rationale
- PR description written
- Next steps recommended
### ❌ SYSTEM FAILURE:
- Leaving debug statements in code
- Not running the full test suite
- Not documenting spec deviations
- Not preparing a PR description
- Skipping linting or type checking
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-01-scope-change'
description: 'Define exactly what is new, what is modified, and what must remain untouched'
# File References
nextStepFile: './step-02-analyze-impact.md'
---
# Step 1: Scope Change
## STEP GOAL:
Define exactly what is new, what is modified, and what must remain untouched.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on defining scope: new functionality, existing unchanged functionality, boundary map, and integration points
- 🚫 FORBIDDEN to begin analyzing impact or planning implementation — those are later steps
- 💬 Approach: Help user categorize all affected areas into new/modified/untouched
- 📋 Every integration point must be identified and documented
## EXECUTION PROTOCOLS:
- 🎯 Produce a clear boundary map categorizing all areas as new, modified, or untouched
- 💾 Update dialog file with scope definition, boundary map, and integration points
- 📖 Reference the feature spec or change request
- 🚫 Do not analyze dependencies or plan implementation yet
## CONTEXT BOUNDARIES:
- Available context: Feature specification or change request
- Focus: Scoping what changes and what stays the same
- Limits: No impact analysis, no implementation planning
- Dependencies: A feature spec or change request must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Load Feature Spec
- Read the feature specification or change request
- Understand the desired outcome from the user's perspective
- Clarify any ambiguities before proceeding
### 2. List All New Functionality
- Enumerate every new capability being added
- For each item: what it does, where it lives, how the user interacts with it
- Note any new UI components, API endpoints, data models, or routes
### 3. List All Existing Functionality That Must Stay Unchanged
- Identify every existing feature that is in scope or adjacent
- Explicitly state: "This must continue to work exactly as it does now"
- Include both direct features and indirect dependencies (e.g., shared components)
### 4. Create Boundary Map
Categorize all affected areas:
| Category | Description | Examples |
|----------|-------------|----------|
| **New** | Does not exist yet, being added | New page, new API endpoint, new component |
| **Modified** | Exists and will be changed | Updated component to accept new props, extended API response |
| **Untouched** | Exists and must not change | Existing pages, unrelated features, shared utilities |
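The boundary map can double as a checkable artifact if you record it as data. The file paths below are illustrative assumptions, not from a real project:

```python
# Sketch: a boundary map as data, with a check that no area lands in
# two categories. File paths are illustrative assumptions.
boundary_map = {
    "new": {"pages/Reports.tsx", "api/reports.ts"},
    "modified": {"components/NavBar.tsx"},
    "untouched": {"pages/Home.tsx", "utils/format.ts"},
}

all_areas = [area for areas in boundary_map.values() for area in areas]
assert len(all_areas) == len(set(all_areas)), "an area is in two categories"
print(f"{len(all_areas)} areas categorized, no overlaps")
```

An area that appears in two categories usually signals an unresolved scope question worth surfacing now.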
### 5. Identify Integration Points
- Where does new code connect to existing code?
- What interfaces, APIs, or data structures are shared?
- Are there shared components that need to support both old and new behavior?
- Document each integration point and its risk level
### 6. Verify Checklist
- [ ] Feature spec loaded and understood
- [ ] New functionality listed
- [ ] Existing functionality that must stay unchanged listed
- [ ] Boundary map created (new / modified / untouched)
- [ ] Integration points identified
- [ ] Dialog file updated with scope definition
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Analyze Impact"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the scope is fully defined with boundary map and integration points will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Feature spec loaded and understood
- New functionality listed
- Existing functionality that must stay unchanged listed
- Boundary map created (new / modified / untouched)
- Integration points identified
- Dialog file updated with scope definition
### ❌ SYSTEM FAILURE:
- Beginning impact analysis before scope is defined
- Not identifying what must remain untouched
- Skipping integration point identification
- Leaving ambiguities in the feature spec unresolved
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-02-analyze-impact'
description: 'Understand how the proposed changes affect existing code and identify risks'
# File References
nextStepFile: './step-03-plan-implementation.md'
---
# Step 2: Analyze Impact
## STEP GOAL:
Understand how the proposed changes affect existing code, and identify risks.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on reading affected code, mapping dependencies, identifying breaking change risks, and assessing database/API impacts
- 🚫 FORBIDDEN to begin planning or implementing changes — that is a later step
- 💬 Approach: Deep dive into affected code with user, mapping what depends on what
- 📋 Document a risk assessment table with likelihood, impact, and mitigation
## EXECUTION PROTOCOLS:
- 🎯 Produce a dependency map, breaking change risk list, and risk assessment
- 💾 Document risk assessment in the dialog file
- 📖 Reference the boundary map and integration points from Step 1
- 🚫 Do not plan implementation order or write any code
## CONTEXT BOUNDARIES:
- Available context: Scope definition, boundary map, and integration points from Step 1
- Focus: Impact analysis — dependencies, risks, breaking changes
- Limits: No implementation planning, no code changes
- Dependencies: Step 1 must be complete (scope defined)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Read Code for All Affected Areas
- Open and read every file identified as "modified" in the boundary map
- Read files adjacent to integration points
- Understand the current behavior before planning changes
### 2. Map Dependencies
- For each modified file, identify:
- What imports it / calls it / depends on it
- What it imports / calls / depends on
- Build a dependency graph (mental or documented) of the affected area
- Flag shared utilities, components, or state that multiple features rely on
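The "what imports it" half of the dependency map above can often be bootstrapped with a small script. This is a minimal sketch, assuming a JS/TS-style codebase with `import ... from '<module>'` statements; the file extensions and pattern are assumptions to adapt for your language and build setup.

```python
import re
from pathlib import Path

def find_importers(root: str, module_name: str) -> list[str]:
    """Return files under `root` that import the given module path."""
    # Matches e.g. `import { x } from './shared/utils'`; adjust per language.
    pattern = re.compile(
        r"import\s.*?from\s+['\"].*" + re.escape(module_name) + r"['\"]"
    )
    importers = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".js", ".jsx", ".ts", ".tsx"}:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        if pattern.search(text):
            importers.append(str(path))
    return sorted(importers)
```

A scan like this only surfaces direct importers; shared state or runtime wiring (dependency injection, event buses) still has to be traced by reading the code.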
### 3. Identify Breaking Change Risks
- Will any existing interface signatures change?
- Will any existing data shapes change?
- Could new code paths affect timing or ordering of existing operations?
- Are there implicit assumptions in existing code that the new feature violates?
### 4. Assess Database/API Impacts
- Are database schema changes needed? (migrations, new tables, altered columns)
- Are API contract changes needed? (new endpoints, changed response shapes)
- Can changes be additive (non-breaking) or do they require migration?
- Will existing clients (other services, mobile apps) be affected?
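The additive-versus-breaking question above is easiest to see on a response shape. This is an illustrative sketch; the function and field names (`serialize_user_v1`, `display_name`) are hypothetical, not from any real API.

```python
def serialize_user_v1(user: dict) -> dict:
    """The existing response shape that current clients depend on."""
    return {"id": user["id"], "name": user["name"]}

def serialize_user_additive(user: dict) -> dict:
    """Additive evolution: old clients still read `name` unchanged,
    new clients may opt into `display_name`. Nothing is removed or renamed."""
    payload = serialize_user_v1(user)
    payload["display_name"] = user.get("display_name", user["name"])
    return payload
```

If the change cannot be made additive (a field must be removed or its type must change), existing clients will be affected, and that belongs in the risk assessment with a migration plan.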
### 5. Document What Could Go Wrong
Write a risk assessment in the dialog file:
| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| Shared component breaks for existing use | Medium | High | Test all existing consumers |
| API change breaks mobile client | Low | High | Make change additive, not breaking |
| State management conflict | Medium | Medium | Isolate new state, feature flag |
### 6. Verify Checklist
- [ ] All affected code read and understood
- [ ] Dependencies mapped
- [ ] Breaking change risks identified
- [ ] Database/API impacts assessed
- [ ] Risk assessment documented in dialog file
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Plan Implementation"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the impact analysis is complete with dependencies mapped and risks documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All affected code read and understood
- Dependencies mapped
- Breaking change risks identified
- Database/API impacts assessed
- Risk assessment documented in dialog file
### ❌ SYSTEM FAILURE:
- Beginning implementation planning without completing impact analysis
- Not reading all affected code
- Skipping dependency mapping
- Not documenting risks
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-03-plan-implementation'
description: 'Create an ordered, incremental implementation plan with verification points'
# File References
nextStepFile: './step-04-implement.md'
---
# Step 3: Plan Implementation
## STEP GOAL:
Create an ordered, incremental implementation plan with verification points.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on ordering changes, planning incremental commits, defining verification points, and estimating effort
- 🚫 FORBIDDEN to begin writing code — that is the next step
- 💬 Approach: Collaboratively sequence the implementation with user, ensuring each step leaves the system working
- 📋 Each commit must be a complete, working unit with clear verification criteria
## EXECUTION PROTOCOLS:
- 🎯 Produce a sequenced implementation plan with verification points between steps
- 💾 Write implementation plan to the dialog file
- 📖 Reference the scope from Step 1 and risk assessment from Step 2
- 🚫 Do not write any code during this step
## CONTEXT BOUNDARIES:
- Available context: Scope and boundary map from Step 1; impact analysis and risks from Step 2
- Focus: Implementation planning — order, commits, verification, feature flags
- Limits: No code writing
- Dependencies: Steps 1 and 2 must be complete
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Order Changes by Dependency
- Start with foundational changes (data models, utilities, shared logic)
- Then build upward: API endpoints, components, pages, integrations
- Each step should build on the previous one
- No step should depend on a later step
### 2. Determine If Feature Flags Are Needed
Use feature flags when:
- The change is large enough that partial deployment is risky
- The feature needs to be toggled per environment or per user
- Rollback needs to be instant (without redeploying)
If not needed, keep it simple — branches and PRs are sufficient.
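When a flag is warranted, it can be as simple as an environment check. A minimal sketch, assuming environment-variable toggles (the `FEATURE_*` naming and `render_page` function are illustrative); a real deployment might use a flag service instead.

```python
import os

def flag_enabled(name: str) -> bool:
    """Flags default to off; enable per environment with FEATURE_<NAME>=1."""
    return os.environ.get(f"FEATURE_{name.upper()}", "0") == "1"

def render_page() -> list[str]:
    sections = ["header", "existing_content"]
    # Off path is identical to pre-change behavior; rollback = unset the variable.
    if flag_enabled("new_dashboard"):
        sections.append("new_dashboard")
    return sections
```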
### 3. Plan Incremental Commits
Each commit should:
- Be a complete, working unit (system stays functional after each commit)
- Be small enough to review and understand
- Have a clear purpose described in its message
Example plan structure:
```
Commit 1: Add new data model (no UI, no routes)
Commit 2: Add API endpoint for new feature (behind feature flag)
Commit 3: Add new component (rendered only in new context)
Commit 4: Integrate component into existing page
Commit 5: Enable feature, update tests
```
### 4. Define Verification Points
Between steps, define what to check:
- After commit 1: "Run migrations, verify existing queries still work"
- After commit 3: "Render component in isolation, verify it matches spec"
- After commit 4: "Full regression on existing page, verify new feature appears"
### 5. Estimate Effort
- Note which steps are straightforward vs which carry risk
- Identify steps that may need designer/stakeholder input
- Flag any steps that might require splitting into sub-steps
### 6. Verify Checklist
- [ ] Changes ordered by dependency
- [ ] Feature flag decision made and documented
- [ ] Incremental commits planned (each leaves system working)
- [ ] Verification points defined between steps
- [ ] Effort estimated, risks flagged
- [ ] Implementation plan written to dialog file
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Implement"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the implementation plan is complete with ordered steps, verification points, and feature flag decisions will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Changes ordered by dependency
- Feature flag decision made and documented
- Incremental commits planned (each leaves system working)
- Verification points defined between steps
- Effort estimated, risks flagged
- Implementation plan written to dialog file
### ❌ SYSTEM FAILURE:
- Beginning implementation without a complete plan
- Planning commits that leave the system in a broken state
- Not defining verification points
- Skipping feature flag assessment
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-04-implement'
description: 'Implement changes incrementally, verifying at each step'
# File References
nextStepFile: './step-05-verify-and-document.md'
---
# Step 4: Implement
## STEP GOAL:
Implement changes incrementally, verifying at each step.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on implementing each planned commit, testing after each, and ensuring backward compatibility
- 🚫 FORBIDDEN to skip verification points or proceed with broken tests
- 💬 Approach: Execute each planned step methodically — implement, self-check, test, commit
- 📋 Backward compatibility must be verified at every commit
## EXECUTION PROTOCOLS:
- 🎯 All planned steps implemented with passing tests and backward compatibility
- 💾 Update dialog file with implementation progress after each commit
- 📖 Reference the implementation plan from Step 3 and verification points
- 🚫 Do not proceed to next commit if current one is broken
## CONTEXT BOUNDARIES:
- Available context: Scope from Step 1; risks from Step 2; implementation plan from Step 3
- Focus: Incremental implementation following the plan
- Limits: Only implement what is in the plan — no scope creep
- Dependencies: Step 3 must be complete (plan exists)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Create Feature Branch
- Branch from the current working branch
- Use a descriptive name: `feature/<short-description>` or `evolve/<feature-name>`
- Commit the branch before making changes
### 2. Implement Incrementally Per Plan
For each planned commit:
1. **Read the plan** — What does this step require?
2. **Implement** — Write the code for this step only
3. **Self-check** — Does this change stay within its planned scope?
4. **Test** — Run relevant tests, verify the step works
5. **Commit** — Clean commit with descriptive message
### 3. Test After Each Step
- Run the verification point defined in Step 3
- If automated tests exist, run them after each commit
- If modifying existing features, verify they still work (baseline comparison)
- Do not proceed to the next step if the current one is broken
### 4. Ensure Backward Compatibility at Each Commit
At every commit, ask:
- Does the existing functionality still work?
- Can an existing user do everything they could before?
- Are any existing API consumers affected?
- If a commit breaks backward compatibility, either fix it or add a feature flag
### 5. Use Feature Flags if Needed
When the plan calls for feature flags:
- Implement the flag check early (before building the feature behind it)
- Default to "off" — new feature is hidden until explicitly enabled
- Ensure the "off" path is identical to the pre-change behavior
- Test both flag states: on (new behavior) and off (old behavior)
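Testing both flag states reduces to two assertions: off matches the pre-change baseline exactly, and on only adds behavior. A sketch under assumed names (`legacy_render` stands in for the recorded pre-change behavior, `render` for the new code path):

```python
def legacy_render() -> list[str]:
    """Stand-in for the recorded pre-change (baseline) output."""
    return ["header", "existing_content"]

def render(flag_on: bool) -> list[str]:
    sections = ["header", "existing_content"]
    if flag_on:
        sections.append("new_feature")
    return sections

def check_flag_parity() -> None:
    # Flag OFF must be identical to the pre-change baseline.
    assert render(flag_on=False) == legacy_render()
    # Flag ON may add behavior but must not remove anything existing.
    assert set(legacy_render()).issubset(set(render(flag_on=True)))
```

In practice the baseline comes from a snapshot test or recorded fixture captured before the feature branch diverged.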
### 6. Verify Checklist
- [ ] Feature branch created
- [ ] Each planned step implemented and committed separately
- [ ] Tests pass after each commit
- [ ] Backward compatibility verified at each commit
- [ ] Feature flags implemented correctly (if applicable)
- [ ] Dialog file updated with implementation progress
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 5: Verify and Document"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all planned steps are implemented, tested, and committed with backward compatibility verified will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Feature branch created
- Each planned step implemented and committed separately
- Tests pass after each commit
- Backward compatibility verified at each commit
- Feature flags implemented correctly (if applicable)
- Dialog file updated with implementation progress
### ❌ SYSTEM FAILURE:
- Proceeding to next step with broken tests
- Not verifying backward compatibility
- Implementing outside the planned scope
- Not testing feature flag on/off states
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-05-verify-and-document'
description: 'Comprehensive verification of all new and existing functionality, then document and create PR'
# File References
activityWorkflowFile: '../workflow-evolution.md'
---
# Step 5: Verify and Document
## STEP GOAL:
Comprehensively verify all new and existing functionality, then document the changes and create a PR.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on verifying new functionality, running regression, verifying backward compatibility, checking performance, documenting, and creating PR
- 🚫 FORBIDDEN to add new features — only verify, fix issues, and document
- 💬 Approach: Comprehensive end-to-end verification with user, then collaborative PR creation
- 📋 Any regression failure must be fixed before proceeding
## EXECUTION PROTOCOLS:
- 🎯 All new and existing functionality verified, documented, and PR created
- 💾 Finalize dialog file with status, deviations, and PR link
- 📖 Reference the boundary map from Step 1 for backward compatibility checks
- 🚫 Do not add scope — only verify and document
## CONTEXT BOUNDARIES:
- Available context: Scope from Step 1; risks from Step 2; plan from Step 3; implementation from Step 4
- Focus: Final verification, documentation, and PR creation
- Limits: No new features
- Dependencies: Step 4 must be complete (implementation done)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Test All New Functionality
- Walk through every new feature end-to-end
- Verify against the feature spec — does it do what was requested?
- Test happy paths, error paths, and edge cases
- Use Puppeteer for measurable UI criteria (text, colors, layout, interactions)
### 2. Run Regression Suite
- Run the full test suite (unit, integration, end-to-end)
- If no automated suite exists, manually test all core user flows
- Pay special attention to areas identified as "modified" or "integration points" in Step 1
- Any failure here must be fixed before proceeding
### 3. Verify Backward Compatibility
- Test every feature listed as "untouched" in the boundary map
- Confirm that existing users see no change in behavior (unless intended)
- If feature flags were used, test with flag OFF — behavior must match pre-change baseline
- Test with flag ON — new behavior works correctly alongside existing features
### 4. Performance Check
- Does the new feature load within acceptable time?
- Did the changes increase bundle size significantly?
- Are there new network requests that could slow down existing pages?
- Check for obvious performance regressions (slow renders, unnecessary re-renders)
### 5. Document Changes
- Update the dialog file with final status
- Document what was added, what was changed, what was left untouched
- Note any deviations from the original plan and why
### 6. Create PR
- Write a PR title that summarizes the evolution: `feat: <what was added>`
- PR body should include:
- **What:** New functionality added
- **Why:** Business reason or user need
- **How:** Implementation approach (incremental steps, feature flags)
- **Testing:** How it was verified, including backward compatibility
- **Risks:** Any known risks or areas to watch
- Link to the feature spec or dialog file
### 7. Verify Checklist
- [ ] All new functionality tested and working
- [ ] Regression suite passes
- [ ] Backward compatibility verified
- [ ] Performance acceptable
- [ ] Changes documented in dialog file
- [ ] PR created with clear description
- [ ] Feature flags documented (if used)
- [ ] Dialog file finalized
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all verification is complete, changes documented, and PR created will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All new functionality tested and working
- Regression suite passes
- Backward compatibility verified
- Performance acceptable
- Changes documented in dialog file
- PR created with clear description
### ❌ SYSTEM FAILURE:
- Not running regression suite
- Not verifying backward compatibility
- Skipping performance check
- Creating PR without clear description
- Not documenting deviations from the plan
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-01-reproduce'
description: 'Confirm the bug exists and document exact reproduction steps'
# File References
nextStepFile: './step-02-investigate.md'
---
# Step 1: Reproduce
## STEP GOAL:
Confirm the bug exists and document exact reproduction steps.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on gathering bug report details, setting up reproduction environment, and reproducing the bug
- 🚫 FORBIDDEN to begin investigating root cause or implementing fixes — those are later steps
- 💬 Approach: Methodically reproduce the issue with user, documenting every step and observation
- 📋 Bug must be reproduced at least twice before proceeding; if non-reproducible, follow the non-reproducible protocol
## EXECUTION PROTOCOLS:
- 🎯 Confirm the bug exists with documented, repeatable reproduction steps
- 💾 Update dialog file with reproduction details, screenshots/logs
- 📖 Reference the bug report or user description
- 🚫 Do not investigate code or attempt fixes during this step
## CONTEXT BOUNDARIES:
- Available context: Bug report or user description of the issue
- Focus: Reproduction — confirming the bug exists and documenting how to trigger it
- Limits: No code investigation, no fix attempts
- Dependencies: A bug report or description must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Gather Bug Report Details
- Read the bug report or user description
- Extract: what happened, what was expected, when it started
- Note the reporter's environment (browser, device, OS, screen size)
- Identify any relevant user actions leading up to the bug
### 2. Set Up Reproduction Environment
- Match the reporter's environment as closely as possible
- Use the same browser, viewport, and device settings
- Ensure you are on the same version/branch of the code
- Load any required test data or user state
### 3. Reproduce the Bug
- Follow the reported steps exactly, in order
- Document each action and its result
- Capture screenshots or logs at each step
- Record the exact error messages, console output, or visual artifacts
### 4. Confirm Reproducibility
- Reproduce at least twice to confirm consistency
- Note if the bug is intermittent (and under what conditions)
- Test slight variations to understand the trigger boundaries
### 5. Handle Non-Reproducible Bugs
If the bug cannot be reproduced:
- Request more details from the reporter (exact steps, screenshots, browser version)
- Check if the bug is environment-specific (browser, OS, network)
- Check if recent changes resolved it inadvertently
- Document what was tried and what remains unknown
- Do NOT proceed to investigation without reproduction or a clear theory
### 6. Verify Checklist
- [ ] Bug report details gathered and understood
- [ ] Reproduction environment matches reporter's setup
- [ ] Bug reproduced at least twice
- [ ] Reproduction steps documented precisely
- [ ] Screenshots/logs captured
- [ ] Dialog file updated with reproduction details
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Investigate"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the bug has been reproduced and documented with exact steps will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Bug report details gathered and understood
- Reproduction environment matches reporter's setup
- Bug reproduced at least twice
- Reproduction steps documented precisely
- Screenshots/logs captured
- Dialog file updated with reproduction details
### ❌ SYSTEM FAILURE:
- Proceeding to investigation without reproducing the bug
- Attempting to fix the bug before understanding how to trigger it
- Not documenting reproduction steps
- Skipping environment setup
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-02-investigate'
description: 'Identify the root cause of the bug, not just the symptom'
# File References
nextStepFile: './step-03-fix.md'
---
# Step 2: Investigate
## STEP GOAL:
Identify the root cause of the bug, not just the symptom.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on reading code, tracing execution path, identifying root cause, and checking wider impact
- 🚫 FORBIDDEN to begin implementing a fix — that is the next step
- 💬 Approach: Trace the execution path from trigger to symptom with user, distinguishing symptom from cause
- 📋 Root cause must be pinpointed to specific line(s) or logic before proceeding
## EXECUTION PROTOCOLS:
- 🎯 Identify the exact root cause with proposed fix approach
- 💾 Document root cause, symptom explanation, affected areas, and proposed fix in dialog file
- 📖 Reference reproduction steps from Step 1
- 🚫 Do not write fix code during this step
## CONTEXT BOUNDARIES:
- Available context: Bug report and reproduction details from Step 1
- Focus: Root cause investigation — code reading, execution tracing, impact assessment
- Limits: No fix implementation
- Dependencies: Step 1 must be complete (bug reproduced)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Read the Relevant Code
- Open the files related to the bug's location (component, module, route)
- Read the code around the symptom — understand what it is supposed to do
- Check recent commits for changes in the affected area (`git log --oneline -20 -- <file>`)
### 2. Trace the Execution Path
- Start from the user action that triggers the bug
- Follow the code path: event handler, state update, render, API call
- Identify where the actual behavior diverges from the expected behavior
- Use console logs, breakpoints, or Puppeteer to observe intermediate state
### 3. Identify Root Cause
- Distinguish symptom from cause — the visible bug is rarely the root issue
- Common root causes to check:
- State not reset or updated correctly
- Race condition or timing issue
- Missing null/undefined check
- Wrong data type or format
- CSS specificity or inheritance conflict
- API response changed or error not handled
- Pin down the exact line(s) or logic that cause the failure
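The symptom/cause distinction above is worth a concrete illustration. In this hypothetical sketch (all names invented), the visible crash is in `format_total`, but the root cause is upstream: the lookup silently returns `None` for unknown items.

```python
PRICES = {"apple": 2, "pear": 3}

def lookup_price(item: str):
    # Root cause: silently returns None for unknown items instead of
    # raising or validating at the point where the bad value originates.
    return PRICES.get(item)

def format_total(items: list[str]) -> str:
    total = 0
    for item in items:
        price = lookup_price(item)
        if price is None:
            # Guarding here only treats the symptom; the root-cause fix
            # belongs in lookup_price (validate, raise, or default explicitly).
            raise KeyError(f"unknown item: {item}")
        total += price
    return f"${total}"
```

Pinning the failure to `lookup_price` rather than patching every caller is what separates a root-cause fix from symptom-chasing.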
### 4. Check Wider Impact
- Does this code path affect other features?
- Are there similar patterns elsewhere that might have the same bug?
- Could the root cause indicate a systemic issue (e.g., missing error handling pattern)?
### 5. Document the Root Cause
- Write a clear, concise explanation in the dialog file:
- What causes the bug (root cause)
- Why it manifests the way it does (symptom explanation)
- What areas are affected
- Proposed fix approach
### 6. Verify Checklist
- [ ] Relevant code read and understood
- [ ] Execution path traced from trigger to symptom
- [ ] Root cause identified (not just symptom)
- [ ] Wider impact assessed
- [ ] Root cause documented in dialog file
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Fix"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the root cause is identified and documented with proposed fix approach will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Relevant code read and understood
- Execution path traced from trigger to symptom
- Root cause identified (not just symptom)
- Wider impact assessed
- Root cause documented in dialog file
### ❌ SYSTEM FAILURE:
- Beginning to fix without identifying root cause
- Treating the symptom instead of the cause
- Not checking wider impact
- Not documenting the root cause
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-03-fix'
description: 'Implement the minimal fix targeting the root cause'
# File References
nextStepFile: './step-04-verify.md'
---
# Step 3: Fix
## STEP GOAL:
Implement the minimal fix targeting the root cause.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on writing a failing test, implementing the minimal fix, and verifying the reproduction case passes
- 🚫 FORBIDDEN to refactor surrounding code, add unrelated improvements, or fix other issues noticed nearby
- 💬 Approach: Write test first, then minimal fix, then verify — test-driven bugfixing
- 📋 The fix must be the smallest change that resolves the root cause
## EXECUTION PROTOCOLS:
- 🎯 Bug fixed with minimal change, regression test written, reproduction case passes
- 💾 Update dialog file with fix details
- 📖 Reference root cause from Step 2 and reproduction steps from Step 1
- 🚫 Do not include unrelated changes in the fix
## CONTEXT BOUNDARIES:
- Available context: Reproduction details from Step 1; root cause and proposed fix from Step 2
- Focus: Minimal fix targeting root cause with regression test
- Limits: No refactoring, no unrelated improvements
- Dependencies: Step 2 must be complete (root cause identified)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Create Fix Branch
- Create a branch from the current working branch
- Use a descriptive name: `fix/<short-description>` or `bugfix/<issue-id>`
- Check out the new branch before making any changes
### 2. Write a Test That Catches the Bug
- Before writing the fix, write a test that reproduces the bug
- The test should fail in the current state (proving the bug exists)
- This ensures the fix is verifiable and the bug cannot silently return
- If automated testing is not set up, document the manual test steps clearly
### 3. Implement the Minimal Fix
- Target the root cause identified in Step 2
- Make the smallest change that resolves the issue
- Do NOT:
- Refactor surrounding code
- Add unrelated improvements
- Change formatting or style in unrelated lines
- "Fix" other issues you noticed nearby
- If the fix requires more than a few lines, pause and verify scope
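To make "smallest change" concrete, here is an illustrative sketch; the `cartTotal` function and its bug are hypothetical. The fix is a single fallback at the root cause, with no restructuring of the surrounding code:

```javascript
// Bug: cartTotal({}) threw because cart.items could be undefined.
// Minimal fix: one nullish fallback where the bad value originates.
function cartTotal(cart) {
  const items = cart.items ?? []; // the entire fix
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

console.log(cartTotal({ items: [{ price: 2, qty: 3 }] })); // 6
console.log(cartTotal({})); // 0 instead of a TypeError
```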
### 4. Verify the Fix Resolves the Reproduction Case
- Run the reproduction steps from Step 1
- Confirm the bug no longer occurs
- Run the test from sub-step 2; it should now pass
- If the fix does not resolve the bug, revisit Step 2
### 5. Verify Checklist
- [ ] Fix branch created
- [ ] Test written that catches the bug (fails before fix, passes after)
- [ ] Minimal fix implemented targeting root cause
- [ ] No unrelated changes included
- [ ] Reproduction case passes with the fix
- [ ] Dialog file updated with fix details
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Verify"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the fix is implemented, test passes, and reproduction case is resolved will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Fix branch created
- Test written that catches the bug (fails before fix, passes after)
- Minimal fix implemented targeting root cause
- No unrelated changes included
- Reproduction case passes with the fix
### ❌ SYSTEM FAILURE:
- Implementing fix without writing a regression test first
- Including refactoring or unrelated improvements
- Fix does not target root cause (treats symptom only)
- Proceeding without verifying reproduction case
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-04-verify'
description: 'Confirm the fix works and has not introduced regressions'
# File References
nextStepFile: './step-05-document.md'
---
# Step 4: Verify
## STEP GOAL:
Confirm the fix works and has not introduced regressions.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on re-running reproduction steps, running full regression, testing edge cases, verifying no side effects, and cross-platform checks
- 🚫 FORBIDDEN to add new features or make additional code changes — only verify and fix regressions
- 💬 Approach: Systematic verification with user, testing the fix and all adjacent functionality
- 📋 Any regression must be fixed before proceeding to documentation
## EXECUTION PROTOCOLS:
- 🎯 Fix confirmed working with no regressions
- 💾 Update dialog file with verification results
- 📖 Reference reproduction steps from Step 1 and fix details from Step 3
- 🚫 Do not add features or make non-fix changes
## CONTEXT BOUNDARIES:
- Available context: Reproduction from Step 1; root cause from Step 2; fix from Step 3
- Focus: Comprehensive verification — fix works, no regressions, no side effects
- Limits: No feature additions, only regression fixes if needed
- Dependencies: Step 3 must be complete (fix implemented)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Re-Run Reproduction Steps
- Follow the exact reproduction steps from Step 1
- Confirm the bug is fully resolved (not partially or intermittently)
- Test on the same environment used for reproduction
### 2. Run Full Regression Tests
- Run the project's test suite (unit tests, integration tests)
- If no automated test suite exists, manually test the core user flows
- Pay extra attention to features that share code with the fixed area
### 3. Check Edge Cases
- Test boundary conditions around the fix:
- Empty/null/undefined inputs
- Maximum/minimum values
- Rapid repeated actions
- Network errors or slow responses (if applicable)
- Test the exact scenario that was broken, but with slight variations
### 4. Verify No Side Effects
- Check features that are adjacent to or depend on the fixed code
- If the fix changed shared state, verify all consumers of that state
- If the fix changed a utility function, verify all callers
- Compare behavior with baseline (if captured)
### 5. Cross-Platform Check
- If the bug was platform-specific, verify the fix works on that platform
- Also verify the fix does not break other platforms
- Test on relevant viewport sizes if the bug was visual
### 6. Verify Checklist
- [ ] Reproduction steps pass — bug is fixed
- [ ] Test suite passes (all tests green)
- [ ] Edge cases tested around the fix
- [ ] No side effects on related features
- [ ] Cross-platform verification done (if applicable)
- [ ] Dialog file updated with verification results
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 5: Document"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all verification passes with no regressions will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Reproduction steps pass — bug is fixed
- Test suite passes (all tests green)
- Edge cases tested around the fix
- No side effects on related features
- Cross-platform verification done (if applicable)
### ❌ SYSTEM FAILURE:
- Not re-running reproduction steps
- Skipping regression test suite
- Not testing edge cases
- Ignoring side effects on related features
- Proceeding with failing tests
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-05-document'
description: 'Document the bug, the fix, and create a clean PR'
# File References
activityWorkflowFile: '../workflow-bugfixing.md'
---
# Step 5: Document
## STEP GOAL:
Document the bug, the fix, and create a clean PR.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on documenting root cause, fix approach, updating tests, creating PR, and flagging similar risks
- 🚫 FORBIDDEN to add new features or make code changes beyond test cleanup
- 💬 Approach: Collaborative documentation and PR creation with user
- 📋 Flag similar risks elsewhere in the codebase for future investigation
## EXECUTION PROTOCOLS:
- 🎯 Complete bug documentation, PR created, similar risks flagged
- 💾 Finalize dialog file with complete bugfix record
- 📖 Reference all previous steps for comprehensive documentation
- 🚫 Do not add features or make non-documentation changes
## CONTEXT BOUNDARIES:
- Available context: Reproduction from Step 1; root cause from Step 2; fix from Step 3; verification from Step 4
- Focus: Documentation, PR creation, risk flagging
- Limits: No feature additions, no code changes beyond test cleanup
- Dependencies: Step 4 must be complete (verification passed)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Document What Caused the Bug
- Write a clear root cause summary in the dialog file
- Explain why the bug existed (design oversight, edge case, regression from another change)
- Include the relevant code context — what was wrong and why
### 2. Document the Fix Approach
- Explain what was changed and why this approach was chosen
- If alternative approaches were considered, note why they were rejected
- Reference the specific files and lines changed
### 3. Update Tests
- Ensure the regression test from Step 3 is complete and well-named
- Add any additional test cases discovered during verification
- Verify all tests pass with the final fix
### 4. Create PR with Clear Description
- Write a PR title that summarizes the fix: `fix: <what was fixed>`
- PR body should include:
- **Bug:** What the user experienced
- **Root cause:** Why it happened
- **Fix:** What was changed
- **Testing:** How it was verified
- Link to the bug report or issue if one exists
### 5. Flag Similar Risks
- If the root cause suggests similar bugs may exist elsewhere, note this
- Examples: "This pattern of unchecked null access also exists in X and Y"
- Create follow-up issues or add notes to the dialog for future investigation
### 6. Verify Checklist
- [ ] Root cause documented
- [ ] Fix approach documented
- [ ] Tests updated and passing
- [ ] PR created with clear description
- [ ] Similar risks flagged (if any)
- [ ] Dialog file finalized with complete bugfix record
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN documentation is complete, PR is created, and similar risks are flagged will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Root cause documented
- Fix approach documented
- Tests updated and passing
- PR created with clear description
- Similar risks flagged (if any)
- Dialog file finalized with complete bugfix record
### ❌ SYSTEM FAILURE:
- Not documenting root cause
- Creating PR without clear description
- Not flagging similar risks
- Not finalizing dialog file
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: '1-prototype-setup'
description: 'Set up the prototype environment for an entire scenario (one-time setup)'
# File References
nextStepFile: './2-scenario-analysis.md'
---
# Step 1: Prototype Setup
## STEP GOAL:
Set up the prototype environment for an entire scenario (one-time setup). This assumes the scenario specification already exists.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on running the initiation dialog, creating folder structure, and setting up demo data
- 🚫 FORBIDDEN to begin building any pages or components — that is a later step
- 💬 Approach: Ask the 4 initiation questions, then create the folder structure with user
- 📋 Skip this step if the scenario already has `data/demo-data.json` and `PROTOTYPE-ROADMAP.md`
## EXECUTION PROTOCOLS:
- 🎯 Prototype folder structure created with demo data and roadmap
- 💾 Create demo-data.json and PROTOTYPE-ROADMAP.md
- 📖 Reference PROTOTYPE-INITIATION-DIALOG.md for exact conversation scripts
- 🚫 Do not build any pages or UI during this step
## CONTEXT BOUNDARIES:
- Available context: Scenario specification (from scenario-init workflow)
- Focus: Environment setup — folder structure, demo data, configuration
- Limits: No page building, no UI work
- Dependencies: Scenario specification must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. User Requests Scenario Setup
**User says**: "Create interactive prototypes for Scenario [N]: [Scenario Name]"
**Your response**: Follow the **Scenario Initiation Dialog** in `PROTOTYPE-INITIATION-DIALOG.md`
### 2. Run Initiation Dialog
**Ask 4 questions**:
1. **Device Compatibility** (Mobile-Only / Mobile+Tablet / Fully Responsive / Desktop-Only)
2. **Design Fidelity** (Gray Model / Design System / Figma Integration)
3. **Languages** (if project supports multiple languages)
4. **Demo Data** (Create realistic test family data)
**See**: `PROTOTYPE-INITIATION-DIALOG.md` for exact conversation scripts
### 3. Create Prototype Folder Structure
**Actions**:
1. **Create prototype folder**: `[Scenario-Number]-[Scenario-Name]-Prototype/`
2. **Create all subfolders**:
- `data/` - Demo data JSON files
- `work/` - Planning/work files (one per page)
- `stories/` - Section implementation guides (created just-in-time)
- `shared/` - Shared JavaScript (utilities, API abstraction)
- `components/` - Reusable UI components
- `pages/` - Page-specific scripts (if complex)
- `assets/` - Images, icons, etc.
3. **Create `data/demo-data.json`** with demo family
4. **Create `PROTOTYPE-ROADMAP.md`** with scenario overview
5. **Record device compatibility and design approach** in roadmap
**Folder structure created**:
```
[Scenario-Number]-[Scenario-Name]-Prototype/
├── PROTOTYPE-ROADMAP.md
├── data/
│ └── demo-data.json
├── work/ (empty, will be filled per-page)
├── stories/ (empty, created just-in-time)
├── shared/ (empty, add as needed)
├── components/ (empty, add as needed)
├── pages/ (empty, add if needed)
└── assets/ (empty, add as needed)
```

HTML files will be placed in the root as they are created.
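A minimal sketch of how `shared/init.js` might auto-load the demo data is shown below. The injectable `fetchImpl` parameter and the caching approach are assumptions for illustration, not part of the specification:

```javascript
// shared/init.js (sketch): fetch demo data once and cache it.
let demoDataCache = null;

async function loadDemoData(fetchImpl = fetch, url = 'data/demo-data.json') {
  if (demoDataCache) return demoDataCache;
  const response = await fetchImpl(url);
  if (!response.ok) throw new Error(`Demo data not found: ${url}`);
  demoDataCache = await response.json();
  return demoDataCache;
}

// A page could then initialize itself like:
// document.addEventListener('DOMContentLoaded', async () => {
//   window.demoData = await loadDemoData();
// });
```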
### 4. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Scenario Analysis"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the prototype folder structure is created with demo data and roadmap will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Initiation dialog completed (4 questions answered)
- Prototype folder structure created with all subfolders
- demo-data.json created with realistic test data
- PROTOTYPE-ROADMAP.md created with scenario overview
- Device compatibility and design approach recorded
### ❌ SYSTEM FAILURE:
- Beginning page building before setup is complete
- Skipping initiation dialog questions
- Not creating demo data
- Not creating the roadmap
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: '2-scenario-analysis'
description: 'Analyze the entire scenario to identify all logical views and map which scenario steps use which views'
# File References
nextStepFile: './3-logical-view-breakdown.md'
---
# Step 2: Scenario Analysis & Logical View Identification
## STEP GOAL:
Analyze the entire scenario to identify all logical views and map which scenario steps use which views. A "logical view" is a conceptual page/screen with multiple states.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on reading all scenario step specs, identifying logical views, getting user confirmation, and creating the logical view map
- 🚫 FORBIDDEN to begin building any views or breaking them into sections — that is the next step
- 💬 Approach: Present logical view mapping to user for review and confirmation
- 📋 Multiple scenario steps can use the same logical view with different states
## EXECUTION PROTOCOLS:
- 🎯 Complete logical view map with all views identified and confirmed by user
- 💾 Create `work/Logical-View-Map.md` with view mapping and build order
- 📖 Read all scenario step specification files
- 🚫 Do not begin section breakdown or implementation
## CONTEXT BOUNDARIES:
- Available context: Prototype folder structure from Step 1; all scenario step specifications
- Focus: Identifying logical views and mapping scenario steps to views
- Limits: No section breakdown, no implementation
- Dependencies: Step 1 must be complete (prototype folder exists)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Read All Scenario Step Specifications
**Actions**:
1. List all scenario step folders in `../[Scenario]/`
2. Read each `[Step].md` specification file
3. Note step names, purposes, and any "inherit from" or "base page" references
### 2. Identify Logical Views
For each scenario step, determine:
- Is this a **new logical view** (new page/screen)?
- Or does it **reuse an existing logical view** (same page, different state)?
**Key indicators of SAME logical view**:
- Spec says "inherit from [other step]"
- Spec says "same structure as [other step]"
- Same page name (e.g., "Family Page" in 1.5, 1.7, 1.9)
- Overlay/modal/confirmation on existing page
**Key indicators of NEW logical view**:
- Completely different page structure
- Different purpose and user context
- No reference to inheriting from another step
Present the mapping to the user for confirmation.
### 3. User Reviews & Confirms Mapping
**Wait for response**
**If user says "N"**:
- Ask what needs adjustment
- Update logical view mapping
- Re-present for confirmation
**If user says "Y"**: Proceed to create the map document
### 4. Create Logical View Map Document
Create `work/Logical-View-Map.md` with view details, build order, and notes.
### 5. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Logical View Breakdown"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the logical view mapping is confirmed by user and the map document is created will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All scenario step specifications read
- Logical views identified with correct grouping
- User confirmed the mapping
- Logical-View-Map.md created with build order
### ❌ SYSTEM FAILURE:
- Beginning to build views before analysis is complete
- Not reading all scenario step specifications
- Not getting user confirmation on the mapping
- Not creating the map document
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: '3-logical-view-breakdown'
description: 'Select a logical view to build and break it into implementable sections'
# File References
nextStepFile: './4a-announce-and-gather.md'
---
# Step 3: Logical View Selection & Section Breakdown
## STEP GOAL:
Select a logical view to build and break it into implementable sections. This creates the work plan, but NOT the story files yet (those are created just-in-time).
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on confirming view selection, gathering objects from specs, proposing section breakdown, and creating the work file
- 🚫 FORBIDDEN to create story files or begin implementing — those are later steps
- 💬 Approach: Collaboratively break down the view into 4-8 sections with user approval
- 📋 Group objects logically, consider all states, estimate time per section
## EXECUTION PROTOCOLS:
- 🎯 Work file created with approved section breakdown
- 💾 Create `work/[View-Name]-Work.yaml` with section plan
- 📖 Read all scenario step specs that use the selected logical view
- 🚫 Do not create story files or write any HTML/JS
## CONTEXT BOUNDARIES:
- Available context: Logical view map from Step 2; all scenario step specifications
- Focus: Section breakdown planning — objects, grouping, estimation
- Limits: No story files, no implementation
- Dependencies: Step 2 must be complete (logical view map exists)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Confirm Logical View Selection
**User says**: "Let's build [Logical View Name]" or selects from list
### 2. Gather All Specifications
**Actions**:
1. **Read all scenario step specs** that use this logical view
2. **Extract all Object IDs** across all states
3. **Identify unique objects** vs **state-specific objects**
4. **Note functional requirements** from all specs
Present the object list to the user for confirmation.
### 3. User Confirms Objects
**If user says "N"**: Ask what's missing or should be removed, update, re-present
**If user says "Y"**: Continue to section breakdown
### 4. Propose Section Breakdown
**Actions**:
1. **Group objects logically** into 4-8 sections
2. **Consider all states** when grouping
3. **Estimate time** per section
Present the breakdown to the user for approval.
### 5. User Reviews Section Breakdown
**If user says "N"**: Ask what needs adjustment, revise, re-present
### 6. Create Work File
**When user approves**: Create `work/[View-Name]-Work.yaml` with section details, statuses, and estimates.
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4a: Announce and Gather"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the section breakdown is approved and work file is created will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Logical view selection confirmed
- All objects extracted from specifications
- User confirmed object list
- Section breakdown approved (4-8 sections)
- Work file created with section plan
### ❌ SYSTEM FAILURE:
- Creating story files before work file is approved
- Beginning implementation before planning
- Not getting user approval on section breakdown
- Not extracting all objects from specs
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: '4a-announce-and-gather'
description: 'Announce which section is being built and gather all requirements from specifications'
# File References
nextStepFile: './4b-create-story-file.md'
---
# Step 4a: Announce Section & Gather Requirements
## STEP GOAL:
Announce which section we're building and gather all requirements from specifications. Prepare to create the story file by collecting all necessary information.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak your output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on announcing the section, reading relevant specs, and gathering requirements
- 🚫 FORBIDDEN to create the story file or begin implementation — those are the next steps
- 💬 Approach: Announce what will be built, then systematically gather all requirements
- 📋 Extract object IDs, descriptions, state behavior, functional requirements, and design references
## EXECUTION PROTOCOLS:
- 🎯 All requirements gathered from specifications for this section
- 💾 Requirements ready for story file creation
- 📖 Reference the work file and all relevant scenario step specifications
- 🚫 Do not create story files or write code
## CONTEXT BOUNDARIES:
- Available context: Work file from Step 3; all scenario step specifications
- Focus: Requirements gathering for the current section
- Limits: No story file creation, no implementation
- Dependencies: Work file must exist (Step 3 complete), previous section approved (or this is Section 1)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Announce Section
Tell the user which section is being built, including its features, object IDs, states covered, and estimated time.
### 2. Read Relevant Specifications
**Actions**:
1. Open work file: `work/[View]-Work.yaml`
2. Find Section [N] details
3. Read all scenario step specifications that use this section
4. For each spec, extract:
- Object IDs for this section
- Object descriptions (type, label, behavior)
- State-specific behavior
- Functional requirements
- Design references
### 3. Gather Requirements Summary
Present the requirements summary to the user, including object count, specifications referenced, states to handle, functions needed, and design tokens.
### 4. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4b: Create Story File"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all requirements are gathered from specifications will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Section announced with clear scope
- All relevant specifications read
- Object IDs, behaviors, and states extracted
- Requirements summary presented to user
### ❌ SYSTEM FAILURE:
- Creating story file before requirements are gathered
- Not reading all relevant specifications
- Missing object IDs or state behaviors
- Beginning implementation prematurely
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: '4b-create-story-file'
description: 'Create the focused story file for this section with all implementation details'
# File References
nextStepFile: './4c-implement-section.md'
---
# Step 4b: Create Story File
## STEP GOAL:
Create the focused story file for this section with all implementation details. Use the story template to create complete, clear instructions for implementation.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on creating the story file with objects, HTML structure, Tailwind classes, JavaScript requirements, demo data, and acceptance criteria
- 🚫 FORBIDDEN to begin implementing — that is the next step
- 💬 Approach: Create comprehensive story file, then offer user review or proceed to implementation
- 📋 Story file must include both agent-verifiable (Puppeteer) and user-evaluable (qualitative) criteria
## EXECUTION PROTOCOLS:
- 🎯 Complete story file created with all implementation instructions
- 💾 Create `stories/[View].[N]-[section-name].md`
- 📖 Reference requirements gathered in Step 4a
- 🚫 Do not write any HTML, CSS, or JavaScript code
## CONTEXT BOUNDARIES:
- Available context: Requirements gathered in Step 4a; work file; specifications
- Focus: Story file creation — complete implementation instructions
- Limits: No code implementation
- Dependencies: Step 4a must be complete (requirements gathered)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Create Story File
Create `stories/[View].[N]-[section-name].md` with:
- Purpose, specifications reference
- All objects with type, label, behavior, states, and spec reference
- HTML structure to build
- Tailwind classes to use
- JavaScript requirements (functions and state handling)
- Demo data requirements
- Acceptance criteria (agent-verifiable and user-evaluable)
- Test instructions (Puppeteer self-verification and user qualitative review)
### 2. Present Story to User
Present summary and offer user the choice to review the story first or proceed to implementation.
### 3. Handle User Response
**If user says "review"**: Show key sections, answer questions, make adjustments, ask if ready to implement.
**If user says "implement"** or "Y": Proceed to next step.
### 4. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4c: Implement Section"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the story file is created and user is ready to proceed will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Story file created with complete implementation instructions
- All objects defined with types, behaviors, and states
- Acceptance criteria include both agent-verifiable and user-evaluable items
- User approved or chose to proceed
### ❌ SYSTEM FAILURE:
- Beginning implementation without a story file
- Missing objects or acceptance criteria
- Not offering user the chance to review
- Creating incomplete story file
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: '4c-implement-section'
description: 'Implement the section following the story file precisely'
# File References
nextStepFile: './4d-present-for-testing.md'
---
# Step 4c: Implement Section
## STEP GOAL:
Implement the section following the story file precisely. Linear code generation is the task.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on implementing the HTML structure, adding object IDs, Tailwind classes, JavaScript, and placeholders per the story file
- 🚫 FORBIDDEN to deviate from the story file instructions or add unplanned features
- 💬 Approach: Follow the story file precisely, implementing section by section
- 📋 For Section 1, create new HTML file; for subsequent sections, update existing file
## EXECUTION PROTOCOLS:
- 🎯 Section implemented with all objects, styles, and JavaScript per story file
- 💾 HTML file created/updated with section implementation
- 📖 Follow story file instructions precisely
- 🚫 Do not add features not in the story file
## CONTEXT BOUNDARIES:
- Available context: Story file from Step 4b; page template (for Section 1)
- Focus: Code implementation following story file
- Limits: Only implement what the story file specifies
- Dependencies: Step 4b must be complete (story file exists and approved)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Begin Implementation
Announce implementation start.
### 2. Create or Update HTML File
**For Section 1 ONLY**:
- Create new HTML file from `templates/page-template.html`
- Name it: `[View].html`
- Place in prototype root folder
**For Sections 2+**:
- Open existing `[View].html` file
- Find insertion point (after previous section or before placeholder)
### 3. Add HTML Structure
**Follow story file precisely**:
1. Add HTML structure with Tailwind classes from story
2. Add all Object IDs on interactive elements
3. Add state-specific classes/attributes
4. Add placeholder content where specified
### 4. Add JavaScript
**If section needs JavaScript**:
1. Add functions specified in story file
2. Add event listeners for interactive elements
3. Add state handling logic
4. Add console logging for debugging
5. Load demo data from `data/demo-data.json`
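The demo-data load and state handling above can be sketched as follows. This is a minimal illustration only: the `orders` field, the `btn-refresh` Object ID, and the function names are placeholders, not taken from any specification.

```javascript
// Sketch only: adapt field names, Object IDs, and functions to the story file.

// Pure helper: decide which UI state the section should render.
function resolveSectionState(data) {
  // "orders" is a placeholder field; use your demo data's real shape.
  if (!data || !Array.isArray(data.orders)) return 'error';
  return data.orders.length === 0 ? 'empty' : 'populated';
}

// Browser-only wiring, guarded so the helper above also runs outside a browser.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', async () => {
    let data = null;
    try {
      const response = await fetch('data/demo-data.json');
      if (response.ok) data = await response.json();
    } catch (err) {
      console.error('Demo data failed to load:', err); // debug logging per this step
    }
    console.log('Section state:', resolveSectionState(data));
    // Event listener on a placeholder Object ID:
    document.getElementById('btn-refresh')
      ?.addEventListener('click', () => console.log('btn-refresh clicked'));
  });
}
```

Keeping state decisions in a pure function like `resolveSectionState` makes the section's behavior checkable without a browser, which pays off in Step 4d's self-verification.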
### 5. Add Placeholder for Remaining Sections
**If more sections remain**: Add a placeholder div at the bottom indicating the next section.
### 6. Final Check
**Before presenting to user, verify**:
- [ ] All Object IDs from story file are present
- [ ] Tailwind classes match story file
- [ ] JavaScript functions implemented
- [ ] Console logging added
- [ ] Code is clean and readable
- [ ] No syntax errors
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4d: Present for Testing"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the section is fully implemented per the story file will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All Object IDs from story file present
- Tailwind classes match story file
- JavaScript functions implemented
- Code is clean, readable, and error-free
- Placeholder for remaining sections added (if applicable)
### ❌ SYSTEM FAILURE:
- Deviating from story file instructions
- Missing Object IDs
- Adding unplanned features
- Syntax errors in code
- Not following story file precisely
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: '4d-present-for-testing'
description: 'Present the implemented section to user with clear test instructions after agent self-verification'
---
# Step 4d: Present Section for Testing
## STEP GOAL:
Present the implemented section to user with clear test instructions after performing agent self-verification.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on agent self-verification with Puppeteer, presenting implementation, and requesting qualitative user review
- 🚫 FORBIDDEN to skip self-verification before presenting to user
- 💬 Approach: Verify first, then present with clear test instructions for qualitative aspects
- 📋 Only present to user when all agent-verifiable criteria pass
## EXECUTION PROTOCOLS:
- 🎯 Agent self-verification complete, section presented to user for qualitative review
- 💾 Record verification results
- 📖 Reference story file acceptance criteria for verification
- 🚫 Do not present to user until self-verification passes
## CONTEXT BOUNDARIES:
- Available context: Implemented section from Step 4c; story file acceptance criteria
- Focus: Self-verification and user presentation
- Limits: No code changes during presentation (unless self-verification fails)
- Dependencies: Step 4c must be complete (section implemented)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 0. Agent Self-Verification (Before Presenting)
**BEFORE presenting to the user, verify your own work with Puppeteer.**
See: [Inline Testing Guide](../data/guides/INLINE-TESTING-GUIDE.md) for full methodology.
**Actions**:
1. Open the page in browser using Puppeteer
2. Set viewport to target device width
3. Verify each agent-verifiable criterion from the story file
4. Narrate findings using the pass/fail pattern (actual vs expected)
5. Fix any failures and re-verify
**If modifying existing features**: Compare against baseline captured before implementation. Confirm only intended changes occurred.
**Only proceed to Step 1 when all agent-verifiable criteria pass.**
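As a rough sketch, the Puppeteer pass could look like this, assuming Puppeteer is installed in the environment; the `Checkout.html` file name, the 390px viewport, and the `#btn-submit` selector are placeholder values to be replaced with your view's file and the story file's Object IDs.

```javascript
// Sketch only: substitute your view's HTML file, target device width, and
// the Object IDs listed in the story file's agent-verifiable criteria.

// Format a narrated finding in the pass/fail (actual vs expected) pattern.
function verdict(criterion, passed, expected) {
  return `${passed ? 'PASS' : 'FAIL'}: ${criterion} (expected: ${expected})`;
}

async function verifySection() {
  const puppeteer = require('puppeteer'); // assumed available
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 390, height: 844 }); // target device width

  const pageErrors = [];
  page.on('pageerror', (err) => pageErrors.push(err.message));

  await page.goto('file://' + __dirname + '/Checkout.html'); // placeholder view

  // One agent-verifiable criterion: the Object ID exists on the page.
  const button = await page.$('#btn-submit');
  console.log(verdict('#btn-submit present', button !== null, 'present'));
  console.log(verdict('no page errors', pageErrors.length === 0, 'none'));

  await browser.close();
}

verifySection().catch((err) =>
  console.log('Verification skipped (environment not ready):', err.message));
```

Run it with `node` from the prototype folder; each narrated PASS/FAIL line follows the actual-vs-expected pattern described in the testing guide.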
### 1. Present Implementation
Present what was built, listing new features with Object IDs and files updated.
### 2. Present Verification Results & Request Qualitative Review
Present Puppeteer verification results, then ask user to evaluate qualitative aspects:
- Feel the flow: Does the interaction feel natural?
- Visual hierarchy: Does your eye go to the right place first?
- Clarity: Is it immediately clear what to do?
- Consistency: Does this section feel like it belongs with the rest?
### 3. Wait for User Feedback
**User will respond with one of**:
- Approved: "Looks good!" / "Y" / "Perfect!" -> Go to `4g-section-approved.md`
- Issue: "The button doesn't..." / "I see a problem with..." -> Go to `4e-handle-issue.md`
- Improvement: "Could we make it..." / "What if we..." -> Go to `4f-handle-improvement.md`
### 4. Present MENU OPTIONS
Display based on user feedback:
- **If approved**: "[C] Continue to Step 4g: Section Approved"
- **If issue reported**: "[C] Continue to Step 4e: Handle Issue"
- **If improvement suggested**: "[C] Continue to Step 4f: Handle Improvement"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute the appropriate next step file
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN user has provided feedback will you then load and read fully the appropriate next step file to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Agent self-verification completed before presenting
- All agent-verifiable criteria pass
- Implementation presented clearly with Object IDs
- Qualitative review requested from user
- User feedback captured and routed correctly
### ❌ SYSTEM FAILURE:
- Presenting to user without self-verification
- Skipping Puppeteer verification
- Not requesting qualitative review
- Routing user feedback incorrectly
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: '4e-handle-issue'
description: 'Fix reported issues in the section, document, and re-verify'
---
# Step 4e: Handle Issue
## STEP GOAL:
Fix reported issues in the section. Identify, fix, document, and re-test.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on acknowledging the issue, fixing it, updating the story file with learning, re-verifying, and re-presenting
- 🚫 FORBIDDEN to add unrelated improvements while fixing an issue
- 💬 Approach: Acknowledge, analyze, fix, document the learning, then re-verify
- 📋 Update story file with what was wrong, why, and what was learned
## EXECUTION PROTOCOLS:
- 🎯 Issue fixed, documented in story file, re-verified with Puppeteer
- 💾 Update story file with changes made section
- 📖 Reference the reported issue and story file
- 🚫 Do not add unrelated features or improvements
## CONTEXT BOUNDARIES:
- Available context: User's issue report; current implementation; story file
- Focus: Issue identification, fix, documentation, re-verification
- Limits: Only fix the reported issue — no scope expansion
- Dependencies: User has reported an issue (from Step 4d)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Acknowledge Issue
Acknowledge the specific problem, analyze why it is happening, and describe the fix.
### 2. Fix the Issue
**Actions**:
1. Identify the root cause
2. Make the specific fix in the code
3. Test the fix mentally (does it solve the problem?)
4. Keep the fix focused and local
### 3. Update Story File with Learning
Add to story file `stories/[View].[N]-[section-name].md`:
- Problem: What was wrong
- Root cause: Why it happened
- Solution: What was changed
- Code change: Specific change made
- Learned: What to do differently next time
### 3.5. Re-Verify with Puppeteer
After fixing the issue, run Puppeteer verification before re-presenting:
1. Open page in browser
2. Verify the fix resolves the reported issue
3. Verify no regressions on previously passing criteria
4. Narrate findings with pass/fail
**Only proceed to re-present when all criteria pass.**
### 4. Re-present for Testing
Present the fix, explain what changed, why it works now, and request re-testing.
**Note**: This step may loop multiple times until the issue is resolved. After re-presenting, route back to Step 4d for user feedback.
### 5. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4d: Present for Testing (re-test)"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute `./4d-present-for-testing.md`
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the issue is fixed and re-verified will you then loop back to present for testing again.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Issue acknowledged and analyzed
- Root cause identified
- Focused fix implemented
- Story file updated with learning
- Re-verified with Puppeteer before re-presenting
### ❌ SYSTEM FAILURE:
- Not acknowledging or analyzing the issue
- Fix does not address root cause
- Not updating story file with learning
- Skipping re-verification
- Adding unrelated improvements during fix
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: '4f-handle-improvement'
description: 'Implement user improvement suggestion, capture learning, and consider specification update'
---
# Step 4f: Handle Improvement Suggestion
## STEP GOAL:
Implement user's improvement suggestion and capture learning. Enhance the implementation based on user feedback.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on acknowledging the improvement, implementing it, updating the story file, considering spec updates, and re-presenting
- 🚫 FORBIDDEN to reject valid improvement suggestions without explanation
- 💬 Approach: Acknowledge, implement, document, consider spec update, re-present
- 📋 Ask user if the improvement should be reflected in the specification
## EXECUTION PROTOCOLS:
- 🎯 Improvement implemented, documented in story file, spec update considered
- 💾 Update story file with improvement details
- 📖 Reference the user's suggestion
- 🚫 Keep changes focused on the improvement
## CONTEXT BOUNDARIES:
- Available context: User's improvement suggestion; current implementation; story file
- Focus: Implementing the improvement and capturing the learning
- Limits: Only implement the suggested improvement
- Dependencies: User has suggested an improvement (from Step 4d)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Acknowledge Improvement
Acknowledge the suggestion, describe current approach, proposed improvement, and benefit.
### 2. Implement Improvement
**Actions**:
1. Understand the user's suggestion
2. Implement the improvement in the code
3. Ensure it enhances UX or code quality
4. Keep changes focused
### 3. Update Story File with Improvement
Add to story file `stories/[View].[N]-[section-name].md`:
- Original: What it was
- Improved to: What it is now
- Reason: Why it is better
- Impact: How it improves UX/code
- Learned: Pattern to use in future
### 4. Consider Specification Update
Ask user if the improvement should be reflected in the specification for future work.
**If user says "Y"**: Note which spec files to update and what should be added.
**If user says "N"**: Learning is captured in story file for reference.
### 5. Re-present for Testing
Present the improvement, explain what changed, why it is better, and request re-testing.
After re-presenting, route back to Step 4d for user feedback.
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4d: Present for Testing (re-test)"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute `./4d-present-for-testing.md`
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the improvement is implemented and documented will you then loop back to present for testing again.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Improvement acknowledged and understood
- Implementation enhances UX or code quality
- Story file updated with improvement details
- Specification update considered
- Re-presented for testing
### ❌ SYSTEM FAILURE:
- Rejecting valid improvement without explanation
- Not documenting the improvement in story file
- Not asking about specification update
- Implementing something different from what was suggested
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: '4g-section-approved'
description: 'Finalize section approval, update status, and determine next action'
# File References
nextStepFile: './5-finalization.md'
---
# Step 4g: Section Approved & Next Steps
## STEP GOAL:
Finalize section approval and determine next action. Update status and move forward.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on updating story file status, updating work file, checking progress, and routing to next section or finalization
- 🚫 FORBIDDEN to begin next section without updating status files
- 💬 Approach: Celebrate completion, update records, present progress and next steps
- 📋 If more sections remain, loop back to Step 4a; if all complete, proceed to Step 5
## EXECUTION PROTOCOLS:
- 🎯 Section status updated, progress reported, next action determined
- 💾 Update story file status and work file
- 📖 Reference work file for section progress tracking
- 🚫 Do not skip status updates
## CONTEXT BOUNDARIES:
- Available context: Approved section; work file with section plan
- Focus: Status updates and routing
- Limits: No new implementation
- Dependencies: User has approved the section (from Step 4d)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Section Approved
Acknowledge user approval and announce status update.
### 2. Update Story File Status
Update `stories/[View].[N]-[section-name].md` with:
- Status: Complete
- Completed date
- Implementation summary (objects, issues, improvements, time)
### 3. Update Work File
Update `work/[View]-Work.yaml` with section status, completed date, actual time, issues encountered, and improvements made.
### 4. Check Progress
Count sections: total, completed, remaining.
### 5a. If More Sections Remain
Present progress, announce next section, and ask if ready to continue.
**If user says "Y"**: Go back to **Step 4a** (`4a-announce-and-gather.md`)
**If user says "N"** or wants to pause: Save state and acknowledge pause.
### 5b. If All Sections Complete
Announce completion of all sections and present summary of files created and states covered. Suggest proceeding to Phase 5 for integration testing.
### 6. Present MENU OPTIONS
Display based on status:
- **If more sections**: "[C] Continue to Step 4a: Announce and Gather (next section)"
- **If all complete**: "[C] Continue to Step 5: Finalization"
#### Menu Handling Logic:
- IF C (more sections): Update design log, then load, read entire file, then execute `./4a-announce-and-gather.md`
- IF C (all complete): Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN status files are updated and user has chosen to continue will you then load and read fully the appropriate next step file to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Story file status updated to complete
- Work file updated with section status
- Progress reported to user
- Correct routing (next section or finalization)
### ❌ SYSTEM FAILURE:
- Not updating story file status
- Not updating work file
- Skipping progress report
- Routing incorrectly (wrong next step)
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: '5-finalization'
description: 'Complete integration test and final approval for the logical view'
# File References
activityWorkflowFile: '../workflow-prototyping.md'
---
# Step 5: Finalization
## STEP GOAL:
Complete integration test and final approval for the logical view.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on announcing completion, running integration tests across all states, handling final issues, and presenting the complete logical view
- 🚫 FORBIDDEN to add new sections or features — only test and fix integration issues
- 💬 Approach: Comprehensive integration testing across all states with user
- 📋 All states must work correctly before marking the logical view as complete
## EXECUTION PROTOCOLS:
- 🎯 Integration test complete, all states working, logical view approved
- 💾 Final status recorded in work files and story files
- 📖 Reference logical view map for all states that need testing
- 🚫 Do not add new features — only fix integration issues
## CONTEXT BOUNDARIES:
- Available context: All completed sections; work file; logical view map; all story files
- Focus: Integration testing across all states
- Limits: No new features, only integration fixes
- Dependencies: All sections must be approved (Step 4g complete for all)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Announce Completion
Present all completed sections, files created, and states covered.
### 2. Integration Test Instructions
Provide test instructions for each state:
- Clear browser data between states
- Actions to trigger each state
- Expected results for each state
**Check**:
- All Object IDs present
- State transitions work smoothly
- No console errors
- Responsive at target device width
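The Object ID check can be scripted. The sketch below is illustrative only: the example IDs are placeholders, and the real list should be collected from each section's story file.

```javascript
// Sketch only: gather the real Object IDs from the story files.
function checkObjectIds(ids, isPresent) {
  // isPresent: (id) => boolean; in the browser, pass a DOM lookup.
  const missing = ids.filter((id) => !isPresent(id));
  return { passed: missing.length === 0, missing };
}

// Browser usage: paste into the DevTools console on the open view.
// const ids = ['btn-submit-order', 'input-search', 'toast-container'];
// console.log(checkObjectIds(ids, (id) => document.getElementById(id) !== null));
```

A non-empty `missing` array pinpoints exactly which story-file objects were lost during integration, which is faster than re-reading each section by eye.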
### 3. Handle Final Issues or Approve
**If user reports issues**: Fix issues, update story files with learnings, update specifications if needed, re-test, loop until approved.
**If user approves**: Present complete summary including:
- View name and HTML file
- Sections completed count
- Object IDs implemented count
- States working count
- Device optimization
- Quality checklist (all items checked)
- All files created
Present options:
- Build another logical view in this scenario?
- Start a new scenario?
- Refine this view?
### 4. Scenario Completion Check
When all logical views complete, review `work/Logical-View-Map.md`:
- Are all logical views built?
- Are all scenario steps covered?
- Are all states working?
If YES: Scenario prototype complete!
### 5. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the integration test passes and logical view is approved will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All sections complete and integrated
- All states tested and working
- All Object IDs present
- Responsive at target device width
- No console errors
- Quality checklist fully checked
- Complete summary presented to user
### ❌ SYSTEM FAILURE:
- Not testing all states
- Skipping integration test
- Not presenting complete summary
- Leaving console errors unresolved
- Not checking scenario completion status
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
---
name: 'step-01-identify-target'
description: 'Define what to reverse engineer, how to access it, and what to extract'
# File References
nextStepFile: './step-02-explore-and-capture.md'
---
# Step 1: Identify Target
## STEP GOAL:
Define what to reverse engineer, how to access it, and what to extract.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on defining the target, determining access method, defining extraction goals, and documenting the target overview
- 🚫 FORBIDDEN to begin exploring or capturing the target — that is the next step
- 💬 Approach: Help user clearly define what they want to reverse engineer and what they need from it
- 📋 Access method must be verified before proceeding
## EXECUTION PROTOCOLS:
- 🎯 Clear target definition with access method and extraction goals
- 💾 Document target overview in dialog file
- 📖 Reference the target type table and extraction goals checklist
- 🚫 Do not begin exploring or capturing any content
## CONTEXT BOUNDARIES:
- Available context: User's initial request or target description
- Focus: Target identification — what, how, and what to extract
- Limits: No exploration, no capturing
- Dependencies: None — this is the first step
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Define the Target
Identify what you are reverse engineering:
| Target Type | Examples |
|-------------|----------|
| **Website** | Public marketing site, SaaS application, e-commerce store |
| **Web application** | Dashboard, admin panel, booking system |
| **Component library** | Design system documentation site, Storybook instance |
| **Mobile app** | iOS/Android app (via screenshots or simulator) |
| **Source code** | Your own codebase, open source project |
Write a clear target description:
```
Target: [Name / URL / Project]
Type: [Website / Web app / Component library / Mobile app / Source code]
Owner: [Own product / Client product / Public / Competitor]
```
### 2. Determine Access Method
How will you explore the target?
| Access Method | When to Use | Tools |
|---------------|-------------|-------|
| **URL (browser)** | Public websites, web apps with demo | Puppeteer, browser DevTools |
| **Source code** | Own codebase, open source | File reading, code analysis |
| **Screenshots** | No live access, mobile apps, provided by client | Image analysis |
| **Combination** | Source code + live URL | Both code and browser |
Note any access limitations (login required, rate limits, geo-restrictions).
### 3. Define Extraction Goals
What do you need to extract? Check all that apply:
- [ ] **Page specifications** — Layout, structure, content for each page
- [ ] **Design system** — Colors, typography, spacing, shadows, tokens
- [ ] **Component inventory** — Reusable UI components, variants, states
- [ ] **Content strategy** — Copy patterns, tone, content hierarchy
- [ ] **Architecture** — Tech stack, routing, data flow (requires source access)
- [ ] **User flows** — Multi-page journeys, interaction patterns
- [ ] **All of the above** — Full extraction for rebuild or migration
### 4. Document Target Overview
Create a brief overview to guide the exploration:
```
Target Overview:
- Name: [Name]
- URL: [URL or "source code at /path/"]
- Access: [Browser / Source / Screenshots]
- Goals: [What to extract]
- Output language: [Language for generated specs]
- Pages estimated: [Rough count if known]
- Notes: [Login required? Special states? Known complexity?]
```
### 5. Verify Checklist
- [ ] Target clearly identified with name and type
- [ ] Access method determined and verified (URL loads, source exists, screenshots provided)
- [ ] Extraction goals defined
- [ ] Target overview documented
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Explore and Capture"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the target is defined with access method verified and extraction goals documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Target clearly identified with name and type
- Access method determined and verified
- Extraction goals defined
- Target overview documented
### ❌ SYSTEM FAILURE:
- Beginning exploration before target is fully defined
- Not verifying access method
- Not defining extraction goals
- Skipping target documentation
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

@@ -0,0 +1,173 @@
---
name: 'step-02-explore-and-capture'
description: 'Systematically explore the target and capture a complete inventory of pages, components, patterns, and design tokens'
# File References
nextStepFile: './step-03-generate-specs.md'
---
# Step 2: Explore and Capture
## STEP GOAL:
Systematically explore the target and capture a complete inventory of pages, components, patterns, and design tokens.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on crawling pages, capturing structure, noting interactions, and extracting design tokens
- 🚫 FORBIDDEN to begin generating specs — that is the next step
- 💬 Approach: Systematically explore using the access method, documenting everything as inventories
- 📋 Use the appropriate exploration method (URL, source code, or screenshots) as determined in Step 1
## EXECUTION PROTOCOLS:
- 🎯 Complete inventories of pages, navigation, components, colors, typography, and spacing
- 💾 Document all inventories in the dialog file
- 📖 Reference the access method and extraction goals from Step 1
- 🚫 Do not generate specs during exploration
## CONTEXT BOUNDARIES:
- Available context: Target definition and access method from Step 1
- Focus: Exploration and capturing — inventorying everything found
- Limits: No spec generation, no design system document creation
- Dependencies: Step 1 must be complete (target defined, access verified)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### If URL (Browser Access)
#### 1a. Crawl Pages
1. Start at the home page or main entry point
2. Follow navigation links to discover all pages
3. For each page, note: URL, page title, primary purpose, key content sections, interactive elements
4. Check for hidden pages: sitemap.xml, footer links, search results
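The link-collection part of the crawl can be sketched as a small helper. This is an illustrative sketch, not tied to any particular crawler: it assumes you have already gathered raw `href` values (for example from DevTools or a Puppeteer run) and normalizes them into a deduplicated, same-origin route list.

```javascript
// Normalize raw hrefs into a deduplicated list of same-origin routes.
// baseUrl: the target's entry URL; hrefs: raw href strings collected
// from navigation, footer links, sitemap.xml, etc.
function buildRouteList(baseUrl, hrefs) {
  const base = new URL(baseUrl);
  const routes = new Set();
  for (const href of hrefs) {
    let url;
    try {
      url = new URL(href, base); // resolves relative links against the base
    } catch {
      continue; // skip malformed hrefs
    }
    if (url.origin !== base.origin) continue; // drop external links
    url.hash = '';                            // drop #fragments
    routes.add(url.pathname || '/');
  }
  return [...routes].sort();
}
```

The deduplicated output maps directly onto the page inventory table: one row per route, filled in as you visit each page.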
#### 1b. Capture Structure
For each page: Header, Main content, Footer, Sidebar (if present)
#### 1c. Note Interactions
Hover states, dropdown behavior, form validation, modal triggers, loading states, responsive breakpoints
#### 1d. Extract CSS and Design Tokens
From browser DevTools or computed styles: colors, fonts, spacing, border radius, shadows, breakpoints
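Once you have copied computed color values out of DevTools, a small tally helps the most-used colors surface first. A minimal sketch, assuming the values are already in a comparable format (for example, all `rgb(...)` strings from `getComputedStyle`):

```javascript
// Tally computed color values so the most-used colors surface first.
// Returns [color, count] pairs sorted by descending frequency.
function tallyColors(values) {
  const counts = new Map();
  for (const v of values) {
    const key = v.trim().toLowerCase(); // treat case/whitespace variants as one
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```

High-frequency colors are usually brand or text tokens; one-off values are often candidates to flag as inconsistencies.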
### If Source Code Access
#### 1e. Read Code Structure
Identify component directory, list and categorize components, map routes, identify shared styles and theme config
#### 1f. Map Routes
Create a route inventory mapping URLs to pages/views
### If Screenshots
#### 1g. Analyze Visual Patterns
For each screenshot: identify page type, sketch layout grid, list components, note typography hierarchy, extract colors, note spacing rhythm
### Document Results (All Methods)
Compile findings into these inventories:
#### Page Inventory
| # | Page | URL / Route | Type | Key Sections |
|---|------|-------------|------|--------------|
| 1 | Home | / | Landing | Hero, Features, Testimonials, CTA |
| 2 | About | /about | Content | Story, Team, Values |
#### Navigation Structure
Primary nav, secondary nav, footer nav, mobile nav
#### Component Inventory
| Component | Variants | Used On |
|-----------|----------|---------|
#### Color Palette
| Name | Value | Usage |
|------|-------|-------|
#### Typography Scale
| Level | Size | Weight | Line Height | Usage |
|-------|------|--------|-------------|-------|
#### Spacing Patterns
Note the base spacing unit and common values.
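One quick way to infer the base unit is the greatest common divisor of the spacing values you measured. A sketch, assuming the measurements are whole pixel values (round off sub-pixel rendering noise first, or a stray 23px will collapse the result to 1):

```javascript
// Infer the base spacing unit as the GCD of observed pixel values.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

function baseSpacingUnit(values) {
  return values.reduce((acc, v) => gcd(acc, v));
}
```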
### Verify Checklist
- [ ] All pages discovered and inventoried
- [ ] Navigation structure documented
- [ ] Component inventory created
- [ ] Color palette extracted
- [ ] Typography scale documented
- [ ] Spacing patterns noted
- [ ] Interactive patterns observed (hover, modal, form behavior)
### Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Generate Specs"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all inventories are captured and documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All pages discovered and inventoried
- Navigation structure documented
- Component inventory created
- Color palette extracted
- Typography scale documented
- Spacing patterns noted
- Interactive patterns observed
### ❌ SYSTEM FAILURE:
- Beginning spec generation before exploration is complete
- Missing pages or components in inventory
- Not extracting design tokens
- Skipping interaction observation
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

@@ -0,0 +1,146 @@
---
name: 'step-03-generate-specs'
description: 'Create WDS-format page specifications from the observations captured in Step 02'
# File References
nextStepFile: './step-04-extract-design-system.md'
---
# Step 3: Generate Specs
## STEP GOAL:
Create WDS-format page specifications from the observations captured in Step 02. Generate all output in `document_output_language`.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on prioritizing pages, generating page specifications, creating scenario outlines, and cross-referencing with components
- 🚫 FORBIDDEN to begin extracting the design system — that is the next step
- 💬 Approach: Generate specifications collaboratively, starting with foundational pages
- 📋 Every component mentioned in specs must appear in the component inventory from Step 02
## EXECUTION PROTOCOLS:
- 🎯 Complete page specifications and scenario outlines generated in WDS format
- 💾 Save all specifications to the output folder
- 📖 Reference page inventory and component inventory from Step 02
- 🚫 Do not create the design system document yet
## CONTEXT BOUNDARIES:
- Available context: All inventories from Step 02 (pages, components, colors, typography, spacing)
- Focus: Spec generation — page specs, scenario outlines, cross-referencing
- Limits: No design system extraction
- Dependencies: Step 02 must be complete (inventories captured)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Prioritize Pages
From the page inventory (Step 02), decide which pages to spec first:
- Start with the most representative or foundational pages
- Group pages that share layout patterns
- Skip pages that are nearly identical (spec one, note variations)
### 2. Generate Page Specification for Each Page
For each page, create a specification document following the WDS page spec structure:
- Overview (purpose, URL, type)
- Layout Structure (desktop layout, sections top to bottom)
- Component List (component, location, variant, notes)
- Content Strategy (headline pattern, body copy, CTA language, images)
- Responsive Behavior (breakpoint changes)
- Interactions (hover, click, scroll, form validation, modal triggers)
### 3. Create Scenario Outlines from User Flows
If the target has multi-page flows (sign up, checkout, onboarding), document them as scenario outlines with steps, success path, and error states.
### 4. Cross-Reference with Components
Ensure every component mentioned in page specs appears in the component inventory from Step 02. Flag any components that appear in specs but were not captured.
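The cross-reference can be automated as a simple set difference. This sketch assumes you have both lists as arrays of component names, compared case-insensitively:

```javascript
// Return components referenced in page specs that are missing from
// the Step 02 component inventory.
function uncapturedComponents(specComponents, inventory) {
  const known = new Set(inventory.map((c) => c.toLowerCase()));
  return [...new Set(specComponents)]
    .filter((c) => !known.has(c.toLowerCase()));
}
```

Anything this returns is either a capture gap in Step 02 or a naming mismatch to reconcile before moving on.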
### 5. Save Output
Save all specifications to the output folder using consistent naming:
```
output/
specs/
page-home.md
page-about.md
page-products.md
scenarios/
scenario-checkout.md
scenario-onboarding.md
```
### 6. Verify Checklist
- [ ] Pages prioritized and ordered
- [ ] Page specification created for each key page
- [ ] Layout structure documented (desktop and sections)
- [ ] Component list matches component inventory from Step 02
- [ ] Content strategy noted per page
- [ ] Responsive behavior described
- [ ] User flow scenarios documented (if applicable)
- [ ] All output generated in `document_output_language`
- [ ] Specs saved to output folder
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Extract Design System"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all page specifications and scenario outlines are generated and saved will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Pages prioritized and ordered
- Page specification created for each key page
- Layout structure documented
- Component list matches inventory
- Content strategy noted
- Responsive behavior described
- Specs saved to output folder
### ❌ SYSTEM FAILURE:
- Beginning design system extraction before specs are complete
- Missing pages in specifications
- Component list does not match inventory
- Not saving output
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

@@ -0,0 +1,145 @@
---
name: 'step-04-extract-design-system'
description: 'Consolidate all design tokens and component patterns into a structured WDS-compatible design system document'
# File References
activityWorkflowFile: '../workflow-reverse-engineering.md'
---
# Step 4: Extract Design System
## STEP GOAL:
Consolidate all design tokens and component patterns into a structured WDS-compatible design system document.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on extracting design tokens, cataloging components, documenting variants/states, mapping tokens to components, and creating the design system document
- 🚫 FORBIDDEN to modify page specifications — they are final from Step 03
- 💬 Approach: Organize raw values into a structured token system with user input
- 📋 Token-to-component mapping must be explicit
## EXECUTION PROTOCOLS:
- 🎯 Complete design system document in WDS format
- 💾 Save design system document to output folder alongside page specs
- 📖 Reference inventories from Step 02 and page specs from Step 03
- 🚫 Do not modify existing page specifications
## CONTEXT BOUNDARIES:
- Available context: All inventories from Step 02; page specs from Step 03
- Focus: Design system extraction — tokens, components, patterns
- Limits: No page spec modifications
- Dependencies: Steps 02 and 03 must be complete
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Extract Design Tokens
Take the raw values captured in Step 02 and organize them into a structured token system:
#### Colors
Group colors by purpose: brand, text, background, border, feedback
#### Typography
Document font families, scale (h1 through caption) with size, weight, and line-height
#### Spacing
Document base unit and scale, note common patterns (section padding, card padding, element gap)
#### Other Tokens
Border radius, shadows, breakpoints
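The grouped tokens can be recorded in one small structured object before being written into the design system document. Every name and value below is a placeholder, not extracted from any real target:

```javascript
// Example token structure; all values are illustrative placeholders.
const tokens = {
  color: {
    brand:      { primary: '#2563eb', secondary: '#7c3aed' },
    text:       { default: '#111827', muted: '#6b7280' },
    background: { page: '#ffffff', surface: '#f9fafb' },
    feedback:   { error: '#dc2626', success: '#16a34a' },
  },
  typography: {
    h1:   { size: '32px', weight: 700, lineHeight: 1.25 },
    body: { size: '16px', weight: 400, lineHeight: 1.5 },
  },
  spacing: { base: 8, scale: [4, 8, 16, 24, 32, 48] },
  radius:  { sm: '4px', md: '8px', full: '9999px' },
};
```

Grouping by purpose (brand, text, background, feedback) rather than by raw hue keeps the token-to-component mapping in the next instruction straightforward.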
### 2. Catalog Reusable Components
For each component in the inventory (from Step 02), document it as a component specification with variants, sizes, and states.
### 3. Document Component Variants and States
For each component, capture:
- **Variants** — Visual variations (primary/secondary, small/medium/large)
- **States** — Interactive states (default, hover, focus, active, disabled, loading, error, success)
- **Content slots** — What content goes where (icon, label, description, image)
- **Responsive behavior** — How the component adapts at different breakpoints
### 4. Map Token Usage to Components
Show which tokens each component uses, connecting the token system to the component system.
| Component | Colors Used | Typography | Spacing | Border Radius |
|-----------|------------|------------|---------|---------------|
### 5. Output in WDS Design System Format
Compile everything into a single design system document with: Design Tokens, Components, Patterns, and Notes.
Save to the output folder alongside the page specs from Step 03.
### 6. Verify Checklist
- [ ] Color tokens organized by purpose (brand, text, background, border, feedback)
- [ ] Typography scale documented (families, sizes, weights, line heights)
- [ ] Spacing system extracted (base unit + scale)
- [ ] Additional tokens captured (radii, shadows, breakpoints)
- [ ] Each component cataloged with variants and states
- [ ] Token-to-component mapping created
- [ ] Design system document saved in WDS format
- [ ] Output is consistent with page specs from Step 03
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the design system document is complete and saved will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Color tokens organized by purpose
- Typography scale documented
- Spacing system extracted
- Each component cataloged with variants and states
- Token-to-component mapping created
- Design system document saved in WDS format
- Output is consistent with page specs
### ❌ SYSTEM FAILURE:
- Missing token categories (colors, typography, spacing)
- Not cataloging components with variants and states
- No token-to-component mapping
- Design system document not saved
- Inconsistency with page specs
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

@@ -0,0 +1,182 @@
---
name: 'step-01-prepare'
description: 'Gather all materials and set up testing environment before starting validation'
# File References
nextStepFile: './step-02-execute.md'
---
# Step 1: Prepare for Acceptance Testing
## STEP GOAL:
Gather all materials and set up your testing environment before starting validation.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on gathering test materials, setting up environment, preparing test data, and creating testing workspace
- 🚫 FORBIDDEN to begin executing tests — that is the next step
- 💬 Approach: Systematically walk through preparation with user, ensuring nothing is missed
- 📋 All materials must be gathered and environment verified before testing begins
## EXECUTION PROTOCOLS:
- 🎯 All materials gathered, environment set up, test data prepared, workspace created
- 💾 Document preparation status in dialog file
- 📖 Reference test scenario file and design delivery file
- 🚫 Do not execute any tests during preparation
## CONTEXT BOUNDARIES:
- Available context: Test scenario file, design delivery file, scenario specifications, design system specs
- Focus: Preparation — materials, environment, data, workspace, time estimation
- Limits: No test execution
- Dependencies: Test scenario file and design delivery must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Gather Materials
#### Test Scenario
- Load test scenario file: `test-scenarios/TS-XXX.yaml`
- Review all test cases
- Understand success criteria
- Note any special setup needed
#### Design Delivery
- Load Design Delivery file: `deliveries/DD-XXX.yaml`
- Review user value and success criteria
- Review acceptance criteria
- Understand what "done" looks like
#### Scenario Specifications
- Load all scenario specs from `C-UX-Scenarios/`
- Review each scenario specification
- Note design details
- Understand expected behavior
#### Design System Specs
- Load design system specs from `D-Design-System/`
- Review component specifications
- Review design tokens
- Note exact colors, sizes, spacing
### 2. Set Up Environment
#### Access the Build
- Staging URL, credentials, platform
- Install build if needed (TestFlight, APK, or web staging)
#### Prepare Test Devices
- Primary device: charged, WiFi, screen recording enabled, screenshot tools ready
- Secondary device (if needed): different platform, screen size, OS version
#### Set Up Tools
- Screen recording (QuickTime, built-in, OBS Studio)
- Screenshot tools with annotation
- Markdown editor and note-taking
- Accessibility tools (VoiceOver/TalkBack, contrast checker, zoom)
### 3. Prepare Test Data
Create test accounts and prepare test data:
- Valid and invalid emails
- Strong and weak passwords
- Special characters
- Edge case data (long names, etc.)
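The prepared test data can live in one small fixture file. Everything below is illustrative; swap in values that match the product's actual validation rules:

```javascript
// Illustrative test-data fixture; adjust to the product's real rules.
const testData = {
  emails: {
    valid:   ['user@example.com', 'first.last+tag@example.co.uk'],
    invalid: ['plainaddress', 'user@', '@example.com', 'user @example.com'],
  },
  passwords: {
    strong: ['Tr0ub4dor&3xample!'],
    weak:   ['password', '12345678'],
  },
  edgeCases: {
    longName: 'A'.repeat(255),           // max-length boundary
    specialChars: `O'Brien-Smith <script>`, // quoting and markup handling
  },
};
```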
### 4. Create Testing Workspace
Create file structure:
```
testing/DD-XXX/
├── screenshots/
├── screen-recordings/
├── notes.md
└── issues-found.md
```
### 5. Review Test Plan
Understand what you are testing:
- Happy Path Tests: count, flows, expected results
- Error State Tests: count, scenarios, error messages
- Edge Case Tests: count, unusual scenarios, expected behavior
- Design System Validation: components to check, specifications
- Accessibility Tests: screen reader, contrast, touch targets
### 6. Time Estimate
Sum the estimated time for each test category, then add a 20% buffer.
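The estimate is just the category sum times 1.2. For example, with hypothetical category estimates of 40, 20, 15, 15, and 30 minutes, the blocked time would be 144 minutes:

```javascript
// Total testing time: sum of category estimates plus a 20% buffer,
// rounded up to whole minutes.
function estimateMinutes(categoryMinutes, buffer = 0.2) {
  const total = categoryMinutes.reduce((a, b) => a + b, 0);
  return Math.ceil(total * (1 + buffer));
}
```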
### 7. Verify Checklist
- [ ] Test scenario loaded and reviewed
- [ ] Design Delivery loaded and reviewed
- [ ] All scenario specs loaded
- [ ] Design system specs loaded
- [ ] Build accessible and working
- [ ] Test devices ready
- [ ] Tools set up (recording, screenshots, notes)
- [ ] Test data prepared
- [ ] Workspace created
- [ ] Time blocked on calendar
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Execute"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all materials are gathered, environment is set up, and workspace is ready will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All materials gathered
- Environment set up and accessible
- Test devices ready
- Tools configured
- Test data prepared
- Workspace created
- Test plan reviewed
- Time estimated
### ❌ SYSTEM FAILURE:
- Starting testing without materials
- Cannot access staging environment
- Test devices not ready
- No screen recording capability
- No test data prepared
- No time estimate
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

View File

@@ -0,0 +1,175 @@
---
name: 'step-02-execute'
description: 'Execute all test scenarios defined in the test scenario file and document results'
# File References
nextStepFile: './step-03-document-issues.md'
---
# Step 2: Run Test Scenarios
## STEP GOAL:
Execute all test scenarios defined in the test scenario file and document results.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on executing tests in order: happy path, error states, edge cases, design system validation, and accessibility
- 🚫 FORBIDDEN to skip test categories or rush through tests
- 💬 Approach: Execute each test category methodically, documenting results as you go
- 📋 Happy path must work before moving to error states and edge cases
## EXECUTION PROTOCOLS:
- 🎯 All test categories executed with results documented
- 💾 Document results using templates from data/test-result-templates.md
- 📖 Reference test scenario file for each test case
- 🚫 Do not skip any test category
## CONTEXT BOUNDARIES:
- Available context: All prepared materials from Step 1; test scenario file
- Focus: Test execution and result documentation
- Limits: No issue creation yet — just document pass/fail
- Dependencies: Step 1 must be complete (preparation done)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Happy Path Tests
For each test in TS-XXX.yaml `happy_path` section:
1. Start screen recording
2. Perform action exactly as written
3. Observe result, compare to expected
4. Compare to design reference
5. Mark PASS or FAIL
6. Take screenshot if FAIL (naming: `HP-XXX-step-X-FAIL.png`)
7. Document using template
### 2. Error State Tests
For each test in TS-XXX.yaml `error_states` section:
1. Set up error condition using test data
2. Trigger the error
3. Verify error handling (message, styling, recovery)
4. Check against design spec
5. Document results using template
### 3. Edge Case Tests
For each test in TS-XXX.yaml `edge_cases` section:
1. Set up unusual scenario
2. Perform edge case action
3. Verify graceful handling (no crash, smooth UX)
4. Document results using template
### 4. Design System Validation
For each component in TS-XXX.yaml `design_system_checks` section:
1. Locate all component instances
2. Measure dimensions (height, width, padding)
3. Check colors against design tokens
4. Check typography (size, weight, line height)
5. Check spacing
6. Check all states (default, hover, active, disabled, focus)
7. Document results using template
### 5. Accessibility Tests
#### Screen Reader Testing
- Enable VoiceOver (iOS) or TalkBack (Android)
- Navigate through flow using only screen reader
- Check button labels, form field labels, error announcements
#### Color Contrast Testing
- Use contrast checker tool
- Body text: 4.5:1 minimum (WCAG AA)
- Large text: 3:1 minimum
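When a contrast checker tool is not at hand, the ratio can be computed directly from the two colors using the WCAG 2.x formula: relative luminance per color, then `(L_lighter + 0.05) / (L_darker + 0.05)`. A sketch taking `[r, g, b]` channels in the 0-255 range:

```javascript
// WCAG 2.x relative luminance for an sRGB color given as [r, g, b].
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between foreground and background, from 1 to 21.
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Compare the result against the thresholds above: 4.5:1 for body text, 3:1 for large text.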
#### Touch Target Testing
- Measure all interactive elements
- Minimum: 44x44px
- Minimum 8px spacing between targets
### 6. Compile Overall Summary
After all tests are complete, create an overall test summary:
- Overall result (PASS/FAIL)
- Test coverage percentages
- Issues by severity
- Issues by category
- Next steps
### 7. Verify Checklist
- [ ] All happy path tests executed
- [ ] All error state tests executed
- [ ] All edge case tests executed
- [ ] Design system validation complete
- [ ] Accessibility tests complete
- [ ] All results documented
- [ ] Screenshots captured for issues
- [ ] Screen recordings saved
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Document Issues"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all test categories are executed and results documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All happy path tests executed
- All error state tests executed
- All edge case tests executed
- Design system validation complete
- Accessibility tests complete
- All results documented
- Screenshots captured for issues
- Screen recordings saved
### ❌ SYSTEM FAILURE:
- Skipping test categories
- Not documenting results
- No screenshots for issues
- Not checking design references
- Rushing through tests
- Not measuring design system compliance
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-03-document-issues'
description: 'Document all problems found during testing as issue tickets'
# File References
nextStepFile: './step-04-report.md'
---
# Step 3: Create Issues
## STEP GOAL:
Document all problems found during testing as issue tickets that can be fixed.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on creating issue files with proper severity, description, screenshots, and design references
- 🚫 FORBIDDEN to create vague or non-actionable issues
- 💬 Approach: Help user document each issue with specific details, design references, and actionable recommendations
- 📋 Every issue must include severity, steps to reproduce, expected vs actual, screenshot, and design reference
## EXECUTION PROTOCOLS:
- 🎯 All issues documented as numbered issue files with proper severity
- 💾 Create issue files in `issues/ISS-XXX-description.md` and issues summary
- 📖 Reference test results from Step 2 and issue templates
- 🚫 Do not create vague descriptions
## CONTEXT BOUNDARIES:
- Available context: Test results from Step 2; screenshots; design specifications
- Focus: Issue creation — specific, actionable, with design references
- Limits: No fixing issues — just documenting them
- Dependencies: Step 2 must be complete (all tests executed)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Issue Creation Process
For each issue found, create issue file: `issues/ISS-XXX-description.md`
Numbering: Start at ISS-001, increment for each issue, use leading zeros.
Reference: [data/issue-templates.md](data/issue-templates.md) for complete issue template
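
The numbering rule ("start at ISS-001, increment, leading zeros") is mechanical enough to sketch as a tiny helper — illustrative only, since the workflow creates the files by hand:

```javascript
// Next issue ID given the IDs already used, e.g. ['ISS-001', 'ISS-002'] -> 'ISS-003'
function nextIssueId(existing) {
  const max = existing.reduce(
    (m, id) => Math.max(m, parseInt(id.slice(4), 10) || 0),
    0
  );
  return `ISS-${String(max + 1).padStart(3, '0')}`;
}

console.log(nextIssueId([]));                     // → "ISS-001"
console.log(nextIssueId(['ISS-001', 'ISS-002'])); // → "ISS-003"
```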
### 2. Assign Severity Levels
| Severity | Description | Fix Timeline |
|----------|-------------|--------------|
| **Critical** | App crashes, data loss, security | Immediate |
| **High** | Major functionality broken | This release |
| **Medium** | Feature wrong, confusing UX | This release |
| **Low** | Minor polish, nice to have | Future release |
### 3. Issue Writing Best Practices
**Be specific:**
- Not "Button looks wrong"
- Instead "Primary button background #3B82F6, should be #2563EB per tokens/colors.json"
**Be actionable:**
- Not "Fix the transition"
- Instead "Add 300ms fade transition per specifications.md line 45"
**Be visual:**
- Include screenshots
- Annotate key areas
- Show expected vs actual
### 4. Create Issues Summary
After creating all issues, create summary with total count and breakdown by severity.
### 5. Verify Checklist
- [ ] All issues documented with correct template
- [ ] Severity levels assigned appropriately
- [ ] Design references included
- [ ] Screenshots attached
- [ ] Recommendations provided
- [ ] Issues summary created
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Report"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all issues are documented with proper severity and design references will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All issues documented with correct template
- Severity levels assigned appropriately
- Design references included
- Screenshots attached
- Recommendations provided
- Issues summary created
### ❌ SYSTEM FAILURE:
- Vague descriptions
- Missing severity
- No screenshots
- No design reference
- No steps to reproduce
- Not actionable
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-04-report'
description: 'Create a comprehensive test report summarizing all testing results'
# File References
nextStepFile: './step-05-iterate.md'
---
# Step 4: Create Test Report
## STEP GOAL:
Create a comprehensive test report summarizing all testing results.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on creating the test report with summary, coverage, issues, sign-off recommendation, and attachments
- 🚫 FORBIDDEN to modify test results or issues — only compile them into the report
- 💬 Approach: Compile all results into a clear, actionable report with user
- 📋 Report must include clear PASS/FAIL determination with criteria
## EXECUTION PROTOCOLS:
- 🎯 Complete test report created with all sections
- 💾 Save report to `testing/DD-XXX/TR-XXX-[flow-name].md`
- 📖 Reference test results from Step 2 and issues from Step 3
- 🚫 Do not modify test results or issues
## CONTEXT BOUNDARIES:
- Available context: Test results from Step 2; issues from Step 3; screenshots and recordings
- Focus: Report compilation — summary, coverage, issues, recommendation
- Limits: No test result modification
- Dependencies: Step 3 must be complete (all issues documented)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Create Test Report File
File: `testing/DD-XXX/TR-XXX-[flow-name].md`
Reference: [data/issue-templates.md](data/issue-templates.md) for complete test report template
### 2. Report Sections
1. **Summary** - Overall result, total issues, blocking status
2. **Test Coverage** - Pass/fail by category
3. **Issues Found** - Table of all issues
4. **Sign-Off Recommendation** - Ready or needs fixes
5. **Next Steps** - What happens next
6. **Attachments** - Recordings, screenshots, issue files
### 3. Overall Result Determination
**PASS if:**
- All Critical issues: 0
- All High issues: Fixed or accepted risk
- Happy path: 100% pass
- Design system: > 95% compliant
**FAIL if:**
- Any Critical issues unfixed
- Any High issues blocking
- Happy path failures
- Design system < 95% compliant
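
The determination above can be sketched as a predicate. Field names are illustrative, and "High issues fixed or accepted risk" is modeled here simply as "no blocking High issues":

```javascript
// PASS/FAIL per the criteria above
function overallResult({ critical, highBlocking, happyPathPassPct, designCompliancePct }) {
  const pass =
    critical === 0 &&
    highBlocking === 0 &&
    happyPathPassPct === 100 &&
    designCompliancePct > 95;
  return pass ? 'PASS' : 'FAIL';
}

console.log(overallResult({ critical: 0, highBlocking: 0, happyPathPassPct: 100, designCompliancePct: 97 })); // → "PASS"
console.log(overallResult({ critical: 1, highBlocking: 0, happyPathPassPct: 100, designCompliancePct: 97 })); // → "FAIL"
```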
### 4. Attach Supporting Files
Organize testing folder with report, screenshots, recordings, and test data.
### 5. Verify Checklist
- [ ] Test report created with all sections
- [ ] Test coverage complete
- [ ] Issues list accurate
- [ ] Clear recommendation
- [ ] All attachments organized
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 5: Iterate"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the test report is complete with all sections and clear recommendation will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Test report created with all sections
- Test coverage complete
- Issues list accurate
- Clear recommendation
- All attachments organized
### ❌ SYSTEM FAILURE:
- Missing test categories
- Incorrect issue counts
- Unclear recommendation
- Missing attachments
- Incomplete coverage data
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-05-iterate'
description: 'Either iterate with development team to fix issues, or approve the feature for production'
# File References
activityWorkflowFile: '../workflow-acceptance-testing.md'
---
# Step 5: Iterate or Approve
## STEP GOAL:
Either iterate with development team to fix issues, or approve the feature for production.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on two paths: iterate (fix issues and retest) or approve (sign off for production)
- 🚫 FORBIDDEN to approve with unfixed high-severity issues or create endless iteration loops (max 3 iterations)
- 💬 Approach: Guide user through the appropriate path based on test results
- 📋 Maximum 3 iterations before escalation
## EXECUTION PROTOCOLS:
- 🎯 Feature either approved with sign-off document or issues fixed and retested
- 💾 Create sign-off document if approved; create retest report if iterating
- 📖 Reference test report from Step 4 and issues from Step 3
- 🚫 Do not approve with unfixed high-severity issues
## CONTEXT BOUNDARIES:
- Available context: Test report from Step 4; issues from Step 3; all test results
- Focus: Decision — iterate or approve
- Limits: Maximum 3 iterations before escalation
- Dependencies: Step 4 must be complete (test report created)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### Two Paths
#### Path A: Issues Found - Iterate
**If test result was FAIL:**
1. **Wait for Fixes** - Be available for questions, clarify issues, review early feedback
2. **Receive Ready for Retest** notification
3. **Retest** - Focus on:
- Fixed issues: Verify actually fixed
- Regression testing: Fixes did not break anything
- Related areas: Check affected parts
- Use abbreviated testing (do not rerun all tests)
4. **Update Issues** - Mark fixed issues as Closed with version, date, and verifier
5. **Create Retest Report** - Reference data/issue-templates.md for template
6. **Decision Point**:
- If all high-severity fixed: proceed to Path B (Approve)
- If issues remain: Repeat iteration (max 3 total)
#### Path B: No Issues - Approve
**If test result was PASS:**
1. **Create Sign-Off Document** - Reference data/issue-templates.md for template
2. **Notify Development Team** - Formal approval notification
3. **Update Status** - Set delivery status to 'approved' with timestamp and approver
### Iteration Limits
**Maximum iterations:** 3
If after 3 iterations issues persist:
1. Escalate to leads
2. Review requirements
3. Consider scope reduction
### Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load, read entire file, then execute {activityWorkflowFile}
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the feature is approved with sign-off document or escalated will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All high-severity issues fixed
- Retesting complete
- Sign-off document created
- Development team notified of approval
- Status updated to approved
### ❌ SYSTEM FAILURE:
- Approving with unfixed high-severity issues
- No sign-off document
- Status not updated
- Development team not notified
- Endless iteration loop (more than 3)
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

# Scenario [Number]: [Scenario Name] - Prototype Roadmap
**Scenario**: [Scenario Name]
**Pages**: [First] through [Last]
**Device Compatibility**: [Type] ([Width range])
**Last Updated**: [Date]
---
## 🎯 Scenario Overview
**User Journey**: [Brief description of complete user flow]
**Pages in this Scenario**:
1. [Page 1] - [Description]
2. [Page 2] - [Description]
3. [Page 3] - [Description]
...
---
## 📱 Device Compatibility
**Type**: [Mobile-Only | Mobile + Tablet | Fully Responsive | Desktop-Only]
**Reasoning**:
[Why this device compatibility was chosen for this scenario]
**Test Viewports**:
- [Device 1] ([width]px × [height]px) - [Purpose]
- [Device 2] ([width]px × [height]px) - [Purpose]
- [Device 3] ([width]px × [height]px) - [Purpose]
**Optimization Strategy**:
- ✅ [Optimization 1]
- ✅ [Optimization 2]
- ✅ [Optimization 3]
- ❌ [Not included 1]
- ❌ [Not included 2]
**Tailwind Approach**:
```html
<!-- [Brief description of Tailwind strategy] -->
```
---
## 📁 Folder Structure
**HTML Files** (root level - double-click to open):
```
[Page-1].html
[Page-2].html
[Page-3].html
...
```
**Supporting Folders**:
- `shared/` - Shared code (ONE COPY for all pages)
- `components/` - Reusable UI components (ONE COPY)
- `pages/` - Page-specific scripts (only for complex pages)
- `data/` - Demo data (auto-loads on first use)
- `stories/` - Section development documentation
- `work/` - Planning files (work.yaml for each page)
---
## 🚀 Quick Start
### For Testing
1. **Open** `[First-Page].html` (double-click)
2. **Demo data prompt** → Click YES
3. **Navigate** through the flow
4. **Data persists** across pages (sessionStorage)
### For Stakeholders
1. **Unzip** the Prototype folder
2. **Open** `[First-Page].html`
3. **Test** complete user journey
4. **Share feedback**
### For Developers
1. **Review** `work/` folder for specifications
2. **Check** `stories/` folder for implementation details
3. **Examine** `shared/prototype-api.js` for data operations
4. **Extract** HTML/Tailwind structure
5. **Migrate** to production (see TODOs in code)
---
## 🎨 Shared Resources (No Duplication!)
### `shared/prototype-api.js`
**Used by**: ALL prototypes
**Purpose**: API abstraction layer (simulates backend with sessionStorage)
**Key methods**:
```javascript
PrototypeAPI.getUser()
PrototypeAPI.createUserProfile(userData)
PrototypeAPI.createFamily(familyData)
PrototypeAPI.addDog(dogData)
// ... see file for complete API
```
**Console commands** (for debugging):
```javascript
PrototypeAPI.getDebugInfo() // See current state
PrototypeAPI.clearAllData() // Reset everything
```
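
As a rough sketch of how such a sessionStorage-backed API layer works (not the actual file contents — see `shared/prototype-api.js` for the real API; the in-memory fallback is only so the sketch also runs outside a browser):

```javascript
// Minimal sketch of a sessionStorage-backed API layer
const storage = typeof sessionStorage !== 'undefined'
  ? sessionStorage
  : (() => {
      const m = new Map();
      return { getItem: (k) => m.get(k) ?? null, setItem: (k, v) => m.set(k, v) };
    })();

const PrototypeAPISketch = {
  _read(key) { return JSON.parse(storage.getItem(key) || 'null'); },
  _write(key, value) { storage.setItem(key, JSON.stringify(value)); },

  getUser() { return this._read('user'); },
  createUserProfile(userData) { this._write('user', userData); return userData; },
  addDog(dogData) {
    const dogs = this._read('dogs') || [];
    dogs.push(dogData);
    this._write('dogs', dogs);
    return dogData;
  },
};

PrototypeAPISketch.createUserProfile({ name: 'Demo User' });
console.log(PrototypeAPISketch.getUser().name); // → "Demo User"
```

Everything is serialized to JSON, which is why data survives page navigation within the session but resets when the tab closes.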
---
### `shared/init.js`
**Used by**: ALL prototypes
**Purpose**: Auto-initialization (loads demo data, sets up page)
**What it does**:
- Checks if demo data exists
- Loads from `data/demo-data.json` if empty
- Calls `window.initPage()` if defined
- Logs current state to console
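
The logic above can be sketched roughly as follows. This is illustrative, with the storage object injected so the check is easy to exercise; the real `init.js` uses `sessionStorage` and fetches `data/demo-data.json`:

```javascript
// True when the store has no user yet, so demo data should be loaded
function needsDemoData(store) {
  return store.getItem('user') === null;
}

// Sketch of the init flow: load demo data if empty, then run the page hook
async function initPageSketch(store, loadDemoData) {
  if (needsDemoData(store)) {
    const demo = await loadDemoData(); // e.g. fetch('data/demo-data.json')
    store.setItem('user', JSON.stringify(demo.user));
  }
  if (typeof window !== 'undefined' && typeof window.initPage === 'function') {
    window.initPage(); // page-specific setup, only if the page defined one
  }
}
```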
---
### `shared/utils.js`
**Used by**: ALL prototypes
**Purpose**: Helper functions (date formatting, validation, etc.)
---
## 🧩 Components (Reusable - ONE COPY)
### `components/image-crop.js`
**Used by**: [Pages that use image upload]
**Purpose**: Image upload with circular crop
**Usage**:
```javascript
ImageCrop.cropImage(file, { aspectRatio: 1 });
```
---
### `components/toast.js`
**Used by**: [Pages with notifications]
**Purpose**: Success/error toast notifications
**Usage**:
```javascript
showToast('Success message!', 'success');
showToast('Error message', 'error');
```
---
### `components/modal.js`
**Used by**: [Pages with modals]
**Purpose**: Generic modal overlay
---
### `components/form-validation.js`
**Used by**: [Pages with forms]
**Purpose**: Real-time form validation
---
## 📊 Demo Data
### `data/demo-data.json`
**Purpose**: Complete demo dataset for scenario
**Contents**:
- User profile
- Family data
- [Other data entities]
**Edit this file** to change demo data (JSON format, designer-friendly)
---
### `data/[additional-data].json`
**Purpose**: [Description]
---
## 📋 Prototype Status
| Page | Status | Sections | Last Updated | Notes |
|------|--------|----------|--------------|-------|
| [Page 1] | ✅ Complete | 3/3 | [Date] | - |
| [Page 2] | ✅ Complete | 4/4 | [Date] | - |
| [Page 3] | 🚧 In Progress | 2/5 | [Date] | Building form fields |
| [Page 4] | ⏸️ Not Started | 0/6 | - | Planned |
**Status Legend**:
- ✅ Complete - All sections done, tested, approved
- 🚧 In Progress - Currently building section-by-section
- ⏸️ Not Started - Planned, not yet started
- 🔴 Blocked - Waiting on dependency
---
## 🔄 Development Workflow
### 1. Planning Phase
- Create work file: `work/[Page]-Work.yaml`
- Define sections (4-8 per page)
- Identify Object IDs
- List demo data needs
- Get approval
### 2. Implementation Phase
- Build section-by-section
- Create story files just-in-time
- Test after each section
- Get approval before next section
- File lives in root from start (no temp folder)
### 3. Finalization Phase
- Complete integration test
- Update status to Complete
- Document any changes
- Update this roadmap
---
## 🧪 Testing Requirements
### Functional Testing (All Pages)
- [ ] All form fields work
- [ ] Validation shows errors correctly
- [ ] Submit buttons work with loading states
- [ ] Success/error feedback displays
- [ ] Navigation works (back, next, cancel)
- [ ] Data persists across pages
### Device Testing
- [ ] [Primary viewport] ([width]px)
- [ ] [Secondary viewport] ([width]px)
- [ ] [Tertiary viewport] ([width]px)
- [ ] Portrait orientation
- [ ] Touch interactions work
- [ ] No horizontal scroll
### Browser Testing
- [ ] Chrome (primary)
- [ ] Safari (iOS/Mac)
- [ ] Firefox
- [ ] Edge
---
## 🎓 Tailwind Reference
### Project Colors
```javascript
// Tailwind config (in each HTML file)
'[project-name]': {
50: '#eff6ff',
500: '#2563eb',
600: '#1d4ed8',
700: '#1e40af',
}
```
**Usage**: `bg-[project-name]-600`, `text-[project-name]-500`, etc.
### Common Patterns
**Form Input**:
```html
<input class="w-full px-3 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-[project-name]-500">
```
**Primary Button**:
```html
<button class="w-full py-3 bg-[project-name]-600 text-white rounded-lg font-semibold hover:bg-[project-name]-700 transition-colors">
```
**Toast Notification**:
```html
<div class="fixed bottom-6 left-1/2 -translate-x-1/2 bg-gray-900 text-white px-6 py-3 rounded-lg shadow-lg">
```
---
## 🐛 Troubleshooting
### Issue: Demo data not loading
**Solution**: Check `data/demo-data.json` exists, check console for errors
### Issue: Tailwind not working
**Solution**: Check `<script src="https://cdn.tailwindcss.com"></script>` in `<head>`
### Issue: Navigation not working
**Solution**: Check relative paths (should be `[Page].html` from root)
### Issue: Shared code not loading
**Solution**: Check paths are `shared/[file].js`, `components/[file].js`
### Issue: Form not submitting
**Solution**: Check `event.preventDefault()` in `handleSubmit(event)`
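For reference, the expected pattern looks like this (a minimal sketch; handler and wiring names are illustrative):

```javascript
// Submit handler: without preventDefault() the browser reloads the page,
// wiping sessionStorage state mid-flow
function handleSubmit(event) {
  event.preventDefault();
  // ...validate fields, call PrototypeAPI, show a toast...
}
// Wired up in the HTML as: <form onsubmit="handleSubmit(event)">
```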
---
## 📚 Documentation
**Work Files** (`work/` folder):
- High-level plans for each page
- Section breakdowns
- Object ID maps
- Acceptance criteria
**Story Files** (`stories/` folder):
- Detailed section implementation guides
- Created just-in-time during development
- Document what was actually built
- Include changes from original plan
---
## 🚀 Production Migration
### Steps to Production
1. **Replace** `PrototypeAPI` calls with real backend
2. **Migrate** sessionStorage to database
3. **Add** authentication layer
4. **Implement** proper error handling
5. **Add** loading states for real network delays
6. **Setup** Tailwind build process (vs CDN)
7. **Optimize** images and assets
8. **Test** with real data
### Migration Helpers
- Search for `TODO:` comments in code
- Check `PrototypeAPI` methods for Supabase equivalents
- Review work files for production requirements
---
## 📧 Support & Questions
**For design questions**: Review story files in `stories/` folder
**For functionality questions**: Review work files in `work/` folder
**For implementation details**: Check inline comments in HTML files
**For API questions**: Review `shared/prototype-api.js` documentation
---
## 📊 Scenario Statistics
**Total Pages**: [N]
**Completed**: [N]
**In Progress**: [N]
**Total Sections**: [N]
**Object IDs**: [N]
**Shared Components**: [N]
**Demo Data Files**: [N]
**Estimated Test Time**: [X] minutes (complete flow)
**Estimated Build Time**: [X] hours (all pages)
---
## 📝 Change Log
### [Date]
- [Change description]
- [Page affected]
### [Date]
- [Change description]
- [Page affected]
---
**Last Updated**: [Date]
**Version**: 1.0
**Status**: [In Development | Testing | Complete]

# Dev Mode - Usage Guide
**Purpose**: Easy feedback on prototypes by copying Object IDs to clipboard
---
## 🎯 What is Dev Mode?
Dev Mode is a built-in feature in all WDS prototypes that allows testers, stakeholders, and designers to easily reference specific UI elements when providing feedback.
Instead of saying *"The button in the top right"*, you can say *"Fix `customer-sign-bankid`"* - precise and unambiguous!
---
## 🚀 How to Use
### Step 1: Activate Dev Mode
**Two ways**:
1. Click the **Dev Mode button** (top-right corner)
2. Press **Ctrl+E** on your keyboard
The button will turn blue and say **"Dev Mode: ON"**
---
### Step 2: Find the Element
- **Hover** over any element you want to reference
- You'll see a **gray outline** appear
- A **tooltip** shows the Object ID
**Prototype still works normally!** You can click buttons, fill forms, etc.
---
### Step 3: Copy the Object ID
- **Hold the Shift key** (outline turns **green**)
- **Click the element** while holding Shift
- **Object ID is copied!** ✓
You'll see a green success message: **"✓ Copied: [object-id]"**
**Important**: Shift key is **disabled when typing in form fields** (input, textarea, etc.) so you can type capital letters and special characters normally!
---
### Step 4: Paste in Feedback
Now paste the Object ID in your feedback:
**Good feedback**:
```
❌ Issue with `customer-sign-bankid`:
The button is disabled even after I check the consent checkbox.
💡 Suggestion for `sidebar-video`:
Make the video auto-play on mobile.
```
**Developer knows EXACTLY** which element you're talking about!
---
## 🎨 Visual Guide
| State | Appearance | Action |
|-------|------------|--------|
| **Dev Mode OFF** | Normal prototype | Click button or press Ctrl+E |
| **Dev Mode ON (hovering)** | Gray outline | Shows Object ID in tooltip |
| **Shift held (hovering)** | Green outline | Click to copy |
| **After copying** | Green flash | Object ID in clipboard |
---
## ⌨️ Keyboard Shortcuts
- **Ctrl+E**: Toggle Dev Mode on/off
- **Shift + Click**: Copy Object ID (when dev mode is on)
---
## 💡 Tips
1. **Activate once**, then navigate through prototype normally
2. **Hold Shift only when copying** - prototype works without it
3. **Type in fields normally** - Shift is disabled when focused on input/textarea
4. **Deactivate when done** testing (Ctrl+E again)
5. **Object IDs are permanent** - always refer to the same element
---
## 📋 Example Workflow
### Tester's Perspective:
1. Open prototype
2. Press **Ctrl+E** (Dev Mode on)
3. Test the prototype normally
4. Find a bug - hover over problem element
5. Hold **Shift**, click element
6. Paste Object ID into bug report: "`customer-facility-startdate-group` shows wrong default date"
7. Continue testing
### Designer's Perspective:
Receives feedback:
```
Bug: `customer-facility-startdate-group` shows wrong default date
```
- Open the prototype's HTML file
- Press **Ctrl+F**, search for `customer-facility-startdate-group`
- Find the exact element in the code
- Fix the date calculation
- Done! ✅
---
## 🔧 For Developers
When you receive Object IDs in feedback:
1. Open the HTML file
2. Search for the Object ID (Ctrl+F)
3. Element is either:
- `id="object-id"` attribute
- `data-object-id="object-id"` attribute
4. Fix the issue in that specific element
---
## ❓ FAQs
**Q: Does Dev Mode affect the prototype?**
A: No! The prototype works normally. You need to hold Shift to copy IDs.
**Q: Can I use this on mobile?**
A: Yes! The button appears on mobile too. Use a Bluetooth keyboard or on-screen Shift key.
**Q: Can I type in form fields while Dev Mode is on?**
A: Yes! Shift key is automatically disabled when you're typing in input fields or textareas, so you can type capital letters and special characters normally.
**Q: What if an element doesn't have an ID?**
A: Dev Mode walks up the DOM tree to find the nearest parent element with an ID.
**Q: Can I copy multiple IDs?**
A: Yes! Hold Shift, click first element, release Shift, hold again, click second element, etc.
**Q: Is this only for bugs?**
A: No! Use it for any feedback - bugs, suggestions, questions, clarifications.
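
The "walks up the tree" behavior mentioned in the FAQ can be sketched as a small function — illustrative only; see `dev-mode.js` for the real implementation:

```javascript
// Resolve the Object ID for a clicked element: its own id or data-object-id,
// otherwise the nearest ancestor that has one; null if none found
function resolveObjectId(el) {
  let node = el;
  while (node) {
    const id = node.id || (node.dataset && node.dataset.objectId);
    if (id) return id;
    node = node.parentElement;
  }
  return null;
}
```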
---
## 🎓 Best Practices
### For Testers:
- ✅ **DO**: Include Object ID in every piece of feedback
- ✅ **DO**: Test prototype normally, copy IDs when needed
- ✅ **DO**: Combine Object ID with description
- ❌ **DON'T**: Leave Dev Mode on during normal use
### For Designers:
- ✅ **DO**: Ensure all interactive elements have Object IDs
- ✅ **DO**: Use descriptive, consistent naming
- ✅ **DO**: Include Dev Mode in all prototypes
- ❌ **DON'T**: Change Object IDs after sharing prototype
---
## 🚨 Troubleshooting
**Problem**: Dev Mode button not showing
**Solution**: Check that `dev-mode.js` and `dev-mode.css` are loaded
**Problem**: Clicking doesn't copy
**Solution**: Make sure you're holding **Shift** while clicking
**Problem**: Tooltip not showing
**Solution**: Element might not have an ID - check console logs
**Problem**: Can't turn off Dev Mode
**Solution**: Press Ctrl+E or refresh the page
---
**Dev Mode makes feedback precise, fast, and frustration-free!** 🎯

/* ============================================================================
PROTOTYPE DEV MODE STYLES
Styles for developer/feedback mode that allows copying Object IDs
Usage: Include these styles in your prototype HTML or CSS file
============================================================================ */
/* Dev Mode Toggle Button */
.dev-mode-toggle {
position: fixed;
top: 20px;
right: 20px;
z-index: 9999;
background: #fff;
border: 2px solid #e5e7eb;
border-radius: 8px;
padding: 10px 16px;
display: flex;
align-items: center;
gap: 8px;
cursor: pointer;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
transition: all 0.2s;
font-size: 14px;
font-weight: 500;
color: #6b7280;
}
.dev-mode-toggle:hover {
background: #f9fafb;
border-color: #d1d5db;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
}
.dev-mode-toggle.active {
background: #0066CC;
border-color: #0066CC;
color: #fff;
box-shadow: 0 4px 12px rgba(0, 102, 204, 0.3);
}
.dev-mode-toggle svg {
flex-shrink: 0;
}
/* Dev Mode Active State */
body.dev-mode-active {
cursor: help !important; /* Show help cursor to indicate special mode */
}
/* Subtle element highlighting on hover (not Shift held) */
body.dev-mode-active [id]:hover {
outline: 2px solid #6b7280 !important;
outline-offset: 2px !important;
box-shadow: 0 0 0 2px rgba(107, 114, 128, 0.2) !important;
}
/* Active highlighting when Shift is held (ready to copy) */
body.dev-mode-active.shift-held {
cursor: copy !important;
}
body.dev-mode-active.shift-held [id]:hover {
outline: 3px solid #10B981 !important;
outline-offset: 3px !important;
box-shadow: 0 0 0 5px rgba(16, 185, 129, 0.4) !important;
}
/* Dev Mode Tooltip */
.dev-mode-tooltip {
position: fixed;
background: #1F2937;
color: #fff;
padding: 8px 12px;
border-radius: 6px;
font-size: 13px;
font-weight: 600;
font-family: 'Courier New', monospace;
z-index: 10000;
pointer-events: none;
white-space: nowrap;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.2);
transition: background 0.2s;
}
.dev-mode-tooltip::before {
content: '';
position: absolute;
top: -4px;
left: 8px;
width: 8px;
height: 8px;
background: inherit;
transform: rotate(45deg);
}
/* Disable only certain interactions when Shift is held in dev mode */
body.dev-mode-active.shift-held button:not(#dev-mode-toggle),
body.dev-mode-active.shift-held input,
body.dev-mode-active.shift-held select,
body.dev-mode-active.shift-held textarea,
body.dev-mode-active.shift-held a {
pointer-events: none !important;
}
/* But allow the toggle button to work */
body.dev-mode-active #dev-mode-toggle,
body.dev-mode-active #dev-mode-toggle * {
pointer-events: auto !important;
cursor: pointer !important;
}
/* Feedback overlay (created dynamically) */
@keyframes fadeInOut {
0% {
opacity: 0;
transform: translate(-50%, -50%) scale(0.9);
}
20% {
opacity: 1;
transform: translate(-50%, -50%) scale(1);
}
80% {
opacity: 1;
transform: translate(-50%, -50%) scale(1);
}
100% {
opacity: 0;
transform: translate(-50%, -50%) scale(0.9);
}
}
/* Responsive: Adjust toggle button on mobile */
@media (max-width: 768px) {
.dev-mode-toggle {
top: 10px;
right: 10px;
padding: 8px 12px;
font-size: 12px;
}
.dev-mode-toggle span {
display: none; /* Hide text on mobile, show only icon */
}
.dev-mode-toggle.active span {
display: inline; /* Show "ON" status */
max-width: 60px;
}
}
/* Optional: Add visual indicator when Shift is held */
body.dev-mode-active.shift-held .dev-mode-toggle::after {
content: '⬆️';
margin-left: 4px;
animation: pulse 1s infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; transform: scale(1); }
50% { opacity: 0.7; transform: scale(1.1); }
}

View File

@@ -0,0 +1,18 @@
<!-- ============================================================================
PROTOTYPE DEV MODE - HTML SNIPPET
Add this HTML to your prototype page (inside <body>, preferably at the top)
============================================================================ -->
<!-- Dev Mode Toggle Button (fixed position, top-right) -->
<button id="dev-mode-toggle" class="dev-mode-toggle" title="Toggle Dev Mode (Ctrl+E)">
<svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
<path d="M11 4H4a2 2 0 0 0-2 2v14a2 2 0 0 0 2 2h14a2 2 0 0 0 2-2v-7"></path>
<path d="M18.5 2.5a2.121 2.121 0 0 1 3 3L12 15l-4 1 1-4 9.5-9.5z"></path>
</svg>
<span>Dev Mode: OFF</span>
</button>
<!-- Dev Mode Tooltip (shown when hovering over elements in dev mode) -->
<div id="dev-mode-tooltip" class="dev-mode-tooltip" style="display: none;"></div>

View File

@@ -0,0 +1,430 @@
/* eslint-disable n/no-unsupported-features/node-builtins */
/* global document, window */
/**
* PROTOTYPE DEV MODE
*
* Developer/feedback mode that allows users to easily copy Object IDs to clipboard
* for providing precise feedback on prototype elements.
*
* Features:
* - Toggle dev mode with button or Ctrl+E
* - Prototype works NORMALLY when dev mode is on
* - Hold Shift + Click any element to copy its Object ID
* - Visual highlights show what will be copied (green when Shift is held)
* - Tooltip shows Object ID on hover
* - Success feedback when copied
*
* Usage:
* 1. Include this script in your prototype HTML
* 2. Add the HTML toggle button and tooltip (see HTML template)
* 3. Add the CSS styles (see CSS template)
* 4. Call initDevMode() on page load
*
* How it works:
* - Activate dev mode (Ctrl+E or click button)
* - Hover over elements to see their Object IDs (gray outline)
* - Hold Shift key (outline turns green)
* - Click while holding Shift to copy Object ID
* - Prototype works normally without Shift held
* - **Shift is disabled when typing in form fields** (input, textarea, etc.)
*/
// ============================================================================
// DEV MODE STATE
// ============================================================================
let devModeActive = false;
let shiftKeyPressed = false;
let currentHighlightedElement = null;
// ============================================================================
// INITIALIZATION
// ============================================================================
function initDevMode() {
const toggleButton = document.querySelector('#dev-mode-toggle');
const tooltip = document.querySelector('#dev-mode-tooltip');
if (!toggleButton || !tooltip) {
console.warn('⚠️ Dev Mode: Toggle button or tooltip not found');
return;
}
// Warn if the Clipboard API is unavailable (copyToClipboard() falls back to execCommand)
if (typeof navigator === 'undefined' || !navigator.clipboard) {
console.warn('⚠️ Clipboard API not supported; using execCommand fallback');
}
setupKeyboardShortcuts();
setupToggleButton(toggleButton, tooltip);
setupHoverHighlight(tooltip);
setupClickCopy();
console.log('%c💡 Dev Mode available: Press Ctrl+E or click the Dev Mode button', 'color: #0066CC; font-weight: bold;');
}
// ============================================================================
// KEYBOARD SHORTCUTS
// ============================================================================
function setupKeyboardShortcuts() {
// Track Shift key for container selection
document.addEventListener('keydown', (e) => {
if (e.key === 'Shift') {
// Don't activate if user is typing in a form field
if (isTypingInField()) {
return;
}
shiftKeyPressed = true;
document.body.classList.add('shift-held');
if (devModeActive) {
console.log('%c⬆ Shift held: Click any element to copy its Object ID', 'color: #10B981; font-weight: bold;');
}
}
// Ctrl+E toggle
if (e.ctrlKey && e.key === 'e') {
e.preventDefault();
document.querySelector('#dev-mode-toggle')?.click();
}
});
document.addEventListener('keyup', (e) => {
if (e.key === 'Shift') {
shiftKeyPressed = false;
document.body.classList.remove('shift-held');
if (devModeActive) {
console.log('%c⬇ Shift released: Prototype works normally (hold Shift to copy)', 'color: #6b7280;');
}
}
});
}
// ============================================================================
// TOGGLE BUTTON
// ============================================================================
function setupToggleButton(toggleButton, tooltip) {
toggleButton.addEventListener('click', function (e) {
e.stopPropagation();
devModeActive = !devModeActive;
// Mirror the state globally so other scripts can read it
globalThis.devModeActive = devModeActive;
// Update UI
document.body.classList.toggle('dev-mode-active', devModeActive);
toggleButton.classList.toggle('active', devModeActive);
const statusText = toggleButton.querySelector('span');
if (statusText) {
statusText.textContent = devModeActive ? 'Dev Mode: ON' : 'Dev Mode: OFF';
}
// Log status
console.log(`🔧 Dev Mode: ${devModeActive ? 'ACTIVATED' : 'DEACTIVATED'}`);
if (devModeActive) {
console.log('%c🔧 DEV MODE ACTIVE', 'color: #0066CC; font-size: 16px; font-weight: bold;');
console.log('%c⚠ Hold SHIFT + Click any element to copy its Object ID', 'color: #FFB800; font-size: 14px; font-weight: bold;');
console.log('%cWithout Shift: Prototype works normally', 'color: #6b7280;');
console.log('%cPress Ctrl+E to toggle Dev Mode', 'color: #6b7280;');
} else {
tooltip.style.display = 'none';
if (currentHighlightedElement) {
clearHighlight();
}
}
});
}
// ============================================================================
// HOVER HIGHLIGHT
// ============================================================================
function setupHoverHighlight(tooltip) {
// Show tooltip and highlight on hover
document.addEventListener('mouseover', function (e) {
if (!devModeActive) return;
// Don't highlight if user is typing in a field
if (isTypingInField()) {
tooltip.style.display = 'none';
clearHighlight();
return;
}
clearHighlight();
let element = findElementWithId(e.target);
if (!element || !element.id || isSystemElement(element.id)) {
tooltip.style.display = 'none';
return;
}
// Highlight element
highlightElement(element, shiftKeyPressed);
currentHighlightedElement = element;
// Show tooltip
const prefix = shiftKeyPressed ? '✓ Click to Copy: ' : '⬆️ Hold Shift + Click: ';
tooltip.textContent = prefix + element.id;
tooltip.style.display = 'block';
tooltip.style.background = shiftKeyPressed ? '#10B981' : '#6b7280';
tooltip.style.color = '#fff';
updateTooltipPosition(e, tooltip);
});
// Update tooltip position on mouse move
document.addEventListener('mousemove', function (e) {
if (devModeActive && tooltip.style.display === 'block') {
updateTooltipPosition(e, tooltip);
}
});
// Clear highlight on mouse out
document.addEventListener('mouseout', function (e) {
if (!devModeActive) return;
if (e.target.id) {
tooltip.style.display = 'none';
clearHighlight();
}
});
}
// ============================================================================
// CLICK TO COPY
// ============================================================================
function setupClickCopy() {
// Use capture phase to intercept clicks with Shift
document.addEventListener(
'click',
function (e) {
if (!devModeActive) return;
// Allow toggle button to work normally
if (isToggleButton(e.target)) return;
// ONLY copy if Shift is held
if (!shiftKeyPressed) {
// Let prototype work normally without Shift
return;
}
// Don't intercept if user is clicking in/around a form field
if (isTypingInField() || isFormElement(e.target)) {
return;
}
// Shift is held and not in a form field - intercept and copy
e.preventDefault();
e.stopPropagation();
e.stopImmediatePropagation();
let element = findElementWithId(e.target);
if (!element || !element.id || isSystemElement(element.id)) {
console.log('❌ No Object ID found');
return false;
}
// Copy to clipboard
const objectId = element.id;
copyToClipboard(objectId);
// Show feedback
showCopyFeedback(element, objectId);
return false;
},
true,
); // Capture phase
}
// ============================================================================
// HELPER FUNCTIONS
// ============================================================================
function findElementWithId(element) {
let current = element;
let attempts = 0;
while (current && !current.id && attempts < 10) {
current = current.parentElement;
attempts++;
}
return current;
}
function isSystemElement(id) {
const systemIds = ['app', 'dev-mode-toggle', 'dev-mode-tooltip'];
return systemIds.includes(id);
}
function isToggleButton(element) {
return element.id === 'dev-mode-toggle' || element.closest('#dev-mode-toggle') || element.classList.contains('dev-mode-toggle');
}
function isTypingInField() {
const activeElement = document.activeElement;
if (!activeElement) return false;
const tagName = activeElement.tagName.toLowerCase();
const isEditable = activeElement.isContentEditable;
// Check if user is currently typing in a form field
return tagName === 'input' || tagName === 'textarea' || tagName === 'select' || isEditable;
}
function isFormElement(element) {
if (!element) return false;
const tagName = element.tagName.toLowerCase();
const isEditable = element.isContentEditable;
// Check if the clicked element is a form element
return tagName === 'input' || tagName === 'textarea' || tagName === 'select' || isEditable;
}
function highlightElement(element, isShiftHeld) {
const color = isShiftHeld ? '#10B981' : '#6b7280';
const width = isShiftHeld ? '3px' : '2px';
const offset = isShiftHeld ? '3px' : '2px';
const shadowSpread = isShiftHeld ? '5px' : '2px';
const shadowOpacity = isShiftHeld ? '0.4' : '0.2';
element.style.outline = `${width} solid ${color}`;
element.style.outlineOffset = offset;
element.style.boxShadow = `0 0 0 ${shadowSpread} rgba(${isShiftHeld ? '16, 185, 129' : '107, 114, 128'}, ${shadowOpacity})`;
}
function clearHighlight() {
if (currentHighlightedElement) {
currentHighlightedElement.style.outline = '';
currentHighlightedElement.style.boxShadow = '';
currentHighlightedElement = null;
}
}
function updateTooltipPosition(e, tooltip) {
const offset = 15;
let x = e.clientX + offset;
let y = e.clientY + offset;
// Keep tooltip on screen
const rect = tooltip.getBoundingClientRect();
if (x + rect.width > window.innerWidth) {
x = e.clientX - rect.width - offset;
}
if (y + rect.height > window.innerHeight) {
y = e.clientY - rect.height - offset;
}
tooltip.style.left = x + 'px';
tooltip.style.top = y + 'px';
}
function copyToClipboard(text) {
if (typeof navigator !== 'undefined' && navigator.clipboard && navigator.clipboard.writeText) {
navigator.clipboard
.writeText(text)
.then(() => {
console.log(`📋 Copied to clipboard: ${text}`);
})
.catch((error) => {
console.error('Dev Mode error:', error);
fallbackCopy(text);
});
} else {
fallbackCopy(text);
}
}
function fallbackCopy(text) {
const textarea = document.createElement('textarea');
textarea.value = text;
textarea.style.position = 'fixed';
textarea.style.left = '-999999px';
document.body.append(textarea);
textarea.focus();
textarea.select();
try {
document.execCommand('copy');
console.log(`📋 Copied (fallback): ${text}`);
} catch (error) {
console.error('Dev Mode error:', error);
}
textarea.remove();
}
function showCopyFeedback(element, objectId) {
// Create feedback overlay
const feedback = document.createElement('div');
feedback.textContent = '✓ Copied: ' + objectId;
feedback.style.cssText = `
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background: #10B981;
color: #fff;
padding: 16px 32px;
border-radius: 8px;
font-size: 16px;
font-weight: 600;
z-index: 100000;
box-shadow: 0 10px 25px rgba(0,0,0,0.3);
animation: fadeInOut 1.5s ease-in-out;
pointer-events: none;
`;
document.body.append(feedback);
setTimeout(() => {
feedback.remove();
}, 1500);
// Flash element
const originalOutline = element.style.outline;
element.style.outline = '3px solid #10B981';
setTimeout(() => {
element.style.outline = originalOutline;
}, 300);
}
// Add CSS animation
const style = document.createElement('style');
style.textContent = `
@keyframes fadeInOut {
0% { opacity: 0; transform: translate(-50%, -50%) scale(0.9); }
20% { opacity: 1; transform: translate(-50%, -50%) scale(1); }
80% { opacity: 1; transform: translate(-50%, -50%) scale(1); }
100% { opacity: 0; transform: translate(-50%, -50%) scale(0.9); }
}
`;
document.head.append(style);
// ============================================================================
// EXPORT
// ============================================================================
// Make available globally
globalThis.initDevMode = initDevMode;
// Export for use in other scripts
if (typeof module !== 'undefined' && module.exports) {
module.exports = { initDevMode };
}

View File

@@ -0,0 +1,63 @@
{
"user": {
"id": "demo-user-001",
"firstName": "[First Name]",
"lastName": "[Last Name]",
"email": "[email@example.com]",
"phoneNumber": "[+1234567890]",
"picture": "",
"role": "owner",
"createdAt": "2024-01-01T00:00:00.000Z",
"updatedAt": "2024-01-01T00:00:00.000Z"
},
"family": {
"id": "demo-family-001",
"name": "[Family Name]",
"description": "[Brief family description]",
"location": "[City, Country]",
"picture": "",
"ownerId": "demo-user-001",
"createdAt": "2024-01-01T00:00:00.000Z",
"updatedAt": "2024-01-01T00:00:00.000Z"
},
"members": [
{
"id": "demo-member-001",
"familyId": "demo-family-001",
"userId": "demo-user-001",
"firstName": "[Member 1 First Name]",
"lastName": "[Member 1 Last Name]",
"email": "[member1@example.com]",
"role": "owner",
"picture": "",
"createdAt": "2024-01-01T00:00:00.000Z"
},
{
"id": "demo-member-002",
"familyId": "demo-family-001",
"userId": "demo-user-002",
"firstName": "[Member 2 First Name]",
"lastName": "[Member 2 Last Name]",
"email": "[member2@example.com]",
"role": "co-owner",
"picture": "",
"createdAt": "2024-01-02T00:00:00.000Z"
}
],
"dogs": [
{
"id": "demo-dog-001",
"familyId": "demo-family-001",
"name": "[Dog Name]",
"breed": "[Dog Breed]",
"gender": "male",
"birthDate": "2020-05-15",
"color": "[Color]",
"specialNeeds": "[Any special needs or notes]",
"picture": "",
"createdAt": "2024-01-01T00:00:00.000Z",
"updatedAt": "2024-01-01T00:00:00.000Z"
}
],
"comment": "This is demo data that loads automatically when prototype is opened for the first time. Edit this file to change the demo data. All fields with empty strings ('') are optional."
}
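
Note on how this file is consumed: the comment above says the demo data "loads automatically when prototype is opened for the first time", which implies a seed-once bootstrap so user edits survive reloads. A minimal sketch of that logic (the storage key `prototype-data` and the helper name are assumptions for illustration; the real logic lives in `shared/init.js`):

```javascript
// First-open bootstrap sketch: seed demo data into storage only once,
// so edits the user makes in the prototype survive page reloads.
// (The key name 'prototype-data' is an assumption; check shared/init.js.)
function seedDemoData(storage, demoData) {
  if (storage.getItem('prototype-data') !== null) {
    return false; // already seeded: keep the user's current data
  }
  storage.setItem('prototype-data', JSON.stringify(demoData));
  return true;
}

// In a page this would run on load, e.g.:
// fetch('data/demo-data.json')
//   .then((res) => res.json())
//   .then((data) => seedDemoData(window.localStorage, data));
```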

View File

@@ -0,0 +1,465 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>[Page-Number] [Page Name] - [Project Name]</title>
<!-- Tailwind CSS via CDN -->
<script src="https://cdn.tailwindcss.com"></script>
<!-- Tailwind Config (Design Tokens) -->
<script>
tailwind.config = {
theme: {
extend: {
colors: {
'[project-name]': {
50: '#eff6ff',
100: '#dbeafe',
500: '#2563eb',
600: '#1d4ed8',
700: '#1e40af',
}
},
fontFamily: {
sans: ['Inter', 'system-ui', 'sans-serif'],
}
}
}
}
</script>
<!-- Google Fonts -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
<!-- Dev Mode Styles (feedback/testing tool) -->
<link rel="stylesheet" href="components/dev-mode.css">
<!-- Custom Styles (minimal - only what Tailwind can't do) -->
<style>
/* Custom styles that can't be done with Tailwind */
/* Example: Complex animations, special overlays, etc. */
</style>
</head>
<body class="min-h-screen bg-gray-50 font-sans">
<!-- ========================================================================
DEV MODE TOGGLE (for easy feedback - copy Object IDs)
======================================================================== -->
<button id="dev-mode-toggle" class="dev-mode-toggle" title="Toggle Dev Mode (Ctrl+E)">
<svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
<path d="M11 4H4a2 2 0 0 0-2 2v14a2 2 0 0 0 2 2h14a2 2 0 0 0 2-2v-7"></path>
<path d="M18.5 2.5a2.121 2.121 0 0 1 3 3L12 15l-4 1 1-4 9.5-9.5z"></path>
</svg>
<span>Dev Mode: OFF</span>
</button>
<div id="dev-mode-tooltip" class="dev-mode-tooltip" style="display: none;"></div>
<!-- ========================================================================
HEADER
======================================================================== -->
<header class="bg-white border-b border-gray-200 px-4 py-3 flex items-center justify-between">
<!-- Back Button -->
<button
id="[page]-header-back"
data-object-id="[page]-header-back"
onclick="history.back()"
class="text-gray-600 hover:text-gray-900 font-medium text-sm transition-colors"
>
← [Back Text]
</button>
<!-- Page Title -->
<h1
id="[page]-header-title"
data-object-id="[page]-header-title"
class="text-lg font-semibold text-gray-900"
>
[Page Title]
</h1>
<!-- Spacer (for alignment) -->
<div class="w-20"></div>
<!-- Optional: Language Selector or Action Button -->
<!-- <button class="text-[project-name]-600">Action</button> -->
</header>
<!-- ========================================================================
MAIN CONTENT
======================================================================== -->
<main class="max-w-md mx-auto p-4">
<form id="[page]Form" class="space-y-4" onsubmit="handleSubmit(event)">
<!-- ============================================================
SECTION 1: Example - Picture Upload
============================================================ -->
<div class="flex items-center gap-4 mb-6">
<button
type="button"
id="[page]-picture-upload"
data-object-id="[page]-picture-upload"
onclick="handlePictureUpload()"
class="w-24 h-24 rounded-full bg-gray-100 border-2 border-dashed border-gray-300 flex items-center justify-center hover:border-[project-name]-500 hover:bg-gray-50 transition-all cursor-pointer"
>
<img id="picturePreview" class="hidden w-full h-full rounded-full object-cover" alt="Preview" />
<svg class="w-10 h-10 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 6v6m0 0v6m0-6h6m-6 0H6"></path>
</svg>
</button>
<input type="file" id="pictureInput" accept="image/*" class="hidden">
<div class="flex-1">
<label class="text-sm text-gray-700 font-medium">
[Upload Label]
</label>
<p class="text-xs text-red-600 hidden" id="pictureError"></p>
</div>
</div>
<!-- ============================================================
SECTION 2: Example - Text Input
============================================================ -->
<div>
<input
type="text"
id="[page]-input-[field]"
data-object-id="[page]-input-[field]"
name="[fieldName]"
placeholder="[Placeholder] *"
class="w-full px-3 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-[project-name]-500 focus:border-transparent transition-all"
required
/>
<p class="text-sm text-red-600 hidden mt-1" id="[field]Error"></p>
</div>
<!-- ============================================================
SECTION 3: Example - Split Button (Binary Choice)
============================================================ -->
<div
id="[page]-split-[choice]"
data-object-id="[page]-split-[choice]"
class="grid grid-cols-2 gap-0 border border-gray-300 rounded-lg overflow-hidden"
>
<button
type="button"
id="choice1"
onclick="selectChoice('option1')"
class="py-2 text-center font-medium text-gray-700 hover:bg-gray-50 transition-colors"
>
[Option 1]
</button>
<button
type="button"
id="choice2"
onclick="selectChoice('option2')"
class="py-2 text-center font-medium text-gray-700 hover:bg-gray-50 transition-colors border-l border-gray-300"
>
[Option 2]
</button>
</div>
<!-- ============================================================
SECTION 4: Example - Textarea
============================================================ -->
<div>
<textarea
id="[page]-textarea-[field]"
data-object-id="[page]-textarea-[field]"
name="[fieldName]"
placeholder="[Placeholder]"
maxlength="500"
rows="3"
oninput="updateCharCounter()"
class="w-full px-3 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-[project-name]-500 focus:border-transparent resize-none"
></textarea>
<p class="text-xs text-gray-500 text-right hidden" id="charCounter"></p>
</div>
<!-- ============================================================
SUBMIT BUTTON
============================================================ -->
<button
type="submit"
id="[page]-button-submit"
data-object-id="[page]-button-submit"
class="w-full py-3 bg-[project-name]-600 text-white rounded-lg font-semibold hover:bg-[project-name]-700 focus:outline-none focus:ring-2 focus:ring-[project-name]-500 focus:ring-offset-2 disabled:opacity-50 disabled:cursor-not-allowed transition-all flex items-center justify-center gap-2"
>
<span id="submitButtonText">[Submit Text]</span>
<svg id="submitButtonSpinner" class="hidden animate-spin w-5 h-5" fill="none" viewBox="0 0 24 24">
<circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
<path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
</svg>
</button>
</form>
</main>
<!-- ========================================================================
SUCCESS TOAST
======================================================================== -->
<div id="successToast" class="hidden fixed bottom-6 left-1/2 -translate-x-1/2 bg-gray-900 text-white px-6 py-3 rounded-lg shadow-lg flex items-center gap-2 z-50 animate-slide-up">
<svg class="w-5 h-5" fill="currentColor" viewBox="0 0 20 20">
<path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" clip-rule="evenodd"></path>
</svg>
<span id="toastMessage">[Success Message]</span>
</div>
<!-- ========================================================================
ERROR BANNER (optional)
======================================================================== -->
<div id="errorBanner" class="hidden fixed top-20 left-1/2 -translate-x-1/2 max-w-md w-full mx-4 bg-red-50 border border-red-200 rounded-lg p-4 flex items-start gap-3 z-50">
<svg class="w-5 h-5 text-red-600 flex-shrink-0 mt-0.5" fill="currentColor" viewBox="0 0 20 20">
<path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zM8.707 7.293a1 1 0 00-1.414 1.414L8.586 10l-1.293 1.293a1 1 0 101.414 1.414L10 11.414l1.293 1.293a1 1 0 001.414-1.414L11.414 10l1.293-1.293a1 1 0 00-1.414-1.414L10 8.586 8.707 7.293z" clip-rule="evenodd"></path>
</svg>
<div class="flex-1">
<p class="font-medium text-red-900">Error</p>
<p class="text-sm text-red-700" id="errorMessage">[Error message]</p>
</div>
<button onclick="hideErrorBanner()" class="text-red-600 hover:text-red-900">
<svg class="w-5 h-5" fill="currentColor" viewBox="0 0 20 20">
<path fill-rule="evenodd" d="M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z" clip-rule="evenodd"></path>
</svg>
</button>
</div>
<!-- ========================================================================
MODALS (add as needed - example: image crop modal)
======================================================================== -->
<!-- Image Crop Modal Template (uncomment if using image upload) -->
<!--
<div id="cropModal" class="hidden fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50 p-4">
<div class="bg-white rounded-lg max-w-md w-full overflow-hidden">
<div class="flex items-center justify-between px-4 py-3 border-b border-gray-200">
<button onclick="cancelCrop()" class="text-[project-name]-600 font-medium">Cancel</button>
<h2 class="font-semibold text-gray-900">Crop Image</h2>
<button onclick="replaceImage()" class="text-[project-name]-600 font-medium">Replace</button>
</div>
<div class="p-4">
<div class="relative bg-gray-100 rounded-lg overflow-hidden" style="height: 300px;">
<img id="cropImage" src="" alt="Crop" class="w-full h-full object-contain">
</div>
<input type="range" id="zoomSlider" min="10" max="200" value="100"
class="w-full mt-4 accent-[project-name]-600">
</div>
<div class="p-4">
<button onclick="confirmCrop()"
class="w-full py-3 bg-[project-name]-600 text-white rounded-lg font-semibold hover:bg-[project-name]-700">
Use Image
</button>
</div>
</div>
</div>
-->
<!-- ========================================================================
JAVASCRIPT - Shared Scripts
======================================================================== -->
<script src="shared/prototype-api.js"></script>
<script src="shared/init.js"></script>
<script src="shared/utils.js"></script>
<!-- ========================================================================
JAVASCRIPT - Dev Mode (feedback tool)
======================================================================== -->
<script src="components/dev-mode.js"></script>
<!-- ========================================================================
JAVASCRIPT - Component Scripts (load as needed)
======================================================================== -->
<!-- <script src="components/image-crop.js"></script> -->
<!-- <script src="components/toast.js"></script> -->
<!-- <script src="components/modal.js"></script> -->
<!-- <script src="components/form-validation.js"></script> -->
<!-- ========================================================================
JAVASCRIPT - Page-Specific Script (if complex logic)
======================================================================== -->
<!-- Option 1: External file (if >150 lines) -->
<!-- <script src="pages/[page-number]-[page-name].js"></script> -->
<!-- Option 2: Inline script (preferred for <150 lines) -->
<script>
/**
* Page: [Page Number] [Page Name]
* Purpose: [Brief description]
*/
// ================================================================
// STATE MANAGEMENT
// ================================================================
let formData = {
// Initialize form state
};
// ================================================================
// INITIALIZATION
// ================================================================
window.initPage = function() {
console.log('📄 [Page Name] loaded');
// Note: initDevMode() (from components/dev-mode.js) must run once on page load,
// typically wired up in shared/init.js
// Page-specific initialization
loadPageData();
};
async function loadPageData() {
try {
// Load any required data
const user = await window.PrototypeAPI.getUser();
console.log('👤 Current user:', user);
// Pre-fill form if needed
} catch (error) {
console.error('❌ Error loading data:', error);
}
}
// ================================================================
// FORM HANDLING
// ================================================================
async function handleSubmit(event) {
event.preventDefault();
// Validate
if (!validateForm()) {
return;
}
// Show loading
setLoadingState(true);
try {
// Collect form data
const data = {
// Extract form values
};
// API call
console.log('📤 Submitting:', data);
const result = await window.PrototypeAPI.[method](data);
console.log('✅ Success:', result);
// Show success
showToast('[Success message]', 'success');
// Navigate (after delay)
setTimeout(() => {
window.location.href = '[next-page].html';
}, 1500);
} catch (error) {
console.error('❌ Error:', error);
showErrorBanner(error.message);
} finally {
setLoadingState(false);
}
}
function validateForm() {
let isValid = true;
// Validate each field
// Example:
// const name = document.getElementById('[field]').value;
// if (!name) {
// showFieldError('[field]', 'This field is required');
// isValid = false;
// }
return isValid;
}
// ================================================================
// UI INTERACTIONS
// ================================================================
function handlePictureUpload() {
document.getElementById('pictureInput').click();
}
function selectChoice(choice) {
// Handle choice selection
console.log('Choice selected:', choice);
}
function updateCharCounter() {
const textarea = document.getElementById('[page]-textarea-[field]');
const counter = document.getElementById('charCounter');
const current = textarea.value.length;
const max = textarea.maxLength;
counter.textContent = `${current}/${max}`;
counter.classList.remove('hidden');
}
// ================================================================
// FEEDBACK
// ================================================================
function setLoadingState(isLoading) {
const btn = document.getElementById('[page]-button-submit');
const text = document.getElementById('submitButtonText');
const spinner = document.getElementById('submitButtonSpinner');
btn.disabled = isLoading;
text.classList.toggle('hidden', isLoading);
spinner.classList.toggle('hidden', !isLoading);
}
function showToast(message, type = 'success') {
const toast = document.getElementById('successToast');
const messageEl = document.getElementById('toastMessage');
messageEl.textContent = message;
toast.classList.remove('hidden');
setTimeout(() => {
toast.classList.add('hidden');
}, 3000);
}
function showErrorBanner(message) {
const banner = document.getElementById('errorBanner');
const messageEl = document.getElementById('errorMessage');
messageEl.textContent = message;
banner.classList.remove('hidden');
setTimeout(() => {
banner.classList.add('hidden');
}, 5000);
}
function hideErrorBanner() {
document.getElementById('errorBanner').classList.add('hidden');
}
function showFieldError(fieldId, message) {
const errorEl = document.getElementById(`${fieldId}Error`);
const inputEl = document.getElementById(fieldId);
if (errorEl) {
errorEl.textContent = message;
errorEl.classList.remove('hidden');
}
if (inputEl) {
inputEl.classList.add('border-red-500');
}
}
function clearFieldError(fieldId) {
const errorEl = document.getElementById(`${fieldId}Error`);
const inputEl = document.getElementById(fieldId);
if (errorEl) {
errorEl.classList.add('hidden');
}
if (inputEl) {
inputEl.classList.remove('border-red-500');
}
}
</script>
</body>
</html>
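
The inline script above calls `window.PrototypeAPI.getUser()` from `shared/prototype-api.js`. A minimal sketch of how that API can sit on top of the seeded storage blob (the factory name, the storage key, and the `getFamily` method are assumptions for illustration, not the documented API):

```javascript
// Promise-based reads over the single storage blob seeded on first open.
// Methods return promises so pages can swap in a real backend later
// without changing call sites like `await window.PrototypeAPI.getUser()`.
function createPrototypeAPI(storage) {
  const read = () => JSON.parse(storage.getItem('prototype-data') || '{}');
  return {
    async getUser() {
      return read().user || null;
    },
    async getFamily() {
      return read().family || null;
    },
  };
}

// In the browser:
// window.PrototypeAPI = createPrototypeAPI(window.localStorage);
```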

View File

@@ -0,0 +1,191 @@
# Story [Page].[Section]: [Page Name] - [Section Name]
**Page**: [Page Number] [Page Name]
**Section**: [N] of [Total]
**Complexity**: Simple | Medium | Complex
**Estimated Time**: [X] minutes
---
## 🎯 Goal
[Brief description of what this section accomplishes]
---
## 📋 What to Build
### HTML Elements
```html
<!-- [Description of HTML to add] -->
<div class="[tailwind-classes]">
<!-- Specific HTML structure here -->
</div>
```
### JavaScript (if needed)
```javascript
// [Description of JavaScript functionality]
function [functionName]() {
// Implementation
}
```
### Tailwind Classes to Use
**Key classes for this section**:
- `[class-category]`: `[specific-classes]`
- `[class-category]`: `[specific-classes]`
**Example combinations**:
```html
<!-- Input field -->
<input class="w-full px-3 py-2 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-[project]-500">
<!-- Button -->
<button class="w-full py-3 bg-[project]-600 text-white rounded-lg font-semibold hover:bg-[project]-700 transition-colors">
```
---
## 🔗 Dependencies
**Shared code**:
- ✓ `shared/prototype-api.js` (already loaded)
- ✓ `shared/init.js` (already loaded)
**Components** (load if not already included):
- [ ] `components/image-crop.js` (if using image upload)
- [ ] `components/toast.js` (if showing notifications)
- [ ] `components/modal.js` (if using modals)
- [ ] `components/form-validation.js` (if validating forms)
---
## 📸 Baseline State
_Capture with Puppeteer before implementation when modifying existing features. Skip for new sections._
| Element | Current Value | Notes |
|---------|---------------|-------|
| [element] | [current value] | [any relevant context] |
---
## 📝 Implementation Steps
### Step 1: [First Step]
[What to do first]
### Step 2: [Second Step]
[What to do second]
### Step 3: [Third Step]
[What to do third]
---
## ✅ Acceptance Criteria
### Agent-Verifiable (Puppeteer)
| # | Criterion | Element | Expected | How to Verify |
|---|-----------|---------|----------|---------------|
| 1 | [Criterion] | `[selector]` | [Value] | [Method] |
| 2 | [Criterion] | `[selector]` | [Value] | [Method] |
| 3 | [Criterion] | `[selector]` | [Value] | [Method] |
### User-Evaluable (Qualitative)
- [ ] Flow feels natural and intuitive
- [ ] Visual hierarchy guides the eye correctly
- [ ] Section feels consistent with the rest of the prototype
- [ ] [Additional qualitative criterion]
---
## 🧪 How to Test
### Puppeteer Self-Verification (Agent)
Before presenting to user:
1. Open `[Page-Number]-[Page-Name].html` in Puppeteer
2. Set viewport to target width (375px for mobile)
3. For each agent-verifiable criterion in the table above:
- Locate element
- Read actual value
- Compare to expected
- Narrate with ✓/✗
4. Fix any mismatches and re-verify until all pass
5. Check console for errors
See [Inline Testing Guide](../guides/INLINE-TESTING-GUIDE.md) for full methodology.
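The comparison loop above can be sketched as a small helper. This is a minimal sketch, not part of the template itself; the selector, criterion name, and expected value are hypothetical examples, and it assumes Puppeteer is installed and a `page` object is available from the run:

```javascript
// Criteria shape mirrors the agent-verifiable table above; values are examples.
const criteria = [
  { name: 'Header title', selector: '#page-header-title', expected: 'Create Account' },
];

// Pure helper: compare one expected/actual pair and narrate with ✓/✗.
function narrate(name, expected, actual) {
  const pass = expected === actual;
  return `${pass ? '✓' : '✗'} ${name}: expected "${expected}", got "${actual}"`;
}

// Run every criterion against a Puppeteer page (awaited inside the verification run):
// const results = await verifySection(page, criteria);
async function verifySection(page, list) {
  const results = [];
  for (const c of list) {
    const actual = await page.$eval(c.selector, el => el.textContent.trim());
    results.push(narrate(c.name, c.expected, actual));
  }
  return results;
}
```

Any "✗" line means fix and re-verify before presenting to the user.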
### User Qualitative Review
After Puppeteer verification passes, present to user:
- Summarize Puppeteer results (X/Y criteria pass)
- Ask user to evaluate qualitative criteria above
- Collect feedback on feel, flow, clarity, consistency
---
## 🐛 Common Issues & Fixes
### Issue: [Problem Description]
**Symptom**: [What user sees]
**Cause**: [Why it happens]
**Fix**: [How to fix it]
### Issue: [Problem Description]
**Symptom**: [What user sees]
**Cause**: [Why it happens]
**Fix**: [How to fix it]
---
## 🎨 Design Notes
**Visual requirements**:
- [Design consideration 1]
- [Design consideration 2]
**UX considerations**:
- [UX note 1]
- [UX note 2]
---
## 💡 Tips
- [Helpful tip 1]
- [Helpful tip 2]
---
## ➡️ Next Section
After this section is approved: `[Page].[NextSection]-[page-name]-[next-section-name].md`
---
## 📊 Status Tracking
**Status**: ⏸️ Not Started | 🚧 In Progress | ✅ Complete
**Started**: [Date/Time]
**Completed**: [Date/Time]
**Approved By**: [Name]
**Notes**: [Any special notes or changes made]
---
## 🔄 Changes from Original Plan
*Document any deviations from the work file plan here:*
- [Change 1 and reason]
- [Change 2 and reason]


@@ -0,0 +1,264 @@
# ============================================================================
# PROTOTYPE WORK FILE: [Page Number] [Page Name]
# ============================================================================
# Purpose: Complete planning document for section-by-section implementation
# Created: [Date]
# Page Spec: ../[Scenario]/[Page-Number]-[Page-Name]/[Page-Number]-[Page-Name].md
# ============================================================================
metadata:
page_number: "[Page-Number]"
page_name: "[Page Name]"
scenario: "[Scenario-Number]-[Scenario-Name]"
complexity: "simple | medium | complex"
estimated_sections: [Number]
estimated_time: "[X] minutes"
# Device Compatibility
device_compatibility:
type: "mobile-only | mobile-tablet | responsive | desktop-only"
primary_viewport: "[Width]px"
test_viewports:
- width: 375
height: 667
device: "iPhone SE"
- width: 393
height: 852
device: "iPhone 14 Pro"
- width: 428
height: 926
device: "iPhone 14 Pro Max"
breakpoints: [] # For mobile-only, leave empty
touch_optimized: true
hover_interactions: false
dependencies:
- "shared/prototype-api.js"
- "shared/init.js"
# Add component dependencies as needed
# ============================================================================
# DESIGN TOKENS (Tailwind Config)
# ============================================================================
design_tokens:
colors:
primary: "#2563eb"
primary_hover: "#1d4ed8"
success: "#10b981"
error: "#ef4444"
tailwind_config:
theme_extend:
colors:
"[project-name]":
50: "#eff6ff"
500: "#2563eb"
600: "#1d4ed8"
700: "#1e40af"
fontFamily:
sans: "['Inter', 'system-ui', 'sans-serif']"
components_available:
- "image-crop (components/image-crop.js)"
- "toast (components/toast.js)"
- "modal (components/modal.js)"
- "form-validation (components/form-validation.js)"
# ============================================================================
# PAGE REQUIREMENTS (from specification)
# ============================================================================
page_purpose: |
[Brief description of what this page does and why user is here]
user_context:
- [Context point 1: What user has done before arriving]
- [Context point 2: What data is available]
- [Context point 3: User's current state]
success_criteria:
- [Criterion 1: What must be accomplished]
- [Criterion 2: Required validations]
- [Criterion 3: Data that must be saved]
- [Criterion 4: Where user navigates on success]
# ============================================================================
# DEMO DATA REQUIREMENTS
# ============================================================================
demo_data_needed:
current_user:
firstName: "[Example]"
lastName: "[Example]"
email: "[example@email.com]"
# Add other demo data needs (family, dogs, etc.)
example_submission:
# Example of completed form data
field1: "[value]"
field2: "[value]"
# ============================================================================
# OBJECT ID MAP (all interactive elements)
# ============================================================================
object_ids:
header:
- "[page]-header-back"
- "[page]-header-title"
form_inputs:
- "[page]-input-[field1]"
- "[page]-input-[field2]"
# Add all form fields
actions:
- "[page]-button-submit"
# Add all action buttons
# ============================================================================
# SECTION BREAKDOWN (implementation order)
# ============================================================================
sections:
- id: "section-1"
name: "Page Structure & Header"
scope: "HTML skeleton, header with back button, title, main container"
files_affected: ["[Page-Number]-[Page-Name].html"]
dependencies: []
object_ids:
- "[page]-header-back"
- "[page]-header-title"
tailwind_classes:
- "Layout: min-h-screen, bg-gray-50"
- "Header: bg-white, border-b, px-4, py-3"
- "Button: text-gray-600, hover:text-gray-900"
acceptance_criteria:
- "Header displays with back button and title"
- "Back button navigates to previous page"
- "Mobile viewport (375px) looks correct"
- "Tailwind styles applied correctly"
placeholder_message: "🚧 Building the form... Check back in a few minutes!"
- id: "section-2"
name: "[Section Name]"
scope: "[What this section adds]"
files_affected: ["[Page-Number]-[Page-Name].html"]
dependencies: ["[component files if needed]"]
object_ids:
- "[object-id-1]"
- "[object-id-2]"
tailwind_classes:
- "[List key Tailwind classes to use]"
acceptance_criteria:
- "[Test 1]"
- "[Test 2]"
placeholder_message: "[What's coming next]"
# Add sections 3-6+ as needed
# ============================================================================
# JAVASCRIPT REQUIREMENTS
# ============================================================================
javascript_functions:
initialization:
- "initPage() - Page-specific initialization"
- "[Other init functions]"
form_handling:
- "handleSubmit(event) - Form submission"
- "validateForm() - Validate all fields"
- "[Other form functions]"
ui_interactions:
- "[Interaction function 1]"
- "[Interaction function 2]"
api_calls:
- "DogWeekAPI.[method]([params])"
feedback:
- "showToast(message, type)"
- "setLoadingState(isLoading)"
- "[Other feedback functions]"
# ============================================================================
# NAVIGATION
# ============================================================================
navigation:
previous_page: "[Previous-Page].html"
next_page_success: "[Next-Page].html"
next_page_cancel: "[Cancel-Page].html"
# ============================================================================
# TESTING CHECKLIST (after all sections complete)
# ============================================================================
testing_checklist:
functionality:
- "[ ] All form fields work"
- "[ ] Validation shows errors correctly"
- "[ ] Submit button works"
- "[ ] Loading states display"
- "[ ] Success feedback shows"
- "[ ] Error handling works"
- "[ ] Navigation works (back, next)"
- "[ ] Data persists (reload page test)"
mobile_testing:
- "[ ] Viewport is correct width"
- "[ ] All tap targets min 44x44px"
- "[ ] Text is readable (min 16px)"
- "[ ] No horizontal scroll"
- "[ ] Touch gestures work (if applicable)"
code_quality:
- "[ ] All Object IDs present"
- "[ ] Console logs helpful"
- "[ ] No console errors"
- "[ ] Tailwind classes properly used"
- "[ ] Functions documented"
accessibility:
- "[ ] Keyboard navigation works"
- "[ ] Form labels present"
- "[ ] Error messages clear"
- "[ ] Focus states visible"
# ============================================================================
# MIGRATION NOTES (for production)
# ============================================================================
migration_todos:
- "Replace DogWeekAPI.[method]() with Supabase calls"
- "[Other production migration tasks]"
# ============================================================================
# KNOWN ISSUES / EDGE CASES
# ============================================================================
edge_cases:
- "[Edge case 1 to handle]"
- "[Edge case 2 to handle]"
# ============================================================================
# COMPLETION CRITERIA
# ============================================================================
definition_of_done:
- "All sections implemented and tested"
- "All object IDs present and correct"
- "All acceptance criteria met"
- "Console logs helpful and clear"
- "Mobile viewport works perfectly"
- "Demo data loads automatically"
- "Form validation complete"
- "Success/error feedback working"
- "Navigation works"
- "No console errors"
- "Code is clean"
- "Story files document all sections"


@@ -0,0 +1,72 @@
---
name: acceptance-testing
description: Design and run acceptance tests from specification criteria
---
# [T] Acceptance Testing — Design & Run Tests from Spec Criteria
**Goal:** Validate that implementation matches design specifications through structured testing.
**When to use:** Implementation is complete (prototype or production), ready for validation against specs.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
---
## STEPS
Execute steps in `./steps-t/`:
| Step | File | Purpose |
|------|------|---------|
| 01 | step-01-prepare.md | Gather materials, set up environment |
| 02 | step-02-execute.md | Run all test categories |
| 03 | step-03-document-issues.md | Create issue tickets |
| 04 | step-04-report.md | Compile test report |
| 05 | step-05-iterate.md | Iterate fixes or approve |
**Reference data:**
- `./data/testing-guide.md`
- `./data/test-result-templates.md`
- `./data/issue-templates.md`
---
## DESIGN LOG REPORTING POINT
When all tests pass and the user approves, append to the Design Loop Status table in `{output_folder}/_progress/00-design-log.md`:
```
| [Scenario slug] | [NN.X] | [Page name] | approved | [YYYY-MM-DD] |
```
Do NOT skip this. The design log drives Phase 4's adaptive dashboard.
## AFTER COMPLETION
**If all tests pass:**
Design log updated with `approved` status (see above). Present the transition:
<output>
**"[page name]" is approved!**
1. **Explore the next scenario step** — [next page name]
2. **Design delivery** — package for development handoff (Phase 4 [H])
</output>
**If issues found:**
Status stays `built`. Present:
<output>
**"[page name]" has [N] issues to fix.**
1. **Fix the issues** — route to bugfixing
2. **Explore the next scenario step** — fix later, continue designing
</output>


@@ -0,0 +1,61 @@
---
name: analysis
description: Understand an existing codebase through systematic exploration
---
# [A] Analysis — Understand Your Own Codebase
**Goal:** Systematically explore and document an existing codebase's architecture, patterns, and dependencies.
**When to use:** Joining an existing project. Need to understand code before making changes. Architecture documentation is missing or outdated. Preparing for a major refactor or migration.
---
## CORE PRINCIPLES
- **Question-driven** — Every analysis starts with a clear question. Without a question, exploration is aimless. Define what you need to know before reading a single file.
- **Systematic** — Follow a structured path: question, scan, map, document. Do not jump to conclusions from reading one file. Cover breadth before going deep.
- **Document everything** — Findings that are not written down are lost. Every observation, pattern, risk, and recommendation goes into the output document. The analysis document becomes a team asset.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### Essential Guides
- **[Execution Principles](data/guides/EXECUTION-PRINCIPLES.md)** — Document before acting, plan-then-execute
- **[Session Protocol](data/guides/SESSION-PROTOCOL.md)** — Read dialog, verify plan, present status
---
## STEPS
Execute steps in `./steps-a/`:
| Step | File | Purpose |
|------|------|---------|
| 01 | step-01-define-question.md | Articulate the question, define scope and output |
| 02 | step-02-scan-codebase.md | Scan structure, tech stack, entry points |
| 03 | step-03-map-architecture.md | Map components, data flow, dependencies |
| 04 | step-04-document-findings.md | Create architecture document with diagrams |
**Flow:** 01 → 02 → 03 → 04
### Critical Rules
- **ALWAYS** start with a clear question before scanning code
- **ALWAYS** document observations as you go, not from memory at the end
- **ALWAYS** define scope boundaries to avoid unbounded exploration
- **ALWAYS** set a time box to prevent analysis paralysis
- **ALWAYS** produce a concrete output document, not just verbal findings
---
## AFTER COMPLETION
1. Append a progress entry to `{output_folder}/_progress/00-design-log.md` under `## Progress`:
`### [date] — Analysis: [what was analyzed, key findings]`
2. Suggest next action: feed into development, reverse engineer further, or start prototyping


@@ -0,0 +1,64 @@
---
name: bugfixing
description: Fix bugs in existing code through structured investigation and verification
---
# [F] Bugfixing — Fix Bugs in Existing Code
**Goal:** Systematically investigate, fix, and verify bugs with minimal side effects.
**When to use:** A bug has been reported or discovered in existing code.
---
## CORE PRINCIPLES
- **Reproduce first** — Never fix what you cannot reproduce. A fix without reproduction is a guess.
- **Minimal fix** — Target the root cause with the smallest change possible. Do not refactor surrounding code during a bugfix.
- **Regression check** — Every fix must be verified against the original bug AND tested for side effects on related functionality.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### Essential Guides
- **[Execution Principles](data/guides/EXECUTION-PRINCIPLES.md)** — Document-first, plan-then-execute
- **[Session Protocol](data/guides/SESSION-PROTOCOL.md)** — Read dialog, verify plan, present status
- **[Inline Testing Guide](data/guides/INLINE-TESTING-GUIDE.md)** — Verify fixes with Puppeteer before presenting
---
## STEPS
Execute steps in `./steps-f/`:
| Step | File | Purpose |
|------|------|---------|
| 01 | step-01-reproduce.md | Reproduce and document the bug |
| 02 | step-02-investigate.md | Investigate root cause |
| 03 | step-03-fix.md | Implement the fix |
| 04 | step-04-verify.md | Verify fix, check regressions |
| 05 | step-05-document.md | Document fix, update tests |
**Flow:** 01 → 02 → 03 → 04 → 05
### Critical Rules
- **ALWAYS** reproduce the bug before investigating
- **ALWAYS** identify root cause before writing a fix
- **ALWAYS** create a test that catches the bug before fixing
- **ALWAYS** run regression checks after fixing
- **NEVER** refactor surrounding code in the same fix
- **NEVER** fix symptoms — fix the root cause
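To make the test-first rule concrete, a regression test can be this small. The function, the reported bug, and the expected value below are all hypothetical; the point is the shape, a test written to fail against the buggy version and kept after the fix:

```javascript
// Hypothetical bug report: formatPrice(1234.5) returned "1234.5" instead of "1,234.50".
// The fixed implementation, kept minimal (no surrounding refactor):
function formatPrice(value) {
  return value.toLocaleString('en-US', {
    minimumFractionDigits: 2,
    maximumFractionDigits: 2,
  });
}

// Regression test: written first so it fails against the buggy version,
// then kept permanently to catch reoccurrence.
if (formatPrice(1234.5) !== '1,234.50') {
  throw new Error('Regression: formatPrice lost its formatting fix');
}
```

The throw is deliberate: in a prototype without a test runner, a thrown error in the console is the failing test.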
---
## AFTER COMPLETION
1. Append a progress entry to `{output_folder}/_progress/00-design-log.md` under `## Progress`:
`### [date] — Bugfix: [what was fixed]`
2. Suggest re-running acceptance testing (Phase 5 [T]) to verify the fix


@@ -0,0 +1,89 @@
---
name: development
description: Write production code from approved specifications
---
# [D] Development — Write Production Code
**Goal:** Write production-quality code from approved specifications using structured agent collaboration.
**When to use:**
- An approved specification exists (page spec, feature spec, or component spec)
- Prototype has been validated (if prototyping was part of the process)
- The codebase and tech stack are established (this is NOT for greenfield project setup)
- You need to turn a spec into committed, tested, production-ready code
**When NOT to use:**
- No approved spec exists yet — use [A] Analysis or spec writing first
- You need to explore or understand an existing codebase — use [R] Reverse Engineering
- You are fixing a bug in existing code — use [F] Bugfixing
- You need a throwaway prototype to validate ideas — use [P] Prototyping
---
## CORE PRINCIPLES
1. **Spec-driven.** The approved specification is the source of truth. Every implementation decision traces back to the spec. If the spec is ambiguous, clarify before coding — do not guess.
2. **Incremental.** Implement one feature or component at a time. Commit after each meaningful unit of work. Never let uncommitted changes grow large.
3. **Test as you go.** Run tests after each significant change. Do not batch all testing to the end. A failing test discovered early is cheap; discovered late it is expensive.
4. **Follow existing patterns.** Match the codebase's conventions for file structure, naming, styling, state management, and error handling. Consistency with the existing code matters more than personal preference.
5. **Document deviations.** If you must deviate from the spec (technical constraint, discovered issue), document what you changed and why before moving on.
---
## REFERENCE MATERIAL
Guides in `./data/guides/` support this workflow:
| Guide | Use When |
|-------|----------|
| EXECUTION-PRINCIPLES.md | Core execution discipline (document before acting, sketch fidelity, plan-then-execute) |
| INLINE-TESTING-GUIDE.md | Self-verifying implementation with Puppeteer before presenting to user |
| SEO-VALIDATION-GUIDE.md | Public-facing pages that need SEO compliance |
| SESSION-PROTOCOL.md | Managing agent sessions and handoffs |
| FEEDBACK-PROTOCOL.md | Handling user feedback during development |
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
---
## STEPS
Execute steps in `./steps-d/`:
| Step | File | Purpose |
|------|------|---------|
| 01 | step-01-scope-and-plan.md | Read spec, identify work items, create implementation order |
| 02 | step-02-setup-environment.md | Verify environment, install dependencies, establish baseline |
| 03 | step-03-implement.md | Build features one at a time from the plan |
| 04 | step-04-verify.md | Walk through every acceptance criterion |
| 05 | step-05-finalize.md | Clean up, test suite, PR preparation |
---
## DESIGN LOG REPORTING POINTS
Append to the Design Loop Status table in `{output_folder}/_progress/00-design-log.md`:
**On entry (step 01 complete):** `| [Scenario slug] | [NN.X] | [Page name] | building | [YYYY-MM-DD] |`
**On completion (step 05 approved):** `| [Scenario slug] | [NN.X] | [Page name] | built | [YYYY-MM-DD] |`
## AFTER COMPLETION
Design log updated with `built` status (see above). Present the transition:
<output>
**"[page name]" is built!**
1. **Run acceptance testing** — validate against the specification
2. **Explore the next scenario step** — [next page name]
</output>


@@ -0,0 +1,64 @@
---
name: evolution
description: Add features to existing products through targeted changes
---
# [E] Evolution — Add Features to Existing Product
**Goal:** Incrementally add features to an existing product with minimal disruption.
**When to use:** Existing product needs new functionality. Changes should be targeted, not a complete rewrite.
---
## CORE PRINCIPLES
- **Backward compatible** — Existing functionality must keep working. Every change is verified against what already exists.
- **Feature flags if needed** — When a change is risky or requires staged rollout, use feature flags to decouple deployment from activation.
- **Incremental delivery** — Ship in small, verifiable increments. Each commit should leave the system in a working state.
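A feature flag gate can be as small as this sketch. The flag name and both code paths are hypothetical; the default of `false` is what keeps existing behavior intact until activation:

```javascript
// Central flag registry; defaults preserve existing behavior (backward compatible).
const FLAGS = { newCheckoutFlow: false };

function isEnabled(name) {
  return Boolean(FLAGS[name]);
}

// Call sites branch on the flag so deployment is decoupled from activation.
function renderCheckout() {
  return isEnabled('newCheckoutFlow') ? 'checkout-v2' : 'checkout-v1';
}
```

Flipping `newCheckoutFlow` to `true` activates the new path without touching call sites; removing the flag once the rollout is done is a small, mechanical cleanup.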
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### Essential Guides
- **[Execution Principles](data/guides/EXECUTION-PRINCIPLES.md)** — Document-first, plan-then-execute
- **[Session Protocol](data/guides/SESSION-PROTOCOL.md)** — Read dialog, verify plan, present status
- **[Inline Testing Guide](data/guides/INLINE-TESTING-GUIDE.md)** — Baseline capture before modifying existing features
---
## STEPS
Execute steps in `./steps-e/`:
| Step | File | Purpose |
|------|------|---------|
| 01 | step-01-scope-change.md | Define what changes vs what stays |
| 02 | step-02-analyze-impact.md | Analyze impact on existing code |
| 03 | step-03-plan-implementation.md | Plan incremental implementation |
| 04 | step-04-implement.md | Implement changes |
| 05 | step-05-verify-and-document.md | Verify, regression check, document |
**Flow:** 01 → 02 → 03 → 04 → 05
### Critical Rules
- **ALWAYS** map what is new vs what is modified vs what is untouched before coding
- **ALWAYS** capture baseline state of existing features before modifying them
- **ALWAYS** verify backward compatibility at each commit
- **ALWAYS** plan incremental commits — never one giant change
- **NEVER** break existing functionality to add new functionality
- **NEVER** skip impact analysis — surprises in production are expensive
---
## AFTER COMPLETION
1. Append a progress entry to `{output_folder}/_progress/00-design-log.md` under `## Progress`:
`### [date] — Evolution: [what was evolved/added]`
2. Suggest acceptance testing (Phase 5 [T]) to validate the changes


@@ -0,0 +1,84 @@
---
name: prototyping
description: Build interactive prototypes from specifications
---
# [P] Prototyping — Build Interactive Prototype from Specs
**Goal:** Enable non-technical designers to build production-ready code through structured AI collaboration.
**When to use:** Page specifications are complete and approved. Ready to build working implementations. Want iterative development with approval gates.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### Essential Guides
- **[Feedback Protocol](data/guides/FEEDBACK-PROTOCOL.md)** — Classify feedback before acting
- **[Session Protocol](data/guides/SESSION-PROTOCOL.md)** — Read dialog, verify plan, present status
- **[Execution Principles](data/guides/EXECUTION-PRINCIPLES.md)** — Document-first, sketch fidelity
---
## STEPS
Execute steps in `./steps-p/`:
| Step | File | Purpose |
|------|------|---------|
| 1 | 1-prototype-setup.md | Set up prototype environment |
| 2 | 2-scenario-analysis.md | Analyze scenario, identify views |
| 3 | 3-logical-view-breakdown.md | Break view into sections |
| 4a | 4a-announce-and-gather.md | Announce section, gather context |
| 4b | 4b-create-story-file.md | Create story file |
| 4c | 4c-implement-section.md | Implement the section |
| 4d | 4d-present-for-testing.md | Present for user testing |
| 4e | 4e-handle-issue.md | Handle reported issues |
| 4f | 4f-handle-improvement.md | Handle improvements |
| 4g | 4g-section-approved.md | Section approved, next |
| 5 | 5-finalization.md | Integration test, final approval |
**Flow:** 1 → 2 → 3 → [4a-4g loop per section] → 5
### Critical Rules
- **ALWAYS** complete Phase 1 setup before starting
- **ALWAYS** analyze scenario before selecting views
- **ALWAYS** use section-by-section approach
- **ALWAYS** get approval before next section
- **ALWAYS** create story files just-in-time (not upfront)
- **ALWAYS** verify with Puppeteer before presenting to user
- **ALWAYS** capture baseline before modifying existing features
---
## DESIGN LOG REPORTING POINTS
This workflow has TWO reporting points. Both append to the Design Loop Status table in `{output_folder}/_progress/00-design-log.md`:
**1. On entry (step 1 complete):** Append status `building`
```
| [Scenario slug] | [NN.X] | [Page name] | building | [YYYY-MM-DD] |
```
**2. On completion (step 5 approved):** Append status `built`
```
| [Scenario slug] | [NN.X] | [Page name] | built | [YYYY-MM-DD] |
```
Do NOT skip these updates. The design log drives Phase 4's adaptive dashboard.
## AFTER COMPLETION
Design log already updated with `built` status (see above). Present the transition:
<output>
**"[page name]" is built!**
1. **Run acceptance testing** — validate against the specification
2. **Explore the next scenario step** — [next page name]
</output>


@@ -0,0 +1,65 @@
---
name: reverse-engineering
description: Analyze any software or website to extract specifications and design patterns
---
# [R] Reverse Engineering — Any Software/Site → Specs & Extraction
**Goal:** Analyze existing software or public websites to extract page specifications, design systems, and architectural understanding.
**When to use:** Need to understand an existing product (yours or public). Want to create WDS specifications from existing software. Competitive analysis through design extraction. Migrating or rebuilding an existing product.
**Two modes:**
- **Internal:** Analyze your own codebase (source code access)
- **External:** Analyze any public website (browser-based, no source needed)
---
## CORE PRINCIPLES
- **Observe first, then extract** — Do not start generating specs from the first page you see. Explore the full scope of the target first, then systematically extract patterns. Premature extraction leads to incomplete and inconsistent results.
- **Respect intellectual property** — Reverse engineering is for learning and specification, not for copying proprietary code. Extract patterns, structures, and design tokens. Do not copy proprietary assets, fonts, or copyrighted content.
- **Focus on patterns, not pixel-perfect copy** — The goal is to understand the design system and page structures well enough to build something equivalent or better. Capture the rules and relationships, not individual pixel values.
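For design-token extraction, a browser-console sketch along these lines surfaces candidate values. The selector scope and the captured property are assumptions; the commented lines are meant to run in the target page's DevTools, not here:

```javascript
// Pure helper: dedupe and sort captured values into token candidates.
function uniqueValues(values) {
  return [...new Set(values)].sort();
}

// In the target page's console, one might gather colors like:
// const colors = [...document.querySelectorAll('*')]
//   .map(el => getComputedStyle(el).color);
// uniqueValues(colors) then lists the distinct colors actually in use.
console.log(uniqueValues(['#fff', '#2563eb', '#fff', '#ef4444']));
```

The short deduplicated list is the starting point for naming tokens; individual pixel values that appear only once are usually noise, not system.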
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### Essential Guides
- **[Execution Principles](data/guides/EXECUTION-PRINCIPLES.md)** — Document before acting, plan-then-execute
- **[Session Protocol](data/guides/SESSION-PROTOCOL.md)** — Read dialog, verify plan, present status
---
## STEPS
Execute steps in `./steps-r/`:
| Step | File | Purpose |
|------|------|---------|
| 01 | step-01-identify-target.md | Define target, access method, extraction goals |
| 02 | step-02-explore-and-capture.md | Explore structure, capture patterns and inventory |
| 03 | step-03-generate-specs.md | Generate WDS-format page specifications |
| 04 | step-04-extract-design-system.md | Extract design tokens and component catalog |
**Flow:** 01 → 02 → 03 → 04
### Critical Rules
- **ALWAYS** explore the full target before extracting — do not start with the first page
- **ALWAYS** document observations before generating specs
- **ALWAYS** generate output in `document_output_language`
- **ALWAYS** produce WDS-compatible artifacts that can feed into prototyping
- **NEVER** copy proprietary code, assets, or copyrighted content
---
## AFTER COMPLETION
1. Append a progress entry to `{output_folder}/_progress/00-design-log.md` under `## Progress`:
`### [date] — Reverse Engineering: [what was extracted]`
2. Suggest next action: feed specs into prototyping, analyze further, or start development


@@ -0,0 +1,97 @@
---
name: agentic-development
description: AI-assisted development, testing, and reverse engineering through structured agent collaboration
web_bundle: true
---
# Phase 5: Agentic Development
**Goal:** AI-assisted software development, testing, analysis, and reverse engineering through structured agent collaboration.
**Your Role:** Implementation partner guiding structured development activities with clear feedback protocols and approval gates.
---
## WORKFLOW ARCHITECTURE
Phase 5 is **menu-driven**, not linear. The user picks an activity.
### Core Principles
- **Activity-Based**: Pick the right activity for the current need
- **Non-Linear**: Start anywhere, switch between activities freely
- **Design Log Driven**: Every activity reads/updates the design log for state tracking
- **Specification as Truth**: All work traces back to approved specifications
### Step Processing Rules
1. **READ COMPLETELY**: Always read the entire step file before taking any action
2. **FOLLOW SEQUENCE**: Execute all sections in order within a step
3. **WAIT FOR INPUT**: Halt at menus and wait for user selection
4. **SAVE STATE**: Update design log when completing steps
---
## INITIALIZATION
### 1. Configuration Loading
Load and read full config from `{project-root}/_bmad/wds/config.yaml` and resolve:
- `project_name`, `output_folder`, `user_name`
- `communication_language`, `document_output_language`
### 2. Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### 3. Activity Menu
Present:
```
What would you like to do?
[P] Prototyping — Build interactive prototype from specs
[D] Development — Write production code
[F] Bugfixing — Fix bugs in existing code
[E] Evolution — Add features to existing product
[A] Analysis — Understand your own codebase
[R] Reverse Engineering — Any software/site → specs & extraction
[T] Acceptance Testing — Design & run tests from spec criteria
```
### Activity Routing
| Choice | Workflow File | Steps Folder |
|--------|--------------|--------------|
| [P] | workflow-prototyping.md | steps-p/ |
| [D] | workflow-development.md | steps-d/ |
| [F] | workflow-bugfixing.md | steps-f/ |
| [E] | workflow-evolution.md | steps-e/ |
| [A] | workflow-analysis.md | steps-a/ |
| [R] | workflow-reverse-engineering.md | steps-r/ |
| [T] | workflow-acceptance-testing.md | steps-t/ |
---
## REFERENCE CONTENT
| Location | Purpose |
|----------|---------|
| `data/guides/` | Development guides, protocols, analysis |
| `data/testing-guide.md` | Acceptance testing guide |
| `data/test-result-templates.md` | Test result templates |
| `data/issue-templates.md` | Issue reporting templates |
| `templates/` | Page, story, work file, roadmap templates |
---
## OUTPUT
Each activity writes its own output into the project structure; see the individual workflow files for locations.
---
## AFTER COMPLETION
Each activity workflow has its own explicit design log reporting points. See the individual workflow files for details on what to log and where.