docs: update all documentation and add AI tooling configs

- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 13:29:03 -04:00
parent c5b377e788
commit 647cbec54f
3246 changed files with 479789 additions and 983 deletions


@@ -0,0 +1,409 @@
# Context Templates
Templates for gathering context in Phase 8 (Product Evolution).
---
## Limited Project Brief Template
**File:** `A-Project-Brief/limited-brief.md`
```markdown
# Limited Project Brief: [Product Name]
**Type:** Existing Product Improvement
**Date:** 2024-12-09
**Designer:** [Your name]
---
## Strategic Challenge
**Problem:**
[What specific problem are we solving?]
Example:
"User onboarding has 60% drop-off rate. Users don't understand
the family concept and abandon during setup."
**Impact:**
[Why does this matter?]
Example:
"- 60% of new users never reach the dashboard
- Acquisition cost is wasted on users who drop off
- Growth is limited by poor onboarding
- Estimated revenue loss: $50K/month"
**Root Cause:**
[Why is this happening?]
Example:
"- 'Family' concept is unclear (Swedish cultural context)
- Too many steps feel like homework
- No sense of progress or achievement
- Value proposition not clear upfront"
---
## Why WDS Designer?
**Why bring in a linchpin designer now?**
Example:
"We need expert UX design to:
- Understand user psychology and motivation
- Redesign onboarding flow for clarity
- Balance business goals with user needs
- Improve completion rates to 80%+"
---
## Scope
**What are we changing?**
Example:
"Redesign onboarding flow (4 screens):
- Welcome screen (update copy and visuals)
- Family setup (simplify and clarify concept)
- Dog addition (make it optional for MVP)
- Success state (add celebration and next steps)"
**What are we NOT changing?**
Example:
"- Tech stack: React Native + Supabase (already built)
- Brand: Colors and logo are fixed
- Other features: Only touch onboarding
- Timeline: 2 weeks to design + implement"
---
## Success Criteria
**How will we measure success?**
Example:
"- Onboarding completion rate > 80% (from 40%)
- Time to complete < 2 minutes
- User satisfaction score > 4.5/5
- 30-day retention > 60%"
---
## Constraints
**What can't we change?**
Example:
"- Tech stack: React Native + Supabase
- Brand: Colors, logo, typography fixed
- Timeline: 2 weeks total
- Budget: No additional development resources
- Scope: Only onboarding, don't touch dashboard"
---
## Timeline
**Week 1:** Design + Specifications
**Week 2:** Implementation + Validation
---
## Stakeholders
**Product Manager:** [Name]
**Developer:** [Name]
**Designer (WDS):** [Your name]
```
---
## Improvement Opportunity Template
**File:** `improvements/IMP-XXX-description.md`
```markdown
# Improvement: [Short Description]
**ID:** IMP-XXX
**Type:** [Feature Enhancement | Bug Fix | Performance | UX Improvement]
**Priority:** [High | Medium | Low]
**Status:** Identified
**Date:** 2024-12-09
---
## Opportunity
**What are we improving?**
Example:
"Feature X has low engagement (15% usage) and high drop-off (40%).
User feedback indicates confusion about how to use it."
**Why does this matter?**
Example:
"Feature X is a core value proposition. Low usage means users
aren't getting full value from the product. This impacts
retention and satisfaction."
---
## Data
**Analytics:**
- Feature X usage: 15% (target: 60%)
- Drop-off at Feature X: 40%
- Time spent: 30 seconds (too short)
**User Feedback:**
- "I don't understand how to use Feature X" (12 mentions)
- "Feature X seems broken" (3 mentions)
**Hypothesis:**
Users don't understand how to use Feature X because there's
no onboarding or guidance.
---
## Proposed Solution
**What will we change?**
Example:
"Add inline onboarding to Feature X:
- Tooltip on first use explaining purpose
- Step-by-step guide for first action
- Success celebration when completed
- Help button for future reference"
**Expected Impact:**
- Feature X usage: 15% → 60%
- Drop-off: 40% → 10%
- User satisfaction: +1.5 points
---
## Effort Estimate
**Design:** 1 day
**Implementation:** 1 day
**Testing:** 0.5 days
**Total:** 2.5 days
---
## Success Metrics
**How will we measure success?**
- Feature X usage > 60% (within 2 weeks)
- Drop-off < 10%
- Confusion mentions in user feedback decrease
- Support tickets about Feature X decrease
---
## Timeline
**Week 1:** Design + Implement + Test
**Week 2:** Monitor impact
---
## Next Steps
1. Design inline onboarding (Step 03)
2. Create Design Delivery (Step 04)
3. Hand off to BMad (Step 05)
4. Validate implementation (Step 06)
5. Monitor impact (Step 07)
```
---
## First Impressions Template
```markdown
# First Impressions: [Product Name]
**Date:** 2024-12-09
**Context:** First-time user, no prior knowledge
## Onboarding
- Step 1: [What happened? How did it feel?]
- Step 2: [What happened? How did it feel?]
- Confusion points: [Where was I confused?]
- Delights: [What felt great?]
## Core Features
- Feature X: [Experience]
- Feature Y: [Experience]
## Overall Impression
[What's your gut feeling about this product?]
```
---
## Focused Trigger Map Template
**File:** `B-Trigger-Map/focused-trigger-map.md`
```markdown
# Focused Trigger Map: [Challenge Name]
**Context:** Existing product improvement
**Focus:** [Specific feature/flow you're improving]
---
## Trigger Moment
**When does this happen?**
Example:
"User completes signup and reaches dashboard for first time"
---
## Current Experience
**What happens now?**
Example:
"1. Welcome screen (confusing value prop)
2. Family setup (unclear what 'family' means)
3. Dog addition (forced, feels like homework)
4. 60% drop off before reaching dashboard"
---
## Desired Outcome
**What should happen?**
Example:
"User understands value, completes setup smoothly,
reaches dashboard feeling confident and excited"
---
## Barriers
**What's preventing the desired outcome?**
Example:
"- Unclear value proposition
- 'Family' concept is confusing (cultural context)
- Forced dog addition feels like work
- No sense of progress or achievement
- No celebration of completion"
---
## Solution Focus
**What will we change to remove barriers?**
Example:
"- Clarify value prop on welcome screen
- Simplify family concept explanation
- Make dog addition optional
- Add progress indicators
- Add celebration on completion"
```
---
## Analytics Deep Dive Template
```markdown
# Analytics: Feature X Improvement
**Date Range:** Last 30 days
**Focus:** Feature X engagement
## Usage Metrics
- Users who saw Feature X: 1,200
- Users who used Feature X: 180 (15%)
- Users who completed action: 90 (7.5%)
- Drop-off point: Step 2 (40% drop off)
## User Segments
- New users: 10% usage
- Returning users: 20% usage
- Power users: 60% usage
## Insight
New and returning users struggle with Feature X.
Power users understand it. Suggests onboarding gap.
```
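The percentages in this template are straight ratios of the raw counts. A small helper makes the arithmetic explicit — an illustrative sketch, not part of any WDS tooling; `funnelMetrics` and its field names are invented here:

```typescript
// Compute funnel percentages from raw counts, as in the analytics template.
// funnelMetrics is a hypothetical helper name, not a WDS API.
interface FunnelCounts {
  saw: number;       // users who saw the feature (e.g. 1,200)
  used: number;      // users who used it (e.g. 180)
  completed: number; // users who completed the action (e.g. 90)
}

function funnelMetrics({ saw, used, completed }: FunnelCounts) {
  // Percentage with one decimal place.
  const pct = (part: number, whole: number) =>
    Math.round((part / whole) * 1000) / 10;
  return {
    usageRate: pct(used, saw),           // 180 / 1,200 → 15
    completionRate: pct(completed, saw), // 90 / 1,200 → 7.5
    // Drop-off from first use to completion (distinct from the
    // per-step 40% figure in the template).
    dropOffUseToComplete: pct(used - completed, used),
  };
}
```

With the template's numbers, `funnelMetrics({ saw: 1200, used: 180, completed: 90 })` reproduces the 15% and 7.5% figures above.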
---
## User Feedback Analysis Template
```markdown
# User Feedback: Feature X
**Date Range:** Last 30 days
**Total Mentions:** 24
## Themes
### Confusion (12 mentions)
- "I don't understand how to use Feature X"
- "Feature X seems broken"
- "What is Feature X for?"
### Requests (8 mentions)
- "Can Feature X do Y?"
- "Wish Feature X had Z"
### Praise (4 mentions)
- "Once I figured it out, Feature X is great!"
- "Feature X saves me time"
## Insight
Users who figure out Feature X love it.
But most users never figure it out.
Onboarding is the problem.
```
---
## Context Synthesis Template
```markdown
# Context Synthesis: [Improvement Name]
## What We Know
1. [Key insight from analytics]
2. [Key insight from user feedback]
3. [Key insight from competitive analysis]
4. [Key insight from original design]
## Root Cause
[Why is this problem happening?]
## Hypothesis
[What do we believe will solve it?]
## Validation Plan
[How will we know if we're right?]
```


@@ -0,0 +1,357 @@
# Delivery Templates
Templates for Design Deliveries and Test Scenarios in Phase 8 (Product Evolution).
---
## Design Delivery Template (Small Scope)
**File:** `deliveries/DD-XXX-description.yaml`
```yaml
delivery:
  id: 'DD-XXX'
  name: '[Short descriptive name]'
  type: 'incremental_improvement' # vs "complete_flow" for new features
  scope: 'update' # vs "new" for new features
  version: 'v2.0'
  previous_version: 'v1.0'
  created_at: '2024-12-09T14:00:00Z'
  designer: '[Your name]'
  status: 'ready_for_handoff'

  # What's the improvement?
  improvement:
    summary: |
      [2-3 sentence summary of what's changing and why]
      Example:
      "Adding inline onboarding to Feature X to improve user understanding
      and increase usage from 15% to 60%. Analytics show 40% drop-off due
      to confusion. This update adds tooltips, step-by-step guidance, and
      success celebration."
    problem: |
      [What problem does this solve?]
      Example:
      "Feature X has low engagement (15% usage) and high drop-off (40%).
      User feedback indicates confusion about how to use it. 12 support
      tickets mention 'I don't understand Feature X'."
    solution: |
      [What's the solution?]
      Example:
      "Add inline onboarding that appears on first use:
      - Tooltip explaining Feature X purpose
      - Step-by-step guide for first action
      - Success celebration when completed
      - Help button for future reference"
    expected_impact: |
      [What will improve?]
      Example:
      "- Feature X usage: 15% → 60%
      - Drop-off: 40% → 10%
      - Support tickets: -80%
      - User satisfaction: +1.5 points"

  # What's changing?
  changes:
    scope:
      screens_affected:
        - 'Feature X main screen'
        - 'Feature X onboarding overlay'
      features_affected:
        - 'Feature X interaction flow'
      components_new:
        - id: 'cmp-tooltip-001'
          name: 'Inline Tooltip'
          file: 'D-Design-System/03-Atomic-Components/Tooltips/Tooltip-Inline.md'
      components_modified:
        - id: 'cmp-btn-001'
          name: 'Primary Button'
          changes: 'Added help icon variant'
          file: 'D-Design-System/03-Atomic-Components/Buttons/Button-Primary.md'
      components_unchanged:
        - 'All other components remain as-is'
    what_stays_same:
      - 'Brand colors and typography'
      - 'Core layout structure'
      - 'Navigation pattern'
      - 'Data model'
      - 'Tech stack'

  # Design artifacts
  design_artifacts:
    specifications:
      - path: 'C-UX-Scenarios/XX-feature-x-update/Frontend/specifications.md'
        description: 'Updated Feature X specifications'
      - path: 'C-UX-Scenarios/XX-feature-x-update/change-scope.md'
        description: "What's changing vs staying"
      - path: 'C-UX-Scenarios/XX-feature-x-update/before-after.md'
        description: 'Before/after comparison'
    components:
      - path: 'D-Design-System/03-Atomic-Components/Tooltips/Tooltip-Inline.md'
        description: 'New inline tooltip component'

  # Technical requirements
  technical_requirements:
    frontend:
      - 'Implement inline tooltip component'
      - 'Add first-use detection logic'
      - 'Implement step-by-step guide'
      - 'Add success celebration animation'
      - 'Add help button with persistent access'
      - 'Store dismissal state in user preferences'
    backend:
      - 'Add user preference field: feature_x_onboarding_completed'
      - 'API endpoint to save dismissal state'
    data:
      - 'User preferences table: add feature_x_onboarding_completed (boolean)'
    integrations:
      - 'Analytics: Track onboarding completion'
      - 'Analytics: Track help button usage'

  # Acceptance criteria
  acceptance_criteria:
    - id: 'AC-001'
      description: 'Inline tooltip appears on first use of Feature X'
      verification: 'Open Feature X as new user, tooltip appears'
    - id: 'AC-002'
      description: 'Step guide walks user through first action'
      verification: 'Follow guide, complete first action successfully'
    - id: 'AC-003'
      description: 'Success celebration appears on completion'
      verification: 'Complete first action, celebration appears'
    - id: 'AC-004'
      description: "Onboarding doesn't appear on subsequent uses"
      verification: 'Use Feature X again, no onboarding shown'
    - id: 'AC-005'
      description: 'Help button provides access to guide anytime'
      verification: 'Click help button, guide appears'
    - id: 'AC-006'
      description: 'Dismissal state persists across sessions'
      verification: 'Dismiss, logout, login, onboarding not shown'

  # Testing guidance
  testing_guidance:
    test_scenario_file: 'test-scenarios/TS-XXX.yaml'
    key_tests:
      - 'First-time user experience (happy path)'
      - 'Dismissal and persistence'
      - 'Help button access'
      - 'Edge case: Multiple devices'
      - 'Edge case: Cleared cache'
    success_criteria:
      - 'All acceptance criteria pass'
      - 'No regressions in existing functionality'
      - 'Performance impact < 50ms'
      - 'Accessibility: Screen reader compatible'

  # Metrics and validation
  metrics:
    baseline:
      - metric: 'Feature X usage rate'
        current: '15%'
        target: '60%'
      - metric: 'Drop-off rate'
        current: '40%'
        target: '10%'
      - metric: 'Support tickets (Feature X)'
        current: '12/month'
        target: '2/month'
      - metric: 'User satisfaction'
        current: '3.2/5'
        target: '4.5/5'
    measurement_period: '2 weeks after release'
    success_threshold:
      - 'Feature X usage > 50% (minimum)'
      - 'Drop-off < 15% (minimum)'
      - 'Support tickets < 5/month'
    rollback_criteria:
      - 'Feature X usage < 20% after 2 weeks'
      - 'Drop-off > 35% after 2 weeks'
      - 'Critical bugs reported'

  # Effort estimate
  effort:
    design: '1 day'
    frontend: '1 day'
    backend: '0.5 days'
    testing: '0.5 days'
    total: '3 days'
    complexity: 'Low'

  # Timeline
  timeline:
    design_complete: '2024-12-09'
    handoff_date: '2024-12-09'
    development_start: '2024-12-10'
    development_complete: '2024-12-12'
    testing_complete: '2024-12-13'
    release_date: '2024-12-13'
    measurement_end: '2024-12-27'

  # Handoff
  handoff:
    architect: '[BMad Architect name]'
    developer: '[BMad Developer name]'
    handoff_dialog_required: false # Small update, dialog optional
    notes: |
      Small, focused improvement. Specifications are clear.
      Dialog available if questions arise.

  # Related
  related:
    improvement_file: 'improvements/IMP-XXX-feature-x-onboarding.md'
    analytics_report: 'analytics/feature-x-usage-2024-11.md'
    user_feedback: 'feedback/feature-x-confusion-2024-11.md'
    original_delivery: 'deliveries/DD-XXX-feature-x.yaml' # If applicable
```
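The "first-use detection" and dismissal-persistence requirements in this template can be sketched as follows. The preference key `feature_x_onboarding_completed` comes from the template itself; `PreferenceStore`, `MemoryStore`, and the function names are hypothetical stand-ins for whatever persistence layer the product already has:

```typescript
// Hypothetical persistence interface; in the real product this would wrap
// the existing user-preferences API (e.g. a backend endpoint), so the
// state survives logout/login and, server-side, multiple devices.
interface PreferenceStore {
  get(key: string): Promise<boolean | undefined>;
  set(key: string, value: boolean): Promise<void>;
}

// Field name taken from the delivery template's data requirements.
const ONBOARDING_KEY = 'feature_x_onboarding_completed';

// Show onboarding only if the user has never completed/dismissed it
// (covers AC-001 and AC-004).
async function shouldShowOnboarding(prefs: PreferenceStore): Promise<boolean> {
  return !(await prefs.get(ONBOARDING_KEY));
}

// Persist dismissal so it survives sessions (covers AC-006).
async function dismissOnboarding(prefs: PreferenceStore): Promise<void> {
  await prefs.set(ONBOARDING_KEY, true);
}

// In-memory store, useful for tests; real code would call the backend.
class MemoryStore implements PreferenceStore {
  private data = new Map<string, boolean>();
  async get(key: string) { return this.data.get(key); }
  async set(key: string, value: boolean) { this.data.set(key, value); }
}
```

Keeping the store behind an interface is what makes the "cleared cache" and "multiple devices" edge cases testable: the UI logic stays the same while the backing store changes.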
---
## Test Scenario Template (Incremental Improvement)
**File:** `test-scenarios/TS-XXX-description.yaml`
```yaml
test_scenario:
  id: 'TS-XXX'
  name: '[Update Name] Validation'
  type: 'incremental_improvement'
  delivery_id: 'DD-XXX'
  created_at: '2024-12-09T14:00:00Z'

  # Focus on what changed
  test_focus:
    - 'New onboarding flow'
    - 'Dismissal persistence'
    - 'Help button access'
    - 'No regressions'

  # Happy path (new functionality)
  happy_path:
    - id: 'HP-001'
      name: 'First-time user sees onboarding'
      steps:
        - action: 'Open Feature X as new user'
          expected: 'Inline tooltip appears'
        - action: "Read tooltip, tap 'Next'"
          expected: 'Step guide appears'
        - action: 'Follow guide, complete action'
          expected: 'Success celebration appears'
        - action: 'Dismiss celebration'
          expected: 'Feature X is ready to use'

  # Regression testing (existing functionality)
  regression_tests:
    - id: 'REG-001'
      name: 'Existing Feature X functionality unchanged'
      steps:
        - action: 'Use Feature X core functionality'
          expected: 'Works exactly as before'

  # Edge cases
  edge_cases:
    - id: 'EC-001'
      name: 'Dismissal persists across sessions'
      steps:
        - action: 'Dismiss onboarding'
        - action: 'Logout and login'
        - action: 'Open Feature X'
          expected: 'Onboarding not shown'

  # Accessibility
  accessibility:
    - id: 'A11Y-001'
      name: 'Screen reader announces onboarding'
      checks:
        - 'Tooltip announced correctly'
        - 'Guide steps announced'
        - 'Help button labeled'
```
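Since deliveries and test scenarios are structured YAML, a quick structural check before handoff can catch missing fields. This is an illustrative sketch operating on a plain parsed object (no YAML parsing shown); the key list and function name are assumptions, not a WDS rule:

```typescript
// Minimal structural check for a parsed delivery document.
// The required keys mirror the delivery template above.
const REQUIRED_DELIVERY_KEYS = [
  'id', 'name', 'type', 'scope', 'version', 'status',
] as const;

// Returns the names of required keys that are absent or empty.
function missingDeliveryKeys(delivery: Record<string, unknown>): string[] {
  return REQUIRED_DELIVERY_KEYS.filter(
    (key) => delivery[key] === undefined || delivery[key] === '',
  );
}
```

An empty result means the document is structurally ready; anything else is a list of fields to fill in before handing off to BMad.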
---
## Validation Report Template
**File:** `test-reports/TR-XXX-DD-XXX-validation.md`
```markdown
# Validation Report: DD-XXX [Name]
**Date:** 2024-12-13
**Tester:** [Your name]
**Build:** v2.1.0
**Type:** Design Delivery Validation (Incremental Improvement)
---
## Result
**Status:** [PASS | FAIL]
---
## New Functionality
### Test HP-001: [Name]
- Status: [PASS | FAIL]
- Notes: [Any observations]
[Repeat for each new functionality test]
---
## Regression Testing
### Test REG-001: [Name]
- Status: [PASS | FAIL]
- Notes: [Any observations]
[Repeat for each regression test]
---
## Issues Found
**Total:** [Number]
[If any issues, list them]
---
## Recommendation
[APPROVED | NOT APPROVED]
[Brief explanation]
```


@@ -0,0 +1,312 @@
# Design Templates
Templates for designing incremental updates in Phase 8 (Product Evolution).
---
## Change Scope Template
**File:** `C-UX-Scenarios/XX-update-name/change-scope.md`
```markdown
# Change Scope: [Update Name]
## What's Changing
### Screen/Feature: [Name]
**Changes:**
- [ ] Copy/messaging
- [ ] Visual hierarchy
- [ ] Component usage
- [ ] User flow
- [ ] Interaction pattern
- [ ] Data structure
**Specific changes:**
1. [Specific change 1]
2. [Specific change 2]
3. [Specific change 3]
---
## What's Staying
**Unchanged:**
- ✓ Brand colors
- ✓ Typography
- ✓ Core layout structure
- ✓ Navigation pattern
- ✓ Tech stack
- ✓ Data model
**Rationale:**
[Why are we keeping these unchanged?]
Example:
"Brand colors and typography are fixed by brand guidelines.
Core layout structure works well and changing it would
require extensive development. We're focusing on content
and interaction improvements only."
```
---
## Update Specification Template
**File:** `C-UX-Scenarios/XX-update-name/Frontend/specifications.md`
```markdown
# Frontend Specification: [Screen Name] UPDATE
**Type:** Incremental Update
**Version:** v2.0
**Previous Version:** v1.0 (see: archive/v1.0-specifications.md)
---
## Change Summary
**What's different from v1.0?**
1. [Change 1]: [Brief description]
2. [Change 2]: [Brief description]
3. [Change 3]: [Brief description]
---
## Updated Screen Structure
### Before (v1.0)
[Describe old structure]
### After (v2.0)
[Describe new structure]
---
## Component Changes
### New Components
- [Component name]: [Purpose]
### Modified Components
- [Component name]: [What changed?]
### Removed Components
- [Component name]: [Why removed?]
### Unchanged Components
- [Component name]: [Still used as-is]
---
## Interaction Changes
### Before (v1.0)
1. User does X
2. System responds Y
3. User sees Z
### After (v2.0)
1. User does X
2. **NEW:** System shows guidance
3. System responds Y
4. **NEW:** System celebrates success
5. User sees Z
---
## Copy Changes
### Before (v1.0)
"[Old copy]"
### After (v2.0)
"[New copy]"
**Rationale:** [Why this change?]
---
## Visual Changes
### Before (v1.0)
- Hierarchy: [Description]
- Emphasis: [Description]
- Spacing: [Description]
### After (v2.0)
- Hierarchy: [What changed?]
- Emphasis: [What changed?]
- Spacing: [What changed?]
---
## Success Metrics
**How will we measure if this update works?**
- Metric 1: [Before] → [Target]
- Metric 2: [Before] → [Target]
- Metric 3: [Before] → [Target]
**Measurement period:** 2 weeks after release
```
---
## New Component Template
**File:** `D-Design-System/03-Atomic-Components/[Category]/[Component-Name].md`
```markdown
# Component: [Name]
**ID:** [cmp-XXX]
**Type:** [Button | Input | Card | etc.]
**Status:** New (for Update DD-XXX)
**Version:** 1.0
---
## Purpose
**Why this component?**
Example:
"Inline tooltip to guide users through Feature X on first use.
Needed because analytics show 40% drop-off due to confusion."
---
## Specifications
[Standard component spec format]
---
## Usage
**Where used:**
- Screen X: [Context]
- Screen Y: [Context]
**When shown:**
- First time user sees Feature X
- Can be dismissed
- Doesn't show again after dismissal
```
---
## Before/After Comparison Template
**File:** `C-UX-Scenarios/XX-update-name/before-after.md`
```markdown
# Before/After: [Update Name]
## Before (v1.0)
**Screenshot/Description:**
[What it looked like before]
**User Experience:**
- User sees: [Description]
- User feels: [Description]
- Problem: [What was wrong?]
**Metrics:**
- Usage: 15%
- Drop-off: 40%
- Satisfaction: 3.2/5
---
## After (v2.0)
**Screenshot/Description:**
[What it looks like after]
**User Experience:**
- User sees: [Description]
- User feels: [Description]
- Improvement: [What's better?]
**Expected Metrics:**
- Usage: 60% (target)
- Drop-off: 10% (target)
- Satisfaction: 4.5/5 (target)
---
## Key Changes
1. **[Change 1]**
- Before: [Description]
- After: [Description]
- Impact: [Expected improvement]
2. **[Change 2]**
- Before: [Description]
- After: [Description]
- Impact: [Expected improvement]
3. **[Change 3]**
- Before: [Description]
- After: [Description]
- Impact: [Expected improvement]
```
---
## Hypothesis Validation Template
```markdown
# Hypothesis Validation: [Update Name]
## Hypothesis
[What do we believe will happen?]
Example:
"If we add inline onboarding to Feature X, usage will
increase from 15% to 60% because users will understand
how to use it."
## Assumptions
1. [Assumption 1]
2. [Assumption 2]
3. [Assumption 3]
## Risks
1. [Risk 1]: [Mitigation]
2. [Risk 2]: [Mitigation]
## Success Criteria
- [Metric 1]: [Current] → [Target]
- [Metric 2]: [Current] → [Target]
- [Timeframe]: 2 weeks after release
## Failure Criteria
If after 2 weeks:
- [Metric 1] < [Threshold]: Rollback or iterate
- [Metric 2] < [Threshold]: Rollback or iterate
```
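The success and failure criteria above reduce to comparing measured metrics against min/max thresholds at the end of the measurement window. A sketch of that comparison — the type and function names are invented for illustration:

```typescript
type Verdict = 'success' | 'iterate_or_rollback';

interface MetricCheck {
  name: string;
  measured: number;  // value observed after the measurement period
  threshold: number; // target from the success/failure criteria
  // 'min' = measured must be at least threshold (e.g. usage rate);
  // 'max' = measured must be at most threshold (e.g. drop-off rate).
  direction: 'min' | 'max';
}

// Any single failed threshold means the hypothesis did not hold.
function evaluate(checks: MetricCheck[]): Verdict {
  const failed = checks.some((c) =>
    c.direction === 'min' ? c.measured < c.threshold : c.measured > c.threshold,
  );
  return failed ? 'iterate_or_rollback' : 'success';
}
```

Encoding the direction per metric avoids the classic mistake of comparing a drop-off rate the same way as a usage rate.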
---
## Design Self-Review Checklist
- [ ] Does this solve the root cause?
- [ ] Is this the smallest change that could work?
- [ ] Does this align with existing design system?
- [ ] Is this technically feasible?
- [ ] Can we measure the impact?
- [ ] Does this create new problems?
- [ ] Have we considered edge cases?


@@ -0,0 +1,929 @@
# Phase 8: Product Evolution
**Jump into an existing product to make strategic improvements**
---
## 🔑 Key Point: You Still Create a Project Brief!
**Brownfield projects (existing products) still need a Project Brief - just adapted to focus on:**
- ✅ The strategic challenge you're solving
- ✅ The scope of changes
- ✅ Success criteria
- ✅ Constraints
**You're not skipping Phase 1 - you're adapting it to the existing product context.**
---
## Two Entry Points to WDS
### **Entry Point 1: New Product (Phases 1-7) - Greenfield + Kaikaku**
Starting from scratch, designing complete user flows
**Terminology:**
- **Greenfield:** Building from scratch with no existing constraints
- **Kaikaku (改革):** Revolutionary change, complete transformation
### **Entry Point 2: Existing Product (Phase 8) - Brownfield + Kaizen**
Jumping into an existing product to make strategic changes
**Terminology:**
- **Brownfield:** Working within existing system and constraints
- **Kaizen (改善):** Continuous improvement, small incremental changes
**This phase is for:**
- Existing products that need strategic improvements
- Products where you're brought in as a "linchpin designer"
- Situations where you're not designing complete flows from scratch
- Making targeted changes to existing screens and features
---
## Purpose
When joining an existing product, you:
1. Focus on strategic challenges (not complete redesign)
2. Make targeted improvements to existing screens
3. Add new features incrementally
4. Package changes as Design Deliveries (small scope)
5. Work within existing constraints
**This is a different workflow** - you're not designing complete flows, you're making critical updates to an existing system.
---
## Phase 8 Workflow (Existing Product)
```
Existing Product
        ↓
Strategic Challenge Identified
        ↓
┌──────────────────────────────────┐
│ Step 01: Project Brief (Adapted) │
│ - What strategic challenge?      │
│ - What are we trying to solve?   │
│ - Why bring in WDS designer?     │
│ - What's the scope?              │
└──────────────────────────────────┘
        ↓
┌──────────────────────────────────┐
│ Step 02: Existing Context        │
│ - Upload business goals          │
│ - Upload target group material   │
│ - Print out trigger map          │
│ - Understand existing product    │
└──────────────────────────────────┘
        ↓
┌──────────────────────────────────┐
│ Step 03: Critical Updates        │
│ - Design targeted changes        │
│ - Update existing screens        │
│ - Add strategic features         │
│ - Focus on solving challenge     │
└──────────────────────────────────┘
        ↓
┌──────────────────────────────────┐
│ Step 04: Design Delivery         │
│ → [Touch Point 2: WDS → BMad]    │
│ Hand off changes (DD-XXX)        │
└──────────────────────────────────┘
        ↓
┌──────────────────────────────────┐
│ Step 05: Validation              │
│ ← [Touch Point 3: BMad → WDS]    │
│ Designer validates               │
└──────────────────────────────────┘
        ↓
✅ Deploy Changes
(Repeat for next strategic challenge)
```
---
## Project Setup: Choosing Your Entry Point
**During project initialization, you'll be asked:**
```
Which type of project are you working on?
1. New Product
→ Start with Phase 1 (Project Brief)
→ Design complete user flows from scratch
→ Full WDS workflow (Phases 1-7)
2. Existing Product
→ Start with Phase 8 (Product Evolution)
→ Make strategic improvements to existing product
→ Focused on critical updates, not complete redesign
```
**If you choose "Existing Product" (Brownfield):**
- **Phase 1 (Project Brief):** Adapted - focus on strategic challenge, not full vision
- **Phase 2 (Trigger Map):** Optional - print out focused trigger map if needed
- **Phase 3 (Platform Requirements):** Skip - tech stack already decided
- **Phase 4-5:** Adapted - update existing screens, not complete flows
- **Handover & Testing:** Same - deliveries (Phase 4 [H]) and validation (Phase 5 [T]) work the same way
---
## Step 01: Project Brief (Adapted for Brownfield)
**IMPORTANT: You still create a Project Brief - just adapted to the existing product context.**
**Brownfield vs Greenfield:**
- **Greenfield (New Product):** Full Project Brief covering vision, goals, stakeholders, constraints
- **Brownfield (Existing Product):** Focused Project Brief covering the strategic challenge and scope
**You're not skipping the Project Brief - you're adapting it to focus on:**
### **The Strategic Challenge**
```markdown
# Limited Project Brief: Existing Product
## Strategic Challenge
What specific problem are we solving?
Example:
"User onboarding has 60% drop-off rate. Users don't understand
the family concept and abandon during setup."
## Why WDS Designer?
Why bring in a linchpin designer now?
Example:
"We need expert UX design to redesign the onboarding flow and
improve completion rates to 80%+."
## Scope
What are we changing?
Example:
"Redesign onboarding flow (4 screens):
- Welcome screen (update copy and visuals)
- Family setup (simplify and clarify)
- Dog addition (make it optional for MVP)
- Success state (add celebration)"
## Success Criteria
How will we measure success?
Example:
"- Onboarding completion rate > 80%
- Time to complete < 2 minutes
- User satisfaction score > 4.5/5"
## Constraints
What can't we change?
Example:
"- Tech stack: React Native + Supabase (already built)
- Brand: Colors and logo are fixed
- Timeline: 2 weeks to design + implement
- Scope: Only onboarding, don't touch other features"
```
---
## Step 02: Existing Context
**Upload and review existing materials:**
### **Business Goals**
```
Upload: business-goals.pdf
Review: What's the company trying to achieve?
```
### **Target Group Material**
```
Upload: user-personas.pdf
Upload: user-research.pdf
Review: Who are the users? What do they need?
```
### **Print Out Trigger Map**
```
Based on existing materials, create a focused trigger map:
- What triggers bring users to this feature?
- What outcomes are they seeking?
- What's currently failing?
```
**Example (Focused Trigger Map):**
```markdown
# Trigger Map: Onboarding Improvement
## Trigger Moment
User downloads app and opens it for the first time
## Current Experience
1. Welcome screen (confusing value prop)
2. Login/Signup choice (too many options)
3. Family setup (unclear what "family" means)
4. Dog addition (forced, feels like homework)
5. 60% drop off before reaching dashboard
## Desired Outcome
User understands value, completes setup, reaches dashboard
## Barriers
- Unclear value proposition
- "Family" concept is confusing
- Forced dog addition feels like work
- No sense of progress or achievement
## Solution Focus
- Clarify value prop on welcome screen
- Simplify family concept explanation
- Make dog addition optional
- Add progress indicators and celebration
```
### **Understand Existing Product**
```
Review:
- Current app (use it yourself)
- Existing design system (if any)
- Technical constraints
- User feedback and analytics
```
---
## Step 03: Critical Updates (Not Complete Flows)
**Key difference: You're updating existing screens, not designing from scratch**
### **Example: Onboarding Improvement**
**Scenario 01: Welcome Screen (Update)**
```markdown
# Scenario 01: Welcome Screen (UPDATE)
## What's Changing
- Clearer value proposition
- Better visual hierarchy
- Stronger call-to-action
## What's Staying
- Brand colors
- Logo placement
- Screen structure
## Design Updates
- Hero image: Show family using app together
- Headline: "Keep your family's dog care organized"
- Subheadline: "Share tasks, track routines, never miss a walk"
- CTA: "Get Started" (larger, more prominent)
## Components
- Existing: Button (Primary)
- Update: Hero Image component
- Update: Typography (larger headline)
```
**Scenario 02: Family Setup (Redesign)**
```markdown
# Scenario 02: Family Setup (REDESIGN)
## Current Problem
Users don't understand what "family" means in this context
## Solution
- Rename "Family" to "Household"
- Add explanation: "Who helps take care of your dog?"
- Show examples: "You, your partner, your kids, your dog walker"
- Make it visual: Show avatars of household members
## Design Changes
- Screen title: "Set up your household"
- Explanation text (new)
- Visual examples (new)
- Simplified form (fewer fields)
## Components
- Existing: Input (Text)
- New: Explanation Card component
- New: Avatar Grid component
```
**Scenario 03: Dog Addition (Make Optional)**
```markdown
# Scenario 03: Dog Addition (MAKE OPTIONAL)
## Current Problem
Forcing users to add a dog feels like homework, causes drop-off
## Solution
- Make it optional for onboarding
- Show value: "Add your first dog to get started"
- Allow skip: "I'll do this later"
- Celebrate if they add: "Great! Let's meet [dog name]!"
## Design Changes
- Add "Skip for now" button
- Add celebration animation if dog added
- Update copy to be inviting, not demanding
## Components
- Existing: Button (Primary, Secondary)
- New: Celebration Animation component
```
**Notice the difference:**
- ❌ Not designing complete flows from scratch
- ✅ Updating existing screens strategically
- ✅ Focused on solving specific problems
- ✅ Working within existing constraints
---
## What Triggers a Design Delivery?
### **Accumulated Changes**
**Small changes accumulate:**
- Bug fix: Button alignment
- Refinement: Improved error message
- Enhancement: New filter option
- Fix: Loading state missing
- Improvement: Better empty state
**When enough changes accumulate:**
```
10-15 small changes = Design Delivery
OR
3-5 medium features = Design Delivery
OR
1 major feature = Design Delivery
```
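The accumulation rule above can be read as a simple predicate. A sketch with the thresholds taken straight from the text (using the lower bound of each range); the function name is invented:

```typescript
// True when accumulated changes justify cutting a Design Delivery,
// per the rule of thumb above: 10-15 small changes, OR 3-5 medium
// features, OR 1 major feature.
function shouldCreateDelivery(
  small: number,
  medium: number,
  major: number,
): boolean {
  return small >= 10 || medium >= 3 || major >= 1;
}
```

Business and technical triggers (next section) can of course override this count and force a delivery early.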
### **Business Triggers**
**Scheduled releases:**
- Monthly updates
- Quarterly feature releases
- Annual major versions
**Market triggers:**
- Competitor feature launched
- User demand spike
- Business opportunity
**Technical triggers:**
- Platform update (iOS 18, Android 15)
- Security patch required
- Performance optimization needed
---
## Product Evolution Workflow
### Step 1: Monitor & Gather Feedback
**Sources:**
- User feedback (support tickets, reviews)
- Analytics (usage patterns, drop-offs)
- Team observations (bugs, issues)
- Stakeholder requests (new features)
**Track in backlog:**
```
Backlog:
- [ ] Bug: Login button misaligned on iPad
- [ ] Enhancement: Add "Remember me" checkbox
- [ ] Feature: Social login (Google, Apple)
- [ ] Refinement: Improve onboarding copy
- [ ] Fix: Loading spinner not showing
```
---
### Step 2: Prioritize Changes
**Criteria:**
- **Impact:** High user value vs low effort
- **Urgency:** Critical bugs vs nice-to-haves
- **Alignment:** Strategic goals vs random requests
- **Feasibility:** Quick wins vs complex changes
**Prioritized list:**
```
High Priority (Next Update):
1. Bug: Login button misaligned (Critical)
2. Fix: Loading spinner not showing (High)
3. Enhancement: Add "Remember me" (Medium)
Medium Priority (Future Update):
4. Feature: Social login (Medium)
5. Refinement: Improve copy (Low)
Low Priority (Backlog):
6. Enhancement: Dark mode (Low)
```
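Prioritization like the list above is essentially a sort over the backlog. The sketch below is illustrative only; the urgency labels and their ordering are assumptions chosen to reproduce the example ranking.

```python
# Illustrative backlog sort; labels and weights are assumptions.
URGENCY = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def prioritize(backlog):
    """Sort backlog items (dicts with 'title' and 'urgency'), most urgent first."""
    return sorted(backlog, key=lambda item: URGENCY[item["urgency"]], reverse=True)
```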
---
### Step 3: Design Changes
**Return to Phase 4-5:**
- Design fixes and refinements
- Create specifications for new features
- Update design system if needed
- Document changes
**Track changes:**
```
Changes for Design Delivery DD-011 (v1.1):
✓ Fixed: Login button alignment on iPad
✓ Added: Loading spinner to all async actions
✓ Enhanced: "Remember me" checkbox on login
✓ Updated: Error messages for clarity
✓ Improved: Empty state when no tasks
```
---
### Step 4: Create Design Delivery
**When enough changes accumulate:**
**File:** `deliveries/DD-XXX-design-delivery-vX.X.yaml`
```yaml
delivery:
  id: 'DD-011'
name: 'Product Update v1.1'
type: 'incremental_improvement'
scope: 'update'
status: 'ready'
priority: 'high'
version: '1.1'
description: |
Incremental improvements with bug fixes, refinements, and enhancements
based on user feedback from v1.0 launch.
changes:
bug_fixes:
- 'Fixed login button alignment on iPad'
- 'Added loading spinner to all async actions'
- 'Fixed family invite code validation'
enhancements:
- "Added 'Remember me' checkbox on login"
- 'Improved error messages (clearer wording)'
- 'Better empty state for task list'
design_system_updates:
- 'Button component: Added loading state'
- 'Input component: Improved error styling'
affected_scenarios:
- id: '02-login'
path: 'C-UX-Scenarios/02-login/'
changes: "Added 'Remember me' checkbox, fixed alignment"
- id: '06-task-list'
path: 'C-UX-Scenarios/06-task-list/'
changes: 'Improved empty state design'
user_value:
problem: 'Users experiencing bugs and requesting improvements'
solution: 'Bug fixes and enhancements based on feedback'
success_criteria:
- 'Bug reports decrease by 50%'
- 'User satisfaction score increases'
- 'Onboarding completion rate improves'
estimated_complexity:
size: 'small'
effort: '1 week'
risk: 'low'
dependencies: []
```
---
### Step 5: Hand Off to BMad
**Same process as Phase 4 [H] Handover:**
1. Create Design Delivery (DD-XXX.yaml)
2. Create Test Scenario (TS-XXX.yaml)
3. Handoff Dialog with BMad Architect
4. BMad implements changes
5. Designer validates (Phase 5 [T] Acceptance Testing)
6. Sign off and deploy
**BMad receives:**
- Design Delivery (DD-XXX)
- Updated specifications
- Design system changes
- Test scenario
---
### Step 6: Deploy Changes
**After validation:**
```
✅ Design Delivery DD-011 (v1.1) approved
✅ All tests passed
✅ Ready to deploy
Deployment:
- Version: v1.1.0
- Changes: 8 (3 bug fixes, 3 enhancements, 2 design system updates)
- Release notes: Generated from delivery
- Deploy to: Production
```
**Release notes (auto-generated from delivery):**
```markdown
# Version 1.1 Release
## What's New
### Bug Fixes
- Fixed login button alignment on iPad
- Added loading spinner to all async actions
- Fixed family invite code validation
### Enhancements
- Added "Remember me" checkbox on login
- Improved error messages for clarity
- Better empty state when no tasks
### Design System Updates
- Button component now supports loading state
- Input component has improved error styling
```
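The "auto-generated from delivery" step could be sketched as a small script that walks the delivery record and emits markdown. This is an assumption about how such a generator might look, not part of WDS; the record is shown as a plain dict (mirroring the delivery YAML's `version` and `changes` fields) to keep the example self-contained.

```python
# Hypothetical release-notes generator; the dict mirrors the
# delivery YAML shown earlier (version + changes sections).

def release_notes(delivery: dict) -> str:
    sections = [("Bug Fixes", "bug_fixes"),
                ("Enhancements", "enhancements"),
                ("Design System Updates", "design_system_updates")]
    lines = [f"# Version {delivery['version']} Release", "", "## What's New"]
    for title, key in sections:
        items = delivery.get("changes", {}).get(key, [])
        if items:  # skip empty sections entirely
            lines += ["", f"### {title}"] + [f"- {item}" for item in items]
    return "\n".join(lines)
```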
---
### Step 7: Monitor & Repeat
**After deployment:**
- Monitor user feedback
- Track analytics
- Identify new issues
- Plan next update
**Continuous cycle:**
```
v1.0 Launch
  ↓
Gather feedback (2 weeks)
  ↓
v1.1 Release (bug fixes + enhancements) - DD-011
  ↓
Gather feedback (4 weeks)
  ↓
v1.2 Release (new features) - DD-012
  ↓
Gather feedback (8 weeks)
  ↓
v2.0 Major Update (significant changes) - DD-020
  ↓
(Repeat)
```
---
## Types of Updates
### **Patch Updates (v1.0.1)**
**Frequency:** As needed (urgent bugs)
**Scope:** Critical bug fixes only
**Timeline:** 1-3 days
**Example:**
- Critical: Login broken on iOS 17.2
- Fix: Update authentication flow
- Deploy: Emergency patch
---
### **Minor Updates (v1.1.0)**
**Frequency:** Monthly or bi-weekly
**Scope:** Bug fixes + small enhancements
**Timeline:** 1-2 weeks
**Example:**
- 3 bug fixes
- 5 small enhancements
- 2 design system updates
- Deploy: Scheduled release
---
### **Major Updates (v2.0.0)**
**Frequency:** Quarterly or annually
**Scope:** New features + significant changes
**Timeline:** 4-8 weeks
**Example:**
- New feature: Calendar view
- New feature: Family chat
- Redesign: Navigation system
- Major: Design system overhaul
- Deploy: Major version release
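The version numbers above follow the common major.minor.patch (semantic versioning) convention. A version bump can be expressed mechanically; this helper is an illustrative sketch, not part of WDS:

```python
# Minimal semantic-version bump helper (illustrative).
def bump(version: str, kind: str) -> str:
    major, minor, patch = map(int, version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"        # significant changes: reset minor and patch
    if kind == "minor":
        return f"{major}.{minor + 1}.0"  # enhancements: reset patch
    return f"{major}.{minor}.{patch + 1}"  # urgent fix
```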
---
## Ongoing Collaboration
### **Designer & Developer Partnership**
**Designer:**
- Monitors user feedback
- Identifies improvements
- Designs changes
- Creates deliveries
- Validates implementation
**Developer:**
- Implements changes
- Suggests technical improvements
- Provides feasibility feedback
- Requests design clarification
- Notifies when ready for testing
**Together:**
- Regular sync meetings
- Shared backlog
- Collaborative prioritization
- Continuous improvement
- Mutual respect and trust
---
## The Sunset Ride 🌅
**After establishing the ongoing cycle:**
```
Designer: "We've launched v1.0, iterated through v1.1 and v1.2,
and now v2.0 is live. The product is mature, the
process is smooth, and users are happy.
The design-to-development workflow is humming along
beautifully. We're a well-oiled machine!"
Developer: "Agreed! We've found our rhythm. Design Deliveries
come in, we implement them, you validate, we ship.
The 3 touch points work perfectly.
Users love the product, stakeholders are happy,
and we're continuously improving."
Designer: "Ready to ride into the sunset?"
Developer: "Let's go! 🌅"
[Designer and Developer ride off into the sunset together]
THE END! 🎬
```
---
## Success Metrics
### **Process Health**
**Velocity:**
- Time from design to deployment
- Number of updates per quarter
- Backlog size and age
**Quality:**
- Bug reports per release
- User satisfaction scores
- Design system compliance
**Collaboration:**
- Handoff smoothness
- Communication clarity
- Issue resolution time
---
### **Product Health**
**Usage:**
- Active users
- Feature adoption
- Retention rates
**Satisfaction:**
- User reviews
- NPS scores
- Support tickets
**Business:**
- Revenue growth
- Market share
- Strategic goals met
---
## Tips for Long-Term Success
### DO ✅
**Maintain momentum:**
- Regular releases (don't go dark)
- Continuous improvement
- Respond to feedback
- Celebrate wins
**Keep quality high:**
- Don't skip validation
- Maintain design system
- Test thoroughly
- Document changes
**Communicate well:**
- Regular designer-developer sync
- Clear priorities
- Transparent roadmap
- Stakeholder updates
**Stay user-focused:**
- Listen to feedback
- Measure impact
- Iterate based on data
- Solve real problems
### DON'T ❌
**Don't let backlog grow:**
- Prioritize ruthlessly
- Say no to low-value requests
- Keep backlog manageable
- Archive old items
**Don't skip process:**
- Always create deliveries
- Always validate
- Always document
- Always follow touch points
**Don't lose sight:**
- Remember user value
- Stay aligned with goals
- Don't chase shiny objects
- Focus on what matters
**Don't burn out:**
- Sustainable pace
- Realistic timelines
- Celebrate progress
- Take breaks
---
## The Long Game
**Year 1:**
- Launch v1.0 (MVP)
- Iterate rapidly (v1.1, v1.2, v1.3)
- Learn from users
- Establish process
**Year 2:**
- Major update v2.0
- Mature product
- Smooth process
- Happy users
**Year 3+:**
- Continuous evolution
- Market leadership
- Sustainable growth
- Designer & Developer harmony
---
## Resources
**Product Evolution:**
- Return to Phase 4-5 for changes
- Use Phase 4 [H] Handover for Design Deliveries (small scope)
- Use Phase 5 [T] Acceptance Testing for validation
- Repeat indefinitely
**Templates:**
- Same templates as initial development
- Add "system_update" type to deliveries
- Track version numbers
**Documentation:**
- Maintain changelog
- Update release notes
- Keep design system current
- Document learnings
---
**And they lived happily ever after, shipping great products together!** 🌅✨
**THE END!** 🎬

# Step 08: Iterate (Kaizen Never Stops)
## Your Task
Use learnings from this cycle to identify and start the next improvement.
---
## Before You Start
**Ensure you have:**
- ✅ Completed step 07 (impact measured)
- ✅ Impact report created
- ✅ Learnings documented
- ✅ Results shared with team
---
## The Kaizen Philosophy
**改善 (Kaizen) = Continuous Improvement**
```
Ship → Monitor → Learn → Improve → Ship → Monitor → Learn...
You are here!
```
**This cycle never stops!**
**See:** [data/kaizen-principles.md](../data/kaizen-principles.md) for Kaizen vs Kaikaku and core principles
---
## Review Your Learnings
### From Impact Report
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Learnings Documentation template
Key questions:
- What worked?
- What didn't work?
- What patterns are emerging?
- What hypotheses were validated/rejected?
- What new questions arose?
---
## Identify Next Opportunity
**Three sources for next improvement:**
### 1. Iterate on Current Update
If the update was partially successful - refine it.
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for "Iterate on Current Update" template
### 2. Apply Pattern to Similar Feature
If the update was successful - apply the pattern elsewhere.
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for "Apply Pattern" template
### 3. Address New Problem
From monitoring and feedback - tackle new issues.
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for "New Problem" template
---
## Prioritize Next Cycle
**Use Kaizen prioritization:**
### Priority = Impact × Effort × Learning
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Kaizen Prioritization template
---
## Start Next Cycle
**Return to Step 01 with your next opportunity:**
```
[M] Return to Activity Menu — start next cycle with [A] Analyze Product
```
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Kaizen Cycle Log template
---
## Completion
**Phase 8 is complete when:**
- ✅ Improvement identified
- ✅ Context gathered
- ✅ Update designed
- ✅ Delivery created
- ✅ Handed off to BMad
- ✅ Implementation validated
- ✅ Impact measured
- ✅ Next cycle started
**But Phase 8 never truly ends - Kaizen is continuous!**
---
## Next Steps
**You have two paths:**
### Path A: Continue Kaizen Cycle
```
[M] Return to Activity Menu — start next cycle with [A] Analyze Product
Start next improvement cycle
```
### Path B: New Product Feature
```
[N] Return to Phase 4-5 (UX Design & Design System)
Design new complete user flow
Then Phase 4 [H] Handover (Design Deliveries)
```
---
## When to Pause Kaizen
**Kaizen never stops, but you might pause for:**
- Major strategic shift (new product direction, pivot)
- Team capacity (overwhelmed, need to stabilize)
- Measurement period (waiting for data)
**But always return to Kaizen!**
---
## Success Metrics
✅ Learnings reviewed
✅ Next opportunity identified
✅ Prioritization complete
✅ Next cycle started
✅ Cycle log updated
---
## Failure Modes
❌ Not reviewing learnings
❌ Not identifying next opportunity
❌ Stopping after one cycle
❌ Not prioritizing effectively
❌ Scope creep (turning Kaizen into Kaikaku)
---
**Remember:** Great products aren't built in one big redesign. They're built through continuous, disciplined improvement. One cycle at a time. Forever. 改善

# Kaizen Principles
Core principles and patterns for continuous improvement in Phase 8 (Product Evolution).
---
## The Kaizen Philosophy
**改善 (Kaizen) = Continuous Improvement**
```
Ship → Monitor → Learn → Improve → Ship → Monitor → Learn...
```
**This cycle never stops!**
---
## Kaizen vs Kaikaku
**Two approaches from Lean manufacturing:**
### Kaizen (改善) - What You're Doing Now
- **Small, incremental changes** (1-2 weeks)
- **Low cost, low risk**
- **Continuous, never stops**
- **Phase 8: Product Evolution**
### Kaikaku (改革) - Revolutionary Change
- **Large, radical changes** (months)
- **High cost, high risk**
- **One-time transformation**
- **Phases 1-7: New Product Development**
**You're in Kaizen mode!** Small improvements that compound over time.
**See:** `src/core/resources/wds/glossary.md` for full definitions
---
## Kaizen Principle 1: Focus on Process, Not Just Results
**Bad:**
- "We need to increase usage!"
- (Pressure, no learning)
**Good:**
- "Let's understand why usage is low, test a hypothesis, measure impact, and learn."
- (Process, continuous learning)
---
## Kaizen Principle 2: Eliminate Waste (Muda 無駄)
**Types of waste in design:**
- **Overproduction:** Designing features nobody uses
- **Waiting:** Blocked on approvals or development
- **Transportation:** Handoff friction
- **Over-processing:** Excessive polish on low-impact features
- **Inventory:** Unshipped designs
- **Motion:** Inefficient workflows
- **Defects:** Bugs and rework
**Kaizen eliminates waste through:**
- Small, focused improvements
- Fast cycles (ship → learn → improve)
- Continuous measurement
- Learning from every cycle
---
## Kaizen Principle 3: Respect People and Their Insights
**Listen to:**
- Users (feedback, behavior)
- Developers (technical insights)
- Support (pain points)
- Stakeholders (business context)
- Team (observations)
**Everyone contributes to Kaizen!**
---
## Kaizen Principle 4: Standardize, Then Improve
**When you find a pattern that works:**
1. **Document it**
```markdown
# Pattern: Onboarding for Complex Features
**When to use:**
- Feature has low usage (<30%)
- User feedback indicates confusion
- Feature is complex or non-obvious
**How to implement:**
1. Inline tooltip explaining purpose
2. Step-by-step guide for first action
3. Success celebration
4. Help button for future reference
**Expected impact:**
- Usage increase: 3-4x
- Drop-off decrease: 50-70%
- Effort: 2-3 days
```
2. **Create reusable components**
```
D-Design-System/03-Atomic-Components/
├── Tooltips/Tooltip-Inline.md
├── Guides/Guide-Step.md
└── Celebrations/Celebration-Success.md
```
3. **Share with team**
- Document in shared knowledge
- Train team on pattern
- Apply consistently
4. **Improve the pattern**
- Learn from each application
- Refine based on feedback
- Evolve over time
---
## Kaizen Prioritization Framework
### Priority = Impact × Effort × Learning
**Impact:** How much will this improve the product?
- High: Solves major user pain, improves key metric
- Medium: Improves experience, minor metric impact
- Low: Nice to have, minimal impact
**Effort:** How hard is this to implement?
- Low: 1-2 days
- Medium: 3-5 days
- High: 1-2 weeks
**Learning:** How much will we learn?
- High: Tests important hypothesis
- Medium: Validates assumption
- Low: Incremental improvement
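A hedged sketch of how this framework could be scored: the numeric weights below are assumptions, not defined by Kaizen or WDS, and effort is inverted in the scoring (lower effort means a quicker win, so it raises the score, matching the prioritization examples in this guide).

```python
# Illustrative Priority = Impact x Effort x Learning scoring.
# All weights are assumptions; effort is inverted (low effort scores high).
IMPACT = {"high": 3, "medium": 2, "low": 1}
LEARNING = {"high": 3, "medium": 2, "low": 1}
EFFORT = {"low": 3, "medium": 2, "high": 1}

def priority_score(impact: str, effort: str, learning: str) -> int:
    return IMPACT[impact] * EFFORT[effort] * LEARNING[learning]
```

Under these weights, a high-impact, low-effort, high-learning option scores 27 (the maximum), consistent with such options ranking highest.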
---
## Kaizen Metrics Dashboard Example
```markdown
# Kaizen Metrics Dashboard
## This Quarter (Q1 2025)
**Cycles Completed:** 9
**Average Cycle Time:** 10 days
**Success Rate:** 78% (7/9 successful)
**Impact:**
- Feature usage improvements: 6 features (+40% avg)
- Performance improvements: 2 features (+15% avg)
- User satisfaction: 3.2/5 → 4.1/5 (+28%)
**Learnings:**
- 12 patterns documented
- 8 reusable components created
- 3 hypotheses validated
**Team Growth:**
- Designer: Faster iteration
- Developer: Better collaboration
- Product: Data-driven decisions
```
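Figures like the success rate above are simple to derive consistently; a minimal, hypothetical sketch:

```python
# Minimal success-rate formatter for a cycle dashboard (illustrative).
def success_rate(successful: int, total: int) -> str:
    return f"{round(100 * successful / total)}% ({successful}/{total})"
```

For instance, `success_rate(7, 9)` yields `"78% (7/9)"`, as in the dashboard above.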
---
## When to Pause Kaizen
**Kaizen never stops, but you might pause for:**
### 1. Major Strategic Shift
- New product direction
- Pivot or rebrand
- Complete redesign needed
### 2. Team Capacity
- Team overwhelmed
- Need to catch up on backlog
- Need to stabilize
### 3. Measurement Period
- Waiting for data
- Seasonal variations
- External factors
**But always return to Kaizen!**
---
## Small Changes Compound
**Example trajectory:**
```
Month 1:
- Cycle 1: Feature X onboarding (+40% usage)
Month 2:
- Cycle 2: Feature Y onboarding (+60% usage)
- Cycle 3: Feature Z performance (+15% retention)
Month 3:
- Cycle 4: Feature X refinement (+7% usage)
- Cycle 5: Onboarding component library (reusable)
- Cycle 6: Feature W onboarding (+50% usage)
Month 4:
- Cycle 7: Dashboard performance (+20% engagement)
- Cycle 8: Navigation improvements (+10% discoverability)
- Cycle 9: Error handling (+30% recovery rate)
Result after 4 months:
- 9 improvements shipped
- Product quality significantly improved
- User satisfaction increased
- Team learned continuously
- Competitive advantage built
```
**Each cycle takes 1-2 weeks. Small changes compound!**
---
## Kaizen Success Story Example
```
Starting Point:
- Product satisfaction: 3.2/5
- Feature usage: 25% average
- Support tickets: 50/month
- Churn rate: 15%
After 6 Months (24 Kaizen cycles):
- Product satisfaction: 4.3/5 (+34%)
- Feature usage: 65% average (+160%)
- Support tickets: 12/month (-76%)
- Churn rate: 6% (-60%)
Investment:
- 24 cycles × 1.5 weeks = 36 weeks
- Small, focused improvements
- Continuous learning
- Compounding results
Result:
- Product transformed
- Team learned continuously
- Competitive advantage built
- Users delighted
```
**This is the power of Kaizen!** 改善
---
**Remember:** Great products aren't built in one big redesign. They're built through continuous, disciplined improvement. One cycle at a time. Forever.

# Step 07: Monitor Impact
## Your Task
Monitor the impact of your Design Delivery (small scope) and measure if it achieved the expected results.
---
## Before You Start
**Ensure you have:**
- ✅ Completed step 06 (validation complete)
- ✅ Design Delivery deployed to production
- ✅ Success metrics defined
---
## The Kaizen Measurement Cycle
**改善 (Kaizen) requires measurement:**
```
Ship → Monitor → Learn → Improve → Ship...
You are here!
```
**Without measurement, you're just guessing!**
---
## Set Up Monitoring
### 1. Define Measurement Period
**From Design Delivery file:**
```yaml
metrics:
measurement_period: '2 weeks after release'
```
### 2. Track Key Metrics
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Metrics Tracking Dashboard
### 3. Gather Qualitative Feedback
**Monitor multiple channels:**
- User feedback (app reviews, in-app feedback, support tickets)
- Team feedback (developer observations, support insights)
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Qualitative Feedback template
---
## Analyze Results
### After Measurement Period
**Create:** `analytics/DD-XXX-impact-report.md`
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Impact Report template
Key sections:
- Executive summary (SUCCESS | PARTIAL | FAILURE)
- Metrics results (baseline → target → actual)
- What worked / what didn't
- Learnings
- Recommendations (short-term, long-term)
- Next Kaizen cycle opportunity
---
## Share Results
**Communicate impact to team:**
**See:** [data/monitoring-templates.md](../data/monitoring-templates.md) for Team Results Communication template
---
## Update Design Delivery File
**Final update to `deliveries/DD-XXX-name.yaml`:**
```yaml
delivery:
status: 'measured'
measurement_complete: '2024-12-28T10:00:00Z'
impact_report: 'analytics/DD-XXX-impact-report.md'
result: 'success'
metrics_achieved:
- 'Feature X usage: 58% (target: 60%)'
learnings:
- 'Onboarding matters for complex features'
```
---
## Next Step
After monitoring and learning:
```
[M] Return to Activity Menu — see also data/kaizen-iteration-guide.md
```
---
## Success Metrics
✅ Measurement period complete
✅ All metrics tracked
✅ Qualitative feedback gathered
✅ Impact report created
✅ Results shared with team
✅ Learnings documented
✅ Next opportunity identified
---
## Failure Modes
❌ Not measuring impact
❌ Ending measurement too early
❌ Ignoring qualitative feedback
❌ Not documenting learnings
❌ Not sharing results
❌ Not identifying next opportunity
---
## Tips
### DO ✅
**Be patient:** Give changes time to work, don't end measurement early
**Be thorough:** Track all metrics, gather qualitative feedback, document learnings
**Be honest:** Report actual results, acknowledge what didn't work
### DON'T ❌
**Don't cherry-pick:** Report all metrics, not just good ones
**Don't stop measuring:** Kaizen requires continuous measurement
**Don't skip sharing:** Team needs to know results
---
**Remember:** Measurement turns improvements into learnings. Learnings drive the next cycle!

# Monitoring Templates
Templates for monitoring impact and iterating in Phase 8 (Product Evolution).
---
## Metrics Tracking Dashboard
```markdown
# Metrics Tracking: DD-XXX
**Release Date:** 2024-12-13
**Measurement Period:** 2024-12-13 to 2024-12-27
## Daily Tracking
| Date | Feature X Usage | Drop-off Rate | Notes |
| ----- | --------------- | ------------- | ------------- |
| 12/13 | 18% | 38% | Day 1 |
| 12/14 | 22% | 35% | Trending up |
| 12/15 | 28% | 30% | Good progress |
| ... | ... | ... | ... |
| 12/27 | 58% | 12% | Final |
## Trend Analysis
[Chart or description of trends]
```
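The "Trend Analysis" placeholder could be filled mechanically from the tracked values; a minimal, hypothetical sketch:

```python
# Illustrative trend summary over a daily metric series.
def trend(values):
    """Return 'up', 'down', or 'flat' comparing last value to first."""
    if values[-1] > values[0]:
        return "up"
    if values[-1] < values[0]:
        return "down"
    return "flat"
```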
---
## Qualitative Feedback Tracking
```markdown
# Qualitative Feedback: DD-XXX
## Positive Feedback (8 mentions)
- "Now I understand how to use Feature X!" (3)
- "The guide was really helpful" (2)
- "Love the new onboarding" (3)
## Negative Feedback (2 mentions)
- "Guide is too long" (1)
- "Can't skip the guide" (1)
## Neutral Feedback (3 mentions)
- "Didn't notice the change" (3)
```
---
## Impact Report Template
**File:** `analytics/DD-XXX-impact-report.md`
```markdown
# Impact Report: DD-XXX [Name]
**Release Date:** 2024-12-13
**Measurement Period:** 2024-12-13 to 2024-12-27
**Report Date:** 2024-12-28
---
## Executive Summary
**Result:** [SUCCESS | PARTIAL SUCCESS | FAILURE]
[2-3 sentences summarizing the impact]
Example:
"Design Delivery DD-XXX successfully improved Feature X usage from
15% to 58%, nearly meeting the 60% target. Drop-off decreased
from 40% to 12%, exceeding the 10% target. User feedback is
overwhelmingly positive."
---
## Metrics Results
### Metric 1: Feature X Usage Rate
- **Baseline:** 15%
- **Target:** 60%
- **Actual:** 58%
- **Result:** 97% of target ✅ (PASS)
- **Trend:** Steady increase over 2 weeks
### Metric 2: Drop-off Rate
- **Baseline:** 40%
- **Target:** 10%
- **Actual:** 12%
- **Result:** 93% of targeted reduction ✅ (PASS)
- **Trend:** Sharp decrease in first week, stabilized
### Metric 3: Support Tickets
- **Baseline:** 12/month
- **Target:** 2/month
- **Actual:** 3/month
- **Result:** 75% reduction ✅ (PASS)
### Metric 4: User Satisfaction
- **Baseline:** 3.2/5
- **Target:** 4.5/5
- **Actual:** 4.3/5
- **Result:** 96% of target ✅ (PASS)
---
## Overall Assessment
**Success Criteria:**
- Feature X usage > 50% ✅
- Drop-off < 15% ✅
- Support tickets < 5/month ✅
**Result:** SUCCESS
All success criteria met or exceeded.
---
## What Worked
1. **Inline onboarding was effective**
- Users understood Feature X immediately
- Completion rate increased significantly
2. **Step-by-step guide was helpful**
- User feedback praised the guide
- Reduced confusion
3. **Success celebration was motivating**
- Users felt accomplished
- Positive sentiment increased
---
## What Didn't Work
1. **Guide length**
- Some users found it too long
- Consider shortening in future iteration
2. **Skip option**
- Some users wanted to skip
- Consider adding "Skip" button
---
## Learnings
1. **Onboarding matters for complex features**
- Even simple features benefit from guidance
- First impression is critical
2. **Measurement validates hypotheses**
- Our hypothesis was correct
- Data-driven decisions work
3. **Small changes have big impact**
- 3-day effort → 4x usage increase
- Kaizen philosophy validated
---
## Recommendations
### Short-term (Next Sprint)
1. Add "Skip" button to guide
2. Shorten guide from 5 steps to 3 steps
3. A/B test guide length
### Long-term (Next Quarter)
1. Apply onboarding pattern to other features
2. Create reusable onboarding component
3. Measure onboarding impact across product
---
## Next Kaizen Cycle
**Based on this success, next improvement opportunity:**
[Identify next improvement based on learnings]
Example:
"Feature Y has similar low usage (20%). Apply same onboarding
pattern to Feature Y in next Kaizen cycle."
---
## Conclusion
Design Delivery DD-XXX successfully achieved its goals. The
improvement demonstrates the power of Kaizen - small, focused
changes that compound over time.
**Status:** SUCCESS - Ready for next cycle!
```
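The "percent of target" figures used in reports like this one can be computed with a tiny helper. This is an illustrative sketch, not WDS tooling:

```python
# Illustrative helper for "X% of target" lines in impact reports.
def percent_of_target(actual: float, target: float) -> str:
    return f"{round(100 * actual / target)}% of target"
```

For example, `percent_of_target(58, 60)` gives `"97% of target"`, matching Metric 1 in the template above.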
---
## Team Results Communication
```
WDS Designer → Team
Subject: Impact Report: DD-XXX - SUCCESS ✅
Hi Team!
Impact report for DD-XXX is complete!
🎉 **Result:** SUCCESS
📊 **Key Results:**
- Feature X usage: 15% → 58% (4x increase!)
- Drop-off: 40% → 12% (70% reduction!)
- Support tickets: 12/month → 3/month (75% reduction!)
- User satisfaction: 3.2/5 → 4.3/5
💡 **Key Learning:**
Small, focused improvements (3 days effort) can have massive
impact (4x usage increase). Kaizen philosophy works!
📁 **Full Report:**
analytics/DD-XXX-impact-report.md
🔄 **Next Cycle:**
Apply same pattern to Feature Y (similar low usage issue).
Thanks for the great collaboration!
[Your name]
WDS Designer
```
---
## Kaizen Cycle Log Template
```markdown
# Kaizen Cycle Log
## Cycle 1: DD-001 Feature X Onboarding
- Started: 2024-12-09
- Completed: 2024-12-28
- Result: SUCCESS ✅
- Impact: 4x usage increase
- Learning: Onboarding matters for complex features
## Cycle 2: DD-002 Feature Y Onboarding
- Started: 2024-12-28
- Status: In Progress
- Goal: Apply validated pattern to similar feature
- Expected: 4x usage increase
```
---
## Kaizen Prioritization Template
```markdown
# Kaizen Prioritization
## Option A: Refine DD-XXX
- Impact: Medium (58% → 65%)
- Effort: Low (1 day)
- Learning: Low (incremental)
- Priority: MEDIUM
## Option B: Apply to Feature Y
- Impact: High (20% → 80%)
- Effort: Low (2 days)
- Learning: High (validates pattern)
- Priority: HIGH ✅
## Option C: Fix Feature Z Performance
- Impact: Medium (35% → 20% drop-off)
- Effort: Low (1 day)
- Learning: Medium (performance optimization)
- Priority: MEDIUM
**Decision:** Start with Option B (highest priority)
```
---
## Learnings Documentation Template
```markdown
# Learnings from DD-XXX
## What Worked
1. [Learning 1]
2. [Learning 2]
3. [Learning 3]
## What Didn't Work
1. [Learning 1]
2. [Learning 2]
## Patterns Emerging
1. [Pattern 1]
2. [Pattern 2]
## Hypotheses Validated
1. [Hypothesis 1]: ✅ Confirmed
2. [Hypothesis 2]: ❌ Rejected
## New Questions
1. [Question 1]
2. [Question 2]
```
---
## Next Iteration Templates
### Iterate on Current Update
```markdown
# Next Iteration: DD-XXX Refinement
**Current Status:**
- Feature X usage: 58% (target: 60%)
- User feedback: "Guide too long"
**Next Improvement:**
- Shorten guide from 5 steps to 3 steps
- Add "Skip" button
- A/B test guide length
**Expected Impact:**
- Feature X usage: 58% → 65%
- User satisfaction: 4.3/5 → 4.7/5
**Effort:** 1 day
**Priority:** Medium
```
### Apply Pattern to Similar Feature
```markdown
# Next Opportunity: Apply Pattern to Feature Y
**Learning from DD-XXX:**
"Onboarding increases usage 4x for complex features"
**Similar Problem:**
- Feature Y usage: 20% (low)
- User feedback: "Don't understand Feature Y"
- Similar complexity to Feature X
**Proposed Solution:**
Apply same onboarding pattern to Feature Y
**Expected Impact:**
- Feature Y usage: 20% → 80% (4x increase)
- Based on DD-XXX results
**Effort:** 2 days
**Priority:** High
```
### Address New Problem
```markdown
# Next Opportunity: New Problem Identified
**New Data:**
- Feature Z drop-off: 35% (increased from 20%)
- User feedback: "Feature Z is slow"
- Analytics: Load time 5 seconds (was 2 seconds)
**Root Cause:**
Recent update added heavy images, slowing load time
**Proposed Solution:**
Optimize images and implement lazy loading
**Expected Impact:**
- Load time: 5s → 2s
- Drop-off: 35% → 20%
**Effort:** 1 day
**Priority:** High
```

---
name: 'step-01-identify'
description: 'Identify the strategic challenge or improvement opportunity for this Kaizen cycle'
# File References
nextStepFile: './step-02-gather-context.md'
# Data References
contextTemplates: '../data/context-templates.md'
---
# Step 1: Identify Opportunity
## STEP GOAL:
Identify the strategic challenge or improvement opportunity to address in this Kaizen cycle. This step works differently depending on context: entering an existing product for the first time, or continuously improving a live product you already designed.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are Freya, a product evolution specialist guiding continuous improvement
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring systematic improvement expertise and Kaizen methodology, user brings product knowledge and business context
- ✅ Maintain analytical and strategic tone throughout
### Step-Specific Rules:
- 🎯 Focus ONLY on identifying the opportunity — do not design solutions yet
- 🚫 FORBIDDEN to jump to solutions before the problem is clearly defined
- 💬 Approach: Ask strategic questions, use data, quantify impact
- 📋 Every opportunity must connect to a persona or business goal
## EXECUTION PROTOCOLS:
- 🎯 Determine context (existing product entry vs continuous improvement)
- 💾 Document opportunity in limited brief or improvement file
- 📖 Ensure opportunity is specific, measurable, and scoped for Kaizen
- 🚫 FORBIDDEN to accept vague problem definitions — push for specifics
## CONTEXT BOUNDARIES:
- Available context: Design log context from workflow entry, project configuration
- Focus: Problem identification and opportunity framing only
- Limits: Do not gather detailed context yet (that's step 02), do not design solutions
- Dependencies: Active design log updated during workflow initialization
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Determine Context
Ask the user which context applies:
**Context A: Existing Product Entry Point** — You're joining an existing product to solve a strategic challenge
**Context B: Continuous Improvement (Post-Launch)** — You're iterating on a live product based on data and feedback
### 2. Context A: Existing Product Entry Point
If the user is entering an existing product:
**Ask these strategic questions:**
1. **What's the problem?** — What specific issue, what's broken, what metrics show it?
2. **Why now?** — Why is this a priority, business impact, what if we don't fix?
3. **What's the scope?** — Which screens/features, what can we change?
4. **What's success?** — How to measure, target metric, when?
**Document the challenge:**
Create `A-Project-Brief/limited-brief.md` using the Limited Project Brief template from {contextTemplates}.
### 3. Context B: Continuous Improvement
If the user is improving a live product:
**Gather data from multiple sources:**
- **Analytics:** User engagement (DAU, WAU, MAU), feature usage, drop-off points, conversion rates
- **User Feedback:** Support tickets, app store reviews, in-app feedback, user interviews
- **Team Insights:** What are developers, support, and stakeholders noticing?
**Apply Kaizen prioritization framework:**
Priority = Impact × Effort × Learning
| Factor | High | Medium | Low |
|--------|------|--------|-----|
| **Impact** | Solves major pain | Improves experience | Nice to have |
| **Effort** | 1-2 days | 3-5 days | 1-2 weeks |
| **Learning** | Tests hypothesis | Validates assumption | Incremental |
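The scoring itself can be sketched in code. This is a minimal, hypothetical example of the framework above: the 3/2/1 point values and the improvement names are illustrative assumptions, not part of the framework, and Effort is scored inversely (less work scores higher).

```python
# Hypothetical Kaizen prioritization scoring.
# Ratings map to points: High = 3, Medium = 2, Low = 1.
# Note: Effort is rated on an inverse scale, so a 1-2 day change rates "high".
SCORE = {"high": 3, "medium": 2, "low": 1}

def kaizen_priority(impact: str, effort: str, learning: str) -> int:
    """Priority = Impact x Effort x Learning, each rated high/medium/low."""
    return SCORE[impact] * SCORE[effort] * SCORE[learning]

# Illustrative candidates, not real improvement files:
candidates = {
    "IMP-001 inline onboarding": kaizen_priority("high", "high", "high"),
    "IMP-002 settings redesign": kaizen_priority("medium", "low", "low"),
}
best = max(candidates, key=candidates.get)
```

The highest-scoring candidate becomes the documented opportunity; ties are a prompt for discussion, not a tiebreaker formula.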
**Document the opportunity:**
Create `improvements/IMP-XXX-description.md` using the Improvement Opportunity template from {contextTemplates}.
### 4. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Gather Context"
#### Menu Handling Logic:
- IF C: Update design log with identified opportunity, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options](#4-present-menu-options)
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions — always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the user selects [C] and the opportunity is identified, documented, and connected to a persona or business goal will you then load and read fully `{nextStepFile}` to execute and begin context gathering.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Strategic challenge or improvement opportunity clearly identified
- Problem defined with specifics and data (not vague)
- Impact quantified or estimated
- Scope defined and appropriate for Kaizen (small, focused)
- Success criteria established
- Documented in limited brief or improvement file
- Design log updated with opportunity summary
### ❌ SYSTEM FAILURE:
- Accepting vague problem definitions ("make it better")
- Jumping to solutions before problem is understood
- Scope too large for a Kaizen cycle
- No connection to persona or business goal
- No success metrics defined
- Proceeding without user confirmation
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
@@ -0,0 +1,213 @@
---
name: 'step-02-gather-context'
description: 'Understand the existing product context before making changes'
# File References
workflowFile: '../workflow.md'
activityWorkflowFile: '../workflow-analyze.md'
# Data References
contextTemplates: '../data/context-templates.md'
---
# Step 2: Gather Context
## STEP GOAL:
Understand the existing product context deeply before designing improvements - whether you're joining an existing product for the first time or iterating on a product you designed.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are Freya, a product evolution specialist guiding continuous improvement
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring UX research expertise and product insight, user brings domain knowledge and product experience
- ✅ Maintain curious and analytical tone throughout
### Step-Specific Rules:
- 🎯 Focus only on gathering existing context - no solution design yet
- 🚫 FORBIDDEN to propose solutions or design changes
- 💬 Approach: Ask questions to understand deeply, help user synthesize insights
- 📋 Experience the product yourself if possible - firsthand understanding is critical
- 📋 Distinguish between two contexts: new product entry vs continuous improvement
## EXECUTION PROTOCOLS:
- 🎯 Guide user through appropriate context path (A or B) based on their situation
- 💾 Help user collect and organize materials systematically
- 📖 Reference templates from {contextTemplates} for all deliverables
- 🚫 Do not skip to solutions - root cause identification comes first
## CONTEXT BOUNDARIES:
- Available context: Limited brief or improvement file (from step 01), context templates
- Focus: Understanding current state, identifying root causes, forming hypotheses
- Limits: Do not design solutions, do not scope work (that's step S)
- Dependencies: Requires completed step 01 (opportunity identified), limited brief or improvement file created
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Determine Context Path
**Clarify user's situation:**
Are you:
- **A) Joining an existing product** (first time working on this product)
- **B) Continuous improvement** (you designed this product, now improving it)
Guide user to appropriate section below.
### 2. Context A: Existing Product Entry Point
**For users joining an existing product:**
#### 2a. Gather Existing Materials
**Help user collect everything:**
| Category | Upload To | Review For |
|----------|-----------|------------|
| **Business** | `A-Project-Brief/existing-context/business/` | Why product exists, business model, competitors |
| **Users** | `A-Project-Brief/existing-context/users/` | Who are users, needs, pain points |
| **Product** | `A-Project-Brief/existing-context/product/` | Features, tech stack, constraints |
**Prompt user to upload materials they have available.**
#### 2b. Use the Product
**Critical: Experience it yourself!**
Guide user through:
1. Download/access the product
2. Create an account, go through onboarding
3. Use all major features
4. Document your experience
**Reference:** Use First Impressions template from {contextTemplates}
#### 2c. Create Focused Trigger Map
**Based on your strategic challenge:**
**File:** `B-Trigger-Map/focused-trigger-map.md`
**Reference:** Use Focused Trigger Map template from {contextTemplates}
Help user identify:
- Trigger moment (when does this happen?)
- Current experience (what happens now?)
- Desired outcome (what should happen?)
- Barriers (what's preventing success?)
- Solution focus (what will we change?)
### 3. Context B: Continuous Improvement
**For users who designed the product:**
#### 3a. Analytics Deep Dive
Focus on the specific feature/flow you're improving.
**Reference:** Use Analytics template from {contextTemplates}
Help user analyze:
- Usage metrics for specific feature
- User segments (new vs returning vs power users)
- Drop-off points
- Time spent
- Key insights
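Drop-off analysis is the part of this review that benefits most from being explicit. A small sketch, assuming a higher-is-better funnel of (step name, users reaching it) pairs; the onboarding numbers below are illustrative only:

```python
def drop_off_rates(funnel: list[tuple[str, int]]) -> dict[str, float]:
    """Percentage of users lost at each step, relative to the previous step."""
    rates = {}
    for (_, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates[name] = round(100 * (prev_n - n) / prev_n, 1)
    return rates

# Hypothetical onboarding funnel for the feature under review:
onboarding = [
    ("start", 1000),
    ("create account", 700),
    ("add family", 450),
    ("dashboard", 400),
]
```

The step with the largest rate is the first candidate for root-cause questioning in the synthesis step.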
#### 3b. User Feedback Analysis
Categorize feedback about this specific feature.
**Reference:** Use User Feedback template from {contextTemplates}
Guide user to identify:
- Themes (confusion, requests, praise)
- Frequency of mentions
- Specific quotes
- Patterns
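Theme frequency is easy to tally once feedback is tagged. A minimal sketch, assuming feedback has already been labeled with a theme (the tags and quotes below are hypothetical):

```python
from collections import Counter

def theme_frequency(feedback: list[tuple[str, str]]) -> list[tuple[str, int]]:
    """Count how often each theme appears, most frequent first.
    Each item is a (theme, quote) pair produced during manual tagging."""
    return Counter(theme for theme, _quote in feedback).most_common()

# Illustrative tagged feedback:
feedback = [
    ("confusion", "I don't get what 'family' means here"),
    ("confusion", "Why do I have to add members before seeing anything?"),
    ("request", "Let me skip setup and explore first"),
]
```

The tally shows which theme to lead with; the quotes themselves still go into the feedback document verbatim.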
#### 3c. Review Original Design Intent
**Ask user to reflect:**
- Why did you design it this way?
- What assumptions did you make?
- What constraints existed?
- What has changed since?
#### 3d. Competitive Analysis
**Guide user to research:**
- How do competitors handle this?
- What patterns work well?
- What can we learn?
- What should we avoid?
### 4. Synthesis (Both Paths)
**Combine all context into actionable insights:**
**Reference:** Use Context Synthesis template from {contextTemplates}
Help user create synthesis with:
- **What we know** (key insights from all sources)
- **Root cause** (why is this happening?)
- **Hypothesis** (what will solve it?)
- **Validation plan** (how will we know?)
**Critical:** Root cause must be identified before moving forward.
### 5. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu (suggest [S] Scope Improvement)"
#### Menu Handling Logic:
- IF M: Return to {workflowFile} or {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN user selects [M] and context gathering is complete will you then return to the activity workflow to suggest next step [S] Scope Improvement.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All relevant materials gathered (Context A) or fresh data collected (Context B)
- Product experienced firsthand (Context A required)
- Focused trigger map created (Context A) or analytics analyzed (Context B)
- User feedback categorized and themed
- Root cause clearly identified with evidence
- Hypothesis formed with expected impact
- Validation plan defined
- Context synthesis document complete
### ❌ SYSTEM FAILURE:
- Not using the product yourself (Context A)
- Relying only on documentation without firsthand experience
- Ignoring user feedback or analytics data
- Not identifying root cause (jumping to symptoms)
- Jumping to solutions too quickly (skipping analysis)
- Generating content without user input
- Proposing design changes (not this step's purpose)
- Skipping synthesis step
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
@@ -0,0 +1,240 @@
---
name: 'step-01-design-update'
description: 'Design incremental improvement using Kaizen principles'
# File References
workflowFile: '../workflow.md'
activityWorkflowFile: '../workflow-design.md'
# Data References
designTemplates: '../data/design-templates.md'
---
# Step 3: Design Update
## STEP GOAL:
Design a targeted, incremental improvement using Kaizen principles - not a complete redesign, but a focused update that solves the root cause with minimal scope.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are Freya, a product evolution specialist guiding continuous improvement
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring design thinking and Kaizen expertise, user brings product knowledge
- ✅ Maintain focused and pragmatic tone throughout
### Step-Specific Rules:
- 🎯 Focus only on designing the smallest effective change
- 🚫 FORBIDDEN to scope creep or suggest complete redesigns
- 💬 Approach: Challenge scope expansion, validate against root cause, ensure measurability
- 📋 Keep the Kaizen principle central: targeted improvement, not transformation
- 📋 Document what's changing AND what's staying the same
## EXECUTION PROTOCOLS:
- 🎯 Guide user to define change boundaries first (what changes, what stays)
- 💾 Help user create update specifications that reference v1.0 clearly
- 📖 Reference templates from {designTemplates} for all deliverables
- 🚫 Challenge any scope expansion with "Does this solve the root cause?" test
## CONTEXT BOUNDARIES:
- Available context: Context gathered in step 02, root cause identified, hypothesis formed
- Focus: Designing minimal effective change, documenting before/after, validating hypothesis
- Limits: Do not expand scope beyond root cause solution, do not skip validation
- Dependencies: Requires completed step 02, root cause identified, hypothesis formed, clear scope
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Kaizen Principle Reminder
**Reinforce the approach:**
| DO ✅ | DON'T ❌ |
|-------|----------|
| Targeted improvement | Complete redesign |
| Change one thing well | Change everything |
| Incremental update | Big bang release |
| Surgical precision | Scope creep |
| Focused on root cause | "While we're at it..." |
**Ask user:** "What is the ONE thing we need to change to solve the root cause?"
### 2. Define What's Changing vs What's Staying
**Create:** `C-UX-Scenarios/XX-update-name/change-scope.md`
**Reference:** Use Change Scope template from {designTemplates}
Help user document:
**What's Changing:**
- Specific screens/features affected
- Types of changes (copy, visual hierarchy, components, flow, interaction, data)
- Specific change list (numbered, clear)
**What's Staying:**
- Unchanged elements (brand, typography, layout, navigation, tech stack, data model)
- Rationale (why keeping these fixed?)
**Critical question:** "Is everything in 'What's Changing' necessary to solve the root cause?"
### 3. Create Update Specifications
**For each screen/feature being updated:**
**File:** `C-UX-Scenarios/XX-update-name/Frontend/specifications.md`
**Reference:** Use Update Specification template from {designTemplates}
Guide user to create:
**Change Summary:**
- What's different from v1.0? (brief list)
**Updated Screen Structure:**
- Before (v1.0): [Describe old structure]
- After (v2.0): [Describe new structure]
**Component Changes:**
- New components (name, purpose)
- Modified components (name, what changed)
- Removed components (name, why removed)
- Unchanged components (name, still used as-is)
**Interaction Changes:**
- Before (v1.0): [Step-by-step flow]
- After (v2.0): [Updated flow with NEW markers]
**Copy Changes:**
- Before/After pairs with rationale for each change
**Visual Changes:**
- Hierarchy, emphasis, spacing (before vs after)
**Success Metrics:**
- How will we measure if this update works?
- Measurement period (typically 2 weeks after release)
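A success metric is only useful if the pass/fail check is unambiguous. One way to make it so is to write the check down, as in this sketch (it assumes higher-is-better metrics such as feature usage percentage; the numbers in the comment are illustrative):

```python
def metric_improved(baseline: float, measured: float, target: float) -> bool:
    """True if the post-release measurement meets the target.
    Assumes a higher-is-better metric (e.g. feature usage %)."""
    return measured >= target and measured > baseline

# e.g. metric_improved(baseline=15.0, measured=62.0, target=60.0)
```

Writing the check this way forces the baseline and target to be stated as numbers before the update ships.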
### 4. Design New/Modified Components (If Needed)
**If new components required:**
**File:** `D-Design-System/03-Atomic-Components/[Category]/[Component-Name].md`
**Reference:** Use New Component template from {designTemplates}
Help user specify:
- Purpose (why this component?)
- Specifications (standard component spec format)
- Usage (where used, when shown)
**Caution:** Ask "Can we use an existing component instead?"
### 5. Create Before/After Comparison
**Visual documentation of the change:**
**File:** `C-UX-Scenarios/XX-update-name/before-after.md`
**Reference:** Use Before/After template from {designTemplates}
Guide user to document:
**Before (v1.0):**
- Screenshot/description
- User experience (sees, feels, problem)
- Metrics (current state)
**After (v2.0):**
- Screenshot/description
- User experience (sees, feels, improvement)
- Expected metrics (targets)
**Key Changes:**
- List each change with before/after/impact
### 6. Design Validation
**Before moving forward, validate the design:**
#### 6a. Self-Review Checklist
Work through with user:
- [ ] Does this solve the root cause?
- [ ] Is this the smallest change that could work?
- [ ] Does this align with existing design system?
- [ ] Is this technically feasible?
- [ ] Can we measure the impact?
- [ ] Does this create new problems?
- [ ] Have we considered edge cases?
**All must be checked before proceeding.**
#### 6b. Hypothesis Validation
**Reference:** Use Hypothesis Validation template from {designTemplates}
Help user document:
- Hypothesis (what do we believe will happen?)
- Assumptions (what are we assuming?)
- Risks (what could go wrong? mitigations?)
- Success criteria (metrics, targets, timeframe)
- Failure criteria (rollback thresholds)
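The rollback decision at the end of the measurement window should be mechanical, not a debate. A minimal sketch, assuming each tracked metric has a higher-is-better failure floor defined during this step:

```python
def rollback_needed(metrics: dict[str, float],
                    thresholds: dict[str, float]) -> bool:
    """Rollback if any tracked metric sits below its failure threshold
    at the end of the measurement window (higher-is-better assumed).
    Every threshold key must be present in `metrics`."""
    return any(metrics[name] < floor for name, floor in thresholds.items())
```

If the thresholds cannot be written down this concretely, the failure criteria are not yet done.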
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu (suggest [I] Implement)"
#### Menu Handling Logic:
- IF M: Return to {workflowFile} or {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN user selects [M] and design is complete and validated will you then return to the activity workflow to suggest next step [I] Implement.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Change scope clearly defined (what changes, what stays)
- Update specifications created referencing v1.0
- New/modified components designed (only if necessary)
- Before/after comparison documented with metrics
- Hypothesis validated with success/failure criteria
- Self-review checklist completed (all items checked)
- Smallest effective change identified and justified
- No scope creep beyond root cause solution
- All changes measurable
### ❌ SYSTEM FAILURE:
- Scope creep (changing too much, "while we're at it" syndrome)
- Not documenting what's staying the same
- No before/after comparison
- Can't measure impact (no metrics defined)
- Creating new problems without mitigation
- Not validating hypothesis before proceeding
- Skipping self-review checklist
- Complete redesign instead of incremental update
- Generating specifications without user input
- Not challenging unnecessary scope expansion
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
@@ -0,0 +1,308 @@
---
name: 'step-01-create-delivery'
description: 'Package incremental improvement as Design Delivery (DD-XXX)'
# File References
nextStepFile: './step-02-hand-off.md'
# Data References
deliveryTemplates: '../data/delivery-templates.md'
---
# Step 4: Create Design Delivery
## STEP GOAL:
Package your incremental improvement as a Design Delivery (DD-XXX) for BMad - using the same format as complete flows, but with focused scope and content.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are Freya, a product evolution specialist guiding continuous improvement
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring delivery packaging expertise, user brings design work
- ✅ Maintain organized and detail-oriented tone throughout
### Step-Specific Rules:
- 🎯 Focus only on packaging existing design work into delivery format
- 🚫 FORBIDDEN to design new features or expand scope
- 💬 Approach: Help user organize artifacts, reference specifications, define acceptance criteria
- 📋 Ensure all artifacts are created and linked before packaging
- 📋 Define clear success metrics and rollback criteria
## EXECUTION PROTOCOLS:
- 🎯 Guide user to create DD file following template exactly
- 💾 Help user create matching test scenario (TS-XXX)
- 📖 Reference templates from {deliveryTemplates} for both deliverables
- 🚫 Do not allow vague descriptions or missing artifacts
## CONTEXT BOUNDARIES:
- Available context: Completed step 03 (update designed), specifications created, change scope documented, before/after comparison ready
- Focus: Packaging design work, creating delivery file, creating test scenario
- Limits: Do not design new features, do not modify scope, do not skip metrics
- Dependencies: Requires completed step 03, update specifications, change scope, before/after comparison, all artifacts ready
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Design Delivery Format Overview
**Explain to user:**
All design work uses Design Deliveries (DD-XXX), whether it's:
- ✅ Complete new user flows (large scope)
- ✅ Incremental improvements (small scope)
**The format is the same - only the scope and content differ!**
| Scope | Description | Effort |
|-------|-------------|--------|
| **Large** (New Flows) | Multiple scenarios, complete user flow | Weeks |
| **Small** (Improvements) | Targeted changes, focused improvement | Days |
**User is creating a small scope delivery.**
### 2. Create Design Delivery File
**File:** `deliveries/DD-XXX-description.yaml`
**Numbering:** Ask user for last DD number, continue from there (use leading zeros: DD-001, DD-002, etc.)
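When the `deliveries/` folder is available, the next number can be derived mechanically instead of asked for. A hypothetical sketch, assuming filenames follow the `DD-XXX-description.yaml` convention above:

```python
import re

def next_delivery_id(existing: list[str]) -> str:
    """Next DD-XXX id with leading zeros, given existing delivery filenames."""
    numbers = [int(m.group(1)) for name in existing
               if (m := re.match(r"DD-(\d{3})", name))]
    return f"DD-{(max(numbers, default=0) + 1):03d}"

# e.g. next_delivery_id(["DD-001-onboarding.yaml", "DD-007-search.yaml"])
```

Either way, confirm the number with the user before creating the file.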
**Reference:** Use Design Delivery (Small Scope) template from {deliveryTemplates}
Guide user through each section:
#### 2a. Delivery Metadata
```yaml
delivery:
id: 'DD-XXX'
name: '[Short descriptive name]'
type: 'incremental_improvement'
scope: 'update'
version: 'v2.0'
previous_version: 'v1.0'
created_at: '[timestamp]'
designer: '[User name]'
status: 'ready_for_handoff'
```
#### 2b. Improvement Section
Help user write:
- **summary**: 2-3 sentences (what's changing and why)
- **problem**: What problem does this solve? (with metrics)
- **solution**: What's the solution? (specific changes)
- **expected_impact**: What will improve? (with target metrics)
#### 2c. Changes Section
Guide user to specify:
- **screens_affected**: List screens
- **features_affected**: List features
- **components_new**: New components with IDs and file paths
- **components_modified**: Modified components with changes and file paths
- **components_unchanged**: "All other components remain as-is"
- **what_stays_same**: List unchanged elements
#### 2d. Design Artifacts Section
Help user link all artifacts:
- **specifications**: Path to specifications.md
- **change-scope**: Path to change-scope.md
- **before-after**: Path to before-after.md
- **components**: Paths to new/modified component files
**Verify:** All files exist at specified paths.
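That verification can be scripted. A minimal sketch operating on an already-collected file set; in a real check you would build `existing` from the filesystem (e.g. `{str(p) for p in pathlib.Path(".").rglob("*")}`):

```python
def missing_artifacts(referenced: list[str], existing: set[str]) -> list[str]:
    """Artifact paths referenced by the DD file but absent from the repo.
    `referenced` comes from the design_artifacts section; an empty result
    means every linked file exists."""
    return [p for p in referenced if p not in existing]
```

Any non-empty result blocks packaging until the paths are fixed or the files created.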
#### 2e. Technical Requirements Section
Guide user to document:
- **frontend**: List frontend implementation tasks
- **backend**: List backend changes (if any)
- **data**: List data model changes (if any)
- **integrations**: List integration changes (analytics, etc.)
#### 2f. Acceptance Criteria Section
Help user define testable criteria:
- Each criterion has: id (AC-001, AC-002...), description, verification method
- Criteria must be objective and testable
- Cover new functionality, edge cases, persistence
#### 2g. Metrics Section
Guide user to specify:
- **baseline**: Current metrics with targets
- **measurement_period**: Typically "2 weeks after release"
- **success_threshold**: Minimum acceptable improvements
- **rollback_criteria**: When to rollback if targets not met
**Critical:** Ensure targets are realistic and measurable.
#### 2h. Effort Estimate Section
Help user estimate:
- Design (already done)
- Frontend implementation
- Backend implementation (if any)
- Testing
- Total effort and complexity (Low/Medium/High)
#### 2i. Timeline Section
Work with user to define:
- design_complete (today)
- handoff_date (today or soon)
- development_start (estimated)
- development_complete (estimated)
- testing_complete (estimated)
- release_date (target)
- measurement_end (release + 2 weeks)
#### 2j. Handoff Section
Specify:
- architect: BMad Architect name
- developer: BMad Developer name
- handoff_dialog_required: false (for small updates)
- notes: Brief note about scope
#### 2k. Related Section
Link related files:
- improvement_file (from step 01)
- analytics_report (if exists)
- user_feedback (if exists)
- original_delivery (if this is update to previous DD)
### 3. Create Test Scenario
**File:** `test-scenarios/TS-XXX-description.yaml`
**Use same XXX number as DD-XXX.**
**Reference:** Use Test Scenario (Incremental Improvement) template from {deliveryTemplates}
Guide user to create:
#### 3a. Test Metadata
```yaml
test_scenario:
id: 'TS-XXX'
name: '[Update Name] Validation'
type: 'incremental_improvement'
delivery_id: 'DD-XXX'
created_at: '[timestamp]'
```
#### 3b. Test Focus
List key focus areas:
- New functionality (what changed)
- Regression testing (what should stay the same)
- Edge cases specific to update
- Accessibility
#### 3c. Happy Path Tests
Define tests for new functionality:
- Each test has: id (HP-001, HP-002...), name, steps
- Steps have: action, expected result
- Cover the primary user flow through new feature
#### 3d. Regression Tests
Define tests for existing functionality:
- Each test has: id (REG-001, REG-002...), name, steps
- Verify existing features work exactly as before
- Focus on areas adjacent to changes
#### 3e. Edge Cases
Define edge case tests:
- Each test has: id (EC-001, EC-002...), name, steps
- Cover unusual scenarios (dismissal persistence, multiple devices, cleared cache, etc.)
#### 3f. Accessibility
Define accessibility checks:
- Each test has: id (A11Y-001, A11Y-002...), name, checks
- Screen reader compatibility
- Keyboard navigation
- Focus management
### 4. Review and Verify
**Before proceeding, verify with user:**
- [ ] DD file created with all sections complete
- [ ] All artifact paths valid and files exist
- [ ] Acceptance criteria are testable and objective
- [ ] Metrics and targets are realistic
- [ ] Success and rollback criteria defined
- [ ] Test scenario created with all test types
- [ ] TS file references correct DD-XXX
- [ ] No vague descriptions or missing information
**All must be checked before proceeding.**
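The DD/TS cross-reference check in particular can be automated. A sketch on already-parsed YAML (in practice you would load both files first, e.g. with PyYAML's `yaml.safe_load`; the field names follow the templates above):

```python
def ts_matches_dd(dd: dict, ts: dict) -> bool:
    """Verify the test scenario points at the delivery it validates:
    delivery_id matches, and both ids share the same XXX number."""
    dd_id = dd["delivery"]["id"]           # e.g. 'DD-012'
    ts_id = ts["test_scenario"]["id"]      # e.g. 'TS-012'
    return (ts["test_scenario"]["delivery_id"] == dd_id
            and ts_id.split("-")[1] == dd_id.split("-")[1])
```

A mismatch here usually means the TS file was copied from an earlier delivery and not renumbered.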
### 5. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to step-02-hand-off.md (next step in this activity)"
#### Menu Handling Logic:
- IF C: Update design log, then load, read entire file, then execute {nextStepFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN user selects [C] and delivery packaging is complete will you then load and read fully `{nextStepFile}` to execute and begin step 02 (hand off to BMad).
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Design Delivery file (DD-XXX) created following template exactly
- All sections complete with no placeholders
- Change scope clearly defined in delivery
- All artifacts referenced with valid file paths
- Acceptance criteria defined and testable
- Metrics with baseline, targets, success threshold, and rollback criteria
- Test scenario (TS-XXX) created with all test types
- Happy path, regression, edge case, and accessibility tests defined
- Effort estimate and timeline realistic
- Ready for handoff to BMad
### ❌ SYSTEM FAILURE:
- Vague change description or missing sections
- Missing artifacts or broken file paths
- No success metrics or rollback criteria defined
- Scope too large (not incremental improvement)
- No before/after comparison referenced
- Acceptance criteria not testable or missing
- Test scenario missing or incomplete
- No regression tests defined
- Generating content without user input
- Skipping verification checklist
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
@@ -0,0 +1,244 @@
---
name: 'step-02-hand-off'
description: 'Hand off Design Delivery to BMad for implementation'
# File References
workflowFile: '../workflow.md'
activityWorkflowFile: '../workflow-deploy.md'
---
# Step 5: Hand Off to BMad
## STEP GOAL:
Hand off the Design Delivery (small scope) to BMad Developer for implementation - using simplified handoff for small updates or full dialog for larger ones.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Role Reinforcement:
- ✅ You are Freya, a product evolution specialist guiding continuous improvement
- ✅ If you already have been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring handoff process expertise, user brings design delivery
- ✅ Maintain clear and professional tone throughout
### Step-Specific Rules:
- 🎯 Focus only on clear handoff communication to BMad
- 🚫 FORBIDDEN to modify design or add new requirements
- 💬 Approach: Help user compose clear handoff message, ensure BMad has everything needed
- 📋 Choose appropriate handoff method based on effort estimate
- 📋 Update delivery status after handoff
## EXECUTION PROTOCOLS:
- 🎯 Guide user to choose handoff method (simplified vs full dialog)
- 💾 Help user compose handoff notification with all necessary information
- 📖 Update delivery status in DD file after handoff
- 🚫 Do not allow handoff without all artifacts ready
## CONTEXT BOUNDARIES:
- Available context: Completed step 04 (Design Delivery created), all artifacts ready, test scenario created
- Focus: Handoff communication, status update
- Limits: Do not modify design, do not add requirements, do not skip status update
- Dependencies: Requires completed step 04, DD file created, TS file created, all artifacts ready
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Determine Handoff Method
**Ask user about effort estimate:**
Review the effort estimate in DD-XXX file:
- **< 3 days total effort**: Use simplified handoff
- **> 3 days total effort**: Use full handoff dialog
Guide user to appropriate section below.
### 2. Simplified Handoff (< 3 Days)
**For small, focused updates:**
Help user compose handoff notification:
```
WDS Designer → BMad Developer
Subject: Design Delivery Ready: DD-XXX [Name]
Hi Developer!
Design Delivery ready for implementation.
📦 **Delivery:** DD-XXX [Name]
**Type:** Incremental Improvement
**Scope:** Update (small)
**Effort:** [X] days
**Priority:** [High | Medium | Low]
🎯 **Goal:**
[One sentence describing the improvement]
Example:
"Add inline onboarding to Feature X to increase usage from 15% to 60%."
📊 **Current Problem:**
- [Metric 1]: [Current value]
- [Metric 2]: [Current value]
📈 **Expected Impact:**
- [Metric 1]: [Current] → [Target]
- [Metric 2]: [Current] → [Target]
📁 **Artifacts:**
- Design Delivery: deliveries/DD-XXX-name.yaml
- Specifications: C-UX-Scenarios/XX-update/Frontend/specifications.md
- Before/After: C-UX-Scenarios/XX-update/before-after.md
- Components: D-Design-System/03-Atomic-Components/...
- Test Scenario: test-scenarios/TS-XXX.yaml
✅ **Acceptance Criteria:**
- AC-001: [Description]
- AC-002: [Description]
- AC-003: [Description]
⏱️ **Timeline:**
- Development: [X] days
- Target release: [Date]
- Measurement: 2 weeks after release
❓ **Questions:**
Let me know if you need clarification on anything!
Thanks,
[Your name]
WDS Designer
```
**Work with user to fill in all bracketed values from DD file.**
### 3. Full Handoff Dialog (> 3 Days)
**For larger updates:**
Explain to user:
"For larger updates (> 3 days effort), use full handoff dialog process from Phase 4 [H] Handover, Step 04."
**Key topics to cover in dialog:**
1. Problem and solution overview
2. What's changing vs staying
3. Technical requirements
4. Component specifications
5. Acceptance criteria
6. Success metrics
7. Rollback criteria
**Note:** This is less common in the Product Evolution workflow - most improvements are small in scope.
### 4. BMad Acknowledges
**Help user understand expected response:**
BMad Developer should respond with:
```
BMad Developer → WDS Designer
Subject: Re: Design Delivery Ready: DD-XXX
Received! Thank you.
📋 **My Plan:**
1. Review specifications ([date])
2. Implement changes ([date])
3. Run tests ([date])
4. Notify for validation ([date])
⏱️ **Estimated Completion:** [Date]
❓ **Questions:**
[Any clarification needed]
Thanks!
BMad Developer
```
**If user receives this acknowledgment, proceed to next step.**
**If BMad has questions, help user answer them clearly.**
### 5. Update Delivery Status
**Update the DD-XXX file:**
Help user modify the delivery status section:
```yaml
delivery:
status: 'in_development' # Changed from "ready_for_handoff"
handed_off_at: '[timestamp]'
developer: '[BMad Developer name]'
development_start: '[timestamp or estimate]'
expected_completion: '[timestamp or estimate]'
```
**Verify:** Status updated correctly in DD file.
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu (suggest [T] Acceptance Test)"
#### Menu Handling Logic:
- IF M: Return to {workflowFile} or {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN user selects [M] and handoff is complete will you then return to the activity workflow to suggest next step [T] Acceptance Test (after BMad completes implementation).
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Handoff notification composed with all required information
- Appropriate handoff method chosen (simplified vs full dialog)
- All artifacts referenced in handoff message
- Goal, problem, expected impact clearly stated
- Acceptance criteria included in notification
- Timeline and effort estimate communicated
- BMad Developer acknowledged receipt
- Questions from BMad answered clearly (if any)
- Delivery status updated to 'in_development'
- handed_off_at timestamp recorded
- Developer name and expected completion date recorded
- User available for clarification questions during development
### ❌ SYSTEM FAILURE:
- Handoff without all artifacts ready
- Vague or incomplete handoff message
- Missing acceptance criteria or metrics
- No timeline or effort estimate
- Delivery status not updated after handoff
- Not responding to BMad's questions
- Adding new requirements during handoff (scope creep)
- Modifying design after handoff without updating DD file
- Generating handoff message without user input
- Not recording developer name or timeline
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
@@ -0,0 +1,337 @@
---
name: 'step-01-validate'
description: 'Validate that Design Delivery was implemented correctly'
# File References
workflowFile: '../workflow.md'
activityWorkflowFile: '../workflow-test.md'
# Data References
deliveryTemplates: '../data/delivery-templates.md'
---
# Step 6: Validate Implementation
## STEP GOAL:
Validate that the Design Delivery (small scope) was implemented correctly according to specifications and acceptance criteria - focusing on new functionality and regression testing.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are Freya, a product evolution specialist guiding continuous improvement
- ✅ If you have already been given a name, communication_style and persona, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring testing methodology expertise, user brings product knowledge
- ✅ Maintain thorough and quality-focused tone throughout
### Step-Specific Rules:
- 🎯 Focus only on validating implementation against specifications
- 🚫 FORBIDDEN to approve without testing or skip regression tests
- 💬 Approach: Guide systematic testing, document all results, ensure quality
- 📋 Test both new functionality AND existing functionality (regression)
- 📋 Only approve when all acceptance criteria pass
## EXECUTION PROTOCOLS:
- 🎯 Guide user through test scenario systematically
- 💾 Help user document test results clearly
- 📖 Reference templates from {deliveryTemplates} for validation report
- 🚫 Do not allow approval without complete testing
## CONTEXT BOUNDARIES:
- Available context: Completed step 05 (handed off to BMad), BMad notified implementation complete, test scenario file ready
- Focus: Systematic testing, results documentation, approval/rejection decision
- Limits: Do not skip tests, do not approve with failing tests, do not modify design
- Dependencies: Requires completed step 05, BMad implementation complete, TS-XXX file ready
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. BMad Notification
**Wait for BMad to notify user:**
Expected notification format:
```
BMad Developer → WDS Designer
Subject: Design Delivery Complete: DD-XXX
Design Delivery DD-XXX is complete and ready for validation.
✅ **Implemented:** [Features/changes]
📦 **Build:** v2.1.0
🌐 **Environment:** Staging
📝 **Test Scenario:** test-scenarios/TS-XXX.yaml
Ready for your validation!
```
**Verify user has received this notification before proceeding.**
### 2. Review Test Scenario
**Load the test scenario file:**
Guide user to open: `test-scenarios/TS-XXX.yaml`
**Review test focus areas:**
- New functionality (what changed)
- Regression testing (what should stay the same)
- Edge cases specific to the update
- Accessibility
**Explain:** This is similar to Phase 5 [T] Acceptance Testing, but focused on the specific update.
### 3. Run Tests Systematically
#### 3a. Test New Functionality (Happy Path)
**Work through each happy path test:**
For each test (HP-001, HP-002, etc.):
```markdown
## New Functionality Tests
### HP-001: [Test name from TS file]
- Action: [Perform action from test]
- Expected: [Expected result from test]
- Actual: [What actually happened - USER PROVIDES]
- Result: [PASS | FAIL - based on match]
```
**Guide user through each test, document results.**
#### 3b. Test for Regressions
**Work through each regression test:**
For each test (REG-001, REG-002, etc.):
```markdown
## Regression Tests
### REG-001: [Test name from TS file]
- Action: [Use existing feature from test]
- Expected: [Works as before from test]
- Actual: [What happened - USER PROVIDES]
- Result: [PASS | FAIL - based on match]
```
**Critical:** Ensure existing functionality unchanged.
#### 3c. Test Edge Cases
**Work through each edge case test:**
For each test (EC-001, EC-002, etc.):
```markdown
## Edge Case Tests
### EC-001: [Test name from TS file]
- Action: [Perform edge case scenario]
- Expected: [Expected handling]
- Actual: [What happened - USER PROVIDES]
- Result: [PASS | FAIL - based on match]
```
**Important:** Edge cases often reveal issues.
#### 3d. Test Accessibility
**Work through accessibility checks:**
For each test (A11Y-001, A11Y-002, etc.):
```markdown
## Accessibility Tests
### A11Y-001: [Test name from TS file]
- Check: [Accessibility requirement]
- Expected: [Accessible behavior]
- Actual: [What happened - USER PROVIDES]
- Result: [PASS | FAIL - based on compliance]
```
**Essential:** Product must be accessible.
### 4. Document Results
**Create validation report:**
**File:** `test-reports/TR-XXX-DD-XXX-validation.md`
**Reference:** Use Validation Report template from {deliveryTemplates}
Help user create report with:
**Result:** [PASS | FAIL]
**New Functionality:**
- Summary of all HP tests with results
- Any notes or observations
**Regression Testing:**
- Summary of all REG tests with results
- Confirmation existing features unchanged
**Edge Cases:**
- Summary of all EC tests with results
**Accessibility:**
- Summary of all A11Y tests with results
**Issues Found:**
- Total count
- List each issue if any (ID, description, severity)
**Recommendation:**
- [APPROVED | NOT APPROVED]
- Brief explanation
### 5. Send Results to BMad
#### 5a. If APPROVED (All Tests Passed)
Help user compose:
```
WDS Designer → BMad Developer
Subject: DD-XXX Validation Complete - APPROVED ✅
✅ **Status:** APPROVED - Ready to ship!
📊 **Test Results:**
- New functionality: All tests passed
- Regression tests: No issues
- Edge cases: All handled correctly
- Accessibility: Compliant
- Issues found: 0
📁 **Validation Report:** test-reports/TR-XXX-DD-XXX-validation.md
🚀 **Next Steps:** Deploy to production!
Great work!
```
**Proceed to step 6 (update delivery status).**
#### 5b. If ISSUES FOUND (Any Tests Failed)
Help user compose:
```
WDS Designer → BMad Developer
Subject: DD-XXX Validation Complete - Issues Found
❌ **Status:** NOT APPROVED (issues found)
📊 **Test Results:**
- New functionality: [X passed, Y failed]
- Regression tests: [X passed, Y failed]
- Edge cases: [X passed, Y failed]
- Accessibility: [X passed, Y failed]
- Issues found: [Total count]
🐛 **Issues:**
- ISS-XXX: [Issue description]
- ISS-XXX: [Issue description]
📁 **Validation Report:** test-reports/TR-XXX-DD-XXX-validation.md
🔧 **Next Steps:** Please fix issues, notify for retest.
```
**Wait for BMad to fix issues, then repeat testing.**
### 6. Update Delivery Status
**If approved:**
Help user update DD-XXX file:
```yaml
delivery:
status: 'complete'
validated_at: '[timestamp]'
approved_by: '[User name]'
ready_for_production: true
```
**If issues found:**
Help user update DD-XXX file:
```yaml
delivery:
status: 'in_testing'
issues_found: [count]
retest_required: true
```
**Verify:** Status updated correctly in DD file.
### 7. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu (suggest [P] Deploy if approved, or [A] Analyze for next cycle)"
#### Menu Handling Logic:
- IF M: Return to {workflowFile} or {activityWorkflowFile}
- IF Any other comments or queries: help user respond then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN user selects [M] and validation is complete will you then return to the activity workflow. If approved, suggest [P] Deploy to production. If this completes an improvement cycle, suggest [A] Analyze for next improvement opportunity.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All test types executed (happy path, regression, edge cases, accessibility)
- Results documented clearly for each test
- Validation report created following template
- BMad notified with clear status (approved or issues found)
- If approved: delivery status updated to 'complete', ready_for_production: true
- If issues: delivery status updated to 'in_testing', issues documented
- No tests skipped or omitted
- Regression tests confirm existing functionality unchanged
- Only approved when all acceptance criteria pass
- Validation report filed in test-reports directory
### ❌ SYSTEM FAILURE:
- Approving without executing all tests
- Skipping regression tests (critical failure)
- Not documenting test results
- Approving with failing tests
- Not notifying BMad of results
- Not creating validation report
- Delivery status not updated after validation
- Vague issue descriptions (if issues found)
- Testing only new functionality, ignoring regressions
- Not testing accessibility
- Generating test results without user actually testing
- No validation report created
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
@@ -0,0 +1,71 @@
---
name: analyze-product
description: Understand current product state and find improvement targets
borrows_from: Phase 3 (scenarios)
---
# Analyze Product
**Goal:** Understand the existing product, identify what needs improving, and prioritize targets.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
## Steps
### Step 1: Load Product Context
Gather existing product information:
1. Read project configuration and any existing WDS artifacts
2. If the product has a live URL → analyze current state (structure, pages, flows)
3. If codebase available → scan for tech stack, component patterns, design tokens
4. Document what exists: pages, navigation, key user flows
Present a **Product Snapshot** — current state summary.
### Step 2: Identify Improvement Targets
With the user, identify what needs work:
1. **User feedback** — What are users struggling with?
2. **Business goals** — What metrics need improvement?
3. **Technical debt** — What's fragile or outdated?
4. **Visual gaps** — What looks inconsistent or dated?
5. **Competitor gaps** — What are competitors doing better?
Create a prioritized list of improvement targets.
### Step 3: Select Target
From the prioritized list, pick ONE target for this improvement cycle:
```
Improvement targets (prioritized):
1. [Target] — [Impact] — [Effort]
2. [Target] — [Impact] — [Effort]
...
Which target should we tackle first?
```
### Step 4: Document Analysis
Save analysis to `{output_folder}/evolution/analysis/`:
- Product snapshot
- Improvement targets with priorities
- Selected target with rationale
---
## AFTER COMPLETION
1. Update design log
2. Suggest next action
3. Return to activity menu
@@ -0,0 +1,93 @@
---
name: deploy
description: Create PR and deliver the improvement to the team
borrows_from: Phase 4 [H] (design delivery)
---
# Deploy
**Goal:** Package the tested improvement as a PR and deliver it to the development team.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
## Steps
### Step 1: Pre-Deploy Checklist
Verify everything is ready:
- [ ] All acceptance criteria pass (from [T] test report)
- [ ] Branch is clean (no uncommitted changes)
- [ ] Commits are logical and well-described
- [ ] No unrelated changes included
- [ ] Documentation updated (if applicable)
### Step 2: Create Pull Request
Create a PR from the evolution branch:
```
gh pr create --title "[Improvement]: [Brief description]" --body "..."
```
PR body includes:
- **What changed** — Summary of the improvement
- **Why** — Link to scenario and analysis
- **How to test** — Steps from the test report
- **Screenshots** — Before/after if visual change
- **Acceptance criteria** — Checklist from spec
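One way to script this is to assemble the body first, then pass it to `gh`. This is a sketch: the section contents are placeholders, and the final `gh` invocation is commented out since it requires a repository and the GitHub CLI.

```shell
# Assemble a PR body matching the sections above (placeholder content).
pr_body=$(cat <<'EOF'
## What changed
Summary of the improvement

## Why
Link to scenario and analysis

## How to test
Steps from the test report

## Acceptance criteria
- [ ] Each criterion from the spec
EOF
)
# gh pr create --title "Improvement: brief description" --body "$pr_body"
echo "$pr_body" | head -n 1   # prints: ## What changed
```

Keeping the body in a variable (or a file passed via `--body-file`) makes it easy to review before the PR is created.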
### Step 3: Package Delivery Context
Create a delivery summary at `{output_folder}/evolution/deliveries/`:
```markdown
# Delivery: [Scenario Name]
## PR
[Link to PR]
## Artifacts
- Analysis: [link]
- Scenario: [link]
- Specification: [link]
- Test Report: [link]
## Change Summary
[What was changed and why]
## Impact
[Expected improvement based on success criteria]
## Monitoring
[What to watch after deployment — metrics, error rates, user feedback]
```
### Step 4: Notify Team
If the project uses design log tracking or team notifications:
1. Create completion notification
2. Reference all artifacts (analysis → scenario → spec → test → PR)
3. Include any monitoring instructions
### Step 5: Plan Next Cycle
After deployment:
1. Archive this improvement cycle
2. Review remaining improvement targets from [A] analysis
3. Suggest next target or new analysis round
---
## AFTER COMPLETION
1. Update design log with completed improvement
2. Return to Phase 8 Activity Menu for next cycle
@@ -0,0 +1,89 @@
---
name: design-solution
description: Sketch and specify the update for a scoped improvement
borrows_from: Phase 4 (UX design)
---
# Design Solution
**Goal:** Design the solution for a scoped improvement — from quick sketch to development-ready specification.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
## Steps
### Step 1: Load Scenario
Read the scenario from [S] Scope Improvement:
- Target description
- Current vs desired state
- User journey
- Pages and components affected
### Step 2: Choose Design Approach
Based on the change scope, pick an approach:
- **Quick fix** — Small visual/copy change → Skip to Step 4 (specify directly)
- **Sketch first** — Layout or flow change → Sketch the before/after, then specify
- **Generate design** — Significant visual change → Use Phase 6 asset generation tools
### Step 3: Design the Change
For sketch or generate approaches:
1. **Before snapshot** — Capture or describe the current view
2. **After concept** — Sketch, generate, or describe the desired view
3. **Diff view** — Explicitly mark what changes: layout, components, content, behavior
4. **Edge cases** — What happens on mobile? With long text? With empty state?
Present design to user for feedback. Iterate until approved.
### Step 4: Write Specification
Create a mini page-spec at `{output_folder}/evolution/specs/`:
```markdown
# [Page/View Name] — Update Specification
## Change Summary
[One paragraph describing the change]
## Before
[Current state description or reference]
## After
[Detailed specification of the new state]
## Components
[List each component with its new properties/behavior]
## Responsive Behavior
[How the change adapts across breakpoints]
## Acceptance Criteria
[Testable criteria from the scenario]
```
### Step 5: Approve Specification
Present the specification for user sign-off:
- Does it match the scenario intent?
- Are acceptance criteria testable?
- Is scope still manageable?
---
## AFTER COMPLETION
1. Update design log
2. Suggest next action
3. Return to activity menu
@@ -0,0 +1,80 @@
---
name: implement
description: Code the designed improvement in a new branch
borrows_from: Phase 5 (development)
---
# Implement
**Goal:** Implement the approved design in code, working in a dedicated branch like a developer on the team.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
## Steps
### Step 1: Load Specification
Read the specification from [D] Design Solution:
- Change summary
- Component specifications
- Acceptance criteria
- Pages affected
### Step 2: Create Branch
Create a feature branch for this improvement:
```
git checkout -b evolution/[scenario-name]
```
Naming convention: `evolution/` prefix + kebab-case scenario name.
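The naming convention can be automated with a small shell helper. This is a sketch; the scenario title and resulting branch name are illustrative:

```shell
# Derive a kebab-case branch name from a scenario title (illustrative values).
scenario_title="Improve Onboarding Progress"
branch="evolution/$(echo "$scenario_title" | tr '[:upper:]' '[:lower:]' | tr -s ' ' '-')"
echo "$branch"   # prints: evolution/improve-onboarding-progress
# git checkout -b "$branch"
```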
### Step 3: Understand Current Code
Before writing code, understand what exists:
1. Locate the files for affected pages/views
2. Read current component implementations
3. Identify the tech stack patterns (framework, styling approach, state management)
4. Note any existing design tokens or theme configuration
Present a brief implementation plan:
- Which files will change
- What new files are needed (if any)
- Estimated complexity
### Step 4: Implement Changes
Write the code changes following the specification:
1. **Follow existing patterns** — Match the codebase's conventions, don't introduce new ones
2. **Minimal changes** — Only change what the specification calls for
3. **Commit incrementally** — One logical commit per change unit
4. **Test as you go** — Verify each change works before moving on
For each file changed, explain what was modified and why.
### Step 5: Self-Review
Before handing off:
1. Diff all changes: `git diff main...evolution/[scenario-name]` (the three-dot range compares against the merge base, showing only this branch's changes)
2. Check against specification: every acceptance criterion addressed?
3. Check for unintended side effects: other pages/components still work?
4. Clean up: no debug code, no commented-out blocks, no unrelated changes
---
## AFTER COMPLETION
1. Update design log
2. Suggest next action
3. Return to activity menu
@@ -0,0 +1,90 @@
---
name: scope-improvement
description: Create a focused scenario for a specific product improvement
borrows_from: Phase 3 (scenarios)
---
# Scope Improvement
**Goal:** Turn an improvement target into a concrete scenario — one focused change with clear before/after.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
## Steps
### Step 1: Load Analysis
Read the analysis from [A] Analyze Product:
- Product snapshot
- Selected improvement target
- Context and rationale
### Step 2: Define the Change
Scope the improvement as a mini-scenario:
1. **Which view/page** needs changing? (Be specific — one page, one flow section)
2. **Current state** — What does the user see today? What's wrong?
3. **Desired state** — What should the user experience after the change?
4. **Success criteria** — How do we know it worked? (measurable if possible)
### Step 3: Map the User Journey
For the selected view, map the micro-journey:
1. **Entry point** — How does the user arrive at this view?
2. **Current flow** — What happens step by step today?
3. **Pain points** — Where exactly does the experience break down?
4. **Proposed flow** — What should happen step by step after the change?
### Step 4: Estimate Scope
Assess the change:
- **Pages affected**: List specific pages/views
- **Components touched**: Which UI elements change?
- **Data changes**: Any API or data model changes?
- **Risk level**: Low (visual only) / Medium (behavior change) / High (structural change)
### Step 5: Write Scenario
Create a scenario document at `{output_folder}/evolution/scenarios/`:
```markdown
# [Scenario Name]
## Target
[What we're improving and why]
## Current State
[What users experience today]
## Desired State
[What users should experience after]
## User Journey
[Step-by-step flow]
## Success Criteria
[How we measure success]
## Scope
[Pages, components, risk level]
```
Present for user approval.
---
## AFTER COMPLETION
1. Update design log
2. Suggest next action
3. Return to activity menu
@@ -0,0 +1,88 @@
---
name: acceptance-test
description: Test the implementation against the specification
borrows_from: Phase 5 [T] (acceptance testing)
---
# Acceptance Test
**Goal:** Validate the implementation against the specification's acceptance criteria before deploying.
---
## INITIALIZATION
### Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
## Steps
### Step 1: Load Test Context
Gather everything needed for testing:
1. Read specification from [D] Design Solution
2. Read scenario from [S] Scope Improvement
3. Review implementation diff from [I] Implement
4. Extract acceptance criteria into a test checklist
### Step 2: Prepare Test Environment
Ensure the implementation is running and testable:
1. Confirm branch is checked out: `evolution/[scenario-name]`
2. Start local development server if needed
3. Navigate to the affected page/view
4. Note the URL and any required test data
### Step 3: Execute Tests
For each acceptance criterion:
| # | Criterion | Steps | Expected | Actual | Pass? |
|---|-----------|-------|----------|--------|-------|
| 1 | [From spec] | [How to test] | [Expected result] | [What happened] | Y/N |
| 2 | ... | ... | ... | ... | ... |
Also test:
- **Responsive**: Check all breakpoints defined in spec
- **Edge cases**: Empty states, long content, error states
- **Regression**: Verify nothing else broke on the page
- **Cross-browser**: If specified in project requirements
### Step 4: Document Results
Create test report at `{output_folder}/evolution/test-reports/`:
```markdown
# Test Report: [Scenario Name]
## Summary
[X/Y criteria passed]
## Results
[Test table from Step 3]
## Issues Found
[List any failures with severity and description]
## Recommendation
[Pass / Pass with notes / Fail — needs rework]
```
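Scaffolding the report file takes only a few lines of shell. This is a sketch: the directory stands in for `{output_folder}/evolution/test-reports/`, and the file name and contents are illustrative.

```shell
# Create the report directory and a skeleton report (illustrative paths).
report_dir="${TMPDIR:-/tmp}/evolution/test-reports"
mkdir -p "$report_dir"
report_file="$report_dir/improve-onboarding-progress.md"
cat > "$report_file" <<'EOF'
# Test Report: Improve Onboarding Progress

## Summary
6/6 criteria passed

## Recommendation
Pass
EOF
echo "wrote $report_file"
```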
### Step 5: Handle Failures
If tests fail:
- **Minor issues** → Fix in the same branch, retest
- **Design issues** → Route back to [D] Design Solution
- **Scope creep** → Log as separate improvement target for next cycle
---
## AFTER COMPLETION
1. Update design log
2. Suggest next action
3. Return to activity menu
@@ -0,0 +1,99 @@
---
name: product-evolution
description: Brownfield improvements — the full WDS pipeline in miniature for existing products
web_bundle: true
---
# Phase 8: Product Evolution
**Goal:** Improve existing products through targeted, incremental changes — running the full WDS pipeline in miniature for each improvement.
**Your Role:** You work like a developer on the team. Pick a view that needs improving, scope it as a scenario, design the solution, implement it in a branch, test it, and deploy. Each cycle is one focused improvement.
---
## WORKFLOW ARCHITECTURE
Phase 8 is **menu-driven**, not linear. Each activity is a compressed version of a full WDS phase.
### Core Principles
- **Brownfield First**: You're joining an existing product, not building from scratch
- **Focused Scope**: One view, one scenario, one improvement at a time
- **Full Pipeline in Miniature**: Analyze → Scope → Design → Implement → Test → Deploy
- **Branch-Based**: Every change lives in its own branch until deployed
- **Kaizen**: Small, incremental, data-driven — each cycle informs the next
### Step Processing Rules
1. **READ COMPLETELY**: Always read the entire step file before action
2. **FOLLOW SEQUENCE**: Execute all sections in order
3. **WAIT FOR INPUT**: Halt at decision points and wait for user
4. **SAVE STATE**: Update design log when completing steps
---
## INITIALIZATION
### 1. Configuration Loading
Load and read full config from `{project-root}/_bmad/wds/config.yaml` and resolve:
- `project_name`, `output_folder`, `user_name`
- `communication_language`, `document_output_language`
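For a quick sanity check, flat keys like these can be resolved with plain shell. This is a sketch: the sample config is created inline for illustration, and a real YAML parser is safer for anything nested or quoted.

```shell
# Illustrative config (in a real project this file already exists).
mkdir -p _bmad/wds
printf 'project_name: my-app\noutput_folder: docs/design\n' > _bmad/wds/config.yaml

# Resolve a flat key (fragile: no nesting, no quoting support).
output_folder=$(grep '^output_folder:' _bmad/wds/config.yaml | sed 's/^output_folder:[[:space:]]*//')
echo "$output_folder"   # prints: docs/design
```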
### 2. Design Log
Read `{output_folder}/_progress/00-design-log.md`. Check Current and Backlog for context.
### 3. Activity Menu
```
What would you like to do?
[A] Analyze Product — Understand current state, find improvement targets
[S] Scope Improvement — Create a scenario for a specific update
[D] Design Solution — Sketch and specify the update
[I] Implement — Code in a new branch
[T] Acceptance Test — Test against spec
[P] Deploy — PR and deliver to the team
```
### Activity Routing
| Choice | Workflow File | Steps | Borrows From |
|--------|--------------|-------|--------------|
| [A] | workflow-analyze.md | steps-a/ | Phase 3 (scenarios) |
| [S] | workflow-scope.md | Inline | Phase 3 (scenarios) |
| [D] | workflow-design.md | steps-d/ | Phase 4 (UX design) |
| [I] | workflow-implement.md | Inline | Phase 5 (development) |
| [T] | workflow-test.md | steps-t/ | Phase 5 [T] (testing) |
| [P] | workflow-deploy.md | steps-p/ | Phase 4 [H] (delivery) |
---
## REFERENCE CONTENT
| Location | Purpose |
|----------|---------|
| `data/kaizen-principles.md` | Kaizen philosophy and patterns |
| `data/existing-product-guide.md` | Brownfield project guide |
| `data/context-templates.md` | Context gathering templates |
| `data/design-templates.md` | Design update templates |
| `data/delivery-templates.md` | Delivery packaging templates |
| `data/monitoring-templates.md` | Monitoring and impact templates |
---
## OUTPUT
- Scenarios: `{output_folder}/evolution/scenarios/`
- Specifications: `{output_folder}/evolution/specs/`
- Test Reports: `{output_folder}/evolution/test-reports/`
- Git branches with implementation
---
## AFTER COMPLETION
1. Update design log
2. Suggest next improvement or return to Activity Menu