initial commit

2026-03-16 19:54:53 -04:00
commit bfe0e01254
3341 changed files with 483939 additions and 0 deletions


@@ -0,0 +1,182 @@
---
name: 'step-01-prepare'
description: 'Gather all materials and set up testing environment before starting validation'
# File References
nextStepFile: './step-02-execute.md'
---
# Step 1: Prepare for Acceptance Testing
## STEP GOAL:
Gather all materials and set up your testing environment before starting validation.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on gathering test materials, setting up environment, preparing test data, and creating testing workspace
- 🚫 FORBIDDEN to begin executing tests — that is the next step
- 💬 Approach: Systematically walk through preparation with user, ensuring nothing is missed
- 📋 All materials must be gathered and environment verified before testing begins
## EXECUTION PROTOCOLS:
- 🎯 All materials gathered, environment set up, test data prepared, workspace created
- 💾 Document preparation status in dialog file
- 📖 Reference test scenario file and design delivery file
- 🚫 Do not execute any tests during preparation
## CONTEXT BOUNDARIES:
- Available context: Test scenario file, design delivery file, scenario specifications, design system specs
- Focus: Preparation — materials, environment, data, workspace, time estimation
- Limits: No test execution
- Dependencies: Test scenario file and design delivery must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Gather Materials
#### Test Scenario
- Load test scenario file: `test-scenarios/TS-XXX.yaml`
- Review all test cases
- Understand success criteria
- Note any special setup needed
#### Design Delivery
- Load Design Delivery file: `deliveries/DD-XXX.yaml`
- Review user value and success criteria
- Review acceptance criteria
- Understand what "done" looks like
#### Scenario Specifications
- Load all scenario specs from `C-UX-Scenarios/`
- Review each scenario specification
- Note design details
- Understand expected behavior
#### Design System Specs
- Load design system specs from `D-Design-System/`
- Review component specifications
- Review design tokens
- Note exact colors, sizes, spacing
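Later steps iterate the sections of the loaded scenario. As a minimal sketch, assuming a parsed TS-XXX scenario whose section names mirror this workflow (the real schema is not guaranteed):

```python
def normalize_scenario(scenario: dict) -> dict:
    """Ensure the sections that Steps 2-5 iterate over always exist.

    The section names below follow this workflow's terminology; the
    actual TS-XXX.yaml schema is an assumption.
    """
    out = dict(scenario)
    for section in ("happy_path", "error_states", "edge_cases",
                    "design_system_checks"):
        out.setdefault(section, [])  # missing sections become empty lists
    return out
```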
### 2. Set Up Environment
#### Access the Build
- Staging URL, credentials, platform
- Install build if needed (TestFlight, APK, or web staging)
#### Prepare Test Devices
- Primary device: charged, WiFi, screen recording enabled, screenshot tools ready
- Secondary device (if needed): different platform, screen size, OS version
#### Set Up Tools
- Screen recording (QuickTime, built-in, OBS Studio)
- Screenshot tools with annotation
- Markdown editor and note-taking
- Accessibility tools (VoiceOver/TalkBack, contrast checker, zoom)
### 3. Prepare Test Data
Create test accounts and prepare test data:
- Valid and invalid emails
- Strong and weak passwords
- Special characters
- Edge case data (long names, etc.)
### 4. Create Testing Workspace
Create file structure:
```
testing/DD-XXX/
├── screenshots/
├── screen-recordings/
├── notes.md
└── issues-found.md
```
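The tree above can be created with a short helper; the layout matches the structure shown, while the function name is illustrative:

```python
from pathlib import Path

def create_workspace(delivery_id: str, root: str = "testing") -> Path:
    """Create the testing workspace tree for one delivery."""
    base = Path(root) / delivery_id
    for sub in ("screenshots", "screen-recordings"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    for note in ("notes.md", "issues-found.md"):
        (base / note).touch()  # empty files, ready for notes
    return base
```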
### 5. Review Test Plan
Understand what you are testing:
- Happy Path Tests: count, flows, expected results
- Error State Tests: count, scenarios, error messages
- Edge Case Tests: count, unusual scenarios, expected behavior
- Design System Validation: components to check, specifications
- Accessibility Tests: screen reader, contrast, touch targets
### 6. Time Estimate
Calculate total testing time with 20% buffer.
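The buffer arithmetic is simple; a sketch, where the category names and per-test durations are illustrative assumptions:

```python
def estimate_testing_minutes(counts: dict, minutes_per_test: dict,
                             buffer: float = 0.20) -> float:
    """Sum per-category testing time, then add a buffer (20% by default)."""
    raw = sum(counts[cat] * minutes_per_test[cat] for cat in counts)
    return raw * (1 + buffer)
```

For example, 5 happy-path tests at 10 minutes plus 4 edge cases at 5 minutes gives 70 raw minutes, or 84 with the buffer.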
### 7. Verify Checklist
- [ ] Test scenario loaded and reviewed
- [ ] Design Delivery loaded and reviewed
- [ ] All scenario specs loaded
- [ ] Design system specs loaded
- [ ] Build accessible and working
- [ ] Test devices ready
- [ ] Tools set up (recording, screenshots, notes)
- [ ] Test data prepared
- [ ] Workspace created
- [ ] Time blocked on calendar
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Execute"
#### Menu Handling Logic:
- IF C: Update the design log, then load {nextStepFile}, read the entire file, and execute it
- IF any other comment or query: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- The user can chat or ask questions; always respond, then redisplay the menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all materials are gathered, environment is set up, and workspace is ready will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All materials gathered
- Environment set up and accessible
- Test devices ready
- Tools configured
- Test data prepared
- Workspace created
- Test plan reviewed
- Time estimated
### ❌ SYSTEM FAILURE:
- Starting testing without materials
- Cannot access staging environment
- Test devices not ready
- No screen recording capability
- No test data prepared
- No time estimate
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,175 @@
---
name: 'step-02-execute'
description: 'Execute all test scenarios defined in the test scenario file and document results'
# File References
nextStepFile: './step-03-document-issues.md'
---
# Step 2: Run Test Scenarios
## STEP GOAL:
Execute all test scenarios defined in the test scenario file and document results.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on executing tests in order: happy path, error states, edge cases, design system validation, and accessibility
- 🚫 FORBIDDEN to skip test categories or rush through tests
- 💬 Approach: Execute each test category methodically, documenting results as you go
- 📋 Happy path must work before moving to error states and edge cases
## EXECUTION PROTOCOLS:
- 🎯 All test categories executed with results documented
- 💾 Document results using templates from data/test-result-templates.md
- 📖 Reference test scenario file for each test case
- 🚫 Do not skip any test category
## CONTEXT BOUNDARIES:
- Available context: All prepared materials from Step 1; test scenario file
- Focus: Test execution and result documentation
- Limits: No issue creation yet — just document pass/fail
- Dependencies: Step 1 must be complete (preparation done)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Happy Path Tests
For each test in TS-XXX.yaml `happy_path` section:
1. Start screen recording
2. Perform action exactly as written
3. Observe result, compare to expected
4. Compare to design reference
5. Mark PASS or FAIL
6. Take screenshot if FAIL (naming: `HP-XXX-step-X-FAIL.png`)
7. Document using template
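The seven steps above reduce to a simple loop. A sketch, where the test-case fields (`id`, `action`, `expected`) and the `perform` callable are assumptions about the TS-XXX schema:

```python
def run_happy_path(tests: list, perform) -> list:
    """Execute each happy-path test and record PASS/FAIL with the observed result."""
    results = []
    for test in tests:
        observed = perform(test["action"])     # step 2: perform the action
        passed = observed == test["expected"]  # step 3: compare to expected
        results.append({
            "id": test["id"],
            "result": "PASS" if passed else "FAIL",
            "observed": observed,
            # step 6: a FAIL would also get a screenshot, named per the
            # HP-XXX-step-X-FAIL.png convention above
        })
    return results
```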
### 2. Error State Tests
For each test in TS-XXX.yaml `error_states` section:
1. Set up error condition using test data
2. Trigger the error
3. Verify error handling (message, styling, recovery)
4. Check against design spec
5. Document results using template
### 3. Edge Case Tests
For each test in TS-XXX.yaml `edge_cases` section:
1. Set up unusual scenario
2. Perform edge case action
3. Verify graceful handling (no crash, smooth UX)
4. Document results using template
### 4. Design System Validation
For each component in TS-XXX.yaml `design_system_checks` section:
1. Locate all component instances
2. Measure dimensions (height, width, padding)
3. Check colors against design tokens
4. Check typography (size, weight, line height)
5. Check spacing
6. Check all states (default, hover, active, disabled, focus)
7. Document results using template
### 5. Accessibility Tests
#### Screen Reader Testing
- Enable VoiceOver (iOS) or TalkBack (Android)
- Navigate through flow using only screen reader
- Check button labels, form field labels, error announcements
#### Color Contrast Testing
- Use contrast checker tool
- Body text: 4.5:1 minimum (WCAG AA)
- Large text: 3:1 minimum
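The two thresholds come from the WCAG 2.x contrast-ratio formula, which can be computed directly from sRGB values using the standard relative-luminance definition:

```python
def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2) -> float:
    """WCAG 2.x contrast ratio between two sRGB colors (1.0 to 21.0)."""
    def luminance(rgb):
        r, g, b = (_channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white yields the maximum ratio of 21:1; body text passes AA when the ratio is at least 4.5:1.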
#### Touch Target Testing
- Measure all interactive elements
- Minimum: 44x44px
- Minimum 8px spacing between targets
### 6. Compile Overall Summary
After all tests complete, create overall test summary:
- Overall result (PASS/FAIL)
- Test coverage percentages
- Issues by severity
- Issues by category
- Next steps
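A sketch of rolling results into the summary fields; the record shapes are assumptions, and the full PASS/FAIL determination in Step 4 also weighs issue severity:

```python
from collections import Counter

def summarize(results: list, issues: list) -> dict:
    """Compile test results and issues into the overall summary fields."""
    total = len(results)
    passed = sum(1 for r in results if r["result"] == "PASS")
    return {
        "overall": "PASS" if passed == total else "FAIL",
        "coverage_pct": round(100 * passed / total, 1) if total else 0.0,
        "by_severity": dict(Counter(i["severity"] for i in issues)),
    }
```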
### 7. Verify Checklist
- [ ] All happy path tests executed
- [ ] All error state tests executed
- [ ] All edge case tests executed
- [ ] Design system validation complete
- [ ] Accessibility tests complete
- [ ] All results documented
- [ ] Screenshots captured for issues
- [ ] Screen recordings saved
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Document Issues"
#### Menu Handling Logic:
- IF C: Update the design log, then load {nextStepFile}, read the entire file, and execute it
- IF any other comment or query: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- The user can chat or ask questions; always respond, then redisplay the menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all test categories are executed and results documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All happy path tests executed
- All error state tests executed
- All edge case tests executed
- Design system validation complete
- Accessibility tests complete
- All results documented
- Screenshots captured for issues
- Screen recordings saved
### ❌ SYSTEM FAILURE:
- Skipping test categories
- Not documenting results
- No screenshots for issues
- Not checking design references
- Rushing through tests
- Not measuring design system compliance
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,138 @@
---
name: 'step-03-document-issues'
description: 'Document all problems found during testing as issue tickets'
# File References
nextStepFile: './step-04-report.md'
---
# Step 3: Create Issues
## STEP GOAL:
Document all problems found during testing as issue tickets that can be fixed.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on creating issue files with proper severity, description, screenshots, and design references
- 🚫 FORBIDDEN to create vague or non-actionable issues
- 💬 Approach: Help user document each issue with specific details, design references, and actionable recommendations
- 📋 Every issue must include severity, steps to reproduce, expected vs actual, screenshot, and design reference
## EXECUTION PROTOCOLS:
- 🎯 All issues documented as numbered issue files with proper severity
- 💾 Create issue files in `issues/ISS-XXX-description.md` and issues summary
- 📖 Reference test results from Step 2 and issue templates
- 🚫 Do not create vague descriptions
## CONTEXT BOUNDARIES:
- Available context: Test results from Step 2; screenshots; design specifications
- Focus: Issue creation — specific, actionable, with design references
- Limits: No fixing issues — just documenting them
- Dependencies: Step 2 must be complete (all tests executed)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Issue Creation Process
For each issue found, create issue file: `issues/ISS-XXX-description.md`
Numbering: Start at ISS-001, increment for each issue, use leading zeros.
Reference: [data/issue-templates.md](data/issue-templates.md) for complete issue template
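The numbering rule can be automated; a sketch, assuming issue files live in `issues/` and follow the `ISS-XXX-description.md` pattern shown above:

```python
import re
from pathlib import Path

def next_issue_id(issues_dir: str = "issues") -> str:
    """Return the next ISS-XXX id with leading zeros (ISS-001, ISS-002, ...)."""
    pattern = re.compile(r"ISS-(\d{3})")
    numbers = [
        int(m.group(1))
        for p in Path(issues_dir).glob("ISS-*.md")
        if (m := pattern.match(p.name))
    ]
    # An empty directory starts the sequence at ISS-001.
    return f"ISS-{max(numbers, default=0) + 1:03d}"
```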
### 2. Assign Severity Levels
| Severity | Description | Fix Timeline |
|----------|-------------|--------------|
| **Critical** | App crashes, data loss, security | Immediate |
| **High** | Major functionality broken | This release |
| **Medium** | Feature wrong, confusing UX | This release |
| **Low** | Minor polish, nice to have | Future release |
### 3. Issue Writing Best Practices
**Be specific:**
- Not "Button looks wrong"
- Instead "Primary button background #3B82F6, should be #2563EB per tokens/colors.json"
**Be actionable:**
- Not "Fix the transition"
- Instead "Add 300ms fade transition per specifications.md line 45"
**Be visual:**
- Include screenshots
- Annotate key areas
- Show expected vs actual
### 4. Create Issues Summary
After creating all issues, create summary with total count and breakdown by severity.
### 5. Verify Checklist
- [ ] All issues documented with correct template
- [ ] Severity levels assigned appropriately
- [ ] Design references included
- [ ] Screenshots attached
- [ ] Recommendations provided
- [ ] Issues summary created
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Report"
#### Menu Handling Logic:
- IF C: Update the design log, then load {nextStepFile}, read the entire file, and execute it
- IF any other comment or query: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- The user can chat or ask questions; always respond, then redisplay the menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all issues are documented with proper severity and design references will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All issues documented with correct template
- Severity levels assigned appropriately
- Design references included
- Screenshots attached
- Recommendations provided
- Issues summary created
### ❌ SYSTEM FAILURE:
- Vague descriptions
- Missing severity
- No screenshots
- No design reference
- No steps to reproduce
- Not actionable
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,132 @@
---
name: 'step-04-report'
description: 'Create a comprehensive test report summarizing all testing results'
# File References
nextStepFile: './step-05-iterate.md'
---
# Step 4: Create Test Report
## STEP GOAL:
Create a comprehensive test report summarizing all testing results.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on creating the test report with summary, coverage, issues, sign-off recommendation, and attachments
- 🚫 FORBIDDEN to modify test results or issues — only compile them into the report
- 💬 Approach: Compile all results into a clear, actionable report with user
- 📋 Report must include clear PASS/FAIL determination with criteria
## EXECUTION PROTOCOLS:
- 🎯 Complete test report created with all sections
- 💾 Save report to `testing/DD-XXX/TR-XXX-[flow-name].md`
- 📖 Reference test results from Step 2 and issues from Step 3
- 🚫 Do not modify test results or issues
## CONTEXT BOUNDARIES:
- Available context: Test results from Step 2; issues from Step 3; screenshots and recordings
- Focus: Report compilation — summary, coverage, issues, recommendation
- Limits: No test result modification
- Dependencies: Step 3 must be complete (all issues documented)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Create Test Report File
File: `testing/DD-XXX/TR-XXX-[flow-name].md`
Reference: [data/issue-templates.md](data/issue-templates.md) for complete test report template
### 2. Report Sections
1. **Summary** - Overall result, total issues, blocking status
2. **Test Coverage** - Pass/fail by category
3. **Issues Found** - Table of all issues
4. **Sign-Off Recommendation** - Ready or needs fixes
5. **Next Steps** - What happens next
6. **Attachments** - Recordings, screenshots, issue files
### 3. Overall Result Determination
**PASS if:**
- Critical issues: 0 open
- High issues: all fixed or accepted as risk
- Happy path: 100% pass
- Design system: ≥ 95% compliant
**FAIL if:**
- Any Critical issue unfixed
- Any High issue blocking
- Any happy path failure
- Design system < 95% compliant
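The criteria translate to a small decision function; a sketch, with the 95% compliance boundary treated as inclusive:

```python
def overall_result(critical_open: int, high_blocking: int,
                   happy_path_pass_rate: float,
                   design_compliance: float) -> str:
    """Apply the sign-off criteria above to produce PASS or FAIL."""
    if (critical_open == 0
            and high_blocking == 0
            and happy_path_pass_rate == 100.0
            and design_compliance >= 95.0):  # 95% exactly counts as PASS here
        return "PASS"
    return "FAIL"
```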
### 4. Attach Supporting Files
Organize testing folder with report, screenshots, recordings, and test data.
### 5. Verify Checklist
- [ ] Test report created with all sections
- [ ] Test coverage complete
- [ ] Issues list accurate
- [ ] Clear recommendation
- [ ] All attachments organized
### 6. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 5: Iterate"
#### Menu Handling Logic:
- IF C: Update the design log, then load {nextStepFile}, read the entire file, and execute it
- IF any other comment or query: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- The user can chat or ask questions; always respond, then redisplay the menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the test report is complete with all sections and clear recommendation will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Test report created with all sections
- Test coverage complete
- Issues list accurate
- Clear recommendation
- All attachments organized
### ❌ SYSTEM FAILURE:
- Missing test categories
- Incorrect issue counts
- Unclear recommendation
- Missing attachments
- Incomplete coverage data
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.


@@ -0,0 +1,127 @@
---
name: 'step-05-iterate'
description: 'Either iterate with development team to fix issues, or approve the feature for production'
# File References
activityWorkflowFile: '../workflow-acceptance-testing.md'
---
# Step 5: Iterate or Approve
## STEP GOAL:
Either iterate with development team to fix issues, or approve the feature for production.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise, user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on two paths: iterate (fix issues and retest) or approve (sign off for production)
- 🚫 FORBIDDEN to approve with unfixed high-severity issues or create endless iteration loops (max 3 iterations)
- 💬 Approach: Guide user through the appropriate path based on test results
- 📋 Maximum 3 iterations before escalation
## EXECUTION PROTOCOLS:
- 🎯 Feature either approved with sign-off document or issues fixed and retested
- 💾 Create sign-off document if approved; create retest report if iterating
- 📖 Reference test report from Step 4 and issues from Step 3
- 🚫 Do not approve with unfixed high-severity issues
## CONTEXT BOUNDARIES:
- Available context: Test report from Step 4; issues from Step 3; all test results
- Focus: Decision — iterate or approve
- Limits: Maximum 3 iterations before escalation
- Dependencies: Step 4 must be complete (test report created)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### Two Paths
#### Path A: Issues Found - Iterate
**If test result was FAIL:**
1. **Wait for Fixes** - Be available for questions, clarify issues, review early feedback
2. **Receive Ready for Retest** notification
3. **Retest** - Focus on:
- Fixed issues: Verify actually fixed
- Regression testing: Fixes did not break anything
- Related areas: Check affected parts
- Use abbreviated testing (do not rerun all tests)
4. **Update Issues** - Mark fixed issues as Closed with version, date, and verifier
5. **Create Retest Report** - Reference data/issue-templates.md for template
6. **Decision Point**:
- If all high-severity fixed: proceed to Path B (Approve)
- If issues remain: Repeat iteration (max 3 total)
#### Path B: No Issues - Approve
**If test result was PASS:**
1. **Create Sign-Off Document** - Reference data/issue-templates.md for template
2. **Notify Development Team** - Formal approval notification
3. **Update Status** - Set delivery status to 'approved' with timestamp and approver
### Iteration Limits
**Maximum iterations:** 3
If after 3 iterations issues persist:
1. Escalate to leads
2. Review requirements
3. Consider scope reduction
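The iteration cap can be sketched as a bounded loop; `retest` is a hypothetical callable reporting whether all high-severity issues are verified fixed:

```python
def iterate_or_escalate(retest, max_iterations: int = 3) -> str:
    """Run up to max_iterations fix-and-retest cycles, then escalate."""
    for _ in range(max_iterations):
        if retest():
            return "approved"   # Path B: all high-severity issues fixed
    return "escalated"          # 3 iterations exhausted: escalate to leads
```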
### Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update the design log, then load {activityWorkflowFile}, read the entire file, and execute it
- IF any other comment or query: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed when the user selects 'M'
- The user can chat or ask questions; always respond, then redisplay the menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the feature is approved with sign-off document or escalated will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All high-severity issues fixed
- Retesting complete
- Sign-off document created
- Development team notified of approval
- Status updated to approved
### ❌ SYSTEM FAILURE:
- Approving with unfixed high-severity issues
- No sign-off document
- Status not updated
- Development team not notified
- Endless iteration loop (more than 3)
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.