docs: update all documentation and add AI tooling configs
- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
_bmad/tea/workflows/testarch/README.md (new file, 74 lines)
@@ -0,0 +1,74 @@
# TEA Workflow Step Files

This folder contains the Test Architect (TEA) workflows converted to step-file architecture for strict LLM compliance. Each workflow is tri-modal (create, edit, validate) and uses small, ordered step files instead of a single monolithic instruction file.

## Why Step Files

- Enforces sequential execution and prevents improvisation
- Keeps context small and focused per step
- Makes validation and edits deterministic

## Standard Layout (per workflow)

```
<workflow>/
├── workflow.md              # Mode routing (create / edit / validate)
├── workflow-plan.md         # Design reference for step order and intent
├── workflow.yaml            # Installer metadata
├── instructions.md          # Short entrypoint / summary
├── checklist.md             # Validation criteria for outputs
├── steps-c/                 # Create mode steps
├── steps-e/                 # Edit mode steps
├── steps-v/                 # Validate mode steps
├── templates/               # Output templates (if applicable)
└── validation-report-*.md   # Validator outputs (latest run)
```

## Modes

- **Create (steps-c/):** Primary execution flow to generate outputs
- **Edit (steps-e/):** Structured edits to existing outputs
- **Validate (steps-v/):** Checklist-based validation of outputs

## Execution Rules (Summary)

- Load **one step at a time**. Do not preload future steps.
- Follow the **MANDATORY SEQUENCE** exactly in each step.
- Do not skip steps, reorder, or improvise.
- If a step writes outputs, do so **before** loading the next step.
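The execution rules above can be sketched as a loader that reveals exactly one step at a time; this is only an illustration of the contract, and the `Step` type and two-step flow below are hypothetical, not part of TEA:

```typescript
// Minimal sketch of sequential step execution: each step must finish
// (and write its outputs) before the next step is even loaded.
type Step = { file: string; run: () => string[] };

function executeWorkflow(steps: Step[]): string[] {
  const outputsWritten: string[] = [];
  for (const step of steps) {
    // Only the current step is in context; future steps are not preloaded.
    const outputs = step.run();
    // Write outputs BEFORE moving on, per the execution rules.
    outputsWritten.push(...outputs);
  }
  return outputsWritten;
}

// Hypothetical two-step create-mode flow:
const result = executeWorkflow([
  { file: "steps-c/step-01-init.md", run: () => ["progress.yaml"] },
  { file: "steps-c/step-02-assess.md", run: () => ["assessment.md"] },
]);
```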
## Step Naming Conventions

- `step-01-*.md` is the init step (no menus unless explicitly required).
- `step-01b-*.md` is a continuation/resume step if the workflow is continuable.
- `step-0X-*.md` are sequential create-mode steps.
- `steps-v/step-01-validate.md` is the validate mode entrypoint.
- `steps-e/step-01-assess.md` is the edit mode entrypoint.
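A convention like the one above can be checked mechanically; the regex below is my own reading of the naming rules, not an official validator:

```typescript
// Matches step files such as step-01-init.md, step-01b-continue.md,
// step-02-assess.md: "step-", two digits, optional letter suffix, a slug.
const STEP_NAME = /^step-\d{2}[a-z]?-[a-z0-9-]+\.md$/;

function isValidStepName(name: string): boolean {
  return STEP_NAME.test(name);
}
```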
## Validation

- Each workflow has a latest `validation-report-*.md` in its folder.
- Validation uses the BMad Builder workflow validator (workflow-builder).
- The goal is 100% compliance with no warnings.

## References

- Step-file architecture: `docs/explanation/step-file-architecture.md`
- Subagent patterns: `docs/explanation/subagent-architecture.md`

## TEA Workflows

- test-design
- automate
- atdd
- test-review
- trace
- framework
- ci
- nfr-assess

## Notes

- `workflow.md` is the canonical entrypoint. `instructions.md` is a short summary for quick context.
- Output files typically use `{test_artifacts}` or `{project-root}` variables.
- If a workflow produces multiple artifacts (e.g., system-level vs epic-level), the step file will specify which templates and output paths to use.
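The `{test_artifacts}` and `{project-root}` placeholders mentioned in the notes could be resolved with a simple substitution pass; only the two variable names come from the README, the expansion logic is a sketch:

```typescript
// Expand {name} placeholders in an output path against a variable map.
function expandPath(template: string, vars: Record<string, string>): string {
  return template.replace(/\{([a-z_-]+)\}/g, (match, name) =>
    name in vars ? vars[name] : match, // leave unknown variables untouched
  );
}

const expanded = expandPath("{project-root}/{test_artifacts}/report.md", {
  "project-root": "/repo",
  test_artifacts: "artifacts",
});
```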
@@ -0,0 +1,6 @@
---
name: bmad-teach-me-testing
description: 'Teach testing progressively through structured sessions. Use when user says "lets learn testing" or "I want to study test practices"'
---

Follow the instructions in [workflow.md](workflow.md).
@@ -0,0 +1 @@
type: skill
_bmad/tea/workflows/testarch/bmad-teach-me-testing/checklist.md (new file, 197 lines)
@@ -0,0 +1,197 @@
# Teach Me Testing - Quality Checklist

## Workflow Quality Standards

Use this checklist to validate the teaching workflow meets quality standards.

---

## Foundation Quality

- [ ] **workflow.md** exists with proper frontmatter
- [ ] Tri-modal routing logic present (Create/Edit/Validate)
- [ ] Configuration loading references correct module (TEA)
- [ ] First step path correct (`./steps-c/step-01-init.md`)
- [ ] Folder structure complete (steps-c/, steps-e/, steps-v/, data/, templates/)

---

## Template Quality

- [ ] **progress-template.yaml** has complete schema
- [ ] All 7 sessions defined with proper structure
- [ ] Session status tracking fields present (not-started/in-progress/completed)
- [ ] stepsCompleted array for continuation tracking
- [ ] **session-notes-template.md** has all required sections
- [ ] **certificate-template.md** includes all 7 sessions

---

## Step File Quality (CREATE mode)

### Initialization Steps

- [ ] **step-01-init.md** checks for existing progress file
- [ ] Continuation detection logic works correctly
- [ ] **step-01b-continue.md** loads progress and routes to session menu
- [ ] Progress dashboard displays completion status
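The continuation detection that the init step performs could look roughly like this; the progress-file shape is an assumption inferred from the checklist's `stepsCompleted` field, not taken from actual TEA source:

```typescript
// Sketch: decide between fresh start and resume based on a progress file.
interface Progress {
  stepsCompleted: string[];
}

function nextStep(progress: Progress | null): string {
  if (progress === null) {
    return "steps-c/step-01-init.md"; // no progress file: fresh start
  }
  // Progress exists: route through the continuation step to the session menu.
  return "steps-c/step-01b-continue.md";
}
```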
### Assessment Step

- [ ] **step-02-assess.md** gathers role, experience, goals
- [ ] Validation for role (QA/Dev/Lead/VP)
- [ ] Validation for experience (beginner/intermediate/experienced)
- [ ] Assessment data written to progress file

### Session Menu Hub

- [ ] **step-03-session-menu.md** displays all 7 sessions
- [ ] Completion indicators shown (✓ completed, 🔄 in-progress, ⬜ not-started)
- [ ] Branching logic routes to selected session (1-7)
- [ ] Exit logic (X) routes to completion if all done, otherwise saves and exits

### Session Steps (1-7)

- [ ] Each session loads relevant TEA docs just-in-time
- [ ] Teaching content presented (mostly autonomous)
- [ ] Quiz validation with ≥70% threshold
- [ ] Session notes artifact generated
- [ ] Progress file updated (status, score, artifact path)
- [ ] Returns to session menu hub after completion

### Completion Step

- [ ] **step-05-completion.md** verifies all 7 sessions complete
- [ ] Certificate generated with accurate data
- [ ] Final progress file update (certificate_generated: true)
- [ ] Congratulations message shown

---

## Data File Quality

- [ ] **curriculum.yaml** defines all 7 sessions
- [ ] **role-paths.yaml** maps role customizations
- [ ] **session-content-map.yaml** references TEA docs/fragments/URLs correctly
- [ ] **quiz-questions.yaml** has questions for all sessions
- [ ] **tea-resources-index.yaml** has complete documentation index

---

## Content Quality

### TEA Documentation Integration

- [ ] Local file paths correct (`/docs/*.md`, `/src/testarch/knowledge/*.md`)
- [ ] Online URLs correct (<https://bmad-code-org.github.io/...>)
- [ ] GitHub fragment links correct
- [ ] Triple reference system (local + online + GitHub) implemented

### Role-Based Content

- [ ] QA examples present (practical testing focus)
- [ ] Dev examples present (integration/TDD focus)
- [ ] Lead examples present (architecture/patterns focus)
- [ ] VP examples present (strategy/metrics focus)

### Quiz Quality

- [ ] Questions test understanding, not memorization
- [ ] 3-5 questions per session
- [ ] Mix of difficulty levels
- [ ] Clear correct answers with explanations

---

## Error Handling

- [ ] Corrupted progress file detection
- [ ] Backup and recovery options
- [ ] Missing TEA docs fallback (Web-Browsing)
- [ ] Quiz failure recovery (review or continue)
- [ ] Session interruption handling (auto-save)

---

## User Experience

- [ ] Clear navigation instructions
- [ ] Progress visibility (completion percentage, next recommended)
- [ ] Auto-save after each session
- [ ] Resume capability works seamlessly
- [ ] Exit options clear at all decision points

---

## State Management

- [ ] stepsCompleted array updated correctly
- [ ] Session tracking accurate (status, dates, scores)
- [ ] Completion percentage calculated correctly
- [ ] Next recommended session logic works
- [ ] lastStep and lastContinued timestamps updated
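The completion-percentage and next-recommended-session bookkeeping above could be computed along these lines; the field names follow the checklist's wording, but the exact progress schema is an assumption:

```typescript
type Status = "not-started" | "in-progress" | "completed";

interface SessionState {
  id: string;
  status: Status;
}

// Percentage of sessions completed, rounded to a whole number.
function completionPercent(sessions: SessionState[]): number {
  const done = sessions.filter((s) => s.status === "completed").length;
  return Math.round((done / sessions.length) * 100);
}

// First session that is not yet completed, in curriculum order.
function nextRecommended(sessions: SessionState[]): string | null {
  const next = sessions.find((s) => s.status !== "completed");
  return next ? next.id : null;
}
```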
---

## Validation Mode

- [ ] **step-v-01-validate.md** checks all quality standards
- [ ] Generates validation report
- [ ] Identifies issues clearly
- [ ] Provides remediation suggestions

---

## Edit Mode

- [ ] **step-e-01-assess-workflow.md** identifies what to edit
- [ ] **step-e-02-apply-edits.md** applies modifications safely
- [ ] Preserves workflow integrity during edits

---

## Documentation

- [ ] **instructions.md** clear and complete
- [ ] **checklist.md** (this file) comprehensive
- [ ] README (if present) accurate
- [ ] Inline comments in complex logic

---

## Performance

- [ ] Just-in-time loading (not loading all docs upfront)
- [ ] Session steps complete in reasonable time (<5 min)
- [ ] Quiz validation fast (<1 min)
- [ ] Progress file writes efficient

---

## Security

- [ ] No hardcoded credentials
- [ ] File paths use variables
- [ ] Progress files private to user
- [ ] No sensitive data in session notes

---

## Completion Criteria

✅ **Workflow is ready for deployment when:**

- All checkboxes above are checked
- All step files exist and follow standards
- All templates present and correct
- Data files complete and accurate
- Error handling robust
- User experience smooth
- Documentation complete

---

**Validation Date:** ________________
**Validated By:** ________________
**Issues Found:** ________________
**Status:** ⬜ Ready for Production | ⬜ Needs Revisions
@@ -0,0 +1,129 @@
# TEA Academy Curriculum Structure
# Defines the 7-session learning path with objectives and content mappings

sessions:
  - id: session-01-quickstart
    name: "Quick Start"
    duration: "30 min"
    difficulty: beginner
    objective: "Get immediate value by seeing TEA in action"
    description: "TEA Lite intro, run automate workflow, understand engagement models"
    recommended_for:
      - beginner
      - intermediate
      - experienced
    prerequisites: []

  - id: session-02-concepts
    name: "Core Concepts"
    duration: "45 min"
    difficulty: beginner
    objective: "Understand WHY behind TEA principles"
    description: "Risk-based testing, DoD, testing as engineering philosophy"
    recommended_for:
      - beginner
      - intermediate
    prerequisites: []

  - id: session-03-architecture
    name: "Architecture & Patterns"
    duration: "60 min"
    difficulty: intermediate
    objective: "Understand TEA patterns and architecture"
    description: "Fixtures, network-first patterns, data factories, step-file architecture"
    recommended_for:
      - intermediate
      - experienced
    prerequisites:
      - session-02-concepts

  - id: session-04-test-design
    name: "Test Design"
    duration: "60 min"
    difficulty: intermediate
    objective: "Learn risk assessment and coverage planning"
    description: "Test Design workflow, risk/testability assessment, coverage planning"
    recommended_for:
      - intermediate
      - experienced
    prerequisites:
      - session-02-concepts

  - id: session-05-atdd-automate
    name: "ATDD & Automate"
    duration: "60 min"
    difficulty: intermediate
    objective: "Generate tests with TDD red-green approach"
    description: "ATDD workflow (red phase), Automate workflow, component TDD, API testing"
    recommended_for:
      - intermediate
      - experienced
    prerequisites:
      - session-02-concepts

  - id: session-06-quality-trace
    name: "Quality & Trace"
    duration: "45 min"
    difficulty: intermediate
    objective: "Audit quality and ensure traceability"
    description: "Test Review (5 dimensions), Trace workflow, quality metrics"
    recommended_for:
      - intermediate
      - experienced
    prerequisites:
      - session-02-concepts

  - id: session-07-advanced
    name: "Advanced Patterns"
    duration: "ongoing"
    difficulty: advanced
    objective: "Deep-dive into specific knowledge fragments"
    description: "Menu-driven exploration of 35 knowledge fragments organized by category"
    recommended_for:
      - experienced
    prerequisites: []

# Learning Paths by Experience Level
learning_paths:
  beginner:
    recommended_sequence:
      - session-01-quickstart
      - session-02-concepts
      - session-03-architecture
      - session-04-test-design
      - session-05-atdd-automate
      - session-06-quality-trace
      - session-07-advanced
    skip_optional: []

  intermediate:
    recommended_sequence:
      - session-01-quickstart
      - session-02-concepts
      - session-03-architecture
      - session-04-test-design
      - session-05-atdd-automate
      - session-06-quality-trace
      - session-07-advanced
    skip_optional:
      - session-01-quickstart # Can skip if already familiar
    certificate_eligible_if_skipped: false

  experienced:
    recommended_sequence:
      - session-02-concepts
      - session-03-architecture
      - session-04-test-design
      - session-05-atdd-automate
      - session-06-quality-trace
      - session-07-advanced
    skip_optional:
      - session-01-quickstart
    certificate_eligible_if_skipped: false

# Completion Requirements
completion:
  minimum_sessions: 7 # All sessions required for certificate
  passing_score: 70 # Minimum quiz score to pass session
  average_score_threshold: 70 # Minimum average for certificate
  certificate_note: "Certificate eligibility requires completion.minimum_sessions. If intermediate.skip_optional or experienced.skip_optional sessions are skipped, certificate eligibility is forfeited."
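The completion requirements in the curriculum can be read as a small eligibility check; the thresholds come from the `completion:` block above, while the per-session result shape is hypothetical:

```typescript
interface SessionResult {
  completed: boolean;
  score: number; // quiz score, 0-100
}

// Certificate rules from the curriculum: all 7 sessions completed,
// each passed with a score of at least 70, and an average of at least 70.
function certificateEligible(results: SessionResult[]): boolean {
  if (results.length < 7) return false;
  if (!results.every((r) => r.completed && r.score >= 70)) return false;
  const avg = results.reduce((sum, r) => sum + r.score, 0) / results.length;
  return avg >= 70;
}
```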
@@ -0,0 +1,206 @@
# Quiz Questions Bank
# Organized by session with questions, answers, and explanations

session-01-quickstart:
  passing_score: 70
  questions:
    - id: q1-purpose
      question: "What is the primary purpose of TEA?"
      options:
        A: "Replace all testing tools with a single framework"
        B: "Make testing expertise accessible through structured workflows and knowledge"
        C: "Automate 100% of test writing"
        D: "Only works for Playwright tests"
      correct: B
      explanation: "TEA makes testing expertise accessible and scalable through workflows and knowledge fragments. It's not about replacing tools or automating everything."

    - id: q2-risk-matrix
      question: "What does the P0-P3 risk matrix help with?"
      options:
        A: "Prioritizing test coverage based on criticality"
        B: "Grading test code quality"
        C: "Measuring test execution speed"
        D: "Tracking bug severity"
      correct: A
      explanation: "P0-P3 helps prioritize what to test based on risk (Probability × Impact). P0 = critical features like login, P3 = nice-to-have like tooltips."

    - id: q3-engagement
      question: "Which TEA engagement model is best for quick value in 30 minutes?"
      options:
        A: "TEA Enterprise"
        B: "TEA Lite"
        C: "TEA Integrated"
        D: "TEA Brownfield"
      correct: B
      explanation: "TEA Lite is the 30-minute quick start approach. Enterprise and Integrated are more comprehensive."

session-02-concepts:
  passing_score: 70
  questions:
    - id: q1-p0-priority
      question: "In the P0-P3 matrix, what priority level should login/authentication have?"
      options:
        A: "P3 - Low priority"
        B: "P2 - Medium priority"
        C: "P1 - High priority"
        D: "P0 - Critical priority"
      correct: D
      explanation: "Login/authentication is P0 - critical. Business fails if broken. High usage, high impact, business-critical."

    - id: q2-hard-waits
      question: "What is the problem with using sleep(5000) instead of waitFor conditions?"
      options:
        A: "It makes tests slower"
        B: "It's a hard wait that doesn't react to state changes (violates DoD)"
        C: "It uses too much memory"
        D: "It's not supported in modern frameworks"
      correct: B
      explanation: "Hard waits don't react to state changes - they guess timing. Use waitFor to react to conditions. This violates TEA Definition of Done."

    - id: q3-self-cleaning
      question: "What does 'self-cleaning tests' mean in TEA Definition of Done?"
      options:
        A: "Tests automatically fix their own bugs"
        B: "Tests delete/deactivate entities they create during testing"
        C: "Tests run faster by cleaning up code"
        D: "Tests remove old test files"
      correct: B
      explanation: "Self-cleaning means tests delete/deactivate entities they created. No manual cleanup required."

session-03-architecture:
  passing_score: 70
  questions:
    - id: q1-fixtures
      question: "What is the main benefit of fixture composition?"
      options:
        A: "Faster test execution"
        B: "DRY - define once, reuse everywhere"
        C: "Better error messages"
        D: "Automatic screenshot capture"
      correct: B
      explanation: "Fixture composition allows you to define setup once and reuse everywhere. DRY principle for test setup."

    - id: q2-network-first
      question: "Why is 'network-first' better than mocking after the action?"
      options:
        A: "It's faster"
        B: "It prevents race conditions"
        C: "It uses less memory"
        D: "It's easier to write"
      correct: B
      explanation: "Setting up network interception BEFORE the action prevents race conditions. The mock is ready when the action triggers."

    - id: q3-step-file
      question: "What pattern does this teaching workflow use?"
      options:
        A: "Page Object Model"
        B: "Behavior Driven Development"
        C: "Step-File Architecture"
        D: "Test Pyramid"
      correct: C
      explanation: "This workflow uses step-file architecture: micro-file design, just-in-time loading, sequential enforcement."

session-04-test-design:
  passing_score: 70
  questions:
    - id: q1-test-design-purpose
      question: "What does the Test Design workflow help you do?"
      options:
        A: "Write tests faster"
        B: "Plan tests BEFORE writing them"
        C: "Run tests in parallel"
        D: "Debug test failures"
      correct: B
      explanation: "Test Design workflow helps you plan tests before writing them. Design before code, like architecture before implementation."

    - id: q2-risk-calculation
      question: "How do you calculate risk?"
      options:
        A: "Probability + Impact"
        B: "Probability × Impact"
        C: "Probability - Impact"
        D: "Probability / Impact"
      correct: B
      explanation: "Risk = Probability × Impact. Multiply the likelihood of failure by the impact of failure."

    - id: q3-p0-coverage
      question: "For P0 features, which test levels should you use?"
      options:
        A: "Only E2E tests"
        B: "Only unit tests"
        C: "Unit + Integration + E2E (comprehensive)"
        D: "Manual testing only"
      correct: C
      explanation: "P0 features need comprehensive coverage: Unit + Integration + E2E. High confidence for critical features."

session-05-atdd-automate:
  passing_score: 70
  questions:
    - id: q1-red-phase
      question: "What is the 'red' phase in TDD?"
      options:
        A: "Tests fail (code doesn't exist yet)"
        B: "Tests pass"
        C: "Code is refactored"
        D: "Tests are deleted"
      correct: A
      explanation: "Red phase: Tests fail because the code doesn't exist yet. Write tests first, then implement."

    - id: q2-atdd-vs-automate
      question: "What's the difference between ATDD and Automate workflows?"
      options:
        A: "ATDD generates E2E, Automate generates API tests"
        B: "ATDD writes tests first (red phase), Automate tests existing code"
        C: "ATDD is faster than Automate"
        D: "They're the same workflow"
      correct: B
      explanation: "ATDD writes failing tests first (red phase), then you implement. Automate generates tests for existing code (coverage expansion)."

    - id: q3-api-testing
      question: "Why use pure API tests without a browser?"
      options:
        A: "They look prettier"
        B: "They're easier to debug"
        C: "They're faster and test business logic directly"
        D: "They're required by TEA"
      correct: C
      explanation: "Pure API tests are faster (no browser overhead) and test business logic directly without UI complexity."

session-06-quality-trace:
  passing_score: 70
  questions:
    - id: q1-five-dimensions
      question: "What are the 5 dimensions in Test Review workflow?"
      options:
        A: "Speed, cost, coverage, bugs, time"
        B: "Determinism, Isolation, Assertions, Structure, Performance"
        C: "Unit, integration, E2E, manual, exploratory"
        D: "P0, P1, P2, P3, P4"
      correct: B
      explanation: "Test Review evaluates 5 dimensions: Determinism (no flakiness), Isolation (parallel-safe), Assertions (correct checks), Structure (readable/maintainable organization), Performance (speed)."

    - id: q2-release-gate
      question: "When should the Trace workflow gate decision be RED (block release)?"
      options:
        A: "Any test failures exist"
        B: "P0 gaps exist (critical requirements not tested)"
        C: "Code coverage is below 80%"
        D: "Tests are slow"
      correct: B
      explanation: "RED gate when P0 gaps exist - critical requirements not tested. Don't ship if critical features lack test coverage."

    - id: q3-metrics
      question: "Which metric matters most for quality?"
      options:
        A: "Total line coverage %"
        B: "Number of tests written"
        C: "P0/P1 coverage %"
        D: "Test file count"
      correct: C
      explanation: "P0/P1 coverage matters most - it measures coverage of critical/high-priority features. Total line coverage is a vanity metric."

session-07-advanced:
  # No quiz - exploratory session
  # Score: 100 (completion based, not quiz based)
  passing_score: 100
  questions: []
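A bank shaped like the YAML above can be graded in a few lines; this is a sketch of the scoring and pass/fail logic implied by `passing_score`, and the handling of empty question lists mirrors the exploratory session-07 comment (score 100, completion based):

```typescript
interface QuizQuestion {
  id: string;
  correct: string; // the letter of the correct option, e.g. "B"
}

// Score a session as a percentage and compare against its passing_score.
function gradeSession(
  questions: QuizQuestion[],
  answers: Record<string, string>,
  passingScore: number,
): { score: number; passed: boolean } {
  if (questions.length === 0) {
    // Exploratory sessions (e.g. session-07) have no quiz: completion-based.
    return { score: 100, passed: true };
  }
  const right = questions.filter((q) => answers[q.id] === q.correct).length;
  const score = Math.round((right / questions.length) * 100);
  return { score, passed: score >= passingScore };
}
```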
@@ -0,0 +1,136 @@
# Role-Based Content Customization
# Defines how teaching examples and focus areas adapt based on learner role

roles:
  qa:
    display_name: "QA Engineer"
    focus_areas:
      - Practical testing workflow usage
      - Test framework setup and maintenance
      - Test quality and coverage metrics
      - CI/CD integration
    example_contexts:
      - "Expanding test coverage for existing features"
      - "Setting up test framework for new project"
      - "Reducing flaky tests in CI pipeline"
      - "Improving test execution speed"
    recommended_sessions:
      - session-01-quickstart
      - session-02-concepts
      - session-03-architecture
      - session-05-atdd-automate
      - session-06-quality-trace
    teaching_adaptations:
      session-01-quickstart: "Focus on Automate workflow - quickly expand coverage"
      session-02-concepts: "Emphasize P0-P3 for defending coverage decisions"
      session-03-architecture: "Fixture patterns for maintainable test suites"
      session-04-test-design: "Test design for planning coverage expansion"
      session-05-atdd-automate: "ATDD and Automate for test generation"
      session-06-quality-trace: "Test Review for quality metrics reporting"
      session-07-advanced: "Playwright Utils for advanced testing patterns"

  dev:
    display_name: "Software Developer"
    focus_areas:
      - Integration testing perspective
      - TDD approach
      - Test-driven development workflow
      - Unit and integration tests
    example_contexts:
      - "Writing tests alongside feature development"
      - "Using ATDD to drive implementation"
      - "Integrating tests into development workflow"
      - "Testing APIs and business logic"
    recommended_sessions:
      - session-01-quickstart
      - session-02-concepts
      - session-05-atdd-automate
      - session-03-architecture
      - session-04-test-design
    teaching_adaptations:
      session-01-quickstart: "Focus on ATDD - tests drive implementation"
      session-02-concepts: "Connect DoD to code quality standards"
      session-03-architecture: "Fixtures as code patterns, like dependency injection"
      session-04-test-design: "Risk assessment before writing code"
      session-05-atdd-automate: "Red-green-refactor TDD cycle"
      session-06-quality-trace: "Test quality like code quality - refactoring applies"
      session-07-advanced: "API testing patterns, component TDD"

  lead:
    display_name: "Tech Lead / Engineering Manager"
    focus_areas:
      - Test architecture decisions
      - Team testing patterns
      - Framework and tooling choices
      - Quality standards enforcement
    example_contexts:
      - "Establishing team testing standards"
      - "Choosing test architecture patterns"
      - "Code review for test quality"
      - "Scaling test automation across team"
    recommended_sessions:
      - session-01-quickstart
      - session-03-architecture
      - session-04-test-design
      - session-06-quality-trace
      - session-07-advanced
    teaching_adaptations:
      session-01-quickstart: "TEA as team standard - scalable patterns"
      session-02-concepts: "DoD as code review checklist - enforce quality"
      session-03-architecture: "Architecture patterns for team consistency"
      session-04-test-design: "Test design as planning phase in development"
      session-05-atdd-automate: "ATDD for team TDD adoption"
      session-06-quality-trace: "Test Review for quality metrics and team standards"
      session-07-advanced: "Step-file architecture, fixture patterns, CI governance"

  vp:
    display_name: "VP Engineering / Director"
    focus_areas:
      - Testing strategy and ROI
      - Quality metrics that matter
      - Team scalability
      - Risk management through testing
    example_contexts:
      - "Justifying test automation investment"
      - "Scaling testing across multiple teams"
      - "Quality metrics for stakeholder reporting"
      - "Risk mitigation through test coverage"
    recommended_sessions:
      - session-01-quickstart
      - session-02-concepts
      - session-04-test-design
      - session-06-quality-trace
    teaching_adaptations:
      session-01-quickstart: "TEA scales testing without scaling headcount"
      session-02-concepts: "Risk-based testing aligns engineering with business impact"
      session-03-architecture: "Architecture patterns reduce maintenance costs"
      session-04-test-design: "Test design makes risk visible to stakeholders"
      session-05-atdd-automate: "ATDD reduces defect rates early"
      session-06-quality-trace: "Quality metrics: P0/P1 coverage, not vanity metrics"
      session-07-advanced: "Governance patterns, CI orchestration, NFR assessment"

# Role-Based Example Types
example_types:
  qa:
    - "Test suite maintenance scenarios"
    - "Coverage expansion projects"
    - "Flaky test debugging"
    - "CI pipeline configuration"

  dev:
    - "Feature development with TDD"
    - "API integration testing"
    - "Unit test patterns"
    - "Mocking and stubbing"

  lead:
    - "Team architecture decisions"
    - "Code review scenarios"
    - "Standard enforcement"
    - "Tooling selection"

  vp:
    - "ROI calculations"
    - "Quality dashboards"
    - "Risk reporting"
    - "Team scaling strategies"
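Selecting the right teaching angle at session time reduces to a two-level lookup over the `teaching_adaptations` map; the map literal below abbreviates the YAML above to a single entry, and the generic fallback is my assumption:

```typescript
type AdaptationMap = Record<string, Record<string, string>>;

// Look up the role-specific teaching angle for a session, falling back
// to a generic presentation when no adaptation is defined for the role.
function adaptationFor(
  adaptations: AdaptationMap,
  role: string,
  sessionId: string,
): string {
  return adaptations[role]?.[sessionId] ?? "generic presentation";
}

// Abbreviated excerpt of the qa role's adaptations:
const qaOnly: AdaptationMap = {
  qa: {
    "session-01-quickstart": "Focus on Automate workflow - quickly expand coverage",
  },
};
```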
@@ -0,0 +1,207 @@
# Session Content Mapping
# Maps each session to specific TEA documentation, knowledge fragments, and online resources

base_paths:
  tea_docs: "/docs"
  tea_knowledge: "/src/testarch/knowledge"
  online_base: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise"
  github_knowledge: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge"

sessions:
  session-01-quickstart:
    docs:
      - path: "/docs/tutorials/tea-lite-quickstart.md"
        title: "TEA Lite Quickstart"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/tutorials/tea-lite-quickstart/"
      - path: "/docs/explanation/tea-overview.md"
        title: "TEA Overview"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/tea-overview/"
      - path: "/docs/how-to/workflows/run-automate.md"
        title: "Run Automate Workflow"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-automate/"
    knowledge_fragments: []
    online_references:
      - "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/"
    workflows_referenced:
      - automate
    key_concepts:
      - "What is TEA"
      - "TEA Lite approach"
      - "Engagement models"
      - "9 workflows overview"

  session-02-concepts:
    docs:
      - path: "/docs/explanation/testing-as-engineering.md"
        title: "Testing as Engineering"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/testing-as-engineering/"
      - path: "/docs/explanation/risk-based-testing.md"
        title: "Risk-Based Testing"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/risk-based-testing/"
      - path: "/docs/explanation/test-quality-standards.md"
        title: "Test Quality Standards"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/test-quality-standards/"
    knowledge_fragments:
      - path: "/src/testarch/knowledge/test-quality.md"
        title: "Test Quality (DoD Execution Limits)"
      - path: "/src/testarch/knowledge/probability-impact.md"
        title: "Probability × Impact Scoring"
    online_references:
      - "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/testing-as-engineering/"
      - "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/risk-based-testing/"
    workflows_referenced: []
    key_concepts:
      - "Testing as engineering philosophy"
      - "P0-P3 risk matrix"
      - "Probability × Impact scoring"
      - "Definition of Done (7 principles)"

  session-03-architecture:
    docs:
      - path: "/docs/explanation/fixture-architecture.md"
        title: "Fixture Architecture"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/fixture-architecture/"
      - path: "/docs/explanation/network-first-patterns.md"
        title: "Network-First Patterns"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/network-first-patterns/"
      - path: "/docs/explanation/step-file-architecture.md"
        title: "Step-File Architecture"
        url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/step-file-architecture/"
    knowledge_fragments:
      - path: "/src/testarch/knowledge/fixture-architecture.md"
        title: "Fixture Architecture Patterns"
      - path: "/src/testarch/knowledge/network-first.md"
        title: "Network-First Implementation"
      - path: "/src/testarch/knowledge/data-factories.md"
        title: "Data Factories Pattern"
online_references:
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/fixture-architecture/"
|
||||
workflows_referenced:
|
||||
- framework
|
||||
key_concepts:
|
||||
- "Fixture composition"
|
||||
- "Network interception patterns"
|
||||
- "Data factory pattern"
|
||||
- "Step-file architecture"
|
||||
|
||||
session-04-test-design:
|
||||
docs:
|
||||
- path: "/docs/how-to/workflows/run-test-design.md"
|
||||
title: "Run Test Design Workflow"
|
||||
url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-test-design/"
|
||||
knowledge_fragments:
|
||||
- path: "/src/testarch/knowledge/test-levels-framework.md"
|
||||
title: "Test Levels Framework"
|
||||
- path: "/src/testarch/knowledge/test-priorities-matrix.md"
|
||||
title: "Test Priorities Matrix"
|
||||
online_references:
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-test-design/"
|
||||
workflows_referenced:
|
||||
- test-design
|
||||
key_concepts:
|
||||
- "Test Design workflow steps"
|
||||
- "Risk/testability assessment"
|
||||
- "Coverage planning"
|
||||
- "Test levels (unit/integration/E2E)"
|
||||
|
||||
session-05-atdd-automate:
|
||||
docs:
|
||||
- path: "/docs/how-to/workflows/run-atdd.md"
|
||||
title: "Run ATDD Workflow"
|
||||
url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-atdd/"
|
||||
- path: "/docs/how-to/workflows/run-automate.md"
|
||||
title: "Run Automate Workflow"
|
||||
url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-automate/"
|
||||
knowledge_fragments:
|
||||
- path: "/src/testarch/knowledge/component-tdd.md"
|
||||
title: "Component TDD Red-Green Loop"
|
||||
- path: "/src/testarch/knowledge/api-testing-patterns.md"
|
||||
title: "API Testing Patterns"
|
||||
- path: "/src/testarch/knowledge/api-request.md"
|
||||
title: "API Request Utility"
|
||||
online_references:
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-atdd/"
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-automate/"
|
||||
workflows_referenced:
|
||||
- atdd
|
||||
- automate
|
||||
key_concepts:
|
||||
- "ATDD workflow (red phase)"
|
||||
- "TDD red-green-refactor"
|
||||
- "Automate workflow (coverage expansion)"
|
||||
- "API testing without browser"
|
||||
|
||||
session-06-quality-trace:
|
||||
docs:
|
||||
- path: "/docs/how-to/workflows/run-test-review.md"
|
||||
title: "Run Test Review Workflow"
|
||||
url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-test-review/"
|
||||
- path: "/docs/how-to/workflows/run-trace.md"
|
||||
title: "Run Trace Workflow"
|
||||
url: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-trace/"
|
||||
knowledge_fragments: []
|
||||
online_references:
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-test-review/"
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-trace/"
|
||||
workflows_referenced:
|
||||
- test-review
|
||||
- trace
|
||||
key_concepts:
|
||||
- "5 dimensions of test quality"
|
||||
- "Quality scoring (0-100)"
|
||||
- "Requirements traceability"
|
||||
- "Release gate decisions"
|
||||
|
||||
session-07-advanced:
|
||||
docs: []
|
||||
knowledge_fragments:
|
||||
categories:
|
||||
testing_patterns:
|
||||
- fixture-architecture.md
|
||||
- network-first.md
|
||||
- data-factories.md
|
||||
- component-tdd.md
|
||||
- api-testing-patterns.md
|
||||
- test-healing-patterns.md
|
||||
- selector-resilience.md
|
||||
- timing-debugging.md
|
||||
|
||||
playwright_utils:
|
||||
- api-request.md
|
||||
- network-recorder.md
|
||||
- intercept-network-call.md
|
||||
- recurse.md
|
||||
- log.md
|
||||
- file-utils.md
|
||||
- burn-in.md
|
||||
- network-error-monitor.md
|
||||
- contract-testing.md
|
||||
|
||||
browser_automation:
|
||||
- playwright-cli.md
|
||||
|
||||
configuration_governance:
|
||||
- playwright-config.md
|
||||
- ci-burn-in.md
|
||||
- selective-testing.md
|
||||
- feature-flags.md
|
||||
- risk-governance.md
|
||||
|
||||
quality_frameworks:
|
||||
- test-quality.md
|
||||
- test-levels-framework.md
|
||||
- test-priorities-matrix.md
|
||||
- nfr-criteria.md
|
||||
|
||||
auth_security:
|
||||
- email-auth.md
|
||||
- auth-session.md
|
||||
- error-handling.md
|
||||
online_references:
|
||||
- "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/reference/knowledge-base/"
|
||||
- "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge"
|
||||
workflows_referenced: []
|
||||
key_concepts:
|
||||
- "Menu-driven fragment exploration"
|
||||
- "Just-in-time deep-dive learning"
|
||||
- "35 knowledge fragments organized by category"
|
||||
@@ -0,0 +1,359 @@
# TEA Resources Index
# Comprehensive index of TEA documentation, knowledge fragments, and online resources

base_urls:
  online_docs: "https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise"
  github_repo: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise"
  github_knowledge: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge"

# Public Documentation (32 files)
documentation:
  tutorials:
    - name: "Getting Started with Test Architect"
      local: "/docs/tutorials/tea-lite-quickstart.md"
      online: "/tutorials/tea-lite-quickstart/"
      description: "30-minute quick start guide to TEA Lite"

  how_to_guides:
    workflows:
      - name: "Set Up Test Framework"
        local: "/docs/how-to/workflows/setup-test-framework.md"
        online: "/how-to/workflows/setup-test-framework/"
        workflow: framework

      - name: "Set Up CI Pipeline"
        local: "/docs/how-to/workflows/setup-ci.md"
        online: "/how-to/workflows/setup-ci/"
        workflow: ci

      - name: "Test Design"
        local: "/docs/how-to/workflows/run-test-design.md"
        online: "/how-to/workflows/run-test-design/"
        workflow: test-design

      - name: "ATDD"
        local: "/docs/how-to/workflows/run-atdd.md"
        online: "/how-to/workflows/run-atdd/"
        workflow: atdd

      - name: "Automate"
        local: "/docs/how-to/workflows/run-automate.md"
        online: "/how-to/workflows/run-automate/"
        workflow: automate

      - name: "Test Review"
        local: "/docs/how-to/workflows/run-test-review.md"
        online: "/how-to/workflows/run-test-review/"
        workflow: test-review

      - name: "Trace"
        local: "/docs/how-to/workflows/run-trace.md"
        online: "/how-to/workflows/run-trace/"
        workflow: trace

      - name: "NFR Assessment"
        local: "/docs/how-to/workflows/run-nfr-assess.md"
        online: "/how-to/workflows/run-nfr-assess/"
        workflow: nfr-assess

    customization:
      - name: "Configure Browser Automation"
        local: "/docs/how-to/customization/configure-browser-automation.md"
        online: "/how-to/customization/configure-browser-automation/"

      - name: "Integrate Playwright Utils with TEA"
        local: "/docs/how-to/customization/integrate-playwright-utils.md"
        online: "/how-to/customization/integrate-playwright-utils/"

    brownfield:
      - name: "Running TEA for Enterprise Projects"
        local: "/docs/how-to/brownfield/use-tea-for-enterprise.md"
        online: "/how-to/brownfield/use-tea-for-enterprise/"

      - name: "Using TEA with Existing Tests"
        local: "/docs/how-to/brownfield/use-tea-with-existing-tests.md"
        online: "/how-to/brownfield/use-tea-with-existing-tests/"

  explanation:
    - name: "TEA Overview"
      local: "/docs/explanation/tea-overview.md"
      online: "/explanation/tea-overview/"
      topics: ["Architecture", "Engagement models"]

    - name: "Testing as Engineering"
      local: "/docs/explanation/testing-as-engineering.md"
      online: "/explanation/testing-as-engineering/"
      topics: ["Philosophy", "Design principles"]

    - name: "Engagement Models"
      local: "/docs/explanation/engagement-models.md"
      online: "/explanation/engagement-models/"
      topics: ["Lite", "Solo", "Integrated", "Enterprise", "Brownfield"]

    - name: "Risk-Based Testing"
      local: "/docs/explanation/risk-based-testing.md"
      online: "/explanation/risk-based-testing/"
      topics: ["P0-P3 matrix", "Probability × Impact"]

    - name: "Test Quality Standards"
      local: "/docs/explanation/test-quality-standards.md"
      online: "/explanation/test-quality-standards/"
      topics: ["Definition of Done", "7 principles"]

    - name: "Knowledge Base System"
      local: "/docs/explanation/knowledge-base-system.md"
      online: "/explanation/knowledge-base-system/"
      topics: ["Fragment management", "35 fragments"]

    - name: "Network-First Patterns"
      local: "/docs/explanation/network-first-patterns.md"
      online: "/explanation/network-first-patterns/"
      topics: ["Network interception", "Race condition prevention"]

    - name: "Fixture Architecture"
      local: "/docs/explanation/fixture-architecture.md"
      online: "/explanation/fixture-architecture/"
      topics: ["Composition", "mergeTests pattern"]

    - name: "Step-File Architecture"
      local: "/docs/explanation/step-file-architecture.md"
      online: "/explanation/step-file-architecture/"
      topics: ["Micro-file design", "JIT loading", "Sequential enforcement"]

    - name: "Subagent Architecture"
      local: "/docs/explanation/subagent-architecture.md"
      online: "/explanation/subagent-architecture/"
      topics: ["Parallel execution", "Context optimization"]

  reference:
    - name: "Commands"
      local: "/docs/reference/commands.md"
      online: "/reference/commands/"

    - name: "Configuration"
      local: "/docs/reference/configuration.md"
      online: "/reference/configuration/"

    - name: "Knowledge Base"
      local: "/docs/reference/knowledge-base.md"
      online: "/reference/knowledge-base/"
      github_link: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge"

    - name: "Troubleshooting"
      local: "/docs/reference/troubleshooting.md"
      online: "/reference/troubleshooting/"

# Knowledge Fragments (34 files)
knowledge_fragments:
  testing_patterns:
    - name: "fixture-architecture"
      path: "/src/testarch/knowledge/fixture-architecture.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/fixture-architecture.md"
      description: "Composable fixture patterns and mergeTests"

    - name: "fixtures-composition"
      path: "/src/testarch/knowledge/fixtures-composition.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/fixtures-composition.md"
      description: "mergeTests composition patterns for combining utilities"

    - name: "network-first"
      path: "/src/testarch/knowledge/network-first.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/network-first.md"
      description: "Network interception safeguards"

    - name: "data-factories"
      path: "/src/testarch/knowledge/data-factories.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/data-factories.md"
      description: "Data seeding and setup patterns"

    - name: "component-tdd"
      path: "/src/testarch/knowledge/component-tdd.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/component-tdd.md"
      description: "TDD red-green-refactor loop"

    - name: "api-testing-patterns"
      path: "/src/testarch/knowledge/api-testing-patterns.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/api-testing-patterns.md"
      description: "Pure API testing without browser"

    - name: "test-healing-patterns"
      path: "/src/testarch/knowledge/test-healing-patterns.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/test-healing-patterns.md"
      description: "Auto-fix common test failures"

    - name: "selector-resilience"
      path: "/src/testarch/knowledge/selector-resilience.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/selector-resilience.md"
      description: "Robust selectors that don't break"

    - name: "timing-debugging"
      path: "/src/testarch/knowledge/timing-debugging.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/timing-debugging.md"
      description: "Race condition fixes"

  playwright_utils:
    - name: "overview"
      path: "/src/testarch/knowledge/overview.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/overview.md"
      description: "Playwright Utils overview and installation"

    - name: "api-request"
      path: "/src/testarch/knowledge/api-request.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/api-request.md"
      description: "Typed HTTP client with schema validation"

    - name: "network-recorder"
      path: "/src/testarch/knowledge/network-recorder.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/network-recorder.md"
      description: "HAR record and playback"

    - name: "intercept-network-call"
      path: "/src/testarch/knowledge/intercept-network-call.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/intercept-network-call.md"
      description: "Network spy and stub utilities"

    - name: "recurse"
      path: "/src/testarch/knowledge/recurse.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/recurse.md"
      description: "Async polling for eventual consistency"

    - name: "log"
      path: "/src/testarch/knowledge/log.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/log.md"
      description: "Test report logging utilities"

    - name: "file-utils"
      path: "/src/testarch/knowledge/file-utils.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/file-utils.md"
      description: "CSV/XLSX/PDF/ZIP validation"

    - name: "burn-in"
      path: "/src/testarch/knowledge/burn-in.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/burn-in.md"
      description: "Smart test selection via git diff"

    - name: "network-error-monitor"
      path: "/src/testarch/knowledge/network-error-monitor.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/network-error-monitor.md"
      description: "HTTP 4xx/5xx detection"

    - name: "contract-testing"
      path: "/src/testarch/knowledge/contract-testing.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/contract-testing.md"
      description: "Pact publishing and provider verification"

    - name: "visual-debugging"
      path: "/src/testarch/knowledge/visual-debugging.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/visual-debugging.md"
      description: "Trace viewer workflows and debugging artifacts"

  configuration_governance:
    - name: "playwright-config"
      path: "/src/testarch/knowledge/playwright-config.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/playwright-config.md"
      description: "Environment and timeout guardrails"

    - name: "ci-burn-in"
      path: "/src/testarch/knowledge/ci-burn-in.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/ci-burn-in.md"
      description: "CI orchestration and smart selection"

    - name: "selective-testing"
      path: "/src/testarch/knowledge/selective-testing.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/selective-testing.md"
      description: "Tag and grep filters"

    - name: "feature-flags"
      path: "/src/testarch/knowledge/feature-flags.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/feature-flags.md"
      description: "Feature flag governance and cleanup"

    - name: "risk-governance"
      path: "/src/testarch/knowledge/risk-governance.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/risk-governance.md"
      description: "Risk scoring matrix and gate rules"

    - name: "adr-quality-readiness-checklist"
      path: "/src/testarch/knowledge/adr-quality-readiness-checklist.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/adr-quality-readiness-checklist.md"
      description: "Quality readiness checklist for decisions and reviews"

  quality_frameworks:
    - name: "test-quality"
      path: "/src/testarch/knowledge/test-quality.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/test-quality.md"
      description: "Definition of Done execution limits"

    - name: "test-levels-framework"
      path: "/src/testarch/knowledge/test-levels-framework.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/test-levels-framework.md"
      description: "Unit/Integration/E2E selection criteria"

    - name: "test-priorities-matrix"
      path: "/src/testarch/knowledge/test-priorities-matrix.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/test-priorities-matrix.md"
      description: "P0-P3 coverage targets"

    - name: "probability-impact"
      path: "/src/testarch/knowledge/probability-impact.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/probability-impact.md"
      description: "Probability × impact scoring definitions"

    - name: "nfr-criteria"
      path: "/src/testarch/knowledge/nfr-criteria.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/nfr-criteria.md"
      description: "Non-functional requirements assessment"

  auth_security:
    - name: "email-auth"
      path: "/src/testarch/knowledge/email-auth.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/email-auth.md"
      description: "Magic link extraction and auth state"

    - name: "auth-session"
      path: "/src/testarch/knowledge/auth-session.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/auth-session.md"
      description: "Token persistence and multi-user auth"

    - name: "error-handling"
      path: "/src/testarch/knowledge/error-handling.md"
      github: "https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/blob/main/src/testarch/knowledge/error-handling.md"
      description: "Exception handling and retry validation"

# Quick Reference Maps
session_to_resources:
  session-01:
    primary_docs: ["tea-lite-quickstart", "tea-overview", "run-automate"]
    fragments: []

  session-02:
    primary_docs: ["testing-as-engineering", "risk-based-testing", "test-quality-standards"]
    fragments: ["test-quality", "probability-impact"]

  session-03:
    primary_docs: ["fixture-architecture", "network-first-patterns", "step-file-architecture"]
    fragments: ["fixture-architecture", "network-first", "data-factories"]

  session-04:
    primary_docs: ["run-test-design"]
    fragments: ["test-levels-framework", "test-priorities-matrix"]

  session-05:
    primary_docs: ["run-atdd", "run-automate"]
    fragments: ["component-tdd", "api-testing-patterns", "api-request"]

  session-06:
    primary_docs: ["run-test-review", "run-trace"]
    fragments: []

  session-07:
    primary_docs: []
    fragments: [] # All 35 fragments available via menu-driven exploration

# Web-Browsing Fallback Strategy
fallback_urls:
  playwright_docs: "https://playwright.dev/docs/intro"
  jest_docs: "https://jestjs.io/docs/getting-started"
  cypress_docs: "https://docs.cypress.io/guides/overview/why-cypress"
  vitest_docs: "https://vitest.dev/guide/"
  testing_library: "https://testing-library.com/docs/"
@@ -0,0 +1,130 @@
# Teach Me Testing - Usage Instructions

## Overview

The Teach Me Testing workflow is a multi-session learning companion that teaches testing progressively through 7 structured sessions with state persistence. It is designed for self-paced learning over 1-2 weeks.

## Who Should Use This

- **New QA Engineers:** Complete onboarding in testing fundamentals
- **Developers:** Learn testing from an integration perspective
- **Team Leads:** Understand architecture patterns and team practices
- **VPs/Managers:** Grasp testing strategy and quality metrics

## How to Run

### Starting Fresh

```bash
# From the TEA module location
cd /path/to/bmad-method-test-architecture-enterprise

# Run the workflow
bmad run teach-me-testing
```

Or invoke it through the TEA agent menu:

```bash
bmad agent tea
# Select [TMT] Teach Me Testing
```

### Continuing Existing Progress

The workflow automatically detects existing progress and resumes where you left off. Your progress is saved at:

- `{test_artifacts}/teaching-progress/{your-name}-tea-progress.yaml`

## Workflow Structure

### 7 Sessions

1. **Quick Start (30 min)** - TEA Lite intro, run the automate workflow
2. **Core Concepts (45 min)** - Risk-based testing, DoD, philosophy
3. **Architecture (60 min)** - Fixtures, network patterns, framework
4. **Test Design (60 min)** - Risk assessment workflow
5. **ATDD & Automate (60 min)** - ATDD + Automate workflows
6. **Quality & Trace (45 min)** - Test review + Trace workflows
7. **Advanced Patterns (ongoing)** - Menu-driven knowledge fragment exploration

### Non-Linear Learning

- Jump to any session based on your experience level
- Beginners: Start at Session 1
- Intermediate: Skip to Sessions 3-6
- Experienced: Jump to Session 7 (Advanced)

### Session Flow

Each session follows this pattern:

1. Load relevant TEA docs just-in-time
2. Present teaching content (mostly autonomous)
3. Knowledge validation quiz (interactive)
4. Generate session notes artifact
5. Update progress file
6. Return to session menu (continue or exit)

## Progress Tracking

Your progress is automatically saved after each session:

- **Progress file:** `{test_artifacts}/teaching-progress/{your-name}-tea-progress.yaml`
- **Session notes:** `{test_artifacts}/tea-academy/{your-name}/session-{N}-notes.md`
- **Certificate:** `{test_artifacts}/tea-academy/{your-name}/tea-completion-certificate.md`
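As a rough illustration of how the `{placeholder}` templates in these paths could be resolved, here is a minimal sketch; the `expand` helper and the sample values are hypothetical, not part of the workflow:

```python
def expand(template: str, values: dict) -> str:
    """Fill each {placeholder} in a path template with its configured value."""
    out = template
    for key, value in values.items():
        out = out.replace("{" + key + "}", value)
    return out

# Example: resolve the progress-file path for a learner named "jane".
progress = expand(
    "{test_artifacts}/teaching-progress/{your-name}-tea-progress.yaml",
    {"test_artifacts": "artifacts", "your-name": "jane"},
)
# -> "artifacts/teaching-progress/jane-tea-progress.yaml"
```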

## Quiz Scoring

- **Passing threshold:** ≥70%
- **On failure:** Option to review content or continue anyway
- **Attempts:** 3 attempts per question before the correct answer is shown
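The scoring rules above amount to a simple threshold check; the function below is an illustrative sketch, not workflow code:

```python
def grade_quiz(answers_correct: list[bool], passing_threshold: float = 0.70) -> dict:
    """Apply the TEA Academy quiz rule: pass at >= 70% correct answers."""
    score = sum(answers_correct) / len(answers_correct)
    return {
        "score_pct": round(score * 100),
        "passed": score >= passing_threshold,
    }

# A learner answering 5 of 7 questions correctly (~71%) passes the session quiz.
result = grade_quiz([True, True, True, True, True, False, False])
```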

## Completion

Complete all 7 sessions to receive your TEA Academy completion certificate with:

- Session completion dates and scores
- Skills acquired checklist
- Learning artifact paths
- Recommended next steps

## Tips for Success

1. **Set aside dedicated time** - Each session requires focus (30-90 min)
2. **Take notes** - Session notes are generated, but add your own insights
3. **Apply immediately** - Practice concepts on your current project
4. **Explore fragments** - Session 7 has 35 knowledge fragments to deep-dive into
5. **Share with team** - Help others learn by sharing your experience

## Customization by Role

The workflow adapts examples based on your role:

- **QA:** Practical testing focus, workflow usage
- **Dev:** Integration perspective, TDD approach
- **Lead:** Architecture decisions, team patterns
- **VP:** Strategy, ROI, quality metrics

## Troubleshooting

### Progress file corrupted

- Workflow detects corruption and offers fresh start
- Backup file created automatically

### Missing TEA docs

- Workflow uses Web-Browsing fallback for external frameworks
- Primary source is always local docs

### Session interrupted

- Progress auto-saved after quiz completion
- Resume from session menu on next run

## Support

- **Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/>
- **Knowledge Fragments:** <https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge>
- **Issues:** Report via TEA module repository
@@ -0,0 +1,235 @@
---
name: 'step-01-init'
description: 'Initialize TEA Academy - check for existing progress and route to continuation or new assessment'

nextStepFile: './step-02-assess.md'
continueFile: './step-01b-continue.md'
progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
progressTemplate: '../templates/progress-template.yaml'
---

# Step 1: Initialize TEA Academy

## STEP GOAL:

To welcome the learner, check for existing progress from previous sessions, and route to either continuation (if progress exists) or new assessment (if starting fresh).

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading the next step with 'C', ensure the entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS communicate output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals
- ✅ Together we build their testing knowledge progressively

### Step-Specific Rules:

- 🎯 Focus ONLY on initialization and routing
- 🚫 FORBIDDEN to start teaching yet - that comes in session steps
- 💬 Approach: Check for progress, route appropriately
- 🚪 This is the entry point - sets up everything that follows

## EXECUTION PROTOCOLS:

- 🎯 Check for existing progress file
- 💾 Create initial progress if new learner
- 📖 Route to continuation or assessment based on progress
- 🚫 FORBIDDEN to skip continuation check - critical for multi-session learning

## CONTEXT BOUNDARIES:

- Available context: User name, test artifacts path, templates
- Focus: Detect continuation vs new start
- Limits: No teaching yet, no assessment yet
- Dependencies: None - this is the first step

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless the user explicitly requests a change.

### 1. Welcome Message

Display:

"🧪 **Welcome to TEA Academy - Test Architecture Enterprise Learning**

A multi-session learning companion that teaches testing progressively through 7 structured sessions.

Let me check if you've started this journey before..."

### 2. Check for Existing Progress

Check if {progressFile} exists.

**How to check:**

- Attempt to read {progressFile}
- If the file exists and is readable → Progress found
- If the file is not found or errors → No progress (new learner)
|
||||
|
||||
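The existence check and routing above can be sketched as a small helper (an illustrative sketch only; the function name and returned step-file paths mirror this workflow's placeholders and are assumptions, not a real TEA API):

```python
from pathlib import Path

def route_on_progress(progress_path: str) -> str:
    """Route to continuation when progress exists, otherwise to the new-learner path."""
    try:
        Path(progress_path).read_text(encoding="utf-8")
        return "./step-01b-continue.md"  # file exists and is readable -> progress found
    except OSError:
        # missing or unreadable -> new learner (progress file created before routing on)
        return "./step-02-assess.md"
```

Note that any read error, not just a missing file, is treated as "no progress", matching the rule above.
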
### 3. Route Based on Progress

**IF progress file EXISTS:**

Display:

"✅ **Welcome back!** I found your existing progress.

Let me load where you left off..."

**THEN:** Immediately load, read entire file, then execute {continueFile}

---

**IF progress file DOES NOT EXIST:**

Display:

"📝 **Starting fresh!** I'll create your progress tracking file.

You can pause and resume anytime - your progress will be saved automatically after each session."

**THEN:** Proceed to step 4

### 4. Create Initial Progress File (New Learner Only)

Load {progressTemplate} and create {progressFile} with:

```yaml
---
# TEA Academy Progress Tracking
user: { user_name }
role: null # Will be set in assessment
experience_level: null # Will be set in assessment
learning_goals: null # Will be set in assessment
pain_points: null # Optional, set in assessment

started_date: { current_date }
last_session_date: { current_date }

sessions:
  - id: session-01-quickstart
    name: 'Quick Start'
    duration: '30 min'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-02-concepts
    name: 'Core Concepts'
    duration: '45 min'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-03-architecture
    name: 'Architecture & Patterns'
    duration: '60 min'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-04-test-design
    name: 'Test Design'
    duration: '60 min'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-05-atdd-automate
    name: 'ATDD & Automate'
    duration: '60 min'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-06-quality-trace
    name: 'Quality & Trace'
    duration: '45 min'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-07-advanced
    name: 'Advanced Patterns'
    duration: 'ongoing'
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

sessions_completed: 0
total_sessions: 7
completion_percentage: 0
next_recommended: session-01-quickstart

stepsCompleted: ['step-01-init']
lastStep: 'step-01-init'
lastContinued: { current_date }

certificate_generated: false
certificate_path: null
completion_date: null
---
```

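For illustration, the schema above can also be built programmatically before serializing it to YAML (a hedged sketch; `initial_progress` is a hypothetical helper, and the field names follow the template exactly):

```python
# (id, name, duration) for the 7 sessions, in curriculum order
SESSIONS = [
    ("session-01-quickstart", "Quick Start", "30 min"),
    ("session-02-concepts", "Core Concepts", "45 min"),
    ("session-03-architecture", "Architecture & Patterns", "60 min"),
    ("session-04-test-design", "Test Design", "60 min"),
    ("session-05-atdd-automate", "ATDD & Automate", "60 min"),
    ("session-06-quality-trace", "Quality & Trace", "45 min"),
    ("session-07-advanced", "Advanced Patterns", "ongoing"),
]

def initial_progress(user_name: str, today: str) -> dict:
    """Build the initial progress structure mirroring the YAML template above."""
    return {
        "user": user_name,
        "role": None, "experience_level": None,
        "learning_goals": None, "pain_points": None,
        "started_date": today, "last_session_date": today,
        "sessions": [
            {"id": sid, "name": name, "duration": dur, "status": "not-started",
             "started_date": None, "completed_date": None,
             "score": None, "notes_artifact": None}
            for sid, name, dur in SESSIONS
        ],
        "sessions_completed": 0, "total_sessions": 7,
        "completion_percentage": 0,
        "next_recommended": "session-01-quickstart",
        "stepsCompleted": ["step-01-init"], "lastStep": "step-01-init",
        "lastContinued": today,
        "certificate_generated": False,
        "certificate_path": None, "completion_date": None,
    }
```

A dict like this could then be dumped with any YAML serializer; the point is only that every field required by later steps is present from the start.
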
### 5. Proceed to Assessment (New Learner Only)

Display:

"✅ **Progress file created!**

Now let's learn about you - your role, experience level, and learning goals.

This helps me customize examples and recommendations for you.

**Proceeding to assessment...**"

**THEN:** Immediately load, read entire file, then execute {nextStepFile}

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Progress file check performed correctly
- Existing learners routed to continuation (step-01b)
- New learners get progress file created
- Progress file has complete schema with all 7 sessions
- New learners routed to assessment (step-02)
- stepsCompleted array initialized

### ❌ SYSTEM FAILURE:

- Skipping progress file check
- Not routing to continuation for existing learners
- Creating duplicate progress files
- Progress file missing required fields
- Not updating stepsCompleted array
- Asking user questions before checking progress

**Master Rule:** This is an auto-proceed initialization step. Check progress, route appropriately, no user menu needed.
@@ -0,0 +1,147 @@
---
name: 'step-01b-continue'
description: 'Resume TEA Academy learning - load progress and display dashboard'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
nextStepFile: './step-03-session-menu.md'
---

# Step 1b: Continue TEA Academy

## STEP GOAL:

To resume the TEA Academy workflow from a previous session by loading progress, displaying a dashboard, and routing to the session menu.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate _new instructional content_ without user input (auto-proceed steps may display status/route)
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals
- ✅ Together we build their testing knowledge progressively

### Step-Specific Rules:

- 🎯 Focus ONLY on loading progress and routing to session menu
- 🚫 FORBIDDEN to start teaching - that happens in session steps
- 💬 Approach: Load progress, show dashboard, route to menu
- 🚪 This is the continuation entry point - seamless resume

## EXECUTION PROTOCOLS:

- 🎯 Load progress file completely
- 💾 Update lastContinued timestamp
- 📖 Display progress dashboard with completion status
- 🚫 FORBIDDEN to skip dashboard - learners need to see progress
- ⏭️ Auto-route to session menu after dashboard

## CONTEXT BOUNDARIES:

- Available context: Progress file with all session data
- Focus: Display progress, route to menu
- Limits: No teaching, no session execution
- Dependencies: Progress file must exist (checked in step-01-init)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Load Progress File

Read {progressFile} completely and extract:

- user
- role
- experience_level
- started_date
- sessions array (all 7 sessions with status, scores)
- sessions_completed
- completion_percentage
- next_recommended

### 2. Update Last Continued Timestamp

Update {progressFile} frontmatter:

- Set `lastContinued: {current_date}`
- Keep all other fields unchanged

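A minimal sketch of this surgical update, assuming `lastContinued` sits at the top level of the progress file as in the step-1 template (the helper name is hypothetical):

```python
import re

def touch_last_continued(yaml_text: str, today: str) -> str:
    """Rewrite only the top-level lastContinued line, leaving every other field untouched."""
    return re.sub(r"(?m)^lastContinued:.*$", f"lastContinued: {today}", yaml_text)

doc = "lastStep: 'step-01-init'\nlastContinued: 2025-01-01\ncertificate_generated: false\n"
updated = touch_last_continued(doc, "2025-02-15")
```

A line-targeted rewrite like this avoids reserializing the whole file, which is one way to guarantee "keep all other fields unchanged".
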
### 3. Display Progress Dashboard

Display:

"🧪 **Welcome back to TEA Academy, {user}!**

**Your Role:** {role}
**Experience Level:** {experience_level}
**Started:** {started_date}
**Progress:** {completion_percentage}% ({sessions_completed} of 7 sessions completed)

---

### 📊 Session Progress

{Display each session with completion indicator}

{For each session in sessions array:}
{If status == 'completed':}
✅ **Session {N}:** {name} - Completed {completed_date} (Score: {score}/100)
{If status == 'in-progress':}
🔄 **Session {N}:** {name} - In Progress (Started {started_date})
{If status == 'not-started':}
⬜ **Session {N}:** {name} - Not Started

---

### 🎯 Next Recommended

{next_recommended}

---

**Let's continue your learning journey!**

Loading session menu..."

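The indicator logic and completion percentage above can be sketched as follows (illustrative only; the dictionary keys follow the progress-file schema from step 1):

```python
INDICATOR = {"completed": "✅", "in-progress": "🔄", "not-started": "⬜"}

def dashboard_lines(sessions: list[dict]) -> tuple[list[str], int]:
    """Render one dashboard line per session and compute the completion percentage."""
    lines, done = [], 0
    for n, s in enumerate(sessions, start=1):
        icon = INDICATOR[s["status"]]
        if s["status"] == "completed":
            done += 1
            lines.append(f"{icon} **Session {n}:** {s['name']} - "
                         f"Completed {s['completed_date']} (Score: {s['score']}/100)")
        elif s["status"] == "in-progress":
            lines.append(f"{icon} **Session {n}:** {s['name']} - "
                         f"In Progress (Started {s['started_date']})")
        else:
            lines.append(f"{icon} **Session {n}:** {s['name']} - Not Started")
    pct = round(100 * done / len(sessions))  # e.g. 1 of 7 -> 14
    return lines, pct
```
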
### 4. Route to Session Menu

Display:

"**Proceeding to session menu...**"

**THEN:** Immediately load, read entire file, then execute {nextStepFile}

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Progress file loaded correctly
- lastContinued timestamp updated
- Dashboard displayed with accurate completion status
- Session indicators correct (✅ completed, 🔄 in-progress, ⬜ not-started)
- Completion percentage calculated correctly
- Next recommended session identified
- Auto-routed to session menu (step-03)

### ❌ SYSTEM FAILURE:

- Not loading progress file
- Dashboard missing or incomplete
- Incorrect completion indicators
- Not updating lastContinued timestamp
- Asking user for input instead of auto-routing
- Not routing to session menu

**Master Rule:** This is an auto-proceed continuation step. Load progress, show dashboard, route to session menu - no user menu needed.
@@ -0,0 +1,258 @@
---
name: 'step-02-assess'
description: 'Gather learner role, experience level, learning goals, and pain points to customize teaching'

nextStepFile: './step-03-session-menu.md'
progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
---

# Step 2: Learner Assessment

## STEP GOAL:

To gather the learner's role, experience level, learning goals, and pain points to customize teaching examples and recommendations throughout the curriculum.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate _new instructional content_ without user input (auto-proceed steps may display status/route)
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step (auto-proceed), ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals
- ✅ Together we build their testing knowledge progressively

### Step-Specific Rules:

- 🎯 Focus ONLY on gathering assessment data
- 🚫 FORBIDDEN to start teaching yet - that comes in session steps
- 💬 Approach: Ask clear questions, validate responses, explain why we're asking
- 🚪 This assessment customizes the entire learning experience

## EXECUTION PROTOCOLS:

- 🎯 Ask questions one at a time
- 💾 Validate each response before moving forward
- 📖 Update progress file with complete assessment data
- 🚫 FORBIDDEN to skip validation - ensures data quality

## CONTEXT BOUNDARIES:

- Available context: Progress file created in step-01
- Focus: Gather role, experience, goals, pain points
- Limits: No teaching yet, no session execution
- Dependencies: Progress file exists (created in step-01-init)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Welcome and Explain Assessment

Display:

"📋 **Learner Assessment**

Before we begin, let me learn about you. This helps me:

- Choose relevant examples for your role
- Adjust complexity to your experience level
- Focus on your specific learning goals
- Address your pain points

This will take just 2-3 minutes."

### 2. Gather Role

Ask:

"**What is your role?**

Please select one:

- **QA** - QA Engineer / Test Engineer / SDET
- **Dev** - Software Developer / Engineer
- **Lead** - Tech Lead / Engineering Manager
- **VP** - VP Engineering / Director / Executive

Your role helps me tailor examples to your perspective."

**Wait for response.**

**Validate response:**

- Must be one of: QA, Dev, Lead, VP (case-insensitive)
- If invalid: "Please select one of the four options: QA, Dev, Lead, or VP"
- Repeat until valid

**Store validated role for later update to progress file.**

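The case-insensitive validation above can be sketched as a small helper (hypothetical; returning `None` signals that the re-prompt message should be shown and the question repeated):

```python
# canonical spellings keyed by a normalized form of the answer
VALID_ROLES = {"qa": "QA", "dev": "Dev", "lead": "Lead", "vp": "VP"}

def validate_role(answer: str):
    """Return the canonical role for a valid answer, or None when re-prompting is needed."""
    return VALID_ROLES.get(answer.strip().lower())
```

The same normalize-then-look-up pattern applies to the experience-level question in the next section, just with a different table.
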
### 3. Gather Experience Level

Ask:

"**What is your experience level with testing?**

Please select one:

- **Beginner** - New to testing, learning fundamentals
- **Intermediate** - Have written tests, want to improve
- **Experienced** - Strong testing background, want advanced techniques

Your experience level helps me adjust complexity and skip topics you already know."

**Wait for response.**

**Validate response:**

- Must be one of: Beginner, Intermediate, Experienced (case-insensitive)
- If invalid: "Please select one of the three levels: Beginner, Intermediate, or Experienced"
- Repeat until valid

**Store validated experience_level for later update to progress file.**

### 4. Gather Learning Goals

Ask:

"**What are your learning goals?**

Tell me what you want to achieve with TEA Academy. For example:

- Learn testing fundamentals from scratch
- Understand TEA methodology and workflows
- Improve test quality and reduce flakiness
- Master advanced patterns (fixtures, network-first, etc.)
- Prepare for QA onboarding at my company

**Your answer helps me recommend which sessions to focus on.**"

**Wait for response.**

**Validate response:**

- Must not be empty
- Should be at least 10 characters
- If too short: "Please provide more detail about your learning goals (at least a sentence)"
- Repeat until valid

**Store learning_goals for later update to progress file.**

### 5. Gather Pain Points (Optional)

Ask:

"**What are your current pain points with testing?** _(Optional)_

For example:

- Flaky tests that fail randomly
- Slow test suites
- Hard to maintain tests
- Don't know where to start
- Team doesn't value testing

**This helps me provide targeted examples. You can skip this by typing 'skip' or 'none'.**"

**Wait for response.**

**Handle response:**

- If response is "skip", "none", or similar → Set pain_points to null
- If response is provided → Store pain_points for later update
- No validation needed (optional field)

### 6. Summarize Assessment

Display:

"✅ **Assessment Complete!**

Here's what I learned about you:

**Role:** {role}
**Experience Level:** {experience_level}
**Learning Goals:** {learning_goals}
**Pain Points:** {pain_points or 'None specified'}

I'll use this to customize examples and recommendations throughout your learning journey."

### 7. Update Progress File

Load {progressFile} and update the following fields:

- `role: {role}`
- `experience_level: {experience_level}`
- `learning_goals: {learning_goals}`
- `pain_points: {pain_points}` (or null if not provided)

Update stepsCompleted array:

- Append 'step-02-assess' to stepsCompleted array
- Update lastStep: 'step-02-assess'

**Save the updated progress file.**

### 8. Provide Next Steps Preview

Display:

"**Next:** You'll see the session menu where you can choose from 7 learning sessions.

**Based on your experience level:**

{If beginner:}

- I recommend starting with Session 1 (Quick Start)
- It introduces TEA with a hands-on example

{If intermediate:}

- You might want to skip to Session 3 (Architecture)
- Or review Session 2 (Core Concepts) first if you want fundamentals

{If experienced:}

- Feel free to jump to Session 7 (Advanced Patterns)
- Or pick specific sessions based on your goals

You can take sessions in any order and pause anytime!"

### 9. Proceed to Session Menu

After the assessment summary, proceed directly to the session menu:

- Load, read entire file, then execute {nextStepFile}

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All required fields gathered (role, experience_level, learning_goals)
- Optional pain_points handled correctly
- All responses validated before proceeding
- Progress file updated with assessment data
- stepsCompleted array updated with 'step-02-assess'
- Experience-based recommendations provided
- User routed to session menu (step-03)

### ❌ SYSTEM FAILURE:

- Skipping validation of required fields
- Not updating progress file
- Not adding to stepsCompleted array
- Proceeding without waiting for user responses
- Not providing experience-based recommendations
- Hardcoding responses instead of asking user

**Master Rule:** Assessment must be complete and validated before proceeding to session menu.
@@ -0,0 +1,219 @@
---
name: 'step-03-session-menu'
description: 'Session selection hub - display all 7 sessions with completion status and route to selected session or completion'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
session01File: './step-04-session-01.md'
session02File: './step-04-session-02.md'
session03File: './step-04-session-03.md'
session04File: './step-04-session-04.md'
session05File: './step-04-session-05.md'
session06File: './step-04-session-06.md'
session07File: './step-04-session-07.md'
completionFile: './step-05-completion.md'
---

# Step 3: Session Menu (Hub)

## STEP GOAL:

To present all 7 learning sessions with completion status, allow non-linear session selection, and route to the chosen session or completion. This is the central hub - all sessions return here.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals
- ✅ Together we build their testing knowledge progressively

### Step-Specific Rules:

- 🎯 Focus ONLY on displaying sessions and routing
- 🚫 FORBIDDEN to start teaching - that happens in session steps
- 💬 Approach: Show progress, let learner choose their path
- 🚪 This is the HUB - all sessions loop back here

## EXECUTION PROTOCOLS:

- 🎯 Load progress file to get session completion status
- 💾 Display sessions with accurate indicators
- 📖 Route to selected session or completion
- 🚫 FORBIDDEN to skip progress check - status indicators critical
- ⏭️ No stepsCompleted update (this is a routing hub, not a content step)

## CONTEXT BOUNDARIES:

- Available context: Progress file with all session data
- Focus: Display menu, route to selection
- Limits: No teaching, no session execution
- Dependencies: Progress file exists (created in step-01, updated in step-02)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Load Progress File

Read {progressFile} and extract:

- user
- role
- experience_level
- sessions array (all 7 sessions with status, scores, dates)
- sessions_completed
- completion_percentage
- next_recommended

### 2. Display Session Menu with Status

Display:

"🧪 **TEA Academy - Session Menu**

**Progress:** {completion_percentage}% ({sessions_completed} of 7 sessions completed)

---

### 📚 Available Sessions

{For each session in sessions array, display with status indicator:}

**Session 1: Quick Start (30 min)**
{status_indicator} TEA Lite intro, run automate workflow
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

**Session 2: Core Concepts (45 min)**
{status_indicator} Risk-based testing, DoD, testing philosophy
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

**Session 3: Architecture & Patterns (60 min)**
{status_indicator} Fixtures, network patterns, framework setup
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

**Session 4: Test Design (60 min)**
{status_indicator} Risk assessment, test design workflow
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

**Session 5: ATDD & Automate (60 min)**
{status_indicator} ATDD + Automate workflows, TDD approach
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

**Session 6: Quality & Trace (45 min)**
{status_indicator} Test review + Trace workflows, quality metrics
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

**Session 7: Advanced Patterns (ongoing)**
{status_indicator} Menu-driven knowledge fragment exploration (35 fragments)
{if completed: Score: {score}/100 | Completed: {completed_date}}
{if in-progress: Started: {started_date}}

---

**Status Indicators:**

- ✅ = Completed
- 🔄 = In Progress
- ⬜ = Not Started

---

{If next_recommended exists:}
💡 **Recommended Next:** {next_recommended}
"

### 3. Check for Completion

**Before displaying menu options, check:**

If all 7 sessions have status 'completed' AND certificate_generated != true:

- Display: "🎉 **Congratulations!** You've completed all 7 sessions!"
- Skip session menu options
- Proceed directly to step 4b (route to completion)

**Otherwise:** Display session menu options in step 4a

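The completion check above can be sketched as follows (illustrative; the keys follow the progress-file schema from step 1):

```python
def ready_for_certificate(progress: dict) -> bool:
    """True when all 7 sessions are completed and no certificate has been generated yet."""
    all_done = all(s["status"] == "completed" for s in progress["sessions"])
    return all_done and not progress.get("certificate_generated", False)
```

Guarding on `certificate_generated` prevents regenerating the certificate every time a fully finished learner revisits the hub.
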
### 4a. Present Session Menu Options (Sessions Remaining)

Display:

"**Select a session or exit:**

**[1-7]** Start or continue a session
**[X]** Save progress and exit

What would you like to do?"

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- Route based on user selection
- User can ask questions - always respond and redisplay menu

#### Menu Handling Logic:

- IF 1: Load, read entire file, then execute {session01File}
- IF 2: Load, read entire file, then execute {session02File}
- IF 3: Load, read entire file, then execute {session03File}
- IF 4: Load, read entire file, then execute {session04File}
- IF 5: Load, read entire file, then execute {session05File}
- IF 6: Load, read entire file, then execute {session06File}
- IF 7: Load, read entire file, then execute {session07File}
- IF X: Display "Progress saved. See you next time! 👋" and END workflow
- IF any other input: "Please select a session number (1-7) or X to exit", then [Redisplay Menu Options](#4a-present-session-menu-options-sessions-remaining)

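The routing table above can be sketched as a lookup (a hypothetical helper; the file names mirror the frontmatter variables `session01File` through `session07File`):

```python
# "1".."7" -> session step file, matching the frontmatter variables
SESSION_FILES = {str(n): f"./step-04-session-{n:02d}.md" for n in range(1, 8)}

def route_menu_choice(choice: str):
    """Map a menu selection to a step file, 'EXIT', or None for invalid input."""
    c = choice.strip().upper()
    if c == "X":
        return "EXIT"
    return SESSION_FILES.get(c)  # None -> re-prompt and redisplay the menu
```

A table keeps the nine IF branches in one place, so adding a session means one new entry rather than another branch.
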
### 4b. Route to Completion (All Sessions Done)

**If all 7 sessions completed:**

Display:

"**Proceeding to generate your completion certificate...**"

Load, read entire file, then execute {completionFile}

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Progress file loaded correctly
- All 7 sessions displayed with accurate status indicators
- Completion percentage calculated correctly
- Session status matches progress file (✅ completed, 🔄 in-progress, ⬜ not-started)
- User selection validated (1-7 or X)
- Correct routing to selected session file
- Completion detected when all 7 done
- Exit option saves and ends workflow cleanly
- No stepsCompleted update (this is a routing hub, not a content step)

### ❌ SYSTEM FAILURE:

- Not loading progress file
- Wrong status indicators
- Incorrect completion percentage
- Not detecting when all sessions complete
- Routing to wrong session file
- Updating stepsCompleted (hub should not update this)
- Not displaying session descriptions
- Not allowing non-linear session selection

**Master Rule:** This is the central hub. Display accurate status, let learner choose freely, route correctly. All sessions return here.
@@ -0,0 +1,460 @@
|
||||
---
|
||||
name: 'step-04-session-01'
|
||||
description: 'Session 1: Quick Start - TEA Lite intro, run automate workflow (30 min)'
|
||||
|
||||
progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
|
||||
sessionNotesTemplate: '../templates/session-notes-template.md'
|
||||
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-01-notes.md'
|
||||
nextStepFile: './step-03-session-menu.md'
|
||||
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
|
||||
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
|
||||
---
|
||||
|
||||
# Step 4: Session 1 - Quick Start
|
||||
|
||||
## STEP GOAL:
|
||||
|
||||
To provide immediate value through a 30-minute introduction to TEA Lite, run the automate workflow as a hands-on example, validate understanding through a quiz, and generate session notes.
|
||||
|
||||
## MANDATORY EXECUTION RULES (READ FIRST):
|
||||
|
||||
### Universal Rules:
|
||||
|
||||
- 🛑 NEVER generate _unsolicited_ content without user input (session flow content is allowed once session begins)
|
||||
- 📖 CRITICAL: Read the complete step file before taking any action
|
||||
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
|
||||
- 📋 YOU ARE A FACILITATOR, not a content generator
|
||||
- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
|
||||
|
||||
### Role Reinforcement:
|
||||
|
||||
- ✅ You are a Master Test Architect and Teaching Guide
|
||||
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals
- ✅ Together we build their testing knowledge progressively

### Step-Specific Rules:

- 🎯 Focus ONLY on Session 1 content (Quick Start)
- 🚫 FORBIDDEN to skip ahead to other sessions
- 💬 Approach: Teach concepts, provide examples, quiz understanding
- 🚪 Teaching is mostly autonomous, quiz is collaborative
- 📚 Reference TEA docs and provide URLs for further reading

## EXECUTION PROTOCOLS:

- 🎯 Load TEA docs just-in-time (not all at once)
- 💾 Generate session notes after completion
- 📖 Update progress file with session completion and score
- 🚫 FORBIDDEN to skip quiz - validates understanding
- ⏭️ Always return to session menu hub after completion

## CONTEXT BOUNDARIES:

- Available context: Progress file with user role/experience
- Focus: Session 1 - TEA Lite introduction
- Limits: Only Session 1 content, don't preview other sessions
- Dependencies: Progress file exists with assessment data

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Session Welcome

Display:

"🧪 **Session 1: Quick Start** (30 minutes)

**Objective:** Get immediate value by seeing TEA in action

**What you'll learn:**

- What is TEA and why it exists
- TEA Lite quick start approach
- How to run your first TEA workflow (Automate)
- TEA engagement models

Let's get started!"

### 2. Update Progress File (Session Started)

Load {progressFile} and update session-01-quickstart:

- Set `status: 'in-progress'`
- Set `started_date: {current_date}`

Save the updated progress file.

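As a sketch, the session entry in {progressFile} after this step might look like the fragment below. The surrounding `sessions:` key and the date format are assumptions for illustration; the real schema comes from the progress-file template.

```yaml
# Illustrative fragment only - the actual schema is defined by the progress template.
sessions:
  session-01-quickstart:
    status: 'in-progress'          # set by this step
    started_date: '{current_date}' # set by this step
```
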
### 3. Teaching: What is TEA?

Present this content (mostly autonomous, clear and educational):

"### 📖 What is TEA (Test Architecture Enterprise)?

TEA is a comprehensive test architecture framework that provides:

- **9 Workflows:** Teach Me Testing, Framework, Test Design, ATDD, Automate, Test Review, Trace, NFR Assessment, CI
- **35 Knowledge Fragments:** Distilled expertise on patterns, best practices, Playwright Utils
- **Quality Standards:** Definition of Done with execution limits (no flaky tests, no hard waits, etc.)
- **Risk-Based Testing:** P0-P3 matrix for prioritizing test coverage

**Why TEA exists:**
Testing knowledge doesn't scale through manual teaching. TEA makes testing expertise accessible through:

- Structured workflows that guide you step-by-step
- Documentation (32 docs) organized by type (tutorials, how-to, explanation, reference)
- Knowledge fragments for just-in-time learning
- Online resources: <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/>

**TEA Engagement Models:**

1. **TEA Lite (30 min):** Quick start - run Automate workflow, generate tests
2. **TEA Solo:** Use workflows individually as needed
3. **TEA Integrated:** Full lifecycle - Framework → Test Design → ATDD/Automate → Review → Trace
4. **TEA Enterprise:** Add NFR Assessment + CI integration for compliance
5. **TEA Brownfield:** Adapt TEA for existing test suites

**Today we're experiencing TEA Lite!**"

### 4. Teaching: TEA Lite Quick Start

Present this content (adapt examples based on user role from progress file):

"### 🚀 TEA Lite: Your First Workflow

The **Automate workflow** generates tests for your application automatically.

**How it works:**

1. You describe what needs testing
2. TEA analyzes your app structure
3. Workflow generates test files with TEA best practices
4. You review and run the tests

{If role == QA:}
**For QA Engineers:** This helps you quickly expand test coverage without writing every test manually. Focus on test design, let TEA handle boilerplate.

{If role == Dev:}
**For Developers:** This generates tests following best practices so you can focus on implementation. Tests are maintainable and follow fixture patterns.

{If role == Lead:}
**For Tech Leads:** This standardizes test architecture across your team. Everyone writes tests the same way using TEA patterns.

{If role == VP:}
**For VPs:** This scales testing across teams without manual training. New hires can generate quality tests from day one.

**Let me show you how the Automate workflow works conceptually:**

1. **Input:** You provide targets (features/pages to test)
2. **TEA analyzes:** Understands your app structure
3. **Test generation:** Creates API and/or E2E tests
4. **Output:** Test files in your test suite with proper fixtures

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-automate/>

**Note:** We won't actually run the workflow now (you can do that on your project later), but you understand the concept."

### 5. Teaching: Key Concepts

Present this content:

"### 🎯 Key Concepts from Session 1

**1. TEA is a framework:** Not just docs, but executable workflows that guide you

**2. Risk-based testing:** Prioritize what matters (P0 critical, P3 nice-to-have)

**3. Quality standards:** Definition of Done ensures reliable tests

- No flaky tests
- No hard waits/sleeps
- Stateless & parallelizable
- Self-cleaning tests

**4. Engagement models:** Choose how much TEA you need (Lite → Solo → Integrated → Enterprise → Brownfield)

**5. Knowledge fragments:** 35 fragments for deep-dive topics when you need them

- Testing patterns (fixtures, network-first, data factories)
- Playwright Utils (api-request, network-recorder, recurse)
- Configuration & governance (CI, feature flags, risk)

**You've now experienced TEA Lite! In future sessions, we'll go deeper.**"

### 6. Quiz: Validate Understanding

Display:

"### ✅ Quick Knowledge Check

Let me ask you 3 questions to validate your understanding. Passing score: ≥70% (2 of 3 correct)."

**Question 1:**

"**Question 1 of 3:**

What is the primary purpose of TEA?

A) Replace all testing tools with a single framework
B) Make testing expertise accessible through structured workflows and knowledge
C) Automate 100% of test writing
D) Only works for Playwright tests

Your answer (A, B, C, or D):"

**Wait for response. Validate:**

- Correct answer: B
- If correct: "✅ Correct! TEA makes testing expertise accessible and scalable."
- If incorrect: "❌ Not quite. TEA's purpose is to make testing expertise accessible through structured workflows and knowledge (B). It's not about replacing tools or automating everything."

**Store result (1 point if correct, 0 if incorrect)**

**Question 2:**

"**Question 2 of 3:**

What does the P0-P3 risk matrix help with?

A) Prioritizing test coverage based on criticality
B) Grading test code quality
C) Measuring test execution speed
D) Tracking bug severity

Your answer (A, B, C, or D):"

**Wait for response. Validate:**

- Correct answer: A
- If correct: "✅ Correct! P0-P3 helps prioritize what to test based on risk and criticality."
- If incorrect: "❌ The P0-P3 matrix is about prioritizing test coverage (A). P0 = critical features like login, P3 = nice-to-have like tooltips."

**Store result**

**Question 3:**

"**Question 3 of 3:**

Which TEA engagement model is best for quick value in 30 minutes?

A) TEA Enterprise
B) TEA Lite
C) TEA Integrated
D) TEA Brownfield

Your answer (A, B, C, or D):"

**Wait for response. Validate:**

- Correct answer: B
- If correct: "✅ Correct! TEA Lite is the 30-minute quick start approach."
- If incorrect: "❌ TEA Lite (B) is the quick start approach. Enterprise and Integrated are more comprehensive."

**Store result**

**Calculate score:**

- Total points / 3 \* 100 = score (0-100)

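The scoring rule above can be sketched as a small helper (hypothetical names; the workflow only specifies the formula). Note that 2 of 3 correct is 66.67, so if "2 of 3" must pass, compare points to the cutoff rather than the rounded percentage.

```typescript
// Hypothetical sketch of the scoring rule above: each correct answer is 1 point,
// score = (points / total) * 100, rounded to a whole number.
function quizScore(points: number, total = 3): number {
  return Math.round((points / total) * 100);
}

// "2 of 3 correct" passes, even though round(2/3 * 100) = 67 is below 70,
// so the pass check compares points, not the rounded percentage.
function quizPassed(points: number, total = 3): boolean {
  return points >= Math.ceil((total * 2) / 3);
}
```
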
**Display results:**

"**Quiz Results:** {score}/100

{If score >= 70:}
✅ **Passed!** You've demonstrated understanding of Session 1 concepts.

{If score < 70:}
⚠️ **Below passing threshold.** Would you like to:

- **[R]** Review the content again
- **[C]** Continue anyway (your score will be recorded)

{Wait for response if < 70, handle R or C}"

### 7. Generate Session Notes

Create {sessionNotesFile} using {sessionNotesTemplate} with:

```markdown
---
session_id: session-01-quickstart
session_name: 'Session 1: Quick Start'
user: { user_name }
role: { role }
completed_date: { current_date }
score: { score }
duration: '30 min'
---

# Session 1: Quick Start - Session Notes

**Learner:** {user_name} ({role})
**Completed:** {current_date}
**Score:** {score}/100
**Duration:** 30 min

---

## Session Objectives

- Understand what TEA is and why it exists
- Learn TEA Lite quick start approach
- Conceptually understand the Automate workflow
- Explore TEA engagement models

---

## Key Concepts Covered

1. **TEA Framework:** 9 workflows + 35 knowledge fragments + quality standards
2. **Risk-Based Testing:** P0-P3 prioritization matrix
3. **Quality Standards:** Definition of Done (no flaky tests, no hard waits, stateless, self-cleaning)
4. **Engagement Models:** Lite, Solo, Integrated, Enterprise, Brownfield
5. **Automate Workflow:** Generates tests automatically with TEA best practices

---

## TEA Resources Referenced

### Documentation

- TEA Overview: https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/tea-overview/
- TEA Lite Quickstart: https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/tutorials/tea-lite-quickstart/
- Automate Workflow: https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-automate/

### Knowledge Fragments

- (None used in this session - knowledge fragments explored in Session 7)

### Online Resources

- TEA Website: https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/
- Knowledge Base: https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/reference/knowledge-base/

---

## Quiz Results

**Score:** {score}/100

### Questions & Answers

1. What is the primary purpose of TEA? → {user_answer} ({correct/incorrect})
2. What does the P0-P3 risk matrix help with? → {user_answer} ({correct/incorrect})
3. Which TEA engagement model is best for quick value? → {user_answer} ({correct/incorrect})

---

## Key Takeaways

- TEA makes testing expertise accessible at scale
- Start with TEA Lite (30 min) for immediate value
- Risk-based testing prioritizes what matters (P0 critical features first)
- Quality standards ensure reliable, maintainable tests
- 5 engagement models let you choose the right level of TEA adoption

---

## Next Recommended Session

{If experience_level == 'beginner':}
**Session 2: Core Concepts** - Learn testing fundamentals and TEA principles

{If experience_level == 'intermediate':}
**Session 2 or 3** - Review concepts or dive into architecture patterns

{If experience_level == 'experienced':}
**Session 7: Advanced Patterns** - Explore 35 knowledge fragments

---

**Generated by:** TEA Academy - Teach Me Testing Workflow
**Session Path:** Session 1 of 7
```

### 8. Update Progress File (Session Complete)

Load {progressFile} and update session-01-quickstart:

- Set `status: 'completed'`
- Set `completed_date: {current_date}`
- Set `score: {score}`
- Set `notes_artifact: '{sessionNotesFile}'`

Update progress metrics:

- If previous status for `session-01-quickstart` is not `completed`, increment `sessions_completed` by 1 (otherwise leave unchanged)
- Calculate `completion_percentage: (sessions_completed / 7) * 100`
- Set `next_recommended: 'session-02-concepts'`

Update stepsCompleted array:

- Append 'step-04-session-01' to stepsCompleted array
- Update lastStep: 'step-04-session-01'

Save the updated progress file.

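The guarded increment and percentage calculation above can be sketched as follows (field names mirror the bullets; the rounding choice is an assumption, since the workflow only gives the raw formula):

```typescript
// Hypothetical sketch of the progress-metrics update above: only count a
// session the first time it completes, then recompute the percentage.
interface ProgressMetrics {
  sessions_completed: number;
  completion_percentage: number;
  next_recommended: string;
}

function completeSession(
  metrics: ProgressMetrics,
  previousStatus: string,
  totalSessions = 7,
): ProgressMetrics {
  // Guard: re-running a completed session must not inflate the count.
  const firstCompletion = previousStatus !== 'completed';
  const sessions_completed = metrics.sessions_completed + (firstCompletion ? 1 : 0);
  return {
    sessions_completed,
    completion_percentage: Math.round((sessions_completed / totalSessions) * 100),
    next_recommended: 'session-02-concepts',
  };
}
```
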
### 9. Session Complete Message

Display:

"🎉 **Session 1 Complete!**

**Your Score:** {score}/100

**Session notes saved:** {sessionNotesFile}

You've completed your first step in TEA Academy! You now understand what TEA is, how TEA Lite works, and the different engagement models.

**Next:** You'll return to the session menu where you can choose Session 2 or explore any other session.

**Progress:** {completion_percentage}% complete ({sessions_completed} of 7 sessions)"

### 10. Present MENU OPTIONS

Display: **Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to session menu when user selects 'C'
- After executing any other menu item, return to this menu

#### Menu Handling Logic:

- IF A: Execute {advancedElicitationTask}, and when finished redisplay the menu
- IF P: Execute {partyModeWorkflow}, and when finished redisplay the menu
- IF C: The progress file was already updated in step 8; load {nextStepFile}, read the entire file, then execute it
- IF Any other: help the user, then [Redisplay Menu Options](#10-present-menu-options)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Teaching content presented clearly
- Examples adapted to user role
- Quiz administered with 3 questions
- Score calculated correctly (0-100)
- Session notes generated with all required sections
- Progress file updated (status: completed, score, notes_artifact)
- stepsCompleted array updated with 'step-04-session-01'
- Completion percentage recalculated
- Next recommended session set
- User routed back to session menu hub

### ❌ SYSTEM FAILURE:

- Skipping quiz
- Not adapting examples to user role
- Not generating session notes
- Not updating progress file
- Not updating stepsCompleted array
- Not calculating completion percentage
- Not routing back to hub
- Loading all docs at once (should be just-in-time)

**Master Rule:** Teach, quiz, generate notes, update progress, return to hub. This pattern repeats for all 7 sessions.

---
name: 'step-04-session-02'
description: 'Session 2: Core Concepts - Risk-based testing, DoD, testing philosophy (45 min)'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
sessionNotesTemplate: '../templates/session-notes-template.md'
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-02-notes.md'
nextStepFile: './step-03-session-menu.md'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 4: Session 2 - Core Concepts

## STEP GOAL:

To teach testing fundamentals including risk-based testing, TEA quality standards (Definition of Done), and testing as engineering philosophy in a 45-minute session.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals
- ✅ Together we build their testing knowledge progressively

### Step-Specific Rules:

- 🎯 Focus ONLY on Session 2 content (Core Concepts)
- 🚫 FORBIDDEN to skip ahead to other sessions
- 💬 Approach: Teach fundamentals, provide examples, quiz understanding
- 🚪 Teaching is mostly autonomous, quiz is collaborative
- 📚 Reference TEA docs and knowledge fragments

## EXECUTION PROTOCOLS:

- 🎯 Load TEA docs just-in-time
- 💾 Generate session notes after completion
- 📖 Update progress file with session completion and score
- 🚫 FORBIDDEN to skip quiz - validates understanding
- ⏭️ Always return to session menu hub after completion

## CONTEXT BOUNDARIES:

- Available context: Progress file with user role/experience
- Focus: Session 2 - Testing fundamentals and TEA principles
- Limits: Only Session 2 content
- Dependencies: Progress file exists with assessment data

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Session Welcome

Display:

"🧪 **Session 2: Core Concepts** (45 minutes)

**Objective:** Understand the WHY behind TEA principles

**What you'll learn:**

- Testing as Engineering philosophy
- Risk-based testing with P0-P3 matrix
- TEA Definition of Done (quality standards)
- Probability × Impact risk scoring

Let's dive into the fundamentals!"

### 2. Update Progress File (Session Started)

Load {progressFile} and update session-02-concepts:

- Set `status: 'in-progress'`
- Set `started_date: {current_date}`

Save the updated progress file.

### 3. Teaching: Testing as Engineering

Present this content:

"### 🏗️ Testing as Engineering

**Core Philosophy:** Testing is not an afterthought - it's engineering.

**What this means:**

- Tests are **designed** before they're written (like architecture before coding)
- Tests have **quality standards** (not just "does it run?")
- Tests are **maintained** like production code
- Testing decisions are **risk-based** (prioritize what matters)

{If role == QA:}
**For QA Engineers:** You're not just finding bugs - you're engineering test systems that scale. Design before you write, maintain like production code.

{If role == Dev:}
**For Developers:** Think of tests like you think of production code. Design patterns, refactoring, DRY principles - they all apply to tests.

{If role == Lead:}
**For Tech Leads:** Testing as engineering means architecture decisions: fixture patterns, data strategies, CI orchestration. Not just "write more tests."

{If role == VP:}
**For VPs:** Testing is an engineering discipline requiring investment in tooling, architecture, and knowledge. Not a checklist item.

**Key Principle:** If you wouldn't accept sloppy production code, don't accept sloppy test code.

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/testing-as-engineering/>"

### 4. Teaching: Risk-Based Testing

Present this content:

"### ⚖️ Risk-Based Testing: The P0-P3 Matrix

**Problem:** You can't test everything. How do you prioritize?

**Solution:** Risk = Probability × Impact

**The P0-P3 Matrix:**

**P0 - Critical (Must Test)**

- Login/Authentication
- Payment processing
- Data loss scenarios
- Security vulnerabilities
- **Impact:** Business fails if broken
- **Probability:** High usage, high complexity

**P1 - High (Should Test)**

- Core user workflows
- Key features
- Data integrity
- **Impact:** Major user pain
- **Probability:** Frequent usage

**P2 - Medium (Nice to Test)**

- Secondary features
- Edge cases with workarounds
- **Impact:** Inconvenience
- **Probability:** Moderate usage

**P3 - Low (Optional)**

- Tooltips, help text
- Nice-to-have features
- Aesthetic issues
- **Impact:** Minimal
- **Probability:** Low usage

{If role == QA:}
**For QA Engineers:** Use P0-P3 to defend test coverage decisions. "We have 100% P0 coverage, 80% P1" is better than "we have 50% coverage overall."

{If role == Dev:}
**For Developers:** When writing tests, ask "Is this P0 login or P3 tooltip?" Focus your time accordingly.

{If role == Lead:}
**For Tech Leads:** P0-P3 helps allocate test automation budget. Mandate P0/P1 automation; P2/P3 is a cost-benefit analysis.

{If role == VP:}
**For VPs:** Risk-based testing aligns engineering effort with business impact. Metrics that matter: P0 coverage, not lines of code.

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/risk-based-testing/>

**Knowledge Fragment:** probability-impact.md defines the scoring criteria"

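The Probability × Impact idea can be sketched numerically. The thresholds below are illustrative assumptions for teaching; probability-impact.md defines the real scoring criteria.

```typescript
// Illustrative Probability × Impact scoring; the cutoffs are assumed, not TEA-official.
type Level = 1 | 2 | 3; // 1 = low, 2 = medium, 3 = high

function priority(probability: Level, impact: Level): 'P0' | 'P1' | 'P2' | 'P3' {
  const score = probability * impact; // risk score in the range 1..9
  if (score >= 9) return 'P0'; // e.g. login: high usage, business fails if broken
  if (score >= 6) return 'P1'; // core workflows
  if (score >= 3) return 'P2'; // secondary features
  return 'P3';                 // tooltips, aesthetics
}
```

For example, login (high probability, high impact) lands in P0, while a rarely-seen tooltip (low, low) lands in P3.
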
### 5. Teaching: Definition of Done (Quality Standards)

Present this content:

"### ✅ TEA Definition of Done: Quality Standards

**The Problem:** "The tests pass" isn't enough. What about quality?

**TEA Definition of Done ensures:**

**1. No Flaky Tests**

- Tests pass/fail deterministically
- No "run it again, it'll work" tests
- Use explicit waits, not hard sleeps
- Handle async properly

**2. No Hard Waits/Sleeps**

- Use `waitFor` conditions, not `sleep(5000)`
- React to state changes, don't guess timing
- Tests complete when ready, not after arbitrary delays

**3. Stateless & Parallelizable**

- Tests run independently, in any order
- No shared state between tests
- Can run in parallel (fast feedback)
- Use cron jobs/semaphores only when unavoidable

**4. No Order Dependency**

- Every `it`/`describe`/`context` block works in isolation
- Supports `.only` execution for debugging
- Tests don't depend on previous tests

**5. Self-Cleaning Tests**

- Test sets up its own data
- Test automatically deletes/deactivates entities created
- No manual cleanup required

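A minimal sketch of the self-cleaning idea (generic TypeScript, assumed names, not a specific TEA or Playwright API): register a teardown for every entity the test creates, then dispose of them all afterwards.

```typescript
// Generic sketch (not a TEA API): track every entity a test creates and
// tear them all down afterwards, in reverse creation order.
class SelfCleaning {
  private teardowns: Array<() => void> = [];

  track<T>(entity: T, destroy: (e: T) => void): T {
    this.teardowns.push(() => destroy(entity));
    return entity; // use the entity in the test as usual
  }

  cleanup(): void {
    // Reverse order, so dependents are removed before their dependencies.
    while (this.teardowns.length) this.teardowns.pop()!();
  }
}
```

In a real suite this would typically live in a fixture so `cleanup()` runs automatically in teardown.
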
**6. Tests Live Near Source Code**

- Co-locate test files with code they validate
- `component.tsx` → `component.spec.tsx` in same folder

**7. Low Maintenance**

- Minimize manual upkeep
- Avoid brittle selectors
- Use APIs to set up state, not UI clicks
- Don't repeat UI actions

{If role == QA:}
**For QA Engineers:** These standards prevent the "test maintenance nightmare." Upfront investment in quality = long-term stability.

{If role == Dev:}
**For Developers:** Write tests you'd want to inherit. No flaky tests, no "run twice" culture, no mystery failures.

{If role == Lead:}
**For Tech Leads:** Enforce these standards in code review. Flaky test PRs don't merge. Period.

{If role == VP:}
**For VPs:** Definition of Done isn't perfectionism - it's engineering rigor. Flaky tests erode trust in CI/CD.

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/test-quality-standards/>

**Knowledge Fragment:** test-quality.md has execution limits and criteria"

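The "no hard waits" rule above can be sketched as a condition-based wait (a generic helper with assumed names; Playwright and similar frameworks ship their own built-in versions):

```typescript
// Generic sketch of waiting on a condition instead of a fixed sleep(5000):
// poll frequently, resolve as soon as the condition holds, fail fast on timeout.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 2000,
  pollMs = 25,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) {
      throw new Error('waitFor: condition not met in time');
    }
    // Short poll between checks, not a blind multi-second sleep.
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```

The test finishes as soon as the app is ready, instead of always paying the full sleep duration, and fails loudly when the condition never becomes true.
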
### 6. Teaching: Key Takeaways

Present this content:

"### 🎯 Session 2 Key Takeaways

**1. Testing is Engineering**

- Design before write
- Maintain like production code
- Apply engineering principles

**2. Risk-Based Testing**

- P0 = Critical (login, payment)
- P1 = High (core workflows)
- P2 = Medium (secondary features)
- P3 = Low (tooltips, nice-to-have)
- Prioritize based on Probability × Impact

**3. Definition of Done**

- No flaky tests (deterministic)
- No hard waits (use waitFor)
- Stateless & parallelizable
- Self-cleaning tests
- Low maintenance

**4. Quality Standards = Engineering Rigor**

- Not perfectionism, but reliability
- Prevents test maintenance nightmares
- Builds trust in CI/CD

**You now understand the WHY behind TEA principles!**"

### 7. Quiz: Validate Understanding

Display:

"### ✅ Knowledge Check

3 questions to validate your understanding. Passing: ≥70% (2 of 3 correct)."

**Question 1:**

"**Question 1 of 3:**

In the P0-P3 matrix, what priority level should login/authentication have?

A) P3 - Low priority
B) P2 - Medium priority
C) P1 - High priority
D) P0 - Critical priority

Your answer (A, B, C, or D):"

**Wait for response. Validate:**

- Correct answer: D
- If correct: "✅ Correct! Login/authentication is P0 - critical. Business fails if broken."
- If incorrect: "❌ Login/authentication is P0 - Critical (D). It's high usage, high impact, and business-critical."

**Store result**

**Question 2:**

"**Question 2 of 3:**

What is the problem with using `sleep(5000)` instead of `waitFor` conditions?

A) It makes tests slower
B) It's a hard wait that doesn't react to state changes (violates DoD)
C) It uses too much memory
D) It's not supported in modern frameworks

Your answer (A, B, C, or D):"

**Wait for response. Validate:**

- Correct answer: B
- If correct: "✅ Correct! Hard waits don't react to state - they guess timing. Use `waitFor` to react to conditions."
- If incorrect: "❌ The issue is that hard waits don't react to state changes (B). They guess timing instead of waiting for conditions. This violates the TEA Definition of Done."

**Store result**

**Question 3:**

"**Question 3 of 3:**

What does "self-cleaning tests" mean in the TEA Definition of Done?

A) Tests automatically fix their own bugs
B) Tests delete/deactivate entities they create during testing
C) Tests run faster by cleaning up code
D) Tests remove old test files

Your answer (A, B, C, or D):"

**Wait for response. Validate:**

- Correct answer: B
- If correct: "✅ Correct! Self-cleaning tests clean up their data - no manual cleanup needed."
- If incorrect: "❌ Self-cleaning means tests delete/deactivate entities they created (B). No manual cleanup required."

**Store result**

**Calculate score:**

- Total points / 3 \* 100 = score (0-100)

**Display results:**

"**Quiz Results:** {score}/100

{If score >= 70:}
✅ **Passed!** You understand core testing concepts.

{If score < 70:}
⚠️ **Below passing.** Would you like to:

- **[R]** Review the content again
- **[C]** Continue anyway (score will be recorded)

{Wait for response if < 70, handle R or C}"

### 8. Generate Session Notes

Create {sessionNotesFile} using {sessionNotesTemplate} with session-02 content including:

- Teaching topics covered
- TEA docs referenced
- Knowledge fragments referenced (test-quality.md, probability-impact.md)
- Quiz results
- Key takeaways
- Next recommended session based on experience level

### 9. Update Progress File (Session Complete)

Load {progressFile} and update session-02-concepts:

- Set `status: 'completed'`
- Set `completed_date: {current_date}`
- Set `score: {score}`
- Set `notes_artifact: '{sessionNotesFile}'`

Update progress metrics:

- If previous status for `session-02-concepts` is not `completed`, increment `sessions_completed` by 1 (otherwise leave unchanged)
- Calculate `completion_percentage`
- Set `next_recommended: 'session-03-architecture'`

Update stepsCompleted array:

- Append 'step-04-session-02'
- Update lastStep

Save the updated progress file.

### 10. Session Complete Message

Display:

"🎉 **Session 2 Complete!**

**Your Score:** {score}/100

**Session notes saved:** {sessionNotesFile}

You now understand:

- Testing as engineering philosophy
- Risk-based testing (P0-P3 matrix)
- TEA Definition of Done
- Why quality standards matter

**Next:** Session 3 (Architecture & Patterns) or explore any session from the menu.

**Progress:** {completion_percentage}% complete ({sessions_completed} of 7 sessions)"

### 11. Present MENU OPTIONS

Display: **Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

#### Menu Handling Logic:

- IF A: Execute {advancedElicitationTask}, and when finished redisplay the menu
- IF P: Execute {partyModeWorkflow}, and when finished redisplay the menu
- IF C: The progress file was already updated in step 9; load {nextStepFile}, read the entire file, then execute it
- IF Any other: help the user, then redisplay the menu

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Teaching content presented (Testing as Engineering, Risk-based, DoD)
- Examples adapted to user role
- Quiz administered (3 questions)
- Score calculated correctly
- Session notes generated
- Progress file updated
- stepsCompleted array updated
- User routed back to hub

### ❌ SYSTEM FAILURE:

- Skipping quiz
- Not adapting to role
- Not generating notes
- Not updating progress
- Not routing to hub

**Master Rule:** Teach, quiz, generate notes, update progress, return to hub.

---
name: 'step-04-session-03'
description: 'Session 3: Architecture & Patterns - Fixtures, network patterns, framework setup (60 min)'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
sessionNotesTemplate: '../templates/session-notes-template.md'
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-03-notes.md'
nextStepFile: './step-03-session-menu.md'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 4: Session 3 - Architecture & Patterns

## STEP GOAL:

To teach TEA architecture patterns including fixture composition, network-first patterns, and step-file architecture in a 60-minute session.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning, not lectures
- ✅ You bring expertise in TEA methodology and teaching pedagogy
- ✅ Learner brings their role context, experience, and learning goals

### Step-Specific Rules:

- 🎯 Focus ONLY on Session 3 content (Architecture & Patterns)
- 🚫 FORBIDDEN to skip ahead to other sessions
- 💬 Approach: Teach patterns, provide examples, quiz understanding

## EXECUTION PROTOCOLS:

- 🎯 Load TEA docs just-in-time
- 💾 Generate session notes after completion
- 📖 Update progress file with session completion and score
- ⏭️ Return to session menu hub after completion

## CONTEXT BOUNDARIES:

- Available context: Progress file with user role/experience
- Focus: Session 3 - Architecture patterns
- Dependencies: Progress file exists

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Session Welcome

"🧪 **Session 3: Architecture & Patterns** (60 minutes)

**Objective:** Understand TEA patterns and architecture

**What you'll learn:**

- Fixture architecture and composition
- Network-first patterns
- Data factories and test setup
- Step-file architecture (the pattern this workflow uses!)
|
||||
|
||||
Let's explore TEA architecture!"
|
||||
|
||||
### 2. Update Progress (Started)
|
||||
|
||||
Load {progressFile}, update session-03-architecture:
|
||||
|
||||
- `status: 'in-progress'`
|
||||
- `started_date: {current_date}`
|
||||
|
||||
### 3. Teaching: Fixture Architecture
|
||||
|
||||
"### 🏗️ Fixture Architecture
|
||||
|
||||
**The Problem:** Tests have setup/teardown boilerplate everywhere.
|
||||
|
||||
**TEA Solution:** Composable fixtures
|
||||
|
||||
**Fixture Composition Pattern:**
|
||||
|
||||
```typescript
|
||||
// Base fixtures
|
||||
const baseFixtures = {
|
||||
page: async ({}, use) => {
|
||||
/* ... */
|
||||
},
|
||||
};
|
||||
|
||||
// Composed fixtures
|
||||
const authFixtures = {
|
||||
authenticatedPage: async ({ page }, use) => {
|
||||
await page.goto('/login');
|
||||
await login(page);
|
||||
await use(page);
|
||||
},
|
||||
};
|
||||
|
||||
// Merge and use
|
||||
test.use(mergeTests(baseFixtures, authFixtures));
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
|
||||
- DRY: Define once, use everywhere
|
||||
- Composable: Build complex fixtures from simple ones
|
||||
- Automatic cleanup: Fixtures handle teardown
|
||||
- Type-safe: Full TypeScript support
|
||||
|
||||
{Role-adapted example based on user role}
|
||||
|
||||
**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/fixture-architecture/>
|
||||
**Knowledge Fragment:** fixture-architecture.md, fixtures-composition.md"
|
||||
|
||||
### 4. Teaching: Network-First Patterns
|
||||
|
||||
"### 🌐 Network-First Patterns
|
||||
|
||||
**The Problem:** Flaky tests due to network timing issues.
|
||||
|
||||
**TEA Solution:** Intercept and control network
|
||||
|
||||
**Network-First Pattern:**
|
||||
|
||||
```typescript
|
||||
// BEFORE the action, set up network interception
|
||||
await page.route('/api/users', (route) => {
|
||||
route.fulfill({ json: mockUsers });
|
||||
});
|
||||
|
||||
// THEN trigger the action
|
||||
await page.click('Load Users');
|
||||
|
||||
// Network is already mocked - no race condition
|
||||
```
|
||||
|
||||
**Why Network-First:**
|
||||
|
||||
- Prevents race conditions
|
||||
- Deterministic test behavior
|
||||
- Fast (no real API calls)
|
||||
- Control error scenarios
|
||||
|
||||
{Role-adapted example}
|
||||
|
||||
**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/network-first-patterns/>
|
||||
**Knowledge Fragment:** network-first.md, intercept-network-call.md"
|
||||
|
||||
### 5. Teaching: Data Factories
|
||||
|
||||
"### 🏭 Data Factories
|
||||
|
||||
**The Problem:** Hard-coded test data everywhere.
|
||||
|
||||
**TEA Solution:** Factory functions
|
||||
|
||||
**Factory Pattern:**
|
||||
|
||||
```typescript
|
||||
function createUser(overrides = {}) {
|
||||
return {
|
||||
id: faker.uuid(),
|
||||
email: faker.email(),
|
||||
role: 'user',
|
||||
...overrides,
|
||||
};
|
||||
}
|
||||
|
||||
// Use in tests
|
||||
const admin = createUser({ role: 'admin' });
|
||||
const user = createUser(); // defaults
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
|
||||
- No hardcoded data
|
||||
- Easy to override fields
|
||||
- Consistent test data
|
||||
- Self-documenting
|
||||
|
||||
{Role-adapted example}
|
||||
|
||||
**Knowledge Fragment:** data-factories.md"
|
||||
|
||||
### 6. Teaching: Step-File Architecture
|
||||
|
||||
"### 📋 Step-File Architecture
|
||||
|
||||
**This workflow uses step-file architecture!**
|
||||
|
||||
**Pattern:**
|
||||
|
||||
- Micro-file design: Each step is self-contained
|
||||
- Just-in-time loading: Only current step in memory
|
||||
- Sequential enforcement: No skipping steps
|
||||
- State tracking: Progress saved between steps
|
||||
|
||||
**Why:**
|
||||
|
||||
- Disciplined execution
|
||||
- Clear progression
|
||||
- Resumable (continuable workflows)
|
||||
- Maintainable (one file per step)
|
||||
|
||||
**You're experiencing this right now:** Each session is a step file!
|
||||
|
||||
**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/step-file-architecture/>"
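The loading loop behind this pattern can be sketched as a tiny runner. This is a hypothetical illustration, not TEA's actual implementation; `StepFile`, `StepState`, and the skip-if-completed rule are assumed shapes inferred from the description above:

```typescript
// Hypothetical types - illustration only, not TEA's real data model
interface StepState {
  stepsCompleted: string[];
}

type StepFile = { name: string; run: () => void };

// Sequential enforcement with state tracking: steps run in declared order,
// already-completed steps are skipped, which makes the workflow resumable.
function runSteps(steps: StepFile[], state: StepState): StepState {
  for (const step of steps) {
    if (state.stepsCompleted.includes(step.name)) continue; // resume support
    step.run(); // just-in-time: only the current step is "loaded"
    state.stepsCompleted.push(step.name); // progress saved between steps
  }
  return state;
}
```

Resuming a half-finished workflow is then just passing in the saved `stepsCompleted` array.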
### 7. Quiz (3 questions)

"### ✅ Knowledge Check"

**Q1:** "What is the main benefit of fixture composition?
A) Faster test execution
B) DRY - define once, reuse everywhere
C) Better error messages
D) Automatic screenshot capture"

Correct: B

**Q2:** "Why is 'network-first' better than mocking after the action?
A) It's faster
B) It prevents race conditions
C) It uses less memory
D) It's easier to write"

Correct: B

**Q3:** "What pattern does this teaching workflow use?
A) Page Object Model
B) Behavior Driven Development
C) Step-File Architecture
D) Test Pyramid"

Correct: C

Calculate score, handle <70% retry option.
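The scoring rule used throughout these sessions can be sketched as follows (function names are illustrative, not part of the workflow spec; only the percentage scale and the 70% retry threshold come from the text above):

```typescript
// Score = percentage of correct answers, rounded to a whole number
function quizScore(correct: number, total: number): number {
  return Math.round((correct / total) * 100);
}

// Below the pass mark (70 per the workflow), offer the learner a retry
function needsRetry(score: number, passMark: number = 70): boolean {
  return score < passMark;
}
```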
### 8. Generate Session Notes

Create {sessionNotesFile} with:

- Session 3 content
- Topics: Fixtures, network-first, data factories, step-file architecture
- TEA docs referenced
- Knowledge fragments: fixture-architecture.md, network-first.md, data-factories.md
- Quiz results
- Next recommended: session-04-test-design

### 9. Update Progress (Completed)

Update session-03-architecture:

- `status: 'completed'`
- `completed_date: {current_date}`
- `score: {score}`
- `notes_artifact`

Increment sessions_completed, update completion_percentage.
Append 'step-04-session-03' to stepsCompleted.

### 10. Complete Message

"🎉 **Session 3 Complete!** Score: {score}/100
You understand TEA architecture patterns!
Progress: {completion_percentage}%"

### 11. Menu

[A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

Return to {nextStepFile}.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Architecture patterns taught
- Quiz administered
- Notes generated
- Progress updated
- Returned to hub

### ❌ SYSTEM FAILURE:

- Skipping patterns
- Not generating notes
- Not updating progress

**Master Rule:** Teach patterns, quiz, update, return to hub.
@@ -0,0 +1,234 @@
---
name: 'step-04-session-04'
description: 'Session 4: Test Design - Risk assessment, test design workflow (60 min)'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
sessionNotesTemplate: '../templates/session-notes-template.md'
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-04-notes.md'
nextStepFile: './step-03-session-menu.md'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 4: Session 4 - Test Design

## STEP GOAL:

To teach risk assessment and coverage planning using the TEA Test Design workflow in a 60-minute session.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak output in your agent communication style, using `{communication_language}`

### Role Reinforcement:

- ✅ You are a Master Test Architect and Teaching Guide
- ✅ We engage in collaborative learning
- ✅ You bring expertise in TEA methodology

### Step-Specific Rules:

- 🎯 Focus on Session 4 (Test Design)
- 💬 Teach workflow, provide examples

## EXECUTION PROTOCOLS:

- 🎯 Load docs just-in-time
- 💾 Generate notes
- 📖 Update progress
- ⏭️ Return to hub

## MANDATORY SEQUENCE

### 1. Welcome

"🧪 **Session 4: Test Design** (60 minutes)

**Objective:** Learn risk assessment and coverage planning

**What you'll learn:**

- Test Design workflow
- Risk/testability assessment
- Coverage planning with test levels
- Test priorities matrix

Let's plan some tests!"

### 2. Update Progress (Started)

Set session-04-test-design `status: 'in-progress'`, `started_date`.

### 3. Teaching: Test Design Workflow

"### 📐 Test Design Workflow

**Purpose:** Plan tests BEFORE writing them (design before code).

**Workflow Steps:**

1. **Load Context:** Understand feature/system
2. **Risk/Testability Assessment:** Score probability × impact
3. **Coverage Planning:** Determine what to test and how
4. **Generate Test Design Document:** Blueprint for implementation

**When to Use:**

- New features (epic/system level)
- Major refactors
- Quality gate before development

{Role-adapted example}

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-test-design/>"

### 4. Teaching: Risk/Testability Assessment

"### ⚖️ Risk & Testability Assessment

**Risk Scoring:**

- **Probability:** How likely is this to fail? (Low/Medium/High)
- **Impact:** What happens if it fails? (Low/Medium/High)
- **Risk = Probability × Impact**

**Example: Login Feature**

- Probability: High (complex, authentication)
- Impact: High (business critical)
- **Risk: HIGH** → P0 priority

**Example: Tooltip Text**

- Probability: Low (simple rendering)
- Impact: Low (aesthetic only)
- **Risk: LOW** → P3 priority

**Testability:**

- Can we test this easily?
- Are there dependencies blocking us?
- Do we need test infrastructure first?

{Role-adapted example}

**Knowledge Fragments:** probability-impact.md, test-priorities-matrix.md"
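The probability × impact rule can be sketched as a small lookup. This is an assumption-laden illustration: the numeric weights (Low=1, Medium=2, High=3) and the exact P0-P3 cut-offs are not specified by the text above, only the two worked examples are:

```typescript
type Level = 'Low' | 'Medium' | 'High';

// Assumed weights: Low=1, Medium=2, High=3
const weight: Record<Level, number> = { Low: 1, Medium: 2, High: 3 };

// Risk = Probability × Impact (1..9), mapped to assumed priority bands
function riskPriority(probability: Level, impact: Level): 'P0' | 'P1' | 'P2' | 'P3' {
  const risk = weight[probability] * weight[impact];
  if (risk >= 9) return 'P0'; // High × High, e.g. the login feature above
  if (risk >= 6) return 'P1';
  if (risk >= 3) return 'P2';
  return 'P3'; // Low × Low, e.g. the tooltip example above
}
```

Both worked examples above fall out of this sketch: login (High × High) lands in P0, the tooltip (Low × Low) in P3.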
### 5. Teaching: Coverage Planning

"### 📋 Coverage Planning

**Test Levels Framework:**

**Unit Tests:** Isolated functions/classes

- Fast, focused
- No external dependencies
- Example: Pure functions, business logic

**Integration Tests:** Multiple components together

- Database, API interactions
- Example: Service layer with DB

**E2E Tests:** Full user workflows

- Browser automation
- Example: Complete checkout flow

**Coverage Strategy:**

- **P0 features:** Unit + Integration + E2E (high confidence)
- **P1 features:** Integration + E2E (good coverage)
- **P2 features:** E2E or Integration (basic coverage)
- **P3 features:** Manual or skip (low priority)

{Role-adapted example}

**Knowledge Fragment:** test-levels-framework.md

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/explanation/test-quality-standards/>"

### 6. Teaching: Test Priorities Matrix

"### 📊 Test Priorities Matrix

**P0-P3 Coverage Targets:**

| Priority | Unit | Integration | E2E | Manual |
| -------- | ---- | ----------- | --- | ------ |
| P0       | ✅   | ✅          | ✅  | ✅     |
| P1       | ✅   | ✅          | ✅  | -      |
| P2       | -    | ✅          | -   | ✅     |
| P3       | -    | -           | -   | ✅     |

**Goal:** 100% P0, 80% P1, 50% P2, 20% P3

{Role-adapted example}

**Knowledge Fragment:** test-priorities-matrix.md"
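The matrix maps directly to a small lookup; a sketch, transcribed from the P0-P3 table above (the function and type names are illustrative):

```typescript
type TestLevel = 'unit' | 'integration' | 'e2e' | 'manual';

// Coverage targets per priority, transcribed row by row from the matrix
const coverageByPriority: Record<string, TestLevel[]> = {
  P0: ['unit', 'integration', 'e2e', 'manual'],
  P1: ['unit', 'integration', 'e2e'],
  P2: ['integration', 'manual'],
  P3: ['manual'],
};

function requiredLevels(priority: 'P0' | 'P1' | 'P2' | 'P3'): TestLevel[] {
  return coverageByPriority[priority];
}
```

A test-design tool could use such a lookup to flag features whose planned tests fall short of their priority's target levels.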
### 7. Quiz (3 questions)

**Q1:** "What does the Test Design workflow help you do?
A) Write tests faster
B) Plan tests BEFORE writing them
C) Run tests in parallel
D) Debug test failures"

Correct: B

**Q2:** "How do you calculate risk?
A) Probability + Impact
B) Probability × Impact
C) Probability - Impact
D) Probability / Impact"

Correct: B

**Q3:** "For P0 features, which test levels should you use?
A) Only E2E tests
B) Only unit tests
C) Unit + Integration + E2E (comprehensive)
D) Manual testing only"

Correct: C

Calculate score, handle <70% retry.

### 8. Generate Session Notes

Create {sessionNotesFile} with Session 4 content, docs, fragments, quiz.

### 9. Update Progress (Completed)

Update session-04-test-design: completed, score, notes.
Increment sessions_completed, update percentage.
Append 'step-04-session-04' to stepsCompleted.
Set next_recommended: 'session-05-atdd-automate'.

### 10. Complete Message

"🎉 **Session 4 Complete!** Score: {score}/100
You can now plan tests using risk assessment!
Progress: {completion_percentage}%"

### 11. Menu

[A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

Return to {nextStepFile}.

---

## 🚨 SUCCESS METRICS

✅ Test Design workflow taught, quiz passed, notes generated, progress updated, returned to hub.

**Master Rule:** Teach planning, quiz, update, return.
@@ -0,0 +1,234 @@
---
name: 'step-04-session-05'
description: 'Session 5: ATDD & Automate - TDD red-green approach, generate tests (60 min)'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
sessionNotesTemplate: '../templates/session-notes-template.md'
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-05-notes.md'
nextStepFile: './step-03-session-menu.md'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 4: Session 5 - ATDD & Automate

## STEP GOAL:

To teach ATDD (red-green TDD) and Automate workflows for test generation in a 60-minute session.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read complete step file before action
- ✅ YOU MUST ALWAYS speak output in {communication_language}

### Role Reinforcement:

- ✅ Master Test Architect and Teaching Guide
- ✅ Collaborative learning

### Step-Specific Rules:

- 🎯 Focus on Session 5 (ATDD & Automate)
- 💬 Teach TDD approach

## EXECUTION PROTOCOLS:

- 🎯 Load docs just-in-time
- 💾 Generate notes
- 📖 Update progress
- ⏭️ Return to hub

## MANDATORY SEQUENCE

### 1. Welcome

"🧪 **Session 5: ATDD & Automate** (60 minutes)

**Objective:** Generate tests with TDD red-green approach

**What you'll learn:**

- ATDD workflow (failing tests first)
- Automate workflow (expand coverage)
- Component TDD
- API testing patterns

Let's generate some tests!"

### 2. Update Progress (Started)

Load {progressFile} and update session-05-atdd-automate:

- Set `status: 'in-progress'`
- Set `started_date: {current_date}` if not already set

Save the updated progress file.

### 3. Teaching: ATDD Workflow

"### 🔴 ATDD: Acceptance Test-Driven Development

**TDD Red Phase:** Write failing tests FIRST

**ATDD Workflow:**

1. **Preflight:** Check prerequisites
2. **Test Strategy:** Define what to test
3. **Generate FAILING Tests:** Red phase (tests fail because code doesn't exist yet)
4. **Implement Code:** Green phase (make tests pass)

**Why Failing Tests First:**

- Validates tests actually test something
- Prevents false positives
- Drives implementation (tests define behavior)

{Role-adapted example}

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-atdd/>"

### 4. Teaching: Automate Workflow

"### 🤖 Automate: Expand Test Coverage

**Purpose:** Generate tests for existing features

**Automate Workflow:**

1. **Identify Targets:** What needs testing
2. **Generate Tests:** API and/or E2E tests
3. **Review & Run:** Tests should pass (code already exists)

**Difference from ATDD:**

- ATDD: Tests first, then code (red → green)
- Automate: Code first, then tests (coverage expansion)

{Role-adapted example}

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-automate/>"

### 5. Teaching: Component TDD

"### 🔄 Component TDD Red-Green Loop

**Pattern:**

1. **Red:** Write failing test
2. **Green:** Minimal code to pass
3. **Refactor:** Improve code, tests stay green
4. **Repeat:** Next requirement

**Example:**

```typescript
// RED: Test fails (function doesn't exist)
test('calculates total price', () => {
  expect(calculateTotal([10, 20])).toBe(30);
});

// GREEN: Minimal implementation
function calculateTotal(prices: number[]): number {
  return prices.reduce((a, b) => a + b, 0);
}

// REFACTOR: Add validation, tests still green
```

{Role-adapted example}

**Knowledge Fragment:** component-tdd.md"

### 6. Teaching: API Testing Patterns

"### 🌐 API Testing Patterns

**Pure API Testing (no browser):**

- Fast execution
- Test business logic
- Validate responses
- Schema validation

**Pattern:**

```typescript
test('GET /users returns user list', async ({ request }) => {
  const response = await request.get('/api/users');
  expect(response.ok()).toBeTruthy();
  const users = await response.json();
  expect(users).toHaveLength(10);
});
```

{Role-adapted example}

**Knowledge Fragment:** api-testing-patterns.md, api-request.md"

### 7. Quiz (3 questions)

**Q1:** "What is the 'red' phase in TDD?
A) Tests fail (code doesn't exist yet)
B) Tests pass
C) Code is refactored
D) Tests are deleted"

Correct: A

**Q2:** "What's the difference between ATDD and Automate workflows?
A) ATDD generates E2E, Automate generates API tests
B) ATDD writes tests first (red phase), Automate tests existing code
C) ATDD is faster than Automate
D) They're the same workflow"

Correct: B

**Q3:** "Why use pure API tests without a browser?
A) They look prettier
B) They're easier to debug
C) They're faster and test business logic directly
D) They're required by TEA"

Correct: C

Calculate score, handle <70% retry.

### 8. Generate Session Notes

Create {sessionNotesFile} with Session 5 content:

- ATDD workflow (red-green TDD)
- Automate workflow (coverage expansion)
- Component TDD
- API testing patterns
- Docs: ATDD, Automate
- Fragments: component-tdd.md, api-testing-patterns.md, api-request.md
- Quiz results

### 9. Update Progress (Completed)

Update session-05-atdd-automate: completed, score, notes.
Increment sessions_completed, update percentage.
Append 'step-04-session-05' to stepsCompleted.
Set next_recommended: 'session-06-quality-trace'.

### 10. Complete Message

"🎉 **Session 5 Complete!** Score: {score}/100
You can now generate tests with ATDD and Automate!
Progress: {completion_percentage}%"

### 11. Menu

[A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

Return to {nextStepFile}.

---

## 🚨 SUCCESS METRICS

✅ ATDD and Automate taught, TDD explained, quiz passed, notes generated, progress updated, returned to hub.
@@ -0,0 +1,209 @@
---
name: 'step-04-session-06'
description: 'Session 6: Quality & Trace - Test review, traceability, quality metrics (45 min)'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
sessionNotesTemplate: '../templates/session-notes-template.md'
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-06-notes.md'
nextStepFile: './step-03-session-menu.md'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Step 4: Session 6 - Quality & Trace

## STEP GOAL:

To teach test quality auditing and requirements traceability using Test Review and Trace workflows in a 45-minute session.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate _unsolicited_ content (session flow content is allowed once the session begins)
- 📖 CRITICAL: Read complete step file before action
- ✅ SPEAK OUTPUT in {communication_language}

### Role Reinforcement:

- ✅ Master Test Architect and Teaching Guide
- ✅ Collaborative learning

### Step-Specific Rules:

- 🎯 Focus on Session 6 (Quality & Trace)
- 💬 Teach quality metrics

## EXECUTION PROTOCOLS:

- 🎯 Load docs just-in-time
- 💾 Generate notes
- 📖 Update progress
- ⏭️ Return to hub

## MANDATORY SEQUENCE

### 1. Welcome

"🧪 **Session 6: Quality & Trace** (45 minutes)

**Objective:** Audit quality and ensure traceability

**What you'll learn:**

- Test Review workflow (quality scoring)
- 5 dimensions of test quality
- Trace workflow (requirements traceability)
- Release gate decisions

Let's ensure quality!"

### 2. Update Progress (Started)

Set session-06-quality-trace `status: 'in-progress'`.

### 3. Teaching: Test Review Workflow

"### 🔍 Test Review Workflow

**Purpose:** Audit test quality with 0-100 scoring

**5 Dimensions of Quality:**

**1. Determinism (0-100)**

- Tests pass/fail consistently
- No flakiness, no randomness
- Proper async handling

**2. Isolation (0-100)**

- Tests run independently
- No shared state
- Parallelizable

**3. Assertions (0-100)**

- Correct checks for expected behavior
- Meaningful assertions (not just presence)
- Fails for the right reasons

**4. Structure (0-100)**

- Readable test code
- Clear organization and naming
- Minimal duplication

**5. Performance (0-100)**

- Test execution speed
- Resource usage
- Parallel efficiency

**Overall Score = Average of 5 dimensions**

{Role-adapted example}

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-test-review/>"
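The overall-score rule is a plain average of the five dimension scores; a minimal sketch (the `ReviewScores` shape is illustrative, the dimension names follow the list above):

```typescript
interface ReviewScores {
  determinism: number;
  isolation: number;
  assertions: number;
  structure: number;
  performance: number;
}

// Overall score = average of the 5 dimensions, each scored 0-100
function overallScore(s: ReviewScores): number {
  const values = [s.determinism, s.isolation, s.assertions, s.structure, s.performance];
  return Math.round(values.reduce((sum, v) => sum + v, 0) / values.length);
}
```

Note that the average means one weak dimension (say, heavy flakiness) drags the whole score down, which is the intent: a suite cannot compensate for non-determinism with pretty structure.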
### 4. Teaching: Trace Workflow

"### 🔗 Trace Workflow: Requirements Traceability

**Purpose:** Map tests to requirements, make release gate decision

**Trace Workflow:**

1. **Load Context:** Understand acceptance criteria
2. **Discover Tests:** Find all test files
3. **Map Criteria:** Link tests to requirements
4. **Analyze Gaps:** What's not tested?
5. **Gate Decision:** GREEN (ship), YELLOW (assess risk), or RED (block)

**Release Gate Logic:**

- **GREEN:** All P0/P1 criteria have tests, gaps are P2/P3
- **YELLOW:** Some P1 gaps, assess risk
- **RED:** P0 gaps exist, DO NOT SHIP

{Role-adapted example}

**Documentation:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/how-to/workflows/run-trace/>"
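The three gate rules above can be sketched directly (the `Gap` shape is a hypothetical illustration; only the GREEN/YELLOW/RED rules themselves come from the text):

```typescript
type Gate = 'GREEN' | 'YELLOW' | 'RED';

// A coverage gap: an acceptance criterion with no mapped test
interface Gap {
  criterion: string;
  priority: 'P0' | 'P1' | 'P2' | 'P3';
}

function gateDecision(gaps: Gap[]): Gate {
  if (gaps.some((g) => g.priority === 'P0')) return 'RED'; // P0 gaps: do not ship
  if (gaps.some((g) => g.priority === 'P1')) return 'YELLOW'; // P1 gaps: assess risk
  return 'GREEN'; // only P2/P3 gaps (or none): ship
}
```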
### 5. Teaching: Quality Metrics

"### 📊 Quality Metrics That Matter

**Track:**

- **P0/P1 Coverage %** (not total line coverage)
- **Flakiness Rate** (flaky tests / total tests)
- **Test Execution Time** (feedback loop speed)
- **Determinism Score** (from Test Review)

**Don't Track (Vanity Metrics):**

- Total line coverage % (tells you nothing about risk)
- Number of tests (quantity ≠ quality)
- Test file count (irrelevant)

{Role-adapted example}

**Goal:** High P0/P1 coverage, zero flakiness, fast execution."

### 6. Quiz (3 questions)

**Q1:** "What are the 5 dimensions in the Test Review workflow?
A) Speed, cost, coverage, bugs, time
B) Determinism, Isolation, Assertions, Structure, Performance
C) Unit, integration, E2E, manual, exploratory
D) P0, P1, P2, P3, P4"

Correct: B

**Q2:** "When should the Trace workflow gate decision be RED (block release)?
A) Any test failures exist
B) P0 gaps exist (critical requirements not tested)
C) Code coverage is below 80%
D) Tests are slow"

Correct: B

**Q3:** "Which metric matters most for quality?
A) Total line coverage %
B) Number of tests written
C) P0/P1 coverage %
D) Test file count"

Correct: C

Calculate score, handle <70% retry.

### 7. Generate Session Notes

Create {sessionNotesFile} with Session 6 content, Test Review + Trace workflows, quality metrics.

### 8. Update Progress (Completed)

Update session-06-quality-trace: completed, score, notes.
Increment sessions_completed, update percentage.
Append 'step-04-session-06' to stepsCompleted.
Set next_recommended: 'session-07-advanced'.

### 9. Complete Message

"🎉 **Session 6 Complete!** Score: {score}/100
You can now audit quality and ensure traceability!
Progress: {completion_percentage}%"

### 10. Menu

[A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

Return to {nextStepFile}.

---

## 🚨 SUCCESS METRICS

✅ Test Review and Trace taught, quality dimensions explained, quiz passed, notes generated, returned to hub.
@@ -0,0 +1,212 @@
|
||||
---
|
||||
name: 'step-04-session-07'
|
||||
description: 'Session 7: Advanced Patterns - Menu-driven knowledge fragment exploration (ongoing)'
|
||||
|
||||
progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
|
||||
sessionNotesTemplate: '../templates/session-notes-template.md'
|
||||
sessionNotesFile: '{test_artifacts}/tea-academy/{user_name}/session-07-notes.md'
|
||||
nextStepFile: './step-03-session-menu.md'
|
||||
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
|
||||
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
|
||||
---
|
||||
|
||||
# Step 4: Session 7 - Advanced Patterns

## STEP GOAL:

To provide menu-driven exploration of 35 TEA knowledge fragments organized by category, allowing on-demand deep dives into specific advanced topics.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read complete step file before action
- ✅ SPEAK OUTPUT IN {communication_language}

### Role Reinforcement:

- ✅ Master Test Architect and Teaching Guide
- ✅ Collaborative exploration

### Step-Specific Rules:

- 🎯 Focus on Session 7 (Advanced Patterns exploration)
- 💬 Menu-driven, user chooses topics
- 📚 This session is ONGOING - users can explore multiple fragments

## EXECUTION PROTOCOLS:

- 🎯 Display fragment categories
- 💾 Generate notes after exploration
- 📖 Update progress when user exits
- ⏭️ Return to hub when done

## MANDATORY SEQUENCE

### 1. Welcome

"🧪 **Session 7: Advanced Patterns** (Ongoing Exploration)

**Objective:** Deep-dive into 35 TEA knowledge fragments

**This session is different:**

- Menu-driven exploration (you choose topics)
- Explore as many fragments as you want
- Can revisit this session anytime
- No quiz - this is reference learning

**35 Knowledge Fragments organized by category:**

Let's explore!"

### 2. Update Progress (Started)

Set session-07-advanced `status: 'in-progress'` (only the first time).

### 3. Display Knowledge Fragment Categories

"### 📚 Knowledge Fragment Categories

**1. Testing Patterns (9 fragments)**

- fixture-architecture.md - Composable fixture patterns
- fixtures-composition.md - mergeTests composition patterns
- network-first.md - Network interception safeguards
- data-factories.md - Data seeding & setup
- component-tdd.md - TDD red-green loop
- api-testing-patterns.md - Pure API testing
- test-healing-patterns.md - Auto-fix common failures
- selector-resilience.md - Robust selectors
- timing-debugging.md - Race condition fixes

**2. Playwright Utils (11 fragments)**

- overview.md - Playwright Utils overview
- api-request.md - Typed HTTP client
- network-recorder.md - HAR record/playback
- intercept-network-call.md - Network spy/stub
- recurse.md - Async polling
- log.md - Report logging
- file-utils.md - CSV/XLSX/PDF validation
- burn-in.md - Smart test selection
- network-error-monitor.md - HTTP error detection
- contract-testing.md - Pact integration
- visual-debugging.md - Trace viewer workflows

**3. Configuration & Governance (6 fragments)**

- playwright-config.md - Environment & timeout guardrails
- ci-burn-in.md - CI orchestration
- selective-testing.md - Tag/grep filters
- feature-flags.md - Governance & cleanup
- risk-governance.md - Scoring matrix & gates
- adr-quality-readiness-checklist.md - Quality readiness checklist

**4. Quality Frameworks (5 fragments)**

- test-quality.md - DoD execution limits
- test-levels-framework.md - Unit/Integration/E2E
- test-priorities-matrix.md - P0-P3 coverage targets
- probability-impact.md - Probability × impact scoring
- nfr-criteria.md - NFR assessment definitions

**5. Authentication & Security (3 fragments)**

- email-auth.md - Magic link extraction
- auth-session.md - Token persistence
- error-handling.md - Exception handling

**GitHub Repository:** <https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge>

**Select a category (1-5) or a specific fragment to explore, or [X] to finish:**"

### 4. Fragment Exploration Loop

**Wait for user selection.**

**Handle selection:**

- **IF 1-5 (category):** Display all fragments in that category with descriptions, ask which fragment to explore
- **IF specific fragment name:** Load and present that fragment's content
- **IF X:** Proceed to step 5 (complete session)
- **IF any other:** Help user, redisplay categories

**For each fragment explored:**

1. Present the fragment's key concepts
2. Provide role-adapted examples
3. Link to the GitHub source
4. Ask: "Explore another fragment? [Y/N/X to finish]"
5. If Y: Redisplay categories
6. If N or X: Proceed to completion

**Track fragments explored** (for session notes).

### 5. Session Summary

After the user selects X (finish exploration):

"### 🎯 Session 7 Summary

**Fragments Explored:** {count}

{List each fragment explored}

**Key Takeaways:**
{Summarize insights from explored fragments}

**Remember:** You can return to Session 7 anytime to explore more fragments!

**GitHub Knowledge Base:** <https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge>"

### 6. Generate Session Notes

Create {sessionNotesFile} with:

- Session 7 content
- List of fragments explored
- Key insights from each
- GitHub links
- No quiz (exploratory session)
- Score: 100 (completion-based, not quiz-based)

### 7. Update Progress (Completed)

Update session-07-advanced: completed, score: 100, notes.
Increment sessions_completed, update percentage.
Append 'step-04-session-07' to stepsCompleted.

**Check completion:**

- If sessions_completed == 7: Set next_recommended: 'completion'
- Otherwise: Recommend next incomplete session
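The progress update above can be sketched as a small helper, assuming the progress YAML has already been parsed into a dict (e.g. with PyYAML). The key names (`sessions`, `session-07-advanced`, `stepsCompleted`, `next_recommended`) mirror this spec but are assumptions that should be verified against progress-template.yaml:

```python
from datetime import date

def complete_session_07(progress: dict, notes_path: str) -> dict:
    """Mark Session 7 complete and recompute overall progress (sketch)."""
    session = progress["sessions"]["session-07-advanced"]
    session.update(
        status="completed",
        score=100,  # exploratory session: completion-based, not quiz-based
        completed_date=date.today().isoformat(),
        notes_artifact=notes_path,
    )
    done = sum(1 for s in progress["sessions"].values() if s["status"] == "completed")
    progress["sessions_completed"] = done
    progress["completion_percentage"] = round(done / 7 * 100)
    steps = progress.setdefault("stepsCompleted", [])
    if "step-04-session-07" not in steps:
        steps.append("step-04-session-07")
    if done == 7:
        progress["next_recommended"] = "completion"
    else:
        # first incomplete session in key order
        progress["next_recommended"] = next(
            key for key, s in sorted(progress["sessions"].items())
            if s["status"] != "completed"
        )
    return progress
```

Because Session 7 is repeatable, the helper is idempotent: re-running it never duplicates the stepsCompleted entry.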
### 8. Complete Message

"🎉 **Session 7 Complete!**

**Fragments Explored:** {count}

{If sessions_completed == 7:}
🏆 **Congratulations!** You've completed ALL 7 sessions!
Your completion certificate will be generated when you return to the menu.

{Otherwise:}
**Progress:** {completion_percentage}% complete ({sessions_completed} of 7 sessions)
You can return to Session 7 anytime to explore more fragments!"

### 9. Menu

[A] Advanced Elicitation [P] Party Mode [C] Continue to Session Menu

Return to {nextStepFile}.

---

## 🚨 SUCCESS METRICS

✅ Fragment categories displayed, user explored chosen fragments, notes generated with exploration summary, progress updated, returned to hub.

**Master Rule:** This session is exploratory and repeatable. User drives exploration, workflow facilitates.

@@ -0,0 +1,339 @@

---
name: 'step-05-completion'
description: 'Generate completion certificate, final progress update, congratulate learner'

progressFile: '{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml'
certificateTemplate: '../templates/certificate-template.md'
certificateFile: '{test_artifacts}/tea-academy/{user_name}/tea-completion-certificate.md'
---

# Step 5: Completion & Certificate Generation

## STEP GOAL:

To generate the TEA Academy completion certificate, update final progress, and congratulate the learner on completing all 7 sessions.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read complete step file before action
- ✅ SPEAK OUTPUT IN {communication_language}

### Role Reinforcement:

- ✅ Master Test Architect and Teaching Guide
- ✅ Celebrating completion

### Step-Specific Rules:

- 🎯 Focus on completion and celebration
- 🚫 FORBIDDEN to proceed without verifying all 7 sessions complete
- 💬 Approach: Congratulate, generate certificate, inspire next steps

## EXECUTION PROTOCOLS:

- 🎯 Verify all sessions complete
- 💾 Generate completion certificate
- 📖 Final progress update
- 🎉 This is the final step - no next step

## CONTEXT BOUNDARIES:

- Available context: Progress file with all 7 sessions completed
- Focus: Certificate generation and celebration
- Dependencies: All 7 sessions must be complete

## MANDATORY SEQUENCE

### 1. Verify All Sessions Complete

Load {progressFile} and check:

- All 7 sessions have `status: 'completed'`
- All 7 sessions have scores
- sessions_completed == 7

**If any session NOT complete:**

Display:

"⚠️ **Not all sessions complete!**

You still have {7 - sessions_completed} sessions remaining.

Please return to the session menu to complete the remaining sessions before generating your certificate."

**THEN:** Stop and do not proceed. This is an error state.

---

**If all 7 sessions complete:** Proceed to step 2.
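The gate above can be expressed as a small predicate. This is a sketch assuming the parsed progress file exposes a `sessions` map and a `sessions_completed` counter as described in this step; the exact field names should be checked against progress-template.yaml:

```python
def all_sessions_complete(progress: dict) -> tuple[bool, int]:
    """Completion gate for step 1: returns (ok, number of incomplete sessions)."""
    incomplete = [
        s for s in progress["sessions"].values()
        if s.get("status") != "completed" or s.get("score") is None
    ]
    ok = not incomplete and progress.get("sessions_completed") == 7
    return ok, len(incomplete)
```

The remaining-session count feeds directly into the "{7 - sessions_completed} sessions remaining" message above.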
### 2. Calculate Final Metrics

From progress file, calculate:

**Average Score:**

- Sum all 7 session scores
- Divide by 7
- Round to nearest integer

**Total Duration:**

- Calculate days between started_date and current_date
- Format as "{N} days" or "{N} weeks"

**Individual Session Scores:**

- Extract score for each session (session-01 through session-07)
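The calculations above can be sketched as follows, assuming an ISO-formatted `started_date` in the parsed progress file. The spec does not fix when to switch from days to weeks, so the 14-day cutoff here is an assumption:

```python
from datetime import date

def final_metrics(progress: dict, today: date) -> dict:
    """Average score, total duration, and per-session scores (sketch)."""
    scores = [s["score"] for _, s in sorted(progress["sessions"].items())]
    average = round(sum(scores) / len(scores))  # nearest integer
    days = (today - date.fromisoformat(progress["started_date"])).days
    duration = f"{days} days" if days < 14 else f"{days // 7} weeks"  # assumed cutoff
    return {"average_score": average, "total_duration": duration, "session_scores": scores}
```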
### 3. Congratulations Message

Display:

"🏆 **CONGRATULATIONS, {user_name}!**

You've completed all 7 sessions of TEA Academy!

**Your Achievement:**

- **Started:** {started_date}
- **Completed:** {current_date}
- **Duration:** {total_duration}
- **Average Score:** {average_score}/100
- **Sessions Completed:** 7 of 7 (100%)

**Session Scores:**

- Session 1 (Quick Start): {session_01_score}/100
- Session 2 (Core Concepts): {session_02_score}/100
- Session 3 (Architecture): {session_03_score}/100
- Session 4 (Test Design): {session_04_score}/100
- Session 5 (ATDD & Automate): {session_05_score}/100
- Session 6 (Quality & Trace): {session_06_score}/100
- Session 7 (Advanced Patterns): {session_07_score}/100

Generating your completion certificate..."

### 4. Generate Completion Certificate

Load {certificateTemplate} and create {certificateFile} with:

```markdown
---
certificate_type: tea-academy-completion
user: {user_name}
role: {role}
completion_date: {current_date}
started_date: {started_date}
total_duration: {total_duration}
average_score: {average_score}
---

# 🏆 TEA Academy Completion Certificate

---

## Certificate of Completion

**This certifies that**

# {user_name}

**has successfully completed the TEA Academy testing curriculum**

---

### Program Details

**Role:** {role}
**Started:** {started_date}
**Completed:** {current_date}
**Total Duration:** {total_duration}
**Average Score:** {average_score}/100

---

### Sessions Completed

✅ **Session 1:** Quick Start (30 min) - Score: {session_01_score}/100
✅ **Session 2:** Core Concepts (45 min) - Score: {session_02_score}/100
✅ **Session 3:** Architecture & Patterns (60 min) - Score: {session_03_score}/100
✅ **Session 4:** Test Design (60 min) - Score: {session_04_score}/100
✅ **Session 5:** ATDD & Automate (60 min) - Score: {session_05_score}/100
✅ **Session 6:** Quality & Trace (45 min) - Score: {session_06_score}/100
✅ **Session 7:** Advanced Patterns (ongoing) - Score: {session_07_score}/100

---

### Skills Acquired

{user_name} has demonstrated proficiency in:

- ✅ **Testing Fundamentals:** Risk-based testing, test pyramid, test types, P0-P3 prioritization
- ✅ **TEA Methodology:** 9 workflows (Teach Me Testing, Framework, Test Design, ATDD, Automate, Test Review, Trace, NFR, CI)
- ✅ **Architecture Patterns:** Fixture composition, network-first patterns, data factories, step-file architecture
- ✅ **Test Design:** Risk assessment (Probability × Impact), coverage planning, test levels framework
- ✅ **Test Development:** ATDD red-green TDD approach, test automation, API testing patterns
- ✅ **Quality Assurance:** Test review (5 dimensions), traceability, release gates, quality metrics
- ✅ **Advanced Techniques:** Knowledge fragments explored, Playwright Utils integration

---

### Learning Artifacts

All session notes and progress tracking available at:
`{test_artifacts}/tea-academy/{user_name}/`

**Session Notes:**

- session-01-notes.md - Quick Start
- session-02-notes.md - Core Concepts
- session-03-notes.md - Architecture & Patterns
- session-04-notes.md - Test Design
- session-05-notes.md - ATDD & Automate
- session-06-notes.md - Quality & Trace
- session-07-notes.md - Advanced Patterns

**Progress File:**
`{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml`

---

### Next Steps

**Recommended Actions:**

1. **Apply TEA to your project:** Start with Framework setup workflow
2. **Run TEA workflows:** Test Design → ATDD/Automate → Test Review
3. **Share knowledge:** Help team members through TEA Academy
4. **Explore knowledge fragments:** 35 fragments for just-in-time learning
5. **Contribute improvements:** Share feedback on TEA methodology

**TEA Resources:**

- **Documentation:** https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/
- **Knowledge Base:** https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/reference/knowledge-base/
- **GitHub Fragments:** https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise/tree/main/src/testarch/knowledge

---

**Generated by:** TEA Academy - Teach Me Testing Workflow
**Module:** Test Architecture Enterprise (TEA)
**Completion Date:** {current_date}

---

🧪 **Master Test Architect and Quality Advisor**
```

Save certificate to {certificateFile}.
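Substituting the `{placeholder}` tokens can be sketched as below; unknown tokens are left intact so a missing field shows up visibly in the output instead of raising an error. The field names are whatever the certificate template uses:

```python
import re

def render_certificate(template: str, fields: dict) -> str:
    """Fill {placeholder} tokens from fields; leave unknown tokens intact."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: str(fields.get(m.group(1), m.group(0))),
        template,
    )
```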
### 5. Update Progress File (Final)

Load {progressFile} and make final updates:

**Update session-07 (if not already):**

- `status: 'completed'`
- `completed_date: {current_date}`
- `score: 100` (exploratory session, completion-based)
- `notes_artifact: '{sessionNotesFile}'`

**Update completion fields:**

- `sessions_completed: 7`
- `completion_percentage: 100`
- `certificate_generated: true`
- `certificate_path: '{certificateFile}'`
- `completion_date: {current_date}`

**Update stepsCompleted:**

- Append 'step-04-session-07' (if session 7 just completed)
- Append 'step-05-completion'
- Update lastStep: 'step-05-completion'

Save final progress file.

### 6. Display Certificate

Display the complete certificate content to the user.

### 7. Final Celebration

Display:

"🎉 **CONGRATULATIONS, {user_name}!** 🎉

You've successfully completed the entire TEA Academy curriculum!

**Your Achievement:**

- ✅ 7 sessions completed
- ✅ Average score: {average_score}/100
- ✅ {total_duration} of dedicated learning
- ✅ Certificate generated

**All Your Artifacts:**

- **Certificate:** {certificateFile}
- **Progress:** {progressFile}
- **Session Notes:** {test_artifacts}/tea-academy/{user_name}/

**You're now equipped to:**

- Write high-quality tests following TEA principles
- Use all 9 TEA workflows effectively
- Apply risk-based testing (P0-P3 prioritization)
- Implement architecture patterns (fixtures, network-first)
- Maintain quality through Test Review and Trace
- Explore 35 knowledge fragments as needed

**Next Steps:**

1. Apply TEA to your current project
2. Share this workflow with your team
3. Help onboard new team members
4. Continue learning through knowledge fragments

**Thank you for investing in testing excellence!** 🧪

---

**TEA Academy - Mission Accomplished** ✅"

### 8. Workflow Complete

**This is the final step - no menu, no next step.**

Workflow ends here. User can run the workflow again to re-take sessions or explore more fragments.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All 7 sessions verified complete before certificate generation
- Average score calculated correctly
- Certificate generated with all session data
- Certificate saved to file
- Progress file updated with completion status
- Final celebration message displayed
- All artifact paths provided to user
- Workflow completes successfully

### ❌ SYSTEM FAILURE:

- Generating certificate without verifying all sessions complete
- Incorrect average score calculation
- Missing session data in certificate
- Not updating progress file with completion status
- Not providing artifact paths to user
- Proceeding to next step (this is final - no next step)

**Master Rule:** Verify completion, generate certificate, celebrate achievement, end workflow. This is the finale.
@@ -0,0 +1,141 @@
---
name: 'step-e-01-assess-workflow'
description: 'Assess what needs to be edited in the teaching workflow'

nextStepFile: './step-e-02-apply-edits.md'
workflowPath: '../'
advancedElicitationTask: '{project-root}/_bmad/core/workflows/advanced-elicitation/workflow.xml'
partyModeWorkflow: '{project-root}/_bmad/core/workflows/party-mode/workflow.md'
---

# Edit Step 1: Assess What to Edit

## STEP GOAL:

To identify what the user wants to edit in the teach-me-testing workflow and gather requirements for the modifications.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read complete step file before action
- ✅ SPEAK OUTPUT IN {communication_language}

### Role Reinforcement:

- ✅ You are a workflow architect helping with modifications
- ✅ Collaborative dialogue for understanding edit needs

### Step-Specific Rules:

- 🎯 Focus on understanding what to edit
- 🚫 FORBIDDEN to make edits yet
- 💬 Ask questions to clarify requirements

## EXECUTION PROTOCOLS:

- 🎯 Understand edit requirements
- 💾 Document what needs editing
- 📖 Prepare for edits in next step

## MANDATORY SEQUENCE

### 1. Welcome to Edit Mode

"**Edit Mode: Teach Me Testing Workflow**

What would you like to edit?

**Common edits:**

- Update session content (new concepts, updated examples)
- Modify quiz questions
- Add/remove knowledge fragments from session 7
- Update TEA resource references
- Change session durations or structure
- Update role-based examples

**Tell me what you'd like to change.**"

### 2. Gather Edit Requirements

Ask targeted questions based on the user's response:

**If editing session content:**

- Which session? (1-7)
- What specific content needs updating?
- Why the change? (outdated, incorrect, needs improvement)

**If editing quiz questions:**

- Which session's quiz?
- Which question(s)?
- What's wrong with the current questions?

**If editing session 7 fragments:**

- Add a new fragment category?
- Update existing fragment references?
- Change the organization?

**If editing templates:**

- Progress template?
- Session notes template?
- Certificate template?
- What fields need changing?

**If editing data files:**

- Curriculum structure?
- Role customizations?
- Resource mappings?

### 3. Load Current Content

Based on what the user wants to edit, load the relevant files:

- Session step files (steps-c/step-04-session-\*.md)
- Templates (`templates/*.md` or `*.yaml`)
- Data files (data/\*.yaml)

Show the user the current content.

### 4. Document Edit Plan

"**Edit Plan:**

**Target Files:**

- {list files to be modified}

**Changes Required:**

- {list specific changes}

**Reason:**

- {why these edits are needed}

Ready to proceed with edits?"

### 5. Menu

Display: **Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Apply Edits

#### Menu Handling Logic:

- IF A: Execute {advancedElicitationTask}, redisplay menu
- IF P: Execute {partyModeWorkflow}, redisplay menu
- IF C: Load and read {nextStepFile} in full, then execute it
- IF any other: help user, redisplay menu

---

## 🚨 SUCCESS METRICS

✅ Edit requirements clearly understood, target files identified, edit plan documented, user approves plan.

**Master Rule:** Understand before editing. Get clear requirements first.
@@ -0,0 +1,122 @@
---
name: 'step-e-02-apply-edits'
description: 'Apply modifications to the teaching workflow based on the edit plan'

workflowPath: '../'
---

# Edit Step 2: Apply Edits

## STEP GOAL:

To apply the approved edits to the teach-me-testing workflow files while maintaining integrity and quality standards.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER edit without showing the user the changes first
- 📖 CRITICAL: Read complete step file before action
- ✅ SPEAK OUTPUT IN {communication_language}

### Role Reinforcement:

- ✅ You are a workflow architect applying modifications
- ✅ Collaborative edits with user approval

### Step-Specific Rules:

- 🎯 Focus on applying approved edits only
- 🚫 FORBIDDEN to make unapproved changes
- 💬 Show changes before applying

## EXECUTION PROTOCOLS:

- 🎯 Apply edits systematically
- 💾 Validate after each edit
- 📖 Document changes made

## MANDATORY SEQUENCE

### 1. Review Edit Plan

"**Applying approved edits to the teach-me-testing workflow**

From step-e-01, we identified:
{Summarize edit plan from previous step}

Let me apply these changes systematically."

### 2. Apply Edits by Category

**For each file to be edited:**

1. Load the current file
2. Show the proposed changes (before/after)
3. Ask: "Apply this edit? [Y/N]"
4. If Y: Make the edit
5. If N: Skip this edit
6. Confirm the edit applied successfully

### 3. Validate Edits

After all edits are applied:

**Check:**

- Frontmatter still valid
- File references still correct
- Menu handling logic intact
- Step sequence maintained

"**Validation:**

All edits applied successfully:

- {list files modified}

Checking integrity:

- ✅ Frontmatter valid
- ✅ File references correct
- ✅ Menu logic intact
- ✅ Step sequence maintained"

### 4. Summary of Changes

"**Edit Summary:**

**Files Modified:** {count}
{List each file with changes made}

**Changes Applied:**
{Summarize what was changed}

**Workflow Status:** ✅ Edits complete, workflow intact

**Next:** You can run the workflow to test your changes, or run validation mode to check quality."

### 5. Completion

"**Edit Mode Complete!**

The teach-me-testing workflow has been updated.

**Modified files:**
{List paths to modified files}

**Recommended next steps:**

1. Run validation: `bmad run teach-me-testing -v`
2. Test the workflow: `bmad run teach-me-testing`
3. Make additional edits if needed"

**This is the final edit step - the workflow ends here.**

---

## 🚨 SUCCESS METRICS

✅ Edits applied to approved files only, changes validated, workflow integrity maintained, user informed of modifications.

**Master Rule:** Show changes, get approval, apply edits, validate integrity.
@@ -0,0 +1,263 @@
---
name: 'step-v-01-validate'
description: 'Validate teach-me-testing workflow quality against BMAD standards'

workflowPath: '../'
checklistFile: '../checklist.md'
validationReport: '{test_artifacts}/workflow-validation/teach-me-testing-validation-{date}.md'
---

# Validate Step 1: Quality Validation

## STEP GOAL:

To systematically validate the teach-me-testing workflow against BMAD quality standards and generate a comprehensive validation report.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER skip validation checks
- 📖 CRITICAL: Read complete step file before action
- ✅ SPEAK OUTPUT IN {communication_language}

### Role Reinforcement:

- ✅ You are a workflow quality assurance specialist
- ✅ Systematic validation against standards

### Step-Specific Rules:

- 🎯 Focus on comprehensive validation
- 🚫 FORBIDDEN to skip any checks
- 💬 Report findings clearly

## EXECUTION PROTOCOLS:

- 🎯 Run all validation checks
- 💾 Generate validation report
- 📖 Provide remediation guidance

## MANDATORY SEQUENCE

### 1. Validation Start

"**Validating Workflow: teach-me-testing**

Running comprehensive quality checks against BMAD standards...

This will validate:

- Foundation structure
- Step file quality (12 CREATE, 2 EDIT, 1 VALIDATE)
- Template quality
- Data file completeness
- Frontmatter compliance
- Menu handling patterns
- State management
- Documentation

**Starting validation...**"

### 2. Foundation Structure Validation

**Check:**

- [ ] workflow.md exists with proper frontmatter
- [ ] Tri-modal routing logic present
- [ ] Configuration loading correct
- [ ] First step path correct
- [ ] Folder structure complete (steps-c/, steps-e/, steps-v/, data/, templates/)

Report findings: Pass/Fail for each check.

### 3. Template Validation

**Check templates/:**

- [ ] progress-template.yaml has complete schema
- [ ] All 7 sessions defined
- [ ] Session status fields present
- [ ] stepsCompleted array present
- [ ] session-notes-template.md has required sections
- [ ] certificate-template.md includes all 7 sessions

Report findings.

### 4. Step File Validation (CREATE Mode)

**For each of the 12 steps in steps-c/:**

- [ ] Frontmatter valid (name, description present)
- [ ] All frontmatter variables used in body
- [ ] File references use relative paths correctly
- [ ] Menu handling follows standards
- [ ] Step goal clearly stated
- [ ] MANDATORY SEQUENCE present
- [ ] Success/failure metrics present
- [ ] File size reasonable (<250 lines recommended)

Report findings per step.
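The per-file frontmatter checks above can be sketched mechanically. This is a simplified line-based parser (a real run would use a YAML parser), and the "name"/"description" requirement and 250-line limit follow the checklist above:

```python
def check_frontmatter(text: str) -> list[str]:
    """Return issues found in one step file (sketch of the checks above)."""
    if not text.startswith("---"):
        return ["missing frontmatter block"]
    try:
        header, body = text[3:].split("\n---", 1)
    except ValueError:
        return ["unterminated frontmatter block"]
    issues = []
    fields = dict(
        line.split(":", 1) for line in header.strip().splitlines() if ":" in line
    )
    for required in ("name", "description"):
        if required not in fields:
            issues.append(f"missing required field: {required}")
    for key in fields:  # every frontmatter variable should be referenced in the body
        if key not in ("name", "description") and f"{{{key}}}" not in body:
            issues.append(f"unused frontmatter variable: {key}")
    if text.count("\n") + 1 > 250:
        issues.append("file exceeds recommended 250 lines")
    return issues
```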
### 5. Data File Validation

**Check data/:**

- [ ] curriculum.yaml defines all 7 sessions
- [ ] role-paths.yaml has all 4 roles (QA/Dev/Lead/VP)
- [ ] session-content-map.yaml maps sessions to resources
- [ ] quiz-questions.yaml has questions for sessions 1-6
- [ ] tea-resources-index.yaml has complete documentation index

Report findings.

### 6. Content Quality Validation

**Check session steps:**

- [ ] Teaching content present and comprehensive
- [ ] Role-adapted examples present
- [ ] Quiz questions validate understanding
- [ ] TEA resource references correct
- [ ] Knowledge fragment references accurate
- [ ] Online URLs functional

Report findings.

### 7. State Management Validation

**Check continuable workflow features:**

- [ ] step-01-init checks for existing progress
- [ ] step-01b-continue loads and displays progress
- [ ] All session steps update stepsCompleted array
- [ ] Progress file schema matches template
- [ ] Session menu reads progress correctly
- [ ] Completion step verifies all sessions done

Report findings.

### 8. User Experience Validation

**Check UX:**

- [ ] Clear navigation instructions
- [ ] Progress visibility (percentage, indicators)
- [ ] Auto-save after sessions
- [ ] Resume capability
- [ ] Exit options clear
- [ ] Session descriptions helpful

Report findings.

### 9. Generate Validation Report

Create {validationReport}:

```markdown
---
workflow: teach-me-testing
validation_date: {current_date}
validator: TEA Validation Workflow
overall_status: PASS / FAIL / PASS_WITH_WARNINGS
---

# Teach Me Testing - Validation Report

**Date:** {current_date}
**Workflow Version:** 1.0.0
**Overall Status:** {status}

---

## Validation Summary

**Total Checks:** {count}
**Passed:** {pass_count}
**Failed:** {fail_count}
**Warnings:** {warning_count}

**Overall Quality Score:** {score}/100

---

## Foundation Structure

{Report findings}

## Template Quality

{Report findings}

## Step File Quality

{Report findings for all 15 steps}

## Data File Quality

{Report findings}

## Content Quality

{Report findings}

## State Management

{Report findings}

## User Experience

{Report findings}

---

## Issues Found

{List all failures and warnings}

---

## Remediation Recommendations

{For each issue, provide fix guidance}

---

## Conclusion

{Overall assessment}

**Status:** {READY_FOR_PRODUCTION / NEEDS_FIXES / PASS_WITH_MINOR_ISSUES}
```
|
||||
|
||||
### 10. Display Results
|
||||
|
||||
"**Validation Complete!**
|
||||
|
||||
**Overall Status:** {status}
|
||||
**Quality Score:** {score}/100
|
||||
|
||||
**Report saved:** {validationReport}
|
||||
|
||||
{If PASS:}
|
||||
✅ **Workflow is ready for production!**
|
||||
|
||||
{If FAIL:}
|
||||
❌ **Issues found that need fixing.**
|
||||
See report for details: {validationReport}
|
||||
|
||||
{If WARNINGS:}
|
||||
⚠️ **Minor issues found.**
|
||||
Workflow is usable but could be improved.
|
||||
|
||||
**Validation report generated.**"
|
||||
|
||||
**This is the final validation step - workflow ends here.**
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SUCCESS METRICS
|
||||
|
||||
✅ All validation checks run, comprehensive report generated, issues identified with remediation guidance, overall status determined.
|
||||
|
||||
**Master Rule:** Check everything systematically, report findings clearly, provide actionable remediation.
|
||||
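
The report template above reports a quality score and an overall status but leaves the tallying rule implicit. One plausible reading, sketched in Python — the half-weight given to warnings is an assumption, not something the step file prescribes:

```python
def overall_status(fail_count: int, warning_count: int) -> str:
    """Map check tallies to the report's overall_status field."""
    if fail_count > 0:
        return "FAIL"
    return "PASS_WITH_WARNINGS" if warning_count > 0 else "PASS"


def quality_score(passed: int, failed: int, warnings: int) -> int:
    """Score out of 100: warnings count at half weight (assumed weighting)."""
    total = passed + failed + warnings
    if total == 0:
        return 0
    return round(100 * (passed + 0.5 * warnings) / total)
```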
@@ -0,0 +1,86 @@
---
certificate_type: tea-academy-completion
user: {{user_name}}
role: {{role}}
completion_date: {{completion_date}}
started_date: {{started_date}}
total_duration: {{total_duration}}
average_score: {{average_score}}
---

# 🏆 TEA Academy Completion Certificate

---

## Certificate of Completion

**This certifies that**

## {{user_name}}

**has successfully completed the TEA Academy testing curriculum**

---

### Program Details

**Role:** {{role}}
**Started:** {{started_date}}
**Completed:** {{completion_date}}
**Total Duration:** {{total_duration}}
**Average Score:** {{average_score}}/100

---

### Sessions Completed

✅ **Session 1:** Quick Start (30 min) - Score: {{session_01_score}}
✅ **Session 2:** Core Concepts (45 min) - Score: {{session_02_score}}
✅ **Session 3:** Architecture & Patterns (60 min) - Score: {{session_03_score}}
✅ **Session 4:** Test Design (60 min) - Score: {{session_04_score}}
✅ **Session 5:** ATDD & Automate (60 min) - Score: {{session_05_score}}
✅ **Session 6:** Quality & Trace (45 min) - Score: {{session_06_score}}
✅ **Session 7:** Advanced Patterns (ongoing) - Score: {{session_07_score}}

---

### Skills Acquired

{{user_name}} has demonstrated proficiency in:

- ✅ **Testing Fundamentals:** Risk-based testing, test pyramid, test types
- ✅ **TEA Methodology:** 9 workflows, engagement models, quality standards
- ✅ **Architecture Patterns:** Fixtures, network-first patterns, data factories
- ✅ **Test Design:** Risk assessment, coverage planning, P0-P3 prioritization
- ✅ **Test Development:** ATDD red-green approach, test automation
- ✅ **Quality Assurance:** Test review, traceability, NFR assessment
- ✅ **Advanced Techniques:** 35 knowledge fragments explored

---

### Learning Artifacts

All session notes and progress tracking available at:
`{{artifacts_path}}`

---

### Next Steps

**Recommended Actions:**

1. Apply TEA principles to current project
2. Run TEA workflows (Framework, Test Design, ATDD, Automate)
3. Share knowledge with team members
4. Continue exploring knowledge fragments as needed
5. Contribute to TEA methodology improvements

---

**Generated by:** TEA Academy - Teach Me Testing Workflow
**Module:** Test Architecture Enterprise (TEA)
**Website:** <https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/>

---

🧪 **Master Test Architect and Quality Advisor**
@@ -0,0 +1,95 @@
---
# TEA Academy Progress Tracking
# This file tracks a learner's progress through the teaching workflow

# User Information
user: "{{user_name}}"
role: "{{role}}" # qa | dev | lead | vp
experience_level: "{{experience_level}}" # beginner | intermediate | experienced
learning_goals: "{{learning_goals}}"
pain_points: "{{pain_points}}" # optional

# Session Tracking
started_date: "{{current_date}}"
last_session_date: "{{current_date}}"

# Session Array - tracks completion status for all 7 sessions
sessions:
  - id: session-01-quickstart
    name: "Quick Start"
    duration: "30 min"
    status: not-started # not-started | in-progress | completed
    started_date: null
    completed_date: null
    score: null # 0-100
    notes_artifact: null

  - id: session-02-concepts
    name: "Core Concepts"
    duration: "45 min"
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-03-architecture
    name: "Architecture & Patterns"
    duration: "60 min"
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-04-test-design
    name: "Test Design"
    duration: "60 min"
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-05-atdd-automate
    name: "ATDD & Automate"
    duration: "60 min"
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-06-quality-trace
    name: "Quality & Trace"
    duration: "45 min"
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

  - id: session-07-advanced
    name: "Advanced Patterns"
    duration: "ongoing"
    status: not-started
    started_date: null
    completed_date: null
    score: null
    notes_artifact: null

# Progress Metrics
sessions_completed: 0
total_sessions: 7
completion_percentage: 0
next_recommended: session-01-quickstart

# Workflow Continuation Tracking (for continuable workflow)
stepsCompleted: []
lastStep: ""
lastContinued: ""

# Completion Certificate
certificate_generated: false
certificate_path: null
completion_date: null
@@ -0,0 +1,83 @@
---
session_id: {{session_id}}
session_name: {{session_name}}
user: {{user_name}}
role: {{role}}
completed_date: {{completed_date}}
score: {{score}}
duration: {{duration}}
---

# {{session_name}} - Session Notes

**Learner:** {{user_name}} ({{role}})
**Completed:** {{completed_date}}
**Score:** {{score}}/100
**Duration:** {{duration}}

---

## Session Objectives

{{session_objectives}}

---

## Key Concepts Covered

{{key_concepts}}

---

## TEA Resources Referenced

### Documentation

{{docs_referenced}}

### Knowledge Fragments

{{knowledge_fragments_referenced}}

### Online Resources

{{online_resources}}

---

## Quiz Results

**Score:** {{score}}/100

### Questions & Answers

{{quiz_results}}

---

## Practical Examples

{{practical_examples}}

---

## Key Takeaways

{{key_takeaways}}

---

## Next Recommended Session

{{next_recommended}}

---

## Additional Notes

{{additional_notes}}

---

**Generated by:** TEA Academy - Teach Me Testing Workflow
**Session Path:** Session {{session_number}} of 7
@@ -0,0 +1,950 @@
---
stepsCompleted:
  [
    'step-01-discovery',
    'step-02-classification',
    'step-03-requirements',
    'step-04-tools',
    'step-05-plan-review',
    'step-06-design',
    'step-07-foundation',
  ]
created: 2026-01-27
status: FOUNDATION_COMPLETE
approvedDate: 2026-01-27
designCompletedDate: 2026-01-27
foundationCompletedDate: 2026-01-28
---

# Workflow Creation Plan

## Discovery Notes

**User's Vision:**
Create an ongoing learning companion that teaches testing progressively through a structured curriculum. Users at the company (and beyond) lack testing knowledge regardless of experience level - from hobbyist beginners to experienced VPs. The TEA (Test Architecture Enterprise) module has extensive documentation (~24k lines, 200 files, 9 workflows, 35 knowledge fragments), but manual teaching doesn't scale. This workflow solves that by providing self-paced, structured learning with state persistence across multiple sessions.

**Who It's For:**

- New QA engineers (primary onboarding use case)
- Developers who need testing knowledge
- Anyone at the company requiring testing fundamentals through advanced practices
- Scalable to the entire team without manual teaching

**What It Produces:**

- Multi-session learning journey (7 sessions, 30-90 min each)
- Session-by-session progress tracking via persistent state file
- Learning artifacts: session notes, test files, reports, completion certificate
- Personalized learning paths customized by role (QA vs Dev vs Lead vs VP)
- Knowledge validation through quizzes after each session
- Resume capability - users can pause and continue across days/weeks

**Key Insights:**

- Content volume (~24k lines) makes single-session teaching infeasible
- State persistence is critical for multi-session continuity
- Just-in-time content loading per session keeps context manageable
- First use case: new QA onboarding completing in 1-2 weeks
- Workflow must reference and integrate TEA docs and knowledge base extensively
- Users learn at their own pace without requiring instructor availability

**Technical Architecture Requirements:**

- 7-session curriculum structure
- State file: tracks progress, scores, completed sessions, artifacts, next recommended session
- Role-based path customization
- Knowledge validation gates between sessions
- Artifact generation per session
- Integration with TEA module documentation and knowledge base

## Classification Decisions

**Workflow Name:** teach-me-testing
**Target Path:** {project-root}/src/workflows/testarch/bmad-teach-me-testing/

**4 Key Decisions:**

1. **Document Output:** Yes (produces progress files, session notes, artifacts, completion certificate)
2. **Module Affiliation:** TEA module (9th workflow in test architecture)
3. **Session Type:** Continuable (multi-session learning over 1-2 weeks)
4. **Lifecycle Support:** Tri-modal (Create + Edit + Validate for future-proofing)

**Structure Implications:**

- **Tri-modal architecture:** Needs `steps-c/`, `steps-e/`, `steps-v/` folders
- **Continuable workflow:** Requires `step-01-init.md` with continuation detection + `step-01b-continue.md` for resuming
- **State tracking:** Uses `stepsCompleted` in progress file frontmatter
- **Document templates:** Progress tracking YAML, session notes markdown, completion certificate
- **Module integration:** Access to TEA module variables, docs paths, knowledge base paths
- **Data folder:** Shared data for curriculum structure, role paths, session content mappings

## Requirements

**Flow Structure:**

- Pattern: Mixed (non-linear between sessions, linear within sessions, branching at start only)
- Phases: Initial assessment → Session selection (non-linear) → Session execution (linear: teach → quiz → artifact) → Completion
- Estimated steps: Init + Continue + Assessment + 7 session steps + completion/certificate generation = ~10-12 core step files
- Session jumping: Users can skip to any session based on experience level
- Within a session: Strictly linear progression through teaching content

**User Interaction:**

- Style: Mixed (mostly autonomous teaching with collaborative decision points)
- Decision points:
  - Role/experience assessment (entry)
  - Session selection (menu-driven, can jump around)
  - Quiz answers (validation gates)
  - Continue to next session or exit
- Checkpoint frequency: At session completion (save progress, offer continue/exit)
- Teaching approach: AI presents content, user absorbs - minimal interruption once learning

**Inputs Required:**

- Required:
  - User role (QA, Dev, Lead, VP)
  - Experience level (beginner, intermediate, experienced)
  - Learning goals (fundamentals, TEA-specific, advanced patterns)
- Optional:
  - Existing project for practical examples
  - Specific pain points (flaky tests, slow tests, hard to maintain)
- Prerequisites:
  - TEA module installed
  - Access to TEA docs and knowledge base
  - Understanding of time commitment (30-90 min per session)

**Output Specifications:**

- Type: Multiple document types
- Format: Mixed formats
  - Progress file: Structured YAML with specific schema (sessions, scores, artifacts, completed_date, next_recommended)
  - Session notes: Free-form markdown built progressively per session
  - Completion certificate: Structured format with completion data
- Sections:
  - Progress file has a fixed schema
  - Session notes vary by session content
  - Certificate has standard completion fields
- Frequency:
  - Progress file: Updated after each session
  - Session notes: Generated per session
  - Certificate: Generated at final completion

**Success Criteria:**

- User completes their chosen sessions (might be 1, might be all 7)
- Knowledge validated through quizzes (≥70% passing threshold)
- Artifacts generated successfully (progress file exists, session notes created, learning tracked)
- User can apply knowledge (write their first good test following TEA principles)
- Onboarding velocity achieved (new QAs complete core sessions within 1-2 weeks)
- Scalability proven (multiple team members learn without requiring instructor time)

**Instruction Style:**

- Overall: Mixed (prescriptive for structure, intent-based for teaching)
- Prescriptive for:
  - Initial assessment (consistent role/experience classification)
  - Quiz questions (need exact validation logic)
  - Progress tracking (exact state file updates)
  - Session navigation (clear menu structure)
- Intent-based for:
  - Teaching sessions (AI adapts explanations naturally)
  - Example selection (AI chooses relevant TEA docs/knowledge fragments)
  - Artifact generation (AI synthesizes learning into notes)
  - Role-flavored content (AI adjusts examples based on user role)

## Tools Configuration

**Core BMAD Tools:**

- **Party Mode:** Included (optional via A/P menu) - Use for collaborative exploration when the learner wants a lighter format
- **Advanced Elicitation:** Included (optional via A/P menu) - Use for deeper discovery or clarification during sessions
- **Brainstorming:** Excluded - Not needed for structured curriculum delivery

**LLM Features:**

- **Web-Browsing:** Included - Use case: Safety net for framework updates (Cypress, Jest, newer Playwright versions) and frameworks not covered in TEA docs. Motto: "Only reach out when you don't have the info"
- **File I/O:** Included - Operations: Read TEA docs (`/docs/*.md`), read knowledge fragments (`/src/testarch/knowledge/*.md`), write progress file (`{user}-tea-progress.yaml`), write session notes, write completion certificate
- **Sub-Agents:** Excluded - Sessions are linear teaching steps handled by the TEA agent, not complex specialized tasks requiring delegation
- **Sub-Processes:** Excluded - Learning is sequential (one session at a time), no parallel processing needed

**Memory:**

- Type: Continuable workflow with persistent state
- Tracking:
  - `stepsCompleted` array in progress YAML
  - Session completion tracking (id, status, completed_date, score, artifacts)
  - Progress metrics (completion_percentage, next_recommended)
- Progress file structure:

  ```yaml
  user: {user_name}
  role: {qa/dev/lead/vp}
  sessions: [{id, status, completed_date, score, artifacts}]
  completion_percentage: {percent}
  next_recommended: {session-id}
  ```

- Continuation support via step-01b-continue.md with progress dashboard

**External Integrations:**

- None - Self-contained within the TEA module; no external databases, APIs, or MCP servers needed

**Installation Requirements:**

- None - All selected tools are built-in (Web-Browsing and File I/O are standard LLM features)
- User preference: N/A (no installations required)

## Workflow Design

### Complete Flow Overview

**Entry → Init (check for progress) → [New User: Assessment | Returning User: Dashboard] → Session Menu (hub) → Sessions 1-7 (loop back to menu) → Completion Certificate**

### Step Structure (CREATE mode - steps-c/)

**Total: 12 step files**

#### Phase 1: Initialization & Continuation

1. **step-01-init.md** (Init Step - Continuable)
   - Goal: Welcome user, check for existing progress file, explain workflow, create initial progress if new
   - Type: Init (Continuable) - checks for `{user}-tea-progress.yaml`, routes to step-01b if it exists
   - Menu: Auto-proceed (Pattern 3) - no user menu
   - Logic: Checks for existing progress → routes to step-01b if it exists, otherwise creates new and proceeds to step-02

2. **step-01b-continue.md** (Continuation Step)
   - Goal: Load existing progress, show dashboard with completion status, route to session menu
   - Type: Continuation - reads `stepsCompleted`, displays progress percentage
   - Menu: Auto-proceed (Pattern 3) - no user menu
   - Logic: Shows progress dashboard → auto-routes to step-03-session-menu

#### Phase 2: Assessment & Path Selection

3. **step-02-assess.md** (Middle Step - Standard)
   - Goal: Gather role (QA/Dev/Lead/VP), experience level, learning goals, optional pain points
   - Type: Middle (Standard), auto-proceed
   - Menu: Auto-proceed (Pattern 3) - no user menu
   - On completion: Saves assessment to progress file → loads step-03-session-menu

4. **step-03-session-menu.md** (Branch Step - Hub)
   - Goal: Present 7 sessions with descriptions + completion status, allow non-linear selection
   - Type: Branch Step (custom menu: 1-7, X for exit)
   - Menu: Custom branching (Pattern 4)
   - Display: [1-7] Select session | [X] Exit
   - Logic:
     - 1-7: Routes to the corresponding session step
     - X: If all sessions complete → routes to step-05-completion; if incomplete → saves and exits
   - **This is the hub - all sessions return here**

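
The hub's routing rules can be sketched directly. A minimal Python sketch, assuming the status values from the progress schema; the `save-and-exit` and `invalid` return values are illustrative placeholders for whatever the workflow engine does in those cases:

```python
def route_menu_choice(choice: str, sessions: list[dict]) -> str:
    """Map a hub-menu input (1-7 or X) to the next step file, per the rules above."""
    choice = choice.strip().upper()
    if choice in {"1", "2", "3", "4", "5", "6", "7"}:
        return f"step-04-session-{int(choice):02d}.md"
    if choice == "X":
        all_done = all(s["status"] == "completed" for s in sessions)
        # All complete → completion step; otherwise save progress and exit
        return "step-05-completion.md" if all_done else "save-and-exit"
    return "invalid"
```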
#### Phase 3: Session Execution (7 Sessions)

5-11. **step-04-session-[01-07].md** (Middle Steps - Complex)

- Each session follows the same pattern:
  - Loads relevant TEA docs just-in-time
  - Presents teaching content (mostly autonomous)
  - Knowledge validation quiz (collaborative)
  - Generates session notes artifact
  - Updates progress file
  - Returns to step-03-session-menu
- Menu: Standard A/P/C (Pattern 1) - users might want Advanced Elicitation
- On C: Saves session notes, updates progress (mark complete, update score), returns to hub

**Sessions:**

- **session-01**: Quick Start (30 min) - TEA Lite intro, run automate workflow
- **session-02**: Core Concepts (45 min) - Risk-based testing, DoD, philosophy
- **session-03**: Architecture (60 min) - Fixtures, network patterns, framework
- **session-04**: Test Design (60 min) - Risk assessment workflow
- **session-05**: ATDD & Automate (60 min) - ATDD + Automate workflows
- **session-06**: Quality & Trace (45 min) - Test review + Trace workflows
- **session-07**: Advanced Patterns (ongoing) - Menu-driven knowledge fragment exploration

#### Phase 4: Completion

12. **step-05-completion.md** (Final Step)
    - Goal: Generate completion certificate, final progress update, congratulate
    - Type: Final - no nextStepFile, marks workflow complete
    - Menu: None (final step)
    - Logic: Generates certificate, displays congratulations, workflow ends

### Interaction Patterns

- **Auto-proceed steps:** step-01-init, step-01b-continue, step-02-assess
- **Standard A/P/C:** step-04-session-[01-07]
- **Custom branching:** step-03-session-menu (hub)
- **No menu:** step-05-completion (final)

### Data Flow

**Progress File:** `{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml`

**Schema:**

```yaml
user: {user_name}
role: {qa/dev/lead/vp}
experience_level: {beginner/intermediate/experienced}
learning_goals: [list]
pain_points: [optional list]
started_date: 2026-01-27
last_session_date: 2026-01-27

sessions:
  - id: session-01-quickstart
    status: completed
    completed_date: 2026-01-27
    score: 90
    notes_artifact: '{test_artifacts}/tea-academy/{user_name}/session-01-notes.md'
  - id: session-02-concepts
    status: in-progress
    started_date: 2026-01-27
  # ... sessions 03-07

sessions_completed: 1
total_sessions: 7
completion_percentage: 14
next_recommended: session-02-concepts

stepsCompleted: ['step-01-init', 'step-02-assess', 'step-04-session-01']
lastStep: 'step-04-session-01'
lastContinued: '2026-01-27'
```

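
The derived fields in this schema (`sessions_completed`, `completion_percentage`, `next_recommended`) imply a small update routine run after each session. A minimal sketch, assuming the field names above; the round-to-integer percentage is an assumption consistent with the `14` in the example:

```python
from datetime import date


def complete_session(progress: dict, session_id: str, score: int, notes_path: str) -> dict:
    """Mark a session complete and recompute the derived progress metrics."""
    today = date.today().isoformat()
    for session in progress["sessions"]:
        if session["id"] == session_id:
            session.update(status="completed", score=score,
                           completed_date=today, notes_artifact=notes_path)
    done = [s for s in progress["sessions"] if s["status"] == "completed"]
    progress["sessions_completed"] = len(done)
    progress["completion_percentage"] = round(100 * len(done) / progress["total_sessions"])
    # Recommend the first session not yet completed, if any remain
    pending = [s["id"] for s in progress["sessions"] if s["status"] != "completed"]
    progress["next_recommended"] = pending[0] if pending else None
    progress["last_session_date"] = today
    return progress
```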
**Data Flow Per Step:**

- **step-01-init:** Creates initial progress YAML if new
- **step-01b-continue:** Reads progress file, updates lastContinued
- **step-02-assess:** Updates role, experience, goals, pain_points
- **step-03-session-menu:** Reads sessions array (display status)
- **step-04-session-[N]:** Reads progress (for role), writes session notes, updates sessions array
- **step-05-completion:** Reads all sessions data, writes certificate

**Error Handling:**

- Quiz failure (<70%): Offer review or continue anyway
- Missing TEA docs: Use Web-Browsing fallback
- Corrupted progress: Back up and offer a fresh start
- Session interrupted: Auto-save after quiz completion

**Checkpoints:**

- After assessment complete
- After each quiz completion
- After each session artifact generation
- On user exit from session menu

### File Structure

```
teach-me-testing/
├── workflow.md                  # Main entry point
├── workflow.yaml                # Workflow metadata
│
├── steps-c/                     # CREATE mode (12 steps)
│   ├── step-01-init.md
│   ├── step-01b-continue.md
│   ├── step-02-assess.md
│   ├── step-03-session-menu.md
│   ├── step-04-session-01.md
│   ├── step-04-session-02.md
│   ├── step-04-session-03.md
│   ├── step-04-session-04.md
│   ├── step-04-session-05.md
│   ├── step-04-session-06.md
│   ├── step-04-session-07.md
│   └── step-05-completion.md
│
├── steps-e/                     # EDIT mode (2 steps)
│   ├── step-e-01-assess-workflow.md
│   └── step-e-02-apply-edits.md
│
├── steps-v/                     # VALIDATE mode (1 step)
│   └── step-v-01-validate.md
│
├── data/                        # Shared data files
│   ├── curriculum.yaml
│   ├── role-paths.yaml
│   ├── session-content-map.yaml
│   ├── quiz-questions.yaml
│   └── tea-resources-index.yaml
│
├── templates/                   # Document templates
│   ├── progress-template.yaml
│   ├── session-notes-template.md
│   └── certificate-template.md
│
├── instructions.md
└── checklist.md
```

### Role and Persona Definition

**AI Role:** Master Test Architect and Teaching Guide

**Expertise:**

- Deep knowledge of testing principles (risk-based, test pyramid, types)
- Expert in TEA methodology (9 workflows, architecture patterns, 35 knowledge fragments)
- Familiar with Playwright, test automation, CI/CD
- Teaching pedagogy: progressive learning, knowledge validation, role-based examples

**Communication Style:**

- **Teaching:** Clear, patient, educational - adapts complexity by role
- **Quizzes:** Encouraging, constructive feedback, non-judgmental
- **Navigation:** Clear, concise, shows completion status prominently
- **Tone:** Encouraging but not patronizing, technical but accessible

**Teaching Principles:**

1. Just-in-time learning (load content when needed)
2. Active recall (quiz after teaching)
3. Spaced repetition (reference earlier concepts)
4. Role-flavored examples (same concept, different contexts)
5. Artifact generation (learners keep notes)

### Validation and Error Handling

**Output Validation:**

- Progress file: Schema, status, score (0-100), date, artifact paths
- Session notes: Frontmatter present, content not empty (min 100 chars)
- Certificate: All 7 sessions complete, valid dates, user info present

**User Input Validation:**

- Role: Must be QA, Dev, Lead, or VP
- Experience: beginner, intermediate, or experienced
- Quiz answers: 3 attempts before showing the correct answer
- Session selection: Must be 1-7 or X

**Error Recovery:**

- Corrupted progress: Back up, offer fresh start
- Missing docs: Web-Browsing fallback
- Quiz failure: Review or continue options
- Interrupted session: Auto-save progress

**Success Criteria:**

- Session complete: Content presented, quiz passed, notes generated, progress updated
- Workflow complete: All 7 sessions done, avg score ≥70%, artifacts created, certificate generated

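
The progress-file checks above (required fields, valid status values, scores in 0-100) can be sketched as a small validator. Field names follow the progress schema; the exact error-message strings are illustrative:

```python
VALID_STATUS = {"not-started", "in-progress", "completed"}


def validate_progress(progress: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the file passes."""
    errors = []
    for key in ("user", "role", "sessions", "total_sessions"):
        if key not in progress:
            errors.append(f"missing field: {key}")
    for session in progress.get("sessions", []):
        if session.get("status") not in VALID_STATUS:
            errors.append(f"{session.get('id')}: invalid status {session.get('status')!r}")
        score = session.get("score")
        if score is not None and not (0 <= score <= 100):
            errors.append(f"{session.get('id')}: score out of range: {score}")
    return errors
```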
### Special Features

**Conditional Logic:**

- Session menu routing: Check if all complete → route to completion or show menu
- Quiz scoring: If ≥70% proceed, if <70% offer review

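
The quiz gate above, combined with the three-attempt rule from input validation, can be sketched as two small helpers. `check_answer` and `get_input` are hypothetical callables standing in for the real quiz interaction:

```python
def quiz_gate(score: int, threshold: int = 70) -> str:
    """Apply the pass/review rule: ≥70% proceeds, below that offers a review."""
    return "proceed" if score >= threshold else "offer-review"


def ask_with_retries(check_answer, get_input, correct_answer, max_attempts: int = 3) -> bool:
    """Give the learner up to three attempts before revealing the answer."""
    for _ in range(max_attempts):
        if check_answer(get_input()):
            return True
    print(f"The correct answer was: {correct_answer}")
    return False
```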
**Branch Points:**
|
||||
|
||||
- Initial entry: Progress exists? → continue vs new
|
||||
- Experience-based recommendations: Beginner → session 1, Experienced → session 7
|
||||
|
||||
**Integration with TEA Workflows:**
|
||||
|
||||
- Session 1: Demonstrates [TA] Automate
|
||||
- Session 3: May run [TF] Framework
|
||||
- Session 4: Runs [TD] Test Design
|
||||
- Session 5: Runs [AT] ATDD + [TA] Automate
|
||||
- Session 6: Runs [RV] Test Review + [TR] Trace
|
||||
|
||||
**Role-Based Content:**
|
||||
|
||||
- QA: Practical testing focus
|
||||
- Dev: Integration and TDD focus
|
||||
- Lead: Architecture and patterns focus
|
||||
- VP: Strategy and metrics focus
|
||||
|
||||
**Session 7 Special Handling:**
|
||||
|
||||
- Exploratory menu-driven deep-dive into 35 knowledge fragments
|
||||
- Organized by categories (Testing Patterns, Playwright Utils, Config/Governance, etc.)
|
||||
- Links to GitHub for browsing
|
||||
|
||||
**Content Sources (Triple Reference System):**
|
||||
|
||||
- Local files: `/docs/*.md`, `/src/testarch/knowledge/*.md`
|
||||
- Online docs: `<https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/`>
|
||||
- GitHub fragments: Direct links to knowledge fragment source files
|
||||
|
||||
### Design Summary
|
||||
|
||||
**Complete:** 12-step CREATE workflow with hub pattern
|
||||
**Continuable:** Progress file tracks state across sessions
|
||||
**Non-linear:** Users jump to any session from hub
|
||||
**Role-flavored:** Same concepts, role-specific examples
|
||||
**Triple content:** Local + online + GitHub sources
|
||||
**Web-Browsing:** Fallback for missing/updated docs
|
||||
**Auto-save:** After each session completion
|
||||
**Tri-modal:** Create (12 steps) + Edit (2 steps) + Validate (1 step)
|
||||
|
||||
## Foundation Build Complete
|
||||
|
||||
**Created:** 2026-01-28
|
||||
|
||||
**Folder Structure:**
|
||||
|
||||
```
|
||||
teach-me-testing/
|
||||
├── workflow.md ✓ Created
|
||||
├── steps-c/ ✓ Created (empty, to be populated)
|
||||
├── steps-e/ ✓ Created (empty, to be populated)
|
||||
├── steps-v/ ✓ Created (empty, to be populated)
|
||||
├── data/ ✓ Created (empty, to be populated)
|
||||
├── templates/ ✓ Created
|
||||
│ ├── progress-template.yaml ✓ Created
|
||||
│ ├── session-notes-template.md ✓ Created
|
||||
│ └── certificate-template.md ✓ Created
|
||||
├── instructions.md ✓ Created
|
||||
└── checklist.md ✓ Created
|
||||
```
|
||||
|
||||
**Location:** {external-project-root}/\_bmad-output/bmb-creations/workflows/teach-me-testing/
|
||||
|
||||
**Configuration:**
|
||||
|
||||
- Workflow name: teach-me-testing
|
||||
- Continuable: Yes (multi-session learning)
|
||||
- Document output: Yes (Progress YAML, Session notes MD, Certificate MD)
|
||||
- Mode: Tri-modal (Create + Edit + Validate)
|
||||
- Module: TEA (Test Architecture Enterprise)
|
||||
|
||||
**Files Created:**
|
||||
|
||||
1. **workflow.md**
   - Tri-modal routing logic (Create/Edit/Validate)
   - Configuration loading from TEA module
   - Step-file architecture principles
   - Initialization sequence

2. **templates/progress-template.yaml**
   - Complete progress tracking schema
   - 7 sessions defined
   - Session status tracking (not-started/in-progress/completed)
   - stepsCompleted array for continuation
   - Progress metrics (completion_percentage, next_recommended)

3. **templates/session-notes-template.md**
   - Session metadata
   - Key concepts, objectives, takeaways
   - TEA resources referenced
   - Quiz results
   - Practical examples

4. **templates/certificate-template.md**
   - Completion certificate structure
   - All 7 sessions with scores
   - Skills acquired checklist
   - Learning artifacts paths
   - Next steps recommendations

5. **instructions.md**
   - How to run the workflow
   - Session structure and flow
   - Progress tracking details
   - Troubleshooting guide

6. **checklist.md**
   - Quality validation checklist
   - Foundation quality checks
   - Step file quality standards
   - Data file quality requirements
   - Completion criteria
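
As an illustration, the progress schema tracked by `templates/progress-template.yaml` might look like the fragment below. The field names beyond those stated above (such as the per-session keys) are assumptions for demonstration, not the actual template:

```yaml
# Hypothetical sketch of the progress file shape; key names beyond
# stepsCompleted, status, completion_percentage and next_recommended
# are illustrative.
stepsCompleted: [step-01-init, step-02-assess]
sessions:
  session-01:
    status: completed        # not-started | in-progress | completed
    score: 85
  session-02:
    status: in-progress
    score: null
metrics:
  completion_percentage: 14  # 1 of 7 sessions completed
  next_recommended: session-02
```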

**Next Steps:**

- Step 8: Build step-01-init.md (initialization with continuation detection)
- Step 9: Build step-01b-continue.md (continuation/resume logic)
- Step 10+: Build remaining 10 step files (assessment, session menu, 7 sessions, completion)
- Populate data/ folder with curriculum, role paths, session content map, quizzes, resources index
## Step 01 Build Complete

**Created:** 2026-01-28

**Files:**

- `steps-c/step-01-init.md` ✓
- `steps-c/step-01b-continue.md` ✓

**Step Configuration:**

- **Type:** Continuable (multi-session learning)
- **Input Discovery:** No (self-contained teaching)
- **Progress File:** `{test_artifacts}/teaching-progress/{user_name}-tea-progress.yaml`
- **Menu Pattern:** Auto-proceed (no user menu)

**step-01-init.md:**

- Checks for existing progress file
- If exists → routes to step-01b-continue
- If not → creates new progress from template, proceeds to step-02-assess
- Initializes stepsCompleted array
- Creates complete session tracking structure (all 7 sessions)
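
The continuation check above can be sketched as a small routing function. The function name and return values are illustrative, not the actual step implementation:

```typescript
import * as fs from "fs";

// Hypothetical sketch of step-01-init routing; names are illustrative.
function resolveInitRoute(progressPath: string): string {
  if (fs.existsSync(progressPath)) {
    return "step-01b-continue.md"; // existing learner: resume with dashboard
  }
  // New learner: the step would first create the progress file from the
  // template, then proceed straight to assessment.
  return "step-02-assess.md";
}

// → "step-02-assess.md" when no progress file exists yet
console.log(resolveInitRoute("/tmp/definitely-missing-progress.yaml"));
```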

**step-01b-continue.md:**

- Loads existing progress file
- Updates lastContinued timestamp
- Displays progress dashboard with completion status
- Shows session indicators (✅ completed, 🔄 in-progress, ⬜ not-started)
- Auto-routes to step-03-session-menu (hub)

**Frontmatter Compliance:**

- All variables used in step body
- Relative paths for internal references
- No hardcoded paths
- Follows frontmatter standards

**Next Steps:**

- Build step-02-assess.md (assessment)
- Build step-03-session-menu.md (hub)
- Build 7 session steps (step-04-session-01 through step-04-session-07)
- Build step-05-completion.md (certificate generation)
## Step 02 Build Complete

**Created:** 2026-01-28

**Files:**

- `steps-c/step-02-assess.md` ✓

**Step Configuration:**

- **Type:** Middle Step (Standard), auto-proceed
- **Next Step:** step-03-session-menu
- **Menu Pattern:** Auto-proceed (Pattern 3) - no user menu

**step-02-assess.md:**

- Gathers role (QA/Dev/Lead/VP) with validation
- Gathers experience level (beginner/intermediate/experienced) with validation
- Gathers learning goals (required, validated)
- Gathers pain points (optional)
- Updates progress file with all assessment data
- Provides experience-based session recommendations
- Updates stepsCompleted array with 'step-02-assess'
- Routes to step-03-session-menu (hub)

**Frontmatter Compliance:**

- All variables used in step body
- Relative paths for internal references
- No hardcoded paths
- Follows frontmatter standards

**Remaining Steps:** 9 more to build

- step-03-session-menu (hub with branching)
- step-04-session-01 through step-04-session-07 (7 teaching sessions)
- step-05-completion (certificate generation)
## Step 03 Build Complete

**Created:** 2026-01-28

**Files:**

- `steps-c/step-03-session-menu.md` ✓

**Step Configuration:**

- **Type:** Branch Step (Hub) with custom menu (1-7, X)
- **Routes To:** Any of 7 sessions OR completion OR exit
- **Menu Pattern:** Custom branching (Pattern 4)

**step-03-session-menu.md:**

- Loads progress file to get session completion status
- Displays all 7 sessions with status indicators (✅ completed, 🔄 in-progress, ⬜ not-started)
- Shows completion percentage and scores
- Provides session descriptions and durations
- Recommends next session based on progress
- Detects when all 7 sessions are complete → routes to completion
- Allows non-linear session selection (jump to any session)
- Exit option (X) saves progress and ends workflow
- This is the HUB - all sessions return here
- No stepsCompleted update (routing hub, not a content step)

**Routing Logic:**

- 1-7 → Routes to the corresponding session step
- X → Saves and exits workflow
- All complete → Auto-routes to step-05-completion

**Frontmatter Compliance:**

- All 7 session file references used in routing logic
- Completion file reference used for the all-done scenario
- Progress file loaded for status display
- Relative paths for all step files

**Remaining Steps:** 8 more to build

- step-04-session-01 through step-04-session-07 (7 teaching sessions)
- step-05-completion (certificate generation)
## Step 04-Session-01 Build Complete

**Created:** 2026-01-28

**Files:**

- `steps-c/step-04-session-01.md` ✓

**Step Configuration:**

- **Type:** Middle Step (Complex) with A/P/C menu
- **Session:** Quick Start (30 min)
- **Next Step:** Returns to step-03-session-menu (hub)
- **Menu Pattern:** Standard A/P/C (Pattern 1)

**step-04-session-01.md:**

- Session 1: Quick Start - TEA Lite intro, run automate workflow
- Updates progress (status: in-progress at start, completed at end)
- Teaching content: What is TEA, TEA Lite, Automate workflow, engagement models
- Role-adapted examples (QA/Dev/Lead/VP perspectives)
- 3-question quiz with validation (passing: ≥70%)
- Quiz retry option if failing (<70%)
- Generates session notes using the template with all quiz results
- Updates progress file (status, score, notes_artifact, completion_percentage)
- Updates stepsCompleted array with 'step-04-session-01'
- Returns to session menu hub (step-03)
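
The quiz pass/retry rule described above reduces to a simple threshold check. The function name and shape below are illustrative, not the step's actual wording:

```typescript
// Illustrative sketch of the session quiz gate: 3 questions, pass at >= 70%.
function quizResult(correct: number, total: number): "pass" | "retry" {
  const score = (correct / total) * 100;
  return score >= 70 ? "pass" : "retry";
}

console.log(quizResult(3, 3)); // 100% → "pass"
console.log(quizResult(2, 3)); // ~66.7% → "retry", learner is offered a retry
```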

**Teaching Topics:**

- What is TEA and why it exists
- 9 workflows + 35 knowledge fragments
- Quality standards (Definition of Done)
- Risk-based testing (P0-P3 matrix)
- TEA engagement models (Lite/Solo/Integrated/Enterprise/Brownfield)
- Automate workflow conceptual overview

**TEA Resources Referenced:**

- TEA Overview, TEA Lite Quickstart, Automate Workflow docs
- Online URLs provided for further reading

**Remaining Steps:** 7 more to build

- step-04-session-02 through step-04-session-07 (6 more teaching sessions)
- step-05-completion (certificate generation)
## Step 04-Session-02 Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-04-session-02.md` ✓
**Session:** Core Concepts (45 min) - Testing as Engineering, Risk-based testing (P0-P3), TEA Definition of Done
**Pattern:** Middle Step (Complex) with A/P/C menu, returns to hub
**Teaching:** Philosophy, risk matrix, quality standards with role-adapted examples
**Quiz:** 3 questions on P0-P3, hard waits, self-cleaning tests
**Knowledge Fragments:** test-quality.md, probability-impact.md

**Remaining:** 6 steps (sessions 03-07 + completion)
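
Risk-based priority in the P0-P3 style taught here is typically derived from probability × impact. The authoritative TEA matrix lives in probability-impact.md, so the scales and thresholds below are assumptions for demonstration only:

```typescript
// Illustrative P0-P3 mapping from 1-3 probability and 1-3 impact scales.
// The real TEA matrix is defined in probability-impact.md; these
// thresholds are assumptions, not the official mapping.
function priority(probability: 1 | 2 | 3, impact: 1 | 2 | 3): string {
  const score = probability * impact; // 1..9
  if (score >= 9) return "P0"; // must-test, release blocking
  if (score >= 6) return "P1";
  if (score >= 3) return "P2";
  return "P3"; // nice-to-have coverage
}

console.log(priority(3, 3)); // "P0"
console.log(priority(1, 2)); // "P3"
```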

## Step 04-Session-03 Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-04-session-03.md` ✓
**Session:** Architecture & Patterns (60 min)
**Topics:** Fixture composition, network-first patterns, data factories, step-file architecture
**Knowledge Fragments:** fixture-architecture.md, network-first.md, data-factories.md
**Quiz:** 3 questions on fixtures, network-first, step-file architecture

## Step 04-Session-04 Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-04-session-04.md` ✓
**Session:** Test Design (60 min)
**Topics:** Test Design workflow, risk/testability assessment, coverage planning, test priorities matrix
**Knowledge Fragments:** test-levels-framework.md, test-priorities-matrix.md
**Quiz:** 3 questions on test design, risk calculation, P0 coverage

## Step 04-Session-05 Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-04-session-05.md` ✓
**Session:** ATDD & Automate (60 min)
**Topics:** ATDD workflow (red-green TDD), Automate workflow, component TDD, API testing patterns
**Knowledge Fragments:** component-tdd.md, api-testing-patterns.md, api-request.md
**Quiz:** 3 questions on TDD red phase, ATDD vs Automate, API testing

## Step 04-Session-06 Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-04-session-06.md` ✓
**Session:** Quality & Trace (45 min)
**Topics:** Test Review workflow (5 dimensions), Trace workflow, quality metrics
**Quiz:** 3 questions on quality dimensions, release gates, metrics

## Step 04-Session-07 Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-04-session-07.md` ✓
**Session:** Advanced Patterns (ongoing)
**Format:** Menu-driven exploration of 35 knowledge fragments
**Categories:** Testing Patterns (9), Playwright Utils (11), Config/Governance (6), Quality Frameworks (5), Auth/Security (3)
**No Quiz:** Exploratory session; score recorded as 100 on completion
**Special:** Repeatable - the user can explore multiple fragments, then returns to the hub

## Step 05-Completion Build Complete

**Created:** 2026-01-28
**Files:** `steps-c/step-05-completion.md` ✓
**Type:** Final Step (no next step)
**Purpose:** Verify all 7 sessions complete, generate certificate, final progress update, celebrate
**Certificate:** Includes all session scores, skills acquired, learning artifacts, next steps
**Final:** Updates progress (certificate_generated: true, completion_date)
**No Menu:** Workflow ends here
---

## CREATE Mode Build Complete (12 Steps)

**All CREATE mode steps built:** ✓

1. step-01-init.md - Initialize with continuation detection
2. step-01b-continue.md - Resume with progress dashboard
3. step-02-assess.md - Role/experience assessment
4. step-03-session-menu.md - Session selection hub
5. step-04-session-01.md - Quick Start
6. step-04-session-02.md - Core Concepts
7. step-04-session-03.md - Architecture & Patterns
8. step-04-session-04.md - Test Design
9. step-04-session-05.md - ATDD & Automate
10. step-04-session-06.md - Quality & Trace
11. step-04-session-07.md - Advanced Patterns
12. step-05-completion.md - Certificate generation

**Remaining:**

- Data files (curriculum.yaml, role-paths.yaml, session-content-map.yaml, quiz-questions.yaml, tea-resources-index.yaml)
- EDIT mode steps (2 steps)
- VALIDATE mode steps (1 step)
---

## Data Files Build Complete

**Created:** 2026-01-28

**Files:**

1. `data/curriculum.yaml` ✓ - 7-session structure, learning paths by experience, completion requirements
2. `data/role-paths.yaml` ✓ - Role customizations for QA/Dev/Lead/VP with focus areas and teaching adaptations
3. `data/session-content-map.yaml` ✓ - Maps sessions to TEA docs, knowledge fragments, online URLs, workflows
4. `data/quiz-questions.yaml` ✓ - Question bank for sessions 1-6 (session 7 is exploratory, no quiz)
5. `data/tea-resources-index.yaml` ✓ - Comprehensive index of 32 docs + 35 knowledge fragments with GitHub links

**All 5 data files complete.**
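
As an illustration, `data/curriculum.yaml` might take a shape like the fragment below. Only the 7-session structure, experience-based paths, and completion requirements are stated above; every key name here is a hypothetical sketch:

```yaml
# Hypothetical shape for data/curriculum.yaml - key names are illustrative.
sessions:
  - id: session-01
    title: Quick Start
    duration_minutes: 30
  - id: session-02
    title: Core Concepts
    duration_minutes: 45
learning_paths:
  beginner: [session-01, session-02, session-03, session-04, session-05, session-06, session-07]
  experienced: [session-03, session-04, session-05, session-06, session-07]
completion_requirements:
  sessions_completed: 7
  quiz_passing_score: 70
```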

---

## EDIT Mode Build Complete

**Created:** 2026-01-28

**Files:**

1. `steps-e/step-e-01-assess-workflow.md` ✓ - Identify what to edit, gather edit requirements
2. `steps-e/step-e-02-apply-edits.md` ✓ - Apply modifications with user approval, validate integrity

**Both EDIT mode steps complete.**

---

## VALIDATE Mode Build Complete

**Created:** 2026-01-28

**Files:**

1. `steps-v/step-v-01-validate.md` ✓ - Comprehensive quality validation against BMAD standards; generates validation report

**The single VALIDATE mode step is complete.**

---
## 🏆 WORKFLOW BUILD COMPLETE

**Status:** ✅ 100% COMPLETE

**Total Files Created:** 24 files

### Foundation (6 files)

- workflow.md
- instructions.md
- checklist.md
- workflow-plan-teach-me-testing.md
- (plus 3 templates)

### Templates (3 files)

- progress-template.yaml
- session-notes-template.md
- certificate-template.md

### CREATE Mode (12 step files)

- step-01-init.md
- step-01b-continue.md
- step-02-assess.md
- step-03-session-menu.md
- step-04-session-01.md through step-04-session-07.md (7 sessions)
- step-05-completion.md

### Data Files (5 files)

- curriculum.yaml
- role-paths.yaml
- session-content-map.yaml
- quiz-questions.yaml
- tea-resources-index.yaml

### EDIT Mode (2 step files)

- step-e-01-assess-workflow.md
- step-e-02-apply-edits.md

### VALIDATE Mode (1 step file)

- step-v-01-validate.md
---

## Next Action Required

**DEPLOYMENT:** Move the workflow from staging to the TEA module.

**Source (Staging):**
`{external-project-root}/_bmad-output/bmb-creations/workflows/teach-me-testing/`

**Target (Production):**
`{project-root}/src/workflows/testarch/bmad-teach-me-testing/`

**Command:**

```bash
cp -r {external-project-root}/_bmad-output/bmb-creations/workflows/teach-me-testing \
  {project-root}/src/workflows/testarch/bmad-teach-me-testing
```

**After deployment:**

1. Update the TEA agent menu to add [TMT] Teach Me Testing
2. Test the workflow: `bmad run teach-me-testing`
3. Validate: `bmad run teach-me-testing -v`
4. Document in the TEA module README

---

**Workflow Creation: COMPLETE** ✅
**Ready for Deployment:** YES
**Validation Status:** Not yet validated (run -v mode after deployment)
@@ -0,0 +1,90 @@
---
name: bmad-teach-me-testing
description: 'Teach testing progressively through structured sessions. Use when user says "lets learn testing" or "I want to study test practices"'
web_bundle: true
---

# Teach Me Testing - TEA Academy

**Goal:** Provide self-paced, multi-session learning that teaches testing fundamentals through advanced practices, scalable to entire teams without requiring instructor time.

**Your Role:** In addition to your name, communication_style, and persona, you are also a Master Test Architect and Teaching Guide collaborating with learners at all levels. This is a partnership, not a lecture. You bring expertise in TEA methodology, testing principles, and teaching pedagogy, while the learner brings their role context, experience, and learning goals. Work together to build their testing knowledge progressively.

**Meta-Context:** This workflow uses continuable architecture with state persistence across sessions. Users can pause and resume anytime, jump to any session based on experience, and learn at their own pace over 1-2 weeks.

---
## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture** for disciplined execution:

### Core Principles

- **Micro-file Design**: Each step is a self-contained instruction file, part of an overall workflow that must be followed exactly
- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
- **Sequential Enforcement**: The sequence within the step files must be completed in order; no skipping or optimization allowed
- **State Tracking**: Document progress in the progress file using the `stepsCompleted` array and session tracking
- **Continuable Sessions**: Users can pause after any session and resume later with full context preserved
- **Tri-Modal Structure**: Separate step folders for Create (steps-c/), Edit (steps-e/), and Validate (steps-v/) modes

### Step Processing Rules

1. **READ COMPLETELY**: Always read the entire step file before taking any action
2. **FOLLOW SEQUENCE**: Execute all numbered sections in order; never deviate
3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to the next step when the user selects 'C' (Continue)
5. **SAVE STATE**: Update `stepsCompleted` and session tracking in the progress file before loading the next step
6. **LOAD NEXT**: When directed, load the next step file, read it in full, then execute it

### Critical Rules (NO EXCEPTIONS)

- 🛑 **NEVER** load multiple step files simultaneously
- 📖 **ALWAYS** read the entire step file before execution
- 🚫 **NEVER** skip steps or optimize the sequence
- 💾 **ALWAYS** update the progress file after each session completion
- 🎯 **ALWAYS** follow the exact instructions in the step file
- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps
- ✅ **ALWAYS** communicate in {communication_language}

---
## INITIALIZATION SEQUENCE

### 1. Configuration Loading

Load and read the full config from {project-root}/\_bmad/tea/config.yaml (or the module config if the TEA module is installed) and resolve:

- `project_name`, `user_name`, `communication_language`, `test_artifacts`
- TEA module variables: `test_artifacts` (base output folder for test-related artifacts)

### 2. Mode Determination

**Check if a mode was specified in the command invocation:**

- If the user invoked with "create", "teach", "learn", or "start" → Set mode to **create**
- If the user invoked with "validate", "review", "-v", or "--validate" → Set mode to **validate**
- If the user invoked with "edit", "modify", "-e", or "--edit" → Set mode to **edit**

**If the mode is still unclear, ask the user:**

"Welcome to TEA Academy! What would you like to do?

**[C]reate** - Start learning sessions (new or continue existing progress)
**[V]alidate** - Review workflow quality and generate a validation report
**[E]dit** - Modify workflow content or structure

Please select: [C]reate / [V]alidate / [E]dit"
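
The token-to-mode mapping above can be sketched as a pure function. This is an illustration of the rule, not the actual implementation:

```typescript
// Illustrative sketch of mode determination from invocation tokens.
type Mode = "create" | "validate" | "edit" | "unknown";

function resolveMode(invocation: string): Mode {
  const tokens = invocation.toLowerCase().split(/\s+/);
  const has = (...words: string[]) => words.some((w) => tokens.includes(w));
  if (has("create", "teach", "learn", "start")) return "create";
  if (has("validate", "review", "-v", "--validate")) return "validate";
  if (has("edit", "modify", "-e", "--edit")) return "edit";
  return "unknown"; // fall back to the interactive menu
}

console.log(resolveMode("bmad run teach-me-testing -v")); // "validate"
```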

### 3. Route to First Step

**IF mode == create:**
Load and read the full file, then execute `./steps-c/step-01-init.md` to begin the teaching workflow.

**IF mode == validate:**
Prompt for the workflow path (if validating the workflow itself): "Which workflow would you like to validate?"
Then load and read the full file, then execute `./steps-v/step-v-01-validate.md`

**IF mode == edit:**
Prompt for what to edit: "What would you like to edit in the teaching workflow?"
Then load and read the full file, then execute `./steps-e/step-e-01-assess-workflow.md`
6
_bmad/tea/workflows/testarch/bmad-testarch-atdd/SKILL.md
Normal file
@@ -0,0 +1,6 @@
---
name: bmad-testarch-atdd
description: 'Generate failing acceptance tests using the TDD cycle. Use when the user says "lets write acceptance tests" or "I want to do ATDD"'
---

Follow the instructions in [workflow.md](workflow.md).
@@ -0,0 +1,371 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-atdd'
inputDocuments: []
---

# ATDD Checklist - Epic {epic_num}, Story {story_num}: {story_title}

**Date:** {date}
**Author:** {user_name}
**Primary Test Level:** {primary_level}

---
## Story Summary

{Brief 2-3 sentence summary of the user story}

**As a** {user_role}
**I want** {feature_description}
**So that** {business_value}

---

## Acceptance Criteria

{List all testable acceptance criteria from the story}

1. {Acceptance criterion 1}
2. {Acceptance criterion 2}
3. {Acceptance criterion 3}

---
## Failing Tests Created (RED Phase)

### E2E Tests ({e2e_test_count} tests)

**File:** `{e2e_test_file_path}` ({line_count} lines)

{List each E2E test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

### API Tests ({api_test_count} tests)

**File:** `{api_test_file_path}` ({line_count} lines)

{List each API test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

### Component Tests ({component_test_count} tests)

**File:** `{component_test_file_path}` ({line_count} lines)

{List each component test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

---
## Data Factories Created

{List all data factory files created with their exports}

### {Entity} Factory

**File:** `tests/support/factories/{entity}.factory.ts`

**Exports:**

- `create{Entity}(overrides?)` - Create single entity with optional overrides
- `create{Entity}s(count)` - Create array of entities

**Example Usage:**

```typescript
const user = createUser({ email: 'specific@example.com' });
const users = createUsers(5); // Generate 5 random users
```
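
The real factories use `@faker-js/faker` for random data; the dependency-free sketch below shows the same shape (create with overrides, create many) using sequential values instead of faker:

```typescript
// Dependency-free factory sketch; the actual factories use @faker-js/faker.
type User = { id: number; email: string; name: string };

let seq = 0;

function createUser(overrides: Partial<User> = {}): User {
  seq += 1;
  return {
    id: seq,
    email: `user${seq}@example.com`,
    name: `User ${seq}`,
    ...overrides, // caller-specified fields win over generated defaults
  };
}

function createUsers(count: number): User[] {
  return Array.from({ length: count }, () => createUser());
}

console.log(createUser({ email: "specific@example.com" }).email); // "specific@example.com"
console.log(createUsers(5).length); // 5
```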

---

## Fixtures Created

{List all test fixture files created with their fixture names and descriptions}

### {Feature} Fixtures

**File:** `tests/support/fixtures/{feature}.fixture.ts`

**Fixtures:**

- `{fixtureName}` - {description_of_what_fixture_provides}
  - **Setup:** {what_setup_does}
  - **Provides:** {what_test_receives}
  - **Cleanup:** {what_cleanup_does}

**Example Usage:**

```typescript
import { test } from './fixtures/{feature}.fixture';

test('should do something', async ({ {fixtureName} }) => {
  // {fixtureName} is ready to use with auto-cleanup
});
```

---
## Mock Requirements

{Document external services that need mocking and their requirements}

### {Service Name} Mock

**Endpoint:** `{HTTP_METHOD} {endpoint_url}`

**Success Response:**

```json
{
  {success_response_example}
}
```

**Failure Response:**

```json
{
  {failure_response_example}
}
```

**Notes:** {any_special_mock_requirements}

---
## Required data-testid Attributes

{List all data-testid attributes required in UI implementation for test stability}

### {Page or Component Name}

- `{data-testid-name}` - {description_of_element}
- `{data-testid-name}` - {description_of_element}

**Implementation Example:**

```tsx
<button data-testid="login-button">Log In</button>
<input data-testid="email-input" type="email" />
<div data-testid="error-message">{errorText}</div>
```

---
## Implementation Checklist

{Map each failing test to concrete implementation tasks that will make it pass}

### Test: {test_name_1}

**File:** `{test_file_path}`

**Tasks to make this test pass:**

- [ ] {Implementation task 1}
- [ ] {Implementation task 2}
- [ ] {Implementation task 3}
- [ ] Add required data-testid attributes: {list_of_testids}
- [ ] Run test: `{test_execution_command}`
- [ ] ✅ Test passes (green phase)

**Estimated Effort:** {effort_estimate} hours

---

### Test: {test_name_2}

**File:** `{test_file_path}`

**Tasks to make this test pass:**

- [ ] {Implementation task 1}
- [ ] {Implementation task 2}
- [ ] {Implementation task 3}
- [ ] Add required data-testid attributes: {list_of_testids}
- [ ] Run test: `{test_execution_command}`
- [ ] ✅ Test passes (green phase)

**Estimated Effort:** {effort_estimate} hours

---
## Running Tests

```bash
# Run all failing tests for this story
{test_command_all}

# Run specific test file
{test_command_specific_file}

# Run tests in headed mode (see browser)
{test_command_headed}

# Debug specific test
{test_command_debug}

# Run tests with coverage
{test_command_coverage}
```

---
## Red-Green-Refactor Workflow

### RED Phase (Complete) ✅

**TEA Agent Responsibilities:**

- ✅ All tests written and failing
- ✅ Fixtures and factories created with auto-cleanup
- ✅ Mock requirements documented
- ✅ data-testid requirements listed
- ✅ Implementation checklist created

**Verification:**

- All tests run and fail as expected
- Failure messages are clear and actionable
- Tests fail due to missing implementation, not test bugs

---

### GREEN Phase (DEV Team - Next Steps)

**DEV Agent Responsibilities:**

1. **Pick one failing test** from the implementation checklist (start with the highest priority)
2. **Read the test** to understand the expected behavior
3. **Implement minimal code** to make that specific test pass
4. **Run the test** to verify it now passes (green)
5. **Check off the task** in the implementation checklist
6. **Move to the next test** and repeat

**Key Principles:**

- One test at a time (don't try to fix all at once)
- Minimal implementation (don't over-engineer)
- Run tests frequently (immediate feedback)
- Use the implementation checklist as a roadmap

**Progress Tracking:**

- Check off tasks as you complete them
- Share progress in daily standup

---
### REFACTOR Phase (DEV Team - After All Tests Pass)

**DEV Agent Responsibilities:**

1. **Verify all tests pass** (green phase complete)
2. **Review code for quality** (readability, maintainability, performance)
3. **Extract duplications** (DRY principle)
4. **Optimize performance** (if needed)
5. **Ensure tests still pass** after each refactor
6. **Update documentation** (if API contracts change)

**Key Principles:**

- Tests provide a safety net (refactor with confidence)
- Make small refactors (easier to debug if tests fail)
- Run tests after each change
- Don't change test behavior (only the implementation)

**Completion:**

- All tests pass
- Code quality meets team standards
- No duplications or code smells
- Ready for code review and story approval

---
## Next Steps

1. **Share this checklist and the failing tests** with the dev workflow (manual handoff)
2. **Review this checklist** with the team in standup or planning
3. **Run the failing tests** to confirm the RED phase: `{test_command_all}`
4. **Begin implementation** using the implementation checklist as a guide
5. **Work one test at a time** (red → green for each)
6. **Share progress** in daily standup
7. **When all tests pass**, refactor the code for quality
8. **When refactoring is complete**, manually update the story status to 'done' in sprint-status.yaml

---

## Knowledge Base References Applied

This ATDD workflow consulted the following knowledge fragments:

- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's `test.extend()`
- **data-factories.md** - Factory patterns using `@faker-js/faker` for random test data generation with overrides support
- **component-tdd.md** - Component test strategies using Playwright Component Testing
- **network-first.md** - Route interception patterns (intercept BEFORE navigation to prevent race conditions)
- **test-quality.md** - Test design principles (Given-When-Then, one assertion per test, determinism, isolation)
- **test-levels-framework.md** - Test level selection framework (E2E vs API vs Component vs Unit)

See `tea-index.csv` for the complete knowledge fragment mapping.

---
## Test Execution Evidence

### Initial Test Run (RED Phase Verification)

**Command:** `{test_command_all}`

**Results:**

```
{paste_test_run_output_showing_all_tests_failing}
```

**Summary:**

- Total tests: {total_test_count}
- Passing: 0 (expected)
- Failing: {total_test_count} (expected)
- Status: ✅ RED phase verified

**Expected Failure Messages:**
{list_expected_failure_messages_for_each_test}

---
## Notes

{Any additional notes, context, or special considerations for this story}

- {Note 1}
- {Note 2}
- {Note 3}

---

## Contact

**Questions or Issues?**

- Ask in team standup
- Tag @{tea_agent_username} in Slack/Discord
- Refer to `./bmm/docs/tea-README.md` for workflow documentation
- Consult `./bmm/testarch/knowledge` for testing best practices

---

**Generated by BMad TEA Agent** - {date}

@@ -0,0 +1 @@
type: skill

374
_bmad/tea/workflows/testarch/bmad-testarch-atdd/checklist.md
Normal file
@@ -0,0 +1,374 @@
# ATDD Workflow Validation Checklist

Use this checklist to validate that the ATDD workflow has been executed correctly and all deliverables meet quality standards.

## Prerequisites

Before starting this workflow, verify:

- [ ] Story approved with clear acceptance criteria (AC must be testable)
- [ ] Development sandbox/environment ready
- [ ] Framework scaffolding exists (run `framework` workflow if missing)
- [ ] Test framework configuration available (playwright.config.ts or cypress.config.ts)
- [ ] Package.json has test dependencies installed (Playwright or Cypress)

**Halt if missing:** Framework scaffolding or story acceptance criteria

---

## Step 1: Story Context and Requirements

- [ ] Story markdown file loaded and parsed successfully
- [ ] All acceptance criteria identified and extracted
- [ ] Affected systems and components identified
- [ ] Technical constraints documented
- [ ] Framework configuration loaded (playwright.config.ts or cypress.config.ts)
- [ ] Test directory structure identified from config
- [ ] Existing fixture patterns reviewed for consistency
- [ ] Similar test patterns searched and found in `{test_dir}`
- [ ] Knowledge base fragments loaded:
  - [ ] `fixture-architecture.md`
  - [ ] `data-factories.md`
  - [ ] `component-tdd.md`
  - [ ] `network-first.md`
  - [ ] `test-quality.md`

---

## Step 2: Test Level Selection and Strategy

- [ ] Each acceptance criterion analyzed for appropriate test level
- [ ] Test level selection framework applied (E2E vs API vs Component vs Unit)
- [ ] E2E tests: Critical user journeys and multi-system integration identified
- [ ] API tests: Business logic and service contracts identified
- [ ] Component tests: UI component behavior and interactions identified
- [ ] Unit tests: Pure logic and edge cases identified (if applicable)
- [ ] Duplicate coverage avoided (same behavior not tested at multiple levels unnecessarily)
- [ ] Tests prioritized using P0-P3 framework (if test-design document exists)
- [ ] Primary test level set in `primary_level` variable (typically E2E or API)
- [ ] Test levels documented in ATDD checklist

---

## Step 3: Failing Tests Generated

### Test File Structure Created

- [ ] Test files organized in appropriate directories:
  - [ ] `tests/e2e/` for end-to-end tests
  - [ ] `tests/api/` for API tests
  - [ ] `tests/component/` for component tests
  - [ ] `tests/support/` for infrastructure (fixtures, factories, helpers)

### E2E Tests (If Applicable)

- [ ] E2E test files created in `tests/e2e/`
- [ ] All tests follow Given-When-Then format
- [ ] Tests use `data-testid` selectors (not CSS classes or fragile selectors)
- [ ] One assertion per test (atomic test design)
- [ ] No hard waits or sleeps (explicit waits only)
- [ ] Network-first pattern applied (route interception BEFORE navigation)
- [ ] Tests fail initially (RED phase verified by local test run)
- [ ] Failure messages are clear and actionable

### API Tests (If Applicable)

- [ ] API test files created in `tests/api/`
- [ ] Tests follow Given-When-Then format
- [ ] API contracts validated (request/response structure)
- [ ] HTTP status codes verified
- [ ] Response body validation includes all required fields
- [ ] Error cases tested (400, 401, 403, 404, 500)
- [ ] Tests fail initially (RED phase verified)

### Component Tests (If Applicable)

- [ ] Component test files created in `tests/component/`
- [ ] Tests follow Given-When-Then format
- [ ] Component mounting works correctly
- [ ] Interaction testing covers user actions (click, hover, keyboard)
- [ ] State management within component validated
- [ ] Props and events tested
- [ ] Tests fail initially (RED phase verified)

### Test Quality Validation

- [ ] All tests use Given-When-Then structure with clear comments
- [ ] All tests have descriptive names explaining what they test
- [ ] No duplicate tests (same behavior tested multiple times)
- [ ] No flaky patterns (race conditions, timing issues)
- [ ] No test interdependencies (tests can run in any order)
- [ ] Tests are deterministic (same input always produces same result)

---

## Step 4: Data Infrastructure Built

### Data Factories Created

- [ ] Factory files created in `tests/support/factories/`
- [ ] All factories use `@faker-js/faker` for random data generation (no hardcoded values)
- [ ] Factories support overrides for specific test scenarios
- [ ] Factories generate complete valid objects matching API contracts
- [ ] Helper functions for bulk creation provided (e.g., `createUsers(count)`)
- [ ] Factory exports are properly typed (TypeScript)
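
The factory pattern above can be sketched as follows. This is a minimal illustration, not code from the workflow: the `User` shape is hypothetical, and Node's built-in `crypto.randomUUID()` stands in for `@faker-js/faker` so the sketch runs without dependencies (real factories in this workflow use faker).

```typescript
// user-factory.ts — sketch of a typed factory with overrides and a bulk helper.
// The User shape is illustrative; real factories use @faker-js/faker for data.
import { randomUUID } from "node:crypto";

export interface User {
  id: string;
  email: string;
  name: string;
  role: "admin" | "member";
}

// Generates a complete, valid User; callers override only what a scenario needs.
export function createUser(overrides: Partial<User> = {}): User {
  const id = randomUUID();
  return {
    id,
    email: `user-${id.slice(0, 8)}@example.com`,
    name: `User ${id.slice(0, 8)}`,
    role: "member",
    ...overrides,
  };
}

// Bulk helper, as the checklist requires (e.g. createUsers(count)).
export function createUsers(count: number, overrides: Partial<User> = {}): User[] {
  return Array.from({ length: count }, () => createUser(overrides));
}
```

A test that needs an admin simply calls `createUser({ role: "admin" })`; everything else stays random and valid.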

### Test Fixtures Created

- [ ] Fixture files created in `tests/support/fixtures/`
- [ ] All fixtures use Playwright's `test.extend()` pattern
- [ ] Fixtures have setup phase (arrange test preconditions)
- [ ] Fixtures provide data to tests via `await use(data)`
- [ ] Fixtures have teardown phase with auto-cleanup (delete created data)
- [ ] Fixtures are composable (can use other fixtures if needed)
- [ ] Fixtures are isolated (each test gets fresh data)
- [ ] Fixtures are type-safe (TypeScript types defined)
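
The fixture lifecycle above (setup → `await use(data)` → auto-cleanup) can be sketched like this. The endpoints and the inline `createUser` stand-in are hypothetical; a real fixture would import the project's factory and API client.

```typescript
// seeded-user.fixture.ts — sketch of a test.extend() fixture with auto-cleanup.
import { test as base, expect } from "@playwright/test";

interface User { id: string; email: string; }

// Stand-in factory; real code imports it from tests/support/factories/.
const createUser = (): User => {
  const id = crypto.randomUUID();
  return { id, email: `user-${id.slice(0, 8)}@example.com` };
};

export const test = base.extend<{ seededUser: User }>({
  seededUser: async ({ request }, use) => {
    const user = createUser();
    await request.post("/api/users", { data: user }); // Setup (endpoint illustrative)
    await use(user);                                  // Test body runs here
    await request.delete(`/api/users/${user.id}`);    // Teardown: no leaked state
  },
});

// Usage: each test receives a fresh, isolated user.
test("seeded user can be fetched", async ({ request, seededUser }) => {
  const res = await request.get(`/api/users/${seededUser.id}`);
  expect(res.ok()).toBeTruthy();
});
```

Because cleanup sits after `use()`, it runs even when the test fails, which is what makes the isolation checklist items above hold.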

### Mock Requirements Documented

- [ ] External service mocking requirements identified
- [ ] Mock endpoints documented with URLs and methods
- [ ] Success response examples provided
- [ ] Failure response examples provided
- [ ] Mock requirements documented in ATDD checklist for DEV team

### data-testid Requirements Listed

- [ ] All required data-testid attributes identified from E2E tests
- [ ] data-testid list organized by page or component
- [ ] Each data-testid has clear description of element it targets
- [ ] data-testid list included in ATDD checklist for DEV team

---

## Step 5: Implementation Checklist Created

- [ ] Implementation checklist created with clear structure
- [ ] Each failing test mapped to concrete implementation tasks
- [ ] Tasks include:
  - [ ] Route/component creation
  - [ ] Business logic implementation
  - [ ] API integration
  - [ ] data-testid attribute additions
  - [ ] Error handling
  - [ ] Test execution command
  - [ ] Completion checkbox
- [ ] Red-Green-Refactor workflow documented in checklist
- [ ] RED phase marked as complete (TEA responsibility)
- [ ] GREEN phase tasks listed for DEV team
- [ ] REFACTOR phase guidance provided
- [ ] Execution commands provided:
  - [ ] Run all tests: `npm run test:e2e`
  - [ ] Run specific test file
  - [ ] Run in headed mode
  - [ ] Debug specific test
- [ ] Estimated effort included (hours or story points)

---

## Step 6: Deliverables Generated

### ATDD Checklist Document Created

- [ ] Output file created at `{test_artifacts}/atdd-checklist-{story_id}.md`
- [ ] Document follows template structure from `atdd-checklist-template.md`
- [ ] Document includes all required sections:
  - [ ] Story summary
  - [ ] Acceptance criteria breakdown
  - [ ] Failing tests created (paths and line counts)
  - [ ] Data factories created
  - [ ] Fixtures created
  - [ ] Mock requirements
  - [ ] Required data-testid attributes
  - [ ] Implementation checklist
  - [ ] Red-green-refactor workflow
  - [ ] Execution commands
  - [ ] Next steps for DEV team
- [ ] Output shared with DEV workflow (manual handoff; not auto-consumed)

### All Tests Verified to Fail (RED Phase)

- [ ] Full test suite run locally before finalizing
- [ ] All tests fail as expected (RED phase confirmed)
- [ ] No tests passing before implementation (if passing, test is invalid)
- [ ] Failure messages documented in ATDD checklist
- [ ] Failures are due to missing implementation, not test bugs
- [ ] Test run output captured for reference

### Summary Provided

- [ ] Summary includes:
  - [ ] Story ID
  - [ ] Primary test level
  - [ ] Test counts (E2E, API, Component)
  - [ ] Test file paths
  - [ ] Factory count
  - [ ] Fixture count
  - [ ] Mock requirements count
  - [ ] data-testid count
  - [ ] Implementation task count
  - [ ] Estimated effort
  - [ ] Next steps for DEV team
  - [ ] Output file path
  - [ ] Knowledge base references applied

---

## Quality Checks

### Test Design Quality

- [ ] Tests are readable (clear Given-When-Then structure)
- [ ] Tests are maintainable (use factories and fixtures, not hardcoded data)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Tests are deterministic (no race conditions or flaky patterns)
- [ ] Tests are atomic (one assertion per test)
- [ ] Tests are fast (no unnecessary waits or delays)

### Knowledge Base Integration

- [ ] fixture-architecture.md patterns applied to all fixtures
- [ ] data-factories.md patterns applied to all factories
- [ ] network-first.md patterns applied to E2E tests with network requests
- [ ] component-tdd.md patterns applied to component tests
- [ ] test-quality.md principles applied to all test design

### Code Quality

- [ ] All TypeScript types are correct and complete
- [ ] No linting errors in generated test files
- [ ] Consistent naming conventions followed
- [ ] Imports are organized and correct
- [ ] Code follows project style guide

---
## Integration Points

### With DEV Agent

- [ ] ATDD checklist provides clear implementation guidance
- [ ] Implementation tasks are granular and actionable
- [ ] data-testid requirements are complete and clear
- [ ] Mock requirements include all necessary details
- [ ] Execution commands work correctly

### With Story Workflow

- [ ] Story ID correctly referenced in output files
- [ ] Acceptance criteria from story accurately reflected in tests
- [ ] Technical constraints from story considered in test design

### With Framework Workflow

- [ ] Test framework configuration correctly detected and used
- [ ] Directory structure matches framework setup
- [ ] Fixtures and helpers follow established patterns
- [ ] Naming conventions consistent with framework standards

### With test-design Workflow (If Available)

- [ ] P0 scenarios from test-design prioritized in ATDD
- [ ] Risk assessment from test-design considered in test coverage
- [ ] Coverage strategy from test-design aligned with ATDD tests

---

## Completion Criteria

All of the following must be true before marking this workflow as complete:

- [ ] **Story acceptance criteria analyzed** and mapped to appropriate test levels
- [ ] **Failing tests created** at all appropriate levels (E2E, API, Component)
- [ ] **Given-When-Then format** used consistently across all tests
- [ ] **RED phase verified** by local test run (all tests failing as expected)
- [ ] **Network-first pattern** applied to E2E tests with network requests
- [ ] **Data factories created** using faker (no hardcoded test data)
- [ ] **Fixtures created** with auto-cleanup in teardown
- [ ] **Mock requirements documented** for external services
- [ ] **data-testid attributes listed** for DEV team
- [ ] **Implementation checklist created** mapping tests to code tasks
- [ ] **Red-green-refactor workflow documented** in ATDD checklist
- [ ] **Execution commands provided** and verified to work
- [ ] **ATDD checklist document created** and saved to correct location
- [ ] **Output file formatted correctly** using template structure
- [ ] **Knowledge base references applied** and documented in summary
- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data)

---
## Common Issues and Resolutions

### Issue: Tests pass before implementation

**Problem:** A test passes even though no implementation code exists yet.

**Resolution:**

- Review test to ensure it's testing actual behavior, not mocked/stubbed behavior
- Check if test is accidentally using existing functionality
- Verify test assertions are correct and meaningful
- Rewrite test to fail until implementation is complete

### Issue: Network-first pattern not applied

**Problem:** Route interception happens after navigation, causing race conditions.

**Resolution:**

- Move `await page.route()` calls BEFORE `await page.goto()`
- Review `network-first.md` knowledge fragment
- Update all E2E tests to follow network-first pattern
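
A minimal sketch of the corrected ordering, assuming a hypothetical `/api/users` endpoint and `user-row` test id (neither is from the workflow):

```typescript
// network-first.spec.ts — stub the route BEFORE navigating, so the stub is
// already registered when the page's first request fires (no race).
import { test, expect } from "@playwright/test";

test("user list renders data from the stubbed API", async ({ page }) => {
  // Given: the API is intercepted before any navigation happens
  await page.route("**/api/users", (route) =>
    route.fulfill({ status: 200, json: [{ id: "u1", name: "Ada" }] }),
  );
  // When: the page loads (its fetch now deterministically hits the stub)
  await page.goto("/users");
  // Then: exactly the stubbed data is rendered
  await expect(page.getByTestId("user-row")).toHaveCount(1);
});
```

Had the `page.route()` call come after `page.goto()`, the page's initial fetch could reach the real backend first, making the test flaky.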

### Issue: Hardcoded test data in tests

**Problem:** Tests use hardcoded strings/numbers instead of factories.

**Resolution:**

- Replace all hardcoded data with factory function calls
- Use `faker` for all random data generation
- Update data-factories to support all required test scenarios

### Issue: Fixtures missing auto-cleanup

**Problem:** Fixtures create data but don't clean it up in teardown.

**Resolution:**

- Add cleanup logic after `await use(data)` in fixture
- Call deletion/cleanup functions in teardown
- Verify cleanup works by checking database/storage after test run

### Issue: Tests have multiple assertions

**Problem:** Tests verify multiple behaviors in single test (not atomic).

**Resolution:**

- Split into separate tests (one assertion per test)
- Each test should verify exactly one behavior
- Use descriptive test names to clarify what each test verifies

### Issue: Tests depend on execution order

**Problem:** Tests fail when run in isolation or different order.

**Resolution:**

- Remove shared state between tests
- Each test should create its own test data
- Use fixtures for consistent setup across tests
- Verify tests can run with `.only` flag

---

## Notes for TEA Agent

- **Preflight halt is critical:** Do not proceed if story has no acceptance criteria or framework is missing
- **RED phase verification is mandatory:** Tests must fail before sharing with DEV team
- **Network-first pattern:** Route interception BEFORE navigation prevents race conditions
- **One assertion per test:** Atomic tests provide clear failure diagnosis
- **Auto-cleanup is non-negotiable:** Every fixture must clean up data in teardown
- **Use knowledge base:** Load relevant fragments (fixture-architecture, data-factories, network-first, component-tdd, test-quality) for guidance
- **Share with DEV agent:** ATDD checklist provides implementation roadmap from red to green

@@ -0,0 +1,45 @@
<!-- Powered by BMAD-CORE™ -->

# Acceptance Test-Driven Development (ATDD)

**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-atdd`
**Version**: 5.0 (Step-File Architecture)

---

## Overview

Generates **failing acceptance tests** before implementation (TDD red phase), plus an implementation checklist. Produces tests at appropriate levels (E2E/API/Component) with supporting fixtures and helpers.

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture**:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `test_dir`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-preflight-and-context.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress tracking frontmatter and routes to the next incomplete step.

@@ -0,0 +1,226 @@
---
name: 'step-01-preflight-and-context'
description: 'Verify prerequisites and load story, framework, and knowledge base'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
nextStepFile: './step-02-generation-mode.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
---

# Step 1: Preflight & Context Loading

## STEP GOAL

Verify prerequisites and load all required inputs before generating failing tests.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if requirements are missing

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Stack Detection

**Read `config.test_stack_type`** from `{config_source}`.

**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):

- Scan `{project-root}` for project manifests:
  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
  - **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- Explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)

Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
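
The detection algorithm above can be sketched as a simple manifest scan. This is an illustrative reduction: it checks file existence only, whereas the real workflow also inspects `package.json` dependencies, and the manifest lists here are abbreviated.

```typescript
// detect-stack.ts — sketch of the auto-detection algorithm (existence checks only).
import { existsSync } from "node:fs";
import { join } from "node:path";

type Stack = "frontend" | "backend" | "fullstack" | "unknown";

// Abbreviated indicator lists; see the algorithm above for the full set.
const FRONTEND_MANIFESTS = ["package.json", "playwright.config.ts", "vite.config.ts"];
const BACKEND_MANIFESTS = ["pyproject.toml", "pom.xml", "go.mod", "Gemfile", "Cargo.toml"];

export function detectStack(projectRoot: string): Stack {
  const hasAny = (names: string[]) => names.some((n) => existsSync(join(projectRoot, n)));
  const frontend = hasAny(FRONTEND_MANIFESTS);
  const backend = hasAny(BACKEND_MANIFESTS);
  if (frontend && backend) return "fullstack"; // both present
  if (frontend) return "frontend";
  if (backend) return "backend";
  return "unknown"; // caller should fall back to asking the user
}
```

An explicit `test_stack_type` config value would short-circuit this function entirely, per the override rule above.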

---

## 2. Prerequisites (Hard Requirements)

- Story approved with **clear acceptance criteria**
- Test framework configured:
  - **If {detected_stack} is `frontend` or `fullstack`:** `playwright.config.ts` or `cypress.config.ts`
  - **If {detected_stack} is `backend`:** relevant test config exists (e.g., `conftest.py`, `src/test/`, `*_test.go`, `.rspec`)
- Development environment available

If any are missing: **HALT** and notify the user.

---

## 3. Load Story Context

- Read story markdown from `{story_file}` (or ask user if not provided)
- Extract acceptance criteria and constraints
- Identify affected components and integrations

---

## 4. Load Framework & Existing Patterns

- Read framework config
- Inspect `{test_dir}` for existing test patterns, fixtures, helpers

## 4.5 Read TEA Config Flags

From `{config_source}`:

- `tea_use_playwright_utils`
- `tea_use_pactjs_utils`
- `tea_pact_mcp`
- `tea_browser_automation`
- `test_stack_type`

---

### Tiered Knowledge Loading

Load fragments based on their `tier` classification in `tea-index.csv`:

1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)

> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.

### Playwright Utils Loading Profiles

**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:

- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)

- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
  Load: all Playwright Utils core fragments (~4,500 lines)

**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none found, use API-only profile.

### Pact.js Utils Loading

**If `tea_use_pactjs_utils` is enabled** (and `{detected_stack}` is `backend` or `fullstack`, or microservices indicators detected):

Load: `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**If `tea_use_pactjs_utils` is disabled** but contract testing is relevant:

Load: `contract-testing.md`

### Pact MCP Loading

**If `tea_pact_mcp` is `"mcp"`:**

Load: `pact-mcp.md`
## 5. Load Knowledge Base Fragments

Use `{knowledgeIndex}` to load:

**Core (always):**

- `data-factories.md`
- `component-tdd.md`
- `test-quality.md`
- `test-healing-patterns.md`

**If {detected_stack} is `frontend` or `fullstack`:**

- `selector-resilience.md`
- `timing-debugging.md`

**Playwright Utils (if enabled and {detected_stack} is `frontend` or `fullstack`):**

- `overview.md`, `api-request.md`, `network-recorder.md`, `auth-session.md`, `intercept-network-call.md`, `recurse.md`, `log.md`, `file-utils.md`, `network-error-monitor.md`, `fixtures-composition.md`

**Playwright CLI (if tea_browser_automation is "cli" or "auto" and {detected_stack} is `frontend` or `fullstack`):**

- `playwright-cli.md`

**MCP Patterns (if tea_browser_automation is "mcp" or "auto" and {detected_stack} is `frontend` or `fullstack`):**

- (existing MCP-related fragments, if any are added in future)

**Traditional Patterns (if utils disabled and {detected_stack} is `frontend` or `fullstack`):**

- `fixture-architecture.md`
- `network-first.md`

**Backend Patterns (if {detected_stack} is `backend` or `fullstack`):**

- `test-levels-framework.md`
- `test-priorities-matrix.md`
- `ci-burn-in.md`

**Pact.js Utils (if enabled):**

- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**Contract Testing (if pactjs-utils disabled but relevant):**

- `contract-testing.md`

**Pact MCP (if tea_pact_mcp is "mcp"):**

- `pact-mcp.md`

---

## 6. Confirm Inputs

Summarize loaded inputs and confirm with the user. Then proceed.

---

## 7. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-01-preflight-and-context']
  lastStep: 'step-01-preflight-and-context'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-preflight-and-context'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-preflight-and-context'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,96 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Output Document

Read `{outputFile}` and parse YAML frontmatter for:

- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save
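
The frontmatter parse above can be sketched as follows. A real implementation would likely use a YAML parser; a regex is enough for the three flat fields this workflow writes, and is all this illustration assumes.

```typescript
// resume-progress.ts — minimal sketch of reading the progress frontmatter.
interface Progress {
  stepsCompleted: string[];
  lastStep: string | null;
  lastSaved: string | null;
}

export function parseProgress(markdown: string): Progress | null {
  // Frontmatter = everything between the leading '---' fence pair.
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return null; // no frontmatter → nothing to resume from
  const body = fm[1];
  const field = (name: string) =>
    body.match(new RegExp(`${name}:\\s*'([^']*)'`))?.[1] ?? null;
  const steps = body.match(/stepsCompleted:\s*\[([^\]]*)\]/)?.[1] ?? "";
  return {
    stepsCompleted: steps
      .split(",")
      .map((s) => s.trim().replace(/^'|'$/g, ""))
      .filter(Boolean),
    lastStep: field("lastStep"),
    lastSaved: field("lastSaved"),
  };
}
```

A `null` result maps directly onto the "No previous progress found" halt below.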

**If `{outputFile}` does not exist**, display:

"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Display Progress Dashboard

Display progress with ✅/⬜ indicators:

1. ✅/⬜ Preflight & Context (step-01-preflight-and-context)
2. ✅/⬜ Generation Mode (step-02-generation-mode)
3. ✅/⬜ Test Strategy (step-03-test-strategy)
4. ✅/⬜ Generate Tests + Aggregate (step-04c-aggregate)
5. ✅/⬜ Validate & Complete (step-05-validate-and-complete)

---

### 3. Route to Next Step

Based on `lastStep`, load the next incomplete step:

- `'step-01-preflight-and-context'` → load `./step-02-generation-mode.md`
- `'step-02-generation-mode'` → load `./step-03-test-strategy.md`
- `'step-03-test-strategy'` → load `./step-04-generate-tests.md`
- `'step-04c-aggregate'` → load `./step-05-validate-and-complete.md`
- `'step-05-validate-and-complete'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.
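
The routing table above can be expressed as data, so the "already complete" and "unknown state" branches fall out of a single lookup. This sketch mirrors the mapping exactly; only the function wrapper is added.

```typescript
// next-step.ts — routing table from the list above, as a lookup.
const NEXT_STEP: Record<string, string | null> = {
  "step-01-preflight-and-context": "./step-02-generation-mode.md",
  "step-02-generation-mode": "./step-03-test-strategy.md",
  "step-03-test-strategy": "./step-04-generate-tests.md",
  "step-04c-aggregate": "./step-05-validate-and-complete.md",
  "step-05-validate-and-complete": null, // workflow already complete
};

// Returns the next step file, null when complete, undefined for unknown state.
export function nextStep(lastStep: string): string | null | undefined {
  return NEXT_STEP[lastStep];
}
```

`undefined` corresponds to the "Unknown progress state" halt described below, and `null` to the "already complete" message.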
|
||||
|
||||
**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.
|
||||
|
||||
**Otherwise**, load the identified step file, read completely, and execute.
|
||||
|
||||
The existing content in `{outputFile}` provides context from previously completed steps.
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Output document loaded and parsed correctly
|
||||
- Progress dashboard displayed accurately
|
||||
- Routed to correct next step
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- Not loading output document
|
||||
- Incorrect progress display
|
||||
- Routing to wrong step
|
||||
|
||||
**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.
|
||||
@@ -0,0 +1,125 @@
---
name: 'step-02-generation-mode'
description: 'Choose AI generation or recording mode'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
nextStepFile: './step-03-test-strategy.md'
---

# Step 2: Generation Mode Selection

## STEP GOAL

Choose the appropriate generation mode for ATDD tests.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Default Mode: AI Generation

Use AI generation when:

- Acceptance criteria are clear
- Scenarios are standard (CRUD, auth, API, navigation)
- **If {detected_stack} is `backend`:** Always use AI generation (no browser recording needed)

Proceed directly to test strategy if this applies.

---

## 2. Optional Mode: Recording (Complex UI)

**Skip this section entirely if {detected_stack} is `backend`.** For backend projects, use AI generation from API documentation, OpenAPI/Swagger specs, or source code analysis instead.

**If {detected_stack} is `frontend` or `fullstack`:**

Use recording when UI interactions need live browser verification.

**Tool selection based on `config.tea_browser_automation`:**

If `auto`:

> **Note:** `${timestamp}` is a placeholder the agent should replace with a unique value (e.g., epoch seconds) for session isolation.

- **Simple recording** (snapshot selectors, capture structure): Use CLI
  - `playwright-cli -s=tea-atdd-${timestamp} open <url>` → `playwright-cli -s=tea-atdd-${timestamp} snapshot` → extract refs
- **Complex recording** (drag/drop, wizards, multi-step state): Use MCP
  - Full browser automation with rich tool semantics
- **Fallback:** If preferred tool unavailable, use the other; if neither, skip recording

If `cli`:

- Use Playwright CLI for all recording
- `playwright-cli -s=tea-atdd-${timestamp} open <url>`, `snapshot`, `screenshot`, `click <ref>`, etc.

If `mcp`:

- Use Playwright MCP tools for all recording (current behavior)
- Confirm MCP availability, record selectors and interactions

If `none`:

- Skip recording mode entirely; use AI generation from documentation
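The tool-selection rules above can be sketched as a small resolver. This is illustrative only; the `available` flags are assumptions standing in for runtime probes of the CLI and MCP tools, and the `complexity` label comes from the agent's own judgment of the recording task:

```javascript
// Resolve the recording tool from config.tea_browser_automation.
// `available` flags are assumptions standing in for runtime probes.
function resolveRecordingTool(mode, complexity, available) {
  if (mode === 'none') return 'skip'; // AI generation from documentation only
  if (mode === 'cli') return available.cli ? 'cli' : 'skip'; // no MCP fallback
  if (mode === 'mcp') return available.mcp ? 'mcp' : 'skip';
  // 'auto': prefer CLI for simple capture, MCP for complex flows,
  // fall back to the other tool, else skip recording entirely.
  const preferred = complexity === 'complex' ? 'mcp' : 'cli';
  const fallback = preferred === 'mcp' ? 'cli' : 'mcp';
  if (available[preferred]) return preferred;
  if (available[fallback]) return fallback;
  return 'skip';
}
```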

---

## 3. Confirm Mode

State the chosen mode and why. Then proceed.

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-02-generation-mode']
lastStep: 'step-02-generation-mode'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-generation-mode'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-generation-mode'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.
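The update rules above can be sketched as a frontmatter upsert. This is a minimal sketch with illustrative names; a real implementation would also parse and re-serialize the YAML block:

```javascript
// Upsert progress-tracking frontmatter fields for a completed step.
function saveProgress(frontmatter, stepName, date) {
  if (!frontmatter) {
    // First save: create the tracking fields from scratch.
    return { stepsCompleted: [stepName], lastStep: stepName, lastSaved: date };
  }
  const stepsCompleted = frontmatter.stepsCompleted.includes(stepName)
    ? frontmatter.stepsCompleted // only add if not already present
    : [...frontmatter.stepsCompleted, stepName];
  return { ...frontmatter, stepsCompleted, lastStep: stepName, lastSaved: date };
}
```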

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,110 @@
---
name: 'step-03-test-strategy'
description: 'Map acceptance criteria to test levels and priorities'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
nextStepFile: './step-04-generate-tests.md'
---

# Step 3: Test Strategy

## STEP GOAL

Translate acceptance criteria into a prioritized, level-appropriate test plan.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Avoid duplicate coverage across levels

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Map Acceptance Criteria

- Convert each acceptance criterion into test scenarios
- Include negative and edge cases where risk is high

---

## 2. Select Test Levels

Choose the best level per scenario based on `{detected_stack}`:

**If {detected_stack} is `frontend` or `fullstack`:**

- **E2E** for critical user journeys
- **API** for business logic and service contracts
- **Component** for UI behavior

**If {detected_stack} is `backend` or `fullstack`:**

- **Unit** for pure functions, business logic, and edge cases
- **Integration** for service interactions, database queries, and middleware
- **API/Contract** for endpoint validation, request/response schemas, and Pact contracts
- **No E2E** for pure backend projects (no browser-based testing needed)
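The level selection above can be sketched as a lookup keyed by the detected stack. The scenario kinds and level names below are illustrative, not part of the workflow contract, and real selection also weighs risk and cost:

```javascript
// Candidate test levels per scenario kind, keyed by detected stack.
function candidateLevels(stack, scenarioKind) {
  const frontend = { journey: ['e2e'], logic: ['api'], ui: ['component'] };
  const backend = { logic: ['unit'], integration: ['integration'], contract: ['api-contract'] };
  if (stack === 'frontend') return frontend[scenarioKind] || [];
  if (stack === 'backend') return backend[scenarioKind] || []; // never 'e2e'
  // fullstack: union of both maps for the scenario kind
  return [...(frontend[scenarioKind] || []), ...(backend[scenarioKind] || [])];
}
```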

---

## 3. Prioritize Tests

Assign P0–P3 priorities using risk and business impact.
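One way to make the assignment deterministic is a risk score from probability and impact on a 1–3 scale. The thresholds below are assumptions to tune per project, not part of the workflow:

```javascript
// Priority from risk probability (1-3) and business impact (1-3).
function assignPriority(probability, impact) {
  const score = probability * impact; // 1..9
  if (score >= 6) return 'P0'; // must-pass, blocks release
  if (score >= 4) return 'P1';
  if (score >= 2) return 'P2';
  return 'P3'; // run when time permits
}
```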

---

## 4. Confirm Red Phase Requirements

Ensure all tests are designed to **fail before implementation** (TDD red phase).

---

## 5. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-03-test-strategy']
lastStep: 'step-03-test-strategy'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03-test-strategy'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03-test-strategy'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,334 @@
---
name: 'step-04-generate-tests'
description: 'Orchestrate adaptive FAILING test generation (TDD red phase)'
nextStepFile: './step-04c-aggregate.md'
---

# Step 4: Orchestrate Adaptive FAILING Test Generation

## STEP GOAL

Select execution mode deterministically, then generate FAILING API and E2E tests (TDD RED PHASE) with consistent output contracts across agent-team, subagent, or sequential execution.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Resolve execution mode from config (`tea_execution_mode`, `tea_capability_probe`)
- ✅ Apply fallback rules deterministically when requested mode is unsupported
- ✅ Generate FAILING tests only (TDD red phase)
- ✅ Wait for required worker steps to complete
- ❌ Do NOT skip capability checks when probing is enabled
- ❌ Do NOT generate passing tests (this is red phase)
- ❌ Do NOT proceed until required worker steps finish

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Wait for subagent outputs
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, acceptance criteria from Step 1, test strategy from Step 3
- Focus: orchestration only (mode selection + worker dispatch)
- Limits: do not generate tests directly (delegate to worker steps)
- Dependencies: Steps 1-3 outputs

---

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Prepare Execution Context

**Generate unique timestamp** for temp file naming:

```javascript
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
```

**Prepare input context for both subagents:**

```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    if (['false', '0', 'off', 'no'].includes(normalized)) return false;
    if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
  }
  if (value === undefined || value === null) return defaultValue;
  return Boolean(value);
};

const subagentContext = {
  story_acceptance_criteria: /* from Step 1 */,
  test_strategy: /* from Step 3 */,
  knowledge_fragments_loaded: /* list of fragments */,
  config: {
    test_framework: config.test_framework,
    use_playwright_utils: config.tea_use_playwright_utils,
    use_pactjs_utils: config.tea_use_pactjs_utils,
    pact_mcp: config.tea_pact_mcp, // "mcp" | "none"
    browser_automation: config.tea_browser_automation,
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
    provider_endpoint_map: /* from Step 1/3 context, if use_pactjs_utils enabled */,
  },
  timestamp: timestamp
};
```

---

### 2. Resolve Execution Mode with Capability Probe

```javascript
const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(subagentContext.config.execution_mode) || 'auto';
const probeEnabled = subagentContext.config.capability_probe;

const supports = {
  subagent: runtime.canLaunchSubagents?.() === true,
  agentTeam: runtime.canLaunchAgentTeams?.() === true,
};

let resolvedMode = requestedMode;

if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}

subagentContext.execution = {
  requestedMode,
  resolvedMode,
  probeEnabled,
  supports,
};

if (!probeEnabled && (requestedMode === 'agent-team' || requestedMode === 'subagent')) {
  const unsupportedRequestedMode =
    (requestedMode === 'agent-team' && !supports.agentTeam) || (requestedMode === 'subagent' && !supports.subagent);

  if (unsupportedRequestedMode) {
    subagentContext.execution.error = `Requested execution mode "${requestedMode}" is unavailable because capability probing is disabled.`;
    throw new Error(subagentContext.execution.error);
  }
}
```

Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)

If probing is disabled, honor the requested mode strictly. If that mode cannot be executed at runtime, fail with an explicit error instead of silently falling back.

---

### 3. Dispatch Worker A: Failing API Test Generation

**Dispatch worker:**

- **Subagent File:** `./step-04a-subagent-api-failing.md`
- **Output File:** `/tmp/tea-atdd-api-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before next dispatch
- **TDD Phase:** RED (failing tests)

**System Action:**

```
🚀 Launching Subagent A: FAILING API Test Generation (RED PHASE)
📝 Output: /tmp/tea-atdd-api-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
🔴 TDD Phase: RED (tests will fail until feature implemented)
⏳ Status: Running...
```

---

### 4. Dispatch Worker B: Failing E2E Test Generation

**Dispatch worker:**

- **Subagent File:** `./step-04b-subagent-e2e-failing.md`
- **Output File:** `/tmp/tea-atdd-e2e-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before next dispatch
- **TDD Phase:** RED (failing tests)

**System Action:**

```
🚀 Launching Subagent B: FAILING E2E Test Generation (RED PHASE)
📝 Output: /tmp/tea-atdd-e2e-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
🔴 TDD Phase: RED (tests will fail until feature implemented)
⏳ Status: Running...
```

---

### 5. Wait for Required Worker Completion

**If `resolvedMode` is `agent-team` or `subagent`:**

```
⏳ Waiting for subagents to complete...
├── Subagent A (API RED): Running... ⟳
└── Subagent B (E2E RED): Running... ⟳

[... time passes ...]

├── Subagent A (API RED): Complete ✅
└── Subagent B (E2E RED): Complete ✅

✅ All subagents completed successfully!
```

**If `resolvedMode` is `sequential`:**

```
✅ Sequential mode: each worker already completed during dispatch.
```

**Verify both outputs exist:**

```javascript
const apiOutputExists = fs.existsSync(`/tmp/tea-atdd-api-tests-${timestamp}.json`);
const e2eOutputExists = fs.existsSync(`/tmp/tea-atdd-e2e-tests-${timestamp}.json`);

if (!apiOutputExists || !e2eOutputExists) {
  throw new Error('One or both subagent outputs missing!');
}
```

---

### 6. TDD Red Phase Report

**Display TDD status:**

```
🔴 TDD RED PHASE: Failing Tests Generated

✅ Both subagents completed:
- API Tests: Generated with test.skip()
- E2E Tests: Generated with test.skip()

📋 All tests assert EXPECTED behavior
📋 All tests will FAIL until feature implemented
📋 This is INTENTIONAL (TDD red phase)

Next: Aggregation will verify TDD compliance
```

---

### 7. Execution Report

**Display performance metrics:**

```
🚀 Performance Report:
- Execution Mode: {resolvedMode}
- API Test Generation: ~X minutes
- E2E Test Generation: ~Y minutes
- Total Elapsed: ~mode-dependent
- Parallel Gain: ~50% faster when mode is subagent/agent-team
```

---

### 8. Proceed to Aggregation

**Load aggregation step:**

Load next step: `{nextStepFile}`

The aggregation step (4C) will:

- Read both subagent outputs
- Verify TDD red phase compliance (all tests have test.skip())
- Write all test files to disk
- Generate ATDD checklist
- Calculate summary statistics

---

## EXIT CONDITION

Proceed to Step 4C (Aggregation) when:

- ✅ Subagent A (API failing tests) completed successfully
- ✅ Subagent B (E2E failing tests) completed successfully
- ✅ Both output files exist and are valid JSON
- ✅ TDD red phase status reported

**Do NOT proceed if:**

- ❌ One or both subagents failed
- ❌ Output files missing or corrupted
- ❌ Subagent generated passing tests (wrong - must be failing)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Both subagents launched successfully
- Both worker steps completed without errors
- Output files generated and valid
- Tests generated with test.skip() (TDD red phase)
- Fallback behavior respected configuration and capability probe rules

### ❌ SYSTEM FAILURE:

- Failed to launch subagents
- One or both subagents failed
- Output files missing or invalid
- Tests generated without test.skip() (wrong phase)
- Unsupported requested mode with probing disabled

**Master Rule:** TDD RED PHASE requires FAILING tests (with test.skip()). Mode selection changes orchestration, never red-phase requirements.
@@ -0,0 +1,286 @@
---
name: 'step-04a-subagent-api-failing'
description: 'Subagent: Generate FAILING API tests (TDD red phase)'
subagent: true
outputFile: '/tmp/tea-atdd-api-tests-{{timestamp}}.json'
---

# Subagent 4A: Generate Failing API Tests (TDD Red Phase)

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with E2E failing test generation.

**What you have from parent workflow:**

- Story acceptance criteria from Step 1
- Test strategy and scenarios from Step 3
- Knowledge fragments loaded: api-request, data-factories, api-testing-patterns
- Config: test framework, Playwright Utils enabled/disabled, Pact.js Utils enabled/disabled (`use_pactjs_utils`), Pact MCP mode (`pact_mcp`)
- Provider Endpoint Map (if `use_pactjs_utils` enabled and provider source accessible)

**Your task:** Generate API tests that will FAIL because the feature is not implemented yet (TDD RED PHASE).

---

## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate FAILING API tests ONLY
- ✅ Tests MUST fail when run (feature not implemented yet)
- ✅ Output structured JSON to temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate E2E tests (that's subagent 4B)
- ❌ Do NOT generate passing tests (this is TDD red phase)
- ❌ Do NOT run tests (that's step 5)

---

## SUBAGENT TASK

### 1. Identify API Endpoints from Acceptance Criteria

From the story acceptance criteria (Step 1 output), identify:

- Which API endpoints will be created for this story
- Expected request/response contracts
- Authentication requirements
- Expected status codes and error scenarios

**Example Acceptance Criteria:**

```
Story: User Registration
- As a user, I can POST to /api/users/register with email and password
- System returns 201 Created with user object
- System returns 400 Bad Request if email already exists
- System returns 422 Unprocessable Entity if validation fails
```

### 2. Generate FAILING API Test Files

For each API endpoint, create a test file in `tests/api/[feature].spec.ts`:

**Test Structure (ATDD - Red Phase):**

```typescript
import { test, expect } from '@playwright/test';
// If Playwright Utils enabled:
// import { apiRequest } from '@playwright-utils/api';

test.describe('[Story Name] API Tests (ATDD)', () => {
  test.skip('[P0] should register new user successfully', async ({ request }) => {
    // THIS TEST WILL FAIL - Endpoint not implemented yet
    const response = await request.post('/api/users/register', {
      data: {
        email: 'newuser@example.com',
        password: 'SecurePass123!',
      },
    });

    // Expect 201 but will get 404 (endpoint doesn't exist)
    expect(response.status()).toBe(201);

    const user = await response.json();
    expect(user).toMatchObject({
      id: expect.any(Number),
      email: 'newuser@example.com',
    });
  });

  test.skip('[P1] should return 400 if email exists', async ({ request }) => {
    // THIS TEST WILL FAIL - Endpoint not implemented yet
    const response = await request.post('/api/users/register', {
      data: {
        email: 'existing@example.com',
        password: 'SecurePass123!',
      },
    });

    expect(response.status()).toBe(400);
    const error = await response.json();
    expect(error.message).toContain('Email already exists');
  });
});
```

**CRITICAL ATDD Requirements:**

- ✅ Use `test.skip()` to mark tests as intentionally failing (red phase)
- ✅ Write assertions for EXPECTED behavior (even though not implemented)
- ✅ Use realistic test data (not placeholder data)
- ✅ Test both happy path and error scenarios from acceptance criteria
- ✅ Use `apiRequest()` helper if Playwright Utils enabled
- ✅ Use data factories for test data (from data-factories fragment)
- ✅ Include priority tags [P0], [P1], [P2], [P3]

### 2.5 Provider Source Scrutiny for CDC in TDD Red Phase (If `use_pactjs_utils` Enabled)

When generating Pact consumer contract tests in the ATDD red phase, provider scrutiny applies with TDD-specific rules. Apply the **Seven-Point Scrutiny Checklist** from `contract-testing.md` (Response shape, Status codes, Field names, Enum values, Required fields, Data types, Nested structures) for both existing and new endpoints.

**If provider endpoint already exists** (extending an existing API):

- READ the provider route handler, types, and validation schemas
- Verify all seven scrutiny points against the provider source: Response shape, Status codes, Field names, Enum values, Required fields, Data types, Nested structures
- Add `// Provider endpoint:` comment and scrutiny evidence block documenting findings for each point
- Wrap the entire test function in `test.skip()` (so the whole test including `executeTest` is skipped), not just the callback

**If provider endpoint is new** (TDD — endpoint not implemented yet):

- Use acceptance criteria as the source of truth for expected behavior
- Acceptance criteria should specify all seven scrutiny points where possible (status codes, field names, types, etc.) — note any gaps as assumptions in the evidence block
- Add `// Provider endpoint: TODO — new endpoint, not yet implemented`
- Document expected behavior from acceptance criteria in scrutiny evidence block
- Wrap the entire test function in `test.skip()` and use realistic expectations from the story

**Graceful degradation when provider source is inaccessible:**

1. **OpenAPI/Swagger spec available**: Use the spec as the source of truth for response shapes, status codes, and field names
2. **Pact Broker available** (when `pact_mcp` is `"mcp"`): Use SmartBear MCP tools to fetch existing provider states and verified interactions as reference
3. **Neither available**: For new endpoints, use acceptance criteria; for existing endpoints, use consumer-side types. Mark with `// Provider endpoint: TODO — provider source not accessible, verify manually` and set `provider_scrutiny: "pending"` in output JSON
4. **Never silently guess**: Document all assumptions in the scrutiny evidence block

**Provider endpoint comments are MANDATORY** even in red-phase tests — they document the intent.

**Example: Red-phase Pact test with provider scrutiny:**

```typescript
// Provider endpoint: TODO — new endpoint, not yet implemented
/*
 * Provider Scrutiny Evidence:
 * - Handler: NEW — not yet implemented (TDD red phase)
 * - Expected from acceptance criteria:
 *   - Endpoint: POST /api/v2/users/register
 *   - Status: 201 for success, 400 for duplicate email, 422 for validation error
 *   - Response: { id: number, email: string, createdAt: string }
 */
test.skip('[P0] should generate consumer contract for user registration', async () => {
  await provider
    .given('no users exist')
    .uponReceiving('a request to register a new user')
    .withRequest({
      method: 'POST',
      path: '/api/v2/users/register',
      headers: { 'Content-Type': 'application/json' },
      body: { email: 'newuser@example.com', password: 'SecurePass123!' },
    })
    .willRespondWith({
      status: 201,
      headers: { 'Content-Type': 'application/json' },
      body: like({
        id: integer(1),
        email: string('newuser@example.com'),
        createdAt: string('2025-01-15T10:00:00Z'),
      }),
    })
    .executeTest(async (mockServer) => {
      const result = await registerUser({ email: 'newuser@example.com', password: 'SecurePass123!' }, { baseUrl: mockServer.url });
      expect(result.id).toEqual(expect.any(Number));
    });
});
```

**Why test.skip():**

- Tests are written correctly for EXPECTED behavior
- But we know they'll fail because feature isn't implemented
- `test.skip()` documents this is intentional (TDD red phase)
- Once feature is implemented, remove `test.skip()` to verify green phase

### 3. Track Fixture Needs

Identify fixtures needed for API tests:

- Authentication fixtures (if endpoints require auth)
- Data factories (user data, etc.)
- API client configurations

**Do NOT create fixtures yet** - just track what's needed for the aggregation step.

---

## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-atdd-api-tests-{{timestamp}}.json`

```json
{
  "success": true,
  "subagent": "atdd-api-tests",
  "tests": [
    {
      "file": "tests/api/user-registration.spec.ts",
      "content": "[full TypeScript test file content with test.skip()]",
      "description": "ATDD API tests for user registration (RED PHASE)",
      "expected_to_fail": true,
      "acceptance_criteria_covered": [
        "User can register with email/password",
        "System returns 201 on success",
        "System returns 400 if email exists"
      ],
      "priority_coverage": {
        "P0": 1,
        "P1": 2,
        "P2": 0,
        "P3": 0
      }
    }
  ],
  "fixture_needs": ["userDataFactory"],
  "knowledge_fragments_used": ["api-request", "data-factories", "api-testing-patterns"],
  "test_count": 3,
  "tdd_phase": "RED",
  "provider_scrutiny": "completed",
  "summary": "Generated 3 FAILING API tests for user registration story"
}
```

**On Error:**

```json
{
  "success": false,
  "subagent": "atdd-api-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```

---

## EXIT CONDITION

Subagent completes when:

- ✅ All API endpoints from acceptance criteria have test files
- ✅ All tests use `test.skip()` (documented failing tests)
- ✅ All tests assert EXPECTED behavior (not placeholder assertions)
- ✅ JSON output written to temp file
- ✅ Fixture needs tracked (not created)

**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.

---

## 🚨 SUBAGENT SUCCESS METRICS

### ✅ SUCCESS:

- All API tests generated with test.skip()
- Tests assert expected behavior (not placeholders)
- JSON output valid and complete
- No E2E/component/unit tests included (out of scope)
- Tests follow knowledge fragment patterns
- Every Pact interaction has `// Provider endpoint:` comment (if CDC enabled)
- Provider scrutiny completed or TODO markers added for new endpoints (if CDC enabled)

### ❌ FAILURE:

- Generated passing tests (wrong - this is RED phase)
- Tests without test.skip() (will break CI)
- Placeholder assertions (expect(true).toBe(true))
- Did not follow knowledge fragment patterns
- Invalid or missing JSON output
- Pact interactions missing provider endpoint comments (if CDC enabled)
@@ -0,0 +1,244 @@
---
name: 'step-04b-subagent-e2e-failing'
description: 'Subagent: Generate FAILING E2E tests (TDD red phase)'
subagent: true
outputFile: '/tmp/tea-atdd-e2e-tests-{{timestamp}}.json'
---

# Subagent 4B: Generate Failing E2E Tests (TDD Red Phase)

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with API failing test generation.

**What you have from parent workflow:**

- Story acceptance criteria from Step 1
- Test strategy and user journey scenarios from Step 3
- Knowledge fragments loaded: fixture-architecture, network-first, selector-resilience
- Config: test framework, Playwright Utils enabled/disabled

**Your task:** Generate E2E tests that will FAIL because the feature UI is not implemented yet (TDD RED PHASE).

---

## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate FAILING E2E tests ONLY
- ✅ Tests MUST fail when run (UI not implemented yet)
- ✅ Output structured JSON to temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate API tests (that's subagent 4A)
- ❌ Do NOT generate passing tests (this is TDD red phase)
- ❌ Do NOT run tests (that's step 5)

---

## SUBAGENT TASK

### 1. Identify User Journeys from Acceptance Criteria

From the story acceptance criteria (Step 1 output), identify:

- Which UI flows will be created for this story
- User interactions required
- Expected visual states
- Success/error messages expected

**Example Acceptance Criteria:**

```
Story: User Registration
- As a user, I can navigate to /register page
- I can fill in email and password fields
- I can click "Register" button
- System shows success message and redirects to dashboard
- System shows error if email already exists
```

### 2. Browser Interaction (Selector Verification)

**Automation mode:** `config.tea_browser_automation`

If `auto` (fall back to MCP if CLI unavailable; if neither available, generate from best practices):

- Open the target page first, then verify selectors with a snapshot:
  `playwright-cli -s=tea-atdd-{{timestamp}} open <target_url>`
  `playwright-cli -s=tea-atdd-{{timestamp}} snapshot` → map refs to Playwright locators
  - ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
  - ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
- `playwright-cli -s=tea-atdd-{{timestamp}} close` when done

If `cli` (CLI only — do NOT fall back to MCP; generate from best practices if CLI unavailable):

- Open the target page first, then verify selectors with a snapshot:
  `playwright-cli -s=tea-atdd-{{timestamp}} open <target_url>`
  `playwright-cli -s=tea-atdd-{{timestamp}} snapshot` → map refs to Playwright locators
  - ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
  - ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
- `playwright-cli -s=tea-atdd-{{timestamp}} close` when done

> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-atdd-{{timestamp}} close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.
|
||||
|
||||
If `mcp`:
|
||||
|
||||
- Use MCP tools for selector verification (current behavior)
|
||||
|
||||
If `none`:
|
||||
|
||||
- Generate selectors from best practices without browser verification
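The snapshot-ref mapping shown above can be sketched as a small helper. `mapRefToLocator` is a hypothetical name for illustration; playwright-cli does not ship this function:

```javascript
// Hypothetical helper (not part of playwright-cli): turn a snapshot ref
// ({ role, name }) into the Playwright locator expression described above.
function mapRefToLocator(ref) {
  const name = ref.name.replace(/'/g, "\\'"); // escape quotes inside accessible names
  return `page.getByRole('${ref.role}', { name: '${name}' })`;
}

// Example refs as captured from a `playwright-cli ... snapshot` run
const locators = [
  { role: 'button', name: 'Submit' },
  { role: 'textbox', name: 'Email' },
].map(mapRefToLocator);
```

The same mapping applies whether the refs come from CLI snapshots or MCP tools.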
|
||||
|
||||
### 3. Generate FAILING E2E Test Files
|
||||
|
||||
For each user journey, create test file in `tests/e2e/[feature].spec.ts`:
|
||||
|
||||
**Test Structure (ATDD - Red Phase):**
|
||||
|
||||
```typescript
|
||||
import { test, expect } from '@playwright/test';
|
||||
|
||||
test.describe('[Story Name] E2E User Journey (ATDD)', () => {
|
||||
test.skip('[P0] should complete user registration successfully', async ({ page }) => {
|
||||
// THIS TEST WILL FAIL - UI not implemented yet
|
||||
await page.goto('/register');
|
||||
|
||||
// Expect registration form but will get 404 or missing elements
|
||||
await page.getByLabel('Email').fill('newuser@example.com');
await page.getByLabel('Password').fill('SecurePass123!');
await page.getByRole('button', { name: 'Register' }).click();
|
||||
|
||||
// Expect success message and redirect
|
||||
await expect(page.getByText('Registration successful!')).toBeVisible();
|
||||
await page.waitForURL('/dashboard');
|
||||
});
|
||||
|
||||
test.skip('[P1] should show error if email exists', async ({ page }) => {
|
||||
// THIS TEST WILL FAIL - UI not implemented yet
|
||||
await page.goto('/register');
|
||||
|
||||
await page.getByLabel('Email').fill('existing@example.com');
await page.getByLabel('Password').fill('SecurePass123!');
await page.getByRole('button', { name: 'Register' }).click();
|
||||
|
||||
// Expect error message
|
||||
await expect(page.getByText('Email already exists')).toBeVisible();
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**CRITICAL ATDD Requirements:**
|
||||
|
||||
- ✅ Use `test.skip()` to mark tests as intentionally failing (red phase)
|
||||
- ✅ Write assertions for EXPECTED UI behavior (even though not implemented)
|
||||
- ✅ Use resilient selectors: getByRole, getByText, getByLabel (from selector-resilience)
|
||||
- ✅ Follow network-first patterns if API calls involved (from network-first)
|
||||
- ✅ Test complete user journeys from acceptance criteria
|
||||
- ✅ Include priority tags [P0], [P1], [P2], [P3]
|
||||
- ✅ Use proper TypeScript types
|
||||
- ✅ Deterministic waits (no hard sleeps)
|
||||
|
||||
**Why test.skip():**
|
||||
|
||||
- Tests are written correctly for EXPECTED UI behavior
|
||||
- But we know they'll fail because UI isn't implemented
|
||||
- `test.skip()` documents this is intentional (TDD red phase)
|
||||
- Once UI is implemented, remove `test.skip()` to verify green phase
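The red-to-green flip is mechanical. A minimal sketch, assuming the `test.skip(title, fn)` declaration form used in the examples above:

```javascript
// Turn test.skip(...) declarations back into test(...) once the feature
// is implemented. Sketch only: review each test rather than flipping blindly.
function unskip(source) {
  return source.replace(/\btest\.skip\(/g, 'test(');
}

const red = "test.skip('[P0] should complete user registration successfully', async ({ page }) => {});";
const green = unskip(red);
```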
|
||||
|
||||
### 4. Track Fixture Needs
|
||||
|
||||
Identify fixtures needed for E2E tests:
|
||||
|
||||
- Authentication fixtures (if journey requires logged-in state)
|
||||
- Network mocks (if API calls involved)
|
||||
- Test data fixtures
|
||||
|
||||
**Do NOT create fixtures yet** - just track what's needed for aggregation step.
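Fixture tracking can be sketched as a simple set accumulated per journey. The journey flags and fixture names below are illustrative, not a fixed schema:

```javascript
// Track fixture needs without creating them; names are illustrative.
const fixtureNeeds = new Set();

function trackFixtureNeeds(journey) {
  if (journey.requiresAuth) fixtureNeeds.add('authenticatedUser');
  if (journey.callsApi) fixtureNeeds.add('networkMocks');
  if (journey.usesTestData) fixtureNeeds.add('testData');
}

trackFixtureNeeds({ name: 'registration', requiresAuth: false, callsApi: true, usesTestData: true });

// Reported later as "fixture_needs" in the JSON output
const fixture_needs = [...fixtureNeeds];
```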
|
||||
|
||||
---
|
||||
|
||||
## OUTPUT FORMAT
|
||||
|
||||
Write JSON to temp file: `/tmp/tea-atdd-e2e-tests-{{timestamp}}.json`
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"subagent": "atdd-e2e-tests",
|
||||
"tests": [
|
||||
{
|
||||
"file": "tests/e2e/user-registration.spec.ts",
|
||||
"content": "[full TypeScript test file content with test.skip()]",
|
||||
"description": "ATDD E2E tests for user registration journey (RED PHASE)",
|
||||
"expected_to_fail": true,
|
||||
"acceptance_criteria_covered": [
|
||||
"User can navigate to /register",
|
||||
"User can fill registration form",
|
||||
"System shows success message on registration",
|
||||
"System shows error if email exists"
|
||||
],
|
||||
"priority_coverage": {
|
||||
"P0": 1,
|
||||
"P1": 1,
|
||||
"P2": 0,
|
||||
"P3": 0
|
||||
}
|
||||
}
|
||||
],
|
||||
"fixture_needs": ["registrationPageMock"],
|
||||
"knowledge_fragments_used": ["fixture-architecture", "network-first", "selector-resilience"],
|
||||
"test_count": 2,
|
||||
"tdd_phase": "RED",
|
||||
"summary": "Generated 2 FAILING E2E tests for user registration story"
|
||||
}
|
||||
```
|
||||
|
||||
**On Error:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"subagent": "atdd-e2e-tests",
|
||||
"error": "Error message describing what went wrong",
|
||||
"partial_output": {
|
||||
/* any tests generated before error */
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## EXIT CONDITION
|
||||
|
||||
Subagent completes when:
|
||||
|
||||
- ✅ All user journeys from acceptance criteria have test files
|
||||
- ✅ All tests use `test.skip()` (documented failing tests)
|
||||
- ✅ All tests assert EXPECTED UI behavior (not placeholder assertions)
|
||||
- ✅ Resilient selectors used (getByRole, getByText)
|
||||
- ✅ JSON output written to temp file
|
||||
- ✅ Fixture needs tracked
|
||||
|
||||
**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SUBAGENT SUCCESS METRICS
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- All E2E tests generated with test.skip()
|
||||
- Tests assert expected UI behavior (not placeholders)
|
||||
- Resilient selectors used (getByRole, getByText)
|
||||
- JSON output valid and complete
|
||||
- No API/component/unit tests included (out of scope)
|
||||
- Tests follow knowledge fragment patterns
|
||||
|
||||
### ❌ FAILURE:
|
||||
|
||||
- Generated passing tests (wrong - this is RED phase)
|
||||
- Tests without test.skip() (will break CI)
|
||||
- Placeholder assertions (expect(true).toBe(true))
|
||||
- Brittle selectors used (CSS classes, XPath)
|
||||
- Did not follow knowledge fragment patterns
|
||||
- Invalid or missing JSON output
|
||||
|
||||
---
|
||||
name: 'step-04c-aggregate'
|
||||
description: 'Aggregate subagent outputs and complete ATDD test infrastructure'
|
||||
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
|
||||
nextStepFile: './step-05-validate-and-complete.md'
|
||||
---
|
||||
|
||||
# Step 4C: Aggregate ATDD Test Generation Results
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Read outputs from parallel subagents (API + E2E failing test generation), aggregate results, verify TDD red phase compliance, and create supporting infrastructure.
|
||||
|
||||
---
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
- 📖 Read the entire step file before acting
|
||||
- ✅ Speak in `{communication_language}`
|
||||
- ✅ Read subagent outputs from temp files
|
||||
- ✅ Verify all tests are marked with test.skip() (TDD red phase)
|
||||
- ✅ Generate shared fixtures based on fixture needs
|
||||
- ✅ Write all generated test files to disk
|
||||
- ❌ Do NOT remove test.skip() (that's done after feature implementation)
|
||||
- ❌ Do NOT run tests yet (that's step 5 - verify they fail)
|
||||
|
||||
---
|
||||
|
||||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Follow the MANDATORY SEQUENCE exactly
|
||||
- 💾 Record outputs before proceeding
|
||||
- 📖 Load the next step only when instructed
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: config, subagent outputs from temp files
|
||||
- Focus: aggregation and TDD validation
|
||||
- Limits: do not execute future steps
|
||||
- Dependencies: Step 4A and 4B subagent outputs
|
||||
|
||||
---
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
|
||||
|
||||
### 1. Read Subagent Outputs
|
||||
|
||||
**Read API test subagent output:**
|
||||
|
||||
```javascript
|
||||
const fs = require('node:fs');

const apiTestsPath = '/tmp/tea-atdd-api-tests-{{timestamp}}.json';
const apiTestsOutput = JSON.parse(fs.readFileSync(apiTestsPath, 'utf8'));
|
||||
```
|
||||
|
||||
**Read E2E test subagent output:**
|
||||
|
||||
```javascript
|
||||
const e2eTestsPath = '/tmp/tea-atdd-e2e-tests-{{timestamp}}.json';
|
||||
const e2eTestsOutput = JSON.parse(fs.readFileSync(e2eTestsPath, 'utf8'));
|
||||
```
|
||||
|
||||
**Verify both subagents succeeded:**
|
||||
|
||||
- Check `apiTestsOutput.success === true`
|
||||
- Check `e2eTestsOutput.success === true`
|
||||
- If either failed, report error and stop (don't proceed)
|
||||
|
||||
---
|
||||
|
||||
### 2. Verify TDD Red Phase Compliance
|
||||
|
||||
**CRITICAL TDD Validation:**
|
||||
|
||||
**Check API tests:**
|
||||
|
||||
```javascript
|
||||
apiTestsOutput.tests.forEach((test) => {
|
||||
// Verify test.skip() is present
|
||||
if (!test.content.includes('test.skip(')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} missing test.skip() - tests MUST be skipped in red phase!`);
|
||||
}
|
||||
|
||||
// Verify not placeholder assertions
|
||||
if (test.content.includes('expect(true).toBe(true)')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} has placeholder assertions - must assert EXPECTED behavior!`);
|
||||
}
|
||||
|
||||
// Verify expected_to_fail flag
|
||||
if (!test.expected_to_fail) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} not marked as expected_to_fail!`);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Check E2E tests:**
|
||||
|
||||
```javascript
|
||||
e2eTestsOutput.tests.forEach((test) => {
|
||||
// Same validation as API tests
|
||||
if (!test.content.includes('test.skip(')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} missing test.skip() - tests MUST be skipped in red phase!`);
|
||||
}
|
||||
|
||||
if (test.content.includes('expect(true).toBe(true)')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} has placeholder assertions!`);
|
||||
}
|
||||
|
||||
if (!test.expected_to_fail) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} not marked as expected_to_fail!`);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**If validation passes:**
|
||||
|
||||
```
|
||||
✅ TDD Red Phase Validation: PASS
|
||||
- All tests use test.skip()
|
||||
- All tests assert expected behavior (not placeholders)
|
||||
- All tests marked as expected_to_fail
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 3. Write All Test Files to Disk
|
||||
|
||||
**Write API test files:**
|
||||
|
||||
```javascript
|
||||
apiTestsOutput.tests.forEach((test) => {
|
||||
fs.writeFileSync(test.file, test.content, 'utf8');
|
||||
console.log(`✅ Created (RED): ${test.file}`);
|
||||
});
|
||||
```
|
||||
|
||||
**Write E2E test files:**
|
||||
|
||||
```javascript
|
||||
e2eTestsOutput.tests.forEach((test) => {
|
||||
fs.writeFileSync(test.file, test.content, 'utf8');
|
||||
console.log(`✅ Created (RED): ${test.file}`);
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 4. Aggregate Fixture Needs
|
||||
|
||||
**Collect all fixture needs from both subagents:**
|
||||
|
||||
```javascript
|
||||
const allFixtureNeeds = [...apiTestsOutput.fixture_needs, ...e2eTestsOutput.fixture_needs];
|
||||
|
||||
// Remove duplicates
|
||||
const uniqueFixtures = [...new Set(allFixtureNeeds)];
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 5. Generate Fixture Infrastructure
|
||||
|
||||
**Create fixtures needed by ATDD tests:**
|
||||
(Similar to the automate workflow, but may be simpler for ATDD since the feature is not implemented yet.)
|
||||
|
||||
**Minimal fixtures for TDD red phase:**
|
||||
|
||||
```typescript
|
||||
// tests/fixtures/test-data.ts
|
||||
export const testUserData = {
|
||||
email: 'test@example.com',
|
||||
password: 'SecurePass123!',
|
||||
};
|
||||
```
|
||||
|
||||
Note: More complete fixtures will be needed when moving to green phase.
|
||||
|
||||
---
|
||||
|
||||
### 6. Generate ATDD Checklist
|
||||
|
||||
**Create ATDD checklist document:**
|
||||
|
||||
```markdown
|
||||
# ATDD Checklist: [Story Name]
|
||||
|
||||
## TDD Red Phase (Current)
|
||||
|
||||
✅ Failing tests generated
|
||||
|
||||
- API Tests: {api_test_count} tests (all skipped)
|
||||
- E2E Tests: {e2e_test_count} tests (all skipped)
|
||||
|
||||
## Acceptance Criteria Coverage
|
||||
|
||||
{list all acceptance criteria with test coverage}
|
||||
|
||||
## Next Steps (TDD Green Phase)
|
||||
|
||||
After implementing the feature:
|
||||
|
||||
1. Remove `test.skip()` from all test files
|
||||
2. Run tests: `npm test`
|
||||
3. Verify tests PASS (green phase)
|
||||
4. If any tests fail:
|
||||
- Either fix implementation (feature bug)
|
||||
- Or fix test (test bug)
|
||||
5. Commit passing tests
|
||||
|
||||
## Implementation Guidance
|
||||
|
||||
Feature endpoints to implement:
|
||||
{list endpoints from API tests}
|
||||
|
||||
UI components to implement:
|
||||
{list UI flows from E2E tests}
|
||||
```
|
||||
|
||||
**Save checklist:**
|
||||
|
||||
```javascript
|
||||
fs.writeFileSync(`{test_artifacts}/atdd-checklist-{story_id}.md`, checklistContent, 'utf8');
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 7. Calculate Summary Statistics
|
||||
|
||||
**Aggregate test counts:**
|
||||
|
||||
```javascript
|
||||
const resolvedMode = subagentContext?.execution?.resolvedMode; // Provided by Step 4's orchestration context
|
||||
const subagentExecutionLabel =
|
||||
resolvedMode === 'sequential'
|
||||
? 'SEQUENTIAL (API → E2E)'
|
||||
: resolvedMode === 'agent-team'
|
||||
? 'AGENT-TEAM (API + E2E)'
|
||||
: resolvedMode === 'subagent'
|
||||
? 'SUBAGENT (API + E2E)'
|
||||
: 'PARALLEL (API + E2E)';
|
||||
const performanceGainLabel =
|
||||
resolvedMode === 'sequential'
|
||||
? 'baseline (no parallel speedup)'
|
||||
: resolvedMode === 'agent-team' || resolvedMode === 'subagent'
|
||||
? '~50% faster than sequential'
|
||||
: 'mode-dependent';
|
||||
|
||||
const summary = {
|
||||
tdd_phase: 'RED',
|
||||
total_tests: apiTestsOutput.test_count + e2eTestsOutput.test_count,
|
||||
api_tests: apiTestsOutput.test_count,
|
||||
e2e_tests: e2eTestsOutput.test_count,
|
||||
all_tests_skipped: true,
|
||||
expected_to_fail: true,
|
||||
fixtures_created: uniqueFixtures.length,
|
||||
acceptance_criteria_covered: [
|
||||
...apiTestsOutput.tests.flatMap((t) => t.acceptance_criteria_covered),
|
||||
...e2eTestsOutput.tests.flatMap((t) => t.acceptance_criteria_covered),
|
||||
],
|
||||
knowledge_fragments_used: [...apiTestsOutput.knowledge_fragments_used, ...e2eTestsOutput.knowledge_fragments_used],
|
||||
subagent_execution: subagentExecutionLabel,
|
||||
performance_gain: performanceGainLabel,
|
||||
};
|
||||
```
|
||||
|
||||
**Store summary for Step 5:**
|
||||
|
||||
```javascript
|
||||
fs.writeFileSync('/tmp/tea-atdd-summary-{{timestamp}}.json', JSON.stringify(summary, null, 2), 'utf8');
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## OUTPUT SUMMARY
|
||||
|
||||
Display to user:
|
||||
|
||||
```
|
||||
✅ ATDD Test Generation Complete (TDD RED PHASE)
|
||||
|
||||
🔴 TDD Red Phase: Failing Tests Generated
|
||||
|
||||
📊 Summary:
|
||||
- Total Tests: {total_tests} (all with test.skip())
|
||||
- API Tests: {api_tests} (RED)
|
||||
- E2E Tests: {e2e_tests} (RED)
|
||||
- Fixtures Created: {fixtures_created}
|
||||
- All tests will FAIL until feature implemented
|
||||
|
||||
✅ Acceptance Criteria Coverage:
|
||||
{list all covered criteria}
|
||||
|
||||
🚀 Performance: {performance_gain}
|
||||
|
||||
📂 Generated Files:
|
||||
- tests/api/[feature].spec.ts (with test.skip())
|
||||
- tests/e2e/[feature].spec.ts (with test.skip())
|
||||
- tests/fixtures/test-data.ts
|
||||
- {test_artifacts}/atdd-checklist-{story_id}.md
|
||||
|
||||
📝 Next Steps:
|
||||
1. Implement the feature
|
||||
2. Remove test.skip() from tests
|
||||
3. Run tests → verify PASS (green phase)
|
||||
4. Commit passing tests
|
||||
|
||||
✅ Ready for validation (Step 5 - verify tests fail as expected)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## EXIT CONDITION
|
||||
|
||||
Proceed to Step 5 when:
|
||||
|
||||
- ✅ All test files written to disk (API + E2E)
|
||||
- ✅ All tests verified to have test.skip()
|
||||
- ✅ All fixtures created
|
||||
- ✅ ATDD checklist generated
|
||||
- ✅ Summary statistics calculated and saved
|
||||
- ✅ Output displayed to user
|
||||
|
||||
---
|
||||
|
||||
### 8. Save Progress
|
||||
|
||||
**Save this step's accumulated work to `{outputFile}`.**
|
||||
|
||||
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
|
||||
|
||||
```yaml
|
||||
---
|
||||
stepsCompleted: ['step-04c-aggregate']
|
||||
lastStep: 'step-04c-aggregate'
|
||||
lastSaved: '{date}'
|
||||
---
|
||||
```
|
||||
|
||||
Then write this step's output below the frontmatter.
|
||||
|
||||
- **If `{outputFile}` already exists**, update:
|
||||
- Add `'step-04c-aggregate'` to `stepsCompleted` array (only if not already present)
|
||||
- Set `lastStep: 'step-04c-aggregate'`
|
||||
- Set `lastSaved: '{date}'`
|
||||
- Append this step's output to the appropriate section.
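The frontmatter update rules above can be sketched as a pure function. A minimal sketch, assuming the frontmatter carries only these three keys (a real implementation would parse and re-serialize the YAML):

```javascript
// Minimal frontmatter update for the save-progress rules above.
function updateFrontmatter(existing, stepName, date) {
  const steps = existing ? [...existing.stepsCompleted] : [];
  if (!steps.includes(stepName)) steps.push(stepName); // add only if absent
  return { stepsCompleted: steps, lastStep: stepName, lastSaved: date };
}

// Re-saving the same step does not duplicate the entry
const fm = updateFrontmatter(
  { stepsCompleted: ['step-04c-aggregate'], lastStep: 'step-04c-aggregate', lastSaved: '2026-01-26' },
  'step-04c-aggregate',
  '2026-01-27'
);
```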
|
||||
|
||||
Load next step: `{nextStepFile}`
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Both subagents succeeded
|
||||
- All tests have test.skip() (TDD red phase compliant)
|
||||
- All tests assert expected behavior (not placeholders)
|
||||
- All test files written to disk
|
||||
- ATDD checklist generated
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- One or both subagents failed
|
||||
- Tests missing test.skip() (would break CI)
|
||||
- Tests have placeholder assertions
|
||||
- Test files not written to disk
|
||||
- ATDD checklist missing
|
||||
|
||||
**Master Rule:** TDD RED PHASE requires ALL tests to use test.skip() and assert expected behavior.
|
||||
|
||||
---
|
||||
name: 'step-05-validate-and-complete'
|
||||
description: 'Validate ATDD outputs and summarize'
|
||||
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
|
||||
---
|
||||
|
||||
# Step 5: Validate & Complete
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Validate ATDD outputs and provide a completion summary.
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
- 📖 Read the entire step file before acting
|
||||
- ✅ Speak in `{communication_language}`
|
||||
- ✅ Validate against the checklist
|
||||
|
||||
---
|
||||
|
||||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Follow the MANDATORY SEQUENCE exactly
|
||||
- 💾 Record outputs before proceeding
|
||||
- 📖 Load the next step only when instructed
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: config, loaded artifacts, and knowledge fragments
|
||||
- Focus: this step's goal only
|
||||
- Limits: do not execute future steps
|
||||
- Dependencies: prior steps' outputs (if any)
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
|
||||
|
||||
## 1. Validation
|
||||
|
||||
Use `checklist.md` to validate:
|
||||
|
||||
- Prerequisites satisfied
|
||||
- Test files created correctly
|
||||
- Checklist matches acceptance criteria
|
||||
- Tests are designed to fail before implementation
|
||||
- CLI sessions cleaned up (no orphaned browsers)
- Temp artifacts stored in `{test_artifacts}/`, not ad-hoc locations
|
||||
|
||||
Fix any gaps before completion.
|
||||
|
||||
---
|
||||
|
||||
## 2. Polish Output
|
||||
|
||||
Before finalizing, review the complete output document for quality:
|
||||
|
||||
1. **Remove duplication**: Progressive-append workflow may have created repeated sections — consolidate
|
||||
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
|
||||
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
|
||||
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)
|
||||
|
||||
---
|
||||
|
||||
## 3. Completion Summary
|
||||
|
||||
Report:
|
||||
|
||||
- Test files created
|
||||
- Checklist output path
|
||||
- Key risks or assumptions
|
||||
- Next recommended workflow (e.g., implementation or `automate`)
|
||||
|
||||
---
|
||||
|
||||
## 4. Save Progress
|
||||
|
||||
**Save this step's accumulated work to `{outputFile}`.**
|
||||
|
||||
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
|
||||
|
||||
```yaml
|
||||
---
|
||||
stepsCompleted: ['step-05-validate-and-complete']
|
||||
lastStep: 'step-05-validate-and-complete'
|
||||
lastSaved: '{date}'
|
||||
---
|
||||
```
|
||||
|
||||
Then write this step's output below the frontmatter.
|
||||
|
||||
- **If `{outputFile}` already exists**, update:
|
||||
- Add `'step-05-validate-and-complete'` to `stepsCompleted` array (only if not already present)
|
||||
- Set `lastStep: 'step-05-validate-and-complete'`
|
||||
- Set `lastSaved: '{date}'`
|
||||
- Append this step's output to the appropriate section.
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Step completed in full with required outputs
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- Skipped sequence steps or missing outputs
|
||||
**Master Rule:** Skipping steps is FORBIDDEN.
|
||||
|
||||
---
|
||||
name: 'step-01-assess'
|
||||
description: 'Load an existing output for editing'
|
||||
nextStepFile: './step-02-apply-edit.md'
|
||||
---
|
||||
|
||||
# Step 1: Assess Edit Target
|
||||
|
||||
## STEP GOAL:
|
||||
|
||||
Identify which output should be edited and load it.
|
||||
|
||||
## MANDATORY EXECUTION RULES (READ FIRST):
|
||||
|
||||
### Universal Rules:
|
||||
|
||||
- 📖 Read the complete step file before taking any action
|
||||
- ✅ Speak in `{communication_language}`
|
||||
|
||||
### Role Reinforcement:
|
||||
|
||||
- ✅ You are the Master Test Architect
|
||||
|
||||
### Step-Specific Rules:
|
||||
|
||||
- 🎯 Ask the user which output file to edit
|
||||
- 🚫 Do not edit until target is confirmed
|
||||
|
||||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Follow the MANDATORY SEQUENCE exactly
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: existing outputs
|
||||
- Focus: select edit target
|
||||
- Limits: no edits yet
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly.
|
||||
|
||||
### 1. Identify Target
|
||||
|
||||
Ask the user to provide the output file path or select from known outputs.
|
||||
|
||||
### 2. Load Target
|
||||
|
||||
Read the provided output file in full.
|
||||
|
||||
### 3. Confirm
|
||||
|
||||
Confirm the target and proceed to edit.
|
||||
|
||||
Load next step: `{nextStepFile}`
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Target identified and loaded
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- Proceeding without a confirmed target
|
||||
|
||||
---
|
||||
name: 'step-02-apply-edit'
|
||||
description: 'Apply edits to the selected output'
|
||||
---
|
||||
|
||||
# Step 2: Apply Edits
|
||||
|
||||
## STEP GOAL:
|
||||
|
||||
Apply the requested edits to the selected output and confirm changes.
|
||||
|
||||
## MANDATORY EXECUTION RULES (READ FIRST):
|
||||
|
||||
### Universal Rules:
|
||||
|
||||
- 📖 Read the complete step file before taking any action
|
||||
- ✅ Speak in `{communication_language}`
|
||||
|
||||
### Role Reinforcement:
|
||||
|
||||
- ✅ You are the Master Test Architect
|
||||
|
||||
### Step-Specific Rules:
|
||||
|
||||
- 🎯 Only apply edits explicitly requested by the user
|
||||
|
||||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Follow the MANDATORY SEQUENCE exactly
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: selected output and user changes
|
||||
- Focus: apply edits only
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly.
|
||||
|
||||
### 1. Confirm Requested Changes
|
||||
|
||||
Restate what will be changed and confirm.
|
||||
|
||||
### 2. Apply Changes
|
||||
|
||||
Update the output file accordingly.
|
||||
|
||||
### 3. Report
|
||||
|
||||
Summarize the edits applied.
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Changes applied and confirmed
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- Unconfirmed edits or missing update
|
||||
|
||||
---
|
||||
name: 'step-01-validate'
|
||||
description: 'Validate workflow outputs against checklist'
|
||||
outputFile: '{test_artifacts}/atdd-validation-report.md'
|
||||
validationChecklist: '../checklist.md'
|
||||
---
|
||||
|
||||
# Step 1: Validate Outputs
|
||||
|
||||
## STEP GOAL:
|
||||
|
||||
Validate outputs using the workflow checklist and record findings.
|
||||
|
||||
## MANDATORY EXECUTION RULES (READ FIRST):
|
||||
|
||||
### Universal Rules:
|
||||
|
||||
- 📖 Read the complete step file before taking any action
|
||||
- ✅ Speak in `{communication_language}`
|
||||
|
||||
### Role Reinforcement:
|
||||
|
||||
- ✅ You are the Master Test Architect
|
||||
|
||||
### Step-Specific Rules:
|
||||
|
||||
- 🎯 Validate against `{validationChecklist}`
|
||||
- 🚫 Do not skip checks
|
||||
|
||||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Follow the MANDATORY SEQUENCE exactly
|
||||
- 💾 Write findings to `{outputFile}`
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: workflow outputs and checklist
|
||||
- Focus: validation only
|
||||
- Limits: do not modify outputs in this step
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly.
|
||||
|
||||
### 1. Load Checklist
|
||||
|
||||
Read `{validationChecklist}` and list all criteria.
|
||||
|
||||
### 2. Validate Outputs
|
||||
|
||||
Evaluate outputs against each checklist item.
|
||||
|
||||
### 3. Write Report
|
||||
|
||||
Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.
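The report body can be sketched as a small renderer. The finding shape (`section`/`status`/`note`) is illustrative, not a fixed schema:

```javascript
// Render per-section verdicts into the markdown report written to {outputFile}.
function renderValidationReport(findings) {
  const lines = ['# Validation Report', ''];
  for (const f of findings) {
    lines.push(`- ${f.section}: ${f.status}${f.note ? ` (${f.note})` : ''}`);
  }
  return lines.join('\n');
}

const report = renderValidationReport([
  { section: 'Prerequisites', status: 'PASS' },
  { section: 'Test files', status: 'WARN', note: 'missing P2 coverage' },
]);
```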
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Validation report written
|
||||
- All checklist items evaluated
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- Skipped checklist items
|
||||
- No report produced
|
||||
|
||||
---
|
||||
validationDate: 2026-01-27
|
||||
workflowName: testarch-atdd
|
||||
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-atdd
|
||||
validationStatus: COMPLETE
|
||||
completionDate: 2026-01-27 10:03:10
|
||||
---
|
||||
|
||||
# Validation Report: testarch-atdd
|
||||
|
||||
**Validation Started:** 2026-01-27 09:50:21
|
||||
**Validator:** BMAD Workflow Validation System (Codex)
|
||||
**Standards Version:** BMAD Workflow Standards
|
||||
|
||||
## File Structure & Size
|
||||
|
||||
- workflow.md present: YES
|
||||
- instructions.md present: YES
|
||||
- workflow.yaml present: YES
|
||||
- step files found: 8
|
||||
|
||||
**Step File Sizes:**
|
||||
|
||||
- steps-c/step-01-preflight-and-context.md: 101 lines [GOOD]
|
||||
- steps-c/step-02-generation-mode.md: 71 lines [GOOD]
|
||||
- steps-c/step-03-test-strategy.md: 70 lines [GOOD]
|
||||
- steps-c/step-04-generate-tests.md: 70 lines [GOOD]
|
||||
- steps-c/step-05-validate-and-complete.md: 61 lines [GOOD]
|
||||
- steps-e/step-01-assess.md: 51 lines [GOOD]
|
||||
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
|
||||
- steps-v/step-01-validate.md: 53 lines [GOOD]
|
||||
- workflow-plan.md present: YES
|
||||
|
||||
## Frontmatter Validation
|
||||
|
||||
- No frontmatter violations found
|
||||
|
||||
## Critical Path Violations
|
||||
|
||||
- No {project-root} hardcoded paths detected in body
|
||||
- No dead relative links detected
|
||||
|
||||
## Menu Handling Validation
|
||||
|
||||
- No menu structures detected (linear step flow) [N/A]
|
||||
|
||||
## Step Type Validation
|
||||
|
||||
- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
|
||||
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]
|
||||
|
||||
## Output Format Validation
|
||||
|
||||
- Templates present: atdd-checklist-template.md
|
||||
- Steps with outputFile in frontmatter:
|
||||
- steps-c/step-04-generate-tests.md
|
||||
- steps-v/step-01-validate.md
|
||||
|
||||
## Validation Design Check
|
||||
|
||||
- checklist.md present: YES
|
||||
- Validation steps folder (steps-v) present: YES
|
||||
|
||||
## Instruction Style Check
|
||||
|
||||
- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics
|
||||
|
||||
## Summary
|
||||
|
||||
- Validation completed: 2026-01-27 10:03:10
|
||||
- Critical issues: 0
|
||||
- Warnings: 0 (informational notes only)
|
||||
- Readiness: READY (manual review optional)
|
||||
|
||||
---
|
||||
validationDate: 2026-01-27
|
||||
workflowName: testarch-atdd
|
||||
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-atdd
|
||||
validationStatus: COMPLETE
|
||||
completionDate: 2026-01-27 10:24:01
|
||||
---
|
||||
|
||||
# Validation Report: testarch-atdd
|
||||
|
||||
**Validation Started:** 2026-01-27 10:24:01
|
||||
**Validator:** BMAD Workflow Validation System (Codex)
|
||||
**Standards Version:** BMAD Workflow Standards
|
||||
|
||||
## File Structure & Size
|
||||
|
||||
- workflow.md present: YES
|
||||
- instructions.md present: YES
|
||||
- workflow.yaml present: YES
|
||||
- step files found: 8
|
||||
|
||||
**Step File Sizes:**
|
||||
|
||||
- steps-c/step-01-preflight-and-context.md: 100 lines [GOOD]
|
||||
- steps-c/step-02-generation-mode.md: 70 lines [GOOD]
|
||||
- steps-c/step-03-test-strategy.md: 69 lines [GOOD]
|
||||
- steps-c/step-04-generate-tests.md: 69 lines [GOOD]
|
||||
- steps-c/step-05-validate-and-complete.md: 60 lines [GOOD]
|
||||
- steps-e/step-01-assess.md: 50 lines [GOOD]
|
||||
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
|
||||
- steps-v/step-01-validate.md: 52 lines [GOOD]
|
||||
- workflow-plan.md present: YES
|
||||
|
||||
## Frontmatter Validation
|
||||
|
||||
- No frontmatter violations found
|
||||
|
||||
## Critical Path Violations
|
||||
|
||||
### Config Variables (Exceptions)
|
||||
|
||||
Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name
|
||||
|
||||
- No {project-root} hardcoded paths detected in body
|
||||
|
||||
- No dead relative links detected
|
||||
|
||||
- No module path assumptions detected
|
||||
|
||||
**Status:** ✅ PASS - No critical violations
|
||||
|
||||
## Menu Handling Validation
|
||||
|
||||
- No menu structures detected (linear step flow) [N/A]
|
||||
|
||||
## Step Type Validation
|
||||
|
||||
- steps-c/step-01-preflight-and-context.md: Init [PASS]
|
||||
- steps-c/step-02-generation-mode.md: Middle [PASS]
|
||||
- steps-c/step-03-test-strategy.md: Middle [PASS]
|
||||
- steps-c/step-04-generate-tests.md: Middle [PASS]
|
||||
- steps-c/step-05-validate-and-complete.md: Final [PASS]
|
||||
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]
|
||||
|
||||
## Output Format Validation
|
||||
|
||||
- Templates present: atdd-checklist-template.md
|
||||
- Steps with outputFile in frontmatter:
|
||||
- steps-c/step-04-generate-tests.md
|
||||
- steps-v/step-01-validate.md
|
||||
- checklist.md present: YES
|
||||
|
||||
## Validation Design Check
|
||||
|
||||
- Validation steps folder (steps-v) present: YES
|
||||
- Validation step(s) present: step-01-validate.md
|
||||
- Validation steps reference checklist data and auto-proceed
|
||||
|
||||
## Instruction Style Check
|
||||
|
||||
- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
|
||||
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance
|
||||
|
||||
## Collaborative Experience Check
|
||||
|
||||
- Overall facilitation quality: GOOD
|
||||
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
|
||||
- Flow progression is clear and aligned to workflow goals
|
||||
|
||||
## Subagent Optimization Opportunities
|
||||
|
||||
- No high-priority subagent optimizations identified; workflow already uses step-file architecture
|
||||
- Pattern 1 (grep/regex): N/A for most steps
|
||||
- Pattern 2 (per-file analysis): already aligned to validation structure
|
||||
- Pattern 3 (data ops): minimal data file loads
|
||||
- Pattern 4 (parallel): optional for validation only
|
||||
|
||||
## Cohesive Review
|
||||
|
||||
- Overall assessment: GOOD
|
||||
- Flow is linear, goals are clear, and outputs map to TEA artifacts
|
||||
- Voice and tone consistent with Test Architect persona
|
||||
- Recommendation: READY (minor refinements optional)
|
||||
|
||||
## Plan Quality Validation
|
||||
|
||||
- Plan file present: workflow-plan.md
|
||||
- Planned steps found: 8 (all implemented)
|
||||
- Plan implementation status: Fully Implemented
|
||||
|
||||
## Summary
|
||||
|
||||
- Validation completed: 2026-01-27 10:24:01
|
||||
- Critical issues: 0
|
||||
- Warnings: 0 (informational notes only)
|
||||
- Readiness: READY (manual review optional)
|
||||
@@ -0,0 +1,21 @@
# Workflow Plan: testarch-atdd

## Create Mode (steps-c)

- step-01-preflight-and-context.md
- step-02-generation-mode.md
- step-03-test-strategy.md
- step-04-generate-tests.md
- step-05-validate-and-complete.md

## Validate Mode (steps-v)

- step-01-validate.md

## Edit Mode (steps-e)

- step-01-assess.md
- step-02-apply-edit.md

## Outputs

- {test_artifacts}/atdd-checklist-{story_id}.md
- Failing acceptance tests under {project-root}/tests
41
_bmad/tea/workflows/testarch/bmad-testarch-atdd/workflow.md
Normal file
@@ -0,0 +1,41 @@
---
name: bmad-testarch-atdd
description: Generate failing acceptance tests using TDD cycle. Use when the user says 'lets write acceptance tests' or 'I want to do ATDD'
web_bundle: true
---

# Acceptance Test-Driven Development (ATDD)

**Goal:** Generate failing acceptance tests before implementation using the TDD red-green-refactor cycle

**Role:** You are the Master Test Architect.

---

## WORKFLOW ARCHITECTURE

This workflow uses **tri-modal step-file architecture**:

- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs

---

## INITIALIZATION SEQUENCE

### 1. Mode Determination

"Welcome to the workflow. What would you like to do?"

- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs

### 2. Route to First Step

- **If C:** Load `steps-c/step-01-preflight-and-context.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`
@@ -0,0 +1,46 @@
# Test Architect workflow: bmad-testarch-atdd
name: bmad-testarch-atdd
# prettier-ignore
description: 'Generate failing acceptance tests using TDD cycle. Use when the user says "lets write acceptance tests" or "I want to do ATDD"'

# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"
template: "./atdd-checklist-template.md"

# Variables and inputs
variables:
  test_dir: "{project-root}/tests" # Root test directory

# Output configuration
default_output_file: "{test_artifacts}/atdd-checklist-{story_id}.md"

# Required tools
required_tools:
  - read_file # Read story markdown, framework config
  - write_file # Create test files, checklist, factory stubs
  - create_directory # Create test directories
  - list_files # Find existing fixtures and helpers
  - search_repo # Search for similar test patterns

tags:
  - qa
  - atdd
  - test-architect
  - tdd
  - red-green-refactor

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true
@@ -0,0 +1,6 @@
---
name: bmad-testarch-automate
description: 'Expand test automation coverage for a codebase. Use when the user says "lets expand test coverage" or "I want to automate tests"'
---

Follow the instructions in [workflow.md](workflow.md).
@@ -0,0 +1 @@
type: skill
611
_bmad/tea/workflows/testarch/bmad-testarch-automate/checklist.md
Normal file
@@ -0,0 +1,611 @@
# Automate Workflow Validation Checklist

Use this checklist to validate that the automate workflow has been executed correctly and all deliverables meet quality standards.

## Prerequisites

Before starting this workflow, verify:

- [ ] Framework scaffolding configured (playwright.config.ts or cypress.config.ts exists)
- [ ] Test directory structure exists (tests/ folder with subdirectories)
- [ ] Package.json has test framework dependencies installed

**Halt only if:** Framework scaffolding is completely missing (run `framework` workflow first)

**Note:** BMad artifacts (story, tech-spec, PRD) are OPTIONAL - workflow can run without them
**Note:** `automate` generates tests; it does not run `*atdd` or `*test-review`. If ATDD outputs exist, use them as input and avoid duplicate coverage.

---
## Step 1: Execution Mode Determination and Context Loading

### Mode Detection

- [ ] Execution mode correctly determined:
  - [ ] BMad-Integrated Mode (story_file variable set) OR
  - [ ] Standalone Mode (target_feature or target_files set) OR
  - [ ] Auto-discover Mode (no targets specified)

### BMad Artifacts (If Available - OPTIONAL)

- [ ] Story markdown loaded (if `{story_file}` provided)
- [ ] Acceptance criteria extracted from story (if available)
- [ ] Tech-spec.md loaded (if `{use_tech_spec}` true and file exists)
- [ ] Test-design.md loaded (if `{use_test_design}` true and file exists)
- [ ] PRD.md loaded (if `{use_prd}` true and file exists)
- [ ] **Note**: Absence of BMad artifacts does NOT halt workflow

### Framework Configuration

- [ ] Test framework config loaded (playwright.config.ts or cypress.config.ts)
- [ ] Test directory structure identified from `{test_dir}`
- [ ] Existing test patterns reviewed
- [ ] Test runner capabilities noted (parallel execution, fixtures, etc.)

### Coverage Analysis

- [ ] Existing test files searched in `{test_dir}` (if `{analyze_coverage}` true)
- [ ] Tested features vs untested features identified
- [ ] Coverage gaps mapped (tests to source files)
- [ ] Existing fixture and factory patterns checked

### Knowledge Base Fragments Loaded

- [ ] `test-levels-framework.md` - Test level selection
- [ ] `test-priorities.md` - Priority classification (P0-P3)
- [ ] `fixture-architecture.md` - Fixture patterns with auto-cleanup
- [ ] `data-factories.md` - Factory patterns using faker
- [ ] `selective-testing.md` - Targeted test execution strategies
- [ ] `ci-burn-in.md` - Flaky test detection patterns
- [ ] `test-quality.md` - Test design principles

---
## Step 2: Automation Targets Identification

### Target Determination

**BMad-Integrated Mode (if story available):**

- [ ] Acceptance criteria mapped to test scenarios
- [ ] Features implemented in story identified
- [ ] Existing ATDD tests checked (if any)
- [ ] Expansion beyond ATDD planned (edge cases, negative paths)

**Standalone Mode (if no story):**

- [ ] Specific feature analyzed (if `{target_feature}` specified)
- [ ] Specific files analyzed (if `{target_files}` specified)
- [ ] Features auto-discovered (if `{auto_discover_features}` true)
- [ ] Features prioritized by:
  - [ ] No test coverage (highest priority)
  - [ ] Complex business logic
  - [ ] External integrations (API, database, auth)
  - [ ] Critical user paths (login, checkout, etc.)

### Test Level Selection

- [ ] Test level selection framework applied (from `test-levels-framework.md`)
- [ ] E2E tests identified: Critical user journeys, multi-system integration
- [ ] API tests identified: Business logic, service contracts, data transformations
- [ ] Component tests identified: UI behavior, interactions, state management
- [ ] Unit tests identified: Pure logic, edge cases, error handling

### Duplicate Coverage Avoidance

- [ ] Same behavior NOT tested at multiple levels unnecessarily
- [ ] E2E used for critical happy path only
- [ ] API tests used for business logic variations
- [ ] Component tests used for UI interaction edge cases
- [ ] Unit tests used for pure logic edge cases

### Priority Assignment

- [ ] Test priorities assigned using `test-priorities.md` framework
- [ ] P0 tests: Critical paths, security-critical, data integrity
- [ ] P1 tests: Important features, integration points, error handling
- [ ] P2 tests: Edge cases, less-critical variations, performance
- [ ] P3 tests: Nice-to-have, rarely-used features, exploratory
- [ ] Priority variables respected:
  - [ ] `{include_p0}` = true (always include)
  - [ ] `{include_p1}` = true (high priority)
  - [ ] `{include_p2}` = true (medium priority)
  - [ ] `{include_p3}` = false (low priority, skip by default)

### Coverage Plan Created

- [ ] Test coverage plan documented
- [ ] What will be tested at each level listed
- [ ] Priorities assigned to each test
- [ ] Coverage strategy clear (critical-paths, comprehensive, or selective)

---
## Step 3: Test Infrastructure Generated

### Fixture Architecture

- [ ] Existing fixtures checked in `tests/support/fixtures/`
- [ ] Fixture architecture created/enhanced (if `{generate_fixtures}` true)
- [ ] All fixtures use Playwright's `test.extend()` pattern
- [ ] All fixtures have auto-cleanup in teardown
- [ ] Common fixtures created/enhanced:
  - [ ] authenticatedUser (with auto-delete)
  - [ ] apiRequest (authenticated client)
  - [ ] mockNetwork (external service mocking)
  - [ ] testDatabase (with auto-cleanup)
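The setup-use-teardown shape behind these fixtures can be sketched in plain TypeScript. This is a minimal, self-contained sketch of the auto-cleanup guarantee, not Playwright itself: `createUser`, `deleteUser`, and the in-memory `db` are hypothetical stand-ins, and the real fixtures would use `test.extend()` with the same setup/`use`/teardown order.

```typescript
// Auto-cleanup fixture sketch: set up, hand the value to the test, and
// always tear down, even when the test throws. Playwright's test.extend()
// follows the same shape. createUser/deleteUser/db are illustrative only.

type User = { id: number; email: string };

const db: User[] = [];

async function createUser(email: string): Promise<User> {
  const user = { id: db.length + 1, email };
  db.push(user);
  return user;
}

async function deleteUser(id: number): Promise<void> {
  const i = db.findIndex((u) => u.id === id);
  if (i !== -1) db.splice(i, 1);
}

// Mirrors `test.extend({ authenticatedUser: async ({}, use) => { ... } })`.
async function withAuthenticatedUser<T>(
  run: (user: User) => Promise<T>
): Promise<T> {
  const user = await createUser("fixture@example.com");
  try {
    return await run(user); // the test body runs here
  } finally {
    await deleteUser(user.id); // auto-cleanup even if the test throws
  }
}

async function demo(): Promise<number> {
  await withAuthenticatedUser(async () => {
    // the "test" sees a live user
    if (db.length !== 1) throw new Error("user should exist during the test");
  });
  return db.length; // 0 after teardown
}
```

The `finally` block is the point: cleanup runs whether the test passes or fails, which is what keeps fixtures from polluting later tests.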
### Data Factories

- [ ] Existing factories checked in `tests/support/factories/`
- [ ] Factory architecture created/enhanced (if `{generate_factories}` true)
- [ ] All factories use `@faker-js/faker` for random data (no hardcoded values)
- [ ] All factories support overrides for specific scenarios
- [ ] Common factories created/enhanced:
  - [ ] User factory (email, password, name, role)
  - [ ] Product factory (name, price, SKU)
  - [ ] Order factory (items, total, status)
- [ ] Cleanup helpers provided (e.g., deleteUser(), deleteProduct())
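A factory with override support can be sketched as follows. To keep the sketch self-contained, `Math.random` stands in for `@faker-js/faker` (the checklist's actual requirement), and the `User` shape is illustrative, not the project's real schema.

```typescript
// Minimal data-factory sketch: random defaults, caller overrides win.
// Real factories here are expected to use @faker-js/faker; Math.random
// is a stand-in so this sketch runs without dependencies.

interface User {
  email: string;
  password: string;
  name: string;
  role: "admin" | "member";
}

// Stand-in for faker.internet.email() etc.
const rand = () => Math.random().toString(36).slice(2, 10);

function createUser(overrides: Partial<User> = {}): User {
  return {
    email: `user-${rand()}@example.com`, // unique per call, nothing hardcoded
    password: `pw-${rand()}`,
    name: `User ${rand()}`,
    role: "member",
    ...overrides, // overrides win, e.g. createUser({ role: "admin" })
  };
}
```

The spread at the end is what makes scenario-specific tests cheap: `createUser({ role: "admin" })` pins only the field the test cares about and leaves the rest random.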
### Helper Utilities

- [ ] Existing helpers checked in `tests/support/helpers/` (if `{update_helpers}` true)
- [ ] Common utilities created/enhanced:
  - [ ] waitFor (polling for complex conditions)
  - [ ] retry (retry helper for flaky operations)
  - [ ] testData (test data generation)
  - [ ] assertions (custom assertion helpers)

---
## Step 4: Test Files Generated

### Test File Structure

- [ ] Test files organized correctly:
  - [ ] `tests/e2e/` for E2E tests
  - [ ] `tests/api/` for API tests
  - [ ] `tests/component/` for component tests
  - [ ] `tests/unit/` for unit tests
  - [ ] `tests/support/` for fixtures/factories/helpers

### E2E Tests (If Applicable)

- [ ] E2E test files created in `tests/e2e/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags ([P0], [P1], [P2], [P3]) in test name
- [ ] All tests use data-testid selectors (not CSS classes)
- [ ] One assertion per test (atomic design)
- [ ] No hard waits or sleeps (explicit waits only)
- [ ] Network-first pattern applied (route interception BEFORE navigation)
- [ ] Clear Given-When-Then comments in test code
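The Given-When-Then shape with a priority tag looks like this. A minimal sketch on a pure function so it runs standalone; `applyDiscount` is hypothetical, and in the real suite the same comments and tagged name would wrap a Playwright `test()` block.

```typescript
// Given-When-Then structure with a priority tag in the test name.
// applyDiscount is an illustrative function under test.

function applyDiscount(total: number, percent: number): number {
  return Math.round(total * (1 - percent / 100) * 100) / 100;
}

// "[P1] applies a percentage discount to the cart total"
function testAppliesDiscount(): void {
  // Given: a cart totalling 200.00
  const total = 200;

  // When: a 10% discount is applied
  const discounted = applyDiscount(total, 10);

  // Then: the total reflects the discount (single, atomic assertion)
  if (discounted !== 180) throw new Error(`expected 180, got ${discounted}`);
}
```

One behavior, one assertion: when a tagged test like this fails, its name and its single Then clause already tell you what broke.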
### API Tests (If Applicable)

- [ ] API test files created in `tests/api/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags in test name
- [ ] API contracts validated (request/response structure)
- [ ] HTTP status codes verified
- [ ] Response body validation includes required fields
- [ ] Error cases tested (400, 401, 403, 404, 500)
- [ ] JWT token format validated (if auth tests)

### Consumer Contract Tests / CDC (If `use_pactjs_utils` Enabled)

**Provider Endpoint Comments:**

- [ ] Every Pact interaction has `// Provider endpoint:` comment
- [ ] Comment includes exact file path to provider route handler, OR uses the TODO form when provider is inaccessible
- [ ] Comment follows format: `// Provider endpoint: <path> -> <METHOD> <route>` or `// Provider endpoint: TODO — provider source not accessible, verify manually`

**Provider Source Scrutiny:**

- [ ] Provider route handlers and/or OpenAPI spec read before generating each interaction
- [ ] Status codes verified against provider source (e.g., 201 not assumed 200)
- [ ] Field names cross-referenced with provider type/DTO definitions
- [ ] Data types verified (string ID vs number ID, date formats)
- [ ] Enum/union values extracted from provider validation schemas
- [ ] Required request fields and headers checked against provider validation
- [ ] Nested response structures match provider's actual response construction
- [ ] Scrutiny evidence documented as block comment in each test file

**CDC Quality Gates:**

- [ ] Postel's Law enforced: exact values in `withRequest`, matchers in `willRespondWith`
- [ ] Response matchers (`like`, `eachLike`, `string`, `integer`) used only in `willRespondWith`
- [ ] Provider state names are consistent with provider's state handler naming
- [ ] DI pattern used for consumer function imports (actual consumer code, not raw `fetch()`)
- [ ] One logical endpoint per Pact interaction (no multi-endpoint interactions)
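The Postel's Law gate above can be illustrated with a toy sketch: the request side pins exact values, the response side only constrains shape. Here `like` and `conforms` are deliberately simplified stand-ins for Pact's matchers, NOT the real `@pact-foundation/pact` API; the endpoint, fields, and values are invented for illustration.

```typescript
// Toy sketch of Postel's Law in consumer contracts: be strict in what you
// send (exact request values), liberal in what you accept (type-shaped
// response matchers). `like` is a stand-in for Pact's matcher.

type Matcher = { kind: "like"; example: unknown };
const like = (example: unknown): Matcher => ({ kind: "like", example });

// withRequest side: exact values only.
const expectedRequest = { method: "POST", path: "/orders", body: { sku: "ABC-1" } };

// willRespondWith side: matchers, so the provider can vary actual values.
const expectedResponse = { status: 201, body: { id: like(42), total: like(9.99) } };

// A value conforms to a `like` matcher when its runtime type matches the
// example's type; plain values must match exactly; objects recurse.
function conforms(value: unknown, spec: unknown): boolean {
  if (spec !== null && typeof spec === "object" && (spec as Matcher).kind === "like") {
    return typeof value === typeof (spec as Matcher).example;
  }
  if (spec !== null && typeof spec === "object") {
    return Object.entries(spec as Record<string, unknown>).every(([k, v]) =>
      conforms((value as Record<string, unknown> | undefined)?.[k], v)
    );
  }
  return value === spec; // exact match for plain values
}
```

Under this sketch, any 201 response with a numeric `id` and numeric `total` passes, while a provider that switched `id` to a string would fail the contract, which is exactly the drift CDC tests exist to catch.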
### Component Tests (If Applicable)

- [ ] Component test files created in `tests/component/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags in test name
- [ ] Component mounting works correctly
- [ ] Interaction testing covers user actions (click, hover, keyboard)
- [ ] State management validated
- [ ] Props and events tested

### Unit Tests (If Applicable)

- [ ] Unit test files created in `tests/unit/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags in test name
- [ ] Pure logic tested (no dependencies)
- [ ] Edge cases covered
- [ ] Error handling tested

### Quality Standards Enforced

- [ ] All tests use Given-When-Then format with clear comments
- [ ] All tests have descriptive names with priority tags
- [ ] No duplicate tests (same behavior tested multiple times)
- [ ] No flaky patterns (race conditions, timing issues)
- [ ] No test interdependencies (tests can run in any order)
- [ ] Tests are deterministic (same input always produces same result)
- [ ] All tests use data-testid selectors (E2E tests)
- [ ] No hard waits: `await page.waitForTimeout()` (forbidden)
- [ ] No conditional flow: `if (await element.isVisible())` (forbidden)
- [ ] No try-catch for test logic (only for cleanup)
- [ ] No hardcoded test data (use factories with faker)
- [ ] No page object classes (tests are direct and simple)
- [ ] No shared state between tests
### Network-First Pattern Applied

- [ ] Route interception set up BEFORE navigation (E2E tests with network requests)
- [ ] `page.route()` called before `page.goto()` to prevent race conditions
- [ ] Network-first pattern verified in all E2E tests that make API calls
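The race this pattern prevents can be shown in a plain-TypeScript simulation: if the page's first request fires before the mock is registered, the real network wins. `FakePage` is an invented stand-in; `page.route()` and `page.goto()` are the Playwright equivalents named in the checklist.

```typescript
// Why interception must precede navigation: navigation immediately issues
// the page's request, so a handler registered afterwards never sees it.
// FakePage is illustrative; the real calls are page.route()/page.goto().

type Handler = (url: string) => string;

class FakePage {
  private routes = new Map<string, Handler>();
  public log: string[] = [];

  route(pattern: string, handler: Handler): void {
    this.routes.set(pattern, handler); // like page.route()
  }

  goto(url: string): void {
    // navigation fires the request right away
    const handler = this.routes.get(url);
    this.log.push(handler ? handler(url) : `REAL NETWORK: ${url}`);
  }
}

const good = new FakePage();
good.route("/api/user", () => "mocked user"); // intercept FIRST...
good.goto("/api/user");                       // ...then navigate

const bad = new FakePage();
bad.goto("/api/user");                        // navigates first: request escapes
bad.route("/api/user", () => "mocked user");  // too late, mock never used
```

In a real browser the same ordering bug is worse because it is timing-dependent: the late-registered mock sometimes wins the race, producing exactly the flakiness the checklist forbids.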
---

## Step 5: Test Validation and Healing (NEW - Phase 2.5)

### Healing Configuration

- [ ] Healing configuration checked:
  - [ ] `{auto_validate}` setting noted (default: true)
  - [ ] `{auto_heal_failures}` setting noted (default: false)
  - [ ] `{max_healing_iterations}` setting noted (default: 3)
  - [ ] `{use_mcp_healing}` setting noted (default: true)

### Healing Knowledge Fragments Loaded (If Healing Enabled)

- [ ] `test-healing-patterns.md` loaded (common failure patterns and fixes)
- [ ] `selector-resilience.md` loaded (selector refactoring guide)
- [ ] `timing-debugging.md` loaded (race condition fixes)

### Test Execution and Validation

- [ ] Generated tests executed (if `{auto_validate}` true)
- [ ] Test results captured:
  - [ ] Total tests run
  - [ ] Passing tests count
  - [ ] Failing tests count
  - [ ] Error messages and stack traces captured

### Healing Loop (If Enabled and Tests Failed)

- [ ] Healing loop entered (if `{auto_heal_failures}` true AND tests failed)
- [ ] For each failing test:
  - [ ] Failure pattern identified (selector, timing, data, network, hard wait)
  - [ ] Appropriate healing strategy applied:
    - [ ] Stale selector → Replaced with data-testid or ARIA role
    - [ ] Race condition → Added network-first interception or state waits
    - [ ] Dynamic data → Replaced hardcoded values with regex/dynamic generation
    - [ ] Network error → Added route mocking
    - [ ] Hard wait → Replaced with event-based wait
  - [ ] Healed test re-run to validate fix
  - [ ] Iteration count tracked (max 3 attempts)
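Pattern-based triage as described above amounts to mapping an error message to a healing strategy. A minimal sketch: the regex patterns here are illustrative guesses, not the actual contents of `test-healing-patterns.md`.

```typescript
// Failure triage sketch: match the error message against known failure
// patterns and name the healing strategy from the checklist. After the
// iteration budget is exhausted, the test falls through to test.fixme().

type Strategy =
  | "replace-selector-with-testid"
  | "add-network-first-interception"
  | "use-dynamic-data"
  | "mock-network-route"
  | "replace-hard-wait";

const patterns: Array<[RegExp, Strategy]> = [
  [/waitForTimeout/i, "replace-hard-wait"],
  [/selector .* not found|locator resolved to 0/i, "replace-selector-with-testid"],
  [/ECONNREFUSED|net::ERR/i, "mock-network-route"],
  [/expected .* received .*\d{4}-\d{2}-\d{2}/i, "use-dynamic-data"],
  [/timeout .* waiting for response/i, "add-network-first-interception"],
];

function triage(errorMessage: string): Strategy | "manual-investigation" {
  for (const [re, strategy] of patterns) {
    if (re.test(errorMessage)) return strategy;
  }
  return "manual-investigation"; // unmatched failures -> test.fixme() path
}
```

First match wins, so more specific patterns should be listed before generic ones; the fallback return is what feeds the "unfixable tests" handling in the next section.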
### Unfixable Tests Handling

- [ ] Tests that couldn't be healed after 3 iterations marked with `test.fixme()` (if `{mark_unhealable_as_fixme}` true)
- [ ] Detailed comment added to test.fixme() tests:
  - [ ] What failure occurred
  - [ ] What healing was attempted (3 iterations)
  - [ ] Why healing failed
  - [ ] Manual investigation steps needed
- [ ] Original test logic preserved in comments

### Healing Report Generated

- [ ] Healing report generated (if healing attempted)
- [ ] Report includes:
  - [ ] Auto-heal enabled status
  - [ ] Healing mode (MCP-assisted or Pattern-based)
  - [ ] Iterations allowed (max_healing_iterations)
  - [ ] Validation results (total, passing, failing)
  - [ ] Successfully healed tests (count, file:line, fix applied)
  - [ ] Unable to heal tests (count, file:line, reason)
  - [ ] Healing patterns applied (selector fixes, timing fixes, data fixes)
  - [ ] Knowledge base references used

---
## Step 6: Documentation and Scripts Updated

### Test README Updated

- [ ] `tests/README.md` created or updated (if `{update_readme}` true)
- [ ] Test suite structure overview included
- [ ] Test execution instructions provided (all, specific files, by priority)
- [ ] Fixture usage examples provided
- [ ] Factory usage examples provided
- [ ] Priority tagging convention explained ([P0], [P1], [P2], [P3])
- [ ] How to write new tests documented
- [ ] Common patterns documented
- [ ] Anti-patterns documented (what to avoid)

### package.json Scripts Updated

- [ ] package.json scripts added/updated (if `{update_package_scripts}` true)
- [ ] `test:e2e` script for all E2E tests
- [ ] `test:e2e:p0` script for P0 tests only
- [ ] `test:e2e:p1` script for P0 + P1 tests
- [ ] `test:api` script for API tests
- [ ] `test:component` script for component tests
- [ ] `test:unit` script for unit tests (if applicable)
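One possible shape for those scripts, assuming a Playwright runner and `[P0]`/`[P1]` tags embedded in test titles (both assumptions; Cypress or a different tagging scheme would change the commands):

```json
{
  "scripts": {
    "test:e2e": "playwright test tests/e2e",
    "test:e2e:p0": "playwright test tests/e2e --grep \"\\[P0\\]\"",
    "test:e2e:p1": "playwright test tests/e2e --grep \"\\[P0\\]|\\[P1\\]\"",
    "test:api": "playwright test tests/api",
    "test:component": "playwright test tests/component",
    "test:unit": "playwright test tests/unit"
  }
}
```

Playwright's `--grep` takes a regular expression against test titles, which is why the square brackets in the tags are escaped.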
### Test Suite Executed

- [ ] Test suite run locally (if `{run_tests_after_generation}` true)
- [ ] Test results captured (passing/failing counts)
- [ ] No flaky patterns detected (tests are deterministic)
- [ ] Setup requirements documented (if any)
- [ ] Known issues documented (if any)

---
## Step 7: Automation Summary Generated

### Automation Summary Document

- [ ] Output file created at `{output_summary}`
- [ ] Document includes execution mode (BMad-Integrated, Standalone, Auto-discover)
- [ ] Feature analysis included (source files, coverage gaps) - Standalone mode
- [ ] Tests created listed (E2E, API, Component, Unit) with counts and paths
- [ ] Infrastructure created listed (fixtures, factories, helpers)
- [ ] Test execution instructions provided
- [ ] Coverage analysis included:
  - [ ] Total test count
  - [ ] Priority breakdown (P0, P1, P2, P3 counts)
  - [ ] Test level breakdown (E2E, API, Component, Unit counts)
  - [ ] Coverage percentage (if calculated)
  - [ ] Coverage status (acceptance criteria covered, gaps identified)
- [ ] Definition of Done checklist included
- [ ] Next steps provided
- [ ] Recommendations included (if Standalone mode)

### Summary Provided to User

- [ ] Concise summary output provided
- [ ] Total tests created across test levels
- [ ] Priority breakdown (P0, P1, P2, P3 counts)
- [ ] Infrastructure counts (fixtures, factories, helpers)
- [ ] Test execution command provided
- [ ] Output file path provided
- [ ] Next steps listed

---
## Quality Checks

### Test Design Quality

- [ ] Tests are readable (clear Given-When-Then structure)
- [ ] Tests are maintainable (use factories/fixtures, not hardcoded data)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Tests are deterministic (no race conditions or flaky patterns)
- [ ] Tests are atomic (one assertion per test)
- [ ] Tests are fast (no unnecessary waits or delays)
- [ ] Tests are lean (files under {max_file_lines} lines)

### Knowledge Base Integration

- [ ] Test level selection framework applied (from `test-levels-framework.md`)
- [ ] Priority classification applied (from `test-priorities.md`)
- [ ] Fixture architecture patterns applied (from `fixture-architecture.md`)
- [ ] Data factory patterns applied (from `data-factories.md`)
- [ ] Selective testing strategies considered (from `selective-testing.md`)
- [ ] Flaky test detection patterns considered (from `ci-burn-in.md`)
- [ ] Test quality principles applied (from `test-quality.md`)

### Code Quality

- [ ] All TypeScript types are correct and complete
- [ ] No linting errors in generated test files
- [ ] Consistent naming conventions followed
- [ ] Imports are organized and correct
- [ ] Code follows project style guide
- [ ] No console.log or debug statements in test code

---
## Integration Points

### With Framework Workflow

- [ ] Test framework configuration detected and used
- [ ] Directory structure matches framework setup
- [ ] Fixtures and helpers follow established patterns
- [ ] Naming conventions consistent with framework standards

### With BMad Workflows (If Available - OPTIONAL)

**With Story Workflow:**

- [ ] Story ID correctly referenced in output (if story available)
- [ ] Acceptance criteria from story reflected in tests (if story available)
- [ ] Technical constraints from story considered (if story available)

**With test-design Workflow:**

- [ ] P0 scenarios from test-design prioritized (if test-design available)
- [ ] Risk assessment from test-design considered (if test-design available)
- [ ] Coverage strategy aligned with test-design (if test-design available)

**With atdd Workflow:**

- [ ] ATDD artifacts provided or located (manual handoff; `atdd` not auto-run)
- [ ] Existing ATDD tests checked (if story had ATDD workflow run)
- [ ] Expansion beyond ATDD planned (edge cases, negative paths)
- [ ] No duplicate coverage with ATDD tests

### With CI Pipeline

- [ ] Tests can run in CI environment
- [ ] Tests are parallelizable (no shared state)
- [ ] Tests have appropriate timeouts
- [ ] Tests clean up their data (no CI environment pollution)

---
## Completion Criteria

All of the following must be true before marking this workflow as complete:

- [ ] **Execution mode determined** (BMad-Integrated, Standalone, or Auto-discover)
- [ ] **Framework configuration loaded** and validated
- [ ] **Coverage analysis completed** (gaps identified if analyze_coverage true)
- [ ] **Automation targets identified** (what needs testing)
- [ ] **Test levels selected** appropriately (E2E, API, Component, Unit)
- [ ] **Duplicate coverage avoided** (same behavior not tested at multiple levels)
- [ ] **Test priorities assigned** (P0, P1, P2, P3)
- [ ] **Fixture architecture created/enhanced** with auto-cleanup
- [ ] **Data factories created/enhanced** using faker (no hardcoded data)
- [ ] **Helper utilities created/enhanced** (if needed)
- [ ] **Test files generated** at appropriate levels (E2E, API, Component, Unit)
- [ ] **Given-When-Then format used** consistently across all tests
- [ ] **Priority tags added** to all test names ([P0], [P1], [P2], [P3])
- [ ] **data-testid selectors used** in E2E tests (not CSS classes)
- [ ] **Network-first pattern applied** (route interception before navigation)
- [ ] **Quality standards enforced** (no hard waits, no flaky patterns, self-cleaning, deterministic)
- [ ] **Test README updated** with execution instructions and patterns
- [ ] **package.json scripts updated** with test execution commands
- [ ] **Test suite run locally** (if run_tests_after_generation true)
- [ ] **Tests validated** (if auto_validate enabled)
- [ ] **Failures healed** (if auto_heal_failures enabled and tests failed)
- [ ] **Healing report generated** (if healing attempted)
- [ ] **Unfixable tests marked** with test.fixme() and detailed comments (if any)
- [ ] **Automation summary created** and saved to correct location
- [ ] **Output file formatted correctly**
- [ ] **Knowledge base references applied** and documented (including healing fragments if used)
- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data, page objects)
- [ ] **Provider scrutiny completed or gracefully degraded** for all CDC interactions — each interaction either has scrutiny evidence or a TODO marker (if `use_pactjs_utils` enabled)
- [ ] **Provider endpoint comments present** on every Pact interaction (if `use_pactjs_utils` enabled)

---
## Common Issues and Resolutions
|
||||
|
||||
### Issue: BMad artifacts not found
|
||||
|
||||
**Problem:** Story, tech-spec, or PRD files not found when variables are set.
|
||||
|
||||
**Resolution:**
|
||||
|
||||
- **automate does NOT require BMad artifacts** - they are OPTIONAL enhancements
|
||||
- If files not found, switch to Standalone Mode automatically
|
||||
- Analyze source code directly without BMad context
|
||||
- Continue workflow without halting
|
||||
|
||||
### Issue: Framework configuration not found
|
||||
|
||||
**Problem:** No playwright.config.ts or cypress.config.ts found.
|
||||
|
||||
**Resolution:**
|
||||
|
||||
- **HALT workflow** - framework is required
|
||||
- Message: "Framework scaffolding required. Run `bmad tea *framework` first."
|
||||
- User must run framework workflow before automate
|
||||
|
||||
### Issue: No automation targets identified
|
||||
|
||||
**Problem:** Neither story, target_feature, nor target_files specified, and auto-discover finds nothing.
|
||||
|
||||
**Resolution:**
|
||||
|
||||
- Check if source_dir variable is correct
|
||||
- Verify source code exists in project
|
||||
- Ask user to specify target_feature or target_files explicitly
|
||||
- Provide examples: `target_feature: "src/auth/"` or `target_files: "src/auth/login.ts,src/auth/session.ts"`
|
||||
|
||||
### Issue: Duplicate coverage detected
|
||||
|
||||
**Problem:** Same behavior tested at multiple levels (E2E + API + Component).
|
||||
|
||||
**Resolution:**
|
||||
|
||||
- Review test level selection framework (test-levels-framework.md)
|
||||
- Use E2E for critical happy path ONLY
|
||||
- Use API for business logic variations
|
||||
- Use Component for UI edge cases
|
||||
- Remove redundant tests that duplicate coverage
|
||||
|
||||
### Issue: Tests have hardcoded data
|
||||
|
||||
**Problem:** Tests use hardcoded email addresses, passwords, or other data.
|
||||
|
||||
**Resolution:**
|
||||
|
||||
- Replace all hardcoded data with factory function calls
|
||||
- Use faker for all random data generation
|
||||
- Update data-factories to support all required test scenarios
|
||||
- Example: `createUser({ email: faker.internet.email() })`
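The factory pattern above can be sketched as follows. This is a minimal illustration, not the project's actual factory: `randomString` is a stand-in so the snippet runs without dependencies, where a real suite would use `@faker-js/faker` for the generated defaults.

```javascript
// Minimal factory sketch: every field gets a generated default, and callers
// override only what the scenario cares about. `randomString` is a hypothetical
// stand-in for faker so the example stays self-contained.
const randomString = (len = 8) =>
  Array.from({ length: len }, () =>
    'abcdefghijklmnopqrstuvwxyz'[Math.floor(Math.random() * 26)],
  ).join('');

const createUser = (overrides = {}) => ({
  email: `${randomString()}@example.com`,
  password: randomString(16),
  role: 'member',
  ...overrides, // scenario-specific values win over generated defaults
});

// Usage: only the field under test is pinned; everything else stays random.
const admin = createUser({ role: 'admin' });
```

Because defaults are generated per call, two tests never collide on the same email, which keeps parallel runs self-contained.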

### Issue: Tests are flaky

**Problem:** Tests fail intermittently, pass on retry.

**Resolution:**

- Remove all hard waits (`page.waitForTimeout()`)
- Use explicit waits (`page.waitForSelector()`)
- Apply network-first pattern (route interception before navigation)
- Remove conditional flow (`if (await element.isVisible())`)
- Ensure tests are deterministic (no race conditions)
- Run burn-in loop (10 iterations) to detect flakiness
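The hard-wait vs. explicit-wait distinction can be shown with a generic polling sketch. `waitFor` here is a hypothetical helper, not a Playwright API; in real tests, Playwright's own `page.waitForSelector()` or web-first assertions play this role. The point is that the test resolves as soon as the condition holds, instead of sleeping a fixed amount and hoping.

```javascript
// Generic explicit-wait sketch: poll a condition until it holds or a deadline
// passes. A hard wait (fixed sleep) either wastes time or expires too early;
// polling against a deadline is deterministic with respect to the condition.
const waitFor = async (predicate, { timeout = 5000, interval = 50 } = {}) => {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await predicate()) return true; // resolve as soon as the condition holds
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error(`condition not met within ${timeout}ms`);
};
```

A test using this pattern fails loudly with a timeout message when something is genuinely broken, rather than failing intermittently.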

### Issue: Fixtures don't clean up data

**Problem:** Test data persists after test run, causing test pollution.

**Resolution:**

- Ensure all fixtures have cleanup in teardown phase
- Cleanup happens AFTER `await use(data)`
- Call deletion/cleanup functions (deleteUser, deleteProduct, etc.)
- Verify cleanup works by checking database/storage after test run
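The setup/use/teardown ordering can be sketched without a framework dependency. The shape below mirrors a Playwright fixture's `use` callback, but `userFixture`, `createUser`, and `deleteUser` are hypothetical names for illustration, so the ordering is easy to see in isolation.

```javascript
// Sketch of the fixture lifecycle: set up data, hand it to the test via
// `use`, and clean up only AFTER `use(data)` resolves.
const events = [];
const createUser = async () => { events.push('setup'); return { id: 1 }; };
const deleteUser = async (user) => { events.push(`teardown:${user.id}`); };

const userFixture = async (use) => {
  const user = await createUser(); // setup phase
  await use(user);                 // test body runs here
  await deleteUser(user);          // teardown: runs only after use() resolves
};

// A "test" consuming the fixture; `demo` resolves once teardown is done.
const demo = userFixture(async (user) => { events.push(`test:${user.id}`); });
```

If teardown were placed before `await use(...)`, or skipped on failure paths, data would leak between tests; keeping it strictly after `use` is what makes the fixture self-cleaning.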

### Issue: Tests too slow

**Problem:** Tests take longer than 90 seconds (max_test_duration).

**Resolution:**

- Remove unnecessary waits and delays
- Use parallel execution where possible
- Mock external services (don't make real API calls)
- Use API tests instead of E2E for business logic
- Optimize test data creation (use in-memory database, etc.)

---

## Notes for TEA Agent

- **automate is flexible:** Can work with or without BMad artifacts (story, tech-spec, PRD are OPTIONAL)
- **Standalone mode is powerful:** Analyze any codebase and generate tests independently
- **Auto-discover mode:** Scan codebase for features needing tests when no targets specified
- **Framework is the ONLY hard requirement:** HALT if framework config missing, otherwise proceed
- **Avoid duplicate coverage:** E2E for critical paths only, API/Component for variations
- **Priority tagging enables selective execution:** P0 tests run on every commit, P1 on PR, P2 nightly
- **Network-first pattern prevents race conditions:** Route interception BEFORE navigation
- **No page objects:** Keep tests simple, direct, and maintainable
- **Use knowledge base:** Load relevant fragments (test-levels, test-priorities, fixture-architecture, data-factories, healing patterns) for guidance
- **Deterministic tests only:** No hard waits, no conditional flow, no flaky patterns allowed
- **Optional healing:** auto_heal_failures disabled by default (opt-in for automatic test healing)
- **Graceful degradation:** Healing works without Playwright MCP (pattern-based fallback)
- **Unfixable tests handled:** Mark with test.fixme() and detailed comments (not silently broken)

@@ -0,0 +1,50 @@

<!-- Powered by BMAD-CORE™ -->

# Test Automation Expansion

**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-automate`
**Version**: 5.0 (Step-File Architecture)

---

## Overview

Expands test automation coverage by generating prioritized tests at the appropriate level (E2E, API, Component, Unit) with supporting fixtures and helpers.

Modes:

- **BMad-Integrated**: Uses story/PRD/test-design artifacts when available
- **Standalone**: Analyzes existing codebase without BMad artifacts

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture** for disciplined execution:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `test_dir`, `source_dir`, `coverage_target`, `standalone_mode`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-preflight-and-context.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress tracking frontmatter and routes to the next incomplete step.

@@ -0,0 +1,237 @@

---
name: 'step-01-preflight-and-context'
description: 'Determine mode, verify framework, and load context and knowledge'
outputFile: '{test_artifacts}/automation-summary.md'
nextStepFile: './step-02-identify-targets.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
---

# Step 1: Preflight & Context Loading

## STEP GOAL

Determine execution mode, verify framework readiness, and load the necessary artifacts and knowledge fragments.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if framework scaffolding is missing

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Stack Detection & Verify Framework

**Read `config.test_stack_type`** from `{config_source}`.

**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):

- Scan `{project-root}` for project manifests:
  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
- **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- Explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)

Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`

**Verify framework exists:**

**If {detected_stack} is `frontend` or `fullstack`:**

- `playwright.config.ts` or `cypress.config.ts`
- `package.json` includes test dependencies

**If {detected_stack} is `backend` or `fullstack`:**

- Relevant test config exists (e.g., `conftest.py`, `src/test/`, `*_test.go`, `.rspec`, test project `*.csproj`)

If missing: **HALT** with message "Run `framework` workflow first."

---

## 2. Determine Execution Mode

- **BMad-Integrated** if story/tech-spec/test-design artifacts are provided or found
- **Standalone** if only source code is available
- If unclear, ask the user which mode to use

---

## 3. Load Context

### BMad-Integrated (if available)

- Story with acceptance criteria
- PRD and/or tech spec
- Test-design document (if exists)

### Standalone

- Skip artifacts; proceed to codebase analysis

### Always Load

- Test framework config
- Existing test structure in `{test_dir}`
- Existing tests (for coverage gaps)

### Read TEA Config Flags

- From `{config_source}` read `tea_use_playwright_utils`
- From `{config_source}` read `tea_use_pactjs_utils`
- From `{config_source}` read `tea_pact_mcp`
- From `{config_source}` read `tea_browser_automation`
- From `{config_source}` read `test_stack_type`

---

### Tiered Knowledge Loading

Load fragments based on their `tier` classification in `tea-index.csv`:

1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)

> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.

### Playwright Utils Loading Profiles

**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:

- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)

- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
  Load: all Playwright Utils core fragments (~4,500 lines)

**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none found, use the API-only profile.

### Pact.js Utils Loading

**If `tea_use_pactjs_utils` is enabled** (and `{detected_stack}` is `backend` or `fullstack`, or microservices indicators detected):

Load: `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md` (~800 lines)

**If `tea_use_pactjs_utils` is disabled** but contract testing is relevant (microservices architecture detected, existing Pact config found):

Load: `contract-testing.md` (~960 lines)

**Detection**: Scan `{project-root}` for Pact indicators: `pact/` directory, `@pact-foundation/pact` in `package.json`, `pactUrls` in test files, `PACT_BROKER` in env files.

### Pact MCP Loading

**If `tea_pact_mcp` is `"mcp"`:**

Load: `pact-mcp.md` (~150 lines) — enables the agent to use SmartBear MCP tools for fetching provider states and generating pact tests during automation.

## 4. Load Knowledge Base Fragments

Use `{knowledgeIndex}` and load only what is required.

**Core (always load):**

- `test-levels-framework.md`
- `test-priorities-matrix.md`
- `data-factories.md`
- `selective-testing.md`
- `ci-burn-in.md`
- `test-quality.md`

**Playwright Utils (if enabled):**

- `overview.md`, `api-request.md`, `network-recorder.md`, `auth-session.md`, `intercept-network-call.md`, `recurse.md`, `log.md`, `file-utils.md`, `burn-in.md`, `network-error-monitor.md`, `fixtures-composition.md`

**Traditional Patterns (if Playwright Utils disabled):**

- `fixture-architecture.md`
- `network-first.md`

**Pact.js Utils (if enabled):**

- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**Contract Testing (if pactjs-utils disabled but relevant):**

- `contract-testing.md`

**Pact MCP (if tea_pact_mcp is "mcp"):**

- `pact-mcp.md`

**Healing (if auto-heal enabled):**

- `test-healing-patterns.md`
- `selector-resilience.md`
- `timing-debugging.md`

**Playwright CLI (if tea_browser_automation is "cli" or "auto"):**

- `playwright-cli.md`

**MCP Patterns (if tea_browser_automation is "mcp" or "auto"):**

- (existing MCP-related fragments, if any are added in the future)

---

## 5. Confirm Inputs

Summarize loaded artifacts, framework, and knowledge fragments, then proceed.

---

## 6. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-01-preflight-and-context']
  lastStep: 'step-01-preflight-and-context'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-preflight-and-context'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-preflight-and-context'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
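The create-or-update frontmatter logic above can be sketched as a small string transform. `markStepComplete` is a hypothetical helper for illustration: it handles only the three flat keys this workflow writes, using naive line-level replacement where a real implementation might parse the YAML properly.

```javascript
// Sketch: mark a step complete in the output document's progress frontmatter.
// Creates the frontmatter on first save; otherwise appends to stepsCompleted
// (idempotently) and refreshes lastStep / lastSaved.
function markStepComplete(doc, step, date) {
  const m = doc.match(/^---\n([\s\S]*?)\n---\n/);
  if (!m) {
    // First save: create frontmatter above the existing body.
    return `---\nstepsCompleted: ['${step}']\nlastStep: '${step}'\nlastSaved: '${date}'\n---\n${doc}`;
  }
  let fm = m[1];
  fm = fm.replace(/stepsCompleted: \[(.*)\]/, (_, list) => {
    const steps = list ? list.split(',').map((s) => s.trim()) : [];
    if (!steps.includes(`'${step}'`)) steps.push(`'${step}'`); // only if not already present
    return `stepsCompleted: [${steps.join(', ')}]`;
  });
  fm = fm.replace(/lastStep: '.*'/, `lastStep: '${step}'`);
  fm = fm.replace(/lastSaved: '.*'/, `lastSaved: '${date}'`);
  return doc.replace(m[0], `---\n${fm}\n---\n`);
}
```

Running it twice with the same step name leaves the array unchanged, which matches the "only if not already present" rule.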

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,94 @@

---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/automation-summary.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Output Document

Read `{outputFile}` and parse YAML frontmatter for:

- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save

**If `{outputFile}` does not exist**, display:

"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Display Progress Dashboard

Display progress with ✅/⬜ indicators:

1. ✅/⬜ Preflight & Context (step-01-preflight-and-context)
2. ✅/⬜ Identify Targets (step-02-identify-targets)
3. ✅/⬜ Generate Tests + Aggregate (step-03c-aggregate)
4. ✅/⬜ Validate & Summarize (step-04-validate-and-summarize)

---

### 3. Route to Next Step

Based on `lastStep`, load the next incomplete step:

- `'step-01-preflight-and-context'` → load `./step-02-identify-targets.md`
- `'step-02-identify-targets'` → load `./step-03-generate-tests.md`
- `'step-03c-aggregate'` → load `./step-04-validate-and-summarize.md`
- `'step-04-validate-and-summarize'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.

**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.

**Otherwise**, load the identified step file, read completely, and execute.

The existing content in `{outputFile}` provides context from previously completed steps.
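The routing rules above amount to a lookup table plus two halt cases, which can be sketched as follows. `routeResume` and its return shape are illustrative names, not part of the workflow runtime; the step names and file paths are taken verbatim from this step.

```javascript
// Sketch of the resume routing table: map the last completed step to the
// next step file, halting on completion or on an unknown progress state.
const NEXT_STEP = {
  'step-01-preflight-and-context': './step-02-identify-targets.md',
  'step-02-identify-targets': './step-03-generate-tests.md',
  'step-03c-aggregate': './step-04-validate-and-summarize.md',
};

function routeResume(lastStep) {
  if (lastStep === 'step-04-validate-and-summarize') {
    return { action: 'halt', reason: 'workflow already complete' };
  }
  const next = NEXT_STEP[lastStep];
  if (!next) {
    return { action: 'halt', reason: `unknown progress state: ${lastStep}` };
  }
  return { action: 'load', file: next };
}
```

Encoding the routes as data rather than prose makes the "never re-execute completed steps" rule mechanical: the table only ever points forward.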

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Output document loaded and parsed correctly
- Progress dashboard displayed accurately
- Routed to correct next step

### ❌ SYSTEM FAILURE:

- Not loading output document
- Incorrect progress display
- Routing to wrong step

**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.

@@ -0,0 +1,169 @@

---
name: 'step-02-identify-targets'
description: 'Identify automation targets and create coverage plan'
outputFile: '{test_artifacts}/automation-summary.md'
nextStepFile: './step-03-generate-tests.md'
---

# Step 2: Identify Automation Targets

## STEP GOAL

Determine what needs to be tested and select appropriate test levels and priorities.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Avoid duplicate coverage across test levels

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Determine Targets

**BMad-Integrated:**

- Map acceptance criteria to test scenarios
- Check for existing ATDD outputs to avoid duplication
- Expand coverage with edge cases and negative paths

**Standalone:**

- If specific target feature/files are provided, focus there
- Otherwise auto-discover features in `{source_dir}`
- Prioritize critical paths, integrations, and untested logic

**If {detected_stack} is `frontend` or `fullstack`:**

**Browser Exploration (if `tea_browser_automation` is `cli` or `auto`):**

> **Fallback:** If the CLI is not installed, fall back to MCP (if available) or skip browser exploration and rely on code/doc analysis.

Use the CLI to explore the application and identify testable pages/flows:

1. `playwright-cli -s=tea-automate open <target_url>`
2. `playwright-cli -s=tea-automate snapshot` → capture page structure and element refs
3. Analyze snapshot output to identify testable elements and flows
4. `playwright-cli -s=tea-automate close`

> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-automate close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.

**If {detected_stack} is `backend` or `fullstack`:**

**Source & API Analysis (no browser exploration):**

- Scan source code for route handlers, controllers, service classes, and public APIs
- Read OpenAPI/Swagger specs (`openapi.yaml`, `swagger.json`) if available
- Identify database models, migrations, and data access patterns
- Map service-to-service integrations and message queue consumers/producers
- Check for existing contract tests (Pact, etc.)

---

**If `use_pactjs_utils` is enabled — Provider Endpoint Mapping (all stacks):**

When consumer-driven contract tests will be generated, build a Provider Endpoint Map during target identification. This applies to all `{detected_stack}` values — frontend, backend, and fullstack consumers all need provider scrutiny.

1. **Locate provider source and/or OpenAPI spec**: Scan the workspace for the provider project (from config, monorepo structure, or adjacent repositories). Also check for OpenAPI/Swagger spec files (`openapi.yaml`, `openapi.json`, `swagger.json`) — these document the provider's contract explicitly and can supplement or replace handler code analysis.
2. **Map each consumer endpoint** to its provider counterpart:
   - Provider file path (route handler)
   - Route pattern (METHOD + path)
   - Validation schema location (Joi, Zod, class-validator) or OpenAPI request schema
   - Response type/DTO definition location or OpenAPI response schema
   - OpenAPI spec path (if available, e.g., `server/openapi.yaml`)
3. **Output as a "Provider Endpoint Map" table** in the coverage plan:

   ```markdown
   | Consumer Endpoint     | Provider File                     | Route                     | Validation Schema                   | Response Type   | OpenAPI Spec                                      |
   | --------------------- | --------------------------------- | ------------------------- | ----------------------------------- | --------------- | ------------------------------------------------- |
   | GET /api/v2/users/:id | server/src/routes/userHandlers.ts | GET /api/v2/users/:userId | server/src/validation/user.ts       | UserResponseDto | server/openapi.yaml#/paths/~1api~1v2~1users~1{id} |
   | POST /api/v2/users    | server/src/routes/userHandlers.ts | POST /api/v2/users        | server/src/validation/createUser.ts | UserResponseDto | server/openapi.yaml#/paths/~1api~1v2~1users       |
   ```

4. **If provider source is not accessible**: Mark entries with `TODO — provider source not accessible` and note in the coverage plan that provider scrutiny will use graceful degradation (see `contract-testing.md` Provider Scrutiny Protocol)

---

## 2. Choose Test Levels

Use `test-levels-framework.md` to select:

- **E2E** for critical user journeys
- **API** for business logic and service contracts
- **Component** for UI behavior
- **Unit** for pure logic and edge cases

---

## 3. Assign Priorities

Use `test-priorities-matrix.md`:

- P0: Critical path + high risk
- P1: Important flows + medium/high risk
- P2: Secondary + edge cases
- P3: Optional/rare scenarios
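Once priorities are tagged into test titles, selective execution reduces to filtering by tag. The sketch below is illustrative: `selectTests` and the tier map are hypothetical names, with the tier-to-priority mapping taken from the CI tiers mentioned in the Notes for TEA Agent (P0 every commit, P1 on PR, P2 nightly); real runners would pass the tag as a `--grep` pattern instead.

```javascript
// Sketch of selective execution over priority tags in test titles,
// e.g. "[P0] login succeeds". Untagged tests are excluded here to make
// missing tags visible; a real policy might default them to a tier instead.
const TIERS = { commit: ['P0'], pr: ['P0', 'P1'], nightly: ['P0', 'P1', 'P2'] };

function selectTests(titles, tier) {
  const allowed = TIERS[tier] ?? ['P0', 'P1', 'P2', 'P3']; // unknown tier: run everything
  return titles.filter((title) => {
    const tag = title.match(/^\[(P[0-3])\]/);
    return tag !== null && allowed.includes(tag[1]);
  });
}
```

This is why the checklist insists every generated test name carries a tag: an untagged test silently drops out of every selective tier.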

---

## 4. Coverage Plan

Produce a concise coverage plan:

- Targets by test level
- Priority assignments
- Justification for coverage scope (critical-paths/comprehensive/selective)

---

## 5. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-02-identify-targets']
  lastStep: 'step-02-identify-targets'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-identify-targets'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-identify-targets'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,394 @@

---
name: 'step-03-generate-tests'
description: 'Orchestrate adaptive test generation (agent-team, subagent, or sequential)'
nextStepFile: './step-03c-aggregate.md'
---

# Step 3: Orchestrate Adaptive Test Generation

## STEP GOAL

Select execution mode deterministically, then generate tests using agent-team, subagent, or sequential execution while preserving the same output contract. Worker selection depends on `{detected_stack}`.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Resolve execution mode from config (`tea_execution_mode`, `tea_capability_probe`)
- ✅ Apply fallback rules deterministically when the requested mode is unsupported
- ✅ Preserve output schema and temp file naming across all modes
- ❌ Do NOT skip capability checks when probing is enabled
- ❌ Do NOT change output paths or JSON schema by mode

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Wait for subagent outputs
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, coverage plan from Step 2, knowledge fragments
- Focus: orchestration only (mode selection + worker dispatch)
- Limits: do not generate tests directly (delegate to worker steps)
- Dependencies: Step 2 outputs (coverage plan, target features)

---

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Prepare Execution Context

**Generate unique timestamp** for temp file naming:

```javascript
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
```

**Prepare input context for subagents:**

```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    if (['false', '0', 'off', 'no'].includes(normalized)) return false;
    if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
  }
  if (value === undefined || value === null) return defaultValue;
  return Boolean(value);
};

const subagentContext = {
  features: /* from Step 2 coverage plan */,
  knowledge_fragments_loaded: /* list of fragments */,
  config: {
    test_framework: config.test_framework,
    use_playwright_utils: config.tea_use_playwright_utils,
    use_pactjs_utils: config.tea_use_pactjs_utils,
    pact_mcp: config.tea_pact_mcp, // "mcp" | "none"
    browser_automation: config.tea_browser_automation, // "auto" | "cli" | "mcp" | "none"
    detected_stack: '{detected_stack}', // "frontend" | "backend" | "fullstack"
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
    provider_endpoint_map: /* from Step 2 coverage plan, if use_pactjs_utils enabled */,
  },
  timestamp: timestamp
};
```

---

### 2. Resolve Execution Mode with Capability Probe
```javascript
const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(subagentContext.config.execution_mode) || 'auto';
const probeEnabled = subagentContext.config.capability_probe;

const supports = {
  subagent: false,
  agentTeam: false,
};

if (probeEnabled) {
  // Probe using runtime-native capability checks or a no-op launch test.
  supports.subagent = runtime.canLaunchSubagents?.() === true;
  supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}

let resolvedMode = requestedMode;

if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}

subagentContext.execution = {
  requestedMode,
  resolvedMode,
  probeEnabled,
  supports,
};
```

Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)

If probing is disabled, honor the requested mode strictly. If that mode cannot be executed at runtime, fail with an explicit error instead of silently falling back.
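
A minimal sketch of that strict-mode guard (the `supports` shape matches the resolution code above; the function name and error wording are illustrative, not part of the spec):

```javascript
// Hypothetical guard: throw instead of silently downgrading the mode.
const assertModeExecutable = (resolvedMode, supports) => {
  if (resolvedMode === 'agent-team' && !supports.agentTeam) {
    throw new Error('Requested mode "agent-team" is not supported by this runtime (no fallback: probing disabled).');
  }
  if (resolvedMode === 'subagent' && !supports.subagent) {
    throw new Error('Requested mode "subagent" is not supported by this runtime (no fallback: probing disabled).');
  }
  return resolvedMode; // 'sequential' always passes
};
```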

Report selected mode before dispatch:

```
⚙️ Execution Mode Resolution:
- Requested: {requestedMode}
- Probe Enabled: {probeEnabled}
- Supports agent-team: {supports.agentTeam}
- Supports subagent: {supports.subagent}
- Resolved: {resolvedMode}
```

---

### 3. Subagent Dispatch Matrix

**Select subagents based on `{detected_stack}`:**

| `{detected_stack}` | Subagent A (API) | Subagent B (E2E) | Subagent B-backend |
| ------------------ | ---------------- | ---------------- | ------------------ |
| `frontend`         | Launch           | Launch           | Skip               |
| `backend`          | Launch           | Skip             | Launch             |
| `fullstack`        | Launch           | Launch           | Launch             |
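
The matrix above can be encoded directly; a sketch (the worker names are the subagent step files referenced later in this step):

```javascript
// Dispatch matrix from the table above: stack type -> workers to launch.
const workersForStack = (detectedStack) => {
  const matrix = {
    frontend: ['step-03a-subagent-api', 'step-03b-subagent-e2e'],
    backend: ['step-03a-subagent-api', 'step-03b-subagent-backend'],
    fullstack: ['step-03a-subagent-api', 'step-03b-subagent-e2e', 'step-03b-subagent-backend'],
  };
  const workers = matrix[detectedStack];
  if (!workers) throw new Error(`Unknown detected_stack: ${detectedStack}`);
  return workers;
};
```

Subagent A (API) appears in every row, matching the "always" in section 4 below.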

### 3A. Runtime-Managed Parallelism

When `resolvedMode` is `agent-team` or `subagent`, let the runtime decide concurrency and scheduling. TEA does not impose an additional worker ceiling.

---

### Contract Test Generation Note

When `use_pactjs_utils` is enabled, the API test generation subagent (step-03a) also generates:

- **Consumer contract tests**: Using `createProviderState` for type-safe provider states
- **Provider verification tests**: Using `buildVerifierOptions` for one-call verifier setup
- **Message contract tests**: Using `buildMessageVerifierOptions` if async/Kafka patterns detected
- **Helper files**: Request filter setup with `createRequestFilter`, shared state constants
- **Provider scrutiny**: Subagent reads provider route handlers, types, and validation schemas before generating each interaction (see `contract-testing.md` Provider Scrutiny Protocol)

When `pact_mcp` is `"mcp"`, the subagent can use SmartBear MCP tools to fetch existing provider states and generate tests informed by broker data.

---

### 4. Dispatch Worker A: API Test Generation (always)

**Dispatch worker:**

- **Subagent File:** `./step-03a-subagent-api.md`
- **Output File:** `/tmp/tea-automate-api-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before next dispatch

**System Action:**

```
🚀 Launching Subagent A: API Test Generation
📝 Output: /tmp/tea-automate-api-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
⏳ Status: Running...
```

---

### 5. Dispatch Worker B: E2E Test Generation (frontend/fullstack only)

**If {detected_stack} is `frontend` or `fullstack`:**

**Dispatch worker:**

- **Subagent File:** `./step-03b-subagent-e2e.md`
- **Output File:** `/tmp/tea-automate-e2e-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before next dispatch

**System Action:**

```
🚀 Launching Subagent B: E2E Test Generation
📝 Output: /tmp/tea-automate-e2e-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
⏳ Status: Running...
```

**If {detected_stack} is `backend`:** Skip this subagent.

---

### 6. Dispatch Worker B-backend: Backend Test Generation (backend/fullstack only)

**If {detected_stack} is `backend` or `fullstack`:**

**Dispatch worker:**

- **Subagent File:** `./step-03b-subagent-backend.md`
- **Output File:** `/tmp/tea-automate-backend-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before next dispatch

**System Action:**

```
🚀 Launching Subagent B-backend: Backend Test Generation
📝 Output: /tmp/tea-automate-backend-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
⏳ Status: Running...
```

**If {detected_stack} is `frontend`:** Skip this subagent.

---

### 7. Wait for Expected Worker Completion

**If `resolvedMode` is `agent-team` or `subagent`:**

```
⏳ Waiting for subagents to complete...
├── Subagent A (API): Running... ⟳
├── Subagent B (E2E): Running... ⟳ [if frontend/fullstack]
└── Subagent B-backend: Running... ⟳ [if backend/fullstack]

[... time passes ...]

├── Subagent A (API): Complete ✅
├── Subagent B (E2E): Complete ✅ [if frontend/fullstack]
└── Subagent B-backend: Complete ✅ [if backend/fullstack]

✅ All subagents completed successfully!
```

**If `resolvedMode` is `sequential`:**

```
✅ Sequential mode: each worker already completed during dispatch.
```

**Verify outputs exist (based on `{detected_stack}`):**

```javascript
const fs = require('node:fs');

const apiOutputExists = fs.existsSync(`/tmp/tea-automate-api-tests-${timestamp}.json`);
if (!apiOutputExists) throw new Error('API subagent output missing!');

// Check the stack-dependent outputs based on detected_stack
if (detected_stack === 'frontend' || detected_stack === 'fullstack') {
  const e2eOutputExists = fs.existsSync(`/tmp/tea-automate-e2e-tests-${timestamp}.json`);
  if (!e2eOutputExists) throw new Error('E2E subagent output missing!');
}
if (detected_stack === 'backend' || detected_stack === 'fullstack') {
  const backendOutputExists = fs.existsSync(`/tmp/tea-automate-backend-tests-${timestamp}.json`);
  if (!backendOutputExists) throw new Error('Backend subagent output missing!');
}
```
---

### Subagent Output Schema Contract

The aggregate step expects both outputs to include `success`, but the payload shapes are intentionally different:

- `step-03b-subagent-e2e.md` output includes `success`, `subagent`, `tests`, `fixture_needs`, `knowledge_fragments_used`, `test_count`, and `summary`.
- `step-03b-subagent-backend.md` output includes `success`, `subagent`, `subagentType`, `testsGenerated`, `coverageSummary` (with `fixtureNeeds`), `status`, `knowledge_fragments_used`, and `summary`.

The aggregate step reads whichever output file(s) exist based on `{detected_stack}` and must use the matching schema per subagent type.
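
A sketch of schema-aware reading for the aggregate step (field names follow the two schemas listed above; the normalized internal shape is an assumption of this sketch):

```javascript
// Normalize the two payload shapes into one internal form.
const normalizeSubagentOutput = (payload) => {
  if (!payload.success) {
    throw new Error(`Subagent ${payload.subagent} failed: ${payload.error || 'unknown error'}`);
  }
  if (payload.subagentType === 'backend') {
    // Backend schema: testsGenerated + coverageSummary.fixtureNeeds
    return {
      subagent: payload.subagent,
      tests: payload.testsGenerated,
      fixtureNeeds: payload.coverageSummary.fixtureNeeds,
      testCount: payload.coverageSummary.totalTests,
    };
  }
  // E2E/API schema: tests + fixture_needs + test_count
  return {
    subagent: payload.subagent,
    tests: payload.tests,
    fixtureNeeds: payload.fixture_needs,
    testCount: payload.test_count,
  };
};
```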

---

### 8. Execution Report

**Display performance metrics:**

```
🚀 Performance Report:
- Execution Mode: {resolvedMode}
- Stack Type: {detected_stack}
- API Test Generation: ~X minutes
- E2E Test Generation: ~Y minutes [if frontend/fullstack]
- Backend Test Generation: ~Z minutes [if backend/fullstack]
- Total Elapsed: ~mode-dependent
- Parallel Gain: ~40-70% faster when mode is subagent/agent-team
```
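
The parallel gain can be computed from per-worker durations rather than quoted as a range; a sketch (the 40-70% figure above is an expectation, not a measurement):

```javascript
// Parallel gain: percent of wall-clock time saved versus running
// the same workers one after another.
const parallelGain = (workerMinutes, wallClockMinutes) => {
  const sequentialTotal = workerMinutes.reduce((a, b) => a + b, 0);
  if (sequentialTotal === 0) return 0;
  return Math.round((1 - wallClockMinutes / sequentialTotal) * 100);
};
```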

---

### 9. Proceed to Aggregation

**Load aggregation step:**
Load next step: `{nextStepFile}`

The aggregation step (3C) will:

- Read all subagent outputs (based on `{detected_stack}`)
- Write all test files to disk
- Generate shared fixtures and helpers
- Calculate summary statistics
---

## EXIT CONDITION

Proceed to Step 3C (Aggregation) when:

- ✅ Subagent A (API tests) completed successfully
- ✅ Subagent B (E2E tests) completed successfully [if frontend/fullstack]
- ✅ Subagent B-backend (Backend tests) completed successfully [if backend/fullstack]
- ✅ All expected output files exist and are valid JSON
- ✅ Execution metrics displayed

**Do NOT proceed if:**

- ❌ Any launched subagent failed
- ❌ Output files missing or corrupted
- ❌ Timeout occurred (parallel mode only)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- All required subagents launched successfully (based on `{detected_stack}`)
- All required worker steps completed without errors
- Output files generated and valid
- Fallback behavior respected configuration and capability probe rules

### ❌ SYSTEM FAILURE:

- Failed to launch subagents
- One or more subagents failed
- Output files missing or invalid
- Unsupported requested mode with probing disabled

**Master Rule:** Deterministic mode selection + stable output contract. Use the best supported mode, then aggregate normally.
@@ -0,0 +1,263 @@
---
name: 'step-03a-subagent-api'
description: 'Subagent: Generate API tests only'
subagent: true
outputFile: '/tmp/tea-automate-api-tests-{{timestamp}}.json'
---

# Subagent 3A: Generate API Tests

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with E2E test generation.

**What you have from parent workflow:**

- Target features/components identified in Step 2
- Knowledge fragments loaded: api-request, data-factories, api-testing-patterns
- Config: test framework, Playwright Utils enabled/disabled, Pact.js Utils enabled/disabled, Pact MCP mode
- Coverage plan: which API endpoints need testing

**Your task:** Generate API tests ONLY (not E2E, not fixtures, not other test types).

**If `use_pactjs_utils` is enabled:** Also generate consumer contract tests and provider verification tests alongside API tests. Use the loaded pactjs-utils fragments (`pactjs-utils-overview`, `pactjs-utils-consumer-helpers`, `pactjs-utils-provider-verifier`, `pactjs-utils-request-filter`) for patterns. If `pact_mcp` is `"mcp"`, use SmartBear MCP tools (Fetch Provider States, Generate Pact Tests) to inform test generation.

---

## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate API tests ONLY
- ✅ Output structured JSON to temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate E2E tests (that's subagent 3B)
- ❌ Do NOT run tests (that's step 4)
- ❌ Do NOT generate fixtures yet (that's step 3C aggregation)

---

## SUBAGENT TASK

### 1. Identify API Endpoints

From the coverage plan (Step 2 output), identify:

- Which API endpoints need test coverage
- Expected request/response formats
- Authentication requirements
- Error scenarios to test

### 2. Generate API Test Files

For each API endpoint, create test file in `tests/api/[feature].spec.ts`:

**Test Structure:**

```typescript
import { test, expect } from '@playwright/test';
// If Playwright Utils enabled:
// import { apiRequest } from '@playwright-utils/api';

test.describe('[Feature] API Tests', () => {
  test('[P0] should handle successful [operation]', async ({ request }) => {
    // Use apiRequest helper if Playwright Utils enabled
    // Otherwise use standard request fixture
    const response = await request.post('/api/endpoint', {
      data: {
        /* test data */
      },
    });

    expect(response.status()).toBe(200);
    expect(await response.json()).toMatchObject({
      /* expected */
    });
  });

  test('[P1] should handle [error scenario]', async ({ request }) => {
    // Test error handling
  });
});
```

**Requirements:**

- ✅ Use `apiRequest()` helper if Playwright Utils enabled (from api-request fragment)
- ✅ Use data factories for test data (from data-factories fragment)
- ✅ Follow API testing patterns (from api-testing-patterns fragment)
- ✅ Include priority tags [P0], [P1], [P2], [P3]
- ✅ Test both happy path and error scenarios
- ✅ Use proper TypeScript types
- ✅ Deterministic assertions (no timing dependencies)

**If Pact.js Utils enabled (from `subagentContext.config.use_pactjs_utils`):**

- ✅ Generate consumer contract tests in `pact/http/consumer/` using `createProviderState({ name, params })` pattern
- ✅ Generate provider verification tests in `pact/http/provider/` using `buildVerifierOptions({ provider, port, includeMainAndDeployed, stateHandlers })` pattern
- ✅ Generate request filter helpers in `pact/http/helpers/` using `createRequestFilter({ tokenGenerator: () => string })`
- ✅ Generate shared state constants in `pact/http/helpers/states.ts`
- ✅ If async/message patterns detected, generate message consumer tests in `pact/message/` using `buildMessageVerifierOptions`
- ✅ **Provider endpoint comment MANDATORY** on every Pact interaction: `// Provider endpoint: <path> -> <METHOD> <route>`
- ⚠️ **Postel's Law for matchers**: Use `like()`, `eachLike()`, `string()`, `integer()` matchers ONLY in `willRespondWith` (responses). Request bodies in `withRequest` MUST use exact values — never wrap request bodies in `like()`. The consumer controls what it sends, so contracts should be strict about request shape.
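
That rule can be enforced mechanically. A hypothetical lint sketch — `like` here is a stand-in tag, not Pact's real matcher implementation, but the direction of the check (matchers forbidden anywhere inside `withRequest` bodies) is exactly the rule above:

```javascript
// Stand-in for a matcher: wraps an example value and tags it.
const like = (value) => ({ __matcher: 'like', value });

// Walk a request body and reject any matcher-tagged node.
const assertNoMatchersInRequest = (body, path = 'withRequest.body') => {
  if (body && typeof body === 'object') {
    if (body.__matcher) {
      throw new Error(`Matcher found at ${path}; request bodies must use exact values`);
    }
    for (const [key, value] of Object.entries(body)) {
      assertNoMatchersInRequest(value, `${path}.${key}`);
    }
  }
};
```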

### 1.5 Provider Source Scrutiny (CDC Only)

**CRITICAL**: Before generating ANY Pact consumer interaction, perform provider source scrutiny per the **Seven-Point Scrutiny Checklist** defined in `contract-testing.md`. Do NOT generate response matchers from consumer-side types alone — this is the #1 cause of contract verification failures.

The seven points to verify for each interaction:

1. Response shape
2. Status codes
3. Field names
4. Enum values
5. Required fields
6. Data types
7. Nested structures

**Source priority**: Provider source code is most authoritative. When an OpenAPI/Swagger spec exists (`openapi.yaml`, `openapi.json`, `swagger.json`), use it as a complementary or alternative source — it documents the provider's contract explicitly and can be faster to parse than tracing through handler code. When both exist, cross-reference them; if they disagree, the source code wins. Document the discrepancy in the scrutiny evidence block (e.g., `OpenAPI shows 200 but handler returns 201; using handler behavior`) and flag it in the output JSON `summary` so it is discoverable by downstream consumers or audits.

**Scrutiny Sequence** (for each endpoint in the coverage plan):

1. **READ provider route handler and/or OpenAPI spec**: Find the handler file from `subagentContext.config.provider_endpoint_map` or by scanning the provider codebase. Also check for OpenAPI/Swagger spec files. Extract:
   - Exact status codes returned (`res.status(201)` / OpenAPI `responses` keys)
   - Response construction (`res.json({ data: ... })` / OpenAPI `schema`)
   - Error handling paths (what status codes for what conditions)

2. **READ provider type/model/DTO definitions**: Find the response type referenced by the handler or OpenAPI `$ref` schemas. Extract:
   - Exact field names (`transaction_id` not `transactionId`)
   - Field types (`string` ID vs `number` ID / OpenAPI `type` + `format`)
   - Optional vs required fields (OpenAPI `required` array)
   - Nested object structures (OpenAPI `$ref`, `allOf`, `oneOf`)

3. **READ provider validation schemas**: Find Joi/Zod/class-validator schemas or OpenAPI request body `schema.required`. Extract:
   - Required request fields and headers
   - Enum/union type allowed values (`"active" | "inactive"` / OpenAPI `enum`)
   - Request body constraints

4. **Cross-reference findings** against consumer expectations:
   - Does the consumer expect the same field names the provider sends?
   - Does the consumer expect the same status codes the provider returns?
   - Does the consumer expect the same nesting the provider produces?

5. **Document scrutiny evidence** as a block comment in the generated test:

   ```typescript
   /*
    * Provider Scrutiny Evidence:
    * - Handler: server/src/routes/userHandlers.ts:45
    * - OpenAPI: server/openapi.yaml paths./api/v2/users/{userId}.get (if available)
    * - Response type: UserResponseDto (server/src/types/user.ts:12)
    * - Status: 201 for creation (line 52), 400 for validation error (line 48)
    * - Fields: { id: number, name: string, email: string, role: "user" | "admin" }
    * - Required request headers: Authorization (Bearer token)
    */
   ```

6. **Graceful degradation** when provider source is not accessible (follows the canonical four-step protocol from `contract-testing.md`):
   1. **OpenAPI/Swagger spec available**: Use the spec as the source of truth for response shapes, status codes, and field names
   2. **Pact Broker available** (when `pact_mcp` is `"mcp"` in `subagentContext.config`): Use SmartBear MCP tools to fetch existing provider states and verified interactions as reference
   3. **Neither available**: Generate from consumer types but use the TODO form of the mandatory comment: `// Provider endpoint: TODO — provider source not accessible, verify manually`. Set `provider_scrutiny: "pending"` in output JSON
   4. **Never silently guess**: Document all assumptions in the scrutiny evidence block

> ⚠️ **Anti-pattern**: Generating response matchers from consumer-side types alone. This produces contracts that reflect what the consumer _wishes_ the provider returns, not what it _actually_ returns. Always read provider source or OpenAPI spec first.

### 3. Track Fixture Needs

Identify fixtures needed for API tests:

- Authentication fixtures (auth tokens, API keys)
- Data factories (user data, product data, etc.)
- API client configurations

**Do NOT create fixtures yet** - just track what's needed for aggregation step.
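
Tracking can be as simple as a deduplicating set that is flushed into the output JSON at the end; a sketch (the fixture names are the examples from the output below):

```javascript
// Collect fixture needs while generating tests; duplicates collapse.
const fixtureNeeds = new Set();
const trackFixtureNeed = (name) => {
  fixtureNeeds.add(name);
  return name;
};

trackFixtureNeed('authToken');
trackFixtureNeed('userDataFactory');
trackFixtureNeed('authToken'); // second mention is a no-op
const fixture_needs = [...fixtureNeeds]; // goes into the output JSON
```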

---

## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-automate-api-tests-{{timestamp}}.json`

```json
{
  "success": true,
  "subagent": "api-tests",
  "tests": [
    {
      "file": "tests/api/auth.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "API tests for authentication endpoints",
      "priority_coverage": {
        "P0": 3,
        "P1": 2,
        "P2": 1,
        "P3": 0
      }
    },
    {
      "file": "tests/api/checkout.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "API tests for checkout endpoints",
      "priority_coverage": {
        "P0": 2,
        "P1": 3,
        "P2": 1,
        "P3": 0
      }
    }
  ],
  "fixture_needs": ["authToken", "userDataFactory", "productDataFactory"],
  "knowledge_fragments_used": ["api-request", "data-factories", "api-testing-patterns"],
  "provider_scrutiny": "completed",
  "provider_files_read": ["server/src/routes/authHandlers.ts", "server/src/routes/checkoutHandlers.ts", "server/src/types/auth.ts"],
  "test_count": 12,
  "summary": "Generated 12 API test cases covering 3 features"
}
```

**On Error:**

```json
{
  "success": false,
  "subagent": "api-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```

---

## EXIT CONDITION

Subagent completes when:

- ✅ All API endpoints have test files generated
- ✅ All tests follow knowledge fragment patterns
- ✅ JSON output written to temp file
- ✅ Fixture needs tracked

**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.

---

## 🚨 SUBAGENT SUCCESS METRICS

### ✅ SUCCESS:

- All API tests generated following patterns
- JSON output valid and complete
- No E2E/component/unit tests included (out of scope)
- Every Pact interaction has `// Provider endpoint:` comment (if CDC enabled)
- Provider source scrutiny completed or gracefully degraded with TODO markers (if CDC enabled)
- Scrutiny evidence documented as block comments in test files (if CDC enabled)

### ❌ FAILURE:

- Generated tests other than API tests
- Did not follow knowledge fragment patterns
- Invalid or missing JSON output
- Ran tests (not subagent responsibility)
- Pact interactions missing provider endpoint comments (if CDC enabled)
- Response matchers generated from consumer-side types without provider scrutiny (if CDC enabled)
@@ -0,0 +1,246 @@
---
name: 'step-03b-subagent-backend'
description: 'Subagent: Generate backend tests only (unit, integration, contract)'
subagent: true
outputFile: '/tmp/tea-automate-backend-tests-{{timestamp}}.json'
---

# Subagent 3B-backend: Generate Backend Tests

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with API test generation (and optionally E2E test generation for fullstack projects).

**What you have from parent workflow:**

- Target features/services identified in Step 2
- Knowledge fragments loaded: test-levels-framework, test-priorities-matrix, data-factories
- Config: test framework, detected stack type
- Coverage plan: which services/modules need backend testing

**Your task:** Generate backend tests ONLY (unit, integration, contract - not API endpoint tests, not E2E).

---

## MANDATORY EXECUTION RULES

- Read this entire subagent file before acting
- Generate backend tests ONLY (unit, integration, contract)
- Output structured JSON to temp file using the subagent output schema contract
- Follow knowledge fragment patterns
- Do NOT generate API endpoint tests (that's subagent 3A)
- Do NOT generate E2E tests (that's subagent 3B-E2E)
- Do NOT run tests (that's step 4)
- Do NOT generate fixtures yet (that's step 3C aggregation)

---

## SUBAGENT TASK

### 1. Identify Test Targets

From the coverage plan (Step 2 output), identify:

- Which services/modules need unit test coverage
- Which integrations need integration test coverage (database, message queues, external services)
- Which service contracts need contract test coverage (Pact, schema validation)
- Business logic functions requiring edge case coverage

### 2. Detect Framework & Language

From `config.test_framework` and project manifests, determine:

- **Python (pytest)**: Use `pytest` conventions, `conftest.py` fixtures, `@pytest.mark` decorators
- **Java/Kotlin (JUnit)**: Use JUnit 5 annotations (`@Test`, `@BeforeEach`, `@Nested`), Mockito for mocking
- **Go (go test)**: Use `*_test.go` files, `testing.T`, table-driven tests, `testify` assertions
- **C#/.NET (xUnit)**: Use `[Fact]`, `[Theory]`, `[InlineData]`, `Moq` for mocking
- **Ruby (RSpec)**: Use `describe`/`context`/`it` blocks, `let`/`before` helpers, `FactoryBot`
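
Manifest-based detection can be sketched as a first-match scan. The file names below are the usual conventions for each ecosystem; the function name and fallback behavior are assumptions of this sketch:

```javascript
// Hypothetical detection: map well-known manifest files to a framework.
const detectBackendFramework = (files) => {
  if (files.includes('pytest.ini') || files.includes('conftest.py')) return 'pytest';
  if (files.includes('pom.xml') || files.includes('build.gradle')) return 'junit';
  if (files.includes('go.mod')) return 'go test';
  if (files.some((f) => f.endsWith('.csproj'))) return 'xunit';
  if (files.includes('Gemfile')) return 'rspec';
  return null; // fall back to config.test_framework
};
```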

### 3. Generate Unit Tests

For each module/service, create test files following language-idiomatic patterns:

**Python (pytest) example:**

```python
import pytest
from unittest.mock import MagicMock, patch
from myapp.services.user_service import DuplicateEmailError, UserService


class TestUserService:
    """[P0] Unit tests for UserService"""

    def test_create_user_with_valid_data(self, user_factory):
        """Should create user when data is valid"""
        user_data = user_factory.build()
        result = UserService.create(user_data)
        assert result.email == user_data["email"]

    def test_create_user_rejects_duplicate_email(self, user_factory):
        """[P1] Should reject duplicate email"""
        user_data = user_factory.build(email="existing@test.com")
        with pytest.raises(DuplicateEmailError):
            UserService.create(user_data)
```

**Go (go test) example:**

```go
func TestUserService_Create(t *testing.T) {
	tests := []struct {
		name    string
		input   CreateUserInput
		wantErr bool
	}{
		{"valid user", validInput(), false},
		{"duplicate email", duplicateInput(), true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			svc := NewUserService(mockRepo)
			_, err := svc.Create(tt.input)
			if (err != nil) != tt.wantErr {
				t.Errorf("Create() error = %v, wantErr %v", err, tt.wantErr)
			}
		})
	}
}
```

**Requirements:**

- Follow the detected framework's idiomatic test patterns
- Include priority tags [P0], [P1], [P2], [P3] in test descriptions
- Use proper mocking for external dependencies (database, APIs, message queues)
- Test both happy path and error cases
- Use proper typing/type hints where applicable
- No hard-coded test data; use factories or builders

### 4. Generate Integration Tests

For service integrations, create integration test files:

- Database integration tests (with test database or in-memory alternatives)
- Message queue consumer/producer tests
- Cache integration tests
- External service integration tests (with mocked HTTP clients)

### 5. Generate Contract Tests (if applicable)

If the project uses microservices or has defined API contracts:

- Pact consumer/provider tests
- Schema validation tests (JSON Schema, Protobuf)
- OpenAPI spec compliance tests

### 6. Track Fixture Needs

Identify fixtures/helpers needed for backend tests:

- Database fixtures (seed data, cleanup)
- Factory functions (test data builders)
- Mock services (HTTP mocks, message queue mocks)
- Configuration fixtures (test environment config)

**Do NOT create fixtures yet** - just track what's needed for aggregation step.
---

## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-automate-backend-tests-{{timestamp}}.json`

```json
{
  "subagentType": "backend",
  "testsGenerated": [
    {
      "file": "tests/unit/test_user_service.py",
      "content": "[full test file content]",
      "description": "Unit tests for UserService",
      "priority_coverage": {
        "P0": 3,
        "P1": 2,
        "P2": 1,
        "P3": 0
      }
    },
    {
      "file": "tests/integration/test_user_repository.py",
      "content": "[full test file content]",
      "description": "Integration tests for user database operations",
      "priority_coverage": {
        "P0": 1,
        "P1": 2,
        "P2": 1,
        "P3": 0
      }
    }
  ],
  "coverageSummary": {
    "totalTests": 15,
    "testLevels": ["unit", "integration", "contract"],
    "fixtureNeeds": ["databaseFixture", "userFactory", "mockHttpClient"]
  },
  "status": "complete",
  "success": true,
  "subagent": "backend-tests",
  "knowledge_fragments_used": ["test-levels-framework", "test-priorities-matrix", "data-factories"],
  "summary": "Generated 15 backend test cases (10 unit, 4 integration, 1 contract)"
}
```

**On Error:**

```json
{
  "subagentType": "backend",
  "testsGenerated": [],
  "coverageSummary": {
    "totalTests": 0,
    "testLevels": [],
    "fixtureNeeds": []
  },
  "status": "partial",
  "success": false,
  "subagent": "backend-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```

---

## EXIT CONDITION

Subagent completes when:

- All identified modules have backend test files generated
- All tests follow language-idiomatic patterns
- JSON output written to temp file using the subagent output schema contract
- Fixture needs tracked

**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.

---

## SUBAGENT SUCCESS METRICS

### SUCCESS:

- All backend tests generated following idiomatic patterns
- JSON output valid and complete, matches subagent output schema contract
- No E2E or browser tests included (out of scope)
- Proper mocking used for external dependencies
- Priority tags assigned to all test cases

### FAILURE:

- Generated tests other than backend tests (unit/integration/contract)
- Did not follow language-idiomatic patterns
- Invalid or missing JSON output
- Output schema does not match the contract
- Ran tests (not subagent responsibility)
- Used real external services instead of mocks
@@ -0,0 +1,213 @@
---
name: 'step-03b-subagent-e2e'
description: 'Subagent: Generate E2E tests only'
subagent: true
outputFile: '/tmp/tea-automate-e2e-tests-{{timestamp}}.json'
---

# Subagent 3B: Generate E2E Tests

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with API test generation.

**What you have from parent workflow:**

- Target features/user journeys identified in Step 2
- Knowledge fragments loaded: fixture-architecture, network-first, selector-resilience
- Config: test framework, Playwright Utils enabled/disabled
- Coverage plan: which user journeys need E2E testing

**Your task:** Generate E2E tests ONLY (not API, not fixtures, not other test types).

---

## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate E2E tests ONLY
- ✅ Output structured JSON to temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate API tests (that's subagent 3A)
- ❌ Do NOT run tests (that's step 4)
- ❌ Do NOT generate fixtures yet (that's step 3C aggregation)

---

## SUBAGENT TASK

### 1. Identify User Journeys

From the coverage plan (Step 2 output), identify:

- Which user journeys need E2E coverage
- Critical user paths (authentication, checkout, profile, etc.)
- UI interactions required
- Expected visual states

### 2. Browser Interaction (Selector Verification)

**Automation mode:** `config.tea_browser_automation`

If `auto` (fall back to MCP if CLI unavailable; if neither available, generate from best practices):

- Open the target page first, then verify selectors with a snapshot:
  `playwright-cli -s=tea-automate-{{timestamp}} open <target_url>`
  `playwright-cli -s=tea-automate-{{timestamp}} snapshot` → map refs to Playwright locators
  - ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
  - ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
- `playwright-cli -s=tea-automate-{{timestamp}} close` when done
|
||||
If `cli` (CLI only — do NOT fall back to MCP; generate from best practices if CLI unavailable):
|
||||
|
||||
- Open the target page first, then verify selectors with a snapshot:
|
||||
`playwright-cli -s=tea-automate-{{timestamp}} open <target_url>`
|
||||
`playwright-cli -s=tea-automate-{{timestamp}} snapshot` → map refs to Playwright locators
|
||||
- ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
|
||||
- ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
|
||||
- `playwright-cli -s=tea-automate-{{timestamp}} close` when done
|
||||
|
||||
> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-automate-{{timestamp}} close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.
|
||||
If `mcp`:

- Use MCP tools for selector verification (current behavior)

If `none`:

- Generate selectors from best practices without browser verification
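The ref-to-locator mapping used in the `auto` and `cli` modes above is mechanical and can be sketched as a small helper. This is an illustrative sketch, not part of the workflow itself; the function and interface names are assumptions:

```typescript
// Maps an accessibility-snapshot ref ({ role, name }) to a Playwright locator
// expression, e.g. { role: "button", name: "Submit" } →
// page.getByRole('button', { name: 'Submit' }).
interface SnapshotRef {
  role: string;
  name: string;
}

function refToLocator(ref: SnapshotRef): string {
  // Escape single quotes so the generated expression stays valid TypeScript.
  const name = ref.name.replace(/'/g, "\\'");
  return `page.getByRole('${ref.role}', { name: '${name}' })`;
}
```

Building the expression as a string keeps the mapping usable both for code generation and for reporting which selectors were verified.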
### 3. Generate E2E Test Files

For each user journey, create a test file in `tests/e2e/[feature].spec.ts`:

**Test Structure:**

```typescript
import { test, expect } from '@playwright/test';

test.describe('[Feature] E2E User Journey', () => {
  test('[P0] should complete [user journey]', async ({ page }) => {
    // Navigate to starting point
    await page.goto('/feature');

    // Interact with UI
    await page.getByRole('button', { name: 'Submit' }).click();

    // Assert expected state
    await expect(page.getByText('Success')).toBeVisible();
  });

  test('[P1] should handle [edge case]', async ({ page }) => {
    // Test edge case scenario
  });
});
```

**Requirements:**

- ✅ Follow fixture architecture patterns (from fixture-architecture fragment)
- ✅ Use network-first patterns: intercept before navigate (from network-first fragment)
- ✅ Use resilient selectors: getByRole, getByText, getByLabel (from selector-resilience fragment)
- ✅ Include priority tags [P0], [P1], [P2], [P3]
- ✅ Test complete user journeys (not isolated clicks)
- ✅ Use proper TypeScript types
- ✅ Deterministic waits (no hard sleeps, use expect().toBeVisible())

### 4. Track Fixture Needs

Identify fixtures needed for E2E tests:

- Page object models (if complex)
- Authentication fixtures (logged-in user state)
- Network mocks/intercepts
- Test data fixtures

**Do NOT create fixtures yet** - just track what's needed for the aggregation step.

---

## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-automate-e2e-tests-{{timestamp}}.json`

```json
{
  "success": true,
  "subagent": "e2e-tests",
  "tests": [
    {
      "file": "tests/e2e/authentication.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "E2E tests for user authentication journey",
      "priority_coverage": {
        "P0": 2,
        "P1": 3,
        "P2": 2,
        "P3": 0
      }
    },
    {
      "file": "tests/e2e/checkout.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "E2E tests for checkout journey",
      "priority_coverage": {
        "P0": 3,
        "P1": 2,
        "P2": 1,
        "P3": 0
      }
    }
  ],
  "fixture_needs": ["authenticatedUserFixture", "paymentMockFixture", "checkoutDataFixture"],
  "knowledge_fragments_used": ["fixture-architecture", "network-first", "selector-resilience"],
  "test_count": 15,
  "summary": "Generated 15 E2E test cases covering 5 user journeys"
}
```

**On Error:**

```json
{
  "success": false,
  "subagent": "e2e-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```
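Because the parent workflow consumes this file blindly, the contract can be checked mechanically before the subagent terminates. A minimal validator sketch, mirroring the field list in the schema above (the function name is illustrative):

```typescript
// Checks a subagent output object against the success-path schema above.
// Returns a list of problems; an empty list means the contract is satisfied.
function validateSubagentOutput(output: any): string[] {
  const problems: string[] = [];
  if (output.success !== true) problems.push('success must be true');
  if (typeof output.subagent !== 'string') problems.push('subagent must be a string');
  if (!Array.isArray(output.tests)) {
    problems.push('tests must be an array');
  } else {
    for (const t of output.tests) {
      if (typeof t.file !== 'string' || typeof t.content !== 'string') {
        problems.push(`test entry missing file/content: ${JSON.stringify(t.file)}`);
      }
    }
  }
  if (!Array.isArray(output.fixture_needs)) problems.push('fixture_needs must be an array');
  if (typeof output.test_count !== 'number') problems.push('test_count must be a number');
  return problems;
}
```

Running this before writing the temp file turns a silent schema drift into an explicit error the aggregation step can report.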
---

## EXIT CONDITION

Subagent completes when:

- ✅ All user journeys have E2E test files generated
- ✅ All tests follow knowledge fragment patterns
- ✅ JSON output written to temp file
- ✅ Fixture needs tracked

**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.

---

## 🚨 SUBAGENT SUCCESS METRICS

### ✅ SUCCESS:

- All E2E tests generated following patterns
- JSON output valid and complete
- No API/component/unit tests included (out of scope)
- Resilient selectors used (getByRole, getByText)
- Network-first patterns applied (intercept before navigate)

### ❌ FAILURE:

- Generated tests other than E2E tests
- Did not follow knowledge fragment patterns
- Invalid or missing JSON output
- Ran tests (not subagent responsibility)
- Used brittle selectors (CSS classes, XPath)
@@ -0,0 +1,393 @@
---
name: 'step-03c-aggregate'
description: 'Aggregate subagent outputs and complete test infrastructure'
outputFile: '{test_artifacts}/automation-summary.md'
nextStepFile: './step-04-validate-and-summarize.md'
---

# Step 3C: Aggregate Test Generation Results

## STEP GOAL

Read outputs from parallel subagents (API + E2E and/or Backend test generation based on `{detected_stack}`), aggregate results, and create supporting infrastructure (fixtures, helpers).

---

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Read subagent outputs from temp files
- ✅ Generate shared fixtures based on fixture needs from both subagents
- ✅ Write all generated test files to disk
- ❌ Do NOT regenerate tests (use subagent outputs)
- ❌ Do NOT run tests yet (that's step 4)

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, subagent outputs from temp files
- Focus: aggregation and fixture generation only
- Limits: do not execute future steps
- Dependencies: Step 3A and 3B subagent outputs

---

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Read Subagent Outputs

**Read API test subagent output (always):**

```javascript
const fs = require('fs');

const apiTestsPath = '/tmp/tea-automate-api-tests-{{timestamp}}.json';
const apiTestsOutput = JSON.parse(fs.readFileSync(apiTestsPath, 'utf8'));
```

**Read E2E test subagent output (if {detected_stack} is `frontend` or `fullstack`):**

```javascript
let e2eTestsOutput = null;
if (detected_stack === 'frontend' || detected_stack === 'fullstack') {
  const e2eTestsPath = '/tmp/tea-automate-e2e-tests-{{timestamp}}.json';
  e2eTestsOutput = JSON.parse(fs.readFileSync(e2eTestsPath, 'utf8'));
}
```

**Read Backend test subagent output (if {detected_stack} is `backend` or `fullstack`):**

```javascript
let backendTestsOutput = null;
if (detected_stack === 'backend' || detected_stack === 'fullstack') {
  const backendTestsPath = '/tmp/tea-automate-backend-tests-{{timestamp}}.json';
  backendTestsOutput = JSON.parse(fs.readFileSync(backendTestsPath, 'utf8'));
}
```

**Verify all launched subagents succeeded:**

- Check `apiTestsOutput.success === true`
- If E2E was launched: check `e2eTestsOutput.success === true`
- If Backend was launched: check `backendTestsOutput.success === true`
- If any failed, report error and stop (don't proceed)
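A subagent may crash before writing its file, in which case the bare `JSON.parse(fs.readFileSync(...))` above throws. The reads can be hardened with an existence check so the failure surfaces as a reportable error. A sketch (the helper name is illustrative):

```typescript
import * as fs from 'fs';

// Reads a subagent output file, returning a failure object instead of throwing
// when the file is missing or contains invalid JSON.
function readSubagentOutput(path: string): { success: boolean; error?: string; [key: string]: any } {
  if (!fs.existsSync(path)) {
    return { success: false, error: `subagent output not found: ${path}` };
  }
  try {
    return JSON.parse(fs.readFileSync(path, 'utf8'));
  } catch (e) {
    return { success: false, error: `invalid JSON in ${path}: ${(e as Error).message}` };
  }
}
```

With this shape, the "verify all launched subagents succeeded" check works uniformly whether the subagent failed explicitly or never wrote its file.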
---

### 2. Write All Test Files to Disk

**Write API test files:**

```javascript
apiTestsOutput.tests.forEach((test) => {
  fs.writeFileSync(test.file, test.content, 'utf8');
  console.log(`✅ Created: ${test.file}`);
});
```

**Write E2E test files (if {detected_stack} is `frontend` or `fullstack`):**

```javascript
if (e2eTestsOutput) {
  e2eTestsOutput.tests.forEach((test) => {
    fs.writeFileSync(test.file, test.content, 'utf8');
    console.log(`✅ Created: ${test.file}`);
  });
}
```

**Write Backend test files (if {detected_stack} is `backend` or `fullstack`):**

```javascript
if (backendTestsOutput) {
  backendTestsOutput.testsGenerated.forEach((test) => {
    fs.writeFileSync(test.file, test.content, 'utf8');
    console.log(`✅ Created: ${test.file}`);
  });
}
```

---

### 3. Aggregate Fixture Needs

**Collect all fixture needs from all launched subagents:**

```javascript
const allFixtureNeeds = [
  ...apiTestsOutput.fixture_needs,
  ...(e2eTestsOutput ? e2eTestsOutput.fixture_needs : []),
  ...(backendTestsOutput ? backendTestsOutput.coverageSummary?.fixtureNeeds || [] : []),
];

// Remove duplicates
const uniqueFixtures = [...new Set(allFixtureNeeds)];
```

**Categorize fixtures:**

- **Authentication fixtures:** authToken, authenticatedUserFixture, etc.
- **Data factories:** userDataFactory, productDataFactory, etc.
- **Network mocks:** paymentMockFixture, apiResponseMocks, etc.
- **Test helpers:** wait/retry/assertion helpers

---
### 4. Generate Fixture Infrastructure

**Create or update fixture files based on needs:**

**A) Authentication Fixtures** (`tests/fixtures/auth.ts`):

```typescript
import { test as base } from '@playwright/test';

export const test = base.extend({
  authenticatedUser: async ({ page }, use) => {
    // Login logic
    await page.goto('/login');
    await page.fill('[name="email"]', 'test@example.com');
    await page.fill('[name="password"]', 'password');
    await page.click('button[type="submit"]');
    await page.waitForURL('/dashboard');

    await use(page);
  },

  authToken: async ({ request }, use) => {
    // Get auth token for API tests
    const response = await request.post('/api/auth/login', {
      data: { email: 'test@example.com', password: 'password' },
    });
    const { token } = await response.json();

    await use(token);
  },
});
```

**B) Data Factories** (`tests/fixtures/data-factories.ts`):

```typescript
import { faker } from '@faker-js/faker';

export const createUserData = (overrides = {}) => ({
  name: faker.person.fullName(),
  email: faker.internet.email(),
  ...overrides,
});

export const createProductData = (overrides = {}) => ({
  name: faker.commerce.productName(),
  price: faker.number.int({ min: 10, max: 1000 }),
  ...overrides,
});
```

**C) Network Mocks** (`tests/fixtures/network-mocks.ts`):

```typescript
import { Page } from '@playwright/test';

export const mockPaymentSuccess = async (page: Page) => {
  await page.route('/api/payment/**', (route) => {
    route.fulfill({
      status: 200,
      body: JSON.stringify({ success: true, transactionId: '12345' }),
    });
  });
};
```

**D) Helper Utilities** (`tests/fixtures/helpers.ts`):

```typescript
import { Page } from '@playwright/test';

export const waitForApiResponse = async (page: Page, urlPattern: string) => {
  return page.waitForResponse((response) => response.url().includes(urlPattern) && response.ok());
};
```

---

### 5. Calculate Summary Statistics

**Aggregate test counts (based on `{detected_stack}`):**

```javascript
const e2eCount = e2eTestsOutput ? e2eTestsOutput.test_count : 0;
const backendCount = backendTestsOutput ? (backendTestsOutput.coverageSummary?.totalTests ?? 0) : 0;

const resolvedMode = subagentContext?.execution?.resolvedMode;
const subagentExecutionLabel =
  resolvedMode === 'sequential'
    ? 'SEQUENTIAL (API then dependent workers)'
    : resolvedMode === 'agent-team'
      ? 'AGENT-TEAM (parallel worker squad)'
      : resolvedMode === 'subagent'
        ? 'SUBAGENT (parallel subagents)'
        : `PARALLEL (based on ${detected_stack})`;
const performanceGainLabel =
  resolvedMode === 'sequential'
    ? 'baseline (no parallel speedup)'
    : resolvedMode === 'agent-team' || resolvedMode === 'subagent'
      ? '~40-70% faster than sequential'
      : 'mode-dependent';

// Note: the E2E output schema keeps priority_coverage per test entry (see subagent 3B),
// so E2E priorities are summed across test entries, like the backend output.
const summary = {
  detected_stack: '{detected_stack}',
  total_tests: apiTestsOutput.test_count + e2eCount + backendCount,
  api_tests: apiTestsOutput.test_count,
  e2e_tests: e2eCount,
  backend_tests: backendCount,
  fixtures_created: uniqueFixtures.length,
  api_test_files: apiTestsOutput.tests.length,
  e2e_test_files: e2eTestsOutput ? e2eTestsOutput.tests.length : 0,
  backend_test_files: backendTestsOutput ? backendTestsOutput.testsGenerated.length : 0,
  priority_coverage: {
    P0:
      (apiTestsOutput.priority_coverage?.P0 ?? 0) +
      (e2eTestsOutput?.tests?.reduce((sum, t) => sum + (t.priority_coverage?.P0 ?? 0), 0) ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P0 ?? 0), 0) ?? 0),
    P1:
      (apiTestsOutput.priority_coverage?.P1 ?? 0) +
      (e2eTestsOutput?.tests?.reduce((sum, t) => sum + (t.priority_coverage?.P1 ?? 0), 0) ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P1 ?? 0), 0) ?? 0),
    P2:
      (apiTestsOutput.priority_coverage?.P2 ?? 0) +
      (e2eTestsOutput?.tests?.reduce((sum, t) => sum + (t.priority_coverage?.P2 ?? 0), 0) ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P2 ?? 0), 0) ?? 0),
    P3:
      (apiTestsOutput.priority_coverage?.P3 ?? 0) +
      (e2eTestsOutput?.tests?.reduce((sum, t) => sum + (t.priority_coverage?.P3 ?? 0), 0) ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P3 ?? 0), 0) ?? 0),
  },
  knowledge_fragments_used: [
    ...apiTestsOutput.knowledge_fragments_used,
    ...(e2eTestsOutput ? e2eTestsOutput.knowledge_fragments_used : []),
    ...(backendTestsOutput ? backendTestsOutput.knowledge_fragments_used || [] : []),
  ],
  subagent_execution: subagentExecutionLabel,
  performance_gain: performanceGainLabel,
};
```
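The four near-identical P0 to P3 sums above can be collapsed into one helper. A sketch, assuming the outputs expose a per-test `priority_coverage` map (the names `sumPriority` and `TestEntry` are illustrative):

```typescript
type PriorityLevel = 'P0' | 'P1' | 'P2' | 'P3';

interface TestEntry {
  priority_coverage?: Partial<Record<PriorityLevel, number>>;
}

// Sums one priority level across any number of subagent outputs,
// each contributing an array of test entries (possibly undefined).
function sumPriority(level: PriorityLevel, ...testLists: (TestEntry[] | undefined)[]): number {
  return testLists.reduce(
    (total, tests) => total + (tests?.reduce((s, t) => s + (t.priority_coverage?.[level] ?? 0), 0) ?? 0),
    0,
  );
}
```

With this, each entry in `priority_coverage` becomes a one-liner, e.g. `P0: (apiTestsOutput.priority_coverage?.P0 ?? 0) + sumPriority('P0', e2eTestsOutput?.tests, backendTestsOutput?.testsGenerated)`.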
**Store summary for Step 4:**
Save summary to temp file for validation step:

```javascript
fs.writeFileSync('/tmp/tea-automate-summary-{{timestamp}}.json', JSON.stringify(summary, null, 2), 'utf8');
```

---

### 6. Optional Cleanup

**Clean up subagent temp files** (optional - can keep for debugging):

```javascript
fs.unlinkSync(apiTestsPath);
if (e2eTestsOutput) fs.unlinkSync('/tmp/tea-automate-e2e-tests-{{timestamp}}.json');
if (backendTestsOutput) fs.unlinkSync('/tmp/tea-automate-backend-tests-{{timestamp}}.json');
console.log('✅ Subagent temp files cleaned up');
```

---

## OUTPUT SUMMARY

Display to user:

```
✅ Test Generation Complete ({subagent_execution})

📊 Summary:
- Stack Type: {detected_stack}
- Total Tests: {total_tests}
- API Tests: {api_tests} ({api_test_files} files)
- E2E Tests: {e2e_tests} ({e2e_test_files} files) [if frontend/fullstack]
- Backend Tests: {backend_tests} ({backend_test_files} files) [if backend/fullstack]
- Fixtures Created: {fixtures_created}
- Priority Coverage:
  - P0 (Critical): {P0} tests
  - P1 (High): {P1} tests
  - P2 (Medium): {P2} tests
  - P3 (Low): {P3} tests

🚀 Performance: {performance_gain}

📂 Generated Files:
- tests/api/[feature].spec.ts [always]
- tests/e2e/[feature].spec.ts [if frontend/fullstack]
- tests/unit/[feature].test.* [if backend/fullstack]
- tests/integration/[feature].test.* [if backend/fullstack]
- tests/fixtures/ or tests/support/ [shared infrastructure]

✅ Ready for validation (Step 4)
```

---

## EXIT CONDITION

Proceed to Step 4 when:

- ✅ All test files written to disk (API + E2E and/or Backend, based on `{detected_stack}`)
- ✅ All fixtures and helpers created
- ✅ Summary statistics calculated and saved
- ✅ Output displayed to user

---

### 7. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-03c-aggregate']
  lastStep: 'step-03c-aggregate'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03c-aggregate'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03c-aggregate'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.
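The create-or-update logic above can be sketched with plain string handling, avoiding a YAML dependency. A simplified sketch that assumes each frontmatter key sits on its own line, as in the template above (function name illustrative):

```typescript
import * as fs from 'fs';

// Creates or updates a progress file with YAML frontmatter as described above.
// Assumes single-line frontmatter keys; a real implementation might use a YAML parser.
function saveProgress(path: string, step: string, date: string, body: string): void {
  if (!fs.existsSync(path)) {
    const fm = `---\nstepsCompleted: ['${step}']\nlastStep: '${step}'\nlastSaved: '${date}'\n---\n\n`;
    fs.writeFileSync(path, fm + body, 'utf8');
    return;
  }
  let text = fs.readFileSync(path, 'utf8');
  // Add the step to stepsCompleted only if not already present.
  text = text.replace(/stepsCompleted: \[(.*)\]/, (m, list) =>
    list.includes(`'${step}'`) ? m : `stepsCompleted: [${list}, '${step}']`,
  );
  text = text.replace(/lastStep: '.*'/, `lastStep: '${step}'`);
  text = text.replace(/lastSaved: '.*'/, `lastSaved: '${date}'`);
  fs.writeFileSync(path, text + '\n' + body, 'utf8');
}
```

The idempotency check on `stepsCompleted` matters because a re-run of the same step should not duplicate the entry.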
Load next step: `{nextStepFile}`
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- All launched subagents succeeded (based on `{detected_stack}`)
- All test files written to disk
- Fixtures generated based on subagent needs
- Summary complete and accurate

### ❌ SYSTEM FAILURE:

- One or more subagents failed
- Test files not written to disk
- Fixtures missing or incomplete
- Summary missing or inaccurate

**Master Rule:** Do NOT proceed to Step 4 if aggregation is incomplete.
@@ -0,0 +1,106 @@
---
name: 'step-04-validate-and-summarize'
description: 'Validate outputs and produce automation summary'
outputFile: '{test_artifacts}/automation-summary.md'
---

# Step 4: Validate & Summarize

## STEP GOAL

Validate generated outputs and produce a concise automation summary.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Validate against the checklist before completion

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Validate

Use `checklist.md` to validate:

- Framework readiness
- Coverage mapping
- Test quality and structure
- Fixtures, factories, helpers
- CLI sessions cleaned up (no orphaned browsers)
- Temp artifacts stored in `{test_artifacts}/`, not in random locations

Fix gaps before proceeding.

---

## 2. Polish Output

Before finalizing, review the complete output document for quality:

1. **Remove duplication**: Progressive-append workflow may have created repeated sections — consolidate
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)
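The deduplication check in item 1 can be mechanized: scan for headings that appear more than once and keep only the first occurrence of each repeated section. A rough sketch that treats a section as everything from one heading to the next heading of any level, and keys sections by exact heading text (function name illustrative):

```typescript
// Removes repeated sections from a progressive-append markdown document,
// keeping the first occurrence of each heading.
function consolidateSections(markdown: string): string {
  const lines = markdown.split('\n');
  const seen = new Set<string>();
  const out: string[] = [];
  let skipping = false;
  for (const line of lines) {
    if (/^#{1,6} /.test(line)) {
      // A heading toggles whether we are inside a duplicate section.
      skipping = seen.has(line);
      if (!skipping) seen.add(line);
    }
    if (!skipping) out.push(line);
  }
  return out.join('\n');
}
```

This is a blunt instrument: legitimately repeated headings (e.g. identical subsection titles under different parents) would also be dropped, so a manual review pass still applies.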
---

## 3. Summary Output

Write `{outputFile}` including:

- Coverage plan by test level and priority
- Files created/updated
- Key assumptions and risks
- Next recommended workflow (e.g., `test-review` or `trace`)

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-04-validate-and-summarize']
  lastStep: 'step-04-validate-and-summarize'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-04-validate-and-summarize'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-04-validate-and-summarize'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---

# Step 1: Assess Edit Target

## STEP GOAL:

Identify which output should be edited and load it.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until the target is confirmed

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Identify Target

Ask the user to provide the output file path or select from known outputs.

### 2. Load Target

Read the provided output file in full.

### 3. Confirm

Confirm the target and proceed to edit.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Target identified and loaded

### ❌ SYSTEM FAILURE:

- Proceeding without a confirmed target
@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---

# Step 2: Apply Edits

## STEP GOAL:

Apply the requested edits to the selected output and confirm changes.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Only apply edits explicitly requested by the user

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: selected output and user changes
- Focus: apply edits only

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Confirm Requested Changes

Restate what will be changed and confirm.

### 2. Apply Changes

Update the output file accordingly.

### 3. Report

Summarize the edits applied.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Changes applied and confirmed

### ❌ SYSTEM FAILURE:

- Unconfirmed edits or missing update
@@ -0,0 +1,67 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/automate-validation-report.md'
validationChecklist: '../checklist.md'
---

# Step 1: Validate Outputs

## STEP GOAL:

Validate outputs using the workflow checklist and record findings.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`

## CONTEXT BOUNDARIES:

- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Checklist

Read `{validationChecklist}` and list all criteria.

### 2. Validate Outputs

Evaluate outputs against each checklist item.

### 3. Write Report

Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.
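The PASS/WARN/FAIL report can be assembled from a simple verdict map. A sketch; the overall-status rule (FAIL beats WARN beats PASS) and the function name are assumptions, not part of the checklist itself:

```typescript
type Verdict = 'PASS' | 'WARN' | 'FAIL';

// Builds a markdown validation report from per-section verdicts.
// Overall status is FAIL if any section failed, else WARN if any warned.
function buildValidationReport(results: Record<string, Verdict>): string {
  const verdicts = Object.values(results);
  const overall: Verdict = verdicts.includes('FAIL') ? 'FAIL' : verdicts.includes('WARN') ? 'WARN' : 'PASS';
  const lines = ['# Validation Report', '', `**Overall:** ${overall}`, ''];
  for (const [section, verdict] of Object.entries(results)) {
    lines.push(`- ${section}: ${verdict}`);
  }
  return lines.join('\n');
}
```

Keeping the per-section verdicts in one map makes it easy to assert that every checklist item was evaluated before the report is written.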
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Validation report written
- All checklist items evaluated

### ❌ SYSTEM FAILURE:

- Skipped checklist items
- No report produced
@@ -0,0 +1,72 @@
---
validationDate: 2026-01-27
workflowName: testarch-automate
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-automate
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---

# Validation Report: testarch-automate

**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 7

**Step File Sizes:**

- steps-c/step-01-preflight-and-context.md: 113 lines [GOOD]
- steps-c/step-02-identify-targets.md: 85 lines [GOOD]
- steps-c/step-03-generate-tests.md: 76 lines [GOOD]
- steps-c/step-04-validate-and-summarize.md: 62 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

- No {project-root} hardcoded paths detected in body
- No dead relative links detected

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes a linear sequence (no branching/menu). workflow-plan.md present for reference. [INFO]

## Output Format Validation

- No templates found in workflow root
- Steps with outputFile in frontmatter:
  - steps-c/step-04-validate-and-summarize.md
  - steps-v/step-01-validate.md

## Validation Design Check

- checklist.md present: YES
- Validation steps folder (steps-v) present: YES

## Instruction Style Check

- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics

## Summary

- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
@@ -0,0 +1,114 @@
---
validationDate: 2026-01-27
workflowName: testarch-automate
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-automate
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---

# Validation Report: testarch-automate

**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 7

**Step File Sizes:**

- steps-c/step-01-preflight-and-context.md: 112 lines [GOOD]
- steps-c/step-02-identify-targets.md: 84 lines [GOOD]
- steps-c/step-03-generate-tests.md: 75 lines [GOOD]
- steps-c/step-04-validate-and-summarize.md: 61 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

### Config Variables (Exceptions)

Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name

- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected

**Status:** ✅ PASS - No critical violations

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- steps-c/step-01-preflight-and-context.md: Init [PASS]
- steps-c/step-02-identify-targets.md: Middle [PASS]
- steps-c/step-03-generate-tests.md: Middle [PASS]
- steps-c/step-04-validate-and-summarize.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: NONE
- Steps with outputFile in frontmatter:
  - steps-c/step-04-validate-and-summarize.md
  - steps-v/step-01-validate.md
- checklist.md present: YES

## Validation Design Check

- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed

## Instruction Style Check

- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance

## Collaborative Experience Check

- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals

## Subagent Optimization Opportunities

- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only

## Cohesive Review

- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with Test Architect persona
- Recommendation: READY (minor refinements optional)

## Plan Quality Validation

- Plan file present: workflow-plan.md
- Planned steps found: 7 (all implemented)
- Plan implementation status: Fully Implemented

## Summary

- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
@@ -0,0 +1,20 @@
# Workflow Plan: testarch-automate

## Create Mode (steps-c)

- step-01-preflight-and-context.md
- step-02-identify-targets.md
- step-03-generate-tests.md
- step-04-validate-and-summarize.md

## Validate Mode (steps-v)

- step-01-validate.md

## Edit Mode (steps-e)

- step-01-assess.md
- step-02-apply-edit.md

## Outputs

- {test_artifacts}/automation-summary.md
- Test files under {project-root}/tests
@@ -0,0 +1,41 @@
---
name: bmad-testarch-automate
description: Expand test automation coverage for codebase. Use when user says 'lets expand test coverage' or 'I want to automate tests'
web_bundle: true
---

# Test Automation Expansion

**Goal:** Expand test automation coverage after implementation or analyze existing codebase to generate comprehensive test suite

**Role:** You are the Master Test Architect.

---

## WORKFLOW ARCHITECTURE

This workflow uses **tri-modal step-file architecture**:

- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs

---

## INITIALIZATION SEQUENCE

### 1. Mode Determination

"Welcome to the workflow. What would you like to do?"

- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs

### 2. Route to First Step

- **If C:** Load `steps-c/step-01-preflight-and-context.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`
@@ -0,0 +1,53 @@
# Test Architect workflow: bmad-testarch-automate
name: bmad-testarch-automate
# prettier-ignore
description: 'Expand test automation coverage for codebase. Use when the user says "lets expand test coverage" or "I want to automate tests"'

# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"
template: false

# Variables and inputs
variables:
  # Execution mode and targeting
  standalone_mode: true # Can work without BMad artifacts (true) or integrate with BMad (false)
  coverage_target: "critical-paths" # critical-paths, comprehensive, selective

  # Directory paths
  test_dir: "{project-root}/tests" # Root test directory
  source_dir: "{project-root}" # Source code directory (customize if needed, e.g., {project-root}/src or {project-root}/lib)

  # Output configuration
  default_output_file: "{test_artifacts}/automation-summary.md"

# Required tools
required_tools:
  - read_file # Read source code, existing tests, BMad artifacts
  - write_file # Create test files, fixtures, factories, summaries
  - create_directory # Create test directories
  - list_files # Discover features and existing tests
  - search_repo # Find coverage gaps and patterns
  - glob # Find test files and source files

tags:
  - qa
  - automation
  - test-architect
  - regression
  - coverage

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true
6
_bmad/tea/workflows/testarch/bmad-testarch-ci/SKILL.md
Normal file
@@ -0,0 +1,6 @@
---
name: bmad-testarch-ci
description: 'Scaffold CI/CD quality pipeline with test execution. Use when the user says "lets setup CI pipeline" or "I want to create quality gates"'
---

Follow the instructions in [workflow.md](workflow.md).
@@ -0,0 +1,155 @@
# Azure DevOps CI/CD Pipeline for Test Execution
# Generated by BMad TEA Agent - Test Architect Module
# Optimized for: Parallel Sharding, Burn-In Loop
# Stack: {test_stack_type} | Framework: {test_framework}
#
# Variables to customize per project:
# INSTALL_CMD - dependency install command (e.g., npm ci, pnpm install --frozen-lockfile)
# TEST_CMD - main test command (e.g., npm run test:e2e, npm test, npx vitest)
# LINT_CMD - lint command (e.g., npm run lint)
# BROWSER_INSTALL - browser install command (frontend/fullstack only; omit for backend)
# DEFAULT_NODE_VERSION - Node.js version (read from .nvmrc or default to 24)

trigger:
  branches:
    include:
      - main
      - develop

pr:
  branches:
    include:
      - main
      - develop

variables:
  DEFAULT_NODE_VERSION: "24"
  npm_config_cache: $(Pipeline.Workspace)/.npm
  # Set TEST_STACK_TYPE to 'backend' to skip Playwright browser installs
  TEST_STACK_TYPE: "" # Values: frontend, backend, fullstack (leave empty for auto)

stages:
  # Lint stage - Code quality checks
  - stage: Lint
    displayName: "Lint"
    jobs:
      - job: LintJob
        displayName: "Code Quality"
        pool:
          vmImage: "ubuntu-latest"
        timeoutInMinutes: 5
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: $(DEFAULT_NODE_VERSION)
            displayName: "Setup Node.js"

          - task: Cache@2
            inputs:
              key: 'npm | "$(Agent.OS)" | package-lock.json'
              restoreKeys: 'npm | "$(Agent.OS)"'
              path: $(npm_config_cache)
            displayName: "Cache npm"

          - script: npm ci
            displayName: "Install dependencies" # Replace with INSTALL_CMD

          - script: npm run lint
            displayName: "Run linter" # Replace with LINT_CMD

  # Test stage - Parallel execution with sharding
  - stage: Test
    displayName: "Test"
    dependsOn: Lint
    jobs:
      - job: TestShard
        displayName: "Test Shard"
        pool:
          vmImage: "ubuntu-latest"
        timeoutInMinutes: 30
        strategy:
          matrix:
            Shard1:
              SHARD_INDEX: 1
            Shard2:
              SHARD_INDEX: 2
            Shard3:
              SHARD_INDEX: 3
            Shard4:
              SHARD_INDEX: 4
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: $(DEFAULT_NODE_VERSION)
            displayName: "Setup Node.js"

          - task: Cache@2
            inputs:
              key: 'npm | "$(Agent.OS)" | package-lock.json'
              restoreKeys: 'npm | "$(Agent.OS)"'
              path: $(npm_config_cache)
            displayName: "Cache npm"

          - script: npm ci
            displayName: "Install dependencies" # Replace with INSTALL_CMD

          # Frontend/Fullstack only — skipped for backend-only stacks
          - script: npx playwright install --with-deps chromium
            condition: ne(variables['TEST_STACK_TYPE'], 'backend')
            displayName: "Install Playwright browsers" # Replace with BROWSER_INSTALL

          - script: npm run test:e2e -- --shard=$(SHARD_INDEX)/4
            displayName: "Run tests (shard $(SHARD_INDEX)/4)" # Replace with TEST_CMD + shard args

          - task: PublishTestResults@2
            condition: always()
            inputs:
              testResultsFormat: "JUnit"
              testResultsFiles: "test-results/**/*.xml"
              mergeTestResults: true
            displayName: "Publish test results"

          - publish: test-results/
            artifact: test-results-$(SHARD_INDEX)
            condition: failed()
            displayName: "Upload failure artifacts"

  # Burn-in stage - Flaky test detection
  # Note: Burn-in targets UI flakiness. For backend-only stacks, remove this stage entirely.
  - stage: BurnIn
    displayName: "Burn-In (Flaky Detection)"
    dependsOn: Test
    condition: and(succeeded(), or(eq(variables['Build.Reason'], 'PullRequest'), eq(variables['Build.CronSchedule.DisplayName'], 'Weekly burn-in')))
    jobs:
      - job: BurnInJob
        displayName: "Burn-In Loop"
        pool:
          vmImage: "ubuntu-latest"
        timeoutInMinutes: 60
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: $(DEFAULT_NODE_VERSION)
            displayName: "Setup Node.js"

          - script: npm ci
            displayName: "Install dependencies" # Replace with INSTALL_CMD

          # Frontend/Fullstack only — skipped for backend-only stacks
          - script: npx playwright install --with-deps chromium
            condition: ne(variables['TEST_STACK_TYPE'], 'backend')
            displayName: "Install Playwright browsers" # Replace with BROWSER_INSTALL

          - script: |
              echo "Starting burn-in loop - detecting flaky tests"
              for i in $(seq 1 10); do
                echo "Burn-in iteration $i/10"
                npm run test:e2e || exit 1
              done
              echo "Burn-in complete - no flaky tests detected"
            displayName: "Run burn-in loop (10 iterations)" # Replace npm run test:e2e with TEST_CMD

          - publish: test-results/
            artifact: burn-in-failures
            condition: failed()
            displayName: "Upload burn-in failure artifacts"
@@ -0,0 +1 @@
type: skill
289
_bmad/tea/workflows/testarch/bmad-testarch-ci/checklist.md
Normal file
@@ -0,0 +1,289 @@
# CI/CD Pipeline Setup - Validation Checklist

## Prerequisites

- [ ] Git repository initialized (`.git/` exists)
- [ ] Git remote configured (`git remote -v` shows origin)
- [ ] Test framework configured (appropriate config for detected stack type)
- [ ] Local tests pass (test command succeeds)
- [ ] Team agrees on CI platform
- [ ] Access to CI platform settings (if updating)

### Multi-Stack Detection

- [ ] Test stack type detected or configured (`frontend`, `backend`, `fullstack`)
- [ ] Test framework detected or configured (Playwright, Cypress, Jest, Vitest, etc.)
- [ ] Stack-appropriate test commands identified

### Multi-Platform Detection

- [ ] CI platform detected or configured
- [ ] Supported platform: GitHub Actions, GitLab CI, Jenkins, Azure DevOps, Harness, or Circle CI
- [ ] Platform-specific template selected

Note: CI setup is typically a one-time task per repo and can be run any time after the test framework is configured.

## Process Steps

### Step 1: Preflight Checks

- [ ] Git repository validated
- [ ] Framework configuration detected
- [ ] Local test execution successful
- [ ] CI platform detected or selected
- [ ] Node version identified (.nvmrc or default)
- [ ] No blocking issues found

### Step 2: CI Pipeline Configuration

- [ ] CI configuration file created at platform-correct path
  - GitHub Actions: `.github/workflows/test.yml`
  - GitLab CI: `.gitlab-ci.yml`
  - Jenkins: `Jenkinsfile`
  - Azure DevOps: `azure-pipelines.yml`
  - Harness: `.harness/pipeline.yaml`
  - Circle CI: `.circleci/config.yml`
- [ ] File is syntactically valid (no YAML/Groovy errors)
- [ ] Correct framework commands configured for detected stack type
- [ ] Node version matches project
- [ ] Test directory paths correct
- [ ] Stack-conditional steps applied:
  - [ ] Browser install included for frontend/fullstack stacks
  - [ ] Browser install omitted for backend-only stacks
  - [ ] Test commands match detected framework

### Step 3: Parallel Sharding

- [ ] Matrix strategy configured (4 shards default)
- [ ] Shard syntax correct for framework
- [ ] fail-fast set to false
- [ ] Shard count appropriate for test suite size
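As an illustration of what sharding does (a sketch only — Playwright, Jest and friends partition internally via flags like `--shard=INDEX/TOTAL`), a round-robin split of test files across four shards can be modeled as:

```shell
#!/bin/bash
set -euo pipefail

# shard_files INDEX TOTAL FILE... — print the files assigned to shard INDEX
# of TOTAL using a round-robin split. Illustrative only; real frameworks
# shard by test count internally.
shard_files() {
  local index="$1" total="$2"
  shift 2
  local i=0 f
  for f in "$@"; do
    if [ $(( i % total )) -eq $(( index - 1 )) ]; then
      echo "$f"
    fi
    i=$(( i + 1 ))
  done
}

# Example: shard_files 1 4 tests/a.spec.ts tests/b.spec.ts ...
```

Balanced shards are why the "shard count appropriate for test suite size" item matters: too many shards waste runner spin-up time, too few lose the parallel speedup.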
### Step 4: Burn-In Loop

- [ ] Burn-in job created (frontend/fullstack stacks) or intentionally skipped (backend-only)
- [ ] 10 iterations configured (when enabled)
- [ ] Proper exit on failure (`|| exit 1`)
- [ ] Runs on appropriate triggers (PR, cron)
- [ ] Failure artifacts uploaded
- [ ] Backend-only stacks: burn-in skipped by default (documented reason: targets UI flakiness)
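The items above correspond to a small helper; a minimal sketch of `scripts/burn-in.sh`, assuming iteration count and test command are passed in (the `npm run test:e2e` example is an assumption — substitute your TEST_CMD):

```shell
#!/bin/bash
set -euo pipefail

# burn_in ITERATIONS CMD... — run CMD repeatedly; any single failure is
# treated as evidence of flakiness and stops the loop.
burn_in() {
  local iterations="$1"
  shift
  local i
  for i in $(seq 1 "$iterations"); do
    echo "Burn-in iteration $i/$iterations"
    "$@" || { echo "Failure on iteration $i - investigate for flakiness" >&2; return 1; }
  done
  echo "Burn-in complete - no failures in $iterations runs"
}

# Example (hypothetical test command):
#   burn_in 10 npm run test:e2e
```

Stopping on the first failure mirrors the `|| exit 1` item: one failed iteration out of ten is already enough signal.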
### Step 5: Caching Configuration

- [ ] Dependency cache configured (npm/yarn)
- [ ] Cache key uses lockfile hash
- [ ] Browser cache configured (Playwright/Cypress)
- [ ] Restore-keys defined for fallback
- [ ] Cache paths correct for platform

### Step 6: Artifact Collection

- [ ] Artifacts upload on failure only
- [ ] Correct artifact paths (test-results/, traces/, etc.)
- [ ] Retention days set (30 default)
- [ ] Artifact names unique per shard
- [ ] No sensitive data in artifacts

### Step 7: Retry Logic

- [ ] Retry action/strategy configured
- [ ] Max attempts: 2-3
- [ ] Timeout appropriate (30 min)
- [ ] Retry only on transient errors

### Step 8: Helper Scripts

- [ ] `scripts/test-changed.sh` created
- [ ] `scripts/ci-local.sh` created
- [ ] `scripts/burn-in.sh` created (optional)
- [ ] Scripts are executable (`chmod +x`)
- [ ] Scripts use correct test commands
- [ ] Shebang present (`#!/bin/bash`)
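As a sketch of the selective-testing helper, `scripts/test-changed.sh` might look like this (the `src/`/`tests/` relevance convention and the `npm run test:e2e` command are assumptions — adapt both to the project):

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical convention: changes under src/ or tests/ warrant a test run.
needs_tests() {
  echo "$1" | grep -qE '^(src|tests)/'
}

# run_changed [BASE_REF] — diff against a base ref and run the suite only
# when test-relevant paths changed.
run_changed() {
  local base_ref="${1:-main}"
  local changed
  changed=$(git diff --name-only "$base_ref"...HEAD)
  if [ -z "$changed" ]; then
    echo "No changes against $base_ref - nothing to test"
  elif needs_tests "$changed"; then
    npm run test:e2e # Replace with TEST_CMD
  else
    echo "No test-relevant changes - skipping"
  fi
}
```

Keeping the relevance rule in one function (`needs_tests`) makes it easy to tighten or broaden what counts as a test-affecting change later.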
### Step 9: Documentation

- [ ] `docs/ci.md` created with pipeline guide
- [ ] `docs/ci-secrets-checklist.md` created
- [ ] Required secrets documented
- [ ] Setup instructions clear
- [ ] Troubleshooting section included
- [ ] Badge URLs provided (optional)

## Output Validation

### Configuration Validation

- [ ] CI file loads without errors
- [ ] All paths resolve correctly
- [ ] No hardcoded values (use env vars)
- [ ] Triggers configured (push, pull_request, schedule)
- [ ] Platform-specific syntax correct

### Execution Validation

- [ ] First CI run triggered (push to remote)
- [ ] Pipeline starts without errors
- [ ] All jobs appear in CI dashboard
- [ ] Caching works (check logs for cache hit)
- [ ] Tests execute in parallel
- [ ] Artifacts collected on failure

### Performance Validation

- [ ] Lint stage: <2 minutes
- [ ] Test stage (per shard): <10 minutes
- [ ] Burn-in stage: <30 minutes
- [ ] Total pipeline: <45 minutes
- [ ] Cache reduces install time by 2-5 minutes

## Quality Checks

### Best Practices Compliance

- [ ] Burn-in loop follows production patterns
- [ ] Parallel sharding configured optimally
- [ ] Failure-only artifact collection
- [ ] Selective testing enabled (optional)
- [ ] Retry logic handles transient failures only
- [ ] No secrets in configuration files

### Knowledge Base Alignment

- [ ] Burn-in pattern matches `ci-burn-in.md`
- [ ] Selective testing matches `selective-testing.md`
- [ ] Artifact collection matches `visual-debugging.md`
- [ ] Test quality matches `test-quality.md`

### Security Checks

- [ ] No credentials in CI configuration
- [ ] Secrets use platform secret management
- [ ] Environment variables for sensitive data
- [ ] Artifact retention appropriate (not too long)
- [ ] No debug output exposing secrets
- [ ] **MUST**: No `${{ inputs.* }}` or user-controlled GitHub context (`github.event.pull_request.title`, `github.event.issue.body`, `github.event.comment.body`, `github.head_ref`) directly in `run:` blocks — all passed through `env:` intermediaries and referenced as `"$ENV_VAR"`

## Integration Points

### Status File Integration

- [ ] CI setup logged in Quality & Testing Progress section
- [ ] Status updated with completion timestamp
- [ ] Platform and configuration noted

### Knowledge Base Integration

- [ ] Relevant knowledge fragments loaded
- [ ] Patterns applied from knowledge base
- [ ] Documentation references knowledge base
- [ ] Knowledge base references in README

### Workflow Dependencies

- [ ] `framework` workflow completed first
- [ ] Can proceed to `atdd` workflow after CI setup
- [ ] Can proceed to `automate` workflow
- [ ] CI integrates with `gate` workflow

## Completion Criteria

**All must be true:**

- [ ] All prerequisites met
- [ ] All process steps completed
- [ ] All output validations passed
- [ ] All quality checks passed
- [ ] All integration points verified
- [ ] First CI run successful
- [ ] Performance targets met
- [ ] Documentation complete

## Post-Workflow Actions

**User must complete:**

1. [ ] Commit CI configuration
2. [ ] Push to remote repository
3. [ ] Configure required secrets in CI platform
4. [ ] Open PR to trigger first CI run
5. [ ] Monitor and verify pipeline execution
6. [ ] Adjust parallelism if needed (based on actual run times)
7. [ ] Set up notifications (optional)

**Recommended next workflows:**

1. [ ] Run `atdd` workflow for test generation
2. [ ] Run `automate` workflow for coverage expansion
3. [ ] Run `gate` workflow for quality gates

## Rollback Procedure

If workflow fails:

1. [ ] Delete CI configuration file
2. [ ] Remove helper scripts directory
3. [ ] Remove documentation (docs/ci.md, etc.)
4. [ ] Clear CI platform secrets (if added)
5. [ ] Review error logs
6. [ ] Fix issues and retry workflow

## Notes

### Common Issues

**Issue**: CI file syntax errors

- **Solution**: Validate YAML syntax online or with linter

**Issue**: Tests fail in CI but pass locally

- **Solution**: Use `scripts/ci-local.sh` to mirror CI environment
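A minimal sketch of what `scripts/ci-local.sh` could contain, assuming it replays the pipeline stage order (the three npm commands are assumptions — substitute INSTALL_CMD, LINT_CMD and TEST_CMD):

```shell
#!/bin/bash
set -euo pipefail

# run_stage NAME CMD... — label a stage and run its command; with set -e a
# failing stage aborts the script, matching the pipeline's fail-fast order.
run_stage() {
  local name="$1"
  shift
  echo "==> Stage: $name"
  "$@"
  echo "==> Stage passed: $name"
}

# Mirror the CI order (hypothetical commands; substitute per project):
#   run_stage "Install" npm ci
#   run_stage "Lint"    npm run lint
#   run_stage "Test"    npm run test:e2e
```

Running this before pushing catches most "fails in CI, passes locally" surprises caused by skipping the lint stage or running tests against a stale install.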
**Issue**: Caching not working

- **Solution**: Check cache key formula, verify paths

**Issue**: Burn-in too slow

- **Solution**: Reduce iterations or run on cron only

### Platform-Specific

**GitHub Actions:**

- Secrets: Repository Settings → Secrets and variables → Actions
- Runners: Ubuntu latest recommended
- Concurrency limits: 20 jobs for free tier

**GitLab CI:**

- Variables: Project Settings → CI/CD → Variables
- Runners: Shared or project-specific
- Pipeline quota: 400 minutes/month free tier

**Jenkins:**

- Credentials: Manage Jenkins → Manage Credentials
- Agents: Configure build agents with Node.js
- Plugins: Pipeline, JUnit, HTML Publisher recommended

**Azure DevOps:**

- Variables: Pipelines → Library → Variable groups
- Agent pools: Azure-hosted or self-hosted
- Parallel jobs: 1 free (Microsoft-hosted)

**Harness:**

- Connectors: Configure container registry and code repo connectors
- Delegates: Install Harness delegate in target infrastructure
- Steps: Use Run steps with appropriate container images

---

**Checklist Complete**: Sign off when all items validated.

**Completed by:** {name}
**Date:** {date}
**Platform:** {GitHub Actions, GitLab CI, Other}
**Notes:** {notes}
@@ -0,0 +1,328 @@
# GitHub Actions CI/CD Pipeline for Test Execution
# Generated by BMad TEA Agent - Test Architect Module
# Optimized for: Parallel Sharding, Burn-In Loop
# Stack: {test_stack_type} | Framework: {test_framework}
#
# Variables to customize per project:
# INSTALL_CMD - dependency install command (e.g., npm ci, pnpm install --frozen-lockfile, yarn --frozen-lockfile)
# TEST_CMD - main test command (e.g., npm run test:e2e, npm test, npx vitest)
# LINT_CMD - lint command (e.g., npm run lint)
# BROWSER_INSTALL - browser install command (frontend/fullstack only; omit for backend)
# BROWSER_CACHE_PATH - browser cache path (frontend/fullstack only; omit for backend)

name: Test Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
  schedule:
    # Weekly burn-in on Sundays at 2 AM UTC
    - cron: "0 2 * * 0"

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # Lint stage - Code quality checks
  lint:
    name: Lint
    runs-on: ubuntu-latest
    timeout-minutes: 5

    steps:
      - uses: actions/checkout@v4

      - name: Determine Node version
        id: node-version
        run: |
          if [ -f .nvmrc ]; then
            echo "value=$(cat .nvmrc)" >> "$GITHUB_OUTPUT"
            echo "Using Node from .nvmrc"
          else
            echo "value=24" >> "$GITHUB_OUTPUT"
            echo "Using default Node 24 (current LTS)"
          fi

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ steps.node-version.outputs.value }}
          cache: "npm"

      - name: Install dependencies
        run: npm ci # Replace with INSTALL_CMD

      - name: Run linter
        run: npm run lint # Replace with LINT_CMD
  # Test stage - Parallel execution with sharding
  test:
    name: Test (Shard ${{ matrix.shard }})
    runs-on: ubuntu-latest
    timeout-minutes: 30
    needs: lint

    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]

    steps:
      - uses: actions/checkout@v4

      - name: Determine Node version
        id: node-version
        run: |
          if [ -f .nvmrc ]; then
            echo "value=$(cat .nvmrc)" >> "$GITHUB_OUTPUT"
            echo "Using Node from .nvmrc"
          else
            echo "value=24" >> "$GITHUB_OUTPUT"
            echo "Using default Node 24 (current LTS)"
          fi

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ steps.node-version.outputs.value }}
          cache: "npm"

      - name: Cache Playwright browsers
        uses: actions/cache@v4
        with:
          path: ~/.cache/ms-playwright
          key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-playwright-

      - name: Install dependencies
        run: npm ci # Replace with INSTALL_CMD

      # Frontend/Fullstack only — remove this step for backend-only stacks
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium # Replace with BROWSER_INSTALL

      - name: Run tests (shard ${{ matrix.shard }}/4)
        run: npm run test:e2e -- --shard=${{ matrix.shard }}/4 # Replace with TEST_CMD + shard args

      - name: Upload test results
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: test-results-${{ matrix.shard }}
          path: |
            test-results/
            playwright-report/
          retention-days: 30
  # Burn-in stage - Flaky test detection
  burn-in:
    name: Burn-In (Flaky Detection)
    runs-on: ubuntu-latest
    timeout-minutes: 60
    needs: test
    # Only run burn-in on PRs to main/develop or on schedule
    if: github.event_name == 'pull_request' || github.event_name == 'schedule'

    steps:
      - uses: actions/checkout@v4

      - name: Determine Node version
        id: node-version
        run: |
          if [ -f .nvmrc ]; then
            echo "value=$(cat .nvmrc)" >> "$GITHUB_OUTPUT"
            echo "Using Node from .nvmrc"
          else
            echo "value=24" >> "$GITHUB_OUTPUT"
            echo "Using default Node 24 (current LTS)"
          fi

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ steps.node-version.outputs.value }}
          cache: "npm"

      # Frontend/Fullstack only — remove this step for backend-only stacks
      - name: Cache Playwright browsers
        uses: actions/cache@v4
        with:
          path: ~/.cache/ms-playwright # Replace with BROWSER_CACHE_PATH
          key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }}

      - name: Install dependencies
        run: npm ci # Replace with INSTALL_CMD

      # Frontend/Fullstack only — remove this step for backend-only stacks
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium # Replace with BROWSER_INSTALL

      # Note: Burn-in targets UI flakiness. For backend-only stacks, remove this job entirely.
      - name: Run burn-in loop (10 iterations)
        run: |
          echo "🔥 Starting burn-in loop - detecting flaky tests"
          for i in {1..10}; do
            echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
            echo "🔥 Burn-in iteration $i/10"
            echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
            npm run test:e2e || exit 1 # Replace with TEST_CMD
          done
          echo "✅ Burn-in complete - no flaky tests detected"

      - name: Upload burn-in failure artifacts
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: burn-in-failures
          path: |
            test-results/
            playwright-report/
          retention-days: 30
  # Report stage - Aggregate and publish results
  report:
    name: Test Report
    runs-on: ubuntu-latest
    needs: [test, burn-in]
    if: always()

    steps:
      - name: Download all artifacts
        uses: actions/download-artifact@v4
        with:
          path: artifacts

      - name: Generate summary
        run: |
          echo "## Test Execution Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "- **Status**: ${{ needs.test.result }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Burn-in**: ${{ needs.burn-in.result }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Shards**: 4" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          if [ "${{ needs.burn-in.result }}" == "failure" ]; then
            echo "⚠️ **Flaky tests detected** - Review burn-in artifacts" >> $GITHUB_STEP_SUMMARY
          fi
# ============================================================================
# EXTENSION PATTERNS — Script Injection Prevention
# ============================================================================
# When extending this template into reusable workflows, manual dispatch
# workflows, or composite actions, NEVER use ${{ inputs.* }} directly in
# run: blocks. Always pass through env: intermediaries.
#
# KEY PRINCIPLE: Inputs must be DATA, not COMMANDS.
# Pass inputs through env: and interpolate as quoted arguments into fixed
# commands. NEVER accept command-shaped inputs (e.g., install-command,
# test-command) that get executed as shell code — even through env:.
#
# --- Reusable Workflow (workflow_call) ---
#
# on:
#   workflow_call:
#     inputs:
#       test-grep:
#         description: 'Test grep filter (data only — not a command)'
#         type: string
#         required: false
#         default: ''
#       base-ref:
#         description: 'Base branch for diff'
#         type: string
#         required: false
#         default: 'main'
#       burn-in-count:
#         description: 'Number of burn-in iterations'
#         type: string
#         required: false
#         default: '10'
#
# jobs:
#   test:
#     runs-on: ubuntu-latest
#     steps:
#       - uses: actions/checkout@v4
#       # Fixed command — not derived from inputs
#       - name: Install dependencies
#         run: npm ci
#       # ✅ SAFE — input is DATA passed as an argument to a fixed command
#       - name: Run tests
#         env:
#           TEST_GREP: ${{ inputs.test-grep }}
#         run: |
#           # Security: inputs passed through env: to prevent script injection
#           if [ -n "$TEST_GREP" ]; then
#             npx playwright test --grep "$TEST_GREP"
#           else
#             npx playwright test
#           fi
#
# --- Manual Dispatch (workflow_dispatch) ---
#
# on:
#   workflow_dispatch:
#     inputs:
#       test-grep:
#         description: 'Test grep filter (data only — not a command)'
#         type: string
#         required: false
#       environment:
#         description: 'Target environment'
#         type: choice
#         options: [staging, production]
|
||||
#
|
||||
# jobs:
|
||||
# run-tests:
|
||||
# runs-on: ubuntu-latest
|
||||
# steps:
|
||||
# - uses: actions/checkout@v4
|
||||
# # ✅ SAFE — input is DATA interpolated into a fixed command
|
||||
# - name: Run selected tests
|
||||
# env:
|
||||
# TEST_GREP: ${{ inputs.test-grep }}
|
||||
# run: |
|
||||
# # Security: inputs passed through env: to prevent script injection
|
||||
# npx playwright test --grep "$TEST_GREP"
|
||||
#
|
||||
# --- Composite Action (action.yml) ---
|
||||
#
|
||||
# inputs:
|
||||
# test-grep:
|
||||
# description: 'Test grep filter (data only — not a command)'
|
||||
# required: false
|
||||
# default: ''
|
||||
# burn-in-count:
|
||||
# description: 'Number of burn-in iterations'
|
||||
# required: false
|
||||
# default: '10'
|
||||
#
|
||||
# runs:
|
||||
# using: composite
|
||||
# steps:
|
||||
# # ✅ SAFE — inputs are DATA arguments to fixed commands
|
||||
# - name: Run burn-in
|
||||
# shell: bash
|
||||
# env:
|
||||
# TEST_GREP: ${{ inputs.test-grep }}
|
||||
# BURN_IN_COUNT: ${{ inputs.burn-in-count }}
|
||||
# run: |
|
||||
# # Security: inputs passed through env: to prevent script injection
|
||||
# for i in $(seq 1 "$BURN_IN_COUNT"); do
|
||||
# echo "Burn-in iteration $i/$BURN_IN_COUNT"
|
||||
# npx playwright test --grep "$TEST_GREP" || exit 1
|
||||
# done
|
||||
#
|
||||
# ❌ NEVER DO THIS:
|
||||
# # Direct ${{ inputs.* }} in run: — GitHub expression injection
|
||||
# - run: npx playwright test --grep "${{ inputs.test-grep }}"
|
||||
#
|
||||
# # Executing input-derived env var as a command — still command injection
|
||||
# - env:
|
||||
# CMD: ${{ inputs.test-command }}
|
||||
# run: $CMD
|
||||
# ============================================================================
|
||||
@@ -0,0 +1,158 @@
|
||||
# GitLab CI/CD Pipeline for Test Execution
# Generated by BMad TEA Agent - Test Architect Module
# Optimized for: Parallel Sharding, Burn-In Loop
# Stack: {test_stack_type} | Framework: {test_framework}
#
# Variables to customize per project:
# INSTALL_CMD - dependency install command (e.g., npm ci, pnpm install --frozen-lockfile)
# TEST_CMD - main test command (e.g., npm run test:e2e, npm test, npx vitest)
# LINT_CMD - lint command (e.g., npm run lint)
# BROWSER_INSTALL - browser install command (frontend/fullstack only; omit for backend)
# BROWSER_CACHE_PATH - browser cache path (frontend/fullstack only; omit for backend)

stages:
  - lint
  - test
  - burn-in
  - report

variables:
  # Disable git depth for accurate change detection
  GIT_DEPTH: 0
  # Use npm ci for faster, deterministic installs
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  # Playwright browser cache
  PLAYWRIGHT_BROWSERS_PATH: "$CI_PROJECT_DIR/.cache/ms-playwright"
  # Default Node version when .nvmrc is missing
  DEFAULT_NODE_VERSION: "24"

# Caching configuration
cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm/
    - .cache/ms-playwright/
    - node_modules/

# Lint stage - Code quality checks
lint:
  stage: lint
  image: node:$DEFAULT_NODE_VERSION
  before_script:
    - |
      NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION")
      echo "Using Node $NODE_VERSION"
      npm install -g n
      n "$NODE_VERSION"
      node -v
    - npm ci # Replace with INSTALL_CMD
  script:
    - npm run lint # Replace with LINT_CMD
  timeout: 5 minutes

# Test stage - Parallel execution with sharding
.test-template: &test-template
  stage: test
  image: node:$DEFAULT_NODE_VERSION
  needs:
    - lint
  before_script:
    - |
      NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION")
      echo "Using Node $NODE_VERSION"
      npm install -g n
      n "$NODE_VERSION"
      node -v
    - npm ci # Replace with INSTALL_CMD
    - npx playwright install --with-deps chromium # Replace with BROWSER_INSTALL; remove for backend-only
  artifacts:
    when: on_failure
    paths:
      - test-results/
      - playwright-report/
    expire_in: 30 days
  timeout: 30 minutes

test:shard-1:
  <<: *test-template
  script:
    - npm run test:e2e -- --shard=1/4 # Replace with TEST_CMD + shard args

test:shard-2:
  <<: *test-template
  script:
    - npm run test:e2e -- --shard=2/4 # Replace with TEST_CMD + shard args

test:shard-3:
  <<: *test-template
  script:
    - npm run test:e2e -- --shard=3/4 # Replace with TEST_CMD + shard args

test:shard-4:
  <<: *test-template
  script:
    - npm run test:e2e -- --shard=4/4 # Replace with TEST_CMD + shard args

# Burn-in stage - Flaky test detection
burn-in:
  stage: burn-in
  image: node:$DEFAULT_NODE_VERSION
  needs:
    - test:shard-1
    - test:shard-2
    - test:shard-3
    - test:shard-4
  # Only run burn-in on merge requests to main/develop or on schedule
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  before_script:
    - |
      NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION")
      echo "Using Node $NODE_VERSION"
      npm install -g n
      n "$NODE_VERSION"
      node -v
    - npm ci # Replace with INSTALL_CMD
    - npx playwright install --with-deps chromium # Replace with BROWSER_INSTALL; remove for backend-only
  # Note: Burn-in targets UI flakiness. For backend-only stacks, remove this job entirely.
  script:
    - |
      echo "🔥 Starting burn-in loop - detecting flaky tests"
      for i in {1..10}; do
        echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
        echo "🔥 Burn-in iteration $i/10"
        echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
        npm run test:e2e || exit 1 # Replace with TEST_CMD
      done
      echo "✅ Burn-in complete - no flaky tests detected"
  artifacts:
    when: on_failure
    paths:
      - test-results/
      - playwright-report/
    expire_in: 30 days
  timeout: 60 minutes

# Report stage - Aggregate results
report:
  stage: report
  image: alpine:latest
  needs:
    - test:shard-1
    - test:shard-2
    - test:shard-3
    - test:shard-4
    - burn-in
  when: always
  script:
    - |
      echo "## Test Execution Summary"
      echo ""
      echo "- Pipeline: $CI_PIPELINE_ID"
      echo "- Shards: 4"
      echo "- Branch: $CI_COMMIT_REF_NAME"
      echo ""
      echo "View detailed results in job artifacts"
@@ -0,0 +1,159 @@
# Harness CI Pipeline for Test Execution
# Generated by BMad TEA Agent - Test Architect Module
# Optimized for: Parallel Sharding, Burn-In Loop
# Stack: {test_stack_type} | Framework: {test_framework}
#
# Variables to customize per project:
# INSTALL_CMD - dependency install command (e.g., npm ci, pnpm install --frozen-lockfile)
# TEST_CMD - main test command (e.g., npm run test:e2e, npm test, npx vitest)
# LINT_CMD - lint command (e.g., npm run lint)
# BROWSER_INSTALL - browser install command (frontend/fullstack only; omit for backend)

pipeline:
  name: Test Pipeline
  identifier: test_pipeline
  projectIdentifier: default
  orgIdentifier: default
  stages:
    # Lint stage - Code quality checks
    - stage:
        name: Lint
        identifier: lint
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            type: KubernetesDirect
            spec:
              connectorRef: account.harnessImage
              namespace: default
          execution:
            steps:
              - step:
                  type: Run
                  name: Install dependencies
                  identifier: install
                  spec:
                    connectorRef: account.harnessImage
                    image: node:24
                    shell: Sh
                    command: npm ci # Replace with INSTALL_CMD

              - step:
                  type: Run
                  name: Run linter
                  identifier: lint
                  spec:
                    connectorRef: account.harnessImage
                    image: node:24
                    shell: Sh
                    command: npm run lint # Replace with LINT_CMD

    # Test stage - Parallel execution with sharding
    - stage:
        name: Test
        identifier: test
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            type: KubernetesDirect
            spec:
              connectorRef: account.harnessImage
              namespace: default
          execution:
            steps:
              - step:
                  type: Run
                  name: Install dependencies
                  identifier: install
                  spec:
                    connectorRef: account.harnessImage
                    image: node:24
                    shell: Sh
                    command: npm ci # Replace with INSTALL_CMD

              # Frontend/Fullstack only — remove this step for backend-only stacks
              - step:
                  type: Run
                  name: Install browsers
                  identifier: browsers
                  spec:
                    connectorRef: account.harnessImage
                    image: mcr.microsoft.com/playwright:v1.50.0-noble
                    shell: Sh
                    command: npx playwright install --with-deps chromium # Replace with BROWSER_INSTALL

              - parallel:
                  - step:
                      type: Run
                      name: Test Shard 1
                      identifier: shard_1
                      spec:
                        connectorRef: account.harnessImage
                        image: mcr.microsoft.com/playwright:v1.50.0-noble
                        shell: Sh
                        command: npm run test:e2e -- --shard=1/4 # Replace with TEST_CMD + shard args
                  - step:
                      type: Run
                      name: Test Shard 2
                      identifier: shard_2
                      spec:
                        connectorRef: account.harnessImage
                        image: mcr.microsoft.com/playwright:v1.50.0-noble
                        shell: Sh
                        command: npm run test:e2e -- --shard=2/4 # Replace with TEST_CMD + shard args
                  - step:
                      type: Run
                      name: Test Shard 3
                      identifier: shard_3
                      spec:
                        connectorRef: account.harnessImage
                        image: mcr.microsoft.com/playwright:v1.50.0-noble
                        shell: Sh
                        command: npm run test:e2e -- --shard=3/4 # Replace with TEST_CMD + shard args
                  - step:
                      type: Run
                      name: Test Shard 4
                      identifier: shard_4
                      spec:
                        connectorRef: account.harnessImage
                        image: mcr.microsoft.com/playwright:v1.50.0-noble
                        shell: Sh
                        command: npm run test:e2e -- --shard=4/4 # Replace with TEST_CMD + shard args

    # Burn-in stage - Flaky test detection
    # Note: Burn-in targets UI flakiness. For backend-only stacks, remove this stage entirely.
    - stage:
        name: Burn-In
        identifier: burn_in
        type: CI
        when:
          condition: <+pipeline.triggerType> == "WEBHOOK" || <+pipeline.triggerType> == "SCHEDULER"
        spec:
          cloneCodebase: true
          infrastructure:
            type: KubernetesDirect
            spec:
              connectorRef: account.harnessImage
              namespace: default
          execution:
            steps:
              - step:
                  type: Run
                  name: Install and burn-in
                  identifier: burn_in_loop
                  spec:
                    connectorRef: account.harnessImage
                    image: mcr.microsoft.com/playwright:v1.50.0-noble
                    shell: Sh
                    command: |
                      npm ci
                      npx playwright install --with-deps chromium
                      echo "Starting burn-in loop - detecting flaky tests"
                      for i in $(seq 1 10); do
                        echo "Burn-in iteration $i/10"
                        npm run test:e2e || exit 1
                      done
                      echo "Burn-in complete - no flaky tests detected"
                    # Replace npm ci with INSTALL_CMD, npm run test:e2e with TEST_CMD
@@ -0,0 +1,45 @@
<!-- Powered by BMAD-CORE™ -->

# CI/CD Pipeline Setup

**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-ci`
**Version**: 5.0 (Step-File Architecture)

---

## Overview

Scaffold a production-ready CI/CD quality pipeline with test execution, burn-in loops for flaky detection, parallel sharding, artifact collection, and notifications.

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture**:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `ci_platform`, `test_dir`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-preflight.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress tracking frontmatter and routes to the next incomplete step.
@@ -0,0 +1,129 @@
// Jenkinsfile CI/CD Pipeline for Test Execution
// Generated by BMad TEA Agent - Test Architect Module
// Optimized for: Parallel Sharding, Burn-In Loop
// Stack: {test_stack_type} | Framework: {test_framework}
//
// Variables to customize per project:
// INSTALL_CMD - dependency install command (e.g., npm ci, pnpm install --frozen-lockfile)
// TEST_CMD - main test command (e.g., npm run test:e2e, npm test, npx vitest)
// LINT_CMD - lint command (e.g., npm run lint)
// BROWSER_INSTALL - browser install command (frontend/fullstack only; omit for backend)
//
// Node.js version management — choose one:
// Option A (recommended): Configure NodeJS Plugin in Jenkins Global Tool Configuration,
//   then add to pipeline: tools { nodejs 'NodeJS-24' }
// Option B: Use nvm (pre-installed on agent) — this template uses nvm as the default
// Option C: Use a Docker agent — agent { docker { image 'node:24' } }

pipeline {
    agent any

    environment {
        CI = 'true'
    }

    options {
        timeout(time: 45, unit: 'MINUTES')
        disableConcurrentBuilds()
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }

        stage('Install') {
            steps {
                // Detect and apply Node.js version from .nvmrc (falls back to v24)
                // If using NodeJS Plugin instead, remove this block and add: tools { nodejs 'NodeJS-24' }
                sh '''
                    export NVM_DIR="$HOME/.nvm"
                    [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
                    NODE_VERSION=$(cat .nvmrc 2>/dev/null || echo "24")
                    nvm install "$NODE_VERSION" 2>/dev/null || true
                    nvm use "$NODE_VERSION" 2>/dev/null || true
                    node --version
                    npm ci
                ''' // Replace npm ci with INSTALL_CMD
                // Stash installed dependencies so parallel shards can restore them
                stash includes: 'node_modules/**', name: 'deps'
            }
        }

        stage('Lint') {
            steps {
                sh 'npm run lint' // Replace with LINT_CMD
            }
        }

        // Test stage - Parallel execution with sharding
        // Each shard restores dependencies via unstash for workspace safety
        stage('Test') {
            parallel {
                stage('Shard 1') {
                    steps {
                        unstash 'deps'
                        // Frontend/Fullstack only — remove browser install for backend-only stacks
                        sh 'npx playwright install --with-deps chromium' // Replace with BROWSER_INSTALL
                        sh 'npm run test:e2e -- --shard=1/4' // Replace with TEST_CMD + shard args
                    }
                }
                stage('Shard 2') {
                    steps {
                        unstash 'deps'
                        sh 'npx playwright install --with-deps chromium' // Replace with BROWSER_INSTALL
                        sh 'npm run test:e2e -- --shard=2/4' // Replace with TEST_CMD + shard args
                    }
                }
                stage('Shard 3') {
                    steps {
                        unstash 'deps'
                        sh 'npx playwright install --with-deps chromium' // Replace with BROWSER_INSTALL
                        sh 'npm run test:e2e -- --shard=3/4' // Replace with TEST_CMD + shard args
                    }
                }
                stage('Shard 4') {
                    steps {
                        unstash 'deps'
                        sh 'npx playwright install --with-deps chromium' // Replace with BROWSER_INSTALL
                        sh 'npm run test:e2e -- --shard=4/4' // Replace with TEST_CMD + shard args
                    }
                }
            }
        }

        // Burn-in stage - Flaky test detection
        // Note: Burn-in targets UI flakiness. For backend-only stacks, remove this stage entirely.
        stage('Burn-In') {
            when {
                anyOf {
                    changeRequest()
                    triggeredBy 'TimerTrigger'
                }
            }
            steps {
                sh '''
                    echo "Starting burn-in loop - detecting flaky tests"
                    for i in $(seq 1 10); do
                        echo "Burn-in iteration $i/10"
                        npm run test:e2e || exit 1
                    done
                    echo "Burn-in complete - no flaky tests detected"
                ''' // Replace npm run test:e2e with TEST_CMD
            }
        }
    }

    post {
        always {
            // Archive test results and reports
            archiveArtifacts artifacts: 'test-results/**,playwright-report/**', allowEmptyArchive: true
            junit testResults: 'test-results/**/*.xml', allowEmptyResults: true
        }
        failure {
            echo 'Pipeline failed - check test results and artifacts'
        }
    }
}
@@ -0,0 +1,158 @@
---
name: 'step-01-preflight'
description: 'Verify prerequisites and detect CI platform'
nextStepFile: './step-02-generate-pipeline.md'
outputFile: '{test_artifacts}/ci-pipeline-progress.md'
---

# Step 1: Preflight Checks

## STEP GOAL

Verify CI prerequisites and determine the target CI platform.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if requirements fail

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Verify Git Repository

- `.git/` exists
- Remote configured (if available)

If missing: **HALT** with "Git repository required for CI/CD setup."

---

## 2. Detect Test Stack Type

Determine the project's test stack type (`test_stack_type`) using the following algorithm:

1. If `test_stack_type` is explicitly set in config (not `"auto"`), use that value.
2. Otherwise, auto-detect by scanning project manifests:
   - **Frontend indicators**: `playwright.config.*`, `cypress.config.*`, `vite.config.*`, `next.config.*`, `src/components/`, `src/pages/`, `src/app/`
   - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`, `jest.config.*`, `vitest.config.*`, `src/routes/`, `src/controllers/`, `src/api/`, `Dockerfile`, `serverless.yml`
   - **Both present** → `fullstack`
   - **Only frontend** → `frontend`
   - **Only backend** → `backend`
   - **Cannot determine** → default to `fullstack` and note the assumption

Record detected `test_stack_type` in step output.
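
The detection rules above can be sketched as a pure function. This is an illustrative sketch only: the hint lists are abbreviated and the name `detectStackType` is hypothetical, not part of the workflow contract.

```javascript
// Hypothetical sketch of section 2's auto-detection. `paths` is a flat list
// of repo-relative file paths; the hint lists are abbreviated, not exhaustive.
const FRONTEND_HINTS = ['playwright.config', 'cypress.config', 'vite.config', 'next.config', 'src/components/', 'src/pages/', 'src/app/'];
const BACKEND_HINTS = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml', 'jest.config', 'vitest.config', 'src/routes/', 'src/controllers/', 'src/api/', 'Dockerfile', 'serverless.yml'];

function detectStackType(paths, configured = 'auto') {
  if (configured !== 'auto') return configured; // rule 1: explicit config wins
  const has = (hints) => paths.some((p) => hints.some((h) => p.includes(h)));
  const frontend = has(FRONTEND_HINTS);
  const backend = has(BACKEND_HINTS);
  if (frontend && backend) return 'fullstack';
  if (frontend) return 'frontend';
  if (backend) return 'backend';
  return 'fullstack'; // cannot determine: assume fullstack and note it
}
```

A quick sanity check: a repo containing both `playwright.config.ts` and `go.mod` resolves to `fullstack`, while an empty scan falls back to `fullstack` as the algorithm specifies.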

---

## 3. Verify Test Framework

- Check for framework configuration based on detected stack:
  - **Frontend/Fullstack**: `playwright.config.*` or `cypress.config.*` exists
  - **Backend (Node.js)**: `jest.config.*` or `vitest.config.*` or test scripts in `package.json`
  - **Backend (Python)**: `pyproject.toml` with `[tool.pytest]` or `pytest.ini` or `setup.cfg` with pytest config
  - **Backend (Java/Kotlin)**: `pom.xml` with surefire/failsafe plugins or `build.gradle` with test task
  - **Backend (Go)**: `*_test.go` files present (Go convention — no config file needed)
  - **Backend (C#/.NET)**: `*.csproj` with xUnit/NUnit/MSTest references
  - **Backend (Ruby)**: `Gemfile` with rspec or `.rspec` config file
- If `test_framework` is `"auto"`, detect from config files and project manifests found
- Verify test dependencies are installed (language-appropriate package manager)

If missing: **HALT** with "Run `framework` workflow first."

---

## 4. Ensure Tests Pass Locally

- Run the main test command based on detected stack and framework:
  - **Node.js**: `npm test` or `npm run test:e2e`
  - **Python**: `pytest` or `python -m pytest`
  - **Java**: `mvn test` or `gradle test`
  - **Go**: `go test ./...`
  - **C#/.NET**: `dotnet test`
  - **Ruby**: `bundle exec rspec`
- If failing: **HALT** and request fixes before CI setup

---

## 5. Detect CI Platform

- If `ci_platform` is explicitly set in config (not `"auto"`), use that value.
- Otherwise, scan for existing CI configuration files:
  - `.github/workflows/*.yml` → `github-actions`
  - `.gitlab-ci.yml` → `gitlab-ci`
  - `Jenkinsfile` → `jenkins`
  - `azure-pipelines.yml` → `azure-devops`
  - `.harness/*.yaml` → `harness`
  - `.circleci/config.yml` → `circle-ci`
- If found, ask whether to update or replace
- If not found, infer from git remote (github.com → `github-actions`, gitlab.com → `gitlab-ci`)
- If still unresolved, default to `github-actions`

Record detected `ci_platform` in step output.
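
The same fallback chain can be sketched as a lookup; `detectCiPlatform`, the marker patterns, and the argument shapes are hypothetical illustrations of the rules above.

```javascript
// Hypothetical sketch of section 5's platform detection. `files` is a list of
// repo-relative paths; `remoteUrl` is the git remote URL (or null).
function detectCiPlatform(files, remoteUrl, configured = 'auto') {
  if (configured !== 'auto') return configured; // explicit config wins
  const markers = [
    [/^\.github\/workflows\/.+\.ya?ml$/, 'github-actions'],
    [/^\.gitlab-ci\.yml$/, 'gitlab-ci'],
    [/^Jenkinsfile$/, 'jenkins'],
    [/^azure-pipelines\.yml$/, 'azure-devops'],
    [/^\.harness\/.+\.ya?ml$/, 'harness'],
    [/^\.circleci\/config\.yml$/, 'circle-ci'],
  ];
  for (const [pattern, platform] of markers) {
    if (files.some((f) => pattern.test(f))) return platform; // existing config found
  }
  if (remoteUrl && remoteUrl.includes('gitlab.com')) return 'gitlab-ci'; // infer from remote
  if (remoteUrl && remoteUrl.includes('github.com')) return 'github-actions';
  return 'github-actions'; // final default
}
```

In the workflow itself, finding an existing config file additionally prompts the user to update or replace it; the sketch only returns the platform name.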

---

## 6. Read Environment Context

- Read environment context based on detected stack:
  - **Node.js**: Read `.nvmrc` if present (default to Node 24+ LTS if missing); read `package.json` for dependency caching strategy
  - **Python**: Read `.python-version` or `pyproject.toml` for Python version; note `pip`/`poetry`/`pipenv` for caching
  - **Java**: Read `pom.xml`/`build.gradle` for Java version; note Maven/Gradle for caching
  - **Go**: Read `go.mod` for Go version; note Go module cache path
  - **C#/.NET**: Read `*.csproj`/`global.json` for .NET SDK version; note NuGet cache
  - **Ruby**: Read `.ruby-version` or `Gemfile` for Ruby version; note Bundler cache
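
For the Node.js branch, the `.nvmrc` fallback used throughout the pipeline templates reduces to a few lines; `resolveNodeVersion` is a hypothetical helper name, and the default mirrors the templates' `DEFAULT_NODE_VERSION`.

```javascript
// Hypothetical sketch: resolve the Node version from .nvmrc file content
// (pass null when the file is absent), falling back to the template default.
const DEFAULT_NODE_VERSION = '24';

function resolveNodeVersion(nvmrcContent) {
  if (!nvmrcContent) return DEFAULT_NODE_VERSION; // file missing or empty
  const version = nvmrcContent.trim().replace(/^v/, ''); // tolerate "v20.11.0"
  return version || DEFAULT_NODE_VERSION; // whitespace-only file
}
```

This matches the shell fallback in the templates (`cat .nvmrc 2>/dev/null || echo "$DEFAULT_NODE_VERSION"`), with the extra tolerance for a leading `v`.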

---

## 7. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-01-preflight']
  lastStep: 'step-01-preflight'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-preflight'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-preflight'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.
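
The update rules above can be sketched as a pure transform on the parsed frontmatter object; `updateProgress` is a hypothetical helper, and YAML parsing/serialization is out of scope here.

```javascript
// Hypothetical sketch of the frontmatter update: append the step once,
// then refresh lastStep and lastSaved. `frontmatter` is the parsed YAML
// header as a plain object.
function updateProgress(frontmatter, stepName, date) {
  const steps = frontmatter.stepsCompleted || [];
  return {
    ...frontmatter,
    stepsCompleted: steps.includes(stepName) ? steps : [...steps, stepName], // only if not already present
    lastStep: stepName,
    lastSaved: date,
  };
}
```

Because the append is conditional, re-saving the same step is idempotent: the `stepsCompleted` array never accumulates duplicates.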

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,110 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/ci-pipeline-progress.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing progress document, displaying progress, verifying previously created artifacts, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Load Output Document

Read `{outputFile}` and parse YAML frontmatter for:

- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save

**If `{outputFile}` does not exist**, display:

"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Verify Previously Created Artifacts

Since this is a file-creation workflow, verify that artifacts from completed steps still exist on disk:

- If `step-02-generate-pipeline` is in `stepsCompleted`, check that the pipeline config file exists (e.g., `.github/workflows/test.yml` or equivalent)
- If any expected artifact is missing, warn the user and suggest re-running from the step that creates it

---

### 3. Display Progress Dashboard

Display:

"📋 **Workflow Resume — CI/CD Pipeline Setup**

**Last saved:** {lastSaved}
**Steps completed:** {stepsCompleted.length} of 4

1. Preflight Checks (step-01-preflight) — {✅ if in stepsCompleted, ⬜ otherwise}
2. Generate Pipeline (step-02-generate-pipeline) — {✅ if in stepsCompleted, ⬜ otherwise}
3. Configure Quality Gates (step-03-configure-quality-gates) — {✅ if in stepsCompleted, ⬜ otherwise}
4. Validate & Summary (step-04-validate-and-summary) — {✅ if in stepsCompleted, ⬜ otherwise}"

---

### 4. Route to Next Step

Based on `lastStep`, load the next incomplete step:

- `'step-01-preflight'` → Load `./step-02-generate-pipeline.md`
- `'step-02-generate-pipeline'` → Load `./step-03-configure-quality-gates.md`
- `'step-03-configure-quality-gates'` → Load `./step-04-validate-and-summary.md`
- `'step-04-validate-and-summary'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.

**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.

**Otherwise**, load the identified step file, read completely, and execute.

The existing content in `{outputFile}` provides context from previously completed steps. Use it as reference for remaining steps.
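
The routing table above is a simple next-in-order lookup; `nextStepAfter` and the returned action shape are illustrative sketches, not part of the step contract.

```javascript
// Hypothetical sketch of section 4's routing: given lastStep, decide whether
// to load the next step, halt as complete, or halt on an unknown state.
const STEP_ORDER = [
  'step-01-preflight',
  'step-02-generate-pipeline',
  'step-03-configure-quality-gates',
  'step-04-validate-and-summary',
];

function nextStepAfter(lastStep) {
  const index = STEP_ORDER.indexOf(lastStep);
  if (index === -1) return { action: 'halt-unknown' }; // unrecognized progress state
  if (index === STEP_ORDER.length - 1) return { action: 'halt-complete' }; // all steps done
  return { action: 'load', step: STEP_ORDER[index + 1] };
}
```

Keeping the order in one array means adding a step later only changes the table, not the routing logic.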

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Output document loaded and parsed correctly
- Previously created artifacts verified
- Progress dashboard displayed accurately
- Routed to correct next step

### ❌ SYSTEM FAILURE:

- Not loading output document
- Incorrect progress display
- Routing to wrong step
- Re-executing completed steps

**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.
@@ -0,0 +1,279 @@
---
name: 'step-02-generate-pipeline'
description: 'Generate CI pipeline configuration with adaptive orchestration (agent-team, subagent, or sequential)'
nextStepFile: './step-03-configure-quality-gates.md'
outputFile: '{test_artifacts}/ci-pipeline-progress.md'
---

# Step 2: Generate CI Pipeline

## STEP GOAL

Create platform-specific CI configuration with test execution, sharding, burn-in, and artifacts.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Resolve execution mode from explicit user request first, then config
- ✅ Apply fallback rules deterministically when requested mode is unsupported

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 0. Resolve Execution Mode (User Override First)

```javascript
const orchestrationContext = {
  config: {
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: config.tea_capability_probe !== false, // true by default
  },
  timestamp: new Date().toISOString().replace(/[:.]/g, '-'),
};

const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(orchestrationContext.config.execution_mode) || 'auto';
const probeEnabled = orchestrationContext.config.capability_probe;

const supports = { subagent: false, agentTeam: false };
if (probeEnabled) {
|
||||
supports.subagent = runtime.canLaunchSubagents?.() === true;
|
||||
supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
|
||||
}
|
||||
|
||||
let resolvedMode = requestedMode;
|
||||
if (requestedMode === 'auto') {
|
||||
if (supports.agentTeam) resolvedMode = 'agent-team';
|
||||
else if (supports.subagent) resolvedMode = 'subagent';
|
||||
else resolvedMode = 'sequential';
|
||||
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
|
||||
resolvedMode = supports.subagent ? 'subagent' : 'sequential';
|
||||
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
|
||||
resolvedMode = 'sequential';
|
||||
}
|
||||
```
Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)

## 1. Resolve Output Path and Select Template

Determine the pipeline output file path based on the detected `ci_platform`:

| CI Platform      | Output Path                                 | Template File                                   |
| ---------------- | ------------------------------------------- | ----------------------------------------------- |
| `github-actions` | `{project-root}/.github/workflows/test.yml` | `./github-actions-template.yaml`                |
| `gitlab-ci`      | `{project-root}/.gitlab-ci.yml`             | `./gitlab-ci-template.yaml`                     |
| `jenkins`        | `{project-root}/Jenkinsfile`                | `./jenkins-pipeline-template.groovy`            |
| `azure-devops`   | `{project-root}/azure-pipelines.yml`        | `./azure-pipelines-template.yaml`               |
| `harness`        | `{project-root}/.harness/pipeline.yaml`     | `./harness-pipeline-template.yaml`              |
| `circle-ci`      | `{project-root}/.circleci/config.yml`       | _(no template; generate from first principles)_ |

Use templates from `./` when available. Adapt the template to the project's `test_stack_type` and `test_framework`.

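The table above can also be expressed as a lookup, a sketch in the style of the step's orchestration code (the map keys and file names mirror the table; `projectRoot` stands in for `{project-root}`):

```javascript
// Sketch: resolve output path and template for a detected ci_platform.
const PIPELINE_TARGETS = {
  'github-actions': { path: '.github/workflows/test.yml', template: 'github-actions-template.yaml' },
  'gitlab-ci': { path: '.gitlab-ci.yml', template: 'gitlab-ci-template.yaml' },
  jenkins: { path: 'Jenkinsfile', template: 'jenkins-pipeline-template.groovy' },
  'azure-devops': { path: 'azure-pipelines.yml', template: 'azure-pipelines-template.yaml' },
  harness: { path: '.harness/pipeline.yaml', template: 'harness-pipeline-template.yaml' },
  'circle-ci': { path: '.circleci/config.yml', template: null }, // no template; generate from first principles
};

const resolvePipelineTarget = (ciPlatform, projectRoot) => {
  const target = PIPELINE_TARGETS[ciPlatform];
  if (!target) throw new Error(`Unsupported CI platform: ${ciPlatform}`);
  return { outputPath: `${projectRoot}/${target.path}`, template: target.template };
};
```

An unknown platform fails fast rather than guessing a path.
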
---

## Security: Script Injection Prevention

> **CRITICAL:** Treat `${{ inputs.* }}` and the entire `${{ github.event.* }}` namespace as unsafe by default. ALWAYS route them through `env:` intermediaries and reference as double-quoted `"$ENV_VAR"` in `run:` blocks. NEVER interpolate them directly.

When the generated pipeline is extended into reusable workflows (`on: workflow_call`), manual dispatch (`on: workflow_dispatch`), or composite actions, these values become user-controllable and can inject arbitrary shell commands.

**Two rules for generated `run:` blocks:**

1. **No direct interpolation** — pass unsafe contexts through `env:`, reference as `"$ENV_VAR"`
2. **Inputs must be DATA, not COMMANDS** — never accept command-shaped inputs (e.g., `inputs.install-command`) that get executed as shell code. Even through `env:`, running `$CMD` where CMD comes from an input is still command injection. Use fixed commands and pass inputs only as arguments.

```yaml
# ✅ SAFE — input is DATA interpolated into a fixed command
- name: Run tests
  env:
    TEST_GREP: ${{ inputs.test-grep }}
  run: |
    # Security: inputs passed through env: to prevent script injection
    npx playwright test --grep "$TEST_GREP"

# ❌ NEVER — direct GitHub expression injection
- name: Run tests
  run: |
    npx playwright test --grep "${{ inputs.test-grep }}"

# ❌ NEVER — executing input-derived env var as a command
- name: Install
  env:
    CMD: ${{ inputs.install-command }}
  run: $CMD
```

Include a `# Security: inputs passed through env: to prevent script injection` comment in generated YAML wherever this pattern is applied.

**Safe contexts** (do NOT need `env:` intermediaries): `${{ steps.*.outputs.* }}`, `${{ matrix.* }}`, `${{ runner.os }}`, `${{ github.sha }}`, `${{ github.ref }}`, `${{ secrets.* }}`, `${{ env.* }}`.

---

## 2. Pipeline Stages

Include stages:

- lint
- test (parallel shards)
- contract-test (if `tea_use_pactjs_utils` enabled)
- burn-in (flaky detection)
- report (aggregate + publish)

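As a sketch, the stage list can be assembled from config, with `contract-test` gated on the `tea_use_pactjs_utils` flag named above:

```javascript
// Sketch: assemble the ordered pipeline stages; contract-test is included
// only when tea_use_pactjs_utils is enabled.
const buildStages = (config) => {
  const stages = ['lint', 'test'];
  if (config.tea_use_pactjs_utils) stages.push('contract-test');
  stages.push('burn-in', 'report');
  return stages;
};
```
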
---

## 3. Test Execution

- Parallel sharding enabled
- CI retries configured
- Capture artifacts (HTML report, JUnit XML, traces/videos on failure)
- Cache dependencies (language-appropriate: node_modules, .venv, .m2, go module cache, NuGet, bundler)

Write the selected pipeline configuration to the resolved output path from step 1. Adjust test commands based on `test_stack_type` and `test_framework`:

- **Frontend/Fullstack**: Include browser install, E2E/component test commands, Playwright/Cypress artifacts
- **Backend (Node.js)**: Use `npm test` or framework-specific commands (`vitest`, `jest`), skip browser install
- **Backend (Python)**: Use `pytest` with coverage (`pytest --cov`), install via `pip install -r requirements.txt` or `poetry install`
- **Backend (Java/Kotlin)**: Use `mvn test` or `gradle test`, cache `.m2/repository` or `.gradle/caches`
- **Backend (Go)**: Use `go test ./...` with coverage (`-coverprofile`), cache Go modules
- **Backend (C#/.NET)**: Use `dotnet test` with coverage, restore NuGet packages
- **Backend (Ruby)**: Use `bundle exec rspec` with coverage, cache `vendor/bundle`

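A minimal sketch of the command selection described above; the map keys and the `coverage.out` file name are illustrative assumptions, and an explicit project override always wins:

```javascript
// Sketch: map a backend stack to its default test command (from the bullets above).
// Keys and the coverage file name are illustrative, not fixed config values.
const DEFAULT_TEST_COMMANDS = {
  'backend-node': 'npm test',
  'backend-python': 'pytest --cov',
  'backend-java': 'mvn test',
  'backend-go': 'go test ./... -coverprofile=coverage.out',
  'backend-dotnet': 'dotnet test',
  'backend-ruby': 'bundle exec rspec',
};

const resolveTestCommand = (stackKey, override) =>
  override || DEFAULT_TEST_COMMANDS[stackKey] || null; // null => fall back to the framework runner
```
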
### Contract Testing Pipeline (if `tea_use_pactjs_utils` enabled)

When `tea_use_pactjs_utils` is enabled, add a `contract-test` stage after `test`:

**Required env block** (add to the generated pipeline):

```yaml
env:
  PACT_BROKER_BASE_URL: ${{ secrets.PACT_BROKER_BASE_URL }}
  PACT_BROKER_TOKEN: ${{ secrets.PACT_BROKER_TOKEN }}
  GITHUB_SHA: ${{ github.sha }} # auto-set by GitHub Actions
  GITHUB_BRANCH: ${{ github.head_ref || github.ref_name }} # NOT auto-set — must be defined explicitly
```

> **Note:** `GITHUB_SHA` is auto-set by GitHub Actions, but `GITHUB_BRANCH` is **not** — it must be derived from `github.head_ref` (for PRs) or `github.ref_name` (for pushes). The pactjs-utils library reads both from `process.env`.

1. **Consumer test + publish**: Run consumer contract tests, then publish pacts to broker
   - `npm run test:pact:consumer`
   - `npm run publish:pact`
   - Only publish on PR and main branch pushes

2. **Provider verification**: Run provider verification against published pacts
   - `npm run test:pact:provider:remote:contract`
   - `buildVerifierOptions` auto-reads `PACT_BROKER_BASE_URL`, `PACT_BROKER_TOKEN`, `GITHUB_SHA`, `GITHUB_BRANCH`
   - Verification results published to broker when `CI=true`

3. **Can-I-Deploy gate**: Block deployment if contracts are incompatible
   - `npm run can:i:deploy:provider`
   - Ensure the script adds `--retry-while-unknown 6 --retry-interval 10` for async verification

4. **Webhook job**: Add `repository_dispatch` trigger for `pact_changed` event
   - Provider verification runs when consumers publish new pacts
   - Ensures compatibility is checked on both consumer and provider changes

5. **Breaking change handling**: When `PACT_BREAKING_CHANGE=true` env var is set:
   - Provider test passes `includeMainAndDeployed: false` to `buildVerifierOptions` — verifies only matching branch
   - Coordinate with consumer team before removing the flag

6. **Record deployment**: After successful deployment, record version in broker
   - `npm run record:provider:deployment --env=production`

Required CI secrets: `PACT_BROKER_BASE_URL`, `PACT_BROKER_TOKEN`

**If `tea_pact_mcp` is `"mcp"`:** Reference the SmartBear MCP `Can I Deploy` and `Matrix` tools for pipeline guidance in `pact-mcp.md`.

---

### 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-02-generate-pipeline']
  lastStep: 'step-02-generate-pipeline'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-generate-pipeline'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-generate-pipeline'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

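The idempotent frontmatter update above can be sketched as follows, with a plain object standing in for the parsed YAML frontmatter:

```javascript
// Sketch: update progress frontmatter; adding the step name is idempotent,
// so re-saving the same step never duplicates it in stepsCompleted.
const updateProgress = (frontmatter, stepName, date) => {
  const stepsCompleted = [...(frontmatter.stepsCompleted || [])];
  if (!stepsCompleted.includes(stepName)) stepsCompleted.push(stepName);
  return { ...frontmatter, stepsCompleted, lastStep: stepName, lastSaved: date };
};
```
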
### 5. Orchestration Notes for This Step

For this step, treat these work units as parallelizable when `resolvedMode` is `agent-team` or `subagent`:

- Worker A: resolve platform path/template and produce base pipeline skeleton (section 1)
- Worker B: construct stage definitions and test execution blocks (sections 2-3)
- Worker C: contract-testing block (only when `tea_use_pactjs_utils` is true)

If `resolvedMode` is `sequential`, execute sections 1→4 in order.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,135 @@
---
name: 'step-03-configure-quality-gates'
description: 'Configure burn-in, quality gates, and notifications'
nextStepFile: './step-04-validate-and-summary.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
outputFile: '{test_artifacts}/ci-pipeline-progress.md'
---

# Step 3: Quality Gates & Notifications

## STEP GOAL

Configure burn-in loops, quality thresholds, and notification hooks.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Burn-In Configuration

Use `{knowledgeIndex}` to load `ci-burn-in.md` guidance:

- Run N-iteration burn-in for flaky detection
- Gate promotion based on burn-in stability

**Stack-conditional burn-in:**

- **Frontend or Fullstack** (`test_stack_type` is `frontend` or `fullstack`): Enable burn-in by default. Burn-in targets UI flakiness (race conditions, selector instability, timing issues).
- **Backend only** (`test_stack_type` is `backend`): Skip burn-in by default. Backend tests (unit, integration, API) are deterministic and rarely exhibit UI-related flakiness. If the user explicitly requests burn-in for backend, honor that override.

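The default-plus-override rule above reduces to a small predicate, sketched here with illustrative names:

```javascript
// Sketch: stack-conditional burn-in default; an explicit user request wins.
const shouldRunBurnIn = (testStackType, userOverride) => {
  if (typeof userOverride === 'boolean') return userOverride; // explicit override honored
  return testStackType === 'frontend' || testStackType === 'fullstack';
};
```
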
**Security: Script injection prevention for reusable burn-in workflows:**

When burn-in is extracted into a reusable workflow (`on: workflow_call`), all `${{ inputs.* }}` values MUST be passed through `env:` intermediaries and referenced as quoted `"$ENV_VAR"`. Never interpolate them directly.

**Inputs must be DATA, not COMMANDS.** Do not accept command-shaped inputs (e.g., `inputs.install-command`, `inputs.test-command`) that get executed as shell code — even through `env:`, running `$CMD` is still command injection. Use fixed commands (e.g., `npm ci`, `npx playwright test`) and pass inputs only as data arguments.

```yaml
# ✅ SAFE — fixed commands with data-only inputs
- name: Install dependencies
  run: npm ci
- name: Run burn-in loop
  env:
    TEST_GREP: ${{ inputs.test-grep }}
    BURN_IN_COUNT: ${{ inputs.burn-in-count }}
    BASE_REF: ${{ inputs.base-ref }}
  run: |
    # Security: inputs passed through env: to prevent script injection
    for i in $(seq 1 "$BURN_IN_COUNT"); do
      echo "Burn-in iteration $i/$BURN_IN_COUNT"
      npx playwright test --grep "$TEST_GREP" || exit 1
    done
```

---

## 2. Quality Gates

Define:

- Minimum pass rates (P0 = 100%, P1 ≥ 95%)
- Fail CI on critical test failures
- Optional: require traceability or nfr-assess output before release

**Contract testing gate** (if `tea_use_pactjs_utils` is enabled):

- **can-i-deploy must pass** before any deployment to staging or production
- Block the deployment pipeline if contract verification fails
- Treat consumer pact publishing failures as CI failures (contracts must stay up-to-date)
- Provider verification must pass for all consumer pacts before merge

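The pass-rate thresholds above (P0 = 100%, P1 ≥ 95%) can be sketched as a gate check over run counts; the input shape is illustrative:

```javascript
// Sketch: evaluate priority pass-rate gates from raw pass/total counts.
// An empty tier (total of 0) is treated as passing.
const evaluateGates = ({ p0Passed, p0Total, p1Passed, p1Total }) => {
  const p0Rate = p0Total === 0 ? 1 : p0Passed / p0Total;
  const p1Rate = p1Total === 0 ? 1 : p1Passed / p1Total;
  return { pass: p0Rate === 1 && p1Rate >= 0.95, p0Rate, p1Rate };
};
```
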
---

## 3. Notifications

Configure:

- Failure notifications (Slack/email)
- Artifact links

---

### 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-03-configure-quality-gates']
  lastStep: 'step-03-configure-quality-gates'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03-configure-quality-gates'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03-configure-quality-gates'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,92 @@
---
name: 'step-04-validate-and-summary'
description: 'Validate pipeline and summarize'
outputFile: '{test_artifacts}/ci-pipeline-progress.md'
---

# Step 4: Validate & Summarize

## STEP GOAL

Validate CI configuration and report completion details.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Validation

Validate against `checklist.md`:

- Config file created
- Stages and sharding configured
- Burn-in and artifacts enabled
- Secrets/variables documented

Fix gaps before completion.

---

## 2. Completion Summary

Report:

- CI platform and config path
- Key stages enabled
- Artifacts and notifications
- Next steps (set secrets, run pipeline)

---

### 3. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-04-validate-and-summary']
  lastStep: 'step-04-validate-and-summary'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-04-validate-and-summary'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-04-validate-and-summary'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---

# Step 1: Assess Edit Target

## STEP GOAL:

Identify which output should be edited and load it.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until target is confirmed

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Identify Target

Ask the user to provide the output file path or select from known outputs.

### 2. Load Target

Read the provided output file in full.

### 3. Confirm

Confirm the target and proceed to edit.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Target identified and loaded

### ❌ SYSTEM FAILURE:

- Proceeding without a confirmed target

@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---

# Step 2: Apply Edits

## STEP GOAL:

Apply the requested edits to the selected output and confirm changes.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Only apply edits explicitly requested by the user

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: selected output and user changes
- Focus: apply edits only

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Confirm Requested Changes

Restate what will be changed and confirm.

### 2. Apply Changes

Update the output file accordingly.

### 3. Report

Summarize the edits applied.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Changes applied and confirmed

### ❌ SYSTEM FAILURE:

- Unconfirmed edits or missing update

@@ -0,0 +1,81 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/ci-validation-report.md'
validationChecklist: '../checklist.md'
---

# Step 1: Validate Outputs

## STEP GOAL:

Validate outputs using the workflow checklist and record findings.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`

## CONTEXT BOUNDARIES:

- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Checklist

Read `{validationChecklist}` and list all criteria.

### 2. Validate Outputs

Evaluate outputs against each checklist item.

### 2a. Script Injection Scan

Scan all generated YAML workflow files for unsafe interpolation patterns inside `run:` blocks.

**Unsafe patterns to flag (FAIL):**

- `${{ inputs.* }}` — all workflow inputs are user-controllable
- `${{ github.event.* }}` — treat the entire event namespace as unsafe by default (includes PR titles, issue bodies, comment bodies, label names, etc.)
- `${{ github.head_ref }}` — PR source branch name (user-controlled)

**Detection method:** For each `run:` block in generated YAML, check if any of the above expressions appears in the run script body. If found, flag as **FAIL** with the exact line and recommend converting to the safe `env:` intermediary pattern (pass through `env:`, reference as double-quoted `"$ENV_VAR"`).

**Safe patterns to ignore** (exempt from flagging): `${{ steps.*.outputs.* }}`, `${{ matrix.* }}`, `${{ runner.os }}`, `${{ github.sha }}`, `${{ github.ref }}`, `${{ secrets.* }}`, `${{ env.* }}` — these are safe from GitHub expression injection when used in `run:` blocks.

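The scan can be sketched as a line-level pattern check over a `run:` script body; the regexes mirror the unsafe list above and are intentionally simple (a real scan would also need to locate the `run:` blocks in the YAML first):

```javascript
// Sketch: flag unsafe GitHub expression interpolation in a run: script body.
const UNSAFE_PATTERNS = [
  /\$\{\{\s*inputs\./, // all workflow inputs
  /\$\{\{\s*github\.event\./, // entire event namespace
  /\$\{\{\s*github\.head_ref/, // PR source branch name
];

const scanRunBody = (runBody) =>
  runBody
    .split('\n')
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => UNSAFE_PATTERNS.some((re) => re.test(text)));
```

Each returned entry carries the exact line to quote in the FAIL finding.
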
### 3. Write Report

Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Validation report written
- All checklist items evaluated

### ❌ SYSTEM FAILURE:

- Skipped checklist items
- No report produced

@@ -0,0 +1,72 @@
---
validationDate: 2026-01-27
workflowName: testarch-ci
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-ci
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---

# Validation Report: testarch-ci

**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 7

**Step File Sizes:**

- steps-c/step-01-preflight.md: 87 lines [GOOD]
- steps-c/step-02-generate-pipeline.md: 75 lines [GOOD]
- steps-c/step-03-configure-quality-gates.md: 67 lines [GOOD]
- steps-c/step-04-validate-and-summary.md: 60 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

- No {project-root} hardcoded paths detected in body
- No dead relative links detected

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes linear sequence (no branching/menu). workflow-plan.md present for reference. [INFO]

## Output Format Validation

- No templates found in workflow root
- Steps with outputFile in frontmatter:
  - steps-c/step-02-generate-pipeline.md
  - steps-v/step-01-validate.md

## Validation Design Check

- checklist.md present: YES
- Validation steps folder (steps-v) present: YES

## Instruction Style Check

- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics

## Summary

- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)

@@ -0,0 +1,114 @@
---
validationDate: 2026-01-27
workflowName: testarch-ci
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-ci
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---

# Validation Report: testarch-ci

**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 7

**Step File Sizes:**

- steps-c/step-01-preflight.md: 86 lines [GOOD]
- steps-c/step-02-generate-pipeline.md: 74 lines [GOOD]
- steps-c/step-03-configure-quality-gates.md: 66 lines [GOOD]
- steps-c/step-04-validate-and-summary.md: 59 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

### Config Variables (Exceptions)

Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name

- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected

**Status:** ✅ PASS - No critical violations

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- steps-c/step-01-preflight.md: Init [PASS]
- steps-c/step-02-generate-pipeline.md: Middle [PASS]
- steps-c/step-03-configure-quality-gates.md: Middle [PASS]
- steps-c/step-04-validate-and-summary.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: NONE
- Steps with outputFile in frontmatter:
  - steps-c/step-02-generate-pipeline.md
  - steps-v/step-01-validate.md
- checklist.md present: YES

## Validation Design Check

- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed

## Instruction Style Check

- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance

## Collaborative Experience Check

- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals

## Subagent Optimization Opportunities

- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only

## Cohesive Review

- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with Test Architect persona
- Recommendation: READY (minor refinements optional)

## Plan Quality Validation

- Plan file present: workflow-plan.md
- Planned steps found: 7 (all implemented)
- Plan implementation status: Fully Implemented

## Summary

- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)

|
||||
@@ -0,0 +1,20 @@

# Workflow Plan: testarch-ci

## Create Mode (steps-c)

- step-01-preflight.md
- step-02-generate-pipeline.md
- step-03-configure-quality-gates.md
- step-04-validate-and-summary.md

## Validate Mode (steps-v)

- step-01-validate.md

## Edit Mode (steps-e)

- step-01-assess.md
- step-02-apply-edit.md

## Outputs

- CI config (e.g., {project-root}/.github/workflows/test.yml)
- Pipeline guidance and artifacts configuration

41
_bmad/tea/workflows/testarch/bmad-testarch-ci/workflow.md
Normal file
@@ -0,0 +1,41 @@

---
name: bmad-testarch-ci
description: Scaffold CI/CD quality pipeline with test execution. Use when the user says 'lets setup CI pipeline' or 'I want to create quality gates'
web_bundle: true
---

# CI/CD Pipeline Setup

**Goal:** Scaffold CI/CD quality pipeline with test execution, burn-in loops, and artifact collection

**Role:** You are the Master Test Architect.

---

## WORKFLOW ARCHITECTURE

This workflow uses **tri-modal step-file architecture**:

- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs

---

## INITIALIZATION SEQUENCE

### 1. Mode Determination

"Welcome to the workflow. What would you like to do?"

- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs

### 2. Route to First Step

- **If C:** Load `steps-c/step-01-preflight.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`

48
_bmad/tea/workflows/testarch/bmad-testarch-ci/workflow.yaml
Normal file
@@ -0,0 +1,48 @@

# Test Architect workflow: bmad-testarch-ci
name: bmad-testarch-ci
# prettier-ignore
description: 'Scaffold CI/CD quality pipeline with test execution. Use when the user says "lets setup CI pipeline" or "I want to create quality gates"'

# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"

# Variables and inputs
variables:
  ci_platform: "auto" # auto, github-actions, gitlab-ci, circle-ci, jenkins, azure-devops, harness - user can override
  test_dir: "{project-root}/tests" # Root test directory
  test_stack_type: "auto" # auto, frontend, backend, fullstack - detected or user override
  test_framework: "auto" # auto, playwright, cypress, jest, vitest - detected or user override

# Output configuration (resolved dynamically based on ci_platform detection)
default_output_file: "{project-root}/.github/workflows/test.yml" # GitHub Actions default; overridden per platform

# Required tools
required_tools:
  - read_file # Read .nvmrc, package.json, framework config
  - write_file # Create CI config, scripts, documentation
  - create_directory # Create .github/workflows/ or .gitlab-ci/ directories
  - list_files # Detect existing CI configuration
  - search_repo # Find test files for selective testing

tags:
  - qa
  - ci-cd
  - test-architect
  - pipeline
  - automation

execution_hints:
  interactive: false # Minimize prompts, auto-detect when possible
  autonomous: true # Proceed without user input unless blocked
  iterative: true

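As a sketch of what the resolved `default_output_file` could contain for the GitHub Actions platform (the job name, Node setup, and npm script names here are assumptions for illustration, not the workflow's guaranteed output):

```yaml
# Sketch of {project-root}/.github/workflows/test.yml (GitHub Actions default)
name: test
on: [push, pull_request]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version-file: .nvmrc
      - run: npm ci
      - run: npm run test:e2e
      # Artifact collection on failure only (traces, reports)
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: test-artifacts
          path: test-results/
```

For other `ci_platform` values the workflow would emit the equivalent config at that platform's conventional path instead.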
@@ -0,0 +1,6 @@

---
name: bmad-testarch-framework
description: 'Initialize test framework with Playwright or Cypress. Use when the user says "lets setup test framework" or "I want to initialize testing framework"'
---

Follow the instructions in [workflow.md](workflow.md).

@@ -0,0 +1 @@

type: skill

@@ -0,0 +1,345 @@

# Test Framework Setup - Validation Checklist

This checklist ensures the framework workflow completes successfully and all deliverables meet quality standards.

---

## Prerequisites

Before starting the workflow:

- [ ] Project root contains a valid project manifest (`package.json`, `pyproject.toml`, `pom.xml`, `build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, or `Cargo.toml`)
- [ ] No existing test framework detected that conflicts with the target setup
- [ ] Project type identifiable (React, Vue, Angular, Next.js, Node, Python, Java, Go, .NET, Ruby, Rust, etc.)
- [ ] Bundler identifiable (Vite, Webpack, Rollup, esbuild) or not applicable (backend projects)
- [ ] User has write permissions to create directories and files

---

## Process Steps

### Step 1: Preflight Checks

- [ ] Stack type detected (`frontend`, `backend`, or `fullstack`)
- [ ] Project manifest successfully read and parsed (`package.json`, `pyproject.toml`, `pom.xml`, `go.mod`, etc.)
- [ ] Project type extracted correctly
- [ ] Bundler identified (or marked as N/A for backend projects)
- [ ] No framework conflicts detected
- [ ] Architecture documents located (if available)

### Step 2: Framework Selection

- [ ] Framework auto-detection logic executed
- [ ] Framework choice justified (Playwright vs Cypress for frontend; pytest/JUnit/Go test/xUnit/RSpec for backend)
- [ ] Framework preference respected (if explicitly set via `config.test_framework`)
- [ ] User notified of framework selection and rationale

### Step 3: Directory Structure

- [ ] `tests/` root directory created
- [ ] `tests/e2e/` directory created (or user's preferred structure)
- [ ] `tests/support/` directory created (critical pattern)
- [ ] `tests/support/fixtures/` directory created
- [ ] `tests/support/fixtures/factories/` directory created
- [ ] `tests/support/helpers/` directory created
- [ ] `tests/support/page-objects/` directory created (if applicable)
- [ ] All directories have correct permissions

**Note**: Test organization is flexible (e2e/, api/, integration/). The **support/** folder is the key pattern.

### Step 4: Configuration Files

- [ ] Framework config file created (`playwright.config.ts` or `cypress.config.ts`)
- [ ] Config file uses TypeScript (if `use_typescript: true`)
- [ ] Timeouts configured correctly (action: 15s, navigation: 30s, test: 60s)
- [ ] Base URL configured with environment variable fallback
- [ ] Trace/screenshot/video set to retain-on-failure
- [ ] Multiple reporters configured (HTML + JUnit + console)
- [ ] Parallel execution enabled
- [ ] CI-specific settings configured (retries, workers)
- [ ] Config file is syntactically valid (no compilation errors)
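
A Playwright config satisfying the items above might look like this sketch (illustrative, not the exact file the workflow generates; values mirror the checklist's timeouts and reporter list):

```typescript
// Sketch of playwright.config.ts matching the checklist items above
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  timeout: 60_000,                         // test: 60s
  fullyParallel: true,                     // parallel execution enabled
  retries: process.env.CI ? 2 : 0,         // CI-specific retries
  workers: process.env.CI ? 2 : undefined, // CI-specific workers
  reporter: [
    ['html'],
    ['junit', { outputFile: 'test-results/junit.xml' }],
    ['list'],                              // console reporter
  ],
  use: {
    baseURL: process.env.BASE_URL ?? 'http://localhost:3000', // env fallback
    actionTimeout: 15_000,                 // action: 15s
    navigationTimeout: 30_000,             // navigation: 30s
    trace: 'retain-on-failure',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
});
```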

### Step 5: Environment Configuration

- [ ] `.env.example` created in project root
- [ ] `TEST_ENV` variable defined
- [ ] `BASE_URL` variable defined with default
- [ ] `API_URL` variable defined (if applicable)
- [ ] Authentication variables defined (if applicable)
- [ ] Feature flag variables defined (if applicable)
- [ ] `.nvmrc` created with appropriate Node version
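
A matching `.env.example` could look like the following sketch (the authentication variable names are illustrative assumptions; placeholders only, never real values):

```ini
# .env.example — placeholders only, never commit real values
TEST_ENV=local
BASE_URL=http://localhost:3000
API_URL=http://localhost:3000/api
TEST_USER_EMAIL=user@example.com
TEST_USER_PASSWORD=changeme
```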

### Step 6: Fixture Architecture

- [ ] `tests/support/fixtures/index.ts` created
- [ ] Base fixture extended from Playwright/Cypress
- [ ] Type definitions for fixtures created
- [ ] mergeTests pattern implemented (if multiple fixtures)
- [ ] Auto-cleanup logic included in fixtures
- [ ] Fixture architecture follows knowledge base patterns

### Step 7: Data Factories

- [ ] At least one factory created (e.g., UserFactory)
- [ ] Factories use @faker-js/faker for realistic data
- [ ] Factories track created entities (for cleanup)
- [ ] Factories implement `cleanup()` method
- [ ] Factories integrate with fixtures
- [ ] Factories follow knowledge base patterns
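
The tracking-and-cleanup pattern these items describe can be sketched as below. This `UserFactory` is a synchronous, dependency-free illustration; the generated factories are async (they call the API under test) and draw realistic values from `@faker-js/faker`:

```typescript
// Minimal sketch of the track-and-clean-up factory pattern
type User = { id: string; email: string; name: string };

class UserFactory {
  private created: User[] = [];
  private seq = 0;

  // deleteUser is a stand-in for the real API call that removes the entity
  constructor(private deleteUser: (id: string) => void) {}

  create(overrides: Partial<User> = {}): User {
    this.seq += 1;
    const user: User = {
      id: `user-${this.seq}`,
      email: `user${this.seq}@example.test`,
      name: `Test User ${this.seq}`,
      ...overrides,
    };
    this.created.push(user); // track for teardown
    return user;
  }

  // Called from fixture teardown; deletes in reverse creation order
  cleanup(): void {
    for (const user of [...this.created].reverse()) {
      this.deleteUser(user.id);
    }
    this.created = [];
  }
}
```

Wiring `cleanup()` into fixture teardown is what makes the auto-cleanup item above hold without per-test bookkeeping.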

### Step 8: Sample Tests

- [ ] Example test file created (`tests/e2e/example.spec.ts`)
- [ ] Test uses fixture architecture
- [ ] Test demonstrates data factory usage
- [ ] Test uses proper selector strategy (data-testid)
- [ ] Test follows Given-When-Then structure
- [ ] Test includes proper assertions
- [ ] Network interception demonstrated (if applicable)

### Step 9: Helper Utilities

- [ ] API helper created (if API testing needed)
- [ ] Network helper created (if network mocking needed)
- [ ] Auth helper created (if authentication needed)
- [ ] Helpers follow functional patterns
- [ ] Helpers have proper error handling

### Step 10: Documentation

- [ ] `tests/README.md` created
- [ ] Setup instructions included
- [ ] Running tests section included
- [ ] Architecture overview section included
- [ ] Best practices section included
- [ ] CI integration section included
- [ ] Knowledge base references included
- [ ] Troubleshooting section included

### Step 11: Build & Test Script Updates

- [ ] Minimal test script added to appropriate config (`package.json` for frontend, `Makefile`/`pyproject.toml`/`build.gradle` for backend)
- [ ] Test framework dependency added (if not already present)
- [ ] Type definitions added (if TypeScript)
- [ ] Users can extend with additional scripts as needed

---

## Output Validation

### Configuration Validation

- [ ] Config file loads without errors
- [ ] Config file passes linting (if linter configured)
- [ ] Config file uses correct syntax for chosen framework
- [ ] All paths in config resolve correctly
- [ ] Reporter output directories exist or are created on test run

### Test Execution Validation

- [ ] Sample test runs successfully
- [ ] Test execution produces expected output (pass/fail)
- [ ] Test artifacts generated correctly (traces, screenshots, videos)
- [ ] Test report generated successfully
- [ ] No console errors or warnings during test run

### Directory Structure Validation

- [ ] All required directories exist
- [ ] Directory structure matches framework conventions
- [ ] No duplicate or conflicting directories
- [ ] Directories accessible with correct permissions

### File Integrity Validation

- [ ] All generated files are syntactically correct
- [ ] No placeholder text left in files (e.g., "TODO", "FIXME")
- [ ] All imports resolve correctly
- [ ] No hardcoded credentials or secrets in files
- [ ] All file paths use correct separators for OS

---

## Quality Checks

### Code Quality

- [ ] Generated code follows project coding standards
- [ ] TypeScript types are complete and accurate (no `any` unless necessary)
- [ ] No unused imports or variables
- [ ] Consistent code formatting (matches project style)
- [ ] No linting errors in generated files

### Best Practices Compliance

- [ ] Fixture architecture follows pure function → fixture → mergeTests pattern
- [ ] Data factories implement auto-cleanup
- [ ] Network interception occurs before navigation
- [ ] Selectors use data-testid strategy
- [ ] Artifacts only captured on failure
- [ ] Tests follow Given-When-Then structure
- [ ] No hard-coded waits or sleeps

### Knowledge Base Alignment

- [ ] Fixture pattern matches `fixture-architecture.md`
- [ ] Data factories match `data-factories.md`
- [ ] Network handling matches `network-first.md`
- [ ] Config follows `playwright-config.md` or `test-config.md`
- [ ] Test quality matches `test-quality.md`

### Pact Consumer CDC Alignment (when `tea_use_pactjs_utils` enabled)

- [ ] `vitest.config.pact.ts` is minimal (no pool/coverage/setup copied from unit config)
- [ ] Script names match pactjs-utils (`test:pact:consumer`, `publish:pact`, `can:i:deploy:consumer`, `record:consumer:deployment`)
- [ ] Scripts source `env-setup.sh` inline in package.json
- [ ] Shell scripts use `pact-broker` not `npx pact-broker`
- [ ] Shell scripts use `PACTICIPANT` env var pattern (not hardcoded service names)
- [ ] `can-i-deploy.sh` has `--retry-while-unknown=10 --retry-interval=30`
- [ ] `record-deployment.sh` has branch guard (only records on main/master)
- [ ] `env-setup.sh` uses `set -eu`; broker scripts use `set -euo pipefail` — each with explanatory comment
- [ ] CI workflow named `contract-test-consumer.yml`
- [ ] CI has workflow-level env block (not per-step)
- [ ] CI has `detect-breaking-change` step before install
- [ ] CI step numbering skips (3) — webhook-triggered provider verification
- [ ] CI can-i-deploy has `PACT_BREAKING_CHANGE != 'true'` condition
- [ ] CI has NO upload-artifact step (broker is source of truth)
- [ ] `.github/actions/detect-breaking-change/action.yml` exists
- [ ] Consumer tests use `.pacttest.ts` extension
- [ ] Consumer tests use PactV4 `addInteraction()` builder (not PactV3 fluent API)
- [ ] Consumer tests call REAL consumer code (actual API client functions), NOT raw `fetch()`
- [ ] Consumer code exposes URL injection mechanism (`setApiUrl()`, env var, or constructor param)
- [ ] Local consumer-helpers shim present if `@seontechnologies/pactjs-utils` not installed
- [ ] `.gitignore` includes `/pacts/` and `pact-logs/`

### Security Checks

- [ ] No credentials in configuration files
- [ ] `.env.example` contains placeholders, not real values
- [ ] Sensitive test data handled securely
- [ ] API keys and tokens use environment variables
- [ ] No secrets committed to version control

---

## Integration Points

### Status File Integration

- [ ] Framework initialization logged in Quality & Testing Progress section
- [ ] Status file updated with completion timestamp
- [ ] Status file shows framework: Playwright or Cypress

### Knowledge Base Integration

- [ ] Relevant knowledge fragments identified from tea-index.csv
- [ ] Knowledge fragments successfully loaded
- [ ] Patterns from knowledge base applied correctly
- [ ] Knowledge base references included in documentation

### Workflow Dependencies

- [ ] Can proceed to `ci` workflow after completion
- [ ] Can proceed to `test-design` workflow after completion
- [ ] Can proceed to `atdd` workflow after completion
- [ ] Framework setup compatible with downstream workflows

---

## Completion Criteria

**All of the following must be true:**

- [ ] All prerequisite checks passed
- [ ] All process steps completed without errors
- [ ] All output validations passed
- [ ] All quality checks passed
- [ ] All integration points verified
- [ ] Sample test executes successfully
- [ ] User can run the appropriate test command without errors (`npm run test:e2e`, `pytest`, `go test ./...`, etc.)
- [ ] Documentation is complete and accurate
- [ ] No critical issues or blockers identified

---

## Post-Workflow Actions

**User must complete:**

1. [ ] Copy `.env.example` to `.env`
2. [ ] Fill in environment-specific values in `.env`
3. [ ] Run `npm install` to install test dependencies
4. [ ] Run `npm run test:e2e` to verify setup
5. [ ] Review `tests/README.md` for project-specific guidance

**Recommended next workflows:**

1. [ ] Run `ci` workflow to set up CI/CD pipeline
2. [ ] Run `test-design` workflow to plan test coverage
3. [ ] Run `atdd` workflow when ready to develop stories

---

## Rollback Procedure

If the workflow fails and needs to be rolled back:

1. [ ] Delete `tests/` directory
2. [ ] Remove test scripts from `package.json`
3. [ ] Delete `.env.example` (if created)
4. [ ] Delete `.nvmrc` (if created)
5. [ ] Delete framework config file
6. [ ] Remove test dependencies from `package.json` (if added)
7. [ ] Run `npm install` to clean up `node_modules`

---

## Notes

### Common Issues

**Issue**: Config file has TypeScript errors

- **Solution**: Ensure `@playwright/test` or `cypress` types are installed

**Issue**: Sample test fails to run

- **Solution**: Check `BASE_URL` in `.env` and ensure the app is running

**Issue**: Fixture cleanup not working

- **Solution**: Verify `cleanup()` is called in fixture teardown

**Issue**: Network interception not working

- **Solution**: Ensure route setup occurs before `page.goto()`

### Framework-Specific Considerations

**Playwright:**

- Requires Node.js 18+
- Browser binaries auto-installed on first run
- Trace viewer requires running `npx playwright show-trace`

**Cypress:**

- Requires Node.js 18+
- Cypress app opens on first run
- Component testing requires additional setup

### Version Compatibility

- [ ] Node.js version matches `.nvmrc`
- [ ] Framework version compatible with Node.js version
- [ ] TypeScript version compatible with framework
- [ ] All peer dependencies satisfied

---

**Checklist Complete**: Sign off when all items checked and validated.

**Completed by:** {name}
**Date:** {date}
**Framework:** {Playwright / Cypress / other}
**Notes:** {notes}

@@ -0,0 +1,45 @@

<!-- Powered by BMAD-CORE™ -->

# Test Framework Setup

**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-framework`
**Version**: 5.0 (Step-File Architecture)

---

## Overview

Initialize a production-ready test framework (Playwright or Cypress) with fixtures, helpers, configuration, and best practices.

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture**:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `test_dir`, `use_typescript`, `framework_preference`, `project_size`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-preflight.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress-tracking frontmatter and routes to the next incomplete step.

@@ -0,0 +1,132 @@

---
name: 'step-01-preflight'
description: 'Verify prerequisites and gather project context'
nextStepFile: './step-02-select-framework.md'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---

# Step 1: Preflight Checks

## STEP GOAL

Verify the project is ready for framework scaffolding and gather key context.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if preflight requirements fail

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Stack Detection

**Read `config.test_stack_type`** from `{config_source}`.

**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):

- Scan `{project-root}` for project manifests:
  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
  - **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- Explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)

Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
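
The detection rules above can be sketched as a small function. This is an illustration only: the `package.json` dependency check is reduced to a boolean flag, whereas the real step reads manifest contents:

```typescript
type Stack = 'frontend' | 'backend' | 'fullstack';

// Config-file prefixes that signal a frontend stack
const FRONTEND_CONFIGS = ['playwright.config.', 'vite.config.', 'webpack.config.'];
// Manifests that signal a backend stack
const BACKEND_MANIFESTS = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml'];

// files: names found in {project-root}; frontendDeps: package.json lists
// react/vue/angular/next (checked separately in the real step)
function detectStack(
  files: string[],
  frontendDeps: boolean,
  configured: Stack | 'auto' = 'auto',
): Stack {
  if (configured !== 'auto') return configured; // explicit config overrides auto-detection
  const hasFrontend =
    frontendDeps || files.some((f) => FRONTEND_CONFIGS.some((p) => f.startsWith(p)));
  const hasBackend = files.some(
    (f) => BACKEND_MANIFESTS.includes(f) || f.endsWith('.csproj') || f.endsWith('.sln'),
  );
  if (hasFrontend && hasBackend) return 'fullstack';
  // Preflight halts earlier if neither stack is detectable, so one side holds here
  return hasBackend ? 'backend' : 'frontend';
}
```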

---

## 2. Validate Prerequisites

**If {detected_stack} is `frontend` or `fullstack`:**

- `package.json` exists in project root
- No existing E2E framework (`playwright.config.*`, `cypress.config.*`, `cypress.json`)

**If {detected_stack} is `backend` or `fullstack`:**

- At least one backend project manifest exists (`pyproject.toml`, `pom.xml`, `build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`)
- No existing test framework config that conflicts (e.g., `conftest.py` with full pytest suite, `src/test/` with JUnit suite)

- Architecture/stack context available (project type, bundler, dependencies)

If any fail, **HALT** and report the missing requirement.

---

## 3. Gather Project Context

**If {detected_stack} is `frontend` or `fullstack`:**

- Read `package.json` to identify framework, bundler, dependencies

**If {detected_stack} is `backend` or `fullstack`:**

- Read the relevant project manifest (`pyproject.toml`, `pom.xml`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`) to identify language, framework, and dependencies

- Check for architecture docs (`architecture.md`, `tech-spec*.md`) if available
- Note auth requirements and APIs (if documented)

---

## 4. Confirm Findings

Summarize:

- Project type and bundler
- Whether a framework is already installed
- Any relevant context docs found

---

## 5. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-01-preflight']
  lastStep: 'step-01-preflight'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-preflight'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-preflight'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.
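
The create-or-update logic for the progress frontmatter can be sketched as follows (field names come from this step; YAML parsing and serialization are omitted for brevity):

```typescript
interface Progress {
  stepsCompleted: string[];
  lastStep: string;
  lastSaved: string;
}

// Create-or-update rule from this step: add the step id only once,
// always refresh lastStep and lastSaved.
function recordStep(existing: Progress | null, step: string, date: string): Progress {
  const stepsCompleted = existing ? [...existing.stepsCompleted] : [];
  if (!stepsCompleted.includes(step)) stepsCompleted.push(step);
  return { stepsCompleted, lastStep: step, lastSaved: date };
}
```

Because the membership check makes the update idempotent, re-running a step (for example after a resume) never duplicates entries in `stepsCompleted`.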

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.