initial commit

2026-03-16 19:54:53 -04:00
commit bfe0e01254
3341 changed files with 483939 additions and 0 deletions


@@ -0,0 +1,6 @@
---
name: bmad-testarch-framework
description: 'Initialize test framework with Playwright or Cypress. Use when the user says "lets setup test framework" or "I want to initialize testing framework"'
---
Follow the instructions in [workflow.md](workflow.md).


@@ -0,0 +1 @@
type: skill


@@ -0,0 +1,345 @@
# Test Framework Setup - Validation Checklist
This checklist ensures the framework workflow completes successfully and all deliverables meet quality standards.
---
## Prerequisites
Before starting the workflow:
- [ ] Project root contains a valid project manifest (`package.json`, `pyproject.toml`, `pom.xml`, `build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, or `Cargo.toml`)
- [ ] No existing test framework detected that conflicts with the target setup
- [ ] Project type identifiable (React, Vue, Angular, Next.js, Node, Python, Java, Go, .NET, Ruby, Rust, etc.)
- [ ] Bundler identifiable (Vite, Webpack, Rollup, esbuild) or not applicable (backend projects)
- [ ] User has write permissions to create directories and files
---
## Process Steps
### Step 1: Preflight Checks
- [ ] Stack type detected (`frontend`, `backend`, or `fullstack`)
- [ ] Project manifest successfully read and parsed (`package.json`, `pyproject.toml`, `pom.xml`, `go.mod`, etc.)
- [ ] Project type extracted correctly
- [ ] Bundler identified (or marked as N/A for backend projects)
- [ ] No framework conflicts detected
- [ ] Architecture documents located (if available)
### Step 2: Framework Selection
- [ ] Framework auto-detection logic executed
- [ ] Framework choice justified (Playwright vs Cypress for frontend; pytest/JUnit/Go test/xUnit/RSpec for backend)
- [ ] Framework preference respected (if explicitly set via `config.test_framework`)
- [ ] User notified of framework selection and rationale
### Step 3: Directory Structure
- [ ] `tests/` root directory created
- [ ] `tests/e2e/` directory created (or user's preferred structure)
- [ ] `tests/support/` directory created (critical pattern)
- [ ] `tests/support/fixtures/` directory created
- [ ] `tests/support/fixtures/factories/` directory created
- [ ] `tests/support/helpers/` directory created
- [ ] `tests/support/page-objects/` directory created (if applicable)
- [ ] All directories have correct permissions
**Note**: Test organization is flexible (e2e/, api/, integration/). The **support/** folder is the key pattern.
### Step 4: Configuration Files
- [ ] Framework config file created (`playwright.config.ts` or `cypress.config.ts`)
- [ ] Config file uses TypeScript (if `use_typescript: true`)
- [ ] Timeouts configured correctly (action: 15s, navigation: 30s, test: 60s)
- [ ] Base URL configured with environment variable fallback
- [ ] Trace/screenshot/video set to retain-on-failure
- [ ] Multiple reporters configured (HTML + JUnit + console)
- [ ] Parallel execution enabled
- [ ] CI-specific settings configured (retries, workers)
- [ ] Config file is syntactically valid (no compilation errors)
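As a concrete reference, a config satisfying the items above might look like this (a minimal sketch using Playwright's standard `defineConfig` options; the `testDir`, worker count, and reporter output path are illustrative, not prescribed by this checklist):

```typescript
// playwright.config.ts: minimal sketch covering the checklist items above.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests/e2e',
  timeout: 60_000, // per-test timeout: 60s
  fullyParallel: true, // parallel execution enabled
  retries: process.env.CI ? 2 : 0, // CI-specific retries
  workers: process.env.CI ? 4 : undefined, // CI-specific workers (illustrative)
  reporter: [
    ['html', { open: 'never' }],
    ['junit', { outputFile: 'test-results/junit.xml' }],
    ['list'], // console reporter
  ],
  use: {
    baseURL: process.env.BASE_URL ?? 'http://localhost:3000', // env var fallback
    actionTimeout: 15_000, // action timeout: 15s
    navigationTimeout: 30_000, // navigation timeout: 30s
    trace: 'retain-on-failure',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
});
```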
### Step 5: Environment Configuration
- [ ] `.env.example` created in project root
- [ ] `TEST_ENV` variable defined
- [ ] `BASE_URL` variable defined with default
- [ ] `API_URL` variable defined (if applicable)
- [ ] Authentication variables defined (if applicable)
- [ ] Feature flag variables defined (if applicable)
- [ ] `.nvmrc` created with appropriate Node version
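For instance, the generated `.env.example` might look like the following (placeholder values only; variable names beyond `TEST_ENV`, `BASE_URL`, and `API_URL` are illustrative):

```shell
# .env.example: copy to .env and fill in real values (never commit .env)
TEST_ENV=local
BASE_URL=http://localhost:3000
API_URL=http://localhost:3000/api
# Authentication (if applicable): placeholders, not real credentials
TEST_USER_EMAIL=user@example.com
TEST_USER_PASSWORD=changeme
# Feature flags (if applicable)
FEATURE_NEW_CHECKOUT=false
```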
### Step 6: Fixture Architecture
- [ ] `tests/support/fixtures/index.ts` created
- [ ] Base fixture extended from Playwright/Cypress
- [ ] Type definitions for fixtures created
- [ ] mergeTests pattern implemented (if multiple fixtures)
- [ ] Auto-cleanup logic included in fixtures
- [ ] Fixture architecture follows knowledge base patterns
### Step 7: Data Factories
- [ ] At least one factory created (e.g., UserFactory)
- [ ] Factories use @faker-js/faker for realistic data
- [ ] Factories track created entities (for cleanup)
- [ ] Factories implement `cleanup()` method
- [ ] Factories integrate with fixtures
- [ ] Factories follow knowledge base patterns
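A sketch of the factory shape these items describe. The real factory should use `@faker-js/faker`; a trivial stand-in generator is used here so the example is self-contained, and `UserFactory`, its fields, and the injected `deleteUser` callback are illustrative:

```typescript
// tests/support/fixtures/factories/user-factory.ts (sketch)
// In the real factory, replace makeName/makeEmail with @faker-js/faker calls.
interface User {
  id: string;
  name: string;
  email: string;
}

const makeName = () => `user-${Math.random().toString(36).slice(2, 8)}`;
const makeEmail = (name: string) => `${name}@example.com`;

export class UserFactory {
  // Track every created entity so cleanup() can remove them all.
  private created: User[] = [];

  constructor(private deleteUser: (id: string) => Promise<void>) {}

  async create(overrides: Partial<User> = {}): Promise<User> {
    const name = overrides.name ?? makeName();
    const user: User = {
      id: overrides.id ?? `u-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`,
      name,
      email: overrides.email ?? makeEmail(name),
    };
    this.created.push(user);
    return user;
  }

  // Called from fixture teardown: deletes everything this factory created.
  async cleanup(): Promise<void> {
    for (const user of this.created.reverse()) {
      await this.deleteUser(user.id);
    }
    this.created = [];
  }
}
```

Wiring `cleanup()` into the fixture teardown is what makes the auto-cleanup item above hold.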
### Step 8: Sample Tests
- [ ] Example test file created (`tests/e2e/example.spec.ts`)
- [ ] Test uses fixture architecture
- [ ] Test demonstrates data factory usage
- [ ] Test uses proper selector strategy (data-testid)
- [ ] Test follows Given-When-Then structure
- [ ] Test includes proper assertions
- [ ] Network interception demonstrated (if applicable)
### Step 9: Helper Utilities
- [ ] API helper created (if API testing needed)
- [ ] Network helper created (if network mocking needed)
- [ ] Auth helper created (if authentication needed)
- [ ] Helpers follow functional patterns
- [ ] Helpers have proper error handling
### Step 10: Documentation
- [ ] `tests/README.md` created
- [ ] Setup instructions included
- [ ] Running tests section included
- [ ] Architecture overview section included
- [ ] Best practices section included
- [ ] CI integration section included
- [ ] Knowledge base references included
- [ ] Troubleshooting section included
### Step 11: Build & Test Script Updates
- [ ] Minimal test script added to appropriate config (`package.json` for frontend, `Makefile`/`pyproject.toml`/`build.gradle` for backend)
- [ ] Test framework dependency added (if not already present)
- [ ] Type definitions added (if TypeScript)
- [ ] Users can extend with additional scripts as needed
---
## Output Validation
### Configuration Validation
- [ ] Config file loads without errors
- [ ] Config file passes linting (if linter configured)
- [ ] Config file uses correct syntax for chosen framework
- [ ] All paths in config resolve correctly
- [ ] Reporter output directories exist or are created on test run
### Test Execution Validation
- [ ] Sample test runs successfully
- [ ] Test execution produces expected output (pass/fail)
- [ ] Test artifacts generated correctly (traces, screenshots, videos)
- [ ] Test report generated successfully
- [ ] No console errors or warnings during test run
### Directory Structure Validation
- [ ] All required directories exist
- [ ] Directory structure matches framework conventions
- [ ] No duplicate or conflicting directories
- [ ] Directories accessible with correct permissions
### File Integrity Validation
- [ ] All generated files are syntactically correct
- [ ] No placeholder text left in files (e.g., "TODO", "FIXME")
- [ ] All imports resolve correctly
- [ ] No hardcoded credentials or secrets in files
- [ ] All file paths use correct separators for OS
---
## Quality Checks
### Code Quality
- [ ] Generated code follows project coding standards
- [ ] TypeScript types are complete and accurate (no `any` unless necessary)
- [ ] No unused imports or variables
- [ ] Consistent code formatting (matches project style)
- [ ] No linting errors in generated files
### Best Practices Compliance
- [ ] Fixture architecture follows pure function → fixture → mergeTests pattern
- [ ] Data factories implement auto-cleanup
- [ ] Network interception occurs before navigation
- [ ] Selectors use data-testid strategy
- [ ] Artifacts only captured on failure
- [ ] Tests follow Given-When-Then structure
- [ ] No hard-coded waits or sleeps
### Knowledge Base Alignment
- [ ] Fixture pattern matches `fixture-architecture.md`
- [ ] Data factories match `data-factories.md`
- [ ] Network handling matches `network-first.md`
- [ ] Config follows `playwright-config.md` or `test-config.md`
- [ ] Test quality matches `test-quality.md`
### Pact Consumer CDC Alignment (when `tea_use_pactjs_utils` enabled)
- [ ] `vitest.config.pact.ts` is minimal (no pool/coverage/setup copied from unit config)
- [ ] Script names match pactjs-utils (`test:pact:consumer`, `publish:pact`, `can:i:deploy:consumer`, `record:consumer:deployment`)
- [ ] Scripts source `env-setup.sh` inline in package.json
- [ ] Shell scripts use `pact-broker` not `npx pact-broker`
- [ ] Shell scripts use `PACTICIPANT` env var pattern (not hardcoded service names)
- [ ] `can-i-deploy.sh` has `--retry-while-unknown=10 --retry-interval=30`
- [ ] `record-deployment.sh` has branch guard (only records on main/master)
- [ ] `env-setup.sh` uses `set -eu`; broker scripts use `set -euo pipefail` — each with explanatory comment
- [ ] CI workflow named `contract-test-consumer.yml`
- [ ] CI has workflow-level env block (not per-step)
- [ ] CI has `detect-breaking-change` step before install
- [ ] CI step numbering skips (3) — webhook-triggered provider verification
- [ ] CI can-i-deploy has `PACT_BREAKING_CHANGE != 'true'` condition
- [ ] CI has NO upload-artifact step (broker is source of truth)
- [ ] `.github/actions/detect-breaking-change/action.yml` exists
- [ ] Consumer tests use `.pacttest.ts` extension
- [ ] Consumer tests use PactV4 `addInteraction()` builder (not PactV3 fluent API)
- [ ] Consumer tests call REAL consumer code (actual API client functions), NOT raw `fetch()`
- [ ] Consumer code exposes URL injection mechanism (`setApiUrl()`, env var, or constructor param)
- [ ] Local consumer-helpers shim present if `@seontechnologies/pactjs-utils` not installed
- [ ] `.gitignore` includes `/pacts/` and `pact-logs/`
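For reference, a `can-i-deploy.sh` satisfying the relevant items above might look like this (a sketch only: `env-setup.sh` is assumed to export `PACTICIPANT` and `APP_VERSION`, and the target environment name is illustrative):

```shell
#!/usr/bin/env bash
# set -euo pipefail: broker scripts must fail fast on errors, unset
# variables, and failures anywhere in a pipeline (per the checklist above).
set -euo pipefail

# env-setup.sh is assumed to export PACTICIPANT and APP_VERSION.
source "$(dirname "$0")/env-setup.sh"

# Note: plain `pact-broker`, not `npx pact-broker` (CLI assumed on PATH).
pact-broker can-i-deploy \
  --pacticipant "${PACTICIPANT}" \
  --version "${APP_VERSION}" \
  --to-environment production \
  --retry-while-unknown=10 \
  --retry-interval=30
```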
### Security Checks
- [ ] No credentials in configuration files
- [ ] .env.example contains placeholders, not real values
- [ ] Sensitive test data handled securely
- [ ] API keys and tokens use environment variables
- [ ] No secrets committed to version control
---
## Integration Points
### Status File Integration
- [ ] Framework initialization logged in Quality & Testing Progress section
- [ ] Status file updated with completion timestamp
- [ ] Status file shows framework: Playwright or Cypress
### Knowledge Base Integration
- [ ] Relevant knowledge fragments identified from tea-index.csv
- [ ] Knowledge fragments successfully loaded
- [ ] Patterns from knowledge base applied correctly
- [ ] Knowledge base references included in documentation
### Workflow Dependencies
- [ ] Can proceed to `ci` workflow after completion
- [ ] Can proceed to `test-design` workflow after completion
- [ ] Can proceed to `atdd` workflow after completion
- [ ] Framework setup compatible with downstream workflows
---
## Completion Criteria
**All of the following must be true:**
- [ ] All prerequisite checks passed
- [ ] All process steps completed without errors
- [ ] All output validations passed
- [ ] All quality checks passed
- [ ] All integration points verified
- [ ] Sample test executes successfully
- [ ] User can run the appropriate test command without errors (`npm run test:e2e`, `pytest`, `go test ./...`, etc.)
- [ ] Documentation is complete and accurate
- [ ] No critical issues or blockers identified
---
## Post-Workflow Actions
**User must complete:**
1. [ ] Copy `.env.example` to `.env`
2. [ ] Fill in environment-specific values in `.env`
3. [ ] Run `npm install` to install test dependencies
4. [ ] Run `npm run test:e2e` to verify setup
5. [ ] Review `tests/README.md` for project-specific guidance
**Recommended next workflows:**
1. [ ] Run `ci` workflow to set up CI/CD pipeline
2. [ ] Run `test-design` workflow to plan test coverage
3. [ ] Run `atdd` workflow when ready to develop stories
---
## Rollback Procedure
If workflow fails and needs to be rolled back:
1. [ ] Delete `tests/` directory
2. [ ] Remove test scripts from package.json
3. [ ] Delete `.env.example` (if created)
4. [ ] Delete `.nvmrc` (if created)
5. [ ] Delete framework config file
6. [ ] Remove test dependencies from package.json (if added)
7. [ ] Run `npm install` to clean up node_modules
---
## Notes
### Common Issues
**Issue**: Config file has TypeScript errors
- **Solution**: Ensure `@playwright/test` or `cypress` types are installed
**Issue**: Sample test fails to run
- **Solution**: Check BASE_URL in .env, ensure app is running
**Issue**: Fixture cleanup not working
- **Solution**: Verify cleanup() is called in fixture teardown
**Issue**: Network interception not working
- **Solution**: Ensure route setup occurs before page.goto()
### Framework-Specific Considerations
**Playwright:**
- Requires Node.js 18+
- Browser binaries auto-installed on first run
- Trace viewer requires running `npx playwright show-trace`
**Cypress:**
- Requires Node.js 18+
- Cypress app opens on first run
- Component testing requires additional setup
### Version Compatibility
- [ ] Node.js version matches .nvmrc
- [ ] Framework version compatible with Node.js version
- [ ] TypeScript version compatible with framework
- [ ] All peer dependencies satisfied
---
**Checklist Complete**: Sign off when all items checked and validated.
**Completed by:** {name}
**Date:** {date}
**Framework:** {Playwright / Cypress / other}
**Notes:** {notes}


@@ -0,0 +1,45 @@
<!-- Powered by BMAD-CORE™ -->
# Test Framework Setup
**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-framework`
**Version**: 5.0 (Step-File Architecture)
---
## Overview
Initialize a production-ready test framework (Playwright or Cypress) with fixtures, helpers, configuration, and best practices.
---
## WORKFLOW ARCHITECTURE
This workflow uses **step-file architecture**:
- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping
---
## INITIALIZATION SEQUENCE
### 1. Configuration Loading
From `workflow.yaml`, resolve:
- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `test_dir`, `use_typescript`, `framework_preference`, `project_size`
### 2. First Step
Load, read completely, and execute:
`./steps-c/step-01-preflight.md`
### 3. Resume Support
If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`
This checks the output document for progress tracking frontmatter and routes to the next incomplete step.


@@ -0,0 +1,132 @@
---
name: 'step-01-preflight'
description: 'Verify prerequisites and gather project context'
nextStepFile: './step-02-select-framework.md'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 1: Preflight Checks
## STEP GOAL
Verify the project is ready for framework scaffolding and gather key context.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if preflight requirements fail
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Stack Detection
**Read `config.test_stack_type`** from `{config_source}`.
**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):
- Scan `{project-root}` for project manifests:
- **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
- **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
- **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- Explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)
Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
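The detection algorithm above can be sketched as follows. This is a simplified illustration: the full step should also inspect `package.json` dependencies and bundler configs for the frontend check and scan for `*.csproj`/`*.sln` for the backend check, and it adds an explicit `unknown` result for the case that the prerequisites check in section 2 would reject:

```typescript
// Sketch of the stack auto-detection described above.
import * as fs from 'node:fs';
import * as path from 'node:path';

type Stack = 'frontend' | 'backend' | 'fullstack' | 'unknown';

const BACKEND_MANIFESTS = [
  'pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml',
];

export function detectStack(projectRoot: string): Stack {
  const exists = (name: string) => fs.existsSync(path.join(projectRoot, name));

  // Frontend indicator: package.json (the full algorithm also checks for
  // react/vue/angular/next dependencies and bundler configs).
  const hasFrontend = exists('package.json');

  // Backend indicator: any backend manifest (the full algorithm also scans
  // for *.csproj / *.sln).
  const hasBackend = BACKEND_MANIFESTS.some(exists);

  if (hasFrontend && hasBackend) return 'fullstack';
  if (hasFrontend) return 'frontend';
  if (hasBackend) return 'backend';
  return 'unknown'; // prerequisites check (section 2) should halt before this
}
```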
---
## 2. Validate Prerequisites
**If {detected_stack} is `frontend` or `fullstack`:**
- `package.json` exists in project root
- No existing E2E framework (`playwright.config.*`, `cypress.config.*`, `cypress.json`)
**If {detected_stack} is `backend` or `fullstack`:**
- At least one backend project manifest exists (`pyproject.toml`, `pom.xml`, `build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`)
- No existing test framework config that conflicts (e.g., `conftest.py` with full pytest suite, `src/test/` with JUnit suite)
- Architecture/stack context available (project type, bundler, dependencies)
If any fail, **HALT** and report the missing requirement.
---
## 3. Gather Project Context
**If {detected_stack} is `frontend` or `fullstack`:**
- Read `package.json` to identify framework, bundler, dependencies
**If {detected_stack} is `backend` or `fullstack`:**
- Read the relevant project manifest (`pyproject.toml`, `pom.xml`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`) to identify language, framework, and dependencies
- Check for architecture docs (`architecture.md`, `tech-spec*.md`) if available
- Note auth requirements and APIs (if documented)
---
## 4. Confirm Findings
Summarize:
- Project type and bundler
- Whether a framework is already installed
- Any relevant context docs found
---
### 5. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-01-preflight']
lastStep: 'step-01-preflight'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-01-preflight'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-01-preflight'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
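The save logic above, sketched as a pure function over the document text (a simplified illustration: it uses a line-based frontmatter parse rather than a full YAML library, and appends the step output at the end rather than routing it to a named section):

```typescript
// Sketch of the progress-frontmatter update described above.
export function saveProgress(
  existing: string | null, // current contents of {outputFile}, or null if absent
  stepName: string,
  date: string,
  stepOutput: string,
): string {
  if (existing === null) {
    // First save: create the document with fresh frontmatter.
    return [
      '---',
      `stepsCompleted: ['${stepName}']`,
      `lastStep: '${stepName}'`,
      `lastSaved: '${date}'`,
      '---',
      stepOutput,
    ].join('\n');
  }

  // Subsequent saves: update frontmatter fields and append the step output.
  const updated = existing
    .replace(/^stepsCompleted: \[(.*)\]$/m, (_m, list: string) =>
      list.includes(`'${stepName}'`) // only add if not already present
        ? `stepsCompleted: [${list}]`
        : `stepsCompleted: [${list}, '${stepName}']`,
    )
    .replace(/^lastStep: .*$/m, `lastStep: '${stepName}'`)
    .replace(/^lastSaved: .*$/m, `lastSaved: '${date}'`);
  return `${updated}\n${stepOutput}`;
}
```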
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,116 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 1b: Resume Workflow
## STEP GOAL
Resume an interrupted workflow by loading the existing progress document, verifying previously created artifacts still exist on disk, displaying progress, and routing to the next incomplete step.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
### 1. Load Output Document
Read `{outputFile}` and parse YAML frontmatter for:
- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save
**If `{outputFile}` does not exist**, display:
"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."
**THEN:** Halt. Do not proceed.
---
### 2. Verify Previously Created Artifacts
Since this workflow creates code files, verify that artifacts from completed steps still exist on disk:
- If `step-01-preflight` completed: Confirm `package.json` still exists
- If `step-03-scaffold-framework` completed: Confirm directory structure and config files exist
- If `step-04-docs-and-scripts` completed: Confirm `{test_dir}/README.md` exists
If any expected artifacts are missing, warn the user and suggest re-running from the step that created them.
---
### 3. Display Progress Dashboard
Display:
"📋 **Workflow Resume — Test Framework Setup**
**Last saved:** {lastSaved}
**Steps completed:** {stepsCompleted.length} of 5
1. ✅/⬜ Preflight Checks (step-01-preflight)
2. ✅/⬜ Select Framework (step-02-select-framework)
3. ✅/⬜ Scaffold Framework (step-03-scaffold-framework)
4. ✅/⬜ Docs & Scripts (step-04-docs-and-scripts)
5. ✅/⬜ Validate & Summary (step-05-validate-and-summary)"
---
### 4. Route to Next Step
Based on `lastStep`, load the next incomplete step:
- `'step-01-preflight'` → `./step-02-select-framework.md`
- `'step-02-select-framework'` → `./step-03-scaffold-framework.md`
- `'step-03-scaffold-framework'` → `./step-04-docs-and-scripts.md`
- `'step-04-docs-and-scripts'` → `./step-05-validate-and-summary.md`
- `'step-05-validate-and-summary'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.
**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.
**Otherwise**, load the identified step file, read completely, and execute.
The existing content in `{outputFile}` provides context from previously completed steps.
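The routing table above as a small pure function (a sketch; it returns distinct results for the already-complete and unknown-state branches, which the step handles by halting with the messages shown above):

```typescript
// Sketch of the resume-routing logic described above.
const NEXT_STEP: Record<string, string> = {
  'step-01-preflight': './step-02-select-framework.md',
  'step-02-select-framework': './step-03-scaffold-framework.md',
  'step-03-scaffold-framework': './step-04-docs-and-scripts.md',
  'step-04-docs-and-scripts': './step-05-validate-and-summary.md',
};

type Route =
  | { kind: 'load'; file: string }
  | { kind: 'complete' } // workflow already finished: halt
  | { kind: 'unknown' }; // unrecognized lastStep: halt, suggest [C] Create

export function routeFromLastStep(lastStep: string): Route {
  if (lastStep === 'step-05-validate-and-summary') return { kind: 'complete' };
  const file = NEXT_STEP[lastStep];
  return file ? { kind: 'load', file } : { kind: 'unknown' };
}
```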
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Output document loaded and parsed correctly
- Previously created artifacts verified on disk
- Progress dashboard displayed accurately
- Routed to correct next step
### ❌ SYSTEM FAILURE:
- Not loading output document
- Not verifying existing artifacts
- Incorrect progress display
- Routing to wrong step
- Re-executing completed steps
**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.


@@ -0,0 +1,117 @@
---
name: 'step-02-select-framework'
description: 'Select Playwright or Cypress and justify choice'
nextStepFile: './step-03-scaffold-framework.md'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 2: Framework Selection
## STEP GOAL
Choose the most appropriate framework and document the rationale.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Selection Logic
Use `{detected_stack}` from Step 1 to guide framework selection.
**If {detected_stack} is `frontend` or `fullstack` (browser-based testing):**
Default to **Playwright** unless strong reasons suggest Cypress.
**Playwright recommended when:**
- Large or complex repo
- Multi-browser support needed
- Heavy API + UI integration
- CI speed/parallelism is important
**Cypress recommended when:**
- Small team prioritizes DX
- Component testing focus
- Simpler setup needed
**If {detected_stack} is `backend` (no browser-based testing):**
Select the framework matching the project language:
- **Python**: pytest (default), unittest
- **Java/Kotlin**: JUnit 5 (default), TestNG
- **Go**: Go test (built-in)
- **C#/.NET**: xUnit (default), NUnit, MSTest
- **Ruby**: RSpec (default), Minitest
- **Rust**: cargo test (built-in)
**If {detected_stack} is `fullstack`:**
Select both a browser-based framework (Playwright/Cypress) AND the appropriate backend framework for the detected language.
Respect `config.test_framework` if explicitly set (not `"auto"`).
---
## 2. Announce Decision
State the selected framework and reasoning.
---
### 3. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-02-select-framework']
lastStep: 'step-02-select-framework'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-02-select-framework'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-02-select-framework'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,323 @@
---
name: 'step-03-scaffold-framework'
description: 'Create framework scaffold with adaptive orchestration (agent-team, subagent, or sequential)'
nextStepFile: './step-04-docs-and-scripts.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 3: Scaffold Framework
## STEP GOAL
Generate the test directory structure, configuration files, fixtures, factories, helpers, and sample tests using deterministic mode selection with runtime fallback.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Apply knowledge base patterns where required
- ✅ Resolve execution mode from explicit user request first, then config
- ✅ Apply fallback rules deterministically when requested mode is unsupported
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 0. Resolve Execution Mode (User Override First)
```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    if (['false', '0', 'off', 'no'].includes(normalized)) return false;
    if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
  }
  if (value === undefined || value === null) return defaultValue;
  return Boolean(value);
};

const orchestrationContext = {
  config: {
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
  },
  timestamp: new Date().toISOString().replace(/[:.]/g, '-'),
};

const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');
  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (['subagent', 'sub agent', 'subagents', 'sub agents'].includes(normalized)) return 'subagent';
  if (['agent team', 'agent teams', 'agentteam'].includes(normalized)) return 'agent-team';
  return null;
};

const normalizeConfigExecutionMode = (mode) =>
  ['auto', 'sequential', 'subagent', 'agent-team'].includes(mode) ? mode : null;

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);
const requestedMode =
  explicitModeFromUser || normalizeConfigExecutionMode(orchestrationContext.config.execution_mode) || 'auto';

const probeEnabled = orchestrationContext.config.capability_probe;
const supports = { subagent: false, agentTeam: false };
if (probeEnabled) {
  supports.subagent = runtime.canLaunchSubagents?.() === true;
  supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}

let resolvedMode = requestedMode;
if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}
```
Resolution precedence:
1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)
## 1. Create Directory Structure
Use `{detected_stack}` from Step 1 to determine directory layout.
**If {detected_stack} is `frontend` or `fullstack`:**
- `{test_dir}/e2e/`
- `{test_dir}/support/fixtures/`
- `{test_dir}/support/helpers/`
- `{test_dir}/support/page-objects/` (optional)
**If {detected_stack} is `backend` or `fullstack`:**
Create the idiomatic test directory for the detected language:
- **Python (pytest)**: `tests/` with `conftest.py`, `tests/unit/`, `tests/integration/`, `tests/api/`
- **Java/Kotlin (JUnit)**: `src/test/java/` mirroring `src/main/java/` package structure, with `unit/`, `integration/`, `api/` sub-packages
- **Go**: `*_test.go` files alongside source files (Go convention), plus `testdata/` for fixtures
- **C#/.NET (xUnit)**: `tests/` project with `Unit/`, `Integration/`, `Api/` directories
- **Ruby (RSpec)**: `spec/` with `spec/unit/`, `spec/integration/`, `spec/api/`, `spec/support/`
- **Rust**: `tests/` for integration tests, inline `#[cfg(test)]` modules for unit tests
**If `config.tea_use_pactjs_utils` is enabled and runtime is Node.js/TypeScript** (i.e., `{detected_stack}` is `frontend` or `fullstack`, or `{detected_stack}` is `backend` with Node.js/TypeScript runtime):
Create Node.js/TypeScript contract testing directory structure per `pact-consumer-framework-setup.md`:
- `tests/contract/consumer/` — consumer contract test files (`.pacttest.ts` extension)
- `tests/contract/support/` — pact config, provider state factories, consumer helpers shim
- `scripts/` — shell scripts (`env-setup.sh`, `publish-pact.sh`, `can-i-deploy.sh`, `record-deployment.sh`)
- `.github/actions/detect-breaking-change/` — PR checkbox-driven breaking change detection
- `.github/workflows/contract-test-consumer.yml` — consumer CDC CI workflow
---
## 2. Generate Framework Config
**If {detected_stack} is `frontend` or `fullstack`:**
Create `playwright.config.ts` or `cypress.config.ts` with:
- **Timeouts**: action 15s, navigation 30s, test 60s
- **Base URL**: env fallback (`BASE_URL`)
- **Artifacts**: retain-on-failure (trace/screenshot/video)
- **Reporters**: HTML + JUnit + console
- **Parallelism**: enabled (CI tuned)
Use TypeScript if `use_typescript: true`.
**If {detected_stack} is `backend` or `fullstack`:**
Create the idiomatic test config for the detected framework:
- **pytest**: `pyproject.toml` `[tool.pytest.ini_options]` or `pytest.ini` with markers, test paths, coverage settings
- **JUnit**: `build.gradle`/`pom.xml` test configuration with JUnit 5 dependencies, Surefire/Failsafe plugins
- **Go test**: no config file needed (Go convention); optionally create `Makefile` test targets
- **xUnit**: `.csproj` test project with xUnit and coverlet dependencies
- **RSpec**: `.rspec` config file with `spec_helper.rb` and `rails_helper.rb` (if Rails)
---
## 3. Environment Setup
Create `.env.example` with `TEST_ENV`, `BASE_URL`, `API_URL`.
**Stack-conditional environment files:**
**If {detected_stack} is `frontend` or `fullstack` (Node.js):**
- `.nvmrc` using current LTS Node (prefer Node 24+)
**If {detected_stack} is `backend`:**
Create the idiomatic version file for the detected language:
- **Python**: `.python-version` with current stable Python (prefer 3.12+)
- **Java**: `.java-version` or `JAVA_HOME` documentation in `.env.example`
- **Go**: Go version is already in `go.mod` (no additional file needed)
- **C#/.NET**: `global.json` with SDK version if not already present
- **Ruby**: `.ruby-version` with current stable Ruby
---
## 4. Fixtures & Factories
Read `{config_source}` and use `{knowledgeIndex}` to load fragments based on `config.tea_use_playwright_utils`:
**If Playwright Utils enabled:**
- `overview.md`, `fixtures-composition.md`, `auth-session.md`, `api-request.md`, `burn-in.md`, `network-error-monitor.md`, `data-factories.md`
- Recommend installing `@seontechnologies/playwright-utils`
**If disabled:**
- `fixture-architecture.md`, `data-factories.md`, `network-first.md`, `playwright-config.md`, `test-quality.md`
**If Pact.js Utils enabled** (`config.tea_use_pactjs_utils`):
- `pact-consumer-framework-setup.md` (CRITICAL: load this for directory structure, scripts, CI workflow, and PactV4 patterns)
- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`, `contract-testing.md`
- Recommend installing `@seontechnologies/pactjs-utils` and `@pact-foundation/pact`
**If Pact.js Utils disabled but contract testing relevant:**
- `contract-testing.md`
**If Pact MCP enabled** (`config.tea_pact_mcp` is `"mcp"`):
- `pact-mcp.md`
Implement:
- Fixture index with `mergeTests`
- Auto-cleanup hooks
- Faker-based data factories with overrides
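The factory-with-overrides idea can be sketched in dependency-free TypeScript (a real setup would draw defaults from Faker; the `User` shape here is illustrative):

```typescript
// Minimal data-factory sketch: static defaults merged with per-test overrides.
// In a real setup the default values would come from @faker-js/faker.
interface User {
  id: string;
  email: string;
  role: 'admin' | 'member';
}

let seq = 0;

function createUser(overrides: Partial<User> = {}): User {
  seq += 1;
  return {
    id: `user-${seq}`,             // unique per call, for test isolation
    email: `user${seq}@example.test`,
    role: 'member',
    ...overrides,                  // caller-supplied fields win
  };
}

const admin = createUser({ role: 'admin' });
console.log(admin.role); // → admin
```

The spread-last pattern is what makes overrides ergonomic: every test states only the fields it cares about, and auto-cleanup hooks can key off the generated unique ids.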
---
## 5. Sample Tests & Helpers
**If {detected_stack} is `frontend` or `fullstack`:**
Create example tests in `{test_dir}/e2e/` demonstrating:
- Given/When/Then format
- data-testid selector strategy
- Factory usage
- Network interception pattern (if applicable)
**If {detected_stack} is `backend` or `fullstack`:**
Create example tests in the idiomatic location for the detected language:
- **Python**: `tests/test_example.py` with pytest fixtures, parametrize, and factory usage
- **Java**: `src/test/java/.../ExampleTest.java` with JUnit 5 annotations, `@BeforeEach` setup
- **Go**: `example_test.go` alongside source with table-driven tests and `testify` assertions
- **C#/.NET**: `tests/ExampleTests.cs` with xUnit `[Fact]`/`[Theory]` and fixture injection
- **Ruby**: `spec/example_spec.rb` with RSpec `describe`/`context`/`it` and factory_bot
Create helpers for:
- API clients (if needed)
- Network utilities (frontend/fullstack only)
- Auth helpers
- Test data factories (language-idiomatic patterns)
**If `config.tea_use_pactjs_utils` is enabled and runtime is Node.js/TypeScript** (i.e., `{detected_stack}` is `frontend` or `fullstack`, or `{detected_stack}` is `backend` with Node.js/TypeScript runtime):
Create Node.js/TypeScript contract test samples per `pact-consumer-framework-setup.md`:
- **Consumer test**: Example using PactV4 `addInteraction()` builder + `createProviderState` + real consumer code with URL injection (`.pacttest.ts` extension)
- **Support files**: Pact config factory (`pact-config.ts`), provider state factories (`provider-states.ts`), local consumer-helpers shim (`consumer-helpers.ts`)
- **Vitest config**: Minimal `vitest.config.pact.ts` (do NOT copy settings from unit config)
- **Shell scripts**: `env-setup.sh`, `publish-pact.sh`, `can-i-deploy.sh`, `record-deployment.sh` in `scripts/`
- **CI workflow**: `contract-test-consumer.yml` with detect-breaking-change action
- **package.json scripts**: `test:pact:consumer`, `publish:pact`, `can:i:deploy:consumer`, `record:consumer:deployment`
- **.gitignore**: Add `/pacts/` and `pact-logs/`
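The provider-state factory idea can be illustrated with a dependency-free sketch — the real `createProviderState` helper ships with the utils package and its actual signature may differ:

```typescript
// Hypothetical provider-state factory sketch. Treat this as illustration
// only; the real helper's API may differ.
interface ProviderState {
  name: string;
  params: Record<string, unknown>;
}

// Returns a named factory so consumer tests stay declarative:
// each state is declared once and parameterized at the call site.
function providerState(name: string) {
  return (params: Record<string, unknown> = {}): ProviderState => ({
    name,
    params,
  });
}

const userExists = providerState('a user exists');

const state = userExists({ userId: '42' });
console.log(state.name); // → a user exists
```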
---
### 6. Orchestration Notes for This Step
For this step, treat these work units as parallelizable when `resolvedMode` is `agent-team` or `subagent`:
- Worker A: directory + framework config + env setup (sections 1-3)
- Worker B: fixtures + factories (section 4)
- Worker C: sample tests + helpers (section 5)
In parallel-capable modes, the runtime decides worker scheduling and concurrency.
If `resolvedMode` is `sequential`, execute sections 1→5 in order.
Regardless of mode, outputs must be identical in structure and quality.
### 7. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-03-scaffold-framework']
lastStep: 'step-03-scaffold-framework'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-03-scaffold-framework'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-03-scaffold-framework'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,105 @@
---
name: 'step-04-docs-and-scripts'
description: 'Document setup and add package.json scripts'
nextStepFile: './step-05-validate-and-summary.md'
outputFile: '{test_dir}/README.md'
progressFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 4: Documentation & Scripts
## STEP GOAL
Create test documentation and add build/test scripts appropriate for `{detected_stack}`.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. tests/README.md
Create `{outputFile}` and include:
- Setup instructions
- Running tests (local/headed/debug)
- Architecture overview (fixtures, factories, helpers)
- Best practices (selectors, isolation, cleanup)
- CI integration notes
- Knowledge base references
---
## 2. Build & Test Scripts
**If {detected_stack} is `frontend` or `fullstack`:**
Add to `package.json` at minimum:
- `test:e2e`: framework execution command (e.g., `npx playwright test`)
**If {detected_stack} is `backend` or `fullstack`:**
Add the idiomatic test commands for the detected framework:
- **Python (pytest)**: Add to a `Makefile` or CI script: `pytest`, `pytest --cov`, `pytest -m integration`
- **Java (JUnit)**: Add to `build.gradle`/`pom.xml`: `./gradlew test`, `mvn test`, `mvn verify` (integration)
- **Go**: Add to `Makefile`: `go test ./...`, `go test -race ./...`, `go test -cover ./...`
- **C#/.NET**: Add to CI scripts or `Makefile`: `dotnet test`, `dotnet test --collect:"XPlat Code Coverage"`
- **Ruby (RSpec)**: Add Bundler binstubs or `Makefile` targets: `bundle exec rspec`, `bundle exec rspec spec/integration`
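For the Playwright case, the `package.json` additions could look like this sketch (`--headed` and `--debug` are standard Playwright CLI flags; script names beyond `test:e2e` are conventions, not requirements):

```json
{
  "scripts": {
    "test:e2e": "npx playwright test",
    "test:e2e:headed": "npx playwright test --headed",
    "test:e2e:debug": "npx playwright test --debug"
  }
}
```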
---
### 3. Save Progress
**Save this step's accumulated work to `{progressFile}`.**
- **If `{progressFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-04-docs-and-scripts']
lastStep: 'step-04-docs-and-scripts'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{progressFile}` already exists**, update:
- Add `'step-04-docs-and-scripts'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-04-docs-and-scripts'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,93 @@
---
name: 'step-05-validate-and-summary'
description: 'Validate against checklist and summarize'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 5: Validate & Summarize
## STEP GOAL
Validate framework setup and provide a completion summary.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Validation
Validate against `checklist.md`:
- Preflight success
- Directory structure created
- Config correctness
- Fixtures/factories created
- Docs and scripts present
Fix any gaps before completion.
---
## 2. Completion Summary
Report:
- Framework selected
- Artifacts created
- Next steps (install deps, run tests)
- Knowledge fragments applied
---
### 3. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-05-validate-and-summary']
lastStep: 'step-05-validate-and-summary'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-05-validate-and-summary'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-05-validate-and-summary'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---
# Step 1: Assess Edit Target
## STEP GOAL:
Identify which output should be edited and load it.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`
### Role Reinforcement:
- ✅ You are the Master Test Architect
### Step-Specific Rules:
- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until target is confirmed
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
## CONTEXT BOUNDARIES:
- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly.
### 1. Identify Target
Ask the user to provide the output file path or select from known outputs.
### 2. Load Target
Read the provided output file in full.
### 3. Confirm
Confirm the target and proceed to edit.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Target identified and loaded
### ❌ SYSTEM FAILURE:
- Proceeding without a confirmed target

View File

@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---
# Step 2: Apply Edits
## STEP GOAL:
Apply the requested edits to the selected output and confirm changes.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`
### Role Reinforcement:
- ✅ You are the Master Test Architect
### Step-Specific Rules:
- 🎯 Only apply edits explicitly requested by the user
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
## CONTEXT BOUNDARIES:
- Available context: selected output and user changes
- Focus: apply edits only
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly.
### 1. Confirm Requested Changes
Restate what will be changed and confirm.
### 2. Apply Changes
Update the output file accordingly.
### 3. Report
Summarize the edits applied.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Changes applied and confirmed
### ❌ SYSTEM FAILURE:
- Unconfirmed edits or missing update

View File

@@ -0,0 +1,67 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/framework-validation-report.md'
validationChecklist: '../checklist.md'
---
# Step 1: Validate Outputs
## STEP GOAL:
Validate outputs using the workflow checklist and record findings.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`
### Role Reinforcement:
- ✅ You are the Master Test Architect
### Step-Specific Rules:
- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`
## CONTEXT BOUNDARIES:
- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly.
### 1. Load Checklist
Read `{validationChecklist}` and list all criteria.
### 2. Validate Outputs
Evaluate outputs against each checklist item.
### 3. Write Report
Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Validation report written
- All checklist items evaluated
### ❌ SYSTEM FAILURE:
- Skipped checklist items
- No report produced

View File

@@ -0,0 +1,73 @@
---
validationDate: 2026-01-27
workflowName: testarch-framework
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-framework
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---
# Validation Report: testarch-framework
**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards
## File Structure & Size
- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8
**Step File Sizes:**
- steps-c/step-01-preflight.md: 69 lines [GOOD]
- steps-c/step-02-select-framework.md: 66 lines [GOOD]
- steps-c/step-03-scaffold-framework.md: 107 lines [GOOD]
- steps-c/step-04-docs-and-scripts.md: 63 lines [GOOD]
- steps-c/step-05-validate-and-summary.md: 61 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES
## Frontmatter Validation
- No frontmatter violations found
## Critical Path Violations
- No {project-root} hardcoded paths detected in body
- No dead relative links detected
## Menu Handling Validation
- No menu structures detected (linear step flow) [N/A]
## Step Type Validation
- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]
## Output Format Validation
- No templates found in workflow root
- Steps with outputFile in frontmatter:
- steps-c/step-04-docs-and-scripts.md
- steps-v/step-01-validate.md
## Validation Design Check
- checklist.md present: YES
- Validation steps folder (steps-v) present: YES
## Instruction Style Check
- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics
## Summary
- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)

View File

@@ -0,0 +1,116 @@
---
validationDate: 2026-01-27
workflowName: testarch-framework
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-framework
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---
# Validation Report: testarch-framework
**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards
## File Structure & Size
- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8
**Step File Sizes:**
- steps-c/step-01-preflight.md: 68 lines [GOOD]
- steps-c/step-02-select-framework.md: 65 lines [GOOD]
- steps-c/step-03-scaffold-framework.md: 106 lines [GOOD]
- steps-c/step-04-docs-and-scripts.md: 62 lines [GOOD]
- steps-c/step-05-validate-and-summary.md: 60 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES
## Frontmatter Validation
- No frontmatter violations found
## Critical Path Violations
### Config Variables (Exceptions)
Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name
- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected
**Status:** ✅ PASS - No critical violations
## Menu Handling Validation
- No menu structures detected (linear step flow) [N/A]
## Step Type Validation
- steps-c/step-01-preflight.md: Init [PASS]
- steps-c/step-02-select-framework.md: Middle [PASS]
- steps-c/step-03-scaffold-framework.md: Middle [PASS]
- steps-c/step-04-docs-and-scripts.md: Middle [PASS]
- steps-c/step-05-validate-and-summary.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]
## Output Format Validation
- Templates present: NONE
- Steps with outputFile in frontmatter:
- steps-c/step-04-docs-and-scripts.md
- steps-v/step-01-validate.md
- checklist.md present: YES
## Validation Design Check
- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed
## Instruction Style Check
- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance
## Collaborative Experience Check
- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals
## Subagent Optimization Opportunities
- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only
## Cohesive Review
- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with Test Architect persona
- Recommendation: READY (minor refinements optional)
## Plan Quality Validation
- Plan file present: workflow-plan.md
- Planned steps found: 8 (all implemented)
- Plan implementation status: Fully Implemented
## Summary
- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)

View File

@@ -0,0 +1,22 @@
# Workflow Plan: testarch-framework
## Create Mode (steps-c)
- step-01-preflight.md
- step-02-select-framework.md
- step-03-scaffold-framework.md
- step-04-docs-and-scripts.md
- step-05-validate-and-summary.md
## Validate Mode (steps-v)
- step-01-validate.md
## Edit Mode (steps-e)
- step-01-assess.md
- step-02-apply-edit.md
## Outputs
- Framework config (playwright.config.ts or cypress.config.ts)
- {project-root}/tests/README.md
- Test support scaffolding under {project-root}/tests

View File

@@ -0,0 +1,41 @@
---
name: bmad-testarch-framework
description: Initialize test framework with Playwright or Cypress. Use when the user says 'lets setup test framework' or 'I want to initialize testing framework'
web_bundle: true
---
# Test Framework Setup
**Goal:** Initialize production-ready test framework architecture (Playwright or Cypress) with fixtures, helpers, and configuration
**Role:** You are the Master Test Architect.
---
## WORKFLOW ARCHITECTURE
This workflow uses **tri-modal step-file architecture**:
- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs
---
## INITIALIZATION SEQUENCE
### 1. Mode Determination
"Welcome to the workflow. What would you like to do?"
- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs
### 2. Route to First Step
- **If C:** Load `steps-c/step-01-preflight.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`

View File

@@ -0,0 +1,48 @@
# Test Architect workflow: bmad-testarch-framework
name: bmad-testarch-framework
# prettier-ignore
description: 'Initialize test framework with Playwright or Cypress. Use when the user says "lets setup test framework" or "I want to initialize testing framework"'
# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated
# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"
# Variables and inputs
variables:
test_dir: "{project-root}/tests" # Root test directory
use_typescript: true # Prefer TypeScript configuration
framework_preference: "auto" # auto, playwright, cypress - user can override auto-detection
project_size: "auto" # auto, small, large - influences framework recommendation
# Output configuration
default_output_file: "{test_dir}/README.md" # Main deliverable is test setup README
# Required tools
required_tools:
- read_file # Read package.json, existing configs
- write_file # Create config files, helpers, fixtures, tests
- create_directory # Create test directory structure
- list_files # Check for existing framework
- search_repo # Find architecture docs
tags:
- qa
- setup
- test-architect
- framework
- initialization
execution_hints:
interactive: false # Minimize prompts; auto-detect when possible
autonomous: true # Proceed without user input unless blocked
iterative: true