docs: update all documentation and add AI tooling configs
- Rewrite README.md with current architecture, features, and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---
name: 'step-01-detect-mode'
description: 'Determine system-level vs epic-level mode and validate prerequisites'
nextStepFile: './step-02-load-context.md'
outputFile: '{test_artifacts}/test-design-progress.md'
---

# Step 1: Detect Mode & Prerequisites

## STEP GOAL

Determine whether to run **System-Level** or **Epic-Level** test design, and confirm required inputs are available.

## MANDATORY EXECUTION RULES

### Universal Rules

- 📖 Read this entire step file before taking any action
- ✅ Speak in `{communication_language}`
- 🚫 Do not load the next step until this step is complete

### Role Reinforcement

- ✅ You are the **Master Test Architect**
- ✅ You prioritize risk-based, evidence-backed decisions

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Mode Detection (Priority Order)

### A) User Intent (Highest Priority)

Use explicit intent if the user already indicates scope:

- **PRD + ADR (no epic/stories)** → **System-Level Mode**
- **Epic + Stories (no PRD/ADR)** → **Epic-Level Mode**
- **Both PRD/ADR + Epic/Stories** → Prefer **System-Level Mode** first

If intent is unclear, ask:

> "Should I create (A) **System-level** test design (PRD + ADR → Architecture + QA docs), or (B) **Epic-level** test design (Epic → single test plan)?"

### B) File-Based Detection (BMad-Integrated)

If user intent is unclear:

- If `{implementation_artifacts}/sprint-status.yaml` exists → **Epic-Level Mode**
- Otherwise → **System-Level Mode**

### C) Ambiguous → Ask

If mode still unclear, ask the user to choose (A) or (B) and **halt** until they respond.

---

## 2. Prerequisite Check (Mode-Specific)

### System-Level Mode Requires:

- PRD (functional + non-functional requirements)
- ADR or architecture decision records
- Architecture or tech-spec document

### Epic-Level Mode Requires:

- Epic and/or story requirements with acceptance criteria
- Architecture context (if available)

### HALT CONDITIONS

If required inputs are missing **and** the user cannot provide them:

- **System-Level**: "Please provide PRD + ADR/architecture docs to proceed."
- **Epic-Level**: "Please provide epic/story requirements or acceptance criteria to proceed."
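
The two requirement lists above reduce to a simple presence check. A minimal sketch, assuming document keys named `prd`, `adr`, `architecture`, and `epicOrStories` (these names are illustrative, not part of the workflow spec):

```javascript
// Sketch of the mode-specific prerequisite check.
// Returns which required inputs are missing so the agent can halt with a message.
function checkPrerequisites(mode, docs) {
  const required = mode === 'system'
    ? ['prd', 'adr', 'architecture']
    : ['epicOrStories'];
  const missing = required.filter((key) => !docs[key]);
  return { ok: missing.length === 0, missing };
}
```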

---

## 3. Confirm Mode

State which mode you will use and why. Then proceed.

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-01-detect-mode']
lastStep: 'step-01-detect-mode'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-detect-mode'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-detect-mode'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.
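
The idempotent update rules above (add the step only if absent, then refresh `lastStep` and `lastSaved`) can be sketched as one function over the parsed frontmatter. The same pattern applies to the Save Progress section of every step in this workflow; the object shape mirrors the YAML keys shown.

```javascript
// Sketch of the idempotent progress-frontmatter update.
function updateProgress(frontmatter, stepName, date) {
  const stepsCompleted = frontmatter.stepsCompleted || [];
  if (!stepsCompleted.includes(stepName)) stepsCompleted.push(stepName); // no duplicates
  return { ...frontmatter, stepsCompleted, lastStep: stepName, lastSaved: date };
}
```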

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/test-design-progress.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Load Output Document

Read `{outputFile}` and parse YAML frontmatter for:

- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save

**If `{outputFile}` does not exist**, display:

"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Display Progress Dashboard

Display:

"📋 **Workflow Resume — Test Design and Risk Assessment**

**Last saved:** {lastSaved}
**Steps completed:** {stepsCompleted.length} of 5

1. ✅/⬜ Detect Mode (step-01-detect-mode)
2. ✅/⬜ Load Context (step-02-load-context)
3. ✅/⬜ Risk & Testability (step-03-risk-and-testability)
4. ✅/⬜ Coverage Plan (step-04-coverage-plan)
5. ✅/⬜ Generate Output (step-05-generate-output)"

---

### 3. Route to Next Step

Based on `lastStep`, load the next incomplete step:

- `'step-01-detect-mode'` → `./step-02-load-context.md`
- `'step-02-load-context'` → `./step-03-risk-and-testability.md`
- `'step-03-risk-and-testability'` → `./step-04-coverage-plan.md`
- `'step-04-coverage-plan'` → `./step-05-generate-output.md`
- `'step-05-generate-output'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.

**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.

**Otherwise**, load the identified step file, read completely, and execute.

The existing content in `{outputFile}` provides context from previously completed steps.
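
The routing rules above amount to a lookup table plus two special cases (complete and unknown). A minimal sketch:

```javascript
// Sketch of the lastStep → next-step routing table from this section.
const NEXT_STEP = {
  'step-01-detect-mode': './step-02-load-context.md',
  'step-02-load-context': './step-03-risk-and-testability.md',
  'step-03-risk-and-testability': './step-04-coverage-plan.md',
  'step-04-coverage-plan': './step-05-generate-output.md',
};

function routeNext(lastStep) {
  if (lastStep === 'step-05-generate-output') return { status: 'complete' };
  const next = NEXT_STEP[lastStep];
  return next ? { status: 'continue', next } : { status: 'unknown' };
}
```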

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Output document loaded and parsed correctly
- Progress dashboard displayed accurately
- Routed to correct next step

### ❌ SYSTEM FAILURE:

- Not loading output document
- Incorrect progress display
- Routing to wrong step
- Re-executing completed steps

**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.

---
name: 'step-02-load-context'
description: 'Load documents, configuration, and knowledge fragments for the chosen mode'
nextStepFile: './step-03-risk-and-testability.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
outputFile: '{test_artifacts}/test-design-progress.md'
---

# Step 2: Load Context & Knowledge Base

## STEP GOAL

Load the required documents, config flags, and knowledge fragments needed to produce accurate test design outputs.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🎯 Only load artifacts required for the selected mode

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Load Configuration

From `{config_source}`:

- Read `tea_use_playwright_utils`
- Read `tea_use_pactjs_utils`
- Read `tea_pact_mcp`
- Read `tea_browser_automation`
- Read `test_stack_type` (if not set, default to `"auto"`)
- Note `test_artifacts`

**Stack Detection** (for context-aware loading):

If `test_stack_type` is `"auto"` or not configured, infer `{detected_stack}` by scanning `{project-root}`:

- **Frontend indicators**: `playwright.config.*`, `cypress.config.*`, `package.json` with react/vue/angular
- **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`
- **Both present** → `fullstack`; only frontend → `frontend`; only backend → `backend`
- Explicit `test_stack_type` overrides auto-detection
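
The detection rules above can be sketched over a flat file listing. This is a simplified sketch: it checks config-file names only and omits the `package.json` framework inspection, which would require reading file contents.

```javascript
// Sketch of stack detection from a list of file names in the project root.
function detectStack(files, explicitType = 'auto') {
  if (explicitType !== 'auto') return explicitType; // explicit config wins

  const frontend = files.some((f) => /^(playwright|cypress)\.config\./.test(f));
  const backendFiles = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml'];
  const backend = files.some((f) => backendFiles.includes(f) || f.endsWith('.csproj'));

  if (frontend && backend) return 'fullstack';
  if (frontend) return 'frontend';
  if (backend) return 'backend';
  return 'unknown';
}
```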

---

## 2. Load Project Artifacts (Mode-Specific)

### System-Level Mode (Phase 3)

Load:

- PRD (FRs + NFRs)
- ADRs or architecture decisions
- Architecture / tech-spec document
- Epics (for scope)

Extract:

- Tech stack & dependencies
- Integration points
- NFRs (performance, security, reliability, compliance)

### Epic-Level Mode (Phase 4)

Load:

- Epic and story docs with acceptance criteria
- PRD (if available)
- Architecture / tech-spec (if available)
- Prior system-level test-design outputs (if available)

Extract:

- Testable requirements
- Integration points
- Known coverage gaps

---

## 3. Analyze Existing Test Coverage (Epic-Level)

If epic-level:

- Scan the repository for existing tests (search for `tests/`, `spec`, `e2e`, `api` folders)
- Identify coverage gaps and flaky areas
- Note existing fixture and test patterns

### Browser Exploration (if `tea_browser_automation` is `cli` or `auto`)

> **Fallback:** If CLI is not installed, fall back to MCP (if available) or skip browser exploration and rely on code/doc analysis.

**CLI Exploration Steps:**
All commands use the same named session to target the correct browser:

1. `playwright-cli -s=tea-explore open <target_url>`
2. `playwright-cli -s=tea-explore snapshot` → capture page structure and element refs
3. `playwright-cli -s=tea-explore screenshot --filename={test_artifacts}/exploration/explore-<page>.png`
4. Analyze snapshot output to identify testable elements and flows
5. `playwright-cli -s=tea-explore close`

Store artifacts under `{test_artifacts}/exploration/`

> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-explore close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.

---

### Tiered Knowledge Loading

Load fragments based on their `tier` classification in `tea-index.csv`:

1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)

> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.
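
The three-tier policy above can be sketched as a filter over parsed index rows. The `{ name, tier, tags }` row shape is an assumption about how `tea-index.csv` would parse; it is illustrative, not the actual index schema.

```javascript
// Sketch of tier-based fragment selection: core always, extended on demand,
// specialized only when a use-case tag matches.
function selectFragments(rows, { extended = false, useCases = [] } = {}) {
  return rows
    .filter((row) => {
      if (row.tier === 'core') return true;         // always load
      if (row.tier === 'extended') return extended; // load on demand
      if (row.tier === 'specialized') {
        return (row.tags || []).some((tag) => useCases.includes(tag));
      }
      return false;
    })
    .map((row) => row.name);
}
```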

### Playwright Utils Loading Profiles

**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:

- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)

- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
  Load: all Playwright Utils core fragments (~4,500 lines)

**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none found, use API-only profile.
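
The profile choice reduces to one check over the scanned test sources, sketched below. Per the detection rule, the API-only profile wins whenever no browser-style calls are found, even on a frontend stack.

```javascript
// Sketch of the loading-profile decision: API-only unless browser tests exist
// on a non-backend stack.
function choosePlaywrightProfile(detectedStack, testFileContents) {
  const hasBrowserTests = testFileContents.some(
    (src) => src.includes('page.goto') || src.includes('page.locator'));

  if (detectedStack === 'backend' || !hasBrowserTests) {
    return { profile: 'api-only', fragments: ['overview', 'api-request', 'auth-session', 'recurse'] };
  }
  return { profile: 'full-ui-api', fragments: 'all-core' };
}
```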

### Pact.js Utils Loading

**If `tea_use_pactjs_utils` is enabled** (and `{detected_stack}` is `backend` or `fullstack`, or microservices indicators detected):

Load: `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**If `tea_use_pactjs_utils` is disabled** but contract testing is relevant:

Load: `contract-testing.md`

### Pact MCP Loading

**If `tea_pact_mcp` is `"mcp"`:**

Load: `pact-mcp.md` — enables agent to use SmartBear MCP "Fetch Provider States" and "Matrix" tools to understand existing contract landscape during test design.

## 4. Load Knowledge Base Fragments

Use `{knowledgeIndex}` to select and load only relevant fragments.

### System-Level Mode (Required)

- `adr-quality-readiness-checklist.md`
- `test-levels-framework.md`
- `risk-governance.md`
- `test-quality.md`

### Epic-Level Mode (Required)

- `risk-governance.md`
- `probability-impact.md`
- `test-levels-framework.md`
- `test-priorities-matrix.md`

**Playwright CLI (if `tea_browser_automation` is "cli" or "auto"):**

- `playwright-cli.md`

**MCP Patterns (if `tea_browser_automation` is "mcp" or "auto"):**

- (existing MCP-related fragments, if any are added in future)

**Pact.js Utils (if enabled — both System-Level and Epic-Level):**

- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**Contract Testing (if pactjs-utils disabled but relevant):**

- `contract-testing.md`

**Pact MCP (if tea_pact_mcp is "mcp"):**

- `pact-mcp.md`

---

## 5. Confirm Loaded Inputs

Summarize what was loaded and confirm with the user if anything is missing.

---

## 6. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-02-load-context']
lastStep: 'step-02-load-context'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-load-context'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-load-context'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

---
name: 'step-03-risk-and-testability'
description: 'Perform testability review (system-level) and risk assessment'
nextStepFile: './step-04-coverage-plan.md'
outputFile: '{test_artifacts}/test-design-progress.md'
---

# Step 3: Testability & Risk Assessment

## STEP GOAL

Produce a defensible testability review (system-level) and a risk assessment matrix (all modes).

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🎯 Base conclusions on evidence from loaded artifacts

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. System-Level Mode: Testability Review

If **system-level**, evaluate architecture for:

- **Controllability** (state seeding, mockability, fault injection)
- **Observability** (logs, metrics, traces, deterministic assertions)
- **Reliability** (isolation, reproducibility, parallel safety)

**Structure output as:**

1. **🚨 Testability Concerns** (actionable issues first)
2. **✅ Testability Assessment Summary** (what is already strong)

Also identify **ASRs** (Architecturally Significant Requirements):

- Mark each as **ACTIONABLE** or **FYI**

---

## 2. All Modes: Risk Assessment

Using `risk-governance.md` and `probability-impact.md` (if loaded):

- Identify real risks (not just features)
- Classify by category: TECH / SEC / PERF / DATA / BUS / OPS
- Score Probability (1–3) and Impact (1–3)
- Calculate Risk Score (P × I)
- Flag high risks (score ≥ 6)
- Define mitigation, owner, and timeline
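
The scoring rule above is a straight P × I product on 1–3 scales, with a high-risk flag at 6 or above. A minimal sketch:

```javascript
// Sketch of the P × I risk scoring rule from the bullets above.
// probability and impact are each on a 1-3 scale, so score ranges 1-9.
function scoreRisk({ id, category, probability, impact }) {
  const score = probability * impact;
  return { id, category, score, high: score >= 6 };
}
```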

---

## 3. Summarize Risk Findings

Summarize the highest risks and their mitigation priorities.

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-03-risk-and-testability']
lastStep: 'step-03-risk-and-testability'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03-risk-and-testability'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03-risk-and-testability'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

---
name: 'step-04-coverage-plan'
description: 'Design test coverage, priorities, execution strategy, and estimates'
nextStepFile: './step-05-generate-output.md'
outputFile: '{test_artifacts}/test-design-progress.md'
---

# Step 4: Coverage Plan & Execution Strategy

## STEP GOAL

Create the test coverage matrix, prioritize scenarios, and define execution strategy, resource estimates, and quality gates.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Avoid redundant coverage across test levels

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Coverage Matrix

For each requirement or risk-driven scenario:

- Decompose into atomic test scenarios
- Select **test level** (E2E / API / Component / Unit) using `test-levels-framework.md`
- Ensure no duplicate coverage across levels
- Assign priorities (P0–P3) using `test-priorities-matrix.md`

**Priority rules:**

- P0: Blocks core functionality + high risk + no workaround
- P1: Critical paths + medium/high risk
- P2: Secondary flows + low/medium risk
- P3: Nice-to-have, exploratory, benchmarks
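
The priority rules above can be sketched as a first-match cascade over scenario attributes. The attribute names (`blocksCore`, `criticalPath`, and so on) are assumptions chosen to mirror the wording of the bullets, not fields from `test-priorities-matrix.md`.

```javascript
// Sketch of P0-P3 assignment as a first-match cascade.
function assignPriority({ blocksCore, risk, hasWorkaround, criticalPath, secondaryFlow }) {
  if (blocksCore && risk === 'high' && !hasWorkaround) return 'P0';
  if (criticalPath && (risk === 'high' || risk === 'medium')) return 'P1';
  if (secondaryFlow && (risk === 'low' || risk === 'medium')) return 'P2';
  return 'P3'; // nice-to-have, exploratory, benchmarks
}
```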

---

## 2. Execution Strategy (Keep Simple)

Use a **PR / Nightly / Weekly** model:

- **PR**: All functional tests if <15 minutes
- **Nightly/Weekly**: Long-running or expensive suites (perf, chaos, large datasets)
- Avoid re-listing all tests (refer to coverage plan)

---

## 3. Resource Estimates (Ranges Only)

Provide intervals (no false precision):

- P0: e.g., "~25–40 hours"
- P1: e.g., "~20–35 hours"
- P2: e.g., "~10–30 hours"
- P3: e.g., "~2–5 hours"
- Total and timeline as ranges

---

## 4. Quality Gates

Define thresholds:

- P0 pass rate = 100%
- P1 pass rate ≥ 95%
- High-risk mitigations complete before release
- Coverage target ≥ 80% (adjust if justified)
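
The four thresholds above can be sketched as a single gate evaluation that reports every failing criterion, not just the first. Rates are expressed as fractions (1.0 = 100%).

```javascript
// Sketch of the quality-gate evaluation from the thresholds above.
function evaluateGates({ p0PassRate, p1PassRate, highRisksMitigated, coverage }) {
  const failures = [];
  if (p0PassRate < 1.0) failures.push('P0 pass rate must be 100%');
  if (p1PassRate < 0.95) failures.push('P1 pass rate must be >= 95%');
  if (!highRisksMitigated) failures.push('High-risk mitigations incomplete');
  if (coverage < 0.8) failures.push('Coverage below 80% target');
  return { pass: failures.length === 0, failures };
}
```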

---

## 5. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-04-coverage-plan']
lastStep: 'step-04-coverage-plan'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-04-coverage-plan'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-04-coverage-plan'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

---
name: 'step-05-generate-output'
description: 'Generate output documents with adaptive orchestration (agent-team, subagent, or sequential)'
outputFile: '{test_artifacts}/test-design-epic-{epic_num}.md'
progressFile: '{test_artifacts}/test-design-progress.md'
---

# Step 5: Generate Outputs & Validate

## STEP GOAL

Write the final test-design document(s) using the correct template(s), then validate against the checklist.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Use the provided templates and output paths
- ✅ Resolve execution mode from explicit user request first, then config
- ✅ Apply fallback rules deterministically when requested mode is unsupported

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 0. Resolve Execution Mode (User Override First)

```javascript
const orchestrationContext = {
  config: {
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: config.tea_capability_probe !== false, // true by default
  },
  timestamp: new Date().toISOString().replace(/[:.]/g, '-'),
};

const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(orchestrationContext.config.execution_mode) || 'auto';
const probeEnabled = orchestrationContext.config.capability_probe;

const supports = { subagent: false, agentTeam: false };
if (probeEnabled) {
  supports.subagent = runtime.canLaunchSubagents?.() === true;
  supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}

let resolvedMode = requestedMode;
if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}
```

Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)

## 1. Select Output Template(s)

### System-Level Mode (Phase 3)

Generate **two** documents:

- `{test_artifacts}/test-design-architecture.md` using `test-design-architecture-template.md`
- `{test_artifacts}/test-design-qa.md` using `test-design-qa-template.md`

If `resolvedMode` is `agent-team` or `subagent`, these two documents can be generated in parallel as independent workers, then reconciled for consistency.

### Epic-Level Mode (Phase 4)

Generate **one** document:

- `{outputFile}` using `test-design-template.md`
- If `epic_num` is unclear, ask the user

Epic-level mode remains single-worker by default (one output artifact).

---

## 2. Populate Templates

Ensure the outputs include:

- Risk assessment matrix
- Coverage matrix and priorities
- Execution strategy
- Resource estimates (ranges)
- Quality gate criteria
- Any mode-specific sections required by the template

---

## 3. Validation

Validate the output(s) against:

- `checklist.md` in this workflow folder
- [ ] CLI sessions cleaned up (no orphaned browsers)
- [ ] Temp artifacts stored in `{test_artifacts}/`, not in random locations

If any checklist criteria are missing, fix before completion.

---

## 4. Generate BMAD Handoff Document (System-Level Mode Only)

**If this is a system-level test design** (not component/feature level):

1. Copy `test-design-handoff-template.md` to `{test_artifacts}/test-design/{project_name}-handoff.md`
2. Populate all sections from the test design output:
   - Fill TEA Artifacts Inventory with actual paths
   - Extract P0/P1 risks into Epic-Level guidance
   - Map critical test scenarios to Story-Level guidance
   - Build risk-to-story mapping table from risk register
3. Save alongside the test design document

> **Note**: The handoff document is designed for consumption by BMAD's `create-epics-and-stories` workflow. It is only generated for system-level test designs where epic/story decomposition is relevant.

---

## 5. Polish Output

Before finalizing, review the complete output document for quality:

1. **Remove duplication**: Progressive-append workflow may have created repeated sections — consolidate
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)

---

## 6. Completion Report

Summarize:

- Mode used
- Output file paths
- Key risks and gate thresholds
- Any open assumptions

---

## 7. Save Progress

**Save this step's accumulated work to `{progressFile}`.**

- **If `{progressFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-05-generate-output']
lastStep: 'step-05-generate-output'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{progressFile}` already exists**, update:
  - Add `'step-05-generate-output'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-05-generate-output'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.