docs: update all documentation and add AI tooling configs

- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agent)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 13:29:03 -04:00
parent c5b377e788
commit 647cbec54f
3246 changed files with 479789 additions and 983 deletions


@@ -0,0 +1,132 @@
---
name: 'step-01-preflight'
description: 'Verify prerequisites and gather project context'
nextStepFile: './step-02-select-framework.md'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 1: Preflight Checks
## STEP GOAL
Verify the project is ready for framework scaffolding and gather key context.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if preflight requirements fail
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Stack Detection
**Read `config.test_stack_type`** from `{config_source}`.
**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):
- Scan `{project-root}` for project manifests:
- **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
- **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
- **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- Explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)
Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
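The detection rules above can be sketched as a pure function over the file names found at the project root (a sketch only — `detectStack` and its inputs are hypothetical names; the actual directory scan is assumed to happen elsewhere):

```javascript
// Sketch of the auto-detection algorithm. Patterns mirror the indicator
// lists above; the runtime scanning of {project-root} is not shown.
const FRONTEND_PATTERNS = [/^playwright\.config\./, /^vite\.config\./, /^webpack\.config\./];
const BACKEND_MANIFESTS = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml'];
const FRONTEND_DEPS = ['react', 'vue', 'angular', 'next'];

function detectStack(files, packageJson) {
  // package.json counts as a frontend indicator only with a known UI dependency
  const deps = Object.keys({
    ...(packageJson?.dependencies || {}),
    ...(packageJson?.devDependencies || {}),
  });
  const hasFrontend =
    FRONTEND_PATTERNS.some((p) => files.some((f) => p.test(f))) ||
    deps.some((d) => FRONTEND_DEPS.includes(d));
  const hasBackend = files.some(
    (f) => BACKEND_MANIFESTS.includes(f) || /\.(csproj|sln)$/.test(f)
  );
  if (hasFrontend && hasBackend) return 'fullstack';
  if (hasFrontend) return 'frontend';
  if (hasBackend) return 'backend';
  return null; // no recognizable manifest — preflight should halt
}
```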
---
## 2. Validate Prerequisites
**If {detected_stack} is `frontend` or `fullstack`:**
- `package.json` exists in project root
- No existing E2E framework (`playwright.config.*`, `cypress.config.*`, `cypress.json`)
**If {detected_stack} is `backend` or `fullstack`:**
- At least one backend project manifest exists (`pyproject.toml`, `pom.xml`, `build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`)
- No existing test framework config that conflicts (e.g., `conftest.py` with full pytest suite, `src/test/` with JUnit suite)
- Architecture/stack context available (project type, bundler, dependencies)
If any fail, **HALT** and report the missing requirement.
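The prerequisite gate can be sketched as a function returning every failed requirement, so the HALT report is concrete rather than a bare refusal (hypothetical names; real checks would hit the filesystem):

```javascript
// Sketch of the prerequisite checks above: an empty result means proceed,
// a non-empty result means HALT and report each missing requirement.
const E2E_CONFIGS = [/^playwright\.config\./, /^cypress\.config\./, /^cypress\.json$/];
const BACKEND_MANIFESTS = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml'];

function validatePrerequisites(stack, files) {
  const failures = [];
  if (stack === 'frontend' || stack === 'fullstack') {
    if (!files.includes('package.json')) failures.push('package.json missing from project root');
    if (files.some((f) => E2E_CONFIGS.some((p) => p.test(f)))) failures.push('existing E2E framework detected');
  }
  if (stack === 'backend' || stack === 'fullstack') {
    if (!files.some((f) => BACKEND_MANIFESTS.includes(f) || /\.csproj$/.test(f))) {
      failures.push('no backend project manifest found');
    }
  }
  return failures;
}
```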
---
## 3. Gather Project Context
**If {detected_stack} is `frontend` or `fullstack`:**
- Read `package.json` to identify framework, bundler, dependencies
**If {detected_stack} is `backend` or `fullstack`:**
- Read the relevant project manifest (`pyproject.toml`, `pom.xml`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`) to identify language, framework, and dependencies
- Check for architecture docs (`architecture.md`, `tech-spec*.md`) if available
- Note auth requirements and APIs (if documented)
---
## 4. Confirm Findings
Summarize:
- Project type and bundler
- Whether a framework is already installed
- Any relevant context docs found
---
### 5. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-01-preflight']
lastStep: 'step-01-preflight'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-01-preflight'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-01-preflight'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
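The update rules above reduce to a small pure function over the parsed frontmatter object (a sketch; YAML parsing and serialization are assumed to happen elsewhere):

```javascript
// Sketch of the frontmatter update: idempotent append to stepsCompleted,
// then overwrite lastStep and lastSaved.
function updateProgress(frontmatter, stepName, date) {
  const fm = frontmatter || { stepsCompleted: [] };
  if (!fm.stepsCompleted.includes(stepName)) {
    fm.stepsCompleted.push(stepName); // only if not already present
  }
  fm.lastStep = stepName;
  fm.lastSaved = date;
  return fm;
}
```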
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,116 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 1b: Resume Workflow
## STEP GOAL
Resume an interrupted workflow by loading the existing progress document, verifying previously created artifacts still exist on disk, displaying progress, and routing to the next incomplete step.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
### 1. Load Output Document
Read `{outputFile}` and parse YAML frontmatter for:
- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save
**If `{outputFile}` does not exist**, display:
"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."
**THEN:** Halt. Do not proceed.
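Extracting the three keys can be sketched as a minimal frontmatter parser (a sketch for exactly this document shape; a real implementation would use a proper YAML library):

```javascript
// Minimal sketch: pull stepsCompleted, lastStep, and lastSaved out of the
// leading YAML frontmatter block. Returns null when no frontmatter exists,
// which is the "no previous progress" halt case above.
function parseProgressFrontmatter(doc) {
  const m = doc.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return null;
  const fm = {};
  for (const line of m[1].split('\n')) {
    const kv = line.match(/^(\w+):\s*(.*)$/);
    if (!kv) continue;
    const [, key, raw] = kv;
    if (key === 'stepsCompleted') {
      fm.stepsCompleted = [...raw.matchAll(/'([^']+)'/g)].map((x) => x[1]);
    } else {
      fm[key] = raw.replace(/^'|'$/g, ''); // strip surrounding quotes
    }
  }
  return fm;
}
```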
---
### 2. Verify Previously Created Artifacts
Since this workflow creates code files, verify that artifacts from completed steps still exist on disk:
- If `step-01-preflight` completed: Confirm `package.json` still exists (frontend/fullstack stacks)
- If `step-03-scaffold-framework` completed: Confirm directory structure and config files exist
- If `step-04-docs-and-scripts` completed: Confirm `{test_dir}/README.md` exists
If any expected artifacts are missing, warn the user and suggest re-running from the step that created them.
---
### 3. Display Progress Dashboard
Display:
"📋 **Workflow Resume — Test Framework Setup**
**Last saved:** {lastSaved}
**Steps completed:** {stepsCompleted.length} of 5
1. ✅/⬜ Preflight Checks (step-01-preflight)
2. ✅/⬜ Select Framework (step-02-select-framework)
3. ✅/⬜ Scaffold Framework (step-03-scaffold-framework)
4. ✅/⬜ Docs & Scripts (step-04-docs-and-scripts)
5. ✅/⬜ Validate & Summary (step-05-validate-and-summary)"
---
### 4. Route to Next Step
Based on `lastStep`, load the next incomplete step:
- `'step-01-preflight'` → `./step-02-select-framework.md`
- `'step-02-select-framework'` → `./step-03-scaffold-framework.md`
- `'step-03-scaffold-framework'` → `./step-04-docs-and-scripts.md`
- `'step-04-docs-and-scripts'` → `./step-05-validate-and-summary.md`
- `'step-05-validate-and-summary'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.
**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.
**Otherwise**, load the identified step file, read completely, and execute.
The existing content in `{outputFile}` provides context from previously completed steps.
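The routing table above can be expressed as a plain lookup, with `null` for the already-complete case and `undefined` for the unknown-progress-state branch (names are illustrative):

```javascript
// Sketch of the resume routing: lastStep -> next step file.
const NEXT_STEP = {
  'step-01-preflight': './step-02-select-framework.md',
  'step-02-select-framework': './step-03-scaffold-framework.md',
  'step-03-scaffold-framework': './step-04-docs-and-scripts.md',
  'step-04-docs-and-scripts': './step-05-validate-and-summary.md',
  'step-05-validate-and-summary': null, // workflow already complete
};

function routeNext(lastStep) {
  return Object.prototype.hasOwnProperty.call(NEXT_STEP, lastStep)
    ? NEXT_STEP[lastStep]
    : undefined; // unknown progress state -> start fresh
}
```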
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Output document loaded and parsed correctly
- Previously created artifacts verified on disk
- Progress dashboard displayed accurately
- Routed to correct next step
### ❌ SYSTEM FAILURE:
- Not loading output document
- Not verifying existing artifacts
- Incorrect progress display
- Routing to wrong step
- Re-executing completed steps
**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.


@@ -0,0 +1,117 @@
---
name: 'step-02-select-framework'
description: 'Select Playwright or Cypress and justify choice'
nextStepFile: './step-03-scaffold-framework.md'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 2: Framework Selection
## STEP GOAL
Choose the most appropriate framework and document the rationale.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Selection Logic
Use `{detected_stack}` from Step 1 to guide framework selection.
**If {detected_stack} is `frontend` or `fullstack` (browser-based testing):**
Default to **Playwright** unless strong reasons suggest Cypress.
**Playwright recommended when:**
- Large or complex repo
- Multi-browser support needed
- Heavy API + UI integration
- CI speed/parallelism is important
**Cypress recommended when:**
- Small team prioritizes DX
- Component testing focus
- Simpler setup needed
**If {detected_stack} is `backend` (no browser-based testing):**
Select the framework matching the project language:
- **Python**: pytest (default), unittest
- **Java/Kotlin**: JUnit 5 (default), TestNG
- **Go**: Go test (built-in)
- **C#/.NET**: xUnit (default), NUnit, MSTest
- **Ruby**: RSpec (default), Minitest
- **Rust**: cargo test (built-in)
**If {detected_stack} is `fullstack`:**
Select both a browser-based framework (Playwright/Cypress) AND the appropriate backend framework for the detected language.
Respect `config.test_framework` if explicitly set (not `"auto"`).
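The selection defaults above can be sketched as one function; the Cypress-vs-Playwright heuristics are judgment calls made earlier in the step and are passed in here as a flag (all names hypothetical):

```javascript
// Sketch of framework selection: explicit config wins, otherwise defaults
// per stack and language as listed above.
const BACKEND_DEFAULTS = {
  python: 'pytest', java: 'JUnit 5', kotlin: 'JUnit 5', go: 'go test',
  csharp: 'xUnit', ruby: 'RSpec', rust: 'cargo test',
};

function selectFrameworks(stack, language, { preferCypress = false, configured = 'auto' } = {}) {
  if (configured !== 'auto') return [configured]; // explicit config.test_framework wins
  const picks = [];
  if (stack === 'frontend' || stack === 'fullstack') {
    picks.push(preferCypress ? 'Cypress' : 'Playwright'); // Playwright is the default
  }
  if (stack === 'backend' || stack === 'fullstack') {
    picks.push(BACKEND_DEFAULTS[language]);
  }
  return picks;
}
```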
---
## 2. Announce Decision
State the selected framework and reasoning.
---
### 3. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-02-select-framework']
lastStep: 'step-02-select-framework'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-02-select-framework'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-02-select-framework'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,323 @@
---
name: 'step-03-scaffold-framework'
description: 'Create framework scaffold with adaptive orchestration (agent-team, subagent, or sequential)'
nextStepFile: './step-04-docs-and-scripts.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 3: Scaffold Framework
## STEP GOAL
Generate the test directory structure, configuration files, fixtures, factories, helpers, and sample tests using deterministic mode selection with runtime fallback.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Apply knowledge base patterns where required
- ✅ Resolve execution mode from explicit user request first, then config
- ✅ Apply fallback rules deterministically when requested mode is unsupported
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 0. Resolve Execution Mode (User Override First)
```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
if (typeof value === 'string') {
const normalized = value.trim().toLowerCase();
if (['false', '0', 'off', 'no'].includes(normalized)) return false;
if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
}
if (value === undefined || value === null) return defaultValue;
return Boolean(value);
};
const orchestrationContext = {
config: {
execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
},
timestamp: new Date().toISOString().replace(/[:.]/g, '-'),
};
const normalizeUserExecutionMode = (mode) => {
if (typeof mode !== 'string') return null;
const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');
if (normalized === 'auto') return 'auto';
if (normalized === 'sequential') return 'sequential';
if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
return 'subagent';
}
if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
return 'agent-team';
}
return null;
};
const normalizeConfigExecutionMode = (mode) => {
if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
return mode;
}
return null;
};
// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);
const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(orchestrationContext.config.execution_mode) || 'auto';
const probeEnabled = orchestrationContext.config.capability_probe;
const supports = { subagent: false, agentTeam: false };
if (probeEnabled) {
supports.subagent = runtime.canLaunchSubagents?.() === true;
supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}
let resolvedMode = requestedMode;
if (requestedMode === 'auto') {
if (supports.agentTeam) resolvedMode = 'agent-team';
else if (supports.subagent) resolvedMode = 'subagent';
else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
resolvedMode = 'sequential';
}
```
Resolution precedence:
1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)
## 1. Create Directory Structure
Use `{detected_stack}` from Step 1 to determine directory layout.
**If {detected_stack} is `frontend` or `fullstack`:**
- `{test_dir}/e2e/`
- `{test_dir}/support/fixtures/`
- `{test_dir}/support/helpers/`
- `{test_dir}/support/page-objects/` (optional)
**If {detected_stack} is `backend` or `fullstack`:**
Create the idiomatic test directory for the detected language:
- **Python (pytest)**: `tests/` with `conftest.py`, `tests/unit/`, `tests/integration/`, `tests/api/`
- **Java/Kotlin (JUnit)**: `src/test/java/` mirroring `src/main/java/` package structure, with `unit/`, `integration/`, `api/` sub-packages
- **Go**: `*_test.go` files alongside source files (Go convention), plus `testdata/` for fixtures
- **C#/.NET (xUnit)**: `tests/` project with `Unit/`, `Integration/`, `Api/` directories
- **Ruby (RSpec)**: `spec/` with `spec/unit/`, `spec/integration/`, `spec/api/`, `spec/support/`
- **Rust**: `tests/` for integration tests, inline `#[cfg(test)]` modules for unit tests
**If `config.tea_use_pactjs_utils` is enabled and runtime is Node.js/TypeScript** (i.e., `{detected_stack}` is `frontend` or `fullstack`, or `{detected_stack}` is `backend` with Node.js/TypeScript runtime):
Create Node.js/TypeScript contract testing directory structure per `pact-consumer-framework-setup.md`:
- `tests/contract/consumer/` — consumer contract test files (`.pacttest.ts` extension)
- `tests/contract/support/` — pact config, provider state factories, consumer helpers shim
- `scripts/` — shell scripts (`env-setup.sh`, `publish-pact.sh`, `can-i-deploy.sh`, `record-deployment.sh`)
- `.github/actions/detect-breaking-change/` — PR checkbox-driven breaking change detection
- `.github/workflows/contract-test-consumer.yml` — consumer CDC CI workflow
---
## 2. Generate Framework Config
**If {detected_stack} is `frontend` or `fullstack`:**
Create `playwright.config.ts` or `cypress.config.ts` with:
- **Timeouts**: action 15s, navigation 30s, test 60s
- **Base URL**: env fallback (`BASE_URL`)
- **Artifacts**: retain-on-failure (trace/screenshot/video)
- **Reporters**: HTML + JUnit + console
- **Parallelism**: enabled (CI tuned)
Use TypeScript if `use_typescript: true`.
**If {detected_stack} is `backend` or `fullstack`:**
Create the idiomatic test config for the detected framework:
- **pytest**: `pyproject.toml` `[tool.pytest.ini_options]` or `pytest.ini` with markers, test paths, coverage settings
- **JUnit**: `build.gradle`/`pom.xml` test configuration with JUnit 5 dependencies, Surefire/Failsafe plugins
- **Go test**: no config file needed (Go convention); optionally create `Makefile` test targets
- **xUnit**: `.csproj` test project with xUnit and coverlet dependencies
- **RSpec**: `.rspec` config file with `spec_helper.rb` and `rails_helper.rb` (if Rails)
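For the Playwright frontend case, a minimal `playwright.config.js` reflecting the settings above might look like this (a sketch with illustrative values, not a mandated config; the TypeScript variant is analogous):

```javascript
// Sketch of a Playwright config matching the settings listed above.
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  timeout: 60_000, // test timeout 60s
  use: {
    actionTimeout: 15_000,     // action 15s
    navigationTimeout: 30_000, // navigation 30s
    baseURL: process.env.BASE_URL || 'http://localhost:3000', // env fallback
    trace: 'retain-on-failure',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
  reporter: [['html'], ['junit', { outputFile: 'results.xml' }], ['list']],
  fullyParallel: true, // parallelism, tuned for CI via workers
  workers: process.env.CI ? 4 : undefined,
});
```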
---
## 3. Environment Setup
Create `.env.example` with `TEST_ENV`, `BASE_URL`, `API_URL`.
**Stack-conditional environment files:**
**If {detected_stack} is `frontend` or `fullstack` (Node.js):**
- `.nvmrc` using current LTS Node (prefer Node 24+)
**If {detected_stack} is `backend`:**
Create the idiomatic version file for the detected language:
- **Python**: `.python-version` with current stable Python (prefer 3.12+)
- **Java**: `.java-version` or `JAVA_HOME` documentation in `.env.example`
- **Go**: Go version is already in `go.mod` (no additional file needed)
- **C#/.NET**: `global.json` with SDK version if not already present
- **Ruby**: `.ruby-version` with current stable Ruby
---
## 4. Fixtures & Factories
Read `{config_source}` and use `{knowledgeIndex}` to load fragments based on `config.tea_use_playwright_utils`:
**If Playwright Utils enabled:**
- `overview.md`, `fixtures-composition.md`, `auth-session.md`, `api-request.md`, `burn-in.md`, `network-error-monitor.md`, `data-factories.md`
- Recommend installing `@seontechnologies/playwright-utils`
**If disabled:**
- `fixture-architecture.md`, `data-factories.md`, `network-first.md`, `playwright-config.md`, `test-quality.md`
**If Pact.js Utils enabled** (`config.tea_use_pactjs_utils`):
- `pact-consumer-framework-setup.md` (CRITICAL: load this for directory structure, scripts, CI workflow, and PactV4 patterns)
- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`, `contract-testing.md`
- Recommend installing `@seontechnologies/pactjs-utils` and `@pact-foundation/pact`
**If Pact.js Utils disabled but contract testing relevant:**
- `contract-testing.md`
**If Pact MCP enabled** (`config.tea_pact_mcp` is `"mcp"`):
- `pact-mcp.md`
Implement:
- Fixture index with `mergeTests`
- Auto-cleanup hooks
- Faker-based data factories with overrides
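The factory-with-overrides pattern can be sketched as below; a real implementation would draw defaults from `@faker-js/faker`, while fixed placeholders are used here so the sketch stays self-contained (`makeUser` is a hypothetical example factory):

```javascript
// Sketch of a data factory with overrides: sensible defaults, caller wins.
let seq = 0;

function makeUser(overrides = {}) {
  seq += 1;
  return {
    id: `user-${seq}`,               // faker.string.uuid() in a real factory
    email: `user${seq}@example.com`, // faker.internet.email() in a real factory
    role: 'member',
    ...overrides,                    // caller-supplied fields win
  };
}
```

Usage: `makeUser({ role: 'admin' })` yields a valid user with only the role changed, which keeps tests readable and isolated.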
---
## 5. Sample Tests & Helpers
**If {detected_stack} is `frontend` or `fullstack`:**
Create example tests in `{test_dir}/e2e/` demonstrating:
- Given/When/Then format
- data-testid selector strategy
- Factory usage
- Network interception pattern (if applicable)
**If {detected_stack} is `backend` or `fullstack`:**
Create example tests in the idiomatic location for the detected language:
- **Python**: `tests/test_example.py` with pytest fixtures, parametrize, and factory usage
- **Java**: `src/test/java/.../ExampleTest.java` with JUnit 5 annotations, `@BeforeEach` setup
- **Go**: `example_test.go` alongside source with table-driven tests and `testify` assertions
- **C#/.NET**: `tests/ExampleTests.cs` with xUnit `[Fact]`/`[Theory]` and fixture injection
- **Ruby**: `spec/example_spec.rb` with RSpec `describe`/`context`/`it` and factory_bot
Create helpers for:
- API clients (if needed)
- Network utilities (frontend/fullstack only)
- Auth helpers
- Test data factories (language-idiomatic patterns)
**If `config.tea_use_pactjs_utils` is enabled and runtime is Node.js/TypeScript** (i.e., `{detected_stack}` is `frontend` or `fullstack`, or `{detected_stack}` is `backend` with Node.js/TypeScript runtime):
Create Node.js/TypeScript contract test samples per `pact-consumer-framework-setup.md`:
- **Consumer test**: Example using PactV4 `addInteraction()` builder + `createProviderState` + real consumer code with URL injection (`.pacttest.ts` extension)
- **Support files**: Pact config factory (`pact-config.ts`), provider state factories (`provider-states.ts`), local consumer-helpers shim (`consumer-helpers.ts`)
- **Vitest config**: Minimal `vitest.config.pact.ts` (do NOT copy settings from unit config)
- **Shell scripts**: `env-setup.sh`, `publish-pact.sh`, `can-i-deploy.sh`, `record-deployment.sh` in `scripts/`
- **CI workflow**: `contract-test-consumer.yml` with detect-breaking-change action
- **package.json scripts**: `test:pact:consumer`, `publish:pact`, `can:i:deploy:consumer`, `record:consumer:deployment`
- **.gitignore**: Add `/pacts/` and `pact-logs/`
---
### 6. Orchestration Notes for This Step
For this step, treat these work units as parallelizable when `resolvedMode` is `agent-team` or `subagent`:
- Worker A: directory + framework config + env setup (sections 1-3)
- Worker B: fixtures + factories (section 4)
- Worker C: sample tests + helpers (section 5)
In parallel-capable modes, runtime decides worker scheduling and concurrency.
If `resolvedMode` is `sequential`, execute sections 1→5 in order.
Regardless of mode, outputs must be identical in structure and quality.
### 7. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-03-scaffold-framework']
lastStep: 'step-03-scaffold-framework'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-03-scaffold-framework'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-03-scaffold-framework'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,105 @@
---
name: 'step-04-docs-and-scripts'
description: 'Document setup and add package.json scripts'
nextStepFile: './step-05-validate-and-summary.md'
outputFile: '{test_dir}/README.md'
progressFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 4: Documentation & Scripts
## STEP GOAL
Create test documentation and add build/test scripts appropriate for `{detected_stack}`.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. `{test_dir}/README.md`
Create `{outputFile}` and include:
- Setup instructions
- Running tests (local/headed/debug)
- Architecture overview (fixtures, factories, helpers)
- Best practices (selectors, isolation, cleanup)
- CI integration notes
- Knowledge base references
---
## 2. Build & Test Scripts
**If {detected_stack} is `frontend` or `fullstack`:**
Add to `package.json` at minimum:
- `test:e2e`: framework execution command (e.g., `npx playwright test`)
**If {detected_stack} is `backend` or `fullstack`:**
Add the idiomatic test commands for the detected framework:
- **Python (pytest)**: Add to `pyproject.toml` scripts or `Makefile`: `pytest`, `pytest --cov`, `pytest -m integration`
- **Java (JUnit)**: Add to `build.gradle`/`pom.xml`: `./gradlew test`, `mvn test`, `mvn verify` (integration)
- **Go**: Add to `Makefile`: `go test ./...`, `go test -race ./...`, `go test -cover ./...`
- **C#/.NET**: Add to CI scripts or `Makefile`: `dotnet test`, `dotnet test --collect:"XPlat Code Coverage"`
- **Ruby (RSpec)**: Add to `Gemfile` binstubs or `Makefile`: `bundle exec rspec`, `bundle exec rspec spec/integration`
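For the Node.js case, the `package.json` additions might look like the following (script names beyond `test:e2e` are illustrative conveniences, not requirements):

```json
{
  "scripts": {
    "test:e2e": "npx playwright test",
    "test:e2e:headed": "npx playwright test --headed",
    "test:e2e:debug": "npx playwright test --debug"
  }
}
```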
---
### 3. Save Progress
**Save this step's accumulated work to `{progressFile}`.**
- **If `{progressFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-04-docs-and-scripts']
lastStep: 'step-04-docs-and-scripts'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{progressFile}` already exists**, update:
- Add `'step-04-docs-and-scripts'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-04-docs-and-scripts'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,93 @@
---
name: 'step-05-validate-and-summary'
description: 'Validate against checklist and summarize'
outputFile: '{test_artifacts}/framework-setup-progress.md'
---
# Step 5: Validate & Summarize
## STEP GOAL
Validate framework setup and provide a completion summary.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Validation
Validate against `checklist.md`:
- Preflight success
- Directory structure created
- Config correctness
- Fixtures/factories created
- Docs and scripts present
Fix any gaps before completion.
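The gap check can be sketched as mapping each checklist item to a pass/fail predicate and reporting every failing item, so nothing is silently skipped (predicate names are stand-ins for real filesystem and config checks):

```javascript
// Sketch of checklist validation: return the names of all failing items.
function findGaps(checks) {
  return Object.entries(checks)
    .filter(([, ok]) => !ok)
    .map(([item]) => item);
}
```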
---
## 2. Completion Summary
Report:
- Framework selected
- Artifacts created
- Next steps (install deps, run tests)
- Knowledge fragments applied
---
### 3. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-05-validate-and-summary']
lastStep: 'step-05-validate-and-summary'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-05-validate-and-summary'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-05-validate-and-summary'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.