docs: update all documentation and add AI tooling configs
- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
6
.gemini/skills/bmad-testarch-automate/SKILL.md
Normal file
@@ -0,0 +1,6 @@
---
name: bmad-testarch-automate
description: 'Expand test automation coverage for codebase. Use when user says "lets expand test coverage" or "I want to automate tests"'
---

Follow the instructions in [workflow.md](workflow.md).
@@ -0,0 +1 @@
type: skill
611
.gemini/skills/bmad-testarch-automate/checklist.md
Normal file
@@ -0,0 +1,611 @@

# Automate Workflow Validation Checklist

Use this checklist to validate that the automate workflow has been executed correctly and that all deliverables meet quality standards.

## Prerequisites

Before starting this workflow, verify:

- [ ] Framework scaffolding configured (playwright.config.ts or cypress.config.ts exists)
- [ ] Test directory structure exists (tests/ folder with subdirectories)
- [ ] package.json has test framework dependencies installed

**Halt only if:** Framework scaffolding is completely missing (run `framework` workflow first)

**Note:** BMad artifacts (story, tech-spec, PRD) are OPTIONAL - workflow can run without them

**Note:** `automate` generates tests; it does not run `*atdd` or `*test-review`. If ATDD outputs exist, use them as input and avoid duplicate coverage.

---

## Step 1: Execution Mode Determination and Context Loading

### Mode Detection

- [ ] Execution mode correctly determined:
  - [ ] BMad-Integrated Mode (story_file variable set) OR
  - [ ] Standalone Mode (target_feature or target_files set) OR
  - [ ] Auto-discover Mode (no targets specified)

### BMad Artifacts (If Available - OPTIONAL)

- [ ] Story markdown loaded (if `{story_file}` provided)
- [ ] Acceptance criteria extracted from story (if available)
- [ ] Tech-spec.md loaded (if `{use_tech_spec}` true and file exists)
- [ ] Test-design.md loaded (if `{use_test_design}` true and file exists)
- [ ] PRD.md loaded (if `{use_prd}` true and file exists)
- [ ] **Note**: Absence of BMad artifacts does NOT halt workflow

### Framework Configuration

- [ ] Test framework config loaded (playwright.config.ts or cypress.config.ts)
- [ ] Test directory structure identified from `{test_dir}`
- [ ] Existing test patterns reviewed
- [ ] Test runner capabilities noted (parallel execution, fixtures, etc.)

### Coverage Analysis

- [ ] Existing test files searched in `{test_dir}` (if `{analyze_coverage}` true)
- [ ] Tested features vs untested features identified
- [ ] Coverage gaps mapped (tests to source files)
- [ ] Existing fixture and factory patterns checked

### Knowledge Base Fragments Loaded

- [ ] `test-levels-framework.md` - Test level selection
- [ ] `test-priorities.md` - Priority classification (P0-P3)
- [ ] `fixture-architecture.md` - Fixture patterns with auto-cleanup
- [ ] `data-factories.md` - Factory patterns using faker
- [ ] `selective-testing.md` - Targeted test execution strategies
- [ ] `ci-burn-in.md` - Flaky test detection patterns
- [ ] `test-quality.md` - Test design principles

---

## Step 2: Automation Targets Identification

### Target Determination

**BMad-Integrated Mode (if story available):**

- [ ] Acceptance criteria mapped to test scenarios
- [ ] Features implemented in story identified
- [ ] Existing ATDD tests checked (if any)
- [ ] Expansion beyond ATDD planned (edge cases, negative paths)

**Standalone Mode (if no story):**

- [ ] Specific feature analyzed (if `{target_feature}` specified)
- [ ] Specific files analyzed (if `{target_files}` specified)
- [ ] Features auto-discovered (if `{auto_discover_features}` true)
- [ ] Features prioritized by:
  - [ ] No test coverage (highest priority)
  - [ ] Complex business logic
  - [ ] External integrations (API, database, auth)
  - [ ] Critical user paths (login, checkout, etc.)

### Test Level Selection

- [ ] Test level selection framework applied (from `test-levels-framework.md`)
- [ ] E2E tests identified: Critical user journeys, multi-system integration
- [ ] API tests identified: Business logic, service contracts, data transformations
- [ ] Component tests identified: UI behavior, interactions, state management
- [ ] Unit tests identified: Pure logic, edge cases, error handling

### Duplicate Coverage Avoidance

- [ ] Same behavior NOT tested at multiple levels unnecessarily
- [ ] E2E used for critical happy path only
- [ ] API tests used for business logic variations
- [ ] Component tests used for UI interaction edge cases
- [ ] Unit tests used for pure logic edge cases

### Priority Assignment

- [ ] Test priorities assigned using `test-priorities.md` framework
- [ ] P0 tests: Critical paths, security-critical, data integrity
- [ ] P1 tests: Important features, integration points, error handling
- [ ] P2 tests: Edge cases, less-critical variations, performance
- [ ] P3 tests: Nice-to-have, rarely-used features, exploratory
- [ ] Priority variables respected:
  - [ ] `{include_p0}` = true (always include)
  - [ ] `{include_p1}` = true (high priority)
  - [ ] `{include_p2}` = true (medium priority)
  - [ ] `{include_p3}` = false (low priority, skip by default)

### Coverage Plan Created

- [ ] Test coverage plan documented
- [ ] What will be tested at each level listed
- [ ] Priorities assigned to each test
- [ ] Coverage strategy clear (critical-paths, comprehensive, or selective)

---

## Step 3: Test Infrastructure Generated

### Fixture Architecture

- [ ] Existing fixtures checked in `tests/support/fixtures/`
- [ ] Fixture architecture created/enhanced (if `{generate_fixtures}` true)
- [ ] All fixtures use Playwright's `test.extend()` pattern
- [ ] All fixtures have auto-cleanup in teardown
- [ ] Common fixtures created/enhanced:
  - [ ] authenticatedUser (with auto-delete)
  - [ ] apiRequest (authenticated client)
  - [ ] mockNetwork (external service mocking)
  - [ ] testDatabase (with auto-cleanup)
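
A minimal `authenticatedUser` fixture following this pattern might look like the sketch below. The `createUser`/`deleteUser` helpers are assumed project code (declared, not implemented, here), so treat this as an illustration rather than a drop-in file:

```typescript
import { test as base } from "@playwright/test";

// Assumed project helpers -- declared only to keep the sketch self-describing.
type TestUser = { id: string; email: string };
declare function createUser(): Promise<TestUser>;
declare function deleteUser(id: string): Promise<void>;

// test.extend() fixture: setup before `use`, auto-cleanup after it.
export const test = base.extend<{ authenticatedUser: TestUser }>({
  authenticatedUser: async ({}, use) => {
    const user = await createUser(); // setup
    await use(user);                 // the test body runs here
    await deleteUser(user.id);       // teardown: auto-delete
  },
});
```

In Playwright, code after `await use(...)` runs as teardown, so the user is deleted even when the test fails.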

### Data Factories

- [ ] Existing factories checked in `tests/support/factories/`
- [ ] Factory architecture created/enhanced (if `{generate_factories}` true)
- [ ] All factories use `@faker-js/faker` for random data (no hardcoded values)
- [ ] All factories support overrides for specific scenarios
- [ ] Common factories created/enhanced:
  - [ ] User factory (email, password, name, role)
  - [ ] Product factory (name, price, SKU)
  - [ ] Order factory (items, total, status)
- [ ] Cleanup helpers provided (e.g., deleteUser(), deleteProduct())
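
As a concrete illustration of the factory pattern, here is a dependency-free sketch; `Math.random` stands in for `@faker-js/faker` so the snippet is self-contained, and the `User` shape is an assumption:

```typescript
interface User {
  email: string;
  name: string;
  role: string;
}

// Factory: random data per call (never hardcoded), overrides for scenarios.
export function createUser(overrides: Partial<User> = {}): User {
  const id = Math.random().toString(36).slice(2, 10);
  return {
    email: `user-${id}@example.test`,
    name: `User ${id}`,
    role: "member",
    ...overrides, // overrides win, so tests can pin specific fields
  };
}
```

Real factories should call faker (e.g. `faker.internet.email()`) instead of `Math.random`, but the override-last spread is the load-bearing part of the pattern.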

### Helper Utilities

- [ ] Existing helpers checked in `tests/support/helpers/` (if `{update_helpers}` true)
- [ ] Common utilities created/enhanced:
  - [ ] waitFor (polling for complex conditions)
  - [ ] retry (retry helper for flaky operations)
  - [ ] testData (test data generation)
  - [ ] assertions (custom assertion helpers)

---
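
The `waitFor` and `retry` utilities above can be sketched as plain async helpers; the names, defaults, and signatures here are assumptions, not prescribed by the workflow:

```typescript
// Poll a condition until it returns a truthy value or the deadline passes.
export async function waitFor<T>(
  probe: () => T | Promise<T>,
  timeoutMs = 5000,
  intervalMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await probe();
    if (value) return value;
    if (Date.now() > deadline) {
      throw new Error(`waitFor: condition not met within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Retry a flaky async operation a fixed number of times before giving up.
export async function retry<T>(op: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Note that `retry` is for setup/infrastructure operations; retrying assertions inside test bodies would mask the flaky patterns this checklist forbids.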

## Step 4: Test Files Generated

### Test File Structure

- [ ] Test files organized correctly:
  - [ ] `tests/e2e/` for E2E tests
  - [ ] `tests/api/` for API tests
  - [ ] `tests/component/` for component tests
  - [ ] `tests/unit/` for unit tests
  - [ ] `tests/support/` for fixtures/factories/helpers

### E2E Tests (If Applicable)

- [ ] E2E test files created in `tests/e2e/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags ([P0], [P1], [P2], [P3]) in test name
- [ ] All tests use data-testid selectors (not CSS classes)
- [ ] One assertion per test (atomic design)
- [ ] No hard waits or sleeps (explicit waits only)
- [ ] Network-first pattern applied (route interception BEFORE navigation)
- [ ] Clear Given-When-Then comments in test code
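
Taken together, these rules produce tests like the following sketch, where the route, URL, and `data-testid` value are illustrative:

```typescript
import { test, expect } from "@playwright/test";

test("[P0] dashboard shows visit count", async ({ page }) => {
  // Given: the stats API is intercepted BEFORE navigation (network-first)
  await page.route("**/api/stats", (route) =>
    route.fulfill({ json: { visits: 42 } }),
  );

  // When: the user opens the dashboard
  await page.goto("/dashboard");

  // Then: one atomic assertion against a data-testid selector
  await expect(page.getByTestId("visit-count")).toHaveText("42");
});
```

The `[P0]` tag in the title is what enables selective execution later (e.g. `--grep '\[P0\]'`).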

### API Tests (If Applicable)

- [ ] API test files created in `tests/api/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags in test name
- [ ] API contracts validated (request/response structure)
- [ ] HTTP status codes verified
- [ ] Response body validation includes required fields
- [ ] Error cases tested (400, 401, 403, 404, 500)
- [ ] JWT token format validated (if auth tests)

### Consumer Contract Tests / CDC (If `use_pactjs_utils` Enabled)

**Provider Endpoint Comments:**

- [ ] Every Pact interaction has a `// Provider endpoint:` comment
- [ ] Comment includes the exact file path to the provider route handler, OR uses the TODO form when the provider is inaccessible
- [ ] Comment follows the format: `// Provider endpoint: <path> -> <METHOD> <route>` or `// Provider endpoint: TODO — provider source not accessible, verify manually`

**Provider Source Scrutiny:**

- [ ] Provider route handlers and/or OpenAPI spec read before generating each interaction
- [ ] Status codes verified against provider source (e.g., 201 not assumed 200)
- [ ] Field names cross-referenced with provider type/DTO definitions
- [ ] Data types verified (string ID vs number ID, date formats)
- [ ] Enum/union values extracted from provider validation schemas
- [ ] Required request fields and headers checked against provider validation
- [ ] Nested response structures match the provider's actual response construction
- [ ] Scrutiny evidence documented as a block comment in each test file

**CDC Quality Gates:**

- [ ] Postel's Law enforced: exact values in `withRequest`, matchers in `willRespondWith`
- [ ] Response matchers (`like`, `eachLike`, `string`, `integer`) used only in `willRespondWith`
- [ ] Provider state names are consistent with the provider's state handler naming
- [ ] DI pattern used for consumer function imports (actual consumer code, not raw `fetch()`)
- [ ] One logical endpoint per Pact interaction (no multi-endpoint interactions)
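
A hedged Pact sketch of these gates follows; the consumer/provider names, state, and fields are illustrative and must be verified against provider source per the scrutiny checklist above:

```typescript
import { PactV3, MatchersV3 } from "@pact-foundation/pact";
const { like, eachLike } = MatchersV3;

const provider = new PactV3({ consumer: "web-app", provider: "orders-api" });

// Provider endpoint: TODO — provider source not accessible, verify manually
provider
  .given("an order with id 42 exists")                 // matches a provider state handler
  .uponReceiving("a request for order 42")
  .withRequest({ method: "GET", path: "/orders/42" })  // exact values only (Postel's Law)
  .willRespondWith({
    status: 200,                                       // verify against provider source
    body: like({ id: "42", items: eachLike({ sku: like("ABC-1") }) }), // matchers only
  });
// The consumer function (imported via DI, not raw fetch()) is then exercised
// inside provider.executeTest(async (mockServer) => { ... }).
```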

### Component Tests (If Applicable)

- [ ] Component test files created in `tests/component/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags in test name
- [ ] Component mounting works correctly
- [ ] Interaction testing covers user actions (click, hover, keyboard)
- [ ] State management validated
- [ ] Props and events tested

### Unit Tests (If Applicable)

- [ ] Unit test files created in `tests/unit/`
- [ ] All tests follow Given-When-Then format
- [ ] All tests have priority tags in test name
- [ ] Pure logic tested (no dependencies)
- [ ] Edge cases covered
- [ ] Error handling tested

### Quality Standards Enforced

- [ ] All tests use Given-When-Then format with clear comments
- [ ] All tests have descriptive names with priority tags
- [ ] No duplicate tests (same behavior tested multiple times)
- [ ] No flaky patterns (race conditions, timing issues)
- [ ] No test interdependencies (tests can run in any order)
- [ ] Tests are deterministic (same input always produces same result)
- [ ] All tests use data-testid selectors (E2E tests)
- [ ] No hard waits: `await page.waitForTimeout()` (forbidden)
- [ ] No conditional flow: `if (await element.isVisible())` (forbidden)
- [ ] No try-catch for test logic (only for cleanup)
- [ ] No hardcoded test data (use factories with faker)
- [ ] No page object classes (tests are direct and simple)
- [ ] No shared state between tests

### Network-First Pattern Applied

- [ ] Route interception set up BEFORE navigation (E2E tests with network requests)
- [ ] `page.route()` called before `page.goto()` to prevent race conditions
- [ ] Network-first pattern verified in all E2E tests that make API calls

---

## Step 5: Test Validation and Healing (NEW - Phase 2.5)

### Healing Configuration

- [ ] Healing configuration checked:
  - [ ] `{auto_validate}` setting noted (default: true)
  - [ ] `{auto_heal_failures}` setting noted (default: false)
  - [ ] `{max_healing_iterations}` setting noted (default: 3)
  - [ ] `{use_mcp_healing}` setting noted (default: true)

### Healing Knowledge Fragments Loaded (If Healing Enabled)

- [ ] `test-healing-patterns.md` loaded (common failure patterns and fixes)
- [ ] `selector-resilience.md` loaded (selector refactoring guide)
- [ ] `timing-debugging.md` loaded (race condition fixes)

### Test Execution and Validation

- [ ] Generated tests executed (if `{auto_validate}` true)
- [ ] Test results captured:
  - [ ] Total tests run
  - [ ] Passing tests count
  - [ ] Failing tests count
  - [ ] Error messages and stack traces captured

### Healing Loop (If Enabled and Tests Failed)

- [ ] Healing loop entered (if `{auto_heal_failures}` true AND tests failed)
- [ ] For each failing test:
  - [ ] Failure pattern identified (selector, timing, data, network, hard wait)
  - [ ] Appropriate healing strategy applied:
    - [ ] Stale selector → Replaced with data-testid or ARIA role
    - [ ] Race condition → Added network-first interception or state waits
    - [ ] Dynamic data → Replaced hardcoded values with regex/dynamic generation
    - [ ] Network error → Added route mocking
    - [ ] Hard wait → Replaced with event-based wait
  - [ ] Healed test re-run to validate fix
  - [ ] Iteration count tracked (max 3 attempts)

### Unfixable Tests Handling

- [ ] Tests that couldn't be healed after 3 iterations marked with `test.fixme()` (if `{mark_unhealable_as_fixme}` true)
- [ ] Detailed comment added to `test.fixme()` tests:
  - [ ] What failure occurred
  - [ ] What healing was attempted (3 iterations)
  - [ ] Why healing failed
  - [ ] Manual investigation steps needed
- [ ] Original test logic preserved in comments

### Healing Report Generated

- [ ] Healing report generated (if healing attempted)
- [ ] Report includes:
  - [ ] Auto-heal enabled status
  - [ ] Healing mode (MCP-assisted or Pattern-based)
  - [ ] Iterations allowed (max_healing_iterations)
  - [ ] Validation results (total, passing, failing)
  - [ ] Successfully healed tests (count, file:line, fix applied)
  - [ ] Unable-to-heal tests (count, file:line, reason)
  - [ ] Healing patterns applied (selector fixes, timing fixes, data fixes)
  - [ ] Knowledge base references used

---

## Step 6: Documentation and Scripts Updated

### Test README Updated

- [ ] `tests/README.md` created or updated (if `{update_readme}` true)
- [ ] Test suite structure overview included
- [ ] Test execution instructions provided (all, specific files, by priority)
- [ ] Fixture usage examples provided
- [ ] Factory usage examples provided
- [ ] Priority tagging convention explained ([P0], [P1], [P2], [P3])
- [ ] How to write new tests documented
- [ ] Common patterns documented
- [ ] Anti-patterns documented (what to avoid)

### package.json Scripts Updated

- [ ] package.json scripts added/updated (if `{update_package_scripts}` true)
- [ ] `test:e2e` script for all E2E tests
- [ ] `test:e2e:p0` script for P0 tests only
- [ ] `test:e2e:p1` script for P0 + P1 tests
- [ ] `test:api` script for API tests
- [ ] `test:component` script for component tests
- [ ] `test:unit` script for unit tests (if applicable)
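
The resulting scripts might look like the sketch below. The runner commands and grep patterns are assumptions (they presume `[P0]`-style tags in test titles), and `vitest` is a placeholder for whatever unit runner the project uses:

```json
{
  "scripts": {
    "test:e2e": "playwright test tests/e2e",
    "test:e2e:p0": "playwright test tests/e2e --grep \"\\[P0\\]\"",
    "test:e2e:p1": "playwright test tests/e2e --grep \"\\[P0\\]|\\[P1\\]\"",
    "test:api": "playwright test tests/api",
    "test:component": "playwright test tests/component",
    "test:unit": "vitest run tests/unit"
  }
}
```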

### Test Suite Executed

- [ ] Test suite run locally (if `{run_tests_after_generation}` true)
- [ ] Test results captured (passing/failing counts)
- [ ] No flaky patterns detected (tests are deterministic)
- [ ] Setup requirements documented (if any)
- [ ] Known issues documented (if any)

---

## Step 7: Automation Summary Generated

### Automation Summary Document

- [ ] Output file created at `{output_summary}`
- [ ] Document includes execution mode (BMad-Integrated, Standalone, Auto-discover)
- [ ] Feature analysis included (source files, coverage gaps) - Standalone mode
- [ ] Tests created listed (E2E, API, Component, Unit) with counts and paths
- [ ] Infrastructure created listed (fixtures, factories, helpers)
- [ ] Test execution instructions provided
- [ ] Coverage analysis included:
  - [ ] Total test count
  - [ ] Priority breakdown (P0, P1, P2, P3 counts)
  - [ ] Test level breakdown (E2E, API, Component, Unit counts)
  - [ ] Coverage percentage (if calculated)
- [ ] Coverage status (acceptance criteria covered, gaps identified)
- [ ] Definition of Done checklist included
- [ ] Next steps provided
- [ ] Recommendations included (if Standalone mode)

### Summary Provided to User

- [ ] Concise summary output provided
- [ ] Total tests created across test levels
- [ ] Priority breakdown (P0, P1, P2, P3 counts)
- [ ] Infrastructure counts (fixtures, factories, helpers)
- [ ] Test execution command provided
- [ ] Output file path provided
- [ ] Next steps listed

---

## Quality Checks

### Test Design Quality

- [ ] Tests are readable (clear Given-When-Then structure)
- [ ] Tests are maintainable (use factories/fixtures, not hardcoded data)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Tests are deterministic (no race conditions or flaky patterns)
- [ ] Tests are atomic (one assertion per test)
- [ ] Tests are fast (no unnecessary waits or delays)
- [ ] Tests are lean (files under `{max_file_lines}` lines)

### Knowledge Base Integration

- [ ] Test level selection framework applied (from `test-levels-framework.md`)
- [ ] Priority classification applied (from `test-priorities.md`)
- [ ] Fixture architecture patterns applied (from `fixture-architecture.md`)
- [ ] Data factory patterns applied (from `data-factories.md`)
- [ ] Selective testing strategies considered (from `selective-testing.md`)
- [ ] Flaky test detection patterns considered (from `ci-burn-in.md`)
- [ ] Test quality principles applied (from `test-quality.md`)

### Code Quality

- [ ] All TypeScript types are correct and complete
- [ ] No linting errors in generated test files
- [ ] Consistent naming conventions followed
- [ ] Imports are organized and correct
- [ ] Code follows project style guide
- [ ] No console.log or debug statements in test code

---

## Integration Points

### With Framework Workflow

- [ ] Test framework configuration detected and used
- [ ] Directory structure matches framework setup
- [ ] Fixtures and helpers follow established patterns
- [ ] Naming conventions consistent with framework standards

### With BMad Workflows (If Available - OPTIONAL)

**With Story Workflow:**

- [ ] Story ID correctly referenced in output (if story available)
- [ ] Acceptance criteria from story reflected in tests (if story available)
- [ ] Technical constraints from story considered (if story available)

**With test-design Workflow:**

- [ ] P0 scenarios from test-design prioritized (if test-design available)
- [ ] Risk assessment from test-design considered (if test-design available)
- [ ] Coverage strategy aligned with test-design (if test-design available)

**With atdd Workflow:**

- [ ] ATDD artifacts provided or located (manual handoff; `atdd` not auto-run)
- [ ] Existing ATDD tests checked (if story had ATDD workflow run)
- [ ] Expansion beyond ATDD planned (edge cases, negative paths)
- [ ] No duplicate coverage with ATDD tests

### With CI Pipeline

- [ ] Tests can run in CI environment
- [ ] Tests are parallelizable (no shared state)
- [ ] Tests have appropriate timeouts
- [ ] Tests clean up their data (no CI environment pollution)

---

## Completion Criteria

All of the following must be true before marking this workflow as complete:

- [ ] **Execution mode determined** (BMad-Integrated, Standalone, or Auto-discover)
- [ ] **Framework configuration loaded** and validated
- [ ] **Coverage analysis completed** (gaps identified if analyze_coverage true)
- [ ] **Automation targets identified** (what needs testing)
- [ ] **Test levels selected** appropriately (E2E, API, Component, Unit)
- [ ] **Duplicate coverage avoided** (same behavior not tested at multiple levels)
- [ ] **Test priorities assigned** (P0, P1, P2, P3)
- [ ] **Fixture architecture created/enhanced** with auto-cleanup
- [ ] **Data factories created/enhanced** using faker (no hardcoded data)
- [ ] **Helper utilities created/enhanced** (if needed)
- [ ] **Test files generated** at appropriate levels (E2E, API, Component, Unit)
- [ ] **Given-When-Then format used** consistently across all tests
- [ ] **Priority tags added** to all test names ([P0], [P1], [P2], [P3])
- [ ] **data-testid selectors used** in E2E tests (not CSS classes)
- [ ] **Network-first pattern applied** (route interception before navigation)
- [ ] **Quality standards enforced** (no hard waits, no flaky patterns, self-cleaning, deterministic)
- [ ] **Test README updated** with execution instructions and patterns
- [ ] **package.json scripts updated** with test execution commands
- [ ] **Test suite run locally** (if run_tests_after_generation true)
- [ ] **Tests validated** (if auto_validate enabled)
- [ ] **Failures healed** (if auto_heal_failures enabled and tests failed)
- [ ] **Healing report generated** (if healing attempted)
- [ ] **Unfixable tests marked** with test.fixme() and detailed comments (if any)
- [ ] **Automation summary created** and saved to correct location
- [ ] **Output file formatted correctly**
- [ ] **Knowledge base references applied** and documented (including healing fragments if used)
- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data, page objects)
- [ ] **Provider scrutiny completed or gracefully degraded** for all CDC interactions: each interaction either has scrutiny evidence or a TODO marker (if `use_pactjs_utils` enabled)
- [ ] **Provider endpoint comments present** on every Pact interaction (if `use_pactjs_utils` enabled)

---

## Common Issues and Resolutions

### Issue: BMad artifacts not found

**Problem:** Story, tech-spec, or PRD files not found when variables are set.

**Resolution:**

- **automate does NOT require BMad artifacts** - they are OPTIONAL enhancements
- If files are not found, switch to Standalone Mode automatically
- Analyze source code directly without BMad context
- Continue the workflow without halting

### Issue: Framework configuration not found

**Problem:** No playwright.config.ts or cypress.config.ts found.

**Resolution:**

- **HALT workflow** - framework is required
- Message: "Framework scaffolding required. Run `bmad tea *framework` first."
- User must run the framework workflow before automate

### Issue: No automation targets identified

**Problem:** Neither story, target_feature, nor target_files specified, and auto-discover finds nothing.

**Resolution:**

- Check that the source_dir variable is correct
- Verify source code exists in the project
- Ask the user to specify target_feature or target_files explicitly
- Provide examples: `target_feature: "src/auth/"` or `target_files: "src/auth/login.ts,src/auth/session.ts"`

### Issue: Duplicate coverage detected

**Problem:** Same behavior tested at multiple levels (E2E + API + Component).

**Resolution:**

- Review the test level selection framework (test-levels-framework.md)
- Use E2E for the critical happy path ONLY
- Use API for business logic variations
- Use Component for UI edge cases
- Remove redundant tests that duplicate coverage

### Issue: Tests have hardcoded data

**Problem:** Tests use hardcoded email addresses, passwords, or other data.

**Resolution:**

- Replace all hardcoded data with factory function calls
- Use faker for all random data generation
- Update data factories to support all required test scenarios
- Example: `createUser({ email: faker.internet.email() })`

### Issue: Tests are flaky

**Problem:** Tests fail intermittently, pass on retry.

**Resolution:**

- Remove all hard waits (`page.waitForTimeout()`)
- Use explicit waits (`page.waitForSelector()`)
- Apply the network-first pattern (route interception before navigation)
- Remove conditional flow (`if (await element.isVisible())`)
- Ensure tests are deterministic (no race conditions)
- Run a burn-in loop (10 iterations) to detect flakiness

### Issue: Fixtures don't clean up data

**Problem:** Test data persists after the test run, causing test pollution.

**Resolution:**

- Ensure all fixtures have cleanup in the teardown phase
- Cleanup happens AFTER `await use(data)`
- Call deletion/cleanup functions (deleteUser, deleteProduct, etc.)
- Verify cleanup works by checking database/storage after a test run

### Issue: Tests too slow

**Problem:** Tests take longer than 90 seconds (max_test_duration).

**Resolution:**

- Remove unnecessary waits and delays
- Use parallel execution where possible
- Mock external services (don't make real API calls)
- Use API tests instead of E2E for business logic
- Optimize test data creation (use an in-memory database, etc.)

---

## Notes for TEA Agent

- **automate is flexible:** Can work with or without BMad artifacts (story, tech-spec, PRD are OPTIONAL)
- **Standalone mode is powerful:** Analyze any codebase and generate tests independently
- **Auto-discover mode:** Scan the codebase for features needing tests when no targets are specified
- **Framework is the ONLY hard requirement:** HALT if framework config is missing, otherwise proceed
- **Avoid duplicate coverage:** E2E for critical paths only, API/Component for variations
- **Priority tagging enables selective execution:** P0 tests run on every commit, P1 on PR, P2 nightly
- **Network-first pattern prevents race conditions:** Route interception BEFORE navigation
- **No page objects:** Keep tests simple, direct, and maintainable
- **Use the knowledge base:** Load relevant fragments (test-levels, test-priorities, fixture-architecture, data-factories, healing patterns) for guidance
- **Deterministic tests only:** No hard waits, no conditional flow, no flaky patterns allowed
- **Optional healing:** auto_heal_failures is disabled by default (opt-in for automatic test healing)
- **Graceful degradation:** Healing works without Playwright MCP (pattern-based fallback)
- **Unfixable tests handled:** Mark with test.fixme() and detailed comments (not silently broken)
50
.gemini/skills/bmad-testarch-automate/instructions.md
Normal file
@@ -0,0 +1,50 @@

<!-- Powered by BMAD-CORE™ -->

# Test Automation Expansion

**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-automate`
**Version**: 5.0 (Step-File Architecture)

---

## Overview

Expands test automation coverage by generating prioritized tests at the appropriate level (E2E, API, Component, Unit) with supporting fixtures and helpers.

Modes:

- **BMad-Integrated**: Uses story/PRD/test-design artifacts when available
- **Standalone**: Analyzes the existing codebase without BMad artifacts

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture** for disciplined execution:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `test_dir`, `source_dir`, `coverage_target`, `standalone_mode`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-preflight-and-context.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress-tracking frontmatter and routes to the next incomplete step.
@@ -0,0 +1,237 @@
---
name: 'step-01-preflight-and-context'
description: 'Determine mode, verify framework, and load context and knowledge'
outputFile: '{test_artifacts}/automation-summary.md'
nextStepFile: './step-02-identify-targets.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
---

# Step 1: Preflight & Context Loading

## STEP GOAL

Determine the execution mode, verify framework readiness, and load the necessary artifacts and knowledge fragments.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if framework scaffolding is missing

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Stack Detection & Verify Framework

**Read `config.test_stack_type`** from `{config_source}`.

**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):

- Scan `{project-root}` for project manifests:
  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
- **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- An explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat it as `"auto"` (preserves current frontend behavior for existing installs)

Store the result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
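The detection rules above can be sketched as a small helper. This is an illustrative sketch only: the `detectStack` name and the abbreviated marker lists are assumptions, not part of the workflow contract (a real scan would also inspect `package.json` dependencies for react/vue/angular/next).

```javascript
// Abbreviated marker lists for illustration.
const FRONTEND_MARKERS = ['playwright.config.ts', 'cypress.config.ts', 'vite.config.ts'];
const BACKEND_MARKERS = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml'];

function detectStack(projectFiles, configuredType = 'auto') {
  // An explicit test_stack_type always wins over auto-detection.
  if (configuredType && configuredType !== 'auto') return configuredType;
  const hasFrontend = projectFiles.some((f) => FRONTEND_MARKERS.includes(f));
  const hasBackend = projectFiles.some((f) => BACKEND_MARKERS.includes(f));
  if (hasFrontend && hasBackend) return 'fullstack';
  if (hasBackend) return 'backend';
  // Default to frontend, preserving behavior for existing installs.
  return 'frontend';
}
```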
**Verify framework exists:**

**If {detected_stack} is `frontend` or `fullstack`:**

- `playwright.config.ts` or `cypress.config.ts`
- `package.json` includes test dependencies

**If {detected_stack} is `backend` or `fullstack`:**

- Relevant test config exists (e.g., `conftest.py`, `src/test/`, `*_test.go`, `.rspec`, test project `*.csproj`)

If missing: **HALT** with the message "Run `framework` workflow first."

---

## 2. Determine Execution Mode

- **BMad-Integrated** if story/tech-spec/test-design artifacts are provided or found
- **Standalone** if only source code is available
- If unclear, ask the user which mode to use

---

## 3. Load Context

### BMad-Integrated (if available)

- Story with acceptance criteria
- PRD and/or tech spec
- Test-design document (if it exists)

### Standalone

- Skip artifacts; proceed to codebase analysis

### Always Load

- Test framework config
- Existing test structure in `{test_dir}`
- Existing tests (for coverage gaps)

### Read TEA Config Flags

From `{config_source}`, read:

- `tea_use_playwright_utils`
- `tea_use_pactjs_utils`
- `tea_pact_mcp`
- `tea_browser_automation`
- `test_stack_type`

---

### Tiered Knowledge Loading

Load fragments based on their `tier` classification in `tea-index.csv`:

1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)

> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.

### Playwright Utils Loading Profiles

**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:

- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` is found in test files):
  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)

- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests are detected):
  Load: all Playwright Utils core fragments (~4,500 lines)

**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none are found, use the API-only profile.
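As a rough sketch of that profile choice (the function name and the flattened input shape are assumptions for illustration):

```javascript
// Backend stacks, or test suites with no browser interactions,
// only need the lighter API-only fragment set.
function chooseUtilsProfile(detectedStack, testFileContents) {
  const hasBrowserTests = testFileContents.some((src) => /page\.(goto|locator)\(/.test(src));
  if (detectedStack === 'backend' || !hasBrowserTests) return 'api-only';
  return 'full-ui-api';
}
```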
### Pact.js Utils Loading

**If `tea_use_pactjs_utils` is enabled** (and `{detected_stack}` is `backend` or `fullstack`, or microservices indicators are detected):

Load: `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md` (~800 lines)

**If `tea_use_pactjs_utils` is disabled** but contract testing is relevant (microservices architecture detected, existing Pact config found):

Load: `contract-testing.md` (~960 lines)

**Detection**: Scan `{project-root}` for Pact indicators: a `pact/` directory, `@pact-foundation/pact` in `package.json`, `pactUrls` in test files, `PACT_BROKER` in env files.
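A minimal sketch of that indicator scan, assuming the project has already been flattened into paths, dependency names, and concatenated file text (real detection reads these from disk):

```javascript
// Any single Pact indicator is enough to treat contract testing as relevant.
function hasPactIndicators(project) {
  const deps = project.packageDeps || [];
  const text = project.fileText || '';
  return (
    (project.paths || []).some((p) => p === 'pact' || p.startsWith('pact/')) ||
    deps.includes('@pact-foundation/pact') ||
    /pactUrls|PACT_BROKER/.test(text)
  );
}
```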
### Pact MCP Loading

**If `tea_pact_mcp` is `"mcp"`:**

Load: `pact-mcp.md` (~150 lines). This enables the agent to use SmartBear MCP tools for fetching provider states and generating pact tests during automation.

## 4. Load Knowledge Base Fragments

Use `{knowledgeIndex}` and load only what is required.

**Core (always load):**

- `test-levels-framework.md`
- `test-priorities-matrix.md`
- `data-factories.md`
- `selective-testing.md`
- `ci-burn-in.md`
- `test-quality.md`

**Playwright Utils (if enabled):**

- `overview.md`, `api-request.md`, `network-recorder.md`, `auth-session.md`, `intercept-network-call.md`, `recurse.md`, `log.md`, `file-utils.md`, `burn-in.md`, `network-error-monitor.md`, `fixtures-composition.md`

**Traditional Patterns (if Playwright Utils disabled):**

- `fixture-architecture.md`
- `network-first.md`

**Pact.js Utils (if enabled):**

- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**Contract Testing (if pactjs-utils disabled but relevant):**

- `contract-testing.md`

**Pact MCP (if `tea_pact_mcp` is `"mcp"`):**

- `pact-mcp.md`

**Healing (if auto-heal enabled):**

- `test-healing-patterns.md`
- `selector-resilience.md`
- `timing-debugging.md`

**Playwright CLI (if `tea_browser_automation` is `"cli"` or `"auto"`):**

- `playwright-cli.md`

**MCP Patterns (if `tea_browser_automation` is `"mcp"` or `"auto"`):**

- (existing MCP-related fragments, if any are added in the future)

---

## 5. Confirm Inputs

Summarize loaded artifacts, framework, and knowledge fragments, then proceed.

---

## 6. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-01-preflight-and-context']
lastStep: 'step-01-preflight-and-context'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-preflight-and-context'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-preflight-and-context'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
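The create-or-update rule above boils down to an idempotent merge. This sketch assumes the frontmatter has already been parsed into a plain object; `updateProgress` is an illustrative name, not part of the workflow contract:

```javascript
// Idempotent progress update: re-running a step never duplicates its entry.
function updateProgress(frontmatter, stepName, date) {
  const fm = frontmatter || { stepsCompleted: [] };
  if (!fm.stepsCompleted.includes(stepName)) fm.stepsCompleted.push(stepName);
  fm.lastStep = stepName;
  fm.lastSaved = date;
  return fm;
}
```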
Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,94 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/automation-summary.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Output Document

Read `{outputFile}` and parse the YAML frontmatter for:

- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save

**If `{outputFile}` does not exist**, display:

"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Display Progress Dashboard

Display progress with ✅/⬜ indicators:

1. ✅/⬜ Preflight & Context (step-01-preflight-and-context)
2. ✅/⬜ Identify Targets (step-02-identify-targets)
3. ✅/⬜ Generate Tests + Aggregate (step-03c-aggregate)
4. ✅/⬜ Validate & Summarize (step-04-validate-and-summarize)

---

### 3. Route to Next Step

Based on `lastStep`, load the next incomplete step:

- `'step-01-preflight-and-context'` → load `./step-02-identify-targets.md`
- `'step-02-identify-targets'` → load `./step-03-generate-tests.md`
- `'step-03c-aggregate'` → load `./step-04-validate-and-summarize.md`
- `'step-04-validate-and-summarize'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.

**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.

**Otherwise**, load the identified step file, read it completely, and execute.

The existing content in `{outputFile}` provides context from previously completed steps.
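The routing rules above can be expressed as a small lookup table; the `routeNextStep` helper is illustrative, with `null` marking an already-complete workflow:

```javascript
// The resume routing table as data.
const NEXT_STEP = {
  'step-01-preflight-and-context': './step-02-identify-targets.md',
  'step-02-identify-targets': './step-03-generate-tests.md',
  'step-03c-aggregate': './step-04-validate-and-summarize.md',
  'step-04-validate-and-summarize': null, // workflow already complete
};

function routeNextStep(lastStep) {
  if (!(lastStep in NEXT_STEP)) {
    throw new Error(`Unknown progress state (lastStep: ${lastStep})`);
  }
  return NEXT_STEP[lastStep];
}
```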
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Output document loaded and parsed correctly
- Progress dashboard displayed accurately
- Routed to the correct next step

### ❌ SYSTEM FAILURE:

- Not loading the output document
- Incorrect progress display
- Routing to the wrong step

**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.
@@ -0,0 +1,169 @@
---
name: 'step-02-identify-targets'
description: 'Identify automation targets and create coverage plan'
outputFile: '{test_artifacts}/automation-summary.md'
nextStepFile: './step-03-generate-tests.md'
---

# Step 2: Identify Automation Targets

## STEP GOAL

Determine what needs to be tested and select appropriate test levels and priorities.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Avoid duplicate coverage across test levels

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Determine Targets

**BMad-Integrated:**

- Map acceptance criteria to test scenarios
- Check for existing ATDD outputs to avoid duplication
- Expand coverage with edge cases and negative paths

**Standalone:**

- If a specific target feature or files are provided, focus there
- Otherwise, auto-discover features in `{source_dir}`
- Prioritize critical paths, integrations, and untested logic

**If {detected_stack} is `frontend` or `fullstack`:**

**Browser Exploration (if `tea_browser_automation` is `cli` or `auto`):**

> **Fallback:** If the CLI is not installed, fall back to MCP (if available) or skip browser exploration and rely on code/doc analysis.

Use the CLI to explore the application and identify testable pages/flows:

1. `playwright-cli -s=tea-automate open <target_url>`
2. `playwright-cli -s=tea-automate snapshot` → capture page structure and element refs
3. Analyze the snapshot output to identify testable elements and flows
4. `playwright-cli -s=tea-automate close`

> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-automate close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.

**If {detected_stack} is `backend` or `fullstack`:**

**Source & API Analysis (no browser exploration):**

- Scan source code for route handlers, controllers, service classes, and public APIs
- Read OpenAPI/Swagger specs (`openapi.yaml`, `swagger.json`) if available
- Identify database models, migrations, and data access patterns
- Map service-to-service integrations and message queue consumers/producers
- Check for existing contract tests (Pact, etc.)

---

**If `tea_use_pactjs_utils` is enabled — Provider Endpoint Mapping (all stacks):**

When consumer-driven contract tests will be generated, build a Provider Endpoint Map during target identification. This applies to all `{detected_stack}` values — frontend, backend, and fullstack consumers all need provider scrutiny.

1. **Locate provider source and/or OpenAPI spec**: Scan the workspace for the provider project (from config, monorepo structure, or adjacent repositories). Also check for OpenAPI/Swagger spec files (`openapi.yaml`, `openapi.json`, `swagger.json`) — these document the provider's contract explicitly and can supplement or replace handler code analysis.
2. **Map each consumer endpoint** to its provider counterpart:
   - Provider file path (route handler)
   - Route pattern (METHOD + path)
   - Validation schema location (Joi, Zod, class-validator) or OpenAPI request schema
   - Response type/DTO definition location or OpenAPI response schema
   - OpenAPI spec path (if available, e.g., `server/openapi.yaml`)
3. **Output as a "Provider Endpoint Map" table** in the coverage plan:

```markdown
| Consumer Endpoint     | Provider File                     | Route                     | Validation Schema                   | Response Type   | OpenAPI Spec                                      |
| --------------------- | --------------------------------- | ------------------------- | ----------------------------------- | --------------- | ------------------------------------------------- |
| GET /api/v2/users/:id | server/src/routes/userHandlers.ts | GET /api/v2/users/:userId | server/src/validation/user.ts       | UserResponseDto | server/openapi.yaml#/paths/~1api~1v2~1users~1{id} |
| POST /api/v2/users    | server/src/routes/userHandlers.ts | POST /api/v2/users        | server/src/validation/createUser.ts | UserResponseDto | server/openapi.yaml#/paths/~1api~1v2~1users       |
```

4. **If the provider source is not accessible**: Mark entries with `TODO — provider source not accessible` and note in the coverage plan that provider scrutiny will use graceful degradation (see the `contract-testing.md` Provider Scrutiny Protocol)

---

## 2. Choose Test Levels

Use `test-levels-framework.md` to select:

- **E2E** for critical user journeys
- **API** for business logic and service contracts
- **Component** for UI behavior
- **Unit** for pure logic and edge cases

---

## 3. Assign Priorities

Use `test-priorities-matrix.md`:

- P0: Critical path + high risk
- P1: Important flows + medium/high risk
- P2: Secondary + edge cases
- P3: Optional/rare scenarios
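A simplified reading of that priority matrix as code. The real `test-priorities-matrix.md` fragment carries more nuance, and the input field names here are assumptions for illustration:

```javascript
// Collapse the P0-P3 matrix into a first-match rule chain.
function assignPriority({ criticalPath = false, risk = 'low', edgeCase = false }) {
  if (criticalPath && risk === 'high') return 'P0';
  if (risk === 'high' || risk === 'medium') return 'P1';
  if (edgeCase) return 'P2';
  return 'P3';
}
```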
---

## 4. Coverage Plan

Produce a concise coverage plan:

- Targets by test level
- Priority assignments
- Justification for coverage scope (critical-paths/comprehensive/selective)

---

## 5. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-02-identify-targets']
lastStep: 'step-02-identify-targets'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-identify-targets'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-identify-targets'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,394 @@
---
name: 'step-03-generate-tests'
description: 'Orchestrate adaptive test generation (agent-team, subagent, or sequential)'
nextStepFile: './step-03c-aggregate.md'
---

# Step 3: Orchestrate Adaptive Test Generation

## STEP GOAL

Select the execution mode deterministically, then generate tests using agent-team, subagent, or sequential execution while preserving the same output contract. Worker selection depends on `{detected_stack}`.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Resolve execution mode from config (`tea_execution_mode`, `tea_capability_probe`)
- ✅ Apply fallback rules deterministically when the requested mode is unsupported
- ✅ Preserve the output schema and temp file naming across all modes
- ❌ Do NOT skip capability checks when probing is enabled
- ❌ Do NOT change output paths or the JSON schema by mode

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Wait for subagent outputs
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, coverage plan from Step 2, knowledge fragments
- Focus: orchestration only (mode selection + worker dispatch)
- Limits: do not generate tests directly (delegate to worker steps)
- Dependencies: Step 2 outputs (coverage plan, target features)

---

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Prepare Execution Context

**Generate a unique timestamp** for temp file naming:

```javascript
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
```

**Prepare input context for subagents:**

```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    if (['false', '0', 'off', 'no'].includes(normalized)) return false;
    if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
  }
  if (value === undefined || value === null) return defaultValue;
  return Boolean(value);
};

const subagentContext = {
  features: /* from Step 2 coverage plan */,
  knowledge_fragments_loaded: /* list of fragments */,
  config: {
    test_framework: config.test_framework,
    use_playwright_utils: config.tea_use_playwright_utils,
    use_pactjs_utils: config.tea_use_pactjs_utils,
    pact_mcp: config.tea_pact_mcp, // "mcp" | "none"
    browser_automation: config.tea_browser_automation, // "auto" | "cli" | "mcp" | "none"
    detected_stack: '{detected_stack}', // "frontend" | "backend" | "fullstack"
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
    provider_endpoint_map: /* from Step 2 coverage plan, if use_pactjs_utils enabled */,
  },
  timestamp: timestamp
};
```

---

### 2. Resolve Execution Mode with Capability Probe

```javascript
const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(subagentContext.config.execution_mode) || 'auto';
const probeEnabled = subagentContext.config.capability_probe;

const supports = {
  subagent: false,
  agentTeam: false,
};

if (probeEnabled) {
  // Probe using runtime-native capability checks or a no-op launch test.
  supports.subagent = runtime.canLaunchSubagents?.() === true;
  supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}

let resolvedMode = requestedMode;

if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}

subagentContext.execution = {
  requestedMode,
  resolvedMode,
  probeEnabled,
  supports,
};
```

Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing is enabled)

If probing is disabled, honor the requested mode strictly. If that mode cannot be executed at runtime, fail with an explicit error instead of a silent fallback.

Report the selected mode before dispatch:

```
⚙️ Execution Mode Resolution:
- Requested: {requestedMode}
- Probe Enabled: {probeEnabled}
- Supports agent-team: {supports.agentTeam}
- Supports subagent: {supports.subagent}
- Resolved: {resolvedMode}
```

---

### 3. Subagent Dispatch Matrix

**Select subagents based on `{detected_stack}`:**

| `{detected_stack}` | Subagent A (API) | Subagent B (E2E) | Subagent B-backend |
| ------------------ | ---------------- | ---------------- | ------------------ |
| `frontend`         | Launch           | Launch           | Skip               |
| `backend`          | Launch           | Skip             | Launch             |
| `fullstack`        | Launch           | Launch           | Launch             |
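The dispatch matrix above reduces to a tiny selection helper; `selectWorkers` is an illustrative name, with Subagent A launching unconditionally:

```javascript
// Map detected_stack to the set of workers to dispatch.
function selectWorkers(detectedStack) {
  return {
    api: true, // Subagent A always launches
    e2e: detectedStack === 'frontend' || detectedStack === 'fullstack',
    backend: detectedStack === 'backend' || detectedStack === 'fullstack',
  };
}
```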
### 3A. Runtime-Managed Parallelism
|
||||
|
||||
When `resolvedMode` is `agent-team` or `subagent`, let the runtime decide concurrency and scheduling. TEA does not impose an additional worker ceiling.
|
||||
|
||||
---
|
||||
|
||||
### Contract Test Generation Note
|
||||
|
||||
When `use_pactjs_utils` is enabled, the API test generation subagent (step-03a) also generates:
|
||||
|
||||
- **Consumer contract tests**: Using `createProviderState` for type-safe provider states
|
||||
- **Provider verification tests**: Using `buildVerifierOptions` for one-call verifier setup
|
||||
- **Message contract tests**: Using `buildMessageVerifierOptions` if async/Kafka patterns detected
|
||||
- **Helper files**: Request filter setup with `createRequestFilter`, shared state constants
|
||||
- **Provider scrutiny**: Subagent reads provider route handlers, types, and validation schemas before generating each interaction (see `contract-testing.md` Provider Scrutiny Protocol)
|
||||
|
||||
When `pact_mcp` is `"mcp"`, the subagent can use SmartBear MCP tools to fetch existing provider states and generate tests informed by broker data.
|
||||
|
||||
---
|
||||
|
||||
### 4. Dispatch Worker A: API Test Generation (always)
|
||||
|
||||
**Dispatch worker:**
|
||||
|
||||
- **Subagent File:** `./step-03a-subagent-api.md`
|
||||
- **Output File:** `/tmp/tea-automate-api-tests-${timestamp}.json`
|
||||
- **Context:** Pass `subagentContext`
|
||||
- **Execution:**
|
||||
- `agent-team` or `subagent`: launch non-blocking
|
||||
- `sequential`: run blocking and wait before next dispatch
|
||||
|
||||
**System Action:**
|
||||
|
||||
```
|
||||
🚀 Launching Subagent A: API Test Generation
|
||||
📝 Output: /tmp/tea-automate-api-tests-${timestamp}.json
|
||||
⚙️ Mode: ${resolvedMode}
|
||||
⏳ Status: Running...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 5. Dispatch Worker B: E2E Test Generation (frontend/fullstack only)
|
||||
|
||||
**If {detected_stack} is `frontend` or `fullstack`:**
|
||||
|
||||
**Dispatch worker:**
|
||||
|
||||
- **Subagent File:** `./step-03b-subagent-e2e.md`
|
||||
- **Output File:** `/tmp/tea-automate-e2e-tests-${timestamp}.json`
|
||||
- **Context:** Pass `subagentContext`
|
||||
- **Execution:**
|
||||
- `agent-team` or `subagent`: launch non-blocking
|
||||
- `sequential`: run blocking and wait before next dispatch
|
||||
|
||||
**System Action:**
|
||||
|
||||
```
|
||||
🚀 Launching Subagent B: E2E Test Generation
|
||||
📝 Output: /tmp/tea-automate-e2e-tests-${timestamp}.json
|
||||
⚙️ Mode: ${resolvedMode}
|
||||
⏳ Status: Running...
|
||||
```
|
||||
|
||||
**If {detected_stack} is `backend`:** Skip this subagent.
|
||||
|
||||
---
|
||||
|
||||
### 6. Dispatch Worker B-backend: Backend Test Generation (backend/fullstack only)
|
||||
|
||||
**If {detected_stack} is `backend` or `fullstack`:**
|
||||
|
||||
**Dispatch worker:**
|
||||
|
||||
- **Subagent File:** `./step-03b-subagent-backend.md`
|
||||
- **Output File:** `/tmp/tea-automate-backend-tests-${timestamp}.json`
|
||||
- **Context:** Pass `subagentContext`
|
||||
- **Execution:**
|
||||
- `agent-team` or `subagent`: launch non-blocking
|
||||
- `sequential`: run blocking and wait before next dispatch
|
||||
|
||||
**System Action:**
|
||||
|
||||
```
|
||||
🚀 Launching Subagent B-backend: Backend Test Generation
|
||||
📝 Output: /tmp/tea-automate-backend-tests-${timestamp}.json
|
||||
⚙️ Mode: ${resolvedMode}
|
||||
⏳ Status: Running...
|
||||
```
|
||||
|
||||
**If {detected_stack} is `frontend`:** Skip this subagent.
|
||||
|
||||
---
|
||||
|
||||
### 7. Wait for Expected Worker Completion

**If `resolvedMode` is `agent-team` or `subagent`:**

```
⏳ Waiting for subagents to complete...
├── Subagent A (API): Running... ⟳
├── Subagent B (E2E): Running... ⟳ [if frontend/fullstack]
└── Subagent B-backend: Running... ⟳ [if backend/fullstack]

[... time passes ...]

├── Subagent A (API): Complete ✅
├── Subagent B (E2E): Complete ✅ [if frontend/fullstack]
└── Subagent B-backend: Complete ✅ [if backend/fullstack]

✅ All subagents completed successfully!
```

**If `resolvedMode` is `sequential`:**

```
✅ Sequential mode: each worker already completed during dispatch.
```

**Verify outputs exist (based on `{detected_stack}`):**

```javascript
const fs = require('node:fs');

// The API output is required for every stack type, so check it first.
const apiOutputExists = fs.existsSync(`/tmp/tea-automate-api-tests-${timestamp}.json`);
if (!apiOutputExists) throw new Error('API subagent output missing!');

// Check stack-specific outputs based on detected_stack.
if (detected_stack === 'frontend' || detected_stack === 'fullstack') {
  const e2eOutputExists = fs.existsSync(`/tmp/tea-automate-e2e-tests-${timestamp}.json`);
  if (!e2eOutputExists) throw new Error('E2E subagent output missing!');
}
if (detected_stack === 'backend' || detected_stack === 'fullstack') {
  const backendOutputExists = fs.existsSync(`/tmp/tea-automate-backend-tests-${timestamp}.json`);
  if (!backendOutputExists) throw new Error('Backend subagent output missing!');
}
```

---
### Subagent Output Schema Contract

The aggregate step expects both outputs to include `success`, but the payload shapes are intentionally different:

- `step-03b-subagent-e2e.md` output includes `success`, `subagent`, `tests`, `fixture_needs`, `knowledge_fragments_used`, `test_count`, and `summary`.
- `step-03b-subagent-backend.md` output includes `success`, `subagent`, `subagentType`, `testsGenerated`, `coverageSummary` (with `fixtureNeeds`), `status`, `knowledge_fragments_used`, and `summary`.

The aggregate step reads whichever output file(s) exist based on `{detected_stack}` and must use the matching schema per subagent type.
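A minimal sketch (assuming Node.js; field names come from the schema contract above, everything else is illustrative) of how an aggregator can normalize both shapes into one common form:

```javascript
// Normalize either subagent output shape into { files, fixtureNeeds }.
// Field names follow the schema contract; the function itself is a sketch.
function normalizeSubagentOutput(output) {
  if (output.subagent === 'backend-tests') {
    return {
      files: output.testsGenerated,
      fixtureNeeds: output.coverageSummary.fixtureNeeds,
    };
  }
  // API and E2E subagents share the { tests, fixture_needs } shape.
  return { files: output.tests, fixtureNeeds: output.fixture_needs };
}
```

Normalizing at the read boundary keeps the rest of the aggregation logic free of per-subagent branching.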
---

### 8. Execution Report

**Display performance metrics:**

```
🚀 Performance Report:
- Execution Mode: {resolvedMode}
- Stack Type: {detected_stack}
- API Test Generation: ~X minutes
- E2E Test Generation: ~Y minutes [if frontend/fullstack]
- Backend Test Generation: ~Z minutes [if backend/fullstack]
- Total Elapsed: ~mode-dependent
- Parallel Gain: ~40-70% faster when mode is subagent/agent-team
```

---
### 9. Proceed to Aggregation

**Load aggregation step:**
Load next step: `{nextStepFile}`

The aggregation step (3C) will:

- Read all subagent outputs (based on `{detected_stack}`)
- Write all test files to disk
- Generate shared fixtures and helpers
- Calculate summary statistics
---

## EXIT CONDITION

Proceed to Step 3C (Aggregation) when:

- ✅ Subagent A (API tests) completed successfully
- ✅ Subagent B (E2E tests) completed successfully [if frontend/fullstack]
- ✅ Subagent B-backend (Backend tests) completed successfully [if backend/fullstack]
- ✅ All expected output files exist and are valid JSON
- ✅ Execution metrics displayed

**Do NOT proceed if:**

- ❌ Any launched subagent failed
- ❌ Output files missing or corrupted
- ❌ Timeout occurred (parallel mode only)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- All required subagents launched successfully (based on `{detected_stack}`)
- All required worker steps completed without errors
- Output files generated and valid
- Fallback behavior respected configuration and capability probe rules

### ❌ SYSTEM FAILURE:

- Failed to launch subagents
- One or more subagents failed
- Output files missing or invalid
- Unsupported requested mode with probing disabled

**Master Rule:** Deterministic mode selection + stable output contract. Use the best supported mode, then aggregate normally.

@@ -0,0 +1,263 @@
---
name: 'step-03a-subagent-api'
description: 'Subagent: Generate API tests only'
subagent: true
outputFile: '/tmp/tea-automate-api-tests-{{timestamp}}.json'
---

# Subagent 3A: Generate API Tests

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with E2E test generation.

**What you have from parent workflow:**

- Target features/components identified in Step 2
- Knowledge fragments loaded: api-request, data-factories, api-testing-patterns
- Config: test framework, Playwright Utils enabled/disabled, Pact.js Utils enabled/disabled, Pact MCP mode
- Coverage plan: which API endpoints need testing

**Your task:** Generate API tests ONLY (not E2E, not fixtures, not other test types).

**If `use_pactjs_utils` is enabled:** Also generate consumer contract tests and provider verification tests alongside API tests. Use the loaded pactjs-utils fragments (`pactjs-utils-overview`, `pactjs-utils-consumer-helpers`, `pactjs-utils-provider-verifier`, `pactjs-utils-request-filter`) for patterns. If `pact_mcp` is `"mcp"`, use SmartBear MCP tools (Fetch Provider States, Generate Pact Tests) to inform test generation.

---
## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate API tests ONLY
- ✅ Output structured JSON to temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate E2E tests (that's subagent 3B)
- ❌ Do NOT run tests (that's step 4)
- ❌ Do NOT generate fixtures yet (that's step 3C aggregation)

---
## SUBAGENT TASK

### 1. Identify API Endpoints

From the coverage plan (Step 2 output), identify:

- Which API endpoints need test coverage
- Expected request/response formats
- Authentication requirements
- Error scenarios to test
### 2. Generate API Test Files

For each API endpoint, create a test file in `tests/api/[feature].spec.ts`:

**Test Structure:**

```typescript
import { test, expect } from '@playwright/test';
// If Playwright Utils enabled:
// import { apiRequest } from '@playwright-utils/api';

test.describe('[Feature] API Tests', () => {
  test('[P0] should handle successful [operation]', async ({ request }) => {
    // Use the apiRequest helper if Playwright Utils is enabled;
    // otherwise use the standard request fixture.
    const response = await request.post('/api/endpoint', {
      data: {
        /* test data */
      },
    });

    expect(response.status()).toBe(200);
    expect(await response.json()).toMatchObject({
      /* expected */
    });
  });

  test('[P1] should handle [error scenario]', async ({ request }) => {
    // Test error handling
  });
});
```
**Requirements:**

- ✅ Use `apiRequest()` helper if Playwright Utils enabled (from api-request fragment)
- ✅ Use data factories for test data (from data-factories fragment)
- ✅ Follow API testing patterns (from api-testing-patterns fragment)
- ✅ Include priority tags [P0], [P1], [P2], [P3]
- ✅ Test both happy path and error scenarios
- ✅ Use proper TypeScript types
- ✅ Deterministic assertions (no timing dependencies)

**If Pact.js Utils enabled (from `subagentContext.config.use_pactjs_utils`):**

- ✅ Generate consumer contract tests in `pact/http/consumer/` using the `createProviderState({ name, params })` pattern
- ✅ Generate provider verification tests in `pact/http/provider/` using the `buildVerifierOptions({ provider, port, includeMainAndDeployed, stateHandlers })` pattern
- ✅ Generate request filter helpers in `pact/http/helpers/` using `createRequestFilter({ tokenGenerator: () => string })`
- ✅ Generate shared state constants in `pact/http/helpers/states.ts`
- ✅ If async/message patterns detected, generate message consumer tests in `pact/message/` using `buildMessageVerifierOptions`
- ✅ **Provider endpoint comment MANDATORY** on every Pact interaction: `// Provider endpoint: <path> -> <METHOD> <route>`
- ⚠️ **Postel's Law for matchers**: Use `like()`, `eachLike()`, `string()`, `integer()` matchers ONLY in `willRespondWith` (responses). Request bodies in `withRequest` MUST use exact values — never wrap request bodies in `like()`. The consumer controls what it sends, so contracts should be strict about request shape.
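To illustrate the Postel's Law rule without assuming a Pact install, here is a framework-free sketch: `like()` is a local stand-in for Pact's matcher (real code would import it from `@pact-foundation/pact`), and the interaction object mirrors, but is not, the real Pact DSL. The point is the asymmetry: exact request body, matcher-wrapped response.

```javascript
// Stand-in for Pact's like() matcher, defined locally for illustration.
const like = (example) => ({ 'pact:matcher:type': 'type', value: example });

// Provider endpoint: /api/users -> POST /api/users (illustrative route)
const interaction = {
  withRequest: {
    method: 'POST',
    path: '/api/users',
    // Exact values only: the consumer controls what it sends.
    body: { name: 'Ada', email: 'ada@example.com' },
  },
  willRespondWith: {
    status: 201,
    // Matchers only here: be liberal in what you accept back.
    body: { id: like(42), name: like('Ada'), email: like('ada@example.com') },
  },
};
```

A contract built this way breaks when the consumer changes what it sends (good) but tolerates a provider returning a different `id` value (also good).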
### 1.5 Provider Source Scrutiny (CDC Only)

**CRITICAL**: Before generating ANY Pact consumer interaction, perform provider source scrutiny per the **Seven-Point Scrutiny Checklist** defined in `contract-testing.md`. Do NOT generate response matchers from consumer-side types alone — this is the #1 cause of contract verification failures.

The seven points to verify for each interaction:

1. Response shape
2. Status codes
3. Field names
4. Enum values
5. Required fields
6. Data types
7. Nested structures

**Source priority**: Provider source code is most authoritative. When an OpenAPI/Swagger spec exists (`openapi.yaml`, `openapi.json`, `swagger.json`), use it as a complementary or alternative source — it documents the provider's contract explicitly and can be faster to parse than tracing through handler code. When both exist, cross-reference them; if they disagree, the source code wins. Document the discrepancy in the scrutiny evidence block (e.g., `OpenAPI shows 200 but handler returns 201; using handler behavior`) and flag it in the output JSON `summary` so it is discoverable by downstream consumers or audits.

**Scrutiny Sequence** (for each endpoint in the coverage plan):

1. **READ provider route handler and/or OpenAPI spec**: Find the handler file from `subagentContext.config.provider_endpoint_map` or by scanning the provider codebase. Also check for OpenAPI/Swagger spec files. Extract:
   - Exact status codes returned (`res.status(201)` / OpenAPI `responses` keys)
   - Response construction (`res.json({ data: ... })` / OpenAPI `schema`)
   - Error handling paths (what status codes for what conditions)

2. **READ provider type/model/DTO definitions**: Find the response type referenced by the handler or OpenAPI `$ref` schemas. Extract:
   - Exact field names (`transaction_id` not `transactionId`)
   - Field types (`string` ID vs `number` ID / OpenAPI `type` + `format`)
   - Optional vs required fields (OpenAPI `required` array)
   - Nested object structures (OpenAPI `$ref`, `allOf`, `oneOf`)

3. **READ provider validation schemas**: Find Joi/Zod/class-validator schemas or OpenAPI request body `schema.required`. Extract:
   - Required request fields and headers
   - Enum/union type allowed values (`"active" | "inactive"` / OpenAPI `enum`)
   - Request body constraints

4. **Cross-reference findings** against consumer expectations:
   - Does the consumer expect the same field names the provider sends?
   - Does the consumer expect the same status codes the provider returns?
   - Does the consumer expect the same nesting the provider produces?

5. **Document scrutiny evidence** as a block comment in the generated test:

   ```typescript
   /*
    * Provider Scrutiny Evidence:
    * - Handler: server/src/routes/userHandlers.ts:45
    * - OpenAPI: server/openapi.yaml paths./api/v2/users/{userId}.get (if available)
    * - Response type: UserResponseDto (server/src/types/user.ts:12)
    * - Status: 201 for creation (line 52), 400 for validation error (line 48)
    * - Fields: { id: number, name: string, email: string, role: "user" | "admin" }
    * - Required request headers: Authorization (Bearer token)
    */
   ```

6. **Graceful degradation** when provider source is not accessible (follows the canonical four-step protocol from `contract-testing.md`):
   1. **OpenAPI/Swagger spec available**: Use the spec as the source of truth for response shapes, status codes, and field names
   2. **Pact Broker available** (when `pact_mcp` is `"mcp"` in `subagentContext.config`): Use SmartBear MCP tools to fetch existing provider states and verified interactions as reference
   3. **Neither available**: Generate from consumer types but use the TODO form of the mandatory comment: `// Provider endpoint: TODO — provider source not accessible, verify manually`. Set `provider_scrutiny: "pending"` in the output JSON
   4. **Never silently guess**: Document all assumptions in the scrutiny evidence block

> ⚠️ **Anti-pattern**: Generating response matchers from consumer-side types alone. This produces contracts that reflect what the consumer _wishes_ the provider returns, not what it _actually_ returns. Always read provider source or OpenAPI spec first.
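The OpenAPI path of the scrutiny sequence can be sketched as follows (inline spec fragment with a hypothetical endpoint; a real run would read `openapi.json` from disk):

```javascript
// Extract status codes and required response fields for one operation
// from an OpenAPI document.
function scrutinizeOperation(spec, pathKey, method) {
  const op = spec.paths[pathKey][method];
  const statusCodes = Object.keys(op.responses);
  const schema = op.responses['200']?.content?.['application/json']?.schema ?? {};
  return { statusCodes, requiredFields: schema.required ?? [] };
}

// Inline example spec fragment (hypothetical endpoint).
const spec = {
  paths: {
    '/api/v2/users/{userId}': {
      get: {
        responses: {
          200: {
            content: {
              'application/json': {
                schema: {
                  type: 'object',
                  required: ['id', 'email'],
                  properties: { id: { type: 'integer' }, email: { type: 'string' } },
                },
              },
            },
          },
          404: { description: 'Not found' },
        },
      },
    },
  },
};
```

The extracted status codes and required fields feed directly into the scrutiny evidence block and into the response matchers.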
### 3. Track Fixture Needs

Identify fixtures needed for API tests:

- Authentication fixtures (auth tokens, API keys)
- Data factories (user data, product data, etc.)
- API client configurations

**Do NOT create fixtures yet** - just track what's needed for the aggregation step.

---
## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-automate-api-tests-{{timestamp}}.json`

```json
{
  "success": true,
  "subagent": "api-tests",
  "tests": [
    {
      "file": "tests/api/auth.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "API tests for authentication endpoints",
      "priority_coverage": { "P0": 3, "P1": 2, "P2": 1, "P3": 0 }
    },
    {
      "file": "tests/api/checkout.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "API tests for checkout endpoints",
      "priority_coverage": { "P0": 2, "P1": 3, "P2": 1, "P3": 0 }
    }
  ],
  "fixture_needs": ["authToken", "userDataFactory", "productDataFactory"],
  "knowledge_fragments_used": ["api-request", "data-factories", "api-testing-patterns"],
  "provider_scrutiny": "completed",
  "provider_files_read": ["server/src/routes/authHandlers.ts", "server/src/routes/checkoutHandlers.ts", "server/src/types/auth.ts"],
  "test_count": 12,
  "summary": "Generated 12 API test cases covering 3 features"
}
```

**On Error:**

```json
{
  "success": false,
  "subagent": "api-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```
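A minimal sketch of how the parent workflow might validate this output before aggregation (field names come from the schema above; the checks themselves are illustrative, not part of the contract):

```javascript
// Validate an API-subagent output object against the contract above.
// Returns a list of problems; an empty list means the output is usable.
function validateApiOutput(output) {
  const problems = [];
  if (output.success !== true) {
    problems.push(`subagent reported failure: ${output.error ?? 'unknown'}`);
  }
  if (output.subagent !== 'api-tests') problems.push('unexpected subagent name');
  if (!Array.isArray(output.tests)) {
    problems.push('missing tests array');
  } else {
    for (const t of output.tests) {
      if (!t.file || !t.content) problems.push(`test entry missing file/content: ${t.file ?? '?'}`);
    }
  }
  return problems;
}
```

Collecting all problems (rather than throwing on the first) lets the parent report every contract violation in one pass.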
---

## EXIT CONDITION

Subagent completes when:

- ✅ All API endpoints have test files generated
- ✅ All tests follow knowledge fragment patterns
- ✅ JSON output written to temp file
- ✅ Fixture needs tracked

**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.

---

## 🚨 SUBAGENT SUCCESS METRICS

### ✅ SUCCESS:

- All API tests generated following patterns
- JSON output valid and complete
- No E2E/component/unit tests included (out of scope)
- Every Pact interaction has `// Provider endpoint:` comment (if CDC enabled)
- Provider source scrutiny completed or gracefully degraded with TODO markers (if CDC enabled)
- Scrutiny evidence documented as block comments in test files (if CDC enabled)

### ❌ FAILURE:

- Generated tests other than API tests
- Did not follow knowledge fragment patterns
- Invalid or missing JSON output
- Ran tests (not subagent responsibility)
- Pact interactions missing provider endpoint comments (if CDC enabled)
- Response matchers generated from consumer-side types without provider scrutiny (if CDC enabled)

@@ -0,0 +1,246 @@
---
name: 'step-03b-subagent-backend'
description: 'Subagent: Generate backend tests only (unit, integration, contract)'
subagent: true
outputFile: '/tmp/tea-automate-backend-tests-{{timestamp}}.json'
---

# Subagent 3B-backend: Generate Backend Tests

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with API test generation (and optionally E2E test generation for fullstack projects).

**What you have from parent workflow:**

- Target features/services identified in Step 2
- Knowledge fragments loaded: test-levels-framework, test-priorities-matrix, data-factories
- Config: test framework, detected stack type
- Coverage plan: which services/modules need backend testing

**Your task:** Generate backend tests ONLY (unit, integration, contract - not API endpoint tests, not E2E).

---
## MANDATORY EXECUTION RULES

- Read this entire subagent file before acting
- Generate backend tests ONLY (unit, integration, contract)
- Output structured JSON to temp file using the subagent output schema contract
- Follow knowledge fragment patterns
- Do NOT generate API endpoint tests (that's subagent 3A)
- Do NOT generate E2E tests (that's subagent 3B-E2E)
- Do NOT run tests (that's step 4)
- Do NOT generate fixtures yet (that's step 3C aggregation)

---
## SUBAGENT TASK

### 1. Identify Test Targets

From the coverage plan (Step 2 output), identify:

- Which services/modules need unit test coverage
- Which integrations need integration test coverage (database, message queues, external services)
- Which service contracts need contract test coverage (Pact, schema validation)
- Business logic functions requiring edge case coverage
### 2. Detect Framework & Language

From `config.test_framework` and project manifests, determine:

- **Python (pytest)**: Use `pytest` conventions, `conftest.py` fixtures, `@pytest.mark` decorators
- **Java/Kotlin (JUnit)**: Use JUnit 5 annotations (`@Test`, `@BeforeEach`, `@Nested`), Mockito for mocking
- **Go (go test)**: Use `*_test.go` files, `testing.T`, table-driven tests, `testify` assertions
- **C#/.NET (xUnit)**: Use `[Fact]`, `[Theory]`, `[InlineData]`, `Moq` for mocking
- **Ruby (RSpec)**: Use `describe`/`context`/`it` blocks, `let`/`before` helpers, `FactoryBot`
### 3. Generate Unit Tests

For each module/service, create test files following language-idiomatic patterns:

**Python (pytest) example:**

```python
import pytest
from myapp.services.user_service import UserService
# Hypothetical error module; adjust to wherever the service defines it.
from myapp.services.errors import DuplicateEmailError


class TestUserService:
    """[P0] Unit tests for UserService"""

    def test_create_user_with_valid_data(self, user_factory):
        """Should create user when data is valid"""
        user_data = user_factory.build()
        result = UserService.create(user_data)
        assert result.email == user_data["email"]

    def test_create_user_rejects_duplicate_email(self, user_factory):
        """[P1] Should reject duplicate email"""
        user_data = user_factory.build(email="existing@test.com")
        with pytest.raises(DuplicateEmailError):
            UserService.create(user_data)
```
**Go (go test) example:**

```go
func TestUserService_Create(t *testing.T) {
	tests := []struct {
		name    string
		input   CreateUserInput
		wantErr bool
	}{
		{"valid user", validInput(), false},
		{"duplicate email", duplicateInput(), true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			svc := NewUserService(mockRepo)
			_, err := svc.Create(tt.input)
			if (err != nil) != tt.wantErr {
				t.Errorf("Create() error = %v, wantErr %v", err, tt.wantErr)
			}
		})
	}
}
```
**Requirements:**

- Follow the detected framework's idiomatic test patterns
- Include priority tags [P0], [P1], [P2], [P3] in test descriptions
- Use proper mocking for external dependencies (database, APIs, message queues)
- Test both happy path and error cases
- Use proper typing/type hints where applicable
- No hard-coded test data; use factories or builders
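The last requirement (factories over hard-coded data) is language-agnostic; here is a minimal builder sketch in JavaScript, with hypothetical default fields:

```javascript
// Minimal test-data builder: stable defaults, per-test overrides,
// and unique values where collisions would break test independence.
let seq = 0;
function buildUser(overrides = {}) {
  seq += 1;
  return {
    id: seq,
    email: `user${seq}@test.example`,
    role: 'user',
    ...overrides,
  };
}
```

Each call yields a distinct email unless a test pins one via overrides, so tests never collide on unique columns.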
### 4. Generate Integration Tests

For service integrations, create integration test files:

- Database integration tests (with test database or in-memory alternatives)
- Message queue consumer/producer tests
- Cache integration tests
- External service integration tests (with mocked HTTP clients)

### 5. Generate Contract Tests (if applicable)

If the project uses microservices or has defined API contracts:

- Pact consumer/provider tests
- Schema validation tests (JSON Schema, Protobuf)
- OpenAPI spec compliance tests

### 6. Track Fixture Needs

Identify fixtures/helpers needed for backend tests:

- Database fixtures (seed data, cleanup)
- Factory functions (test data builders)
- Mock services (HTTP mocks, message queue mocks)
- Configuration fixtures (test environment config)

**Do NOT create fixtures yet** - just track what's needed for the aggregation step.

---
## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-automate-backend-tests-{{timestamp}}.json`

```json
{
  "subagentType": "backend",
  "testsGenerated": [
    {
      "file": "tests/unit/test_user_service.py",
      "content": "[full test file content]",
      "description": "Unit tests for UserService",
      "priority_coverage": { "P0": 3, "P1": 2, "P2": 1, "P3": 0 }
    },
    {
      "file": "tests/integration/test_user_repository.py",
      "content": "[full test file content]",
      "description": "Integration tests for user database operations",
      "priority_coverage": { "P0": 1, "P1": 2, "P2": 1, "P3": 0 }
    }
  ],
  "coverageSummary": {
    "totalTests": 15,
    "testLevels": ["unit", "integration", "contract"],
    "fixtureNeeds": ["databaseFixture", "userFactory", "mockHttpClient"]
  },
  "status": "complete",
  "success": true,
  "subagent": "backend-tests",
  "knowledge_fragments_used": ["test-levels-framework", "test-priorities-matrix", "data-factories"],
  "summary": "Generated 15 backend test cases (10 unit, 4 integration, 1 contract)"
}
```

**On Error:**

```json
{
  "subagentType": "backend",
  "testsGenerated": [],
  "coverageSummary": {
    "totalTests": 0,
    "testLevels": [],
    "fixtureNeeds": []
  },
  "status": "partial",
  "success": false,
  "subagent": "backend-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```

---
## EXIT CONDITION

Subagent completes when:

- All identified modules have backend test files generated
- All tests follow language-idiomatic patterns
- JSON output written to temp file using the subagent output schema contract
- Fixture needs tracked

**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.

---

## SUBAGENT SUCCESS METRICS

### SUCCESS:

- All backend tests generated following idiomatic patterns
- JSON output valid and complete, matching the subagent output schema contract
- No E2E or browser tests included (out of scope)
- Proper mocking used for external dependencies
- Priority tags assigned to all test cases

### FAILURE:

- Generated tests other than backend tests (unit/integration/contract)
- Did not follow language-idiomatic patterns
- Invalid or missing JSON output
- Output schema does not match the contract
- Ran tests (not subagent responsibility)
- Used real external services instead of mocks

@@ -0,0 +1,213 @@
---
name: 'step-03b-subagent-e2e'
description: 'Subagent: Generate E2E tests only'
subagent: true
outputFile: '/tmp/tea-automate-e2e-tests-{{timestamp}}.json'
---

# Subagent 3B: Generate E2E Tests

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with API test generation.

**What you have from parent workflow:**

- Target features/user journeys identified in Step 2
- Knowledge fragments loaded: fixture-architecture, network-first, selector-resilience
- Config: test framework, Playwright Utils enabled/disabled
- Coverage plan: which user journeys need E2E testing

**Your task:** Generate E2E tests ONLY (not API, not fixtures, not other test types).

---
## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate E2E tests ONLY
- ✅ Output structured JSON to temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate API tests (that's subagent 3A)
- ❌ Do NOT run tests (that's step 4)
- ❌ Do NOT generate fixtures yet (that's step 3C aggregation)

---
## SUBAGENT TASK

### 1. Identify User Journeys

From the coverage plan (Step 2 output), identify:

- Which user journeys need E2E coverage
- Critical user paths (authentication, checkout, profile, etc.)
- UI interactions required
- Expected visual states
### 2. Browser Interaction (Selector Verification)

**Automation mode:** `config.tea_browser_automation`

If `auto` (fall back to MCP if CLI unavailable; if neither is available, generate from best practices):

- Open the target page first, then verify selectors with a snapshot:
  - `playwright-cli -s=tea-automate-{{timestamp}} open <target_url>`
  - `playwright-cli -s=tea-automate-{{timestamp}} snapshot` → map refs to Playwright locators
    - ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
    - ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
- `playwright-cli -s=tea-automate-{{timestamp}} close` when done

If `cli` (CLI only — do NOT fall back to MCP; generate from best practices if CLI unavailable):

- Open the target page first, then verify selectors with a snapshot:
  - `playwright-cli -s=tea-automate-{{timestamp}} open <target_url>`
  - `playwright-cli -s=tea-automate-{{timestamp}} snapshot` → map refs to Playwright locators
    - ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
    - ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
- `playwright-cli -s=tea-automate-{{timestamp}} close` when done

> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-automate-{{timestamp}} close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.

If `mcp`:

- Use MCP tools for selector verification (current behavior)

If `none`:

- Generate selectors from best practices without browser verification
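The ref-to-locator mapping above can be expressed as a small helper. This is a sketch: the ref object shape and the role/text fallback order are assumptions for illustration, not the documented playwright-cli snapshot format.

```javascript
// Translate an accessibility-snapshot ref into Playwright locator code.
function refToLocator(ref) {
  if (ref.role && ref.name) {
    return `page.getByRole('${ref.role}', { name: '${ref.name}' })`;
  }
  if (ref.text) return `page.getByText('${ref.text}')`;
  throw new Error(`cannot map ref: ${JSON.stringify(ref)}`);
}
```

Throwing on unmappable refs (instead of falling back to a CSS selector) keeps the generated tests on resilient, role-based locators.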
### 3. Generate E2E Test Files

For each user journey, create a test file in `tests/e2e/[feature].spec.ts`:

**Test Structure:**

```typescript
import { test, expect } from '@playwright/test';

test.describe('[Feature] E2E User Journey', () => {
  test('[P0] should complete [user journey]', async ({ page }) => {
    // Navigate to starting point
    await page.goto('/feature');

    // Interact with UI
    await page.getByRole('button', { name: 'Submit' }).click();

    // Assert expected state
    await expect(page.getByText('Success')).toBeVisible();
  });

  test('[P1] should handle [edge case]', async ({ page }) => {
    // Test edge case scenario
  });
});
```
**Requirements:**

- ✅ Follow fixture architecture patterns (from fixture-architecture fragment)
- ✅ Use network-first patterns: intercept before navigate (from network-first fragment)
- ✅ Use resilient selectors: getByRole, getByText, getByLabel (from selector-resilience fragment)
- ✅ Include priority tags [P0], [P1], [P2], [P3]
- ✅ Test complete user journeys (not isolated clicks)
- ✅ Use proper TypeScript types
- ✅ Deterministic waits (no hard sleeps; use expect().toBeVisible())
### 4. Track Fixture Needs

Identify fixtures needed for E2E tests:

- Page object models (if complex)
- Authentication fixtures (logged-in user state)
- Network mocks/intercepts
- Test data fixtures

**Do NOT create fixtures yet** - just track what's needed for the aggregation step.

---
## OUTPUT FORMAT

Write JSON to temp file: `/tmp/tea-automate-e2e-tests-{{timestamp}}.json`

```json
{
  "success": true,
  "subagent": "e2e-tests",
  "tests": [
    {
      "file": "tests/e2e/authentication.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "E2E tests for user authentication journey",
      "priority_coverage": { "P0": 2, "P1": 3, "P2": 2, "P3": 0 }
    },
    {
      "file": "tests/e2e/checkout.spec.ts",
      "content": "[full TypeScript test file content]",
      "description": "E2E tests for checkout journey",
      "priority_coverage": { "P0": 3, "P1": 2, "P2": 1, "P3": 0 }
    }
  ],
  "fixture_needs": ["authenticatedUserFixture", "paymentMockFixture", "checkoutDataFixture"],
  "knowledge_fragments_used": ["fixture-architecture", "network-first", "selector-resilience"],
  "test_count": 15,
  "summary": "Generated 15 E2E test cases covering 5 user journeys"
}
```

**On Error:**

```json
{
  "success": false,
  "subagent": "e2e-tests",
  "error": "Error message describing what went wrong",
  "partial_output": {
    /* any tests generated before error */
  }
}
```

---
## EXIT CONDITION
|
||||
|
||||
Subagent completes when:
|
||||
|
||||
- ✅ All user journeys have E2E test files generated
|
||||
- ✅ All tests follow knowledge fragment patterns
|
||||
- ✅ JSON output written to temp file
|
||||
- ✅ Fixture needs tracked
|
||||
|
||||
**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SUBAGENT SUCCESS METRICS
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- All E2E tests generated following patterns
|
||||
- JSON output valid and complete
|
||||
- No API/component/unit tests included (out of scope)
|
||||
- Resilient selectors used (getByRole, getByText)
|
||||
- Network-first patterns applied (intercept before navigate)
|
||||
|
||||
### ❌ FAILURE:
|
||||
|
||||
- Generated tests other than E2E tests
|
||||
- Did not follow knowledge fragment patterns
|
||||
- Invalid or missing JSON output
|
||||
- Ran tests (not subagent responsibility)
|
||||
- Used brittle selectors (CSS classes, XPath)
|
||||
@@ -0,0 +1,393 @@
---
name: 'step-03c-aggregate'
description: 'Aggregate subagent outputs and complete test infrastructure'
outputFile: '{test_artifacts}/automation-summary.md'
nextStepFile: './step-04-validate-and-summarize.md'
---

# Step 3C: Aggregate Test Generation Results

## STEP GOAL

Read outputs from parallel subagents (API + E2E and/or Backend test generation, based on `{detected_stack}`), aggregate results, and create supporting infrastructure (fixtures, helpers).

---

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Read subagent outputs from temp files
- ✅ Generate shared fixtures based on fixture needs from all subagents
- ✅ Write all generated test files to disk
- ❌ Do NOT regenerate tests (use subagent outputs)
- ❌ Do NOT run tests yet (that's step 4)

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, subagent outputs from temp files
- Focus: aggregation and fixture generation only
- Limits: do not execute future steps
- Dependencies: Step 3A and 3B subagent outputs

---

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Read Subagent Outputs

**Read API test subagent output (always):**

```javascript
const apiTestsPath = '/tmp/tea-automate-api-tests-{{timestamp}}.json';
const apiTestsOutput = JSON.parse(fs.readFileSync(apiTestsPath, 'utf8'));
```

**Read E2E test subagent output (if `{detected_stack}` is `frontend` or `fullstack`):**

```javascript
let e2eTestsOutput = null;
if (detected_stack === 'frontend' || detected_stack === 'fullstack') {
  const e2eTestsPath = '/tmp/tea-automate-e2e-tests-{{timestamp}}.json';
  e2eTestsOutput = JSON.parse(fs.readFileSync(e2eTestsPath, 'utf8'));
}
```

**Read Backend test subagent output (if `{detected_stack}` is `backend` or `fullstack`):**

```javascript
let backendTestsOutput = null;
if (detected_stack === 'backend' || detected_stack === 'fullstack') {
  const backendTestsPath = '/tmp/tea-automate-backend-tests-{{timestamp}}.json';
  backendTestsOutput = JSON.parse(fs.readFileSync(backendTestsPath, 'utf8'));
}
```

**Verify all launched subagents succeeded:**

- Check `apiTestsOutput.success === true`
- If E2E was launched: check `e2eTestsOutput.success === true`
- If Backend was launched: check `backendTestsOutput.success === true`
- If any failed, report error and stop (don't proceed)
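
The verification above can be sketched as a small guard over the parsed outputs. This is illustrative only: the function name is an assumption, and unlaunched subagents are represented as `null`, matching the variables shown earlier.

```javascript
// Illustrative guard: fail fast if any launched subagent reported failure.
// Unlaunched subagents (null) are skipped; the shape follows the JSON contract above.
function verifySubagents(outputs) {
  const failed = outputs.filter((o) => o !== null && o.success !== true);
  if (failed.length > 0) {
    const names = failed.map((o) => o.subagent).join(', ');
    throw new Error(`Subagent(s) failed: ${names} - stopping before aggregation`);
  }
}
```

For example, `verifySubagents([apiTestsOutput, e2eTestsOutput, backendTestsOutput])` throws before any files are written if one output has `success: false`.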

---

### 2. Write All Test Files to Disk

**Write API test files:**

```javascript
apiTestsOutput.tests.forEach((test) => {
  fs.writeFileSync(test.file, test.content, 'utf8');
  console.log(`✅ Created: ${test.file}`);
});
```

**Write E2E test files (if `{detected_stack}` is `frontend` or `fullstack`):**

```javascript
if (e2eTestsOutput) {
  e2eTestsOutput.tests.forEach((test) => {
    fs.writeFileSync(test.file, test.content, 'utf8');
    console.log(`✅ Created: ${test.file}`);
  });
}
```

**Write Backend test files (if `{detected_stack}` is `backend` or `fullstack`):**

```javascript
if (backendTestsOutput) {
  backendTestsOutput.testsGenerated.forEach((test) => {
    fs.writeFileSync(test.file, test.content, 'utf8');
    console.log(`✅ Created: ${test.file}`);
  });
}
```

---

### 3. Aggregate Fixture Needs

**Collect all fixture needs from all launched subagents:**

```javascript
const allFixtureNeeds = [
  ...apiTestsOutput.fixture_needs,
  ...(e2eTestsOutput ? e2eTestsOutput.fixture_needs : []),
  ...(backendTestsOutput ? backendTestsOutput.coverageSummary?.fixtureNeeds || [] : []),
];

// Remove duplicates
const uniqueFixtures = [...new Set(allFixtureNeeds)];
```

**Categorize fixtures:**

- **Authentication fixtures:** authToken, authenticatedUserFixture, etc.
- **Data factories:** userDataFactory, productDataFactory, etc.
- **Network mocks:** paymentMockFixture, apiResponseMocks, etc.
- **Test helpers:** wait/retry/assertion helpers

---

### 4. Generate Fixture Infrastructure

**Create or update fixture files based on needs:**

**A) Authentication Fixtures** (`tests/fixtures/auth.ts`):

```typescript
import { test as base } from '@playwright/test';

export const test = base.extend({
  authenticatedUser: async ({ page }, use) => {
    // Login logic
    await page.goto('/login');
    await page.fill('[name="email"]', 'test@example.com');
    await page.fill('[name="password"]', 'password');
    await page.click('button[type="submit"]');
    await page.waitForURL('/dashboard');

    await use(page);
  },

  authToken: async ({ request }, use) => {
    // Get auth token for API tests
    const response = await request.post('/api/auth/login', {
      data: { email: 'test@example.com', password: 'password' },
    });
    const { token } = await response.json();

    await use(token);
  },
});
```

**B) Data Factories** (`tests/fixtures/data-factories.ts`):

```typescript
import { faker } from '@faker-js/faker';

export const createUserData = (overrides = {}) => ({
  name: faker.person.fullName(),
  email: faker.internet.email(),
  ...overrides,
});

export const createProductData = (overrides = {}) => ({
  name: faker.commerce.productName(),
  price: faker.number.int({ min: 10, max: 1000 }),
  ...overrides,
});
```

**C) Network Mocks** (`tests/fixtures/network-mocks.ts`):

```typescript
import { Page } from '@playwright/test';

export const mockPaymentSuccess = async (page: Page) => {
  await page.route('/api/payment/**', (route) => {
    route.fulfill({
      status: 200,
      body: JSON.stringify({ success: true, transactionId: '12345' }),
    });
  });
};
```

**D) Helper Utilities** (`tests/fixtures/helpers.ts`):

```typescript
import { Page } from '@playwright/test';

export const waitForApiResponse = async (page: Page, urlPattern: string) => {
  return page.waitForResponse((response) => response.url().includes(urlPattern) && response.ok());
};
```

---

### 5. Calculate Summary Statistics

**Aggregate test counts (based on `{detected_stack}`):**

```javascript
const e2eCount = e2eTestsOutput ? e2eTestsOutput.test_count : 0;
const backendCount = backendTestsOutput ? (backendTestsOutput.coverageSummary?.totalTests ?? 0) : 0;

const resolvedMode = subagentContext?.execution?.resolvedMode;
const subagentExecutionLabel =
  resolvedMode === 'sequential'
    ? 'SEQUENTIAL (API then dependent workers)'
    : resolvedMode === 'agent-team'
      ? 'AGENT-TEAM (parallel worker squad)'
      : resolvedMode === 'subagent'
        ? 'SUBAGENT (parallel subagents)'
        : `PARALLEL (based on ${detected_stack})`;
const performanceGainLabel =
  resolvedMode === 'sequential'
    ? 'baseline (no parallel speedup)'
    : resolvedMode === 'agent-team' || resolvedMode === 'subagent'
      ? '~40-70% faster than sequential'
      : 'mode-dependent';

const summary = {
  detected_stack: '{detected_stack}',
  total_tests: apiTestsOutput.test_count + e2eCount + backendCount,
  api_tests: apiTestsOutput.test_count,
  e2e_tests: e2eCount,
  backend_tests: backendCount,
  fixtures_created: uniqueFixtures.length,
  api_test_files: apiTestsOutput.tests.length,
  e2e_test_files: e2eTestsOutput ? e2eTestsOutput.tests.length : 0,
  backend_test_files: backendTestsOutput ? backendTestsOutput.testsGenerated.length : 0,
  priority_coverage: {
    P0:
      (apiTestsOutput.priority_coverage?.P0 ?? 0) +
      (e2eTestsOutput?.priority_coverage?.P0 ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P0 ?? 0), 0) ?? 0),
    P1:
      (apiTestsOutput.priority_coverage?.P1 ?? 0) +
      (e2eTestsOutput?.priority_coverage?.P1 ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P1 ?? 0), 0) ?? 0),
    P2:
      (apiTestsOutput.priority_coverage?.P2 ?? 0) +
      (e2eTestsOutput?.priority_coverage?.P2 ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P2 ?? 0), 0) ?? 0),
    P3:
      (apiTestsOutput.priority_coverage?.P3 ?? 0) +
      (e2eTestsOutput?.priority_coverage?.P3 ?? 0) +
      (backendTestsOutput?.testsGenerated?.reduce((sum, t) => sum + (t.priority_coverage?.P3 ?? 0), 0) ?? 0),
  },
  knowledge_fragments_used: [
    ...apiTestsOutput.knowledge_fragments_used,
    ...(e2eTestsOutput ? e2eTestsOutput.knowledge_fragments_used : []),
    ...(backendTestsOutput ? backendTestsOutput.knowledge_fragments_used || [] : []),
  ],
  subagent_execution: subagentExecutionLabel,
  performance_gain: performanceGainLabel,
};
```

**Store summary for Step 4:**

Save the summary to a temp file for the validation step:

```javascript
fs.writeFileSync('/tmp/tea-automate-summary-{{timestamp}}.json', JSON.stringify(summary, null, 2), 'utf8');
```

---

### 6. Optional Cleanup

**Clean up subagent temp files** (optional - can keep for debugging):

```javascript
fs.unlinkSync(apiTestsPath);
if (e2eTestsOutput) fs.unlinkSync('/tmp/tea-automate-e2e-tests-{{timestamp}}.json');
if (backendTestsOutput) fs.unlinkSync('/tmp/tea-automate-backend-tests-{{timestamp}}.json');
console.log('✅ Subagent temp files cleaned up');
```

---

## OUTPUT SUMMARY

Display to user:

```
✅ Test Generation Complete ({subagent_execution})

📊 Summary:
- Stack Type: {detected_stack}
- Total Tests: {total_tests}
- API Tests: {api_tests} ({api_test_files} files)
- E2E Tests: {e2e_tests} ({e2e_test_files} files) [if frontend/fullstack]
- Backend Tests: {backend_tests} ({backend_test_files} files) [if backend/fullstack]
- Fixtures Created: {fixtures_created}
- Priority Coverage:
  - P0 (Critical): {P0} tests
  - P1 (High): {P1} tests
  - P2 (Medium): {P2} tests
  - P3 (Low): {P3} tests

🚀 Performance: {performance_gain}

📂 Generated Files:
- tests/api/[feature].spec.ts [always]
- tests/e2e/[feature].spec.ts [if frontend/fullstack]
- tests/unit/[feature].test.* [if backend/fullstack]
- tests/integration/[feature].test.* [if backend/fullstack]
- tests/fixtures/ or tests/support/ [shared infrastructure]

✅ Ready for validation (Step 4)
```

---

## EXIT CONDITION

Proceed to Step 4 when:

- ✅ All test files written to disk (API + E2E and/or Backend, based on `{detected_stack}`)
- ✅ All fixtures and helpers created
- ✅ Summary statistics calculated and saved
- ✅ Output displayed to user

---

### 7. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-03c-aggregate']
  lastStep: 'step-03c-aggregate'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03c-aggregate'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03c-aggregate'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.
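
The update rules above can be sketched as a single function, assuming the flat `key: value` frontmatter shape shown in this step. The function name and the regex-based approach are illustrative; a production version would use a YAML parser.

```javascript
// Sketch only: update stepsCompleted/lastStep/lastSaved in a markdown
// document's YAML frontmatter, or create the frontmatter on first save.
function updateFrontmatter(doc, step, date) {
  const match = doc.match(/^---\n([\s\S]*?)\n---\n/);
  if (!match) {
    // First save: create frontmatter from scratch.
    return `---\nstepsCompleted: ['${step}']\nlastStep: '${step}'\nlastSaved: '${date}'\n---\n` + doc;
  }
  let fm = match[1];
  const listMatch = fm.match(/stepsCompleted: \[(.*)\]/);
  const steps = listMatch
    ? listMatch[1].split(',').map((s) => s.trim().replace(/'/g, '')).filter(Boolean)
    : [];
  if (!steps.includes(step)) steps.push(step); // only if not already present
  fm = fm
    .replace(/stepsCompleted: \[.*\]/, `stepsCompleted: [${steps.map((s) => `'${s}'`).join(', ')}]`)
    .replace(/lastStep: .*/, `lastStep: '${step}'`)
    .replace(/lastSaved: .*/, `lastSaved: '${date}'`);
  return doc.replace(match[0], `---\n${fm}\n---\n`);
}
```

Calling it twice with the same step name leaves `stepsCompleted` unchanged, matching the "only if not already present" rule.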

Load next step: `{nextStepFile}`

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- All launched subagents succeeded (based on `{detected_stack}`)
- All test files written to disk
- Fixtures generated based on subagent needs
- Summary complete and accurate

### ❌ SYSTEM FAILURE:

- One or more subagents failed
- Test files not written to disk
- Fixtures missing or incomplete
- Summary missing or inaccurate

**Master Rule:** Do NOT proceed to Step 4 if aggregation is incomplete.
@@ -0,0 +1,106 @@
---
name: 'step-04-validate-and-summarize'
description: 'Validate outputs and produce automation summary'
outputFile: '{test_artifacts}/automation-summary.md'
---

# Step 4: Validate & Summarize

## STEP GOAL

Validate generated outputs and produce a concise automation summary.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Validate against the checklist before completion

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Validate

Use `checklist.md` to validate:

- Framework readiness
- Coverage mapping
- Test quality and structure
- Fixtures, factories, helpers
- CLI sessions cleaned up (no orphaned browsers)
- Temp artifacts stored in `{test_artifacts}/`, not random locations

Fix gaps before proceeding.

---

## 2. Polish Output

Before finalizing, review the complete output document for quality:

1. **Remove duplication**: The progressive-append workflow may have created repeated sections — consolidate them
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)

---

## 3. Summary Output

Write `{outputFile}` including:

- Coverage plan by test level and priority
- Files created/updated
- Key assumptions and risks
- Next recommended workflow (e.g., `test-review` or `trace`)

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-04-validate-and-summarize']
  lastStep: 'step-04-validate-and-summarize'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-04-validate-and-summarize'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-04-validate-and-summarize'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---

# Step 1: Assess Edit Target

## STEP GOAL:

Identify which output should be edited and load it.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until the target is confirmed

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Identify Target

Ask the user to provide the output file path or select from known outputs.

### 2. Load Target

Read the provided output file in full.

### 3. Confirm

Confirm the target and proceed to edit.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Target identified and loaded

### ❌ SYSTEM FAILURE:

- Proceeding without a confirmed target
@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---

# Step 2: Apply Edits

## STEP GOAL:

Apply the requested edits to the selected output and confirm changes.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Only apply edits explicitly requested by the user

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: selected output and user changes
- Focus: apply edits only

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Confirm Requested Changes

Restate what will be changed and confirm.

### 2. Apply Changes

Update the output file accordingly.

### 3. Report

Summarize the edits applied.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Changes applied and confirmed

### ❌ SYSTEM FAILURE:

- Unconfirmed edits or missing update
@@ -0,0 +1,67 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/automate-validation-report.md'
validationChecklist: '../checklist.md'
---

# Step 1: Validate Outputs

## STEP GOAL:

Validate outputs using the workflow checklist and record findings.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`

## CONTEXT BOUNDARIES:

- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Checklist

Read `{validationChecklist}` and list all criteria.

### 2. Validate Outputs

Evaluate outputs against each checklist item.

### 3. Write Report

Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.
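
As a sketch, the per-section verdict lines might be assembled like this. The section names and the `failed`/`warned` counts are hypothetical inputs, not part of the checklist contract.

```javascript
// Illustrative sketch: render one PASS/WARN/FAIL line per checklist section.
// A section fails if any item failed; warns if any item warned; otherwise passes.
function renderReport(sections) {
  return sections
    .map(({ name, failed, warned }) => {
      const verdict = failed > 0 ? 'FAIL' : warned > 0 ? 'WARN' : 'PASS';
      return `- ${name}: ${verdict}`;
    })
    .join('\n');
}

const report = renderReport([
  { name: 'Framework readiness', failed: 0, warned: 0 },
  { name: 'Coverage mapping', failed: 0, warned: 1 },
  { name: 'Test quality', failed: 2, warned: 0 },
]);
// report:
// - Framework readiness: PASS
// - Coverage mapping: WARN
// - Test quality: FAIL
```

The resulting lines drop straight into the markdown report written to `{outputFile}`.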

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Validation report written
- All checklist items evaluated

### ❌ SYSTEM FAILURE:

- Skipped checklist items
- No report produced
@@ -0,0 +1,72 @@
---
validationDate: 2026-01-27
workflowName: testarch-automate
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-automate
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---

# Validation Report: testarch-automate

**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 7

**Step File Sizes:**

- steps-c/step-01-preflight-and-context.md: 113 lines [GOOD]
- steps-c/step-02-identify-targets.md: 85 lines [GOOD]
- steps-c/step-03-generate-tests.md: 76 lines [GOOD]
- steps-c/step-04-validate-and-summarize.md: 62 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

- No {project-root} hardcoded paths detected in body
- No dead relative links detected

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]

## Output Format Validation

- No templates found in workflow root
- Steps with outputFile in frontmatter:
  - steps-c/step-04-validate-and-summarize.md
  - steps-v/step-01-validate.md

## Validation Design Check

- checklist.md present: YES
- Validation steps folder (steps-v) present: YES

## Instruction Style Check

- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics

## Summary

- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
@@ -0,0 +1,114 @@
---
validationDate: 2026-01-27
workflowName: testarch-automate
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-automate
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---

# Validation Report: testarch-automate

**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 7

**Step File Sizes:**

- steps-c/step-01-preflight-and-context.md: 112 lines [GOOD]
- steps-c/step-02-identify-targets.md: 84 lines [GOOD]
- steps-c/step-03-generate-tests.md: 75 lines [GOOD]
- steps-c/step-04-validate-and-summarize.md: 61 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

### Config Variables (Exceptions)

Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name

- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected

**Status:** ✅ PASS - No critical violations

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- steps-c/step-01-preflight-and-context.md: Init [PASS]
- steps-c/step-02-identify-targets.md: Middle [PASS]
- steps-c/step-03-generate-tests.md: Middle [PASS]
- steps-c/step-04-validate-and-summarize.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: NONE
- Steps with outputFile in frontmatter:
  - steps-c/step-04-validate-and-summarize.md
  - steps-v/step-01-validate.md
- checklist.md present: YES

## Validation Design Check

- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed

## Instruction Style Check

- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance

## Collaborative Experience Check

- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals

## Subagent Optimization Opportunities

- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only

## Cohesive Review

- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with Test Architect persona
- Recommendation: READY (minor refinements optional)

## Plan Quality Validation

- Plan file present: workflow-plan.md
- Planned steps found: 7 (all implemented)
- Plan implementation status: Fully Implemented

## Summary

- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
20
.gemini/skills/bmad-testarch-automate/workflow-plan.md
Normal file
@@ -0,0 +1,20 @@
# Workflow Plan: testarch-automate

## Create Mode (steps-c)

- step-01-preflight-and-context.md
- step-02-identify-targets.md
- step-03-generate-tests.md
- step-04-validate-and-summarize.md

## Validate Mode (steps-v)

- step-01-validate.md

## Edit Mode (steps-e)

- step-01-assess.md
- step-02-apply-edit.md

## Outputs

- {test_artifacts}/automation-summary.md
- Test files under {project-root}/tests
41
.gemini/skills/bmad-testarch-automate/workflow.md
Normal file
@@ -0,0 +1,41 @@
---
name: bmad-testarch-automate
description: Expand test automation coverage for codebase. Use when user says 'lets expand test coverage' or 'I want to automate tests'
web_bundle: true
---

# Test Automation Expansion

**Goal:** Expand test automation coverage after implementation, or analyze an existing codebase to generate a comprehensive test suite.

**Role:** You are the Master Test Architect.

---

## WORKFLOW ARCHITECTURE

This workflow uses a **tri-modal step-file architecture**:

- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs

---

## INITIALIZATION SEQUENCE

### 1. Mode Determination

"Welcome to the workflow. What would you like to do?"

- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs

### 2. Route to First Step

- **If C:** Load `steps-c/step-01-preflight-and-context.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`
53
.gemini/skills/bmad-testarch-automate/workflow.yaml
Normal file
@@ -0,0 +1,53 @@
# Test Architect workflow: bmad-testarch-automate
name: bmad-testarch-automate
# prettier-ignore
description: 'Expand test automation coverage for codebase. Use when the user says "lets expand test coverage" or "I want to automate tests"'

# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"
template: false

# Variables and inputs
variables:
  # Execution mode and targeting
  standalone_mode: true # Can work without BMad artifacts (true) or integrate with BMad (false)
  coverage_target: "critical-paths" # critical-paths, comprehensive, selective

  # Directory paths
  test_dir: "{project-root}/tests" # Root test directory
  source_dir: "{project-root}" # Source code directory (customize if needed, e.g., {project-root}/src or {project-root}/lib)

  # Output configuration
  default_output_file: "{test_artifacts}/automation-summary.md"

# Required tools
required_tools:
  - read_file # Read source code, existing tests, BMad artifacts
  - write_file # Create test files, fixtures, factories, summaries
  - create_directory # Create test directories
  - list_files # Discover features and existing tests
  - search_repo # Find coverage gaps and patterns
  - glob # Find test files and source files

tags:
  - qa
  - automation
  - test-architect
  - regression
  - coverage

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true
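The `"{config_source}:key"` notation in workflow.yaml suggests values resolved indirectly from the loaded config.yaml. A minimal sketch of that resolution, assuming a flat config mapping (the resolver and the sample values are hypothetical, not BMad's actual loader):

```python
import re

# Hypothetical resolver for the "{config_source}:key" references
# used in workflow.yaml. Real BMad tooling may differ; this only
# illustrates the indirection.
def resolve(value: str, config: dict) -> str:
    """Resolve a '{config_source}:key' reference against a loaded config."""
    m = re.fullmatch(r"\{config_source\}:(\w+)", value)
    return config[m.group(1)] if m else value

# Stand-in for parsed _bmad/tea/config.yaml contents (illustrative values).
CONFIG = {"output_folder": "docs/qa", "test_artifacts": "docs/qa/artifacts"}

print(resolve("{config_source}:test_artifacts", CONFIG))  # docs/qa/artifacts
```

Literal values such as `"./instructions.md"` fall through unchanged, so the same resolver can be applied to every field.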