initial commit
New file: .gemini/skills/bmad-testarch-atdd/SKILL.md (6 lines)
@@ -0,0 +1,6 @@
---
name: bmad-testarch-atdd
description: 'Generate failing acceptance tests using TDD cycle. Use when the user says "lets write acceptance tests" or "I want to do ATDD"'
---

Follow the instructions in [workflow.md](workflow.md).
New file: .gemini/skills/bmad-testarch-atdd/atdd-checklist-template.md (371 lines)

@@ -0,0 +1,371 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-atdd'
inputDocuments: []
---

# ATDD Checklist - Epic {epic_num}, Story {story_num}: {story_title}

**Date:** {date}
**Author:** {user_name}
**Primary Test Level:** {primary_level}

---

## Story Summary

{Brief 2-3 sentence summary of the user story}

**As a** {user_role}
**I want** {feature_description}
**So that** {business_value}

---

## Acceptance Criteria

{List all testable acceptance criteria from the story}

1. {Acceptance criterion 1}
2. {Acceptance criterion 2}
3. {Acceptance criterion 3}

---

## Failing Tests Created (RED Phase)

### E2E Tests ({e2e_test_count} tests)

**File:** `{e2e_test_file_path}` ({line_count} lines)

{List each E2E test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

### API Tests ({api_test_count} tests)

**File:** `{api_test_file_path}` ({line_count} lines)

{List each API test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

### Component Tests ({component_test_count} tests)

**File:** `{component_test_file_path}` ({line_count} lines)

{List each component test with its current status and expected failure reason}

- ✅ **Test:** {test_name}
  - **Status:** RED - {failure_reason}
  - **Verifies:** {what_this_test_validates}

---

## Data Factories Created

{List all data factory files created with their exports}

### {Entity} Factory

**File:** `tests/support/factories/{entity}.factory.ts`

**Exports:**

- `create{Entity}(overrides?)` - Create single entity with optional overrides
- `create{Entity}s(count)` - Create array of entities

**Example Usage:**

```typescript
const user = createUser({ email: 'specific@example.com' });
const users = createUsers(5); // Generate 5 random users
```
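A minimal factory sketch matching the exports above. The real factories would draw values from `@faker-js/faker`; here a tiny random helper stands in so the sketch stays self-contained, and the `User` shape and its fields are illustrative assumptions, not the project's actual contract:

```typescript
// Illustrative User shape; the real entity comes from the API contract.
interface User {
  id: string;
  email: string;
  name: string;
}

// Stand-in for faker: a random suffix keeps each generated user unique.
const rand = () => Math.random().toString(36).slice(2, 10);

// Create a single entity; overrides pin specific fields for a scenario.
export function createUser(overrides: Partial<User> = {}): User {
  return {
    id: rand(),
    email: `user-${rand()}@example.com`,
    name: `User ${rand()}`,
    ...overrides, // overrides win over generated defaults
  };
}

// Bulk helper for list-based scenarios.
export function createUsers(count: number): User[] {
  return Array.from({ length: count }, () => createUser());
}
```

Spreading `overrides` after the generated defaults is what makes the override pattern work: any explicitly pinned field always wins, while everything unspecified stays random.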
---

## Fixtures Created

{List all test fixture files created with their fixture names and descriptions}

### {Feature} Fixtures

**File:** `tests/support/fixtures/{feature}.fixture.ts`

**Fixtures:**

- `{fixtureName}` - {description_of_what_fixture_provides}
  - **Setup:** {what_setup_does}
  - **Provides:** {what_test_receives}
  - **Cleanup:** {what_cleanup_does}

**Example Usage:**

```typescript
import { test } from './fixtures/{feature}.fixture';

test('should do something', async ({ {fixtureName} }) => {
  // {fixtureName} is ready to use with auto-cleanup
});
```
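The setup → `await use(...)` → cleanup lifecycle that Playwright's `test.extend()` encodes can be sketched framework-free, which makes the auto-cleanup guarantee visible; the `TestUser` type and the in-memory `db` are illustrative stand-ins, not real project code:

```typescript
// Framework-free sketch of the fixture lifecycle: set up the data,
// hand it to the test via use(), then clean up even if the test throws.
type TestUser = { id: string; email: string };

const db: TestUser[] = []; // stand-in for real persistence

async function testUserFixture(use: (u: TestUser) => Promise<void>) {
  // Setup: create the precondition data
  const user: TestUser = { id: 'u1', email: 'fixture@example.com' };
  db.push(user);
  try {
    await use(user); // the test body runs here
  } finally {
    // Teardown: auto-cleanup runs regardless of test outcome
    const i = db.indexOf(user);
    if (i >= 0) db.splice(i, 1);
  }
}
```

The `try`/`finally` is the key design choice: cleanup is tied to the fixture, not to the test remembering to call it, so a failing test still leaves no data behind.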
---

## Mock Requirements

{Document external services that need mocking and their requirements}

### {Service Name} Mock

**Endpoint:** `{HTTP_METHOD} {endpoint_url}`

**Success Response:**

```json
{
  {success_response_example}
}
```

**Failure Response:**

```json
{
  {failure_response_example}
}
```

**Notes:** {any_special_mock_requirements}

---

## Required data-testid Attributes

{List all data-testid attributes required in UI implementation for test stability}

### {Page or Component Name}

- `{data-testid-name}` - {description_of_element}
- `{data-testid-name}` - {description_of_element}

**Implementation Example:**

```tsx
<button data-testid="login-button">Log In</button>
<input data-testid="email-input" type="email" />
<div data-testid="error-message">{errorText}</div>
```

---

## Implementation Checklist

{Map each failing test to concrete implementation tasks that will make it pass}

### Test: {test_name_1}

**File:** `{test_file_path}`

**Tasks to make this test pass:**

- [ ] {Implementation task 1}
- [ ] {Implementation task 2}
- [ ] {Implementation task 3}
- [ ] Add required data-testid attributes: {list_of_testids}
- [ ] Run test: `{test_execution_command}`
- [ ] ✅ Test passes (green phase)

**Estimated Effort:** {effort_estimate} hours

---

### Test: {test_name_2}

**File:** `{test_file_path}`

**Tasks to make this test pass:**

- [ ] {Implementation task 1}
- [ ] {Implementation task 2}
- [ ] {Implementation task 3}
- [ ] Add required data-testid attributes: {list_of_testids}
- [ ] Run test: `{test_execution_command}`
- [ ] ✅ Test passes (green phase)

**Estimated Effort:** {effort_estimate} hours

---

## Running Tests

```bash
# Run all failing tests for this story
{test_command_all}

# Run specific test file
{test_command_specific_file}

# Run tests in headed mode (see browser)
{test_command_headed}

# Debug specific test
{test_command_debug}

# Run tests with coverage
{test_command_coverage}
```
---

## Red-Green-Refactor Workflow

### RED Phase (Complete) ✅

**TEA Agent Responsibilities:**

- ✅ All tests written and failing
- ✅ Fixtures and factories created with auto-cleanup
- ✅ Mock requirements documented
- ✅ data-testid requirements listed
- ✅ Implementation checklist created

**Verification:**

- All tests run and fail as expected
- Failure messages are clear and actionable
- Tests fail due to missing implementation, not test bugs

---

### GREEN Phase (DEV Team - Next Steps)

**DEV Agent Responsibilities:**

1. **Pick one failing test** from the implementation checklist (start with the highest priority)
2. **Read the test** to understand the expected behavior
3. **Implement minimal code** to make that specific test pass
4. **Run the test** to verify it now passes (green)
5. **Check off the task** in the implementation checklist
6. **Move to the next test** and repeat

**Key Principles:**

- One test at a time (don't try to fix all at once)
- Minimal implementation (don't over-engineer)
- Run tests frequently (immediate feedback)
- Use the implementation checklist as a roadmap

**Progress Tracking:**

- Check off tasks as you complete them
- Share progress in daily standup

---

### REFACTOR Phase (DEV Team - After All Tests Pass)

**DEV Agent Responsibilities:**

1. **Verify all tests pass** (green phase complete)
2. **Review code for quality** (readability, maintainability, performance)
3. **Extract duplications** (DRY principle)
4. **Optimize performance** (if needed)
5. **Ensure tests still pass** after each refactor
6. **Update documentation** (if API contracts change)

**Key Principles:**

- Tests provide a safety net (refactor with confidence)
- Make small refactors (easier to debug if tests fail)
- Run tests after each change
- Don't change test behavior (only the implementation)

**Completion:**

- All tests pass
- Code quality meets team standards
- No duplications or code smells
- Ready for code review and story approval

---

## Next Steps

1. **Share this checklist and failing tests** with the dev workflow (manual handoff)
2. **Review this checklist** with the team in standup or planning
3. **Run failing tests** to confirm the RED phase: `{test_command_all}`
4. **Begin implementation** using the implementation checklist as a guide
5. **Work one test at a time** (red → green for each)
6. **Share progress** in daily standup
7. **When all tests pass**, refactor the code for quality
8. **When refactoring is complete**, manually update the story status to 'done' in sprint-status.yaml

---

## Knowledge Base References Applied

This ATDD workflow consulted the following knowledge fragments:

- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's `test.extend()`
- **data-factories.md** - Factory patterns using `@faker-js/faker` for random test data generation with overrides support
- **component-tdd.md** - Component test strategies using Playwright Component Testing
- **network-first.md** - Route interception patterns (intercept BEFORE navigation to prevent race conditions)
- **test-quality.md** - Test design principles (Given-When-Then, one assertion per test, determinism, isolation)
- **test-levels-framework.md** - Test level selection framework (E2E vs API vs Component vs Unit)

See `tea-index.csv` for the complete knowledge fragment mapping.

---

## Test Execution Evidence

### Initial Test Run (RED Phase Verification)

**Command:** `{test_command_all}`

**Results:**

```
{paste_test_run_output_showing_all_tests_failing}
```

**Summary:**

- Total tests: {total_test_count}
- Passing: 0 (expected)
- Failing: {total_test_count} (expected)
- Status: ✅ RED phase verified

**Expected Failure Messages:**

{list_expected_failure_messages_for_each_test}

---

## Notes

{Any additional notes, context, or special considerations for this story}

- {Note 1}
- {Note 2}
- {Note 3}

---

## Contact

**Questions or Issues?**

- Ask in team standup
- Tag @{tea_agent_username} in Slack/Discord
- Refer to `./bmm/docs/tea-README.md` for workflow documentation
- Consult `./bmm/testarch/knowledge` for testing best practices

---

**Generated by BMad TEA Agent** - {date}
@@ -0,0 +1 @@
type: skill

New file: .gemini/skills/bmad-testarch-atdd/checklist.md (374 lines)

@@ -0,0 +1,374 @@
# ATDD Workflow Validation Checklist

Use this checklist to validate that the ATDD workflow has been executed correctly and all deliverables meet quality standards.

## Prerequisites

Before starting this workflow, verify:

- [ ] Story approved with clear acceptance criteria (AC must be testable)
- [ ] Development sandbox/environment ready
- [ ] Framework scaffolding exists (run `framework` workflow if missing)
- [ ] Test framework configuration available (playwright.config.ts or cypress.config.ts)
- [ ] `package.json` has test dependencies installed (Playwright or Cypress)

**Halt if missing:** Framework scaffolding or story acceptance criteria

---

## Step 1: Story Context and Requirements

- [ ] Story markdown file loaded and parsed successfully
- [ ] All acceptance criteria identified and extracted
- [ ] Affected systems and components identified
- [ ] Technical constraints documented
- [ ] Framework configuration loaded (playwright.config.ts or cypress.config.ts)
- [ ] Test directory structure identified from config
- [ ] Existing fixture patterns reviewed for consistency
- [ ] Similar test patterns searched and found in `{test_dir}`
- [ ] Knowledge base fragments loaded:
  - [ ] `fixture-architecture.md`
  - [ ] `data-factories.md`
  - [ ] `component-tdd.md`
  - [ ] `network-first.md`
  - [ ] `test-quality.md`

---

## Step 2: Test Level Selection and Strategy

- [ ] Each acceptance criterion analyzed for the appropriate test level
- [ ] Test level selection framework applied (E2E vs API vs Component vs Unit)
- [ ] E2E tests: Critical user journeys and multi-system integration identified
- [ ] API tests: Business logic and service contracts identified
- [ ] Component tests: UI component behavior and interactions identified
- [ ] Unit tests: Pure logic and edge cases identified (if applicable)
- [ ] Duplicate coverage avoided (same behavior not tested at multiple levels unnecessarily)
- [ ] Tests prioritized using the P0-P3 framework (if a test-design document exists)
- [ ] Primary test level set in the `primary_level` variable (typically E2E or API)
- [ ] Test levels documented in the ATDD checklist

---

## Step 3: Failing Tests Generated

### Test File Structure Created

- [ ] Test files organized in appropriate directories:
  - [ ] `tests/e2e/` for end-to-end tests
  - [ ] `tests/api/` for API tests
  - [ ] `tests/component/` for component tests
  - [ ] `tests/support/` for infrastructure (fixtures, factories, helpers)

### E2E Tests (If Applicable)

- [ ] E2E test files created in `tests/e2e/`
- [ ] All tests follow Given-When-Then format
- [ ] Tests use `data-testid` selectors (not CSS classes or fragile selectors)
- [ ] One assertion per test (atomic test design)
- [ ] No hard waits or sleeps (explicit waits only)
- [ ] Network-first pattern applied (route interception BEFORE navigation)
- [ ] Tests fail initially (RED phase verified by local test run)
- [ ] Failure messages are clear and actionable
### API Tests (If Applicable)

- [ ] API test files created in `tests/api/`
- [ ] Tests follow Given-When-Then format
- [ ] API contracts validated (request/response structure)
- [ ] HTTP status codes verified
- [ ] Response body validation includes all required fields
- [ ] Error cases tested (400, 401, 403, 404, 500)
- [ ] Tests fail initially (RED phase verified)

### Component Tests (If Applicable)

- [ ] Component test files created in `tests/component/`
- [ ] Tests follow Given-When-Then format
- [ ] Component mounting works correctly
- [ ] Interaction testing covers user actions (click, hover, keyboard)
- [ ] State management within component validated
- [ ] Props and events tested
- [ ] Tests fail initially (RED phase verified)

### Test Quality Validation

- [ ] All tests use Given-When-Then structure with clear comments
- [ ] All tests have descriptive names explaining what they test
- [ ] No duplicate tests (same behavior tested multiple times)
- [ ] No flaky patterns (race conditions, timing issues)
- [ ] No test interdependencies (tests can run in any order)
- [ ] Tests are deterministic (same input always produces same result)

---

## Step 4: Data Infrastructure Built

### Data Factories Created

- [ ] Factory files created in `tests/support/factories/`
- [ ] All factories use `@faker-js/faker` for random data generation (no hardcoded values)
- [ ] Factories support overrides for specific test scenarios
- [ ] Factories generate complete valid objects matching API contracts
- [ ] Helper functions for bulk creation provided (e.g., `createUsers(count)`)
- [ ] Factory exports are properly typed (TypeScript)

### Test Fixtures Created

- [ ] Fixture files created in `tests/support/fixtures/`
- [ ] All fixtures use Playwright's `test.extend()` pattern
- [ ] Fixtures have setup phase (arrange test preconditions)
- [ ] Fixtures provide data to tests via `await use(data)`
- [ ] Fixtures have teardown phase with auto-cleanup (delete created data)
- [ ] Fixtures are composable (can use other fixtures if needed)
- [ ] Fixtures are isolated (each test gets fresh data)
- [ ] Fixtures are type-safe (TypeScript types defined)

### Mock Requirements Documented

- [ ] External service mocking requirements identified
- [ ] Mock endpoints documented with URLs and methods
- [ ] Success response examples provided
- [ ] Failure response examples provided
- [ ] Mock requirements documented in ATDD checklist for DEV team

### data-testid Requirements Listed

- [ ] All required data-testid attributes identified from E2E tests
- [ ] data-testid list organized by page or component
- [ ] Each data-testid has clear description of element it targets
- [ ] data-testid list included in ATDD checklist for DEV team

---

## Step 5: Implementation Checklist Created

- [ ] Implementation checklist created with clear structure
- [ ] Each failing test mapped to concrete implementation tasks
- [ ] Tasks include:
  - [ ] Route/component creation
  - [ ] Business logic implementation
  - [ ] API integration
  - [ ] data-testid attribute additions
  - [ ] Error handling
  - [ ] Test execution command
  - [ ] Completion checkbox
- [ ] Red-Green-Refactor workflow documented in checklist
- [ ] RED phase marked as complete (TEA responsibility)
- [ ] GREEN phase tasks listed for DEV team
- [ ] REFACTOR phase guidance provided
- [ ] Execution commands provided:
  - [ ] Run all tests: `npm run test:e2e`
  - [ ] Run specific test file
  - [ ] Run in headed mode
  - [ ] Debug specific test
- [ ] Estimated effort included (hours or story points)

---
## Step 6: Deliverables Generated

### ATDD Checklist Document Created

- [ ] Output file created at `{test_artifacts}/atdd-checklist-{story_id}.md`
- [ ] Document follows template structure from `atdd-checklist-template.md`
- [ ] Document includes all required sections:
  - [ ] Story summary
  - [ ] Acceptance criteria breakdown
  - [ ] Failing tests created (paths and line counts)
  - [ ] Data factories created
  - [ ] Fixtures created
  - [ ] Mock requirements
  - [ ] Required data-testid attributes
  - [ ] Implementation checklist
  - [ ] Red-green-refactor workflow
  - [ ] Execution commands
  - [ ] Next steps for DEV team
- [ ] Output shared with DEV workflow (manual handoff; not auto-consumed)

### All Tests Verified to Fail (RED Phase)

- [ ] Full test suite run locally before finalizing
- [ ] All tests fail as expected (RED phase confirmed)
- [ ] No tests passing before implementation (if passing, test is invalid)
- [ ] Failure messages documented in ATDD checklist
- [ ] Failures are due to missing implementation, not test bugs
- [ ] Test run output captured for reference

### Summary Provided

- [ ] Summary includes:
  - [ ] Story ID
  - [ ] Primary test level
  - [ ] Test counts (E2E, API, Component)
  - [ ] Test file paths
  - [ ] Factory count
  - [ ] Fixture count
  - [ ] Mock requirements count
  - [ ] data-testid count
  - [ ] Implementation task count
  - [ ] Estimated effort
  - [ ] Next steps for DEV team
  - [ ] Output file path
  - [ ] Knowledge base references applied

---

## Quality Checks

### Test Design Quality

- [ ] Tests are readable (clear Given-When-Then structure)
- [ ] Tests are maintainable (use factories and fixtures, not hardcoded data)
- [ ] Tests are isolated (no shared state between tests)
- [ ] Tests are deterministic (no race conditions or flaky patterns)
- [ ] Tests are atomic (one assertion per test)
- [ ] Tests are fast (no unnecessary waits or delays)

### Knowledge Base Integration

- [ ] `fixture-architecture.md` patterns applied to all fixtures
- [ ] `data-factories.md` patterns applied to all factories
- [ ] `network-first.md` patterns applied to E2E tests with network requests
- [ ] `component-tdd.md` patterns applied to component tests
- [ ] `test-quality.md` principles applied to all test design

### Code Quality

- [ ] All TypeScript types are correct and complete
- [ ] No linting errors in generated test files
- [ ] Consistent naming conventions followed
- [ ] Imports are organized and correct
- [ ] Code follows project style guide

---

## Integration Points

### With DEV Agent

- [ ] ATDD checklist provides clear implementation guidance
- [ ] Implementation tasks are granular and actionable
- [ ] data-testid requirements are complete and clear
- [ ] Mock requirements include all necessary details
- [ ] Execution commands work correctly

### With Story Workflow

- [ ] Story ID correctly referenced in output files
- [ ] Acceptance criteria from story accurately reflected in tests
- [ ] Technical constraints from story considered in test design

### With Framework Workflow

- [ ] Test framework configuration correctly detected and used
- [ ] Directory structure matches framework setup
- [ ] Fixtures and helpers follow established patterns
- [ ] Naming conventions consistent with framework standards

### With test-design Workflow (If Available)

- [ ] P0 scenarios from test-design prioritized in ATDD
- [ ] Risk assessment from test-design considered in test coverage
- [ ] Coverage strategy from test-design aligned with ATDD tests

---

## Completion Criteria

All of the following must be true before marking this workflow as complete:

- [ ] **Story acceptance criteria analyzed** and mapped to appropriate test levels
- [ ] **Failing tests created** at all appropriate levels (E2E, API, Component)
- [ ] **Given-When-Then format** used consistently across all tests
- [ ] **RED phase verified** by local test run (all tests failing as expected)
- [ ] **Network-first pattern** applied to E2E tests with network requests
- [ ] **Data factories created** using faker (no hardcoded test data)
- [ ] **Fixtures created** with auto-cleanup in teardown
- [ ] **Mock requirements documented** for external services
- [ ] **data-testid attributes listed** for DEV team
- [ ] **Implementation checklist created** mapping tests to code tasks
- [ ] **Red-green-refactor workflow documented** in ATDD checklist
- [ ] **Execution commands provided** and verified to work
- [ ] **ATDD checklist document created** and saved to correct location
- [ ] **Output file formatted correctly** using template structure
- [ ] **Knowledge base references applied** and documented in summary
- [ ] **No test quality issues** (flaky patterns, race conditions, hardcoded data)

---

## Common Issues and Resolutions

### Issue: Tests pass before implementation

**Problem:** A test passes even though no implementation code exists yet.

**Resolution:**

- Review test to ensure it's testing actual behavior, not mocked/stubbed behavior
- Check if test is accidentally using existing functionality
- Verify test assertions are correct and meaningful
- Rewrite test to fail until implementation is complete

### Issue: Network-first pattern not applied

**Problem:** Route interception happens after navigation, causing race conditions.

**Resolution:**

- Move `await page.route()` calls BEFORE `await page.goto()`
- Review `network-first.md` knowledge fragment
- Update all E2E tests to follow network-first pattern
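The ordering matters because requests fired by the first navigation are only intercepted if a matching route already exists. A toy sketch that records call order makes the rule checkable; `MockPage` is a stand-in for illustration, not the Playwright API:

```typescript
// Toy stand-in that records the order of route() and goto() calls,
// to show why interception must be registered before navigation.
class MockPage {
  calls: string[] = [];
  async route(pattern: string): Promise<void> {
    this.calls.push(`route:${pattern}`);
  }
  async goto(url: string): Promise<void> {
    // In a real browser, requests fired by this navigation are only
    // intercepted if a matching route was registered beforehand.
    this.calls.push(`goto:${url}`);
  }
}

async function networkFirst(page: MockPage): Promise<string[]> {
  await page.route('**/api/users'); // intercept FIRST
  await page.goto('/dashboard');    // navigate SECOND
  return page.calls;
}
```

In a real Playwright test the same shape applies verbatim: every `page.route()` for requests the page fires on load must appear before the `page.goto()` that triggers them.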
### Issue: Hardcoded test data in tests

**Problem:** Tests use hardcoded strings/numbers instead of factories.

**Resolution:**

- Replace all hardcoded data with factory function calls
- Use `faker` for all random data generation
- Update data factories to support all required test scenarios

### Issue: Fixtures missing auto-cleanup

**Problem:** Fixtures create data but don't clean it up in teardown.

**Resolution:**

- Add cleanup logic after `await use(data)` in fixture
- Call deletion/cleanup functions in teardown
- Verify cleanup works by checking database/storage after test run

### Issue: Tests have multiple assertions

**Problem:** Tests verify multiple behaviors in single test (not atomic).

**Resolution:**

- Split into separate tests (one assertion per test)
- Each test should verify exactly one behavior
- Use descriptive test names to clarify what each test verifies

### Issue: Tests depend on execution order

**Problem:** Tests fail when run in isolation or different order.

**Resolution:**

- Remove shared state between tests
- Each test should create its own test data
- Use fixtures for consistent setup across tests
- Verify tests can run with `.only` flag

---

## Notes for TEA Agent

- **Preflight halt is critical:** Do not proceed if story has no acceptance criteria or framework is missing
- **RED phase verification is mandatory:** Tests must fail before sharing with DEV team
- **Network-first pattern:** Route interception BEFORE navigation prevents race conditions
- **One assertion per test:** Atomic tests provide clear failure diagnosis
- **Auto-cleanup is non-negotiable:** Every fixture must clean up data in teardown
- **Use knowledge base:** Load relevant fragments (fixture-architecture, data-factories, network-first, component-tdd, test-quality) for guidance
- **Share with DEV agent:** ATDD checklist provides implementation roadmap from red to green
New file: .gemini/skills/bmad-testarch-atdd/instructions.md (45 lines)

@@ -0,0 +1,45 @@
<!-- Powered by BMAD-CORE™ -->

# Acceptance Test-Driven Development (ATDD)

**Workflow ID**: `_bmad/tea/testarch/bmad-testarch-atdd`
**Version**: 5.0 (Step-File Architecture)

---

## Overview

Generates **failing acceptance tests** before implementation (TDD red phase), plus an implementation checklist. Produces tests at appropriate levels (E2E/API/Component) with supporting fixtures and helpers.

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture**:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `test_dir`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-preflight-and-context.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress tracking frontmatter and routes to the next incomplete step.
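The progress-tracking frontmatter the resume step inspects would look something like this; the field names come from `atdd-checklist-template.md`, while the values shown here are illustrative placeholders:

```yaml
---
stepsCompleted: ['step-01-preflight-and-context']
lastStep: 'step-01-preflight-and-context'
lastSaved: '2025-01-15T10:30:00Z'
workflowType: 'testarch-atdd'
inputDocuments: ['stories/story-2-3.md']
---
```

Resume mode compares `stepsCompleted` against the workflow's step list and routes to the first step not yet recorded.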
@@ -0,0 +1,226 @@
---
name: 'step-01-preflight-and-context'
description: 'Verify prerequisites and load story, framework, and knowledge base'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
nextStepFile: './step-02-generation-mode.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
---

# Step 1: Preflight & Context Loading

## STEP GOAL

Verify prerequisites and load all required inputs before generating failing tests.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if requirements are missing

---

## EXECUTION PROTOCOLS

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Stack Detection

**Read `config.test_stack_type`** from `{config_source}`.

**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):

- Scan `{project-root}` for project manifests:
  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
  - **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
- Explicit `test_stack_type` config value overrides auto-detection
- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)

Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
|
||||
|
||||
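The detection rules above can be sketched as a pure function over the manifest file names found at `{project-root}`. The indicator lists below are an abbreviated mirror of the bullets, and the function is an illustrative sketch rather than part of the workflow contract; real detection would also inspect `package.json` dependencies for react/vue/angular/next:

```typescript
type StackType = 'frontend' | 'backend' | 'fullstack';

// Abbreviated indicator lists from the bullets above.
const FRONTEND_INDICATORS = ['package.json', 'playwright.config.ts', 'vite.config.ts', 'webpack.config.js'];
const BACKEND_INDICATORS = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod', 'Gemfile', 'Cargo.toml'];

// Apply the "both present = fullstack" rule; an explicit configured value
// (anything other than "auto") always wins over detection. When nothing
// matches, fall back to `frontend` for backward compatibility.
function detectStackType(foundFiles: string[], configured?: string): StackType {
  if (configured && configured !== 'auto') return configured as StackType;
  const hasFrontend = foundFiles.some((f) => FRONTEND_INDICATORS.includes(f));
  const hasBackend = foundFiles.some((f) => BACKEND_INDICATORS.includes(f));
  if (hasFrontend && hasBackend) return 'fullstack';
  return hasBackend ? 'backend' : 'frontend';
}
```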
---

## 2. Prerequisites (Hard Requirements)

- Story approved with **clear acceptance criteria**
- Test framework configured:
  - **If {detected_stack} is `frontend` or `fullstack`:** `playwright.config.ts` or `cypress.config.ts`
  - **If {detected_stack} is `backend`:** relevant test config exists (e.g., `conftest.py`, `src/test/`, `*_test.go`, `.rspec`)
- Development environment available

If any are missing: **HALT** and notify the user.

---

## 3. Load Story Context

- Read story markdown from `{story_file}` (or ask the user if not provided)
- Extract acceptance criteria and constraints
- Identify affected components and integrations

---

## 4. Load Framework & Existing Patterns

- Read the framework config
- Inspect `{test_dir}` for existing test patterns, fixtures, and helpers

## 4.5 Read TEA Config Flags

From `{config_source}`:

- `tea_use_playwright_utils`
- `tea_use_pactjs_utils`
- `tea_pact_mcp`
- `tea_browser_automation`
- `test_stack_type`

---

### Tiered Knowledge Loading

Load fragments based on their `tier` classification in `tea-index.csv`:

1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)

> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.
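The tier rules above amount to a filter over the index rows. The row shape below (name, tier, tags) is a hypothetical reading of `tea-index.csv`; the real index may carry more columns:

```typescript
type Tier = 'core' | 'extended' | 'specialized';

// Hypothetical row shape for tea-index.csv.
interface FragmentRow {
  name: string;
  tier: Tier;
  tags: string[];
}

// Core rows always load; extended rows load only when the caller opts into
// deeper analysis; specialized rows load only when a tag matches the use case.
function selectFragments(rows: FragmentRow[], opts: { deep?: boolean; useCaseTags?: string[] }): string[] {
  return rows
    .filter((row) => {
      if (row.tier === 'core') return true;
      if (row.tier === 'extended') return opts.deep === true;
      return (opts.useCaseTags ?? []).some((tag) => row.tags.includes(tag));
    })
    .map((row) => row.name);
}
```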
### Playwright Utils Loading Profiles

**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:

- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)

- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
  Load: all Playwright Utils core fragments (~4,500 lines)

**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none are found, use the API-only profile.
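One reading of the two rules above, with the content scan deciding for non-backend stacks (the profile bullets and the Detection note overlap, so this interpretation is an assumption). The function takes file contents rather than reading `{test_dir}` from disk:

```typescript
type UtilsProfile = 'api-only' | 'full-ui-api';

// Backend stacks never need the UI fragments; otherwise, the presence of
// browser APIs in any test file forces the full profile.
function selectUtilsProfile(detectedStack: string, testFileContents: string[]): UtilsProfile {
  if (detectedStack === 'backend') return 'api-only';
  const usesBrowser = testFileContents.some(
    (src) => src.includes('page.goto') || src.includes('page.locator'),
  );
  return usesBrowser ? 'full-ui-api' : 'api-only';
}
```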
### Pact.js Utils Loading

**If `tea_use_pactjs_utils` is enabled** (and `{detected_stack}` is `backend` or `fullstack`, or microservices indicators detected):

Load: `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**If `tea_use_pactjs_utils` is disabled** but contract testing is relevant:

Load: `contract-testing.md`

### Pact MCP Loading

**If `tea_pact_mcp` is `"mcp"`:**

Load: `pact-mcp.md`
## 5. Load Knowledge Base Fragments

Use `{knowledgeIndex}` to load:

**Core (always):**

- `data-factories.md`
- `component-tdd.md`
- `test-quality.md`
- `test-healing-patterns.md`

**If {detected_stack} is `frontend` or `fullstack`:**

- `selector-resilience.md`
- `timing-debugging.md`

**Playwright Utils (if enabled and {detected_stack} is `frontend` or `fullstack`):**

- `overview.md`, `api-request.md`, `network-recorder.md`, `auth-session.md`, `intercept-network-call.md`, `recurse.md`, `log.md`, `file-utils.md`, `network-error-monitor.md`, `fixtures-composition.md`

**Playwright CLI (if `tea_browser_automation` is `"cli"` or `"auto"` and {detected_stack} is `frontend` or `fullstack`):**

- `playwright-cli.md`

**MCP Patterns (if `tea_browser_automation` is `"mcp"` or `"auto"` and {detected_stack} is `frontend` or `fullstack`):**

- (existing MCP-related fragments, if any are added in future)

**Traditional Patterns (if utils disabled and {detected_stack} is `frontend` or `fullstack`):**

- `fixture-architecture.md`
- `network-first.md`

**Backend Patterns (if {detected_stack} is `backend` or `fullstack`):**

- `test-levels-framework.md`
- `test-priorities-matrix.md`
- `ci-burn-in.md`

**Pact.js Utils (if enabled):**

- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`

**Contract Testing (if pactjs-utils disabled but relevant):**

- `contract-testing.md`

**Pact MCP (if `tea_pact_mcp` is `"mcp"`):**

- `pact-mcp.md`
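The conditional groups above can be collapsed into a single list builder. This sketch abbreviates the Playwright Utils and Pact.js Utils groups and simplifies the "but relevant" qualifiers into plain flag checks, so treat it as an outline of the selection logic, not the full rule set:

```typescript
interface TeaFlags {
  usePlaywrightUtils: boolean;
  usePactjsUtils: boolean;
  pactMcp: 'mcp' | 'none';
  browserAutomation: 'cli' | 'mcp' | 'auto' | 'none';
}

// Build the fragment load list from the detected stack and TEA config flags.
function buildFragmentList(stack: 'frontend' | 'backend' | 'fullstack', flags: TeaFlags): string[] {
  const list = ['data-factories.md', 'component-tdd.md', 'test-quality.md', 'test-healing-patterns.md'];
  const ui = stack === 'frontend' || stack === 'fullstack';
  if (ui) list.push('selector-resilience.md', 'timing-debugging.md');
  if (ui && !flags.usePlaywrightUtils) list.push('fixture-architecture.md', 'network-first.md');
  if (ui && ['cli', 'auto'].includes(flags.browserAutomation)) list.push('playwright-cli.md');
  if (stack === 'backend' || stack === 'fullstack') {
    list.push('test-levels-framework.md', 'test-priorities-matrix.md', 'ci-burn-in.md');
  }
  // Simplification: "contract testing relevant" reduced to pactjs-utils disabled.
  if (!flags.usePactjsUtils) list.push('contract-testing.md');
  if (flags.pactMcp === 'mcp') list.push('pact-mcp.md');
  return list;
}
```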
---

## 6. Confirm Inputs

Summarize the loaded inputs and confirm with the user. Then proceed.

---

## 7. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-01-preflight-and-context']
  lastStep: 'step-01-preflight-and-context'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-preflight-and-context'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-preflight-and-context'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
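The create-or-update rule above is the same upsert every step performs. Sketched over the parsed frontmatter object rather than raw YAML (an assumption; a real implementation would go through a YAML parser):

```typescript
interface Progress {
  stepsCompleted: string[];
  lastStep: string;
  lastSaved: string;
}

// First save creates the record; later saves append the step to
// stepsCompleted (deduplicated) and refresh lastStep/lastSaved.
function saveProgress(existing: Progress | null, step: string, date: string): Progress {
  const steps = existing ? [...existing.stepsCompleted] : [];
  if (!steps.includes(step)) steps.push(step);
  return { stepsCompleted: steps, lastStep: step, lastSaved: date };
}
```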
Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
96 .gemini/skills/bmad-testarch-atdd/steps-c/step-01b-resume.md Normal file
@@ -0,0 +1,96 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Output Document

Read `{outputFile}` and parse the YAML frontmatter for:

- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save

**If `{outputFile}` does not exist**, display:

"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Display Progress Dashboard

Display progress with ✅/⬜ indicators:

1. ✅/⬜ Preflight & Context (step-01-preflight-and-context)
2. ✅/⬜ Generation Mode (step-02-generation-mode)
3. ✅/⬜ Test Strategy (step-03-test-strategy)
4. ✅/⬜ Generate Tests + Aggregate (step-04c-aggregate)
5. ✅/⬜ Validate & Complete (step-05-validate-and-complete)
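The dashboard above is a straightforward render of the `stepsCompleted` array against the fixed step list; a minimal sketch:

```typescript
// Step ids paired with their dashboard labels, in workflow order.
const STEPS: Array<[string, string]> = [
  ['step-01-preflight-and-context', 'Preflight & Context'],
  ['step-02-generation-mode', 'Generation Mode'],
  ['step-03-test-strategy', 'Test Strategy'],
  ['step-04c-aggregate', 'Generate Tests + Aggregate'],
  ['step-05-validate-and-complete', 'Validate & Complete'],
];

// Mark each step done (✅) or pending (⬜) based on stepsCompleted.
function renderDashboard(stepsCompleted: string[]): string[] {
  return STEPS.map(([id, label], i) => {
    const mark = stepsCompleted.includes(id) ? '✅' : '⬜';
    return `${i + 1}. ${mark} ${label} (${id})`;
  });
}
```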
---

### 3. Route to Next Step

Based on `lastStep`, load the next incomplete step:

- `'step-01-preflight-and-context'` → load `./step-02-generation-mode.md`
- `'step-02-generation-mode'` → load `./step-03-test-strategy.md`
- `'step-03-test-strategy'` → load `./step-04-generate-tests.md`
- `'step-04c-aggregate'` → load `./step-05-validate-and-complete.md`
- `'step-05-validate-and-complete'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.

**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.

**Otherwise**, load the identified step file, read it completely, and execute it.

The existing content in `{outputFile}` provides context from previously completed steps.
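The routing rules above form a small lookup table with three outcomes: load a file, report completion, or report an unknown state. A sketch:

```typescript
// The routing list as a lookup table; null means "workflow complete".
const NEXT_STEP: Record<string, string | null> = {
  'step-01-preflight-and-context': './step-02-generation-mode.md',
  'step-02-generation-mode': './step-03-test-strategy.md',
  'step-03-test-strategy': './step-04-generate-tests.md',
  'step-04c-aggregate': './step-05-validate-and-complete.md',
  'step-05-validate-and-complete': null,
};

type Route = { kind: 'load'; file: string } | { kind: 'complete' } | { kind: 'unknown' };

// Resolve lastStep into a routing decision without re-executing anything.
function routeFromLastStep(lastStep: string): Route {
  if (!(lastStep in NEXT_STEP)) return { kind: 'unknown' };
  const next = NEXT_STEP[lastStep];
  return next === null ? { kind: 'complete' } : { kind: 'load', file: next };
}
```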
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Output document loaded and parsed correctly
- Progress dashboard displayed accurately
- Routed to correct next step

### ❌ SYSTEM FAILURE:

- Not loading output document
- Incorrect progress display
- Routing to wrong step

**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.
@@ -0,0 +1,125 @@
---
name: 'step-02-generation-mode'
description: 'Choose AI generation or recording mode'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
nextStepFile: './step-03-test-strategy.md'
---

# Step 2: Generation Mode Selection

## STEP GOAL

Choose the appropriate generation mode for ATDD tests.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Default Mode: AI Generation

Use AI generation when:

- Acceptance criteria are clear
- Scenarios are standard (CRUD, auth, API, navigation)
- **If {detected_stack} is `backend`:** Always use AI generation (no browser recording needed)

Proceed directly to test strategy if this applies.

---

## 2. Optional Mode: Recording (Complex UI)

**Skip this section entirely if {detected_stack} is `backend`.** For backend projects, use AI generation from API documentation, OpenAPI/Swagger specs, or source-code analysis instead.

**If {detected_stack} is `frontend` or `fullstack`:**

Use recording when UI interactions need live browser verification.

**Tool selection based on `config.tea_browser_automation`:**

> **Note:** `${timestamp}` is a placeholder the agent should replace with a unique value (e.g., epoch seconds) for session isolation.

If `auto`:

- **Simple recording** (snapshot selectors, capture structure): Use CLI
  - `playwright-cli -s=tea-atdd-${timestamp} open <url>` → `playwright-cli -s=tea-atdd-${timestamp} snapshot` → extract refs
- **Complex recording** (drag/drop, wizards, multi-step state): Use MCP
  - Full browser automation with rich tool semantics
- **Fallback:** If the preferred tool is unavailable, use the other; if neither is available, skip recording

If `cli`:

- Use the Playwright CLI for all recording
- `playwright-cli -s=tea-atdd-${timestamp} open <url>`, `snapshot`, `screenshot`, `click <ref>`, etc.

If `mcp`:

- Use Playwright MCP tools for all recording (current behavior)
- Confirm MCP availability, then record selectors and interactions

If `none`:

- Skip recording mode entirely; use AI generation from documentation
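The mode rules above can be sketched as a selection function. Whether the explicit `cli` and `mcp` modes fall back when their tool is unavailable is not stated, so this sketch assumes they skip recording in that case:

```typescript
type Automation = 'auto' | 'cli' | 'mcp' | 'none';
type RecordingTool = 'cli' | 'mcp' | 'skip';

// `complex` marks recordings needing rich interactions (drag/drop, wizards);
// `available` models which tools the environment actually exposes.
function selectRecordingTool(
  mode: Automation,
  complex: boolean,
  available: { cli: boolean; mcp: boolean },
): RecordingTool {
  if (mode === 'none') return 'skip';
  if (mode === 'cli') return available.cli ? 'cli' : 'skip'; // assumption: no fallback for explicit modes
  if (mode === 'mcp') return available.mcp ? 'mcp' : 'skip';
  // mode === 'auto': MCP for complex recordings, CLI for simple ones,
  // fall back to whichever is available, else skip recording entirely.
  const preferred: 'cli' | 'mcp' = complex ? 'mcp' : 'cli';
  if (available[preferred]) return preferred;
  const other = preferred === 'cli' ? 'mcp' : 'cli';
  return available[other] ? other : 'skip';
}
```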
---

## 3. Confirm Mode

State the chosen mode and why. Then proceed.

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-02-generation-mode']
  lastStep: 'step-02-generation-mode'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-generation-mode'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-generation-mode'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,110 @@
---
name: 'step-03-test-strategy'
description: 'Map acceptance criteria to test levels and priorities'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
nextStepFile: './step-04-generate-tests.md'
---

# Step 3: Test Strategy

## STEP GOAL

Translate acceptance criteria into a prioritized, level-appropriate test plan.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Avoid duplicate coverage across levels

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Map Acceptance Criteria

- Convert each acceptance criterion into test scenarios
- Include negative and edge cases where risk is high

---

## 2. Select Test Levels

Choose the best level per scenario based on `{detected_stack}`:

**If {detected_stack} is `frontend` or `fullstack`:**

- **E2E** for critical user journeys
- **API** for business logic and service contracts
- **Component** for UI behavior

**If {detected_stack} is `backend` or `fullstack`:**

- **Unit** for pure functions, business logic, and edge cases
- **Integration** for service interactions, database queries, and middleware
- **API/Contract** for endpoint validation, request/response schemas, and Pact contracts
- **No E2E** for pure backend projects (no browser-based testing needed)

---

## 3. Prioritize Tests

Assign P0–P3 priorities using risk and business impact.
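The workflow does not prescribe exact thresholds for the risk/impact mapping, so the rubric below is one illustrative reading: both high gives P0, either high gives P1, either medium gives P2, otherwise P3:

```typescript
type Priority = 'P0' | 'P1' | 'P2' | 'P3';
type Level = 'high' | 'medium' | 'low';

// Illustrative risk x impact rubric; tune thresholds per project.
function assignPriority(risk: Level, impact: Level): Priority {
  if (risk === 'high' && impact === 'high') return 'P0';
  if (risk === 'high' || impact === 'high') return 'P1';
  if (risk === 'medium' || impact === 'medium') return 'P2';
  return 'P3';
}
```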
---

## 4. Confirm Red Phase Requirements

Ensure all tests are designed to **fail before implementation** (TDD red phase).

---

## 5. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-03-test-strategy']
  lastStep: 'step-03-test-strategy'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03-test-strategy'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03-test-strategy'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,334 @@
---
name: 'step-04-generate-tests'
description: 'Orchestrate adaptive FAILING test generation (TDD red phase)'
nextStepFile: './step-04c-aggregate.md'
---

# Step 4: Orchestrate Adaptive FAILING Test Generation

## STEP GOAL

Select the execution mode deterministically, then generate FAILING API and E2E tests (TDD RED PHASE) with consistent output contracts across agent-team, subagent, or sequential execution.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Resolve execution mode from config (`tea_execution_mode`, `tea_capability_probe`)
- ✅ Apply fallback rules deterministically when the requested mode is unsupported
- ✅ Generate FAILING tests only (TDD red phase)
- ✅ Wait for required worker steps to complete
- ❌ Do NOT skip capability checks when probing is enabled
- ❌ Do NOT generate passing tests (this is the red phase)
- ❌ Do NOT proceed until required worker steps finish

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Wait for subagent outputs
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, acceptance criteria from Step 1, test strategy from Step 3
- Focus: orchestration only (mode selection + worker dispatch)
- Limits: do not generate tests directly (delegate to worker steps)
- Dependencies: Steps 1-3 outputs

---

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

### 1. Prepare Execution Context

**Generate a unique timestamp** for temp-file naming:

```javascript
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
```
**Prepare input context for both subagents:**

```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    if (['false', '0', 'off', 'no'].includes(normalized)) return false;
    if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
  }
  if (value === undefined || value === null) return defaultValue;
  return Boolean(value);
};

const subagentContext = {
  story_acceptance_criteria: /* from Step 1 */,
  test_strategy: /* from Step 3 */,
  knowledge_fragments_loaded: /* list of fragments */,
  config: {
    test_framework: config.test_framework,
    use_playwright_utils: config.tea_use_playwright_utils,
    use_pactjs_utils: config.tea_use_pactjs_utils,
    pact_mcp: config.tea_pact_mcp, // "mcp" | "none"
    browser_automation: config.tea_browser_automation,
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
    provider_endpoint_map: /* from Step 1/3 context, if use_pactjs_utils enabled */,
  },
  timestamp: timestamp
};
```
---

### 2. Resolve Execution Mode with Capability Probe

```javascript
const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(subagentContext.config.execution_mode) || 'auto';
const probeEnabled = subagentContext.config.capability_probe;

const supports = {
  subagent: runtime.canLaunchSubagents?.() === true,
  agentTeam: runtime.canLaunchAgentTeams?.() === true,
};

let resolvedMode = requestedMode;

if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}

subagentContext.execution = {
  requestedMode,
  resolvedMode,
  probeEnabled,
  supports,
};

if (!probeEnabled && (requestedMode === 'agent-team' || requestedMode === 'subagent')) {
  const unsupportedRequestedMode =
    (requestedMode === 'agent-team' && !supports.agentTeam) || (requestedMode === 'subagent' && !supports.subagent);

  if (unsupportedRequestedMode) {
    subagentContext.execution.error = `Requested execution mode "${requestedMode}" is unavailable because capability probing is disabled.`;
    throw new Error(subagentContext.execution.error);
  }
}
```

Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing is enabled)

If probing is disabled, honor the requested mode strictly. If that mode cannot be executed at runtime, fail with an explicit error instead of a silent fallback.
---

### 3. Dispatch Worker A: Failing API Test Generation

**Dispatch worker:**

- **Subagent File:** `./step-04a-subagent-api-failing.md`
- **Output File:** `/tmp/tea-atdd-api-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before the next dispatch
- **TDD Phase:** RED (failing tests)

**System Action:**

```
🚀 Launching Subagent A: FAILING API Test Generation (RED PHASE)
📝 Output: /tmp/tea-atdd-api-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
🔴 TDD Phase: RED (tests will fail until feature implemented)
⏳ Status: Running...
```

---

### 4. Dispatch Worker B: Failing E2E Test Generation

**Dispatch worker:**

- **Subagent File:** `./step-04b-subagent-e2e-failing.md`
- **Output File:** `/tmp/tea-atdd-e2e-tests-${timestamp}.json`
- **Context:** Pass `subagentContext`
- **Execution:**
  - `agent-team` or `subagent`: launch non-blocking
  - `sequential`: run blocking and wait before the next dispatch
- **TDD Phase:** RED (failing tests)

**System Action:**

```
🚀 Launching Subagent B: FAILING E2E Test Generation (RED PHASE)
📝 Output: /tmp/tea-atdd-e2e-tests-${timestamp}.json
⚙️ Mode: ${resolvedMode}
🔴 TDD Phase: RED (tests will fail until feature implemented)
⏳ Status: Running...
```

---

### 5. Wait for Required Worker Completion

**If `resolvedMode` is `agent-team` or `subagent`:**

```
⏳ Waiting for subagents to complete...
├── Subagent A (API RED): Running... ⟳
└── Subagent B (E2E RED): Running... ⟳

[... time passes ...]

├── Subagent A (API RED): Complete ✅
└── Subagent B (E2E RED): Complete ✅

✅ All subagents completed successfully!
```

**If `resolvedMode` is `sequential`:**

```
✅ Sequential mode: each worker already completed during dispatch.
```

**Verify that both outputs exist:**

```javascript
const fs = require('node:fs');

const apiOutputExists = fs.existsSync(`/tmp/tea-atdd-api-tests-${timestamp}.json`);
const e2eOutputExists = fs.existsSync(`/tmp/tea-atdd-e2e-tests-${timestamp}.json`);

if (!apiOutputExists || !e2eOutputExists) {
  throw new Error('One or both subagent outputs missing!');
}
```
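Beyond existence, the aggregation step also needs both files to parse as JSON and to satisfy the red-phase contract. The field names below (a `tests` array of `file`/`content` entries) are an assumed example shape, not the real worker schema, which is defined by the worker steps:

```typescript
// Minimal shape and red-phase check for one worker output file.
function validateWorkerOutput(raw: string): { ok: boolean; reason?: string } {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, reason: 'not valid JSON' };
  }
  const obj = parsed as { tests?: Array<{ file?: string; content?: string }> };
  if (!Array.isArray(obj.tests) || obj.tests.length === 0) {
    return { ok: false, reason: 'missing tests array' };
  }
  // Red-phase guard: every generated test body must carry test.skip().
  const allSkipped = obj.tests.every(
    (t) => typeof t.content === 'string' && t.content.includes('test.skip('),
  );
  return allSkipped ? { ok: true } : { ok: false, reason: 'found tests without test.skip()' };
}
```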
---

### 6. TDD Red Phase Report

**Display TDD status:**

```
🔴 TDD RED PHASE: Failing Tests Generated

✅ Both subagents completed:
- API Tests: Generated with test.skip()
- E2E Tests: Generated with test.skip()

📋 All tests assert EXPECTED behavior
📋 All tests will FAIL until feature implemented
📋 This is INTENTIONAL (TDD red phase)

Next: Aggregation will verify TDD compliance
```

---

### 7. Execution Report

**Display performance metrics:**

```
🚀 Performance Report:
- Execution Mode: {resolvedMode}
- API Test Generation: ~X minutes
- E2E Test Generation: ~Y minutes
- Total Elapsed: ~mode-dependent
- Parallel Gain: ~50% faster when mode is subagent/agent-team
```

---

### 8. Proceed to Aggregation

**Load aggregation step:**

Load next step: `{nextStepFile}`

The aggregation step (4C) will:

- Read both subagent outputs
- Verify TDD red phase compliance (all tests have test.skip())
- Write all test files to disk
- Generate the ATDD checklist
- Calculate summary statistics

---

## EXIT CONDITION

Proceed to Step 4C (Aggregation) when:

- ✅ Subagent A (API failing tests) completed successfully
- ✅ Subagent B (E2E failing tests) completed successfully
- ✅ Both output files exist and are valid JSON
- ✅ TDD red phase status reported

**Do NOT proceed if:**

- ❌ One or both subagents failed
- ❌ Output files missing or corrupted
- ❌ Subagent generated passing tests (wrong - must be failing)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Both subagents launched successfully
- Both worker steps completed without errors
- Output files generated and valid
- Tests generated with test.skip() (TDD red phase)
- Fallback behavior respected configuration and capability probe rules

### ❌ SYSTEM FAILURE:

- Failed to launch subagents
- One or both subagents failed
- Output files missing or invalid
- Tests generated without test.skip() (wrong phase)
- Unsupported requested mode with probing disabled

**Master Rule:** TDD RED PHASE requires FAILING tests (with test.skip()). Mode selection changes orchestration, never red-phase requirements.
@@ -0,0 +1,286 @@
---
name: 'step-04a-subagent-api-failing'
description: 'Subagent: Generate FAILING API tests (TDD red phase)'
subagent: true
outputFile: '/tmp/tea-atdd-api-tests-{{timestamp}}.json'
---

# Subagent 4A: Generate Failing API Tests (TDD Red Phase)

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with E2E failing test generation.

**What you have from the parent workflow:**

- Story acceptance criteria from Step 1
- Test strategy and scenarios from Step 3
- Knowledge fragments loaded: api-request, data-factories, api-testing-patterns
- Config: test framework, Playwright Utils enabled/disabled, Pact.js Utils enabled/disabled (`use_pactjs_utils`), Pact MCP mode (`pact_mcp`)
- Provider Endpoint Map (if `use_pactjs_utils` is enabled and the provider source is accessible)

**Your task:** Generate API tests that will FAIL because the feature is not implemented yet (TDD RED PHASE).

---

## MANDATORY EXECUTION RULES

- 📖 Read this entire subagent file before acting
- ✅ Generate FAILING API tests ONLY
- ✅ Tests MUST fail when run (feature not implemented yet)
- ✅ Output structured JSON to the temp file
- ✅ Follow knowledge fragment patterns
- ❌ Do NOT generate E2E tests (that's subagent 4B)
- ❌ Do NOT generate passing tests (this is the TDD red phase)
- ❌ Do NOT run tests (that's step 5)

---

## SUBAGENT TASK

### 1. Identify API Endpoints from Acceptance Criteria

From the story acceptance criteria (Step 1 output), identify:

- Which API endpoints will be created for this story
- Expected request/response contracts
- Authentication requirements
- Expected status codes and error scenarios

**Example Acceptance Criteria:**

```
Story: User Registration
- As a user, I can POST to /api/users/register with email and password
- System returns 201 Created with user object
- System returns 400 Bad Request if email already exists
- System returns 422 Unprocessable Entity if validation fails
```
### 2. Generate FAILING API Test Files
|
||||
|
||||
For each API endpoint, create test file in `tests/api/[feature].spec.ts`:
|
||||
|
||||
**Test Structure (ATDD - Red Phase):**
|
||||
|
||||
```typescript
|
||||
import { test, expect } from '@playwright/test';
|
||||
// If Playwright Utils enabled:
|
||||
// import { apiRequest } from '@playwright-utils/api';
|
||||
|
||||
test.describe('[Story Name] API Tests (ATDD)', () => {
|
||||
test.skip('[P0] should register new user successfully', async ({ request }) => {
|
||||
// THIS TEST WILL FAIL - Endpoint not implemented yet
|
||||
const response = await request.post('/api/users/register', {
|
||||
data: {
|
||||
email: 'newuser@example.com',
|
||||
password: 'SecurePass123!',
|
||||
},
|
||||
});
|
||||
|
||||
// Expect 201 but will get 404 (endpoint doesn't exist)
|
||||
expect(response.status()).toBe(201);
|
||||
|
||||
const user = await response.json();
|
||||
expect(user).toMatchObject({
|
||||
id: expect.any(Number),
|
||||
email: 'newuser@example.com',
|
||||
});
|
||||
});
|
||||
|
||||
test.skip('[P1] should return 400 if email exists', async ({ request }) => {
|
||||
// THIS TEST WILL FAIL - Endpoint not implemented yet
|
||||
const response = await request.post('/api/users/register', {
|
||||
data: {
|
||||
email: 'existing@example.com',
|
||||
password: 'SecurePass123!',
|
||||
},
|
||||
});
|
||||
|
||||
expect(response.status()).toBe(400);
|
||||
const error = await response.json();
|
||||
expect(error.message).toContain('Email already exists');
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**CRITICAL ATDD Requirements:**
|
||||
|
||||
- ✅ Use `test.skip()` to mark tests as intentionally failing (red phase)
|
||||
- ✅ Write assertions for EXPECTED behavior (even though not implemented)
|
||||
- ✅ Use realistic test data (not placeholder data)
|
||||
- ✅ Test both happy path and error scenarios from acceptance criteria
|
||||
- ✅ Use `apiRequest()` helper if Playwright Utils enabled
|
||||
- ✅ Use data factories for test data (from data-factories fragment)
|
||||
- ✅ Include priority tags [P0], [P1], [P2], [P3]
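The "data factories" requirement above can be sketched as a tiny factory module. This is a minimal sketch assuming a simple user shape — real factories come from the data-factories knowledge fragment and may use a library such as faker:

```typescript
// Minimal data-factory sketch (illustrative; the real pattern comes from the
// data-factories knowledge fragment)
type UserData = { email: string; password: string };

let seq = 0;

// Each call yields a unique, realistic-looking user; overrides pin specific fields
export const createUserData = (overrides: Partial<UserData> = {}): UserData => ({
  email: `user-${++seq}@example.com`,
  password: 'SecurePass123!',
  ...overrides,
});
```

Tests then call `createUserData()` for the happy path and `createUserData({ email: 'existing@example.com' })` for the duplicate-email scenario.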
|
||||
|
||||
### 2.5 Provider Source Scrutiny for CDC in TDD Red Phase (If `use_pactjs_utils` Enabled)
|
||||
|
||||
When generating Pact consumer contract tests in the ATDD red phase, provider scrutiny applies with TDD-specific rules. Apply the **Seven-Point Scrutiny Checklist** from `contract-testing.md` (Response shape, Status codes, Field names, Enum values, Required fields, Data types, Nested structures) for both existing and new endpoints.
|
||||
|
||||
**If provider endpoint already exists** (extending an existing API):
|
||||
|
||||
- READ the provider route handler, types, and validation schemas
|
||||
- Verify all seven scrutiny points against the provider source: Response shape, Status codes, Field names, Enum values, Required fields, Data types, Nested structures
|
||||
- Add `// Provider endpoint:` comment and scrutiny evidence block documenting findings for each point
|
||||
- Wrap the entire test function in `test.skip()` (so the whole test including `executeTest` is skipped), not just the callback
|
||||
|
||||
**If provider endpoint is new** (TDD — endpoint not implemented yet):
|
||||
|
||||
- Use acceptance criteria as the source of truth for expected behavior
|
||||
- Acceptance criteria should specify all seven scrutiny points where possible (status codes, field names, types, etc.) — note any gaps as assumptions in the evidence block
|
||||
- Add `// Provider endpoint: TODO — new endpoint, not yet implemented`
|
||||
- Document expected behavior from acceptance criteria in scrutiny evidence block
|
||||
- Wrap the entire test function in `test.skip()` and use realistic expectations from the story
|
||||
|
||||
**Graceful degradation when provider source is inaccessible:**
|
||||
|
||||
1. **OpenAPI/Swagger spec available**: Use the spec as the source of truth for response shapes, status codes, and field names
|
||||
2. **Pact Broker available** (when `pact_mcp` is `"mcp"`): Use SmartBear MCP tools to fetch existing provider states and verified interactions as reference
|
||||
3. **Neither available**: For new endpoints, use acceptance criteria; for existing endpoints, use consumer-side types. Mark with `// Provider endpoint: TODO — provider source not accessible, verify manually` and set `provider_scrutiny: "pending"` in output JSON
|
||||
4. **Never silently guess**: Document all assumptions in the scrutiny evidence block
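The degradation ladder above can be sketched as a decision helper (the type and field names here are assumptions for illustration, not part of the workflow contract):

```typescript
type ScrutinySource =
  | 'provider-source'
  | 'openapi-spec'
  | 'pact-broker'
  | 'acceptance-criteria'
  | 'consumer-types';

// Encodes the ladder: provider source, then OpenAPI spec, then Pact Broker,
// then acceptance criteria (new endpoint) / consumer types (existing endpoint)
// with provider_scrutiny marked pending
const pickScrutinySource = (opts: {
  providerSourceAccessible: boolean;
  openApiSpecAvailable: boolean;
  pactMcp: 'mcp' | 'none';
  endpointIsNew: boolean;
}): { source: ScrutinySource; scrutiny: 'completed' | 'pending' } => {
  if (opts.providerSourceAccessible) return { source: 'provider-source', scrutiny: 'completed' };
  if (opts.openApiSpecAvailable) return { source: 'openapi-spec', scrutiny: 'completed' };
  if (opts.pactMcp === 'mcp') return { source: 'pact-broker', scrutiny: 'completed' };
  return {
    source: opts.endpointIsNew ? 'acceptance-criteria' : 'consumer-types',
    scrutiny: 'pending',
  };
};
```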
|
||||
|
||||
**Provider endpoint comments are MANDATORY** even in red-phase tests — they document the intent.
|
||||
|
||||
**Example: Red-phase Pact test with provider scrutiny:**
|
||||
|
||||
```typescript
|
||||
// Provider endpoint: TODO — new endpoint, not yet implemented
|
||||
/*
|
||||
* Provider Scrutiny Evidence:
|
||||
* - Handler: NEW — not yet implemented (TDD red phase)
|
||||
* - Expected from acceptance criteria:
|
||||
* - Endpoint: POST /api/v2/users/register
|
||||
* - Status: 201 for success, 400 for duplicate email, 422 for validation error
|
||||
* - Response: { id: number, email: string, createdAt: string }
|
||||
*/
|
||||
test.skip('[P0] should generate consumer contract for user registration', async () => {
|
||||
await provider
|
||||
.given('no users exist')
|
||||
.uponReceiving('a request to register a new user')
|
||||
.withRequest({
|
||||
method: 'POST',
|
||||
path: '/api/v2/users/register',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: { email: 'newuser@example.com', password: 'SecurePass123!' },
|
||||
})
|
||||
.willRespondWith({
|
||||
status: 201,
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: like({
|
||||
id: integer(1),
|
||||
email: string('newuser@example.com'),
|
||||
createdAt: string('2025-01-15T10:00:00Z'),
|
||||
}),
|
||||
})
|
||||
.executeTest(async (mockServer) => {
|
||||
const result = await registerUser({ email: 'newuser@example.com', password: 'SecurePass123!' }, { baseUrl: mockServer.url });
|
||||
expect(result.id).toEqual(expect.any(Number));
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Why test.skip():**
|
||||
|
||||
- Tests are written correctly for EXPECTED behavior
|
||||
- But we know they'll fail because the feature isn't implemented
|
||||
- `test.skip()` documents this is intentional (TDD red phase)
|
||||
- Once the feature is implemented, remove `test.skip()` to verify the green phase
|
||||
|
||||
### 3. Track Fixture Needs
|
||||
|
||||
Identify fixtures needed for API tests:
|
||||
|
||||
- Authentication fixtures (if endpoints require auth)
|
||||
- Data factories (user data, etc.)
|
||||
- API client configurations
|
||||
|
||||
**Do NOT create fixtures yet** - just track what's needed for aggregation step.
|
||||
|
||||
---
|
||||
|
||||
## OUTPUT FORMAT
|
||||
|
||||
Write JSON to temp file: `/tmp/tea-atdd-api-tests-{{timestamp}}.json`
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"subagent": "atdd-api-tests",
|
||||
"tests": [
|
||||
{
|
||||
"file": "tests/api/user-registration.spec.ts",
|
||||
"content": "[full TypeScript test file content with test.skip()]",
|
||||
"description": "ATDD API tests for user registration (RED PHASE)",
|
||||
"expected_to_fail": true,
|
||||
"acceptance_criteria_covered": [
|
||||
"User can register with email/password",
|
||||
"System returns 201 on success",
|
||||
"System returns 400 if email exists"
|
||||
],
|
||||
"priority_coverage": {
|
||||
"P0": 1,
|
||||
"P1": 2,
|
||||
"P2": 0,
|
||||
"P3": 0
|
||||
}
|
||||
}
|
||||
],
|
||||
"fixture_needs": ["userDataFactory"],
|
||||
"knowledge_fragments_used": ["api-request", "data-factories", "api-testing-patterns"],
|
||||
"test_count": 3,
|
||||
"tdd_phase": "RED",
|
||||
"provider_scrutiny": "completed",
|
||||
"summary": "Generated 3 FAILING API tests for user registration story"
|
||||
}
|
||||
```
|
||||
|
||||
**On Error:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"subagent": "atdd-api-tests",
|
||||
"error": "Error message describing what went wrong",
|
||||
"partial_output": {
|
||||
/* any tests generated before error */
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## EXIT CONDITION
|
||||
|
||||
Subagent completes when:
|
||||
|
||||
- ✅ All API endpoints from acceptance criteria have test files
|
||||
- ✅ All tests use `test.skip()` (documented failing tests)
|
||||
- ✅ All tests assert EXPECTED behavior (not placeholder assertions)
|
||||
- ✅ JSON output written to temp file
|
||||
- ✅ Fixture needs tracked
|
||||
|
||||
**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SUBAGENT SUCCESS METRICS
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- All API tests generated with test.skip()
|
||||
- Tests assert expected behavior (not placeholders)
|
||||
- JSON output valid and complete
|
||||
- No E2E/component/unit tests included (out of scope)
|
||||
- Tests follow knowledge fragment patterns
|
||||
- Every Pact interaction has `// Provider endpoint:` comment (if CDC enabled)
|
||||
- Provider scrutiny completed or TODO markers added for new endpoints (if CDC enabled)
|
||||
|
||||
### ❌ FAILURE:
|
||||
|
||||
- Generated passing tests (wrong - this is RED phase)
|
||||
- Tests without test.skip() (will break CI)
|
||||
- Placeholder assertions (expect(true).toBe(true))
|
||||
- Did not follow knowledge fragment patterns
|
||||
- Invalid or missing JSON output
|
||||
- Pact interactions missing provider endpoint comments (if CDC enabled)
|
||||
@@ -0,0 +1,244 @@
|
||||
---
|
||||
name: 'step-04b-subagent-e2e-failing'
|
||||
description: 'Subagent: Generate FAILING E2E tests (TDD red phase)'
|
||||
subagent: true
|
||||
outputFile: '/tmp/tea-atdd-e2e-tests-{{timestamp}}.json'
|
||||
---
|
||||
|
||||
# Subagent 4B: Generate Failing E2E Tests (TDD Red Phase)
|
||||
|
||||
## SUBAGENT CONTEXT
|
||||
|
||||
This is an **isolated subagent** running in parallel with API failing test generation.
|
||||
|
||||
**What you have from parent workflow:**
|
||||
|
||||
- Story acceptance criteria from Step 1
|
||||
- Test strategy and user journey scenarios from Step 3
|
||||
- Knowledge fragments loaded: fixture-architecture, network-first, selector-resilience
|
||||
- Config: test framework, Playwright Utils enabled/disabled
|
||||
|
||||
**Your task:** Generate E2E tests that will FAIL because the feature UI is not implemented yet (TDD RED PHASE).
|
||||
|
||||
---
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
- 📖 Read this entire subagent file before acting
|
||||
- ✅ Generate FAILING E2E tests ONLY
|
||||
- ✅ Tests MUST fail when run (UI not implemented yet)
|
||||
- ✅ Output structured JSON to temp file
|
||||
- ✅ Follow knowledge fragment patterns
|
||||
- ❌ Do NOT generate API tests (that's subagent 4A)
|
||||
- ❌ Do NOT generate passing tests (this is TDD red phase)
|
||||
- ❌ Do NOT run tests (that's step 5)
|
||||
|
||||
---
|
||||
|
||||
## SUBAGENT TASK
|
||||
|
||||
### 1. Identify User Journeys from Acceptance Criteria
|
||||
|
||||
From the story acceptance criteria (Step 1 output), identify:
|
||||
|
||||
- Which UI flows will be created for this story
|
||||
- User interactions required
|
||||
- Expected visual states
|
||||
- Success/error messages expected
|
||||
|
||||
**Example Acceptance Criteria:**
|
||||
|
||||
```
|
||||
Story: User Registration
|
||||
- As a user, I can navigate to /register page
|
||||
- I can fill in email and password fields
|
||||
- I can click "Register" button
|
||||
- System shows success message and redirects to dashboard
|
||||
- System shows error if email already exists
|
||||
```
|
||||
|
||||
### 2. Browser Interaction (Selector Verification)
|
||||
|
||||
**Automation mode:** `config.tea_browser_automation`
|
||||
|
||||
If `auto` (fall back to MCP if CLI unavailable; if neither available, generate from best practices):
|
||||
|
||||
- Open the target page first, then verify selectors with a snapshot:
|
||||
`playwright-cli -s=tea-atdd-{{timestamp}} open <target_url>`
|
||||
`playwright-cli -s=tea-atdd-{{timestamp}} snapshot` → map refs to Playwright locators
|
||||
- ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
|
||||
- ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
|
||||
- `playwright-cli -s=tea-atdd-{{timestamp}} close` when done
|
||||
|
||||
If `cli` (CLI only — do NOT fall back to MCP; generate from best practices if CLI unavailable):
|
||||
|
||||
- Open the target page first, then verify selectors with a snapshot:
|
||||
`playwright-cli -s=tea-atdd-{{timestamp}} open <target_url>`
|
||||
`playwright-cli -s=tea-atdd-{{timestamp}} snapshot` → map refs to Playwright locators
|
||||
- ref `{role: "button", name: "Submit"}` → `page.getByRole('button', { name: 'Submit' })`
|
||||
- ref `{role: "textbox", name: "Email"}` → `page.getByRole('textbox', { name: 'Email' })`
|
||||
- `playwright-cli -s=tea-atdd-{{timestamp}} close` when done
|
||||
|
||||
> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-atdd-{{timestamp}} close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.
|
||||
|
||||
If `mcp`:
|
||||
|
||||
- Use MCP tools for selector verification (current behavior)
|
||||
|
||||
If `none`:
|
||||
|
||||
- Generate selectors from best practices without browser verification
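Across all modes, the snapshot-ref-to-locator mapping shown above is mechanical. A sketch (real snapshot refs carry more fields than role and name):

```typescript
// Shape of a snapshot ref as used in the mapping examples above (simplified)
type SnapshotRef = { role: string; name: string };

// Produces the Playwright locator source text for a snapshot ref
const refToLocator = (ref: SnapshotRef): string =>
  `page.getByRole('${ref.role}', { name: '${ref.name}' })`;
```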
|
||||
|
||||
### 3. Generate FAILING E2E Test Files
|
||||
|
||||
For each user journey, create test file in `tests/e2e/[feature].spec.ts`:
|
||||
|
||||
**Test Structure (ATDD - Red Phase):**
|
||||
|
||||
```typescript
|
||||
import { test, expect } from '@playwright/test';
|
||||
|
||||
test.describe('[Story Name] E2E User Journey (ATDD)', () => {
|
||||
test.skip('[P0] should complete user registration successfully', async ({ page }) => {
|
||||
// THIS TEST WILL FAIL - UI not implemented yet
|
||||
await page.goto('/register');
|
||||
|
||||
// Expect registration form but will get 404 or missing elements
|
||||
await page.getByLabel('Email').fill('newuser@example.com');
|
||||
await page.getByLabel('Password').fill('SecurePass123!');
|
||||
await page.getByRole('button', { name: 'Register' }).click();
|
||||
|
||||
// Expect success message and redirect
|
||||
await expect(page.getByText('Registration successful!')).toBeVisible();
|
||||
await page.waitForURL('/dashboard');
|
||||
});
|
||||
|
||||
test.skip('[P1] should show error if email exists', async ({ page }) => {
|
||||
// THIS TEST WILL FAIL - UI not implemented yet
|
||||
await page.goto('/register');
|
||||
|
||||
await page.getByLabel('Email').fill('existing@example.com');
|
||||
await page.getByLabel('Password').fill('SecurePass123!');
|
||||
await page.getByRole('button', { name: 'Register' }).click();
|
||||
|
||||
// Expect error message
|
||||
await expect(page.getByText('Email already exists')).toBeVisible();
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**CRITICAL ATDD Requirements:**
|
||||
|
||||
- ✅ Use `test.skip()` to mark tests as intentionally failing (red phase)
|
||||
- ✅ Write assertions for EXPECTED UI behavior (even though not implemented)
|
||||
- ✅ Use resilient selectors: getByRole, getByText, getByLabel (from selector-resilience)
|
||||
- ✅ Follow network-first patterns if API calls involved (from network-first)
|
||||
- ✅ Test complete user journeys from acceptance criteria
|
||||
- ✅ Include priority tags [P0], [P1], [P2], [P3]
|
||||
- ✅ Use proper TypeScript types
|
||||
- ✅ Deterministic waits (no hard sleeps)
|
||||
|
||||
**Why test.skip():**
|
||||
|
||||
- Tests are written correctly for EXPECTED UI behavior
|
||||
- But we know they'll fail because the UI isn't implemented
|
||||
- `test.skip()` documents this is intentional (TDD red phase)
|
||||
- Once the UI is implemented, remove `test.skip()` to verify the green phase
|
||||
|
||||
### 4. Track Fixture Needs
|
||||
|
||||
Identify fixtures needed for E2E tests:
|
||||
|
||||
- Authentication fixtures (if journey requires logged-in state)
|
||||
- Network mocks (if API calls involved)
|
||||
- Test data fixtures
|
||||
|
||||
**Do NOT create fixtures yet** - just track what's needed for aggregation step.
|
||||
|
||||
---
|
||||
|
||||
## OUTPUT FORMAT
|
||||
|
||||
Write JSON to temp file: `/tmp/tea-atdd-e2e-tests-{{timestamp}}.json`
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"subagent": "atdd-e2e-tests",
|
||||
"tests": [
|
||||
{
|
||||
"file": "tests/e2e/user-registration.spec.ts",
|
||||
"content": "[full TypeScript test file content with test.skip()]",
|
||||
"description": "ATDD E2E tests for user registration journey (RED PHASE)",
|
||||
"expected_to_fail": true,
|
||||
"acceptance_criteria_covered": [
|
||||
"User can navigate to /register",
|
||||
"User can fill registration form",
|
||||
"System shows success message on registration",
|
||||
"System shows error if email exists"
|
||||
],
|
||||
"priority_coverage": {
|
||||
"P0": 1,
|
||||
"P1": 1,
|
||||
"P2": 0,
|
||||
"P3": 0
|
||||
}
|
||||
}
|
||||
],
|
||||
"fixture_needs": ["registrationPageMock"],
|
||||
"knowledge_fragments_used": ["fixture-architecture", "network-first", "selector-resilience"],
|
||||
"test_count": 2,
|
||||
"tdd_phase": "RED",
|
||||
"summary": "Generated 2 FAILING E2E tests for user registration story"
|
||||
}
|
||||
```
|
||||
|
||||
**On Error:**
|
||||
|
||||
```json
|
||||
{
|
||||
"success": false,
|
||||
"subagent": "atdd-e2e-tests",
|
||||
"error": "Error message describing what went wrong",
|
||||
"partial_output": {
|
||||
/* any tests generated before error */
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## EXIT CONDITION
|
||||
|
||||
Subagent completes when:
|
||||
|
||||
- ✅ All user journeys from acceptance criteria have test files
|
||||
- ✅ All tests use `test.skip()` (documented failing tests)
|
||||
- ✅ All tests assert EXPECTED UI behavior (not placeholder assertions)
|
||||
- ✅ Resilient selectors used (getByRole, getByText)
|
||||
- ✅ JSON output written to temp file
|
||||
- ✅ Fixture needs tracked
|
||||
|
||||
**Subagent terminates here.** Parent workflow will read output and proceed to aggregation.
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SUBAGENT SUCCESS METRICS
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- All E2E tests generated with test.skip()
|
||||
- Tests assert expected UI behavior (not placeholders)
|
||||
- Resilient selectors used (getByRole, getByText)
|
||||
- JSON output valid and complete
|
||||
- No API/component/unit tests included (out of scope)
|
||||
- Tests follow knowledge fragment patterns
|
||||
|
||||
### ❌ FAILURE:
|
||||
|
||||
- Generated passing tests (wrong - this is RED phase)
|
||||
- Tests without test.skip() (will break CI)
|
||||
- Placeholder assertions (expect(true).toBe(true))
|
||||
- Brittle selectors used (CSS classes, XPath)
|
||||
- Did not follow knowledge fragment patterns
|
||||
- Invalid or missing JSON output
|
||||
370
.gemini/skills/bmad-testarch-atdd/steps-c/step-04c-aggregate.md
Normal file
@@ -0,0 +1,370 @@
|
||||
---
|
||||
name: 'step-04c-aggregate'
|
||||
description: 'Aggregate subagent outputs and complete ATDD test infrastructure'
|
||||
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
|
||||
nextStepFile: './step-05-validate-and-complete.md'
|
||||
---
|
||||
|
||||
# Step 4C: Aggregate ATDD Test Generation Results
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Read outputs from parallel subagents (API + E2E failing test generation), aggregate results, verify TDD red phase compliance, and create supporting infrastructure.
|
||||
|
||||
---
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
- 📖 Read the entire step file before acting
|
||||
- ✅ Speak in `{communication_language}`
|
||||
- ✅ Read subagent outputs from temp files
|
||||
- ✅ Verify all tests are marked with test.skip() (TDD red phase)
|
||||
- ✅ Generate shared fixtures based on fixture needs
|
||||
- ✅ Write all generated test files to disk
|
||||
- ❌ Do NOT remove test.skip() (that's done after feature implementation)
|
||||
- ❌ Do NOT run tests yet (that's step 5 - verify they fail)
|
||||
|
||||
---
|
||||
|
||||
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Follow the MANDATORY SEQUENCE exactly
|
||||
- 💾 Record outputs before proceeding
|
||||
- 📖 Load the next step only when instructed
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: config, subagent outputs from temp files
|
||||
- Focus: aggregation and TDD validation
|
||||
- Limits: do not execute future steps
|
||||
- Dependencies: Step 4A and 4B subagent outputs
|
||||
|
||||
---
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
|
||||
|
||||
### 1. Read Subagent Outputs
|
||||
|
||||
**Read API test subagent output:**
|
||||
|
||||
```javascript
|
||||
const apiTestsPath = '/tmp/tea-atdd-api-tests-{{timestamp}}.json';
|
||||
const apiTestsOutput = JSON.parse(fs.readFileSync(apiTestsPath, 'utf8'));
|
||||
```
|
||||
|
||||
**Read E2E test subagent output:**
|
||||
|
||||
```javascript
|
||||
const e2eTestsPath = '/tmp/tea-atdd-e2e-tests-{{timestamp}}.json';
|
||||
const e2eTestsOutput = JSON.parse(fs.readFileSync(e2eTestsPath, 'utf8'));
|
||||
```
|
||||
|
||||
**Verify both subagents succeeded:**
|
||||
|
||||
- Check `apiTestsOutput.success === true`
|
||||
- Check `e2eTestsOutput.success === true`
|
||||
- If either failed, report error and stop (don't proceed)
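A minimal guard for the success checks above (the output shape is taken from the subagents' documented JSON; the helper name is illustrative):

```typescript
// Minimal shape of a subagent output file, per the subagents' OUTPUT FORMAT
type SubagentOutput = { success: boolean; subagent: string; error?: string };

// Throws with every failing subagent named, so the parent can report and stop
const assertSubagentsSucceeded = (...outputs: SubagentOutput[]): void => {
  const failed = outputs.filter((o) => !o.success);
  if (failed.length > 0) {
    throw new Error(
      'Subagent failure(s): ' +
        failed.map((o) => `${o.subagent}: ${o.error ?? 'unknown error'}`).join('; '),
    );
  }
};
```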
|
||||
|
||||
---
|
||||
|
||||
### 2. Verify TDD Red Phase Compliance
|
||||
|
||||
**CRITICAL TDD Validation:**
|
||||
|
||||
**Check API tests:**
|
||||
|
||||
```javascript
|
||||
apiTestsOutput.tests.forEach((test) => {
|
||||
// Verify test.skip() is present
|
||||
if (!test.content.includes('test.skip(')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} missing test.skip() - tests MUST be skipped in red phase!`);
|
||||
}
|
||||
|
||||
// Verify not placeholder assertions
|
||||
if (test.content.includes('expect(true).toBe(true)')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} has placeholder assertions - must assert EXPECTED behavior!`);
|
||||
}
|
||||
|
||||
// Verify expected_to_fail flag
|
||||
if (!test.expected_to_fail) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} not marked as expected_to_fail!`);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Check E2E tests:**
|
||||
|
||||
```javascript
|
||||
e2eTestsOutput.tests.forEach((test) => {
|
||||
// Same validation as API tests
|
||||
if (!test.content.includes('test.skip(')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} missing test.skip() - tests MUST be skipped in red phase!`);
|
||||
}
|
||||
|
||||
if (test.content.includes('expect(true).toBe(true)')) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} has placeholder assertions!`);
|
||||
}
|
||||
|
||||
if (!test.expected_to_fail) {
|
||||
throw new Error(`ATDD ERROR: ${test.file} not marked as expected_to_fail!`);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**If validation passes:**
|
||||
|
||||
```
|
||||
✅ TDD Red Phase Validation: PASS
|
||||
- All tests use test.skip()
|
||||
- All tests assert expected behavior (not placeholders)
|
||||
- All tests marked as expected_to_fail
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 3. Write All Test Files to Disk
|
||||
|
||||
**Write API test files:**
|
||||
|
||||
```javascript
|
||||
apiTestsOutput.tests.forEach((test) => {
|
||||
fs.writeFileSync(test.file, test.content, 'utf8');
|
||||
console.log(`✅ Created (RED): ${test.file}`);
|
||||
});
|
||||
```
|
||||
|
||||
**Write E2E test files:**
|
||||
|
||||
```javascript
|
||||
e2eTestsOutput.tests.forEach((test) => {
|
||||
fs.writeFileSync(test.file, test.content, 'utf8');
|
||||
console.log(`✅ Created (RED): ${test.file}`);
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 4. Aggregate Fixture Needs
|
||||
|
||||
**Collect all fixture needs from both subagents:**
|
||||
|
||||
```javascript
|
||||
const allFixtureNeeds = [...apiTestsOutput.fixture_needs, ...e2eTestsOutput.fixture_needs];
|
||||
|
||||
// Remove duplicates
|
||||
const uniqueFixtures = [...new Set(allFixtureNeeds)];
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 5. Generate Fixture Infrastructure
|
||||
|
||||
**Create fixtures needed by ATDD tests:**
|
||||
(Similar to the automate workflow, but typically simpler for ATDD since the feature is not implemented yet)
|
||||
|
||||
**Minimal fixtures for TDD red phase:**
|
||||
|
||||
```typescript
|
||||
// tests/fixtures/test-data.ts
|
||||
export const testUserData = {
|
||||
email: 'test@example.com',
|
||||
password: 'SecurePass123!',
|
||||
};
|
||||
```
|
||||
|
||||
Note: More complete fixtures will be needed when moving to the green phase.
|
||||
|
||||
---
|
||||
|
||||
### 6. Generate ATDD Checklist
|
||||
|
||||
**Create ATDD checklist document:**
|
||||
|
||||
```markdown
|
||||
# ATDD Checklist: [Story Name]
|
||||
|
||||
## TDD Red Phase (Current)
|
||||
|
||||
✅ Failing tests generated
|
||||
|
||||
- API Tests: {api_test_count} tests (all skipped)
|
||||
- E2E Tests: {e2e_test_count} tests (all skipped)
|
||||
|
||||
## Acceptance Criteria Coverage
|
||||
|
||||
{list all acceptance criteria with test coverage}
|
||||
|
||||
## Next Steps (TDD Green Phase)
|
||||
|
||||
After implementing the feature:
|
||||
|
||||
1. Remove `test.skip()` from all test files
|
||||
2. Run tests: `npm test`
|
||||
3. Verify tests PASS (green phase)
|
||||
4. If any tests fail:
|
||||
- Either fix implementation (feature bug)
|
||||
- Or fix test (test bug)
|
||||
5. Commit passing tests
|
||||
|
||||
## Implementation Guidance
|
||||
|
||||
Feature endpoints to implement:
|
||||
{list endpoints from API tests}
|
||||
|
||||
UI components to implement:
|
||||
{list UI flows from E2E tests}
|
||||
```
|
||||
|
||||
**Save checklist:**
|
||||
|
||||
```javascript
|
||||
fs.writeFileSync(`{test_artifacts}/atdd-checklist-{story_id}.md`, checklistContent, 'utf8');
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 7. Calculate Summary Statistics
|
||||
|
||||
**Aggregate test counts:**
|
||||
|
||||
```javascript
|
||||
const resolvedMode = subagentContext?.execution?.resolvedMode; // Provided by Step 4's orchestration context
|
||||
const subagentExecutionLabel =
|
||||
resolvedMode === 'sequential'
|
||||
? 'SEQUENTIAL (API → E2E)'
|
||||
: resolvedMode === 'agent-team'
|
||||
? 'AGENT-TEAM (API + E2E)'
|
||||
: resolvedMode === 'subagent'
|
||||
? 'SUBAGENT (API + E2E)'
|
||||
: 'PARALLEL (API + E2E)';
|
||||
const performanceGainLabel =
|
||||
resolvedMode === 'sequential'
|
||||
? 'baseline (no parallel speedup)'
|
||||
: resolvedMode === 'agent-team' || resolvedMode === 'subagent'
|
||||
? '~50% faster than sequential'
|
||||
: 'mode-dependent';
|
||||
|
||||
const summary = {
|
||||
tdd_phase: 'RED',
|
||||
total_tests: apiTestsOutput.test_count + e2eTestsOutput.test_count,
|
||||
api_tests: apiTestsOutput.test_count,
|
||||
e2e_tests: e2eTestsOutput.test_count,
|
||||
all_tests_skipped: true,
|
||||
expected_to_fail: true,
|
||||
fixtures_created: uniqueFixtures.length,
|
||||
acceptance_criteria_covered: [
|
||||
...apiTestsOutput.tests.flatMap((t) => t.acceptance_criteria_covered),
|
||||
...e2eTestsOutput.tests.flatMap((t) => t.acceptance_criteria_covered),
|
||||
],
|
||||
knowledge_fragments_used: [...apiTestsOutput.knowledge_fragments_used, ...e2eTestsOutput.knowledge_fragments_used],
|
||||
subagent_execution: subagentExecutionLabel,
|
||||
performance_gain: performanceGainLabel,
|
||||
};
|
||||
```
|
||||
|
||||
**Store summary for Step 5:**
|
||||
|
||||
```javascript
|
||||
fs.writeFileSync('/tmp/tea-atdd-summary-{{timestamp}}.json', JSON.stringify(summary, null, 2), 'utf8');
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## OUTPUT SUMMARY
|
||||
|
||||
Display to user:
|
||||
|
||||
```
|
||||
✅ ATDD Test Generation Complete (TDD RED PHASE)
|
||||
|
||||
🔴 TDD Red Phase: Failing Tests Generated
|
||||
|
||||
📊 Summary:
|
||||
- Total Tests: {total_tests} (all with test.skip())
|
||||
- API Tests: {api_tests} (RED)
|
||||
- E2E Tests: {e2e_tests} (RED)
|
||||
- Fixtures Created: {fixtures_created}
|
||||
- All tests will FAIL until feature implemented
|
||||
|
||||
✅ Acceptance Criteria Coverage:
|
||||
{list all covered criteria}
|
||||
|
||||
🚀 Performance: {performance_gain}
|
||||
|
||||
📂 Generated Files:
|
||||
- tests/api/[feature].spec.ts (with test.skip())
|
||||
- tests/e2e/[feature].spec.ts (with test.skip())
|
||||
- tests/fixtures/test-data.ts
|
||||
- {test_artifacts}/atdd-checklist-{story_id}.md
|
||||
|
||||
📝 Next Steps:
|
||||
1. Implement the feature
|
||||
2. Remove test.skip() from tests
|
||||
3. Run tests → verify PASS (green phase)
|
||||
4. Commit passing tests
|
||||
|
||||
✅ Ready for validation (Step 5 - verify tests fail as expected)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## EXIT CONDITION
|
||||
|
||||
Proceed to Step 5 when:
|
||||
|
||||
- ✅ All test files written to disk (API + E2E)
|
||||
- ✅ All tests verified to have test.skip()
|
||||
- ✅ All fixtures created
|
||||
- ✅ ATDD checklist generated
|
||||
- ✅ Summary statistics calculated and saved
|
||||
- ✅ Output displayed to user
|
||||
|
||||
---
|
||||
|
||||
### 8. Save Progress
|
||||
|
||||
**Save this step's accumulated work to `{outputFile}`.**
|
||||
|
||||
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
|
||||
|
||||
```yaml
|
||||
---
|
||||
stepsCompleted: ['step-04c-aggregate']
|
||||
lastStep: 'step-04c-aggregate'
|
||||
lastSaved: '{date}'
|
||||
---
|
||||
```
|
||||
|
||||
Then write this step's output below the frontmatter.
|
||||
|
||||
- **If `{outputFile}` already exists**, update:
|
||||
- Add `'step-04c-aggregate'` to `stepsCompleted` array (only if not already present)
|
||||
- Set `lastStep: 'step-04c-aggregate'`
|
||||
- Set `lastSaved: '{date}'`
|
||||
- Append this step's output to the appropriate section.
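The create-or-update logic above can be sketched as a pure function (a real workflow would likely use a YAML library rather than regex edits; this is a sketch under that assumption):

```typescript
// Idempotent frontmatter save: create on first write, update on later writes
const updateFrontmatter = (doc: string, step: string, date: string): string => {
  const m = doc.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!m) {
    // First save: create the frontmatter block above the existing content
    return `---\nstepsCompleted: ['${step}']\nlastStep: '${step}'\nlastSaved: '${date}'\n---\n` + doc;
  }
  let fm = m[1];
  // Add the step only if not already present (keeps repeated saves idempotent)
  if (!fm.includes(`'${step}'`)) {
    fm = fm.replace(
      /stepsCompleted: \[(.*?)\]/,
      (_s, list) => `stepsCompleted: [${list ? list + ', ' : ''}'${step}']`,
    );
  }
  fm = fm.replace(/lastStep: '.*'/, `lastStep: '${step}'`);
  fm = fm.replace(/lastSaved: '.*'/, `lastSaved: '${date}'`);
  return doc.replace(m[0], `---\n${fm}\n---\n`);
};
```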
|
||||
|
||||
Load next step: `{nextStepFile}`
|
||||
|
||||
---
|
||||
|
||||
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
|
||||
|
||||
### ✅ SUCCESS:
|
||||
|
||||
- Both subagents succeeded
|
||||
- All tests have test.skip() (TDD red phase compliant)
|
||||
- All tests assert expected behavior (not placeholders)
|
||||
- All test files written to disk
|
||||
- ATDD checklist generated
|
||||
|
||||
### ❌ SYSTEM FAILURE:
|
||||
|
||||
- One or both subagents failed
|
||||
- Tests missing test.skip() (would break CI)
|
||||
- Tests have placeholder assertions
|
||||
- Test files not written to disk
|
||||
- ATDD checklist missing
|
||||
|
||||
**Master Rule:** TDD RED PHASE requires ALL tests to use test.skip() and assert expected behavior.
|
||||
@@ -0,0 +1,106 @@
---
name: 'step-05-validate-and-complete'
description: 'Validate ATDD outputs and summarize'
outputFile: '{test_artifacts}/atdd-checklist-{story_id}.md'
---

# Step 5: Validate & Complete

## STEP GOAL

Validate ATDD outputs and provide a completion summary.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Validate against the checklist

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Validation

Use `checklist.md` to validate:

- Prerequisites satisfied
- Test files created correctly
- Checklist matches acceptance criteria
- Tests are designed to fail before implementation
- [ ] CLI sessions cleaned up (no orphaned browsers)
- [ ] Temp artifacts stored in `{test_artifacts}/` not random locations

Fix any gaps before completion.

---

## 2. Polish Output

Before finalizing, review the complete output document for quality:

1. **Remove duplication**: Progressive-append workflow may have created repeated sections — consolidate
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)

---

## 3. Completion Summary

Report:

- Test files created
- Checklist output path
- Key risks or assumptions
- Next recommended workflow (e.g., implementation or `automate`)

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:

```yaml
---
stepsCompleted: ['step-05-validate-and-complete']
lastStep: 'step-05-validate-and-complete'
lastSaved: '{date}'
---
```

Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-05-validate-and-complete'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-05-validate-and-complete'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
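As an illustration of the Save Progress bookkeeping described in this step, a checklist file that has passed through both the aggregation and validation steps might carry frontmatter like the following (a hypothetical sketch; the step names and date are example values, not output from an actual run):

```yaml
---
stepsCompleted: ['step-04c-aggregate', 'step-05-validate-and-complete']
lastStep: 'step-05-validate-and-complete'
lastSaved: '2026-01-27'
---
```

On resume, a workflow runner would read `stepsCompleted` to skip finished steps and `lastStep` to pick up where it left off.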
65
.gemini/skills/bmad-testarch-atdd/steps-e/step-01-assess.md
Normal file
@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---

# Step 1: Assess Edit Target

## STEP GOAL:

Identify which output should be edited and load it.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until target is confirmed

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Identify Target

Ask the user to provide the output file path or select from known outputs.

### 2. Load Target

Read the provided output file in full.

### 3. Confirm

Confirm the target and proceed to edit.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Target identified and loaded

### ❌ SYSTEM FAILURE:

- Proceeding without a confirmed target
@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---

# Step 2: Apply Edits

## STEP GOAL:

Apply the requested edits to the selected output and confirm changes.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Only apply edits explicitly requested by the user

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: selected output and user changes
- Focus: apply edits only

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Confirm Requested Changes

Restate what will be changed and confirm.

### 2. Apply Changes

Update the output file accordingly.

### 3. Report

Summarize the edits applied.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Changes applied and confirmed

### ❌ SYSTEM FAILURE:

- Unconfirmed edits or missing update
@@ -0,0 +1,67 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/atdd-validation-report.md'
validationChecklist: '../checklist.md'
---

# Step 1: Validate Outputs

## STEP GOAL:

Validate outputs using the workflow checklist and record findings.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`

## CONTEXT BOUNDARIES:

- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Checklist

Read `{validationChecklist}` and list all criteria.

### 2. Validate Outputs

Evaluate outputs against each checklist item.

### 3. Write Report

Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Validation report written
- All checklist items evaluated

### ❌ SYSTEM FAILURE:

- Skipped checklist items
- No report produced
@@ -0,0 +1,73 @@
---
validationDate: 2026-01-27
workflowName: testarch-atdd
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-atdd
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---

# Validation Report: testarch-atdd

**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8

**Step File Sizes:**

- steps-c/step-01-preflight-and-context.md: 101 lines [GOOD]
- steps-c/step-02-generation-mode.md: 71 lines [GOOD]
- steps-c/step-03-test-strategy.md: 70 lines [GOOD]
- steps-c/step-04-generate-tests.md: 70 lines [GOOD]
- steps-c/step-05-validate-and-complete.md: 61 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

- No {project-root} hardcoded paths detected in body
- No dead relative links detected

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: atdd-checklist-template.md
- Steps with outputFile in frontmatter:
  - steps-c/step-04-generate-tests.md
  - steps-v/step-01-validate.md

## Validation Design Check

- checklist.md present: YES
- Validation steps folder (steps-v) present: YES

## Instruction Style Check

- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics

## Summary

- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
@@ -0,0 +1,116 @@
---
validationDate: 2026-01-27
workflowName: testarch-atdd
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-atdd
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---

# Validation Report: testarch-atdd

**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8

**Step File Sizes:**

- steps-c/step-01-preflight-and-context.md: 100 lines [GOOD]
- steps-c/step-02-generation-mode.md: 70 lines [GOOD]
- steps-c/step-03-test-strategy.md: 69 lines [GOOD]
- steps-c/step-04-generate-tests.md: 69 lines [GOOD]
- steps-c/step-05-validate-and-complete.md: 60 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

### Config Variables (Exceptions)

Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name

- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected

**Status:** ✅ PASS - No critical violations

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- steps-c/step-01-preflight-and-context.md: Init [PASS]
- steps-c/step-02-generation-mode.md: Middle [PASS]
- steps-c/step-03-test-strategy.md: Middle [PASS]
- steps-c/step-04-generate-tests.md: Middle [PASS]
- steps-c/step-05-validate-and-complete.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: atdd-checklist-template.md
- Steps with outputFile in frontmatter:
  - steps-c/step-04-generate-tests.md
  - steps-v/step-01-validate.md
- checklist.md present: YES

## Validation Design Check

- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed

## Instruction Style Check

- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance

## Collaborative Experience Check

- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals

## Subagent Optimization Opportunities

- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only

## Cohesive Review

- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with Test Architect persona
- Recommendation: READY (minor refinements optional)

## Plan Quality Validation

- Plan file present: workflow-plan.md
- Planned steps found: 8 (all implemented)
- Plan implementation status: Fully Implemented

## Summary

- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
21
.gemini/skills/bmad-testarch-atdd/workflow-plan.md
Normal file
@@ -0,0 +1,21 @@
# Workflow Plan: testarch-atdd

## Create Mode (steps-c)

- step-01-preflight-and-context.md
- step-02-generation-mode.md
- step-03-test-strategy.md
- step-04-generate-tests.md
- step-05-validate-and-complete.md

## Validate Mode (steps-v)

- step-01-validate.md

## Edit Mode (steps-e)

- step-01-assess.md
- step-02-apply-edit.md

## Outputs

- {test_artifacts}/atdd-checklist-{story_id}.md
- Failing acceptance tests under {project-root}/tests
41
.gemini/skills/bmad-testarch-atdd/workflow.md
Normal file
@@ -0,0 +1,41 @@
---
name: bmad-testarch-atdd
description: Generate failing acceptance tests using TDD cycle. Use when user says 'lets write acceptance tests' or 'I want to do ATDD'
web_bundle: true
---

# Acceptance Test-Driven Development (ATDD)

**Goal:** Generate failing acceptance tests before implementation using the TDD red-green-refactor cycle

**Role:** You are the Master Test Architect.

---

## WORKFLOW ARCHITECTURE

This workflow uses a **tri-modal step-file architecture**:

- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs

---

## INITIALIZATION SEQUENCE

### 1. Mode Determination

"Welcome to the workflow. What would you like to do?"

- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs

### 2. Route to First Step

- **If C:** Load `steps-c/step-01-preflight-and-context.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`
46
.gemini/skills/bmad-testarch-atdd/workflow.yaml
Normal file
@@ -0,0 +1,46 @@
# Test Architect workflow: bmad-testarch-atdd
name: bmad-testarch-atdd
# prettier-ignore
description: 'Generate failing acceptance tests using TDD cycle. Use when the user says "lets write acceptance tests" or "I want to do ATDD"'

# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"
template: "./atdd-checklist-template.md"

# Variables and inputs
variables:
  test_dir: "{project-root}/tests" # Root test directory

# Output configuration
default_output_file: "{test_artifacts}/atdd-checklist-{story_id}.md"

# Required tools
required_tools:
  - read_file # Read story markdown, framework config
  - write_file # Create test files, checklist, factory stubs
  - create_directory # Create test directories
  - list_files # Find existing fixtures and helpers
  - search_repo # Search for similar test patterns

tags:
  - qa
  - atdd
  - test-architect
  - tdd
  - red-green-refactor

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true
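The `"{config_source}:key"` values in workflow.yaml pull individual fields out of the TEA config file at runtime. As a minimal sketch of what that config file might contain (all keys come from the variable list above, but every value here is hypothetical):

```yaml
# {project-root}/_bmad/tea/config.yaml — hypothetical example values
output_folder: "docs/qa"
test_artifacts: "docs/qa/artifacts"
user_name: "Alex"
communication_language: "English"
document_output_language: "English"
```

With such a config, `test_artifacts: "{config_source}:test_artifacts"` would resolve to `docs/qa/artifacts`, and the default output file would land at `docs/qa/artifacts/atdd-checklist-{story_id}.md`.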
Block a user