docs: update all documentation and add AI tooling configs
- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
.agents/skills/bmad-testarch-nfr/SKILL.md (new file, 6 lines)
@@ -0,0 +1,6 @@
---
name: bmad-testarch-nfr
description: 'Assess NFRs like performance, security, and reliability. Use when the user says "lets assess NFRs" or "I want to evaluate non-functional requirements"'
---

Follow the instructions in [workflow.md](workflow.md).

@@ -0,0 +1 @@
type: skill
.agents/skills/bmad-testarch-nfr/checklist.md (new file, 407 lines)
@@ -0,0 +1,407 @@
# Non-Functional Requirements Assessment - Validation Checklist

**Workflow:** `testarch-nfr`
**Purpose:** Ensure comprehensive and evidence-based NFR assessment with actionable recommendations

---

Note: `nfr-assess` evaluates existing evidence; it does not run tests or CI workflows.

## Prerequisites Validation

- [ ] Implementation is deployed and accessible for evaluation
- [ ] Evidence sources are available (test results, metrics, logs, CI results)
- [ ] NFR categories are determined (performance, security, reliability, maintainability, custom)
- [ ] Evidence directories exist and are accessible (`test_results_dir`, `metrics_dir`, `logs_dir`)
- [ ] Knowledge base is loaded (nfr-criteria, ci-burn-in, test-quality)

---

## Context Loading

- [ ] Tech-spec.md loaded successfully (if available)
- [ ] PRD.md loaded (if available)
- [ ] Story file loaded (if applicable)
- [ ] Relevant knowledge fragments loaded from `tea-index.csv`:
  - [ ] `nfr-criteria.md`
  - [ ] `ci-burn-in.md`
  - [ ] `test-quality.md`
  - [ ] `playwright-config.md` (if using Playwright)

---
## NFR Categories and Thresholds

### Performance

- [ ] Response time threshold defined or marked as UNKNOWN
- [ ] Throughput threshold defined or marked as UNKNOWN
- [ ] Resource usage thresholds defined or marked as UNKNOWN
- [ ] Scalability requirements defined or marked as UNKNOWN

### Security

- [ ] Authentication requirements defined or marked as UNKNOWN
- [ ] Authorization requirements defined or marked as UNKNOWN
- [ ] Data protection requirements defined or marked as UNKNOWN
- [ ] Vulnerability management thresholds defined or marked as UNKNOWN
- [ ] Compliance requirements identified (GDPR, HIPAA, PCI-DSS, etc.)

### Reliability

- [ ] Availability (uptime) threshold defined or marked as UNKNOWN
- [ ] Error rate threshold defined or marked as UNKNOWN
- [ ] MTTR (Mean Time To Recovery) threshold defined or marked as UNKNOWN
- [ ] Fault tolerance requirements defined or marked as UNKNOWN
- [ ] Disaster recovery requirements defined (RTO, RPO) or marked as UNKNOWN

### Maintainability

- [ ] Test coverage threshold defined or marked as UNKNOWN
- [ ] Code quality threshold defined or marked as UNKNOWN
- [ ] Technical debt threshold defined or marked as UNKNOWN
- [ ] Documentation completeness threshold defined or marked as UNKNOWN

### Custom NFR Categories (if applicable)

- [ ] Custom NFR category 1: Thresholds defined or marked as UNKNOWN
- [ ] Custom NFR category 2: Thresholds defined or marked as UNKNOWN
- [ ] Custom NFR category 3: Thresholds defined or marked as UNKNOWN

---
## Evidence Gathering

### Performance Evidence

- [ ] Load test results collected (JMeter, k6, Gatling, etc.)
- [ ] Application metrics collected (response times, throughput, resource usage)
- [ ] APM data collected (New Relic, Datadog, Dynatrace, etc.)
- [ ] Lighthouse reports collected (if web app)
- [ ] Playwright performance traces collected (if applicable)

### Security Evidence

- [ ] SAST results collected (SonarQube, Checkmarx, Veracode, etc.)
- [ ] DAST results collected (OWASP ZAP, Burp Suite, etc.)
- [ ] Dependency scanning results collected (Snyk, Dependabot, npm audit)
- [ ] Penetration test reports collected (if available)
- [ ] Security audit logs collected
- [ ] Compliance audit results collected (if applicable)

### Reliability Evidence

- [ ] Uptime monitoring data collected (Pingdom, UptimeRobot, StatusCake)
- [ ] Error logs collected
- [ ] Error rate metrics collected
- [ ] CI burn-in results collected (stability over time)
- [ ] Chaos engineering test results collected (if available)
- [ ] Failover/recovery test results collected (if available)
- [ ] Incident reports and postmortems collected (if applicable)

### Maintainability Evidence

- [ ] Code coverage reports collected (Istanbul, NYC, c8, JaCoCo)
- [ ] Static analysis results collected (ESLint, SonarQube, CodeClimate)
- [ ] Technical debt metrics collected
- [ ] Documentation audit results collected
- [ ] Test review report collected (from test-review workflow, if available)
- [ ] Git metrics collected (code churn, commit frequency, etc.)

---
## NFR Assessment with Deterministic Rules

### Performance Assessment

- [ ] Response time assessed against threshold
- [ ] Throughput assessed against threshold
- [ ] Resource usage assessed against threshold
- [ ] Scalability assessed against requirements
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, metric name)

### Security Assessment

- [ ] Authentication strength assessed against requirements
- [ ] Authorization controls assessed against requirements
- [ ] Data protection assessed against requirements
- [ ] Vulnerability management assessed against thresholds
- [ ] Compliance assessed against requirements
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, scan result)

### Reliability Assessment

- [ ] Availability (uptime) assessed against threshold
- [ ] Error rate assessed against threshold
- [ ] MTTR assessed against threshold
- [ ] Fault tolerance assessed against requirements
- [ ] Disaster recovery assessed against requirements (RTO, RPO)
- [ ] CI burn-in assessed (stability over time)
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, monitoring data)

### Maintainability Assessment

- [ ] Test coverage assessed against threshold
- [ ] Code quality assessed against threshold
- [ ] Technical debt assessed against threshold
- [ ] Documentation completeness assessed against threshold
- [ ] Test quality assessed (from test-review, if available)
- [ ] Status classified (PASS/CONCERNS/FAIL) with justification
- [ ] Evidence source documented (file path, coverage report)

### Custom NFR Assessment (if applicable)

- [ ] Custom NFR 1 assessed against threshold with justification
- [ ] Custom NFR 2 assessed against threshold with justification
- [ ] Custom NFR 3 assessed against threshold with justification

---
## Status Classification Validation

### PASS Criteria Verified

- [ ] Evidence exists for PASS status
- [ ] Evidence meets or exceeds threshold
- [ ] No concerns flagged in evidence
- [ ] Quality is acceptable

### CONCERNS Criteria Verified

- [ ] Threshold is UNKNOWN (documented) OR
- [ ] Evidence is MISSING or INCOMPLETE (documented) OR
- [ ] Evidence is close to threshold (within 10%, documented) OR
- [ ] Evidence shows intermittent issues (documented)

### FAIL Criteria Verified

- [ ] Evidence exists BUT does not meet threshold (documented) OR
- [ ] Critical evidence is MISSING (documented) OR
- [ ] Evidence shows consistent failures (documented) OR
- [ ] Quality is unacceptable (documented)

### No Threshold Guessing

- [ ] All thresholds are either defined or marked as UNKNOWN
- [ ] No thresholds were guessed or inferred
- [ ] All UNKNOWN thresholds result in CONCERNS status

---
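The deterministic classification rules above can be sketched as a small function. This is an illustration only, not the workflow's actual implementation; the function name, the "lower is better" metric shape, and the mapping of missing evidence to CONCERNS (rather than FAIL for critical NFRs, which is not modeled here) are all assumptions:

```python
def classify_nfr(threshold, actual, margin=0.10):
    """Deterministic PASS/CONCERNS/FAIL for a 'lower is better' metric.

    threshold: numeric limit, or None when the threshold is UNKNOWN.
    actual: measured value, or None when evidence is MISSING.
    """
    if threshold is None:
        # Never guess a threshold: UNKNOWN always yields CONCERNS.
        return "CONCERNS"
    if actual is None:
        # Evidence missing for a defined threshold (sketch maps this
        # to CONCERNS; a critical NFR might instead be a FAIL).
        return "CONCERNS"
    if actual > threshold:
        # Evidence exists but does not meet the threshold.
        return "FAIL"
    if actual > threshold * (1 - margin):
        # Within 10% of the limit: close enough to flag.
        return "CONCERNS"
    return "PASS"
```

For example, with a 500 ms p95 threshold, a measured 480 ms would classify as CONCERNS (within 10% of the limit), while 300 ms would be a clean PASS.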
## Quick Wins and Recommended Actions

### Quick Wins Identified

- [ ] Low-effort, high-impact improvements identified for CONCERNS/FAIL
- [ ] Configuration changes (no code changes) identified
- [ ] Optimization opportunities identified (caching, indexing, compression)
- [ ] Monitoring additions identified (detect issues before failures)

### Recommended Actions

- [ ] Specific remediation steps provided (not generic advice)
- [ ] Priority assigned (CRITICAL, HIGH, MEDIUM, LOW)
- [ ] Estimated effort provided (hours, days)
- [ ] Owner suggestions provided (dev, ops, security)

### Monitoring Hooks

- [ ] Performance monitoring suggested (APM, synthetic monitoring)
- [ ] Error tracking suggested (Sentry, Rollbar, error logs)
- [ ] Security monitoring suggested (intrusion detection, audit logs)
- [ ] Alerting thresholds suggested (notify before breach)

### Fail-Fast Mechanisms

- [ ] Circuit breakers suggested for reliability
- [ ] Rate limiting suggested for performance
- [ ] Validation gates suggested for security
- [ ] Smoke tests suggested for maintainability

---
## Deliverables Generated

### NFR Assessment Report

- [ ] File created at `{test_artifacts}/nfr-assessment.md`
- [ ] Template from `nfr-report-template.md` used
- [ ] Executive summary included (overall status, critical issues)
- [ ] Assessment by category included (performance, security, reliability, maintainability)
- [ ] Evidence for each NFR documented
- [ ] Status classifications documented (PASS/CONCERNS/FAIL)
- [ ] Findings summary included (PASS count, CONCERNS count, FAIL count)
- [ ] Quick wins section included
- [ ] Recommended actions section included
- [ ] Evidence gaps checklist included

### Gate YAML Snippet (if enabled)

- [ ] YAML snippet generated
- [ ] Date included
- [ ] Categories status included (performance, security, reliability, maintainability)
- [ ] Overall status included (PASS/CONCERNS/FAIL)
- [ ] Issue counts included (critical, high, medium, concerns)
- [ ] Blockers flag included (true/false)
- [ ] Recommendations included

### Evidence Checklist (if enabled)

- [ ] All NFRs with MISSING or INCOMPLETE evidence listed
- [ ] Owners assigned for evidence collection
- [ ] Suggested evidence sources provided
- [ ] Deadlines set for evidence collection

### Updated Story File (if enabled and requested)

- [ ] "NFR Assessment" section added to story markdown
- [ ] Link to NFR assessment report included
- [ ] Overall status and critical issues included
- [ ] Gate status included

---
## Quality Assurance

### Accuracy Checks

- [ ] All NFR categories assessed (none skipped)
- [ ] All thresholds documented (defined or UNKNOWN)
- [ ] All evidence sources documented (file paths, metric names)
- [ ] Status classifications are deterministic and consistent
- [ ] No false positives (status correctly assigned)
- [ ] No false negatives (all issues identified)

### Completeness Checks

- [ ] All NFR categories covered (performance, security, reliability, maintainability, custom)
- [ ] All evidence sources checked (test results, metrics, logs, CI results)
- [ ] All status types used appropriately (PASS, CONCERNS, FAIL)
- [ ] All NFRs with CONCERNS/FAIL have recommendations
- [ ] All evidence gaps have owners and deadlines

### Actionability Checks

- [ ] Recommendations are specific (not generic)
- [ ] Remediation steps are clear and actionable
- [ ] Priorities are assigned (CRITICAL, HIGH, MEDIUM, LOW)
- [ ] Effort estimates are provided (hours, days)
- [ ] Owners are suggested (dev, ops, security)

---
## Integration with BMad Artifacts

### With tech-spec.md

- [ ] Tech spec loaded for NFR requirements and thresholds
- [ ] Performance targets extracted
- [ ] Security requirements extracted
- [ ] Reliability SLAs extracted
- [ ] Architectural decisions considered

### With test-design.md

- [ ] Test design loaded for NFR test plan
- [ ] Test priorities referenced (P0/P1/P2/P3)
- [ ] Assessment aligned with planned NFR validation

### With PRD.md

- [ ] PRD loaded for product-level NFR context
- [ ] User experience goals considered
- [ ] Unstated requirements checked
- [ ] Product-level SLAs referenced

---
## Quality Gates Validation

### Release Blocker (FAIL)

- [ ] Critical NFR status checked (security, reliability)
- [ ] Performance failures assessed for user impact
- [ ] Release blocker flagged if critical NFR has FAIL status

### PR Blocker (HIGH CONCERNS)

- [ ] High-priority NFR status checked
- [ ] Multiple CONCERNS assessed
- [ ] PR blocker flagged if HIGH priority issues exist

### Warning (CONCERNS)

- [ ] Any NFR with CONCERNS status flagged
- [ ] Missing or incomplete evidence documented
- [ ] Warning issued to address before next release

### Pass (PASS)

- [ ] All NFRs have PASS status
- [ ] No blockers or concerns exist
- [ ] Ready for release confirmed

---
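The gate tiers above can be applied mechanically once each category has a status. A minimal sketch, assuming per-category statuses as input and treating security and reliability as the critical categories (the function name and the exact tier mapping are illustrative assumptions, not the workflow's prescribed implementation):

```python
def gate_decision(statuses, critical_categories=("security", "reliability")):
    """Map per-category NFR statuses to a gate outcome.

    statuses: dict like {"performance": "PASS", "security": "FAIL", ...}
    Returns one of: "RELEASE_BLOCKER", "PR_BLOCKER", "WARNING", "PASS".
    """
    # A FAIL on a critical category blocks the release outright.
    if any(statuses.get(c) == "FAIL" for c in critical_categories):
        return "RELEASE_BLOCKER"
    # Any other FAIL blocks the PR (sketch's reading of "HIGH" issues).
    if any(s == "FAIL" for s in statuses.values()):
        return "PR_BLOCKER"
    # CONCERNS anywhere yields a warning to address before next release.
    if any(s == "CONCERNS" for s in statuses.values()):
        return "WARNING"
    return "PASS"
```

So `{"security": "FAIL", "performance": "PASS"}` is a release blocker, while an all-PASS input clears the gate.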
## Non-Prescriptive Validation

- [ ] NFR categories adapted to team needs
- [ ] Thresholds appropriate for project context
- [ ] Assessment criteria customized as needed
- [ ] Teams can extend with custom NFR categories
- [ ] Integration with external tools supported (New Relic, Datadog, SonarQube, JIRA)

---
## Documentation and Communication

- [ ] NFR assessment report is readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
- [ ] Overall status is prominent and unambiguous
- [ ] Executive summary provides quick understanding

---
## Final Validation

- [ ] All prerequisites met
- [ ] All NFR categories assessed with evidence (or gaps documented)
- [ ] No thresholds were guessed (all defined or UNKNOWN)
- [ ] Status classifications are deterministic and justified
- [ ] Quick wins identified for all CONCERNS/FAIL
- [ ] Recommended actions are specific and actionable
- [ ] Evidence gaps documented with owners and deadlines
- [ ] NFR assessment report generated and saved
- [ ] Gate YAML snippet generated (if enabled)
- [ ] Evidence checklist generated (if enabled)
- [ ] Workflow completed successfully

---
## Sign-Off

**NFR Assessment Status:**

- [ ] ✅ PASS - All NFRs meet requirements, ready for release
- [ ] ⚠️ CONCERNS - Some NFRs have concerns, address before next release
- [ ] ❌ FAIL - Critical NFRs not met, BLOCKER for release

**Next Actions:**

- If PASS ✅: Proceed to `*gate` workflow or release
- If CONCERNS ⚠️: Address HIGH/CRITICAL issues, re-run `*nfr-assess`
- If FAIL ❌: Resolve FAIL status NFRs, re-run `*nfr-assess`

**Critical Issues:** {COUNT}
**High Priority Issues:** {COUNT}
**Concerns:** {COUNT}

---

<!-- Powered by BMAD-CORE™ -->
.agents/skills/bmad-testarch-nfr/instructions.md (new file, 43 lines)
@@ -0,0 +1,43 @@
# Non-Functional Requirements Assessment

**Workflow:** `bmad-testarch-nfr`
**Version:** 5.0 (Step-File Architecture)

---

## Overview

Assess non-functional requirements (performance, security, reliability, maintainability) with evidence-based validation and deterministic PASS/CONCERNS/FAIL outcomes.

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture**:

- **Micro-file Design**: Each step is self-contained
- **JIT Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

From `workflow.yaml`, resolve:

- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
- `custom_nfr_categories`

### 2. First Step

Load, read completely, and execute:
`./steps-c/step-01-load-context.md`

### 3. Resume Support

If the user selects **Resume** mode, load, read completely, and execute:
`./steps-c/step-01b-resume.md`

This checks the output document for progress tracking frontmatter and routes to the next incomplete step.
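The resume check described in step 3 could look roughly like this. This is a hypothetical sketch: the `stepsCompleted` key is taken from the report template's progress-tracking frontmatter, but the function name, the step-list argument, and the flat-list parsing are assumptions about how a routing helper might be written:

```python
import re

def next_step(report_text, all_steps):
    """Read `stepsCompleted` from the report's YAML frontmatter and
    return the first step not yet completed (or None when all are done)."""
    completed = []
    # Grab the frontmatter block delimited by --- markers at the top.
    match = re.match(r"^---\n(.*?)\n---", report_text, re.DOTALL)
    if match:
        # Parse a flat list like: stepsCompleted: ['step-01', 'step-02']
        m = re.search(r"stepsCompleted:\s*\[(.*?)\]", match.group(1))
        if m:
            completed = [s.strip().strip("'\"")
                         for s in m.group(1).split(",") if s.strip()]
    for step in all_steps:
        if step not in completed:
            return step
    return None
```

Given a report whose frontmatter lists `step-01` and `step-02` as completed, `next_step(doc, ["step-01", "step-02", "step-03"])` would route to `step-03`; a document with no frontmatter routes to the first step.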
.agents/skills/bmad-testarch-nfr/nfr-report-template.md (new file, 470 lines)
@@ -0,0 +1,470 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-nfr-assess'
inputDocuments: []
---

# NFR Assessment - {FEATURE_NAME}

**Date:** {DATE}
**Story:** {STORY_ID} (if applicable)
**Overall Status:** {OVERALL_STATUS} {STATUS_ICON}

---

Note: This assessment summarizes existing evidence; it does not run tests or CI workflows.

## Executive Summary

**Assessment:** {PASS_COUNT} PASS, {CONCERNS_COUNT} CONCERNS, {FAIL_COUNT} FAIL

**Blockers:** {BLOCKER_COUNT} {BLOCKER_DESCRIPTION}

**High Priority Issues:** {HIGH_PRIORITY_COUNT} {HIGH_PRIORITY_DESCRIPTION}

**Recommendation:** {OVERALL_RECOMMENDATION}

---
## Performance Assessment

### Response Time (p95)

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

### Throughput

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE}
- **Actual:** {ACTUAL_VALUE}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

### Resource Usage

- **CPU Usage**
  - **Status:** {STATUS} {STATUS_ICON}
  - **Threshold:** {THRESHOLD_VALUE}
  - **Actual:** {ACTUAL_VALUE}
  - **Evidence:** {EVIDENCE_SOURCE}

- **Memory Usage**
  - **Status:** {STATUS} {STATUS_ICON}
  - **Threshold:** {THRESHOLD_VALUE}
  - **Actual:** {ACTUAL_VALUE}
  - **Evidence:** {EVIDENCE_SOURCE}

### Scalability

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

---
## Security Assessment

### Authentication Strength

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}
- **Recommendation:** {RECOMMENDATION} (if CONCERNS or FAIL)

### Authorization Controls

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

### Data Protection

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

### Vulnerability Management

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION} (e.g., "0 critical, <3 high vulnerabilities")
- **Actual:** {ACTUAL_DESCRIPTION} (e.g., "0 critical, 1 high, 5 medium vulnerabilities")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Snyk scan results - scan-2025-10-14.json")
- **Findings:** {FINDINGS_DESCRIPTION}

### Compliance (if applicable)

- **Status:** {STATUS} {STATUS_ICON}
- **Standards:** {COMPLIANCE_STANDARDS} (e.g., "GDPR, HIPAA, PCI-DSS")
- **Actual:** {ACTUAL_COMPLIANCE_STATUS}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

---
## Reliability Assessment

### Availability (Uptime)

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "99.9%")
- **Actual:** {ACTUAL_VALUE} (e.g., "99.95%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Uptime monitoring - uptime-report-2025-10-14.csv")
- **Findings:** {FINDINGS_DESCRIPTION}

### Error Rate

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "<0.1%")
- **Actual:** {ACTUAL_VALUE} (e.g., "0.05%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Error logs - logs/errors-2025-10.log")
- **Findings:** {FINDINGS_DESCRIPTION}

### MTTR (Mean Time To Recovery)

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "<15 minutes")
- **Actual:** {ACTUAL_VALUE} (e.g., "12 minutes")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Incident reports - incidents/")
- **Findings:** {FINDINGS_DESCRIPTION}

### Fault Tolerance

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

### CI Burn-In (Stability)

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "100 consecutive successful runs")
- **Actual:** {ACTUAL_VALUE} (e.g., "150 consecutive successful runs")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "CI burn-in results - ci-burn-in-2025-10-14.log")
- **Findings:** {FINDINGS_DESCRIPTION}

### Disaster Recovery (if applicable)

- **RTO (Recovery Time Objective)**
  - **Status:** {STATUS} {STATUS_ICON}
  - **Threshold:** {THRESHOLD_VALUE}
  - **Actual:** {ACTUAL_VALUE}
  - **Evidence:** {EVIDENCE_SOURCE}

- **RPO (Recovery Point Objective)**
  - **Status:** {STATUS} {STATUS_ICON}
  - **Threshold:** {THRESHOLD_VALUE}
  - **Actual:** {ACTUAL_VALUE}
  - **Evidence:** {EVIDENCE_SOURCE}

---
## Maintainability Assessment

### Test Coverage

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=80%")
- **Actual:** {ACTUAL_VALUE} (e.g., "87%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Coverage report - coverage/lcov-report/index.html")
- **Findings:** {FINDINGS_DESCRIPTION}

### Code Quality

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=85/100")
- **Actual:** {ACTUAL_VALUE} (e.g., "92/100")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "SonarQube analysis - sonarqube-report-2025-10-14.pdf")
- **Findings:** {FINDINGS_DESCRIPTION}

### Technical Debt

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., "<5% debt ratio")
- **Actual:** {ACTUAL_VALUE} (e.g., "3.2% debt ratio")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "CodeClimate analysis - codeclimate-2025-10-14.json")
- **Findings:** {FINDINGS_DESCRIPTION}

### Documentation Completeness

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_VALUE} (e.g., ">=90%")
- **Actual:** {ACTUAL_VALUE} (e.g., "95%")
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Documentation audit - docs-audit-2025-10-14.md")
- **Findings:** {FINDINGS_DESCRIPTION}

### Test Quality (from test-review, if available)

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE} (e.g., "Test review report - test-review-2025-10-14.md")
- **Findings:** {FINDINGS_DESCRIPTION}

---
## Custom NFR Assessments (if applicable)

### {CUSTOM_NFR_NAME_1}

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

### {CUSTOM_NFR_NAME_2}

- **Status:** {STATUS} {STATUS_ICON}
- **Threshold:** {THRESHOLD_DESCRIPTION}
- **Actual:** {ACTUAL_DESCRIPTION}
- **Evidence:** {EVIDENCE_SOURCE}
- **Findings:** {FINDINGS_DESCRIPTION}

---
## Quick Wins

{QUICK_WIN_COUNT} quick wins identified for immediate implementation:

1. **{QUICK_WIN_TITLE_1}** ({NFR_CATEGORY}) - {PRIORITY} - {ESTIMATED_EFFORT}
   - {QUICK_WIN_DESCRIPTION}
   - No code changes needed / Minimal code changes

2. **{QUICK_WIN_TITLE_2}** ({NFR_CATEGORY}) - {PRIORITY} - {ESTIMATED_EFFORT}
   - {QUICK_WIN_DESCRIPTION}

---
## Recommended Actions

### Immediate (Before Release) - CRITICAL/HIGH Priority

1. **{ACTION_TITLE_1}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
   - {ACTION_DESCRIPTION}
   - {SPECIFIC_STEPS}
   - {VALIDATION_CRITERIA}

2. **{ACTION_TITLE_2}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
   - {ACTION_DESCRIPTION}
   - {SPECIFIC_STEPS}
   - {VALIDATION_CRITERIA}

### Short-term (Next Milestone) - MEDIUM Priority

1. **{ACTION_TITLE_3}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
   - {ACTION_DESCRIPTION}

2. **{ACTION_TITLE_4}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
   - {ACTION_DESCRIPTION}

### Long-term (Backlog) - LOW Priority

1. **{ACTION_TITLE_5}** - {PRIORITY} - {ESTIMATED_EFFORT} - {OWNER}
   - {ACTION_DESCRIPTION}

---
## Monitoring Hooks

{MONITORING_HOOK_COUNT} monitoring hooks recommended to detect issues before failures:

### Performance Monitoring

- [ ] {MONITORING_TOOL_1} - {MONITORING_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}

- [ ] {MONITORING_TOOL_2} - {MONITORING_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}

### Security Monitoring

- [ ] {MONITORING_TOOL_3} - {MONITORING_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}

### Reliability Monitoring

- [ ] {MONITORING_TOOL_4} - {MONITORING_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}

### Alerting Thresholds

- [ ] {ALERT_DESCRIPTION} - Notify when {THRESHOLD_CONDITION}
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}

---
## Fail-Fast Mechanisms

{FAIL_FAST_COUNT} fail-fast mechanisms recommended to prevent failures:

### Circuit Breakers (Reliability)

- [ ] {CIRCUIT_BREAKER_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Estimated Effort:** {EFFORT}

### Rate Limiting (Performance)

- [ ] {RATE_LIMITING_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Estimated Effort:** {EFFORT}

### Validation Gates (Security)

- [ ] {VALIDATION_GATE_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Estimated Effort:** {EFFORT}

### Smoke Tests (Maintainability)

- [ ] {SMOKE_TEST_DESCRIPTION}
  - **Owner:** {OWNER}
  - **Estimated Effort:** {EFFORT}

---
## Evidence Gaps

{EVIDENCE_GAP_COUNT} evidence gaps identified - action required:

- [ ] **{NFR_NAME_1}** ({NFR_CATEGORY})
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}
  - **Suggested Evidence:** {SUGGESTED_EVIDENCE_SOURCE}
  - **Impact:** {IMPACT_DESCRIPTION}

- [ ] **{NFR_NAME_2}** ({NFR_CATEGORY})
  - **Owner:** {OWNER}
  - **Deadline:** {DEADLINE}
  - **Suggested Evidence:** {SUGGESTED_EVIDENCE_SOURCE}
  - **Impact:** {IMPACT_DESCRIPTION}

---
|
||||
|
||||
## Findings Summary
|
||||
|
||||
**Based on ADR Quality Readiness Checklist (8 categories, 29 criteria)**
|
||||
|
||||
| Category | Criteria Met | PASS | CONCERNS | FAIL | Overall Status |
|
||||
| ------------------------------------------------ | ------------------ | ---------------- | -------------------- | ---------------- | ----------------------------------- |
|
||||
| 1. Testability & Automation | {T_MET}/4 | {T_PASS} | {T_CONCERNS} | {T_FAIL} | {T_STATUS} {T_ICON} |
|
||||
| 2. Test Data Strategy | {TD_MET}/3 | {TD_PASS} | {TD_CONCERNS} | {TD_FAIL} | {TD_STATUS} {TD_ICON} |
|
||||
| 3. Scalability & Availability | {SA_MET}/4 | {SA_PASS} | {SA_CONCERNS} | {SA_FAIL} | {SA_STATUS} {SA_ICON} |
|
||||
| 4. Disaster Recovery | {DR_MET}/3 | {DR_PASS} | {DR_CONCERNS} | {DR_FAIL} | {DR_STATUS} {DR_ICON} |
|
||||
| 5. Security | {SEC_MET}/4 | {SEC_PASS} | {SEC_CONCERNS} | {SEC_FAIL} | {SEC_STATUS} {SEC_ICON} |
|
||||
| 6. Monitorability, Debuggability & Manageability | {MON_MET}/4 | {MON_PASS} | {MON_CONCERNS} | {MON_FAIL} | {MON_STATUS} {MON_ICON} |
|
||||
| 7. QoS & QoE | {QOS_MET}/4 | {QOS_PASS} | {QOS_CONCERNS} | {QOS_FAIL} | {QOS_STATUS} {QOS_ICON} |
|
||||
| 8. Deployability | {DEP_MET}/3 | {DEP_PASS} | {DEP_CONCERNS} | {DEP_FAIL} | {DEP_STATUS} {DEP_ICON} |
|
||||
| **Total** | **{TOTAL_MET}/29** | **{TOTAL_PASS}** | **{TOTAL_CONCERNS}** | **{TOTAL_FAIL}** | **{OVERALL_STATUS} {OVERALL_ICON}** |
|
||||
|
||||
**Criteria Met Scoring:**
|
||||
|
||||
- ≥26/29 (90%+) = Strong foundation
|
||||
- 20-25/29 (69-86%) = Room for improvement
|
||||
- <20/29 (<69%) = Significant gaps
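The banding above can be expressed as a small helper. A minimal sketch, where the function name and label wording are illustrative rather than part of the workflow:

```javascript
// Map an ADR checklist score (criteria met out of 29) to the
// readiness band described above. Band labels are illustrative.
function readinessBand(criteriaMet, total = 29) {
  const pct = Math.round((criteriaMet / total) * 100);
  if (criteriaMet >= 26) return `Strong foundation (${pct}%)`;
  if (criteriaMet >= 20) return `Room for improvement (${pct}%)`;
  return `Significant gaps (${pct}%)`;
}

console.log(readinessBand(27)); // → "Strong foundation (93%)"
```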

---

## Gate YAML Snippet

```yaml
nfr_assessment:
  date: '{DATE}'
  story_id: '{STORY_ID}'
  feature_name: '{FEATURE_NAME}'
  adr_checklist_score: '{TOTAL_MET}/29' # ADR Quality Readiness Checklist
  categories:
    testability_automation: '{T_STATUS}'
    test_data_strategy: '{TD_STATUS}'
    scalability_availability: '{SA_STATUS}'
    disaster_recovery: '{DR_STATUS}'
    security: '{SEC_STATUS}'
    monitorability: '{MON_STATUS}'
    qos_qoe: '{QOS_STATUS}'
    deployability: '{DEP_STATUS}'
  overall_status: '{OVERALL_STATUS}'
  critical_issues: {CRITICAL_COUNT}
  high_priority_issues: {HIGH_COUNT}
  medium_priority_issues: {MEDIUM_COUNT}
  concerns: {CONCERNS_COUNT}
  blockers: {BLOCKER_BOOLEAN} # true/false
  quick_wins: {QUICK_WIN_COUNT}
  evidence_gaps: {EVIDENCE_GAP_COUNT}
  recommendations:
    - '{RECOMMENDATION_1}'
    - '{RECOMMENDATION_2}'
    - '{RECOMMENDATION_3}'
```

---

## Related Artifacts

- **Story File:** {STORY_FILE_PATH} (if applicable)
- **Tech Spec:** {TECH_SPEC_PATH} (if available)
- **PRD:** {PRD_PATH} (if available)
- **Test Design:** {TEST_DESIGN_PATH} (if available)
- **Evidence Sources:**
  - Test Results: {TEST_RESULTS_DIR}
  - Metrics: {METRICS_DIR}
  - Logs: {LOGS_DIR}
  - CI Results: {CI_RESULTS_PATH}

---

## Recommendations Summary

**Release Blocker:** {RELEASE_BLOCKER_SUMMARY}

**High Priority:** {HIGH_PRIORITY_SUMMARY}

**Medium Priority:** {MEDIUM_PRIORITY_SUMMARY}

**Next Steps:** {NEXT_STEPS_DESCRIPTION}

---

## Sign-Off

**NFR Assessment:**

- Overall Status: {OVERALL_STATUS} {OVERALL_ICON}
- Critical Issues: {CRITICAL_COUNT}
- High Priority Issues: {HIGH_COUNT}
- Concerns: {CONCERNS_COUNT}
- Evidence Gaps: {EVIDENCE_GAP_COUNT}

**Gate Status:** {GATE_STATUS} {GATE_ICON}

**Next Actions:**

- If PASS ✅: Proceed to `*gate` workflow or release
- If CONCERNS ⚠️: Address HIGH/CRITICAL issues, re-run `*nfr-assess`
- If FAIL ❌: Resolve FAIL status NFRs, re-run `*nfr-assess`

**Generated:** {DATE}
**Workflow:** testarch-nfr v4.0

---

<!-- Powered by BMAD-CORE™ -->
138
.agents/skills/bmad-testarch-nfr/steps-c/step-01-load-context.md
Normal file
@@ -0,0 +1,138 @@
---
name: 'step-01-load-context'
description: 'Load NFR requirements, evidence sources, and knowledge base'
nextStepFile: './step-02-define-thresholds.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
outputFile: '{test_artifacts}/nfr-assessment.md'
---

# Step 1: Load Context & Knowledge Base

## STEP GOAL

Gather NFR requirements, evidence sources, and knowledge fragments needed for assessment.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Halt if implementation or evidence is unavailable

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Prerequisites

- Implementation accessible for evaluation
- Evidence sources available (test results, metrics, logs)

If missing: **HALT** and request the missing inputs.

---

## 2. Load Configuration

From `{config_source}`:

- Read `tea_browser_automation`

---

### Tiered Knowledge Loading

Load fragments based on their `tier` classification in `tea-index.csv`:

1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)

> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.
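A tier filter over the index rows could look like the sketch below. It assumes `tea-index.csv` has already been parsed into objects with `fragment`, `tier`, and (for specialized rows) `useCase` fields; the real index schema may differ:

```javascript
// Select which knowledge fragments to load, by tier.
// Column names (fragment, tier, useCase) are assumptions about the index.
function selectFragments(rows, { extended = false, useCases = [] } = {}) {
  return rows
    .filter((row) => {
      if (row.tier === 'core') return true; // always load
      if (row.tier === 'extended') return extended; // load on demand
      if (row.tier === 'specialized') return useCases.includes(row.useCase);
      return false;
    })
    .map((row) => row.fragment);
}

const index = [
  { fragment: 'test-quality.md', tier: 'core' },
  { fragment: 'ci-burn-in.md', tier: 'extended' },
  { fragment: 'contract-testing.md', tier: 'specialized', useCase: 'microservices' },
];

console.log(selectFragments(index)); // → [ 'test-quality.md' ]
```

With only core rows kept by default, the context savings the note above describes fall out of the filter directly.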

## 3. Load Knowledge Base Fragments

From `{knowledgeIndex}` load:

- `adr-quality-readiness-checklist.md`
- `ci-burn-in.md`
- `test-quality.md`
- `playwright-config.md`
- `error-handling.md`

**Playwright CLI (if `tea_browser_automation` is "cli" or "auto"):**

- `playwright-cli.md`

**MCP Patterns (if `tea_browser_automation` is "mcp" or "auto"):**

- (existing MCP-related fragments, if any are added in future)

---

## 4. Load Artifacts

If available, read:

- `tech-spec.md` (primary NFRs)
- `PRD.md` (product-level NFRs)
- `story` or `test-design` docs (feature-level NFRs)

---

## 5. Confirm Inputs

Summarize loaded NFR sources and evidence availability.

---

## 6. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it using the workflow template (if available) with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-01-load-context']
  lastStep: 'step-01-load-context'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-01-load-context'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-01-load-context'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
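The frontmatter update rules above can be sketched as a pure function. This is a hypothetical helper, not part of the workflow contract:

```javascript
// Apply the save-progress rules: append the step once (no duplicates),
// refresh lastStep and lastSaved. Operates on parsed frontmatter in memory.
function updateProgress(frontmatter, step, date) {
  const steps = frontmatter.stepsCompleted || [];
  return {
    ...frontmatter,
    stepsCompleted: steps.includes(step) ? steps : [...steps, step],
    lastStep: step,
    lastSaved: date,
  };
}

const fm = updateProgress({ stepsCompleted: ['step-01-load-context'] }, 'step-01-load-context', '2026-01-01');
console.log(fm.stepsCompleted); // → [ 'step-01-load-context' ] (no duplicate)
```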

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
106
.agents/skills/bmad-testarch-nfr/steps-c/step-01b-resume.md
Normal file
@@ -0,0 +1,106 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/nfr-assessment.md'
---

# Step 1b: Resume Workflow

## STEP GOAL

Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.

## MANDATORY EXECUTION RULES

- Read the entire step file before acting
- Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- Follow the MANDATORY SEQUENCE exactly
- Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Output Document

Read `{outputFile}` and parse YAML frontmatter for:

- `stepsCompleted` -- array of completed step names
- `lastStep` -- last completed step name
- `lastSaved` -- timestamp of last save

**If `{outputFile}` does not exist**, display:

"No previous progress found. There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."

**THEN:** Halt. Do not proceed.

---

### 2. Display Progress Dashboard

Display progress with checkmark/empty indicators:

```
NFR Assessment - Resume Progress:

1. Load Context (step-01-load-context) [completed/pending]
2. Define Thresholds (step-02-define-thresholds) [completed/pending]
3. Gather Evidence (step-03-gather-evidence) [completed/pending]
4. Evaluate & Aggregate (step-04e-aggregate-nfr) [completed/pending]
5. Generate Report (step-05-generate-report) [completed/pending]

Last saved: {lastSaved}
```

---

### 3. Route to Next Step

Based on `lastStep`, load the next incomplete step:

| lastStep                    | Next Step File                    |
| --------------------------- | --------------------------------- |
| `step-01-load-context`      | `./step-02-define-thresholds.md`  |
| `step-02-define-thresholds` | `./step-03-gather-evidence.md`    |
| `step-03-gather-evidence`   | `./step-04-evaluate-and-score.md` |
| `step-04e-aggregate-nfr`    | `./step-05-generate-report.md`    |
| `step-05-generate-report`   | **Workflow already complete.**    |

**If `lastStep` is the final step** (`step-05-generate-report`), display: "All steps completed. Use **[C] Create** to start fresh, **[V] Validate** to review outputs, or **[E] Edit** to make revisions." Then halt.

**If `lastStep` does not match any value above**, display: "Unknown progress state (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.

**Otherwise**, load the identified step file, read completely, and execute.

The existing content in `{outputFile}` provides context from previously completed steps.
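The routing table above can be expressed as data plus a lookup. The step names and files come from the table itself; the helper around them is an illustrative sketch:

```javascript
// Route from the last completed step to the next step file.
const nextStepByLast = {
  'step-01-load-context': './step-02-define-thresholds.md',
  'step-02-define-thresholds': './step-03-gather-evidence.md',
  'step-03-gather-evidence': './step-04-evaluate-and-score.md',
  'step-04e-aggregate-nfr': './step-05-generate-report.md',
};

function routeNext(lastStep) {
  if (lastStep === 'step-05-generate-report') return { done: true };
  const next = nextStepByLast[lastStep];
  return next ? { next } : { error: `Unknown progress state: ${lastStep}` };
}

console.log(routeNext('step-03-gather-evidence')); // → { next: './step-04-evaluate-and-score.md' }
```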

---

## SYSTEM SUCCESS/FAILURE METRICS

### SUCCESS:

- Output document loaded and parsed correctly
- Progress dashboard displayed accurately
- Routed to correct next step

### FAILURE:

- Not loading output document
- Incorrect progress display
- Routing to wrong step

**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.

@@ -0,0 +1,107 @@
---
name: 'step-02-define-thresholds'
description: 'Identify NFR categories and thresholds'
nextStepFile: './step-03-gather-evidence.md'
outputFile: '{test_artifacts}/nfr-assessment.md'
---

# Step 2: Define NFR Categories & Thresholds

## STEP GOAL

Establish the NFR categories to assess and the thresholds used for validation.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Never guess thresholds

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Select Categories

Use the ADR Quality Readiness Checklist (8 categories):

1. Testability & Automation
2. Test Data Strategy
3. Scalability & Availability
4. Disaster Recovery
5. Security
6. Monitorability/Debuggability/Manageability
7. QoS/QoE
8. Deployability

Add any `custom_nfr_categories` if provided.

---

## 2. Define Thresholds

For each category, extract thresholds from:

- tech-spec (primary)
- PRD (secondary)
- story or test-design (feature-specific)

If a threshold is unknown, mark it **UNKNOWN** and plan to report **CONCERNS**.

---

## 3. Confirm NFR Matrix

List each NFR category with its threshold or UNKNOWN status.
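The confirmed matrix can be captured in the same YAML shape the report's gate snippet uses. An illustrative sketch: the keys mirror categories from the checklist above, while the threshold values are invented examples, not prescribed names:

```yaml
nfr_matrix:
  testability_automation: { threshold: 'All ACs automatable', source: 'tech-spec' }
  scalability_availability: { threshold: '99.9% uptime, 500 rps sustained', source: 'PRD' }
  security: { threshold: 'No critical vulns; RBAC enforced', source: 'tech-spec' }
  disaster_recovery: { threshold: 'UNKNOWN', source: null } # report CONCERNS
```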

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it using the workflow template (if available) with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-02-define-thresholds']
  lastStep: 'step-02-define-thresholds'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-02-define-thresholds'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-02-define-thresholds'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.

@@ -0,0 +1,108 @@
---
name: 'step-03-gather-evidence'
description: 'Collect evidence for each NFR category'
nextStepFile: './step-04-evaluate-and-score.md'
outputFile: '{test_artifacts}/nfr-assessment.md'
---

# Step 3: Gather Evidence

## STEP GOAL

Collect measurable evidence to evaluate each NFR category.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Evidence Sources

Collect evidence for:

- **Performance**: load tests, metrics, response time data
- **Security**: scans, auth tests, vuln reports
- **Reliability**: error rates, burn-in runs, failover tests
- **Maintainability**: test quality, code health signals
- **Other categories**: logs, monitoring, DR drills, deployability checks

---

## 2. Browser-Based Evidence Collection (if `tea_browser_automation` is `cli` or `auto`)

> **Fallback:** If CLI is not installed, fall back to MCP (if available) or skip browser-based evidence collection.

For performance and security categories, CLI can gather live evidence:

**Performance evidence (page load, response times):**

1. `playwright-cli -s=tea-nfr open <target_url>`
2. `playwright-cli -s=tea-nfr network` → capture response times and payload sizes
3. `playwright-cli -s=tea-nfr screenshot --filename={test_artifacts}/nfr/perf-<page>.png`
4. `playwright-cli -s=tea-nfr close`

> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-nfr close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.

Store artifacts under `{test_artifacts}/nfr/`
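The four CLI calls above can be wrapped in one script. A sketch that assumes the `playwright-cli` flags shown in this step, and that no-ops when the CLI is absent, matching the fallback note:

```shell
#!/usr/bin/env sh
# Capture browser-based performance evidence for one page.
# Skips cleanly when playwright-cli is not installed (documented fallback).
set -eu
TARGET_URL="${1:-https://example.test}"
OUT_DIR="${TEST_ARTIFACTS:-./test-artifacts}/nfr"
mkdir -p "$OUT_DIR"

if command -v playwright-cli >/dev/null 2>&1; then
  playwright-cli -s=tea-nfr open "$TARGET_URL"
  playwright-cli -s=tea-nfr network > "$OUT_DIR/network-evidence.txt"
  playwright-cli -s=tea-nfr screenshot --filename="$OUT_DIR/perf-home.png"
  playwright-cli -s=tea-nfr close # session hygiene: never close-all
else
  echo "playwright-cli not installed; skipping browser evidence" >&2
fi
```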

---

## 3. Evidence Gaps

If evidence is missing for a category, mark that category as **CONCERNS**.

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it using the workflow template (if available) with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-03-gather-evidence']
  lastStep: 'step-03-gather-evidence'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-03-gather-evidence'` to `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-03-gather-evidence'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
@@ -0,0 +1,254 @@
---
name: 'step-04-evaluate-and-score'
description: 'Orchestrate adaptive NFR domain assessments (agent-team, subagent, or sequential)'
nextStepFile: './step-04e-aggregate-nfr.md'
---

# Step 4: Orchestrate Adaptive NFR Assessment

## STEP GOAL

Select the execution mode deterministically, then assess NFR domains using agent-team, subagent, or sequential execution while preserving output contracts.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Resolve execution mode from config (`tea_execution_mode`, `tea_capability_probe`)
- ✅ Apply fallback rules deterministically when the requested mode is unsupported
- ✅ Wait for required worker steps to complete
- ❌ Do NOT skip capability checks when probing is enabled

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Wait for subagent outputs
- 📖 Load the next step only when instructed

---

## MANDATORY SEQUENCE

### 1. Prepare Execution Context

**Generate unique timestamp:**

```javascript
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
```

**Prepare context:**

```javascript
const parseBooleanFlag = (value, defaultValue = true) => {
  if (typeof value === 'string') {
    const normalized = value.trim().toLowerCase();
    if (['false', '0', 'off', 'no'].includes(normalized)) return false;
    if (['true', '1', 'on', 'yes'].includes(normalized)) return true;
  }
  if (value === undefined || value === null) return defaultValue;
  return Boolean(value);
};

const subagentContext = {
  system_context: /* from Step 1 */,
  nfr_thresholds: /* from Step 2 */,
  evidence_gathered: /* from Step 3 */,
  config: {
    execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
    capability_probe: parseBooleanFlag(config.tea_capability_probe, true), // supports booleans and "false"/"true" strings
  },
  timestamp: timestamp,
};
```

---

### 2. Resolve Execution Mode with Capability Probe

```javascript
const normalizeUserExecutionMode = (mode) => {
  if (typeof mode !== 'string') return null;
  const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');

  if (normalized === 'auto') return 'auto';
  if (normalized === 'sequential') return 'sequential';
  if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
    return 'subagent';
  }
  if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
    return 'agent-team';
  }

  return null;
};

const normalizeConfigExecutionMode = (mode) => {
  if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
    return mode;
  }
  return null;
};

// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);

const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(subagentContext.config.execution_mode) || 'auto';
const probeEnabled = subagentContext.config.capability_probe;

const supports = {
  subagent: false,
  agentTeam: false,
};

if (probeEnabled) {
  supports.subagent = runtime.canLaunchSubagents?.() === true;
  supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}

let resolvedMode = requestedMode;

if (requestedMode === 'auto') {
  if (supports.agentTeam) resolvedMode = 'agent-team';
  else if (supports.subagent) resolvedMode = 'subagent';
  else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
  resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
  resolvedMode = 'sequential';
}

subagentContext.execution = {
  requestedMode,
  resolvedMode,
  probeEnabled,
  supports,
};
```

Resolution precedence:

1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)

If probing is disabled, honor the requested mode strictly. If that mode cannot be executed at runtime, fail with an explicit error instead of falling back silently.
---
|
||||
|
||||
### 3. Dispatch 4 NFR Workers
|
||||
|
||||
**Subagent A: Security Assessment**
|
||||
|
||||
- File: `./step-04a-subagent-security.md`
|
||||
- Output: `/tmp/tea-nfr-security-${timestamp}.json`
|
||||
- Execution:
|
||||
- `agent-team` or `subagent`: launch non-blocking
|
||||
- `sequential`: run blocking and wait
|
||||
- Status: Running... ⟳
|
||||
|
||||
**Subagent B: Performance Assessment**
|
||||
|
||||
- File: `./step-04b-subagent-performance.md`
|
||||
- Output: `/tmp/tea-nfr-performance-${timestamp}.json`
|
||||
- Status: Running... ⟳
|
||||
|
||||
**Subagent C: Reliability Assessment**
|
||||
|
||||
- File: `./step-04c-subagent-reliability.md`
|
||||
- Output: `/tmp/tea-nfr-reliability-${timestamp}.json`
|
||||
- Status: Running... ⟳
|
||||
|
||||
**Subagent D: Scalability Assessment**
|
||||
|
||||
- File: `./step-04d-subagent-scalability.md`
|
||||
- Output: `/tmp/tea-nfr-scalability-${timestamp}.json`
|
||||
- Status: Running... ⟳
|
||||
|
||||
In `agent-team` and `subagent` modes, runtime decides worker scheduling and concurrency.
|
||||
|
||||
---
|
||||
|
||||
### 4. Wait for Expected Worker Completion
|
||||
|
||||
**If `resolvedMode` is `agent-team` or `subagent`:**
|
||||
|
||||
```
|
||||
⏳ Waiting for 4 NFR subagents to complete...
|
||||
├── Subagent A (Security): Running... ⟳
|
||||
├── Subagent B (Performance): Running... ⟳
|
||||
├── Subagent C (Reliability): Running... ⟳
|
||||
└── Subagent D (Scalability): Running... ⟳
|
||||
|
||||
[... time passes ...]
|
||||
|
||||
✅ All 4 NFR subagents completed!
|
||||
```
|
||||
|
||||
**If `resolvedMode` is `sequential`:**
|
||||
|
||||
```
|
||||
✅ Sequential mode: each worker already completed during dispatch.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 5. Verify All Outputs Exist
|
||||
|
||||
```javascript
|
||||
const outputs = ['security', 'performance', 'reliability', 'scalability'].map((domain) => `/tmp/tea-nfr-${domain}-${timestamp}.json`);
|
||||
|
||||
outputs.forEach((output) => {
|
||||
if (!fs.existsSync(output)) {
|
||||
throw new Error(`Subagent output missing: ${output}`);
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 6. Execution Report
|
||||
|
||||
```
|
||||
🚀 Performance Report:
|
||||
- Execution Mode: {resolvedMode}
|
||||
- Total Elapsed: ~mode-dependent
|
||||
- Parallel Gain: ~67% faster when mode is subagent/agent-team
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 7. Proceed to Aggregation
|
||||
|
||||
Load next step: `{nextStepFile}`
|
||||
|
||||
The aggregation step will:
|
||||
|
||||
- Read all 4 NFR domain outputs
|
||||
- Calculate overall risk level
|
||||
- Aggregate compliance status
|
||||
- Identify cross-domain risks
|
||||
- Generate executive summary
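The overall risk roll-up can be sketched as taking the worst level across domains; this is illustrative only, since the real aggregation logic lives in step-04e:

```javascript
// Overall risk = worst risk level across the four domain reports.
const RISK_ORDER = ['LOW', 'MEDIUM', 'HIGH', 'CRITICAL'];

function aggregateRisk(domainReports) {
  return domainReports.reduce(
    (worst, report) =>
      RISK_ORDER.indexOf(report.risk_level) > RISK_ORDER.indexOf(worst)
        ? report.risk_level
        : worst,
    'LOW'
  );
}

const reports = [
  { domain: 'security', risk_level: 'MEDIUM' },
  { domain: 'performance', risk_level: 'LOW' },
  { domain: 'reliability', risk_level: 'HIGH' },
  { domain: 'scalability', risk_level: 'LOW' },
];
console.log(aggregateRisk(reports)); // → "HIGH"
```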

---

## EXIT CONDITION

Proceed when all 4 required worker steps have completed and their outputs exist.

---

## 🚨 SYSTEM SUCCESS METRICS

### ✅ SUCCESS:

- All required worker steps completed
- Fallback behavior respected configuration and capability probe rules

### ❌ FAILURE:

- One or more subagents failed
- Unsupported requested mode with probing disabled
@@ -0,0 +1,138 @@
---
name: 'step-04a-subagent-security'
description: 'Subagent: Security NFR assessment'
subagent: true
outputFile: '/tmp/tea-nfr-security-{{timestamp}}.json'
---

# Subagent 4A: Security NFR Assessment

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with other NFR domain assessments.

**Your task:** Assess SECURITY NFR domain only.

---

## MANDATORY EXECUTION RULES

- ✅ Assess SECURITY only (not performance, reliability, scalability)
- ✅ Output structured JSON to temp file
- ❌ Do NOT assess other NFR domains

---

## SUBAGENT TASK

### 1. Security Assessment Categories

**Assess the following security dimensions:**

**A) Authentication & Authorization:**

- OAuth2/JWT implementation
- Session management
- Multi-factor authentication
- Role-based access control (RBAC)

**B) Data Protection:**

- Encryption at rest
- Encryption in transit (HTTPS/TLS)
- Sensitive data handling (PII, passwords)
- Database encryption

**C) Input Validation:**

- SQL injection prevention
- XSS prevention
- CSRF protection
- Input sanitization

**D) API Security:**

- Rate limiting
- API authentication
- CORS configuration
- Security headers

**E) Secrets Management:**

- Environment variables for secrets
- No hardcoded credentials
- Secret rotation policies
- Key management systems

### 2. Risk Assessment

For each category, determine status:

- **PASS**: Properly implemented
- **CONCERN**: Partially implemented or weak
- **FAIL**: Not implemented or critical vulnerability
- **N/A**: Not applicable to this system

### 3. Compliance Check

**Common compliance standards:**

- SOC2
- GDPR
- HIPAA
- PCI-DSS
- ISO 27001

---

## OUTPUT FORMAT

```json
{
  "domain": "security",
  "risk_level": "MEDIUM",
  "findings": [
    {
      "category": "Authentication",
      "status": "PASS",
      "description": "OAuth2 with JWT tokens implemented",
      "evidence": ["src/auth/oauth.ts", "JWT refresh token rotation"],
      "recommendations": []
    },
    {
      "category": "Data Encryption",
      "status": "CONCERN",
      "description": "Database encryption at rest not enabled",
      "evidence": ["Database config shows no encryption"],
      "recommendations": ["Enable database encryption at rest", "Use AWS RDS encryption or equivalent", "Implement key rotation policy"]
    },
    {
      "category": "Input Validation",
      "status": "FAIL",
      "description": "SQL injection vulnerability in search endpoint",
      "evidence": ["src/api/search.ts:42 - direct SQL concatenation"],
      "recommendations": ["URGENT: Use parameterized queries", "Add input sanitization library", "Implement WAF rules"]
    }
  ],
  "compliance": {
    "SOC2": "PARTIAL",
    "GDPR": "PASS",
    "HIPAA": "N/A",
    "PCI-DSS": "FAIL"
  },
  "priority_actions": [
    "Fix SQL injection vulnerability (URGENT)",
    "Enable database encryption within 30 days",
    "Implement rate limiting for all APIs"
  ],
  "summary": "Security posture is MEDIUM risk with 1 critical vulnerability requiring immediate attention"
}
```

---

## EXIT CONDITION

Subagent completes when JSON output written to temp file.

**Subagent terminates here.**
@@ -0,0 +1,84 @@
---
name: 'step-04b-subagent-performance'
description: 'Subagent: Performance NFR assessment'
subagent: true
outputFile: '/tmp/tea-nfr-performance-{{timestamp}}.json'
---

# Subagent 4B: Performance NFR Assessment

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with other NFR domain assessments.

**Your task:** Assess the PERFORMANCE NFR domain only.

---

## SUBAGENT TASK

### 1. Performance Assessment Categories

**A) Response Times:**

- API response times (<200ms target)
- Page load times (<2s target)
- Time to interactive (<3s target)
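
The percentile targets above can be checked mechanically against load-test samples. A minimal sketch — the sample latencies and the 200ms threshold are illustrative assumptions, not measured project values:

```javascript
// Nearest-rank percentile over a set of latency samples (milliseconds).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical load-test samples, for illustration only.
const latenciesMs = [90, 110, 120, 130, 135, 140, 150, 160, 180, 190];
const p95 = percentile(latenciesMs, 95);
console.log(`P95 = ${p95}ms → ${p95 < 200 ? 'PASS' : 'CONCERN'}`); // P95 = 190ms → PASS
```

The same helper works for P50/P99 checks against the page-load and time-to-interactive targets.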

**B) Throughput:**

- Requests per second capacity
- Concurrent user support
- Database query performance

**C) Resource Usage:**

- Memory consumption
- CPU utilization
- Database connection pooling

**D) Optimization:**

- Caching strategies
- CDN usage
- Code splitting/lazy loading
- Database indexing

---

## OUTPUT FORMAT

```json
{
  "domain": "performance",
  "risk_level": "LOW",
  "findings": [
    {
      "category": "Response Times",
      "status": "PASS",
      "description": "API endpoints respond in <150ms (P95)",
      "evidence": ["Load testing results show 140ms P95"],
      "recommendations": []
    },
    {
      "category": "Caching",
      "status": "CONCERN",
      "description": "No CDN for static assets",
      "evidence": ["Static files served from origin"],
      "recommendations": ["Implement CDN (CloudFront/Cloudflare)", "Cache static assets for 1 year"]
    }
  ],
  "compliance": {
    "SLA_99.9": "PASS",
    "SLA_99.99": "CONCERN"
  },
  "priority_actions": ["Implement CDN for static assets", "Add database query caching for frequent reads"],
  "summary": "Performance is acceptable with minor optimization opportunities"
}
```

---

## EXIT CONDITION

Subagent completes when the JSON output is written to the temp file.
@@ -0,0 +1,85 @@
---
name: 'step-04c-subagent-reliability'
description: 'Subagent: Reliability NFR assessment'
subagent: true
outputFile: '/tmp/tea-nfr-reliability-{{timestamp}}.json'
---

# Subagent 4C: Reliability NFR Assessment

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with other NFR domain assessments.

**Your task:** Assess the RELIABILITY NFR domain only.

---

## SUBAGENT TASK

### 1. Reliability Assessment Categories

**A) Error Handling:**

- Try-catch blocks for critical operations
- Graceful degradation
- Circuit breakers
- Retry mechanisms
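
As a concrete reference for the retry item above, a minimal retry-with-exponential-backoff sketch — the attempt count and delays are illustrative defaults, not project settings:

```javascript
// Retry an async operation, doubling the delay between attempts.
async function withRetry(operation, maxAttempts = 3, baseDelayMs = 100) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

A production version would typically add jitter and cap the maximum delay; the sketch only shows the shape an assessor should look for.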

**B) Monitoring & Observability:**

- Logging implementation
- Error tracking (Sentry/Datadog)
- Health check endpoints
- Alerting systems

**C) Fault Tolerance:**

- Database failover
- Service redundancy
- Backup strategies
- Disaster recovery plan

**D) Uptime & Availability:**

- SLA targets
- Historical uptime
- Incident response

---

## OUTPUT FORMAT

```json
{
  "domain": "reliability",
  "risk_level": "LOW",
  "findings": [
    {
      "category": "Error Handling",
      "status": "PASS",
      "description": "Comprehensive error handling with circuit breakers",
      "evidence": ["Circuit breaker pattern in src/services/", "Retry logic implemented"],
      "recommendations": []
    },
    {
      "category": "Monitoring",
      "status": "CONCERN",
      "description": "No APM (Application Performance Monitoring) tool",
      "evidence": ["Logging present but no distributed tracing"],
      "recommendations": ["Implement APM (Datadog/New Relic)", "Add distributed tracing"]
    }
  ],
  "compliance": {
    "SLA_99.9": "PASS"
  },
  "priority_actions": ["Implement APM for better observability"],
  "summary": "Reliability is good with minor monitoring gaps"
}
```

---

## EXIT CONDITION

Subagent completes when the JSON output is written to the temp file.
@@ -0,0 +1,88 @@
---
name: 'step-04d-subagent-scalability'
description: 'Subagent: Scalability NFR assessment'
subagent: true
outputFile: '/tmp/tea-nfr-scalability-{{timestamp}}.json'
---

# Subagent 4D: Scalability NFR Assessment

## SUBAGENT CONTEXT

This is an **isolated subagent** running in parallel with other NFR domain assessments.

**Your task:** Assess the SCALABILITY NFR domain only.

---

## SUBAGENT TASK

### 1. Scalability Assessment Categories

**A) Horizontal Scaling:**

- Stateless architecture
- Load balancer configuration
- Container orchestration (K8s)
- Auto-scaling policies

**B) Vertical Scaling:**

- Resource allocation
- Database size limits
- Memory management
- CPU optimization

**C) Data Scaling:**

- Database partitioning/sharding
- Read replicas
- Caching layers
- Data archival strategy

**D) Traffic Handling:**

- CDN for static assets
- Rate limiting
- Queue systems for async work
- WebSocket scaling
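
The rate-limiting item above can be sketched as a simple in-memory token bucket — the capacity and refill rate below are illustrative assumptions, and a multi-instance deployment would need a shared store (e.g. Redis) instead:

```javascript
// Token bucket: allows bursts up to `capacity`, refills at `refillPerSec`.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSec = refillPerSec;
    this.lastRefill = Date.now();
  }

  allow() {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(2, 1); // burst of 2, then 1 request/sec sustained
console.log(bucket.allow(), bucket.allow(), bucket.allow()); // true true false
```

An assessor looks for this pattern (or a gateway/WAF equivalent) on every public API surface, not just login endpoints.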

---

## OUTPUT FORMAT

```json
{
  "domain": "scalability",
  "risk_level": "MEDIUM",
  "findings": [
    {
      "category": "Horizontal Scaling",
      "status": "PASS",
      "description": "Stateless architecture with container orchestration",
      "evidence": ["Docker + Kubernetes setup", "Auto-scaling configured"],
      "recommendations": []
    },
    {
      "category": "Data Scaling",
      "status": "CONCERN",
      "description": "No database sharding strategy for large data growth",
      "evidence": ["Single database instance", "No partitioning"],
      "recommendations": ["Plan database sharding strategy", "Implement read replicas", "Consider database clustering"]
    }
  ],
  "compliance": {
    "1M_users": "PASS",
    "10M_users": "CONCERN",
    "100M_users": "FAIL"
  },
  "priority_actions": ["Design database sharding strategy for future growth", "Implement read replicas for read-heavy workloads"],
  "summary": "Scalability is good up to 1M users, concerns for 10M+ users"
}
```

---

## EXIT CONDITION

Subagent completes when the JSON output is written to the temp file.
@@ -0,0 +1,264 @@
---
name: 'step-04e-aggregate-nfr'
description: 'Aggregate NFR domain assessments into executive summary'
nextStepFile: './step-05-generate-report.md'
outputFile: '{test_artifacts}/nfr-assessment.md'
---

# Step 4E: Aggregate NFR Assessment Results

## STEP GOAL

Read outputs from the 4 parallel NFR subagents, calculate the overall risk level, aggregate compliance status, and identify cross-domain risks.

---

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Read all 4 subagent outputs
- ✅ Calculate the overall risk level
- ❌ Do NOT re-assess NFRs (use the subagent outputs)

---

## MANDATORY SEQUENCE

### 1. Read All Subagent Outputs

```javascript
const domains = ['security', 'performance', 'reliability', 'scalability'];
const assessments = {};

domains.forEach((domain) => {
  const outputPath = `/tmp/tea-nfr-${domain}-{{timestamp}}.json`;
  assessments[domain] = JSON.parse(fs.readFileSync(outputPath, 'utf8'));
});
```

---

### 2. Calculate Overall Risk Level

**Risk hierarchy:** HIGH > MEDIUM > LOW > NONE

```javascript
const riskLevels = { HIGH: 3, MEDIUM: 2, LOW: 1, NONE: 0 };
const domainRisks = domains.map((d) => assessments[d].risk_level);
const maxRiskValue = Math.max(...domainRisks.map((r) => riskLevels[r]));
const overallRisk = Object.keys(riskLevels).find((k) => riskLevels[k] === maxRiskValue);
```

**Risk assessment:**

- If ANY domain is HIGH → overall is HIGH
- If ANY domain is MEDIUM (and none HIGH) → overall is MEDIUM
- If ALL domains are LOW/NONE → overall is LOW

---

### 3. Aggregate Compliance Status

```javascript
const allCompliance = {};

domains.forEach((domain) => {
  const compliance = assessments[domain].compliance;
  Object.entries(compliance).forEach(([standard, status]) => {
    if (!allCompliance[standard]) {
      allCompliance[standard] = [];
    }
    allCompliance[standard].push({ domain, status });
  });
});

// Determine overall compliance per standard
const complianceSummary = {};
Object.entries(allCompliance).forEach(([standard, statuses]) => {
  const hasFail = statuses.some((s) => s.status === 'FAIL');
  const hasPartial = statuses.some((s) => s.status === 'PARTIAL' || s.status === 'CONCERN');

  complianceSummary[standard] = hasFail ? 'FAIL' : hasPartial ? 'PARTIAL' : 'PASS';
});
```

---

### 4. Identify Cross-Domain Risks

**Look for risks that span multiple domains:**

```javascript
const crossDomainRisks = [];

// Example: Performance + Scalability issue
const perfConcerns = assessments.performance.findings.filter((f) => f.status !== 'PASS');
const scaleConcerns = assessments.scalability.findings.filter((f) => f.status !== 'PASS');
if (perfConcerns.length > 0 && scaleConcerns.length > 0) {
  crossDomainRisks.push({
    domains: ['performance', 'scalability'],
    description: 'Performance issues may worsen under scale',
    impact: 'HIGH',
  });
}

// Example: Security + Reliability issue
const securityFails = assessments.security.findings.filter((f) => f.status === 'FAIL');
const reliabilityConcerns = assessments.reliability.findings.filter((f) => f.status !== 'PASS');
if (securityFails.length > 0 && reliabilityConcerns.length > 0) {
  crossDomainRisks.push({
    domains: ['security', 'reliability'],
    description: 'Security vulnerabilities may cause reliability incidents',
    impact: 'CRITICAL',
  });
}
```

---

### 5. Aggregate Priority Actions

```javascript
const allPriorityActions = domains.flatMap((domain) =>
  assessments[domain].priority_actions.map((action) => ({
    domain,
    action,
    urgency: assessments[domain].risk_level === 'HIGH' ? 'URGENT' : 'NORMAL',
  })),
);

// Sort by urgency (URGENT first)
const prioritizedActions = allPriorityActions.sort((a, b) => (a.urgency === 'URGENT' ? -1 : 1));
```

---

### 6. Generate Executive Summary

```javascript
const resolvedMode = subagentContext?.execution?.resolvedMode ?? 'unknown';
const subagentExecutionLabel =
  resolvedMode === 'sequential'
    ? 'SEQUENTIAL (4 NFR domains)'
    : resolvedMode === 'agent-team'
      ? 'AGENT-TEAM (4 NFR domains)'
      : resolvedMode === 'subagent'
        ? 'SUBAGENT (4 NFR domains)'
        : 'MODE-DEPENDENT (4 NFR domains)';

const performanceGainLabel =
  resolvedMode === 'sequential'
    ? 'baseline (no parallel speedup)'
    : resolvedMode === 'agent-team' || resolvedMode === 'subagent'
      ? '~67% faster than sequential'
      : 'mode-dependent';

const executiveSummary = {
  overall_risk: overallRisk,
  assessment_date: new Date().toISOString(),

  domain_assessments: assessments,

  compliance_summary: complianceSummary,

  cross_domain_risks: crossDomainRisks,

  priority_actions: prioritizedActions,

  risk_breakdown: {
    security: assessments.security.risk_level,
    performance: assessments.performance.risk_level,
    reliability: assessments.reliability.risk_level,
    scalability: assessments.scalability.risk_level,
  },

  subagent_execution: subagentExecutionLabel,
  performance_gain: performanceGainLabel,
};

// Save for Step 5 (report generation)
fs.writeFileSync('/tmp/tea-nfr-summary-{{timestamp}}.json', JSON.stringify(executiveSummary, null, 2), 'utf8');
```

---

### 7. Display Summary to User

```
✅ NFR Assessment Complete ({subagentExecutionLabel})

🎯 Overall Risk Level: {overallRisk}

📊 Domain Risk Breakdown:
- Security: {security_risk}
- Performance: {performance_risk}
- Reliability: {reliability_risk}
- Scalability: {scalability_risk}

✅ Compliance Summary:
{list standards with PASS/PARTIAL/FAIL}

⚠️ Cross-Domain Risks: {cross_domain_risk_count}

🎯 Priority Actions: {priority_action_count}

🚀 Performance: {performanceGainLabel}

✅ Ready for report generation (Step 5)
```

---

### 8. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it using the workflow template (if available) with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-04e-aggregate-nfr']
  lastStep: 'step-04e-aggregate-nfr'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-04e-aggregate-nfr'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-04e-aggregate-nfr'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.
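
The frontmatter update described above can be sketched as a small stdlib-only helper. The regex-based parsing is an illustrative assumption — a real implementation would use a YAML library — and it assumes `stepsCompleted` is a single-line inline array:

```javascript
// Mark a step complete inside YAML frontmatter; idempotent for repeated steps.
function markStepCompleted(doc, stepName, date) {
  return doc
    .replace(/stepsCompleted: \[([^\]]*)\]/, (match, items) =>
      items.includes(`'${stepName}'`)
        ? match // already recorded, leave as-is
        : `stepsCompleted: [${items ? items + ', ' : ''}'${stepName}']`,
    )
    .replace(/lastStep: '[^']*'/, `lastStep: '${stepName}'`)
    .replace(/lastSaved: '[^']*'/, `lastSaved: '${date}'`);
}
```

Running it twice with the same step name leaves the document unchanged, which matches the "only if not already present" rule above.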

---

## EXIT CONDITION

Proceed to Step 5 when:

- ✅ All subagent outputs read
- ✅ Overall risk calculated
- ✅ Compliance aggregated
- ✅ Summary saved
- ✅ Progress saved to output document

Load next step: `{nextStepFile}`

---

## 🚨 SYSTEM SUCCESS METRICS

### ✅ SUCCESS:

- All 4 NFR domains aggregated correctly
- Overall risk level determined
- Executive summary complete

### ❌ FAILURE:

- Failed to read subagent outputs
- Risk calculation incorrect
@@ -0,0 +1,108 @@
---
name: 'step-05-generate-report'
description: 'Create NFR report and validation summary'
outputFile: '{test_artifacts}/nfr-assessment.md'
---

# Step 5: Generate Report & Validate

## STEP GOAL

Produce the NFR assessment report and validate completeness.

## MANDATORY EXECUTION RULES

- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`

---

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed

## CONTEXT BOUNDARIES:

- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.

## 1. Report Generation

Use `nfr-report-template.md` to produce `{outputFile}` containing:

- Category results (PASS/CONCERNS/FAIL)
- Evidence summary
- Remediation actions
- Gate-ready YAML snippet (if applicable)

---

## 2. Polish Output

Before finalizing, review the complete output document for quality:

1. **Remove duplication**: Progressive-append workflow may have created repeated sections — consolidate
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)

---

## 3. Validation

Validate against `checklist.md` and fix gaps.

- [ ] CLI sessions cleaned up (no orphaned browsers)

---

## 4. Save Progress

**Save this step's accumulated work to `{outputFile}`.**

- **If `{outputFile}` does not exist** (first save), create it using the workflow template (if available) with YAML frontmatter:

  ```yaml
  ---
  stepsCompleted: ['step-05-generate-report']
  lastStep: 'step-05-generate-report'
  lastSaved: '{date}'
  ---
  ```

  Then write this step's output below the frontmatter.

- **If `{outputFile}` already exists**, update:
  - Add `'step-05-generate-report'` to the `stepsCompleted` array (only if not already present)
  - Set `lastStep: 'step-05-generate-report'`
  - Set `lastSaved: '{date}'`
  - Append this step's output to the appropriate section of the document.

---

## 5. Completion Summary

Report:

- Overall NFR status
- Critical blockers or waivers needed
- Next recommended workflow (`trace` or release gate)

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Step completed in full with required outputs

### ❌ SYSTEM FAILURE:

- Skipped sequence steps or missing outputs

**Master Rule:** Skipping steps is FORBIDDEN.
65
.agents/skills/bmad-testarch-nfr/steps-e/step-01-assess.md
Normal file
@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---

# Step 1: Assess Edit Target

## STEP GOAL:

Identify which output should be edited and load it.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until the target is confirmed

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Identify Target

Ask the user to provide the output file path or select from known outputs.

### 2. Load Target

Read the provided output file in full.

### 3. Confirm

Confirm the target and proceed to edit.

Load next step: `{nextStepFile}`

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Target identified and loaded

### ❌ SYSTEM FAILURE:

- Proceeding without a confirmed target
@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---

# Step 2: Apply Edits

## STEP GOAL:

Apply the requested edits to the selected output and confirm changes.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Only apply edits explicitly requested by the user

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly

## CONTEXT BOUNDARIES:

- Available context: selected output and user changes
- Focus: apply edits only

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Confirm Requested Changes

Restate what will be changed and confirm.

### 2. Apply Changes

Update the output file accordingly.

### 3. Report

Summarize the edits applied.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Changes applied and confirmed

### ❌ SYSTEM FAILURE:

- Unconfirmed edits or missing update
67
.agents/skills/bmad-testarch-nfr/steps-v/step-01-validate.md
Normal file
@@ -0,0 +1,67 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/nfr-assess-validation-report.md'
validationChecklist: '../checklist.md'
---

# Step 1: Validate Outputs

## STEP GOAL:

Validate outputs using the workflow checklist and record findings.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`

### Role Reinforcement:

- ✅ You are the Master Test Architect

### Step-Specific Rules:

- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks

## EXECUTION PROTOCOLS:

- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`

## CONTEXT BOUNDARIES:

- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly.

### 1. Load Checklist

Read `{validationChecklist}` and list all criteria.

### 2. Validate Outputs

Evaluate outputs against each checklist item.

### 3. Write Report

Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.

## 🚨 SYSTEM SUCCESS/FAILURE METRICS:

### ✅ SUCCESS:

- Validation report written
- All checklist items evaluated

### ❌ SYSTEM FAILURE:

- Skipped checklist items
- No report produced
@@ -0,0 +1,73 @@
---
validationDate: 2026-01-27
workflowName: testarch-nfr
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-nfr
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---

# Validation Report: testarch-nfr

**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8

**Step File Sizes:**

- steps-c/step-01-load-context.md: 78 lines [GOOD]
- steps-c/step-02-define-thresholds.md: 75 lines [GOOD]
- steps-c/step-03-gather-evidence.md: 58 lines [GOOD]
- steps-c/step-04-evaluate-and-score.md: 61 lines [GOOD]
- steps-c/step-05-generate-report.md: 64 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

- No {project-root} hardcoded paths detected in body
- No dead relative links detected

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes linear sequence (no branching/menu). workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: nfr-report-template.md
- Steps with outputFile in frontmatter:
  - steps-c/step-05-generate-report.md
  - steps-v/step-01-validate.md

## Validation Design Check

- checklist.md present: YES
- Validation steps folder (steps-v) present: YES

## Instruction Style Check

- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics

## Summary

- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
@@ -0,0 +1,116 @@
---
validationDate: 2026-01-27
workflowName: testarch-nfr
workflowPath: {project-root}/src/workflows/testarch/bmad-testarch-nfr
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---

# Validation Report: testarch-nfr

**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards

## File Structure & Size

- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8

**Step File Sizes:**

- steps-c/step-01-load-context.md: 77 lines [GOOD]
- steps-c/step-02-define-thresholds.md: 74 lines [GOOD]
- steps-c/step-03-gather-evidence.md: 57 lines [GOOD]
- steps-c/step-04-evaluate-and-score.md: 60 lines [GOOD]
- steps-c/step-05-generate-report.md: 63 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES

## Frontmatter Validation

- No frontmatter violations found

## Critical Path Violations

### Config Variables (Exceptions)

Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name

- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected

**Status:** ✅ PASS - No critical violations

## Menu Handling Validation

- No menu structures detected (linear step flow) [N/A]

## Step Type Validation

- steps-c/step-01-load-context.md: Init [PASS]
- steps-c/step-02-define-thresholds.md: Middle [PASS]
- steps-c/step-03-gather-evidence.md: Middle [PASS]
- steps-c/step-04-evaluate-and-score.md: Middle [PASS]
- steps-c/step-05-generate-report.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). workflow-plan.md present for reference. [INFO]

## Output Format Validation

- Templates present: nfr-report-template.md
- Steps with outputFile in frontmatter:
  - steps-c/step-05-generate-report.md
  - steps-v/step-01-validate.md
- checklist.md present: YES

## Validation Design Check

- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed

## Instruction Style Check

- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance

## Collaborative Experience Check

- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals

## Subagent Optimization Opportunities

- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only

## Cohesive Review

- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with the Test Architect persona
- Recommendation: READY (minor refinements optional)

## Plan Quality Validation

- Plan file present: workflow-plan.md
- Planned steps found: 8 (all implemented)
- Plan implementation status: Fully Implemented

## Summary

- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)
19
.agents/skills/bmad-testarch-nfr/workflow-plan.md
Normal file
@@ -0,0 +1,19 @@
# Workflow Plan: testarch-nfr

## Create Mode (steps-c)

- step-01-load-context.md
- step-02-define-thresholds.md
- step-03-gather-evidence.md
- step-04-evaluate-and-score.md
- step-05-generate-report.md

## Validate Mode (steps-v)

- step-01-validate.md

## Edit Mode (steps-e)

- step-01-assess.md
- step-02-apply-edit.md

## Outputs

- {test_artifacts}/nfr-assessment.md
41
.agents/skills/bmad-testarch-nfr/workflow.md
Normal file
@@ -0,0 +1,41 @@
---
name: bmad-testarch-nfr
description: Assess NFRs like performance security and reliability. Use when user says 'lets assess NFRs' or 'I want to evaluate non-functional requirements'
web_bundle: true
---

# Non-Functional Requirements Assessment

**Goal:** Assess non-functional requirements (performance, security, reliability, maintainability) before release with evidence-based validation.

**Role:** You are the Master Test Architect.

---

## WORKFLOW ARCHITECTURE

This workflow uses a **tri-modal step-file architecture**:

- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against the checklist
- **Edit mode (steps-e/)**: revise existing outputs

---

## INITIALIZATION SEQUENCE

### 1. Mode Determination

"Welcome to the workflow. What would you like to do?"

- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs

### 2. Route to First Step

- **If C:** Load `steps-c/step-01-load-context.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`
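The mode routing above amounts to a simple dispatch table. A minimal sketch (step-file names taken from the workflow; the function itself is hypothetical, not part of BMAD):

```python
# Hypothetical sketch of the mode-to-first-step routing described above.
ROUTES = {
    "C": "steps-c/step-01-load-context.md",   # Create
    "R": "steps-c/step-01b-resume.md",        # Resume
    "V": "steps-v/step-01-validate.md",       # Validate
    "E": "steps-e/step-01-assess.md",         # Edit
}

def first_step(mode: str) -> str:
    """Return the first step file to load for the chosen mode."""
    return ROUTES[mode.strip().upper()]

print(first_step("c"))  # steps-c/step-01-load-context.md
```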
48
.agents/skills/bmad-testarch-nfr/workflow.yaml
Normal file
@@ -0,0 +1,48 @@
# Test Architect workflow: bmad-testarch-nfr
name: bmad-testarch-nfr
# prettier-ignore
description: 'Assess NFRs like performance security and reliability. Use when the user says "lets assess NFRs" or "I want to evaluate non-functional requirements"'

# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "."
instructions: "./instructions.md"
validation: "./checklist.md"
template: "./nfr-report-template.md"

# Variables and inputs
variables:
  # NFR category assessment (defaults to all categories)
  custom_nfr_categories: "" # Optional additional categories beyond standard (security, performance, reliability, maintainability)

  # Output configuration
  default_output_file: "{test_artifacts}/nfr-assessment.md"

# Required tools
required_tools:
  - read_file # Read story, test results, metrics, logs, BMad artifacts
  - write_file # Create NFR assessment, gate YAML, evidence checklist
  - list_files # Discover test results, metrics, logs
  - search_repo # Find NFR-related tests and evidence
  - glob # Find result files matching patterns

tags:
  - qa
  - nfr
  - test-architect
  - performance
  - security
  - reliability

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true
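For illustration, the `{config_source}:key` references in the workflow resolve against the TEA config file. A hypothetical `_bmad/tea/config.yaml` with the keys this workflow reads (all values here are invented examples, not defaults):

```yaml
# _bmad/tea/config.yaml — hypothetical example values only
output_folder: "docs/qa"
test_artifacts: "docs/qa/artifacts"
user_name: "Alex"
communication_language: "English"
document_output_language: "English"
```

With these values, `default_output_file` would resolve to `docs/qa/artifacts/nfr-assessment.md`.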