docs: update all documentation and add AI tooling configs

- Rewrite README.md with current architecture, features and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 13:29:03 -04:00
commit 647cbec54f (parent c5b377e788)
3246 changed files with 479789 additions and 983 deletions

---
name: 'step-01-scope-and-plan'
description: 'Read the approved spec, extract every deliverable, and create a sequenced implementation plan'
# File References
nextStepFile: './step-02-setup-environment.md'
---
# Step 1: Scope and Plan
## STEP GOAL:
Read the approved spec, extract every deliverable, and create a sequenced implementation plan.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise; the user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on reading the spec, inventorying work items, mapping dependencies, and creating the implementation plan
- 🚫 FORBIDDEN to begin implementing or coding anything — that is a later step
- 💬 Approach: Walk through the spec with user, extract deliverables collaboratively, and sequence them
- 📋 Every acceptance criterion must be extracted into a standalone checklist
## EXECUTION PROTOCOLS:
- 🎯 Produce a complete, sequenced implementation plan with all work items inventoried
- 💾 Record inventory, implementation order, and acceptance criteria in the dialog file
- 📖 Reference the approved spec document thoroughly
- 🚫 Do not write any implementation code
## CONTEXT BOUNDARIES:
- Available context: Approved specification document
- Focus: Planning and scoping — deliverables, dependencies, order, acceptance criteria
- Limits: No implementation, no environment setup
- Dependencies: An approved spec must exist
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Read the Approved Specification
Open the approved spec (page spec, feature spec, or component spec) and read it end to end. Do not skim. Note:
- Every page, component, and feature described
- All properties, states, and variants for each component
- Responsive behavior and breakpoints
- Data requirements (APIs, models, mock data)
- Acceptance criteria (these are your definition of done)
- Design tokens and design system references
### 2. Inventory All Work Items
Create a flat list of everything that needs to be built. Be specific — not "build the form" but each field, validation rule, and submit behavior. Include:
- Pages / routes
- Components (new and modified)
- Data models or API integrations
- State management additions
- Styling / responsive work
- Tests to write
### 3. Identify Dependencies and Determine Order
Map which items depend on which. Common patterns:
- Shared components before pages that use them
- Data layer before UI that consumes it
- Layout / structure before content sections
- Base styles / tokens before component styles
Order the work so you never build something before its dependencies exist.
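The ordering rule above is, in effect, a topological sort of the work-item dependency graph. A minimal sketch, with invented item names (use the items from your own plan):

```typescript
// Emit each work item only after everything it depends on (Kahn-style).
// Item names below are illustrative, not from any real plan.
function sequence(deps: Record<string, string[]>): string[] {
  const remaining = new Map(Object.entries(deps));
  const order: string[] = [];
  while (remaining.size > 0) {
    // Ready = items whose prerequisites are all already in the order.
    const ready = [...remaining.keys()].filter((item) =>
      remaining.get(item)!.every((d) => order.includes(d)),
    );
    if (ready.length === 0) throw new Error("circular dependency in plan");
    for (const item of ready) {
      order.push(item);
      remaining.delete(item);
    }
  }
  return order;
}

const plan = sequence({
  "Booking page": ["BookingForm component", "data layer"],
  "BookingForm component": ["Button component"],
  "Button component": [],
  "data layer": [],
});
console.log(plan); // dependencies always appear before their dependents
```

A cycle in the graph means the plan itself needs rework, which is exactly the kind of problem this step exists to surface before implementation starts.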
### 4. Estimate Effort
For each work item, note relative size (small / medium / large). This is not for time tracking — it is for spotting items that are too large and should be broken down further. Any "large" item should be split into sub-tasks.
### 5. Create the Branch
Create a feature branch from the appropriate base branch. Use a descriptive name that ties to the spec (e.g., `feature/booking-page`, `feature/user-profile-card`).
### 6. List All Acceptance Criteria
Extract every acceptance criterion from the spec into a standalone checklist. This checklist will be used in Step 04 (Verify) to confirm completion. Format:
```markdown
## Acceptance Criteria
- [ ] Criterion from spec
- [ ] Criterion from spec
- [ ] ...
```
Record the inventory, implementation order, and acceptance criteria in the dialog file.
### 7. Verify Checklist
- [ ] Spec read in full — no sections skipped
- [ ] All pages, components, and features inventoried
- [ ] Dependencies mapped, implementation order defined
- [ ] Large items broken into sub-tasks
- [ ] Feature branch created
- [ ] All acceptance criteria extracted into a checklist
- [ ] Plan documented in dialog file
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 2: Setup Environment"
#### Menu Handling Logic:
- IF C: Update design log, then load and read the entire {nextStepFile}, then execute it
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the implementation plan is complete with inventoried work items, sequenced order, and acceptance criteria will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Spec read in full — no sections skipped
- All pages, components, and features inventoried
- Dependencies mapped, implementation order defined
- Large items broken into sub-tasks
- Feature branch created
- All acceptance criteria extracted into a checklist
- Plan documented in dialog file
### ❌ SYSTEM FAILURE:
- Beginning implementation before planning is complete
- Skipping sections of the spec
- Not extracting acceptance criteria
- Not creating a feature branch
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-02-setup-environment'
description: 'Verify the development environment is ready, all dependencies are installed, and the project runs cleanly before writing any new code'
# File References
nextStepFile: './step-03-implement.md'
---
# Step 2: Setup Environment
## STEP GOAL:
Verify the development environment is ready, all dependencies are installed, and the project runs cleanly before writing any new code.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise; the user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on verifying environment, installing dependencies, starting dev server, and establishing test baseline
- 🚫 FORBIDDEN to begin implementing features — that is the next step
- 💬 Approach: Walk through environment verification systematically with user
- 📋 Document any pre-existing issues so they are not confused with regressions later
## EXECUTION PROTOCOLS:
- 🎯 Confirm environment is clean and ready for implementation
- 💾 Document test baseline and any pre-existing issues in the dialog file
- 📖 Reference project configuration files and existing documentation
- 🚫 Do not write any feature code during this step
## CONTEXT BOUNDARIES:
- Available context: Implementation plan from Step 1; project configuration files
- Focus: Environment verification, dependency installation, baseline establishment
- Limits: No feature implementation
- Dependencies: Step 1 must be complete (plan exists, branch created)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Verify Tech Stack Requirements
Check that the project's required tooling is available:
- Runtime (Node.js version, Python version, etc.)
- Package manager (npm, yarn, pnpm, pip, etc.)
- Build tools (Vite, Webpack, Turbopack, etc.)
- Any CLI tools the project depends on
If version mismatches exist, resolve them now — not mid-implementation.
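For a Node project, the check can even be scripted. A sketch, where the required major version is an assumption (read the real requirement from `package.json` `engines` or `.nvmrc`):

```typescript
// Compare the running Node.js major version against an assumed requirement.
const requiredMajor = 18; // assumption: take this from package.json "engines"
const actualMajor = Number(process.version.slice(1).split(".")[0]);

if (actualMajor >= requiredMajor) {
  console.log(`node ${process.version}: OK (need >= ${requiredMajor})`);
} else {
  console.log(`node ${process.version}: upgrade before continuing`);
}
```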
### 2. Install Dependencies
Run the project's install command. Watch the output for:
- Deprecation warnings (note but do not fix unless blocking)
- Peer dependency conflicts (resolve if they cause build failures)
- Missing system-level dependencies
```
npm install    # or: yarn install, pnpm install, pip install -r requirements.txt, etc.
```
### 3. Start the Development Server
Run the dev server and confirm:
- The project builds without errors
- The existing pages / routes load in the browser
- Hot reload or watch mode works
- No console errors on existing pages
### 4. Verify Design System Access
If the project uses a design system or design tokens, confirm you can access them:
- Token files (colors, spacing, typography) are present and importable
- Component library is installed and renders correctly
- Icon sets or asset libraries are available
- Any Figma-to-code output is up to date
### 5. Create Project Structure (If Needed)
If the spec requires new directories or organizational structure, create them now:
- New page directories
- New component directories
- Test file locations
- Any configuration files for new routes
Follow the existing project conventions for naming and placement.
### 6. Run Existing Tests to Establish Baseline
Run the full test suite before touching anything:
```
npm test # or the project's equivalent
```
Record the result:
- **All pass:** Good baseline. Any future failure is from your changes.
- **Some fail:** Document which tests fail BEFORE you start. These are pre-existing failures and not your responsibility, but you need to know they exist so you do not accidentally claim them as regressions.
- **No tests exist:** Note this. Consider whether the spec requires tests (Step 03 will address test writing).
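A short baseline note in the dialog file is enough. A possible shape (bracketed items are placeholders):

```markdown
## Test Baseline (before implementation)
- Command: `npm test`
- Result: [N] passed, [M] failed
- Pre-existing failures: [list failing test names, or "none"]
- Notes: [flaky tests, skipped suites, missing coverage]
```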
### 7. Verify Checklist
- [ ] Runtime and tooling versions verified
- [ ] Dependencies installed without blocking errors
- [ ] Dev server starts and existing pages load
- [ ] Design tokens / design system accessible
- [ ] New directories created (if needed)
- [ ] Existing test suite run, baseline documented
- [ ] Any pre-existing issues noted in dialog file
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 3: Implement"
#### Menu Handling Logic:
- IF C: Update design log, then load and read the entire {nextStepFile}, then execute it
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN the environment is verified clean, dependencies installed, and test baseline documented will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Runtime and tooling versions verified
- Dependencies installed without blocking errors
- Dev server starts and existing pages load
- Design tokens / design system accessible
- Existing test suite run, baseline documented
- Pre-existing issues noted
### ❌ SYSTEM FAILURE:
- Starting implementation with unresolved environment issues
- Not establishing test baseline
- Not documenting pre-existing failures
- Skipping dependency installation
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-03-implement'
description: 'Build every feature in the implementation plan, one at a time, following existing codebase patterns'
# File References
nextStepFile: './step-04-verify.md'
---
# Step 3: Implement
## STEP GOAL:
Build every feature in the implementation plan, one at a time, following existing codebase patterns.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise; the user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on implementing plan items one at a time in dependency order
- 🚫 FORBIDDEN to skip ahead or implement multiple items simultaneously
- 💬 Approach: Work through each item methodically — re-read spec, implement, test, commit
- 📋 Follow existing codebase patterns for consistency; document any deviations
## EXECUTION PROTOCOLS:
- 🎯 Complete every work item from the implementation plan in order
- 💾 Commit incrementally after each logical unit of work
- 📖 Re-read the relevant spec section before implementing each item
- 🚫 Do not accumulate large uncommitted changesets
## CONTEXT BOUNDARIES:
- Available context: Implementation plan from Step 1; environment baseline from Step 2; approved spec
- Focus: Building features one at a time in dependency order
- Limits: Only implement what is in the plan — no scope creep
- Dependencies: Steps 1 and 2 must be complete
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Work Through the Plan Item by Item
Take the first item from your implementation order (Step 01). For each item:
1. Re-read the relevant section of the spec
2. Identify the acceptance criteria that apply to this item
3. Implement it
4. Test it (see section 5 below)
5. Commit it
6. Move to the next item
Do not jump ahead. Do not implement multiple items simultaneously. Finishing one thing completely before starting the next prevents half-done work from accumulating.
### 2. Follow Existing Codebase Patterns
Before writing new code, study how the existing codebase handles the same concerns:
| Concern | Look At |
|---------|---------|
| **File naming** | How are existing components/pages named? |
| **Component structure** | Props interface, state management, lifecycle |
| **Styling approach** | CSS modules, Tailwind, styled-components, etc. |
| **Error handling** | How do existing features handle errors? |
| **Data fetching** | What patterns are used for API calls? |
| **State management** | Local state, context, store — what is the convention? |
| **Testing patterns** | How are existing tests structured? |
Match these patterns. Consistency with the codebase is more important than what you consider "better."
### 3. Reference the Spec for Each Component
For every component you build, check the spec for:
- **Properties:** What props/attributes does it accept?
- **States:** Default, hover, active, disabled, loading, error, empty
- **Responsive behavior:** How does it change at each breakpoint?
- **Content:** Exact text, labels, placeholder copy
- **Interactions:** Click handlers, form submissions, navigation
- **Edge cases:** What happens with long text, missing data, zero items?
Do not rely on memory. Re-read the spec section each time.
### 4. Use Design System Tokens
If the project has design tokens or a design system:
- Use token variables for colors, spacing, typography — never hardcode values
- Use existing components from the design system before creating new ones
- If you need a new component, build it using the same token system
- Check that your implementation visually matches at design-review zoom levels
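The difference is easiest to see side by side. Token names and values below are invented; a real project imports them from its design system:

```typescript
// Illustrative tokens: a real project imports these from its design system.
const tokens = {
  colorPrimary: "#1d4ed8",
  spacingMd: "16px",
  fontSizeBody: "1rem",
} as const;

// Hardcoded (avoid): a token change elsewhere never reaches this style.
const hardcoded = { color: "#1d4ed8", padding: "16px", fontSize: "1rem" };

// Tokenized (prefer): one source of truth, refactors propagate automatically.
const cardStyle = {
  color: tokens.colorPrimary,
  padding: tokens.spacingMd,
  fontSize: tokens.fontSizeBody,
};

console.log(cardStyle);
```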
### 5. Run Tests After Each Major Change
After completing each work item (not just at the end):
- Run the relevant test suite
- If you wrote new tests, confirm they pass
- If existing tests break, determine if it is an intentional change or a regression
- Fix regressions immediately — do not move on with broken tests
For projects using inline testing (Puppeteer), follow the INLINE-TESTING-GUIDE.md in `data/guides/`.
### 6. Commit Incrementally
Each commit should represent one logical unit of work:
- One component added
- One feature wired up
- One set of tests for a feature
Write clear commit messages that reference what was built. Do not accumulate large uncommitted changesets.
### 7. Verify Checklist
- [ ] Working through plan items in dependency order
- [ ] Each item: spec re-read before implementation
- [ ] Following existing codebase patterns (naming, structure, styling)
- [ ] Design tokens used — no hardcoded values
- [ ] All component states implemented (not just the happy path)
- [ ] Responsive behavior implemented per spec
- [ ] Tests run after each major change
- [ ] Commits are incremental and logically scoped
- [ ] Deviations from spec documented with rationale
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 4: Verify"
#### Menu Handling Logic:
- IF C: Update design log, then load and read the entire {nextStepFile}, then execute it
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all plan items are implemented, tested, and committed will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- All plan items implemented in dependency order
- Spec re-read before each item implementation
- Existing codebase patterns followed
- Design tokens used consistently
- All component states implemented
- Tests run after each major change
- Commits are incremental and logically scoped
### ❌ SYSTEM FAILURE:
- Implementing items out of order or simultaneously
- Not re-reading spec before implementation
- Hardcoding values instead of using design tokens
- Accumulating large uncommitted changesets
- Moving on with broken tests
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-04-verify'
description: 'Systematically confirm that the implementation satisfies every requirement in the spec'
# File References
nextStepFile: './step-05-finalize.md'
---
# Step 4: Verify
## STEP GOAL:
Systematically confirm that the implementation satisfies every requirement in the spec.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise; the user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on verifying acceptance criteria, responsive behavior, interactive states, accessibility, and visual fidelity
- 🚫 FORBIDDEN to add new features or refactor — only verify and fix issues found
- 💬 Approach: Walk through each acceptance criterion with user, testing concretely
- 📋 Fix failures immediately as they are found — do not batch them
## EXECUTION PROTOCOLS:
- 🎯 Every acceptance criterion tested and passing
- 💾 Document verification results and any fixes applied
- 📖 Reference acceptance criteria from Step 1 and the original spec
- 🚫 Do not add scope — only verify what was planned
## CONTEXT BOUNDARIES:
- Available context: Acceptance criteria from Step 1; implementation from Step 3; spec
- Focus: Systematic verification against spec requirements
- Limits: No new features, no refactoring beyond fixing issues
- Dependencies: Step 3 must be complete (implementation done)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Walk Through Every Acceptance Criterion
Open the acceptance criteria checklist from Step 01. Go through each criterion one by one:
- Does the implementation satisfy it? Test it concretely, do not assume.
- If it passes, check it off.
- If it fails, note what is wrong and fix it before continuing.
Do not batch failures for later. Fix as you find them.
### 2. Test All Responsive Breakpoints
For each page and component, test at every breakpoint defined in the spec:
- Mobile (typically 375px)
- Tablet (typically 768px)
- Desktop (typically 1024px+)
- Any custom breakpoints specified
At each breakpoint, verify:
- Layout adapts correctly (stacking, reordering, hiding/showing elements)
- Text remains readable — no overflow, no truncation unless intended
- Touch targets meet minimum size (44x44px) on touch devices
- Images and media scale appropriately
- No horizontal scroll unless intended
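It helps to capture the bucket boundaries in one small helper so manual testing and any automated checks agree on them. Widths below follow the "typical" values above; a real spec may define different ones:

```typescript
// Map a viewport width (px) to the breakpoint bucket under test.
// Boundaries follow the typical values above; adjust to the spec's own.
type Bucket = "mobile" | "tablet" | "desktop";

function breakpoint(width: number): Bucket {
  if (width >= 1024) return "desktop";
  if (width >= 768) return "tablet";
  return "mobile";
}

console.log(breakpoint(375), breakpoint(768), breakpoint(1440));
// → mobile tablet desktop
```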
### 3. Test All Interactive States
For every interactive element, verify each state:
| State | Verify |
|-------|--------|
| **Default** | Renders correctly on load |
| **Hover** | Visual feedback appears |
| **Focus** | Focus ring or indicator visible (keyboard users) |
| **Active / Pressed** | Visual response on click/tap |
| **Disabled** | Visually distinct, not interactive |
| **Loading** | Spinner or skeleton shown, interactions blocked |
| **Error** | Error message displayed, field highlighted |
| **Empty** | Empty state message or placeholder shown |
| **Success** | Confirmation feedback displayed |
### 4. Test Accessibility
Minimum accessibility checks for every feature:
- **Keyboard navigation:** Can you reach and operate every interactive element using only Tab, Enter, Space, Escape, and arrow keys?
- **Screen reader:** Do headings, labels, buttons, and form fields have meaningful text? Are ARIA labels present where needed?
- **Color contrast:** Does text meet WCAG AA contrast ratios (4.5:1 for normal text, 3:1 for large text)?
- **Focus management:** After modal open/close, form submit, or route change — is focus placed logically?
- **Alt text:** Do images have descriptive alt text (or empty alt for decorative images)?
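The contrast thresholds above come from WCAG's relative-luminance formula, which is small enough to check programmatically:

```typescript
// WCAG 2.x contrast ratio for two #rrggbb colors.
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB channel linearization per the WCAG definition
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrast(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrast("#000000", "#ffffff").toFixed(1)); // "21.0"
console.log(contrast("#000000", "#ffffff") >= 4.5);     // true: passes AA
```

Browser devtools and tools like axe report the same numbers; this is only a sketch of where they come from.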
### 5. Cross-Browser Check (If Specified)
If the spec requires specific browser support:
- Test in each listed browser
- Check for layout differences, font rendering, and JavaScript behavior
- Note any browser-specific issues and whether they are acceptable
### 6. Compare Implementation to Spec Side by Side
With the spec open next to the running implementation:
- Compare visual layout at each breakpoint
- Compare text content word for word
- Compare colors to spec hex values
- Compare spacing and proportions
- Note any discrepancies — fix or document as intentional deviations
For projects using Puppeteer, follow the verification process in INLINE-TESTING-GUIDE.md: measure what you can measure programmatically, and present only qualitative questions to the user.
### 7. Verify Checklist
- [ ] Every acceptance criterion tested and passing
- [ ] All responsive breakpoints verified
- [ ] All interactive states working (hover, focus, disabled, loading, error, empty, success)
- [ ] Keyboard navigation works for all interactive elements
- [ ] Screen reader labels and ARIA attributes present
- [ ] Color contrast meets WCAG AA
- [ ] Focus management correct after state changes
- [ ] Cross-browser tested (if required by spec)
- [ ] Visual comparison to spec completed — no unintended differences
- [ ] All found issues fixed or documented as intentional deviations
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [C] Continue to Step 5: Finalize"
#### Menu Handling Logic:
- IF C: Update design log, then load and read the entire {nextStepFile}, then execute it
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all acceptance criteria are verified passing and all issues fixed will you then load and read fully `{nextStepFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Every acceptance criterion tested and passing
- All responsive breakpoints verified
- All interactive states working
- Accessibility checks completed
- Visual comparison to spec completed
- All found issues fixed or documented
### ❌ SYSTEM FAILURE:
- Assuming criteria pass without testing concretely
- Skipping responsive or accessibility verification
- Batching failures instead of fixing immediately
- Not comparing implementation to spec visually
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.

---
name: 'step-05-finalize'
description: 'Clean up, run the full test suite, document deviations, and prepare the work for review'
# File References
activityWorkflowFile: '../workflow-development.md'
---
# Step 5: Finalize
## STEP GOAL:
Clean up, run the full test suite, document deviations, and prepare the work for review.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
### Role Reinforcement:
- ✅ You are an Implementation Partner guiding structured development activities
- ✅ If you have already been given a name, communication_style, and persona, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring software development methodology expertise; the user brings domain knowledge and codebase familiarity
- ✅ Maintain clear and structured tone throughout
### Step-Specific Rules:
- 🎯 Focus only on cleanup, full test suite, deviation documentation, and PR preparation
- 🚫 FORBIDDEN to add new features or make non-cleanup changes
- 💬 Approach: Systematically clean up, test, document, and prepare for review with user
- 📋 Ensure deviations are documented with rationale for reviewer context
## EXECUTION PROTOCOLS:
- 🎯 Code cleaned, tests passing, deviations documented, PR prepared
- 💾 Document spec deviations and PR description in the dialog file
- 📖 Reference test baseline from Step 2 to distinguish regressions from pre-existing failures
- 🚫 Do not add features or refactor beyond cleanup
## CONTEXT BOUNDARIES:
- Available context: Implementation from Step 3; verification from Step 4; test baseline from Step 2; spec
- Focus: Final cleanup, testing, documentation, and review preparation
- Limits: No new features, no major refactoring
- Dependencies: Step 4 must be complete (verification passed)
## Sequence of Instructions (Do not deviate, skip, or optimize)
### 1. Clean Up Code
Go through all files you created or modified:
- Remove `console.log`, `debugger`, and other debug statements
- Remove commented-out code (if it was kept for reference, it belongs in a comment on the PR, not in the code)
- Remove unused imports and variables
- Verify file naming follows project conventions
- Check for any TODO comments you left — resolve them or convert to tracked issues
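The project's linter usually catches most of this, but a quick sweep is cheap. A sketch of the kind of check involved (patterns are illustrative, not exhaustive):

```typescript
// Flag lines that look like leftover debug statements or unresolved TODOs.
const debugPatterns = [/\bconsole\.log\(/, /\bdebugger\b/, /\bTODO\b/];

function findLeftovers(source: string): string[] {
  return source.split("\n").flatMap((line, i) =>
    debugPatterns.some((p) => p.test(line))
      ? [`line ${i + 1}: ${line.trim()}`]
      : [],
  );
}

const sample = "const total = price * qty;\nconsole.log(total);\n// TODO handle tax";
console.log(findLeftovers(sample)); // flags lines 2 and 3
```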
### 2. Run the Full Test Suite
Run the complete test suite, not just the tests for your changes:
```
npm test # or the project's equivalent
```
- **All pass:** Proceed.
- **New failures:** Determine if they are regressions from your changes or pre-existing. Fix regressions. Pre-existing failures should match the baseline from Step 02.
- **Flaky tests:** Note them but do not ignore them. If your changes made a test flaky, fix it.
If the project has linting or type checking, run those too:
```
npm run lint
npm run typecheck # if applicable
```
Fix any issues your code introduced.
### 3. Document Deviations from Spec
If you deviated from the spec during implementation (discovered in Step 03 or Step 04), document each deviation:
```markdown
## Spec Deviations
### [Component/Feature Name]
- **Spec said:** [what the spec required]
- **Implementation does:** [what you built instead]
- **Reason:** [why the deviation was necessary]
```
Record this in the dialog file. These deviations become discussion points during review.
### 4. Update Affected Documentation
Check if your changes require documentation updates:
- Component API documentation (new props, changed behavior)
- Route documentation (new pages, changed URLs)
- Environment setup docs (new dependencies, new env vars)
- Storybook stories (if the project uses Storybook)
Update only what your changes affect. Do not create new documentation that was not asked for.
### 5. Prepare PR Description
Write a pull request description that helps the reviewer:
- **Summary:** What was built and why (reference the spec)
- **Changes:** Key files and what changed in each
- **Testing:** How to test the changes (steps to reproduce each feature)
- **Deviations:** Any departures from the spec, with rationale
- **Screenshots:** Before/after if visual changes were made
- **Acceptance criteria:** Copy the checklist from Step 01 with all items checked
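A skeleton in the same markdown style as the earlier templates (bracketed items are placeholders):

```markdown
## Summary
[What was built and why, with a link to the spec]

## Changes
- `[path/to/file]`: [what changed and why]

## Testing
1. [Step to reproduce and verify each feature]

## Deviations from Spec
[None, or each deviation with its rationale]

## Acceptance Criteria
- [x] [Criterion from spec]
```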
### 6. Suggest Acceptance Testing
Based on what was built, recommend next steps:
- If the project uses formal acceptance testing, suggest triggering the [T] Acceptance Testing workflow
- If a designer needs to review visual fidelity, flag that
- If stakeholder demo is needed, note what to demonstrate
- If the feature has user-facing copy, suggest copy review
### 7. Verify Checklist
- [ ] Debug statements removed
- [ ] Unused code and imports removed
- [ ] TODOs resolved or converted to tracked issues
- [ ] Full test suite passes (or pre-existing failures match baseline)
- [ ] Linting and type checking pass
- [ ] Spec deviations documented with rationale
- [ ] Affected documentation updated
- [ ] PR description written with summary, changes, testing steps, and deviations
- [ ] Next steps recommended (acceptance testing, design review, etc.)
### 8. Present MENU OPTIONS
Display: "**Select an Option:** [M] Return to Activity Menu"
#### Menu Handling Logic:
- IF M: Update design log, then load and read the entire {activityWorkflowFile}, then execute it
- IF any other comments or queries: respond to the user, then [Redisplay Menu Options]
#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed when user selects 'M'
- User can chat or ask questions - always respond and then redisplay menu options
## CRITICAL STEP COMPLETION NOTE
ONLY WHEN all cleanup is done, tests pass, deviations are documented, and PR is prepared will you then load and read fully `{activityWorkflowFile}` to execute.
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Debug statements removed
- Unused code and imports removed
- Full test suite passes (or pre-existing failures match baseline)
- Linting and type checking pass
- Spec deviations documented with rationale
- PR description written
- Next steps recommended
### ❌ SYSTEM FAILURE:
- Leaving debug statements in code
- Not running the full test suite
- Not documenting spec deviations
- Not preparing a PR description
- Skipping linting or type checking
**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.