# Workflow Classification Reference

Classify the skill type based on user requirements. This table is for internal use — DO NOT show to user.

## 3-Type Taxonomy

| Type | Description | Structure | When to Use |
|------|-------------|-----------|-------------|
| **Simple Utility** | Input/output building block. Headless, composable, often has scripts. May opt out of bmad-init for true standalone use. | Single SKILL.md + scripts/ | Composable building block with clear input/output, single-purpose |
| **Simple Workflow** | Multi-step process contained in a single SKILL.md. Uses bmad-init. Minimal or no prompt files. | SKILL.md + optional references/ | Multi-step process that fits in one file, no progressive disclosure needed |
| **Complex Workflow** | Multi-stage with progressive disclosure, numbered prompt files at root, config integration. May support headless mode. | SKILL.md (routing) + prompt stages at root + references/ | Multiple stages, long-running process, progressive disclosure, routing logic |

## Decision Tree

```
1. Is it a composable building block with clear input/output?
   └─ YES → Simple Utility
   └─ NO ↓

2. Can it fit in a single SKILL.md without progressive disclosure?
   └─ YES → Simple Workflow
   └─ NO ↓

3. Does it need multiple stages, long-running process, or progressive disclosure?
   └─ YES → Complex Workflow
```

## Classification Signals

### Simple Utility Signals
- Clear input → processing → output pattern
- No user interaction needed during execution
- Other skills/workflows call it
- Deterministic or near-deterministic behavior
- Could be a script but needs LLM judgment
- Examples: JSON validator, manifest checker, format converter

### Simple Workflow Signals
- 3-8 numbered steps
- User interaction at specific points
- Uses standard tools (gh, git, npm, etc.)
- Produces a single output artifact
- No need to track state across compactions
- Examples: PR creator, deployment checklist, code review

### Complex Workflow Signals
- Multiple distinct phases/stages
- Long-running (likely to hit context compaction)
- Progressive disclosure needed (too much for one file)
- Routing logic in SKILL.md dispatches to stage prompts
- Produces multiple artifacts across stages
- May support headless/autonomous mode
- Examples: agent builder, module builder, project scaffolder

## Module Context (Orthogonal)

Ask about module context for ALL types:
- **Module-based:** Part of a BMad module. Uses `bmad-{modulecode}-{skillname}` naming. Has bmad-manifest.json.
- **Standalone:** Independent skill. Uses `bmad-{skillname}` naming.

All workflows use `bmad-init` by default unless explicitly opted out (truly standalone utilities).

# BMad Module Workflows

Advanced patterns for BMad module workflows — long-running, multi-stage processes with progressive disclosure, config integration, and compaction survival.

---

## Workflow Persona: Facilitator Model

BMad workflows treat the human operator as the expert. The agent's role is **facilitator**, not replacement.

**Principles:**
- Ask clarifying questions when requirements are ambiguous
- Present options with trade-offs, don't assume preferences
- Validate decisions before executing irreversible actions
- The operator knows their domain; the workflow knows the process

**Example voice:**
```markdown
## Discovery
I found 3 API endpoints that could handle this. Which approach fits your use case?

**Option A**: POST /bulk-import — Faster, but no validation until complete
**Option B**: POST /validate + POST /import — Slower, but catches errors early
**Option C**: Streaming import — Best of both, requires backend support

Which would you prefer?
```

---

## Config Reading and Integration

Workflows MUST read config values using the `bmad-init` skill.

### Config Loading Pattern

**Invoke the skill with parameters:**
```
Use bmad-init skill:
- module: {bmad-module-code}
- vars: user_name:BMad,communication_language:English,document_output_language:English,output_folder:{project-root}/_bmad-output,{output-location-variable}:{default-output-path}
```

The skill returns JSON with config values. Store in memory as `{var_name}` for use in prompts.

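The exact shape of the returned JSON is not pinned down here; as a rough sketch, a loader on the workflow side might merge the response over the requested defaults (the field names and null-fallback behavior below are assumptions, not the actual bmad-init contract):

```python
import json

# Requested defaults, mirroring the vars list in the invocation above.
DEFAULTS = {
    "user_name": "BMad",
    "communication_language": "English",
    "output_folder": "{project-root}/_bmad-output",
}

def load_config(raw: str) -> dict:
    """Merge a bmad-init JSON response over the requested defaults.

    Keys returned as null fall back to their defaults.
    """
    loaded = json.loads(raw)
    merged = dict(DEFAULTS)
    merged.update({k: v for k, v in loaded.items() if v is not None})
    return merged

config = load_config('{"user_name": "Alice", "communication_language": null}')
```

The merged dict is then what the prompts interpolate as `{user_name}`, `{output_folder}`, and so on.
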

### Required Core Variables

**Every module workflow MUST load these core variables:**
- `user_name:BMad`
- `communication_language:English`
- `output_folder:{project-root}/_bmad-output`

**Conditionally include:**
- `document_output_language:English` — ONLY if workflow creates documents (check capability `output-location` field)
- Output location variable from capability `output-location` — ONLY if specified in metadata

**Example for BMB workflow (creates documents, has output var):**
```
vars: user_name:BMad,communication_language:English,document_output_language:English,output_folder:{project-root}/_bmad-output,bmad_builder_output_folder:{project-root}/bmad-builder-creations/
```

**Example for analysis workflow (no documents, has output var):**
```
vars: user_name:BMad,communication_language:English,output_folder:{project-root}/_bmad-output,analysis_output_folder:{project-root}/_bmad-output/analysis/
```

**Example for processing workflow (no documents, no output var):**
```
vars: user_name:BMad,communication_language:English,output_folder:{project-root}/_bmad-output
```

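All three examples share the same comma-separated `key:value` encoding. If a helper assembles that string from a dict of defaults, a minimal sketch (assuming no default value contains a comma, since the format has no escaping) might be:

```python
def build_vars(defaults: dict[str, str]) -> str:
    """Serialize var defaults into the comma-separated key:value format
    shown in the examples above. No escaping: values must not contain commas.
    """
    return ",".join(f"{name}:{value}" for name, value in defaults.items())

vars_arg = build_vars({
    "user_name": "BMad",
    "communication_language": "English",
    "output_folder": "{project-root}/_bmad-output",
})
```
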

### Using Config Values in Prompts

**Every prompt file MUST start with:**
```markdown
Language: {communication_language}
Output Language: {document_output_language} ← ONLY if workflow creates documents
Output Location: {output-variable} ← ONLY if capability output-location is defined
```

**Use throughout prompts:**
```markdown
"Creating documentation in {document_output_language}..." ← ONLY if creates documents
"Writing output to {bmad_builder_output_folder}/report.md" ← ONLY if has output var
"Connecting to API at {my_module_api_url}..."
```

---

## {project-root} Pattern for Portable Paths

Artifacts MUST use `{project-root}` for paths so the skill works regardless of install location (user directory or project).

### Path Pattern

```
{project-root}/docs/foo.md    → Correct (portable)
./docs/foo.md                 → Wrong (breaks if skill in user dir)
~/my-project/docs/foo.md      → Wrong (not portable)
/bizarre/absolute/path/foo.md → Wrong (not portable)
```

### Writing Artifacts

```markdown
1. Create the artifact at {project-root}/docs/architecture.md
2. Update {project-root}/CHANGELOG.md with entry
3. Copy template to {project-root}/.bmad-cache/template.md
```

### {project-root} Resolution

`{project-root}` is automatically resolved to the directory where the workflow was launched. This ensures:
- Skills work whether installed globally or per-project
- Multiple projects can use the same skill without conflict
- Artifact paths are always relative to the active project

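A minimal sketch of that substitution, using the hyphenated `{project-root}` placeholder spelling from the path-construction rules (the workflow runtime performs the real resolution; this only illustrates the mechanic):

```python
def resolve(path_template: str, project_root: str) -> str:
    """Expand the {project-root} placeholder against the launch directory.

    Paths without the placeholder (skill-internal relative paths)
    pass through unchanged.
    """
    return path_template.replace("{project-root}", project_root)

resolved = resolve("{project-root}/docs/architecture.md", "/home/me/my-project")
```
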

---

## Long-Running Workflows: Compaction Survival

Workflows that run long (many steps, large context) may trigger context compaction. Critical state MUST be preserved in output files.

### The Document-Itself Pattern

**The output document is the cache.** Write directly to the file you're creating, updating it progressively as the workflow advances.

The document stores both content and context:
- **YAML front matter** — paths to input files used (for recovery after compaction)
- **Draft sections** — progressive content as it's built
- **Status marker** — which stage is complete (for resumption)

This avoids:
- File collisions when working on multiple PRDs/research projects simultaneously
- Extra `_bmad-cache` folder overhead
- State synchronization complexity

### Draft Document Structure

```markdown
---
title: "Analysis: Research Topic"
status: "analysis" # discovery | planning | analysis | synthesis | polish
inputs:
  - "{project-root}/docs/brief.md"
  - "{project-root}/data/sources.json"
created: "2025-03-02T10:00:00Z"
updated: "2025-03-02T11:30:00Z"
---

# Analysis: Research Topic

## Discovery
[content from stage 1...]

## Analysis
[content from stage 2...]

---

*Last updated: Stage 2 complete*
```

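Recovery code only needs `status` and `inputs` out of that front matter. A minimal line-matching sketch (a real script would pull in a YAML parser such as `pyyaml` via a PEP 723 dependency instead):

```python
def parse_front_matter(doc: str) -> dict:
    """Extract status and inputs from a document's YAML front matter.

    Minimal sketch: assumes an unquoted '#' starts a comment and that
    'inputs' is the only list in the front matter.
    """
    block = doc.split("---")[1]  # text between the first two --- fences
    meta = {"status": None, "inputs": []}
    for line in block.splitlines():
        line = line.strip()
        if line.startswith("status:"):
            meta["status"] = line.split(":", 1)[1].split("#", 1)[0].strip().strip('"')
        elif line.startswith("- "):
            meta["inputs"].append(line[2:].strip().strip('"'))
    return meta
```
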

### Input Tracking Pattern

**Stage 1: Initialize document with inputs**
````markdown
## Stage 1: Discovery
1. Gather sources and identify input files
2. Create output document with YAML front matter:
```yaml
---
title: "{document_title}"
status: "discovery"
inputs:
  - "{relative_path_to_input_1}"
  - "{relative_path_to_input_2}"
created: "{timestamp}"
updated: "{timestamp}"
---
```
3. Write discovery content to document
4. Present summary to user
````

**Stage 2+: Reload context if compacted**
```markdown
## Stage Start: Analysis
1. Read {output_doc_path}
2. Parse YAML front matter for `inputs` list
3. Re-read each input file to restore context
4. Verify status indicates previous stage complete
5. Proceed with analysis, updating document in place
```

**Full stage flow (each stage writes to the same document):**
```markdown
## Stage 1: Research
1. Gather sources
2. **Write findings to {project-root}/docs/research-topic.md**
3. Present summary to user

## Stage 2: Analysis
1. **Read {project-root}/docs/research-topic.md** (survives compaction)
2. Analyze patterns
3. **Append/insert analysis into the same file**

## Stage 3: Synthesis
1. Read the growing document
2. Synthesize into final structure
3. **Update the same file in place**

## Stage 4: Final Polish
1. Spawn a subagent to polish the completed document:
   - Cohesion check
   - Redundancy removal
   - Contradiction detection and fixes
   - Add TOC if long document
2. Write final version to {project-root}/docs/research-topic.md
```


### When to Use This Pattern

**Guided flows with long documents:** Always write updates to the document itself at each stage.

**Yolo flows with multiple turns:** If the workflow takes multiple conversational turns, write to the output file progressively.

**Single-pass yolo:** Can wait to write final output if the entire response fits in one turn.

### Progressive Document Structure

Each stage appends to or restructures the document:

```markdown
## Initial Stage
# Document Title

## Section 1: Initial Research
[content...]

---

## Second Stage (reads file, appends)
# Document Title

## Section 1: Initial Research
[existing content...]

## Section 2: Analysis
[new content...]

---

## Third Stage (reads file, restructures)
# Document Title

## Executive Summary
[synthesized from sections]

## Background
[section 1 content]

## Analysis
[section 2 content]
```

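The append-and-update mechanic behind each stage transition can be sketched as (assumes a single `status:` line in the front matter; timestamp handling omitted):

```python
def advance_stage(doc: str, new_status: str, section_md: str) -> str:
    """Rewrite the front-matter status marker and append the new stage's section."""
    lines = doc.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("status:"):
            lines[i] = f'status: "{new_status}"'
            break
    return "\n".join(lines).rstrip() + "\n\n" + section_md.rstrip() + "\n"
```

A restructuring stage (like the third stage above) would instead rewrite the body wholesale, but the status update works the same way.
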

### Final Polish Subagent

At workflow completion, spawn a subagent for a final quality pass:

````markdown
## Final Polish

Launch a general-purpose agent with:
```
Task: Polish {output_file_path}

Actions:
1. Check cohesion - do sections flow logically?
2. Find and remove redundancy
3. Detect contradictions and fix them
4. If document is >5 sections, add a TOC at the top
5. Ensure consistent formatting and tone

Write the polished version back to the same file.
```
````

### Compaction Recovery Pattern

If context is compacted mid-workflow:
```markdown
## Recovery Check
1. Read {output_doc_path}
2. Parse YAML front matter:
   - Check `status` for current stage
   - Read `inputs` list to restore context
3. Re-read all input files from `inputs`
4. Resume from next stage based on status
```

### When NOT to Use This Pattern

- **Short, single-turn outputs:** Just write once at the end
- **Purely conversational workflows:** No persistent document needed
- **Multiple independent artifacts:** Each gets its own file; write each directly

---

## Sequential Progressive Disclosure

Place numbered prompt files at the skill root when:
- Multi-phase workflow with ordered questions
- Input of one phase affects the next
- User requires specific sequence
- Workflow is long-running and stages shouldn't be visible upfront

### Prompt File Structure

```
my-workflow/
├── SKILL.md
├── 01-discovery.md   # Stage 1: Gather requirements, start output doc
├── 02-planning.md    # Stage 2: Create plan (uses discovery output)
├── 03-execution.md   # Stage 3: Execute (uses plan, updates output)
├── 04-review.md      # Stage 4: Review and polish final output
└── references/
    └── stage-templates.md
```

### Progression Conditions

Each prompt file specifies when to proceed:

```markdown
# 02-planning.md

## Prerequisites
- Discovery complete (output doc exists and has discovery section)
- User approved scope (user confirmed: proceed)

## On Activation
1. Read the output doc to get discovery context
2. Generate plan based on discovered requirements
3. **Append/insert plan section into the output doc**
4. Present plan summary to user

## Progression Condition
Proceed to execution stage when user confirms: "Proceed with plan" OR user provides modifications

## On User Approval
Route to 03-execution.md
```

### SKILL.md Routes to Prompt Files

Main SKILL.md is minimal — just routing logic:

```markdown
## Workflow Entry

1. Load config via the `bmad-init` skill

2. Check if workflow in progress:
   - If output doc exists (user specifies path or we prompt):
     - Read doc to determine current stage
     - Resume from last completed section
   - Else: Start at 01-discovery.md

3. Route to appropriate prompt file based on stage
```


### When NOT to Use Separate Prompt Files

Keep inline in SKILL.md when:
- Simple skill (session-long context fits)
- Well-known domain tool usage
- Single-purpose utility
- All stages are independent or can be visible upfront

---

## Module Metadata Reference

BMad module workflows require extended frontmatter metadata. See `references/metadata-reference.md` for the metadata template, field explanations, and comparisons between standalone skills and module workflows.

---

## Workflow Architecture Checklist

Before finalizing a BMad module workflow, verify:

- [ ] **Facilitator persona**: Does the workflow treat the operator as expert?
- [ ] **Config integration**: Are language, output locations, and module props read and used?
- [ ] **Portable paths**: All artifact paths use `{project-root}`?
- [ ] **Continuous output**: Does each stage write to the output document directly (survives compaction)?
- [ ] **Document-as-cache**: Output doc has YAML front matter with `status` and `inputs` for recovery?
- [ ] **Input tracking**: Does front matter list relative paths to all input files used?
- [ ] **Final polish**: Does workflow include a subagent polish step at the end?
- [ ] **Progressive disclosure**: Are stages in prompt files at root with clear progression conditions?
- [ ] **Metadata complete**: All bmad-* fields present and accurate?
- [ ] **Recovery pattern**: Can the workflow resume by reading the output doc front matter?

---

## Example: Complete BMad Workflow Skeleton

```
my-module-workflow/
├── SKILL.md          # Routing + entry logic
├── 01-discovery.md   # Gather requirements
├── 02-planning.md    # Create plan
├── 03-execution.md   # Execute
├── 04-review.md      # Review results
├── references/
│   └── templates.md  # Stage templates
└── scripts/
    └── validator.sh  # Output validation
```

**SKILL.md** (minimal routing):
```markdown
---
name: bmad-mymodule-workflow
description: Complex multi-stage workflow for my module. Use when user requests to 'run my module workflow' or 'create analysis report'.
---

## Workflow Entry

1. Use bmad-init skill (module: mm) — loads user_name, communication_language, document_output_language, output_folder, my_output_folder

2. Ask user for output document path (or suggest {my_output_folder}/analysis-{timestamp}.md)

3. Check if doc exists:
   - If yes: read to determine current stage, resume
   - If no: start at 01-discovery.md

4. Route to appropriate prompt file based on stage
```


**01-discovery.md**:
````markdown
Language: {communication_language}
Output Language: {document_output_language}
Output Location: {my_output_folder}

## Discovery

1. What are we building?
2. What are the constraints?
3. What input files should we reference?

**Create**: {output_doc_path} with:
```markdown
---
title: "Analysis: {topic}"
status: "discovery"
inputs:
  - "{relative_path_to_input_1}"
  - "{relative_path_to_input_2}"
created: "{timestamp}"
updated: "{timestamp}"
---

# Analysis: {topic}

## Discovery
[findings...]

---

*Status: Stage 1 complete*
```

## Progression
When complete → 02-planning.md
````

**02-planning.md**:
```markdown
Language: {communication_language}
Output Language: {document_output_language}

## Planning Start

1. Read {output_doc_path}
2. Parse YAML front matter — reload all `inputs` to restore context
3. Verify status is "discovery"

## Planning
1. Generate plan based on discovery
2. Update {output_doc_path}:
   - Update status to "planning"
   - Append planning section

## Progression
When complete → 03-execution.md
```

**04-review.md**:
````markdown
Language: {communication_language}
Output Language: {document_output_language}

## Final Polish

1. Read the complete output doc
2. Launch a general-purpose agent:
```
Task: Polish {output_doc_path}

Actions:
1. Check cohesion - do sections flow logically?
2. Find and remove redundancy
3. Detect contradictions and fix them
4. If document is >5 sections, add a TOC at the top
5. Ensure consistent formatting and tone
6. Update YAML status to "complete" and remove draft markers

Write the polished version back to the same file.
```

## Progression
When complete → present final result to user
````

# Manifest Reference

Every BMad skill has a `bmad-manifest.json` at its root. This is the unified format for agents, workflows, and simple skills.

## File Location

```
{skillname}/
├── SKILL.md            # name, description, workflow content
├── bmad-manifest.json  # Capabilities, module integration
└── ...
```

## SKILL.md Frontmatter (Minimal)

```yaml
---
name: bmad-{modulecode}-{skillname}
description: [5-8 word summary]. [Use when user says 'X' or 'Y'.]
---
```

## bmad-manifest.json

**NOTE:** Do NOT include `$schema` in generated manifests. The schema is used by validation tooling only — it is not part of the delivered skill.

```json
{
  "module-code": "bmb",
  "replaces-skill": "bmad-original-skill",
  "has-memory": true,
  "capabilities": [
    {
      "name": "build",
      "menu-code": "BP",
      "description": "Builds skills through conversational discovery. Outputs to skill folder.",
      "supports-headless": true,
      "prompt": "build-process.md",
      "phase-name": "design",
      "after": ["create-requirements"],
      "before": ["quality-optimize"],
      "is-required": true,
      "output-location": "{bmad_builder_output_folder}"
    },
    {
      "name": "validate",
      "menu-code": "VL",
      "description": "Runs validation checks and produces quality report.",
      "supports-headless": true
    }
  ]
}
```

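A deterministic pre-flight check over this structure is a natural script candidate. A sketch that enforces the required capability fields, plus the apparent mutual exclusivity of `prompt` and `skill-name` implied by the capability flavors (an inference, not a stated rule):

```python
import json

REQUIRED = {"name", "menu-code", "description"}

def manifest_errors(raw: str) -> list[str]:
    """Report missing required capability fields in a bmad-manifest.json string."""
    errors = []
    data = json.loads(raw)
    for i, cap in enumerate(data.get("capabilities", [])):
        # sorted() keeps the error order stable for diffing/testing.
        for field in sorted(REQUIRED - cap.keys()):
            errors.append(f"capabilities[{i}] missing '{field}'")
        if "prompt" in cap and "skill-name" in cap:
            errors.append(f"capabilities[{i}] sets both 'prompt' and 'skill-name'")
    return errors
```
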

## Field Reference

### Top-Level Fields

| Field | Type | Required | Purpose |
|-------|------|----------|---------|
| `module-code` | string | If module | Short code for namespacing (e.g., `bmb`, `cis`) |
| `replaces-skill` | string | No | Registered skill name this replaces. Inherits metadata during bmad-init. |
| `persona` | string | Agents only | Succinct distillation of the agent's essence. **Presence = this is an agent.** |
| `has-memory` | boolean | No | Whether state persists across sessions via sidecar memory |

### Capability Fields

| Field | Type | Required | Purpose |
|-------|------|----------|---------|
| `name` | string | Yes | Kebab-case identifier |
| `menu-code` | string | Yes | 2-3 uppercase letter shortcut for menus |
| `description` | string | Yes | What it does and when to suggest it |
| `supports-headless` | boolean | No | Can run without user interaction |
| `prompt` | string | No | Relative path to prompt file (internal capability) |
| `skill-name` | string | No | Registered name of external skill (external capability) |
| `phase-name` | string | No | Module phase this belongs to |
| `after` | array | No | Skill names that should run before this capability |
| `before` | array | No | Skill names this capability should run before |
| `is-required` | boolean | No | If true, skills in `before` are blocked until this completes |
| `output-location` | string | No | Where output goes (may use config variables) |

### Three Capability Flavors

1. **Has `prompt`** — internal capability routed to a prompt file
2. **Has `skill-name`** — delegates to another registered skill
3. **Has neither** — SKILL.md handles it directly


### The `replaces-skill` Field

When set, the skill inherits metadata from the replaced skill during `bmad-init`. Explicit fields in the new manifest override inherited values.

## Agent vs Workflow vs Skill

No type field needed — inferred from content:
- **Has `persona`** → agent
- **No `persona`** → workflow or skill (distinction is complexity, not manifest structure)

## Config Loading

All module skills MUST use the `bmad-init` skill at startup.

See `references/complex-workflow-patterns.md` for the config loading pattern.

## Path Construction Rules — CRITICAL

Only use `{project-root}` for `_bmad` paths.

**Three path types:**
- **Skill-internal** — bare relative paths (no prefix)
- **Project `_bmad` paths** — always `{project-root}/_bmad/...`
- **Config variables** — used directly, already contain `{project-root}` in their resolved values

**Correct:**
```
references/reference.md               # Skill-internal (bare relative)
stage-one.md                          # Skill-internal (prompt at root)
{project-root}/_bmad/planning/prd.md  # Project _bmad path
{planning_artifacts}/prd.md           # Config var (already has full path)
```

**Never use:**
```
../../other-skill/file.md              # Cross-skill relative path breaks with reorganization
{project-root}/{config_var}/output.md  # Double-prefix
./references/reference.md              # Relative prefix breaks context changes
```

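These rules are mechanical, which makes them scriptable. A sketch of a checker for the forbidden patterns (rule labels are mine, not canonical):

```python
import re

def path_violations(path: str) -> list[str]:
    """Flag the forbidden path patterns from the rules above."""
    problems = []
    if path.startswith("./"):
        problems.append("relative './' prefix")
    if "../" in path:
        problems.append("cross-skill '../' traversal")
    # Config vars already resolve to full paths, so a {project-root} prefix
    # immediately followed by a variable reference is a double prefix.
    if re.search(r"\{project-root\}/\{[a-z_]+\}", path):
        problems.append("double prefix: config vars already contain {project-root}")
    return problems
```
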
# Quality Dimensions — Quick Reference

Six dimensions to keep in mind when building skills. The quality scanners check these automatically during optimization — this is a mental checklist for the build phase.

## 1. Informed Autonomy

The executing agent needs enough context to make judgment calls when situations don't match the script. The Overview section establishes this: domain framing, theory of mind, design rationale.

- Simple utilities need minimal context — input/output is self-explanatory
- Interactive/complex workflows need domain understanding, user perspective, and rationale for non-obvious choices
- When in doubt, explain *why* — an agent that understands the mission improvises better than one following blind steps

## 2. Intelligence Placement

Scripts handle plumbing (fetch, transform, validate). Prompts handle judgment (interpret, classify, decide).

**Test:** If a script contains an `if` that decides what content *means*, intelligence has leaked.

**Reverse test:** If a prompt validates structure, counts items, parses known formats, compares against schemas, or checks file existence — determinism has leaked into the LLM. That work belongs in a script. Scripts have access to full bash, Python with standard library plus PEP 723 dependencies, and system tools — think broadly about what can be offloaded.

## 3. Progressive Disclosure

SKILL.md stays focused. Detail goes where it belongs.

- Stage instructions → prompt files at skill root
- Reference data, schemas, large tables → `references/`
- Templates, config files → `assets/`
- Multi-branch SKILL.md under ~250 lines: fine as-is
- Single-purpose up to ~500 lines: acceptable if focused

## 4. Description Format

Two parts: `[5-8 word summary]. [Use when user says 'X' or 'Y'.]`

Default to conservative triggering. See `references/standard-fields.md` for full format and examples.

## 5. Path Construction

Only use `{project-root}` for `_bmad` paths. Config variables used directly — they already contain `{project-root}`.

See `references/standard-fields.md` for correct/incorrect patterns.

## 6. Token Efficiency

Remove genuine waste (repetition, defensive padding, meta-explanation). Preserve context that enables judgment (domain framing, theory of mind, design rationale). These are different things — the prompt-craft scanner distinguishes between them.

# Script Opportunities Reference — Workflow Builder

## Core Principle

Scripts handle deterministic operations (validate, transform, count). Prompts handle judgment (interpret, classify, decide). If a check has clear pass/fail criteria, it belongs in a script.

---

## Section 1: How to Spot Script Opportunities

### The Determinism Test

Ask two questions about any operation:

1. **Given identical input, will it always produce identical output?** If yes, it's a script candidate.
2. **Could you write a unit test with expected output?** If yes, it's definitely a script.

**Script territory:** The operation has no ambiguity — same input, same result, every time.
**Prompt territory:** The operation requires interpreting meaning, tone, or context — reasonable people could disagree on the output.

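For instance, extracting config variable references from a prompt file is fully deterministic and trivially unit-testable, so it passes both questions (the regex and function name are illustrative):

```python
import re

def config_var_refs(prompt_text: str) -> list[str]:
    """Extract {config_variable} references from prompt text.

    Matches lowercase snake_case names only, so path placeholders
    like {project-root} are deliberately excluded.
    """
    return sorted(set(re.findall(r"\{([a-z][a-z0-9_]*)\}", prompt_text)))

refs = config_var_refs("Write to {output_folder}/report.md in {communication_language}.")
```

Identical input always yields the identical sorted list: script territory.
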

### The Judgment Boundary

| Scripts Handle | Prompts Handle |
|----------------|----------------|
| Fetch | Interpret |
| Transform | Classify (with ambiguity) |
| Validate | Create |
| Count | Decide (with incomplete info) |
| Parse | Evaluate quality |
| Compare | Synthesize meaning |
| Extract | Assess tone/style |
| Format | Generate recommendations |
| Check structure | Weigh tradeoffs |

### Pattern Recognition Checklist

When you see these verbs or patterns in a workflow's requirements, think scripts first:

| Signal Verb / Pattern | Script Type | Example |
|----------------------|-------------|---------|
| validate | Validation script | "Validate frontmatter fields exist" |
| count | Metric script | "Count tokens per file" |
| extract | Data extraction | "Extract all config variable references" |
| convert / transform | Transformation script | "Convert stage definitions to graph" |
| compare | Comparison script | "Compare prompt frontmatter vs manifest" |
| scan for | Pattern scanning | "Scan for orphaned template artifacts" |
| check structure | File structure checker | "Check skill directory has required files" |
| against schema | Schema validation | "Validate output against JSON schema" |
| graph / map dependencies | Dependency analysis | "Map skill-to-skill dependencies" |
| list all | Enumeration script | "List all resource files loaded by prompts" |
| detect pattern | Pattern detector | "Detect subagent delegation patterns" |
| diff / changes between | Diff analysis | "Show what changed between versions" |

### The Outside-the-Box Test

Scripts are not limited to validation. Push your thinking:

- **Data gathering as script:** Could a script collect structured data (file sizes, dependency lists, config values) and return JSON for the LLM to interpret? The LLM gets pre-digested facts instead of reading raw files.
- **Pre-processing:** Could a script reduce what the LLM needs to read? Extract only the relevant sections, strip boilerplate, summarize structure.
- **Post-processing validation:** Could a script validate LLM output after generation? Check that generated YAML parses, that referenced files exist, that naming conventions are followed.
- **Metric collection:** Could scripts count, measure, and tabulate so the LLM makes decisions based on numbers it didn't have to compute? Token counts, file counts, complexity scores — feed these to LLM judgment without making the LLM count.
- **Workflow stage analysis:** Could a script parse stage definitions and progression conditions, giving the LLM a structural map without it needing to parse markdown?

### Your Toolbox

Scripts have access to the full capabilities of the execution environment. Think broadly — if you can express the logic as deterministic code, it's a script candidate.

**Bash:** Full shell power — `jq`, `grep`, `awk`, `sed`, `find`, `diff`, `wc`, `sort`, `uniq`, `curl`, plus piping and composition. Great for file discovery, text processing, and orchestrating other scripts.

**Python:** The entire standard library — `json`, `pathlib`, `re`, `argparse`, `collections`, `difflib`, `ast`, `csv`, `xml.etree`, `textwrap`, `dataclasses`, and more. Plus PEP 723 inline-declared dependencies for anything else: `tiktoken` for accurate token counting, `jsonschema` for schema validation, `pyyaml` for YAML parsing, etc.

**System tools:** `git` commands for history, diff, blame, and log analysis. Filesystem operations for directory scanning and structure validation. Process execution for orchestrating multi-script pipelines.

### The --help Pattern

All scripts use PEP 723 metadata and implement `--help`. This creates a powerful integration pattern for prompts:

Instead of inlining a script's interface details into a prompt, the prompt can simply say:

> Run `scripts/foo.py --help` to understand its inputs and outputs, then invoke appropriately.

This saves tokens in the prompt and keeps a single source of truth for the script's API. When a script's interface changes, the prompt doesn't need updating — `--help` always reflects the current contract.
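
For illustration, a minimal PEP 723 skeleton honoring this contract might look like the sketch below. The `validate-skill` name, its flags, and the stub report are hypothetical, chosen to match the implementation checklist later in this document:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = []
# ///
"""Hypothetical validator skeleton. `--help` is the single source of truth."""
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="validate-skill",
        description="Validate skill structure; emit a JSON report to stdout.",
    )
    parser.add_argument("skill_path", help="path to the skill directory")
    parser.add_argument("-o", "--output", default="-",
                        help="output file, '-' for stdout (default)")
    parser.add_argument("--verbose", action="store_true",
                        help="write diagnostics to stderr")
    return parser


def run(argv: list[str]) -> dict:
    """Parse argv and return a (stub) JSON-ready report."""
    args = build_parser().parse_args(argv)
    return {"script": "validate-skill", "skill_path": args.skill_path,
            "status": "pass", "findings": []}
```

A prompt never needs this interface inlined: `build_parser().format_help()` is exactly what the agent sees when it runs the script with `--help`.
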
---

## Section 2: Script Opportunity Catalog

Each entry follows the format: What it does, Why it matters for workflows, What it checks, What it outputs, and Implementation notes.

---

### 1. Frontmatter Validator

**What:** Validate SKILL.md frontmatter structure and content.

**Why:** Frontmatter drives skill triggering and routing. Malformed frontmatter means the skill never activates or activates incorrectly.

**Checks:**
- `name` exists and is kebab-case
- `description` exists and follows "Use when..." pattern
- `argument-hint` is present if the skill accepts arguments
- No forbidden fields or reserved prefixes
- Optional fields have valid values if present

**Output:** JSON with pass/fail per field, line numbers for errors.

**Implementation:** Python with argparse, no external deps needed. Parse YAML frontmatter between `---` delimiters.

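A minimal sketch of these checks, assuming the simple `key: value` frontmatter the implementation note implies (nested YAML would need a real parser, and the finding shapes here are illustrative):

```python
import re

KEBAB_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")


def parse_frontmatter(text: str) -> dict[str, str]:
    """Extract simple `key: value` pairs between the leading --- delimiters."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


def check(text: str) -> list[dict]:
    """Return findings for missing or malformed required fields."""
    fields = parse_frontmatter(text)
    findings = []
    if "name" not in fields:
        findings.append({"severity": "critical", "issue": "missing name",
                         "fix": "add a kebab-case name field"})
    elif not KEBAB_CASE.match(fields["name"]):
        findings.append({"severity": "high", "issue": "name not kebab-case",
                         "fix": "rename to lowercase-hyphenated form"})
    if not fields.get("description", "").strip():
        findings.append({"severity": "critical", "issue": "missing description",
                         "fix": "add a 'Use when...' description"})
    return findings
```
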
---
### 2. Template Artifact Scanner

**What:** Scan all skill files for orphaned template substitution artifacts.

**Why:** The build process may leave behind `{if-autonomous}`, `{displayName}`, `{skill-name}`, or other placeholders that should have been replaced. These cause runtime confusion.

**Checks:**
- Scan all `.md` files for `{placeholder}` patterns
- Distinguish real config variables (loaded at runtime) from build-time artifacts
- Flag any that don't match known runtime variables

**Output:** JSON with file path, line number, artifact text, and whether it looks intentional.

**Implementation:** Bash script with `grep` and `jq` for JSON output, or Python with regex.

---

### 3. Prompt Frontmatter Comparator

**What:** Compare prompt file frontmatter against the skill's `bmad-skill-manifest.yaml`.

**Why:** Capability misalignment between prompts and the manifest causes routing failures — the skill advertises a capability it can't deliver, or has a prompt that's never reachable.

**Checks:**
- Every prompt file at root has frontmatter with `name`, `description`, `menu-code`
- Prompt `name` matches manifest capability name
- `menu-code` matches manifest entry (case-insensitive)
- Every manifest capability with `type: "prompt"` has a corresponding file
- Flag orphaned prompts not listed in manifest

**Output:** JSON with mismatches, missing files, orphaned prompts.

**Implementation:** Python, reads `bmad-skill-manifest.yaml` and all prompt `.md` files at skill root.

---

### 4. Token Counter

**What:** Compute approximate token counts for each file in a skill.

**Why:** Identify verbose files that need optimization. Catch skills that exceed context window budgets. Understand where token budget is spent across prompts, resources, and the SKILL.md.

**Checks:**
- Total tokens per `.md` file (approximate: chars / 4, or accurate via tiktoken)
- Code block tokens vs prose tokens
- Cumulative token cost of full skill activation (SKILL.md + loaded resources + initial prompt)

**Output:** JSON with file path, token count, percentage of total, and a sorted ranking.

**Implementation:** Python. Use `tiktoken` (PEP 723 dependency) for accuracy, or fall back to character approximation.

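The chars / 4 fallback can be sketched in a few lines (a `tiktoken`-based variant would swap out `approx_tokens`; the ranking shape is illustrative):

```python
from pathlib import Path


def approx_tokens(text: str) -> int:
    """Cheap approximation: roughly 4 characters per token for English markdown."""
    return max(1, len(text) // 4) if text else 0


def rank_files(skill_path: str) -> list[dict]:
    """Rank every .md file in a skill by approximate token cost."""
    rows = []
    for path in sorted(Path(skill_path).rglob("*.md")):
        tokens = approx_tokens(path.read_text(encoding="utf-8"))
        rows.append({"file": str(path), "tokens": tokens})
    total = sum(r["tokens"] for r in rows) or 1
    for row in rows:
        row["pct_of_total"] = round(100 * row["tokens"] / total, 1)
    return sorted(rows, key=lambda r: r["tokens"], reverse=True)
```
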
---
### 5. Dependency Graph Generator

**What:** Map dependencies between the current skill and external skills it invokes.

**Why:** Understand the skill's dependency surface. Catch references to skills that don't exist or have been renamed.

**Checks:**
- Parse `bmad-skill-manifest.yaml` for external skill references
- Parse SKILL.md and prompts for skill invocation patterns (`invoke`, `load`, skill name references)
- Build a dependency list with direction (this skill depends on X, Y depends on this skill)

**Output:** JSON adjacency list or DOT format (GraphViz). Include whether each dependency is required or optional.

**Implementation:** Python, JSON/YAML parsing with regex for invocation pattern detection.

---

### 6. Stage Flow Analyzer

**What:** Parse multi-stage workflow definitions to extract stage ordering, progression conditions, and routing logic.

**Why:** Complex workflows define stages with specific progression conditions. Misaligned stage ordering, missing progression gates, or unreachable stages cause workflow failures that are hard to debug at runtime.

**Checks:**
- Extract all defined stages from SKILL.md and prompt files
- Verify each stage has a clear entry condition and exit/progression condition
- Detect unreachable stages (no path leads to them)
- Detect dead-end stages (no progression and not marked as terminal)
- Validate stage ordering matches the documented flow
- Check for circular stage references

**Output:** JSON with stage list, progression map, and structural warnings.

**Implementation:** Python with regex for stage/condition extraction from markdown.

---

### 7. Config Variable Tracker

**What:** Find all `{var}` references across skill files and verify they are loaded or defined.

**Why:** Unresolved config variables cause runtime errors or produce literal `{var_name}` text in outputs. This is especially common after refactoring or renaming variables.

**Checks:**
- Scan all `.md` files for `{variable_name}` patterns
- Cross-reference against variables loaded by `bmad-init` or defined in config
- Distinguish template variables from literal text in code blocks
- Flag undefined variables and unused loaded variables

**Output:** JSON with variable name, locations where used, and whether it's defined/loaded.

**Implementation:** Python with regex scanning and config file parsing.

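A sketch of the scanning logic, assuming config variables look like `{lower_snake}` or `{kebab-case}` and that fenced code blocks should be skipped as literal text. The loaded-variable set would come from config parsing, which is not shown:

```python
import re

VAR_PATTERN = re.compile(r"\{([a-z][a-z0-9_-]*)\}")


def find_variables(text: str) -> dict[str, list[int]]:
    """Map each {variable} to the line numbers where it appears, skipping fenced code."""
    found: dict[str, list[int]] = {}
    in_code = False
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.strip().startswith("```"):
            in_code = not in_code
            continue
        if in_code:
            continue
        for match in VAR_PATTERN.finditer(line):
            found.setdefault(match.group(1), []).append(lineno)
    return found


def undefined_variables(text: str, loaded: set[str]) -> set[str]:
    """Variables referenced in prose but not loaded/defined anywhere."""
    return set(find_variables(text)) - loaded
```
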
---
### 8. Resource Loading Analyzer

**What:** Map which resources are loaded at which point during skill execution.

**Why:** Resources loaded too early waste context. Resources never loaded are dead weight in the skill directory. Understanding the loading sequence helps optimize token budget.

**Checks:**
- Parse SKILL.md and prompts for `Load resource` / `Read` / file reference patterns
- Map each resource to the stage/prompt where it's first loaded
- Identify resources in `references/` that are never referenced
- Identify resources referenced but missing from `references/`
- Calculate cumulative token cost at each loading point

**Output:** JSON with resource file, loading trigger (which prompt/stage), and orphan/missing flags.

**Implementation:** Python with regex for load-pattern detection and directory scanning.

---

### 9. Subagent Pattern Detector

**What:** Detect whether a skill that processes multiple sources uses the BMad Advanced Context Pattern (subagent delegation).

**Why:** Skills processing 5+ sources without subagent delegation risk context overflow and degraded output quality. This pattern is required for high-source-count workflows.

**Checks:**
- Count distinct source/input references in the skill
- Look for subagent delegation patterns: "DO NOT read sources yourself", "delegate to sub-agents", `/tmp/analysis-` temp file patterns
- Check for sub-agent output templates (50-100 token summaries)
- Flag skills with 5+ sources that lack the pattern

**Output:** JSON with source count, pattern found/missing, and recommendations.

**Implementation:** Python with keyword search and context extraction.

---

### 10. Prompt Chain Validator

**What:** Trace the chain of prompt loads through a workflow and verify every path is valid.

**Why:** Workflows route between prompts based on user intent and stage progression. A broken link in the chain — a `Load foo.md` where `foo.md` doesn't exist — halts the workflow.

**Checks:**
- Extract all `Load *.md` prompt references from SKILL.md and every prompt file
- Verify each referenced prompt file exists
- Build a reachability map from SKILL.md entry points
- Flag prompts that exist but are unreachable from any entry point

**Output:** JSON with prompt chain map, broken links, and unreachable prompts.

**Implementation:** Python with regex extraction and file existence checks.

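The reachability walk might be sketched like this. The `Load foo.md` regex is an assumption about how prompts phrase references; anything the walk returns that does not exist on disk is a broken link, and any on-disk prompt not returned is unreachable:

```python
import re
from pathlib import Path

LOAD_PATTERN = re.compile(r"Load\s+`?([\w./-]+\.md)`?")


def prompt_links(text: str) -> set[str]:
    """Extract `Load foo.md` style references from one prompt file."""
    return set(LOAD_PATTERN.findall(text))


def reachable_prompts(skill_root: str, entry: str = "SKILL.md") -> set[str]:
    """Walk Load references from the entry point; return every referenced name."""
    root = Path(skill_root)
    seen: set[str] = set()
    frontier = [entry]
    while frontier:
        name = frontier.pop()
        if name in seen:
            continue
        seen.add(name)
        path = root / name
        if path.exists():
            frontier.extend(prompt_links(path.read_text(encoding="utf-8")))
    return seen
```
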
---
### 11. Skill Health Check (Composite)

**What:** Run all available validation scripts and aggregate results into a single report.

**Why:** One command to assess overall skill quality. Useful as a build gate or pre-commit check.

**Composition:** Runs scripts 1-10 in sequence, collects JSON outputs, aggregates findings by severity.

**Output:** Unified JSON health report with per-script results and overall status.

**Implementation:** Bash script orchestrating Python scripts, `jq` for JSON aggregation. Or a Python orchestrator using `subprocess`.

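A Python-orchestrator sketch of the composite check. The script list and the aggregation policy are illustrative; a Bash plus `jq` version would be equally valid:

```python
import json
import subprocess
import sys


def run_checks(script_paths: list[str], skill_path: str) -> dict:
    """Run each validator, collect its JSON report, aggregate severity counts."""
    reports = []
    totals = {"critical": 0, "high": 0, "medium": 0, "low": 0}
    for script in script_paths:
        proc = subprocess.run(
            [sys.executable, script, skill_path],
            capture_output=True, text=True,
        )
        try:
            report = json.loads(proc.stdout)
        except json.JSONDecodeError:
            # A validator that emits invalid JSON is itself a critical finding.
            report = {"script": script, "status": "fail",
                      "findings": [{"severity": "critical",
                                    "issue": "script produced invalid JSON"}]}
        reports.append(report)
        for finding in report.get("findings", []):
            sev = finding.get("severity")
            if sev in totals:
                totals[sev] += 1
    status = "fail" if totals["critical"] or totals["high"] else "pass"
    return {"script": "skill-health-check", "status": status,
            "summary": totals, "reports": reports}
```
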
---
### 12. Skill Comparison Validator

**What:** Compare two versions of a skill (or two skills) for structural differences.

**Why:** Validate that changes during iteration didn't break structure. Useful for reviewing edits, comparing before/after optimization, or diffing a skill against a template.

**Checks:**
- Frontmatter changes
- Capability additions/removals in manifest
- New or removed prompt files
- Token count changes per file
- Stage flow changes (for workflows)
- Resource additions/removals

**Output:** JSON with categorized changes and severity assessment.

**Implementation:** Bash with `git diff` or file comparison, Python for structural analysis.

---

## Section 3: Script Output Standard and Implementation Checklist

### Script Output Standard

All scripts MUST output structured JSON for agent consumption:

```json
{
  "script": "script-name",
  "version": "1.0.0",
  "skill_path": "/path/to/skill",
  "timestamp": "2025-03-08T10:30:00Z",
  "status": "pass|fail|warning",
  "findings": [
    {
      "severity": "critical|high|medium|low|info",
      "category": "structure|security|performance|consistency",
      "location": {"file": "SKILL.md", "line": 42},
      "issue": "Clear description",
      "fix": "Specific action to resolve"
    }
  ],
  "summary": {
    "total": 0,
    "critical": 0,
    "high": 0,
    "medium": 0,
    "low": 0
  }
}
```

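A small helper that emits this shape might look like the following sketch. The status-derivation policy (fail on critical/high, warn on medium/low) is an assumption, not a documented rule:

```python
import json
from datetime import datetime, timezone


def make_report(script: str, skill_path: str, findings: list[dict]) -> dict:
    """Assemble a report in the shared output standard, deriving status and summary."""
    counts = {sev: 0 for sev in ("critical", "high", "medium", "low")}
    for finding in findings:
        sev = finding["severity"]
        if sev in counts:
            counts[sev] += 1
    if counts["critical"] or counts["high"]:
        status = "fail"
    elif counts["medium"] or counts["low"]:
        status = "warning"
    else:
        status = "pass"
    return {
        "script": script,
        "version": "1.0.0",
        "skill_path": skill_path,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "status": status,
        "findings": findings,
        "summary": {"total": len(findings), **counts},
    }
```
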
### Implementation Checklist

When creating new validation scripts:

- [ ] Uses `--help` for documentation (PEP 723 metadata)
- [ ] Accepts skill path as argument
- [ ] `-o` flag for output file (defaults to stdout)
- [ ] Writes diagnostics to stderr
- [ ] Returns meaningful exit codes: 0=pass, 1=fail, 2=error
- [ ] Includes `--verbose` flag for debugging
- [ ] Self-contained (PEP 723 for Python dependencies)
- [ ] No interactive prompts
- [ ] No network dependencies
- [ ] Outputs valid JSON to stdout
- [ ] Has tests in `scripts/tests/` subfolder

@@ -0,0 +1,218 @@

# Skill Authoring Best Practices

Practical patterns for writing effective BMad skills. For field definitions and description format, see `references/standard-fields.md`. For quality dimensions, see `references/quality-dimensions.md`.

## Core Principle: Informed Autonomy

Give the executing agent enough context to make good judgment calls — not just enough to follow steps. The right test for every piece of content is: "Would the agent make *better decisions* with this context?" If yes, keep it. If it's genuinely redundant or mechanical, cut it.

## Freedom Levels

Match specificity to task fragility:

| Freedom | When to Use | Example |
|---------|-------------|---------|
| **High** (text instructions) | Multiple valid approaches, context-dependent | "Analyze structure, check for issues, suggest improvements" |
| **Medium** (pseudocode/templates) | Preferred pattern exists, some variation OK | `def generate_report(data, format="markdown"):` |
| **Low** (exact scripts) | Fragile operations, consistency critical | `python scripts/migrate.py --verify --backup` (do not modify) |

**Analogy**: Narrow bridge with cliffs = low freedom. Open field = high freedom.

## Common Patterns

### Template Pattern

**Strict** (must follow exactly):
````markdown
## Report structure
ALWAYS use this template:
```markdown
# [Title]
## Summary
[One paragraph]
## Findings
- Finding 1 with data
```
````

**Flexible** (adapt as needed):
````markdown
Here's a sensible default, use judgment:
```markdown
# [Title]
## Summary
[Overview]
```
Adapt based on context.
````

### Examples Pattern

Input/output pairs show expected style:
````markdown
## Commit message format
**Example 1:**
Input: "Added user authentication with JWT tokens"
Output: `feat(auth): implement JWT-based authentication`
````

### Conditional Workflow

```markdown
1. Determine modification type:
   **Creating new?** → Creation workflow
   **Editing existing?** → Editing workflow
```

### Soft Gate Elicitation

For guided/interactive workflows, use "anything else?" soft gates at natural transition points instead of hard menus. This pattern draws out information users didn't know they had:

```markdown
## After completing a discovery section:
Present what you've captured so far, then:
"Anything else you'd like to add, or shall we move on?"
```

**Why it works:** Users almost always remember one more thing when given a graceful exit ramp rather than a hard stop. The low-pressure phrasing invites contribution without demanding it. This consistently produces richer, more complete artifacts than rigid section-by-section questioning.

**When to use:** Any guided workflow with collaborative discovery — product briefs, requirements gathering, design reviews, brainstorming synthesis. Use at every natural transition between topics or sections.

**When NOT to use:** Autonomous/headless execution, or steps where additional input would cause scope creep rather than enrich the output.

### Intent-Before-Ingestion

Never scan artifacts, documents, or project context until you understand WHY the user is here. Scanning without purpose produces noise, not signal.

```markdown
## On activation:
1. Greet and understand intent — what is this about?
2. Accept whatever inputs the user offers
3. Ask if they have additional documents or context
4. ONLY THEN scan artifacts, scoped to relevance
```

**Why it works:** Without knowing what the user wants, you can't judge what's relevant in a 100-page research doc vs a brainstorming report. Intent gives you the filter. Without it, scanning is a fool's errand.

**When to use:** Any workflow that ingests documents, project context, or external data as part of its process.

### Capture-Don't-Interrupt

When users provide information beyond the current scope (e.g., dropping requirements during a product brief, mentioning platforms during vision discovery), capture it silently for later use rather than redirecting or stopping them.

```markdown
## During discovery:
If user provides out-of-scope but valuable info:
- Capture it (notes, structured aside, addendum bucket)
- Don't interrupt their flow
- Use it later in the appropriate stage or output
```

**Why it works:** Users in creative flow will share their best insights unprompted. Interrupting to say "we'll cover that later" kills momentum and may lose the insight entirely. Capture everything, distill later.

**When to use:** Any collaborative discovery workflow where the user is brainstorming, explaining, or brain-dumping.

### Dual-Output: Human Artifact + LLM Distillate

Any artifact-producing workflow can output two complementary documents: a polished human-facing artifact AND a token-conscious, structured distillate optimized for downstream LLM consumption.

```markdown
## Output strategy:
1. Primary: Human-facing document (exec summary, report, brief)
2. Optional: LLM distillate — dense, structured, token-efficient
   - Captures overflow that doesn't belong in the human doc
   - Rejected ideas (so downstream doesn't re-propose them)
   - Detail bullets with just enough context to stand alone
   - Designed to be loaded as context for the next workflow
```

**Why it works:** Human docs are concise by design — they can't carry all the detail surfaced during discovery. But that detail has value for downstream LLM workflows (PRD creation, architecture design, etc.). The distillate bridges the gap without bloating the primary artifact.

**When to use:** Any workflow producing documents that feed into subsequent LLM workflows. The distillate is always optional — offered to the user, not forced.

### Parallel Review Lenses

Before finalizing any artifact, fan out multiple reviewers with different perspectives to catch blind spots the builder/facilitator missed.

```markdown
## Near completion:
Fan out 2-3 review subagents in parallel:
- Skeptic: "What's missing? What assumptions are untested?"
- Opportunity Spotter: "What adjacent value? What angles?"
- Contextual Reviewer: LLM picks the best third lens
  (e.g., "regulatory risk" for healthtech, "DX critic" for devtools)

Graceful degradation: If subagents unavailable,
main agent does a single critical self-review pass.
```

**Why it works:** A single perspective — even an expert one — has blind spots. Multiple lenses surface issues and opportunities that no single reviewer would catch. The contextually-chosen third lens ensures domain-specific concerns aren't missed.

**When to use:** Any workflow producing a significant artifact (briefs, PRDs, designs, architecture docs). The review step is lightweight but high-value.

### Three-Mode Architecture (Guided / Yolo / Headless)

For interactive workflows, offer three execution modes that match different user contexts:

| Mode | Trigger | Behavior |
|------|---------|----------|
| **Guided** | Default | Section-by-section with soft gates. Drafts from what it knows, questions what it doesn't. |
| **Yolo** | `--yolo` or "just draft it" | Ingests everything, drafts complete artifact upfront, then walks user through refinement. |
| **Headless** | `--headless` or `-H` | Fully autonomous: takes inputs, produces artifact, no interaction. |

**Why it works:** Not every user wants the same experience. A first-timer needs guided discovery. A repeat user with clear inputs wants yolo. A pipeline wants headless. Same workflow, three entry points.

**When to use:** Any facilitative workflow that produces an artifact. Not all workflows need all three — but considering them during design prevents painting yourself into a single interaction model.

### Graceful Degradation

Every subagent-dependent feature should have a fallback path. If the platform doesn't support parallel subagents (or subagents at all), the workflow must still progress.

```markdown
## Subagent-dependent step:
Try: Fan out subagents in parallel
Fallback: Main agent performs the work sequentially
Never: Block the workflow because a subagent feature is unavailable
```

**Why it works:** Skills run across different platforms, models, and configurations. A skill that hard-fails without subagents is fragile. A skill that gracefully falls back to sequential processing is robust everywhere.

**When to use:** Any workflow that uses subagents for research, review, or parallel processing.

### Verifiable Intermediate Outputs

For complex tasks: plan → validate → execute → verify.

1. Analyze inputs
2. **Create** `changes.json` with planned updates
3. **Validate** with script before executing
4. Execute changes
5. Verify output

Benefits: catches errors early, machine-verifiable, reversible planning.

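Step 3's validation gate can be sketched as below, assuming a hypothetical `changes.json` shape holding a `changes` list of `{action, file}` entries (the action vocabulary is illustrative):

```python
import json
from pathlib import Path


def validate_plan(plan_path: str, project_root: str) -> list[str]:
    """Check a changes.json plan before executing: actions are known, targets exist."""
    errors = []
    plan = json.loads(Path(plan_path).read_text(encoding="utf-8"))
    for i, change in enumerate(plan.get("changes", [])):
        action = change.get("action")
        if action not in {"create", "edit", "delete"}:
            errors.append(f"change {i}: unknown action {action!r}")
        target = Path(project_root) / change.get("file", "")
        # Edits and deletes require the target to already exist.
        if action in {"edit", "delete"} and not target.exists():
            errors.append(f"change {i}: target does not exist: {change.get('file')}")
    return errors
```

An empty error list means the plan is safe to execute; anything else aborts before a single file is touched.
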
## Writing Guidelines

- **Consistent terminology** — choose one term per concept, stick to it
- **Third person** in descriptions — "Processes files" not "I help process files"
- **Descriptive file names** — `form_validation_rules.md` not `doc2.md`
- **Forward slashes** in all paths — cross-platform
- **One level deep** for reference files — SKILL.md → reference.md, never SKILL.md → A.md → B.md
- **TOC for long files** — add a table of contents for files over 100 lines

## Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Too many options upfront | One default with escape hatch for edge cases |
| Deep reference nesting (A→B→C) | Keep references 1 level from SKILL.md |
| Inconsistent terminology | Choose one term per concept |
| Vague file names | Name by content, not sequence |
| Scripts that classify meaning via regex | Intelligence belongs in prompts, not scripts |

## Scripts in Skills

- **Execute vs reference** — "Run `analyze.py` to extract fields" (execute) vs "See `analyze.py` for the algorithm" (read)
- **Document constants** — explain why `TIMEOUT = 30`, not just what
- **PEP 723 for Python** — self-contained scripts with inline dependency declarations
- **MCP tools** — use fully qualified names: `ServerName:tool_name`

@@ -0,0 +1,121 @@

# Standard Workflow/Skill Fields

## Common Fields (All Types)

| Field | Description | Example |
|-------|-------------|---------|
| `name` | Full skill name (kebab-case) | `bmad-workflow-builder`, `bmad-validate-json` |
| `skillName` | Functional name (kebab-case) | `workflow-builder`, `validate-json` |
| `description` | [5-8 word summary]. [Use when user says 'X' or 'Y'.] | "Builds workflows through conversational discovery. Use when the user requests to 'build a workflow' or 'modify a workflow'." |
| `role-guidance` | Brief expertise primer | "Act as a senior DevOps engineer" |
| `module-code` | Module code (if module-based) | `bmb`, `cis` |

## Simple Utility Fields

| Field | Description | Example |
|-------|-------------|---------|
| `input-format` | What it accepts | JSON file path, stdin text |
| `output-format` | What it returns | Validated JSON, error report |
| `standalone` | Opts out of bmad-init? | true/false |
| `composability` | How other skills use it | "Called by quality scanners for validation" |

## Simple Workflow Fields

| Field | Description | Example |
|-------|-------------|---------|
| `steps` | Numbered inline steps | "1. Load config 2. Read input 3. Process" |
| `tools-used` | CLIs/tools/scripts | gh, jq, python scripts |
| `output` | What it produces | PR, report, file |

## Complex Workflow Fields

| Field | Description | Example |
|-------|-------------|---------|
| `stages` | Named numbered stages | "01-discover, 02-plan, 03-build" |
| `progression-conditions` | When stages complete | "User approves outline" |
| `headless-mode` | Supports autonomous? | true/false |
| `config-variables` | Beyond core vars | `planning_artifacts`, `output_folder` |
| `output-artifacts` | What it creates (output-location) | "PRD document", "agent skill" |

## Overview Section Format

The Overview is the first section after the title — it primes the AI for everything that follows.

**3-part formula:**
1. **What** — What this workflow/skill does
2. **How** — How it works (approach, key stages)
3. **Why/Outcome** — Value delivered, quality standard

**Templates by skill type:**

**Complex Workflow:**
```markdown
This skill helps you {outcome} through {approach}. Act as {role-guidance}, guiding users through {key stages}. Your output is {deliverable}.
```

**Simple Workflow:**
```markdown
This skill {what it does} by {approach}. Act as {role-guidance}. Use when {trigger conditions}. Produces {output}.
```

**Simple Utility:**
```markdown
This skill {what it does}. Use when {when to use}. Returns {output format} with {key feature}.
```

## SKILL.md Description Format

The frontmatter `description` is the PRIMARY trigger mechanism — it determines when the AI invokes this skill. Most BMad skills are **explicitly invoked** by name (`/skill-name` or direct request), so descriptions should be conservative to prevent accidental triggering.

**Format:** Two parts, one sentence each:
```
[What it does in 5-8 words]. [Use when user says 'specific phrase' or 'specific phrase'.]
```

**The trigger clause** uses one of these patterns depending on the skill's activation style:
- **Explicit invocation (default):** `Use when the user requests to 'create a PRD' or 'edit an existing PRD'.` — Quotes around specific phrases the user would actually say. Conservative — won't fire on casual mentions.
- **Organic/reactive:** `Trigger when code imports anthropic SDK, or user asks to use Claude API.` — For lightweight skills that should activate on contextual signals, not explicit requests.

**Examples:**

Good (explicit): `Builds workflows and skills through conversational discovery. Use when the user requests to 'build a workflow', 'modify a workflow', or 'quality check workflow'.`

Good (organic): `Initializes BMad project configuration. Trigger when any skill needs module-specific configuration values, or when setting up a new BMad project.`

Bad: `Helps with PRDs and product requirements.` — Too vague, would trigger on any mention of PRD even in passing conversation.

Bad: `Use on any mention of workflows, building, or creating things.` — Over-broad, would hijack unrelated conversations.

**Default to explicit invocation** unless the user specifically describes organic/reactive activation during discovery.

## Role Guidance Format

Every generated workflow SKILL.md includes a brief role statement in the Overview or as a standalone line:
```markdown
Act as {role-guidance}. {brief expertise/approach description}.
```
This provides quick prompt priming for expertise and tone. Workflows may also use full Identity/Communication Style/Principles sections when personality serves the workflow's purpose.

## Path Rules

Only use `{project-root}` for `_bmad` paths.

### Skill-Internal Files
Use bare relative paths (no prefix):
- `references/reference.md`
- `01-discover.md`
- `scripts/validate.py`

### Project `_bmad` Paths
Use `{project-root}/_bmad/...`:
- `{project-root}/_bmad/planning/prd.md`
- `{project-root}/_bmad/_memory/{skillName}-sidecar/`

### Config Variables
Use directly — they already contain `{project-root}` in their resolved values:
- `{output_folder}/file.md`
- `{planning_artifacts}/prd.md`

**Never:**
- `{project-root}/{output_folder}/file.md` (WRONG — double-prefix, config var already has path)
- `_bmad/planning/prd.md` (WRONG — bare `_bmad` must have `{project-root}` prefix)

@@ -0,0 +1,85 @@

# Template Substitution Rules

When building the workflow/skill, you MUST apply these conditional blocks to the templates:

## Skill Type Conditionals

### Complex Workflow

- `{if-complex-workflow}` ... `{/if-complex-workflow}` → Keep the content inside
- `{if-simple-workflow}` ... `{/if-simple-workflow}` → Remove the entire block including markers
- `{if-simple-utility}` ... `{/if-simple-utility}` → Remove the entire block including markers

### Simple Workflow

- `{if-complex-workflow}` ... `{/if-complex-workflow}` → Remove the entire block including markers
- `{if-simple-workflow}` ... `{/if-simple-workflow}` → Keep the content inside
- `{if-simple-utility}` ... `{/if-simple-utility}` → Remove the entire block including markers

### Simple Utility

- `{if-complex-workflow}` ... `{/if-complex-workflow}` → Remove the entire block including markers
- `{if-simple-workflow}` ... `{/if-simple-workflow}` → Remove the entire block including markers
- `{if-simple-utility}` ... `{/if-simple-utility}` → Keep the content inside


## Module Conditionals

### For Module-Based Skills

- `{if-module}` ... `{/if-module}` → Keep the content inside
- `{if-standalone}` ... `{/if-standalone}` → Remove the entire block including markers
- `{module-code-or-empty}` → Replace with module code (e.g., `bmb-`)

### For Standalone Skills

- `{if-module}` ... `{/if-module}` → Remove the entire block including markers
- `{if-standalone}` ... `{/if-standalone}` → Keep the content inside
- `{module-code-or-empty}` → Empty string

## bmad-init Conditional

### Uses bmad-init (default)

- `{if-bmad-init}` ... `{/if-bmad-init}` → Keep the content inside

### Opted out of bmad-init (standalone utilities only)

- `{if-bmad-init}` ... `{/if-bmad-init}` → Remove the entire block including markers

## Feature Conditionals

### Headless Mode

- `{if-headless}` ... `{/if-headless}` → Keep if the skill supports headless/autonomous mode, otherwise remove

### Creates Documents

- `{if-creates-docs}` ... `{/if-creates-docs}` → Keep if the skill creates output documents, otherwise remove

### Has Stages (Complex Workflow)

- `{if-stages}` ... `{/if-stages}` → Keep if the skill has numbered stage prompts, otherwise remove

### Has Scripts

- `{if-scripts}` ... `{/if-scripts}` → Keep if the skill has a `scripts/` directory, otherwise remove

## External Skills

- `{if-external-skills}` ... `{/if-external-skills}` → Keep if the skill uses external skills, otherwise remove
- `{external-skills-list}` → Replace with a bulleted list of exact skill names:

```markdown
- `bmad-skill-name` — Description
```


## Frontmatter Placeholders

Replace all frontmatter placeholders:

- `{module-code-or-empty}` → Module code prefix (e.g., `bmb-`) or empty
- `{skill-name}` → Skill functional name (kebab-case)
- `{skill-description}` → Full description with trigger phrases
- `{role-guidance}` → Brief role/expertise statement

## Content Placeholders

Replace all content placeholders with skill-specific values:

- `{overview-template}` → Overview paragraph following the 3-part formula (What, How, Why/Outcome)
- `{stage-N-name}` → Name of numbered stage
- `{stage-N-purpose}` → Purpose description of numbered stage
- `{progression-condition}` → When this stage completes

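Placeholder substitution is plain string replacement. A minimal sketch — the substitution values here are illustrative, not real skill metadata:

```python
# Hypothetical example values; real values come from discovery with the user.
substitutions = {
    "{module-code-or-empty}": "bmb-",
    "{skill-name}": "workflow-builder",
    "{skill-description}": "Builds workflows through conversational discovery.",
    "{role-guidance}": "an expert workflow architect",
}

def fill(template: str, subs: dict[str, str]) -> str:
    """Replace every known placeholder; unknown placeholders pass through."""
    for placeholder, value in subs.items():
        template = template.replace(placeholder, value)
    return template

print(fill("name: {module-code-or-empty}{skill-name}", substitutions))
# -> "name: bmb-workflow-builder"
```

Apply conditionals first, then placeholders, so removed blocks never leave stray substituted text behind.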

## Path References

All generated skills use these paths:

- `bmad-manifest.json` — Module metadata (if module-based)
- `references/{reference}.md` — Reference documents loaded on demand
- `01-{stage}.md` — Numbered stage prompts at the skill root (complex workflows)
- `scripts/` — Python/shell scripts for deterministic operations (if needed)

# Universal Scanner Output Schema

All quality scanners — both LLM-based and deterministic lint scripts — MUST produce output conforming to this schema. No exceptions.

## Top-Level Structure

```json
{
  "scanner": "scanner-name",
  "skill_path": "{path}",
  "findings": [],
  "assessments": {},
  "summary": {
    "total_findings": 0,
    "by_severity": {},
    "assessment": "1-2 sentence overall assessment"
  }
}
```

| Key | Type | Required | Description |
|-----|------|----------|-------------|
| `scanner` | string | yes | Scanner identifier (e.g., `"workflow-integrity"`, `"prompt-craft"`) |
| `skill_path` | string | yes | Absolute path to the skill being scanned |
| `findings` | array | yes | ALL items — issues, strengths, suggestions, opportunities. Always an array, never an object |
| `assessments` | object | yes | Scanner-specific structured analysis (cohesion tables, health metrics, user journeys, etc.). Free-form per scanner |
| `summary` | object | yes | Aggregate counts and a brief overall assessment |

## Finding Schema (7 fields)

Every item in `findings[]` has exactly these 7 fields:

```json
{
  "file": "SKILL.md",
  "line": 42,
  "severity": "high",
  "category": "frontmatter",
  "title": "Brief headline of the finding",
  "detail": "Full context — rationale, what was observed, why it matters",
  "action": "What to do about it — fix, suggestion, or script to create"
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `file` | string | yes | Relative path to the affected file (e.g., `"SKILL.md"`, `"scripts/build.py"`). Empty string if not file-specific |
| `line` | int\|null | no | Line number (1-based). `null` or `0` if not line-specific |
| `severity` | string | yes | One of the severity values below |
| `category` | string | yes | Scanner-specific category (e.g., `"frontmatter"`, `"token-waste"`, `"lint"`) |
| `title` | string | yes | Brief headline (1 sentence). This is the primary display text |
| `detail` | string | yes | Full context — fold rationale, observation, impact, and nuance into one narrative. Empty string if the title is self-explanatory |
| `action` | string | yes | What to do — fix instruction, suggestion, or script to create. Empty string for strengths/notes |

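The exactly-7-fields rule is easy to enforce mechanically. A minimal sketch (`check_finding_fields` is a hypothetical helper, not a shipped lint):

```python
REQUIRED_FIELDS = {"file", "line", "severity", "category", "title", "detail", "action"}

def check_finding_fields(finding: dict) -> list[str]:
    """Flag missing or extra keys on one findings[] item."""
    missing = REQUIRED_FIELDS - set(finding)
    extra = set(finding) - REQUIRED_FIELDS
    errors = []
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if extra:
        errors.append(f"extra fields (move to assessments): {sorted(extra)}")
    return errors
```

Running this over every item in `findings[]` before emitting output catches the most common schema drift: renamed keys and scanner-specific extras that belong in `assessments`.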
## Severity Values (complete enum)

```
critical | high | medium | low | high-opportunity | medium-opportunity | low-opportunity | suggestion | strength | note
```

**Routing rules:**

- `critical`, `high` → "Truly Broken" section in the report
- `medium`, `low` → category-specific findings sections
- `high-opportunity`, `medium-opportunity`, `low-opportunity` → enhancement/creative sections
- `suggestion` → creative suggestions section
- `strength` → strengths section (positive observations worth preserving)
- `note` → informational observations, also routed to strengths

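The routing rules reduce to a lookup table. A sketch — the section keys here are illustrative labels, not the report renderer's actual identifiers — where an unknown severity raises `KeyError`, which doubles as enum validation:

```python
ROUTING = {
    "critical": "truly-broken",
    "high": "truly-broken",
    "medium": "category-findings",
    "low": "category-findings",
    "high-opportunity": "enhancements",
    "medium-opportunity": "enhancements",
    "low-opportunity": "enhancements",
    "suggestion": "creative-suggestions",
    "strength": "strengths",
    "note": "strengths",
}

def route(findings: list[dict]) -> dict[str, list[dict]]:
    """Group findings into report sections by severity."""
    sections: dict[str, list[dict]] = {}
    for f in findings:
        sections.setdefault(ROUTING[f["severity"]], []).append(f)
    return sections
```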
## Assessment Sub-Structure Contracts

The `assessments` object is free-form per scanner, but the HTML report renderer expects specific shapes for specific keys. These are the canonical formats.

### user_journeys (enhancement-opportunities scanner)

**Always an array of objects. Never an object keyed by persona.**

```json
"user_journeys": [
  {
    "archetype": "first-timer",
    "summary": "Brief narrative of this user's experience",
    "friction_points": ["moment 1", "moment 2"],
    "bright_spots": ["what works well"]
  }
]
```

### autonomous_assessment (enhancement-opportunities scanner)

```json
"autonomous_assessment": {
  "potential": "headless-ready|easily-adaptable|partially-adaptable|fundamentally-interactive",
  "hitl_points": 3,
  "auto_resolvable": 2,
  "needs_input": 1,
  "notes": "Brief assessment"
}
```

### top_insights (enhancement-opportunities scanner)

**Always an array of objects with title/detail/action (the same shape as findings, but without file/line/severity/category).**

```json
"top_insights": [
  {
    "title": "The key observation",
    "detail": "Why it matters",
    "action": "What to do about it"
  }
]
```

### cohesion_analysis (skill-cohesion / agent-cohesion scanners)

```json
"cohesion_analysis": {
  "dimension_name": { "score": "strong|moderate|weak", "notes": "explanation" }
}
```

Dimension names are scanner-specific (e.g., `stage_flow_coherence`, `persona_alignment`). The report renderer iterates all keys and renders a table row per dimension.

### skill_identity / agent_identity (cohesion scanners)

```json
"skill_identity": {
  "name": "skill-name",
  "purpose_summary": "Brief characterization",
  "primary_outcome": "What this skill produces"
}
```

### skillmd_assessment (prompt-craft scanner)

```json
"skillmd_assessment": {
  "overview_quality": "appropriate|excessive|missing",
  "progressive_disclosure": "good|needs-extraction|monolithic",
  "notes": "brief assessment"
}
```

The agent variant adds `"persona_context": "appropriate|excessive|missing"`.

### prompt_health (prompt-craft scanner)

```json
"prompt_health": {
  "total_prompts": 3,
  "with_config_header": 2,
  "with_progression": 1,
  "self_contained": 3
}
```

### skill_understanding (enhancement-opportunities scanner)

```json
"skill_understanding": {
  "purpose": "what this skill does",
  "primary_user": "who it's for",
  "assumptions": ["assumption 1", "assumption 2"]
}
```

### stage_summary (workflow-integrity scanner)

```json
"stage_summary": {
  "total_stages": 0,
  "missing_stages": [],
  "orphaned_stages": [],
  "stages_without_progression": [],
  "stages_without_config_header": []
}
```

### metadata (structure scanner)

Free-form key-value pairs. Rendered as a metadata block.

### script_summary (scripts lint)

```json
"script_summary": {
  "total_scripts": 5,
  "by_type": {"python": 3, "shell": 2},
  "missing_tests": ["script1.py"]
}
```

### existing_scripts (script-opportunities scanner)

Array of strings (script paths that already exist).

## Complete Example

```json
{
  "scanner": "workflow-integrity",
  "skill_path": "/path/to/skill",
  "findings": [
    {
      "file": "SKILL.md",
      "line": 12,
      "severity": "high",
      "category": "frontmatter",
      "title": "Missing required 'version' field in frontmatter",
      "detail": "The SKILL.md frontmatter is missing the version field. This prevents the manifest generator from producing correct output and breaks version-aware consumers.",
      "action": "Add 'version: 1.0.0' to the YAML frontmatter block"
    },
    {
      "file": "build-process.md",
      "line": null,
      "severity": "strength",
      "category": "design",
      "title": "Excellent progressive disclosure pattern in build stages",
      "detail": "Each stage provides exactly the context needed without front-loading information. This reduces token waste and improves LLM comprehension.",
      "action": ""
    },
    {
      "file": "SKILL.md",
      "line": 45,
      "severity": "medium-opportunity",
      "category": "experience-gap",
      "title": "No guidance for first-time users unfamiliar with build workflows",
      "detail": "A user encountering this skill for the first time has no onboarding path. The skill assumes familiarity with stage-based workflows, which creates friction for newcomers.",
      "action": "Add a 'Getting Started' section or link to onboarding documentation"
    }
  ],
  "assessments": {
    "stage_summary": {
      "total_stages": 7,
      "missing_stages": [],
      "orphaned_stages": ["cleanup"]
    }
  },
  "summary": {
    "total_findings": 3,
    "by_severity": {"high": 1, "medium-opportunity": 1, "strength": 1},
    "assessment": "Well-structured skill with one high-severity frontmatter gap. Progressive disclosure is a notable strength."
  }
}
```

## DO NOT

- **DO NOT** rename fields. Use exactly: `file`, `line`, `severity`, `category`, `title`, `detail`, `action`
- **DO NOT** use `issues` instead of `findings` — the array is always called `findings`
- **DO NOT** add fields to findings beyond the 7 defined above. Put scanner-specific structured data in `assessments`
- **DO NOT** use separate arrays for strengths, suggestions, or opportunities — they go in `findings` with appropriate severity values
- **DO NOT** change `user_journeys` from an array to an object keyed by persona name
- **DO NOT** restructure assessment sub-objects — use the shapes defined above
- **DO NOT** put free-form narrative data into `assessments` — that belongs in the `detail` fields of findings or in `summary.assessment`

## Self-Check Before Output

Before writing your JSON output, verify:

1. Is your array called `findings` (not `issues`, not `opportunities`)?
2. Does every item in `findings` have all 7 fields: `file`, `line`, `severity`, `category`, `title`, `detail`, `action`?
3. Are strengths in `findings` with `severity: "strength"` (not in a separate `strengths` array)?
4. Are suggestions in `findings` with `severity: "suggestion"` (not in a separate `creative_suggestions` array)?
5. Is `assessments` an object containing structured analysis data (not items that belong in findings)?
6. Is `user_journeys` an array of objects (not an object keyed by persona)?
7. Do `top_insights` items use `title`/`detail`/`action` (not `insight`/`suggestion`/`why_it_matters`)?
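
Several of these checks can run mechanically before output. A minimal sketch covering items 1, 2, 5, and 6 (`self_check` is a hypothetical helper, not a shipped script):

```python
def self_check(report: dict) -> list[str]:
    """Mechanical version of checklist items 1, 2, 5, and 6."""
    errors = []
    if "findings" not in report:
        errors.append("array must be called 'findings'")
    required = {"file", "line", "severity", "category", "title", "detail", "action"}
    for i, finding in enumerate(report.get("findings", [])):
        missing = required - set(finding)
        if missing:
            errors.append(f"findings[{i}] missing {sorted(missing)}")
    if not isinstance(report.get("assessments", {}), dict):
        errors.append("'assessments' must be an object")
    journeys = report.get("assessments", {}).get("user_journeys")
    if journeys is not None and not isinstance(journeys, list):
        errors.append("'user_journeys' must be an array, not an object")
    return errors
```

Items 3, 4, and 7 concern where content was placed rather than shape, so they remain judgment checks for the scanner itself.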