docs: update all documentation and add AI tooling configs

- Rewrite README.md with current architecture, features, and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Manifest Reference

Every BMad skill has a `bmad-manifest.json` at its root. This is the unified format for agents, workflows, and simple skills.

## File Location

```
{skillname}/
├── SKILL.md             # name, description, persona content
├── bmad-manifest.json   # Capabilities, module integration, persona distillate
└── ...
```

## SKILL.md Frontmatter (Minimal)

```yaml
---
name: bmad-{modulecode}-{skillname}
description: [5-8 word summary]. [Use when user says 'X' or 'Y'.]
---
```
## bmad-manifest.json

**NOTE:** Do NOT include `$schema` in generated manifests. The schema is used by validation tooling only — it is not part of the delivered skill.

```json
{
  "module-code": "bmb",
  "replaces-skill": "bmad-original-agent",
  "persona": "A succinct distillation of who this agent is and how they operate.",
  "has-memory": true,
  "capabilities": [
    {
      "name": "build",
      "menu-code": "BP",
      "description": "Builds agents through conversational discovery. Outputs to skill folder.",
      "supports-headless": true,
      "prompt": "build-process.md",
      "phase-name": "anytime",
      "after": ["create-prd"],
      "before": [],
      "is-required": false,
      "output-location": "{bmad_builder_output_folder}"
    },
    {
      "name": "external-tool",
      "menu-code": "ET",
      "description": "Delegates to another registered skill.",
      "supports-headless": false,
      "skill-name": "bmad-some-other-skill"
    }
  ]
}
```
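The three capability shapes in this example (a `prompt` entry, a `skill-name` entry, or neither) can be told apart mechanically. A minimal sketch, assuming capability dicts shaped like the JSON above:

```python
# Sketch: classify a manifest capability entry by which routing field it carries.
# Field names follow the example manifest above.

def capability_flavor(cap: dict) -> str:
    """Return 'internal', 'external', or 'direct' for one capability entry."""
    if "prompt" in cap and "skill-name" in cap:
        raise ValueError(f"capability {cap.get('name')!r} sets both 'prompt' and 'skill-name'")
    if "prompt" in cap:
        return "internal"   # routed to a prompt file at skill root
    if "skill-name" in cap:
        return "external"   # delegates to another registered skill
    return "direct"         # SKILL.md handles it directly
```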
## Field Reference

### Top-Level Fields

| Field | Type | Required | Purpose |
|-------|------|----------|---------|
| `module-code` | string | If module | Short code for namespacing (e.g., `bmb`, `cis`) |
| `replaces-skill` | string | No | Registered skill name this replaces. Inherits metadata during bmad-init. |
| `persona` | string | Agents only | Succinct distillation of the agent's essence. **Presence = this is an agent.** |
| `has-memory` | boolean | No | Whether state persists across sessions via sidecar memory |

### Capability Fields

| Field | Type | Required | Purpose |
|-------|------|----------|---------|
| `name` | string | Yes | Kebab-case identifier |
| `menu-code` | string | Yes | 2-3 uppercase letter shortcut for menus |
| `description` | string | Yes | What it does and when to suggest it |
| `supports-headless` | boolean | No | Can run without user interaction |
| `prompt` | string | No | Relative path to prompt file (internal capability) |
| `skill-name` | string | No | Registered name of external skill (external capability) |
| `phase-name` | string | No | Module phase this belongs to |
| `after` | array | No | Skill names that should run before this capability |
| `before` | array | No | Skill names this capability should run before |
| `is-required` | boolean | No | If true, skills in `before` are blocked until this completes |
| `output-location` | string | No | Where output goes (may use config variables) |
### Three Capability Flavors

1. **Has `prompt`** — internal capability routed to a prompt file
2. **Has `skill-name`** — delegates to another registered skill
3. **Has neither** — SKILL.md handles it directly

### The `replaces-skill` Field

When set, the skill inherits metadata from the replaced skill during `bmad-init`. Explicit fields in the new manifest override inherited values.

## Agent vs Workflow vs Skill

No type field needed — inferred from content:

- **Has `persona`** → agent
- **No `persona`** → workflow or skill (distinction is complexity, not manifest structure)

## Config Loading

All module skills MUST use the `bmad-init` skill at startup.

## Path Construction Rules — CRITICAL

Only use `{project-root}` for `_bmad` paths.

**Three path types:**

- **Skill-internal** — bare relative paths (no prefix)
- **Project `_bmad` paths** — always `{project-root}/_bmad/...`
- **Config variables** — used directly, already contain `{project-root}` in their resolved values

**Correct:**
```
references/reference.md                   # Skill-internal (bare relative)
capability.md                             # Skill-internal (bare relative)
{project-root}/_bmad/_memory/x-sidecar/   # Project _bmad path
{output_folder}/report.md                 # Config var (already has full path)
```

**Never use:**
```
../../other-skill/file.md                 # Cross-skill relative path breaks with reorganization
{project-root}/{config_var}/output.md     # Double-prefix
./references/reference.md                 # Relative prefix breaks context changes
```
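The path rules above lend themselves to a deterministic check. A heuristic sketch (the patterns are illustrative, not exhaustive):

```python
import re

# Sketch: flag the forbidden path patterns listed above.
# Heuristic only; real tooling would also resolve config variables.
FORBIDDEN = [
    (re.compile(r"\.\./"), "cross-skill relative path"),
    (re.compile(r"\{project-root\}/\{[a-z_]+\}"), "double-prefix: config var already contains {project-root}"),
    (re.compile(r"^\./"), "relative './' prefix"),
]

def check_path(path: str) -> list[str]:
    """Return the list of rule violations for a single path reference."""
    return [msg for pat, msg in FORBIDDEN if pat.search(path)]
```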
# Quality Dimensions — Quick Reference

Six dimensions to keep in mind when building agent skills. The quality scanners check these automatically during optimization — this is a mental checklist for the build phase.

## 1. Informed Autonomy

The executing agent needs enough context to make judgment calls when situations don't match the script. The Overview section establishes this: domain framing, theory of mind, design rationale.

- Simple agents with 1-2 capabilities need minimal context
- Agents with memory, autonomous mode, or complex capabilities need domain understanding, user perspective, and rationale for non-obvious choices
- When in doubt, explain *why* — an agent that understands the mission improvises better than one following blind steps

## 2. Intelligence Placement

Scripts handle plumbing (fetch, transform, validate). Prompts handle judgment (interpret, classify, decide).

**Test:** If a script contains an `if` that decides what content *means*, intelligence has leaked.

**Reverse test:** If a prompt validates structure, counts items, parses known formats, compares against schemas, or checks file existence — determinism has leaked into the LLM. That work belongs in a script. Scripts have access to full bash, Python with standard library plus PEP 723 dependencies, and system tools — think broadly about what can be offloaded.

## 3. Progressive Disclosure

SKILL.md stays focused. Detail goes where it belongs.

- Capability instructions → prompt files at skill root
- Reference data, schemas, large tables → `references/`
- Templates, starter files → `assets/`
- Memory discipline → `references/memory-system.md`
- Multi-capability SKILL.md under ~250 lines: fine as-is
- Single-purpose up to ~500 lines: acceptable if focused

## 4. Description Format

Two parts: `[5-8 word summary]. [Use when user says 'X' or 'Y'.]`

Default to conservative triggering. See `references/standard-fields.md` for the full format and examples.

## 5. Path Construction

Only use `{project-root}` for `_bmad` paths. Config variables are used directly — they already contain `{project-root}`.

See `references/standard-fields.md` for correct/incorrect patterns.

## 6. Token Efficiency

Remove genuine waste (repetition, defensive padding, meta-explanation). Preserve context that enables judgment (domain framing, theory of mind, design rationale). These are different things — the prompt-craft scanner distinguishes between them.
# Quality Scan Script Opportunities — Reference Guide

**See `references/script-standards.md` for script creation guidelines.**

This document identifies deterministic operations that should be offloaded from the LLM into scripts for quality validation of BMad agents.

---

## Core Principle

Scripts validate structure and syntax (deterministic). Prompts evaluate semantics and meaning (judgment). Create scripts for checks that have clear pass/fail criteria.

---

## How to Spot Script Opportunities

During build, walk through every capability/operation and apply these tests:

### The Determinism Test

For each operation the agent performs, ask:

- Given identical input, will this ALWAYS produce identical output? → Script
- Does this require interpreting meaning, tone, context, or ambiguity? → Prompt
- Could you write a unit test with expected output for every input? → Script

### The Judgment Boundary

Scripts handle: fetch, transform, validate, count, parse, compare, extract, format, check structure.

Prompts handle: interpret, classify with ambiguity, create, decide with incomplete info, evaluate quality, synthesize meaning.
### Pattern Recognition Checklist

These signal verbs and patterns map to script types:

| Signal Verb/Pattern | Script Type |
|---------------------|-------------|
| "validate", "check", "verify" | Validation script |
| "count", "tally", "aggregate", "sum" | Metric/counting script |
| "extract", "parse", "pull from" | Data extraction script |
| "convert", "transform", "format" | Transformation script |
| "compare", "diff", "match against" | Comparison script |
| "scan for", "find all", "list all" | Pattern scanning script |
| "check structure", "verify exists" | File structure checker |
| "against schema", "conforms to" | Schema validation script |
| "graph", "map dependencies" | Dependency analysis script |

### The Outside-the-Box Test

Beyond obvious validation, consider:

- Could any data gathering step be a script that returns structured JSON for the LLM to interpret?
- Could pre-processing reduce what the LLM needs to read?
- Could post-processing validate what the LLM produced?
- Could metric collection feed into LLM decision-making without the LLM doing the counting?
### Your Toolbox

Scripts have access to full capabilities — think broadly:

- **Bash**: Full shell — `jq`, `grep`, `awk`, `sed`, `find`, `diff`, `wc`, `sort`, `uniq`, `curl`, plus piping and composition
- **Python**: Standard library (`json`, `pathlib`, `re`, `argparse`, `collections`, `difflib`, `ast`, `csv`, `xml`, etc.) plus PEP 723 inline-declared dependencies (`pyyaml`, `tiktoken`, `jsonschema`, etc.)
- **System tools**: `git` commands for history/diff/blame, filesystem operations, process execution

If you can express the logic as deterministic code, it's a script candidate.

### The --help Pattern

All scripts use PEP 723 and `--help`. When a skill's prompt needs to invoke a script, it can say "Run `scripts/foo.py --help` to understand inputs/outputs, then invoke appropriately" instead of inlining the script's interface. This saves tokens in prompts and keeps a single source of truth for the script's API.

---
## Priority 1: High-Value Validation Scripts

### 1. Frontmatter Validator

**What:** Validate SKILL.md frontmatter structure and content

**Why:** Frontmatter is the #1 factor in skill triggering. Catch errors early.

**Checks:**
```python
# checks:
- name exists and is kebab-case
- description exists and follows pattern "Use when..."
- No forbidden fields (XML, reserved prefixes)
- Optional fields have valid values if present
```

**Output:** JSON with pass/fail per field, line numbers for errors

**Implementation:** Python with argparse, no external deps needed

---
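A minimal sketch of the frontmatter checks described for validator #1 above, assuming simple single-line `key: value` frontmatter (a real implementation would use a YAML parser):

```python
import re

# Sketch: validate SKILL.md frontmatter. Simplified line-based parsing.
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_frontmatter(text: str) -> list[dict]:
    """Return findings for a SKILL.md file's frontmatter block."""
    findings = []
    m = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return [{"severity": "critical", "issue": "no frontmatter block"}]
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    name = fields.get("name", "")
    if not KEBAB.match(name):
        findings.append({"severity": "high", "issue": f"name {name!r} is not kebab-case"})
    if "Use when" not in fields.get("description", ""):
        findings.append({"severity": "medium", "issue": "description lacks 'Use when...' trigger clause"})
    return findings
```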
### 2. Manifest Schema Validator

**Status:** ✅ Already exists at `scripts/manifest.py` (create, add-capability, update, read, validate)

**Enhancement opportunities:**

- Add `--agent-path` flag for auto-discovery
- Check menu code uniqueness within agent
- Verify prompt files exist for capabilities that declare a `prompt` field
- Verify external skill names are registered (could check against skill registry)

---
### 3. Template Artifact Scanner

**What:** Scan for orphaned template substitution artifacts

**Why:** Build process may leave `{if-autonomous}`, `{displayName}`, etc.

**Output:** JSON with file path, line number, artifact type

**Implementation:** Bash script with JSON output via jq

---
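A sketch of such a scanner; it emits JSON via `grep`/`awk` rather than `jq` to stay dependency-light, and the placeholder names in the pattern are illustrative:

```shell
# Sketch: scan a folder for leftover {placeholder} template artifacts
# and print one JSON object per hit. Placeholder names are illustrative.
scan_artifacts() {
  grep -rn -o -E '\{(if-[a-z-]+|displayName|skillName)\}' "$1" 2>/dev/null |
    awk -F: '{printf "{\"file\":\"%s\",\"line\":%s,\"artifact\":\"%s\"}\n", $1, $2, $3}'
}
```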
### 4. Access Boundaries Extractor

**What:** Extract and validate access boundaries from memory-system.md

**Why:** Security critical — must be defined before file operations

**Checks:**
```python
# Parse memory-system.md for:
- ## Read Access section exists
- ## Write Access section exists
- ## Deny Zones section exists (can be empty)
- Paths use placeholders correctly ({project-root} for _bmad paths, relative for skill-internal)
```

**Output:** Structured JSON of read/write/deny zones

**Implementation:** Python with markdown parsing

---
### 5. Prompt Frontmatter Comparator

**What:** Compare prompt file frontmatter against bmad-manifest.json

**Why:** Capability misalignment causes runtime errors

**Checks:**
```python
# For each prompt .md file at skill root:
- Has frontmatter (name, description, menu-code)
- name matches manifest capability name
- menu-code matches manifest (case-insensitive)
- description is present
```

**Output:** JSON with mismatches, missing files

**Implementation:** Python, reads bmad-manifest.json and all prompt .md files at skill root

---
## Priority 2: Analysis Scripts

### 6. Token Counter

**What:** Count tokens in each file of an agent

**Why:** Identify verbose files that need optimization

**Checks:**
```python
# For each .md file:
- Total tokens (approximate: chars / 4)
- Code block tokens
- Token density (tokens / meaningful content)
```

**Output:** JSON with file path, token count, density score

**Implementation:** Python with tiktoken for accurate counting, or char approximation

---
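The character-based approximation can be sketched without `tiktoken`. Interpreting "density" as tokens per non-blank line is an assumption (one plausible reading of "meaningful content"):

```python
# Sketch: approximate token stats for one markdown file's text.
# chars // 4 is the rough chars-per-token heuristic named above.

def token_stats(text: str) -> dict:
    total = len(text) // 4
    code_tokens = 0
    in_fence = False
    for line in text.splitlines():
        if line.strip().startswith("```"):
            in_fence = not in_fence       # toggle on fence open/close
            continue
        if in_fence:
            code_tokens += len(line) // 4
    meaningful = sum(1 for l in text.splitlines() if l.strip())
    return {
        "total_tokens": total,
        "code_tokens": code_tokens,
        "density": round(total / max(meaningful, 1), 2),  # tokens per non-blank line
    }
```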
### 7. Dependency Graph Generator

**What:** Map skill → external skill dependencies

**Why:** Understand the agent's dependency surface

**Checks:**
```python
# Parse bmad-manifest.json for external skills
# Parse SKILL.md for skill invocation patterns
# Build dependency graph
```

**Output:** DOT format (GraphViz) or JSON adjacency list

**Implementation:** Python, JSON parsing only

---
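A sketch of the JSON adjacency-list output, assuming the manifest fields from this reference; using `replaces-skill` (or `module-code`) as the node name is an assumption, not a documented rule:

```python
# Sketch: derive edges from a manifest's external capabilities
# (entries carrying a "skill-name" field).

def dependency_edges(manifest: dict) -> dict:
    node = manifest.get("replaces-skill") or manifest.get("module-code", "this-skill")
    deps = [
        cap["skill-name"]
        for cap in manifest.get("capabilities", [])
        if "skill-name" in cap
    ]
    return {node: sorted(deps)}
```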
### 8. Activation Flow Analyzer

**What:** Parse the SKILL.md On Activation section for sequence

**Why:** Validate activation order matches best practices

**Checks:**
```python
# Look for steps in order:
1. Activation mode detection
2. Config loading
3. First-run check
4. Access boundaries load
5. Memory load
6. Manifest load
7. Greet
8. Present menu
```

**Output:** JSON with detected steps, missing steps, out-of-order warnings

**Implementation:** Python with regex pattern matching

---
### 9. Memory Structure Validator

**What:** Validate memory-system.md structure

**Why:** Memory files have specific requirements

**Checks:**
```python
# Required sections:
- ## Core Principle
- ## File Structure
- ## Write Discipline
- ## Memory Maintenance
```

**Output:** JSON with missing sections, validation errors

**Implementation:** Python with markdown parsing

---
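The required-sections check is a few lines. A sketch:

```python
# Sketch: report which required memory-system.md headings are absent.
REQUIRED = ["## Core Principle", "## File Structure",
            "## Write Discipline", "## Memory Maintenance"]

def missing_sections(markdown: str) -> list[str]:
    """Return required headings absent from memory-system.md content."""
    headings = {line.strip() for line in markdown.splitlines() if line.startswith("##")}
    return [h for h in REQUIRED if h not in headings]
```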
### 10. Subagent Pattern Detector

**What:** Detect if an agent uses the BMAD Advanced Context Pattern

**Why:** Agents processing 5+ sources MUST use subagents

**Checks:**
```python
# Pattern detection in SKILL.md:
- "DO NOT read sources yourself"
- "delegate to sub-agents"
- "/tmp/analysis-" temp file pattern
- Sub-agent output template (50-100 token summary)
```

**Output:** JSON with pattern found/missing, recommendations

**Implementation:** Python with keyword search and context extraction

---
## Priority 3: Composite Scripts

### 11. Agent Health Check

**What:** Run all validation scripts and aggregate results

**Why:** One-stop shop for agent quality assessment

**Composition:** Runs Priority 1 scripts, aggregates JSON outputs

**Output:** Structured health report with severity levels

**Implementation:** Bash script orchestrating Python scripts, jq for aggregation

---
### 12. Comparison Validator

**What:** Compare two versions of an agent for differences

**Why:** Validate changes during iteration

**Checks:**
```bash
# Git diff with structure awareness:
- Frontmatter changes
- Capability additions/removals
- New prompt files
- Token count changes
```

**Output:** JSON with categorized changes

**Implementation:** Bash with git, jq, Python for analysis

---
## Script Output Standard

All scripts MUST output structured JSON for agent consumption:

```json
{
  "script": "script-name",
  "version": "1.0.0",
  "agent_path": "/path/to/agent",
  "timestamp": "2025-03-08T10:30:00Z",
  "status": "pass|fail|warning",
  "findings": [
    {
      "severity": "critical|high|medium|low|info",
      "category": "structure|security|performance|consistency",
      "location": {"file": "SKILL.md", "line": 42},
      "issue": "Clear description",
      "fix": "Specific action to resolve"
    }
  ],
  "summary": {
    "total": 10,
    "critical": 1,
    "high": 2,
    "medium": 3,
    "low": 4
  }
}
```

---
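A helper that assembles a conforming result and derives the summary counts from the findings might look like this sketch (the status thresholds are an assumption):

```python
from datetime import datetime, timezone

# Sketch: build a result object matching the output standard above.
# Assumed policy: any critical/high finding -> fail; any finding -> warning.

def make_result(script: str, agent_path: str, findings: list) -> dict:
    summary = {"total": len(findings)}
    for level in ("critical", "high", "medium", "low"):
        summary[level] = sum(1 for f in findings if f.get("severity") == level)
    if summary["critical"] or summary["high"]:
        status = "fail"
    elif summary["total"]:
        status = "warning"
    else:
        status = "pass"
    return {
        "script": script,
        "version": "1.0.0",
        "agent_path": agent_path,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "status": status,
        "findings": findings,
        "summary": summary,
    }
```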
## Implementation Checklist

When creating validation scripts:

- [ ] Uses `--help` for documentation
- [ ] Accepts `--agent-path` for the target agent
- [ ] Outputs JSON to stdout
- [ ] Writes diagnostics to stderr
- [ ] Returns meaningful exit codes (0=pass, 1=fail, 2=error)
- [ ] Includes `--verbose` flag for debugging
- [ ] Has tests in `scripts/tests/` subfolder
- [ ] Self-contained (PEP 723 for Python)
- [ ] No interactive prompts

---
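The checklist conventions can be sketched as a PEP 723 skeleton; the script name and the (empty) checks are placeholders:

```python
# /// script
# requires-python = ">=3.11"
# dependencies = []
# ///
"""Skeleton validation script following the checklist above (hypothetical name)."""
import argparse
import json
import sys


def run(agent_path: str) -> dict:
    # Deterministic checks would go here; this skeleton only reports shape.
    return {"script": "example-validator", "status": "pass", "findings": []}


def main() -> int:
    parser = argparse.ArgumentParser(description="Example validation script.")
    parser.add_argument("--agent-path", default=".", help="Target agent folder")
    parser.add_argument("--verbose", action="store_true", help="Extra diagnostics on stderr")
    args = parser.parse_args()
    if args.verbose:
        print(f"scanning {args.agent_path}", file=sys.stderr)  # diagnostics -> stderr
    result = run(args.agent_path)
    print(json.dumps(result))                                  # JSON -> stdout
    return 0 if result["status"] == "pass" else 1


# Entry point when invoked as a CLI:
# if __name__ == "__main__":
#     sys.exit(main())
```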
## Integration with Quality Optimizer

The Quality Optimizer should:

1. **First**: Run available scripts for fast, deterministic checks
2. **Then**: Use sub-agents for semantic analysis (requires judgment)
3. **Finally**: Synthesize both sources into a report

**Example flow:**
```bash
# Run all validation scripts
python scripts/validate-frontmatter.py --agent-path {path}
bash scripts/scan-template-artifacts.sh --agent-path {path}
python scripts/compare-prompts-manifest.py --agent-path {path}

# Collect JSON outputs
# Spawn sub-agents only for semantic checks
# Synthesize complete report
```

---
## Script Creation Priorities

**Phase 1 (Immediate value):**

1. Template Artifact Scanner (Bash + jq)
2. Prompt Frontmatter Comparator (Python)
3. Access Boundaries Extractor (Python)

**Phase 2 (Enhanced validation):**

4. Token Counter (Python)
5. Subagent Pattern Detector (Python)
6. Activation Flow Analyzer (Python)

**Phase 3 (Advanced features):**

7. Dependency Graph Generator (Python)
8. Memory Structure Validator (Python)
9. Agent Health Check orchestrator (Bash)

**Phase 4 (Comparison tools):**

10. Comparison Validator (Bash + Python)
# Skill Authoring Best Practices

Practical patterns for writing effective BMad agent skills. For field definitions and description format, see `references/standard-fields.md`. For quality dimensions, see `references/quality-dimensions.md`.

## Core Principle: Informed Autonomy

Give the executing agent enough context to make good judgment calls — not just enough to follow steps. The right test for every piece of content is: "Would the agent make *better decisions* with this context?" If yes, keep it. If it's genuinely redundant or mechanical, cut it.

## Freedom Levels

Match specificity to task fragility:

| Freedom | When to Use | Example |
|---------|-------------|---------|
| **High** (text instructions) | Multiple valid approaches, context-dependent | "Analyze the user's vision and suggest capabilities" |
| **Medium** (pseudocode/templates) | Preferred pattern exists, some variation OK | `def generate_manifest(capabilities, format="json"):` |
| **Low** (exact scripts) | Fragile operations, consistency critical | `python3 scripts/manifest.py validate path/to/skill` (do not modify) |

**Analogy**: Narrow bridge with cliffs = low freedom. Open field = high freedom.
## Common Patterns

### Template Pattern

**Strict** (must follow exactly):

````markdown
## Report structure

ALWAYS use this template:

```markdown
# [Title]

## Summary
[One paragraph]

## Findings
- Finding 1 with data
```
````

**Flexible** (adapt as needed):

````markdown
Here's a sensible default, use judgment:

```markdown
# [Title]

## Summary
[Overview]
```

Adapt based on context.
````
### Examples Pattern

Input/output pairs show expected style:

````markdown
## Commit message format

**Example 1:**
Input: "Added user authentication with JWT tokens"
Output: `feat(auth): implement JWT-based authentication`
````

### Conditional Workflow

```markdown
1. Determine modification type:

   **Creating new?** → Creation workflow
   **Editing existing?** → Editing workflow
```
### Soft Gate Elicitation

For guided/interactive workflows, use "anything else?" soft gates at natural transition points instead of hard menus. This pattern draws out information users didn't know they had:

```markdown
## After completing a discovery section:
Present what you've captured so far, then:
"Anything else you'd like to add, or shall we move on?"
```

**Why it works:** Users almost always remember one more thing when given a graceful exit ramp rather than a hard stop. The low-pressure phrasing invites contribution without demanding it. This consistently produces richer, more complete artifacts than rigid section-by-section questioning.

**When to use:** Any guided workflow or agent with collaborative discovery — product briefs, requirements gathering, design reviews, brainstorming synthesis. Use at every natural transition between topics or sections.

**When NOT to use:** Autonomous/headless execution, or steps where additional input would cause scope creep rather than enrich the output.
### Intent-Before-Ingestion

Never scan artifacts, documents, or project context until you understand WHY the user is here. Scanning without purpose produces noise, not signal.

```markdown
## On activation:
1. Greet and understand intent — what is this about?
2. Accept whatever inputs the user offers
3. Ask if they have additional documents or context
4. ONLY THEN scan artifacts, scoped to relevance
```

**Why it works:** Without knowing what the user wants, you can't judge what's relevant in a 100-page research doc vs a brainstorming report. Intent gives you the filter. Without it, scanning is a fool's errand.

**When to use:** Any agent that ingests documents, project context, or external data as part of its process.
### Capture-Don't-Interrupt

When users provide information beyond the current scope (e.g., dropping requirements during a product brief, mentioning platforms during vision discovery), capture it silently for later use rather than redirecting or stopping them.

```markdown
## During discovery:
If user provides out-of-scope but valuable info:
- Capture it (notes, structured aside, addendum bucket)
- Don't interrupt their flow
- Use it later in the appropriate stage or output
```

**Why it works:** Users in creative flow will share their best insights unprompted. Interrupting to say "we'll cover that later" kills momentum and may lose the insight entirely. Capture everything, distill later.

**When to use:** Any collaborative discovery agent where the user is brainstorming, explaining, or brain-dumping.
### Dual-Output: Human Artifact + LLM Distillate

Any artifact-producing agent can output two complementary documents: a polished human-facing artifact AND a token-conscious, structured distillate optimized for downstream LLM consumption.

```markdown
## Output strategy:
1. Primary: Human-facing document (exec summary, report, brief)
2. Optional: LLM distillate — dense, structured, token-efficient
   - Captures overflow that doesn't belong in the human doc
   - Rejected ideas (so downstream doesn't re-propose them)
   - Detail bullets with just enough context to stand alone
   - Designed to be loaded as context for the next workflow
```

**Why it works:** Human docs are concise by design — they can't carry all the detail surfaced during discovery. But that detail has value for downstream LLM workflows (PRD creation, architecture design, etc.). The distillate bridges the gap without bloating the primary artifact.

**When to use:** Any agent producing documents that feed into subsequent LLM workflows. The distillate is always optional — offered to the user, not forced.
### Parallel Review Lenses

Before finalizing any artifact, fan out multiple reviewers with different perspectives to catch blind spots the builder/facilitator missed.

```markdown
## Near completion:
Fan out 2-3 review subagents in parallel:
- Skeptic: "What's missing? What assumptions are untested?"
- Opportunity Spotter: "What adjacent value? What angles?"
- Contextual Reviewer: LLM picks the best third lens
  (e.g., "regulatory risk" for healthtech, "DX critic" for devtools)

Graceful degradation: If subagents unavailable,
main agent does a single critical self-review pass.
```

**Why it works:** A single perspective — even an expert one — has blind spots. Multiple lenses surface issues and opportunities that no single reviewer would catch. The contextually-chosen third lens ensures domain-specific concerns aren't missed.

**When to use:** Any agent producing a significant artifact (briefs, PRDs, designs, architecture docs). The review step is lightweight but high-value.
### Three-Mode Architecture (Guided / Yolo / Autonomous)

For interactive agents, offer three execution modes that match different user contexts:

| Mode | Trigger | Behavior |
|------|---------|----------|
| **Guided** | Default | Section-by-section with soft gates. Drafts from what it knows, questions what it doesn't. |
| **Yolo** | `--yolo` or "just draft it" | Ingests everything, drafts complete artifact upfront, then walks user through refinement. |
| **Autonomous** | `--headless` / `-H` | Headless. Takes inputs, produces artifact, no interaction. |

**Why it works:** Not every user wants the same experience. A first-timer needs guided discovery. A repeat user with clear inputs wants yolo. A pipeline wants autonomous. Same agent, three entry points.

**When to use:** Any facilitative agent that produces an artifact. Not all agents need all three — but considering them during design prevents painting yourself into a single interaction model.
### Graceful Degradation

Every subagent-dependent feature should have a fallback path. If the platform doesn't support parallel subagents (or subagents at all), the workflow must still progress.

```markdown
## Subagent-dependent step:
Try: Fan out subagents in parallel
Fallback: Main agent performs the work sequentially
Never: Block the workflow because a subagent feature is unavailable
```

**Why it works:** Skills run across different platforms, models, and configurations. A skill that hard-fails without subagents is fragile. A skill that gracefully falls back to sequential processing is robust everywhere.

**When to use:** Any agent that uses subagents for research, review, or parallel processing.
### Verifiable Intermediate Outputs

For complex tasks: plan → validate → execute → verify

1. Analyze inputs
2. **Create** `changes.json` with planned updates
3. **Validate** with script before executing
4. Execute changes
5. Verify output

Benefits: catches errors early, machine-verifiable, reversible planning.
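Step 3 (validate before executing) can be sketched as a plan check; the change shape (`op`, `path`, `content`) is illustrative, not a fixed schema:

```python
# Sketch: validate a planned changes list before executing it.
ALLOWED_OPS = {"create", "update", "delete"}

def validate_changes(changes: list) -> list:
    """Return a list of error strings; empty means the plan is executable."""
    errors = []
    for i, change in enumerate(changes):
        if change.get("op") not in ALLOWED_OPS:
            errors.append(f"change {i}: unknown op {change.get('op')!r}")
        if not change.get("path"):
            errors.append(f"change {i}: missing target path")
        if change.get("op") in {"create", "update"} and "content" not in change:
            errors.append(f"change {i}: {change['op']} requires content")
    return errors
```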
## Writing Guidelines

- **Consistent terminology** — choose one term per concept, stick to it
- **Third person** in descriptions — "Processes files" not "I help process files"
- **Descriptive file names** — `form_validation_rules.md` not `doc2.md`
- **Forward slashes** in all paths — cross-platform
- **One level deep** for reference files — SKILL.md → reference.md, never SKILL.md → A.md → B.md
- **TOC for long files** — add table of contents for files >100 lines
## Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Too many options upfront | One default with escape hatch for edge cases |
| Deep reference nesting (A→B→C) | Keep references 1 level from SKILL.md |
| Inconsistent terminology | Choose one term per concept |
| Vague file names | Name by content, not sequence |
| Scripts that classify meaning via regex | Intelligence belongs in prompts, not scripts |
## Scripts in Skills

- **Execute vs reference** — "Run `analyze.py` to extract fields" (execute) vs "See `analyze.py` for the algorithm" (read)
- **Document constants** — explain why `TIMEOUT = 30`, not just what
- **PEP 723 for Python** — self-contained scripts with inline dependency declarations
- **MCP tools** — use fully qualified names: `ServerName:tool_name`
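For example, a PEP 723 header makes a script self-contained: a runner that understands inline script metadata can resolve the declared dependencies before executing. The script below is a sketch (stdlib only, so its dependency list is empty); the manifest-field task is illustrative.

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
"""Extract top-level field names from a bmad-manifest.json read on stdin."""
import json
import sys

TIMEOUT = 30  # seconds; documented constant: generous because manifests may sit on slow mounts

def field_names(manifest_text: str) -> list[str]:
    # Deterministic extraction belongs in a script; interpretation belongs in prompts.
    return sorted(json.loads(manifest_text).keys())

if __name__ == "__main__":
    print(field_names(sys.stdin.read()))
```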
---

`.gemini/skills/bmad-agent-builder/references/standard-fields.md` (new file, 103 lines)
# Standard Agent Fields

| Field | Description | Example |
|-------|-------------|---------|
| `name` | Full skill name | `bmad-agent-tech-writer`, `bmad-cis-agent-lila` |
| `skillName` | Functional name (kebab-case) | `tech-writer`, `lila` |
| `displayName` | Friendly name | `Paige`, `Lila`, `Floyd` |
| `title` | Role title | `Tech Writer`, `Holodeck Operator` |
| `icon` | Single emoji | `🔥`, `🌟` |
| `role` | Functional role | `Technical Documentation Specialist` |
| `sidecar` | Memory folder (optional) | `{skillName}-sidecar/` |
## Overview Section Format

The Overview is the first section after the title — it primes the AI for everything that follows.

**3-part formula:**

1. **What** — What this agent does
2. **How** — How it works (role, approach, modes)
3. **Why/Outcome** — Value delivered, quality standard

**Templates by agent type:**

**Companion agents:**
```markdown
This skill provides a {role} who helps users {primary outcome}. Act as {displayName} — {key quality}. With {key features}, {displayName} {primary value proposition}.
```

**Workflow agents:**
```markdown
This skill helps you {outcome} through {approach}. Act as {role}, guiding users through {key stages/phases}. Your output is {deliverable}.
```

**Utility agents:**
```markdown
This skill {what it does}. Use when {when to use}. Returns {output format} with {key feature}.
```
## SKILL.md Description Format

```
{description of what the agent does}. Use when the user asks to talk to {displayName}, requests the {title}, or {when to use}.
```
## Path Rules

**Critical**: When prompts reference files in memory, always use full paths.

### Memory Files (sidecar)

Always use: `{project-root}/_bmad/_memory/{skillName}-sidecar/`

Examples:
- `{project-root}/_bmad/_memory/journaling-companion-sidecar/index.md`
- `{project-root}/_bmad/_memory/journaling-companion-sidecar/access-boundaries.md` — **Required**
- `{project-root}/_bmad/_memory/journaling-companion-sidecar/autonomous-log.md`
- `{project-root}/_bmad/_memory/journaling-companion-sidecar/references/tags-reference.md`
### Access Boundaries (Standard for all agents)

Every agent must have an `access-boundaries.md` file in its sidecar memory:

**Load on every activation** — Before any file operations.

**Structure:**
```markdown
# Access Boundaries for {displayName}

## Read Access
- {folder-or-pattern}

## Write Access
- {folder-or-pattern}

## Deny Zones
- {forbidden-path}
```

**Purpose:** Define clear boundaries for what the agent can and cannot access, especially important for autonomous agents.
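One way an agent could check a path against such a file is sketched below. This is an assumption about enforcement, not a BMad mechanism: the parser relies on the exact section headings shown above, and prefix matching stands in for whatever pattern matching a real agent would use.

```python
def parse_boundaries(text: str) -> dict[str, list[str]]:
    """Parse access-boundaries.md into {section name: [path patterns]}."""
    sections, current = {}, None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif line.startswith("- ") and current:
            sections[current].append(line[2:].strip())
    return sections

def may_write(path: str, boundaries: dict[str, list[str]]) -> bool:
    # Deny zones win over write grants; anything not granted is denied.
    if any(path.startswith(p) for p in boundaries.get("Deny Zones", [])):
        return False
    return any(path.startswith(p) for p in boundaries.get("Write Access", []))
```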
### User-Configured Locations
|
||||
|
||||
Folders/files the user provides during init (like journal location) get stored in `index.md`. Both interactive and autonomous modes:
|
||||
|
||||
1. Load `index.md` first
|
||||
2. Read the user's configured paths
|
||||
3. Use those paths for operations
|
||||
|
||||
Example pattern:
|
||||
```markdown
|
||||
## Autonomous Mode
|
||||
|
||||
When run autonomously:
|
||||
1. Load `{project-root}/_bmad/_memory/{skillName}-sidecar/index.md` to get user's journal location
|
||||
2. Read entries from that location
|
||||
3. Write results to `{project-root}/_bmad/_memory/{skillName}-sidecar/autonomous-log.md`
|
||||
```
|
||||
|
||||
## CLI Usage (Autonomous Agents)

Agents with autonomous mode should include a `## CLI Usage` section documenting headless invocation.

---

(new file, 72 lines)
# Template Substitution Rules

When building the agent, you MUST apply these conditional blocks to the templates:

## For Module-Based Agents

- `{if-module}` ... `{/if-module}` → Keep the content inside
- `{if-standalone}` ... `{/if-standalone}` → Remove the entire block including markers
- `{custom-config-properties}` → Replace with comma-separated custom property names (e.g., `journal_folder, adventure_logs_folder`) or remove the line if none
- `{module-code-or-empty}` → Replace with the module code (e.g., `cis-`) or an empty string for standalone

## For Standalone Agents

- `{if-module}` ... `{/if-module}` → Remove the entire block including markers
- `{if-standalone}` ... `{/if-standalone}` → Keep the content inside
- `{custom-config-properties}` → Remove (not used for standalone)
- `{module-code-or-empty}` → Empty string
- `{custom-init-questions}` → Add the user's additional questions here (remove the placeholder if none)
## For Agents With Sidecar (Memory)

- `{if-sidecar}` ... `{/if-sidecar}` → Keep the content inside
- `{if-no-sidecar}` ... `{/if-no-sidecar}` → Remove the entire block including markers

## For Agents Without Sidecar

- `{if-sidecar}` ... `{/if-sidecar}` → Remove the entire block including markers
- `{if-no-sidecar}` ... `{/if-no-sidecar}` → Keep the content inside
## External Skills

- `{if-external-skills}` ... `{/if-external-skills}` → Keep if the agent uses external skills, otherwise remove the entire block
- `{external-skills-list}` → Replace with a bulleted list of exact skill names:

```markdown
- `bmad-skill-name-one` — Description
- `bmad-skill-name-two` — Description
```
## Custom Init Questions

Add the user's additional questions to the init.md template, replacing the `{custom-init-questions}` placeholder. Remove the placeholder line if there are no custom questions.
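The conditional-block rules above can be applied mechanically. A minimal sketch, assuming marker names follow the `{if-X}` ... `{/if-X}` convention exactly as written; the `keep` set is whatever the builder decided (e.g., `{"module", "sidecar"}` for a module-based agent with memory).

```python
import re

def apply_conditionals(template: str, keep: set[str]) -> str:
    """Keep {if-X}...{/if-X} content when X is in `keep`; otherwise drop block and markers."""
    pattern = re.compile(r"\{if-([a-z-]+)\}(.*?)\{/if-\1\}", re.DOTALL)

    def repl(match: re.Match) -> str:
        tag, body = match.group(1), match.group(2)
        return body if tag in keep else ""

    return pattern.sub(repl, template)
```

Either way the markers themselves never survive into the built agent, which matches the "including markers" wording of the rules.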
## Path References

All generated agents use these paths:

- `init.md` — First-run setup
- `{name}.md` — Individual capability prompts
- `references/memory-system.md` — Memory discipline (if sidecar needed)
- `bmad-manifest.json` — Capabilities and metadata with menu codes
- `scripts/` — Python/shell scripts for deterministic operations (if needed)
## Frontmatter Placeholders

Replace all frontmatter placeholders in SKILL-template.md:

- `{module-code-or-empty}` → Module code (e.g., `cis-`) or empty
- `{agent-name}` → Agent functional name (kebab-case)
- `{short phrase what agent does}` → One-line description
- `{displayName}` → Friendly name
- `{title}` → Role title
- `{role}` → Functional role
- `{skillName}` → Full skill name with module prefix
- `{user_name}` → From config
- `{communication_language}` → From config
## Content Placeholders

Replace all content placeholders with agent-specific values:

- `{overview-template}` → Overview paragraph (2-3 sentences) following the 3-part formula (What, How, Why/Outcome)
- `{One-sentence identity.}` → Brief identity statement
- `{Who is this agent? One clear sentence.}` → Identity description
- `{How does this agent communicate? Be specific with examples.}` → Communication style
- `{Guiding principle 1/2/3}` → Agent's principles

---

(new file, 267 lines)
# Universal Scanner Output Schema

All quality scanners — both LLM-based and deterministic lint scripts — MUST produce output conforming to this schema. No exceptions.

## Top-Level Structure

```json
{
  "scanner": "scanner-name",
  "skill_path": "{path}",
  "findings": [],
  "assessments": {},
  "summary": {
    "total_findings": 0,
    "by_severity": {},
    "assessment": "1-2 sentence overall assessment"
  }
}
```

| Key | Type | Required | Description |
|-----|------|----------|-------------|
| `scanner` | string | yes | Scanner identifier (e.g., `"workflow-integrity"`, `"prompt-craft"`) |
| `skill_path` | string | yes | Absolute path to the skill being scanned |
| `findings` | array | yes | ALL items — issues, strengths, suggestions, opportunities. Always an array, never an object |
| `assessments` | object | yes | Scanner-specific structured analysis (cohesion tables, health metrics, user journeys, etc.). Free-form per scanner |
| `summary` | object | yes | Aggregate counts and a brief overall assessment |
## Finding Schema (7 fields)

Every item in `findings[]` has exactly these 7 fields:

```json
{
  "file": "SKILL.md",
  "line": 42,
  "severity": "high",
  "category": "frontmatter",
  "title": "Brief headline of the finding",
  "detail": "Full context — rationale, what was observed, why it matters",
  "action": "What to do about it — fix, suggestion, or script to create"
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `file` | string | yes | Relative path to the affected file (e.g., `"SKILL.md"`, `"scripts/build.py"`). Empty string if not file-specific |
| `line` | int\|null | no | Line number (1-based). `null` or `0` if not line-specific |
| `severity` | string | yes | One of the severity values below |
| `category` | string | yes | Scanner-specific category (e.g., `"frontmatter"`, `"token-waste"`, `"lint"`) |
| `title` | string | yes | Brief headline (1 sentence). This is the primary display text |
| `detail` | string | yes | Full context — fold rationale, observation, impact, and nuance into one narrative. Empty string if the title is self-explanatory |
| `action` | string | yes | What to do — fix instruction, suggestion, or script to create. Empty string for strengths/notes |
## Severity Values (complete enum)

```
critical | high | medium | low | high-opportunity | medium-opportunity | low-opportunity | suggestion | strength | note
```

**Routing rules:**

- `critical`, `high` → "Truly Broken" section in report
- `medium`, `low` → category-specific findings sections
- `high-opportunity`, `medium-opportunity`, `low-opportunity` → enhancement/creative sections
- `suggestion` → creative suggestions section
- `strength` → strengths section (positive observations worth preserving)
- `note` → informational observations, also routed to strengths
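The routing rules above reduce to a small mapping function. A sketch; the section identifiers returned here are placeholders for whatever keys the report renderer actually uses.

```python
def route(severity: str) -> str:
    """Map a finding's severity to its report section per the routing rules."""
    if severity in ("critical", "high"):
        return "truly-broken"
    if severity in ("medium", "low"):
        return "findings"
    if severity.endswith("-opportunity"):
        return "enhancements"
    if severity == "suggestion":
        return "creative-suggestions"
    if severity in ("strength", "note"):
        return "strengths"
    raise ValueError(f"unknown severity: {severity}")
```

Raising on unknown values keeps scanners honest: a typo in `severity` fails loudly instead of silently dropping a finding from the report.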
## Assessment Sub-Structure Contracts

The `assessments` object is free-form per scanner, but the HTML report renderer expects specific shapes for specific keys. These are the canonical formats.

### user_journeys (enhancement-opportunities scanner)

**Always an array of objects. Never an object keyed by persona.**

```json
"user_journeys": [
  {
    "archetype": "first-timer",
    "summary": "Brief narrative of this user's experience",
    "friction_points": ["moment 1", "moment 2"],
    "bright_spots": ["what works well"]
  }
]
```
### autonomous_assessment (enhancement-opportunities scanner)

```json
"autonomous_assessment": {
  "potential": "headless-ready|easily-adaptable|partially-adaptable|fundamentally-interactive",
  "hitl_points": 3,
  "auto_resolvable": 2,
  "needs_input": 1,
  "notes": "Brief assessment"
}
```
### top_insights (enhancement-opportunities scanner)

**Always an array of objects with title/detail/action (the same shape as findings, but without file/line/severity/category).**

```json
"top_insights": [
  {
    "title": "The key observation",
    "detail": "Why it matters",
    "action": "What to do about it"
  }
]
```
### cohesion_analysis (skill-cohesion / agent-cohesion scanner)

```json
"cohesion_analysis": {
  "dimension_name": { "score": "strong|moderate|weak", "notes": "explanation" }
}
```

Dimension names are scanner-specific (e.g., `stage_flow_coherence`, `persona_alignment`). The report renderer iterates all keys and renders a table row per dimension.
### skill_identity / agent_identity (cohesion scanners)

```json
"skill_identity": {
  "name": "skill-name",
  "purpose_summary": "Brief characterization",
  "primary_outcome": "What this skill produces"
}
```
### skillmd_assessment (prompt-craft scanner)

```json
"skillmd_assessment": {
  "overview_quality": "appropriate|excessive|missing",
  "progressive_disclosure": "good|needs-extraction|monolithic",
  "notes": "brief assessment"
}
```

The agent variant adds `"persona_context": "appropriate|excessive|missing"`.
### prompt_health (prompt-craft scanner)

```json
"prompt_health": {
  "total_prompts": 3,
  "with_config_header": 2,
  "with_progression": 1,
  "self_contained": 3
}
```
### skill_understanding (enhancement-opportunities scanner)

```json
"skill_understanding": {
  "purpose": "what this skill does",
  "primary_user": "who it's for",
  "assumptions": ["assumption 1", "assumption 2"]
}
```
### stage_summary (workflow-integrity scanner)

```json
"stage_summary": {
  "total_stages": 0,
  "missing_stages": [],
  "orphaned_stages": [],
  "stages_without_progression": [],
  "stages_without_config_header": []
}
```
### metadata (structure scanner)

Free-form key-value pairs. Rendered as a metadata block.
### script_summary (scripts lint)

```json
"script_summary": {
  "total_scripts": 5,
  "by_type": {"python": 3, "shell": 2},
  "missing_tests": ["script1.py"]
}
```
### existing_scripts (script-opportunities scanner)

Array of strings (script paths that already exist).
## Complete Example

```json
{
  "scanner": "workflow-integrity",
  "skill_path": "/path/to/skill",
  "findings": [
    {
      "file": "SKILL.md",
      "line": 12,
      "severity": "high",
      "category": "frontmatter",
      "title": "Missing required 'version' field in frontmatter",
      "detail": "The SKILL.md frontmatter is missing the version field. This prevents the manifest generator from producing correct output and breaks version-aware consumers.",
      "action": "Add 'version: 1.0.0' to the YAML frontmatter block"
    },
    {
      "file": "build-process.md",
      "line": null,
      "severity": "strength",
      "category": "design",
      "title": "Excellent progressive disclosure pattern in build stages",
      "detail": "Each stage provides exactly the context needed without front-loading information. This reduces token waste and improves LLM comprehension.",
      "action": ""
    },
    {
      "file": "SKILL.md",
      "line": 45,
      "severity": "medium-opportunity",
      "category": "experience-gap",
      "title": "No guidance for first-time users unfamiliar with build workflows",
      "detail": "A user encountering this skill for the first time has no onboarding path. The skill assumes familiarity with stage-based workflows, which creates friction for newcomers.",
      "action": "Add a 'Getting Started' section or link to onboarding documentation"
    }
  ],
  "assessments": {
    "stage_summary": {
      "total_stages": 7,
      "missing_stages": [],
      "orphaned_stages": ["cleanup"]
    }
  },
  "summary": {
    "total_findings": 3,
    "by_severity": {"high": 1, "medium-opportunity": 1, "strength": 1},
    "assessment": "Well-structured skill with one critical frontmatter gap. Progressive disclosure is a notable strength."
  }
}
```
## DO NOT

- **DO NOT** rename fields. Use exactly: `file`, `line`, `severity`, `category`, `title`, `detail`, `action`
- **DO NOT** use `issues` instead of `findings` — the array is always called `findings`
- **DO NOT** add fields to findings beyond the 7 defined above. Put scanner-specific structured data in `assessments`
- **DO NOT** use separate arrays for strengths, suggestions, or opportunities — they go in `findings` with appropriate severity values
- **DO NOT** change `user_journeys` from an array to an object keyed by persona name
- **DO NOT** restructure assessment sub-objects — use the shapes defined above
- **DO NOT** put free-form narrative data into `assessments` — that belongs in the `detail` fields of findings or in `summary.assessment`
## Self-Check Before Output

Before writing your JSON output, verify:

1. Is your array called `findings` (not `issues`, not `opportunities`)?
2. Does every item in `findings` have all 7 fields: `file`, `line`, `severity`, `category`, `title`, `detail`, `action`?
3. Are strengths in `findings` with `severity: "strength"` (not in a separate `strengths` array)?
4. Are suggestions in `findings` with `severity: "suggestion"` (not in a separate `creative_suggestions` array)?
5. Is `assessments` an object containing structured analysis data (not items that belong in findings)?
6. Is `user_journeys` an array of objects (not an object keyed by persona)?
7. Do `top_insights` items use `title`/`detail`/`action` (not `insight`/`suggestion`/`why_it_matters`)?
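Several of these checks are mechanical and could also run as a script. A partial sketch covering checks 1, 2, and 6; the remaining checks require judgment about content, not just shape.

```python
REQUIRED_FIELDS = {"file", "line", "severity", "category", "title", "detail", "action"}

def self_check(output: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the checks pass."""
    problems = []
    # Check 1: the array must be called findings.
    if "findings" not in output:
        problems.append("array must be called 'findings'")
    # Check 2: every finding has exactly the 7 required fields.
    for i, finding in enumerate(output.get("findings", [])):
        missing = REQUIRED_FIELDS - set(finding)
        extra = set(finding) - REQUIRED_FIELDS
        if missing:
            problems.append(f"finding {i} missing: {sorted(missing)}")
        if extra:
            problems.append(f"finding {i} has extra fields: {sorted(extra)}")
    # Check 6: user_journeys, if present, must be an array.
    journeys = output.get("assessments", {}).get("user_journeys")
    if journeys is not None and not isinstance(journeys, list):
        problems.append("user_journeys must be an array, not an object")
    return problems
```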