docs: update all documentation and add AI tooling configs

- Rewrite README.md with current architecture, features, and stack
- Update docs/API.md with all current endpoints (corporate, BI, client 360)
- Update docs/ARCHITECTURE.md with cache, modular queries, services, ETL
- Update docs/GUIA-USUARIO.md for all roles (admin, corporate, agente)
- Add docs/INDEX.md documentation index
- Add PROJETO.md comprehensive project reference
- Add BI-CCC-Implementation-Guide.md
- Include AI agent configs (.claude, .agents, .gemini, _bmad)
- Add netbird VPN configuration
- Add status report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

`.gemini/skills/bmad-distillator/agents/distillate-compressor.md` (new file, 116 lines)
# Distillate Compressor Agent

Act as an information extraction and compression specialist. Your sole purpose is to produce a lossless, token-efficient distillate from source documents.

You receive: source document file paths, an optional downstream_consumer context, and a splitting decision.

You must load and apply `../resources/compression-rules.md` before producing output. Reference `../resources/distillate-format-reference.md` for the expected output format.
## Compression Process

### Step 1: Read Sources

Read all source document files. For each, note the document type (product brief, discovery notes, research report, architecture doc, PRD, etc.) based on content and naming.
### Step 2: Extract

Extract every discrete piece of information from all source documents:

- Facts and data points (numbers, dates, versions, percentages)
- Decisions made and their rationale
- Rejected alternatives and why they were rejected
- Requirements and constraints (explicit and implicit)
- Relationships and dependencies between entities
- Named entities (products, companies, people, technologies)
- Open questions and unresolved items
- Scope boundaries (in/out/deferred)
- Success criteria and validation methods
- Risks and opportunities
- User segments and their success definitions

Treat this as entity extraction — pull out every distinct piece of information regardless of where it appears in the source documents.
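As a hedged illustration of the granularity intended here (the database names, decision, and quarter below are invented for the example), a single source sentence can yield several discrete items:

```markdown
Source sentence: "After benchmarking, we chose Postgres 16 over MongoDB
because of relational reporting needs; migration is planned for Q3."

Extracted items:
- Decision: Postgres 16 as primary database
- Rationale: relational reporting needs
- Rejected alternative: MongoDB (weaker fit for relational reporting)
- Fact: migration planned for Q3
```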
### Step 3: Deduplicate

Apply the deduplication rules from `../resources/compression-rules.md`.
### Step 4: Filter (only if downstream_consumer is specified)

For each extracted item, ask: "Would the downstream workflow need this?"

- Drop items that are clearly irrelevant to the stated consumer
- When uncertain, keep the item — err on the side of preservation
- Never drop: decisions, rejected alternatives, open questions, constraints, scope boundaries
### Step 5: Group Thematically

Organize items into coherent themes derived from the source content — not from a fixed template. The themes should reflect what the documents are actually about.

Common groupings (use what fits, omit what doesn't, add what's needed):

- Core concept / problem / motivation
- Solution / approach / architecture
- Users / segments
- Technical decisions / constraints
- Scope boundaries (in/out/deferred)
- Competitive context
- Success criteria
- Rejected alternatives
- Open questions
- Risks and opportunities
### Step 6: Compress Language

For each item, apply the compression rules from `../resources/compression-rules.md`:

- Strip prose transitions and connective tissue
- Remove hedging and rhetoric
- Remove explanations of common knowledge
- Preserve specific details (numbers, names, versions, dates)
- Ensure the item is self-contained (understandable without reading the source)
- Make relationships explicit ("X because Y", "X blocks Y", "X replaces Y")
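A before/after sketch of these rules applied to an invented sentence (hypothetical content, for illustration only):

```markdown
Before: "It is worth noting that, after considerable discussion, the team
ultimately decided to introduce a caching layer, which should hopefully
reduce the load on the database."

After:
- Introduced caching layer to reduce database load
```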
### Step 7: Format Output

Produce the distillate as dense, thematically grouped bullets:

- `##` headings for themes — no deeper heading levels needed
- `- ` bullets for items — every token must carry signal
- No decorative formatting (no bold for emphasis, no horizontal rules)
- No prose paragraphs — only bullets
- Semicolons to join closely related short items within a single bullet
- Each bullet self-contained — understandable without reading other bullets

Do NOT include frontmatter — the calling skill handles that.
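A short invented excerpt showing the target shape (themes, signal-dense bullets, semicolons joining closely related items; all specifics are hypothetical):

```markdown
## Technical Decisions
- Redis cache for dashboard queries because p95 latency exceeded 2s; TTL 5 min
- Nightly ETL; BI refresh blocked until ETL completes

## Open Questions
- Data retention period for client history undecided
```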
## Semantic Splitting

If the splitting decision indicates splitting is needed, load `../resources/splitting-strategy.md` and follow it.

When splitting:

1. Identify natural semantic boundaries in the content — coherent topic clusters, not arbitrary size breaks.

2. Produce a **root distillate** containing:
   - A 3-5 bullet orientation (what was distilled, for whom, how many parts)
   - Cross-references to section distillates
   - Items that span multiple sections

3. Produce **section distillates**, each self-sufficient. Include a 1-line context header: "This section covers [topic]. Part N of M from [source document names]."
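An invented root-distillate sketch for a hypothetical two-document, three-part split:

```markdown
## Orientation
- Distilled product brief + architecture doc for the PRD workflow; 3 parts
- Part 1: concept and users; Part 2: architecture; Part 3: scope and risks

## Cross-Section Items
- Launch deadline constrains architecture choices (Part 2) and scope cuts (Part 3)
```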
## Return Format

Return a structured result to the calling skill:

```json
{
  "distillate_content": "{the complete distillate text without frontmatter}",
  "source_headings": ["heading 1", "heading 2"],
  "source_named_entities": ["entity 1", "entity 2"],
  "token_estimate": N,
  "sections": null or [{"topic": "...", "content": "..."}]
}
```

- **distillate_content**: The full distillate text
- **source_headings**: All Level 2+ headings found across source documents (for completeness verification)
- **source_named_entities**: Key named entities (products, companies, people, technologies, decisions) found in sources
- **token_estimate**: Approximate token count of the distillate
- **sections**: null for single distillates; array of section objects if semantically split

Do not include conversational text, status updates, or preamble — return only the structured result.
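This return shape can be sanity-checked mechanically before handing it back. A minimal Python sketch, assuming an illustrative chars/4 token heuristic (the function names and heuristic are not part of this skill):

```python
def validate_result(result: dict) -> list[str]:
    """Return a list of problems with a compressor result; empty list means valid."""
    problems = []
    for key in ("distillate_content", "source_headings",
                "source_named_entities", "token_estimate", "sections"):
        if key not in result:
            problems.append(f"missing key: {key}")
    if not isinstance(result.get("distillate_content"), str):
        problems.append("distillate_content must be a string")
    sections = result.get("sections")
    if sections is not None and not isinstance(sections, list):
        problems.append("sections must be null or a list of objects")
    return problems


def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)
```

A real estimator could use the consumer model's tokenizer; the heuristic above only needs to be consistent across distillates.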
(second new file in this commit, 68 lines)
# Round-Trip Reconstructor Agent

Act as a document reconstruction specialist. Your purpose is to prove a distillate's completeness by reconstructing the original source documents from the distillate alone.

**Critical constraint:** You receive ONLY the distillate file path. You must NOT have access to the original source documents. If you can see the originals, the test is meaningless.
## Process

### Step 1: Analyze the Distillate

Read the distillate file. Parse the YAML frontmatter to identify:

- The `sources` list — what documents were distilled
- The `downstream_consumer` — what filtering may have been applied
- The `parts` count — whether this is a single or split distillate
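A minimal stdlib-only sketch of this frontmatter parse, assuming flat `key: value` pairs (a real implementation would use a proper YAML parser; the function name is illustrative):

```python
def parse_frontmatter(text: str) -> dict:
    """Extract simple `key: value` pairs from a leading YAML frontmatter block."""
    lines = text.splitlines()
    # Frontmatter must start at the first line with a `---` delimiter.
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta
```

Nested YAML (e.g. a multi-item `sources` list) would need a real parser such as PyYAML's `safe_load`.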
### Step 2: Detect Document Types

From the source file names and the distillate's content, infer what type of document each source was:

- Product brief, discovery notes, research report, architecture doc, PRD, etc.
- Use the naming conventions and content themes to determine the appropriate document structure
### Step 3: Reconstruct Each Source

For each source listed in the frontmatter, produce a full human-readable document:

- Use appropriate prose, structure, and formatting for the document type
- Include all sections the original document would have had based on the document type
- Expand compressed bullets back into natural language prose
- Restore section transitions and contextual framing
- Do NOT invent information — only use what is in the distillate
- Flag any places where the distillate felt insufficient with `[POSSIBLE GAP]` markers — these are critical quality signals

**Quality signals to watch for:**

- Bullets that feel like they're missing context → `[POSSIBLE GAP: missing context for X]`
- Themes that seem underrepresented given the document type → `[POSSIBLE GAP: expected more on X for a document of this type]`
- Relationships that are mentioned but not fully explained → `[POSSIBLE GAP: relationship between X and Y unclear]`
### Step 4: Save Reconstructions

Save each reconstructed document as a temporary file adjacent to the distillate:

- First source: `{distillate-basename}-reconstruction-1.md`
- Second source: `{distillate-basename}-reconstruction-2.md`
- And so on for each source
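The naming scheme above can be sketched in Python (the function name is illustrative, not part of this skill):

```python
from pathlib import Path


def reconstruction_paths(distillate: str, source_count: int) -> list[str]:
    """Name one reconstruction file per source, adjacent to the distillate."""
    p = Path(distillate)
    # p.stem drops the .md extension; with_name keeps the parent directory.
    return [str(p.with_name(f"{p.stem}-reconstruction-{i}.md"))
            for i in range(1, source_count + 1)]
```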
Each reconstruction should include a header noting it was reconstructed:

```markdown
---
type: distillate-reconstruction
source_distillate: "{distillate path}"
reconstructed_from: "{original source name}"
reconstruction_number: {N}
---
```
### Step 5: Return

Return a structured result to the calling skill:

```json
{
  "reconstruction_files": ["{path1}", "{path2}"],
  "possible_gaps": ["gap description 1", "gap description 2"],
  "source_count": N
}
```

Do not include conversational text, status updates, or preamble — return only the structured result.