Cookbook: Auto-Generate Documentation
The meta-cookbook. This is how Momental documents itself - using its own knowledge graph, agents, and task system to keep docs current without human writing effort.
The Problem This Solves
Documentation rots. A new MCP tool ships, the code is merged, and the docs never get written because engineers are already on the next feature. Or docs get written but fall out of date when the API changes three months later.
Momental's approach: treat documentation as a first-class pipeline, not an afterthought. Every public surface is a node in the knowledge graph. Every undocumented surface is a task in the backlog. Agents write the drafts. Humans review the final output.
The Full Pipeline
Code change lands (PR merged)
→ PR hook detects affected public surfaces
→ Tasks auto-created for each undocumented surface
→ Tasks assigned to your documentation agent
→ your documentation agent researches via MCP tools (public interface only):
momental_code_symbol - exact parameter names and types
momental_code_tests - real usage examples from test suite
momental_strategy_tree - why this feature exists for users
→ your documentation agent writes all four Divio sections:
Reference - parameter table, types, defaults
Tutorial - step-by-step with verified code example
How-To - most common use case
Explanation - why this exists from the user's perspective
→ Huginn reviews for accuracy + leakage:
✓ Parameter names match public API
✓ No internal agent IDs, table names, or infra details
✓ No use of "simply"
✓ Every claim has a concrete example
→ If approved: doc page published automatically
→ If flagged: escalated to human reviewer with specific issues noted
→ llms.txt regenerated on next deploy
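The first two stages of this pipeline can be sketched as a pure function over a surface registry. The registry shape and file paths below are assumptions for illustration; in practice the PR hook would query the knowledge graph for this information.

```javascript
// Illustrative surface registry - in practice this comes from the knowledge graph
const surfaceRegistry = [
  { name: "momental_code_search", file: "tools/code_search.js", documented: true },
  { name: "momental_node_create", file: "tools/node_create.js", documented: false }
];

// Map a merged PR's changed files to public surfaces that still need docs
function surfacesNeedingDocs(changedFiles, registry) {
  return registry
    .filter(s => changedFiles.includes(s.file) && !s.documented)
    .map(s => s.name);
}

surfacesNeedingDocs(["tools/code_search.js", "tools/node_create.js"], surfaceRegistry);
// → ["momental_node_create"]
```

Each returned name then gets a task via `momental_task_create`, as shown in Step 2.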
Step 1: Create the Strategy Tree Structure
First, wire documentation into your strategy tree so every doc task has a home.
// Create the documentation objective (one-time setup)
const objective = await momental_strategy_create({
nodeType: "OBJECTIVE",
statement: "Every public surface is documented with working examples",
parentId: "your-product-mission-id"
});
// Create EPICs for each documentation area
const mcpEpic = await momental_strategy_create({
nodeType: "EPIC",
statement: "MCP Tool Reference - all tools documented",
parentId: objective.id
});
const agentEpic = await momental_strategy_create({
nodeType: "EPIC",
statement: "Agent Catalog - all agents documented",
parentId: objective.id
});
Step 2: Seed Documentation Tasks
Create one task per undocumented surface. The acceptance criteria template is the key - it tells your documentation agent exactly what to research and what to write.
function buildDocAC(toolName) {
return `
## Research (public interface only)
- [ ] momental_code_symbol("${toolName}") - all parameters extracted
- [ ] momental_code_tests - at least 2 real usage examples from test suite
- [ ] momental_strategy_tree - user-facing rationale confirmed
## Content (Divio framework)
- [ ] Reference: parameter table with types, defaults, and descriptions
- [ ] Tutorial: step-by-step using a test-derived example (full imports)
- [ ] How-to: most common real-world use case from the user's perspective
- [ ] Explanation: why this tool exists for users
## Hard rules - zero exceptions
- [ ] No internal agent IDs (use public names only)
- [ ] No database table or column names
- [ ] No internal file paths, class names, or service names
- [ ] All code examples sourced from tests - not from implementation
- [ ] All code examples include full import statements
- [ ] No use of the word "simply"
`;
}
const tools = ["momental_code_search", "momental_node_create", "momental_work_begin"];
for (const tool of tools) {
const task = await momental_task_create({
statement: `Doc: \`${tool}\` MCP tool`,
parentId: mcpEpic.id,
acceptanceCriteria: buildDocAC(tool)
});
await momental_task_assign_agent({ taskId: task.id, agentId: "your-doc-agent-id" });
}
Step 3: The Research Loop (What the Documentation Agent Does)
When your documentation agent picks up one of these tasks, it follows this research protocol. You can replicate it in your own agents or scripts:
// your documentation agent's research loop for a single tool
async function researchTool(toolName) {
// 1. Get exact parameter names/types from the public signature
const signature = await momental_code_symbol(toolName);
// 2. Find real usage examples in the test suite
const tests = await momental_code_tests({ query: toolName });
// 3. Understand why this tool exists from the user's perspective
const strategy = await momental_strategy_tree({ query: toolName, depth: 2 });
return { signature, tests, strategy };
}
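Once the research loop returns, the results become the four Divio sections of the draft. This is a hypothetical assembly sketch - the real agent writes prose, and the shapes of `signature`, `tests`, and `strategy` here are assumptions:

```javascript
// Hypothetical assembly step: research results become Divio section bodies
function assembleDraft(toolName, { signature, tests, strategy }) {
  const sections = [
    ["Reference", signature],   // parameter table, types, defaults
    ["Tutorial", tests],        // step-by-step from a test-derived example
    ["How-To", tests],          // most common use case, also test-sourced
    ["Explanation", strategy]   // user-facing rationale from the strategy tree
  ];
  return [`# ${toolName}`]
    .concat(sections.map(([title, body]) => `## ${title}\n\n${body}`))
    .join("\n\n");
}
```

Note that both the Tutorial and How-To sections draw on test-derived material, which keeps every code example grounded in the hard rule from Step 2: sourced from tests, not from implementation.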
Step 4: The Public Surface Boundary
The most critical rule: documentation must never expose internal implementation details. Build this check into your acceptance criteria; your reviewer (Huginn) enforces it at review time.
| Never include | Why |
|---|---|
| Internal agent IDs | These are internal references - use public names like "Huginn" or "Thor" |
| Database table/column names | Implementation detail that may change |
| GCP or cloud infrastructure details | Reveals internal architecture |
| Internal file paths or class names | Not part of the public API surface |
| Execution traces or service wiring | Exposes system internals |
| The word "simply" | Minimizes user friction - acknowledge difficulty instead |
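The agent-ID rule in the table can also be enforced mechanically before a draft reaches review. The ID-to-name map below is purely illustrative - substitute your own roster:

```javascript
// Illustrative map from internal agent IDs to public names (assumed values)
const idToPublicName = { agent7: "Huginn", agent12: "Thor" };

// Replace any internal agent ID with its public name before review
function usePublicNames(text, map) {
  return text.replace(/agent\d+/g, id => map[id] ?? "[unknown agent]");
}

usePublicNames("Reviewed by agent7, escalated to agent12.", idToPublicName);
// → "Reviewed by Huginn, escalated to Thor."
```

Unknown IDs are flagged rather than passed through, so a stale map fails loudly instead of leaking an identifier.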
Step 5: The Huginn Quality Gate
After your documentation agent submits, Huginn runs two checks - accuracy and leakage - before auto-publishing. The leakage check looks like this:
// What Huginn checks (you can replicate this in CI)
function checkDocForLeakage(content) {
const patterns = [
/agent\d+/g,           // internal agent IDs
/internal\.\w+\.com/g, // internal service URLs
/src\/internal\//g,    // internal file paths
/simply/gi             // forbidden word
];
const violations = [];
for (const pattern of patterns) {
const matches = content.match(pattern);
if (matches) violations.push({ pattern: pattern.toString(), matches });
}
return violations;
}
// If violations found → escalate to human (do not auto-publish)
// If clean → momental_document_publish(docId)
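To sanity-check the gate before wiring it into CI, run it against a deliberately leaky draft. This sketch inlines the checker so it runs standalone; the sample strings are illustrative.

```javascript
// Inlined copy of the leakage checker, with patterns properly escaped
function checkDocForLeakage(content) {
  const patterns = [
    /agent\d+/g,           // internal agent IDs
    /internal\.\w+\.com/g, // internal service URLs
    /src\/internal\//g,    // internal file paths
    /simply/gi             // forbidden word
  ];
  const violations = [];
  for (const pattern of patterns) {
    const matches = content.match(pattern);
    if (matches) violations.push({ pattern: pattern.toString(), matches });
  }
  return violations;
}

// A draft that breaks all four rules at once
const leakyDraft = "Simply call agent42 via internal.docs.com using src/internal/tools.";
checkDocForLeakage(leakyDraft).length; // → 4

// A clean draft produces no violations
checkDocForLeakage("Call the tool with the parameters listed above.").length; // → 0
```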
Step 6: Auto-Publish on Approval
// your documentation agent's publish flow after self-check passes
const doc = await momental_document_add({
title: `${toolName} - MCP Reference`,
content: generatedMarkdown,
domain: "documentation"
});
// Submit for review - Huginn picks this up
await momental_task_submit_review(taskId, {
summary: "Draft complete. Leakage self-check passed. Ready for Huginn review.",
artifactId: doc.id
});
// If Huginn approves (sets task to DONE):
await momental_document_publish({ documentId: doc.id });
// Page is now live and included in llms-full.txt on next deploy
Keeping Docs Fresh: The Drift Loop
The pipeline above handles new surfaces. For keeping existing docs current, run a weekly conflict detection scan and gap audit:
// Weekly drift detection (run as a cron task assigned to Huginn)
async function weeklyDocAudit() {
// 1. Find surfaces with no documentation
const gaps = await momental_find_gaps({ scope: "documentation" });
for (const gap of gaps) {
const task = await momental_task_create({
statement: `Doc: ${gap.surfaceName} (gap detected)`,
parentId: mcpEpicId, // the MCP epic id created in Step 1
acceptanceCriteria: buildDocAC(gap.surfaceName)
});
await momental_task_assign_agent({ taskId: task.id, agentId: "your-doc-agent-id" });
}
// 2. Trigger conflict detection - flags stale doc claims
await momental_trigger_conflict_detection();
// Conflicts appear in momental_conflicts_list - assigned to your documentation agent for updates
}