# Huginn
Huginn is not a chatbot. It's a knowledge system that happens to have a chat interface. Every answer is grounded in your team's actual decisions, learnings, and data - not the model's general training.
## What Makes Huginn Different
Most AI assistants answer from general knowledge. Huginn answers from your knowledge. The difference is the retrieval system: a multi-strategy approach that combines semantic understanding, keyword matching, and graph traversal - all before a single word of the response is generated.
The result: when you ask "what decisions did we make about pricing last quarter?", Huginn retrieves the exact DECISION atoms your team created, cites them with attribution, and surfaces any conflicts with your current strategy. It does not hallucinate an answer.
## Fusion Search
Huginn combines multiple independent search strategies into a single ranked list. Each strategy finds different things; the fusion ensures atoms that appear in multiple strategies rank highest.
All search strategies run in parallel and are additive, not filters. Entity search boosts entity-linked atoms without excluding results that lack entity links. This prevents the failure mode where a query about one person accidentally returns everything related to someone with a similar name.
An atom that surfaces in semantic search and keyword search accumulates a combined score that pushes it far above atoms that only appear in one strategy. The more strategies an atom surfaces in, the higher its final rank - regardless of how it performs in any single strategy.
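The document doesn't specify Huginn's exact fusion formula, so here is a minimal sketch using reciprocal rank fusion (RRF), a common way to combine independent ranked lists additively. The atom IDs and strategy results are hypothetical.

```typescript
type RankedList = string[]; // atom IDs, best first

// Reciprocal rank fusion: each strategy an atom appears in adds to its
// score - additive, never a filter. k dampens the influence of rank position.
function fuseRankings(strategies: RankedList[], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const list of strategies) {
    list.forEach((atomId, rank) => {
      scores.set(atomId, (scores.get(atomId) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return scores;
}

// An atom surfacing in both semantic and keyword results outranks atoms
// that appear in only one list, regardless of single-list position.
const semantic = ["atom_pricing_q3", "atom_roadmap"];
const keyword = ["atom_pricing_q3", "atom_budget"];
const fused = fuseRankings([semantic, keyword]);
```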
## 4-Tier Memory
Huginn remembers you across sessions. Not just what you discussed - who you are, what you're working on, and what you care about. Memory is divided into four tiers with different lifetimes and retrieval behavior:
| Tier | What it stores | Loaded | Decay |
|---|---|---|---|
| IDENTITY | Who you are: team, role, working style, preferences | Always - on every message | Permanent |
| FOCUS | What you're working on: current objectives and projects | Always - on every message | Lifecycle-aware |
| TACTICAL | What's happening: specific events, meetings, decisions in flight | On-demand via search | Weeks to months |
| ARCHIVE | Completed focuses and resolved context | Rarely - only when explicitly retrieved | Slow decay |
Older memories contribute less to each response over time. IDENTITY memories never decay. TACTICAL memories fade gradually over weeks. ARCHIVE memories are retrievable but rarely surface unprompted.
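The decay behavior above can be sketched as a tier-aware weighting function. The actual half-lives are not documented, so the constants here are illustrative assumptions.

```typescript
type Tier = "IDENTITY" | "FOCUS" | "TACTICAL" | "ARCHIVE";

// Assumed half-lives; IDENTITY never decays, and FOCUS decay is
// lifecycle-aware (tied to the focus closing) rather than time-based,
// so both are modeled as permanent in this sketch.
const HALF_LIFE_DAYS: Record<Tier, number> = {
  IDENTITY: Infinity,
  FOCUS: Infinity,
  TACTICAL: 30,  // fades over weeks
  ARCHIVE: 365,  // slow decay, rarely surfaces unprompted
};

// How much a memory of a given age contributes to a response (1 = fully).
function memoryWeight(tier: Tier, ageDays: number): number {
  const halfLife = HALF_LIFE_DAYS[tier];
  if (!isFinite(halfLife)) return 1; // permanent tiers contribute fully
  return Math.pow(0.5, ageDays / halfLife); // exponential decay
}
```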
Huginn builds memory automatically. When you mention your role, priorities, or preferences, it saves them without being asked. It incorporates memory seamlessly - you will not see "I remember that you..." because good memory should be invisible.
## Extended Thinking
Before generating a response, Huginn reasons internally, working through the retrieved context before committing to an answer. This thinking is not shown to you - it's the work done before the first word of the response.
Extended thinking enables Huginn to:
- Plan a multi-step response before committing to any part of it
- Weigh conflicting atoms before deciding which to surface
- Identify what's missing from the retrieved context before answering
- Handle ambiguous questions by reasoning about which interpretation is most useful
Huginn automatically adjusts its reasoning depth - deeper for complex, multi-step questions; lighter for quick lookups.
## The Knowledge Extraction Flywheel
After every conversation, Huginn reads the full exchange and extracts discrete knowledge atoms from what was discussed, decided, or learned. These appear as draft atoms in the conversation UI for you to review and publish.
A real example - atoms extracted during a blog drafting session:
```json
[
  { "type": "DATA",
    "statement": "Momental is publishing a series of Graham-style essays on alignment" },
  { "type": "LEARNING",
    "statement": "Momental's concept of the Context Tax describes how organizations lose velocity as institutional knowledge fragments" },
  { "type": "PRINCIPLE",
    "statement": "Momental's positioning frames knowledge transfer as infrastructure, not documentation" },
  { "type": "LEARNING",
    "statement": "The optimal essay length for content marketing that converts technically sophisticated readers is 2,000-3,500 words" }
]
```

Publishing these atoms takes 30 seconds. Each one improves future conversations that touch the same domain. A team that documents through Huginn conversations builds a compounding knowledge advantage over one that doesn't.
This flywheel connects to the broader autonomous intelligence system described in Autonomy & agents.
## Proactive Messaging
Huginn doesn't only respond to questions. Periodically, it scans for signals that need surfacing: new conflicts in the knowledge graph, strategy nodes with stalled tasks, gaps in the derivation chain, or significant changes in the codebase.
When Huginn finds something worth your attention, it posts a structured brief to the team's agent room. On quiet days, nothing is posted. On eventful ones, you get a concise summary of exactly what needs attention - with citations to the relevant atoms and tasks.
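A minimal sketch of the scan-and-brief behavior described above. The signal shape and brief format are assumptions for illustration, not Huginn's real interface.

```typescript
// Hypothetical signal produced by a periodic scan of the knowledge graph.
interface Signal {
  kind: "conflict" | "stalled_task" | "derivation_gap" | "code_change";
  summary: string;
  atomIds: string[]; // citations to the relevant atoms
}

// Compose a brief for the agent room. On quiet days (no signals),
// nothing is posted - the function returns null.
function composeBrief(signals: Signal[]): string | null {
  if (signals.length === 0) return null;
  const lines = signals.map(
    (s) => `- [${s.kind}] ${s.summary} (cites: ${s.atomIds.join(", ")})`
  );
  return `Needs attention (${signals.length}):\n${lines.join("\n")}`;
}
```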
## Assigning Work to Huginn

Tasks can be created and assigned to Huginn directly through the MCP tools:
```typescript
import { momental_task_create, momental_task_assign_agent } from '@momental/mcp';

// Code review
const task = await momental_task_create({
  statement: "Review PR #1234 - add rate limiting to /api/keys",
  parentId: "epic_security_q2",
  acceptanceCriteria: [
    "OWASP Top 10 patterns checked",
    "No secrets visible in diff",
    "Rate limit headers follow RFC 6585"
  ].join("\n")
});

await momental_task_assign_agent({ taskId: task.id, agentId: "huginn" });
// Huginn picks up within 2 minutes, posts findings to the task thread
```