Lyra

Lyra is not a chatbot. It's a knowledge system that happens to have a chat interface. Every answer is grounded in your team's actual decisions, learnings, and data - not the model's general training.

What Makes Lyra Different

Most AI assistants answer from general knowledge. Lyra answers from your knowledge. When you ask "what decisions did we make about pricing last quarter?", Lyra retrieves the exact DECISION atoms your team created, cites them with attribution, and surfaces any conflicts with your current strategy. It does not hallucinate an answer.
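The grounding contract can be sketched roughly like this (function and field names here are illustrative assumptions, not Lyra's actual API):

```javascript
// Illustrative sketch only, not Lyra's real implementation: every
// statement in the answer is drawn from a retrieved atom, each atom
// carries attribution, and an empty retrieval produces an explicit
// "nothing found" rather than an invented answer.
function groundAnswer(retrievedAtoms) {
  if (retrievedAtoms.length === 0) {
    return { answer: "No matching atoms in this workspace.", citations: [] };
  }
  return {
    answer: retrievedAtoms.map(a => a.statement).join(" "),
    citations: retrievedAtoms.map(a => ({ atomId: a.id, author: a.author }))
  };
}
```

The point of the sketch: the answer is assembled from atoms, so there is nothing to hallucinate - either the workspace contains the knowledge or Lyra says it doesn't.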

Finds What You Mean, Not Just What You Typed

Lyra understands your intent, not just your keywords. Ask a vague question and it still surfaces the right atoms - even when your wording doesn't match the original phrasing. The more knowledge you add to your workspace, the better Lyra's answers become.
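The general idea behind intent matching - ranking by embedding similarity rather than keyword overlap - looks like this (a minimal sketch, assuming atoms carry precomputed embedding vectors; Lyra's actual retrieval pipeline is not documented here):

```javascript
// Cosine similarity between two embedding vectors: 1 means same
// direction (same meaning, roughly), 0 means unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank atoms by semantic closeness to the query embedding, not by
// shared words - so a vague question can still surface the right atom.
function rankAtoms(queryEmbedding, atoms) {
  return atoms
    .map(atom => ({ ...atom, score: cosineSimilarity(queryEmbedding, atom.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```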

Contextual Memory

Lyra remembers you across sessions. Not just what you discussed - who you are, what you're working on, and what you care about. The right context is always present without manual management.

Lyra builds memory automatically. When you mention your role, priorities, or preferences, it saves them without being asked. It incorporates memory seamlessly - you will not see "I remember that you..." because good memory should be invisible.
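A rough sketch of what automatic capture means in practice - the trigger patterns and field names below are hypothetical, not Lyra's internal logic:

```javascript
// Hypothetical sketch: when a message states a role, priority, or
// preference, a memory entry is saved without being asked.
const TRIGGERS = [
  { kind: "role",       pattern: /\bI(?:'m| am) (?:a|the) ([\w\s-]+)/i },
  { kind: "preference", pattern: /\bI prefer ([\w\s-]+)/i },
  { kind: "priority",   pattern: /\bmy (?:top )?priority is ([\w\s-]+)/i }
];

function captureMemory(message) {
  for (const { kind, pattern } of TRIGGERS) {
    const match = message.match(pattern);
    if (match) return { kind, statement: match[1].trim(), source: "auto" };
  }
  return null; // nothing memorable: save nothing, say nothing
}
```

Note the `null` branch: when there is nothing worth remembering, nothing is saved and nothing is announced - the invisibility described above.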

Extended Thinking

Before generating a response, Lyra reasons internally — working through conflicting atoms and planning a multi-step answer before committing to any part of it. This thinking is not shown to you - it's the work done before the first word of the answer.

Lyra automatically adjusts its reasoning depth: deeper for complex, multi-step questions, lighter for quick lookups.
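One way to picture depth selection - this heuristic is purely illustrative, Lyra's actual mechanism is internal:

```javascript
// Hypothetical heuristic: questions with multiple complexity signals
// (comparisons, causal "why" questions, trade-off language) get deeper
// reasoning; plain lookups get a light pass.
function reasoningDepth(question) {
  const complexSignals = [
    /\bwhy\b/i, /\bcompare\b/i, /\btrade-?offs?\b/i, /\bversus\b/i
  ];
  const hits = complexSignals.filter(re => re.test(question)).length;
  if (hits >= 2) return "deep";
  if (hits === 1) return "standard";
  return "light";
}
```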

The Knowledge Extraction Flywheel

After every conversation, Lyra reads the full exchange and extracts discrete knowledge atoms from what was discussed, decided, or learned. These appear as draft atoms in the conversation UI for you to review and publish.

A real example - atoms extracted during a blog drafting session:

[
  { "type": "DATA",
    "statement": "Momental is publishing a series of Graham-style essays on alignment" },
  { "type": "LEARNING",
    "statement": "Momental's concept of the Context Tax describes how organizations lose velocity as institutional knowledge fragments" },
  { "type": "PRINCIPLE",
    "statement": "Momental's positioning frames knowledge transfer as infrastructure, not documentation" },
  { "type": "LEARNING",
    "statement": "The optimal essay length for content marketing that converts technically sophisticated readers is 2,000-3,500 words" }
]

Publishing these atoms takes about 30 seconds. Each one improves future conversations that touch the same domain. Teams that document through Lyra conversations build a compounding knowledge advantage over those that don't.
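The review-and-publish step amounts to this (a sketch under assumed names - the real publish API is not shown in this doc):

```javascript
// Hypothetical helper: draft atoms stay drafts until a person approves
// them; approved ones become published and queryable in the workspace.
function reviewDrafts(drafts, approvedIds) {
  return drafts.map(d =>
    approvedIds.includes(d.id) ? { ...d, status: "published" } : d
  );
}
```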

This flywheel connects to the broader autonomous intelligence system described in Autonomy & agents.

Proactive Messaging

Lyra doesn't only respond to questions. Periodically, it scans for signals that need surfacing: new conflicts in the knowledge graph, strategy nodes with stalled tasks, gaps in the derivation chain, or significant changes in the codebase.

When Lyra finds something worth your attention, it posts a structured brief to the team's agent room. On quiet days, nothing is posted. On eventful ones, you get a concise summary of exactly what needs attention - with citations to the relevant atoms and tasks.
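The shape of a proactive brief might look like this - the fields and signal kinds are illustrative assumptions, not Lyra's actual schema:

```javascript
// Hypothetical brief builder: post only when there are signals, and
// cite the atoms behind every item. An empty scan produces no message.
function buildBrief(signals) {
  if (signals.length === 0) return null; // quiet day: nothing is posted
  return {
    summary: `${signals.length} item(s) need attention`,
    items: signals.map(s => ({
      kind: s.kind,        // e.g. "conflict", "stalled_task", "gap"
      detail: s.detail,
      citations: s.atomIds // links back to the relevant atoms
    }))
  };
}
```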

Assigning Work to Lyra

Lyra can take on real work items, not just questions. Create a task with acceptance criteria, then assign it:

// Code review
const newTask = await task({
  statement: "Review PR #1234 - add rate limiting to /api/keys",
  parentId: "epic_security_q2",
  acceptanceCriteria: [
    "OWASP Top 10 patterns checked",
    "No secrets visible in diff",
    "Rate limit headers follow RFC 6585"
  ].join("\n")
});

await task({
  action: "assign",
  taskId: newTask.id,
  agentId: "huginn"
});
// Lyra picks up within 2 minutes and posts findings to the task thread