
How it works

The lifecycle of a belief — from observation to fused state, with audit and decay along the way.

A mental model of what happens when you call before() and after(). The behaviors the engine is required to honor — what you build against — live on the contracts page.

The lifecycle

Every piece of information that enters the system follows the same path:

```
observation ──▶ extraction ──▶ fusion ──▶ persistence + audit
                    │              │              │
              structured       merged into     ledger entry
              claims out       world state     for replay
```

You don't manage this lifecycle yourself. Calling before() and after() drives it. The runtime mutates state atomically and serially, so every mutation is durable and observable the moment it lands. That's what lets an agent course-correct mid-turn — if the first tool result contradicts a hypothesis, the next call already sees the updated state.
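
A minimal sketch of that loop, assuming before() and after() live on the same beliefs client used in the ledger example below; the exact call shapes and the tool result are illustrative, not the SDK's confirmed signatures:

```ts
// Start of a turn: read the current fused state before acting.
const state = await beliefs.before()

// Act on it. A stand-in for a real tool call:
const toolResult = 'Gartner reports 34% YoY growth in the segment'

// Feed the result back. Extraction, fusion, and the ledger entry all happen
// inside this call, atomically and serially.
await beliefs.after(toolResult)

// A later read in the same turn already sees the updated state, which is
// what lets the agent course-correct before its next action.
const updated = await beliefs.before()
```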

The runtime processes updates on two timescales. Real-time updates merge as evidence arrives, so later actions in the same turn operate on the newest understanding. Background processing runs more thorough analysis between turns: relationship detection, contradiction analysis, reassessment of the overall picture. Both feed the same belief state.

Fusion: combining contributions

When multiple agents — or multiple turns of the same agent — submit beliefs about the same claim, the engine merges them by trust weight. Higher-trust contributors move the fused state more; lower-trust contributors still contribute but with proportionally less pull. The fused state sharpens when sources agree and stays uncertain when they disagree.

Each agent and source carries a reliability weight. The engine starts with a calibrated baseline based on observed reliability, and you can override it at runtime via beliefs.trust.set(). Trust knobs behave predictably — lowering an agent's weight attenuates its contributions proportionally without affecting any other source.
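
For example, assuming the call takes a source identifier and a weight (the docs name beliefs.trust.set() but not its exact parameters):

```ts
// Halve the pull of a noisy scraping agent; every other source keeps its weight.
await beliefs.trust.set('agent_web_scraper', 0.5)
```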

Fusion is order-independent: combining the same set of contributions in any order produces the same result. Retries after a peer's write don't change the outcome.
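
The engine's actual fusion math isn't spelled out here, but a trust-weighted average is a rough illustration of both properties: higher-weight sources pull harder, and the result doesn't depend on the order of contributions.

```ts
interface Contribution {
  confidence: number // the contributor's stated confidence in the claim, 0..1
  trust: number      // the contributor's reliability weight
}

// A weighted sum divided by total weight gives the same result
// however the contributions are permuted.
function fuse(contributions: Contribution[]): number {
  const totalTrust = contributions.reduce((sum, c) => sum + c.trust, 0)
  const weighted = contributions.reduce((sum, c) => sum + c.trust * c.confidence, 0)
  return totalTrust === 0 ? 0 : weighted / totalTrust
}

// A high-trust analyst pulls the fused value far more than a
// low-trust scraper, but the scraper still contributes.
fuse([
  { confidence: 0.9, trust: 0.9 },
  { confidence: 0.4, trust: 0.2 },
]) // ≈ 0.81
```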

Decay: aging evidence

Without decay, agents act on stale analyses indefinitely — a six-month-old market estimate would carry the same weight as last week's verified data. Decay closes that gap: every belief's evidence weight shrinks over time, so old claims lose influence unless refreshed. Stale claims surface for re-verification rather than silently dominating.

Decay rates are configurable per workspace:

  • Fast — market sentiment, competitive intelligence, security posture: anything where last month's analysis is probably wrong now.
  • Standard (default) — market sizing, product positioning, strategic analyses: slow-moving but not static.
  • Slow — regulatory environments, fundamental research, architectural invariants: evidence stays relevant for quarters or years.
  • None — ground-truth observations and immutable historical facts. Use sparingly.

Decay applies on read, so the runtime always works with time-adjusted values. Decayed beliefs aren't deleted — they stay in the snapshot at reduced weight, so a UI can render them as muted/needs-re-verification rather than hiding them outright.
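
As an illustration of decay-on-read (the engine's real curve and per-workspace rates aren't documented here; this assumes a simple exponential half-life):

```ts
// Evidence weight halves every `halfLifeDays`. Applied when a belief is read,
// so stored weights never need rewriting.
function decayedWeight(
  weight: number,
  observedAt: Date,
  halfLifeDays: number,
  now = new Date()
): number {
  const ageDays = (now.getTime() - observedAt.getTime()) / 86_400_000
  return weight * Math.pow(0.5, ageDays / halfLifeDays)
}

// A six-month-old market estimate under a hypothetical 90-day half-life keeps
// only about a quarter of its original pull: visible, but no longer dominant.
decayedWeight(1.0, new Date('2025-03-01'), 90, new Date('2025-09-01')) // ≈ 0.24
```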

Evidence: types and the is/ought firewall

Different evidence types carry different weight at fusion time, calibrated so quality matters more than volume. A single verified measurement moves confidence more than several inferences.

| Type | Typical source path |
| --- | --- |
| measurement | Tool results from APIs, databases, instrumentation |
| citation | Tool results with cited sources; explicit add(text, { source }) |
| user-assertion | after(userMessage) from a user-facing surface |
| expert-judgment | add(text, { evidence }) with attributed reasoning |
| inference | after(agentOutput) extraction (default for free-form agent text) |
| assumption | add(text, { type: 'assumption' }) |

The engine assigns the type based on the source path during extraction. You can override it with the evidence option on add() when you know better.
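
For example, assuming add() lives on the same beliefs client and accepts the options named in the table above (the claim text and source string are illustrative):

```ts
// Explicit assumption: recorded with the least fusion weight.
await beliefs.add('Assume enterprise deal cycles run about six months', {
  type: 'assumption',
})

// Explicit source: extracted as a citation rather than the default inference.
await beliefs.add('Gartner reports 34% growth', {
  source: 'Gartner Market Guide, 2025', // illustrative citation
})
```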

The is/ought firewall is the most important design choice in evidence handling. Factual evidence updates beliefs; normative information (preferences, goals, desires) does not.

| Input | Effect |
| --- | --- |
| "The TAM is $5B" | Updates the market size belief |
| "Customer X reported a SOC2 audit failure on 2025-09-12" | Updates compliance/risk beliefs |
| "I want to target enterprise" | Recorded as a goal (intent) |
| "We've decided to target SOC2-compliant buyers" | Recorded as a constraint (intent) |
| "Gartner reports 34% growth" | Updates the growth-rate belief |

Without this separation, a user repeating "I want X" would gradually inflate the agent's confidence that X is true — preferences masquerading as evidence. The firewall keeps factual claims and normative intent on separate tracks. See Intent for how the normative side is handled.

Ledger: the audit trail

Every belief mutation lands in an append-only ledger. There's no in-place editing, no silent overwrite, no merge that erases history. If a belief exists in any state today, the ledger says how it got there.

Each entry captures what changed, who changed it, the state before and after, and a human-readable reason. Supersession is recorded as a new entry referencing the old one; deletions land as tombstone entries rather than erasing history.

```ts
// Workspace-wide trail
const all = await beliefs.trace()

// One belief's history
const history = await beliefs.trace('claim_market_size')

for (const entry of history) {
  console.log(`${entry.timestamp} | ${entry.action}`)
  if (entry.confidence) {
    console.log(`  ${entry.confidence.before} → ${entry.confidence.after}`)
  }
  if (entry.reason) console.log(`  reason: ${entry.reason}`)
}
```

For replay-shaped reads — "what did the world look like at time T?" — use beliefs.stateAt({ asOf }). It walks the ledger and rebuilds state for you.
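
For example, assuming asOf accepts an ISO timestamp:

```ts
// Rebuild the belief state as it stood just before a past ingestion run.
const snapshot = await beliefs.stateAt({ asOf: '2025-09-12T00:00:00Z' })
```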

The ledger is what makes calibration analysis possible (compare stated confidence with eventual outcomes), what makes debugging confidence shifts tractable ("why did this belief drop from 85% to 72%?"), and what makes audit trails possible without reconstruction.
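
Answering the "85% to 72%" question, for instance, is a filter over the same trace shown above:

```ts
const history = await beliefs.trace('claim_market_size')

// Surface only the entries that moved confidence downward, with their reasons.
for (const entry of history) {
  if (entry.confidence && entry.confidence.after < entry.confidence.before) {
    console.log(`${entry.timestamp}: ${entry.confidence.before} → ${entry.confidence.after}`)
    if (entry.reason) console.log(`  reason: ${entry.reason}`)
  }
}
```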

Learn more

  • Behavioral contracts: the eight guarantees the engine commits to.
  • Concepts: the vocabulary (beliefs, intent, clarity, moves, world).