sdk/patterns.mdx

Patterns

Common patterns for using beliefs in your agent.

The Observe-Act-Update Loop

Examples assume a valid scope

For the simplest copy-paste setup, instantiate your client with writeScope: 'space'. If you are building chat or session memory, bind to a thread instead, either by passing writeScope: 'thread' along with a thread ID or by calling beliefs.withThread(threadId).
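To make the two scope bindings concrete, here is a minimal stub, not the real SDK, that only records which scope a client's writes would target. The BeliefsStub class and writeTarget method are illustrative names invented for this sketch:

```ts
// Illustrative stub: shows the shape of 'space' vs 'thread' scope binding.
// BeliefsStub and writeTarget are hypothetical, not part of the SDK.
type WriteScope = 'space' | 'thread'

class BeliefsStub {
  constructor(private scope: WriteScope, private threadId?: string) {}

  // Mirrors beliefs.withThread(threadId): returns a thread-bound client
  withThread(threadId: string): BeliefsStub {
    return new BeliefsStub('thread', threadId)
  }

  // Where a write from this client would land
  writeTarget(): string {
    return this.scope === 'space' ? 'space' : `thread:${this.threadId}`
  }
}

const spaceClient = new BeliefsStub('space')
const chatClient = spaceClient.withThread('thread-42')

console.log(spaceClient.writeTarget()) // → space
console.log(chatClient.writeTarget()) // → thread:thread-42
```

The point of the sketch: a space-scoped client writes shared state, while withThread derives a client whose writes are isolated to one conversation.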

The observe-act-update loop: read belief state, act, update with what you observed, then route on the resulting delta.

```ts
const context = await beliefs.before(input)

const result = await agent.run({ context: context.prompt })

const delta = await beliefs.after(result.text)

// Route on readiness
if (delta.readiness === 'high') {
  // Ready to act — draft recommendations
} else if (delta.moves[0]?.action === 'research') {
  // Highest-value move is research — keep investigating
} else {
  // Validate or clarify existing beliefs
}
```

Clarity-Driven Routing

Check clarity before deciding what to do.

```ts
const context = await beliefs.before(input)

if (context.clarity < 0.3) {
  await runResearch(context.gaps)
} else if (context.clarity > 0.7) {
  await draftRecommendations(context.beliefs)
} else {
  await investigateGaps(context.gaps)
}
```
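The thresholds above can be factored into a pure helper, which is handy for unit-testing routing logic without touching the SDK. The function and route names here are illustrative, not part of the API:

```ts
// Pure routing helper mirroring the 0.3 / 0.7 clarity thresholds above.
// routeOnClarity and the Route labels are illustrative names.
type Route = 'research' | 'draft' | 'investigate'

function routeOnClarity(clarity: number): Route {
  if (clarity < 0.3) return 'research'      // too unclear — go gather data
  if (clarity > 0.7) return 'draft'         // clear enough — act
  return 'investigate'                      // in between — probe the gaps
}

console.log(routeOnClarity(0.1)) // → research
console.log(routeOnClarity(0.9)) // → draft
console.log(routeOnClarity(0.5)) // → investigate
```

Keeping the decision in a pure function lets you tune thresholds and test them in isolation, then call the SDK actions from the returned route.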

Confidence Gating

Only act on beliefs above a confidence threshold.

```ts
const world = await beliefs.read()

const strong = world.beliefs.filter(b => b.confidence > 0.7)
const weak = world.beliefs.filter(b => b.confidence <= 0.7)

// Use strong beliefs in the response
// Flag weak beliefs for further investigation
```

Gap-Driven Research

Read gaps and use them to drive the next research action.

```ts
const context = await beliefs.before(input)

for (const gap of context.gaps) {
  const result = await searchTool.run(gap)
  await beliefs.after(result, { tool: 'search' })
}
```

The agent's next action is driven by what it does not know, not just what the user asked.

Incremental Updates During Execution

Update beliefs after each tool call. Clarity and moves update in real time, so the agent can decide whether to keep researching or pivot.

```ts
const context = await beliefs.before(input)

for await (const step of agent.stream()) {
  if (step.type === 'tool_result') {
    const delta = await beliefs.after(step.result, { tool: step.name })

    // The system tells you when you've learned enough
    if (delta.readiness === 'high') {
      agent.stop()
      break
    }
  }
}
```

Multi-Agent Shared State

Multiple agents can contribute to the same space-wide state: give each client the same namespace and writeScope: 'space'.

```ts
const researcher = new Beliefs({
  apiKey,
  namespace: 'shared-workspace',
  agent: 'researcher',
  writeScope: 'space',
})

const analyst = new Beliefs({
  apiKey,
  namespace: 'shared-workspace',
  agent: 'analyst',
  writeScope: 'space',
})

// Researcher gathers data
await researcher.after(researchResult)

// Analyst interprets it, sees the researcher's contributions
const context = await analyst.before('Interpret the research findings')
```

Custom Assertion with Evidence

When you have domain-specific knowledge, assert it directly with evidence:

```ts
await beliefs.add('Market is $6.8B', {
  confidence: 0.92,
  evidence: 'IDC Q4 2025 tracker, 2400 enterprise survey',
  supersedes: 'Market is $4.2B',
})
```

Explicit assertions take precedence over auto-extracted beliefs when they conflict.
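The precedence rule can be pictured with a small self-contained sketch. The Belief shape and resolve helper below are invented for illustration; they are not the SDK's internal representation:

```ts
// Sketch of the precedence rule: an explicit assertion that `supersedes`
// an auto-extracted belief wins when the two conflict.
// The Belief interface and resolve() are hypothetical, not SDK internals.
interface Belief {
  statement: string
  confidence: number
  source: 'explicit' | 'auto'
  supersedes?: string
}

function resolve(beliefs: Belief[]): Belief[] {
  // Collect statements displaced by explicit assertions
  const superseded = new Set(
    beliefs
      .filter(b => b.source === 'explicit' && b.supersedes)
      .map(b => b.supersedes as string)
  )
  // Drop the displaced beliefs; keep everything else
  return beliefs.filter(b => !superseded.has(b.statement))
}

const world = resolve([
  { statement: 'Market is $4.2B', confidence: 0.6, source: 'auto' },
  {
    statement: 'Market is $6.8B',
    confidence: 0.92,
    source: 'explicit',
    supersedes: 'Market is $4.2B',
  },
])

console.log(world.map(b => b.statement)) // → [ 'Market is $6.8B' ]
```

The old auto-extracted figure is removed from the working set, while unrelated beliefs pass through untouched.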

Inspecting the Trace

Use the trace to debug belief transitions:

```ts
const history = await beliefs.trace()

for (const entry of history) {
  console.log(`${entry.timestamp} | ${entry.action}`)
  if (entry.confidence) {
    console.log(`  confidence: ${entry.confidence.before} → ${entry.confidence.after}`)
  }
  if (entry.reason) {
    console.log(`  reason: ${entry.reason}`)
  }
}
```
