core/clarity.mdx

Clarity

A single score that tells you how ready your agent is to act.

What Clarity Answers

Clarity is a 0-1 score that answers one question: does this agent understand enough to move forward?

It combines multiple signals into one number. Low clarity means keep investigating. High clarity means the agent has enough to act.

The Two-Channel Insight

This is the most important concept in the SDK.

Consider two claims, both at 50% confidence.

One has no evidence. The agent has never investigated. It is a guess. The other has 40 data points that genuinely split both ways.

The first needs research. The second needs a decision framework. Beliefs tracks two separate channels to distinguish them:

```
┌──────────────────────────────────────────────────────────────┐
│                      THE TWO QUESTIONS                       │
│                                                              │
│   1. DECISION RESOLUTION: "Can we make a call?"              │
│      ─────────────────────────────────────────               │
│      80% → Yes, lean toward it                               │
│      50% → No, it is ambiguous                               │
│      99% → Strong signal                                     │
│                                                              │
│   2. KNOWLEDGE CERTAINTY: "Have we done the work?"           │
│      ─────────────────────────────────────────               │
│      Just stated     → No evidence yet                       │
│      10 data points  → Some certainty                        │
│      100 data points → High certainty in our assessment      │
│                                                              │
└──────────────────────────────────────────────────────────────┘
```
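In code, the two channels can be pictured as separate fields. The shape below is purely illustrative; the SDK's actual type definitions may differ:

```typescript
// Illustrative only: not the SDK's real types.
interface BeliefSignals {
  decisionResolution: number; // 0-1: the claim's position; 0.5 is maximally ambiguous
  knowledgeCertainty: number; // 0-1: how much evidence has accumulated
}

// Two claims "at 50%" that need opposite treatment:
const guess: BeliefSignals = { decisionResolution: 0.5, knowledgeCertainty: 0.0 };
const closeCall: BeliefSignals = { decisionResolution: 0.5, knowledgeCertainty: 0.9 };
// The first needs research; the second needs a decision framework.
```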

The Four Quadrants

```
                     Knowledge Certainty
                    Low              High
               ┌────────────┬────────────────┐
    High       │            │                │
    Decision   │  Belief    │  Validated     │
    Resolution │  without   │  belief.       │
               │  evidence. │  Ready to act. │
               │  ▶ Invest- │                │
               │    igate.  │                │
               ├────────────┼────────────────┤
    Low        │            │                │
    Decision   │  No idea.  │  Genuinely     │
    Resolution │  Start     │  uncertain.    │
               │  from      │  Surface       │
               │  scratch.  │  trade-offs.   │
               │            │  ▶ Decide,     │
               │            │    don't       │
               │            │    research.   │
               └────────────┴────────────────┘
```

The bottom-right quadrant is the critical one. "We have done extensive research and this is genuinely a close call" is a valuable conclusion. The system should help the user decide.

Knowing you do not know is categorically different from not knowing. The two-channel model captures this distinction.
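The four quadrants can be sketched as a routing function. The 0.25 and 0.5 thresholds below are assumptions chosen for illustration, not values from the SDK:

```typescript
// Illustrative quadrant routing; thresholds are assumptions.
type Quadrant = "act" | "investigate" | "decide" | "start-from-scratch";

function quadrant(decisionResolution: number, knowledgeCertainty: number): Quadrant {
  // "Resolved" means the claim sits far from the ambiguous midpoint of 0.5.
  const resolved = Math.abs(decisionResolution - 0.5) > 0.25;
  const evidenced = knowledgeCertainty > 0.5;
  if (resolved && evidenced) return "act";           // validated belief
  if (resolved && !evidenced) return "investigate";  // belief without evidence
  if (!resolved && evidenced) return "decide";       // genuinely uncertain: surface trade-offs
  return "start-from-scratch";                       // no idea
}
```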

What Clarity Measures

Clarity combines four signals, weighted by their relative importance:

Decision resolution. Are key claims far enough from ambiguous to act on?

Knowledge certainty. Has enough evidence accumulated to trust the current picture?

Coherence. Do the beliefs hang together, or are there unresolved contradictions?

Coverage. Are important areas addressed, or are there large gaps?

Open gaps reduce clarity. Gaps with more downstream dependencies reduce it more. This creates natural pressure to fill the gaps that matter most.
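As a sketch, the combination could look like a weighted sum over the four signals. The weights below are placeholders for illustration; the SDK's actual weighting is not documented here:

```typescript
// Placeholder weights, not the SDK's real ones. They sum to 1 so the score stays in 0-1.
const weights = { resolution: 0.35, certainty: 0.3, coherence: 0.2, coverage: 0.15 };

interface ClaritySignals {
  resolution: number; // are key claims far enough from ambiguous?
  certainty: number;  // has enough evidence accumulated?
  coherence: number;  // do the beliefs hang together?
  coverage: number;   // are important areas addressed?
}

function clarityScore(s: ClaritySignals): number {
  return (
    weights.resolution * s.resolution +
    weights.certainty * s.certainty +
    weights.coherence * s.coherence +
    weights.coverage * s.coverage
  );
}
```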

Load-Bearing Beliefs

Some beliefs carry more weight than others. A load-bearing belief is one that, if proven wrong, would collapse the strategy built on top of it.

"The TAM is $4.2B" is load-bearing if the pricing model, fundraising projections, and go-to-market plan all depend on it.

The system identifies these high-dependency beliefs and flags them when they are weakly evidenced or stale.
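One way to picture that flagging, with a hypothetical data model and assumed thresholds (the SDK's internal criteria are not documented here):

```typescript
// Hypothetical model for flagging load-bearing beliefs; fields and thresholds are assumptions.
interface Belief {
  claim: string;
  dependents: string[];        // plans and beliefs that build on this one
  knowledgeCertainty: number;  // 0-1 earned evidence
  ageDays: number;             // days since last supporting evidence
}

// Flag high-dependency beliefs that are weakly evidenced or stale.
function loadBearingRisks(beliefs: Belief[]): Belief[] {
  return beliefs.filter(
    (b) => b.dependents.length >= 3 && (b.knowledgeCertainty < 0.4 || b.ageDays > 30)
  );
}

const risky = loadBearingRisks([
  { claim: "TAM is $4.2B", dependents: ["pricing", "fundraising", "go-to-market"], knowledgeCertainty: 0.2, ageDays: 5 },
  { claim: "Logo color tests well", dependents: [], knowledgeCertainty: 0.1, ageDays: 100 },
]);
// risky contains only the TAM belief: many dependents, weak evidence.
```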

Directing Attention

With explicit beliefs, gaps, and uncertainty, the system can identify which actions would most reduce uncertainty in the beliefs that matter most. It considers what gaps exist, which beliefs are weakly supported, and where contradictions remain unresolved.

A research action that fills a high-impact gap is prioritized over one that confirms something already well-supported. An action that tests a fragile, load-bearing assumption is prioritized over one that validates a peripheral detail.
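That prioritization can be pictured as a simple expected-value score. The fields and formula here are illustrative assumptions, not the SDK's internals:

```typescript
// Hypothetical scoring: value grows with what depends on a belief
// and with how uncertain that belief still is.
interface CandidateAction {
  name: string;
  targetCertainty: number; // current knowledge certainty of the belief this action tests
  downstreamDeps: number;  // how many conclusions depend on that belief
}

function priority(a: CandidateAction): number {
  return a.downstreamDeps * (1 - a.targetCertainty);
}

const fillGap = priority({ name: "size the market", targetCertainty: 0.2, downstreamDeps: 5 });
const confirm = priority({ name: "re-check a settled detail", targetCertainty: 0.9, downstreamDeps: 1 });
// fillGap scores far higher than confirm.
```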

See Moves for how the system surfaces these recommendations.

Using Clarity in Your Agent

Read clarity from beliefs.read():

```typescript
const world = await beliefs.read()
```

Route on it:

```typescript
if (world.clarity < 0.3) {
  // Not enough to work with. Research the biggest gaps.
  await runResearch(world.gaps)
} else if (world.clarity > 0.7) {
  // Ready to act. Draft recommendations.
  await draftRecommendations(world.beliefs)
} else {
  // Middle ground. Investigate remaining gaps.
  await investigateGaps(world.gaps)
}
```

Clarity and Uncertainty

A genuinely uncertain topic can have high clarity. If the research agent investigates extensively and finds that the market could go either way, clarity can be high. The agent has done the work to understand that the question is uncertain.

Clarity measures readiness to act. Low clarity means "keep investigating." High clarity means "you have enough to make a decision, even if the decision is hard."

How Knowledge Certainty Accumulates

When you seed beliefs with add(), Knowledge Certainty starts at zero. This is by design.

add('Market is $4.2B', { confidence: 0.8 }) sets the belief's starting position (Decision Resolution reflects 0.8), but the system has not yet seen evidence for it. Knowledge Certainty tracks earned evidence: not where the belief started, but how much data has accumulated since.

KC grows when:

  • The same claim receives additional evidence via after() (LLM extraction finds supporting or refuting data)
  • Multiple observations reinforce the same claim over time
  • Tool results provide independent confirmation

This distinction matters. A belief stated with high confidence but no evidence is in the "belief without evidence" quadrant. The system correctly flags it as needing validation rather than treating stated confidence as proof.

To build KC quickly, use after() to process real agent output rather than seeding everything with add(). The extraction pipeline finds evidence in the agent's work and accumulates it against existing beliefs.
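The seeded-versus-earned distinction can be modeled with a toy in-memory belief (this is not the real SDK, and the 0.2 accumulation rate is an arbitrary illustration):

```typescript
// Toy model of earned Knowledge Certainty; not the real SDK.
class ToyBelief {
  constructor(
    public claim: string,
    public decisionResolution: number, // seeded starting position, e.g. 0.8
    public knowledgeCertainty = 0      // earned evidence always starts at zero
  ) {}

  // Each independent observation nudges KC toward 1 with diminishing returns.
  observe(): void {
    this.knowledgeCertainty += (1 - this.knowledgeCertainty) * 0.2;
  }
}

const tam = new ToyBelief("Market is $4.2B", 0.8);
// Seeding sets position, not certainty: tam.knowledgeCertainty is 0 here.
tam.observe();
tam.observe();
// KC grows only as evidence accumulates, regardless of the stated confidence.
```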

Next Steps

  • Moves: how the system recommends what to do next.
  • Patterns: clarity-driven routing and other patterns.