start/intro.mdx

Introduction

Structured belief state for AI agents. What it is, what it does, why it exists.

The Missing Layer

Your agent produces claims on every turn. Those claims live in the token stream and nowhere else.

There is no tracked confidence. No evidence weight. No conflict detection. No awareness of what is missing. A peer-reviewed study and a guess three turns ago look identical.

Beliefs gives your agent a structured model of its current understanding: what it holds to be true, how confident it is, what evidence backs it, and where it conflicts.

```ts
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'intro-example',
  writeScope: 'space',
})
```

To get started, use `writeScope: 'space'` so that a namespace holds a single shared belief state. If your app needs per-conversation memory, bind a thread later with `beliefs.withThread(threadId)`.
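To make the scoping idea concrete, here is a throwaway in-memory stand-in (not the real SDK, whose internals are server-side): a thread-bound client carries its own scope while the base client stays shared.

```ts
// Hypothetical stand-in for the Beliefs client, only to illustrate scoping.
class FakeBeliefs {
  constructor(private scope: string = 'space') {}

  // A thread-bound client reads and writes its own belief state.
  withThread(threadId: string): FakeBeliefs {
    return new FakeBeliefs(`thread:${threadId}`)
  }

  get currentScope(): string {
    return this.scope
  }
}

const shared = new FakeBeliefs()
const perConvo = shared.withThread('conv-42')
console.log(shared.currentScope)   // 'space'
console.log(perConvo.currentScope) // 'thread:conv-42'
```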

The Core Loop

Every agent turn follows three steps: read state, act, observe.

```ts
// 1. Read current belief state
const context = await beliefs.before(userMessage)

// 2. Run the agent with belief context
const result = await myAgent.run({ context: context.prompt })

// 3. Feed the observation — get back what changed and the new state
const delta = await beliefs.after(result.text)
// delta.changes   — what was created/updated
// delta.readiness — 'low', 'medium', or 'high'
// delta.state     — full updated world state
```

`beliefs.before()` returns what the agent currently believes — claims with confidence, goals, gaps, a clarity score, and suggested next moves. `beliefs.after()` submits what happened — the infrastructure handles extraction, conflict detection, and provenance tracking.
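The shapes involved might look roughly like this. The field names come from the descriptions above; the exact typings are an assumption for illustration, not the published SDK types.

```ts
// Hypothetical typings inferred from the prose; the real SDK may differ.
interface Claim {
  text: string
  confidence: number // 0-1
}

interface BeliefContext {
  claims: Claim[]
  goals: string[]
  gaps: string[]
  clarity: number // 0-1 readiness score
  moves: string[] // suggested next actions
  prompt: string  // rendered context to hand to the agent
}

interface BeliefDelta {
  changes: Claim[] // what was created/updated
  readiness: 'low' | 'medium' | 'high'
  state: BeliefContext // full updated world state
}

// A sample delta, just to show the shape:
const delta: BeliefDelta = {
  changes: [{ text: 'User prefers weekly digests', confidence: 0.8 }],
  readiness: 'medium',
  state: { claims: [], goals: [], gaps: [], clarity: 0.6, moves: [], prompt: '' },
}
```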

What You Get

```
┌──────────────────────────────────────────────────────┐
│                     Belief State                     │
│                                                      │
│  Beliefs  ─── claims with confidence scores          │
│  Edges    ─── supports, contradicts, derives         │
│  Goals    ─── what the agent is pursuing             │
│  Gaps     ─── what the agent does not know           │
│  Clarity  ─── 0-1 readiness score                    │
│  Moves    ─── ranked next actions by info gain       │
│  History  ─── full audit trail of every transition   │
│                                                      │
└──────────────────────────────────────────────────────┘
```

API

| Method | Description |
| --- | --- |
| `beliefs.before(input?)` | Read beliefs + clarity + moves before the agent acts |
| `beliefs.after(text, opts?)` | Feed observation → extract → fuse → get delta |
| `beliefs.add(text, opts?)` | Assert a specific belief, goal, or gap |
| `beliefs.resolve(text)` | Mark a gap as resolved |
| `beliefs.read()` | Read the full fused world state |
| `beliefs.search(query)` | Find beliefs by text |
| `beliefs.trace(id?)` | Query the belief audit trail |
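To show the call patterns for `add()`, `resolve()`, and `search()`, here is a toy in-memory stand-in. It mimics only the call shapes; the hosted SDK performs extraction, fusion, and deduplication server-side, and its option names may differ.

```ts
// Toy stand-in mimicking the call shapes of add(), resolve(), and search().
type Entry = { text: string; kind: 'belief' | 'goal' | 'gap'; resolved: boolean }

class ToyBeliefs {
  private entries: Entry[] = []

  // Assert a specific belief, goal, or gap.
  async add(text: string, opts?: { kind?: Entry['kind'] }): Promise<Entry> {
    const entry: Entry = { text, kind: opts?.kind ?? 'belief', resolved: false }
    this.entries.push(entry)
    return entry
  }

  // Mark a gap as resolved; returns whether a matching gap was found.
  async resolve(text: string): Promise<boolean> {
    const gap = this.entries.find(e => e.kind === 'gap' && e.text === text)
    if (gap) gap.resolved = true
    return Boolean(gap)
  }

  // The real service uses embedding similarity; substring match stands in here.
  async search(query: string): Promise<Entry[]> {
    const q = query.toLowerCase()
    return this.entries.filter(e => e.text.toLowerCase().includes(q))
  }
}
```

Usage follows the same shape as the hosted client: `await beliefs.add('budget is unknown', { kind: 'gap' })`, then `await beliefs.resolve('budget is unknown')` once the answer arrives.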

What the Hosted API Does for You

When connected to the Thinkn backend, the infrastructure handles:

  • Extraction — LLM-powered belief extraction from agent output and tool results
  • Linking — Automatic detection of contradictions, support, and derivation relationships
  • Deduplication — Embedding-based similarity matching to prevent duplicate beliefs
  • Fusion — Trust-weighted merging across multiple agents
  • Clarity — Readiness assessment (0-1) based on decision resolution, certainty, coherence, and coverage
  • Thinking Moves — Ranked next actions by expected information gain
  • Provenance — Full audit trail of every belief transition with entropy tracking
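For intuition on the fusion step, one generic approach (a sketch of the technique, not Thinkn's published algorithm) is to weight each agent's reported confidence by a per-source trust score:

```ts
// Trust-weighted fusion: combine confidence reports for the same claim
// from multiple agents, weighting each by how much that source is trusted.
// Generic sketch only; the hosted API's actual algorithm is not public.
interface Report {
  confidence: number // agent's confidence in the claim, 0-1
  trust: number      // how much we trust this agent, 0-1
}

function fuseConfidence(reports: Report[]): number {
  const totalTrust = reports.reduce((s, r) => s + r.trust, 0)
  if (totalTrust === 0) return 0
  return reports.reduce((s, r) => s + r.confidence * r.trust, 0) / totalTrust
}

// Two agents report on the same claim; the more trusted one dominates.
const fused = fuseConfidence([
  { confidence: 0.9, trust: 0.8 },
  { confidence: 0.4, trust: 0.2 },
])
// fused ≈ (0.9·0.8 + 0.4·0.2) / 1.0 ≈ 0.8
```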

What the SDK Does Not Do

  • Does not replace your agent framework
  • Does not decide what your agent does
  • Does not require a specific LLM provider
  • Does not sit in the critical path of your LLM calls

The SDK wraps your loop. It does not own it.
