## Setup
```js
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'project-alpha',
  writeScope: 'space',
})
```

That's it. The only required option is `apiKey`. For multi-agent systems, add `agent` and `namespace`:
```js
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  agent: 'research-agent',
  namespace: 'project-alpha',
  writeScope: 'space',
})
```

| Option | Default | What it does |
|---|---|---|
| `apiKey` | — | Required. Your API key. |
| `agent` | `'agent'` | Who is contributing beliefs. Use different names for different agents sharing a namespace. |
| `namespace` | `'default'` | Developer-facing isolation boundary. Each namespace maps to its own backing workspace. |
| `thread` | — | Bind a thread for `writeScope: 'thread'`. Use this for per-conversation or per-task memory. |
| `writeScope` | `'thread'` | Which layer is authoritative: `'thread'`, `'agent'`, or `'space'`. See Scoping. |
| `contextLayers` | Depends on `writeScope` | Which layers `before()` and `read()` merge. Thread defaults to `['self', 'agent', 'space']`. |
| `baseUrl` | `'https://www.thinkn.ai'` | Override the API origin for local or self-hosted environments. |
| `timeout` | `120000` | Request timeout in ms. |
| `maxRetries` | `2` | Auto-retries on 429/5xx with exponential backoff. |
| `debug` | `false` | Logs every request and response to the console. |
**Quickstart vs chat apps**

For copy-paste examples, `writeScope: 'space'` is the simplest setup. The SDK default is `writeScope: 'thread'`, which is ideal for chat and session memory but requires a bound thread.
## `beliefs.withThread(threadId)`
Bind a thread later while preserving the rest of the client config:
```js
const baseBeliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'support',
  writeScope: 'thread',
})

const beliefs = baseBeliefs.withThread(conversationId)
```

## `beliefs.before(input?)`
Get the agent's current understanding before it acts.
```js
const context = await beliefs.before('Research the AI tools market')
```

Returns:

```json
{
  "prompt": "{\"state\":{\"goals\":[\"Determine total addressable market\"],\"claims\":[{\"text\":\"AI tools market is valued at $4.2B\",\"confidence\":0.8},{\"text\":\"GitHub Copilot has dominant market share\",\"confidence\":0.85}],\"phase\":\"researching\",\"uncertainty\":0.58},\"gaps\":[\"Missing APAC market data\"],\"contradictions\":[]}",
  "beliefs": [
    { "id": "xK9mR2vL3pT4nW8q", "text": "AI tools market is valued at $4.2B", "confidence": 0.80, "type": "claim" },
    { "id": "pT4nW8qJ5mR7vL2x", "text": "GitHub Copilot has dominant market share", "confidence": 0.85, "type": "claim" }
  ],
  "goals": ["Determine total addressable market"],
  "gaps": ["Missing APAC market data"],
  "clarity": 0.42,
  "moves": [
    { "action": "research", "target": "xK9mR2vL3pT4nW8q", "reason": "Market size has one source — verify with a second", "value": 0.7 }
  ]
}
```

What you use: Inject `context.prompt` into your agent's system prompt. Check `context.clarity` to decide whether to keep investigating or act. Follow `context.moves[0]` for the highest-value next step.
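Putting that together, the act-or-investigate decision can be sketched as a small helper. `nextStep` and its 0.7 clarity threshold are illustrative choices, not part of the SDK:

```js
// Decide the agent's next step from a before() context.
// The 0.7 clarity threshold is an illustrative choice, not an SDK default.
function nextStep(context, threshold = 0.7) {
  if (context.clarity >= threshold) return { mode: 'act' }
  const move = context.moves[0] // moves come pre-ranked by expected value
  return move ? { mode: 'investigate', move } : { mode: 'act' }
}

const step = nextStep({
  clarity: 0.42,
  moves: [{ action: 'research', target: 'xK9mR2vL3pT4nW8q', value: 0.7 }],
})
// low clarity plus a ranked move: keep investigating
```

Feed `step.move` back into your agent's next turn when `step.mode` is `'investigate'`.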
## `beliefs.after(text, options?)`
Feed the agent's output. Beliefs are extracted, conflicts detected, and the world state updated automatically.
```js
const delta = await beliefs.after(agentOutput)
```

For tool results, tag the tool name so the system knows the source:

```js
await beliefs.after(JSON.stringify(searchResults), { tool: 'web_search' })
```

You can also label the source explicitly — this gets stored on each extracted belief and appears in the trace:

```js
await beliefs.after(agentOutput, { source: 'quarterly-earnings-call' })
```

Returns:
```json
{
  "changes": [
    { "action": "created", "beliefId": "hV7bQ3kN9yU6wE4r", "text": "European market is 28% of global revenue" },
    { "action": "updated", "beliefId": "xK9mR2vL3pT4nW8q", "text": "AI tools market is valued at $4.2B", "confidence": { "before": 0.80, "after": 0.75 }, "reason": "New regional data suggests original estimate may exclude segments" }
  ],
  "clarity": 0.58,
  "readiness": "medium",
  "moves": [
    { "action": "gather_evidence", "target": "xK9mR2vL3pT4nW8q", "reason": "Market size estimate weakened — need authoritative source", "value": 0.8 }
  ],
  "state": { "beliefs": ["..."], "goals": ["..."], "gaps": ["..."], "clarity": 0.58, "...": "..." }
}
```

What you use: Check `delta.readiness` to route — `'high'` means act, `'low'` means keep investigating. `delta.changes` tells you exactly what the system learned. `delta.state` is the full world state if you need it.
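The routing described above can be sketched as two small helpers. Both are hypothetical; treating `'medium'` as "keep investigating" is one reasonable policy, not an SDK rule:

```js
// Route on delta.readiness: only 'high' means act.
// Treating 'medium' like 'low' is an illustrative policy choice.
function routeOnReadiness(delta) {
  return delta.readiness === 'high' ? 'act' : 'investigate'
}

// Beliefs whose confidence dropped in this delta — candidates for follow-up.
function weakenedBeliefs(delta) {
  return delta.changes.filter(
    (c) => c.confidence != null &&
      c.confidence.before != null &&
      c.confidence.after < c.confidence.before
  )
}
```

`weakenedBeliefs(delta)` pairs naturally with `delta.moves`: a weakened belief is usually what the top move targets.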
## `beliefs.add(text, options?)`
Assert something the agent knows. Use this to seed beliefs, set goals, or flag gaps.
```js
await beliefs.add('The market is $4.2B', {
  confidence: 0.85,
  source: 'IDC Q4 2025 Tracker',
})
await beliefs.add('Determine total addressable market', { type: 'goal' })
await beliefs.add('Missing APAC market data', { type: 'gap' })
```

Options: `confidence` (0–1, default 0.5), `type` (`'claim'`, `'assumption'`, `'evidence'`, `'risk'`, `'gap'`, `'goal'`), `source` (where this belief came from — document name, URL, tool, etc.), `evidence` (source text), `supersedes` (text of a belief this replaces).

Returns `BeliefDelta` — same shape as `after()`.
## `beliefs.add(items)`
Assert multiple items in a single request. All items are processed as one atomic delta.
```js
await beliefs.add([
  { text: 'Market is $4.2B', confidence: 0.8, source: 'IDC Q4 2025 Tracker' },
  { text: 'Missing APAC data', type: 'gap' },
  { text: 'Determine TAM', type: 'goal' },
])
```

Returns `BeliefDelta` — same shape as `after()`.
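If your agent emits structured notes, you might map them into a batch payload like this. `toBeliefItems` and the notes shape are hypothetical; the item fields match the documented options:

```js
// Map hypothetical structured notes into the documented item shape.
// 'fact' notes become claims with a confidence and source;
// 'gap' and 'goal' notes pass through as typed items.
function toBeliefItems(notes) {
  return notes.map((n) =>
    n.kind === 'fact'
      ? { text: n.text, confidence: n.confidence ?? 0.5, source: n.source }
      : { text: n.text, type: n.kind }
  )
}

// await beliefs.add(toBeliefItems(notes)) — one atomic delta for the batch
```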
## `beliefs.resolve(text)`
Mark a gap as resolved.
```js
const delta = await beliefs.resolve('Missing APAC market data')
```

Returns `BeliefDelta`.
## `beliefs.retract(beliefId, reason?)`
Retract a belief. The belief stays in the graph with `lifecycle: 'retracted'` so the audit trail is preserved. Use this when the agent no longer believes something.
```js
await beliefs.retract('xK9mR2vL3pT4nW8q', 'Superseded by updated market data')
```

The retracted belief remains visible in `read()` and `snapshot()` with `lifecycle: 'retracted'`. The reason appears in `trace()` as the `reason` field.

Returns `BeliefDelta`.
## `beliefs.remove(beliefId)`
Delete a belief from the graph entirely. A final ledger entry is recorded for traceability. Use this for cleanup of garbage or accidental beliefs.
```js
await beliefs.remove('xK9mR2vL3pT4nW8q')
```

Unlike `retract()`, the belief is gone from state after removal. Use `trace()` to see the removal in the audit trail.

Returns `BeliefDelta`.
## `beliefs.reset()`
Remove all beliefs, goals, gaps, and intents in this scope. Every removal is recorded in the ledger.
```js
const { removed } = await beliefs.reset()
console.log(`Cleared ${removed} items`)
```

Returns `{ removed: number }` — the count of items removed.
**Destructive operation**

Reset clears everything in the current authoritative scope. For `writeScope: 'thread'` that means one thread. For `writeScope: 'agent'` it means one agent's durable memory. For `writeScope: 'space'` it clears the shared namespace-wide state. The audit trail is preserved in the ledger, but the state itself is wiped clean.
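Given how destructive this is, you may want to gate it behind an explicit flag in your own code. `safeReset` and its `confirm` option are a hypothetical wrapper, not part of the SDK:

```js
// Hypothetical guard around the destructive reset().
// Callers must opt in explicitly with { confirm: true }.
function safeReset(beliefs, { confirm = false } = {}) {
  if (!confirm) {
    throw new Error("reset() wipes the current scope; pass { confirm: true }")
  }
  return beliefs.reset()
}

// safeReset(beliefs, { confirm: true })
```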
## `beliefs.read()`
Full world state with clarity, moves, and a serialized prompt.
```js
const world = await beliefs.read()
```

Returns:

```json
{
  "beliefs": [
    { "id": "xK9mR2vL3pT4nW8q", "text": "AI tools market is valued at $6.8B", "confidence": 0.95, "type": "claim" },
    { "id": "pT4nW8qJ5mR7vL2x", "text": "GitHub Copilot market share has declined to 32%", "confidence": 0.90, "type": "claim" }
  ],
  "goals": ["Determine total addressable market"],
  "gaps": ["Missing APAC market data"],
  "edges": [
    { "type": "contradicts", "source": "xK9mR2vL3pT4nW8q", "target": "hV7bQ3kN9yU6wE4r", "confidence": 0.8 }
  ],
  "contradictions": ["AI tools market is valued at $4.2B vs AI tools market is valued at $6.8B"],
  "clarity": 0.72,
  "moves": [
    { "action": "research", "target": "xK9mR2vL3pT4nW8q", "reason": "APAC data would complete the picture", "value": 0.6 }
  ],
  "prompt": "{\"state\":{\"goals\":[...],\"claims\":[...],\"phase\":\"researching\"},\"gaps\":[...],\"contradictions\":[...]}"
}
```

## `beliefs.snapshot()`
Same as `read()` but faster — skips computing clarity, moves, and prompt. Use when you only need the raw state.
```js
const snap = await beliefs.snapshot()
console.log(`${snap.beliefs.length} beliefs, ${snap.gaps.length} gaps`)
```

Returns `beliefs`, `goals`, `gaps`, `edges`, and `contradictions`. No `clarity`, `moves`, or `prompt`.
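Since `snapshot()` skips the server-side clarity computation, a quick health summary can be derived client-side from the raw fields. `summarize` is a hypothetical helper built on the documented snapshot shape:

```js
// Cheap client-side summary of a snapshot.
// Uses only the documented fields: beliefs, gaps, edges.
function summarize(snap) {
  return {
    beliefs: snap.beliefs.length,
    openGaps: snap.gaps.length,
    conflicts: snap.edges.filter((e) => e.type === 'contradicts').length,
  }
}
```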
## `beliefs.search(query)`
Find beliefs by text.
```js
const results = await beliefs.search('market size')
// [{ id: "xK9mR2vL3pT4nW8q", text: "AI tools market is valued at $6.8B", confidence: 0.95, type: "claim" }]
```

## `beliefs.trace(beliefId?)`
Audit trail. See every transition — what changed, when, why, and who changed it.
```js
const history = await beliefs.trace()
```

Returns:

```json
[
  {
    "action": "updated",
    "beliefId": "xK9mR2vL3pT4nW8q",
    "confidence": { "before": 0.80, "after": 0.95 },
    "agent": "research-agent",
    "source": "IDC Q4 2025 Tracker",
    "timestamp": "2026-04-08T14:23:01Z",
    "reason": "IDC Q4 2025 tracker provided authoritative $6.8B figure"
  },
  {
    "action": "created",
    "beliefId": "hV7bQ3kN9yU6wE4r",
    "agent": "research-agent",
    "source": "agent-output",
    "timestamp": "2026-04-08T14:22:45Z",
    "reason": "Extracted from European market analysis"
  }
]
```

Trace a single belief's history:

```js
const oneBeliefHistory = await beliefs.trace('xK9mR2vL3pT4nW8q')
```

## Errors
```js
import Beliefs, { BetaAccessError, BeliefsError } from 'beliefs'
```

`BetaAccessError` — API key missing, invalid, or account lacks access (401/403).
```js
try {
  await beliefs.before(input)
} catch (err) {
  if (err instanceof BetaAccessError) {
    console.log(err.signupUrl) // 'https://thinkn.ai/waitlist'
  }
}
```

`BeliefsError` — server errors with structured codes and retry guidance. The SDK auto-retries transient errors (429, 5xx) with exponential backoff, so you only see these after retries are exhausted.
```js
try {
  await beliefs.after(result.text)
} catch (err) {
  if (err instanceof BeliefsError) {
    console.log(err.code) // 'rate_limit/exceeded'
    console.log(err.retryable) // true
  }
}
```

| HTTP | Error | Retryable | Example codes |
|---|---|---|---|
| 400 | `BeliefsError` | No | `validation/invalid_json` |
| 401/403 | `BetaAccessError` | No | `auth/missing_key` |
| 429 | `BeliefsError` | Yes | `rate_limit/exceeded` |
| 5xx | `BeliefsError` | Yes | `internal/error` |
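The table maps onto a simple status classifier, sketched here for illustration. `classifyStatus` is a hypothetical helper; the SDK applies this retry policy internally, so you rarely need it yourself:

```js
// Classify an HTTP status per the error table.
// Hypothetical helper mirroring the SDK's internal retry policy.
function classifyStatus(status) {
  if (status === 401 || status === 403) return { error: 'BetaAccessError', retryable: false }
  if (status === 429 || status >= 500) return { error: 'BeliefsError', retryable: true }
  return { error: 'BeliefsError', retryable: false } // e.g. 400 validation errors
}
```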
## Types

### Belief

The core unit. Every claim, assumption, and risk is a belief with a confidence score.
```js
{
  id: string
  text: string
  confidence: number // 0–1
  type: string // 'claim', 'assumption', 'evidence', 'risk', 'gap', 'goal'
  createdAt: string
}
```

**Additional fields**

These are present when the server provides richer data. You don't need them to get started.

| Field | Type | What it tells you |
|---|---|---|
| `label` | string | Semantic label: `'limiting-belief'`, `'load-bearing'`, etc. |
| `evidenceWeight` | number | How much evidence backs this belief. 0 = uninvestigated prior. |
| `distribution` | string | `'claim'` (true/false), `'category'` (multinomial), `'measurement'` (numeric) |
| `lifecycle` | string | `'active'`, `'retracted'`, `'invalidated'`, `'expired'`, `'resolved'` |
| `provenance` | string | `'user-created'`, `'research-discovered'`, `'chat-extracted'`, `'agent-generated'` |
| `source` | string | Where this belief came from — document name, URL, tool, agent output label. |
| `updatedAt` | string | Last modification timestamp. |
**The two-channel insight**

`confidence` alone doesn't tell you how well-founded a belief is. Confidence 0.5 with `evidenceWeight: 0` means no one has looked. Confidence 0.5 with `evidenceWeight: 40` means extensive evidence but genuine uncertainty. Use `evidenceWeight` to distinguish "unknown" from "uncertain."
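That distinction can be encoded in a small classifier. `epistemicStatus`, its 0.35–0.65 contested band, and the `evidenceWeight` cutoff of 5 are all illustrative choices, not SDK constants:

```js
// Distinguish "unknown" (no one has looked) from "uncertain"
// (well-evidenced but contested) using both channels.
// All thresholds here are illustrative, not SDK constants.
function epistemicStatus(belief) {
  const contested = belief.confidence > 0.35 && belief.confidence < 0.65
  if (!contested) return belief.confidence >= 0.65 ? 'likely-true' : 'likely-false'
  return belief.evidenceWeight >= 5 ? 'uncertain' : 'unknown'
}
```

`'unknown'` beliefs are good targets for `research` moves; `'uncertain'` ones may need a different kind of evidence rather than more of the same.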
### Move
A suggested next action, ranked by expected information gain.
```js
{
  action: string // 'research', 'gather_evidence', 'clarify', 'validate_assumption', etc.
  target: string // which belief this move targets
  reason: string // why this is the best next step
  value: number // expected information gain (0–1)
  executor?: string // 'agent', 'user', or 'both'
}
```

### Edge
A relationship between two beliefs.
```js
{
  type: string // 'supports', 'contradicts', 'derives', 'supersedes'
  source: string
  target: string
  confidence: number
}
```

### DeltaChange
What happened to a single belief during a mutation.
```js
{
  action: string // 'created', 'updated', 'removed', 'resolved'
  beliefId: string
  text: string
  confidence?: { before?: number, after?: number }
  reason?: string
  source?: string // where this change originated from
}
```

### Clarity channels
When available, `clarity` breaks down into four dimensions you can access via `channels` on `BeliefContext`, `BeliefDelta`, or `WorldState`:

```js
const context = await beliefs.before(input)
if (context.channels) {
  console.log(context.channels.knowledgeCertainty) // how confident in current knowledge
  console.log(context.channels.coverage) // how much of the goal space is covered
  console.log(context.channels.coherence) // consistency across beliefs
  console.log(context.channels.decisionResolution) // how well decisions are resolved
}
```
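One use of the channels is finding the bottleneck dimension. `weakestChannel` is a hypothetical helper; the channel names match the SDK, but picking the minimum is an illustrative policy:

```js
// Find the weakest clarity channel — a hint at where to focus next.
// Hypothetical helper; min-picking is an illustrative policy choice.
function weakestChannel(channels) {
  return Object.entries(channels).reduce((min, entry) =>
    entry[1] < min[1] ? entry : min
  )
}

weakestChannel({
  knowledgeCertainty: 0.8,
  coverage: 0.3,
  coherence: 0.9,
  decisionResolution: 0.6,
})
// → ['coverage', 0.3]
```

A low `coverage` score suggests researching new ground, while low `coherence` suggests resolving contradictions among existing beliefs first.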