# beliefs SDK — LLM Reference > Belief state infrastructure for AI agents. npm package: `beliefs` > Docs: https://thinkn.ai/dev > Hack Guide: https://thinkn.ai/dev/start/hack-guide > Manifesto: https://thinkn.ai/dev/why/problem This file is a single-read reference for coding agents. It explains why belief state exists, how it differs from memory/RAG, the mental model you need to use the SDK well, the full API surface, and how to wire it into common agent frameworks. ## Why beliefs matter ### Every breakthrough begins as a belief Progress does not begin with certainty. It begins with a view of reality that is incomplete, testable, and worth pursuing. But the current AI stack is not built for that. It stores documents, retrieves memories, and generates language. It does not maintain a living model of what is currently believed to be true, why it is believed, how strongly, what evidence supports it, where it conflicts, and what should change next. The `beliefs` SDK is that missing layer. It gives your agent a structured model of its current understanding: claims with confidence scores, conflict detection, gap awareness, a clarity score (0-1 readiness to act), and ranked next actions by expected information gain. Every transition is recorded with provenance. ### Scaling the past vs. discovering the unknown ``` ┌──────────────────────────────────────────────────────────────┐ │ SCALING THE PAST │ │ │ │ Memory ──────▶ "What happened before?" │ │ Retrieval ───▶ "What text is similar?" │ │ Generation ──▶ "What can I produce from this?" │ │ │ │ These scale what is already known. │ │ They do not model what is currently believed. │ │ They cannot surface what has never been seen. │ │ │ ├──────────────────────────────────────────────────────────────┤ │ DISCOVERING THE UNKNOWN │ │ │ │ Beliefs ─────▶ "What is true? How strongly? Why?" │ │ Evidence ────▶ "What supports or contradicts it?" │ │ Gaps ────────▶ "What do we not know?" 
│ │ Clarity ─────▶ "Are we ready to act or must we look │ │ deeper?" │ │ │ │ These model the present understanding of reality. │ │ They evolve as evidence changes. │ │ They surface what was previously invisible. │ └──────────────────────────────────────────────────────────────┘ ``` Without an explicit way to model and update beliefs, AI does not scale truth. It scales whatever assumptions it happened to start with. It scales inherited frames. It scales contradiction. It scales drift. As intelligence becomes abundant, coherence becomes scarce — and some beliefs are load-bearing. If the pricing model, the fundraising plan, and the diagnostic recommendation all rest on one unexamined assumption, the cost of that assumption being wrong is everything built on top of it. ### The five symptoms of drift These are what drift looks like in practice — the visible failures of systems that accumulate information without modeling what they believe: 1. **Agents contradict themselves.** Turn 3: "The market is $4.2B." Turn 12: "SEC filings suggest $3.8B." Turn 18: the agent cites $4.2B because it appeared first. No detection, no resolution, no awareness. 2. **Confidence is invisible.** The agent stated a number. Is it from one source or ten? Is it corroborated or contested? The context window does not encode this. Every piece of text looks equally valid. 3. **Guesses and facts are indistinguishable.** A user's intuition and a peer-reviewed study carry identical weight. There is no distinction between assumption and evidence. 4. **Agents do not know what they do not know.** No concept of "gap." No awareness that critical data is missing. No mechanism to prioritize what would reduce the most uncertainty. 5. **Bigger context makes it worse.** A 200K context window does not fix these problems. It carries stale assumptions further, with more fluency. More context is more surface area for drift. Belief state infrastructure is how we fix this. 
A shared layer where assumptions, evidence, confidence, contradictions, and decisions stay in sync, so humans and AI can think more clearly, adapt more honestly, and push together toward what has not yet been seen. ## Mental model ### The core loop Every agent turn follows three steps: read state, act, observe. ``` ┌──────────────────┐ user input ───▶│ beliefs.before() │─── returns current state, └────────┬─────────┘ clarity, gaps, moves, │ and a prompt to inject ▼ ┌──────────────────┐ │ your agent │─── runs with belief context └────────┬─────────┘ in its system prompt │ ▼ ┌──────────────────┐ │ beliefs.after() │─── extracts, fuses, └──────────────────┘ returns delta + new state ``` The SDK wraps your loop. It does not own it. It does not replace your agent framework, does not decide what your agent does, does not require a specific LLM provider, and does not sit in the critical path of your LLM calls. ### Belief types A belief is a structured assertion your agent holds about the world. Every belief has a type: | Type | Use case | |------|----------| | `claim` | An assertion supported or refuted by evidence | | `assumption` | Something taken as true without direct evidence | | `risk` | A potential negative outcome | | `evidence` | A data point or source that supports/refutes other beliefs | | `gap` | Something the agent has not investigated yet | | `goal` | What the agent is pursuing | Types are assigned automatically during extraction. You can also specify a type when adding beliefs manually via `beliefs.add(text, { type: 'assumption' })`. ### Evidence hierarchy Different evidence types carry different weight. A single verified measurement shifts confidence more than several inferences. 
| Type | Weight | Description | |------|--------|-------------| | `measurement` | highest | Audited metric, verified data point | | `citation` | high | Research report, external source with provenance | | `user-assertion` | medium-high | User explicitly stated this | | `expert-judgment` | medium | Expert opinion with rationale | | `inference` | low-medium | Agent-derived inference from available data | | `assumption` | lowest | Explicit assumption, no supporting evidence | Every piece of evidence has a direction: **supports** (increases confidence), **refutes** (decreases confidence), or **neutral** (adds information weight without shifting direction). Refuting evidence is captured, not discarded — nothing is silently dropped. ### Two-channel clarity Clarity is a 0-1 score that answers one question: does the agent understand enough to move forward? It decomposes into four channels, exposed on `BeliefContext.channels` and `BeliefDelta.channels`: ``` ┌──────────────────────────────────────────────────────────────┐ │ THE TWO QUESTIONS │ │ │ │ 1. DECISION RESOLUTION: "Can we make a call?" │ │ ───────────────────────────────────────── │ │ 80% → Yes, lean toward it │ │ 50% → No, it is ambiguous │ │ 99% → Strong signal │ │ │ │ 2. KNOWLEDGE CERTAINTY: "Have we done the work?" │ │ ───────────────────────────────────────── │ │ Just stated → No evidence yet │ │ 10 data points → Some certainty │ │ 100 data points → High certainty in our assessment │ │ │ └──────────────────────────────────────────────────────────────┘ ``` Two claims at 50% confidence are not the same. One has zero evidence (research it). The other has 40 data points that genuinely split both ways (decide, don't research). The two-channel model separates them. **Knowing you do not know is categorically different from not knowing.** The four quadrants: ``` Knowledge Certainty Low High ┌────────────┬────────────────┐ High │ │ │ Decision │ Belief │ Validated │ Resolution │ without │ belief. │ │ evidence. 
│ Ready to act. │ │ ▶ Invest- │ │ │ igate. │ │ ├────────────┼────────────────┤ Low │ │ │ Decision │ No idea. │ Genuinely │ Resolution │ Start │ uncertain. │ │ from │ Surface │ │ scratch. │ trade-offs. │ │ │ ▶ Decide, │ │ │ don't │ │ │ research. │ └────────────┴────────────────┘ ``` The other two channels: **coherence** (do the beliefs hang together, or are there unresolved contradictions?) and **coverage** (are important areas addressed, or are there large gaps?). Open gaps reduce clarity. Gaps with more downstream dependencies reduce it more. ### Fusion, not averaging When multiple agents share a namespace (`new Beliefs({ agent, namespace })`), their deltas merge into one world state. Conflicts are detected, resolved by trust weight, and kept visible in the trace. A measurement from an SEC filing outweighs an inference from an agent — but the contradiction is never silently dropped. Last-write-wins is not fusion. Averaging confidences is not fusion. Fusion is trust-weighted Bayesian merging with a visible conflict log. ### How knowledge certainty accumulates When you seed beliefs with `add()`, knowledge certainty starts at zero. `add('Market is $4.2B', { confidence: 0.8 })` sets decision resolution to 0.8, but the system has not seen evidence yet. Knowledge certainty tracks *earned evidence* — data accumulated since the belief was created. It grows when `after()` processes real agent output that references the claim, when multiple observations reinforce it, or when tool results provide independent confirmation. To build KC quickly, prefer `after()` on real output over seeding with `add()`. ## When to use beliefs Use the SDK when: - Your agent runs more than a few turns on the same topic. - Conflicting information from different sources matters to the outcome. - You need to trace why the agent believes something (compliance, debugging, audit). - Multiple agents share state and you need trust-weighted merging. 
- Your agent's readiness to act is decision-relevant (research more, or proceed?). Do not use it when: - The task is a single-turn chatbot reply with no persistent state. - You just need retrieval over documents — use a vector store. - You want to micromanage the fusion math — the SDK deliberately hides it. ### Anti-patterns - **Do not call `after()` per stream chunk.** Call it once per turn, after the model finishes. `after()` runs extraction and fusion; per-chunk calls are wasteful and produce inconsistent deltas. - **Do not bypass fusion.** Do not write to belief state through any path other than `after()` / `add()` / `resolve()` / `retract()` / `remove()`. The fusion engine owns conflict resolution and trust weighting. - **Do not read or depend on internal distributions, scoring models, or fusion weights.** The SDK exposes developer-facing contracts: `text`, `confidence`, `clarity`, `channels`, `readiness`, `moves`. The underlying math (Beta/Gaussian/Dirichlet distributions, entropy tracking, Bayesian updates) is intentionally hidden and may change. - **Do not treat confidence as truth.** A belief at 0.9 confidence with zero knowledge certainty is a stated guess, not a validated fact. Check `channels.knowledgeCertainty` before acting on high-confidence claims. - **Do not share an API key across untrusted tenants.** Use `namespace` for multi-tenant isolation. ## Documentation map Full docs live at https://thinkn.ai/dev. The sections below mirror the in-app navigation. ### Start — get running fast - [start/hack-guide](https://thinkn.ai/dev/start/hack-guide) — Zero to building with beliefs in 10 minutes. Framework recipes and the three-step pattern. - [start/intro](https://thinkn.ai/dev/start/intro) — The Missing Layer. What belief state is and why your agent needs it. - [start/install](https://thinkn.ai/dev/start/install) — `npm i beliefs`, get an API key, run a verification snippet. 
- [start/quickstart](https://thinkn.ai/dev/start/quickstart) — Eight-step test suite covering connection, seeding, extraction, contradiction detection, resolution, search, world state, and trace. - [start/faq](https://thinkn.ai/dev/start/faq) — RAG vs. beliefs, vector store vs. beliefs, do I need probability theory, how persistence works. ### Why — the positioning and the rationale - [why/problem](https://thinkn.ai/dev/why/problem) — The five symptoms of drift. Scaling the past vs. discovering the unknown. The full manifesto. - [why/memory](https://thinkn.ai/dev/why/memory) — Memory/RAG vs. beliefs across six dimensions: storage, uncertainty, conflicts, decay, provenance, gaps. - [why/drift](https://thinkn.ai/dev/why/drift) — Why epistemic drift is structural, not incidental, in transformer-based agents. - [why/example](https://thinkn.ai/dev/why/example) — A concrete walk-through showing drift without beliefs and clarity with beliefs. ### Core — the vocabulary and the model - [core/beliefs](https://thinkn.ai/dev/core/beliefs) — Belief structure, the six types, evidence hierarchy, extraction vs. manual assertion. - [core/intent](https://thinkn.ai/dev/core/intent) — Goals, gaps, and the normative layer. How the agent knows what it is pursuing. - [core/clarity](https://thinkn.ai/dev/core/clarity) — Two-channel clarity, the four quadrants, load-bearing beliefs, knowledge certainty accumulation. - [core/moves](https://thinkn.ai/dev/core/moves) — How thinking moves are ranked by expected information gain. - [core/world](https://thinkn.ai/dev/core/world) — The fused world state: beliefs, edges, goals, gaps, contradictions. ### SDK — the API surface - [sdk/overview](https://thinkn.ai/dev/sdk/overview) — Core vs. adapters, the lifecycle, the two-layer design. - [sdk/core-api](https://thinkn.ai/dev/sdk/core-api) — Full reference for every method, every option, every return field. 
- [sdk/loops](https://thinkn.ai/dev/sdk/loops) — Loop patterns: per-turn, streaming, tool-call, multi-step. - [sdk/scoping](https://thinkn.ai/dev/sdk/scoping) — `agent`, `namespace`, `thread` — how to isolate or share belief state. - [sdk/patterns](https://thinkn.ai/dev/sdk/patterns) — Clarity-driven routing, gap-driven research, multi-agent shared state, contradiction handling. - [sdk/adapters](https://thinkn.ai/dev/sdk/adapters) — What adapters are and how to choose one. ### Adapters — framework integrations - [adapters/claude-agent-sdk](https://thinkn.ai/dev/adapters/claude-agent-sdk) — `beliefs/claude-agent-sdk` hooks for `@anthropic-ai/claude-agent-sdk`. - [adapters/vercel-ai](https://thinkn.ai/dev/adapters/vercel-ai) — `beliefs/vercel-ai` middleware for `generateText` / `streamText`. - [adapters/react](https://thinkn.ai/dev/adapters/react) — React hooks for belief state (coming soon). - [adapters/devtools](https://thinkn.ai/dev/adapters/devtools) — Debug UI for inspecting belief state in development (coming soon). ### Use cases - [cases/finance](https://thinkn.ai/dev/cases/finance) — Research agents, risk analysis, conflict-heavy multi-source workflows. - [cases/health](https://thinkn.ai/dev/cases/health) — Differential diagnosis, evidence tracking, audit-grade provenance. - [cases/engineering](https://thinkn.ai/dev/cases/engineering) — Design assumptions, trade-off surfacing, load-bearing decisions. - [cases/science](https://thinkn.ai/dev/cases/science) — Hypothesis tracking, contradiction detection across experiments. ### Internals — how it works under the hood - [internals/evidence](https://thinkn.ai/dev/internals/evidence) — How evidence is weighted and combined. - [internals/fusion](https://thinkn.ai/dev/internals/fusion) — Trust-weighted Bayesian merging of deltas. - [internals/ledger](https://thinkn.ai/dev/internals/ledger) — The audit trail: causal parents, entropy deltas, contributors. 
- [internals/decay](https://thinkn.ai/dev/internals/decay) — Principled temporal decay toward uninformative priors.
- [internals/runtime](https://thinkn.ai/dev/internals/runtime) — The `observe → infer → decide → act → record → publish` loop.
- [internals/math](https://thinkn.ai/dev/internals/math) — Clarity math: how decision resolution, knowledge certainty, coherence, and coverage combine.

## SDK reference

### Install

```bash
npm i beliefs
```

### Authentication

Get an API key at https://thinkn.ai/profile/api-keys. Set it as `BELIEFS_KEY` in your environment.

### Constructor

```ts
import Beliefs from 'beliefs'
// or: import { beliefs } from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY, // required — get at thinkn.ai/profile/api-keys
  agent: 'research-agent',         // optional, default 'agent' — identifies this contributor
  namespace: 'project-alpha',      // optional, default 'default' — isolates or shares state
  thread: 'conversation-42',       // optional — scope to a conversation
  debug: true,                     // optional — log requests to console
  timeout: 120000,                 // optional, default 120000ms
  maxRetries: 2,                   // optional, default 2
})
```

Scope rule: beliefs with the same `namespace` are fused across `agent` values (trust-weighted). Different `namespace` values are fully isolated. Use `thread` for per-conversation scoping inside a namespace.

### Methods

#### before(input?: string): Promise&lt;BeliefContext&gt;

Read current belief state before the agent acts. Inject `context.prompt` into your agent's system prompt.
```ts
const context = await beliefs.before(userMessage)
// context.prompt — string to inject as system prompt
// context.beliefs — Belief[] with confidence scores
// context.goals — string[]
// context.gaps — string[] (what the agent doesn't know)
// context.clarity — number 0-1 (readiness to act)
// context.channels — { decisionResolution, knowledgeCertainty, coherence, coverage }
// context.moves — Move[] (ranked next actions by info gain)
```

#### after(text: string, options?: AfterOptions): Promise&lt;BeliefDelta&gt;

Feed agent output after it acts. Extracts beliefs, detects conflicts, fuses into world state. **Call once per turn, not per stream chunk.**

```ts
const delta = await beliefs.after(result.text)
// delta.changes — DeltaChange[] (created / updated / removed / resolved)
// delta.clarity — number 0-1
// delta.channels — ClarityChannels
// delta.readiness — 'low' | 'medium' | 'high'
// delta.moves — Move[]
// delta.state — WorldState (full state after this turn)

// For tool results, tag the source:
const toolDelta = await beliefs.after(toolResult, { tool: 'web_search' })
```

#### add(text: string, options?: AddOptions): Promise

Assert a single belief, goal, or gap.

```ts
await beliefs.add('Market is $4.2B', { confidence: 0.85 })
await beliefs.add('Missing APAC data', { type: 'gap' })
await beliefs.add('Determine TAM', { type: 'goal' })
await beliefs.add('Market is $6.8B', {
  confidence: 0.95,
  evidence: 'IDC Q4 2025 report',
  supersedes: 'Market is $4.2B',
})
```

AddOptions: `confidence?: number`, `type?: 'claim'|'assumption'|'evidence'|'risk'|'gap'|'goal'`, `evidence?: string`, `supersedes?: string`

#### add(items: AddManyItem[]): Promise

Assert multiple beliefs, goals, or gaps in one request.

```ts
await beliefs.add([
  { text: 'Market is $4.2B', confidence: 0.8 },
  { text: 'Missing APAC data', type: 'gap' },
  { text: 'Determine TAM', type: 'goal' },
])
```

#### resolve(text: string): Promise

Mark a gap as resolved.
The gap is removed from `context.gaps` and its resolution is recorded in the trace.

```ts
await beliefs.resolve('Missing APAC data')
```

#### read(): Promise&lt;WorldState&gt;

Full world state: beliefs, goals, gaps, edges, contradictions, clarity, channels, moves, prompt. Use when you need everything in one call.

```ts
const world = await beliefs.read()
```

#### snapshot(): Promise&lt;BeliefSnapshot&gt;

Lightweight read — beliefs, goals, gaps, edges, contradictions. No clarity, moves, or prompt computation. Faster than `read()` when you only need raw state.

```ts
const snap = await beliefs.snapshot()
```

#### search(query: string): Promise&lt;Belief[]&gt;

Find beliefs by text, sorted by confidence.

```ts
const results = await beliefs.search('market size')
```

#### trace(beliefId?: string): Promise&lt;TraceEntry[]&gt;

Audit trail of belief transitions. Pass a `beliefId` to trace one belief; omit for the full ledger.

```ts
const history = await beliefs.trace()
const single = await beliefs.trace('belief-abc123')
```

#### retract(beliefId: string, reason?: string): Promise

Mark a belief as retracted. The belief stays in the ledger but is excluded from the active world state. Use when you learn a claim was wrong and want to preserve the trace.

```ts
await beliefs.retract('belief-abc123', 'Source was misquoted')
```

#### remove(beliefId: string): Promise

Delete a belief entirely. Stronger than `retract()` — the belief is removed from the world state. Prefer `retract()` when you need the audit trail.

```ts
await beliefs.remove('belief-abc123')
```

#### reset(): Promise<{ removed: number }>

Clear all beliefs, goals, and gaps in the current scope. Returns the number of items removed. Destructive — primarily for tests and development.
```ts const { removed } = await beliefs.reset() ``` ### Types ```ts interface Belief { id: string; text: string; confidence: number; type: string label?: string; createdAt: string; updatedAt?: string } interface BeliefContext { prompt: string; beliefs: Belief[]; goals: string[]; gaps: string[] clarity: number; channels?: ClarityChannels; moves: Move[] } interface BeliefDelta { changes: DeltaChange[]; clarity: number; channels?: ClarityChannels readiness: 'low' | 'medium' | 'high'; moves: Move[]; state: WorldState } interface WorldState { beliefs: Belief[]; goals: string[]; gaps: string[]; edges: Edge[] contradictions: string[]; clarity: number; channels?: ClarityChannels moves: Move[]; prompt: string } interface BeliefSnapshot { beliefs: Belief[]; goals: string[]; gaps: string[]; edges: Edge[] contradictions: string[] } interface ClarityChannels { decisionResolution: number; knowledgeCertainty: number coherence: number; coverage: number } interface Move { action: string; target: string; reason: string value: number; executor?: 'agent' | 'user' | 'both' } interface Edge { type: string; source: string; target: string; confidence: number } interface DeltaChange { action: 'created' | 'updated' | 'removed' | 'resolved' beliefId: string; text: string confidence?: { before?: number; after?: number }; reason?: string } interface TraceEntry { action: 'created' | 'updated' | 'removed' | 'resolved' beliefId?: string; confidence?: { before?: number; after?: number } agent?: string; timestamp: string; reason?: string } ``` ### Error handling ```ts import Beliefs, { BetaAccessError, BeliefsError } from 'beliefs' try { const delta = await beliefs.after(text) } catch (err) { if (err instanceof BetaAccessError) { // Missing or invalid API key (401/403) } if (err instanceof BeliefsError) { // err.code — e.g. 'rate_limit/exceeded', 'validation/invalid_params' // err.retryable — boolean // err.retryAfterMs — suggested wait (ms) } } ``` Rate limit: 60 requests/minute per key. 
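Since `BeliefsError` exposes `retryable` and `retryAfterMs`, rate-limit handling can be wrapped once and reused. A minimal sketch — `withRetry` is not an SDK export, and the `RetryableError` class below is a local stand-in so the sketch is self-contained; in real code you would check `err instanceof BeliefsError` as above:

```typescript
// Local stand-in for BeliefsError, mirroring the documented
// `retryable` and `retryAfterMs` fields.
class RetryableError extends Error {
  constructor(public retryable: boolean, public retryAfterMs?: number) {
    super('rate_limit/exceeded')
  }
}

// Retry a call when the error says it is retryable, honoring the
// server-suggested wait and falling back to capped exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      const shouldRetry =
        err instanceof RetryableError && err.retryable && attempt < maxAttempts
      if (!shouldRetry) throw err
      const waitMs = err.retryAfterMs ?? Math.min(1000 * 2 ** attempt, 10_000)
      await new Promise(res => setTimeout(res, waitMs))
    }
  }
}
```

Usage: `const delta = await withRetry(() => beliefs.after(text))`. Honoring `retryAfterMs` keeps you inside the 60 requests/minute budget instead of hammering the endpoint.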
## SDK capabilities The hosted API does real work behind each call. Understanding which capability is triggered by which method helps you pick the right entry point. | Capability | What it does | Triggered by | |------------|--------------|--------------| | **Extraction** | LLM-powered belief extraction from agent output and tool results. Finds claims, assumptions, risks, evidence, and gaps. You do not parse outputs yourself. | `after(text)` | | **Linking** | Automatic detection of contradictions, support, derivation, and supersession relationships between beliefs. Populates `edges` and `contradictions`. | `after()` → surfaced via `read().edges` | | **Deduplication** | Embedding-based similarity matching prevents duplicate beliefs when agents restate the same claim in different words. Runs transparently inside `after()`. | `after()` (invisible) | | **Fusion** | Trust-weighted merging across multiple agents sharing a namespace. Conflicts stay visible in the ledger, never silently dropped. | `Beliefs({ agent, namespace })` + `after()` | | **Clarity scoring** | 0-1 readiness assessment combining decision resolution, knowledge certainty, coherence, and coverage into one number plus four sub-channels. | `before().clarity` / `delta.clarity` / `world.channels` | | **Thinking moves** | Ranked next actions by expected information gain. Tells the agent where to look next to reduce uncertainty where it matters most. | `before().moves` / `delta.moves` | | **Provenance and trace** | Full audit trail of every transition: who stated it, what evidence, how confidence evolved, entropy before/after. | `trace()` | | **Gap tracking** | Explicit modeling of what the agent has not investigated. Gaps are first-class, penalize clarity, and drive move ranking. | `before().gaps` / `add(text, { type: 'gap' })` / `resolve(text)` | | **Retraction without loss** | Mark beliefs as wrong while preserving the ledger. Supports compliance and debugging. 
| `retract(id, reason)` | ## Framework integrations ### The core loop (any framework) ```ts const context = await beliefs.before(userMessage) const result = await yourAgent(context.prompt, userMessage) const delta = await beliefs.after(result) if (delta.readiness === 'high') { /* act */ } else { /* keep investigating — follow delta.moves[0] */ } ``` ### Vercel AI SDK ```ts import { generateText } from 'ai' import { anthropic } from '@ai-sdk/anthropic' const context = await beliefs.before(question) const { text } = await generateText({ model: anthropic('claude-sonnet-4-20250514'), system: context.prompt, prompt: question, }) const delta = await beliefs.after(text) ``` ### Anthropic SDK ```ts import Anthropic from '@anthropic-ai/sdk' const context = await beliefs.before(question) const message = await new Anthropic().messages.create({ model: 'claude-sonnet-4-20250514', max_tokens: 4096, system: context.prompt, messages: [{ role: 'user', content: question }], }) const text = message.content.filter(b => b.type === 'text').map(b => b.text).join('') const delta = await beliefs.after(text) ``` ### OpenAI SDK ```ts import OpenAI from 'openai' const context = await beliefs.before(question) const completion = await new OpenAI().chat.completions.create({ model: 'gpt-4o', messages: [ { role: 'system', content: context.prompt }, { role: 'user', content: question }, ], }) const text = completion.choices[0]?.message?.content ?? '' const delta = await beliefs.after(text) ``` Note: for o-series models (o3, o4-mini), use `role: 'developer'` instead of `role: 'system'`. 
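### Streaming

The anti-pattern section warns against calling `after()` per stream chunk. The streaming-safe shape: forward chunks to the client as they arrive, accumulate the full text, and feed `after()` exactly once when the stream completes. A sketch — `collectStream` is a helper defined here for illustration, not an SDK export:

```typescript
// Consume an async text stream: emit each chunk immediately,
// return the accumulated text for a single after() call.
async function collectStream(
  stream: AsyncIterable<string>,
  onChunk: (chunk: string) => void,
): Promise<string> {
  let full = ''
  for await (const chunk of stream) {
    onChunk(chunk) // forward to the client right away
    full += chunk  // accumulate for belief extraction
  }
  return full
}
```

With the Vercel AI SDK this looks like: `const result = streamText({ model, system: context.prompt, prompt: question })`, then `const delta = await beliefs.after(await collectStream(result.textStream, sendToClient))`.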
### Clarity-driven routing ```ts const context = await beliefs.before(input) if (context.clarity < 0.3) await runResearch(context.gaps) else if (context.clarity > 0.7) await draftRecommendations(context.beliefs) else await investigateGaps(context.gaps) ``` ### Multi-agent shared state ```ts const researcher = new Beliefs({ apiKey, agent: 'researcher', namespace: 'project' }) const analyst = new Beliefs({ apiKey, agent: 'analyst', namespace: 'project' }) await researcher.after(researchOutput) const context = await analyst.before('Interpret the findings') // analyst sees researcher's beliefs, fused by trust weight ``` ### Gap-driven research ```ts const context = await beliefs.before(input) for (const gap of context.gaps) { const result = await searchTool.run(gap) await beliefs.after(result, { tool: 'search' }) } ``` ## Built-in adapters Adapters are subpath imports from the same `beliefs` package. No extra install. ### Vercel AI SDK middleware ```ts import { generateText, wrapLanguageModel } from 'ai' import { anthropic } from '@ai-sdk/anthropic' import Beliefs from 'beliefs' import { beliefsMiddleware } from 'beliefs/vercel-ai' const beliefs = new Beliefs({ apiKey: process.env.BELIEFS_KEY }) const { text } = await generateText({ model: wrapLanguageModel({ model: anthropic('claude-sonnet-4-20250514'), middleware: beliefsMiddleware(beliefs), }), prompt: 'Research the AI tools market', }) ``` Options: `capture?: 'response' | 'tools' | 'all'` (default: `'response'`), `includeContext?: boolean` (default: `true`) ### Claude Agent SDK hooks ```ts import { query } from '@anthropic-ai/claude-agent-sdk' import Beliefs from 'beliefs' import { beliefsHooks } from 'beliefs/claude-agent-sdk' const beliefs = new Beliefs({ apiKey: process.env.BELIEFS_KEY }) const result = await query({ prompt: 'Research AI tools market', options: { hooks: beliefsHooks(beliefs) }, }) ``` Options: `capture?: 'tools' | 'all'` (default: `'tools'`), `includeContext?: boolean` (default: `true`), 
`toolFilter?: string` (regex)

## Public HTTP API

If you cannot use the npm package, hit the HTTP API directly. All endpoints require an `Authorization: Bearer <API key>` header. Base URL: `https://thinkn.ai/api/sdk/v1/beliefs`.

| Endpoint | Purpose | SDK equivalent |
|----------|---------|----------------|
| `POST /context` | Get belief context before the agent acts | `before()` |
| `POST /ingest` | Submit observation for extraction and fusion | `after(text)` |
| `POST /ingest-tool-result` | Submit tool result with source tagging | `after(text, { tool })` |
| `POST /apply` | Apply a structured delta to world state | `add()` / `resolve()` |
| `POST /reset` | Clear all beliefs in scope | `reset()` |
| `GET /search` | Search beliefs by text | `search(query)` |
| `GET /snapshot` | Get lightweight state snapshot | `snapshot()` |
| `GET /ledger` | Query the audit trail | `trace()` |

Preferred HTTP scope fields are `namespace` and `thread`. When an endpoint accepts contributor identity, the field name is `agentId` (not `agent`).

- `POST /context`: JSON body uses `namespace`, optional `thread`, and optional `input`.
- `POST /ingest`: JSON body uses `namespace`, optional `thread`, optional `agentId`, plus the ingest payload.
- `POST /ingest-tool-result`: JSON body uses `namespace`, optional `thread`, optional `agentId`, `toolName`, `toolResult`, and optional `source`.
- `POST /apply`: JSON body uses `namespace`, optional `thread`, optional `agentId`, `delta`, and optional `source`.
- `POST /reset`: JSON body uses `namespace` and optional `thread`.
- `GET /search`: query params `namespace`, optional `thread`, `query`, and optional `limit`.
- `GET /snapshot`: query params `namespace` and optional `thread`.
- `GET /ledger`: query params `namespace`, optional `beliefId`, optional `agentId`, optional `since`, optional `until`, and optional `limit`.

Legacy aliases remain accepted for compatibility: `workspaceId`, `threadId`, and nested `scope.thread` / `scope.threadId` on POST bodies.
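The read-only endpoints can be exercised directly with curl. A sketch using only the documented query parameters — response bodies mirror the SDK return types but are not reproduced here:

```shell
BASE=https://thinkn.ai/api/sdk/v1/beliefs

# Lightweight state (≈ snapshot())
curl -s "$BASE/snapshot?namespace=project-alpha" \
  -H "Authorization: Bearer $BELIEFS_KEY"

# Search beliefs by text (≈ search('market size'))
curl -s "$BASE/search?namespace=project-alpha&query=market%20size&limit=5" \
  -H "Authorization: Bearer $BELIEFS_KEY"

# Audit trail (≈ trace()); add beliefId=... to trace one belief
curl -s "$BASE/ledger?namespace=project-alpha&limit=20" \
  -H "Authorization: Bearer $BELIEFS_KEY"
```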
Prefer `namespace` and `thread` for new integrations. --- More patterns: https://thinkn.ai/dev/sdk/patterns Full API reference: https://thinkn.ai/dev/sdk/core-api Integrations: https://thinkn.ai/dev/adapters/claude-agent-sdk, https://thinkn.ai/dev/adapters/vercel-ai