Companies and agents alike are organizing around world models instead of hierarchies and pipelines. A world model isn't a pile of context, and it isn't a smarter archive. A world model is a living set of beliefs about reality — what the system thinks is true, why, how confident, what contradicts it, and what would change its mind.
Beliefs are the operational core of a world model: claims with confidence, evidence, and lifecycle. They are how the model stays accurate as reality changes.
```shell
npm i beliefs
```

```typescript
import Beliefs from 'beliefs'

const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'my-project',
  writeScope: 'space',
})

// Before the agent acts — read current understanding
const context = await beliefs.before(userMessage)

// Run your agent with belief context injected
const result = await myAgent.run({ system: context.prompt })

// Feed the output — beliefs extracted, conflicts detected, state updated
const delta = await beliefs.after(result.text)
```

That's the loop. Three calls per turn, regardless of which framework you ship on.
## Works with your stack
```typescript
import { beliefsHooks } from 'beliefs/claude-agent-sdk' // Anthropic Claude Agent SDK
import { beliefsMiddleware } from 'beliefs/vercel-ai'   // Vercel AI SDK
// React hooks + browser DevTools — coming soon
```

Or call `beliefs.before()` / `beliefs.after()` manually around any LLM (OpenAI, plain fetch, your own agent loop). See the Hack Guide for working recipes across frameworks.
## I want to...
| I want to... | Start here |
|---|---|
| Ship in 10 minutes. Hackathon, prototype, exploration. | Hack Guide — install + framework recipes + project ideas |
| See it run end-to-end before committing. | Quickstart — 30 lines that print clarity rising |
| Learn the model first, then build. | Why beliefs → Concepts → Tutorial |
| Build chat memory that's separate per conversation. | Install → use `writeScope: 'thread'` and bind `thread: 'id'` |
| Run multi-agent shared state (debate, supervisor/worker, swarm). | Patterns → Multi-Agent — same namespace, `writeScope: 'space'` |
| Audit why an agent believes something. | How it works → Ledger and beliefs.trace() |
| Evaluate fit before integrating. | FAQ — when beliefs help, when they don't |
| Add beliefs to a Claude Agent SDK app. | Adapter: Claude Agent SDK |
| Add beliefs to a Vercel AI SDK app. | Adapter: Vercel AI |
| See it across domains (finance, health, science, engineering). | Use cases |
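For the per-conversation memory row above, a configuration sketch — the exact place where the thread id binds is an assumption here; check the Install guide for the real option names:

```typescript
import Beliefs from 'beliefs'

// Per-conversation memory: writes land in the thread, not the shared space.
const beliefs = new Beliefs({
  apiKey: process.env.BELIEFS_KEY,
  namespace: 'my-project',
  writeScope: 'thread', // isolate writes per conversation (vs. 'space' for shared state)
})

// Assumed binding point: pass the conversation id so threads don't share state.
const context = await beliefs.before(userMessage, { thread: conversationId })
```

With `writeScope: 'space'` instead, every agent in the namespace reads and writes the same belief state — the multi-agent pattern in the next row.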
## Why coding agents first
A codebase is already a compact world. It has laws (types, invariants), assumptions (architecture decisions, dependencies), history (commits, PRs), ownership, and contradictions (stale docs, drifted assumptions). Coding agents are already operating inside this world — but with short-lived context and weak memory.
The first world model thinkⁿ targets is the one your coding agent already lives in. Concrete beliefs the engine can hold for a repo:
```yaml
belief: Authentication is enforced at the API middleware layer
confidence: 0.82
evidence: middleware.ts, auth.test.ts, architecture.md
contradicts: /api/internal/export bypasses middleware
next move: inspect route-level auth coverage before modifying export flow
```

The same machinery applies to research agents (claims about a market), analyst agents (beliefs about a customer or portfolio), or any system that needs to maintain a coherent picture of reality across many turns and many sources.
## Using a coding agent?
Give your agent the SDK reference: llms.txt. With the reference loaded, the agent writes correct code on the first try.