beliefs.trust.* lets the user adjust how much weight an agent or evidence source carries during fusion. Every override is a stated rating (confidence, strength) that the engine applies at fusion time — see behavioral contracts for the predictability guarantee.
When to use it
- A user disables an agent: set confidence: 0, strength: 100, lock: true.
- A user trusts a domain expert agent above the default: confidence: 0.95, strength: 50.
- A source category (e.g. social media) should attenuate weight: set on { kind: 'source', id: 'social' }.
Without an override, agents start from the engine's calibrated prior. Overrides replace that prior for the targeted entity only — every other agent and source is unaffected.
beliefs.trust.set(target, options)
Idempotent upsert: calling it again with the same target replaces the existing override.
```ts
await beliefs.trust.set(
  { kind: 'agent', id: 'risk-bot' },
  { confidence: 0.4, strength: 25 },
)

// Hard-disable an unreliable source (locked overrides never drift):
await beliefs.trust.set(
  { kind: 'source', id: 'rumor-mill' },
  { confidence: 0.0, strength: 100, lock: true },
)
```

Parameters:
| Field | Type | What it does |
|---|---|---|
| target.kind | 'agent' \| 'source' | Which entity type. |
| target.id | string | Entity identifier. |
| options.confidence | number | Mean of the user prior, in [0, 1]. |
| options.strength | number | How sure you are in this rating. Higher = harder for learned data to drift the override. Use 10 for a weak preference (the engine can still adjust based on evidence), 100 for a confident rating, 500 for an immovable position you don't want learning to override. |
| options.lock | boolean | When true, the engine never drifts this rating with newly-learned data. |
Returns the persisted TrustOverride.
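One way to build intuition for strength is to treat the override as a pseudo-count prior that observed outcomes pull against, Beta-style. This is an illustrative model only, not the engine's actual fusion math; `driftedRating` is a hypothetical helper:

```ts
// Illustrative only: model an unlocked override as a pseudo-count prior.
// posterior mean = (strength * confidence + successes) / (strength + trials)
function driftedRating(
  confidence: number, // user-stated mean, in [0, 1]
  strength: number,   // pseudo-count weight of the user's rating
  successes: number,  // observed reliable outcomes
  trials: number,     // total observed outcomes
): number {
  return (strength * confidence + successes) / (strength + trials)
}

// A weak preference (strength 10) drifts quickly under 50 observations
// that suggest a 10/50 = 0.2 success rate...
const weak = driftedRating(0.9, 10, 10, 50)
// ...while a strong rating (strength 500) barely moves.
const strong = driftedRating(0.9, 500, 10, 50)
console.log(weak.toFixed(2), strong.toFixed(2))
```

Under this model, strength 10 lands the weak rating roughly a third of the way between the evidence and the stated 0.9, while strength 500 keeps it close to 0.9.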
beliefs.trust.list(options?)
```ts
const all = await beliefs.trust.list()
const agentsOnly = await beliefs.trust.list({ kind: 'agent' })
```

Returns TrustOverride[].
beliefs.trust.get(target)
```ts
const override = await beliefs.trust.get({ kind: 'agent', id: 'risk-bot' })
if (override) console.log(override.confidence, override.strength)
```

Returns TrustOverride | null (null when no override exists).
beliefs.trust.unset(target)
Remove an override. The entity reverts to the engine's calibrated prior at the next fusion step.
```ts
const { removed } = await beliefs.trust.unset({ kind: 'agent', id: 'risk-bot' })
```

Returns { removed: boolean }.
TrustEntity and TrustOverride shapes
```ts
interface TrustEntity {
  kind: 'agent' | 'source'
  id: string
}

interface TrustOverride {
  entity: TrustEntity
  confidence: number // [0, 1]
  strength: number // ≥ 0
  locked: boolean
  updatedAt: string
}
```

Auth
The trust namespace requires apiKey or scopeToken auth. serviceToken callers cannot mutate user-scoped trust. See Auth.
Validation
set() validates inputs synchronously — confidence must be in [0, 1], strength must be non-negative, and target.kind must be 'agent' or 'source'. Invalid inputs throw TypeError before any network call.
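Those rules are simple enough to mirror locally, for example to fail fast in a form layer. A minimal sketch; `assertValidTrustInput` is a hypothetical helper, not part of the SDK:

```ts
// Mirrors the documented set() checks: kind, confidence range, strength sign.
function assertValidTrustInput(
  target: { kind: string; id: string },
  options: { confidence: number; strength: number },
): void {
  if (target.kind !== 'agent' && target.kind !== 'source') {
    throw new TypeError("target.kind must be 'agent' or 'source'")
  }
  if (!(options.confidence >= 0 && options.confidence <= 1)) {
    throw new TypeError('confidence must be in [0, 1]')
  }
  if (!(options.strength >= 0)) {
    throw new TypeError('strength must be non-negative')
  }
}
```

The negated comparisons (rather than `<`/`>`) also reject NaN, which would otherwise slip through.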
Tool reliability priors
beliefs.tools.* records and reads running estimates of tool reliability — distinct from the agent/source trust above. Where trust overrides are user-stated ratings the engine applies at fusion, tool priors are learned estimates: the engine tracks, per (tool, contextClass) pair, how often each tool produces useful evidence, so the agent can pick the right tool for the job.
Disambiguation
beliefs.tools.observe(envelope) is not the same as the top-level beliefs.observe(envelope). The top-level method runs the full belief-extraction pipeline on free-form content. tools.observe records a single success/failure outcome — orders of magnitude lighter, and only for tool-reliability tracking.
beliefs.tools.observe(envelope)
Record a single tool outcome. Updates the running estimate in place and returns the new summary.
```ts
const prior = await beliefs.tools.observe({
  tool: 'web_search',
  success: true,
  contextClass: 'market-research',
  weight: 1.0,
})

console.log(`web_search rate now ${prior.rate} (${prior.confidence})`)
```

Envelope:
| Field | Type | What it does |
|---|---|---|
| tool | string | Required. Tool identifier. |
| success | boolean | Required. Did the tool produce useful evidence? |
| contextClass | string | Optional context label (e.g. 'exploratory-research'). |
| weight | number | Optional observation weight (default 1.0). |
| agentId | string | Override the bound agent. |
| signal | AbortSignal | Cancellation. |
Returns ToolPriorSummary.
beliefs.tools.priors(options?)
List current priors in scope. Filter to narrow.
```ts
// Every prior in scope:
const all = await beliefs.tools.priors()

// Just one tool:
const search = await beliefs.tools.priors({ tool: 'web_search' })

// Tool + context combo:
const filtered = await beliefs.tools.priors({
  tool: 'github_search',
  contextClass: 'code-review',
})
```

Options: tool?, contextClass?, limit?, agentId?, signal?.
Returns ToolPriorSummary[].
ToolPriorSummary
```ts
{
  id: string
  summary: string
  tool: string
  contextClass: string // empty string when uncategorized
  /** Mean success rate, 0–1. */
  rate: number
  confidence: 'low' | 'medium' | 'high' | 'certain'
  /** 90% uncertainty interval on the mean. */
  credibleInterval: { low: number; high: number }
  /** Total observations accumulated. */
  observations: number
  suggestion?: string
}
```

rate is "on average, this tool produces useful evidence rate × 100% of the time." confidence reflects how many observations back the estimate — low below 5 observations, medium 5–20, high 20+. credibleInterval narrows as observations accumulate.
Using priors to route
A common pattern: before calling a tool, fetch its prior. If confidence === 'low' and rate < 0.3, consider an alternative or attach a fallback. After the call, record the outcome with tools.observe() so the prior keeps learning.
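The decision step of that pattern can be isolated as a pure helper; `shouldTryFallback` is a hypothetical name, and the 0.3 cutoff comes straight from the guidance above:

```ts
// The fields of ToolPriorSummary that the routing decision needs.
interface ToolPriorLite {
  rate: number
  confidence: 'low' | 'medium' | 'high' | 'certain'
}

// Route to an alternative when the prior is both weakly supported and poor.
// Design choice: with no prior at all we return false, letting an untried
// tool build up a record instead of being skipped forever.
function shouldTryFallback(prior: ToolPriorLite | null): boolean {
  if (!prior) return false
  return prior.confidence === 'low' && prior.rate < 0.3
}
```

In a live agent loop you would fetch the summary with beliefs.tools.priors({ tool }), pass the first result to shouldTryFallback, and afterwards record the outcome with beliefs.tools.observe() so the prior keeps learning.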