sdk/moves.mdx

Moves: SDK

List, generate, and act on recommended next moves.

A move is an engine-recommended next action — the answer to "given what the agent currently believes, what should it investigate next?" — ranked by expected information gain. beliefs.moves.* wraps the engine's recommender. See Moves (concept) for the model behind the surface; this page covers the SDK methods.

forecast(action) and cascade(action) look similar but answer different questions. forecast projects the action's value on the current agent's belief state — "how much will this clarify my picture?" cascade projects the same action across other agents' beliefs via the influence matrix — "if I do this, how much churn does it create for the rest of the swarm?" Use forecast for self-directed planning; use cascade when you're coordinating multi-agent work.
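The two projections combine naturally into a go/no-go check. A minimal sketch (not part of the SDK — the helper name and thresholds are illustrative assumptions; the input shapes mirror the `score` fields of the `ForecastSummary` and `CascadeSummary` types documented below):

```typescript
// Hypothetical helper: combine a self-directed forecast with a swarm-level
// cascade projection into a single go/no-go decision for a candidate action.
interface ForecastLike {
  score: number // 0-1 expected value for this agent
}
interface CascadeLike {
  score: number // 0-1 aggregate churn across other agents
}

function shouldAct(
  forecast: ForecastLike,
  cascade: CascadeLike,
  minValue = 0.5, // assumed threshold: only act on clearly useful moves
  maxChurn = 0.5, // assumed threshold: skip moves that destabilize the swarm
): boolean {
  // Act when the move clarifies our own picture without rippling too widely.
  return forecast.score >= minValue && cascade.score <= maxChurn
}
```

In practice you would feed this the results of `beliefs.moves.forecast(action)` and `beliefs.moves.cascade(action)` for the same candidate action; the thresholds are not SDK defaults.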

beliefs.moves.list(options?)

Get the currently-ranked moves for the bound scope. Moves come back highest-priority first.

```ts
const moves = await beliefs.moves.list({ topN: 3 })
for (const m of moves) {
  console.log(m.action, m.rationale, m.expectedDeltaH)
}
```

Options:

| Option | Type | What it does |
| --- | --- | --- |
| `topN` | `number` | Cap the returned slice. Server-side ranking is unchanged; this is a client-side trim. |

Returns ThinkingMove[].

beliefs.moves.generate(options)

Ask the recommender for a fresh move targeting a specific belief. Use this when you want a move now (e.g., the user just opened a belief detail page) rather than waiting for one to appear in list().

```ts
const result = await beliefs.moves.generate({
  beliefId: 'belief-abc123',
  includeJustification: true,
})

if (result.move) {
  showRecommendation(result.move, result.move.justification)
} else if (result.reason === 'belief_complete') {
  showDoneState()
}
```

Options:

| Option | Type | What it does |
| --- | --- | --- |
| `beliefId` | `string` | Required. Belief to target. |
| `targetId` | `string` | Alias for `beliefId`. `beliefId` wins if both are set. |
| `includeJustification` | `boolean` | Attach the engine's full justification payload to the move. |
| `sessionId` | `string` | Bind to a specific session for analytics. |

Returns:

```ts
{
  success: true
  move: ThinkingMoveWithJustification | null
  target: ResolvedCanonicalTarget
  reason?: 'belief_complete'    // present when no move was generated
  durationMs: number
}
```

move: null with reason: 'belief_complete' is the engine signaling "this belief is in good shape — nothing to recommend right now."

beliefs.moves.act(moveId, action, options?)

Record a user action on a move. The engine learns from accept/snooze/dismiss signals to improve future ranking.

```ts
await beliefs.moves.act(move.id, 'accept')
await beliefs.moves.act(move.id, 'snooze')
await beliefs.moves.act(move.id, 'dismiss')
```

action is one of 'accept', 'snooze', or 'dismiss'; any other value throws a TypeError.
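Because act() rejects unknown strings at runtime, a narrowing guard can keep unchecked UI state out of the call. A sketch, assuming the three literals above are the complete set (the guard itself is not part of the SDK):

```typescript
// Hypothetical guard: narrow an arbitrary string to the three actions that
// act() accepts, so free-form UI state never triggers the runtime TypeError.
type MoveAction = 'accept' | 'snooze' | 'dismiss'

function isMoveAction(value: string): value is MoveAction {
  return value === 'accept' || value === 'snooze' || value === 'dismiss'
}
```

Usage: `if (isMoveAction(input)) await beliefs.moves.act(move.id, input)`.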

Returns:

```ts
{
  success: true
  move: ThinkingMove   // updated with new status / resolvedAt
  durationMs: number
}
```

beliefs.moves.rank(options?)

Engine-ranked next-best moves over the current scope. Each entry surfaces the composite ranking score, expected info-gain, cost, and a cost-normalized ratio so callers can budget-cap on any axis.

```ts
const ranked = await beliefs.moves.rank({ topN: 3, budget: 0.05 })
for (const m of ranked) {
  console.log(`${m.action}/${m.subType} → q=${m.qValue} cost=${m.cost} voi=${m.valueOfInformation}`)
}
```

Options: topN? (default 5, max 50), budget? (filters out moves whose cost exceeds budget before ranking), agentId?, signal?.

Returns MoveRankingSummary[]:

```ts
{
  id, summary, targetId
  targetKind: 'claim' | 'goal' | 'gap'
  action: string                            // 'gather_evidence', 'clarify', ...
  subType: string                           // 'design_test', 'tradeoff_mapping', ...
  qValue: number                            // composite ranking score (higher = better)
  expectedInfoGain: number                  // expected info-gain from executing this move
  cost: number                              // USD / tokens / effort units
  valueOfInformation: number                // info-gain / max(cost, 0.01)
  executor: 'agent' | 'user' | 'both'
  confidence: 'low' | 'medium' | 'high'
}
```

beliefs.moves.forecast(action, options?)

Project the expected value of a candidate action on the current belief state. The engine runs its predictive model forward from the current state and returns one summary.

```ts
const summary = await beliefs.moves.forecast('gather_evidence', { depth: 3, rollouts: 50 })
console.log(`score=${summary.score} confidence=${summary.confidence}`)
console.log(`will sharpen: ${summary.willAnswer.join(', ')}`)
```

Options: depth? (max 5), rollouts? (max 200), maxTopics?, agentId?, signal?.

Returns ForecastSummary — same shape as beliefs.forecast.predict documented below.

beliefs.moves.cascade(action, options?)

Predict how a candidate action will ripple through other agents' beliefs via the fitted influence matrix. Use this for multi-agent coordination — it tells you whether your move will cause downstream churn before you make it.

```ts
const cascade = await beliefs.moves.cascade('gather_evidence', {
  targetBeliefId: 'b-market-size',
  magnitude: 0.3,
})
for (const shift of cascade.willShift) {
  if (shift.severity !== 'none') {
    console.warn(`Agent ${shift.agent}: ${shift.summary}`)
  }
}
```

Options: targetBeliefId? (defaults to most-uncertain active belief), magnitude? (0–1, default 0.2), maxAgents?, agentId?, signal?.

Returns CascadeSummary:

```ts
{
  id, summary
  /** Aggregate cascade risk: 0 = isolated, 1 = every known agent feels it. */
  score: number
  willShift: Array<{
    agent: string
    summary: string
    severity: 'none' | 'low' | 'medium' | 'high'
    affectedBeliefs?: string[]
  }>
  confidence: 'low' | 'medium' | 'high'
  why: string
}
```

Cold-start workspaces return score: 0 with confidence: 'low' — the influence matrix has no co-observation evidence yet.
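A cold-start score of 0 means "no evidence", not "no ripple risk", so callers should avoid treating it as safety. One hedged way to encode that client-side (the helper name is hypothetical, not an SDK export):

```typescript
// Hypothetical helper: treat low-confidence cascade scores as "no signal"
// rather than "no risk". A cold-start workspace returns score: 0 with
// confidence: 'low'; returning null forces callers to handle that case.
type Confidence = 'low' | 'medium' | 'high'

function usableCascadeScore(cascade: { score: number; confidence: Confidence }): number | null {
  return cascade.confidence === 'low' ? null : cascade.score
}
```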


beliefs.forecast.predict(actions, options?)

Free-form action forecasting. Where moves.forecast(action) evaluates one action against the engine's recommended-move vocabulary, forecast.predict(actions[]) runs the same predictive model against an arbitrary list of caller-supplied actions and returns one summary per input action, in input order.

```ts
const forecasts = await beliefs.forecast.predict(
  ['gather_evidence_apac', 'design_test_market_size', 'reframe_question'],
  { horizon: 3, rollouts: 50 },
)

const ranked = [...forecasts].sort((a, b) => b.score - a.score)
console.log(`Best action: ${ranked[0].summary} (score=${ranked[0].score})`)
```

Options:

| Option | Default | What it does |
| --- | --- | --- |
| `horizon` | `1` | Rollout depth per action (max 5). |
| `rollouts` | `30` | Independent rollouts per action (max 200). |
| `maxTopics` | — | Cap on belief topics surfaced in `willAnswer`. |
| `agentId` | bound agent | Run the forecast as a different agent. |
| `signal` | — | `AbortSignal` for cancellation. |

Returns ForecastSummary[]:

```ts
{
  id: string
  summary: string
  /** 0–1 expected value. Higher = more useful. */
  score: number
  /** Plain-English belief topics most likely to sharpen under this action. */
  willAnswer: string[]
  /** Confidence in the forecast itself, not the action. */
  confidence: 'low' | 'medium' | 'high'
  /** Short human explanation. */
  why: string
  suggestion?: string
  relatedBeliefs?: string[]
}
```

confidence reflects how much evidence the engine's predictive model has accumulated for similar actions in this workspace — distinct from score. A high-score action with confidence: 'low' means "this looks great, but we haven't seen this action before, so the score is extrapolation rather than a track record."
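One hedged pattern for using both fields together: rank by score, but break near-ties in favor of forecasts the model has a track record for. The function, the tolerance band, and the numeric confidence mapping are all illustrative assumptions, not SDK behavior:

```typescript
// Hypothetical ranking: order forecasts by score, but when two scores fall
// within an assumed near-tie tolerance, prefer the higher-confidence one --
// extrapolated scores lose ties to scores backed by observed evidence.
type Confidence = 'low' | 'medium' | 'high'
const confidenceRank: Record<Confidence, number> = { low: 0, medium: 1, high: 2 }

function rankByScoreThenConfidence<T extends { score: number; confidence: Confidence }>(
  forecasts: T[],
  tolerance = 0.05, // assumed near-tie band
): T[] {
  return [...forecasts].sort((a, b) =>
    Math.abs(a.score - b.score) < tolerance
      ? confidenceRank[b.confidence] - confidenceRank[a.confidence]
      : b.score - a.score,
  )
}
```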

Cold-start behavior

On a fresh workspace with no archived deltas, every forecast comes back with confidence: 'low' and a low score. That's the honest answer — the model has no evidence yet. Forecasts typically reach confidence: 'medium' after roughly 5–10 after() calls in the workspace, and 'high' once dozens of similar actions have been observed.


ThinkingMove shape

```ts
{
  id: string
  targetId: string
  targetEntityType?: 'claim' | 'goal' | 'gap' | 'risk' | string
  targetEntityId?: string
  action: 'clarify' | 'gather_evidence' | 'resolve_uncertainty' | 'compare_paths' | string
  rationale: string
  expectedDeltaH: number       // expected uncertainty reduction from acting on this move
  status: 'suggested' | 'accepted' | 'snoozed' | 'dismissed' | string
  suggestedModality?: string   // hint about how to surface (e.g., 'inline', 'banner')
  qValue?: number              // recommender's internal score, when available
  executor?: 'agent' | 'user' | 'both'
  createdAt: string
  updatedAt?: string
  resolvedAt?: string
}
```

expectedDeltaH is the recommender's estimate of how much uncertainty this move reduces if accepted — it's the value field in the concept doc.
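list() already returns moves highest-priority first, so a client rarely needs to re-sort; but when pooling moves from several scopes or sessions, a greedy client-side pick by expectedDeltaH might look like this (the helper is a hypothetical sketch, not an SDK export):

```typescript
// Hypothetical helper: from an unordered pool of moves (e.g. merged across
// scopes), pick the one with the largest expected uncertainty reduction.
function biggestExpectedDrop<T extends { expectedDeltaH: number }>(moves: T[]): T | undefined {
  return moves.reduce<T | undefined>(
    (best, m) => (best === undefined || m.expectedDeltaH > best.expectedDeltaH ? m : best),
    undefined,
  )
}
```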

Auth

The moves namespace requires apiKey or scopeToken auth. serviceToken callers cannot invoke moves.*. See Auth.
