What a Move Is
A move is a recommended action the system surfaces based on the current belief state. Each move has an action type, a target, a reason, and an expected value representing how much it would improve the agent's understanding.
```ts
{
  action: 'gather_evidence',
  target: 'Missing APAC market analysis',
  reason: 'High-impact gap with 3 downstream dependencies',
  value: 0.72,
  executor: 'agent',
}
```

- action. The type of move: `clarify`, `resolve_uncertainty`, `gather_evidence`, or `compare_paths`.
- target. The belief, gap, or contradiction the move addresses.
- reason. Why this move matters, in natural language.
- value. Expected information gain, 0–1. Higher means more uncertainty reduction.
- executor. Who should act: `agent`, `user`, or `both`.
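Put together, a move's shape can be sketched as a TypeScript interface. This is an illustrative sketch inferred from the example above, not the SDK's actual exported types; the optional `subtype` field corresponds to the subtype table below.

```typescript
// Illustrative shape of a move -- not the SDK's real type definitions.
type MoveAction =
  | 'clarify'
  | 'resolve_uncertainty'
  | 'gather_evidence'
  | 'compare_paths'

type MoveExecutor = 'agent' | 'user' | 'both'

interface Move {
  action: MoveAction
  target: string        // the belief, gap, or contradiction addressed
  reason: string        // natural-language justification
  value: number         // expected information gain, 0–1
  executor: MoveExecutor
  subtype?: string      // optional specificity, e.g. 'research'
}

const exampleMove: Move = {
  action: 'gather_evidence',
  target: 'Missing APAC market analysis',
  reason: 'High-impact gap with 3 downstream dependencies',
  value: 0.72,
  executor: 'agent',
}
```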
Move Types
| Action | When it surfaces |
|---|---|
| `gather_evidence` | A gap or weakly supported belief needs investigation |
| `clarify` | A contradiction exists between beliefs |
| `resolve_uncertainty` | A load-bearing belief has insufficient evidence |
| `compare_paths` | Multiple valid interpretations need a decision framework |
Each action can have a subtype for specificity:
| Subtype | Description |
|---|---|
| `research` | Find external data or sources |
| `validate_assumption` | Test whether an assumption holds |
| `resolve_contradiction` | Address conflicting beliefs |
| `quantify_risk` | Measure exposure on a risk belief |
| `design_test` | Propose an experiment to confirm or refute |
| `synthesize` | Combine multiple findings into a conclusion |
| `reframe` | Restructure the problem based on new information |
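Modeling subtypes as a string-literal union keeps handler code exhaustive; a sketch under the assumption that the field carries exactly these values (the SDK may type it more loosely):

```typescript
// Hypothetical union of the documented subtypes; `describe` is an
// illustrative helper, not an SDK method.
type MoveSubtype =
  | 'research'
  | 'validate_assumption'
  | 'resolve_contradiction'
  | 'quantify_risk'
  | 'design_test'
  | 'synthesize'
  | 'reframe'

function describe(subtype: MoveSubtype): string {
  switch (subtype) {
    case 'research':
      return 'find external sources'
    case 'design_test':
      return 'propose an experiment'
    default:
      // Fall back to a readable form of the identifier itself.
      return subtype.replace('_', ' ')
  }
}
```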
How Moves Are Ranked
Moves are ranked by expected information gain: which action would most reduce uncertainty in the beliefs that matter most.
A gap with many downstream dependencies generates a higher-value move than a gap with none. A contradiction between two load-bearing beliefs generates a higher-value clarify move than a contradiction between peripheral claims.
The system considers:
- How much uncertainty the move would reduce
- How many other beliefs depend on the target
- Whether the target belief is load-bearing
- The current clarity score and what would improve it most
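One way to picture how those factors combine is a scoring heuristic: boost raw information gain by dependency count and a load-bearing multiplier, then sort descending. This is a hypothetical illustration of the idea, not the system's actual formula or weights.

```typescript
// Hypothetical ranking heuristic -- the real weighting is internal to the system.
interface Candidate {
  target: string
  infoGain: number     // expected uncertainty reduction, 0–1
  dependents: number   // how many beliefs depend on the target
  loadBearing: boolean
}

function rankMoves(candidates: Candidate[]): Candidate[] {
  const score = (c: Candidate) =>
    c.infoGain * (1 + Math.log1p(c.dependents)) * (c.loadBearing ? 1.5 : 1)
  // Sort a copy, highest score first.
  return [...candidates].sort((a, b) => score(b) - score(a))
}

const ranked = rankMoves([
  { target: 'peripheral claim', infoGain: 0.6, dependents: 0, loadBearing: false },
  { target: 'APAC gap', infoGain: 0.5, dependents: 3, loadBearing: true },
])
// The gap with downstream dependencies outranks the isolated claim,
// even though its raw information gain is lower.
```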
Reading Moves
Moves are returned from every major SDK method:
```ts
// Before the agent acts
const context = await beliefs.before(userMessage)
console.log(context.moves) // ranked actions for this turn

// After the agent acts
const delta = await beliefs.after(result.text)
console.log(delta.moves) // updated recommendations

// Full world state
const world = await beliefs.read()
console.log(world.moves) // all current recommendations
```

Routing on Moves
Use moves to direct agent behavior:
```ts
const delta = await beliefs.after(result.text)
const next = delta.moves[0]

if (!next) {
  // No recommended actions. Clarity is likely high.
  await finalize(delta.state)
} else if (next.action === 'gather_evidence') {
  await runResearch(next.target)
} else if (next.action === 'clarify') {
  await resolveContradiction(next.target)
} else if (next.action === 'resolve_uncertainty') {
  await deepDive(next.target)
} else if (next.action === 'compare_paths') {
  await presentTradeoffs(next.target)
}
```

Executor
The executor field indicates who should act on the move.
| Executor | Meaning |
|---|---|
| `agent` | The agent can handle this autonomously |
| `user` | This requires human input or judgment |
| `both` | The agent can start, but the user needs to weigh in |
A user executor move might surface when the system detects a value judgment or strategic decision that the agent should not make alone.
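In practice the executor field can gate autonomy: run `agent` moves directly, pause for the human on `user` moves, and do both in sequence for `both`. A minimal sketch, where `runAutonomously` and `askUser` are placeholder helpers standing in for your own agent and UI plumbing:

```typescript
type Executor = 'agent' | 'user' | 'both'
interface MoveLike { target: string; executor: Executor }

// Placeholder helpers -- record what happened so the flow is observable.
const log: string[] = []
async function runAutonomously(m: MoveLike) { log.push(`agent:${m.target}`) }
async function askUser(m: MoveLike) { log.push(`user:${m.target}`) }

async function dispatch(move: MoveLike) {
  if (move.executor === 'agent') {
    await runAutonomously(move)   // fully autonomous
  } else if (move.executor === 'user') {
    await askUser(move)           // human judgment required
  } else {
    await runAutonomously(move)   // agent starts the work...
    await askUser(move)           // ...user weighs in before committing
  }
}
```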