Agent State
One AgentState TypedDict flows through the entire graph. Every
node reads from it and writes back partial updates (deltas).
Fields with reducers (operator.add, _merge_dicts) are
merged automatically by LangGraph — nodes return only their own
keys and the reducer handles accumulation across the graph and
across turns via the checkpointer.
Fields without reducers use last-write-wins — the most recent delta replaces the previous value.
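The two merge behaviors can be sketched in plain Python. This is a simplified stand-in for what LangGraph does internally, not the library's actual code; `_merge_dicts` here is a plausible shallow-merge implementation, and the field names mirror the state described above.

```python
import operator
from typing import Annotated, TypedDict

def _merge_dicts(left: dict, right: dict) -> dict:
    """Shallow merge: the new delta's keys win, missing keys survive."""
    return {**left, **right}

class AgentState(TypedDict, total=False):
    history: Annotated[list, operator.add]      # reducer: append
    diagnostics: Annotated[dict, _merge_dicts]  # reducer: merge
    message: str                                # no reducer: last write wins

# Simplified merge loop standing in for LangGraph's channel update.
REDUCERS = {"history": operator.add, "diagnostics": _merge_dicts}

def apply_delta(state: dict, delta: dict) -> dict:
    new = dict(state)
    for key, value in delta.items():
        reducer = REDUCERS.get(key)
        if reducer and key in new:
            new[key] = reducer(new[key], value)  # accumulate
        else:
            new[key] = value                     # replace
    return new

state = {"history": [{"role": "user"}], "diagnostics": {"a_ms": 1.0}, "message": "hi"}
state = apply_delta(state, {"history": [{"role": "assistant"}],
                            "diagnostics": {"b_ms": 2.0},
                            "message": "bye"})
# history now holds both turns, diagnostics holds both keys, message is replaced
```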
The table below lists each field, its type, and its lifecycle role.

| Field | Type | Lifecycle |
|---|---|---|
| message | str | input |
| channel | Channel | input |
| user_id | str \| None | input |
| session_id | str \| None | input |
| installed_skills | list[str] | input |
| history | Annotated[list[dict], operator.add] | reducer (operator.add) |
| transcript | Annotated[list[dict], operator.add] | reducer (operator.add) |
| working_memory | list[WorkingMemoryEntry] | loaded |
| memory.summary | str | loaded |
| memory.procedural_rules | list[str] | loaded |
| memory.proactive_recall_enabled | bool | loaded |
| progress | Annotated[SessionProgressState, _merge_dicts] | reducer (_merge_dicts) |
| progress.turn_count | int | reducer |
| progress.stage | str | reducer |
| progress.exercise_type | str \| None | reducer |
| progress.exercise_step | int \| None | reducer |
| crisis | CrisisAssessment | turn-scoped |
| routing.route | str | turn-scoped |
| routing.mode | str | turn-scoped |
| routing.mode_source | str | turn-scoped |
| routing.mode_type | ModeType | turn-scoped |
| routing.modality | str \| None | turn-scoped |
| response.text | str | turn-scoped |
| response.kind | ResponseKind | turn-scoped |
| response.guidance | str | turn-scoped |
| diagnostics | Annotated[dict, _merge_dicts] | reducer (_merge_dicts) |
| diagnostics.load_memory_ms | float | reducer |
| diagnostics.crisis_gate_ms | float | reducer |
| diagnostics.extract_facts_ms | float | reducer |
| diagnostics.extract_procedural_ms | float | reducer |

Lifecycle
Not all fields survive between turns. Understanding which persist and which are re-derived matters for debugging and eval design.
Input — set by build_initial_state() from AgentInput. Only
the current user turn is emitted into history and transcript —
the operator.add reducer appends it to the checkpoint's
accumulated list.
Reducer-backed — fields with Annotated[..., reducer] that
accumulate across the graph and across turns:
- history / transcript — operator.add appends new turns. build_initial_state emits [user_turn]; finalize_turn_node emits [assistant_turn]. The checkpointer restores prior turns.
- progress — _merge_dicts merges per-turn fields (turn_count, stage) with cross-turn fields (exercise_type, exercise_step) from the checkpoint. This is how exercise state persists without a manual carry-forward hack.
- diagnostics — _merge_dicts lets all I/O nodes write their own timing/write-count keys independently. Parallel extractors write simultaneously without racing.
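The progress merge is the interesting case. A minimal sketch, assuming `_merge_dicts` is a shallow dict merge (the real implementation lives in agent/state.py); the field values here are made up for illustration:

```python
def _merge_dicts(left: dict, right: dict) -> dict:
    # Shallow merge: keys absent from the new delta survive from the checkpoint.
    return {**(left or {}), **(right or {})}

# Checkpointed progress restored from the previous turn:
checkpoint = {"turn_count": 3, "stage": "working",
              "exercise_type": "grounding", "exercise_step": 2}

# This turn's node emits only the keys it owns:
delta = {"turn_count": 4, "stage": "closing"}

merged = _merge_dicts(checkpoint, delta)
# exercise_type / exercise_step carry forward without any manual plumbing
```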
Loaded per turn — load_memory_node populates working_memory
with structured WorkingMemoryEntry dicts (semantic facts + episodic
arcs via hybrid RRF retrieval), memory.procedural_rules (full rule
set from the user's procedural profile), and
memory.proactive_recall_enabled (the recall toggle). These are raw
structured data — formatting happens on demand at prompt-build time.
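Because every node returns only its own keys, a loader node's delta is small and self-describing. A hypothetical sketch of the shape (the retrieval stub, rule text, and nesting here are illustrative, not the real agent/graph.py code):

```python
def _retrieve(user_id):
    # Stand-in for the hybrid RRF retrieval; returns structured dicts,
    # not pre-formatted strings.
    return [{"type": "semantic",
             "evidence_quote": "I have a sister named Sarah."}]

def load_memory_node(state: dict) -> dict:
    # Returns only the keys this node owns; other fields are untouched.
    entries = _retrieve(state.get("user_id"))
    return {
        "working_memory": entries,
        "memory": {"procedural_rules": ["keep replies short"],  # illustrative rule
                   "proactive_recall_enabled": True},
    }

delta = load_memory_node({"user_id": "u1"})
```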
Turn-scoped — reset at the start of each turn, then filled by nodes as the graph executes:
- crisis — safety assessment (level, confidence, reason, flags) from crisis_gate_node
- routing.route — "therapeutic" or "crisis" from the gate
- routing.mode — which therapeutic mode was selected (supportive, reflective, clarifying, psychoeducation, guided_exercise, closing)
- routing.mode_source — how the mode was selected (keyword, llm, default)
- routing.mode_type — category (therapeutic, operational, crisis)
- routing.modality — therapeutic modality for this turn (MI, CBT, ACT, DBT, grief, IPT, PFA, or none)
- response — the generated reply (text, kind, guidance)
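The reset happens because build_initial_state emits fresh values for every turn-scoped key, while reducer-backed keys receive only the current turn. A hedged sketch of that split (the empty-value conventions here are assumptions, not the exact agent/graph.py code):

```python
def build_initial_state(agent_input: dict) -> dict:
    user_turn = {"role": "user", "content": agent_input["message"]}
    return {
        # Reducer-backed: operator.add appends this turn to the checkpoint's list.
        "history": [user_turn],
        "transcript": [user_turn],
        # Turn-scoped: fresh each turn; last-write-wins replaces prior values.
        "crisis": None,
        "routing": {"route": "", "mode": "", "mode_source": "",
                    "mode_type": None, "modality": None},
        "response": {"text": "", "kind": None, "guidance": ""},
    }

state = build_initial_state({"message": "hello"})
```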
Working memory structure
working_memory carries structured WorkingMemoryEntry dicts,
not pre-formatted strings. Two entry types:
| Type | Fields | Example formatted output |
|---|---|---|
| SemanticWorkingMemoryEntry | type="semantic", evidence_quote | "Previously noted: I have a sister named Sarah." |
| EpisodicWorkingMemoryEntry | type="episodic", summary, primary_themes, is_catch_up | "Last session (grief): talked about loss after my dog died." |
Formatting happens via format_working_memory_entries() at three
surfaces: the therapeutic prompt builder, the dispatcher prompt, and
the CLI's context panel. Legacy str entries from older checkpoints
pass through unchanged.
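A minimal sketch of a formatter in this style, assuming the table's output strings and using made-up entry values; the real format_working_memory_entries lives in agent/working_memory.py:

```python
def format_entry(entry) -> str:
    if isinstance(entry, str):           # legacy checkpoints: pass through unchanged
        return entry
    if entry["type"] == "semantic":
        return f'Previously noted: {entry["evidence_quote"]}'
    # episodic: lead with the primary theme, then the session summary
    themes = entry.get("primary_themes") or ["general"]
    return f'Last session ({themes[0]}): {entry["summary"]}'

def format_entries(entries) -> list[str]:
    return [format_entry(e) for e in entries]

lines = format_entries([
    {"type": "semantic", "evidence_quote": "I have a sister named Sarah."},
    {"type": "episodic", "summary": "talked about loss after my dog died.",
     "primary_themes": ["grief"], "is_catch_up": False},
    "raw legacy string",
])
```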
Public contract
The external API sees AgentInput and AgentOutput — not
AgentState. The state is internal to the graph.
| Model | Key fields |
|---|---|
| AgentInput | message, channel, user_id, session_id, history, working_memory, installed_skills |
| AgentOutput | response_text, response_type, crisis, mode, mode_type, mode_source, should_persist_memory, diagnostics |
| CrisisAssessment | level (0-3), confidence, reason, needs_crisis_response, needs_clarification |
| Message | role, content, mode (`str \| None`) |
Runtime context
WorkflowContext is a frozen dataclass (@dataclass(slots=True, frozen=True)) injected via runtime.context. All nodes access it
via attribute access (runtime.context.llm_client), not dict
access.
| Field | Type | Default |
|---|---|---|
| llm_client | `BaseLLMClient \| None` | None |
| memory_store | MemoryStore | required |
| crisis_log_backend | CrisisLogBackend | required |
| memory_mode | MemoryMode | required |
| embedding_provider | `EmbeddingProvider \| None` | None |
Key files
| File | What it defines |
|---|---|
| agent/state.py | AgentState TypedDict with reducer annotations (operator.add, _merge_dicts) |
| agent/working_memory.py | WorkingMemoryEntry types, factory functions, format_working_memory_entry/entries |
| agent/models.py | AgentInput, AgentOutput, Message, CrisisAssessment, Channel, MessageRole, stream event types |
| agent/graph.py | build_initial_state() (input -> state), state_to_output() (state -> output) |
| agent/runtime_context.py | WorkflowContext frozen dataclass — runtime dependencies via runtime.context |