Nodes
Every graph node, grouped by category. Each card is a technical datasheet: what it consumes, what it produces, which policies apply, and which file owns the implementation.
Five categories map to distinct responsibilities:
- Safety — cannot be bypassed by mode, modality, or tone
- Memory — retrieval and structured working context
- Routing — dispatch to the right response pathway
- Extraction — post-response LLM side effects that run in parallel
- Terminal — transcript finalization, pure state manipulation
crisis_gate_node
Hard safety boundary. Runs BEFORE memory retrieval. Four-layer classification: (1) deterministic override, (2) regex ladder, (3) optional LLM fallback, (4) policy normalization. Returns Command(goto=...) that routes the turn.
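The layered classification is deterministic until the optional LLM fallback; a minimal sketch of the four layers, with hypothetical phrases, patterns, and level names (the production ladder and fallback live in the node's own implementation):

```python
import re

# Hypothetical patterns -- illustrative only, not the production ladder.
OVERRIDE_PHRASES = {"code red"}                    # layer 1: deterministic override
REGEX_LADDER = [
    (re.compile(r"\bhurt myself\b", re.I), "high"),
    (re.compile(r"\bhopeless\b", re.I), "elevated"),
]

def classify_crisis(message: str, llm_fallback=None) -> str:
    """Return a normalized crisis level: 'none', 'elevated', or 'high'."""
    text = message.lower()
    if any(phrase in text for phrase in OVERRIDE_PHRASES):   # layer 1
        return "high"
    for pattern, level in REGEX_LADDER:                      # layer 2
        if pattern.search(message):
            return level
    level = llm_fallback(message) if llm_fallback else "none"  # layer 3 (optional)
    # layer 4: policy normalization -- anything unrecognized collapses to 'none'
    return level if level in {"none", "elevated", "high"} else "none"
```

The node then wraps the result in a `Command(goto=...)` so the crisis branch or the therapeutic branch is chosen before any memory retrieval runs.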
Reads: state.message, state.history[-6:]
Writes: state.crisis, state.routing.route, state.diagnostics

crisis_response_node
Generates crisis response with PFA overlay. Only runs on the crisis branch. Uses a tighter prompt with safety resource surfacing and a single clarifying question.
Reads: state.crisis, state.history
Writes: state.response

crisis_log_node
Always-on audit log. Appends a CrisisLogRecord regardless of memory mode — the privacy asymmetry is deliberate per schema.yaml §2. Never skipped, never rate-limited.
Reads: state.crisis, state.routing, state.message
Writes: crisis_log backend (side effect)

load_memory_node
Therapeutic branch only. Retrieves semantic + episodic + procedural context via hybrid RRF retrieval. Returns structured WorkingMemoryEntry dicts — formatting happens on demand at prompt-build time.
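The fusion step in hybrid RRF retrieval is small enough to sketch. Assuming each retriever (semantic, episodic, procedural) returns an ordered list of entry ids, reciprocal rank fusion scores each id by summing 1 / (k + rank) across lists, with k = 60 as the conventional constant:

```python
from collections import defaultdict

def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists into one, scoring each id by sum(1 / (k + rank))."""
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

An entry that appears near the top of multiple lists outranks one that is first in only a single list, which is exactly the property that makes RRF a good default for combining lexical and vector rankings.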
Reads: state.message, state.session_id, state.user_id
Writes: state.working_memory, state.memory, state.diagnostics

therapeutic_subgraph
Compiled subgraph with dispatcher + 6 mode nodes. Uses TherapeuticSubgraphOutput to restrict what flows back to the parent, preventing reducer double-counting on transcript/history. Each child node has its own RetryPolicy.
Reads: state.message, state.working_memory, state.progress, state.routing
Writes: state.response, state.progress

finalize_turn_node
Appends the assistant reply as a single-element delta. The operator.add reducer on transcript/history handles accumulation. Returns empty delta for blank/whitespace responses to keep the transcript clean. No I/O, so no retry.
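The delta logic is pure, so it can be sketched with state modeled as a plain dict (key names follow the datasheet; the entry shape is an assumption):

```python
def finalize_turn(state: dict) -> dict:
    """Append the assistant reply as a single-element delta; the reducer accumulates."""
    text = (state.get("response") or {}).get("text", "")
    if not text.strip():
        return {}  # blank/whitespace reply: empty delta keeps the transcript clean
    entry = {
        "role": "assistant",
        "text": text,
        "mode": state.get("routing", {}).get("mode"),
    }
    return {"transcript": [entry], "history": [entry]}
```

Returning `{}` rather than `{"transcript": []}` is equivalent under an `operator.add` reducer, but it also makes the no-op visible to anyone inspecting node outputs.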
Reads: state.response.text, state.routing.mode
Writes: state.transcript [+1], state.history [+1]

extract_semantic_facts_node
LLM structured-output call that extracts semantic candidates, then runs deterministic write policy. Low-risk facts may commit immediately; sensitive or interpretive candidates can be held for session end or repetition. Runs in parallel with extract_procedural_rules_node.
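The deterministic write policy after extraction can be sketched as a pure decision function. Category names, the confidence threshold, and the repetition count here are illustrative stand-ins, not the production taxonomy:

```python
SENSITIVE = {"health", "relationships"}   # hypothetical sensitive categories

def write_decision(candidate: dict, seen_count: int = 0) -> str:
    """Decide 'commit', 'hold', or 'drop' for an extracted semantic candidate."""
    if candidate.get("confidence", 0.0) < 0.5:
        return "drop"
    if candidate.get("category") in SENSITIVE or candidate.get("interpretive"):
        # Held candidates commit at session end or once repeated across turns.
        return "commit" if seen_count >= 2 else "hold"
    return "commit"
```

Keeping the policy separate from the LLM call means the commit/hold behavior stays testable without any model in the loop.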
Reads: state.message, state.response.text
Writes: memory_store / session buffer (side effect), state.diagnostics

extract_procedural_rules_node
LLM structured-output call that extracts procedural candidates, then runs deterministic write policy. Explicit durable instructions may commit immediately; implicit preferences can be held for session-end promotion. Same parallel lane as extract_semantic_facts_node.
Reads: state.message, state.response.text
Writes: procedural profile / session buffer (side effect), state.diagnostics

Adding a new node
When you add a node to the graph, think through three things:
Does it do I/O? If yes, register it with retry_policy=RetryPolicy(max_attempts=2) in build_agent_workflow. The retry policy is defense-in-depth for transient failures that escape the node's own error handling.
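LangGraph applies the policy at the graph layer, not in the node body, but the semantics are roughly a bounded retry loop. A stdlib illustration of what max_attempts=2 buys:

```python
def with_retry(fn, max_attempts: int = 2):
    """Re-invoke fn on exception, up to max_attempts total calls."""
    def wrapped(*args, **kwargs):
        for attempt in range(1, max_attempts + 1):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == max_attempts:
                    raise  # out of attempts: surface the failure
    return wrapped
```

One transient failure per turn is absorbed; a second consecutive failure propagates, which is the right behavior for a policy that backstops the node's own error handling rather than replacing it.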
Does it write to a shared state field? If multiple nodes write to the same field (like diagnostics or progress), the field needs a reducer (_merge_dicts for dicts, operator.add for lists) in AgentState. Without a reducer, nodes will clobber each other's writes.
Can it run in parallel? If the node has no ordering dependency with its siblings (the way the two extractors don't), wire it as a fan-out edge from its parent. The reducers handle concurrent writes safely.
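The fan-out behavior can be illustrated without the graph runtime: run the sibling lanes concurrently and fold their dict deltas the way the diagnostics reducer would. The extractor bodies here are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_semantic(state: dict) -> dict:
    return {"diagnostics": {"semantic_candidates": 2}}

def extract_procedural(state: dict) -> dict:
    return {"diagnostics": {"procedural_candidates": 1}}

def run_fanout(state: dict) -> dict:
    """Run sibling nodes in parallel; merge dict deltas like the reducer would."""
    siblings = [extract_semantic, extract_procedural]
    with ThreadPoolExecutor() as pool:
        deltas = list(pool.map(lambda node: node(state), siblings))
    merged: dict = {}
    for delta in deltas:
        merged.update(delta["diagnostics"])
    return {"diagnostics": merged}
```

Because each lane writes disjoint keys and the reducer merges rather than assigns, neither ordering of completion changes the final state.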
Related
- Agent Graph — full topology overview and design principles
- Agent State — what flows through the nodes
- Therapeutic Modes — the 6 modes inside the therapeutic subgraph