
Tools

OpenCouch treats tools narrowly: a tool is an agent-invokable capability that reaches outside the graph's own state — today that means provider-native web search for crisis resources. Tools are deliberately scarce because every external call is a safety surface.

Unlike most LangGraph tutorials, tools here are not registered with the graph as @tool-decorated functions. Instead, a node calls the tool directly when its condition is met. Benefits:

  • Explicit call sites. You always know which node will invoke which tool — no LLM-driven tool-selection layer to debug.
  • Provider-native grounding. Web search uses Google Search (Gemini) or web_search (OpenAI) via use_search=True on generate_text(), not a custom tool attachment.
  • Graceful degradation by construction. A tool failure returns empty results; the caller decides whether to continue without them.
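The pattern above can be sketched in a few lines. This is a minimal illustration, not the real OpenCouch code: the node and tool signatures, the dict-shaped state, and `search_hotlines` are simplified stand-ins.

```python
# Sketch of the node-invoked tool pattern: the node, not an LLM tool-selection
# layer, decides whether the tool runs. All names are illustrative stand-ins.

def crisis_response_node(state, ctx):
    resources = []
    # Explicit call site: the tool fires only when the crisis gate demands it
    # and an LLM client is actually available.
    if state.get("needs_crisis_response") and ctx.get("llm_client") is not None:
        resources = find_local_crisis_resources(state, ctx["llm_client"])
    # Graceful degradation: an empty list means "respond without resources",
    # never "block the turn".
    state["found_resources"] = resources
    return state

def find_local_crisis_resources(state, llm_client):
    # Stand-in for the real tool: returns [] on any failure rather than raising.
    try:
        return llm_client.search_hotlines(state.get("location", ""))
    except Exception:
        return []
```

Because the call site is a plain conditional, the degraded path (no client, or a tool failure) is testable without any provider credentials.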
Tool inventory: 1 active, 0 planned.

find_local_crisis_resources

Pattern: node-invoked, not LangGraph-registered. The node calls the tool function directly when its condition is met; provider-native grounding (Google Search, OpenAI web_search) is enabled via the use_search=True kwarg on generate_text().

Trigger: the crisis gate returns needs_crisis_response AND llm_client is available.

Surfaces verified crisis hotlines local to the user. Uses provider-native web search grounding (Google Search for Gemini, web_search tool for OpenAI) — not a custom tool attachment. The call graph chains two deterministic LLM calls: first extract the user's location from the conversation, then search for resources with grounding enabled.

Pipeline

01: Extract location
    system prompt: "Extract location information from mental health support conversations. Return only the location mentioned, or empty string if none."
    call: use_search=False, temp=0
    produces: str — e.g. "Singapore", "UK", ""
    on failure: returns empty location; the pipeline aborts with resources=[]

02: Search with grounding
    system prompt: "You are a factual assistant helping to find official crisis support resources. Use your web search capability to find verified hotlines. Format: - Name | Phone | Website"
    call: use_search=True, temp=0
    produces: str — pipe-separated or markdown-bold lines
    on failure: logs a warning, returns an empty list

03: Parse + normalize
    call: deterministic (no LLM)
    produces: list[dict] with name/phone/url/region (max 5)
    on failure: drops unparseable rows, keeps valid ones

Writes to: state.response.inferred_location, state.response.found_resources

On failure: any stage failure returns empty results. The crisis response proceeds without resources rather than blocking on a third-party outage.
Source: agent/tools/web_search.py::find_local_crisis_resources · Tests: tests/test_web_search_parser.py (13 parser tests)
Future tool candidates: session-arc summarizer, structured assessment lookup, skill-library retrieval.
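Stage 03 is deterministic, so it is the easiest part to pin down with unit tests. A minimal parser sketch, assuming only the `- Name | Phone | Website` line format described above (the real parser in agent/tools/web_search.py also handles markdown-bold lines and may differ in detail):

```python
# Sketch of the parse + normalize stage: pipe-separated lines in,
# list[dict] out, unparseable rows dropped, output capped at 5 entries.
# Illustrative only; parse_resources is not the real function name.

MAX_RESOURCES = 5

def parse_resources(raw: str, region: str = "") -> list[dict]:
    resources = []
    for line in raw.splitlines():
        # Strip list markers like "- " before splitting on pipes.
        line = line.strip().lstrip("-*").strip()
        parts = [p.strip() for p in line.split("|")]
        if len(parts) < 2 or not parts[0]:
            continue  # drop unparseable rows, keep the valid ones
        resources.append({
            "name": parts[0],
            "phone": parts[1],
            "url": parts[2] if len(parts) > 2 else "",
            "region": region,
        })
        if len(resources) == MAX_RESOURCES:
            break
    return resources
```

Keeping this stage free of LLM calls is what lets the 13 parser tests run in the ordinary unit suite.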

When does a capability become a tool?

Not every external call is a tool. The memory store, the crisis log, the embedding provider — these are all injected dependencies on WorkflowContext, not tools. The distinction:

Type                | Example                                              | Lives on
--------------------|------------------------------------------------------|----------------------------
Injected dependency | memory_store, crisis_log_backend, embedding_provider | WorkflowContext (dataclass)
Tool                | find_local_crisis_resources                          | agent/tools/*.py module

A capability qualifies as a tool when:

  1. It reaches outside local infrastructure (public web, third-party API, model-provider search grounding)
  2. It can fail and must degrade gracefully without blocking the turn
  3. It has conditional invocation — a node decides at runtime whether to call it

If a capability is used on every turn (like the memory store) or fails hard when unavailable (like the database), it's a dependency, not a tool.
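The split can be made concrete in code. A hedged sketch: the field names are the ones listed in the table above, but the field types, defaults, and the presence of llm_client on the context are assumptions, not the real definition.

```python
# Sketch of the dependency/tool split. Injected dependencies ride on the
# context dataclass and are assumed present on every turn; tools are plain
# functions imported from agent/tools/* and called conditionally by a node.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class WorkflowContext:
    # Injected dependencies: used every turn, fail hard if unavailable.
    memory_store: Any
    crisis_log_backend: Any
    embedding_provider: Any
    # Assumed field: an LLM client that may be absent, so nodes can
    # guard tool calls on it.
    llm_client: Optional[Any] = None

# Tools, by contrast, are not fields here. They are imported at the call
# site, e.g.:
#   from agent.tools import find_local_crisis_resources
```

The dataclass makes the "always present" contract visible: constructing a context without a dependency is a hard error, whereas a tool simply does not run.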

Adding a new tool

  1. Create agent/tools/your_tool.py with the tool function. Accept AgentState and the injected deps it needs; return structured data. Never raise — return empty/default values on any failure and log a warning.
  2. Export it from agent/tools/__init__.py.
  3. Import and call it from the node that needs it — typically guarded by a condition like if llm_client is not None.
  4. Add unit tests for any parsing helpers. LLM-calling code paths belong in dogfood scripts, not the unit test suite.
  5. If the tool writes state, add the fields to ResponseState (or wherever they belong in AgentState) and update the relevant prompt builders to consume them.
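Steps 1–3 above can be sketched as a skeleton module. Every name below (your_tool, some_dep, fetch_something) is a placeholder; the point is the contract from step 1: never raise, return empty or default values on failure, log a warning.

```python
# agent/tools/your_tool.py: skeleton following the checklist above.
# All names are placeholders; only the never-raise contract is prescribed.
import logging

logger = logging.getLogger(__name__)

def your_tool(state, some_dep) -> list[dict]:
    """Return structured data, or [] on any failure."""
    try:
        raw = some_dep.fetch_something(state)  # placeholder external call
        # Normalize to structured data; discard anything malformed.
        return [row for row in raw if isinstance(row, dict)]
    except Exception as exc:
        logger.warning("your_tool failed, continuing without it: %s", exc)
        return []
```

The calling node then guards the invocation per step 3, e.g. only calling your_tool when the dependency it needs (such as llm_client) is not None.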
  • Agent Graph — where tools fire from
  • Nodes — crisis_response_node is the only node that invokes a tool today
  • Crisis Gate philosophy — why crisis-resource surfacing is worth an external call