NimbleAgents.jl: Build AI Agents in Pure Julia

A simple, lightweight framework for building AI agents with tool use, session management, and MCP support

Quick Example

```julia
using NimbleAgents

# Define a tool: @tool turns a plain function into a callable tool
# (generating the wrapper `add_tool` referenced below)
@tool function add(x::Int, y::Int)
    "Add two integers together."
    x + y
end

# Create an agent
agent = Agent(
    name = "MathBot",
    instructions = "You are a helpful math assistant.",
    tools = [add_tool],
)

# Run it
result = run!(agent, "What is 42 + 17?")
```

Why NimbleAgents?

NimbleAgents is designed to be simple and Julia-native:

  • No boilerplate — tools are just Julia functions with docstrings

  • Type-safe — leverage Julia's type system for tool schemas and output parsing

  • Lightweight — native OpenAI-compatible provider layer, minimal dependencies

  • Extensible — built-in support for MCP servers, skills, CLI tools, and custom hooks
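The type-safety point is concrete: because argument types live in the method signature, a tool schema can be derived by reflection alone, with no separate validation layer. A minimal sketch of the idea (this is not NimbleAgents' actual internals; `Base.method_argnames` is an unexported Base helper):

```julia
# Sketch: deriving a JSON-Schema-style type map from a plain Julia
# function signature using reflection.
add(x::Int, y::Int) = x + y

# Map Julia types to JSON Schema type names via dispatch
json_type(::Type{<:Integer}) = "integer"
json_type(::Type{<:AbstractFloat}) = "number"
json_type(::Type{<:AbstractString}) = "string"
json_type(::Type{Bool}) = "boolean"

function signature_schema(f)
    m = first(methods(f))
    names = Base.method_argnames(m)[2:end]  # drop the function's own slot
    types = m.sig.parameters[2:end]         # drop typeof(f)
    Dict(string(n) => json_type(t) for (n, t) in zip(names, types))
end

signature_schema(add)  # Dict("x" => "integer", "y" => "integer")
```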

Why Julia for agents?

There are plenty of agent frameworks in Python. Here's why building in Julia feels worth it:

  • Types without a validation layer — Julia's type system is enforced at runtime, so tool schemas and structured outputs fall out naturally from function signatures. No need for a separate validation library on top.

  • Multiple dispatch reduces boilerplate — tool dispatch, hook callbacks, and output parsing all feel like a natural fit for a language where behaviour is selected by argument types. What would be isinstance chains or class hierarchies in Python becomes a few method definitions.

  • @kwdef structs are concise — Agent, Guardrail, Session and friends get full keyword constructors for free. No dataclass decorator, no __init__ boilerplate.

  • Macros do the heavy lifting — @tool generates the JSON schema, wrapper struct, and callable from one annotated function. The equivalent in Python requires a decorator that inspects type hints at runtime, or writing the schema by hand.

  • No self everywhere — functions are not tied to classes, which removes a lot of visual noise from method signatures and bodies.

  • Straightforward parallelism — fan_out and spawn_subagents use real threads. For workloads that go beyond waiting on LLM responses, this can be useful.

  • REPL-first iteration — with Revise.jl, you can redefine tools and agent behaviour without restarting your process. The feedback loop for experimentation is tight.
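To make the multiple-dispatch point concrete, here is a sketch of output parsing where the target type selects the parser. `parse_output` is a hypothetical name, not part of the NimbleAgents API:

```julia
# Dispatch-based output parsing: the requested type picks the method,
# with no isinstance chains or class hierarchies.
parse_output(::Type{Int}, s::AbstractString) = parse(Int, s)
parse_output(::Type{Float64}, s::AbstractString) = parse(Float64, s)
parse_output(::Type{Vector{Int}}, s::AbstractString) =
    [parse(Int, t) for t in split(strip(s, ['[', ']', ' ']), ',')]

parse_output(Int, " 42 ")               # 42
parse_output(Vector{Int}, "[1, 2, 3]")  # [1, 2, 3]
```

Adding support for a new output type is one more method definition, not an edit to a central if/else chain.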
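The @kwdef point in action, using a toy struct that mirrors the shape of Agent (the field names here are illustrative, not the real Agent fields):

```julia
using Base: @kwdef

# @kwdef generates a full keyword constructor with defaults,
# with no hand-written inner constructor.
@kwdef struct ToyAgent
    name::String
    instructions::String = "You are a helpful assistant."
    max_turns::Int = 10
end

a = ToyAgent(name = "MathBot")
a.max_turns  # 10
```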
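A stripped-down sketch of the macro idea: defining the annotated function and registering it in a single step. The real @tool also emits the JSON schema and wrapper struct; `@register_tool` and `TOOL_REGISTRY` below are invented purely for illustration:

```julia
# Toy registry keyed by function name
const TOOL_REGISTRY = Dict{Symbol,Function}()

# Define the function and record it in TOOL_REGISTRY in one annotation
macro register_tool(fdef)
    fname = fdef.args[1].args[1]  # name from `function name(args...)`
    quote
        $(esc(fdef))
        TOOL_REGISTRY[$(QuoteNode(fname))] = $(esc(fname))
    end
end

@register_tool function double(x)
    2x
end

TOOL_REGISTRY[:double](21)  # 42
```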
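And the parallelism point in plain Julia: Threads.@spawn provides the real-thread fan-out that a helper like fan_out can build on. `fan_out_sketch` is illustrative, not the library function; start Julia with `-t auto` to get multiple threads:

```julia
using Base.Threads

# Spawn one task per input on the thread pool, then collect results in order
function fan_out_sketch(f, inputs)
    tasks = [Threads.@spawn f(x) for x in inputs]
    fetch.(tasks)
end

fan_out_sketch(x -> x^2, 1:4)  # [1, 4, 9, 16]
```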

That said, the Python ecosystem for LLM tooling is much larger and more mature. If your team is already in Python, or you need broad library coverage, that's a real consideration.

Ready to get started? Check out the Getting Started guide.