# How to Coordinate Tool Calls for Local LLMs in the Browser
Build a reactive state machine for the LLM tool call lifecycle: request → execute → result → continue. Observable, type-safe, and framework-agnostic.
## The Problem
LLM tool calling follows a strict lifecycle:
1. LLM generates a tool call request
2. App validates and executes the tool
3. Result feeds back to the LLM
4. LLM continues generation with the result
Today this is hand-wired everywhere: scattered `useState`, `switch` statements, and manual state transitions. There is no observability, no type safety, and error handling is an afterthought.
## The Solution
callbag-recharge's `stateMachine` util provides a typed FSM with observable state. `derived` creates reactive views. `effect` handles side effects. The entire lifecycle is inspectable.
```typescript
/**
 * Tool Call State Machine for Local LLMs
 *
 * Demonstrates: Reactive state machine for the LLM tool call lifecycle:
 * LLM requests tool → tool executes → result feeds back → LLM continues.
 * Uses stateMachine util + derived for a clean, observable flow.
 */
import { derived, effect } from "callbag-recharge";
import { stateMachine } from "callbag-recharge/utils/stateMachine";

// ── Types ────────────────────────────────────────────────────
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

interface ToolResult {
  name: string;
  result: unknown;
  durationMs: number;
}

interface ToolContext {
  call?: ToolCall;
  result?: ToolResult;
  error?: string;
  startedAt?: number;
}

type ToolState = "idle" | "pending" | "executing" | "completed" | "error";
type ToolEvent = "REQUEST" | "EXECUTE" | "COMPLETE" | "ERROR" | "RESET";

// ── State machine ────────────────────────────────────────────
const toolFSM = stateMachine<ToolContext, ToolState, ToolEvent>(
  {},
  {
    initial: "idle",
    states: {
      idle: {
        on: { REQUEST: "pending" },
      },
      pending: {
        on: {
          EXECUTE: {
            to: "executing",
            action: (ctx) => ({ ...ctx, startedAt: Date.now() }),
          },
          RESET: "idle",
        },
      },
      executing: {
        on: {
          COMPLETE: "completed",
          ERROR: "error",
        },
      },
      completed: {
        on: {
          REQUEST: "pending",
          RESET: {
            to: "idle",
            action: () => ({}),
          },
        },
      },
      error: {
        on: {
          REQUEST: "pending",
          RESET: {
            to: "idle",
            action: () => ({}),
          },
        },
      },
    },
  },
);

// ── Derived views ────────────────────────────────────────────
const isExecuting = derived([toolFSM.current], () => toolFSM.current.get() === "executing", {
  name: "isExecuting",
});

const lastResult = derived(
  [toolFSM.context],
  () => {
    const ctx = toolFSM.context.get();
    return ctx.result ?? null;
  },
  { name: "lastResult" },
);

// ── Tool registry ────────────────────────────────────────────
const tools: Record<string, (args: Record<string, unknown>) => Promise<unknown>> = {
  get_weather: async (args) => ({
    temp: 72,
    condition: "sunny",
    location: args.location,
  }),
  search_web: async (args) => ({
    results: [`Result for "${args.query}"`, "Another result"],
  }),
};

// ── Execute tool calls ───────────────────────────────────────
async function handleToolCall(call: ToolCall) {
  toolFSM.send("REQUEST", { call });
  console.log(`[PENDING] Tool call: ${call.name}(${JSON.stringify(call.args)})`);

  toolFSM.send("EXECUTE");
  console.log(`[EXECUTING] ${call.name}...`);

  const handler = tools[call.name];
  if (!handler) {
    toolFSM.send("ERROR", { error: `Unknown tool: ${call.name}` });
    return;
  }

  try {
    const startMs = Date.now();
    const result = await handler(call.args);
    const toolResult: ToolResult = {
      name: call.name,
      result,
      durationMs: Date.now() - startMs,
    };
    toolFSM.send("COMPLETE", { result: toolResult });
    console.log(`[COMPLETED] ${call.name} →`, result);
  } catch (e) {
    toolFSM.send("ERROR", { error: String(e) });
    console.log(`[ERROR] ${call.name}: ${e}`);
  }
}

// ── Simulate LLM requesting tool calls ───────────────────────
const dispose = effect([lastResult], () => {
  const result = lastResult.get();
  if (result) {
    console.log(`\nTool result ready to feed back to LLM:`, result);
  }
});

// LLM says: "I need to check the weather"
await handleToolCall({ name: "get_weather", args: { location: "San Francisco" } });

// LLM says: "Now search for restaurants"
await handleToolCall({ name: "search_web", args: { query: "best restaurants SF" } });

toolFSM.send("RESET");
console.log("\nFinal state:", toolFSM.current.get());

// Show the FSM graph
console.log("\n── State Machine Diagram ──");
console.log(toolFSM.toMermaid());

dispose();
```

## Why This Works
- **`stateMachine()` with typed transitions** — every state and event is typed. Invalid transitions are compile-time errors. The state machine enforces the lifecycle.
- **Observable state** — `toolFSM.store` is a reactive `Store`. Subscribe to status changes, derive metrics, or trigger effects.
- **`derived()` views** — `isExecuting` and `lastResult` are derived stores that update reactively. No manual synchronization.
- **`effect()` side effects** — feed tool results back to the LLM, update UI, or log events — all reactive.
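To make the "typed transitions" claim concrete, here is a tiny self-contained transition table in the same spirit (a sketch, not callbag-recharge's actual implementation): a misspelled state or event in the table is a compile-time error, and a (state, event) pair absent from the table is rejected at runtime as a no-op:

```typescript
// Minimal typed transition table in the spirit of stateMachine().
// A sketch only: the real library adds context, actions, and observable stores.
type State = "idle" | "pending" | "executing" | "completed" | "error";
type Event = "REQUEST" | "EXECUTE" | "COMPLETE" | "ERROR" | "RESET";

// Partial table: only legal (state, event) pairs are present, and the mapped
// type rejects any key that is not a declared State or Event.
const transitions: { [S in State]?: { [E in Event]?: State } } = {
  idle: { REQUEST: "pending" },
  pending: { EXECUTE: "executing", RESET: "idle" },
  executing: { COMPLETE: "completed", ERROR: "error" },
  completed: { REQUEST: "pending", RESET: "idle" },
  error: { REQUEST: "pending", RESET: "idle" },
};

function transition(state: State, event: Event): State {
  // Pairs missing from the table fall through to the current state (a no-op)
  // instead of silently corrupting the lifecycle.
  return transitions[state]?.[event] ?? state;
}

// A legal path follows the lifecycle...
let s: State = transition("idle", "REQUEST"); // "pending"
s = transition(s, "EXECUTE"); // "executing"
// ...while an illegal event leaves the state untouched.
s = transition(s, "REQUEST"); // still "executing"
```

The library's `stateMachine` builds on this idea, layering context updates, transition actions, and observable stores over the table.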
## Multi-Tool Parallel Execution
When the LLM requests multiple tools at once:
```typescript
import { reactiveMap } from "callbag-recharge/data";

// Per-call state for parallel execution (the single-call ToolState string
// union is too narrow here, since each entry carries its call and result)
interface ParallelToolState {
  status: "executing" | "completed";
  call: ToolCall;
  result?: unknown;
  startedAt: number;
}

// Track multiple tool calls in parallel
const toolCalls = reactiveMap<string, ParallelToolState>();

async function handleParallelCalls(calls: ToolCall[]) {
  // Start all tools
  for (const call of calls) {
    toolCalls.set(call.name, { status: "executing", call, startedAt: Date.now() });
  }

  // Execute in parallel
  const results = await Promise.allSettled(
    calls.map(async (call) => {
      const result = await tools[call.name](call.args);
      toolCalls.update(call.name, (s) => ({ ...s, status: "completed", result }));
      return { name: call.name, result };
    }),
  );

  // All tools resolved — reactive sizeStore tracks completion
  return results;
}

// Derived: are all tools done?
const allCompleted = derived([toolCalls.sizeStore], () => {
  let done = true;
  toolCalls.forEach((state) => {
    if (state.status === "executing") done = false;
  });
  return done;
});
```

## Tool Call with Timeout
Wrap tool execution with `withTimeout`:
```typescript
import { withTimeout } from "callbag-recharge/orchestrate";
// `pipe`, `producer`, and `rescue` are assumed to live in the core package;
// `toolExecution` is a source stream of ToolResult values, defined elsewhere.
import { pipe, producer, rescue } from "callbag-recharge";

const timedTool = pipe(
  toolExecution,
  withTimeout(5000), // 5 second timeout
  rescue(() =>
    producer(({ emit, complete }) => {
      // Substitute a synthetic result so the LLM still gets a reply
      emit({ name: "timeout", result: "Tool execution timed out", durationMs: 5000 });
      complete();
    }),
  ),
);
```

## Agentic Loop Pattern
Chain tool calls in an observe-plan-act loop:
```typescript
// `state` and `dynamicDerived` are assumed to come from the core package;
// `environmentState` and `llmPlan` are app-level stores defined elsewhere.
const agentPhase = state<"observe" | "plan" | "act">("observe");

const agentLoop = dynamicDerived((get) => {
  const phase = get(agentPhase);
  switch (phase) {
    case "observe":
      return get(environmentState);
    case "plan":
      return get(llmPlan);
    case "act":
      return get(toolFSM.store);
  }
});

effect([agentLoop], () => {
  const phase = agentPhase.get();
  const result = agentLoop.get();
  if (phase === "act" && result.status === "completed") {
    agentPhase.set("observe"); // cycle back
  }
});
```

## See Also
- On-Device LLM Streaming — manage local model token streams
- Hybrid Cloud+Edge Routing — route between local and cloud models
- AI Chat with Streaming — streaming chat pattern