Sharing Tool Calls with ToolLoopAgent and DurableAgents: A Deep Dive into Edge-Compatible AI Workflows

Published: January 14, 2026

Building AI agents that can execute tools reliably across different environments is one of the most challenging aspects of modern AI development. Whether you're running real-time interactions on the edge or processing compute-intensive tasks on traditional servers, you need a consistent way to share and execute tool calls.

Today, I'll walk you through a sophisticated system I've built that enables seamless tool sharing between Workflow.dev DurableAgents and AI SDK ToolLoopAgents. This approach solves the fundamental challenge of tool call orchestration at scale.

What is Tool Call Sharing?

Before diving into the implementation, let's clarify what we mean by "sharing tool calls." In AI agent systems, tools are functions that agents can invoke to perform actions—like reading files, making API calls, or editing code. Tool call sharing means:

  1. Common Interface: Both agents use the same tool definitions and schemas
  2. Consistent Execution: Tools behave identically regardless of which agent calls them
  3. Shared Context: Tools access the same data sources and maintain state consistency
  4. Event Coordination: Tool executions trigger events that both agents can respond to
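As a minimal sketch of points 1 and 2, here is one tool definition consumed by two thin agent wrappers. The names are illustrative, and a hand-rolled validate() stands in for the Zod schemas used later in the post:

```typescript
// One shared tool definition; only the per-agent wrapper differs.
// validate() is a stand-in for Zod parsing; all names are illustrative.
type GreetArgs = { name: string };

const greetTool = {
  description: "Greet a user by name",
  validate(raw: unknown): GreetArgs {
    const args = raw as Partial<GreetArgs> | null;
    if (typeof args?.name !== "string") throw new Error("Invalid args: name must be a string");
    return { name: args.name };
  },
  execute: (args: GreetArgs) => `Hello, ${args.name}!`,
};

// Both "agents" call the SAME definition, so behavior is identical (point 2).
const runOnEdge = (raw: unknown) => greetTool.execute(greetTool.validate(raw));
const runOnServer = (raw: unknown) => greetTool.execute(greetTool.validate(raw));
```

Because both wrappers delegate to the same object, a schema or behavior change in the tool propagates to every agent automatically.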

The Core Challenge

When building AI agents that perform complex tasks—code editing, file management, repository manipulation—you face a critical decision: where should these agents run?

Edge Runtime Benefits:

  • Ultra-low latency for user interactions
  • Global distribution and scalability
  • Cost-effective for short-lived operations

Edge Runtime Limitations:

  • Package compatibility restrictions
  • No direct file system access
  • Memory and execution time constraints

Traditional Server Benefits:

  • Full Node.js ecosystem access
  • Unlimited processing time
  • Direct system resource access

My solution bridges this gap with a dual-agent architecture that shares tools across both environments:

  • DurableAgent: Handles real-time user interactions on the edge
  • BackgroundAgent: Processes compute-intensive tasks on traditional servers
  • Shared Tool Library: Provides consistent tool execution across both agents

Building the Foundation: Tool Design Principles

Before implementing agents, let's understand how to create tools that work seamlessly across different execution environments. I'll use viewFile as a concrete example to demonstrate the complete pattern.

Step 1: Think Types First - Define Inputs and Outputs

The foundation of shareable tools is thinking clearly about what data flows in and out. Start by defining strong TypeScript types that capture your tool's contract:

// Start by defining what your tool context needs
type ToolContext = {
  userId: string;
  owner: string;
  repo: string;
  branch: string;
  token: string;
  pendingChanges: Map<string, FileChange>; // RedisMap for DurableAgent, Map for BackgroundAgent
};

// Define exactly what your tool will return
type ViewFileResult = {
  success: boolean;
  path: string;
  content: string;
  lineRange?: { start: number; end: number };
  totalLines: number;
  source: "pending_changes" | "github";
  error?: string;
};

Why Types First?

  • Clarity: Forces you to think through the tool's purpose and behavior
  • Consistency: Ensures both agents handle the same data structures
  • Safety: Prevents runtime errors from mismatched expectations
  • Documentation: Types serve as living documentation for other developers

Step 2: Create Schema Validation with Zod

Once you know your output types, create runtime validation for inputs and let Zod infer your input types:

// web/ai/tools/viewFile.ts
import { z } from "zod";

export const viewFileConfig = {
  description: "View the contents of a file from the GitHub repository. Optionally specify line ranges to view only specific sections.",
  inputSchema: z.object({
    path: z.string().describe("The file path relative to the repository root"),
    startLine: z.number().int().positive().optional().describe("Starting line number (1-indexed, inclusive)"),
    endLine: z.number().int().positive().optional().describe("Ending line number (1-indexed, inclusive)"),
  }),
};

// Let Zod automatically generate your input types - no manual type definitions!
export type ViewFileArgs = z.infer<typeof viewFileConfig.inputSchema>;

Key Benefits of z.infer:

  • Single Source of Truth: Schema and types are always in sync
  • Automatic Updates: Types update automatically when schema changes
  • No Type Drift: Impossible for validation and types to get out of sync
  • Less Code: No need to manually maintain duplicate type definitions

Step 3: Complete Tool Implementation with Proper Types

Now implement the tool using your well-defined types. Here's a complete example following the z.infer pattern:

// web/ai/tools/viewFile.ts
import { GithubClient } from "@/server/GithubClient";
import { z } from "zod";
// FileChange and normalizePath come from shared types/utils modules (imports omitted for brevity)

type ToolContext = {
  userId: string;
  owner: string;
  repo: string;
  branch: string;
  token: string;
  pendingChanges: Map<string, FileChange>; // RedisMap for DurableAgent, Map for BackgroundAgent
};

export const viewFileConfig = {
  description: "View the contents of a file from the GitHub repository. Optionally specify line ranges to view only specific sections.",
  inputSchema: z.object({
    path: z.string().describe("The file path relative to the repository root"),
    startLine: z.number().int().positive().optional().describe("Starting line number (1-indexed, inclusive)"),
    endLine: z.number().int().positive().optional().describe("Ending line number (1-indexed, inclusive)"),
  }),
};

// Let Zod automatically generate your input types
export type ViewFileArgs = z.infer<typeof viewFileConfig.inputSchema>;

type ViewFileResult = {
  success: boolean;
  path: string;
  content: string;
  lineRange?: { start: number; end: number };
  totalLines: number;
  source: "pending_changes" | "github";
};

type ToolError = {
  success: false;
  error: string;
};

type ToolResult = ViewFileResult | ToolError;

const viewFile = async (
  context: ToolContext,
  args: ViewFileArgs,
  onError?: (args: ViewFileArgs, error: ToolError) => void,
  onSuccess?: (args: ViewFileArgs, result: ViewFileResult) => void
): Promise<ToolResult> => {
  const { path, startLine, endLine } = args;
  const githubClient = new GithubClient(context.userId, context.owner, context.repo, context.token);

  try {
    const normalizedPath = normalizePath(path);

    // Check for pending changes first (transaction-like behavior)
    const pending = context.pendingChanges.get(normalizedPath);
    let content: string;

    if (pending && pending.operation !== "delete") {
      content = pending.content;
    } else {
      content = await githubClient.getFileContentDecoded(normalizedPath, context.branch);
    }

    // Handle line range extraction
    if (startLine !== undefined || endLine !== undefined) {
      const lines = content.split("\n");
      const start = startLine ? Math.max(1, startLine) - 1 : 0;
      const end = endLine ? Math.min(lines.length, endLine) : lines.length;

      const selectedLines = lines.slice(start, end);
      const result: ViewFileResult = {
        success: true,
        path: normalizedPath,
        content: selectedLines.join("\n"),
        lineRange: { start: start + 1, end: Math.min(end, lines.length) },
        totalLines: lines.length,
        source: pending ? "pending_changes" : "github",
      };

      onSuccess?.(args, result);
      return result;
    }

    // Return full file content
    const result: ViewFileResult = {
      success: true,
      path: normalizedPath,
      content,
      totalLines: content.split("\n").length,
      source: pending ? "pending_changes" : "github",
    };

    onSuccess?.(args, result);
    return result;
  } catch (error) {
    const result: ToolError = {
      success: false,
      error: error instanceof Error ? error.message : "Failed to read file",
    };
    onError?.(args, result);
    return result;
  }
};

export default viewFile;

Key Design Patterns:

  1. Context Object: Provides all necessary data and state
  2. Pending Changes: Implements transaction-like behavior for file operations
  3. Error Callbacks: Enable monitoring and event-driven responses
  4. Flexible Parameters: Support both full file and line-range operations
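The line-range handling above is easy to verify in isolation. Here is the same 1-indexed, inclusive logic pulled out as a pure helper (extractLineRange is a hypothetical name, not part of the actual tool):

```typescript
// The line-range logic from viewFile as a pure, testable helper.
// startLine/endLine are 1-indexed and inclusive, matching the tool schema.
type LineRange = {
  content: string;
  lineRange: { start: number; end: number };
  totalLines: number;
};

function extractLineRange(content: string, startLine?: number, endLine?: number): LineRange {
  const lines = content.split("\n");
  const start = startLine ? Math.max(1, startLine) - 1 : 0; // convert to 0-indexed
  const end = endLine ? Math.min(lines.length, endLine) : lines.length; // clamp to file length
  return {
    content: lines.slice(start, end).join("\n"),
    lineRange: { start: start + 1, end: Math.min(end, lines.length) },
    totalLines: lines.length,
  };
}
```

Keeping this logic pure makes it trivial to unit-test without mocking GitHub or the pending-changes map.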

Agent Implementation: DurableAgent for Edge Runtime

Now that we have our shared tool, let's see how to use it in a DurableAgent that runs on the edge.

Understanding DurableAgents and Edge Constraints

DurableAgents run on edge runtimes, bringing incredible performance benefits but with important limitations:

Edge Runtime Restrictions:

  • Not all npm packages are compatible
  • No direct file system access
  • Stricter memory and execution time limits
  • No Node.js-specific APIs (process, fs, etc.)

Workflow.dev's Solution:

  • Durable state management survives cold starts
  • Automatic retry and recovery mechanisms
  • Step-based execution with persistence checkpoints

The Critical "use step" Directive

The "use step" directive is the foundation of DurableAgent reliability. It creates persistence checkpoints that survive edge function restarts:

// ✅ CORRECT: Simple step function
async function viewFileStep(context: ToolContext, args: ViewFileArgs) {
  "use step"; // Critical: Marks function as a durable step
  return await viewFile(context, args);
}

// ✅ CORRECT: Step with state management
async function editFileStep(context: ToolContext, args: EditFileArgs) {
  "use step";
  try {
    const result = await editFile(context, args);

    // Update pending changes for transaction-like behavior
    if (result.success) {
      context.pendingChanges.set(args.path, {
        path: args.path,
        content: args.content,
        operation: args.operation,
      });
    }

    return result;
  } catch (error) {
    console.error(`Edit file step failed:`, error);
    throw error;
  }
}

// ❌ WRONG: Don't use "use step" in conditional blocks
async function badStepFunction(context: ToolContext, args: any) {
  if (someCondition) {
    ("use step"); // This won't work!
    return await someFunction();
  }
}

What "use step" Does:

  1. Persistence: Function state survives edge runtime restarts
  2. Atomicity: Step completes fully or can be retried from the beginning
  3. Recovery: Automatic recovery from failures or cold starts
  4. State Tracking: Workflow.dev tracks execution state across invocations
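To build intuition for the replay semantics, you can model checkpointing as memoization keyed by step. This is a toy model only: the real persistence is handled by Workflow.dev, and runStep/checkpoints are invented names:

```typescript
// Toy model of step checkpointing: a completed step's result is persisted
// (here, an in-memory Map), so a retried workflow replays the result instead
// of re-executing the step. This illustrates the idea; it is NOT the runtime.
const checkpoints = new Map<string, unknown>();

async function runStep<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (checkpoints.has(key)) {
    return checkpoints.get(key) as T; // replayed from the checkpoint
  }
  const result = await fn();
  checkpoints.set(key, result); // persisted before the workflow moves on
  return result;
}
```

Running the same step key twice executes the underlying function only once, which is exactly why side effects belong inside steps.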

Complete DurableAgent Implementation

Here's how to wire everything together:

// web/ai/agents/DurableAgent/DurableAgent.ts
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import type { UIMessageChunk } from "ai";

export async function chatWorkflow(args: any) {
  "use workflow";
  const writable = getWritable<UIMessageChunk>();
  // Redis map abstraction for state persistence across edge function restarts
  const pendingChanges = new RedisMap<string, FileChange>();

  const toolContext = {
    userId: args.user.uid,
    owner: args.owner,
    repo: args.repo,
    branch: args.branch,
    token: args.gitToken,
    pendingChanges,
  };

  const tools = createTools(toolContext);
  const agent = new DurableAgent({
    model: args.model.id,
    system: "You are a coding agent that can view, edit, and manage files.",
    tools: tools,
  });

  await agent.stream({
    messages: args.messages,
    writable,
    prepareStep: async (stepArgs) => ({
      model: stepArgs.model as any,
      system: "You are a coding agent.",
    }),
  });
}

Edge-Safe Tool Wrappers

Create the tool wrappers that bridge your shared tools with DurableAgent requirements:

// web/ai/agents/DurableAgent/tools.ts
async function viewFileStep(context: ToolContext, args: ViewFileArgs) {
  "use step"; // Critical for workflow persistence
  return await viewFile(context, args);
}

async function editFileStep(context: ToolContext, args: EditFileArgs) {
  "use step";
  const result = await editFile(context, args);

  // Update pending changes in context
  if (result.success) {
    context.pendingChanges.set(args.path, {
      path: args.path,
      content: args.content,
      operation: args.operation,
    });
  }

  return result;
}

const createTools = (context: ToolContext) => {
  return {
    viewFile: {
      ...viewFileConfig, // Shared configuration
      execute: async (args) => await viewFileStep(context, args),
    },
    editFile: {
      ...editFileConfig,
      execute: async (args) => await editFileStep(context, args),
    },
    // ... other tools
  };
};

Agent Implementation: BackgroundAgent with Full Node.js Power

For compute-intensive tasks that require full Node.js compatibility, the BackgroundAgent uses ToolLoopAgent without edge runtime constraints.

Why BackgroundAgent?

While DurableAgents excel at real-time interactions, some scenarios require the full power of a traditional server environment:

  • Complex Code Analysis: Deep AST parsing and transformation
  • Large File Processing: Handling repositories with thousands of files
  • Long-Running Operations: Multi-step workflows that exceed edge timeouts
  • Node.js Dependencies: Using packages incompatible with edge runtimes

ToolLoopAgent Implementation

// web/ai/agents/BackgroundAgent/BackgroundAgent.ts
import { ToolLoopAgent, stepCountIs } from "ai";

export class BackgroundAgentRunner {
  private toolCallContext: ToolContext;

  constructor(context: ToolContext) {
    this.toolCallContext = context;
    // BackgroundAgent uses regular Map since it runs in persistent server environment
    this.toolCallContext.pendingChanges = new Map<string, FileChange>();
  }

  async run(config: BackgroundAgentConfig): Promise<BackgroundAgentResult> {
    const agent = new ToolLoopAgent({
      model: config.model.id,
      tools: this.createTools(config),
      stopWhen: stepCountIs(50), // bound the tool loop with the imported helper
      onStepFinish: async (step) => {
        // Real-time progress tracking
        await this.updateProgress(step);

        // Cost and usage monitoring
        const usage = {
          promptTokens: step.usage.promptTokens,
          completionTokens: step.usage.completionTokens,
          totalTokens: step.usage.totalTokens,
        };

        await this.logStepUsage(step.stepNumber, usage);
      },
    });

    const result = await agent.generate({ prompt: config.prompt });

    return this.processResult(result);
  }

  private async updateProgress(step: any) {
    const progress = Math.round((step.stepNumber / 50) * 100);
    await sendProgressUpdate(this.toolCallContext.userId, {
      progress,
      currentStep: step.stepNumber,
      toolCalls: step.toolCalls?.map((tc) => tc.toolName) || [],
    });
  }
}

Enhanced Tool Integration for BackgroundAgent

BackgroundAgent can leverage the full callback system for comprehensive monitoring:

// web/ai/agents/BackgroundAgent/tools.ts
const createTools = (context: ToolContext) => {
  return {
    viewFile: tool({
      ...viewFileConfig, // Same shared configuration
      execute: async ({ path, startLine, endLine }: ViewFileArgs) => {
        const startTime = Date.now();
        const args = { path, startLine, endLine };

        // Send real-time update to user
        await sendToolUpdate("viewFile", args);

        const result = await viewFile(
          context,
          args,
          // Error callback
          async (input, error) => {
            await saveToolResult("viewFile", input, error, startTime);
            await notifyToolFailure("viewFile", error);
          },
          // Success callback
          async (input, result) => {
            const duration = Date.now() - startTime;
            await saveToolResult("viewFile", input, result, startTime);
            await logToolMetrics("viewFile", duration, result.totalLines);
          }
        );

        return result;
      },
    }),

    editFile: tool({
      ...editFileConfig,
      execute: async (args: EditFileArgs) => {
        const result = await editFile(context, args);

        // Update pending changes (same transaction logic as DurableAgent)
        if (result.success) {
          context.pendingChanges.set(args.path, {
            path: args.path,
            content: args.content,
            operation: args.operation,
          });
        }

        return result;
      },
    }),
    // ... other tools
  };
};

The Power of Shared Context

Both agents use the same ToolContext structure, ensuring consistent behavior:

type ToolContext = {
  userId: string;
  owner: string;
  repo: string;
  branch: string;
  token: string;
  pendingChanges: Map<string, FileChange>; // RedisMap for edge, Map for server
};

This shared context enables:

  1. Consistent Data Access: Both agents see the same repository state
  2. Transaction Behavior: Pending changes are visible across tool calls
  3. State Coordination: Changes made by one agent are visible to the other
  4. Event Integration: Both agents can trigger the same monitoring events
  5. Environment-Appropriate Storage: RedisMap for edge persistence, Map for server memory
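One way to make point 5 concrete: both agents can depend on a narrow map interface, so a Redis-backed store (mocked here in memory) and a plain Map are drop-in replacements for each other. PendingStore and MockRedisMap are illustrative names, not the real RedisMap:

```typescript
// The minimal map surface the tools actually rely on. Anything that
// implements get/set can back pendingChanges in either environment.
interface PendingStore<K, V> {
  get(key: K): V | undefined;
  set(key: K, value: V): unknown;
}

// Hypothetical in-memory stand-in for the Redis-backed edge store.
class MockRedisMap<K, V> implements PendingStore<K, V> {
  private store = new Map<K, V>();
  get(key: K) { return this.store.get(key); }
  set(key: K, value: V) { this.store.set(key, value); return this; }
}

// Tool code written against the interface works with either backing store.
function recordChange(store: PendingStore<string, string>, path: string, content: string) {
  store.set(path, content);
  return store.get(path);
}
```

Because `Map` already satisfies this interface structurally, the server-side BackgroundAgent needs no adapter at all.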

Project Architecture: Organizing for Maximum Reuse

Here's how I've structured the codebase to maximize tool sharing while respecting environment constraints:

web/ai/
├── tools/                     # Shared tool library
│   ├── viewFile.ts           # Core tool implementations
│   ├── editFile.ts           # with schema + logic
│   ├── listDirectory.ts
│   └── createSandbox.ts
├── agents/
│   ├── DurableAgent/         # Edge-compatible wrappers
│   │   ├── DurableAgent.ts   # Main workflow orchestrator
│   │   └── tools.ts          # Step-wrapped tool integration
│   └── BackgroundAgent/      # Full Node.js implementations
│       ├── BackgroundAgent.ts # ToolLoopAgent runner
│       └── tools.ts          # Enhanced tool integration
└── types/
    ├── ToolContext.ts        # Shared context definitions
    └── ToolResults.ts        # Common result types

Key Architecture Principles:

  1. Separation of Concerns: Core logic lives in /tools, agent-specific adaptations in /agents
  2. Environment Awareness: Different wrappers handle edge vs. server constraints
  3. Shared Types: Common interfaces ensure consistency across agents
  4. Event Integration: Callback system enables monitoring and coordination

Event-Driven Coordination: The Power of Callbacks

One of the most powerful aspects of this shared tool architecture is the comprehensive event system that enables real-time monitoring and coordination between agents.

Real-Time Progress Updates

Every tool execution can trigger live updates to keep users informed:

const toolContext = {
  // ... other context properties
  sendToolUpdate: async (toolName: string, args: any) => {
    await sendLiveActivity(userId, {
      type: "tool_start",
      tool: toolName,
      args: sanitizeArgs(args), // Remove sensitive data
      timestamp: Date.now(),
    });
  },

  saveToolResult: async (toolName: string, args: any, result: any, startTime: number) => {
    const duration = Date.now() - startTime;
    await logToolExecution(toolName, args, result, duration);

    // Send completion notification
    await sendLiveActivity(userId, {
      type: "tool_complete",
      tool: toolName,
      success: result.success,
      duration,
    });
  },
};
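The sanitizeArgs call above is not shown in the snippets; one plausible implementation redacts values whose keys look sensitive before the event leaves the server (the key list here is an assumption):

```typescript
// Possible sanitizeArgs: redact values for sensitive-looking keys before
// emitting them in a live-activity event. The key list is an assumption.
const SENSITIVE_KEYS = ["token", "password", "secret", "apikey"];

function sanitizeArgs(args: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(args)) {
    const lower = key.toLowerCase();
    out[key] = SENSITIVE_KEYS.some((s) => lower.includes(s)) ? "[REDACTED]" : value;
  }
  return out;
}
```

A shallow pass like this is enough for flat tool args; nested payloads would need a recursive variant.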

Usage and Cost Tracking

Both agents can contribute to comprehensive usage analytics:

// In BackgroundAgent
onStepFinish: async (step) => {
  // Accumulate token usage
  totalTokens.input += step.usage.promptTokens;
  totalTokens.output += step.usage.completionTokens;

  // Calculate costs (example rates)
  const inputCost = (step.usage.promptTokens / 1000) * 0.003;
  const outputCost = (step.usage.completionTokens / 1000) * 0.015;

  await updateJobMetrics(jobId, {
    step: step.stepNumber,
    toolsUsed: step.toolCalls?.map((tc) => tc.toolName) || [],
    tokenUsage: step.usage,
    estimatedCost: inputCost + outputCost,
    progress: calculateProgress(step.stepNumber, maxSteps),
  });
},

// In DurableAgent (simpler tracking due to edge constraints)
const trackToolUsage = async (toolName: string, result: any) => {
  "use step";
  await incrementToolCounter(toolName, result.success ? "success" : "error");
};
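The per-step arithmetic above is simple enough to pull into a pure helper. The rates are the example rates from the snippet, not real provider pricing, and estimateStepCost is a hypothetical name:

```typescript
// Pure helper for the cost arithmetic shown above. Rates are the post's
// example rates ($0.003 / $0.015 per 1K tokens), NOT real provider pricing.
function estimateStepCost(promptTokens: number, completionTokens: number): number {
  const inputCost = (promptTokens / 1000) * 0.003;
  const outputCost = (completionTokens / 1000) * 0.015;
  return inputCost + outputCost;
}
```

Centralizing this in one function keeps the edge and server trackers from drifting when rates change.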

Transaction-Like State Management

The shared pendingChanges map provides transaction-like behavior across both agents:

// When DurableAgent makes a change
async function editFileStep(context: ToolContext, args: EditFileArgs) {
  "use step";
  const result = await editFile(context, args);

  if (result.success) {
    // Changes become visible to all subsequent tool calls
    context.pendingChanges.set(args.path, {
      path: args.path,
      content: args.content,
      operation: args.operation,
      timestamp: Date.now(),
      agent: "DurableAgent",
    });

    await logStateChange("file_edited", args.path);
  }

  return result;
}

// When BackgroundAgent reads the same file
async function viewFile(context: ToolContext, args: ViewFileArgs) {
  // Automatically checks pending changes first
  const pending = context.pendingChanges.get(args.path);
  if (pending && pending.operation !== "delete") {
    return {
      success: true,
      path: args.path,
      content: pending.content,
      source: "pending_changes", // BackgroundAgent sees DurableAgent's changes
      modifiedBy: pending.agent,
      modifiedAt: pending.timestamp,
    };
  }

  // Fall back to GitHub API if no pending changes
  return await fetchFromGitHub(context, args);
}

Edge Runtime Compatibility Guide

Success with DurableAgents requires understanding edge runtime constraints and choosing compatible packages.

Key Limitations

  • Package Compatibility: many Node.js packages fail. Workaround: audit dependencies and use edge-compatible alternatives.
  • File System Access: can't use fs or local files. Workaround: use APIs for all I/O operations.
  • Process Management: no child_process or clustering. Workaround: design stateless, single-threaded operations.
  • Memory Constraints: stricter limits than servers. Workaround: optimize memory usage and avoid large data structures.
  • Cold Start Latency: initialization delays. Workaround: minimize import overhead and lazy-load when possible.

Edge-Compatible Package Recommendations

✅ Works Great:

  • zod - Schema validation
  • ai SDK core functions
  • @octokit/rest - GitHub API client
  • date-fns - Date utilities
  • Most HTTP client libraries

❌ Avoid on Edge:

  • fs-extra - File system operations
  • child_process - Process spawning
  • sharp - Image processing (use cloud services)
  • Large AI model libraries (use API calls instead)

Testing Strategy: Always test your tool functions in an edge environment before deploying. Create a simple test harness:

// Test your tools in edge-like environment
const testEdgeCompatibility = async () => {
  try {
    const result = await viewFile(mockContext, { path: "README.md" });
    console.log("✅ Tool works on edge:", result.success);
  } catch (error) {
    console.error("❌ Edge incompatibility:", error instanceof Error ? error.message : error);
  }
};

Looking Forward: Model Context Protocol (MCP) Integration

The architecture I've described here naturally aligns with the emerging Model Context Protocol (MCP) standard. MCP could provide a standardized way to share tools across different AI systems.

Potential MCP Benefits

Tool Standardization:

// Future: MCP-compatible tool definition
export const viewFileMCP: MCPTool = {
  name: "viewFile",
  description: viewFileConfig.description,
  inputSchema: viewFileConfig.inputSchema,
  implementation: viewFile,
  callbacks: {
    onStart: (args) => sendToolUpdate("viewFile", args),
    onComplete: (args, result) => saveToolResult("viewFile", args, result),
    onError: (args, error) => logToolError("viewFile", args, error),
  },
};

Cross-System Compatibility: MCP could enable tools built for DurableAgents to work seamlessly with other AI systems, creating a true ecosystem of reusable AI tools.

Event Standardization: The callback patterns I've implemented could become part of the MCP specification, standardizing how AI systems coordinate tool execution events.

Community Question: Have you experimented with MCP for tool sharing? I'm particularly interested in how MCP could standardize the callback mechanisms that make event-driven AI workflows possible.

Why This Architecture Works: Key Benefits

🚀 Performance & Scalability

  • Edge Execution: DurableAgents deliver ultra-low latency for user interactions
  • Automatic Scaling: Both agents scale independently based on workload
  • Global Distribution: Edge deployment reduces latency worldwide

🛡️ Reliability & Recovery

  • Durable State: Workflow.dev handles state persistence automatically
  • Automatic Retry: Failed steps retry without losing progress
  • Cold Start Recovery: Edge functions resume exactly where they left off

🔍 Complete Observability

  • Real-time Monitoring: See tool execution as it happens
  • Cost Tracking: Monitor token usage and compute costs per operation
  • Performance Metrics: Track tool execution times and success rates
  • Event-driven Insights: Rich callback system enables deep analytics

💰 Cost Optimization

  • Pay-per-execution: No idle server costs
  • Resource Right-sizing: Edge for quick tasks, servers for heavy compute
  • Token Efficiency: Shared tools reduce redundant API calls

🔧 Developer Experience

  • Code Reuse: Write tools once, use in both agents
  • Type Safety: Strong TypeScript types prevent runtime errors
  • Environment Awareness: Automatic adaptation to edge vs. server constraints
  • Event Integration: Built-in monitoring and coordination

Conclusion: Building the Future of AI Tool Orchestration

The dual-agent architecture with shared tool calls represents a significant step forward in AI system design. By combining the real-time responsiveness of edge-deployed DurableAgents with the computational power of server-based BackgroundAgents, we can build AI systems that are both fast and capable.

Key Takeaways

  1. Tool Sharing is Essential: A well-designed shared tool library eliminates duplication and ensures consistency across different execution environments.

  2. Environment-Aware Design: Different runtimes require different approaches, but the core tool logic can remain identical.

  3. Event-Driven Coordination: Comprehensive callback systems enable monitoring, cost tracking, and real-time user feedback.

  4. Transaction-Like Behavior: Shared state through pendingChanges provides consistency across distributed tool execution.

  5. Future-Proof Architecture: This design naturally aligns with emerging standards like MCP.

Implementation Guidelines

If you're building AI agent systems, consider:

  • Starting with a shared tool library design
  • Implementing comprehensive event tracking from day one
  • Testing edge compatibility early and often
  • Building monitoring and cost tracking into your architecture

Next Steps

The patterns shown here can be adapted to your specific use case. The key is starting with well-defined tool interfaces and building consistent wrappers for different execution environments.


What's your experience with tool sharing across AI agents? Have you built similar architectures or experimented with MCP? I'd love to hear about your approaches, challenges, and lessons learned in the comments below.

Tags: #AI #ToolLoopAgent #DurableAgents #WorkflowDev #AISDK #EdgeComputing #MCP #AIArchitecture #DistributedSystems