AI SDK v5 Migration

I'm sharing my experience migrating to the new AI SDK v5 in hopes of making your migration easier. It's a lot to digest, so I'm covering the changes at a high level.

Official Migration Guide

1. Push your latest changes and start a new branch in git:

git checkout -b ai-sdk-v5-migration

Models I am using: you will need to install the beta for every package.

npm install ai@beta (core)
npm install @ai-sdk/react@beta (for the useChat hook)
npm install @ai-sdk/anthropic@beta
npm install @ai-sdk/deepseek@beta
npm install @ai-sdk/google@beta
npm install @ai-sdk/groq@beta
npm install @ai-sdk/openai@beta
npm install @ai-sdk/vercel@beta
npm install @ai-sdk/xai@beta

Backend Highlights

In streamText (everyone's favorite)

  • maxTokens is now ✅ maxOutputTokens
  • sendReasoning is removed
  • toolCallStreaming is removed
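To make the renames concrete, here's a hypothetical helper (my own sketch, not part of the SDK) that maps old v4 `streamText` option names onto their v5 equivalents: `maxTokens` becomes `maxOutputTokens`, while `sendReasoning` and `toolCallStreaming` are simply dropped:

```typescript
// Hypothetical migration helper -- not part of the AI SDK.
// Renames maxTokens to maxOutputTokens and drops removed v4 options.
type V4Options = {
  maxTokens?: number;
  sendReasoning?: boolean;      // removed in v5
  toolCallStreaming?: boolean;  // removed in v5
  [key: string]: unknown;
};

function migrateStreamTextOptions(old: V4Options): Record<string, unknown> {
  const { maxTokens, sendReasoning, toolCallStreaming, ...rest } = old;
  return {
    ...rest,
    // maxTokens is now maxOutputTokens
    ...(maxTokens !== undefined ? { maxOutputTokens: maxTokens } : {}),
  };
}
```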

Usage Tracking

When retrieving usage, you'll now need the following object shape:

  • promptTokens is now ✅ inputTokens
  • completionTokens is now ✅ outputTokens
  • ➕ There is now reasoningTokens

interface Usage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  reasoningTokens: number;
}
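If you persist usage records, a small adapter (my own sketch, assuming you only stored the three v4 fields) can translate rows saved under the old names into the v5 shape:

```typescript
// Hypothetical adapter -- translates usage objects stored under the
// old v4 field names into the new v5 shape.
type V4Usage = { promptTokens: number; completionTokens: number; totalTokens: number };
type V5Usage = { inputTokens: number; outputTokens: number; totalTokens: number };

function toV5Usage(old: V4Usage): V5Usage {
  return {
    inputTokens: old.promptTokens,      // promptTokens -> inputTokens
    outputTokens: old.completionTokens, // completionTokens -> outputTokens
    totalTokens: old.totalTokens,
  };
}
```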

How do you get usage? (Easy, thank you AI SDK team ✌️)

const result = streamText({
  // ...model, messages, and your other options
  onFinish: ({ text, finishReason, usage, response, steps, totalUsage }) => {
    console.log("Usage:", JSON.stringify(usage, null, 2));
  },
});

Tools (declaring tools)

  • parameters when declaring a tool are now inputSchema

const tools = {
  // Read File
  readFile: tool({
    description: "Retrieves the content of a specified file by its path. Returns file content as a string or empty string if not found.",
    // input on tool call
    inputSchema: z.object({
      path: z.string().describe("The file path to retrieve content from"),
    }),
    execute: async ({ path }) => {
      // output on tool call
      const result = await someFileReadFunction(path);
      return { status: "success", data: result.content };
    },
  }),
};

The useChat Hook & client SDK changes

  • ❌ No more input, handleInputChange, handleSubmit
  • ✅ Change handleSubmit to sendMessage and add your own state for the controlled input
  • experimental_attachments is now files
  • reload is now regenerate (retry if error)
  • onResponse is removed
  • Your endpoint is wrapped in a DefaultChatTransport

Your code should look like this:

import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import React, { useCallback, useState, useRef } from "react";

const [input, setInput] = useState("");
const textAreaRef = useRef<HTMLTextAreaElement>(null);

const { sendMessage, messages } = useChat({
  transport: new DefaultChatTransport({
    api: `/api/chat`,
    headers: {
      authorization: `Bearer ${token}`,
    },
    body: {
      selectedModel,
      chatId,
    },
  }),
});

const customSubmit = useCallback(async () => {
  sendMessage(
    {
      role: "user",
      text: textAreaRef.current?.value || "",
      files: messageAttachments?.length ? messageAttachments : undefined,
      metadata: {
        uid: user?.uid || "",
      },
    },
    {
      body: {},
    }
  );
}, [sendMessage, textAreaRef, messageAttachments, user]);

Message Format

Messages now look like this when you access the messages property from const { messages } = useChat():

No more ❌ content property: each message now carries a parts array, and the text lives in ✅ text parts.

[
  {
    "role": "user",
    "id": "sOyJTK2gyQB27P6S",
    "parts": [
      {
        "type": "text",
        "text": "read this file vite.config.ts\n\n"
      }
    ],
    "metadata": {
      "uid": "ydxjoDzRjqaDAp3gSOO3q8Vp7V12"
    }
  },
  {
    "id": "CNhnjqWblAJuXbhd",
    "role": "assistant",
    "parts": [
      { "type": "step-start" },
      {
        "type": "reasoning",
        "text": "The user wants me to read the content of the file `vite.config.ts`. I need to use the `readFile` function with the path parameter set to 'vite.config.ts'.",
        "providerMetadata": {
          "anthropic": {
            "signature": "xxx"
          }
        },
        "state": "done"
      },
      {
        "type": "tool-readFile",
        "toolCallId": "toolu_01HuDNRmaFNmXKNH2zi2C8fM",
        "state": "output-available",
        "input": {
          "path": "vite.config.ts"
        },
        "output": {
          "status": "success",
          "data": "import { defineConfig }..."
        }
      },
      { "type": "step-start" },
      {
        "type": "text",
        "text": "Here's the content of the `vite.config.ts` file:\n\n```typescript\nimport { defineConfig }...`)",
        "state": "done"
      }
    ]
  }
]
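Since `content` is gone, any code that still expects a flat string can join the text parts instead. Here's a minimal sketch (my own helper, with loosened types for illustration):

```typescript
// Minimal sketch: reconstruct a flat string from a v5 message's parts,
// since the old `content` property no longer exists.
type MessagePart = { type: string; text?: string };
type UIMessageLike = { role: string; parts: MessagePart[] };

function messageText(message: UIMessageLike): string {
  return message.parts
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text)
    .join("");
}
```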

Tool Calls

Tool calls now look like this:

{
  "type": "tool-readFile",
  "toolCallId": "toolu_01HuDNRmaFNmXKNH2zi2C8fM",
  "state": "output-available",
  "input": {
    "path": "vite.config.ts"
  },
  "output": {
    "status": "success",
    "data": "import { defineConfig }..."
  }
}

UI Changes Based on Tool Call State

You can have a lot of fun here.

const ReadFileTool: React.FC<ToolProps> = (props) => {
  const { toolCallId, state, input, output } = props;

  switch (state) {
    case "input-streaming":
    case "input-available":
      return (
        <div className="flex items-center gap-x-2 font-times my-1 bg-accent rounded-lg px-2 py-2 truncate text-xs w-full overflow-x-auto">
          <Spinner size="sm" /> {input?.path}
        </div>
      );
    case "output-available":
      return (
        <div className="flex items-center gap-x-2 font-times my-1 bg-accent rounded-lg px-2 py-2 truncate text-xs w-full overflow-x-auto">
          <Icon path={mdiHeadSnowflakeOutline} size={0.6} /> {input?.path}
        </div>
      );
    case "output-error":
      return <div>Error: Reading file</div>;
    default:
      return <pre>{JSON.stringify(props, null, 2)}</pre>;
  }
};

Weird Issues

I would get Cannot read properties of undefined (reading 'type') out of nowhere: things would work for a while and then stop. Reinstalling the dependencies seemed to fix it, so it may have been my environment.

If you are using pnpm:

pnpm i ai@beta @ai-sdk/anthropic@beta @ai-sdk/deepseek@beta @ai-sdk/google@beta @ai-sdk/groq@beta @ai-sdk/openai@beta @ai-sdk/react@beta @ai-sdk/vercel@beta @ai-sdk/xai@beta

If you are using npm:

npm i ai@beta @ai-sdk/anthropic@beta @ai-sdk/deepseek@beta @ai-sdk/google@beta @ai-sdk/groq@beta @ai-sdk/openai@beta @ai-sdk/react@beta @ai-sdk/vercel@beta @ai-sdk/xai@beta

Since I'm in here, here's a minimal version of my server code. createUIMessageStream unlocks the ability to pipe custom data to the client.

import { streamText, convertToModelMessages, createUIMessageStream, createUIMessageStreamResponse } from "ai";
import { NextResponse } from "next/server";
import { openai } from "@ai-sdk/openai";

export const runtime = "nodejs";

export async function POST(request: Request) {
  try {
    const body = await request.json();
    const { messages, selectedModel, chatId } = body;

    const stream = createUIMessageStream({
      originalMessages: messages,
      execute: async ({ writer }) => {
        // Send chat ID to client
        writer.write({
          type: "data-chat-id",
          data: { chatId },
          transient: true,
        });

        const result = streamText({
          model: openai(selectedModel || "gpt-4o-mini"),
          messages: convertToModelMessages(messages),
          onFinish: ({ text, finishReason, usage, response, steps, totalUsage }) => {
            try {
              // Your saveUsage implementation would go here
              // saveUsage({
              //   usage: usage as any,
              //   model: selectedModel,
              //   response: response,
              // });
            } catch (error) {
              console.log("error", error);
            }
          },
        });

        writer.merge(result.toUIMessageStream());
      },
      onFinish: ({ messages }) => {
        try {
          // Your saveChat implementation would go here
          console.log("Chat finished, messages:", messages.length);
          // saveChat(messages);
        } catch (error) {
          console.log("error", error);
        }
      },
    });

    return createUIMessageStreamResponse({ stream });
  } catch (error) {
    console.error(error);
    return NextResponse.json({ error: "Something went wrong" }, { status: 500 });
  }
}

Then pick up the chat id on the client via onData:

const chat = useChat({
  transport: new DefaultChatTransport({
    api: `/api/chat/${projectId}`,
    headers: {
      authorization: `Bearer ${project.token}`,
    },
    body: {
      selectedModel: selectedModel,
      chatId: chatId,
    },
  }),
  onData: async (payload) => {
    console.log("onData", payload);

    if (payload.type === "data-chat-id") {
      setChatId(payload.data.chatId);
    }
  },
});
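The onData payload above is loosely typed, so a small type guard keeps the chat-id handling tidy. This is my own sketch for the custom "data-chat-id" part the server writes; the names are mine, not SDK exports:

```typescript
// Hypothetical type guard for the custom "data-chat-id" part that the
// server writes via writer.write(...) in the route handler above.
type ChatIdData = { type: "data-chat-id"; data: { chatId: string } };

function isChatIdData(payload: { type: string; data?: unknown }): payload is ChatIdData {
  return (
    payload.type === "data-chat-id" &&
    typeof (payload.data as { chatId?: unknown } | undefined)?.chatId === "string"
  );
}
```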

Files

If you're working with files, the shape will look like this:

[
    {
        "mediaType": "image/png",
        "filename": "Screenshot 2025-07-13 at 5.18.42 AM.png",
        "type": "file",
        "url": "data:image/png;base64.."
    }
]
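When you build this array yourself (e.g. from an input's FileList read into data URLs), a small helper can produce the part shape. toFilePart is my own name, not an SDK export:

```typescript
// Hypothetical helper (not an SDK export): builds a v5 file part
// from a filename and a data URL, deriving the media type from the URL.
type FilePart = { type: "file"; mediaType: string; filename: string; url: string };

function toFilePart(filename: string, dataUrl: string): FilePart {
  // Pull the media type out of the data URL, e.g. "data:image/png;base64,...".
  const match = /^data:([^;,]+)/.exec(dataUrl);
  return {
    type: "file",
    filename,
    mediaType: match ? match[1] : "application/octet-stream",
    url: dataUrl,
  };
}
```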

Conclusion

I hope this helps you migrate to the new AI SDK v5. If you have any questions, feel free to reach out to me on X.

Big thanks to the AI SDK team for the great work. 🫡