The Vercel AI Chatbot provides a modern chat interface built with React and the AI SDK. It supports real-time streaming responses, message editing, and multimodal inputs including text and images.
Core components
The chat interface is built using three main components that work together:
- Chat: the main chat container that orchestrates the entire conversation flow
- Messages: renders the message history with streaming support
- MultimodalInput: handles user input with text and file attachments
Chat component
The Chat component serves as the main container for chat functionality. It uses the useChat hook from the AI SDK to manage message state and streaming.
Basic usage
```tsx
import { useChat } from "@ai-sdk/react";
import { Messages } from "@/components/messages";
import { MultimodalInput } from "@/components/multimodal-input";
import { generateUUID } from "@/lib/utils";

export function Chat({
  id,
  initialMessages,
  initialChatModel,
  initialVisibilityType,
  isReadonly,
}: {
  id: string;
  initialMessages: ChatMessage[];
  initialChatModel: string;
  initialVisibilityType: VisibilityType;
  isReadonly: boolean;
}) {
  const {
    messages,
    setMessages,
    sendMessage,
    status,
    stop,
    regenerate,
  } = useChat<ChatMessage>({
    id,
    messages: initialMessages,
    generateId: generateUUID,
  });

  return (
    <div className="flex h-dvh flex-col bg-background">
      <Messages
        messages={messages}
        status={status}
        regenerate={regenerate}
      />
      <MultimodalInput
        sendMessage={sendMessage}
        status={status}
        stop={stop}
      />
    </div>
  );
}
```
Streaming configuration
The chat uses a custom transport to handle streaming with AI Gateway:
```tsx
import { DefaultChatTransport } from "ai";
import { fetchWithErrorHandlers } from "@/lib/utils";

const {
  messages,
  sendMessage,
  status,
} = useChat<ChatMessage>({
  id,
  transport: new DefaultChatTransport({
    api: "/api/chat",
    fetch: fetchWithErrorHandlers,
    prepareSendMessagesRequest(request) {
      return {
        body: {
          id: request.id,
          message: request.messages.at(-1),
          selectedChatModel: currentModelIdRef.current,
          selectedVisibilityType: visibilityType,
        },
      };
    },
  }),
  onData: (dataPart) => {
    // Accumulate streaming data parts as they arrive
    setDataStream((ds) => (ds ? [...ds, dataPart] : []));
  },
  onFinish: () => {
    // Refresh the cached chat history
    mutate(unstable_serialize(getChatHistoryPaginationKey));
  },
});
```
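`fetchWithErrorHandlers` is imported from the app's utils; its job is to turn non-2xx responses into thrown errors that `onError` can inspect. A hedged sketch of that idea (the real helper also handles offline detection and the app's own error codes):

```typescript
// Sketch of a fetch wrapper that throws on HTTP errors so useChat's
// onError callback receives them. Not the repo's exact implementation.
export async function fetchWithErrorHandlers(
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> {
  const response = await fetch(input, init);

  if (!response.ok) {
    // Prefer a server-provided message when the body is JSON
    const body = await response.json().catch(() => null);
    throw new Error(
      body?.message ?? `Request failed with status ${response.status}`
    );
  }

  return response;
}
```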
Message rendering
The Messages component displays the conversation history with support for streaming responses:
```tsx
import { PreviewMessage, ThinkingMessage } from "./message";
import { Greeting } from "./greeting";

export function Messages({
  messages,
  status,
  regenerate,
  isReadonly,
}: MessagesProps) {
  return (
    <div className="flex-1 overflow-y-auto">
      <div className="mx-auto max-w-4xl">
        {messages.length === 0 && <Greeting />}
        {messages.map((message, index) => (
          <PreviewMessage
            key={message.id}
            message={message}
            isLoading={status === "streaming" && messages.length - 1 === index}
            regenerate={regenerate}
            isReadonly={isReadonly}
          />
        ))}
        {status === "submitted" && <ThinkingMessage />}
      </div>
    </div>
  );
}
```
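The `isLoading` expression above (last message while streaming) is easy to get backwards; factored into a tiny predicate, the intent is explicit. A sketch (the helper name is ours, not the repo's):

```typescript
// Only the final message in the list shows a loading state, and only
// while the chat status is "streaming". Helper name is illustrative.
export function isMessageLoading(
  status: string,
  index: number,
  messageCount: number
): boolean {
  return status === "streaming" && index === messageCount - 1;
}
```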
Message types
Messages support multiple content types through the parts system:
Text messages

```tsx
if (type === "text") {
  return (
    <MessageContent
      className="rounded-2xl px-3 py-2"
      style={{ backgroundColor: "#006cff" }}
    >
      <Response>{sanitizeText(part.text)}</Response>
    </MessageContent>
  );
}
```

File attachments

```tsx
const attachmentsFromMessage = message.parts.filter(
  (part) => part.type === "file"
);

{attachmentsFromMessage.map((attachment) => (
  <PreviewAttachment
    attachment={{
      name: attachment.filename ?? "file",
      contentType: attachment.mediaType,
      url: attachment.url,
    }}
    key={attachment.url}
  />
))}
```

Reasoning

```tsx
if (type === "reasoning") {
  const hasContent = (part.text?.trim().length ?? 0) > 0;

  if (hasContent) {
    const isStreaming = "state" in part && part.state === "streaming";

    return (
      <MessageReasoning
        isLoading={isLoading || isStreaming}
        reasoning={part.text}
      />
    );
  }
}
```
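All three branches above switch on `part.type`. Modeled as a discriminated union (types simplified from the real message part shapes), the narrowing looks like this:

```typescript
// Simplified stand-ins for the SDK's message part types.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "file"; url: string; filename?: string; mediaType: string }
  | { type: "reasoning"; text: string; state?: "streaming" | "done" };

// Collect a one-line summary per part; mirrors the render branching.
export function summarizeParts(parts: MessagePart[]): string[] {
  return parts.map((part) => {
    switch (part.type) {
      case "text":
        return `text: ${part.text}`;
      case "file":
        return `file: ${part.filename ?? "file"} (${part.mediaType})`;
      case "reasoning":
        return `reasoning (${part.state ?? "done"})`;
    }
  });
}
```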
Multimodal input

The MultimodalInput component handles user input with support for text, image attachments, and paste-to-upload:
components/multimodal-input.tsx
```tsx
import { useCallback } from "react";
import { PromptInput, PromptInputTextarea } from "./elements/prompt-input";

export function MultimodalInput({
  input,
  setInput,
  attachments,
  setAttachments,
  sendMessage,
  status,
}: MultimodalInputProps) {
  const submitForm = useCallback(() => {
    sendMessage({
      role: "user",
      parts: [
        ...attachments.map((attachment) => ({
          type: "file" as const,
          url: attachment.url,
          name: attachment.name,
          mediaType: attachment.contentType,
        })),
        {
          type: "text",
          text: input,
        },
      ],
    });

    setAttachments([]);
    setInput("");
  }, [input, attachments, sendMessage, setAttachments, setInput]);

  return (
    <PromptInput onSubmit={submitForm}>
      <PromptInputTextarea
        placeholder="Send a message..."
        value={input}
        onChange={(e) => setInput(e.target.value)}
      />
    </PromptInput>
  );
}
```
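`submitForm` interleaves attachments and text into a single `parts` array. Extracted as a pure function (the function name is ours, for illustration), the shape it builds is:

```typescript
type Attachment = { url: string; name: string; contentType: string };

// Builds the parts array the same way submitForm does: file parts
// first, then a single text part. Function name is illustrative.
export function buildUserMessageParts(
  input: string,
  attachments: Attachment[]
) {
  return [
    ...attachments.map((attachment) => ({
      type: "file" as const,
      url: attachment.url,
      name: attachment.name,
      mediaType: attachment.contentType,
    })),
    { type: "text" as const, text: input },
  ];
}
```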
File uploads
Users can attach images by clicking the attachment button or pasting directly:
components/multimodal-input.tsx
```tsx
const uploadFile = useCallback(async (file: File) => {
  const formData = new FormData();
  formData.append("file", file);

  const response = await fetch("/api/files/upload", {
    method: "POST",
    body: formData,
  });

  if (response.ok) {
    const data = await response.json();

    return {
      url: data.url,
      name: data.pathname,
      contentType: data.contentType,
    };
  }
}, []);

const handlePaste = useCallback(async (event: ClipboardEvent) => {
  const items = event.clipboardData?.items;

  if (!items) {
    return;
  }

  const imageItems = Array.from(items).filter((item) =>
    item.type.startsWith("image/")
  );

  if (imageItems.length > 0) {
    event.preventDefault();

    const uploadPromises = imageItems
      .map((item) => item.getAsFile())
      .filter((file): file is File => file !== null)
      .map((file) => uploadFile(file));

    const uploadedAttachments = await Promise.all(uploadPromises);

    // Drop uploads that failed (uploadFile returns undefined on error)
    const successful = uploadedAttachments.filter(
      (attachment) => attachment !== undefined
    );

    setAttachments((curr) => [...curr, ...successful]);
  }
}, [uploadFile, setAttachments]);
```
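The upload endpoint will typically reject non-image or oversized files, so a client-side pre-check can fail fast before the network round trip. A sketch (the 5 MB limit is an assumption, not the app's configured limit):

```typescript
// Hypothetical pre-upload validation; the size limit is an assumption.
export function isUploadableImage(
  file: { type: string; size: number },
  maxBytes: number = 5 * 1024 * 1024
): boolean {
  return file.type.startsWith("image/") && file.size <= maxBytes;
}
```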
Message actions
Each message has interactive actions available:
- Copy: copy the message content to the clipboard
- Edit: edit user messages and regenerate responses
- Vote: upvote or downvote assistant responses
- Regenerate: re-generate the assistant's response
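The Copy action, for instance, typically concatenates the message's text parts before writing to the clipboard. A sketch of that extraction step (the helper name and joining behavior are illustrative assumptions):

```typescript
// Joins the text parts of a message into one copyable string.
// Helper name and newline-joining are illustrative assumptions.
export function getCopyableText(
  parts: Array<{ type: string; text?: string }>
): string {
  return parts
    .filter((part) => part.type === "text" && part.text)
    .map((part) => part.text)
    .join("\n");
}
```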
Auto-scroll

The chat automatically scrolls to the bottom as new messages arrive:
```tsx
import { useCallback, useEffect, useRef, useState } from "react";

export function useMessages({ status }: { status: string }) {
  const containerRef = useRef<HTMLDivElement>(null);
  const endRef = useRef<HTMLDivElement>(null);
  const [isAtBottom, setIsAtBottom] = useState(true);

  const scrollToBottom = useCallback((behavior: ScrollBehavior = "auto") => {
    endRef.current?.scrollIntoView({ behavior });
  }, []);

  useEffect(() => {
    if (status === "streaming") {
      scrollToBottom("smooth");
    }
  }, [status, scrollToBottom]);

  return {
    containerRef,
    endRef,
    isAtBottom,
    scrollToBottom,
  };
}
```
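The hook also exposes `isAtBottom`, which is typically derived from scroll geometry so auto-scroll can pause when the user scrolls up to read earlier messages. A sketch of that computation (the 50px threshold is an assumption):

```typescript
// True when the viewport bottom is within `threshold` px of the
// content bottom. The threshold value is an illustrative assumption.
export function isScrolledToBottom(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  threshold: number = 50
): boolean {
  return scrollHeight - (scrollTop + clientHeight) <= threshold;
}
```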
The chat interface automatically saves input to localStorage, so users won’t lose their draft messages when navigating away.
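That persistence can be as simple as mirroring the input state into localStorage on change and hydrating it on mount. A storage-agnostic sketch (the key name is an assumption, not the app's actual key):

```typescript
const DRAFT_KEY = "chat-input-draft"; // assumed key, not the app's actual one

type DraftStore = Pick<Storage, "getItem" | "setItem" | "removeItem">;

// Persist the draft; clear the entry when the input is emptied.
export function saveDraft(store: DraftStore, value: string): void {
  if (value === "") {
    store.removeItem(DRAFT_KEY);
  } else {
    store.setItem(DRAFT_KEY, value);
  }
}

// Restore a previously saved draft, defaulting to an empty string.
export function loadDraft(store: DraftStore): string {
  return store.getItem(DRAFT_KEY) ?? "";
}
```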
Error handling
The chat includes comprehensive error handling with user-friendly messages:
```tsx
const { messages, sendMessage } = useChat({
  onError: (error) => {
    if (error.message?.includes("AI Gateway requires a valid credit card")) {
      setShowCreditCardAlert(true);
    } else if (error instanceof ChatbotError) {
      toast({
        type: "error",
        description: error.message,
      });
    } else {
      toast({
        type: "error",
        description: error.message || "Oops, an error occurred!",
      });
    }
  },
});
```
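The branching in `onError` can be captured as a small pure function, which also makes it unit-testable. A sketch mirroring the branches above (the type and function names are ours):

```typescript
type ErrorNotice =
  | { kind: "credit-card-alert" }
  | { kind: "toast"; description: string };

// Mirrors the onError branches: the gateway's credit-card message
// opens an alert; everything else becomes an error toast.
export function classifyChatError(error: { message?: string }): ErrorNotice {
  if (error.message?.includes("AI Gateway requires a valid credit card")) {
    return { kind: "credit-card-alert" };
  }

  return {
    kind: "toast",
    description: error.message || "Oops, an error occurred!",
  };
}
```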
- Artifacts: learn about the document artifacts system
- Model providers: understand AI model configuration