Persist AI chat messages to Neon Postgres - this template showcases how to store AI SDK chats and messages in your Neon database. Message parts are stored in separate tables rather than as JSONB, which makes it easier to enforce and version the schema as the AI SDK and its message parts evolve over time.
- Click the "Deploy" button to clone this repository, create a new Vercel project, set up the Neon integration, and provision a new Neon database.
- Next, enable the Vercel AI Gateway for the project. Learn more here.
- Once the process is complete, you can play around with the deployed template and clone the newly created GitHub repository to start making changes locally.
- Install dependencies:

  ```bash
  bun i
  ```

- Create a `.env` file in the project root:

  ```bash
  cp .env.example .env
  ```

- Get your Neon database URL

  Run `vercel env pull` to fetch the environment variables from your Vercel project.

  Alternatively, obtain the database connection string from the Connection Details widget on the Neon Dashboard and update the `.env` file with it:

  ```
  DATABASE_URL=<your-string-here>
  ```

- Set up the schema

  Use Drizzle to generate a database schema based on the schema definitions in `lib/db/schema.ts` and apply it to the Neon database:

  ```bash
  bun run db:generate
  bun run db:migrate
  ```

- Get your Vercel AI Gateway API key

  If you deployed via Vercel, run `vercel env pull` to fetch the environment variables. Otherwise, create a Vercel AI Gateway API key here and add it to your `.env` file:

  ```
  AI_GATEWAY_API_KEY=<your-string-here>
  ```

  Alternatively, you can follow the AI SDK provider docs and modify the model configuration in the code to use a different provider instead of Vercel AI Gateway.

- Run the development server:

  ```bash
  bun run dev
  ```

You're all set! 🚀 Visit the app in your browser and click New chat to try out the Tweet drafting assistant. After having a conversation with the agent, refresh the browser page and verify that all messages are persisted and queried from the database on page load.
- Full-stack framework: Next.js
- ORM: Drizzle
- Agent framework: AI SDK v6
- UI components: Shadcn & AI Elements
- Database: Neon Serverless Postgres
- TypeScript runtime & package manager: Bun
- New Chat: When a user clicks "New chat", they navigate to `/chats/{chatId}`
- Load History: The chat page loads existing messages from the database
- Send Message: The client sends the user message to the API
- Persist User Message: The API persists the user message before streaming
- Stream Response: The AI response is streamed to the client
- Persist Assistant Message: The `onFinish` callback persists the assistant response
- Reload: If the user refreshes, they see the full conversation history
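The flow above can be sketched as a chat route handler. This is a hedged sketch rather than the template's exact code: the model ID and the helpers `saveUserMessage` and `saveAssistantMessage` are hypothetical stand-ins for the template's Drizzle-backed persistence functions.

```typescript
// app/api/chat/route.ts (hypothetical sketch)
import { streamText, convertToModelMessages, type UIMessage } from "ai";

// Hypothetical persistence helpers backed by Drizzle (see lib/db/schema.ts).
declare function saveUserMessage(chatId: string, m: UIMessage): Promise<void>;
declare function saveAssistantMessage(chatId: string, text: string): Promise<void>;

export async function POST(req: Request) {
  const { chatId, messages }: { chatId: string; messages: UIMessage[] } =
    await req.json();

  // Persist the user message before streaming starts, so it survives
  // even if the model call fails mid-stream.
  await saveUserMessage(chatId, messages[messages.length - 1]);

  // Stream the assistant response to the client via the AI Gateway.
  const result = streamText({
    model: "openai/gpt-4o", // hypothetical model ID, served via Vercel AI Gateway
    messages: convertToModelMessages(messages),
    onFinish: async ({ text }) => {
      // Persist the assistant response once streaming completes.
      await saveAssistantMessage(chatId, text);
    },
  });

  return result.toUIMessageStreamResponse();
}
```

Persisting the user message before the stream begins is what guarantees the conversation is never half-missing on reload.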
On Vercel Fluid compute, we recommend using a pooled PostgreSQL connection that can be reused across requests (more details here). This setup uses node-postgres with Drizzle as the ORM.
```bash
bun add drizzle-orm pg @vercel/functions
bun add -D drizzle-kit @types/pg
```

Follow the Drizzle Postgres setup guide for step-by-step instructions. Attach the database pool to your Vercel function to ensure it is released properly on function shutdown. For more information, see the Vercel connection pooling guide.
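A minimal sketch of that setup, assuming the pool lives in a shared module (the file path and export names are illustrative, not necessarily the template's):

```typescript
// lib/db/index.ts (hypothetical): one pg Pool reused across warm invocations.
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { attachDatabasePool } from "@vercel/functions";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Release the pool cleanly when the function instance shuts down.
attachDatabasePool(pool);

export const db = drizzle(pool);
```

Because the module is evaluated once per instance, warm requests reuse the same pool instead of opening a fresh connection each time.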
Optionally, configure the Neon MCP server by following the instructions in the MCP server README or by running `bunx neonctl@latest init`.
The schema uses separate tables for chats, messages, and all message part types (text, reasoning, tools, files, etc.).
The schema uses `uuid_generate_v7()` for default IDs. You have two options:
Option 1: Use the pg_uuidv7 extension (recommended for Neon)
```sql
CREATE EXTENSION IF NOT EXISTS pg_uuidv7;
```

Option 2: PostgreSQL 18+

PostgreSQL 18 includes native UUID v7 support via `uuidv7()`. Update your schema to use `uuidv7()` instead of `uuid_generate_v7()`.
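As an illustration, a Drizzle column default using the extension function might look like this (the table and column names here are hypothetical, not the template's exact schema in `lib/db/schema.ts`):

```typescript
import { sql } from "drizzle-orm";
import { pgTable, uuid, text, timestamp } from "drizzle-orm/pg-core";

// Hypothetical excerpt: messages keyed by a database-generated UUID v7.
export const messages = pgTable("messages", {
  id: uuid("id").primaryKey().default(sql`uuid_generate_v7()`),
  chatId: uuid("chat_id").notNull(),
  role: text("role").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```

On PostgreSQL 18 you would swap the default to ``sql`uuidv7()` `` and drop the extension.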
Run migrations to create your tables:
```bash
bun run db:generate
bun run db:migrate
```

This template uses UUID v7 for message and message part IDs. UUID v7 addresses performance concerns of UUID v4 (the previous default). Most importantly, it's chronologically sortable, meaning IDs generated later are lexicographically greater than earlier ones.
With UUID v7, we avoid having to sort by a `createdAt` index (which breaks if we insert all message parts in a single transaction) and avoid an additional `order` column to track the order of message parts. Instead, we can sort by primary key directly.
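To see why sorting by primary key works, here is a toy UUID v7 generator (a simplified sketch, not the `pg_uuidv7` implementation): the leading 48 bits are the millisecond timestamp, so hex-encoded IDs from later milliseconds compare greater as plain strings.

```typescript
import { randomBytes } from "node:crypto";

// Toy UUID v7: 48-bit big-endian ms timestamp, then version/variant bits,
// then random bits. Real implementations also add monotonicity within a
// single millisecond, which this sketch omits.
function uuidv7(timestampMs = Date.now()): string {
  const bytes = Buffer.alloc(16);
  bytes.writeUIntBE(timestampMs, 0, 6); // 48-bit timestamp
  randomBytes(10).copy(bytes, 6); // 80 random bits
  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC 4122 variant
  const hex = bytes.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

// IDs from a later millisecond always sort after earlier ones:
const earlier = uuidv7(1700000000000);
const later = uuidv7(1700000000001);
console.log(earlier < later); // true
```

Because the timestamp occupies the most significant bytes, lexicographic order on the hex string matches chronological order across milliseconds.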
This template is based on Shadcn UI and the AI SDK's AI Elements components.
For details on how to use Shadcn UI, refer to the Shadcn Next.js docs. Follow the Shadcn Next.js dark mode guide to learn how dark mode is implemented.
You can find an introduction to AI Elements in the AI SDK docs.
Each tool call is persisted to the `messageTools` table with a `tool_type` enum constraint (see `lib/db/schema.ts`). Define your tools and their schemas in code and update the `TOOL_TYPES` array to match (see `lib/ai/tools.ts`). This ensures that only valid tool types are persisted to the database and allows you to safely type-cast tool input and output when retrieving them from the database.
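As an illustrative sketch of keeping the enum and the tool definitions in sync (the tool names and column layout here are hypothetical; the real definitions live in `lib/ai/tools.ts` and `lib/db/schema.ts`):

```typescript
// lib/ai/tools.ts (hypothetical excerpt): single source of truth for tool names.
export const TOOL_TYPES = ["getWeather", "searchTweets"] as const;
export type ToolType = (typeof TOOL_TYPES)[number];

// lib/db/schema.ts (hypothetical excerpt): enum column constrained to TOOL_TYPES.
import { pgEnum, pgTable, uuid, jsonb } from "drizzle-orm/pg-core";

export const toolTypeEnum = pgEnum("tool_type", TOOL_TYPES);

export const messageTools = pgTable("message_tools", {
  id: uuid("id").primaryKey(),
  toolType: toolTypeEnum("tool_type").notNull(),
  input: jsonb("input"),
  output: jsonb("output"),
});
```

Because `TOOL_TYPES` feeds both the Postgres enum and the `ToolType` union, adding a tool in one place updates the database constraint and the TypeScript types together.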