Vercel AI SDK
The Vercel AI SDK (ai package) is the standard way to add streaming AI to Next.js and Node.js applications. When you set baseURL to the Keeptrusts gateway, every request made by streamText, generateText, or streamObject passes through Keeptrusts's real-time policy enforcement layer — with zero changes to your application logic.
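To make the "zero changes" claim concrete, here is a before/after sketch of the provider settings, shown as plain objects rather than live SDK calls so the diff is visible; in a real app these are the arguments passed to createOpenAI from @ai-sdk/openai:

```typescript
// Before: the SDK talks to OpenAI directly and needs a real key in the app.
const direct = {
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY,
};

// After: the only change is the baseURL; the gateway holds the real key.
const viaGateway = {
  baseURL: 'http://localhost:41002/v1',
  apiKey: 'any', // placeholder value -- upstream auth happens in the gateway
};
```

Everything else in the application (prompts, streaming, tool calls) is untouched.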
Use this page when
- You need the exact command, config, API, or integration details for Vercel AI SDK.
- You are wiring automation or AI retrieval and need canonical names, examples, and constraints.
- You want a guided rollout rather than a reference page — in that case, follow the linked workflow pages in Next steps.
Primary audience
- Primary: AI Agents, Technical Engineers
- Secondary: Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a `policy-config.yaml` created
- `ai` and `@ai-sdk/openai` installed in your project
- The upstream provider API key available as an environment variable
```shell
npm install ai @ai-sdk/openai
```
Configuration
```yaml
pack:
  name: "vercel-ai-sdk-gateway"
  version: "0.1.0"
  enabled: true

policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger

providers:
  targets:
    - id: "openai-via-gateway"
      provider: "openai"
      model: "gpt-4o"
      base_url: "https://api.openai.com/v1"
      secret_key_ref:
        env: "OPENAI_API_KEY"
```
Start the gateway:
```shell
export OPENAI_API_KEY="sk-..."
kt gateway run --listen 127.0.0.1:41002 --policy-config policy-config.yaml
```
Provider Fields
| Field | Type | Default | Description |
|---|---|---|---|
| `id` | string | — | Unique target identifier |
| `provider` | string | — | Upstream provider (`openai`, `anthropic`, etc.) |
| `model` | string | — | Default model passed to the upstream |
| `base_url` | string | provider default | Upstream API base URL |
| `secret_key_ref` | object | provider default | Object reference to the env var holding the upstream API key |
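Multiple targets can sit side by side in one config. A sketch combining the fields above — the second target's `id` is illustrative and omits `base_url` to fall back to the provider default; verify the exact schema against your gateway version:

```yaml
providers:
  targets:
    - id: "openai-via-gateway"
      provider: "openai"
      model: "gpt-4o"
      base_url: "https://api.openai.com/v1"
      secret_key_ref:
        env: "OPENAI_API_KEY"
    - id: "openai-mini"            # illustrative second target: same provider,
      provider: "openai"           # cheaper default model, provider-default base_url
      model: "gpt-4o-mini"
      secret_key_ref:
        env: "OPENAI_API_KEY"
```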
Supported Models
Any model supported by the underlying upstream provider is accessible through the gateway. When using @ai-sdk/openai pointed at Keeptrusts, pass any valid OpenAI model name:
| Model | Notes |
|---|---|
| `gpt-4o` | Latest GPT-4o flagship |
| `gpt-4o-mini` | Cost-efficient, fast |
| `gpt-4-turbo` | Previous-generation flagship |
| `o1`, `o1-mini` | Reasoning models |
| Any Anthropic model | Use the `@ai-sdk/anthropic` variant (see Advanced Configuration) |
Client Examples
The Keeptrusts gateway exposes an OpenAI-compatible /v1 surface. Configure @ai-sdk/openai to point at http://localhost:41002/v1:
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const openai = createOpenAI({
  baseURL: 'http://localhost:41002/v1',
  apiKey: 'any', // The gateway handles auth to the upstream
});

const { textStream } = await streamText({
  model: openai('gpt-4o'),
  prompt: 'Summarize the key benefits of a zero-trust security model.',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```
```python
# The Vercel AI SDK is Node/browser-only; use the standard OpenAI client
# for Python applications pointed at the same gateway endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="any",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the key benefits of a zero-trust security model."}],
)
print(response.choices[0].message.content)
```
```shell
curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the key benefits of a zero-trust security model."}]
  }'
```
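For clients that call the endpoint directly (as the cURL example does), the request body is plain OpenAI-compatible JSON. A small sketch of building that payload in TypeScript — the helper name and types here are illustrative, not part of any SDK:

```typescript
// Build the JSON body accepted by the gateway's OpenAI-compatible
// /v1/chat/completions endpoint, mirroring the cURL example above.
type Role = 'system' | 'user' | 'assistant';

interface ChatMessage {
  role: Role;
  content: string;
}

function buildChatBody(model: string, messages: ChatMessage[]): string {
  return JSON.stringify({ model, messages });
}

// The same payload as the cURL call, ready to POST with fetch()
// to http://localhost:41002/v1/chat/completions.
const body = buildChatBody('gpt-4o', [
  { role: 'user', content: 'Summarize the key benefits of a zero-trust security model.' },
]);
```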
Streaming
The Vercel AI SDK's streamText and streamObject functions work natively with the Keeptrusts gateway. The gateway forwards server-sent events (SSE) to the client without buffering:
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const openai = createOpenAI({
  baseURL: 'http://localhost:41002/v1',
  apiKey: 'any',
});

// Stream tokens in a Next.js Route Handler
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: prompt },
    ],
  });

  return result.toDataStreamResponse();
}
```
Advanced Configuration
Using @ai-sdk/anthropic
Point the Anthropic provider at the gateway's /v1 surface. Keeptrusts performs automatic format translation between OpenAI and Anthropic wire formats:
```typescript
import { createAnthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const anthropic = createAnthropic({
  baseURL: 'http://localhost:41002/v1',
  apiKey: 'any',
});

const { text } = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  prompt: 'Explain chain-of-thought prompting.',
});
```
Update policy-config.yaml to route to the Anthropic upstream:
```yaml
pack:
  name: vercel-providers-2
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: anthropic-via-gateway
      provider: anthropic
      model: claude-3-5-sonnet-20241022
      base_url: https://api.anthropic.com/v1
      secret_key_ref:
        env: ANTHROPIC_API_KEY
      provider_type: anthropic
      format: anthropic

policies:
  chain:
    - audit-logger

policy:
  audit-logger:
    immutable: true
    retention_days: 365
    log_all_access: true
```
Structured Output with streamObject
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'http://localhost:41002/v1',
  apiKey: 'any',
});

const { partialObjectStream } = await streamObject({
  model: openai('gpt-4o'),
  schema: z.object({
    title: z.string(),
    summary: z.string(),
    tags: z.array(z.string()),
  }),
  prompt: 'Describe the Keeptrusts gateway in structured form.',
});

for await (const partial of partialObjectStream) {
  console.log(partial);
}
```
Best Practices
- Set `apiKey: 'any'` — the gateway manages upstream auth; no real API key should be sent from the browser.
- Use `baseURL`, not `baseUrl` — `@ai-sdk/openai` uses the camelCase `baseURL`.
- Keep the gateway local in development — run `kt gateway run` alongside your Next.js dev server on port `41002`; configure `KEEPTRUSTS_GATEWAY_URL` for staging and production.
- Apply PII redaction — add the `pii-detector` policy to prevent sensitive data from leaving your network before it reaches the upstream provider.
- Audit all requests — include `audit-logger` in the policy chain to capture every prompt and response for compliance review.
- Pin the gateway port — standardize on `41002` across environments to avoid configuration drift.
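The per-environment URL convention above can be centralized in one small helper. A sketch, assuming KEEPTRUSTS_GATEWAY_URL is set in staging and production and localhost:41002 is the development fallback; the function name is ours, not part of any SDK:

```typescript
// Resolve the gateway base URL once, so every createOpenAI() call
// shares the same configuration across dev, staging, and production.
function resolveGatewayBaseURL(
  env: Record<string, string | undefined> = process.env,
): string {
  // Staging/production set KEEPTRUSTS_GATEWAY_URL; local dev falls back
  // to the pinned development port 41002.
  const url = env.KEEPTRUSTS_GATEWAY_URL ?? 'http://localhost:41002';
  // Strip trailing slashes, then append the OpenAI-compatible /v1 surface.
  return url.replace(/\/+$/, '') + '/v1';
}

// Usage with the Vercel AI SDK:
//   const openai = createOpenAI({ baseURL: resolveGatewayBaseURL(), apiKey: 'any' });
```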
For AI systems
- Canonical terms: Keeptrusts gateway, Vercel AI SDK, `@ai-sdk/openai`, Next.js, Edge Functions, `createOpenAI`, `baseURL`, provider target.
- Integration pattern: Use `createOpenAI({ baseURL: "http://localhost:41002/v1" })` in the Vercel AI SDK to route through Keeptrusts.
- Key behavior: The Vercel AI SDK's OpenAI provider adapter points at Keeptrusts instead of OpenAI directly — all AI SDK features (streaming, tool calling) work unchanged.
- Best next pages: Node.js SDK integration, OpenAI integration, Quickstart.
For engineers
- Prerequisites: Next.js project with the `ai` and `@ai-sdk/openai` packages, Keeptrusts gateway running on a known port (default `41002`).
- Set `baseURL` in `createOpenAI()` to your Keeptrusts gateway address (e.g., `http://localhost:41002/v1`).
- Include `audit-logger` in the policy chain to capture every prompt and response for compliance review.
- Pin the gateway port (standardize on `41002`) across environments to avoid configuration drift.
- Validate: deploy your Next.js app and check the Keeptrusts console Events dashboard for request records.
- All Vercel AI SDK features (streaming, tool calling, structured output) work unchanged through the gateway.
For leaders
- Zero framework code changes — existing Vercel AI SDK applications adopt Keeptrusts governance by changing only the `baseURL` configuration.
- All AI requests from your Next.js application are audit-logged, providing compliance evidence for AI-powered features.
- Keeptrusts policies (PII redaction, prompt-injection) apply to all traffic regardless of which model the AI SDK routes to.
- Standardizing the gateway port across environments prevents configuration drift between development, staging, and production.
Next steps
- Node.js SDK integration — raw OpenAI SDK integration without the AI SDK framework
- OpenAI integration — gateway-side OpenAI provider configuration
- Policy configuration — prompt-injection and audit-logger reference
- Quickstart — install `kt` and run your first gateway