TypeScript SDK Patterns for Governed AI
The Keeptrusts gateway speaks the OpenAI API, so any TypeScript SDK that targets OpenAI works with a single URL change. This guide covers the OpenAI Node SDK and the Vercel AI SDK — the two most common choices for TypeScript applications.
Use this page when
- You are integrating the OpenAI Node SDK or Vercel AI SDK with the Keeptrusts gateway in TypeScript.
- You need streaming, non-streaming, or Next.js API route patterns for governed AI calls.
- You want to handle 409 policy blocks with typed error interfaces in TypeScript.
- You are building a Vercel/Next.js application that uses `streamText` through the gateway.
Primary audience
- Primary: TypeScript/Node.js developers building AI-powered applications
- Secondary: Full-stack engineers using Next.js, Frontend developers adding AI features via Vercel AI SDK
OpenAI Node SDK
Installation
```bash
npm install openai
```
Basic Client
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.LLM_GATEWAY_URL ?? "http://localhost:41002/v1",
  apiKey: process.env.OPENAI_API_KEY,
});
```
Non-Streaming Request
```typescript
async function governedCompletion(prompt: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content ?? "";
}
```
Streaming Request
```typescript
async function streamCompletion(prompt: string): Promise<string> {
  const stream = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  const chunks: string[] = [];
  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content;
    if (content) {
      process.stdout.write(content);
      chunks.push(content);
    }
  }
  return chunks.join("");
}
```
Handling Policy Blocks
```typescript
import OpenAI from "openai";

interface PolicyError {
  type: string;
  message: string;
  policy: string;
  code: string;
}

async function safeCompletion(prompt: string): Promise<string | null> {
  try {
    const response = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: prompt }],
    });
    return response.choices[0].message.content;
  } catch (err) {
    if (err instanceof OpenAI.APIError && err.status === 409) {
      // The SDK unwraps the response body's `error` field into `err.error`,
      // so cast it to the policy envelope directly.
      const policyError = err.error as PolicyError | undefined;
      console.warn(`Blocked by policy: ${policyError?.policy}`);
      return null;
    }
    throw err;
  }
}
```
Function Calling
```typescript
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "lookup_order",
      description: "Look up an order by ID",
      parameters: {
        type: "object",
        properties: {
          order_id: { type: "string", description: "The order identifier" },
        },
        required: ["order_id"],
      },
    },
  },
];

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Check order ORD-5678" }],
  tools,
  tool_choice: "auto",
});

const toolCalls = response.choices[0].message.tool_calls;
if (toolCalls) {
  for (const call of toolCalls) {
    console.log(`Tool: ${call.function.name}, Args: ${call.function.arguments}`);
  }
}
```
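After inspecting the tool calls, the usual round trip is to run the function locally and send the result back to the model as a `tool` message. A minimal sketch of the message shapes involved — using a hypothetical `lookupOrder` helper and plain local types rather than the SDK's richer ones — is:

```typescript
// Plain local shapes mirroring the OpenAI chat message format
// (simplified for illustration; the SDK exports fuller types).
interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}

interface ToolResultMessage {
  role: "tool";
  tool_call_id: string;
  content: string;
}

// Hypothetical local implementation of the tool declared above.
function lookupOrder(orderId: string): { order_id: string; status: string } {
  return { order_id: orderId, status: "shipped" };
}

// Execute a tool call and package the result as a `tool` message
// that can be appended to `messages` for the follow-up request.
function runToolCall(call: ToolCall): ToolResultMessage {
  const args = JSON.parse(call.function.arguments) as { order_id: string };
  const result = lookupOrder(args.order_id);
  return {
    role: "tool",
    tool_call_id: call.id,
    content: JSON.stringify(result),
  };
}
```

The follow-up request then includes the original assistant message (with its `tool_calls`) plus one tool result message per call, and the model produces the final user-facing answer.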
Vercel AI SDK
The Vercel AI SDK is popular in Next.js applications. It supports the OpenAI provider with a custom base URL.
Installation
```bash
npm install ai @ai-sdk/openai
```
Provider Configuration
```typescript
import { createOpenAI } from "@ai-sdk/openai";

const governedProvider = createOpenAI({
  baseURL: process.env.LLM_GATEWAY_URL ?? "http://localhost:41002/v1",
  apiKey: process.env.OPENAI_API_KEY,
});
```
Streaming Text (Route Handler)
```typescript
import { streamText } from "ai";

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const result = streamText({
    model: governedProvider("gpt-4o"),
    prompt,
  });
  return result.toDataStreamResponse();
}
```
Generating Structured Objects
```typescript
import { generateObject } from "ai";
import { z } from "zod";

const { object } = await generateObject({
  model: governedProvider("gpt-4o"),
  schema: z.object({
    summary: z.string(),
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number().min(0).max(1),
  }),
  prompt: "Analyze the sentiment of: 'This product is amazing!'",
});

console.log(object.sentiment, object.confidence);
```
Error Handling in Next.js Route Handlers
```typescript
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  try {
    const result = streamText({
      model: governedProvider("gpt-4o"),
      messages,
    });
    return result.toDataStreamResponse();
  } catch (err: unknown) {
    if (err instanceof Error && "status" in err) {
      const status = (err as { status: number }).status;
      if (status === 409) {
        return Response.json(
          { error: "Request blocked by governance policy" },
          { status: 409 }
        );
      }
      if (status === 429) {
        return Response.json(
          { error: "Rate limit exceeded, try again shortly" },
          { status: 429 }
        );
      }
    }
    return Response.json({ error: "Internal server error" }, { status: 500 });
  }
}
```
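One caveat worth knowing: `streamText` defers most provider errors into the stream rather than throwing synchronously, so the try/catch above mainly covers failures that occur before streaming starts (such as malformed JSON in the request body). To keep user-facing bodies consistent between the catch path and any stream-level handling, one option is a small status-mapping helper — a sketch, where `gatewayErrorBody` and `gatewayErrorResponse` are hypothetical names:

```typescript
// Hypothetical helper: map gateway HTTP statuses to safe, user-facing
// error bodies so raw gateway envelopes never reach the frontend.
function gatewayErrorBody(status: number): { error: string } {
  switch (status) {
    case 409:
      return { error: "Request blocked by governance policy" };
    case 429:
      return { error: "Rate limit exceeded, try again shortly" };
    default:
      return { error: "Internal server error" };
  }
}

// Build the Response directly; unknown statuses collapse to 500.
function gatewayErrorResponse(status: number): Response {
  const passthrough = status === 409 || status === 429 ? status : 500;
  return Response.json(gatewayErrorBody(status), { status: passthrough });
}
```

If your AI SDK version supports an `onError` callback on `streamText`, the same helper can feed its logging path, so blocked requests look identical to the client regardless of where the error surfaced.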
Express.js Middleware Pattern
Wrap governance error handling into middleware:
```typescript
import OpenAI from "openai";
import type { Request, Response, NextFunction } from "express";

function governanceErrorHandler(err: unknown, _req: Request, res: Response, _next: NextFunction) {
  if (err instanceof OpenAI.APIError) {
    if (err.status === 409) {
      res.status(409).json({ error: "Blocked by AI governance policy", policy: err.message });
      return;
    }
    if (err.status === 429) {
      res.status(429).json({ error: "Rate limited by gateway" });
      return;
    }
  }
  res.status(500).json({ error: "Internal server error" });
}
```
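Registering the handler is standard Express wiring: error-handling middleware goes after the routes, and errors from async handlers must be forwarded with `next(err)` (Express 5 does this automatically for rejected promises). A sketch, assuming the `client` from the Basic Client section above:

```typescript
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/complete", async (req, res, next) => {
  try {
    const response = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: req.body.prompt }],
    });
    res.json({ text: response.choices[0].message.content ?? "" });
  } catch (err) {
    next(err); // forward to governanceErrorHandler
  }
});

// Error-handling middleware must be registered after the routes.
app.use(governanceErrorHandler);
```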
Best Practices
- Use environment variables for `baseURL` — never hardcode gateway addresses.
- Catch 409 at the route level — return user-friendly messages, not raw errors.
- Use the Vercel AI SDK's `streamText` — it handles SSE framing automatically through the gateway.
- Type your error envelopes — define a `PolicyError` interface for type-safe handling.
- Set timeouts — the OpenAI Node SDK accepts a `timeout` option in milliseconds.
- Log policy blocks — track which policies fire most often to tune your configuration.
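The timeout and retry knobs from the list above live on the client constructor; a sketch (the values are illustrative, not recommendations):

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.LLM_GATEWAY_URL ?? "http://localhost:41002/v1",
  apiKey: process.env.OPENAI_API_KEY,
  timeout: 30_000, // per-request timeout in milliseconds
  maxRetries: 2,   // the SDK retries transient failures with backoff
});
```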
Next steps
- LangChain Integration — governed RAG pipelines
- Streaming Patterns — SSE deep dive and chunked transfer
- Error Handling — full error envelope reference
For AI systems
- Canonical terms: OpenAI Node SDK, Vercel AI SDK, `baseURL`, `streamText`, TypeScript, Next.js API route, policy block (409), `PolicyError` interface.
- Key config: `new OpenAI({ baseURL: process.env.LLM_GATEWAY_URL ?? "http://localhost:41002/v1" })`. Vercel AI SDK: `governedProvider("gpt-4o")` with a custom base URL.
- Best next pages: LangChain Integration, Streaming Patterns, Error Handling.
For engineers
- Install:
npm install openaifor the Node SDK, ornpm install ai @ai-sdk/openaifor Vercel AI SDK. - Set
baseURLviaprocess.env.LLM_GATEWAY_URLso the same code works across environments. - Define a
PolicyErrorTypeScript interface to type the 409 error envelope for safe handling. - Use
streamTextfrom the Vercel AI SDK — it handles SSE framing automatically through the gateway. - Catch errors at the route level and return user-friendly messages; never expose raw error envelopes to the frontend.
- Set client
timeoutoption to prevent hung connections during provider outages.
For leaders
- TypeScript/Next.js is the most common stack for AI-powered web applications — Keeptrusts integrates with a one-line URL change.
- The Vercel AI SDK integration means governed AI works in serverless and edge deployments without infrastructure changes.
- Policy blocks surface as typed errors, enabling product teams to build intentional UX for governance interventions.
- Same gateway policies apply whether the SDK call comes from a Node.js backend, Next.js API route, or edge function.