Node.js SDK
Use the OpenAI Node.js SDK with Keeptrusts by pointing baseURL at the Keeptrusts gateway. All requests are intercepted and governed by your policy chain before being forwarded to the LLM provider. The integration requires no code changes beyond the base URL — every SDK feature (streaming, tool use, structured output) works as normal.
Use this page when
- You need the exact command, config, API, or integration details for Node.js SDK.
- You are wiring automation or AI retrieval and need canonical names, examples, and constraints.
- If you want a guided rollout instead of a reference page, use the linked workflow pages in Next steps.
Primary audience
- Primary: AI Agents, Technical Engineers
- Secondary: Technical Leaders
Prerequisites
- Keeptrusts gateway running locally (`kt gateway run --policy-config policy-config.yaml`)
- A policy config that declares a provider target (e.g., `openai`, `anthropic`, `google-vertex`)
- Node.js 18+ (for native `fetch`) or the `openai` / `@anthropic-ai/sdk` packages
npm install openai # OpenAI SDK
npm install @anthropic-ai/sdk # Anthropic SDK (optional)
npm install ai @ai-sdk/openai # Vercel AI SDK (optional)
Configuration
A minimal config for routing through Keeptrusts to OpenAI:
pack:
name: node-app-governance
version: 1.0.0
enabled: true
providers:
targets:
- id: openai-primary
provider: openai
model: gpt-4o
base_url: https://api.openai.com
secret_key_ref:
env: OPENAI_API_KEY
policies:
chain:
- prompt-injection
- pii-detector
- audit-logger
policy:
pii-detector:
action: redact
audit-logger:
retention_days: 30
Start the gateway:
export OPENAI_API_KEY=sk-...
kt policy lint --file policy-config.yaml
kt gateway run --policy-config policy-config.yaml
# Gateway listening on http://localhost:41002
Connection Settings
| Option | Type | Default | Description |
|---|---|---|---|
| `baseURL` | string | — | Point to `http://localhost:41002/v1` (or your deployed gateway URL). |
| `apiKey` | string | — | Pass `"any"` when the gateway holds the upstream key; pass the real key when you want the gateway to forward it per-request. |
| `defaultQuery` | object | — | Additional query parameters attached to every request. |
| `defaultHeaders` | object | — | Additional headers on every request (e.g., `x-kt-api-key` for consumer-group routing). |
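For example, the client can read the gateway URL from an environment variable so the same code runs against a local gateway or a deployed one. A minimal sketch (the `KT_GATEWAY_URL` variable name is illustrative, not a Keeptrusts convention):

```typescript
import OpenAI from "openai";

// Point the standard OpenAI client at the Keeptrusts gateway.
// KT_GATEWAY_URL is a hypothetical environment variable used for this example.
const client = new OpenAI({
  baseURL: process.env.KT_GATEWAY_URL ?? "http://localhost:41002/v1",
  apiKey: "any", // gateway holds the upstream key (see table above)
});
```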
Supported Models
The Node.js SDK works with any model that your Keeptrusts provider targets expose. Common examples:
| Model | Provider target |
|---|---|
| `gpt-4o`, `gpt-4o-mini` | `openai:chat:<model>` |
| `claude-opus-4-5`, `claude-sonnet-4-5` | `anthropic:chat:<model>` |
| `gemini-2.0-flash` | `google-vertex:chat:<model>` |
| `llama-3.1-70b` | `ollama:chat:<model>` or upstream provider |
Specify the model name in your SDK call exactly as configured in the provider target.
Client Examples
- OpenAI SDK
- Anthropic SDK
- cURL
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "http://localhost:41002/v1",
apiKey: "any", // gateway holds the upstream key
});
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What are the key principles of AI governance?" },
],
temperature: 0.7,
max_tokens: 512,
});
console.log(response.choices[0].message.content);
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic({
baseURL: "http://localhost:41002",
apiKey: "any",
});
const message = await client.messages.create({
model: "claude-opus-4-5",
max_tokens: 1024,
messages: [{ role: "user", content: "What are the key principles of AI governance?" }],
});
console.log(message.content[0].text);
curl http://localhost:41002/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer any" \
-d '{
"model": "gpt-4o",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "What are the key principles of AI governance?" }
],
"temperature": 0.7,
"max_tokens": 512
}'
Streaming
Streaming works without any change to the SDK call — Keeptrusts passes SSE chunks through after applying streaming-compatible policy checks.
- OpenAI SDK
- Vercel AI SDK
- cURL
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "http://localhost:41002/v1",
apiKey: "any",
});
const stream = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Explain the EU AI Act in plain language." }],
stream: true,
});
for await (const chunk of stream) {
const delta = chunk.choices[0]?.delta?.content;
if (delta) process.stdout.write(delta);
}
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";
const openai = createOpenAI({
baseURL: "http://localhost:41002/v1",
apiKey: "any",
});
const { textStream } = streamText({
model: openai("gpt-4o"),
prompt: "Explain the EU AI Act in plain language.",
});
for await (const text of textStream) {
process.stdout.write(text);
}
curl http://localhost:41002/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer any" \
--no-buffer \
-d '{
"model": "gpt-4o",
"messages": [{ "role": "user", "content": "Explain the EU AI Act in plain language." }],
"stream": true
}'
Advanced Configuration
Consumer Groups
Pass the `x-kt-api-key` header to identify a consumer group and apply per-consumer policies:
const client = new OpenAI({
baseURL: "http://localhost:41002/v1",
apiKey: "any",
defaultHeaders: {
"x-kt-api-key": "consumer-group-key-abc123",
},
});
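If a single process serves multiple consumer groups, the header can also be set per request: the OpenAI SDK accepts request options as a second argument to `create()`. A sketch (the group key value is a placeholder):

```typescript
// Per-request header override: one client, a different consumer group per call.
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize our data-retention policy." }],
  },
  {
    headers: { "x-kt-api-key": "consumer-group-key-def456" },
  },
);
```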
Tool Use (Function Calling)
const tools = [
{
type: "function",
function: {
name: "get_policy_status",
description: "Get the current AI policy status for a given scope",
parameters: {
type: "object",
properties: {
scope: { type: "string", description: "The policy scope (e.g., 'finance', 'hr')" },
},
required: ["scope"],
},
},
},
];
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: "What is the policy status for the finance scope?" }],
tools,
tool_choice: "auto",
});
const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
console.log("Tool called:", toolCall.function.name);
console.log("Args:", JSON.parse(toolCall.function.arguments));
}
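To complete the loop, run the function yourself and send its result back as a `tool` message so the model can produce a final answer. A sketch under the assumption that `getPolicyStatus` is your own local implementation (it is not part of Keeptrusts or the SDK):

```typescript
// Hypothetical local implementation of the declared tool.
async function getPolicyStatus(scope: string) {
  return { scope, status: "active", last_reviewed: "2025-01-15" };
}

if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments);
  const result = await getPolicyStatus(args.scope);

  // Second round trip: include the assistant's tool call and the tool result.
  const followUp = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "user", content: "What is the policy status for the finance scope?" },
      response.choices[0].message, // assistant message containing the tool call
      { role: "tool", tool_call_id: toolCall.id, content: JSON.stringify(result) },
    ],
    tools,
  });

  console.log(followUp.choices[0].message.content);
}
```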
Structured Output
import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";
const client = new OpenAI({
baseURL: "http://localhost:41002/v1",
apiKey: "any",
});
const PolicyAssessment = z.object({
risk_level: z.enum(["low", "medium", "high", "critical"]),
findings: z.array(z.string()),
recommended_action: z.string(),
});
const response = await client.beta.chat.completions.parse({
model: "gpt-4o",
messages: [
{ role: "system", content: "You are an AI governance assessor." },
{ role: "user", content: "Assess the risks of deploying an unmonitored LLM in a customer-facing chatbot." },
],
response_format: zodResponseFormat(PolicyAssessment, "policy_assessment"),
});
const assessment = response.choices[0].message.parsed;
console.log("Risk level:", assessment?.risk_level);
console.log("Findings:", assessment?.findings);
Best Practices
- Never hardcode upstream API keys in client code. Let the Keeptrusts gateway hold the keys via `secret_key_ref` in the policy config; pass `"any"` as `apiKey` in the SDK constructor.
- Use `defaultHeaders` for consumer identity rather than embedding consumer context in message content. The `x-kt-api-key` header is the supported signal for per-consumer policy routing.
- Handle policy-block errors explicitly. When a request is blocked, the gateway returns HTTP 422 with a structured error body. Catch `OpenAI.APIError` and check `error.status === 422` to distinguish policy blocks from upstream errors (see the sketch after this list).
- Set a reasonable `timeout` on the SDK constructor (e.g., `timeout: 30_000`) so your application fails fast when the gateway or upstream is unavailable rather than hanging indefinitely.
- Prefer streaming for long completions to reduce perceived latency and allow early policy-check failures to surface before the full response is buffered.
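Putting the policy-block and timeout practices together, a minimal sketch (the 422 behavior is as described above; the prompt content is illustrative):

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "any",
  timeout: 30_000, // fail fast if the gateway or upstream is unavailable
});

try {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this customer support ticket." }],
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof OpenAI.APIError && error.status === 422) {
    // Policy block: the gateway rejected the request before it reached the provider.
    console.error("Blocked by policy:", error.message);
  } else {
    throw error; // network, timeout, or upstream provider error
  }
}
```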
For AI systems
- Canonical terms: Keeptrusts gateway, Node.js SDK, OpenAI Node.js SDK, TypeScript, `baseURL`, access key, `openai` npm package.
- Integration pattern: Override `baseURL` to point at the Keeptrusts gateway (`http://localhost:41002/v1`); set `apiKey` to your Keeptrusts access key.
- Key behavior: The standard OpenAI Node.js SDK works unchanged — only `baseURL` and optionally `apiKey` are modified.
- Best next pages: Python SDK integration, Vercel AI SDK integration, Quickstart.
For engineers
- Prerequisites: Node.js 18+, `openai` npm package installed (`npm install openai`), Keeptrusts gateway running.
- Set `baseURL: "http://localhost:41002/v1"` and `apiKey` to your Keeptrusts access key (or `"any"` if auth is handled by the gateway).
- Set a reasonable `timeout` (e.g., `timeout: 30_000`) so your app fails fast when the gateway is unavailable.
- Prefer streaming (`stream: true`) for long completions to reduce perceived latency and surface early policy failures.
- TypeScript types are fully preserved — the SDK returns the same response shapes regardless of which upstream provider the gateway routes to.
- Validate: run your app and check the Keeptrusts console Events dashboard for request records.
For leaders
- Zero application code changes beyond `baseURL` — existing Node.js/TypeScript applications can adopt Keeptrusts governance instantly.
- All requests are audit-logged regardless of which upstream provider is configured, providing compliance evidence without app-side instrumentation.
- SDK timeout configuration prevents cascading failures when the gateway or upstream is degraded.
- Works with any OpenAI-compatible provider behind the gateway — switching models or providers is invisible to application code.
Next steps
- Python SDK integration — equivalent guide for Python applications
- Vercel AI SDK integration — framework-specific integration for Next.js
- OpenAI integration — gateway-side OpenAI provider configuration
- Policy configuration — policy chain reference
- Quickstart — install `kt` and run your first gateway