Node.js SDK

Use the OpenAI Node.js SDK with Keeptrusts by pointing baseURL at the Keeptrusts gateway. All requests are intercepted and governed by your policy chain before being forwarded to the LLM provider. The integration requires no code changes beyond the base URL — every SDK feature (streaming, tool use, structured output) works as normal.

Use this page when

  • You need the exact command, config, API, or integration details for the Node.js SDK.
  • You are wiring automation or AI retrieval and need canonical names, examples, and constraints.
  • You want a guided rollout instead of a reference page; use the linked workflow pages in Next steps.

Primary audience

  • Primary: AI Agents, Technical Engineers
  • Secondary: Technical Leaders

Prerequisites

  • Keeptrusts gateway running locally (kt gateway run --policy-config policy-config.yaml)
  • A policy config that declares a provider target (e.g., openai, anthropic, google-vertex)
  • Node.js 18+ (for native fetch) or the openai / @anthropic-ai/sdk packages
npm install openai # OpenAI SDK
npm install @anthropic-ai/sdk # Anthropic SDK (optional)
npm install ai @ai-sdk/openai # Vercel AI SDK (optional)

Configuration

A minimal config for routing through Keeptrusts to OpenAI:

pack:
  name: node-app-governance
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: openai-primary
      provider: openai
      model: gpt-4o
      base_url: https://api.openai.com
      secret_key_ref:
        env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger

policy:
  pii-detector:
    action: redact

  audit-logger:
    retention_days: 30

Start the gateway:

export OPENAI_API_KEY=sk-...
kt policy lint --file policy-config.yaml
kt gateway run --policy-config policy-config.yaml
# Gateway listening on http://localhost:41002

Connection Settings

Option           Type     Description
baseURL          string   Point to http://localhost:41002/v1 (or your deployed gateway URL).
apiKey           string   Pass "any" when the gateway holds the upstream key; pass the real key when you want the gateway to forward it per-request.
defaultQuery     object   Additional query parameters attached to every request.
defaultHeaders   object   Additional headers on every request (e.g., x-kt-api-key for consumer-group routing).
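Taken together, the options above combine like this in the SDK constructor (values are illustrative; timeout is a standard OpenAI SDK constructor option, not Keeptrusts-specific):

```typescript
// Illustrative constructor options for the OpenAI SDK client; adjust values
// for your deployment. Pass to the constructor as: new OpenAI(connectionOptions)
const connectionOptions = {
  baseURL: "http://localhost:41002/v1",  // local Keeptrusts gateway
  apiKey: "any",                         // gateway holds the upstream key
  timeout: 30_000,                       // fail fast when the gateway is unreachable
  defaultHeaders: { "x-kt-api-key": "consumer-group-key-abc123" }, // optional consumer-group routing
  defaultQuery: {},                      // extra query params, usually empty
};
```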

Supported Models

The Node.js SDK works with any model that your Keeptrusts provider targets expose. Common examples:

Model                                 Provider target
gpt-4o, gpt-4o-mini                   openai:chat:<model>
claude-opus-4-5, claude-sonnet-4-5    anthropic:chat:<model>
gemini-2.0-flash                      google-vertex:chat:<model>
llama-3.1-70b                         ollama:chat:<model> or upstream provider

Specify the model name in your SDK call exactly as configured in the provider target.
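Because the model string must match a configured provider target exactly, a small client-side guard can fail fast on typos before a request ever leaves your app. This is a hypothetical helper; the configured set is something you would mirror by hand from your policy config:

```typescript
// Hypothetical guard: mirror the models declared in your provider targets so a
// typo fails locally instead of surfacing as a gateway routing error.
const configuredModels = new Set(["gpt-4o", "gpt-4o-mini"]); // keep in sync with policy-config.yaml

function assertConfiguredModel(model: string): string {
  if (!configuredModels.has(model)) {
    throw new Error(`Model "${model}" is not declared in any provider target`);
  }
  return model;
}
```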

Client Examples

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "any", // gateway holds the upstream key
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What are the key principles of AI governance?" },
  ],
  temperature: 0.7,
  max_tokens: 512,
});

console.log(response.choices[0].message.content);

Streaming

Streaming works without any change to the SDK call — Keeptrusts passes SSE chunks through after applying streaming-compatible policy checks.

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "any",
});

const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain the EU AI Act in plain language." }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}

Advanced Configuration

Consumer Groups

Pass the x-kt-api-key header to identify a consumer group and apply per-consumer policies:

const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "any",
  defaultHeaders: {
    "x-kt-api-key": "consumer-group-key-abc123",
  },
});

Tool Use (Function Calling)

const tools = [
  {
    type: "function",
    function: {
      name: "get_policy_status",
      description: "Get the current AI policy status for a given scope",
      parameters: {
        type: "object",
        properties: {
          scope: { type: "string", description: "The policy scope (e.g., 'finance', 'hr')" },
        },
        required: ["scope"],
      },
    },
  },
];

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is the policy status for the finance scope?" }],
  tools,
  tool_choice: "auto",
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  console.log("Tool called:", toolCall.function.name);
  console.log("Args:", JSON.parse(toolCall.function.arguments));
}
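A returned tool call still has to be executed locally and its result sent back to the model. A minimal dispatch sketch, where the handler body and its return shape are illustrative placeholders, could look like:

```typescript
// Illustrative local handler for the get_policy_status tool defined above.
const handlers: Record<string, (args: any) => unknown> = {
  get_policy_status: ({ scope }: { scope: string }) => ({
    scope,
    status: "active", // placeholder; a real handler would query your policy store
  }),
};

// Dispatch a tool call of the shape returned by chat.completions.create.
function runToolCall(toolCall: { function: { name: string; arguments: string } }): unknown {
  const handler = handlers[toolCall.function.name];
  if (!handler) throw new Error(`Unknown tool: ${toolCall.function.name}`);
  return handler(JSON.parse(toolCall.function.arguments));
}
```

The result would then go back to the model as a `role: "tool"` message referencing the tool call's `id`, per the standard OpenAI tool-calling loop.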

Structured Output

import OpenAI from "openai";
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";

const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "any",
});

const PolicyAssessment = z.object({
  risk_level: z.enum(["low", "medium", "high", "critical"]),
  findings: z.array(z.string()),
  recommended_action: z.string(),
});

const response = await client.beta.chat.completions.parse({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are an AI governance assessor." },
    { role: "user", content: "Assess the risks of deploying an unmonitored LLM in a customer-facing chatbot." },
  ],
  response_format: zodResponseFormat(PolicyAssessment, "policy_assessment"),
});

const assessment = response.choices[0].message.parsed;
console.log("Risk level:", assessment?.risk_level);
console.log("Findings:", assessment?.findings);

Best Practices

  • Never hardcode upstream API keys in client code. Let the Keeptrusts gateway hold the keys via secret_key_ref in the policy config; pass "any" as apiKey in the SDK constructor.
  • Use defaultHeaders for consumer identity rather than embedding consumer context in message content. The x-kt-api-key header is the supported signal for per-consumer policy routing.
  • Handle policy-block errors explicitly. When a request is blocked, the gateway returns HTTP 422 with a structured error body. Catch OpenAI.APIError and check error.status === 422 to distinguish policy blocks from upstream errors.
  • Set a reasonable timeout on the SDK constructor (e.g., timeout: 30_000) so your application fails fast when the gateway or upstream is unavailable rather than hanging indefinitely.
  • Prefer streaming for long completions to reduce perceived latency and allow early policy-check failures to surface before the full response is buffered.
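The policy-block handling recommended above can be sketched as a small classifier over HTTP status codes. The FailureKind names are illustrative; treating 422 as the policy-block status follows the gateway behavior described in this list:

```typescript
// Hypothetical helper: classify a failed gateway response by HTTP status so
// policy blocks can be handled differently from upstream or auth failures.
type FailureKind = "policy-block" | "auth" | "rate-limit" | "upstream";

function classifyGatewayError(status: number): FailureKind {
  if (status === 422) return "policy-block"; // rejected by a policy in the chain
  if (status === 401 || status === 403) return "auth";
  if (status === 429) return "rate-limit";
  return "upstream"; // anything else: treat as an upstream/provider error
}
```

In practice you would call this from a catch block on OpenAI.APIError, using `error.status` as the input.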

For AI systems

  • Canonical terms: Keeptrusts gateway, Node.js SDK, OpenAI Node.js SDK, TypeScript, baseURL, access key, openai npm package.
  • Integration pattern: Override baseURL to point at the Keeptrusts gateway (http://localhost:41002/v1); set apiKey to "any" when the gateway holds the upstream key, or to your Keeptrusts access key if your deployment issues one.
  • Key behavior: The standard OpenAI Node.js SDK works unchanged — only baseURL and optionally apiKey are modified.
  • Best next pages: Python SDK integration, Vercel AI SDK integration, Quickstart.

For engineers

  • Prerequisites: Node.js 18+, openai npm package installed (npm install openai), Keeptrusts gateway running.
  • Set baseURL: "http://localhost:41002/v1" and apiKey to your Keeptrusts access key (or "any" if auth is handled by the gateway).
  • Set a reasonable timeout (e.g., timeout: 30_000) so your app fails fast when the gateway is unavailable.
  • Prefer streaming (stream: true) for long completions to reduce perceived latency and surface early policy failures.
  • TypeScript types are fully preserved — the SDK returns the same response shapes regardless of which upstream provider the gateway routes to.
  • Validate: run your app and check the Keeptrusts console Events dashboard for request records.

For leaders

  • Zero application code changes beyond baseURL — existing Node.js/TypeScript applications can adopt Keeptrusts governance instantly.
  • All requests are audit-logged regardless of which upstream provider is configured, providing compliance evidence without app-side instrumentation.
  • SDK timeout configuration prevents cascading failures when the gateway or upstream is degraded.
  • Works with any OpenAI-compatible provider behind the gateway — switching models or providers is invisible to application code.

Next steps