
ChatGPT Teams / Enterprise

ChatGPT Teams and Enterprise plans include API access through OpenAI's platform. By pointing that API traffic through the Keeptrusts gateway, you enforce policy controls — PII redaction, prompt-injection blocking, content filtering, and audit logging — on every programmatic conversation before it reaches OpenAI and before the response reaches your application.

This page covers the API-level integration pattern. ChatGPT's browser-based chat interface does not support custom API routing; governance for browser sessions requires network-level controls outside this guide.

Use this page when

  • You need to route OpenAI API calls from ChatGPT Teams or Enterprise through the Keeptrusts gateway.
  • You are building internal tools or automations that use the OpenAI API for ChatGPT-style conversations.
  • For general OpenAI provider configuration, see OpenAI integration instead.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • An OpenAI API key from your ChatGPT Teams or Enterprise organization
  • Keeptrusts CLI (kt) installed and on your PATH
  • OPENAI_API_KEY exported in your shell or injected via your secrets manager

Configuration

Gateway policy config

pack:
  name: chatgpt-enterprise-gateway
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: chatgpt-enterprise
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - audit-logger
  policy:
    prompt-injection:
      threshold: 0.8
      action: block
    pii-detector:
      action: redact
      entities:
        - EMAIL
        - PHONE
        - SSN
        - CREDIT_CARD
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
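Every name in chain must have a matching block under policy; a chained policy with no configuration is a common editing mistake. The sketch below is a local sanity check written against the dict a YAML parser would produce for the config above — it is an illustration, not a Keeptrusts tool:

```python
# Sanity-check a policy config: every policy named in `chain`
# should have a configuration block under `policy`.
# This dict mirrors what a YAML parser would return for the config above.

config = {
    "policies": {
        "chain": ["prompt-injection", "pii-detector", "audit-logger"],
        "policy": {
            "prompt-injection": {"threshold": 0.8, "action": "block"},
            "pii-detector": {
                "action": "redact",
                "entities": ["EMAIL", "PHONE", "SSN", "CREDIT_CARD"],
            },
            "audit-logger": {
                "immutable": True,
                "retention_days": 365,
                "log_all_access": True,
            },
        },
    }
}

def unconfigured_policies(cfg: dict) -> list:
    """Return chained policy names that have no config block."""
    policies = cfg.get("policies", {})
    configured = policies.get("policy", {})
    return [name for name in policies.get("chain", []) if name not in configured]

print(unconfigured_policies(config))  # an empty list means the chain is consistent
```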

Cost-optimised variant

Use gpt-4o-mini for high-volume internal chat tools where latency and cost matter more than peak reasoning quality:

pack:
  name: chatgpt-enterprise-mini
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: chatgpt-mini
      provider: openai:chat:gpt-4o-mini
      secret_key_ref:
        env: OPENAI_API_KEY

policies:
  chain:
    - audit-logger
  policy:
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true

Setup Steps

  1. Export your API key:

     export OPENAI_API_KEY="sk-your-enterprise-api-key"

  2. Save the policy config to policy-config.yaml.

  3. Start the gateway:

     kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

  4. Point your application at the gateway instead of https://api.openai.com/v1:

     from openai import OpenAI

     client = OpenAI(
         base_url="http://localhost:41002/v1",
         api_key="unused",  # the gateway supplies the real key via secret_key_ref
     )

     response = client.chat.completions.create(
         model="gpt-4o",
         messages=[
             {"role": "system", "content": "You are a helpful enterprise assistant."},
             {"role": "user", "content": "Summarise our Q3 revenue trends."},
         ],
     )
     print(response.choices[0].message.content)

For a hosted gateway, replace the base URL with https://gateway.keeptrusts.com/v1.
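Switching between the local and hosted gateway is only a base-URL change, so it is worth keeping in one place. A minimal helper, assuming an environment variable we have named KT_GATEWAY_URL (not part of the Keeptrusts SDK):

```python
import os

# Pick the gateway base URL from the environment.
# KT_GATEWAY_URL is an illustrative name of our choosing, not a
# Keeptrusts-defined variable; the default matches the local gateway above.
def gateway_base_url(default: str = "http://localhost:41002/v1") -> str:
    """Return the hosted gateway URL if configured, else the local default."""
    return os.environ.get("KT_GATEWAY_URL", default)
```

Pass the result as base_url when constructing the OpenAI client, so the same code runs against a local gateway in development and the hosted gateway in production.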

Verification

curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello from the governed gateway."}
    ]
  }'

A successful response confirms the gateway is forwarding to OpenAI and applying policies. Check the Keeptrusts console Events page to verify the audit log entry.

Policy | Purpose | Recommended setting
prompt-injection | Block adversarial prompts from internal tools | threshold: 0.8, action: block
pii-detector | Redact personal data before it reaches OpenAI | action: redact, entities: EMAIL, PHONE, SSN
content-filter | Block restricted topics in enterprise chat | action: block, configure categories
audit-logger | Immutable audit trail for compliance | immutable: true, retention_days: 365
cost-attribution | Track spend per team or project | Tag requests with team/project metadata
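To make action: redact concrete: conceptually, the gateway replaces each detected entity with a placeholder before the prompt leaves your network. The sketch below imitates that for the EMAIL entity with a naive regex — the gateway's actual detectors are more robust, and the placeholder format shown is an assumption:

```python
import re

# Naive illustration of what `action: redact` does for the EMAIL entity.
# Real PII detectors are more sophisticated; "[EMAIL]" as the placeholder
# is our assumption, not the gateway's documented output format.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_email(text: str) -> str:
    """Replace every email-shaped token with a placeholder."""
    return EMAIL_RE.sub("[EMAIL]", text)

print(redact_email("Contact jane.doe@example.com about Q3."))
# → Contact [EMAIL] about Q3.
```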

Troubleshooting

Symptom | Cause | Fix
401 Unauthorized from upstream | Invalid or expired OpenAI API key | Verify OPENAI_API_KEY is set and valid for your org
429 Too Many Requests | OpenAI rate limit hit | Check your Enterprise tier rate limits; add a fallback target
Gateway returns 403 | Policy chain blocked the request | Review the policy violation in the console Events page
No audit events appear | Gateway not connected to control plane | Verify --api-url points to your Keeptrusts API
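For scripts that call the gateway, the status-code rows above collapse into a small triage helper. The messages are our paraphrase of the table, not gateway output:

```python
# Map HTTP status codes from the gateway/upstream to the likely cause,
# following the troubleshooting table above. Wording is our paraphrase.
TRIAGE = {
    401: "Invalid or expired OpenAI API key: verify OPENAI_API_KEY for your org.",
    403: "Policy chain blocked the request: review the console Events page.",
    429: "OpenAI rate limit hit: check tier limits or add a fallback target.",
}

def triage(status: int) -> str:
    """Return a likely cause and fix for a gateway error status."""
    return TRIAGE.get(status, f"Unexpected status {status}: check gateway logs.")

print(triage(403))
```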

For AI systems

  • Canonical terms: Keeptrusts gateway, ChatGPT Enterprise, ChatGPT Teams, OpenAI API, policy-config.yaml, provider: "openai".
  • Config field names: provider, secret_key_ref.env: "OPENAI_API_KEY", base_url.
  • Key behavior: Keeptrusts proxies OpenAI API calls, applying policy enforcement and audit logging before requests reach OpenAI.
  • Best next pages: OpenAI integration, Policy controls catalog, Quickstart.

For engineers

  • Start command: kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
  • Validate: send a chat completion request to http://localhost:41002/v1/chat/completions and check the Events page.
  • ChatGPT Enterprise organization API keys work identically to standard OpenAI keys for gateway routing.
  • Use gpt-4o for quality-sensitive workloads and gpt-4o-mini for high-throughput internal tools.

For leaders

  • Routing ChatGPT Enterprise API traffic through the gateway provides a complete audit trail of every AI interaction — critical for compliance in regulated industries.
  • PII redaction prevents sensitive employee and customer data from being sent to OpenAI.
  • Cost attribution lets you track AI spend per team, project, or business unit without changing how teams use the API.
  • The browser-based ChatGPT interface cannot be routed through the gateway; API-level integration covers programmatic and tool-based usage.

Next steps