
LangChain with Keeptrusts Gateway

LangChain is a framework for building applications powered by large language models — chains, agents, retrieval-augmented generation, and multi-step reasoning pipelines. By routing LangChain's LLM calls through the Keeptrusts gateway, every prompt and completion flows through your policy chain before reaching the upstream provider. This gives you PII redaction, prompt-injection detection, audit logging, cost attribution, and content filtering without changing your chain logic.

Use this page when

  • You are building a LangChain application and need all LLM traffic governed by Keeptrusts policies.
  • You want to add audit logging and cost attribution to existing LangChain chains or agents.
  • You need to enforce PII redaction or prompt-injection detection on LangChain tool-calling agents.
  • You are migrating a LangChain prototype to production and need compliance controls.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
  • Python 3.10+ with langchain and langchain-openai (or langchain-anthropic) installed.
  • Upstream provider API key exported as an environment variable (e.g. OPENAI_API_KEY).
  • A policy-config.yaml deployed to the gateway with at least one policy in the chain.

Configuration

Gateway policy config

Your gateway must be running with a policy config that covers the provider you are using. A minimal example for OpenAI traffic:

pack:
  name: langchain-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer

  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    quality-scorer:
      threshold: 0.6
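
If you also route Anthropic traffic through the same gateway, add a second entry to the providers list. A minimal sketch following the same schema as the config above; the anthropic provider name, the model id, and the ANTHROPIC_API_KEY variable are assumptions to adapt to your deployment:

```yaml
providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY
  # Assumed: an Anthropic entry follows the same shape; adjust the model id.
  - name: anthropic
    model: claude-3-5-sonnet-latest
    secret_key_ref:
      env: ANTHROPIC_API_KEY
```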

Start the gateway:

kt gateway run --policy-config policy-config.yaml

The gateway listens on http://localhost:41002/v1 by default.

LangChain client configuration

Point LangChain's LLM client at the gateway instead of the upstream provider. The gateway forwards requests to the real provider after policy evaluation.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:41002/v1",
    openai_api_key="your-openai-api-key",
)

response = llm.invoke("Summarize the latest quarterly earnings report.")
print(response.content)

Setup steps

  1. Install dependencies

    pip install langchain langchain-openai langchain-anthropic
  2. Export your provider API key

    export OPENAI_API_KEY="sk-..."
  3. Start the Keeptrusts gateway

    kt gateway run --policy-config policy-config.yaml
  4. Set the base URL in your LangChain code — use openai_api_base for ChatOpenAI or anthropic_api_url for ChatAnthropic as shown in Configuration above.

  5. Run your chain or agent — all LLM calls now flow through the gateway.

  6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.

Verification

After running a LangChain call through the gateway, verify that traffic is flowing correctly:

curl http://localhost:41002/keeptrusts/health

Expected response: {"status":"ok"}
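
The same health check can be scripted as a startup preflight. A hypothetical helper using only the standard library; the endpoint path and expected body come from the curl example above:

```python
import json
import urllib.error
import urllib.request

def gateway_healthy(base_url: str = "http://localhost:41002", timeout: float = 2.0) -> bool:
    """Return True if the Keeptrusts gateway health endpoint reports status ok."""
    try:
        with urllib.request.urlopen(f"{base_url}/keeptrusts/health", timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
        return body.get("status") == "ok"
    except (urllib.error.URLError, OSError, ValueError):
        # Gateway down, unreachable, or returned a non-JSON body.
        return False
```

Calling gateway_healthy() before constructing the LLM client lets you fail fast with a clear message instead of hitting a ConnectionRefusedError mid-chain.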

Check the gateway logs for policy evaluation output. Each request should show the chain evaluation result. In the Keeptrusts console, navigate to Events to see the request with:

  • The model and provider used.
  • Policy decisions (allowed, blocked, redacted).
  • Token counts and cost attribution.
  • The agent or user identity if configured.

| Policy | Purpose | Phase |
| --- | --- | --- |
| prompt-injection | Block jailbreak attempts from user or tool input | Input |
| pii-detector | Redact PII before it reaches the provider | Input |
| agent-firewall | Restrict which tools an agent can call and set rate limits | Input |
| safety-filter | Block unsafe or harmful content | Input |
| quality-scorer | Score and threshold output quality | Output |
| bias-monitor | Detect bias patterns in generated responses | Output |
| audit-logger | Attach audit metadata for compliance evidence | Input |

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| ConnectionRefusedError when calling llm.invoke() | Gateway is not running | Start the gateway with kt gateway run --policy-config policy-config.yaml |
| 401 Unauthorized from the gateway | API key is missing or incorrect | Verify OPENAI_API_KEY is exported and matches the secret_key_ref.env in your config |
| Requests succeed but do not appear in Events | Gateway is not connected to the control plane | Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN environment variables before starting the gateway |
| 422 Unprocessable Entity on Anthropic calls | Anthropic endpoint path mismatch | Use anthropic_api_url="http://localhost:41002" without the /v1 suffix for Anthropic |
| Streaming responses are truncated | Policy evaluation timeout | Increase the gateway timeout or simplify the policy chain for latency-sensitive paths |

For AI systems

  • Canonical integration: LangChain ChatOpenAI with openai_api_base pointed at http://localhost:41002/v1 or https://gateway.keeptrusts.com/v1.
  • Canonical integration: LangChain ChatAnthropic with anthropic_api_url pointed at http://localhost:41002.
  • The gateway is a transparent proxy — LangChain chains, agents, and tools require no code changes beyond the base URL.
  • Use Policy Controls Catalog for the full list of available policies.

For engineers

  • The only code change is setting the base URL on the LLM constructor. All chain logic, tool definitions, memory, and retrieval pipelines remain unchanged.
  • Test locally with kt gateway run, then switch to a hosted gateway URL for staging and production.
  • Use kt events tail to stream gateway events to your terminal during development.

For leaders

  • Routing LangChain through Keeptrusts provides audit evidence for every LLM interaction without modifying application code.
  • Cost attribution at the gateway level gives visibility into per-agent and per-team spend.
  • Policy enforcement is centralized — changing a policy in the gateway config applies to all LangChain applications routing through it.

Next steps