LangChain with Keeptrusts Gateway
LangChain is a framework for building applications powered by large language models — chains, agents, retrieval-augmented generation, and multi-step reasoning pipelines. By routing LangChain's LLM calls through the Keeptrusts gateway, every prompt and completion flows through your policy chain before reaching the upstream provider. This gives you PII redaction, prompt-injection detection, audit logging, cost attribution, and content filtering without changing your chain logic.
Use this page when
- You are building a LangChain application and need all LLM traffic governed by Keeptrusts policies.
- You want to add audit logging and cost attribution to existing LangChain chains or agents.
- You need to enforce PII redaction or prompt-injection detection on LangChain tool-calling agents.
- You are migrating a LangChain prototype to production and need compliance controls.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
- Python 3.10+ with `langchain` and `langchain-openai` (or `langchain-anthropic`) installed.
- Upstream provider API key exported as an environment variable (e.g. `OPENAI_API_KEY`).
- A `policy-config.yaml` deployed to the gateway with at least one policy in the chain.
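A quick way to confirm the Python-side prerequisites before moving on (a minimal sketch, assuming OpenAI as the upstream provider):

```python
import os
import sys

# Python 3.10+ is required for the examples on this page.
assert sys.version_info >= (3, 10), "Python 3.10+ required"

# The provider key must be exported before the gateway or the client can use it.
assert os.environ.get("OPENAI_API_KEY"), "export OPENAI_API_KEY first"

# These imports fail if langchain / langchain-openai are not installed.
import langchain
import langchain_openai
```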
Configuration
Gateway policy config
Your gateway must be running with a policy config that covers the provider you are using. A minimal example for OpenAI traffic:
```yaml
pack:
  name: langchain-gateway
  version: "1.0"
providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY
policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    quality-scorer:
      threshold: 0.6
```
Start the gateway:
```bash
kt gateway run --policy-config policy-config.yaml
```
The gateway listens on http://localhost:41002/v1 by default.
LangChain client configuration
Point LangChain's LLM client at the gateway instead of the upstream provider. The gateway forwards requests to the real provider after policy evaluation.
Using `ChatOpenAI`:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:41002/v1",
    openai_api_key="your-openai-api-key",
)

response = llm.invoke("Summarize the latest quarterly earnings report.")
print(response.content)
```
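If you prefer not to hard-code the key, the same client can read the variable you exported in Prerequisites; a minimal variation of the example above:

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:41002/v1",
    # Reuses the OPENAI_API_KEY exported for the gateway's secret_key_ref.
    openai_api_key=os.environ["OPENAI_API_KEY"],
)
```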
Using `ChatAnthropic`:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    anthropic_api_url="http://localhost:41002",
    anthropic_api_key="your-anthropic-api-key",
)

response = llm.invoke("Draft a privacy policy summary for our SaaS product.")
print(response.content)
```
For a hosted Keeptrusts gateway, replace the base URL:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="https://gateway.keeptrusts.com/v1",
    openai_api_key="your-openai-api-key",
)
```
Setup steps
1. Install dependencies: `pip install langchain langchain-openai langchain-anthropic`
2. Export your provider API key: `export OPENAI_API_KEY="sk-..."`
3. Start the Keeptrusts gateway: `kt gateway run --policy-config policy-config.yaml`
4. Set the base URL in your LangChain code — use `openai_api_base` for `ChatOpenAI` or `anthropic_api_url` for `ChatAnthropic`, as shown in Configuration above.
5. Run your chain or agent — all LLM calls now flow through the gateway (see the sketch after this list).
6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.
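To illustrate step 5, here is a minimal LCEL chain pointed at the gateway. The chain logic is ordinary LangChain; only the base URL on the LLM is gateway-specific:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Only openai_api_base is gateway-specific; the key is read from OPENAI_API_KEY.
llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:41002/v1",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise release-notes writer."),
    ("human", "Summarize these changes for customers:\n\n{changes}"),
])

# Ordinary LCEL composition; every model call flows through the gateway.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"changes": "Added SSO support and fixed two billing bugs."}))
```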
Verification
After running a LangChain call through the gateway, verify that traffic is flowing correctly:
```bash
curl http://localhost:41002/keeptrusts/health
```
Expected response: `{"status":"ok"}`
Check the gateway logs for policy evaluation output. Each request should show the chain evaluation result. In the Keeptrusts console, navigate to Events to see the request with:
- The model and provider used.
- Policy decisions (allowed, blocked, redacted).
- Token counts and cost attribution.
- The agent or user identity if configured.
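If you prefer to script the check, here is a small sketch using the `requests` package (not installed by the setup steps above) that hits the health endpoint and then sends one governed request:

```python
import requests
from langchain_openai import ChatOpenAI

# 1. Health check, equivalent to the curl command above.
health = requests.get("http://localhost:41002/keeptrusts/health", timeout=5)
print(health.json())  # expected: {"status": "ok"}

# 2. One test request through the gateway; it should then show up in Events.
llm = ChatOpenAI(model="gpt-4o", openai_api_base="http://localhost:41002/v1")
print(llm.invoke("Reply with the single word: pong").content)
```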
Recommended policies
| Policy | Purpose | Phase |
|---|---|---|
| `prompt-injection` | Block jailbreak attempts from user or tool input | Input |
| `pii-detector` | Redact PII before it reaches the provider | Input |
| `agent-firewall` | Restrict which tools an agent can call and set rate limits | Input |
| `safety-filter` | Block unsafe or harmful content | Input |
| `quality-scorer` | Score and threshold output quality | Output |
| `bias-monitor` | Detect bias patterns in generated responses | Output |
| `audit-logger` | Attach audit metadata for compliance evidence | Input |
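Policies such as `prompt-injection` and `agent-firewall` also cover tool-calling agents, since the model's tool-call requests travel through the same gateway endpoint. A minimal tool-calling sketch; the `lookup_order` tool is illustrative only:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def lookup_order(order_id: str) -> str:
    """Look up the shipping status of an order by its ID."""
    return f"Order {order_id} shipped yesterday."


llm = ChatOpenAI(model="gpt-4o", openai_api_base="http://localhost:41002/v1")

# The tool-selection round trip is a normal chat completion, so the gateway's
# input policies see the user message and any tool results you send back.
llm_with_tools = llm.bind_tools([lookup_order])

response = llm_with_tools.invoke("Where is order 4521?")
print(response.tool_calls)
```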
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `ConnectionRefusedError` when calling `llm.invoke()` | Gateway is not running | Start the gateway with `kt gateway run --policy-config policy-config.yaml` |
| `401 Unauthorized` from the gateway | API key is missing or incorrect | Verify `OPENAI_API_KEY` is exported and matches the `secret_key_ref.env` in your config |
| Requests succeed but do not appear in Events | Gateway is not connected to the control plane | Set `KEEPTRUSTS_API_URL` and `KEEPTRUSTS_GATEWAY_TOKEN` environment variables before starting the gateway |
| `422 Unprocessable Entity` on Anthropic calls | Anthropic endpoint path mismatch | Use `anthropic_api_url="http://localhost:41002"` without the `/v1` suffix for Anthropic |
| Streaming responses are truncated | Policy evaluation timeout | Increase the gateway timeout or simplify the policy chain for latency-sensitive paths |
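Streaming also goes through the gateway; the last row above covers the main caveat. A minimal streaming sketch:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", openai_api_base="http://localhost:41002/v1")

# If streamed output is cut off, see the policy-evaluation-timeout row above.
for chunk in llm.stream("List three benefits of centralized LLM policy enforcement."):
    print(chunk.content, end="", flush=True)
print()
```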
For AI systems
- Canonical integration: LangChain `ChatOpenAI` with `openai_api_base` pointed at `http://localhost:41002/v1` or `https://gateway.keeptrusts.com/v1`.
- Canonical integration: LangChain `ChatAnthropic` with `anthropic_api_url` pointed at `http://localhost:41002`.
- The gateway is a transparent proxy — LangChain chains, agents, and tools require no code changes beyond the base URL.
- Use the Policy Controls Catalog for the full list of available policies.
For engineers
- The only code change is setting the base URL on the LLM constructor. All chain logic, tool definitions, memory, and retrieval pipelines remain unchanged.
- Test locally with `kt gateway run`, then switch to a hosted gateway URL for staging and production.
- Use `kt events tail` to stream gateway events to your terminal during development.
For leaders
- Routing LangChain through Keeptrusts provides audit evidence for every LLM interaction without modifying application code.
- Cost attribution at the gateway level gives visibility into per-agent and per-team spend.
- Policy enforcement is centralized — changing a policy in the gateway config applies to all LangChain applications routing through it.
Next steps
- Quickstart — set up your first gateway and policy config.
- Policy Controls Catalog — full inventory of available policies.
- Events and Traces — understand the audit trail.
- Gateway Runtime Features — advanced gateway capabilities.
- Agents — register agent identities for per-agent policy scoping.