PydanticAI with Keeptrusts Gateway
PydanticAI is a Python agent framework built by the Pydantic team that brings type safety, structured outputs, and dependency injection to LLM-powered applications. By routing PydanticAI's model calls through the Keeptrusts gateway, every agent interaction passes through your policy chain — prompt-injection detection, PII redaction, audit logging, cost attribution, and content filtering — without losing PydanticAI's type-safe guarantees.
Use this page when
- You are building a PydanticAI agent and need all LLM calls governed by Keeptrusts policies.
- You want audit logging and cost attribution for PydanticAI structured output workflows.
- You need compliance controls on agents that use tools and dependency injection.
- You are deploying PydanticAI agents to production with governance requirements.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
- Python 3.10+ with `pydantic-ai` installed.
- Upstream provider API key exported as an environment variable (e.g. `OPENAI_API_KEY`).
- A `policy-config.yaml` deployed to the gateway.
Configuration
Gateway policy config
A minimal config for PydanticAI traffic:
```yaml
pack:
  name: pydantic-ai-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    quality-scorer:
      threshold: 0.6
```
Start the gateway:
```bash
kt gateway run --policy-config policy-config.yaml
```
PydanticAI agent configuration
PydanticAI's OpenAI model provider accepts a `base_url` parameter. Point it at the Keeptrusts gateway:
**OpenAI provider**
```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    "gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

agent = Agent(
    model,
    system_prompt="You are a compliance analyst. Provide structured risk assessments.",
)

result = agent.run_sync("Assess GDPR compliance risks for storing user emails in a US datacenter.")
print(result.data)
```
**Structured output**

```python
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

class RiskAssessment(BaseModel):
    risk_level: str
    findings: list[str]
    recommendations: list[str]

model = OpenAIModel(
    "gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

agent = Agent(
    model,
    result_type=RiskAssessment,
    system_prompt="Analyze regulatory compliance risks and return structured findings.",
)

result = agent.run_sync("Review our data retention policy for CCPA compliance.")
print(f"Risk: {result.data.risk_level}")
for finding in result.data.findings:
    print(f"  - {finding}")
```
**Hosted gateway**

```python
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    "gpt-4o",
    base_url="https://gateway.keeptrusts.com/v1",
    api_key="your-openai-api-key",
)
```
Agent with tools
PydanticAI tools work unchanged when the gateway is configured:
```python
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    "gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

agent = Agent(model, system_prompt="You help users check compliance status.")

@agent.tool
def check_compliance_status(ctx: RunContext, regulation: str) -> str:
    """Check compliance status for a given regulation."""
    statuses = {"gdpr": "compliant", "hipaa": "in-progress", "sox": "non-compliant"}
    return statuses.get(regulation.lower(), "unknown")

result = agent.run_sync("What is our HIPAA compliance status?")
print(result.data)
```
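Because PydanticAI tools are plain Python functions, the lookup logic above can be unit-tested without running the agent, the gateway, or any LLM call. A minimal sketch (the `lookup_compliance_status` helper and its hard-coded statuses are illustrative, mirroring the example above):

```python
# Illustrative: the tool's lookup logic factored into a plain function so it
# can be tested offline. Replace the dict with your real compliance data source.
STATUSES = {"gdpr": "compliant", "hipaa": "in-progress", "sox": "non-compliant"}

def lookup_compliance_status(regulation: str) -> str:
    """Return the tracked status for a regulation, or 'unknown'."""
    return STATUSES.get(regulation.lower(), "unknown")

# Quick checks, no network required:
assert lookup_compliance_status("HIPAA") == "in-progress"
assert lookup_compliance_status("pci-dss") == "unknown"
```

Keeping tool bodies thin wrappers over testable functions also makes it easier to reason about what the gateway does and does not see: only the LLM call that triggers the tool passes through the policy chain.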
Setup steps
1. Install dependencies: `pip install pydantic-ai`
2. Export your provider API key: `export OPENAI_API_KEY="sk-..."`
3. Start the Keeptrusts gateway: `kt gateway run --policy-config policy-config.yaml`
4. Create an `OpenAIModel` with `base_url` pointing at the gateway and pass it to your `Agent`.
5. Run your agent — all LLM calls flow through the gateway.
6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.
Verification
Check gateway health:
```bash
curl http://localhost:41002/keeptrusts/health
```
Run a test agent call and confirm:
- The gateway logs show policy chain evaluation for the request.
- The Keeptrusts console Events page shows the request with model, tokens, cost, and policy decisions.
- Structured outputs are returned correctly — the gateway does not interfere with JSON mode or function calling.
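The curl health check above can also be scripted as part of a deployment preflight. A minimal sketch using only the Python standard library; `gateway_healthy` and the timeout value are illustrative, and the health path is the one shown above:

```python
import urllib.error
import urllib.request

def gateway_healthy(base: str = "http://localhost:41002", timeout: float = 2.0) -> bool:
    """Return True if the gateway health endpoint responds with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base}/keeptrusts/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Gateway not running, unreachable, or timed out.
        return False

if __name__ == "__main__":
    print("gateway up" if gateway_healthy() else "gateway unreachable")
```

Running this before `agent.run_sync()` turns the `ConnectionError` symptom from the troubleshooting table into an explicit, earlier failure.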
Recommended policies
| Policy | Purpose | Phase |
|---|---|---|
| `prompt-injection` | Block jailbreak attempts in agent prompts | Input |
| `pii-detector` | Redact PII before prompts reach the provider | Input |
| `agent-firewall` | Restrict tool access and enforce rate limits | Input |
| `safety-filter` | Block harmful content | Input |
| `quality-scorer` | Score and threshold response quality | Output |
| `bias-monitor` | Detect bias in structured outputs | Output |
| `audit-logger` | Attach audit metadata for compliance evidence | Input |
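Assuming the same pack schema as the minimal config above, a chain that enables the recommended policies might look like the fragment below. Policy-specific options for `agent-firewall`, `safety-filter`, `bias-monitor`, and `audit-logger` are not shown here; consult the Policy Controls Catalog for their actual settings.

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - agent-firewall
    - safety-filter
    - audit-logger
    - quality-scorer
    - bias-monitor
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    quality-scorer:
      threshold: 0.6
```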
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `ConnectionError` on `agent.run_sync()` | Gateway is not running | Start with `kt gateway run --policy-config policy-config.yaml` |
| `401 Unauthorized` | API key mismatch | Verify `OPENAI_API_KEY` matches `secret_key_ref.env` in the gateway config |
| Structured output validation fails | Policy redaction altered the JSON structure | Exclude structured output fields from PII redaction or use field-level redaction rules |
| Tool calls not appearing in Events | Tool execution is local to PydanticAI | The gateway logs the LLM call that triggers the tool, not the tool execution itself |
| Streaming responses incomplete | Gateway timeout on long generations | Increase the gateway timeout or reduce `max_tokens` |
For AI systems
- Canonical integration: PydanticAI `OpenAIModel` with `base_url` set to `http://localhost:41002/v1` or `https://gateway.keeptrusts.com/v1`.
- The gateway is transparent — structured outputs, tool calls, and dependency injection work unchanged.
- Use the Policy Controls Catalog for available policies.
For engineers
- Set `base_url` once on the `OpenAIModel`. All agents using that model instance route through the gateway.
- Structured output validation happens after the gateway returns the response, so Pydantic models validate the governed output.
- Test locally with `kt gateway run`, then switch to a hosted gateway URL for deployment.
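The local-then-hosted workflow can be captured with a single environment variable so no code changes between environments. A sketch; `KEEPTRUSTS_GATEWAY_URL` is an illustrative variable name, not a Keeptrusts convention:

```python
import os

# Default to the local gateway; override at deploy time, e.g.
#   export KEEPTRUSTS_GATEWAY_URL="https://gateway.keeptrusts.com/v1"
GATEWAY_URL = os.environ.get("KEEPTRUSTS_GATEWAY_URL", "http://localhost:41002/v1")

# Then construct the model once and share it across agents:
# model = OpenAIModel("gpt-4o", base_url=GATEWAY_URL, api_key="your-openai-api-key")
print(GATEWAY_URL)
```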
For leaders
- PydanticAI's type-safe outputs combined with Keeptrusts governance provide both structural and policy guarantees on LLM interactions.
- Audit trails capture every agent interaction with full policy outcomes for compliance evidence.
- Cost attribution at the gateway level provides per-agent spend visibility.
Next steps
- Quickstart — set up your first gateway and policy config.
- Policy Controls Catalog — full inventory of available policies.
- Events and Traces — understand the audit trail.
- Agents — register agent identities for per-agent policy scoping.
- Gateway Runtime Features — advanced gateway capabilities.