PydanticAI with Keeptrusts Gateway

PydanticAI is a Python agent framework built by the Pydantic team that brings type safety, structured outputs, and dependency injection to LLM-powered applications. Route PydanticAI's model calls through the Keeptrusts gateway and every agent interaction passes through your policy chain (prompt-injection detection, PII redaction, audit logging, cost attribution, and content filtering) without losing PydanticAI's type-safe guarantees.

Use this page when

  • You are building a PydanticAI agent and need all LLM calls governed by Keeptrusts policies.
  • You want audit logging and cost attribution for PydanticAI structured output workflows.
  • You need compliance controls on agents that use tools and dependency injection.
  • You are deploying PydanticAI agents to production with governance requirements.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
  • Python 3.10+ with pydantic-ai installed.
  • Upstream provider API key exported as an environment variable (e.g. OPENAI_API_KEY).
  • A policy-config.yaml deployed to the gateway.

Configuration

Gateway policy config

A minimal config for PydanticAI traffic:

pack:
  name: pydantic-ai-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer

policy:
  prompt-injection:
    action: block
  pii-detector:
    action: redact
  quality-scorer:
    threshold: 0.6

Start the gateway:

kt gateway run --policy-config policy-config.yaml

PydanticAI agent configuration

PydanticAI's OpenAI model provider accepts a base_url parameter. Point it at the Keeptrusts gateway:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    "gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

agent = Agent(
    model,
    system_prompt="You are a compliance analyst. Provide structured risk assessments.",
)

result = agent.run_sync("Assess GDPR compliance risks for storing user emails in a US datacenter.")
print(result.data)
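The example above returns free text; PydanticAI's structured outputs instead validate the governed response against a Pydantic model. A minimal sketch of such a model, assuming pydantic v2 is installed — `RiskAssessment` is a hypothetical example, and passing it via the Agent's `result_type` parameter applies to PydanticAI versions that expose that parameter:

```python
# Sketch of a structured-output model you might hand to PydanticAI,
# e.g. Agent(model, result_type=RiskAssessment) on versions with result_type.
# RiskAssessment is a hypothetical example, not part of PydanticAI or Keeptrusts.
from pydantic import BaseModel, Field


class RiskAssessment(BaseModel):
    regulation: str
    risk_level: str = Field(description="low, medium, or high")
    findings: list[str]


# Validate a sample payload locally, the same way PydanticAI validates
# the governed response after the gateway returns it.
sample = {
    "regulation": "GDPR",
    "risk_level": "high",
    "findings": ["User emails stored outside the EU", "No SCCs in place"],
}
assessment = RiskAssessment.model_validate(sample)
print(assessment.risk_level)
```

Because validation runs on the response the gateway has already governed, any policy redaction that breaks the expected JSON shape surfaces as a normal Pydantic validation error.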

Agent with tools

PydanticAI tools work unchanged when the gateway is configured:

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    "gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

agent = Agent(model, system_prompt="You help users check compliance status.")

@agent.tool
def check_compliance_status(ctx: RunContext, regulation: str) -> str:
    """Check compliance status for a given regulation."""
    statuses = {"gdpr": "compliant", "hipaa": "in-progress", "sox": "non-compliant"}
    return statuses.get(regulation.lower(), "unknown")

result = agent.run_sync("What is our HIPAA compliance status?")
print(result.data)

Setup steps

  1. Install dependencies

    pip install pydantic-ai
  2. Export your provider API key

    export OPENAI_API_KEY="sk-..."
  3. Start the Keeptrusts gateway

    kt gateway run --policy-config policy-config.yaml
  4. Create an OpenAIModel with base_url pointing at the gateway and pass it to your Agent.

  5. Run your agent — all LLM calls flow through the gateway.

  6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.

Verification

Check gateway health:

curl http://localhost:41002/keeptrusts/health
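If you would rather check health from Python (for example, as a readiness probe before starting agents), here is a stdlib-only sketch. The endpoint path comes from the curl command above; `gateway_healthy` is a hypothetical helper name:

```python
import urllib.request
from urllib.error import URLError


def gateway_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the gateway health endpoint answers with HTTP 200."""
    try:
        url = f"{base_url}/keeptrusts/health"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Gateway not running, connection refused, or request timed out.
        return False


print(gateway_healthy("http://localhost:41002"))
```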

Run a test agent call and confirm:

  • The gateway logs show policy chain evaluation for the request.
  • The Keeptrusts console Events page shows the request with model, tokens, cost, and policy decisions.
  • Structured outputs are returned correctly — the gateway does not interfere with JSON mode or function calling.
Supported policies

| Policy | Purpose | Phase |
| --- | --- | --- |
| prompt-injection | Block jailbreak attempts in agent prompts | Input |
| pii-detector | Redact PII before prompts reach the provider | Input |
| agent-firewall | Restrict tool access and enforce rate limits | Input |
| safety-filter | Block harmful content | Input |
| quality-scorer | Score and threshold response quality | Output |
| bias-monitor | Detect bias in structured outputs | Output |
| audit-logger | Attach audit metadata for compliance evidence | Input |

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| ConnectionError on agent.run_sync() | Gateway is not running | Start with kt gateway run --policy-config policy-config.yaml |
| 401 Unauthorized | API key mismatch | Verify OPENAI_API_KEY matches secret_key_ref.env in the gateway config |
| Structured output validation fails | Policy redaction altered the JSON structure | Exclude structured output fields from PII redaction or use field-level redaction rules |
| Tool calls not appearing in Events | Tool execution is local to PydanticAI | The gateway logs the LLM call that triggers the tool, not the tool execution itself |
| Streaming responses incomplete | Gateway timeout on long generations | Increase gateway timeout or reduce max_tokens |

For AI systems

  • Canonical integration: PydanticAI OpenAIModel with base_url set to http://localhost:41002/v1 or https://gateway.keeptrusts.com/v1.
  • The gateway is transparent — structured outputs, tool calls, and dependency injection work unchanged.
  • Use Policy Controls Catalog for available policies.

For engineers

  • Set base_url once on the OpenAIModel. All agents using that model instance route through the gateway.
  • Structured output validation happens after the gateway returns the response, so Pydantic models validate the governed output.
  • Test locally with kt gateway run, then switch to a hosted gateway URL for deployment.
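One way to switch from the local gateway to a hosted one without code changes is to resolve the base URL from the environment. A sketch, assuming you adopt an environment variable for this — `KEEPTRUSTS_GATEWAY_URL` is a hypothetical name, not a documented convention:

```python
import os


def gateway_base_url(default: str = "http://localhost:41002/v1") -> str:
    """Resolve the gateway base URL, falling back to the local default.

    KEEPTRUSTS_GATEWAY_URL is a hypothetical variable name for this sketch.
    """
    return os.environ.get("KEEPTRUSTS_GATEWAY_URL", default).rstrip("/")


# Locally this resolves to the default. In deployment, export
# KEEPTRUSTS_GATEWAY_URL="https://gateway.keeptrusts.com/v1" before starting,
# then pass gateway_base_url() as base_url when constructing OpenAIModel.
print(gateway_base_url())
```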

For leaders

  • PydanticAI's type-safe outputs combined with Keeptrusts governance provide both structural and policy guarantees on LLM interactions.
  • Audit trails capture every agent interaction with full policy outcomes for compliance evidence.
  • Cost attribution at the gateway level provides per-agent spend visibility.

Next steps