CrewAI with Keeptrusts Gateway
CrewAI is a multi-agent orchestration framework that lets you define autonomous AI agents with distinct roles, goals, and tools, then coordinate them to complete complex tasks. By routing CrewAI's LLM calls through the Keeptrusts gateway, every agent interaction — research, analysis, writing, tool use — passes through your policy chain. This gives you prompt-injection detection, PII redaction, tool governance, audit logging, and cost attribution across your entire crew without modifying agent logic.
Use this page when
- You are building a CrewAI crew and need governance over every agent's LLM calls.
- You want per-agent cost attribution and audit logging for multi-agent workflows.
- You need to enforce tool restrictions or rate limits on individual agents within a crew.
- You are deploying a CrewAI application to production with compliance requirements.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
- Python 3.10+ with `crewai` installed.
- Upstream provider API key exported as an environment variable (e.g. `OPENAI_API_KEY`).
- A `policy-config.yaml` deployed to the gateway.
Configuration
Gateway policy config
A minimal config for governing CrewAI multi-agent traffic:
```yaml
pack:
  name: crewai-gateway
  version: "1.0"
providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY
policies:
  chain:
    - prompt-injection
    - pii-detector
    - agent-firewall
    - quality-scorer
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    agent-firewall:
      mode: enforce
    quality-scorer:
      threshold: 0.6
```
Start the gateway:
```shell
kt gateway run --policy-config policy-config.yaml
```
CrewAI agent configuration
CrewAI agents accept an `llm` parameter. Configure it to point at the Keeptrusts gateway:
Basic crew:

```python
from crewai import Agent, Task, Crew, LLM

llm = LLM(
    model="openai/gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find and synthesize information about AI governance trends",
    backstory="You are an expert analyst specializing in AI policy.",
    llm=llm,
)

writer = Agent(
    role="Technical Writer",
    goal="Write clear, actionable reports from research findings",
    backstory="You write concise technical summaries for leadership.",
    llm=llm,
)

research_task = Task(
    description="Research the latest AI governance frameworks in the EU.",
    expected_output="A structured summary of key frameworks and their requirements.",
    agent=researcher,
)

writing_task = Task(
    description="Write an executive briefing based on the research findings.",
    expected_output="A 500-word executive summary.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
)

result = crew.kickoff()
print(result)
```
Hosted gateway:

```python
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",
    base_url="https://gateway.keeptrusts.com/v1",
    api_key="your-openai-api-key",
)
```
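When the same crew code must run against both a local and a hosted gateway, the endpoint can be resolved from the environment. A minimal sketch; `KT_GATEWAY_URL` is an illustrative variable name, not one defined by Keeptrusts:

```python
import os

def resolve_gateway_url(env=os.environ):
    # Hosted gateway if KT_GATEWAY_URL is set, local gateway otherwise.
    # KT_GATEWAY_URL is an illustrative name, not a Keeptrusts-defined one.
    return env.get("KT_GATEWAY_URL", "http://localhost:41002/v1")
```

Pass `base_url=resolve_gateway_url()` to the `LLM` constructor so deployments switch endpoints without code changes.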
Setup steps
1. Install dependencies:

   ```shell
   pip install crewai
   ```

2. Export your provider API key:

   ```shell
   export OPENAI_API_KEY="sk-..."
   ```

3. Start the Keeptrusts gateway:

   ```shell
   kt gateway run --policy-config policy-config.yaml
   ```

4. Create an `LLM` instance with `base_url` pointing at the gateway and pass it to each agent.
5. Run the crew — every agent's LLM call flows through the gateway.
6. Verify in the Keeptrusts console — open Events to see per-agent request traces.
Verification
Check gateway health:
```shell
curl http://localhost:41002/keeptrusts/health
```

Run your crew with `verbose=True` and confirm:
- Gateway logs show policy chain evaluation for each agent's LLM call.
- The Keeptrusts console Events page shows individual requests from each agent.
- Token counts and cost are attributed per request.
- Policy decisions (allowed, blocked, redacted) are visible for each interaction.
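The per-request attribution above can be rolled up per agent once events are exported. A sketch, assuming a hypothetical event shape with `agent` and `total_tokens` fields (the real Keeptrusts export schema may differ):

```python
from collections import defaultdict

def tokens_by_agent(events):
    """Sum token usage per agent from exported gateway events.
    The fields used here ("agent", "total_tokens") are a hypothetical
    shape, not the documented Keeptrusts export schema."""
    totals = defaultdict(int)
    for event in events:
        totals[event["agent"]] += event["total_tokens"]
    return dict(totals)

events = [
    {"agent": "Senior Research Analyst", "total_tokens": 1200},
    {"agent": "Technical Writer", "total_tokens": 800},
    {"agent": "Senior Research Analyst", "total_tokens": 300},
]
print(tokens_by_agent(events))  # {'Senior Research Analyst': 1500, 'Technical Writer': 800}
```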
Recommended policies
| Policy | Purpose | Phase |
|---|---|---|
| `prompt-injection` | Block jailbreak attempts in agent prompts or tool responses | Input |
| `pii-detector` | Redact PII before agent prompts reach the provider | Input |
| `agent-firewall` | Restrict tool access and enforce rate limits per agent | Input |
| `safety-filter` | Block harmful content in multi-agent conversations | Input |
| `quality-scorer` | Score agent output quality and reject low-quality responses | Output |
| `bias-monitor` | Detect bias in agent-generated content | Output |
| `audit-logger` | Attach audit metadata for compliance evidence per agent interaction | Input |
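All seven policies can be combined in a single chain. A sketch extending the minimal config above; the per-policy options shown for `safety-filter` are illustrative and should be checked against the Policy Controls Catalog:

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - agent-firewall
    - safety-filter
    - quality-scorer
    - bias-monitor
    - audit-logger
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    agent-firewall:
      mode: enforce
    safety-filter:
      action: block
    quality-scorer:
      threshold: 0.6
```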
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Connection refused during crew kickoff | Gateway is not running | Start the gateway with `kt gateway run --policy-config policy-config.yaml` |
| 401 Unauthorized on agent LLM calls | API key missing or incorrect | Verify `OPENAI_API_KEY` is exported and matches the gateway config |
| Only one agent's calls appear in Events | Other agents not using the gateway LLM | Pass the gateway-configured `llm` to every agent in the crew |
| Agent tool calls are not governed | Tool HTTP calls bypass the gateway | The gateway governs LLM calls only; tool-call governance uses the `agent-firewall` policy at the prompt level |
| High latency on multi-step crews | Policy chain evaluated on every LLM call | Profile with `kt events tail` and optimize chain length for interactive workflows |
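For the connection-refused case, a preflight check before `crew.kickoff()` fails fast with a clear result instead of an opaque error mid-crew. A sketch using only the health endpoint shown in Verification; pass the gateway root (e.g. `http://localhost:41002`), not the `/v1` path:

```python
import urllib.error
import urllib.request

def gateway_healthy(base_url, timeout=2.0):
    """Return True if the Keeptrusts gateway health endpoint responds.
    Call before crew.kickoff() to fail fast when the gateway is down."""
    try:
        with urllib.request.urlopen(f"{base_url}/keeptrusts/health", timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False
```

For example, `gateway_healthy("http://localhost:41002")` returns `False` when the gateway is not running, so the crew can abort with an actionable message.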
For AI systems
- Canonical integration: CrewAI `LLM` with `base_url` set to `http://localhost:41002/v1` or `https://gateway.keeptrusts.com/v1`.
- Pass the configured `LLM` instance to every `Agent` in the crew to ensure full governance coverage.
- Use the Policy Controls Catalog for available policies.
- Use Agents to register per-agent identities for fine-grained policy scoping.
For engineers
- The only change is the `base_url` on the `LLM` constructor. Agent roles, goals, tools, and task definitions remain unchanged.
- Use `verbose=True` during development to correlate crew output with gateway events.
- Test locally with `kt gateway run`, then switch to a hosted gateway URL for deployment.
For leaders
- Multi-agent systems amplify both capability and risk. Keeptrusts provides a single enforcement point for all agents in a crew.
- Per-agent audit trails show exactly which agent generated which content, supporting accountability requirements.
- Cost attribution across the crew provides visibility into which agents and tasks consume the most resources.
Next steps
- Quickstart — set up your first gateway and policy config.
- Policy Controls Catalog — full inventory of available policies.
- Events and Traces — understand the audit trail.
- Agents — register agent identities for per-agent policy scoping.
- Gateway Runtime Features — advanced gateway capabilities.