CrewAI with Keeptrusts Gateway

CrewAI is a multi-agent orchestration framework that lets you define autonomous AI agents with distinct roles, goals, and tools, then coordinate them to complete complex tasks. By routing CrewAI's LLM calls through the Keeptrusts gateway, every agent interaction — research, analysis, writing, tool use — passes through your policy chain. This gives you prompt-injection detection, PII redaction, tool governance, audit logging, and cost attribution across your entire crew without modifying agent logic.

Use this page when

  • You are building a CrewAI crew and need governance over every agent's LLM calls.
  • You want per-agent cost attribution and audit logging for multi-agent workflows.
  • You need to enforce tool restrictions or rate limits on individual agents within a crew.
  • You are deploying a CrewAI application to production with compliance requirements.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
  • Python 3.10+ with crewai installed.
  • Upstream provider API key exported as an environment variable (e.g. OPENAI_API_KEY).
  • A policy-config.yaml deployed to the gateway.

Configuration

Gateway policy config

A minimal config for governing CrewAI multi-agent traffic:

pack:
  name: crewai-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - agent-firewall
    - quality-scorer

policy:
  prompt-injection:
    action: block
  pii-detector:
    action: redact
  agent-firewall:
    mode: enforce
  quality-scorer:
    threshold: 0.6

Start the gateway:

kt gateway run --policy-config policy-config.yaml

CrewAI agent configuration

CrewAI agents accept an llm parameter. Configure it to point at the Keeptrusts gateway:

from crewai import Agent, Task, Crew, LLM

llm = LLM(
    model="openai/gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find and synthesize information about AI governance trends",
    backstory="You are an expert analyst specializing in AI policy.",
    llm=llm,
)

writer = Agent(
    role="Technical Writer",
    goal="Write clear, actionable reports from research findings",
    backstory="You write concise technical summaries for leadership.",
    llm=llm,
)

research_task = Task(
    description="Research the latest AI governance frameworks in the EU.",
    expected_output="A structured summary of key frameworks and their requirements.",
    agent=researcher,
)

writing_task = Task(
    description="Write an executive briefing based on the research findings.",
    expected_output="A 500-word executive summary.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
)

result = crew.kickoff()
print(result)
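If your gateway distinguishes callers, giving each agent its own LLM instance enables per-agent attribution. A sketch of a helper that builds the keyword arguments for crewai.LLM, tagged with the agent's name: the X-Agent-Name header is a hypothetical tagging convention, and whether your gateway reads it (and whether your CrewAI version forwards extra_headers to the provider call) are assumptions to verify.

```python
# Sketch: build per-agent LLM kwargs so the gateway can attribute requests.
# "X-Agent-Name" is a hypothetical header convention -- check what your
# gateway and CrewAI version actually support before relying on it.
import os


def agent_llm_kwargs(agent_name: str,
                     base_url: str = "http://localhost:41002/v1") -> dict:
    """Return keyword arguments for crewai.LLM, tagged with the agent's name."""
    return {
        "model": "openai/gpt-4o",
        "base_url": base_url,
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "extra_headers": {"X-Agent-Name": agent_name},
    }


# Usage: researcher_llm = LLM(**agent_llm_kwargs("researcher"))
```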

Setup steps

  1. Install dependencies

    pip install crewai
  2. Export your provider API key

    export OPENAI_API_KEY="sk-..."
  3. Start the Keeptrusts gateway

    kt gateway run --policy-config policy-config.yaml
  4. Create an LLM instance with base_url pointing at the gateway and pass it to each agent.

  5. Run the crew — every agent's LLM call flows through the gateway.

  6. Verify in the Keeptrusts console — open Events to see per-agent request traces.
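Steps 2 and 3 above can be sketched as a preflight check before kicking off the crew. The port and environment-variable name come from this page; the helper itself is illustrative and not part of the Keeptrusts CLI.

```python
# Illustrative preflight check: provider key exported and gateway port
# reachable. Not part of the Keeptrusts CLI.
import os
import socket


def preflight(host: str = "localhost", port: int = 41002,
              key_var: str = "OPENAI_API_KEY") -> list[str]:
    """Return a list of problems; an empty list means ready to kick off."""
    problems = []
    if not os.environ.get(key_var):
        problems.append(f"{key_var} is not exported")
    try:
        # A plain TCP connect is enough to confirm the gateway is listening.
        with socket.create_connection((host, port), timeout=2):
            pass
    except OSError:
        problems.append(f"gateway not reachable on {host}:{port}")
    return problems
```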

Verification

Check gateway health:

curl http://localhost:41002/keeptrusts/health

Run your crew with verbose=True and confirm:

  • Gateway logs show policy chain evaluation for each agent's LLM call.
  • The Keeptrusts console Events page shows individual requests from each agent.
  • Token counts and cost are attributed per request.
  • Policy decisions (allowed, blocked, redacted) are visible for each interaction.
Recommended policies for CrewAI workloads:

Policy | Purpose | Phase
prompt-injection | Block jailbreak attempts in agent prompts or tool responses | Input
pii-detector | Redact PII before agent prompts reach the provider | Input
agent-firewall | Restrict tool access and enforce rate limits per agent | Input
safety-filter | Block harmful content in multi-agent conversations | Input
quality-scorer | Score agent output quality and reject low-quality responses | Output
bias-monitor | Detect bias in agent-generated content | Output
audit-logger | Attach audit metadata for compliance evidence per agent interaction | Input
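The remaining policies from this table can be appended to the chain shown in the earlier config. A sketch, assuming the same config schema; the action and mode values for safety-filter, bias-monitor, and audit-logger are illustrative and should be checked against the Policy Controls Catalog.

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - agent-firewall
    - safety-filter
    - quality-scorer
    - bias-monitor
    - audit-logger

policy:
  # Settings below are illustrative; verify key names and values
  # against the Policy Controls Catalog.
  safety-filter:
    action: block
  bias-monitor:
    action: flag
  audit-logger:
    mode: enforce
```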

Troubleshooting

Symptom | Cause | Fix
Connection refused during crew kickoff | Gateway is not running | Start the gateway with kt gateway run --policy-config policy-config.yaml
401 Unauthorized on agent LLM calls | API key missing or incorrect | Verify OPENAI_API_KEY is exported and matches the gateway config
Only one agent's calls appear in Events | Other agents not using the gateway LLM | Pass the gateway-configured llm to every agent in the crew
Agent tool calls are not governed | Tool HTTP calls bypass the gateway | The gateway governs LLM calls only; tool-call governance uses the agent-firewall policy at the prompt level
High latency on multi-step crews | Policy chain evaluated on every LLM call | Profile with kt events tail and optimize chain length for interactive workflows
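To see client-side what the policy chain adds to each round trip, you can bracket a single call with a timer; the helper below is generic and illustrative, and the llm.call usage in the comment assumes your CrewAI version exposes a direct call method.

```python
# Illustrative latency probe: time any zero-argument call, e.g. one request
# through the gateway vs. one sent directly to the provider.
import time
from typing import Any, Callable, Tuple


def timed(fn: Callable[[], Any]) -> Tuple[Any, float]:
    """Run fn once and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start


# Usage sketch (llm is the gateway-configured crewai.LLM instance;
# verify the call method exists in your CrewAI version):
#   _, secs = timed(lambda: llm.call("ping"))
```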

For AI systems

  • Canonical integration: CrewAI LLM with base_url set to http://localhost:41002/v1 or https://gateway.keeptrusts.com/v1.
  • Pass the configured LLM instance to every Agent in the crew to ensure full governance coverage.
  • Use the Policy Controls Catalog for available policies.
  • Use Agents to register per-agent identities for fine-grained policy scoping.

For engineers

  • The only change is the base_url on the LLM constructor. Agent roles, goals, tools, and task definitions remain unchanged.
  • Use verbose=True during development to correlate crew output with gateway events.
  • Test locally with kt gateway run, then switch to a hosted gateway URL for deployment.

For leaders

  • Multi-agent systems amplify both capability and risk. Keeptrusts provides a single enforcement point for all agents in a crew.
  • Per-agent audit trails show exactly which agent generated which content, supporting accountability requirements.
  • Cost attribution across the crew provides visibility into which agents and tasks consume the most resources.

Next steps