AutoGen with Keeptrusts Gateway
Microsoft AutoGen is a framework for building multi-agent conversational systems where agents collaborate through structured dialogue to solve complex tasks. By routing AutoGen's LLM calls through the Keeptrusts gateway, every agent conversation turn passes through your policy chain — prompt-injection detection, PII redaction, content filtering, audit logging, and cost attribution are applied to the entire multi-agent workflow without changing your agent definitions.
Use this page when
- You are building an AutoGen multi-agent system and need governance on all LLM calls.
- You want audit logging and cost attribution across conversational agents.
- You need to enforce compliance controls on agent-to-agent communication.
- You are deploying AutoGen applications to production with security requirements.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
- Python 3.10+ with `autogen-agentchat` and `autogen-ext` installed.
- Upstream provider API key exported as an environment variable (e.g. `OPENAI_API_KEY`).
- A `policy-config.yaml` deployed to the gateway.
Configuration
Gateway policy config
A minimal config for governing AutoGen traffic:
```yaml
pack:
  name: autogen-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - safety-filter
    - quality-scorer
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    safety-filter:
      action: block
    quality-scorer:
      threshold: 0.6
```
Start the gateway:
```bash
kt gateway run --policy-config policy-config.yaml
```
AutoGen client configuration
AutoGen's model client classes accept a `base_url` parameter. Point it at the gateway:
- AutoGen 0.4+ (AgentChat)
- AutoGen 0.2 (legacy)
- Hosted gateway
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

assistant = AssistantAgent(
    name="compliance_analyst",
    model_client=model_client,
    system_message="You are a compliance analyst. Analyze documents for regulatory risks.",
)
```
```python
import autogen

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o",
            "base_url": "http://localhost:41002/v1",
            "api_key": "your-openai-api-key",
        }
    ],
}

assistant = autogen.AssistantAgent(
    name="compliance_analyst",
    llm_config=llm_config,
    system_message="You are a compliance analyst. Analyze documents for regulatory risks.",
)

user_proxy = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
)

user_proxy.initiate_chat(
    assistant,
    message="Review this contract clause for GDPR compliance issues.",
)
```
```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="https://gateway.keeptrusts.com/v1",
    api_key="your-openai-api-key",
)
```
Multi-agent conversation with governance
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

researcher = AssistantAgent(
    name="researcher",
    model_client=model_client,
    system_message="Research regulatory requirements. Say TERMINATE when done.",
)

reviewer = AssistantAgent(
    name="reviewer",
    model_client=model_client,
    system_message="Review research for accuracy. Say TERMINATE when satisfied.",
)

termination = TextMentionTermination("TERMINATE")
team = RoundRobinGroupChat([researcher, reviewer], termination_condition=termination)

async def main():
    result = await team.run(task="Summarize HIPAA data handling requirements.")
    print(result)

asyncio.run(main())
```
Setup steps
- Install dependencies: `pip install autogen-agentchat "autogen-ext[openai]"`
- Export your provider API key: `export OPENAI_API_KEY="sk-..."`
- Start the Keeptrusts gateway: `kt gateway run --policy-config policy-config.yaml`
- Set `base_url` on your model client as shown in Configuration above.
- Pass the model client to each agent — all conversation turns flow through the gateway.
- Verify in the Keeptrusts console — open Events to see per-turn request traces.
Verification
Check gateway health:
```bash
curl http://localhost:41002/keeptrusts/health
```
Run a multi-agent conversation and confirm:
- Each conversation turn appears as a separate event in the Keeptrusts console.
- Policy decisions are recorded per turn (allowed, blocked, redacted).
- Token counts and cost are attributed per agent per turn.
- Blocked or redacted content is visible in the event detail.
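You can also exercise the gateway directly with a plain OpenAI-compatible request before wiring up agents; the policy chain treats it exactly like an AutoGen turn, so one request should produce one event in the console. A minimal standard-library sketch (the `/v1/chat/completions` path is the OpenAI wire format the gateway exposes):

```python
import json
import os
import urllib.request

def chat_payload(model: str, content: str) -> bytes:
    """Build an OpenAI-compatible chat completion body; the gateway runs
    its policy chain on this request exactly as on an AutoGen turn."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")

def smoke_test(base_url: str = "http://localhost:41002") -> str:
    """Send one governed request through the gateway and return the reply.
    A matching event should appear in the Keeptrusts console."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=chat_payload("gpt-4o", "Reply with the single word OK."),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With the gateway up, calling `smoke_test()` should return a completion and leave exactly one new event in the console.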
Recommended policies
| Policy | Purpose | Phase |
|---|---|---|
| `prompt-injection` | Block jailbreak attempts in agent messages | Input |
| `pii-detector` | Redact PII in agent-to-agent communication | Input |
| `safety-filter` | Block harmful content in multi-agent dialogue | Input |
| `agent-firewall` | Restrict tool access and enforce rate limits | Input |
| `quality-scorer` | Score and threshold agent response quality | Output |
| `human-oversight` | Escalate sensitive outputs for human review | Output |
| `audit-logger` | Attach audit metadata per conversation turn | Input |
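As a sketch of a fuller chain, assuming `agent-firewall`, `audit-logger`, and `human-oversight` follow the same configuration shape as the minimal example above, the `policies` section could be extended like this:

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - safety-filter
    - agent-firewall    # input phase: tool access and rate limits
    - audit-logger      # input phase: per-turn audit metadata
    - quality-scorer    # output phase: threshold response quality
    - human-oversight   # output phase: escalate for human review
```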
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `ConnectionError` during agent chat | Gateway is not running | Start with `kt gateway run --policy-config policy-config.yaml` |
| `401 Unauthorized` | API key mismatch | Verify `OPENAI_API_KEY` matches `secret_key_ref.env` in the gateway config |
| Some agents bypass the gateway | Agents using different llm_config | Ensure all agents share the same model client with the gateway base_url |
| Agent conversations cut short | Policy blocked a message mid-conversation | Check Events for blocked requests and adjust policy thresholds |
| High latency in group chats | Policy chain evaluated per turn | Simplify the chain or use async agents to reduce perceived latency |
For AI systems
- Canonical integration: AutoGen
OpenAIChatCompletionClientwithbase_urlset tohttp://localhost:41002/v1orhttps://gateway.keeptrusts.com/v1. - For legacy AutoGen 0.2, set
base_urlin theconfig_listentry. - The gateway is transparent — group chats, round-robin teams, and tool-using agents require no changes beyond the base URL.
- Use Policy Controls Catalog for available policies.
For engineers
- The only code change is adding `base_url` to the model client or config list. All agent definitions, conversation patterns, and tool registrations remain unchanged.
- Each conversation turn is a separate gateway request, so per-turn policy evaluation and cost tracking are automatic.
- Test locally with `kt gateway run`, then switch to a hosted gateway URL for deployment.
For leaders
- Multi-agent conversations can generate high volumes of LLM calls. Gateway-level cost attribution provides visibility into per-agent and per-conversation spend.
- Policy enforcement on every conversation turn prevents agents from escalating harmful content through dialogue.
- Audit trails capture the full conversation history with policy outcomes for compliance evidence.
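The cost point above can be made concrete with back-of-the-envelope arithmetic: in a round-robin team, every agent turn is one LLM call, and every LLM call is one gateway request. A hypothetical sizing helper (illustrative only, not part of any Keeptrusts SDK):

```python
def estimated_gateway_requests(num_agents: int, rounds: int, conversations: int = 1) -> int:
    """Requests scale multiplicatively: each of num_agents speaks once per
    round, and each turn is one gateway request, per conversation."""
    return num_agents * rounds * conversations

# A 3-agent team, 5 rounds each, across 100 conversations:
print(estimated_gateway_requests(3, 5, 100))  # → 1500
```

Numbers like these feed directly into the gateway's per-agent cost attribution when budgeting a deployment.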
Next steps
- Quickstart — set up your first gateway and policy config.
- Policy Controls Catalog — full inventory of available policies.
- Events and Traces — understand the audit trail.
- Agents — register agent identities for per-agent policy scoping.
- Gateway Runtime Features — advanced gateway capabilities.