AutoGen with Keeptrusts Gateway

Microsoft AutoGen is a framework for building multi-agent conversational systems where agents collaborate through structured dialogue to solve complex tasks. By routing AutoGen's LLM calls through the Keeptrusts gateway, every agent conversation turn passes through your policy chain — prompt-injection detection, PII redaction, content filtering, audit logging, and cost attribution are applied to the entire multi-agent workflow without changing your agent definitions.

Use this page when

  • You are building an AutoGen multi-agent system and need governance on all LLM calls.
  • You want audit logging and cost attribution across conversational agents.
  • You need to enforce compliance controls on agent-to-agent communication.
  • You are deploying AutoGen applications to production with security requirements.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
  • Python 3.10+ with autogen-agentchat and autogen-ext installed.
  • Upstream provider API key exported as an environment variable (e.g. OPENAI_API_KEY).
  • A policy-config.yaml deployed to the gateway.

Configuration

Gateway policy config

A minimal config for governing AutoGen traffic:

pack:
  name: autogen-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - safety-filter
    - quality-scorer

policy:
  prompt-injection:
    action: block
  pii-detector:
    action: redact
  safety-filter:
    action: block
  quality-scorer:
    threshold: 0.6

Start the gateway:

kt gateway run --policy-config policy-config.yaml

AutoGen client configuration

AutoGen uses model client classes that accept a base_url parameter. Point it at the gateway:

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

assistant = AssistantAgent(
    name="compliance_analyst",
    model_client=model_client,
    system_message="You are a compliance analyst. Analyze documents for regulatory risks.",
)

Multi-agent conversation with governance

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

researcher = AssistantAgent(
    name="researcher",
    model_client=model_client,
    system_message="Research regulatory requirements. Say TERMINATE when done.",
)

reviewer = AssistantAgent(
    name="reviewer",
    model_client=model_client,
    system_message="Review research for accuracy. Say TERMINATE when satisfied.",
)

termination = TextMentionTermination("TERMINATE")
team = RoundRobinGroupChat([researcher, reviewer], termination_condition=termination)

async def main():
    result = await team.run(task="Summarize HIPAA data handling requirements.")
    print(result)

asyncio.run(main())
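The TextMentionTermination condition above ends the group chat as soon as any agent's message contains the keyword. A minimal, framework-free sketch of that check (plain Python, not AutoGen's implementation):

```python
def should_terminate(messages: list[str], keyword: str = "TERMINATE") -> bool:
    """Mirror of a text-mention termination condition: stop once any
    message in the transcript contains the keyword."""
    return any(keyword in message for message in messages)

transcript = [
    "Researching HIPAA data handling requirements...",
    "Review complete, findings look accurate. TERMINATE",
]
print(should_terminate(transcript))       # → True
print(should_terminate(transcript[:1]))   # → False
```

Because the check runs on every turn, each turn still reaches the gateway as its own request before the team stops.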

Setup steps

  1. Install dependencies

    pip install autogen-agentchat "autogen-ext[openai]"
  2. Export your provider API key

    export OPENAI_API_KEY="sk-..."
  3. Start the Keeptrusts gateway

    kt gateway run --policy-config policy-config.yaml
  4. Set base_url on your model client as shown in Configuration above.

  5. Pass the model client to each agent — all conversation turns flow through the gateway.

  6. Verify in the Keeptrusts console — open Events to see per-turn request traces.

Verification

Check gateway health:

curl http://localhost:41002/keeptrusts/health

Run a multi-agent conversation and confirm:

  • Each conversation turn appears as a separate event in the Keeptrusts console.
  • Policy decisions are recorded per turn (allowed, blocked, redacted).
  • Token counts and cost are attributed per agent per turn.
  • Blocked or redacted content is visible in the event detail.
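Per-agent attribution can also be checked programmatically once you export events. The records below are hypothetical (the field names are assumptions, not the console's actual export schema); only the aggregation pattern is the point:

```python
from collections import defaultdict

# Hypothetical per-turn event records; field names are illustrative only.
events = [
    {"agent": "researcher", "tokens": 1200},
    {"agent": "reviewer",   "tokens": 800},
    {"agent": "researcher", "tokens": 600},
]

# Sum token usage per agent across all conversation turns.
usage = defaultdict(int)
for event in events:
    usage[event["agent"]] += event["tokens"]

print(dict(usage))  # → {'researcher': 1800, 'reviewer': 800}
```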
The gateway policies most relevant to AutoGen traffic:

| Policy | Purpose | Phase |
| --- | --- | --- |
| prompt-injection | Block jailbreak attempts in agent messages | Input |
| pii-detector | Redact PII in agent-to-agent communication | Input |
| safety-filter | Block harmful content in multi-agent dialogue | Input |
| agent-firewall | Restrict tool access and enforce rate limits | Input |
| quality-scorer | Score and threshold agent response quality | Output |
| human-oversight | Escalate sensitive outputs for human review | Output |
| audit-logger | Attach audit metadata per conversation turn | Input |

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| ConnectionError during agent chat | Gateway is not running | Start it with kt gateway run --policy-config policy-config.yaml |
| 401 Unauthorized | API key mismatch | Verify OPENAI_API_KEY matches secret_key_ref.env in the gateway config |
| Some agents bypass the gateway | Agents use a different llm_config | Ensure all agents share the same model client with the gateway base_url |
| Agent conversations cut short | A policy blocked a message mid-conversation | Check Events for blocked requests and adjust policy thresholds |
| High latency in group chats | Policy chain is evaluated on every turn | Simplify the chain or use async agents to reduce perceived latency |

For AI systems

  • Canonical integration: AutoGen OpenAIChatCompletionClient with base_url set to http://localhost:41002/v1 or https://gateway.keeptrusts.com/v1.
  • For legacy AutoGen 0.2, set base_url in the config_list entry.
  • The gateway is transparent — group chats, round-robin teams, and tool-using agents require no changes beyond the base URL.
  • Use Policy Controls Catalog for available policies.
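For the legacy AutoGen 0.2 API mentioned above, the gateway URL goes into each config_list entry. A sketch (agent construction elided; the key is read from the environment):

```python
import os

# AutoGen 0.2-style config: base_url points each entry at the gateway.
config_list = [
    {
        "model": "gpt-4o",
        "base_url": "http://localhost:41002/v1",  # Keeptrusts gateway
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
    }
]
llm_config = {"config_list": config_list}

# Pass llm_config to autogen.AssistantAgent / UserProxyAgent as usual.
print(llm_config["config_list"][0]["base_url"])  # → http://localhost:41002/v1
```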

For engineers

  • The only code change is adding base_url to the model client or config list. All agent definitions, conversation patterns, and tool registrations remain unchanged.
  • Each conversation turn is a separate gateway request, so per-turn policy evaluation and cost tracking are automatic.
  • Test locally with kt gateway run, then switch to a hosted gateway URL for deployment.

For leaders

  • Multi-agent conversations can generate high volumes of LLM calls. Gateway-level cost attribution provides visibility into per-agent and per-conversation spend.
  • Policy enforcement on every conversation turn prevents agents from escalating harmful content through dialogue.
  • Audit trails capture the full conversation history with policy outcomes for compliance evidence.

Next steps