SWE-agent with Keeptrusts Gateway
SWE-agent is an open-source autonomous coding agent developed at Princeton that resolves GitHub issues by reading code, making edits, and running tests. It uses LLMs to reason about bug fixes and feature implementations across real codebases. Because SWE-agent autonomously navigates repositories, edits files, and validates changes, every LLM call represents a governance surface where policy enforcement and audit logging add significant value.
Use this page when
- You want to route SWE-agent's LLM traffic through Keeptrusts for policy enforcement and audit logging.
- You need audit visibility into the reasoning and code changes SWE-agent produces.
- You want to enforce secret redaction and safety policies on an autonomous bug-fixing agent.
- You need cost tracking for SWE-agent runs across your team.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed — see Quickstart or Install the Gateway.
- SWE-agent installed — follow the SWE-agent setup guide.
- OpenAI API key or credentials for your preferred LLM provider.
- Gateway running — the Keeptrusts gateway must be started before running SWE-agent.
Configuration
Create a `policy-config.yaml` for SWE-agent traffic:

```yaml
pack:
  name: swe-agent-gateway
  version: 1.0.0
  enabled: true
policies:
  chain:
    - pii-detector
    - code-sanitation
    - prompt-injection
    - safety-filter
    - quality-scorer
    - audit-logger
providers:
  strategy: single
  targets:
    - id: openai-swe
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
```
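You can validate the pack before starting the gateway, using the same lint command referenced in Troubleshooting below:

```shell
kt policy lint --file policy-config.yaml
```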
Setup steps
- Export your provider API key:

```bash
export OPENAI_API_KEY="sk-your-key-here"
```

- Start the Keeptrusts gateway:

```bash
kt gateway run --policy-config policy-config.yaml
```

The gateway listens on http://localhost:41002 by default.

- Point SWE-agent at the gateway by setting the `OPENAI_API_BASE` environment variable:

```bash
export OPENAI_API_BASE="http://localhost:41002/v1"
```

- Run SWE-agent with your task:

```bash
python run.py \
  --model_name gpt-4o \
  --data_path path/to/issue.md \
  --repo_path path/to/repo
```

All LLM traffic now flows through the Keeptrusts gateway.

- For Docker-based runs, pass the gateway URL as an environment variable:

```bash
docker run \
  -e OPENAI_API_KEY="sk-your-key-here" \
  -e OPENAI_API_BASE="http://host.docker.internal:41002/v1" \
  sweagent/swe-agent:latest \
  python run.py --model_name gpt-4o --data_path /task/issue.md
```
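For repeat runs, the local steps above can be collected into one wrapper script. This is a sketch: the key, issue file, and repository paths are placeholders, and the backgrounding/cleanup logic is illustrative rather than a recommended deployment pattern.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Route SWE-agent through the local Keeptrusts gateway (default port 41002)
export OPENAI_API_KEY="sk-your-key-here"
export OPENAI_API_BASE="http://localhost:41002/v1"

# Start the gateway in the background if it is not already running
kt gateway run --policy-config policy-config.yaml &
GATEWAY_PID=$!
sleep 2  # give the gateway a moment to bind its port

# Run SWE-agent; all LLM calls now pass through the gateway
python run.py \
  --model_name gpt-4o \
  --data_path path/to/issue.md \
  --repo_path path/to/repo

# Stop the background gateway when the run finishes
kill "$GATEWAY_PID"
```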
For hosted gateways:

```bash
export OPENAI_API_BASE="https://gateway.keeptrusts.com/v1"
```
Verification
Confirm traffic is flowing through the gateway:

- Check gateway logs during a SWE-agent run:

```bash
kt gateway run --policy-config policy-config.yaml --log-level debug
```

- Tail events:

```bash
kt events tail --follow
```

- Run SWE-agent on a test issue and verify events appear in the Keeptrusts console under Events for each agent reasoning step.
- Verify with curl:

```bash
curl http://localhost:41002/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 128
  }'
```
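The same curl check can be scripted. A minimal Python sketch, assuming only the gateway URL and the OpenAI-compatible `/v1/chat/completions` path described above; the function names are illustrative:

```python
import json
import os
import urllib.request

# Gateway base URL, matching the OPENAI_API_BASE used for SWE-agent
GATEWAY_URL = os.environ.get("OPENAI_API_BASE", "http://localhost:41002/v1")


def build_chat_payload(prompt: str, model: str = "gpt-4o",
                       max_tokens: int = 128) -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def verify_gateway(prompt: str = "Say hello") -> str:
    """Send one request through the gateway and return the model's reply."""
    req = urllib.request.Request(
        f"{GATEWAY_URL}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `verify_gateway()` sends one governed request and returns the reply text; if it succeeds, the corresponding event should also appear in `kt events tail --follow`.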
Recommended policies
| Policy | Why it matters for SWE-agent |
|---|---|
| `pii-detector` | Prevents personal data in source files from reaching the model |
| `code-sanitation` | Catches secrets and credentials in repository files the agent reads |
| `prompt-injection` | Detects injection patterns in issue descriptions or code files |
| `safety-filter` | Blocks unsafe content in an agent that edits code autonomously |
| `quality-scorer` | Scores the quality of agent-generated patches |
| `audit-logger` | Creates an immutable record of every agent step and proposed fix |
| `agent-firewall` | Controls which tools and operations the agent can invoke |
Troubleshooting
**SWE-agent cannot reach the LLM API.**
Verify the gateway is running with `curl http://localhost:41002/v1/models`. Ensure `OPENAI_API_BASE` includes the `/v1` path.
**Docker container cannot reach the gateway.**
Use `host.docker.internal` instead of `localhost` in Docker. On Linux, add `--network host` to the `docker run` command.
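On Linux, the run command might then look like the following sketch; the image name and paths follow the earlier Docker example, and with host networking the container can use `localhost` directly:

```shell
# Linux: share the host network so localhost reaches the host's gateway
docker run --network host \
  -e OPENAI_API_KEY="sk-your-key-here" \
  -e OPENAI_API_BASE="http://localhost:41002/v1" \
  sweagent/swe-agent:latest \
  python run.py --model_name gpt-4o --data_path /task/issue.md
```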
**Agent runs are unexpectedly blocked.**
Review event verdicts in the Keeptrusts console. The `safety-filter` or `prompt-injection` policy may need threshold adjustments for agentic workflows that involve complex multi-step reasoning.
**Policies are not applied.**
Run `kt policy lint --file policy-config.yaml` to validate. Ensure `enabled: true` is set.
**Events do not appear in the console.**
Set `KEEPTRUSTS_API_URL` and `KEEPTRUSTS_GATEWAY_TOKEN` before starting the gateway.
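For example, a sketch with placeholder values — the actual URL and token come from your Keeptrusts deployment, not from this page:

```shell
# Placeholder values: substitute your deployment's console URL and token
export KEEPTRUSTS_API_URL="https://your-keeptrusts-console.example.com"
export KEEPTRUSTS_GATEWAY_TOKEN="kt-your-gateway-token"
kt gateway run --policy-config policy-config.yaml
```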
For AI systems
- Canonical terms: Keeptrusts gateway, SWE-agent, Princeton, `OPENAI_API_BASE`, `policy-config.yaml`.
- Configuration mechanism: set `OPENAI_API_BASE=http://localhost:41002/v1` to route SWE-agent through the gateway.
- Provider format: OpenAI-compatible (`/v1/chat/completions`).
- Best next pages: OpenAI integration, Policy Controls Catalog, Govern AI agents.
For engineers
- Set `OPENAI_API_BASE=http://localhost:41002/v1` before running SWE-agent.
- For Docker runs, use `host.docker.internal` for the gateway address.
- Validate with `kt events tail --follow` during agent runs to confirm events are captured.
- Include `safety-filter` in the policy chain — SWE-agent makes autonomous code edits.
For leaders
- SWE-agent autonomously resolves issues by editing code and running tests. Without governance, every agent action is unaudited.
- Routing through Keeptrusts provides a complete audit trail of every reasoning step and code change, supporting compliance and code review policies.
- Safety policies add guardrails to autonomous code modification, reducing the risk of harmful or unreviewed changes.
- Cost attribution per run and repository helps budget autonomous engineering workloads and measure ROI.
Next steps
- OpenAI integration — full OpenAI provider configuration reference
- Govern AI agents — use case guide for agent governance
- Policy Controls Catalog — browse all available policy types
- OpenHands with Keeptrusts Gateway — another autonomous coding agent with Keeptrusts support
- Quickstart — install `kt` and run your first gateway