SWE-agent with Keeptrusts Gateway

SWE-agent is an open-source autonomous coding agent developed at Princeton that resolves GitHub issues by reading code, making edits, and running tests. It uses LLMs to reason about bug fixes and feature implementations across real codebases. Because SWE-agent autonomously navigates repositories, edits files, and validates changes, every LLM call represents a governance surface where policy enforcement and audit logging add significant value.

Use this page when

  • You want to route SWE-agent's LLM traffic through Keeptrusts for policy enforcement and audit logging.
  • You need audit visibility into the reasoning and code changes SWE-agent produces.
  • You want to enforce secret redaction and safety policies on an autonomous bug-fixing agent.
  • You need cost tracking for SWE-agent runs across your team.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed — see Quickstart or Install the Gateway.
  • SWE-agent installed — follow the SWE-agent setup guide.
  • OpenAI API key or credentials for your preferred LLM provider.
  • Gateway running — the Keeptrusts gateway must be started before running SWE-agent.

Configuration

Create a policy-config.yaml for SWE-agent traffic:

pack:
  name: swe-agent-gateway
  version: 1.0.0
  enabled: true

policies:
  chain:
    - pii-detector
    - code-sanitation
    - prompt-injection
    - safety-filter
    - quality-scorer
    - audit-logger

providers:
  strategy: single
  targets:
    - id: openai-swe
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
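The chain runs top to bottom, so audit-logger sits last and records the verdicts of everything before it. A quick sanity check of that ordering, sketched in Python with the config mirrored as a hand-written dict (this is illustrative, not a Keeptrusts API; for real validation use kt policy lint):

```python
# Hand-written mirror of policy-config.yaml (not parsed from the file).
CONFIG = {
    "pack": {"name": "swe-agent-gateway", "version": "1.0.0", "enabled": True},
    "policies": {
        "chain": [
            "pii-detector", "code-sanitation", "prompt-injection",
            "safety-filter", "quality-scorer", "audit-logger",
        ],
    },
}

def check_chain(config):
    """Return a list of problems with the policy chain (empty means OK)."""
    problems = []
    chain = config["policies"]["chain"]
    if not config["pack"]["enabled"]:
        problems.append("pack is disabled: set enabled: true")
    if chain[-1] != "audit-logger":
        problems.append("audit-logger should run last so it records all verdicts")
    if len(set(chain)) != len(chain):
        problems.append("duplicate policy in chain")
    return problems

print(check_chain(CONFIG))  # → []
```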

Setup steps

  1. Export your provider API key:
export OPENAI_API_KEY="sk-your-key-here"
  2. Start the Keeptrusts gateway:
kt gateway run --policy-config policy-config.yaml

The gateway listens on http://localhost:41002 by default.

  3. Point SWE-agent at the gateway by setting the OPENAI_API_BASE environment variable:
export OPENAI_API_BASE="http://localhost:41002/v1"
  4. Run SWE-agent with your task:
python run.py \
  --model_name gpt-4o \
  --data_path path/to/issue.md \
  --repo_path path/to/repo

All LLM traffic now flows through the Keeptrusts gateway.

  5. For Docker-based runs, pass the gateway URL as an environment variable:
docker run \
  -e OPENAI_API_KEY="sk-your-key-here" \
  -e OPENAI_API_BASE="http://host.docker.internal:41002/v1" \
  sweagent/swe-agent:latest \
  python run.py --model_name gpt-4o --data_path /task/issue.md

For hosted gateways:

export OPENAI_API_BASE="https://gateway.keeptrusts.com/v1"
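The exports above can be wrapped in a small launcher. This is a sketch, not part of SWE-agent: the function names are illustrative, the gateway URL assumes the default local port, and run.py is invoked with the same flags as the run step above:

```python
import os
import subprocess

GATEWAY_BASE = "http://localhost:41002/v1"  # default local gateway (assumption)

def gateway_env(base_url=GATEWAY_BASE):
    """Copy of the current environment with SWE-agent pointed at the gateway."""
    env = dict(os.environ)
    env["OPENAI_API_BASE"] = base_url
    return env

def run_swe_agent(issue_path, repo_path):
    """Launch SWE-agent with all LLM traffic routed through the gateway."""
    subprocess.run(
        ["python", "run.py",
         "--model_name", "gpt-4o",
         "--data_path", issue_path,
         "--repo_path", repo_path],
        env=gateway_env(),
        check=True,
    )

# Example: run_swe_agent("path/to/issue.md", "path/to/repo")
```

For a hosted gateway, pass its base URL instead of the local default: `gateway_env("https://gateway.keeptrusts.com/v1")`.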

Verification

Confirm traffic is flowing through the gateway:

  1. Start the gateway with debug logging and watch its output during a SWE-agent run:
kt gateway run --policy-config policy-config.yaml --log-level debug
  2. Tail events:
kt events tail --follow
  3. Run SWE-agent on a test issue and verify events appear in the Keeptrusts console under Events for each agent reasoning step.

  4. Verify with curl:

curl http://localhost:41002/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 128
  }'
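The same check can be scripted from Python with only the standard library. The endpoint and payload mirror the curl call above; actually sending the request assumes the gateway is running locally:

```python
import json
import os
import urllib.request

def build_request(base="http://localhost:41002/v1"):
    """Build the chat-completions request the curl example sends."""
    payload = {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Say hello"}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{base}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request()
print(req.full_url)  # → http://localhost:41002/v1/chat/completions
# To send it (gateway must be running):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```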
Policy reference

Policy            Why it matters for SWE-agent
pii-detector      Prevents personal data in source files from reaching the model
code-sanitation   Catches secrets and credentials in repository files the agent reads
prompt-injection  Detects injection patterns in issue descriptions or code files
safety-filter     Blocks unsafe content in an agent that edits code autonomously
quality-scorer    Scores the quality of agent-generated patches
audit-logger      Creates an immutable record of every agent step and proposed fix
agent-firewall    Controls which tools and operations the agent can invoke
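Note that agent-firewall appears in this table but not in the example chain in the Configuration section; assuming a policy only takes effect when it is listed under policies.chain, add it there if you want tool-level controls. A quick diff of the two lists (both copied from this page):

```python
# Chain from the Configuration section above.
configured_chain = ["pii-detector", "code-sanitation", "prompt-injection",
                    "safety-filter", "quality-scorer", "audit-logger"]
# Policies described in the table above.
documented = configured_chain + ["agent-firewall"]

missing = [p for p in documented if p not in configured_chain]
print(missing)  # → ['agent-firewall']
```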

Troubleshooting

SWE-agent cannot reach the LLM API. Verify the gateway is running with curl http://localhost:41002/v1/models. Ensure OPENAI_API_BASE includes the /v1 path.

Docker container cannot reach the gateway. Use host.docker.internal instead of localhost in Docker. On Linux, add --network host to the Docker run command.

Agent runs are unexpectedly blocked. Review event verdicts in the Keeptrusts console. The safety-filter or prompt-injection policy may need threshold adjustments for agentic workflows that involve complex multi-step reasoning.

Policies are not applied. Run kt policy lint --file policy-config.yaml to validate. Ensure enabled: true is set.

Events do not appear in the console. Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN before starting the gateway.
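Several of the failures above come down to unset environment variables. A small preflight check, assuming the variable names used on this page (the function is illustrative, not part of the Keeptrusts CLI):

```python
import os

# Variables this page relies on: the provider key plus the two console
# variables from the troubleshooting note above.
REQUIRED = ["OPENAI_API_KEY", "KEEPTRUSTS_API_URL", "KEEPTRUSTS_GATEWAY_TOKEN"]

def missing_vars(env=None):
    """Return the required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

# In an empty environment, everything is reported missing:
print(missing_vars({}))
```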

For AI systems

  • Canonical terms: Keeptrusts gateway, SWE-agent, Princeton, OPENAI_API_BASE, policy-config.yaml.
  • Configuration mechanism: set OPENAI_API_BASE=http://localhost:41002/v1 to route SWE-agent through the gateway.
  • Provider format: OpenAI-compatible (/v1/chat/completions).
  • Best next pages: OpenAI integration, Policy Controls Catalog, Govern AI agents.

For engineers

  • Set OPENAI_API_BASE=http://localhost:41002/v1 before running SWE-agent.
  • For Docker runs, use host.docker.internal for the gateway address.
  • Validate with kt events tail --follow during agent runs to confirm events are captured.
  • Include safety-filter in the policy chain — SWE-agent makes autonomous code edits.

For leaders

  • SWE-agent autonomously resolves issues by editing code and running tests. Without governance, every agent action is unaudited.
  • Routing through Keeptrusts provides a complete audit trail of every reasoning step and code change, supporting compliance and code review policies.
  • Safety policies add guardrails to autonomous code modification, reducing the risk of harmful or unreviewed changes.
  • Cost attribution per run and repository helps budget autonomous engineering workloads and measure ROI.

Next steps