OpenHands with Keeptrusts Gateway
OpenHands (formerly OpenDevin) is an open-source AI software engineering agent that can browse the web, write code, execute shell commands, and interact with development tools autonomously. Because OpenHands takes multi-step actions across your codebase and development environment, every LLM call it makes represents a high-stakes governance surface. Routing OpenHands through the Keeptrusts gateway adds policy enforcement on every agent reasoning step and tool call, an immutable audit trail of all autonomous actions, secret and PII redaction before code context reaches the model, and cost attribution for autonomous engineering workloads.
Use this page when
- You want to route OpenHands agent traffic through Keeptrusts for policy enforcement and audit logging.
- You need audit visibility into the reasoning steps, tool calls, and code changes OpenHands makes.
- You want to enforce safety policies on an autonomous coding agent that executes shell commands.
- You need cost tracking for OpenHands LLM usage across your team.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed — see Quickstart or Install the Gateway.
- OpenHands installed — follow the OpenHands installation guide.
- OpenAI API key or credentials for your preferred LLM provider.
- Gateway running — the Keeptrusts gateway must be started before launching OpenHands.
Configuration
Create a `policy-config.yaml` for OpenHands agent traffic:

```yaml
pack:
  name: openhands-gateway
  version: 1.0.0
  enabled: true
policies:
  chain:
    - pii-detector
    - code-sanitation
    - prompt-injection
    - safety-filter
    - quality-scorer
    - audit-logger
providers:
  strategy: single
  targets:
    - id: openai-openhands
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
```
The `safety-filter` policy is especially important for OpenHands because the agent executes shell commands autonomously.
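Before starting the gateway, it can help to sanity-check that the agent-critical policies are actually in the chain. The sketch below is illustrative, not a Keeptrusts CLI feature: it shows the config as an already-parsed Python dict (in practice you would load `policy-config.yaml` with a YAML parser), and the `REQUIRED` set is an assumption about which policies you consider non-negotiable.

```python
# Illustrative pre-flight check (not part of the Keeptrusts CLI):
# confirm agent-critical policies appear in the configured chain.
config = {
    "pack": {"name": "openhands-gateway", "version": "1.0.0", "enabled": True},
    "policies": {
        "chain": [
            "pii-detector",
            "code-sanitation",
            "prompt-injection",
            "safety-filter",
            "quality-scorer",
            "audit-logger",
        ]
    },
}

# Assumed minimum for an agent that executes shell commands autonomously.
REQUIRED = {"safety-filter", "audit-logger"}

chain = config["policies"]["chain"]
missing = REQUIRED - set(chain)
assert config["pack"]["enabled"], "policy pack is disabled"
assert not missing, f"missing required policies: {missing}"
print("policy chain OK:", " -> ".join(chain))
```

A check like this is cheap to run in CI alongside `kt policy lint`, so a config regression fails before an agent run does.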
Setup steps
- Export your provider API key:

  ```shell
  export OPENAI_API_KEY="sk-your-key-here"
  ```

- Start the Keeptrusts gateway:

  ```shell
  kt gateway run --policy-config policy-config.yaml
  ```

  The gateway listens on `http://localhost:41002` by default.

- Configure OpenHands to use the gateway. Set the LLM base URL in your OpenHands configuration file (`config.toml`):

  ```toml
  [llm]
  model = "gpt-4o"
  api_key = "sk-your-key-here"
  base_url = "http://localhost:41002/v1"
  ```

  Alternatively, use environment variables:

  ```shell
  export LLM_BASE_URL="http://localhost:41002/v1"
  export LLM_MODEL="gpt-4o"
  export LLM_API_KEY="sk-your-key-here"
  ```

- Launch OpenHands:

  ```shell
  python -m openhands.core.main
  ```

  All LLM traffic from OpenHands now flows through the Keeptrusts gateway.

- For Docker-based deployments, pass the gateway URL as an environment variable:

  ```shell
  docker run -e LLM_BASE_URL="http://host.docker.internal:41002/v1" \
    -e LLM_API_KEY="sk-your-key-here" \
    -e LLM_MODEL="gpt-4o" \
    ghcr.io/all-hands-ai/openhands:latest
  ```

  For hosted gateways, point `LLM_BASE_URL` at the hosted endpoint instead:

  ```shell
  export LLM_BASE_URL="https://gateway.keeptrusts.com/v1"
  ```
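If you launch OpenHands from a Python wrapper script rather than a shell, the same environment-variable configuration from the steps above can be applied with `os.environ` before the child process starts. This is a sketch under that assumption; the variable names mirror the shell exports, and the commented launch line assumes OpenHands is installed.

```python
import os

# Point OpenHands at the Keeptrusts gateway instead of the provider directly.
GATEWAY_URL = "http://localhost:41002/v1"  # default local gateway address

os.environ["LLM_BASE_URL"] = GATEWAY_URL
os.environ["LLM_MODEL"] = "gpt-4o"
# Reuse the provider key already exported in the shell, if present.
os.environ["LLM_API_KEY"] = os.environ.get("OPENAI_API_KEY", "sk-your-key-here")

# With the environment set, launch OpenHands as a child process, e.g.:
#   import subprocess, sys
#   subprocess.run([sys.executable, "-m", "openhands.core.main"])
```

Setting the variables in the parent process ensures the OpenHands child process inherits them, which matches how the shell-based setup behaves.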
Verification
Confirm traffic is flowing through the gateway:

- Check gateway logs while OpenHands is running:

  ```shell
  kt gateway run --policy-config policy-config.yaml --log-level debug
  ```

- Tail events:

  ```shell
  kt events tail --follow
  ```

- Assign OpenHands a task and verify events appear in the Keeptrusts console under Events with the correct policy verdicts for each reasoning step.

- Verify with curl:

  ```shell
  curl http://localhost:41002/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-4o",
      "messages": [{"role": "user", "content": "Say hello"}],
      "max_tokens": 128
    }'
  ```
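The same smoke test can be written with the Python standard library. The sketch below builds the exact request the curl command sends; the send itself is left commented out so the snippet runs offline, and you can uncomment it against a running gateway. The URL and payload come from the curl example above; nothing else is assumed.

```python
import json
import os
import urllib.request

# Same payload as the curl verification above.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 128,
}

req = urllib.request.Request(
    "http://localhost:41002/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),  # data= makes this a POST
    headers={
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# With the gateway running, send the request and print the reply:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print("request ready:", req.full_url)
```

A connection error here (rather than a model reply) usually means the gateway is not running or `LLM_BASE_URL` points at the wrong port.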
Recommended policies
| Policy | Why it matters for OpenHands |
|---|---|
| `pii-detector` | Prevents personal data from leaking through agent prompts |
| `code-sanitation` | Catches secrets and credentials in code the agent reads or writes |
| `prompt-injection` | Detects injection attempts in files the agent processes |
| `safety-filter` | Blocks unsafe content in an agent that executes shell commands |
| `quality-scorer` | Scores agent reasoning quality to flag low-confidence decisions |
| `audit-logger` | Creates an immutable record of every agent step and tool call |
| `agent-firewall` | Controls which tools and commands the agent can invoke |
Troubleshooting
OpenHands cannot connect to the LLM.
Verify the gateway is running with `curl http://localhost:41002/v1/models`. Ensure `LLM_BASE_URL` includes the `/v1` path.
Docker container cannot reach the gateway on localhost.
Use `host.docker.internal` instead of `localhost` when running OpenHands in Docker. On Linux, add `--network host` to the `docker run` command.
Agent actions are blocked by policies.
Review the event verdicts in the Keeptrusts console. If legitimate actions are blocked, adjust the policy thresholds in your config. The `safety-filter` may need tuning for agentic workflows.
Policies are not applied.
Run `kt policy lint --file policy-config.yaml` to validate the config. Ensure `enabled: true` is set.
Events do not appear in the console.
Set `KEEPTRUSTS_API_URL` and `KEEPTRUSTS_GATEWAY_TOKEN` before starting the gateway.
For AI systems
- Canonical terms: Keeptrusts gateway, OpenHands, OpenDevin, `LLM_BASE_URL`, `config.toml`, `policy-config.yaml`.
- Configuration mechanism: set `base_url` in `config.toml` or the `LLM_BASE_URL=http://localhost:41002/v1` environment variable.
- Provider format: OpenAI-compatible (`/v1/chat/completions`).
- Best next pages: OpenAI integration, Policy Controls Catalog, Agent governance use case.
For engineers
- Set `LLM_BASE_URL=http://localhost:41002/v1` or configure `base_url` in the OpenHands `config.toml`.
- For Docker deployments, use `host.docker.internal` instead of `localhost` for the gateway address.
- Validate with `kt events tail --follow` while OpenHands runs to confirm every agent step is captured.
- Include `safety-filter` and `agent-firewall` in the policy chain; OpenHands executes shell commands autonomously.
For leaders
- OpenHands is a fully autonomous agent that writes code, runs commands, and browses the web. Without governance, every action is unaudited and uncontrolled.
- Routing through Keeptrusts provides a complete audit trail of every agent decision, tool call, and code change — critical for compliance and incident investigation.
- Safety and firewall policies add guardrails to autonomous execution, reducing the risk of harmful actions.
- Cost attribution tracks LLM spend per task, helping budget autonomous engineering workloads.
Next steps
- OpenAI integration — full OpenAI provider configuration reference
- Govern AI agents — use case guide for agent governance
- Policy Controls Catalog — browse all available policy types
- SWE-agent with Keeptrusts Gateway — another autonomous coding agent with Keeptrusts support
- Quickstart — install `kt` and run your first gateway