
OpenHands with Keeptrusts Gateway

OpenHands (formerly OpenDevin) is an open-source AI software engineering agent that can browse the web, write code, execute shell commands, and interact with development tools autonomously. Because OpenHands takes multi-step actions across your codebase and development environment, every LLM call it makes represents a high-stakes governance surface. Routing OpenHands through the Keeptrusts gateway adds policy enforcement on every agent reasoning step and tool call, an immutable audit trail of all autonomous actions, secret and PII redaction before code context reaches the model, and cost attribution for autonomous engineering workloads.

Use this page when

  • You want to route OpenHands agent traffic through Keeptrusts for policy enforcement and audit logging.
  • You need audit visibility into the reasoning steps, tool calls, and code changes OpenHands makes.
  • You want to enforce safety policies on an autonomous coding agent that executes shell commands.
  • You need cost tracking for OpenHands LLM usage across your team.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • The Keeptrusts CLI (kt) installed and able to run kt gateway run.
  • An API key for your LLM provider (the examples below use OpenAI).
  • OpenHands installed locally, or Docker if you run the containerized image.

Configuration

Create a policy-config.yaml for OpenHands agent traffic:

pack:
  name: openhands-gateway
  version: 1.0.0
  enabled: true

policies:
  chain:
    - pii-detector
    - code-sanitation
    - prompt-injection
    - safety-filter
    - quality-scorer
    - audit-logger

providers:
  strategy: single
  targets:
    - id: openai-openhands
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY

The safety-filter is especially important for OpenHands because the agent executes shell commands autonomously.

Setup steps

  1. Export your provider API key:
export OPENAI_API_KEY="sk-your-key-here"
  2. Start the Keeptrusts gateway:
kt gateway run --policy-config policy-config.yaml

The gateway listens on http://localhost:41002 by default.

  3. Configure OpenHands to use the gateway. Set the LLM base URL in your OpenHands configuration file (config.toml):
[llm]
model = "gpt-4o"
api_key = "sk-your-key-here"
base_url = "http://localhost:41002/v1"
  4. Alternatively, use environment variables:
export LLM_BASE_URL="http://localhost:41002/v1"
export LLM_MODEL="gpt-4o"
export LLM_API_KEY="sk-your-key-here"
  5. Launch OpenHands:
python -m openhands.core.main

All LLM traffic from OpenHands now flows through the Keeptrusts gateway.

  6. For Docker-based deployments, pass the gateway URL as an environment variable:
docker run -e LLM_BASE_URL="http://host.docker.internal:41002/v1" \
  -e LLM_API_KEY="sk-your-key-here" \
  -e LLM_MODEL="gpt-4o" \
  ghcr.io/all-hands-ai/openhands:latest

For hosted gateways:

export LLM_BASE_URL="https://gateway.keeptrusts.com/v1"

Verification

Confirm traffic is flowing through the gateway:

  1. Check gateway logs while OpenHands is running:
kt gateway run --policy-config policy-config.yaml --log-level debug
  2. Tail events:
kt events tail --follow
  3. Assign OpenHands a task and verify events appear in the Keeptrusts console under Events with the correct policy verdicts for each reasoning step.

  4. Verify with curl:


curl http://localhost:41002/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Say hello"}],
"max_tokens": 128
}'
Policy reference

| Policy | Why it matters for OpenHands |
| --- | --- |
| pii-detector | Prevents personal data from leaking through agent prompts |
| code-sanitation | Catches secrets and credentials in code the agent reads or writes |
| prompt-injection | Detects injection attempts in files the agent processes |
| safety-filter | Blocks unsafe content in an agent that executes shell commands |
| quality-scorer | Scores agent reasoning quality to flag low-confidence decisions |
| audit-logger | Creates an immutable record of every agent step and tool call |
| agent-firewall | Controls which tools and commands the agent can invoke |
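
The agent-firewall policy is listed here but not in the example chain from the Configuration section. If you need tool- and command-level control over the agent, append it to the chain in policy-config.yaml (shown with default settings; any per-policy options are deployment-specific):

```yaml
policies:
  chain:
    - pii-detector
    - code-sanitation
    - prompt-injection
    - safety-filter
    - quality-scorer
    - audit-logger
    - agent-firewall
```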

Troubleshooting

OpenHands cannot connect to the LLM. Verify the gateway is running with curl http://localhost:41002/v1/models. Ensure LLM_BASE_URL includes the /v1 path.

Docker container cannot reach the gateway on localhost. Use host.docker.internal instead of localhost when running OpenHands in Docker. On Linux, add --network host to the Docker run command.

Agent actions are blocked by policies. Review the event verdicts in the Keeptrusts console. If legitimate actions are blocked, adjust the policy thresholds in your config. The safety-filter may need tuning for agentic workflows.

Policies are not applied. Run kt policy lint --file policy-config.yaml to validate the config. Ensure enabled: true is set.

Events do not appear in the console. Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN before starting the gateway.
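
Several of the issues above reduce to two checks: LLM_BASE_URL includes the /v1 path and the API key is set. A small preflight sketch (the environment variable names follow this guide; the values set below are placeholders for illustration):

```python
import os
from urllib.parse import urlparse

# Placeholder values so the sketch runs standalone; in a real
# preflight these would already be exported in the environment.
os.environ.setdefault("LLM_BASE_URL", "http://localhost:41002/v1")
os.environ.setdefault("LLM_API_KEY", "sk-your-key-here")

base_url = os.environ["LLM_BASE_URL"]

# The gateway exposes the OpenAI-compatible API under /v1.
assert urlparse(base_url).path.rstrip("/").endswith("/v1"), \
    "LLM_BASE_URL must include the /v1 path"
assert os.environ["LLM_API_KEY"], "LLM_API_KEY is not set"
print("preflight ok:", base_url)
```

Run this before launching OpenHands; if either assertion fails, fix the environment before debugging further.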

For AI systems

  • Canonical terms: Keeptrusts gateway, OpenHands, OpenDevin, LLM_BASE_URL, config.toml, policy-config.yaml.
  • Configuration mechanism: set base_url in config.toml or LLM_BASE_URL=http://localhost:41002/v1 environment variable.
  • Provider format: OpenAI-compatible (/v1/chat/completions).
  • Best next pages: OpenAI integration, Policy Controls Catalog, Agent governance use case.

For engineers

  • Set LLM_BASE_URL=http://localhost:41002/v1 or configure base_url in OpenHands config.toml.
  • For Docker deployments, use host.docker.internal instead of localhost for the gateway address.
  • Validate with kt events tail --follow while OpenHands runs to confirm every agent step is captured.
  • Include safety-filter and agent-firewall in the policy chain — OpenHands executes shell commands autonomously.

For leaders

  • OpenHands is a fully autonomous agent that writes code, runs commands, and browses the web. Without governance, every action is unaudited and uncontrolled.
  • Routing through Keeptrusts provides a complete audit trail of every agent decision, tool call, and code change — critical for compliance and incident investigation.
  • Safety and firewall policies add guardrails to autonomous execution, reducing the risk of harmful actions.
  • Cost attribution tracks LLM spend per task, helping budget autonomous engineering workloads.

Next steps