IDE Integration Overview
Any IDE AI assistant that supports OpenAI-compatible endpoints can route through the Keeptrusts gateway. This gives you centralized policy enforcement, audit logging, cost attribution, and caching — without changing how you use your coding assistant.
Use this page when
- You are implementing or operating IDE integration with the Keeptrusts gateway.
- You need the practical steps, expected outcomes, and related validation guidance in one place.
- If you need an exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Why Route IDE AI Through the Gateway
When your IDE AI assistant connects directly to an LLM provider, you have no visibility or control over what goes in or out. Routing through the Keeptrusts gateway adds:
- Policy enforcement — block prompts containing secrets, PII, or restricted content before they reach the provider
- Secret redaction — automatically strip API keys, passwords, and tokens from code snippets sent to the LLM
- Audit trail — every request and response is logged as a governance event with full attribution
- Cost control — track spend per developer, team, or project with wallet-based cost attribution
- Caching — reduce latency and cost by caching identical completions
- Disclaimers and escalation — attach compliance notices or escalate flagged requests to reviewers
Supported IDEs and Assistants
The gateway works with any tool that can target a custom OpenAI-compatible API endpoint:
| IDE | Assistants |
|---|---|
| VS Code | GitHub Copilot (via proxy), Continue, Cody, CodeGPT, Tabby, custom extensions |
| JetBrains (IntelliJ, PyCharm, etc.) | AI Assistant (via proxy), Continue, custom plugins |
| Cursor | Built-in AI (native OpenAI-compatible config) |
| Windsurf | Built-in AI assistant |
| Zed | Built-in assistant (custom endpoint support) |
| Neovim | Copilot.lua, codecompanion.nvim, custom plugins |
| Xcode | Custom assistants with OpenAI-compatible backends |
The General Pattern
Regardless of your IDE or assistant, the setup follows three steps:
- Install and run the gateway — install the `kt` CLI and start the gateway with your policy config
- Point the assistant's base URL — set the API endpoint to `http://localhost:41002/v1`
- Provide authentication — use an access key or your provider API key
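For assistants that read the standard OpenAI SDK environment variables, the last two steps amount to the sketch below. The variable names follow the common SDK conventions and the key value is a placeholder; many assistants expose the same settings through their own UI or config file instead, and the gateway itself must already be running via the `kt` CLI.

```shell
# Route any OpenAI-compatible tool through the local gateway.
# The key value is a placeholder — use your access key or provider API key.
export OPENAI_BASE_URL="http://localhost:41002/v1"
export OPENAI_API_KEY="kt-access-key-or-provider-key"
echo "Assistant will call: $OPENAI_BASE_URL"
```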
Architecture
┌─────────────────────────────────────────────────────────────┐
│ Developer Machine │
│ │
│ ┌──────────────┐ ┌─────────────────────────────────┐ │
│ │ IDE AI │────▶│ Keeptrusts Gateway │ │
│ │ Assistant │ │ localhost:41002 │ │
│ └──────────────┘ │ │ │
│ │ ┌─────────────────────────────┐│ │
│ │ │ Policy Chain ││ │
│ │ │ • Input redaction ││ │
│ │ │ • Secret detection ││ │
│ │ │ • Content filtering ││ │
│ │ │ • Cost attribution ││ │
│ │ │ • Audit logging ││ │
│ │ └─────────────────────────────┘│ │
│ └───────────────┬─────────────────┘ │
│ │ │
└───────────────────────────────────────┼─────────────────────┘
│
▼
┌─────────────────────┐
│ LLM Provider │
│ (OpenAI, Azure, │
│ Anthropic, etc.) │
└─────────────────────┘
How It Works
When your IDE assistant sends a completion or chat request:
- The request hits the gateway at `localhost:41002`
- The gateway applies input-phase policies (redaction, blocking, escalation)
- If the request passes, the gateway forwards it to the configured LLM provider
- The provider response passes through output-phase policies (content filtering, disclaimers)
- The final response returns to your IDE assistant
- A decision event is recorded for audit and attribution
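You can exercise this flow without an IDE by sending a raw OpenAI-compatible chat request to the gateway yourself. This is a sketch under assumptions: the model name, the `KT_ACCESS_KEY` variable, and the response shape all depend on your provider and policy config — only the `/v1/chat/completions` path is the standard OpenAI-compatible form.

```shell
# Send a standard OpenAI-style chat completion through the gateway.
# "gpt-4o" is illustrative — use a model your configured provider supports.
GATEWAY_URL="http://localhost:41002/v1/chat/completions"
curl -sS "$GATEWAY_URL" \
  -H "Authorization: Bearer $KT_ACCESS_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Say hello"}]}' \
  || echo "gateway not reachable at $GATEWAY_URL"
```

If the request is blocked by an input-phase policy, expect an error response rather than a completion, plus a corresponding decision event in the audit log.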
Your coding experience stays the same — completions, chat, and inline suggestions all work normally. The governance layer is transparent to the assistant.
What You Get
Once connected, you can:
- View all IDE AI traffic in real time with `kt events tail`
- See per-developer cost breakdowns in the console dashboard
- Enforce organization-wide policies on what code and context can be sent to LLMs
- Detect and block accidental secret exposure before it reaches the provider
- Generate compliance reports showing all AI-assisted code generation activity
Per-IDE Guides
Choose your IDE and assistant for detailed setup instructions:
- Setting Up the Gateway for IDE Use
- VS Code: GitHub Copilot
- VS Code: Continue Extension
- VS Code: Sourcegraph Cody
- VS Code: Custom AI Extensions
- Access Keys and Authentication
Requirements
- The `kt` CLI installed on your machine
- A `policy-config.yaml` with at least one provider configured
- Network access from your IDE to `localhost:41002`
- An access key or provider API key for authentication
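A minimal `policy-config.yaml` might look like the fragment below. This is a hypothetical shape: the field names (`providers`, `api_key_env`, `policies`) are illustrative assumptions, not the actual schema — consult the policy configuration reference linked from Next steps for the real field names.

```yaml
# Illustrative only — field names are assumptions, not the real schema.
providers:
  - name: openai
    api_key_env: OPENAI_API_KEY   # read the provider key from the environment
policies:
  - secret-redaction
  - audit-logging
```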
For AI systems
- Canonical terms: Keeptrusts, IDE Integration Overview, ide-integration.
- Exact feature, config, command, or page names: IDE Integration Overview.
- Use the linked audience and reference pages in Next steps when you need deeper source material.
For engineers
- Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for IDE Integration Overview.
- Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.
For leaders
- This page matters when planning rollout, governance, support ownership, or operating decisions for IDE Integration Overview.
- Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.
Next steps
Start with Setting Up the Gateway for IDE Use to get the gateway running, then follow the guide for your specific IDE assistant.