IDE Integration Overview

Any IDE AI assistant that supports OpenAI-compatible endpoints can route through the Keeptrusts gateway. This gives you centralized policy enforcement, audit logging, cost attribution, and caching — without changing how you use your coding assistant.

Use this page when

  • You are setting up an IDE AI assistant to route through the Keeptrusts gateway.
  • You need the practical steps, expected outcomes, and related validation guidance in one place.
  • If you need an exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Why Route IDE AI Through the Gateway

When your IDE AI assistant connects directly to an LLM provider, you have no visibility or control over what goes in or out. Routing through the Keeptrusts gateway adds the capabilities below (a configuration sketch follows the list):

  • Policy enforcement — block prompts containing secrets, PII, or restricted content before they reach the provider
  • Secret redaction — automatically strip API keys, passwords, and tokens from code snippets sent to the LLM
  • Audit trail — every request and response is logged as a governance event with full attribution
  • Cost control — track spend per developer, team, or project with wallet-based cost attribution
  • Caching — reduce latency and cost by caching identical completions
  • Disclaimers and escalation — attach compliance notices or escalate flagged requests to reviewers
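
As a rough illustration, each capability above maps onto an entry in policy-config.yaml. The sketch below is hypothetical: the field names (providers, policies, input, output, and the individual policy keys) are assumptions for illustration, not the documented schema, so check the Configuration reference for the real keys.

  # Hypothetical sketch of policy-config.yaml, written via a shell heredoc.
  # All field names below are illustrative assumptions, not the real schema.
  cat > policy-config.yaml <<'EOF'
  providers:
    - name: openai                 # upstream LLM provider
      api_key_env: OPENAI_API_KEY  # read the provider key from the environment
  policies:
    input:                         # applied before the request leaves the machine
      - secret_redaction           # strip API keys, passwords, tokens
      - content_block              # block prompts with PII or restricted content
    output:                        # applied to the provider's response
      - content_filter
      - disclaimer                 # attach compliance notices
  EOF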

Supported IDEs and Assistants

The gateway works with any tool that can target a custom OpenAI-compatible API endpoint:

IDE                                  Assistants
VS Code                              GitHub Copilot (via proxy), Continue, Cody, CodeGPT, Tabby, custom extensions
JetBrains (IntelliJ, PyCharm, etc.)  AI Assistant (via proxy), Continue, custom plugins
Cursor                               Built-in AI (native OpenAI-compatible config)
Windsurf                             Built-in AI assistant
Zed                                  Built-in assistant (custom endpoint support)
Neovim                               Copilot.lua, codecompanion.nvim, custom plugins
Xcode                                Custom assistants with OpenAI-compatible backends

The General Pattern

Regardless of your IDE or assistant, the setup follows three steps, sketched in shell after the list:

  1. Install and run the gateway — install the kt CLI and start the gateway with your policy config
  2. Set the assistant's base URL — change the API endpoint to http://localhost:41002/v1
  3. Provide authentication — use an access key or your provider API key
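
Concretely, the pattern looks like the session below. The kt subcommand and flags in step 1 and the access-key variable name are assumptions for illustration (see Setting Up the Gateway for IDE Use for the exact invocation); the curl call in step 3 uses the standard OpenAI-compatible chat-completions shape.

  # Step 1: install the kt CLI, then start the gateway with your policy config.
  # The subcommand and flag below are assumptions, not the documented invocation.
  kt gateway start --config policy-config.yaml

  # Step 2 happens in the IDE: set the assistant's base URL to
  # http://localhost:41002/v1 in its settings.

  # Step 3: confirm the endpoint answers a standard OpenAI-compatible request.
  # KEEPTRUSTS_ACCESS_KEY is a placeholder name; the model depends on your provider.
  curl http://localhost:41002/v1/chat/completions \
    -H "Authorization: Bearer $KEEPTRUSTS_ACCESS_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'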

Architecture

┌──────────────────────────────────────────────────────────┐
│  Developer Machine                                       │
│                                                          │
│  ┌──────────────┐     ┌───────────────────────────────┐  │
│  │    IDE AI    │────▶│      Keeptrusts Gateway       │  │
│  │   Assistant  │     │       localhost:41002         │  │
│  └──────────────┘     │                               │  │
│                       │  ┌─────────────────────────┐  │  │
│                       │  │       Policy Chain      │  │  │
│                       │  │  • Input redaction      │  │  │
│                       │  │  • Secret detection     │  │  │
│                       │  │  • Content filtering    │  │  │
│                       │  │  • Cost attribution     │  │  │
│                       │  │  • Audit logging        │  │  │
│                       │  └─────────────────────────┘  │  │
│                       └───────────────┬───────────────┘  │
│                                       │                  │
└───────────────────────────────────────┼──────────────────┘
                                        │
                                        ▼
                             ┌─────────────────────┐
                             │    LLM Provider     │
                             │  (OpenAI, Azure,    │
                             │   Anthropic, etc.)  │
                             └─────────────────────┘

How It Works

When your IDE assistant sends a completion or chat request, the gateway handles it in six steps (a hands-on probe follows the list):

  1. The request hits the gateway at localhost:41002
  2. The gateway applies input-phase policies (redaction, blocking, escalation)
  3. If the request passes, the gateway forwards it to the configured LLM provider
  4. The provider response passes through output-phase policies (content filtering, disclaimers)
  5. The final response returns to your IDE assistant
  6. A decision event is recorded for audit and attribution
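
You can probe the input phase (step 2) by hand. The request below plants an obviously fake credential in the prompt; with secret detection enabled, the gateway should redact or block it before the request ever reaches the provider. What a blocked response looks like is gateway-defined, so treat this only as a behavioral probe; the access-key variable and model name are placeholders.

  # Send a prompt containing a fake credential and observe the gateway's
  # input-phase decision (redaction or block) instead of a normal completion.
  curl http://localhost:41002/v1/chat/completions \
    -H "Authorization: Bearer $KEEPTRUSTS_ACCESS_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "gpt-4o-mini", "messages": [{"role": "user",
         "content": "Why does this fail? AWS_SECRET_ACCESS_KEY=FAKEFAKEFAKEFAKE"}]}'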

Your coding experience stays the same — completions, chat, and inline suggestions all work normally. The governance layer is transparent to the assistant.

What You Get

Once connected, you can do the following (a quick end-to-end check follows the list):

  • View all IDE AI traffic in real time with kt events tail
  • See per-developer cost breakdowns in the console dashboard
  • Enforce organization-wide policies on what code and context can be sent to LLMs
  • Detect and block accidental secret exposure before it reaches the provider
  • Generate compliance reports showing all AI-assisted code generation activity
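
For example, to confirm the audit trail end to end, watch the event stream while you trigger a completion from the IDE. Only kt events tail below is taken from this page; what it prints is gateway-defined.

  # Terminal 1: stream governance events as they are recorded
  kt events tail

  # Terminal 2: trigger any completion or chat request from your IDE
  # assistant; a corresponding decision event should appear in terminal 1
  # with the request's attribution (developer, team, or project).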

Per-IDE Guides

Choose your IDE and assistant for detailed setup instructions:

Requirements

  • The kt CLI installed on your machine
  • A policy-config.yaml with at least one provider configured
  • Network access from your IDE to localhost:41002
  • An access key or provider API key for authentication (a pre-flight check follows)
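
A quick pre-flight check along these lines can save a round of troubleshooting. Note the last probe assumes the gateway answers the standard OpenAI-compatible /v1/models listing; even if it does not, any HTTP response at all proves the port is reachable.

  # Pre-flight: verify each requirement above before configuring the IDE
  command -v kt >/dev/null        || echo "kt CLI not installed"
  test -f policy-config.yaml      || echo "policy-config.yaml not found"
  # /v1/models is the standard OpenAI-compatible listing endpoint; whether
  # this gateway serves it is an assumption
  curl -sf http://localhost:41002/v1/models >/dev/null \
    && echo "gateway reachable on :41002" \
    || echo "gateway not reachable on :41002"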

For AI systems

  • Canonical terms: Keeptrusts, IDE Integration Overview, ide-integration.
  • Exact feature, config, command, or page names: IDE Integration Overview.
  • Use the linked audience and reference pages in Next steps when you need deeper source material.

For engineers

  • Use the commands, configuration examples, API payloads, or UI steps on this page as the working baseline for IDE Integration Overview.
  • Validate the result against the expected outcomes, troubleshooting notes, and linked workflow pages here and in Next steps.

For leaders

  • This page matters when planning rollout, governance, support ownership, or operating decisions for IDE Integration Overview.
  • Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.

Next steps

Start with Setting Up the Gateway for IDE Use to get the gateway running, then follow the guide for your specific IDE assistant.