VS Code: Continue Extension with the Gateway

Continue is an open-source AI coding assistant for VS Code that natively supports custom OpenAI-compatible endpoints. This makes it one of the easiest IDE assistants to integrate with the Keeptrusts gateway — no proxy tricks required.

Use this page when

  • You are integrating the Continue extension in VS Code with the Keeptrusts gateway as an implementation or operating workflow.
  • You need the practical steps, expected outcomes, and related validation guidance in one place.
  • For an exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Why Continue with Keeptrusts

Continue sends completion and chat requests to whichever LLM endpoint you configure. By pointing it at the Keeptrusts gateway, you get:

  • Policy enforcement on every code completion and chat interaction
  • Secret redaction before code context reaches the LLM
  • Full audit trail of AI-assisted development
  • Cost attribution per developer
  • Caching for repeated queries

Prerequisites

  • Gateway running on localhost:41002
  • VS Code installed
  • Provider credentials configured in the gateway

Install Continue

  1. Open VS Code
  2. Go to the Extensions view (Cmd+Shift+X on macOS, Ctrl+Shift+X on Windows/Linux)
  3. Search for "Continue"
  4. Install the "Continue - Codestral, Claude, and more" extension
  5. Reload VS Code when prompted

Configure Continue for the Gateway

Continue stores its configuration in ~/.continue/config.json. Open this file to configure models that route through the gateway.

Open the Config File

Press Cmd+Shift+P (Ctrl+Shift+P on Windows/Linux) and type "Continue: Open config.json", or open the file directly:

code ~/.continue/config.json

Basic Configuration

Replace or update the models array to route through the gateway:

{
  "models": [
    {
      "title": "GPT-4o (via Keeptrusts)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "your-access-key"
    },
    {
      "title": "Claude Sonnet (via Keeptrusts)",
      "provider": "openai",
      "model": "claude-sonnet-4-20250514",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "your-access-key"
    },
    {
      "title": "GPT-4o Mini (via Keeptrusts)",
      "provider": "openai",
      "model": "gpt-4o-mini",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "your-access-key"
    }
  ]
}

Tip: Use "provider": "openai" for all models regardless of the actual LLM provider. The Keeptrusts gateway handles provider routing based on the model name in your policy-config.yaml.
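
For context, model-name routing in policy-config.yaml might look like the sketch below. The keys shown are illustrative assumptions, not the actual Keeptrusts schema; see the gateway configuration reference for the real field names.

```yaml
# Hypothetical sketch only — these field names are assumptions, not the real schema.
routing:
  - match_model: "gpt-*"        # gpt-4o, gpt-4o-mini, ...
    provider: openai
  - match_model: "claude-*"     # claude-sonnet-4-20250514, ...
    provider: anthropic
```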

Using Provider Keys Directly

If you are not using Keeptrusts access keys, you can pass the provider API key directly. The gateway forwards it to the upstream provider:

{
  "models": [
    {
      "title": "GPT-4o (via Keeptrusts)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "sk-your-openai-key"
    }
  ]
}

Configure Tab Autocomplete

Continue supports tab-based code completion. Route autocomplete through the gateway by configuring the tabAutocompleteModel:

{
  "tabAutocompleteModel": {
    "title": "Autocomplete (via Keeptrusts)",
    "provider": "openai",
    "model": "gpt-4o-mini",
    "apiBase": "http://localhost:41002/v1",
    "apiKey": "your-access-key"
  }
}

This ensures tab completions also pass through your policy chain for redaction and logging.

Configure Embeddings

If you use Continue's @codebase context provider, route embedding requests through the gateway:

{
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "apiBase": "http://localhost:41002/v1",
    "apiKey": "your-access-key"
  }
}

Full Configuration Example

Here is a complete config.json with chat models, autocomplete, and embeddings all routing through the gateway:

{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "your-access-key"
    },
    {
      "title": "Claude Sonnet",
      "provider": "openai",
      "model": "claude-sonnet-4-20250514",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "your-access-key"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Fast Autocomplete",
    "provider": "openai",
    "model": "gpt-4o-mini",
    "apiBase": "http://localhost:41002/v1",
    "apiKey": "your-access-key"
  },
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "apiBase": "http://localhost:41002/v1",
    "apiKey": "your-access-key"
  }
}
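
All of the entries above differ only in their gateway fields. A small script can apply those fields to an existing config uniformly; route_through_gateway below is a hypothetical helper written for this guide, not part of Continue or Keeptrusts:

```python
GATEWAY_BASE = "http://localhost:41002/v1"  # the gateway address used throughout this guide

def route_through_gateway(config: dict, api_key: str) -> dict:
    """Point every model entry in a Continue config dict at the gateway."""

    def patch(entry: dict) -> None:
        entry["provider"] = "openai"   # the gateway speaks the OpenAI wire format
        entry["apiBase"] = GATEWAY_BASE
        entry["apiKey"] = api_key

    for model in config.get("models", []):
        patch(model)
    for key in ("tabAutocompleteModel", "embeddingsProvider"):
        if key in config:
            patch(config[key])
    return config
```

To use it, parse ~/.continue/config.json with json.load, pass the dict through route_through_gateway, and write the result back with json.dump.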

Verify the Integration

  1. Open a code file in VS Code
  2. Start a Continue chat (Cmd+L on macOS, Ctrl+L on Windows/Linux) and ask a question
  3. In a separate terminal, watch for events:
kt events tail

You should see events logged for each chat message and completion:

[2024-01-15 10:30:12] INPUT openai chat/completions user:dev1 PASS
[2024-01-15 10:30:14] OUTPUT openai chat/completions user:dev1 PASS
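
Each event line has a fixed shape, which makes it easy to aggregate — for example, for the per-developer cost attribution mentioned earlier. The parser below is a sketch that assumes exactly the line format shown above:

```python
import re
from collections import Counter

# Matches lines like:
#   [2024-01-15 10:30:12] INPUT openai chat/completions user:dev1 PASS
EVENT_RE = re.compile(
    r"\[(?P<ts>[\d-]+ [\d:]+)\] (?P<direction>INPUT|OUTPUT) "
    r"(?P<provider>\S+) (?P<endpoint>\S+) user:(?P<user>\S+) (?P<verdict>\S+)"
)

def parse_event(line):
    """Return the event fields as a dict, or None if the line does not match."""
    m = EVENT_RE.match(line.strip())
    return m.groupdict() if m else None

def requests_per_user(lines):
    """Count one request per INPUT event, keyed by developer."""
    return Counter(
        e["user"]
        for line in lines
        if (e := parse_event(line)) and e["direction"] == "INPUT"
    )
```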

Verify Policy Enforcement

Test secret redaction by sending a message containing a fake credential:

Can you explain this code? The API key is sk-test123456789

Check the event log to confirm the key was redacted before reaching the provider.
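
The gateway's actual redaction rules are internal, but the principle can be sketched with a simple pattern match. This is an illustration only, not Keeptrusts' implementation:

```python
import re

# Illustrative pattern for OpenAI-style keys; a real rule set covers many more formats.
SECRET_PATTERNS = [re.compile(r"sk-[A-Za-z0-9]{8,}")]

def redact(text):
    """Replace anything matching a known secret pattern before it leaves the machine."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Applied to the test message above, the fake key becomes [REDACTED], which is the behavior you should see reflected in the event log.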

Troubleshooting

"Connection refused" error

  • Verify the gateway is running: curl http://localhost:41002/v1/models
  • Check the port matches between your config and the gateway

"Unauthorized" or 401 errors

  • Verify your apiKey value is correct
  • If using access keys, ensure the key is active in the console
  • If using provider keys, confirm the key works directly with the provider

Models not appearing in Continue

  • Restart VS Code after editing config.json
  • Check for JSON syntax errors in the config file
  • Verify the model names match what the gateway exposes at /v1/models
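
A quick way to check the last point is to diff your config against the gateway's model list. This sketch assumes the OpenAI-style /v1/models response shape ({"data": [{"id": ...}]}):

```python
import json
import urllib.request

def configured_models(config):
    """Collect every model name referenced in a Continue config dict."""
    entries = list(config.get("models", []))
    for key in ("tabAutocompleteModel", "embeddingsProvider"):
        if key in config:
            entries.append(config[key])
    return {e["model"] for e in entries if "model" in e}

def gateway_models(base="http://localhost:41002/v1"):
    """Model IDs the gateway currently exposes."""
    with urllib.request.urlopen(f"{base}/models") as resp:
        return {m["id"] for m in json.load(resp)["data"]}

def missing_models(config, available):
    """Names in the config that the gateway does not expose."""
    return configured_models(config) - available
```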

Slow responses

  • The gateway adds minimal overhead (<10ms)
  • Check kt events tail for policy evaluation timing
  • Ensure the gateway has connectivity to your LLM provider

For AI systems

  • Canonical terms: Keeptrusts, VS Code: Continue Extension with the Gateway, ide-integration.
  • Exact feature, config, command, or page names: VS Code: Continue Extension with the Gateway.
  • Use the linked audience and reference pages in Next steps when you need deeper source material.

For engineers

  • Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for VS Code: Continue Extension with the Gateway.
  • Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.

For leaders

  • This page matters when planning rollout, governance, support ownership, or operating decisions for VS Code: Continue Extension with the Gateway.
  • Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.

Next steps