VS Code: Custom AI Extensions with the Gateway
Many VS Code AI extensions support custom OpenAI-compatible endpoints. If an extension lets you configure an "API Base URL", "Endpoint", or "Server URL", you can point it at the Keeptrusts gateway for policy enforcement and audit logging.
Use this page when
- You are working through VS Code: Custom AI Extensions with the Gateway as an implementation or operating workflow in Keeptrusts.
- You need the practical steps, expected outcomes, and related validation guidance in one place.
- If you need exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
The General Pattern
For any VS Code AI extension that accepts a custom endpoint:
- Find the extension's endpoint or base URL setting
- Set it to http://localhost:41002/v1
- Provide your access key or provider API key in the extension's API key field
- Select a model name that matches your gateway's provider configuration
That is all you need. The gateway receives OpenAI-compatible requests, applies your policy chain, forwards to the configured upstream provider, and returns the response.
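Concretely, the request an extension sends is a standard OpenAI chat completions call. A minimal sketch of that body (the model name and message content are placeholders):

```python
import json

# Illustrative OpenAI-compatible body an extension POSTs to
# http://localhost:41002/v1/chat/completions via the gateway.
payload = {
    "model": "gpt-4o",  # must match a model your gateway's provider config recognizes
    "messages": [{"role": "user", "content": "Explain this function."}],
}

# The gateway applies its policy chain, then forwards this body upstream.
print(json.dumps(payload))
```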
Prerequisites
- Gateway running on localhost:41002
- The AI extension installed in VS Code
- An access key or provider API key
CodeGPT
CodeGPT is a popular VS Code extension that supports custom OpenAI-compatible providers.
Configuration
- Open VS Code Settings (Cmd+, on macOS, Ctrl+, on Windows/Linux)
- Search for "CodeGPT"
- Set the provider to "Custom" or "OpenAI Compatible"
- Configure:
{
"codegpt.apiKey": "your-access-key",
"codegpt.customProvider.baseUrl": "http://localhost:41002/v1",
"codegpt.customProvider.model": "gpt-4o"
}
Via Settings UI
- Open CodeGPT sidebar panel
- Click the gear icon → Provider Settings
- Select "Custom Provider"
- Enter base URL: http://localhost:41002/v1
- Enter your access key
- Select or type the model name
Tabby
Tabby is a self-hosted AI coding assistant. When running a local Tabby server, you can route its LLM backend calls through the Keeptrusts gateway.
Configure Tabby Server
In your Tabby server configuration, set the model backend to use the gateway:
[model.completion.http]
kind = "openai/completion"
api_endpoint = "http://localhost:41002/v1"
api_key = "your-access-key"
model_name = "gpt-4o-mini"
[model.chat.http]
kind = "openai/chat"
api_endpoint = "http://localhost:41002/v1"
api_key = "your-access-key"
model_name = "gpt-4o"
Configure VS Code Extension
The Tabby VS Code extension connects to your Tabby server, which then routes through the gateway:
{
"tabby.serverUrl": "http://localhost:8080"
}
The governance layer sits between your Tabby server and the LLM provider, not between VS Code and Tabby.
Generic OpenAI Client Extensions
Several VS Code extensions act as generic OpenAI API clients. These include:
- OpenAI Helper
- ChatGPT - EasyCode
- AI Toolkit
For these extensions, look for settings with names like:
- apiBaseUrl or baseUrl
- endpoint or serverUrl
- openai.apiBase
Set the URL to http://localhost:41002/v1 and provide your key.
Example: Generic Extension Settings
{
"aiExtension.apiBaseUrl": "http://localhost:41002/v1",
"aiExtension.apiKey": "your-access-key",
"aiExtension.model": "gpt-4o"
}
Extensions Using the OpenAI Node SDK
Some extensions use the OpenAI Node.js SDK internally. These respect the OPENAI_BASE_URL and OPENAI_API_KEY environment variables:
export OPENAI_BASE_URL="http://localhost:41002/v1"
export OPENAI_API_KEY="your-access-key"
code .
Launch VS Code from the terminal after setting these variables. Extensions using the SDK pick up the custom base URL automatically.
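The same resolution order can be mimicked in any language. A sketch of the lookup, assuming the SDK's documented fallback to the public OpenAI endpoint when OPENAI_BASE_URL is unset:

```python
import os

# Mirror how SDK-based extensions resolve their endpoint: the
# OPENAI_BASE_URL environment variable wins; otherwise the provider default.
os.environ["OPENAI_BASE_URL"] = "http://localhost:41002/v1"
os.environ["OPENAI_API_KEY"] = "your-access-key"

base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
print(base_url)  # http://localhost:41002/v1
```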
Using VS Code Workspace Settings
For project-specific gateway configuration, use workspace settings instead of user settings:
Create .vscode/settings.json in your project root:
{
"aiExtension.apiBaseUrl": "http://localhost:41002/v1",
"aiExtension.apiKey": "${env:KEEPTRUSTS_ACCESS_KEY}"
}
Not all extensions support environment variable interpolation in settings. Check the extension documentation.
Verify the Integration
After configuring any extension:
- Trigger an AI interaction (chat, completion, or inline edit)
- Watch for events in a separate terminal:
kt events tail
- Confirm events appear with the correct model and user attribution
Troubleshooting
Connection Refused
- Verify the gateway is running: curl http://localhost:41002/v1/models
- Check whether the URL includes /v1 — some extensions append it automatically, others do not
- If the extension adds /v1 itself, try setting the base URL to http://localhost:41002 without the path
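Whether /v1 belongs in the setting can be checked mechanically. A hypothetical helper (normalize_base_url is not part of Keeptrusts, just an illustration) that produces the right value for either behavior:

```python
def normalize_base_url(url: str, extension_appends_v1: bool) -> str:
    """Return the base URL an extension setting should hold.

    If the extension appends /v1 itself, the setting must not include it;
    otherwise the setting must end with /v1 exactly once.
    """
    url = url.rstrip("/")
    if url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url if extension_appends_v1 else url + "/v1"

print(normalize_base_url("http://localhost:41002/v1/", False))  # http://localhost:41002/v1
print(normalize_base_url("http://localhost:41002", True))       # http://localhost:41002
```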
CORS Errors
Some extensions make requests from VS Code's webview (browser context), which enforces CORS:
- Requests made from Node.js (extension host) code are not subject to CORS, so the gateway needs no CORS configuration for localhost requests from those contexts
- If you see CORS errors, the extension is likely making requests from a webview
- Check if the extension has a "use native fetch" or "server-side requests" option
Self-Signed Certificate Errors
When running the gateway with TLS and a self-signed cert:
{
"aiExtension.rejectUnauthorized": false
}
Or set the environment variable:
export NODE_TLS_REJECT_UNAUTHORIZED=0
code .
Only disable certificate verification in development. Use proper certificates for production.
Wrong Response Format
If the extension expects a specific response format that differs from what the gateway returns:
- Check the extension expects OpenAI-compatible responses (most do)
- Verify the model name matches one the gateway recognizes
- Check gateway logs for upstream errors
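An OpenAI-compatible response carries the text under choices[0].message.content. A sketch of a sanity check you could run against a captured response body (the sample below is illustrative, not a real gateway reply):

```python
import json

# Illustrative OpenAI-style chat completion body; a compatible gateway
# reply has the same top-level shape.
raw = """{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello"},
     "finish_reason": "stop"}
  ]
}"""

body = json.loads(raw)
assert body.get("object") == "chat.completion"
content = body["choices"][0]["message"]["content"]
print(content)  # Hello
```

If this shape check fails on a real capture, the upstream provider (or a policy rewrite) is returning something the extension cannot parse.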
Authentication Failures
- Ensure the API key field is not empty
- Try the raw provider key if access keys are not working
- Check the key has not expired in the console
Finding Extension Settings
If you are unsure where an extension stores its endpoint configuration:
- Open VS Code Settings (Cmd+, on macOS, Ctrl+, on Windows/Linux)
- Search for the extension name
- Look for fields labeled: API, endpoint, base URL, server, host
- Check the extension's README or documentation on the marketplace
Alternatively, search the extension's source code on GitHub for baseUrl, apiBase, or endpoint.
For AI systems
- Canonical terms: Keeptrusts, VS Code: Custom AI Extensions with the Gateway, ide-integration.
- Exact feature, config, command, or page names: VS Code: Custom AI Extensions with the Gateway.
- Use the linked audience and reference pages in Next steps when you need deeper source material.
For engineers
- Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for VS Code: Custom AI Extensions with the Gateway.
- Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.
For leaders
- This page matters when planning rollout, governance, support ownership, or operating decisions for VS Code: Custom AI Extensions with the Gateway.
- Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.
Next steps
- Access Keys and Authentication — proper key management for IDE integrations
- VS Code: Continue Extension — a well-supported option with native gateway compatibility
- IDE Integration Overview — see all supported IDEs