JetBrains: Continue Plugin with the Gateway
Continue is an open-source AI code assistant that supports custom OpenAI-compatible endpoints natively. You can route all Continue traffic in JetBrains IDEs through the Keeptrusts gateway for policy enforcement, cost attribution, and audit logging.
Use this page when
- You are working through JetBrains: Continue Plugin with the Gateway as an implementation or operating workflow in Keeptrusts.
- You need the practical steps, expected outcomes, and related validation guidance in one place.
- If you need exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
Before you begin, ensure you have:
- A JetBrains IDE (IntelliJ IDEA, PyCharm, WebStorm, or any other JetBrains IDE), version 2023.3 or later
- The kt CLI installed with a policy configuration ready
- The gateway running with kt gateway run
- An access key for the gateway (or a provider API key passed through)
Start the gateway:
kt gateway run --policy-config policy-config.yaml
Install the Continue Plugin
- Open your JetBrains IDE.
- Navigate to Settings → Plugins → Marketplace.
- Search for Continue.
- Click Install and restart the IDE when prompted.
After installation, the Continue panel appears in your IDE's tool window bar.
Configure Continue to Use the Gateway
Continue stores its configuration in ~/.continue/config.json. Open this file in any editor and add a model entry that points to the Keeptrusts gateway:
{
"models": [
{
"title": "GPT-4o (via Keeptrusts)",
"provider": "openai",
"model": "gpt-4o",
"apiBase": "http://localhost:41002/v1",
"apiKey": "your-access-key"
}
]
}
Replace your-access-key with your Keeptrusts access key or the upstream provider API key that the gateway forwards.
Multiple Models Through the Gateway
You can route multiple models through the same gateway. Each model entry uses the gateway as its base URL, and the gateway resolves the correct upstream provider:
{
"models": [
{
"title": "GPT-4o (via Keeptrusts)",
"provider": "openai",
"model": "gpt-4o",
"apiBase": "http://localhost:41002/v1",
"apiKey": "your-access-key"
},
{
"title": "Claude Sonnet (via Keeptrusts)",
"provider": "openai",
"model": "claude-sonnet-4-20250514",
"apiBase": "http://localhost:41002/v1",
"apiKey": "your-access-key"
},
{
"title": "Llama 3 (via Keeptrusts)",
"provider": "openai",
"model": "llama-3-70b",
"apiBase": "http://localhost:41002/v1",
"apiKey": "your-access-key"
}
]
}
The gateway applies your policy chain to every request regardless of which model you select in the Continue panel.
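The exact policy schema is defined in the policy reference pages, but for orientation, a purely hypothetical policy-config.yaml sketch (these keys are illustrative, not the real Keeptrusts schema) shows the kind of chain that would apply to every model above:

```yaml
# Hypothetical sketch only — the keys below are illustrative.
# See the policy reference for the actual Keeptrusts schema.
policies:
  - type: redact      # strip matched secrets/PII before forwarding upstream
    patterns: [api_key, email]
  - type: audit       # record every request/response for kt events tail
```

Because all three model entries share the same apiBase, this single chain governs GPT-4o, Claude, and Llama traffic alike.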
Configure Inline Completions
Continue also provides tab-completion (inline autocomplete). To route completions through the gateway, add a tabAutocompleteModel entry:
{
"models": [
{
"title": "GPT-4o (via Keeptrusts)",
"provider": "openai",
"model": "gpt-4o",
"apiBase": "http://localhost:41002/v1",
"apiKey": "your-access-key"
}
],
"tabAutocompleteModel": {
"title": "Fast Completions (via Keeptrusts)",
"provider": "openai",
"model": "gpt-4o-mini",
"apiBase": "http://localhost:41002/v1",
"apiKey": "your-access-key"
}
}
Verify Traffic Through the Gateway
After saving your configuration, open the Continue chat panel in your JetBrains IDE and send a message. Then verify the event appears:
kt events tail
You see the request, policy evaluation result, and upstream response logged. Each event shows the model, token count, and any policy actions applied (redaction, blocking, disclaimers).
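If you want to confirm the gateway independently of the IDE, you can exercise its OpenAI-compatible endpoint directly. This assumes the gateway is running locally on port 41002 as configured above; substitute your own access key:

```shell
# Send a minimal chat completion through the gateway; the request
# should appear in `kt events tail` just like a Continue request.
curl -s http://localhost:41002/v1/chat/completions \
  -H "Authorization: Bearer your-access-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```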
Shared Configuration with VS Code
The ~/.continue/config.json file is shared between VS Code and JetBrains IDEs when Continue is installed in both editors. Any changes you make apply to both environments, so your gateway routing configuration works identically in both IDEs without duplication.
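Because one file drives both editors, a quick sanity check that every entry routes through the gateway can save debugging time. A minimal sketch (the inline config is a stand-in for ~/.continue/config.json, which you would normally json.load(); adjust the gateway URL if yours differs):

```python
# Stand-in for the contents of ~/.continue/config.json;
# in practice, json.load() the real file instead.
config = {
    "models": [
        {"title": "GPT-4o (via Keeptrusts)", "apiBase": "http://localhost:41002/v1"},
        {"title": "Claude Sonnet (via Keeptrusts)", "apiBase": "http://localhost:41002/v1"},
    ],
    "tabAutocompleteModel": {
        "title": "Fast Completions (via Keeptrusts)",
        "apiBase": "http://localhost:41002/v1",
    },
}

GATEWAY = "http://localhost:41002/v1"

def unrouted_entries(cfg: dict) -> list[str]:
    """Return titles of model entries whose apiBase bypasses the gateway."""
    entries = list(cfg.get("models", []))
    if "tabAutocompleteModel" in cfg:
        entries.append(cfg["tabAutocompleteModel"])
    return [e.get("title", "<untitled>") for e in entries
            if e.get("apiBase") != GATEWAY]

print(unrouted_entries(config))  # an empty list means everything is routed
```

Any title the function returns is a model that would talk to a provider directly, skipping policy enforcement and audit logging.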
If you need different configurations per editor, use Continue's workspace-level configuration by creating a .continue/config.json file in your project root.
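For example, a project-local .continue/config.json could pin that project's chat model to the gateway while your global configuration stays unchanged (the title here is illustrative):

```json
{
  "models": [
    {
      "title": "Project GPT-4o (via Keeptrusts)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "http://localhost:41002/v1",
      "apiKey": "your-access-key"
    }
  ]
}
```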
Environment Variables for API Keys
Instead of hardcoding keys in config.json, you can reference environment variables. Set the key in your shell:
export KEEPTRUSTS_ACCESS_KEY="your-access-key"
Then reference it in the configuration. Note that JetBrains IDEs launched from a desktop shortcut may not inherit shell environment variables; launch the IDE from a terminal, or set the variable at the system level, so Continue can resolve the reference:
{
"models": [
{
"title": "GPT-4o (via Keeptrusts)",
"provider": "openai",
"model": "gpt-4o",
"apiBase": "http://localhost:41002/v1",
"apiKey": "${KEEPTRUSTS_ACCESS_KEY}"
}
]
}
Troubleshooting
Continue shows "Could not connect" error
- Verify the gateway is running: kt gateway status
- Confirm the apiBase ends with /v1 (not /v1/)
- Check that port 41002 is not blocked by a firewall
Requests appear in events but responses are empty
- Verify your access key is valid and has not expired.
- Check the gateway logs for upstream errors: kt logs
- Ensure the model name matches what the upstream provider expects.
Completions are slow
- Tab completions should use a fast model like gpt-4o-mini.
- Policy evaluation adds minimal overhead (typically under 5 ms).
- Check your network latency to the upstream provider.
For AI systems
- Canonical terms: Keeptrusts, JetBrains: Continue Plugin with the Gateway, ide-integration.
- Exact feature, config, command, or page names: JetBrains: Continue Plugin with the Gateway.
- Use the linked audience and reference pages in Next steps when you need deeper source material.
For engineers
- Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for JetBrains: Continue Plugin with the Gateway.
- Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.
For leaders
- This page matters when planning rollout, governance, support ownership, or operating decisions for JetBrains: Continue Plugin with the Gateway.
- Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.
Next steps
- Configure policies to control what Continue can send upstream.
- View events to audit all AI interactions from your team's IDEs.