JetBrains: AI Assistant Through the Gateway
You can route JetBrains AI Assistant traffic through the Keeptrusts gateway to enforce policies, log decisions, and attribute costs across your team. This guide covers configuration for IntelliJ IDEA, PyCharm, WebStorm, GoLand, RubyMine, and Rider.
Use this page when
- You are working through JetBrains: AI Assistant Through the Gateway as an implementation or operating workflow in Keeptrusts.
- You need the practical steps, expected outcomes, and related validation guidance in one place.
- If you need exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
Before you begin, ensure you have:
- A JetBrains IDE (2024.1 or later) with the AI Assistant plugin installed
- The kt CLI installed and a policy configuration ready
- The gateway running locally with kt gateway run
Start the gateway if it is not already running:
kt gateway run --policy-config policy-config.yaml
The gateway listens on http://localhost:41002/v1 by default.
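Before configuring the IDE, it can help to confirm the gateway is actually listening. A minimal sketch using curl — the /v1/models path is an assumption about an OpenAI-compatible surface, and any response at all proves the port is open:

```shell
#!/bin/sh
# Probe the gateway's default listen address. The /v1/models path is an
# assumption (OpenAI-compatible gateways commonly expose it); any response
# at all proves the listener is up.
check_gateway() {
  url="${1:-http://localhost:41002/v1/models}"
  if curl -s --max-time 2 -o /dev/null "$url"; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

check_gateway
```

If this prints unreachable, start the gateway with the command above before touching any IDE settings.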
Configure the HTTP Proxy
JetBrains AI Assistant does not expose a direct "base URL" setting for its built-in AI endpoint. Instead, you route traffic through the gateway by configuring the IDE's HTTP proxy.
- Open your JetBrains IDE.
- Navigate to Settings → Appearance & Behavior → System Settings → HTTP Proxy.
- Select Manual proxy configuration.
- Set Host name to 127.0.0.1 and Port to 41002.
- Leave No proxy for empty so all AI traffic routes through the gateway.
- Click OK to apply.
The IDE now sends AI Assistant requests through the Keeptrusts gateway, which applies your policy chain before forwarding to the upstream LLM provider.
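JetBrains IDEs run on the JVM, so the standard JVM proxy system properties offer an alternative to the Settings UI. You can add them via Help → Edit Custom VM Options. This is a sketch of standard JVM behavior, and not every AI Assistant build is guaranteed to honor these properties, so prefer the Settings UI when in doubt:

```properties
# Standard JVM proxy properties; mirror the Settings UI values above.
-Dhttp.proxyHost=127.0.0.1
-Dhttp.proxyPort=41002
-Dhttps.proxyHost=127.0.0.1
-Dhttps.proxyPort=41002
```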
Configure via Environment Variables
If you prefer environment-based configuration, set proxy variables before launching your IDE:
export HTTP_PROXY=http://localhost:41002
export HTTPS_PROXY=http://localhost:41002
On macOS, you can set these in your shell profile or use a launcher script:
#!/bin/bash
export HTTP_PROXY=http://localhost:41002
export HTTPS_PROXY=http://localhost:41002
open -a "IntelliJ IDEA"
On Linux, prepend the variables when launching from a terminal:
HTTP_PROXY=http://localhost:41002 HTTPS_PROXY=http://localhost:41002 idea
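The two launch styles above can be combined into a small wrapper that refuses to start the IDE when the gateway is down, so AI requests never silently go direct. A sketch under stated assumptions — the idea launcher name and the /v1 probe path are placeholders, so substitute your IDE's launcher:

```shell
#!/bin/sh
# Start a JetBrains IDE through the gateway, but only if the gateway is up.
launch_through_gateway() {
  gw="http://localhost:41002"
  # Pre-check: any response on the gateway port means it is listening.
  if ! curl -s --max-time 2 -o /dev/null "$gw/v1"; then
    echo "gateway not running; start it with: kt gateway run --policy-config policy-config.yaml" >&2
    return 1
  fi
  # Launch the given command with proxy variables scoped to that process only.
  HTTP_PROXY="$gw" HTTPS_PROXY="$gw" "$@"
}

# usage: launch_through_gateway idea
```

Scoping the proxy variables to the launched process avoids routing unrelated shell traffic through the gateway.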
Verify Traffic Flows Through the Gateway
After configuring the proxy, open a file in your IDE and trigger AI Assistant (for example, ask a question in the AI chat panel or use code completion). Then verify the traffic appears in the gateway event stream:
kt events tail
You see events showing the request, the policy evaluation result, and the upstream response. If you see no events, double-check that the proxy settings are active and the gateway is running.
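You can also generate a test event without the IDE by pushing a request through the proxy with curl's -x flag. The target URL below is a placeholder, not a real provider endpoint; use any HTTPS host your policies allow:

```shell
#!/bin/sh
# Route one request through the gateway proxy; with the gateway running it
# should appear in `kt events tail` just like IDE traffic.
# https://example.com is a stand-in target, not a provider endpoint.
if curl -sS -x http://127.0.0.1:41002 https://example.com -o /dev/null 2>/dev/null; then
  echo "request went through the proxy"
else
  echo "proxy refused or gateway not running"
fi
```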
Per-IDE Notes
IntelliJ IDEA
IntelliJ IDEA supports AI Assistant out of the box with an active JetBrains AI subscription. The proxy settings at Settings → Appearance & Behavior → System Settings → HTTP Proxy apply to all HTTP traffic from the IDE, including AI requests.
PyCharm
PyCharm uses the same settings path as IntelliJ. If you use PyCharm Professional, AI Assistant is available directly. Community Edition users can install the AI Assistant plugin from the Marketplace.
WebStorm
WebStorm shares the same proxy configuration path. AI Assistant requests for JavaScript and TypeScript completions route through the gateway identically.
GoLand
GoLand follows the same pattern. No additional Go-specific configuration is needed for the proxy to intercept AI traffic.
RubyMine
RubyMine uses the identical proxy settings. AI Assistant requests for Ruby code suggestions pass through the gateway for policy enforcement.
Rider
Rider (.NET IDE) uses the same HTTP Proxy settings. Ensure the proxy is set before triggering any AI Assistant features for C# or F# code.
Troubleshooting
AI Assistant requests bypass the gateway
If events do not appear in kt events tail:
- Confirm the proxy is set to 127.0.0.1:41002 (not localhost, which may resolve to IPv6 on some systems).
- Restart the IDE after changing proxy settings.
- Check that no system-level proxy override is in effect.
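To see why the localhost distinction matters, you can check what the name resolves to on your machine. getent is available on Linux; on macOS, dscacheutil serves a similar purpose:

```shell
#!/bin/sh
# List every address "localhost" maps to. If ::1 (IPv6) is listed,
# some runtimes try it before 127.0.0.1, which can skip a proxy bound
# only to the IPv4 address.
getent ahosts localhost
```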
SSL certificate errors
The gateway terminates TLS locally. If you see certificate warnings:
- Ensure you are using http:// (not https://) for the proxy host.
- If your organization requires an HTTPS proxy, configure the gateway's TLS certificate and trust it in the IDE's trust store via Settings → Tools → Server Certificates.
Slow completions
Policy evaluation adds minimal latency (typically <10ms). If completions feel slow:
- Check the gateway logs with kt logs for upstream timeouts.
- Verify your internet connection to the LLM provider is stable.
- Review your policy chain for expensive evaluation steps.
What the Gateway Enforces
When AI Assistant traffic routes through the gateway, your policies apply to every request:
- Redaction removes sensitive data (API keys, secrets, PII) before it reaches the provider.
- Blocking prevents requests that violate content policies from reaching the upstream.
- Disclaimers attach governance notices to responses.
- Escalation flags high-risk requests for human review.
- Audit logging records every interaction for compliance.
- Cost attribution tracks token usage per user, team, or project.
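For orientation, the enforcement categories above map to entries in the policy configuration passed to kt gateway run. The fragment below is purely illustrative: the field names are hypothetical, not the real schema. Consult the policy reference pages linked in Next steps for the actual format.

```yaml
# Illustrative only -- field names are hypothetical, not the real schema.
policies:
  - type: redact
    targets: [api_keys, secrets, pii]
  - type: block
    on: content_policy_violation
  - type: audit_log
    sink: default
```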
For AI systems
- Canonical terms: Keeptrusts, JetBrains: AI Assistant Through the Gateway, ide-integration.
- Exact feature, config, command, or page names: JetBrains: AI Assistant Through the Gateway.
- Use the linked audience and reference pages in Next steps when you need deeper source material.
For engineers
- Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for JetBrains: AI Assistant Through the Gateway.
- Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.
For leaders
- This page matters when planning rollout, governance, support ownership, or operating decisions for JetBrains: AI Assistant Through the Gateway.
- Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.
Next steps
- Configure policies for your team's requirements.
- View events to monitor AI usage across your JetBrains IDEs.
- Set up access keys for team-wide gateway authentication.