
Tabnine with Keeptrusts Gateway

Tabnine is an AI code completion tool that provides inline suggestions, chat-based help, and code generation across all major IDEs. Tabnine supports both cloud-hosted and self-hosted models, and its Enterprise tier can be configured to use custom OpenAI-compatible endpoints. Routing Tabnine through the Keeptrusts gateway adds policy enforcement on every completion request, an immutable audit trail for compliance, secret and PII redaction before code context leaves your network, and per-developer cost attribution.

Use this page when

  • You want to route Tabnine's AI traffic through Keeptrusts for policy enforcement and audit logging.
  • You use Tabnine Enterprise with a custom model endpoint that you want to govern.
  • You want to enforce data-loss-prevention and secret detection on Tabnine completions.
  • You need per-developer cost tracking for Tabnine AI usage across your organization.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed — see Quickstart or Install the Gateway.
  • Tabnine installed in your IDE — VS Code, JetBrains, or other supported IDE.
  • Tabnine Enterprise subscription (required for custom endpoint configuration).
  • OpenAI API key or credentials for your custom model endpoint.
  • Gateway running — the Keeptrusts gateway must be started before configuring Tabnine.

Configuration

Create a policy-config.yaml for Tabnine traffic:

pack:
  name: tabnine-gateway
  version: 1.0.0
  enabled: true

policies:
  chain:
    - pii-detector
    - code-sanitation
    - prompt-injection
    - audit-logger

providers:
  strategy: single
  targets:
    - id: openai-tabnine
      provider: openai
      model: gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY

Setup steps

  1. Export your provider API key:

     export OPENAI_API_KEY="sk-your-key-here"

  2. Start the Keeptrusts gateway:

     kt gateway run --policy-config policy-config.yaml

     The gateway listens on http://localhost:41002 by default.

  3. Configure Tabnine to use the gateway endpoint. In Tabnine Enterprise admin settings, set the custom API endpoint.

     For VS Code, open Tabnine settings and set the custom server URL:

     {
       "tabnine.cloudApiUrl": "http://localhost:41002/v1"
     }

     For JetBrains, navigate to Settings > Tools > Tabnine and set the custom API URL to http://localhost:41002/v1.

  4. Restart your IDE to apply the configuration.

  5. For team-wide deployment, configure the endpoint centrally through Tabnine Enterprise admin or distribute IDE settings through your configuration management system. For hosted gateways:

     {
       "tabnine.cloudApiUrl": "https://gateway.keeptrusts.com/v1"
     }
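The settings distribution in step 5 can be scripted. The sketch below merges the Tabnine key into a developer's VS Code settings.json without clobbering other settings. It is a minimal illustration, not part of the Keeptrusts CLI: set_tabnine_endpoint is a hypothetical helper, and the hard-coded settings path is the Linux default, which differs on macOS and Windows.

```python
# Hypothetical rollout helper: merge tabnine.cloudApiUrl into an
# existing VS Code settings.json, preserving all other settings.
import json
from pathlib import Path

# Linux default; adjust for macOS (~/Library/Application Support/Code/User)
# or Windows (%APPDATA%\Code\User) in your configuration management system.
SETTINGS_PATH = Path.home() / ".config/Code/User/settings.json"
GATEWAY_URL = "https://gateway.keeptrusts.com/v1"

def set_tabnine_endpoint(settings_path: Path, url: str) -> dict:
    """Merge the Tabnine endpoint key into settings.json and write it back."""
    settings = {}
    if settings_path.exists():
        settings = json.loads(settings_path.read_text())
    settings["tabnine.cloudApiUrl"] = url  # key from the JSON samples above
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings
```

Run it as, for example, set_tabnine_endpoint(SETTINGS_PATH, GATEWAY_URL) from your provisioning tooling; merging rather than overwriting keeps developers' personal settings intact.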

Verification

Confirm traffic is flowing through the gateway:

  1. Check gateway logs while typing code in your IDE:

     kt gateway run --policy-config policy-config.yaml --log-level debug

  2. Tail events:

     kt events tail --follow

  3. Trigger a Tabnine completion by typing code and verify the event appears in the Keeptrusts console under Events.

  4. Verify with curl:

     curl http://localhost:41002/v1/chat/completions \
       -H "Authorization: Bearer $OPENAI_API_KEY" \
       -H "Content-Type: application/json" \
       -d '{
         "model": "gpt-4o",
         "messages": [{"role": "user", "content": "Complete this function"}],
         "max_tokens": 128
       }'
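If you prefer to verify from a script, the same request can be built with the Python standard library. This is a sketch of the OpenAI-compatible request the curl example sends, assuming the default gateway port; build_completion_request is an illustrative helper, not a Keeptrusts API.

```python
# Build the same chat-completion request the curl example sends,
# targeting the gateway's OpenAI-compatible endpoint.
import json
import os
import urllib.request

GATEWAY_URL = "http://localhost:41002/v1/chat/completions"

def build_completion_request(prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Construct a POST request matching the curl verification step."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# With the gateway running, send it:
#   with urllib.request.urlopen(build_completion_request("Complete this function"), timeout=10) as resp:
#       print(json.loads(resp.read()))
```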
Policy            Why it matters for Tabnine
pii-detector      Prevents personal data in source files from leaking through completions
code-sanitation   Catches hardcoded secrets and credentials in code context
prompt-injection  Detects injection patterns in file contents Tabnine reads
audit-logger      Creates an immutable record of every Tabnine interaction
dlp-filter        Applies data-loss-prevention patterns to outbound code context
quality-scorer    Scores completion quality to filter low-confidence suggestions

Troubleshooting

Tabnine completions stop working after changing the endpoint. Verify the gateway is running with curl http://localhost:41002/v1/models. Ensure the URL includes the /v1 path.
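The same check can be scripted for use in IDE provisioning or monitoring. A minimal sketch using only the Python standard library, probing the /v1/models path from the curl check above; gateway_ready is an illustrative helper, not a Keeptrusts command.

```python
# Return True if the gateway answers on its OpenAI-compatible
# /v1/models endpoint, False on connection failure or timeout.
import urllib.error
import urllib.request

def gateway_ready(base_url: str = "http://localhost:41002", timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```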

Authentication errors. Tabnine Enterprise uses the configured API key for authentication. Ensure the key referenced in your gateway config (OPENAI_API_KEY in the example above) is valid for the provider target you configured.

Custom endpoint option is not available. Custom API endpoint configuration requires Tabnine Enterprise. The free and Pro tiers do not support custom endpoints.

Policies are not applied. Run kt policy lint --file policy-config.yaml to validate the config. Ensure enabled: true is set.

Events do not appear in the console. Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN before starting the gateway for control-plane reporting.
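A small preflight check can catch the missing-variable case before the gateway starts. This sketch only tests that the two variables named above are set; missing_env_vars is a hypothetical helper for your launch scripts, not part of the Keeptrusts CLI.

```python
# Report which control-plane environment variables are unset or empty,
# so a launch script can fail early instead of silently dropping events.
import os

REQUIRED = ("KEEPTRUSTS_API_URL", "KEEPTRUSTS_GATEWAY_TOKEN")

def missing_env_vars(names=REQUIRED) -> list:
    """Return the names of variables that are unset or empty."""
    return [name for name in names if not os.environ.get(name)]
```

A launch script might abort when missing_env_vars() returns a non-empty list, printing the missing names for the operator.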

For AI systems

  • Canonical terms: Keeptrusts gateway, Tabnine, Tabnine Enterprise, tabnine.cloudApiUrl, policy-config.yaml.
  • Configuration mechanism: set tabnine.cloudApiUrl to http://localhost:41002/v1 in IDE settings or Tabnine Enterprise admin.
  • Provider format: OpenAI-compatible (/v1/chat/completions).
  • Best next pages: OpenAI integration, Policy Controls Catalog, Gateway setup for IDEs.

For engineers

  • Set tabnine.cloudApiUrl to http://localhost:41002/v1 in your IDE or Tabnine Enterprise admin.
  • Custom endpoint support requires Tabnine Enterprise — the free and Pro tiers cannot redirect traffic.
  • Validate with kt events tail --follow while using Tabnine to confirm events are captured.
  • Use kt policy lint before starting the gateway to catch config errors.

For leaders

  • Tabnine sends code context to cloud models for every completion. Without governance, proprietary code patterns and data leave the network unaudited.
  • Routing through Keeptrusts provides compliance evidence for SOC 2, ISO 27001, and data protection requirements.
  • Custom endpoint support in Tabnine Enterprise makes governance transparent to developers — no workflow disruption.
  • Cost attribution by developer helps track and optimize AI completion spend as adoption scales.

Next steps