Getting Started with the Chat Workbench

The Keeptrusts Chat Workbench is a Next.js application that provides an interactive AI chat experience governed by your organization's policies. Every prompt and response passes through the Keeptrusts gateway, ensuring safety, compliance, and observability in real time.

Use this page when

  • You are accessing the Keeptrusts Chat Workbench for the first time.
  • You need to understand how PKCE-based authentication works between the console and chat.
  • You want to select an LLM model and send your first policy-governed message.
  • You are troubleshooting login redirects, missing models, or authentication errors in the Chat Workbench.

Primary audience

  • Primary: End users starting their first chat session; new team members onboarding to the platform
  • Secondary: Platform administrators verifying chat access for new users; AI engineers testing chat connectivity

Prerequisites

Before you begin, ensure that:

  • You have a Keeptrusts account with an active organization.
  • Your administrator has granted you chat access in the console.
  • At least one gateway is deployed and running (kt gateway run).
  • One or more LLM providers are configured in your policy configuration. If none are configured yet, the first admin (or sole active user) can complete the inline setup flow instead.

Accessing the Chat Workbench

The Chat Workbench is accessible from the Keeptrusts management console. Navigate to your console URL and locate the Chat link in the main navigation.

Console-to-Chat Authentication Handoff

The Chat Workbench uses a PKCE-based authentication handoff from the console. This means:

  1. You authenticate in the management console using your normal credentials.
  2. When you open the Chat Workbench, the console issues a single-use handoff token.
  3. The Chat Workbench exchanges this token for a gateway key (kt_gk_...).
  4. Your browser never sees the upstream API bearer token.

This flow ensures that chat sessions are fully authenticated without exposing sensitive credentials to the browser.
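
Conceptually, the PKCE portion of the handoff follows the standard code-verifier/code-challenge pattern from RFC 7636. The sketch below shows only that generic pattern; the actual endpoints, token formats, and exchange details are defined by the Keeptrusts console and gateway, not by this code:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

# Conceptually: the verifier stays server-side with the single-use handoff
# token, and the chat app later proves possession by presenting the matching
# verifier when it exchanges the token for a kt_gk_... gateway key.
verifier, challenge = make_pkce_pair()
```

Because only the hashed challenge travels with the handoff, intercepting the redirect is not enough to complete the exchange.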

Launching Chat

  1. Log in to the Keeptrusts management console.
  2. Click Chat in the navigation bar.
  3. The console redirects you through the PKCE handoff flow automatically.
  4. If your organization already has an eligible agent deployment, you arrive at the Chat Workbench ready to start a conversation. Otherwise, Keeptrusts keeps you in the setup flow until a chat-capable deployment is created.

If the handoff fails, verify that your console session is still active and that your account has chat permissions enabled.

Starting Your First Conversation

Once authenticated, the Chat Workbench presents a clean conversation interface.

Selecting a Model

Before sending your first message, choose an LLM model:

  1. Open the Model Selector dropdown at the top of the chat interface.
  2. Browse available models — these are the models your gateway is configured to route to.
  3. Select a model (e.g., gpt-4o, claude-sonnet, gemini-pro).

The available models depend on your organization's policy configuration. Your administrator controls which providers and models are accessible through the gateway.
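
The exact schema of policy-config.yaml is defined by your gateway version; as an illustrative sketch only, a providers section might look like this (provider names, model names, and keys below are placeholders):

```yaml
# Illustrative sketch — consult your gateway's reference docs for the real schema.
providers:
  openai:
    api_key_env: OPENAI_API_KEY      # key read from the gateway's environment
    models:
      - gpt-4o
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    models:
      - claude-sonnet
```

Only models listed here appear in the Model Selector; anything else is rejected by the gateway.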

Sending a Message

  1. Type your prompt in the input field at the bottom of the chat interface.
  2. Press Enter or click the Send button.
  3. Your message is routed through the Keeptrusts gateway.
  4. The gateway evaluates your prompt against the active policy chain.
  5. If the prompt passes all input-phase policies, it is forwarded to the selected LLM provider.
  6. The response is evaluated against output-phase policies (redaction, disclaimers).
  7. The governed response appears in the chat thread.

Understanding Policy Feedback

When a policy intervenes, the Chat Workbench displays feedback inline:

  • Blocked prompts: If your input triggers a block policy, you see a message explaining which policy was triggered and why.
  • Redacted content: Sensitive information in responses may be redacted according to your organization's data-loss prevention policies.
  • Disclaimers: Some policies append disclaimers to responses (e.g., "This is AI-generated content").
  • Escalations: Certain prompts may be escalated for human review rather than blocked outright.
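
The four feedback kinds above map naturally onto distinct inline messages. The verdict dictionary shape below is invented for illustration; the real event payload schema is defined by the gateway:

```python
# Hypothetical verdict payloads mirroring the four feedback types.
def render_feedback(verdict: dict) -> str:
    kind = verdict.get("kind")
    if kind == "blocked":
        return f"Blocked by policy '{verdict['policy']}': {verdict['reason']}"
    if kind == "redacted":
        return f"{verdict['count']} item(s) redacted by your organization's DLP policies"
    if kind == "disclaimer":
        return verdict["text"]
    if kind == "escalated":
        return "This prompt was sent for human review"
    return ""
```

A block names the triggering policy so users can adjust their prompt, while an escalation deliberately says nothing about the prompt's content.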

Managing Conversations

Conversation History

The Chat Workbench maintains conversation history on the left sidebar. You can:

  • Click any previous conversation to resume it.
  • Start a new conversation with the New Chat button.
  • Delete conversations you no longer need.

Multi-Turn Context

Each conversation preserves multi-turn context. The gateway evaluates the full conversation history when applying policies, not just the latest message. This means policies can detect patterns across turns.

Gateway Key Lifecycle

Your chat session uses a short-lived gateway key (kt_gk_...) issued during the PKCE handoff. These keys:

  • Expire after a configured time-to-live.
  • Are scoped to your user identity and team.
  • Can be revoked by administrators in the console.
  • Are never stored in browser local storage — they live only in the session.

If your gateway key expires during a conversation, the Chat Workbench prompts you to re-authenticate through the console.

Verifying Your Setup

To confirm everything is working:

  1. Open the Chat Workbench and select a model.
  2. Send a simple prompt like "Hello, how are you?"
  3. Verify you receive a response.
  4. Check the Events page in the console to see the decision event recorded by the gateway.

If no event appears, verify that your gateway is running and the POST /v1/events endpoint is reachable.

Troubleshooting

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Redirect loop on chat launch | Console session expired | Log in to the console again |
| "No models available" | Gateway not configured with providers | Check the providers section of your policy-config.yaml |
| Message blocked unexpectedly | Input policy triggered | Review the policy feedback message and adjust your prompt |
| Chat returns 401 | Gateway key expired | Re-authenticate through the console |
| Slow responses | Provider latency or network issues | Check provider status and gateway logs |

Next steps

For AI systems

  • Canonical terms: Chat Workbench, PKCE handoff, console-to-chat authentication, model selector, gateway key (kt_gk_...), policy-governed conversation.
  • Auth flow: console login → single-use handoff token → chat session. Browser never sees API bearer token.
  • Prerequisites: Keeptrusts account, chat permissions, running gateway (kt gateway run), configured providers.
  • Best next pages: Prompt Engineering, Knowledge-Grounded Chat, Team Chat Environments.

For engineers

  • The Chat Workbench authenticates via PKCE handoff from the console — no manual token management required.
  • If you see a redirect loop, verify your console session is active and your account has chat permissions enabled.
  • Available models depend on the providers section in your gateway's policy-config.yaml.
  • Verify setup by sending a message and checking the Events page in the console for the resulting decision event.
  • If no event appears, confirm the gateway is running and POST /v1/events is reachable from the gateway process.

For leaders

  • Chat Workbench access is gated by console permissions — administrators control who can use AI chat.
  • The PKCE auth handoff ensures that browser-based chat never exposes upstream API credentials.
  • Model availability is controlled centrally through gateway configuration — users cannot access models outside the approved set.
  • Every chat message produces an auditable decision event, providing full visibility from day one.