Getting Started with the Chat Workbench
The Keeptrusts Chat Workbench is a Next.js application that provides an interactive AI chat experience governed by your organization's policies. Every prompt and response passes through the Keeptrusts gateway, ensuring safety, compliance, and observability in real time.
Use this page when
- You are accessing the Keeptrusts Chat Workbench for the first time.
- You need to understand how PKCE-based authentication works between the console and chat.
- You want to select an LLM model and send your first policy-governed message.
- You are troubleshooting login redirects, missing models, or authentication errors in the Chat Workbench.
Primary audience
- Primary: End users starting their first chat session; new team members onboarding to the platform
- Secondary: Platform Administrators verifying chat access for new users; AI Engineers testing chat connectivity
Prerequisites
Before you begin, ensure that:
- You have a Keeptrusts account with an active organization.
- Your administrator has granted you chat access in the console.
- At least one gateway is deployed and running (`kt gateway run`).
- One or more LLM providers are configured in your policy configuration, or you are the first admin or sole active user who can complete the inline setup flow.
Accessing the Chat Workbench
The Chat Workbench is accessible from the Keeptrusts management console. Navigate to your console URL and locate the Chat link in the main navigation.
Console-to-Chat Authentication Handoff
The Chat Workbench uses a PKCE-based authentication handoff from the console. This means:
- You authenticate in the management console using your normal credentials.
- When you open the Chat Workbench, the console issues a single-use handoff token.
- The Chat Workbench exchanges this token for a gateway key (`kt_gk_...`).
- Your browser never sees the upstream API bearer token.
This flow ensures that chat sessions are fully authenticated without exposing sensitive credentials to the browser.
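Under the hood, a PKCE handoff starts with the client generating a random code verifier and sending only its SHA-256 challenge with the authorization request. The sketch below shows the standard RFC 7636 S256 computation; it illustrates the general mechanism, not Keeptrusts' actual implementation.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Encode bytes as base64url (no padding), as required by RFC 7636.
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Generate a PKCE code_verifier / code_challenge pair (S256 method).
// The verifier stays client-side; only the challenge travels with the
// request, so an intercepted handoff token alone cannot be redeemed.
function generatePkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // 43-char high-entropy string
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```

Because the challenge is a one-way hash, only the party holding the original verifier can complete the token exchange.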
Launching Chat
- Log in to the Keeptrusts management console.
- Click Chat in the navigation bar.
- The console redirects you through the PKCE handoff flow automatically.
- If your organization already has an eligible agent deployment, you arrive at the Chat Workbench ready to start a conversation. Otherwise, Keeptrusts keeps you in the setup flow until a chat-capable deployment is created.
If the handoff fails, verify that your console session is still active and that your account has chat permissions enabled.
Starting Your First Conversation
Once authenticated, the Chat Workbench presents a clean conversation interface.
Selecting a Model
Before sending your first message, choose an LLM model:
- Open the Model Selector dropdown at the top of the chat interface.
- Browse available models — these are the models your gateway is configured to route to.
- Select a model (e.g., `gpt-4o`, `claude-sonnet`, `gemini-pro`).
The available models depend on your organization's policy configuration. Your administrator controls which providers and models are accessible through the gateway.
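As a rough illustration, a providers section might look like the following. The actual `policy-config.yaml` schema is defined by your gateway version, and every field name here is an assumption.

```yaml
# Illustrative sketch only: the real policy-config.yaml schema is
# defined by your gateway version, and these field names are assumptions.
providers:
  - name: openai          # provider label surfaced in the model selector
    models:
      - gpt-4o
  - name: anthropic
    models:
      - claude-sonnet
```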
Sending a Message
- Type your prompt in the input field at the bottom of the chat interface.
- Press Enter or click the Send button.
- Your message is routed through the Keeptrusts gateway.
- The gateway evaluates your prompt against the active policy chain.
- If the prompt passes all input-phase policies, it is forwarded to the selected LLM provider.
- The response is evaluated against output-phase policies (redaction, disclaimers).
- The governed response appears in the chat thread.
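The two-phase evaluation above can be sketched conceptually in TypeScript. The policy shapes, the secret-detection rule, and the redaction rule below are illustrative assumptions, not the Keeptrusts policy API.

```typescript
// Conceptual sketch of chained policy evaluation: each policy inspects
// the text, and the first block decision short-circuits the chain.
type Decision = { allowed: boolean; reason?: string };
type Policy = (text: string) => Decision;

function evaluateChain(policies: Policy[], text: string): Decision {
  for (const policy of policies) {
    const decision = policy(text);
    if (!decision.allowed) return decision;
  }
  return { allowed: true };
}

// Example input-phase policy: block prompts containing a flagged term.
const blockSecrets: Policy = (text) =>
  /api[_-]?key/i.test(text)
    ? { allowed: false, reason: "prompt-secret-detection" }
    : { allowed: true };

// Example output-phase transform: redact email addresses before display.
function redactEmails(response: string): string {
  return response.replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED]");
}
```

The key point is ordering: input-phase policies run before the provider ever sees the prompt, while output-phase transforms run on the provider's response before it reaches the chat thread.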
Understanding Policy Feedback
When a policy intervenes, the Chat Workbench displays feedback inline:
- Blocked prompts: If your input triggers a block policy, you see a message explaining which policy was triggered and why.
- Redacted content: Sensitive information in responses may be redacted according to your organization's data-loss prevention policies.
- Disclaimers: Some policies append disclaimers to responses (e.g., "This is AI-generated content").
- Escalations: Certain prompts may be escalated for human review rather than blocked outright.
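One way to model these four feedback kinds is a discriminated union. The shape below is a hypothetical sketch for illustration, not the actual Chat Workbench payload format.

```typescript
// Hypothetical shapes for the inline policy feedback kinds listed above.
type PolicyFeedback =
  | { kind: "blocked"; policy: string; reason: string }
  | { kind: "redacted"; fields: number }
  | { kind: "disclaimer"; text: string }
  | { kind: "escalated"; reviewQueue: string };

// Render each feedback kind as a user-facing message.
function describeFeedback(feedback: PolicyFeedback): string {
  switch (feedback.kind) {
    case "blocked":
      return `Blocked by ${feedback.policy}: ${feedback.reason}`;
    case "redacted":
      return `${feedback.fields} field(s) redacted`;
    case "disclaimer":
      return feedback.text;
    case "escalated":
      return `Escalated for human review (${feedback.reviewQueue})`;
  }
}
```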
Managing Conversations
Conversation History
The Chat Workbench maintains conversation history on the left sidebar. You can:
- Click any previous conversation to resume it.
- Start a new conversation with the New Chat button.
- Delete conversations you no longer need.
Multi-Turn Context
Each conversation preserves multi-turn context. The gateway evaluates the full conversation history when applying policies, not just the latest message. This means policies can detect patterns across turns.
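A toy example of why full-history evaluation matters: neither turn below triggers on its own, but the concatenated history does. This is a conceptual sketch, not a real Keeptrusts policy.

```typescript
// Sketch of cross-turn pattern detection: a request split across turns
// can look benign turn-by-turn but obvious once the history is joined.
function detectSplitSecretRequest(turns: string[]): boolean {
  const joined = turns.join(" ").toLowerCase();
  return joined.includes("admin password");
}
```

A single-turn policy would see "what is the admin" and "password?" separately and pass both; evaluating the joined history catches the pattern.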
Gateway Key Lifecycle
Your chat session uses a short-lived gateway key (`kt_gk_...`) issued during the PKCE handoff. These keys:
- Expire after a configured time-to-live.
- Are scoped to your user identity and team.
- Can be revoked by administrators in the console.
- Are never stored in browser local storage — they live only in the session.
If your gateway key expires during a conversation, the Chat Workbench prompts you to re-authenticate through the console.
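The lifecycle rules above reduce to a TTL check against the key's issue time. The record shape below is an assumption for illustration, not the real gateway key schema.

```typescript
// Hypothetical record for a short-lived gateway key; the real schema
// used by Keeptrusts may differ.
interface GatewayKey {
  prefix: "kt_gk_";
  issuedAt: number; // epoch milliseconds
  ttlMs: number;    // configured time-to-live
}

// A key is expired once its age reaches the configured TTL.
function isExpired(key: GatewayKey, now: number = Date.now()): boolean {
  return now - key.issuedAt >= key.ttlMs;
}
```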
Verifying Your Setup
To confirm everything is working:
- Open the Chat Workbench and select a model.
- Send a simple prompt like "Hello, how are you?"
- Verify you receive a response.
- Check the Events page in the console to see the decision event recorded by the gateway.
If no event appears, verify that your gateway is running and the `POST /v1/events` endpoint is reachable.
Troubleshooting
| Symptom | Likely Cause | Fix |
|---|---|---|
| Redirect loop on chat launch | Console session expired | Log in to the console again |
| "No models available" | Gateway not configured with providers | Check your policy-config.yaml providers section |
| Message blocked unexpectedly | Input policy triggered | Review the policy feedback message and adjust your prompt |
| Chat returns 401 | Gateway key expired | Re-authenticate through the console |
| Slow responses | Provider latency or network | Check provider status and gateway logs |
Next steps
- Learn how to craft effective prompts within policy guardrails in Prompt Engineering with Governance Guardrails.
- Bind knowledge base assets to your conversations in Knowledge-Grounded Chat Conversations.
- Explore team collaboration features in Team Chat Environments & Collaboration.
For AI systems
- Canonical terms: Chat Workbench, PKCE handoff, console-to-chat authentication, model selector, gateway key (`kt_gk_...`), policy-governed conversation.
- Auth flow: console login → single-use handoff token → chat session. Browser never sees API bearer token.
- Prerequisites: Keeptrusts account, chat permissions, running gateway (`kt gateway run`), configured providers.
- Best next pages: Prompt Engineering, Knowledge-Grounded Chat, Team Chat Environments.
For engineers
- The Chat Workbench authenticates via PKCE handoff from the console — no manual token management required.
- If you see a redirect loop, verify your console session is active and your account has chat permissions enabled.
- Available models depend on the `providers` section in your gateway's `policy-config.yaml`.
- Verify setup by sending a message and checking the Events page in the console for the resulting decision event.
- If no event appears, confirm the gateway is running and `POST /v1/events` is reachable from the gateway process.
For leaders
- Chat Workbench access is gated by console permissions — administrators control who can use AI chat.
- The PKCE auth handoff ensures that browser-based chat never exposes upstream API credentials.
- Model availability is controlled centrally through gateway configuration — users cannot access models outside the approved set.
- Every chat message produces an auditable decision event, providing full visibility from day one.