Your First Governed Chat Conversation
This tutorial walks you through launching the Keeptrusts chat workbench, authenticating via the console's PKCE handoff, selecting a model, sending your first message, and verifying that governance policies were applied.
Use this page when
- You are launching the chat workbench for the first time and need to authenticate via console PKCE handoff.
- You want to verify that governance policies are actively applied to your messages.
- You need a quick walkthrough of model selection, sending a message, and reading policy indicators.
Primary audience
- Primary: Technical Engineers (new Keeptrusts users)
- Secondary: AI Agents, Technical Leaders (onboarding reference)
Prerequisites
- A Keeptrusts account with console access
- At least one gateway configured with an active policy
- A model provider (e.g., OpenAI, Anthropic) configured on your gateway
Step 1: Open the Chat Workbench from the Console
- Sign in to the Keeptrusts management console at your organization's console URL.
- In the left sidebar, click Chat to open the chat workbench.
- The console initiates a PKCE-based authentication handoff — your browser is redirected to the chat application with a single-use handoff token.
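The console performs the PKCE exchange for you, so there is nothing to implement here. For the curious, the sketch below shows only the standard code-verifier/code-challenge pair that PKCE (RFC 7636) is built on; it is illustrative background, not the Keeptrusts handoff implementation:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char URL-safe verifier (spec allows 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge is the base64url-encoded SHA-256 digest of the verifier
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43
```

The single-use handoff token mentioned above is layered on top of this flow by the console; only the server ever sees the verifier.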
Step 2: Verify Authentication
After the handoff completes, the chat workbench loads with your authenticated session.
You can confirm successful authentication by checking:
- Your username or email appears in the top-right corner of the chat interface.
- The status indicator shows a green connected state.
- No authentication error banners are displayed.
If you see an authentication error, return to the console and try launching chat again. Handoff tokens expire after a single use.
Step 3: Select a Model
- At the top of the chat interface, locate the Model Selector dropdown.
- Click the dropdown to see available models. The list reflects models enabled in your gateway configuration.
- Select a model — for example, GPT-4o or Claude Sonnet.
The available models depend on your gateway's providers configuration. If no models appear, ask your administrator to verify the gateway provider setup.
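As a rough illustration of how a providers section might shape the model list, consider the hypothetical gateway YAML fragment below. All keys here are assumptions for illustration, not the documented Keeptrusts schema — consult your gateway's reference for the real field names:

```yaml
# Hypothetical fragment -- keys are illustrative, not the documented schema
providers:
  - name: openai
    models:
      - gpt-4o
  - name: anthropic
    models:
      - claude-sonnet
```

In a setup like this, only the models listed under each provider would appear in the Model Selector dropdown.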
Step 4: Send Your First Message
- Type a message in the input field at the bottom of the chat window:
  What are the key principles of responsible AI?
- Press Enter or click the Send button.
- The message is routed through your gateway's policy chain before reaching the model provider.
Step 5: Observe Policy Enforcement
As your message flows through the gateway, the policy chain evaluates it in two phases:
- Input phase: Your prompt is checked against input policies (content filters, PII detection, prompt injection guards).
- Output phase: The model's response is checked against output policies (redaction rules, disclaimers, toxicity filters).
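The two-phase evaluation can be sketched as a chain of policy functions, each returning a pass, modify, or block verdict. Everything below (the Verdict shape, the policy names) is an illustrative model of the behavior, not the Keeptrusts API:

```python
from dataclasses import dataclass
from typing import Callable
import re

@dataclass
class Verdict:
    action: str          # "pass", "modify", or "block"
    text: str            # possibly modified message text
    reason: str = ""     # populated when a policy blocks or modifies

# A policy is any callable that inspects text and returns a Verdict.
Policy = Callable[[str], Verdict]

def block_ssn(text: str) -> Verdict:
    # Toy input policy: block anything that looks like a US SSN.
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):
        return Verdict("block", text, "possible SSN detected")
    return Verdict("pass", text)

def add_disclaimer(text: str) -> Verdict:
    # Toy output policy: append compliance text to every response.
    return Verdict("modify", text + "\n[AI-generated content]", "disclaimer appended")

def run_chain(policies: list[Policy], text: str) -> Verdict:
    """Evaluate policies in order; the first "block" verdict short-circuits."""
    result = Verdict("pass", text)
    for policy in policies:
        verdict = policy(result.text)
        if verdict.action == "block":
            return verdict
        if verdict.action == "modify":
            result = verdict
    return result

print(run_chain([block_ssn], "My SSN is 123-45-6789").action)  # block
print(run_chain([add_disclaimer], "Hello").action)             # modify
```

The same chain shape runs twice per message: once over your prompt (input phase) and once over the model's reply (output phase).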
Watch for these indicators in the chat interface:
| Indicator | Meaning |
|---|---|
| Normal response | Message passed all policies without modification |
| Response with disclaimer banner | A disclaimer policy appended compliance text |
| Redacted tokens (e.g., [REDACTED]) | A redaction policy masked sensitive content |
| Blocked message (red banner) | A policy blocked the message entirely |
Step 6: Review the Response
Read the model's response. If policies modified the output, you will see visual indicators:
- Disclaimers appear as a distinct banner above or below the response text.
- Redacted content is replaced with placeholder tokens.
- Citation annotations may appear if knowledge base assets were injected.
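Redaction is conceptually simple: sensitive spans are replaced with a placeholder before the text reaches you. A minimal sketch, assuming regex-based matching of emails and US-style phone numbers (the real redaction rules are defined by your gateway's policies):

```python
import re

REDACTED = "[REDACTED]"

def redact(text: str) -> str:
    """Mask email addresses and US-style phone numbers with a placeholder token."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", REDACTED, text)
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", REDACTED, text)
    return text

print(redact("Contact jane@example.com or 555-867-5309"))
# Contact [REDACTED] or [REDACTED]
```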
Step 7: View the Decision Event in the Console
Every chat interaction generates a decision event recorded in the Keeptrusts API.
- Return to the management console.
- Navigate to Events in the left sidebar.
- Locate the most recent event — it corresponds to your chat message.
- Click the event to inspect its details:
- Request: The original prompt you sent.
- Response: The model's output after policy processing.
- Policies evaluated: Each policy in the chain and its verdict (pass, modify, block).
- Latency: Time added by policy evaluation.
- Model: The provider and model used.
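The fields above map naturally onto a structured record. The sketch below models a decision event as a Python dataclass; the field names are assumptions mirroring this list, not the actual Keeptrusts event schema:

```python
from dataclasses import dataclass

@dataclass
class PolicyVerdict:
    name: str      # policy name in the chain
    verdict: str   # "pass", "modify", or "block"

@dataclass
class DecisionEvent:
    request: str                   # the original prompt you sent
    response: str                  # model output after policy processing
    policies: list[PolicyVerdict]  # each policy in the chain and its verdict
    latency_ms: float              # time added by policy evaluation
    model: str                     # provider and model used

event = DecisionEvent(
    request="What are the key principles of responsible AI?",
    response="...",
    policies=[PolicyVerdict("pii-filter", "pass"), PolicyVerdict("disclaimer", "modify")],
    latency_ms=12.5,
    model="openai/gpt-4o",
)
print(event.policies[1].verdict)  # modify
```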
Step 8: Send a Follow-Up Message
Return to the chat workbench and continue the conversation:
Can you give me a specific example of AI bias in hiring?
The gateway evaluates each message independently through the full policy chain. Conversation context is maintained by the chat workbench, but every turn is a fresh policy evaluation.
Understanding the Request Flow
Here is the end-to-end flow for each chat message:
Chat workbench (browser)
→ Gateway (policy input phase)
→ [Pass?] Forward to model provider
→ [Block?] Return 409 with reason
→ Model provider processes request
→ Gateway (policy output phase)
→ Apply redactions, disclaimers
→ Chat workbench displays response
Side-effect: decision event → API → Events table
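The flow above can be sketched as a single function: run input policies, short-circuit on a block, call the model, run output policies, and record events along the way. The shapes here (tuple-based policies, a toy model function) are hypothetical; the real gateway is a service, not a Python function:

```python
def handle_message(prompt: str, input_policies, call_model, output_policies):
    """End-to-end sketch: input phase -> model provider -> output phase."""
    events = []
    text = prompt
    for name, policy in input_policies:
        action, text, reason = policy(text)
        events.append((name, action))
        if action == "block":
            # Mirrors the gateway returning 409 with the blocking reason
            return {"status": 409, "reason": reason, "events": events}
    reply = call_model(text)
    for name, policy in output_policies:
        action, reply, reason = policy(reply)
        events.append((name, action))
    return {"status": 200, "response": reply, "events": events}

# Toy policies: block prompts containing "secret"; append a disclaimer to outputs.
deny = ("deny-secret",
        lambda t: ("block", t, "forbidden term") if "secret" in t else ("pass", t, ""))
note = ("disclaimer", lambda t: ("modify", t + " [AI-generated]", "appended"))

print(handle_message("tell me a secret", [deny], str.upper, [note])["status"])  # 409
print(handle_message("hello", [deny], str.upper, [note])["response"])  # HELLO [AI-generated]
```

Note that the events list accumulates one entry per policy evaluation — the side-effect that ends up in the console's Events table.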
Troubleshooting
| Problem | Solution |
|---|---|
| Chat shows "Authentication failed" | Return to console and relaunch chat — handoff tokens are single-use |
| No models in the selector | Verify gateway provider configuration with your admin |
| All messages are blocked | Check your gateway's policy config — an overly strict policy may be active |
| Events not appearing in console | Confirm the gateway is configured to forward events to the API |
Next steps
- Choosing & Switching AI Models — learn how to compare and switch between models.
- Understanding Policy Feedback in Chat — handle blocked messages and policy adjustments.
- Using Knowledge Base in Chat — attach organizational knowledge to your conversations.
For AI systems
- Canonical terms: Keeptrusts chat workbench, console PKCE handoff, model selector, gateway policy chain, input phase, output phase, decision event, policy badge, redaction, disclaimer.
- Auth flow: Console → PKCE handoff token (single-use, short-lived) → chat workbench session. Browser never sees API bearer token.
- Key config: the gateway providers section determines available models; policies are defined in gateway YAML.
- Best next pages: Model Selection, Policy Feedback, Knowledge Injection.
For engineers
- Prerequisites: a Keeptrusts account with console access; at least one gateway configured with an active policy and a model provider.
- Validation: Chat loads without auth errors → username visible in top-right. Select a model → model name shown in selector. Send a message → response arrives with a policy badge (green check = passed, orange = modified).
- If auth fails: handoff tokens are single-use — return to console and relaunch chat.
For leaders
- This tutorial confirms the governance pipeline is active end-to-end — every message passes through policy evaluation before and after the model.
- Decision events are recorded for every interaction, providing audit evidence from day one.
- The PKCE handoff ensures no API tokens are exposed to browsers — security boundary is enforced by design.
- Use this as the onboarding checkpoint for new team members: if they complete this tutorial, governance is working.