
Code Generation with Governance

This tutorial shows you how to use the Keeptrusts chat workbench for code generation while governance policies ensure that generated code meets your organization's safety and compliance requirements. You will learn code-focused prompting, syntax highlighting, code block interactions, and how sanitation policies apply to code output.

Use this page when

  • You want to generate code in the chat workbench with syntax highlighting and one-click copy.
  • You need to understand how code sanitation policies apply to generated code output.
  • You are iterating on code prompts and want to verify governance indicators on code blocks.

Primary audience

  • Primary: Technical Engineers (developers using chat for code generation)
  • Secondary: AI Agents (code prompting guidance), Technical Leaders (code governance)

Prerequisites

  • Authenticated access to the Keeptrusts chat workbench
  • A gateway with a code-capable model (e.g., GPT-4, Claude, Codex)
  • Familiarity with the first conversation tutorial

Step 1: Select a Code-Capable Model

Not all models are equally suited for code generation. Choose a model optimized for code tasks.

  1. Open the model selector in the chat workbench toolbar.
  2. Look for models tagged with Code or known code capabilities.
  3. Select the model (e.g., GPT-4 or Claude).

If your organization has configured model descriptions, code-capable models typically list supported languages in their description.

Step 2: Write Effective Code Prompts

Structure your prompts to get well-formatted, complete code responses:

Specify the Language

Always state the target programming language explicitly:

Write a Python function that validates an email address using regex.
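For illustration, a well-formed response to this prompt might look like the sketch below. The regex is intentionally simple (not a full RFC 5322 validator), and the function name is illustrative, not a fixed output:

```python
import re

# A deliberately simple pattern: one "@", a non-empty local part,
# and a domain containing at least one dot. Real-world validation
# may be looser or stricter depending on your requirements.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches the basic email pattern."""
    return EMAIL_PATTERN.match(address) is not None
```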

Request Explanations Alongside Code

Ask for inline comments or a separate explanation:

Write a TypeScript function to debounce API calls. Include inline
comments explaining each step.
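That prompt targets TypeScript; as a language-neutral illustration of the debounce pattern itself, here is a minimal sketch in Python (a trailing-edge debounce built on threading.Timer; all names are illustrative):

```python
import threading

def debounce(wait: float):
    """Delay calls to the wrapped function until `wait` seconds pass
    without a new call; only the last call in a burst actually runs."""
    def decorator(fn):
        timer = None
        def wrapper(*args, **kwargs):
            nonlocal timer
            # Cancel any pending call and reschedule with the new args.
            if timer is not None:
                timer.cancel()
            timer = threading.Timer(wait, fn, args, kwargs)
            timer.start()
        return wrapper
    return decorator
```

Applying `@debounce(0.3)` to a function that triggers API calls collapses a rapid burst of invocations into a single call.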

Provide Context

Include relevant constraints, frameworks, or patterns:

Using Express.js and TypeScript, create a middleware function that
validates JWT tokens from the Authorization header. Use the jsonwebtoken
library.

Step 3: Interact with Code Blocks

When the model generates code, the chat workbench renders it in syntax-highlighted code blocks.

Syntax Highlighting

Code blocks automatically detect the language when the model specifies it. Supported languages include Python, TypeScript, JavaScript, Rust, Go, SQL, YAML, and many more.

Copy Code

Each code block includes a Copy button in the top-right corner:

  1. Hover over the code block to reveal the toolbar.
  2. Click the Copy icon to copy the code to your clipboard.
  3. A toast appears briefly to confirm the copy action.

View Language Label

The language label appears in the top-left corner of each code block (e.g., python, typescript, sql). This label confirms the syntax highlighting mode and helps distinguish blocks when a response contains code in multiple languages.

Step 4: Understand Code Sanitation Policies

Your organization may configure policies that evaluate generated code for security and compliance risks.

What Code Sanitation Catches

| Policy Type | What It Detects |
| --- | --- |
| Secret detection | API keys, tokens, passwords, or credentials embedded in code |
| Dangerous patterns | Shell injection, eval() usage, unsafe deserialization |
| Compliance markers | Proprietary license headers, restricted library imports |
| Data exposure | Hardcoded PII, internal hostnames, database connection strings |
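Conceptually, policies in the secret-detection category match patterns against generated code before it is rendered. The sketch below illustrates the idea with a few simplified, assumed rules; it is not Keeptrusts' actual rule set:

```python
import re

# Simplified illustrative patterns. Production policies use far more
# extensive rule sets (entropy checks, provider-specific key formats).
SECRET_RULES = {
    "openai-style key": re.compile(r"sk-[A-Za-z0-9]{10,}"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "connection string": re.compile(r"[a-z]+://[^/\s:]+:[^@\s]+@", re.I),
}

def scan_for_secrets(code: str) -> list[str]:
    """Return the names of all rules that match the given code."""
    return [name for name, pattern in SECRET_RULES.items()
            if pattern.search(code)]
```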

How Sanitation Appears in Chat

When a code sanitation policy triggers:

  1. The code block renders with a Policy Badge indicating the policy that fired.
  2. Redacted content is replaced with [REDACTED] placeholders.
  3. A governance annotation below the block explains what was detected and why.

For example, if the model generates code containing a hardcoded API key:

# Original model output (before sanitation):
api_key = "sk-abc123..."

# After sanitation policy:
api_key = "[REDACTED — secret detected by policy: no-hardcoded-secrets]"

Redacted code may not be functional as-is. Replace [REDACTED] placeholders with proper environment variable references or secret management calls before using the code.
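A minimal sketch of that replacement, assuming an environment variable named API_KEY (both the variable and the helper name are illustrative):

```python
import os

def load_api_key(var: str = "API_KEY") -> str:
    """Fetch the key from the environment; fail fast with a clear
    message instead of shipping a hardcoded secret in source."""
    try:
        return os.environ[var]
    except KeyError:
        raise RuntimeError(f"Set the {var} environment variable") from None
```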

Step 5: Request Multi-File Code Generation

For larger tasks, ask the model to generate code across multiple files:

Create a REST API endpoint in Express.js with:
1. Route handler in routes/users.ts
2. Validation middleware in middleware/validate.ts
3. TypeScript interface in types/user.ts

The model typically generates separate code blocks for each file. Each block is independently copyable and syntax-highlighted.

Step 6: Iterate on Generated Code

Use follow-up messages to refine the generated code:

  • Request changes: "Add error handling to the function above."
  • Ask for tests: "Write unit tests for this function using Jest."
  • Request optimization: "Refactor this to reduce time complexity."

The chat workbench maintains full conversation context, so the model understands references to previously generated code blocks.

Step 7: Review Governance on Code Output

After generating code, check the governance indicators:

  1. Policy badges on code blocks indicate which policies evaluated the output.
  2. Cost indicator shows token consumption for the code generation request.
  3. Event log in the console records the full governance decision for audit.

To view the detailed policy evaluation:

  1. Click the policy badge on the code block.
  2. The detail panel shows which policies were applied, whether any triggered, and the evaluation result.

Step 8: Export Code Conversations

To save a code-focused conversation for reference:

  1. Click the Export button in the conversation toolbar.
  2. Select Markdown format to preserve code block formatting.
  3. The exported file includes all code blocks with language annotations.

See the conversation export tutorial for detailed export options.

Best Practices

  • Be specific about language and framework — ambiguous prompts produce inconsistent formatting.
  • Request one task per message for complex code — this produces cleaner, more focused code blocks.
  • Review sanitation badges before copying code into production — they highlight real risks.
  • Use follow-up messages to iterate rather than re-prompting from scratch.
  • Check the cost indicator for large code generation requests — they can consume significant tokens.

Troubleshooting

| Issue | Cause | Fix |
| --- | --- | --- |
| No syntax highlighting | Model did not specify the language in the code fence | Re-prompt and ask the model to use fenced code blocks with language tags |
| Copy button missing | Browser extension blocking the clipboard API | Allow clipboard access for the chat domain |
| Entire response redacted | Aggressive sanitation policy | Contact your admin to review the code sanitation policy thresholds |
| Code is truncated | Model hit its token limit | Ask the model to continue, or reduce the scope of your request |

Summary

In this tutorial, you used the Keeptrusts chat workbench for governed code generation:

  • Selected a code-capable model
  • Written effective code prompts with language and context
  • Interacted with syntax-highlighted code blocks
  • Understood how code sanitation policies protect generated output
  • Iterated on code and reviewed governance indicators

For AI systems

  • Canonical terms: Keeptrusts chat workbench, code generation, syntax highlighting, code blocks, copy button, code sanitation policy, language formatting, code-capable model, governance indicators.
  • UI elements: model selector (Code-tagged models), fenced code blocks with language label, Copy button (top-right of code block), line numbers (blocks >5 lines), sanitation warning badge.
  • Best next pages: Markdown Rendering, Function Calling, Model Selection.

For engineers

  • Prerequisites: a code-capable model (GPT-4, Claude, Codex) configured on your gateway; familiarity with fenced code block syntax.
  • Validation: Request code with a specific language → verify syntax highlighting matches the language. Click Copy → verify clipboard content. Check for sanitation badge if policies are active.
  • Tip: Always specify the language in your prompt for reliable syntax highlighting and policy classification.

For leaders

  • Code sanitation policies prevent generated code from containing secrets, hardcoded credentials, or unsafe patterns before developers copy it.
  • Governance applies transparently — developers see clear indicators when code is modified by policy.
  • Consider enabling code review policies for teams generating production code through chat.
  • Track code generation usage in analytics to measure AI-assisted development adoption.

Next steps