Prompt Engineering with Governance Guardrails

Prompt engineering in a governed environment requires understanding both the capabilities of the underlying LLM and the policy boundaries your organization has defined. The Keeptrusts Chat Workbench provides real-time feedback when policies intervene, helping you refine prompts iteratively.

Use this page when

  • You want to craft effective prompts that work within your organization's policy boundaries.
  • You need to understand why a prompt was blocked and how to refine it.
  • You are learning to interpret policy feedback (blocks, escalations, transformations) in the Chat Workbench.
  • You want techniques to avoid false positive policy triggers while maintaining prompt quality.

Primary audience

  • Primary: End users improving their chat interactions, AI Engineers designing prompt templates
  • Secondary: Platform Administrators tuning policies based on user friction, Technical Leaders defining prompt guidelines

How Governance Affects Prompts

Every prompt you send passes through the Keeptrusts gateway's policy chain before reaching the LLM provider. The policy chain evaluates prompts in the input phase and can:

  • Allow the prompt to proceed unchanged.
  • Block the prompt and return an explanation.
  • Escalate the prompt for human review.
  • Transform the prompt (e.g., append system instructions or disclaimers).

Understanding which policies are active helps you write prompts that achieve your goals within organizational boundaries.
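
The four policy actions above can be sketched as a simple chain evaluation. This is an illustrative model only; the policy names, result shape, and chain structure here are assumptions, not the Keeptrusts API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyResult:
    action: str                    # "allow", "block", "escalate", or "transform"
    prompt: str                    # the (possibly transformed) prompt
    reason: Optional[str] = None   # explanation returned on block/escalate

def evaluate_chain(prompt: str, policies: list) -> PolicyResult:
    """Run the prompt through each policy in order; stop on block or escalate."""
    for policy in policies:
        result = policy(prompt)
        if result.action in ("block", "escalate"):
            return result
        prompt = result.prompt  # carry any transformation into the next policy
    return PolicyResult(action="allow", prompt=prompt)

# Toy policies for demonstration (not real Keeptrusts rules)
def block_circumvention(prompt: str) -> PolicyResult:
    if "bypass" in prompt.lower():
        return PolicyResult("block", prompt, "Topic restriction: circumvention requests")
    return PolicyResult("allow", prompt)

def append_disclaimer(prompt: str) -> PolicyResult:
    return PolicyResult("transform", prompt + "\n[Respond per company policy.]")

result = evaluate_chain("Summarize our Q3 results.", [block_circumvention, append_disclaimer])
```

In this sketch a transform carries forward silently, while a block short-circuits the chain and surfaces its reason to the user.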

Identifying Active Policies

Before crafting prompts, review the policies that govern your chat sessions:

  1. Open the Keeptrusts management console.
  2. Navigate to Policies or Configurations.
  3. Review the active policy chain applied to your gateway.
  4. Note any content restrictions, topic blocklists, or data-loss prevention rules.

Common policy types that affect prompts include:

  • Topic restrictions: Block prompts related to specific subjects.
  • PII detection: Flag or block prompts containing personally identifiable information.
  • Prompt injection detection: Identify and block attempts to override system instructions.
  • Token limits: Restrict prompt length to control costs.
  • Language restrictions: Enforce prompts in approved languages.
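
As a rough illustration of how a PII-detection policy might flag a prompt, here is a minimal email-address check. A production DLP policy covers far more identifier types (names, SSNs, account numbers); this regex is purely demonstrative:

```python
import re

# Simple email pattern; real PII detection covers many more identifier types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def contains_pii(prompt: str) -> bool:
    """Return True if the prompt appears to contain an email address."""
    return bool(EMAIL_RE.search(prompt))

contains_pii("Email jane.doe@example.com about the renewal")  # flagged
contains_pii("Email the account owner about the renewal")     # clean
```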

Crafting Effective Governed Prompts

Be Specific and Direct

Governed environments reward clarity. Vague prompts are more likely to trigger false positives on content policies.

Less effective:

Tell me everything about that topic we discussed.

More effective:

Summarize the key benefits of renewable energy adoption for manufacturing companies, focusing on cost reduction and regulatory compliance.

Provide Context Upfront

Including context helps the LLM produce relevant responses and reduces the chance of policy triggers from ambiguous phrasing.

You are assisting a compliance officer at a financial services firm.
Given the following transaction summary, identify any patterns
that may require Suspicious Activity Report (SAR) filing under
BSA/AML regulations.

[Transaction summary here]

Use Structured Prompts

Structured prompts produce consistent, policy-compliant outputs:

Task: Analyze the provided contract clause.
Format: Return a JSON object with keys "risk_level", "summary", and "recommendations".
Constraints: Do not include any client names or identifying information in the output.
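
Prompts in this Task/Format/Constraints shape can also be assembled programmatically, which keeps teams consistent. The helper below is a convenience sketch, not part of any Keeptrusts SDK:

```python
def build_structured_prompt(task: str, fmt: str, constraints: str) -> str:
    """Assemble a Task/Format/Constraints prompt block."""
    return (
        f"Task: {task}\n"
        f"Format: {fmt}\n"
        f"Constraints: {constraints}"
    )

prompt = build_structured_prompt(
    task="Analyze the provided contract clause.",
    fmt='Return a JSON object with keys "risk_level", "summary", and "recommendations".',
    constraints="Do not include any client names or identifying information in the output.",
)
```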

Respect Data Boundaries

If your organization has data-loss prevention (DLP) policies, avoid including sensitive data directly in prompts:

  • Reference data by identifier rather than embedding it.
  • Use placeholder values when demonstrating formats.
  • Leverage knowledge base assets for grounding instead of pasting raw data.
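
Referencing data by identifier rather than embedding it can be as simple as substituting placeholders before the prompt leaves your tooling. This helper is illustrative; the identifier scheme is an assumption, not an official feature:

```python
def redact_to_identifier(prompt: str, sensitive: dict) -> str:
    """Replace sensitive values with stable identifiers a backend could resolve."""
    for identifier, value in sensitive.items():
        prompt = prompt.replace(value, f"{{{identifier}}}")
    return prompt

safe = redact_to_identifier(
    "Summarize the account history for Jane Doe, SSN 123-45-6789.",
    {"customer_name": "Jane Doe", "customer_ssn": "123-45-6789"},
)
# safe == "Summarize the account history for {customer_name}, SSN {customer_ssn}."
```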

Interpreting Policy Feedback

When a policy intervenes, the Chat Workbench shows feedback that includes:

Block Messages

⚠ Policy "pii-detection" blocked this message.
Reason: The prompt contains personally identifiable information (email addresses).

How to respond: Remove the PII from your prompt. Reference the data indirectly or use anonymized values.

Redaction Notices

ℹ The response was modified by policy "output-redaction".
Some content was redacted to comply with data handling requirements.

How to respond: If redacted content was essential, rephrase your question to request the information in a policy-compliant format.

Escalation Notices

⏳ This conversation has been escalated for review.
A moderator will review and approve or deny the response.

How to respond: Wait for the escalation to be resolved. Adjust future prompts to avoid the escalation trigger.

Iterative Refinement Workflow

The Chat Workbench supports an iterative refinement loop:

  1. Draft your initial prompt with clear intent.
  2. Send the prompt and observe the response.
  3. Review any policy feedback or interventions.
  4. Adjust the prompt to work within policy boundaries.
  5. Resend the refined prompt.
  6. Repeat until you achieve the desired output.
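
The six steps above can be modeled as a loop. The `send` and `revise` callables below are stand-ins for the actual chat client and the human editing step; nothing here reflects a real Keeptrusts interface:

```python
def refine_until_allowed(initial_prompt, send, revise, max_attempts=5):
    """Resend refined prompts until one passes policy or attempts run out.

    send(prompt) -> (allowed: bool, feedback: str)   # stand-in for the chat client
    revise(prompt, feedback) -> str                  # stand-in for the human edit
    """
    prompt = initial_prompt
    for _ in range(max_attempts):
        allowed, feedback = send(prompt)
        if allowed:
            return prompt
        prompt = revise(prompt, feedback)
    raise RuntimeError("Prompt still blocked after max attempts")

# Toy stand-ins for demonstration
def send(prompt):
    return ("bypass" not in prompt, "Topic restriction: circumvention")

def revise(prompt, feedback):
    return prompt.replace("bypass the firewall", "request a firewall exception")

final = refine_until_allowed("How do I bypass the firewall?", send, revise)
```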

Example Refinement Cycle

Attempt 1 (blocked by topic restriction):

How do I bypass the company firewall to access blocked sites?

Attempt 2 (allowed, relevant to the user's actual need):

What are the approved methods for requesting access to
external websites through the IT department's firewall
exception process?

Rather than simply denying access, the governance layer's feedback is designed to redirect users toward compliant alternatives: the block explains the restriction, and the user reframes the request around the approved process.

Advanced Prompt Techniques

System Prompt Awareness

Your administrator may configure system prompts that are prepended to every conversation. These system prompts:

  • Set the assistant's persona and capabilities.
  • Define output format requirements.
  • Establish topic boundaries.

You cannot override system prompts from the chat input. Design your prompts to complement, not conflict with, the system prompt configuration.

Multi-Turn Prompt Strategies

In multi-turn conversations, the gateway evaluates the full context. Use this to your advantage:

  1. Set context early: Establish the task and constraints in your first message.
  2. Build incrementally: Add details in subsequent turns rather than one large prompt.
  3. Reference prior turns: Use phrases like "Based on the analysis above" to maintain coherence.
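
Building context turn by turn can be modeled as an append-only message list, the shape most chat APIs evaluate. This is a generic sketch, not the Keeptrusts wire format:

```python
messages = []

def add_turn(role: str, content: str) -> None:
    """Append a turn; the gateway evaluates the full accumulated context."""
    messages.append({"role": role, "content": content})

# 1. Set context early
add_turn("user", "You are helping audit Q3 vendor contracts. Flag risky clauses.")
# 2. Build incrementally
add_turn("user", "Clause 4.2: payment terms are net-90 with no late penalty.")
# 3. Reference prior turns
add_turn("user", "Based on the analysis above, suggest alternative payment terms.")
```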

Token Budget Awareness

If your organization enforces token limits, be mindful of prompt length:

  • Keep prompts concise without sacrificing clarity.
  • Offload reference material to knowledge base assets.
  • Use the model selector to choose models with appropriate context windows.
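
A rough pre-flight length check helps avoid token-limit rejections. The four-characters-per-token heuristic below is a common approximation for English text; your gateway's actual tokenizer may count differently, and the 4,000-token limit is an assumed example:

```python
def estimate_tokens(text: str) -> int:
    """Approximate token count: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def within_budget(prompt: str, limit: int = 4000) -> bool:
    """Check the estimated token count against a configured limit (assumed here)."""
    return estimate_tokens(prompt) <= limit

within_budget("Summarize the attached policy document.")  # well under budget
```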

Best Practices

Practice | Why it matters
Review active policies before starting | Avoids repeated blocks and frustration
Use specific, contextual language | Reduces false positive policy triggers
Avoid embedding raw sensitive data | Complies with DLP policies
Iterate based on policy feedback | The governance layer guides you toward compliance
Leverage knowledge base assets | Provides grounding without policy friction
Monitor token usage | Stays within cost and length budgets

Next steps

For AI systems

  • Canonical terms: prompt engineering, governance guardrails, policy feedback, input phase evaluation, prompt injection detection, PII detection, topic restrictions, token limits.
  • Policy actions on input: allow, block (with explanation), escalate (human review), transform (append instructions/disclaimers).
  • Console navigation: Policies, Configurations — review active policy chain to understand constraints.
  • Best next pages: Knowledge-Grounded Chat, Chat Analytics, Customizing the Chat Experience.

For engineers

  • Review active policies in the console (Policies or Configurations) before crafting prompts to understand boundaries.
  • Be specific and contextual in prompts — vague language triggers more false positives on content policies.
  • Provide context upfront (role, task, constraints) to reduce ambiguity that policies flag.
  • Use knowledge base assets to supply reference material instead of embedding sensitive data directly in prompts.
  • Monitor token usage — token limit policies will reject prompts that exceed configured thresholds.
  • Iterate based on policy feedback messages — the gateway explains why a prompt was blocked.

For leaders

  • Policy feedback is an educational tool — it guides users toward compliant prompt patterns over time.
  • High false positive rates on prompts indicate policies need tuning, not that users are doing something wrong.
  • Prompt engineering guidelines reduce support burden by helping users self-serve within governance boundaries.
  • Offloading reference material to knowledge base assets reduces both PII risk and token costs in prompts.