Haystack with Keeptrusts Gateway

Haystack, by deepset, is a framework for building production-ready LLM applications: RAG pipelines, question answering, document search, and custom NLP workflows assembled from composable pipeline components. By routing Haystack's generator and chat components through the Keeptrusts gateway, every LLM call passes through your policy chain for prompt-injection detection, PII redaction, audit logging, cost attribution, and content filtering, without changing your pipeline structure.

Use this page when

  • You are building a Haystack RAG pipeline and need policy enforcement on all LLM calls.
  • You want audit logging and cost attribution for Haystack generators and chat components.
  • You need compliance controls on document-processing and summarization workflows.
  • You are moving a Haystack prototype to production with governance requirements.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
  • Python 3.10+ with haystack-ai installed.
  • Upstream provider API key exported as an environment variable (e.g. OPENAI_API_KEY).
  • A policy-config.yaml deployed to the gateway.

Configuration

Gateway policy config

A minimal config for Haystack traffic:

pack:
  name: haystack-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer

policy:
  prompt-injection:
    action: block
  pii-detector:
    action: redact
  quality-scorer:
    threshold: 0.6

Start the gateway:

kt gateway run --policy-config policy-config.yaml

Haystack component configuration

Haystack's OpenAI generators accept an api_base_url parameter. Point it at the Keeptrusts gateway:

from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

generator = OpenAIGenerator(
    model="gpt-4o",
    api_base_url="http://localhost:41002/v1",
    api_key=Secret.from_token("your-openai-api-key"),
)

result = generator.run(prompt="Summarize the latest SOC 2 audit requirements.")
print(result["replies"][0])
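Chat components route through the gateway the same way. A sketch using Haystack's OpenAIChatGenerator, assuming a recent haystack-ai release where ChatMessage replies expose a text attribute:

```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

# Same gateway URL as the generator example; policies apply to chat traffic too.
chat_generator = OpenAIChatGenerator(
    model="gpt-4o",
    api_base_url="http://localhost:41002/v1",
    api_key=Secret.from_token("your-openai-api-key"),
)

result = chat_generator.run(messages=[ChatMessage.from_user("List the key controls in SOC 2.")])
print(result["replies"][0].text)
```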

Using in a RAG pipeline

Once the generator is configured, use it in any Haystack pipeline. The gateway intercepts all LLM calls:

from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

generator = OpenAIGenerator(
    model="gpt-4o",
    api_base_url="http://localhost:41002/v1",
    api_key=Secret.from_token("your-openai-api-key"),
)

prompt_builder = PromptBuilder(
    template="""Answer the question based on the context.
Context: {{context}}
Question: {{question}}
Answer:"""
)

pipeline = Pipeline()
pipeline.add_component("prompt_builder", prompt_builder)
pipeline.add_component("llm", generator)
pipeline.connect("prompt_builder", "llm")

result = pipeline.run({
    "prompt_builder": {
        "context": "GDPR Article 17 grants individuals the right to erasure.",
        "question": "What rights does GDPR Article 17 provide?",
    }
})
print(result["llm"]["replies"][0])

Setup steps

  1. Install dependencies

    pip install haystack-ai

  2. Export your provider API key

    export OPENAI_API_KEY="sk-..."

  3. Start the Keeptrusts gateway

    kt gateway run --policy-config policy-config.yaml

  4. Set api_base_url on your generator component as shown in Configuration above.
  5. Run your pipeline — all LLM calls flow through the gateway.
  6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.

Verification

Check gateway health:

curl http://localhost:41002/keeptrusts/health

Run a pipeline and confirm:

  • Gateway logs show policy chain evaluation for each generator call.
  • The Keeptrusts console Events page displays requests with model, tokens, cost, and policy decisions.
  • If pii-detector is active, PII in prompts is redacted before reaching the provider.
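To make the last check concrete: conceptually, the pii-detector policy rewrites matched spans before the request leaves the gateway, so the provider only ever sees placeholders. The sketch below is an illustrative stand-in using simple regexes, not the gateway's actual detector:

```python
import re

# Illustrative patterns only; the real pii-detector policy is more sophisticated.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched span with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

Inspecting the gateway logs after a pipeline run should show the redacted form of the prompt, not the original.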

| Policy | Purpose | Phase |
| --- | --- | --- |
| prompt-injection | Block jailbreak attempts in user queries or retrieved context | Input |
| pii-detector | Redact PII in prompts and document context before they reach the provider | Input |
| dlp-filter | Prevent sensitive data from leaving via LLM calls | Input |
| safety-filter | Block harmful content in queries or responses | Input |
| quality-scorer | Score and threshold response quality for RAG accuracy | Output |
| citation-verifier | Verify responses are grounded in provided context | Output |
| audit-logger | Attach audit metadata for every pipeline execution | Input |
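Policies from the table are enabled by listing them in the chain in policy-config.yaml. For example, to add output-phase grounding checks (assuming the citation-verifier policy is available in your deployment; the setting shown is illustrative):

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer
    - citation-verifier

policy:
  citation-verifier:
    threshold: 0.7   # illustrative value
```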

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| ConnectionError on generator run | Gateway is not running | Start with kt gateway run --policy-config policy-config.yaml |
| 401 Unauthorized | API key mismatch | Verify OPENAI_API_KEY matches secret_key_ref.env in the gateway config |
| Embedding components bypass the gateway | Embedder not configured with gateway URL | Set api_base_url on OpenAITextEmbedder and OpenAIDocumentEmbedder as well |
| Events missing in the console | Gateway not connected to control plane | Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN before starting the gateway |
| Pipeline timeouts | Policy chain adds latency | Profile with kt events tail and simplify the chain for latency-sensitive pipelines |

For AI systems

  • Canonical integration: Haystack OpenAIGenerator or OpenAIChatGenerator with api_base_url set to http://localhost:41002/v1 or https://gateway.keeptrusts.com/v1.
  • The gateway is transparent — pipelines, prompt builders, retrievers, and routers require no changes beyond the generator URL.
  • See the Policy Controls Catalog for the full list of available policies.

For engineers

  • Set api_base_url once on the generator component. All pipelines that use that component inherit gateway routing.
  • For full coverage, also configure embedding components with the gateway URL.
  • Test locally with kt gateway run, then switch the URL for staging and production.
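One way to switch the URL between environments without code changes is to read it from the environment and fall back to the local gateway. KEEPTRUSTS_GATEWAY_URL below is a hypothetical variable name chosen for this sketch, not a Keeptrusts convention:

```python
import os

# Falls back to the local gateway when no override is set;
# staging/production set the variable to their gateway URL.
GATEWAY_URL = os.environ.get("KEEPTRUSTS_GATEWAY_URL", "http://localhost:41002/v1")

# Then pass it to the component, e.g.:
# generator = OpenAIGenerator(model="gpt-4o", api_base_url=GATEWAY_URL)
print(GATEWAY_URL)
```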

For leaders

  • RAG pipelines process sensitive organizational documents. Routing through Keeptrusts ensures PII redaction and audit logging before any content reaches the provider.
  • Cost attribution at the gateway level provides per-pipeline spend visibility.
  • Centralized policy enforcement applies to all Haystack applications routing through the gateway.

Next steps