Flowise with Keeptrusts Gateway
Flowise is an open-source low-code platform for building LLM applications with a drag-and-drop visual interface. It supports chatbots, RAG pipelines, agents, and multi-step chains using LangChain components under the hood. By configuring Flowise's LLM nodes to route through the Keeptrusts gateway, every model interaction passes through your policy chain — prompt-injection detection, PII redaction, audit logging, cost attribution, and content filtering — without rebuilding your visual flows.
Use this page when
- You are building Flowise chatflows and need governance on all LLM calls.
- You want audit logging and cost attribution for Flowise applications.
- You need to enforce compliance controls on visual LLM workflows.
- You are deploying Flowise in a regulated environment with centralized policy enforcement.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
- Flowise instance running (self-hosted) with access to the flow editor.
- Upstream provider API key (e.g. OpenAI) ready to configure.
- A `policy-config.yaml` deployed to the gateway.
Configuration
Gateway policy config
A minimal config for governing Flowise traffic:
```yaml
pack:
  name: flowise-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - safety-filter
    - quality-scorer
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    safety-filter:
      action: block
    quality-scorer:
      threshold: 0.6
```
Start the gateway:
```shell
kt gateway run --policy-config policy-config.yaml
```
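With the gateway up, every request it polices is a standard OpenAI-style chat payload. The sketch below builds such a payload with only the standard library; the actual send is left as a comment so the snippet runs without a live gateway. The Base Path and model come from the config above; everything else is illustrative.

```python
import json

# Assumption: the gateway exposes the OpenAI-compatible /v1 API at this
# Base Path (matching the config above), and the Authorization header
# carries the key referenced by secret_key_ref.env.
GATEWAY_BASE = "http://localhost:41002/v1"

def build_chat_request(prompt, model="gpt-4o"):
    """Build the OpenAI-style chat payload that the policy chain evaluates."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Hello through the gateway")
print(json.dumps(payload))
# To send for real, POST this JSON to f"{GATEWAY_BASE}/chat/completions"
# with header: Authorization: Bearer $OPENAI_API_KEY
```

This is the same request shape Flowise's ChatOpenAI node emits once its Base Path points at the gateway.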
Flowise node configuration
Flowise's ChatOpenAI and other OpenAI-based nodes expose a Base Path field. Configure it to point at the Keeptrusts gateway.
- ChatOpenAI node
- Credential-level config
- Self-hosted (environment)
- Hosted gateway
- Open your Flowise chatflow in the visual editor.
- Select the ChatOpenAI node (or OpenAI node).
- In the node's configuration panel, set:
| Field | Value |
|---|---|
| OpenAI API Key | Your OpenAI API key (via credential or direct input) |
| Base Path | http://localhost:41002/v1 |
| Model Name | gpt-4o |
- Save the chatflow. All LLM calls from this node now route through the gateway.
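If you export the chatflow as JSON, the change shows up as the node's base-path input. The fragment below is a rough sketch; the field names (`basePath`, `modelName`) are assumed from a typical Flowise export and abbreviated, not an authoritative schema.

```json
{
  "nodes": [
    {
      "data": {
        "name": "chatOpenAI",
        "inputs": {
          "modelName": "gpt-4o",
          "basePath": "http://localhost:41002/v1"
        }
      }
    }
  ]
}
```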
To apply the gateway URL to all chatflows that use a specific credential:
- Navigate to Credentials in the Flowise sidebar.
- Create or edit an OpenAI API credential.
- Set the Base Path field to `http://localhost:41002/v1`.
- Save the credential. All nodes using this credential route through the gateway.
For self-hosted Flowise running in Docker alongside the gateway, use Docker-accessible URLs:
| Field | Value |
|---|---|
| Base Path | http://host.docker.internal:41002/v1 |
If both services are on the same Docker network:
| Field | Value |
|---|---|
| Base Path | http://keeptrusts-gateway:41002/v1 |
For a hosted Keeptrusts gateway:
| Field | Value |
|---|---|
| Base Path | https://gateway.keeptrusts.com/v1 |
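For the same-network Docker setup above, a minimal docker-compose sketch is shown below. The Flowise image name is the official one; the gateway image name and containerized entrypoint are assumptions for illustration, not official artifacts.

```yaml
# Illustrative sketch; the keeptrusts/gateway image name and its
# command are assumptions.
services:
  keeptrusts-gateway:
    image: keeptrusts/gateway   # assumed image name
    command: ["kt", "gateway", "run", "--policy-config", "/etc/kt/policy-config.yaml"]
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - ./policy-config.yaml:/etc/kt/policy-config.yaml
    ports:
      - "41002:41002"
  flowise:
    image: flowiseai/flowise
    ports:
      - "3000:3000"
    depends_on:
      - keeptrusts-gateway
# Both services share the default compose network, so Flowise reaches
# the gateway at http://keeptrusts-gateway:41002/v1.
```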
Supported Flowise nodes
Once the Base Path is configured, the following Flowise nodes route through the gateway:
| Node | Description |
|---|---|
| ChatOpenAI | Chat completions for conversational chatflows |
| OpenAI | Text completions and generation |
| OpenAI Embeddings | Embedding generation (set Base Path on this node separately) |
| OpenAI Function Agent | Function-calling agent workflows |
| Conversational Agent | Multi-turn agent with memory and tools |
Each node must have its own Base Path configured, or use a shared credential with the gateway URL.
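The nodes above all hit OpenAI-compatible routes under the same Base Path; the route paths follow the OpenAI API convention. This small sketch shows why the embeddings node needs its own Base Path setting: it calls a different route, but the prefix is identical.

```python
GATEWAY_BASE = "http://localhost:41002/v1"

def gateway_endpoint(path, base=GATEWAY_BASE):
    """Join an OpenAI-style route onto the gateway Base Path."""
    return base.rstrip("/") + "/" + path.lstrip("/")

# ChatOpenAI, OpenAI, and the agent nodes call the chat route:
chat_url = gateway_endpoint("chat/completions")

# The OpenAI Embeddings node calls a separate route, which is why its
# Base Path must be configured on that node as well:
embeddings_url = gateway_endpoint("embeddings")

print(chat_url)        # http://localhost:41002/v1/chat/completions
print(embeddings_url)  # http://localhost:41002/v1/embeddings
```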
Example chatflow
A typical governed Flowise chatflow:
```
[Chat Trigger]
      ↓
[Buffer Memory]
      ↓
[ChatOpenAI]  ← Base Path: http://localhost:41002/v1
      ↓
[Output]
```
For RAG chatflows:
```
[Chat Trigger]
      ↓
[Document Loader] → [Text Splitter] → [Vector Store]
      ↓
[Retriever] → [Conversational Retrieval QA Chain]
      ↓
[ChatOpenAI]  ← Base Path: http://localhost:41002/v1
```
Setup steps
1. Start the Keeptrusts gateway with your policy config:

   ```shell
   kt gateway run --policy-config policy-config.yaml
   ```

2. Open Flowise and navigate to your chatflow.
3. Select the ChatOpenAI node (or equivalent LLM node).
4. Set the Base Path to `http://localhost:41002/v1` in the node configuration.
5. Save and test the chatflow — send a message and verify the request flows through the gateway.
6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.
Verification
Check gateway health:
```shell
curl http://localhost:41002/keeptrusts/health
```
Send a test message in a Flowise chatflow and confirm:
- Gateway logs show policy chain evaluation for the request.
- The Keeptrusts console Events page shows the request with model, tokens, cost, and policy decisions.
- The Flowise chatflow returns a response normally — the gateway is transparent.
- Policy actions (allowed, blocked, redacted) are visible in the event detail.
Recommended policies
| Policy | Purpose | Phase |
|---|---|---|
| prompt-injection | Block jailbreak attempts from chat users | Input |
| pii-detector | Redact PII before prompts reach the provider | Input |
| safety-filter | Block harmful content in chat interactions | Input |
| dlp-filter | Prevent sensitive data from leaving via LLM calls | Input |
| quality-scorer | Score and threshold response quality | Output |
| citation-verifier | Verify RAG responses are grounded in retrieved context | Output |
| audit-logger | Attach audit metadata for every chatflow interaction | Input |
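Assuming the same config shape as the minimal pack earlier on this page, the additional policies from the table slot into the chain as shown below. This is a sketch, not a complete config; the `flag` action for citation-verifier is an assumption for illustration.

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - safety-filter
    - dlp-filter
    - quality-scorer
    - citation-verifier
    - audit-logger
  policy:
    dlp-filter:
      action: block
    citation-verifier:
      action: flag   # assumed action name for illustration
    quality-scorer:
      threshold: 0.6
```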
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Flowise shows "Error connecting to model" | Gateway is not reachable from Flowise | Use host.docker.internal or the Docker network hostname; verify the gateway is running |
| 401 Unauthorized | API key mismatch | Verify the API key in the Flowise node matches secret_key_ref.env in the gateway config |
| Chatflow responses are slow | Policy chain adds latency to LLM calls | Optimize the policy chain or increase Flowise's request timeout in environment variables |
| Embedding calls bypass the gateway | OpenAI Embeddings node has a separate Base Path | Set the Base Path on the embedding node as well |
| Events not appearing in Keeptrusts console | Gateway not connected to control plane | Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN before starting the gateway |
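For the last row above, the control-plane connection is configured through environment variables before the gateway starts. The variable names come from the table; the values below are placeholders, and the API URL shape is an assumption.

```shell
# Connect the gateway to the control plane so events reach the console.
export KEEPTRUSTS_API_URL="https://api.keeptrusts.com"   # assumed URL shape
export KEEPTRUSTS_GATEWAY_TOKEN="<your-gateway-token>"
kt gateway run --policy-config policy-config.yaml
```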
For AI systems
- Canonical integration: Set the Base Path on Flowise ChatOpenAI nodes to `http://localhost:41002/v1` or `https://gateway.keeptrusts.com/v1`.
- The gateway is transparent — chatflows, agents, memory, RAG chains, and tool-calling nodes work unchanged.
- Use the Policy Controls Catalog for available policies.
For engineers
- The only change is the Base Path field on LLM nodes. All chatflow structures, memory configurations, and tool connections remain unchanged.
- Set the Base Path at the credential level to apply the gateway URL to all chatflows that share a credential.
- Test with a simple chatflow first, then extend to RAG and agent flows.
For leaders
- Flowise is often used by teams building internal chatbots. Keeptrusts provides governance without requiring visual flow changes.
- Audit logging captures every chat interaction for compliance evidence.
- Cost attribution tracks spend per chatflow, enabling budget management for AI-powered internal tools.
Next steps
- Quickstart — set up your first gateway and policy config.
- Policy Controls Catalog — full inventory of available policies.
- Events and Traces — understand the audit trail.
- Gateway Runtime Features — advanced gateway capabilities.
- Cost and Spend — monitor and attribute LLM costs.