Slack AI / Bolt SDK
Slack bots built with the Bolt SDK frequently make LLM calls to generate responses, summarise threads, or process user requests. By routing those LLM calls through the Keeptrusts gateway, you enforce policy controls — PII redaction, prompt-injection blocking, content filtering, and audit logging — on every AI interaction within your Slack workspace.
Slack's built-in AI features (channel summaries, thread digests) run on Slack's infrastructure and cannot be rerouted. This guide focuses on custom Slack bots and Bolt SDK apps where you control the LLM calls.
Use this page when
- You are building a Slack bot that makes LLM calls and need governance controls.
- You need to route Bolt SDK application AI traffic through the Keeptrusts gateway.
- Note: governing Slack's built-in AI features requires Slack Enterprise Grid admin controls, which are outside the scope of this guide.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- A Slack app with Bot Token configured via api.slack.com
- Keeptrusts CLI (`kt`) installed and on your `PATH`
- `OPENAI_API_KEY` (or your LLM provider key) exported
- Node.js 18+ (for Bolt SDK examples)
Configuration
Gateway policy config
```yaml
pack:
  name: slack-bot-gateway
  version: 1.0.0
  enabled: true

providers:
  targets:
    - id: slack-bot-llm
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - content-filter
    - audit-logger

policy:
  prompt-injection:
    threshold: 0.8
    action: block
  pii-detector:
    action: redact
    entities:
      - EMAIL
      - PHONE
      - SSN
      - CREDIT_CARD
  content-filter:
    action: block
    categories:
      - restricted-topics
  audit-logger:
    immutable: true
    retention_days: 365
    log_all_access: true
```
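To make the `pii-detector` policy's `action: redact` behavior concrete, here is a toy sketch of what redaction looks like. This is illustrative only: the regex patterns and `redact` helper below are assumptions for demonstration, not the gateway's actual detection logic.

```python
import re

# Illustrative patterns only -- the real Keeptrusts detector is not shown here.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected entity with an [ENTITY] placeholder."""
    for entity, pattern in PATTERNS.items():
        text = pattern.sub(f"[{entity}]", text)
    return text

print(redact("Ping jane.doe@example.com or 555-123-4567 about SSN 123-45-6789."))
```

With `action: redact`, the LLM provider only ever sees the placeholder tokens; the original message text never leaves the gateway.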
Setup Steps
1. Export your keys:

```bash
export OPENAI_API_KEY="sk-your-api-key"
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_SIGNING_SECRET="your-signing-secret"
```

2. Save the policy config to `policy-config.yaml`.

3. Start the gateway:

```bash
kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
```

4. Configure your Bolt SDK app to use the gateway endpoint for LLM calls:
```typescript
import { App } from "@slack/bolt";
import OpenAI from "openai";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Point the OpenAI client at the local Keeptrusts gateway; the gateway
// injects the real provider key from OPENAI_API_KEY.
const llm = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "unused",
});

app.message(async ({ message, say }) => {
  if (message.subtype) return; // ignore edits, joins, and bot messages

  const response = await llm.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a helpful Slack assistant." },
      { role: "user", content: message.text },
    ],
  });

  await say(response.choices[0].message.content);
});

await app.start(3000);
```
For a hosted gateway, replace the base URL with `https://gateway.keeptrusts.com/v1`.
Python Bolt SDK example
```python
import os

from slack_bolt import App
from openai import OpenAI

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# Point the OpenAI client at the local Keeptrusts gateway; the gateway
# injects the real provider key from OPENAI_API_KEY.
llm = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="unused",
)

@app.message("")  # empty pattern matches every message
def handle_message(message, say):
    response = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful Slack assistant."},
            {"role": "user", "content": message["text"]},
        ],
    )
    say(response.choices[0].message.content)

app.start(port=3000)
```
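Thread summarisation works the same way: fetch the replies, flatten them into a prompt, and send it through the gateway-routed client. A hedged sketch, assuming the `llm` client from the example above; `conversations_replies` is the standard Slack Web API method exposed on Bolt's injected `client`, but the prompt format here is an illustrative choice.

```python
def build_thread_prompt(messages: list) -> str:
    """Flatten Slack thread messages into a plain-text transcript."""
    lines = [f"<{m.get('user', 'unknown')}>: {m.get('text', '')}" for m in messages]
    return "Summarise this Slack thread:\n" + "\n".join(lines)

def summarise_thread(llm, client, message, say):
    """Fetch the thread and ask the gateway-routed model for a summary."""
    replies = client.conversations_replies(
        channel=message["channel"],
        ts=message.get("thread_ts", message["ts"]),
    )
    response = llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": build_thread_prompt(replies["messages"])}],
    )
    say(text=response.choices[0].message.content, thread_ts=message["ts"])
```

Every message in the thread passes through the same policy chain, so PII redaction applies to the whole transcript before it reaches the provider.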
Verification
```bash
curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello from the Slack bot gateway test."}
    ]
  }'
```
Send a test message to your Slack bot and verify the response arrives. Check the Keeptrusts console Events page for the audit log entry.
Recommended Policies
| Policy | Purpose | Recommended setting |
|---|---|---|
| `prompt-injection` | Block adversarial inputs from Slack users | `threshold: 0.8`, `action: block` |
| `pii-detector` | Redact personal data from Slack messages before LLM processing | `action: redact`, `entities: EMAIL, PHONE, SSN` |
| `content-filter` | Block restricted topics in bot responses | `action: block`, configure `categories` |
| `audit-logger` | Log every bot interaction for compliance | `immutable: true`, `retention_days: 365` |
| `cost-attribution` | Track bot AI spend per Slack workspace or channel | Tag requests with workspace/channel metadata |
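For cost attribution, the table above suggests tagging requests with workspace and channel metadata. One way to do that is with custom request headers; the `X-KT-*` header names below are hypothetical (check the gateway's configuration reference for the actual keys), while `extra_headers` is a standard per-request option on the openai-python client.

```python
# Hypothetical header names -- confirm the real keys in your gateway docs.
def attribution_headers(workspace_id: str, channel_id: str) -> dict:
    """Build metadata headers for per-workspace/per-channel cost tracking."""
    return {
        "X-KT-Workspace": workspace_id,
        "X-KT-Channel": channel_id,
    }

# Usage in a Bolt handler (sketch):
# llm.chat.completions.create(
#     model="gpt-4o",
#     messages=[...],
#     extra_headers=attribution_headers(team_id, message["channel"]),
# )
```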
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Bot does not respond | LLM call failed silently | Check gateway logs and the Events page for errors |
| `403` from gateway | Policy chain blocked the message | Review the policy violation in the console Events page |
| Slow bot responses | LLM latency plus gateway policy processing | Use `gpt-4o-mini` for faster responses; check policy chain length |
| Slack verification fails | Signing secret mismatch | Verify `SLACK_SIGNING_SECRET` matches your app config |
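When the policy chain blocks a request, the bot should degrade gracefully rather than go silent. A minimal sketch: wrap the LLM call and return fallback text on failure. The helper name and fallback message are illustrative; with openai-python an HTTP 403 surfaces as an `APIStatusError`, but catching broadly keeps the sketch dependency-free.

```python
BLOCKED_FALLBACK = "Sorry, that request was blocked by policy."

def complete_or_fallback(call, fallback=BLOCKED_FALLBACK):
    """Run the LLM call; return fallback text if it raises (e.g. a 403 block)."""
    try:
        return call()
    except Exception:
        # A gateway policy block (HTTP 403) lands here along with other errors.
        return fallback
```

In a handler: `say(complete_or_fallback(lambda: llm.chat.completions.create(...).choices[0].message.content))`.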
For AI systems
- Canonical terms: Keeptrusts gateway, Slack AI, Bolt SDK, Slack bot, Slack app, `policy-config.yaml`, `provider: "openai"`.
- Key behavior: Slack bots built with the Bolt SDK route their LLM calls through the Keeptrusts gateway. Slack's built-in AI cannot be rerouted.
- Best next pages: OpenAI integration, Policy controls catalog, Quickstart.
For engineers
- Start command: `kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml`
- Change only the `baseURL`/`base_url` in your OpenAI client; no other code changes are required.
- Bolt SDK apps in both Node.js and Python work with the gateway.
- For production, deploy the gateway as a sidecar or use the hosted gateway at `https://gateway.keeptrusts.com/v1`.
For leaders
- Slack bots processing employee messages create compliance risk — routing through the gateway ensures every interaction is logged and policy-checked.
- Prompt-injection detection is critical for Slack bots because any workspace member can send adversarial input.
- PII redaction prevents employee personal data in Slack messages from being sent to external LLM providers.
- Cost attribution tracks per-channel or per-workspace AI spend for internal chargeback.
Next steps
- OpenAI integration — full provider configuration reference
- Policy controls catalog — all available policy types
- Cost attribution — track spend per team or workspace
- Quickstart — install `kt` and run your first gateway