Raycast AI

Raycast is a productivity launcher for macOS that includes AI-powered features and a rich extension ecosystem. Raycast AI Extensions can be configured to use custom OpenAI-compatible API endpoints, allowing you to route LLM calls through the Keeptrusts gateway for policy enforcement, audit logging, and cost attribution.

Raycast's built-in AI chat (Raycast Pro) uses Raycast's managed infrastructure and does not support custom endpoint configuration. This guide covers Raycast AI Extensions and custom scripts where you control the API endpoint.

Use this page when

  • You are building Raycast AI Extensions that make LLM calls and need governance.
  • You want to route your Raycast extension's AI traffic through the Keeptrusts gateway.
  • Note: Raycast Pro's built-in AI governance is managed by Raycast and is outside the scope of this guide.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Raycast installed on macOS
  • Raycast extension development environment set up
  • Keeptrusts CLI (kt) installed and on your PATH
  • OPENAI_API_KEY or equivalent LLM provider key exported

Configuration

Gateway policy config

pack:
  name: raycast-ai-gateway
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: raycast-llm
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
policies:
  chain:
    - pii-detector
    - content-filter
    - audit-logger
  policy:
    pii-detector:
      action: redact
      entities:
        - EMAIL
        - PHONE
        - SSN
        - CREDIT_CARD
    content-filter:
      action: block
      categories:
        - restricted-topics
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
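To make the pii-detector's redact action concrete, here is a toy sketch of the kind of transformation it applies to outbound prompts; the regex and the [REDACTED:EMAIL] placeholder are illustrative assumptions, not the gateway's actual implementation.

```typescript
// Toy illustration of what an email-redaction policy does to a prompt.
// The pattern and placeholder are assumptions; the gateway's real
// pii-detector is more robust and also covers PHONE, SSN, CREDIT_CARD.
function redactEmails(text: string): string {
  return text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[REDACTED:EMAIL]");
}

console.log(redactEmails("Contact jane.doe@example.com for access."));
// → Contact [REDACTED:EMAIL] for access.
```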

Setup Steps

  1. Export your API key:

     export OPENAI_API_KEY="sk-your-api-key"

  2. Save the policy config to policy-config.yaml.

  3. Start the gateway:

     kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

  4. Configure your Raycast extension to use the gateway endpoint. In your extension's TypeScript code:
import { useEffect, useState } from "react";
import { Detail } from "@raycast/api";
import OpenAI from "openai";

// Point the OpenAI client at the Keeptrusts gateway; the gateway injects the
// real provider key, so the client-side apiKey is a placeholder.
const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "unused",
});

export default function Command() {
  const [result, setResult] = useState<string>("");

  useEffect(() => {
    async function run() {
      const response = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [
          { role: "system", content: "You are a concise assistant." },
          { role: "user", content: "What is the capital of France?" },
        ],
      });
      setResult(response.choices[0].message.content ?? "");
    }
    run();
  }, []);

  return <Detail markdown={result} />;
}
  5. For extensions where end users should control the endpoint, declare the base URL as a Raycast preference in the extension manifest (package.json):
{
  "preferences": [
    {
      "name": "apiEndpoint",
      "type": "textfield",
      "required": true,
      "title": "API Endpoint",
      "description": "Keeptrusts gateway endpoint",
      "default": "http://localhost:41002/v1"
    }
  ]
}

For a hosted gateway, set the default to https://gateway.keeptrusts.com/v1.
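Inside the extension, the declared preference can then be read with getPreferenceValues from @raycast/api. The fallback helper below is a sketch; the DEFAULT_ENDPOINT constant and the empty-value handling are assumptions, not required behavior.

```typescript
// Resolve the gateway endpoint from the "apiEndpoint" preference, falling
// back to the local default if the field is left blank. In a real extension
// the value comes from getPreferenceValues<{ apiEndpoint: string }>().
const DEFAULT_ENDPOINT = "http://localhost:41002/v1";

function resolveEndpoint(preferenceValue: string | undefined): string {
  const value = preferenceValue?.trim();
  return value ? value : DEFAULT_ENDPOINT;
}

console.log(resolveEndpoint(undefined)); // → http://localhost:41002/v1
console.log(resolveEndpoint("https://gateway.keeptrusts.com/v1"));
```

The resolved string is what you pass as baseURL when constructing the OpenAI client.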

Verification

curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello from the Raycast extension gateway test."}
    ]
  }'

Run your Raycast extension and verify the AI response. Check the Keeptrusts console Events page for the audit log entry.
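If you check the response programmatically rather than by eye, the reply lives at choices[0].message.content in the OpenAI-compatible payload. A small helper, with an illustrative sample rather than captured gateway output:

```typescript
// Pull the assistant reply out of an OpenAI-compatible chat completion,
// tolerating an empty choices array or a null content field.
type ChatCompletion = {
  choices: { message: { role: string; content: string | null } }[];
};

function extractReply(completion: ChatCompletion): string {
  return completion.choices[0]?.message.content ?? "";
}

// Illustrative response shape:
const sample: ChatCompletion = {
  choices: [{ message: { role: "assistant", content: "Paris." } }],
};
console.log(extractReply(sample)); // → Paris.
```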

| Policy | Purpose | Recommended setting |
| --- | --- | --- |
| pii-detector | Redact personal data from extension inputs | `action: redact`, entities: EMAIL, PHONE, SSN |
| content-filter | Block restricted topics in AI responses | `action: block`, configure `categories` |
| audit-logger | Log every extension AI interaction | `immutable: true`, `retention_days: 365` |
| cost-attribution | Track extension AI spend per developer | Tag requests with developer metadata |
| prompt-injection | Block adversarial inputs if extensions accept user text | `threshold: 0.8`, `action: block` |
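For the cost-attribution row, one way to tag requests with developer metadata is custom headers on each gateway call. The x-kt-* header names below are hypothetical placeholders, not a documented Keeptrusts contract; check your gateway's attribution configuration for the expected keys.

```typescript
// Sketch: build per-request headers carrying developer metadata so the
// gateway can attribute extension AI spend. The x-kt-* names are
// hypothetical placeholders chosen for illustration.
function attributionHeaders(developer: string, extension: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    "x-kt-developer": developer,
    "x-kt-extension": extension,
  };
}

// These can be passed to the OpenAI client via its defaultHeaders option,
// or to fetch() directly.
console.log(attributionHeaders("jane@example.com", "deploy-helper"));
```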

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| Extension shows connection error | Gateway not running or wrong port | Verify the gateway is running on port 41002 |
| Cannot route Raycast Pro AI | Raycast Pro uses managed infrastructure | Only Raycast Extensions with custom endpoints can be routed |
| Gateway returns 403 | Policy chain blocked the request | Review the policy violation on the console Events page |
| Slow extension responses | Gateway policy processing adds latency | Use gpt-4o-mini for latency-sensitive extensions |

For AI systems

  • Canonical terms: Keeptrusts gateway, Raycast AI, Raycast Extensions, Raycast API, macOS, policy-config.yaml, provider: "openai".
  • Key behavior: Raycast AI Extensions route LLM calls through the Keeptrusts gateway by configuring a custom OpenAI-compatible base URL. Raycast Pro's built-in AI cannot be rerouted.
  • Best next pages: OpenAI integration, Policy controls catalog, Quickstart.

For engineers

  • Start command: kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
  • Configure the baseURL in your extension's OpenAI client to point to the gateway.
  • Use Raycast preferences to make the gateway endpoint configurable by end users.
  • Raycast extensions run in a Node.js-like environment — standard OpenAI SDK works.

For leaders

  • Raycast is widely adopted by developers and power users — governing extension AI traffic prevents shadow AI usage.
  • PII redaction prevents clipboard content and system data from being sent to external LLM providers.
  • Audit logging tracks individual developer AI usage through extensions for compliance visibility.
  • For enterprise rollouts, distribute extensions pre-configured with the hosted gateway URL.

Next steps