Linear + AI
Linear integrates AI features for issue triage, auto-labeling, and content drafting within its project management platform. Because Linear's built-in AI runs on Linear's infrastructure, you cannot directly reroute those LLM calls. This guide covers two governance patterns: monitoring Linear AI activity through webhooks, and routing custom AI integrations that process Linear data through the Keeptrusts gateway.
Use this page when
- You need to audit and govern AI features in your Linear workspace.
- You are building custom AI workflows that process Linear issues, projects, or comments.
- If you need direct LLM provider routing, see OpenAI integration or Anthropic integration.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- A Linear API key with read access to your workspace
- Keeptrusts CLI (`kt`) installed and on your `PATH`
- `OPENAI_API_KEY` or equivalent for your LLM provider (for custom integrations)
- A webhook endpoint for Linear event monitoring (optional)
Configuration
Gateway policy config for custom Linear AI workflows
```yaml
pack:
  name: linear-ai-gateway
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: linear-ai-processor
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
policies:
  chain:
    - pii-detector
    - content-filter
    - audit-logger
  policy:
    pii-detector:
      action: redact
      entities:
        - EMAIL
        - PHONE
        - SSN
    content-filter:
      action: block
      categories:
        - restricted-topics
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true
```
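Conceptually, the `pii-detector` step rewrites matched entities before a prompt leaves the gateway. The sketch below illustrates the idea only; the regexes are simplified stand-ins, not the gateway's actual detectors:

```python
# Illustration of entity redaction as configured above: each detected
# entity is replaced with a placeholder tag before LLM processing.
# These patterns are deliberately simple and NOT production-grade PII
# detectors.
import re

ENTITY_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\+?\d[\d\s().-]{7,}\d",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(text: str) -> str:
    """Replace each matched entity with a [TYPE] placeholder."""
    for entity, pattern in ENTITY_PATTERNS.items():
        text = re.sub(pattern, f"[{entity}]", text)
    return text

print(redact("Contact jane@example.com or 555-123-4567 about the outage."))
```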
Integration Patterns
Pattern 1: Webhook-based audit monitoring
Configure Linear webhooks to capture AI-related events and forward them to your audit system:
```shell
curl -X POST https://api.linear.app/graphql \
  -H "Content-Type: application/json" \
  -H "Authorization: ${LINEAR_API_KEY}" \
  -d '{
    "query": "mutation { webhookCreate(input: { url: \"https://your-audit-endpoint.com/linear-webhook\", resourceTypes: [\"Issue\", \"Comment\"] }) { success webhook { id } } }"
  }'
```
Your webhook handler can log Linear events to the Keeptrusts control plane for unified audit visibility.
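As a sketch, the core of such a handler turns a Linear delivery into a compact audit record. The record fields below are illustrative assumptions, not part of the Keeptrusts API; adapt them to whatever your audit sink expects:

```python
# Sketch: flatten a Linear webhook delivery into an audit-log record.
# Linear deliveries carry "type", "action", and "data" fields; the
# output record shape here is a placeholder for your audit sink's schema.
import json
from datetime import datetime, timezone

def to_audit_record(delivery: dict) -> dict:
    """Build a compact audit record from a Linear webhook delivery."""
    data = delivery.get("data", {})
    return {
        "source": "linear",
        "event_type": delivery.get("type"),   # e.g. "Issue", "Comment"
        "action": delivery.get("action"),     # e.g. "create", "update"
        "resource_id": data.get("id"),
        "received_at": datetime.now(timezone.utc).isoformat(),
    }

delivery = {
    "type": "Issue",
    "action": "update",
    "data": {"id": "ISS-123", "title": "Login page returns 500"},
}
print(json.dumps(to_audit_record(delivery), indent=2))
```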
Pattern 2: Route custom Linear AI workflows through the gateway
Build AI pipelines that process Linear data — issue triage, sprint planning, bug categorisation — and route the LLM calls through the gateway:
```python
import os

import requests
from openai import OpenAI

LINEAR_API_KEY = os.environ["LINEAR_API_KEY"]
GRAPHQL_URL = "https://api.linear.app/graphql"

# Fetch the ten most recent issues in the Triage state.
query = """
query {
  issues(filter: { state: { name: { eq: "Triage" } } }, first: 10) {
    nodes { id title description }
  }
}
"""

issues_response = requests.post(
    GRAPHQL_URL,
    headers={"Authorization": LINEAR_API_KEY, "Content-Type": "application/json"},
    json={"query": query},
)
issues_response.raise_for_status()
issues = issues_response.json()["data"]["issues"]["nodes"]

# Point the OpenAI client at the local Keeptrusts gateway. The gateway
# injects the real provider key, so the client-side key is unused.
client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="unused",
)

for issue in issues:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a triage assistant. Categorise and prioritise the issue."},
            {"role": "user", "content": f"Title: {issue['title']}\nDescription: {issue['description']}"},
        ],
    )
    print(f"Issue {issue['id']}: {response.choices[0].message.content}")
```
Setup Steps
- Export your keys:

  ```shell
  export LINEAR_API_KEY="lin_api_your-key"
  export OPENAI_API_KEY="sk-your-api-key"
  ```

- Save the policy config to `policy-config.yaml`.
- Start the gateway:

  ```shell
  kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
  ```

- Route your custom Linear AI pipeline through `http://localhost:41002/v1`.

For a hosted gateway, replace the base URL with `https://gateway.keeptrusts.com/v1`.
Verification
```shell
curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Categorise this issue: Login page returns 500 error after SSO update."}
    ]
  }'
```
Confirm the response and check the Keeptrusts console Events page for the audit log entry.
Recommended Policies
| Policy | Purpose | Recommended setting |
|---|---|---|
| `pii-detector` | Redact personal data from Linear issues before LLM processing | `action: redact`, entities: EMAIL, PHONE |
| `content-filter` | Block restricted content from being processed | `action: block`, configure categories |
| `audit-logger` | Log all AI processing of Linear data | `immutable: true`, `retention_days: 365` |
| `dlp-filter` | Detect proprietary patterns in issue descriptions | Configure regex for internal identifiers |
| `cost-attribution` | Track AI spend on Linear workflows per team | Tag requests with team metadata |
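A `dlp-filter` entry might look like the following sketch. The policy keys, pattern names, and regexes here are placeholders for your own internal identifier formats, not a documented Keeptrusts schema:

```yaml
# Hypothetical dlp-filter fragment for policy-config.yaml.
# Replace the regexes with your organisation's internal identifier formats.
policy:
  dlp-filter:
    action: redact
    patterns:
      - name: internal-ticket-id
        regex: "INT-[0-9]{4,}"
      - name: internal-hostname
        regex: "[a-z0-9-]+\\.corp\\.internal"
```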
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Linear API returns 401 | API key invalid or expired | Generate a new key at linear.app/settings/api |
| Cannot route native Linear AI | Linear AI uses internal infrastructure | Use Pattern 1 (webhooks) or Pattern 2 (custom workflows) |
| Gateway returns 403 | Policy chain blocked the request | Review the policy violation in the console Events page |
| Webhook events not arriving | Webhook URL unreachable or misconfigured | Verify the webhook endpoint is publicly accessible |
For AI systems
- Canonical terms: Keeptrusts gateway, Linear, Linear AI, Linear API, GraphQL, webhook monitoring, policy-config.yaml.
- Key behavior: Linear's built-in AI cannot be directly routed. Governance is achieved through webhook monitoring and routing custom Linear AI workflows through the gateway.
- Best next pages: OpenAI integration, Policy controls catalog, Quickstart.
For engineers
- Linear's built-in AI features cannot be rerouted — use webhooks for audit and the gateway for custom workflows.
- Linear uses a GraphQL API — query issues, comments, and projects to build governed AI pipelines.
- Route all LLM calls through the gateway by changing the `base_url` in your OpenAI client.
- Webhook events arrive as POST requests with `Linear-Signature` headers for verification.
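The `Linear-Signature` check can be implemented as an HMAC-SHA256 of the raw request body, keyed with your webhook signing secret and compared in constant time. A minimal sketch (verify Linear's current signing scheme against their API docs before relying on this):

```python
# Sketch: verify a Linear webhook delivery before trusting the payload.
# Assumes the signature is a hex-encoded HMAC-SHA256 of the raw body.
import hashlib
import hmac

def verify_linear_signature(secret: str, raw_body: bytes, signature: str) -> bool:
    """Return True if the signature matches the HMAC of the raw body."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking match position via timing.
    return hmac.compare_digest(expected, signature)
```

Reject deliveries with HTTP 401 when verification fails, and always compute the HMAC over the raw bytes, not a re-serialised JSON body.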
For leaders
- Linear AI governance requires a layered approach: webhook monitoring for built-in features and gateway routing for custom AI workflows.
- AI-powered issue triage and categorisation workflows are fully governable through the gateway.
- PII redaction prevents employee names, emails, and other personal data in issue descriptions from reaching LLM providers.
- Unified audit logging covers both Linear AI monitoring and custom AI workflows in a single compliance dashboard.
Next steps
- OpenAI integration — configure the LLM provider for Linear workflows
- Policy controls catalog — all available policy types
- Connectors — integrate external data sources with Keeptrusts
- Quickstart — install `kt` and run your first gateway