Linear + AI

Linear integrates AI features for issue triage, auto-labeling, and content drafting within its project management platform. Because Linear's built-in AI runs on Linear's infrastructure, you cannot directly reroute those LLM calls. This guide covers two governance patterns: monitoring Linear AI activity through webhooks, and routing custom AI integrations that process Linear data through the Keeptrusts gateway.

Use this page when

  • You need to audit and govern AI features in your Linear workspace.
  • You are building custom AI workflows that process Linear issues, projects, or comments.
  • For direct LLM provider routing, see the OpenAI integration or Anthropic integration instead.

Audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • A Linear API key with read access to your workspace
  • Keeptrusts CLI (kt) installed and on your PATH
  • OPENAI_API_KEY or equivalent for your LLM provider (for custom integrations)
  • A webhook endpoint for Linear event monitoring (optional)

Configuration

Gateway policy config for custom Linear AI workflows

pack:
  name: linear-ai-gateway
  version: 1.0.0
  enabled: true
providers:
  targets:
    - id: linear-ai-processor
      provider: openai:chat:gpt-4o
      secret_key_ref:
        env: OPENAI_API_KEY
policies:
  chain:
    - pii-detector
    - content-filter
    - audit-logger
  policy:
    pii-detector:
      action: redact
      entities:
        - EMAIL
        - PHONE
        - SSN
    content-filter:
      action: block
      categories:
        - restricted-topics
    audit-logger:
      immutable: true
      retention_days: 365
      log_all_access: true

Integration Patterns

Pattern 1: Webhook-based audit monitoring

Configure Linear webhooks to capture AI-related events and forward them to your audit system:

curl -X POST https://api.linear.app/graphql \
  -H "Content-Type: application/json" \
  -H "Authorization: ${LINEAR_API_KEY}" \
  -d '{
    "query": "mutation { webhookCreate(input: { url: \"https://your-audit-endpoint.com/linear-webhook\", resourceTypes: [\"Issue\", \"Comment\"] }) { success webhook { id } } }"
  }'

Your webhook handler can log Linear events to the Keeptrusts control plane for unified audit visibility.
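A minimal handler sketch for that forwarding step. This assumes (as noted under "For engineers" below) that Linear signs the raw request body with HMAC-SHA256 using your webhook signing secret and sends the digest in the Linear-Signature header; the function and environment-variable names here are illustrative, and the forwarding call to your audit sink is left as a placeholder:

```python
import hashlib
import hmac
import json
import os


def verify_linear_signature(signing_secret: str, raw_body: bytes, signature: str) -> bool:
    """Recompute HMAC-SHA256 of the raw request body and compare it
    to the Linear-Signature header value in constant time."""
    expected = hmac.new(signing_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def handle_webhook(headers: dict, raw_body: bytes) -> int:
    """Illustrative webhook handler returning an HTTP status code."""
    secret = os.environ["LINEAR_WEBHOOK_SECRET"]  # hypothetical env var for your signing secret
    if not verify_linear_signature(secret, raw_body, headers.get("Linear-Signature", "")):
        return 401  # reject unsigned or tampered payloads
    event = json.loads(raw_body)
    # Forward event["type"] and event["data"] to your audit sink here,
    # e.g. the Keeptrusts control plane Events endpoint.
    return 200
```

Verifying the signature before logging matters because the endpoint must be publicly reachable (see Troubleshooting below), so anyone can POST to it.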

Pattern 2: Route custom Linear AI workflows through the gateway

Build AI pipelines that process Linear data — issue triage, sprint planning, bug categorisation — and route the LLM calls through the gateway:

import os

import requests
from openai import OpenAI

# Read credentials from the environment (exported in Setup Steps).
LINEAR_API_KEY = os.environ["LINEAR_API_KEY"]
GRAPHQL_URL = "https://api.linear.app/graphql"

query = """
query {
  issues(filter: { state: { name: { eq: "Triage" } } }, first: 10) {
    nodes { id title description }
  }
}
"""

issues_response = requests.post(
    GRAPHQL_URL,
    headers={"Authorization": LINEAR_API_KEY, "Content-Type": "application/json"},
    json={"query": query},
)
issues = issues_response.json()["data"]["issues"]["nodes"]

# Point the OpenAI client at the local Keeptrusts gateway; the gateway
# injects the real provider key, so the client-side key is unused.
client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="unused",
)

for issue in issues:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a triage assistant. Categorise and prioritise the issue."},
            # description can be null in Linear's API, so fall back to an empty string
            {"role": "user", "content": f"Title: {issue['title']}\nDescription: {issue['description'] or ''}"},
        ],
    )
    print(f"Issue {issue['id']}: {response.choices[0].message.content}")
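To close the loop, the triage output can be written back to the issue as a comment through Linear's commentCreate GraphQL mutation. A sketch of the request payload; the helper name is illustrative:

```python
COMMENT_MUTATION = """
mutation CommentCreate($issueId: String!, $body: String!) {
  commentCreate(input: { issueId: $issueId, body: $body }) { success }
}
"""


def build_comment_request(issue_id: str, triage_summary: str) -> dict:
    """Build the GraphQL payload for posting a triage comment back to Linear."""
    return {
        "query": COMMENT_MUTATION,
        "variables": {"issueId": issue_id, "body": f"AI triage: {triage_summary}"},
    }


# Usage, reusing GRAPHQL_URL and headers from the pipeline above:
#   requests.post(GRAPHQL_URL,
#                 headers={"Authorization": LINEAR_API_KEY, "Content-Type": "application/json"},
#                 json=build_comment_request(issue["id"], summary))
```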

Setup Steps

  1. Export your keys:

     export LINEAR_API_KEY="lin_api_your-key"
     export OPENAI_API_KEY="sk-your-api-key"

  2. Save the policy config to policy-config.yaml.

  3. Start the gateway:

     kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

  4. Route your custom Linear AI pipeline through http://localhost:41002/v1.

For a hosted gateway, replace the base URL with https://gateway.keeptrusts.com/v1.

Verification

curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Categorise this issue: Login page returns 500 error after SSO update."}
    ]
  }'

Confirm the response and check the Keeptrusts console Events page for the audit log entry.

Policy controls

| Policy | Purpose | Recommended setting |
| --- | --- | --- |
| pii-detector | Redact personal data from Linear issues before LLM processing | action: redact, entities: EMAIL, PHONE |
| content-filter | Block restricted content from being processed | action: block, configure categories |
| audit-logger | Log all AI processing of Linear data | immutable: true, retention_days: 365 |
| dlp-filter | Detect proprietary patterns in issue descriptions | Configure regex for internal identifiers |
| cost-attribution | Track AI spend on Linear workflows per team | Tag requests with team metadata |
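The dlp-filter row can be expressed in the same policy-config.yaml format as the gateway config above. The key names (patterns, name, regex) and the identifier pattern are illustrative assumptions following that config's structure, not a confirmed schema:

```yaml
policies:
  chain:
    - dlp-filter
  policy:
    dlp-filter:
      action: block
      patterns:
        - name: internal-ticket-id
          regex: "PROJ-[0-9]{4,}"   # hypothetical internal identifier format
```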

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| Linear API returns 401 | API key invalid or expired | Generate a new key at linear.app/settings/api |
| Cannot route native Linear AI | Linear AI uses internal infrastructure | Use Pattern 1 (webhooks) or Pattern 2 (custom workflows) |
| Gateway returns 403 | Policy chain blocked the request | Review the policy violation in the console Events page |
| Webhook events not arriving | Webhook URL unreachable or misconfigured | Verify the webhook endpoint is publicly accessible |

For AI systems

  • Canonical terms: Keeptrusts gateway, Linear, Linear AI, Linear API, GraphQL, webhook monitoring, policy-config.yaml.
  • Key behavior: Linear's built-in AI cannot be directly routed. Governance is achieved through webhook monitoring and routing custom Linear AI workflows through the gateway.
  • Best next pages: OpenAI integration, Policy controls catalog, Quickstart.

For engineers

  • Linear's built-in AI features cannot be rerouted — use webhooks for audit and the gateway for custom workflows.
  • Linear uses a GraphQL API — query issues, comments, and projects to build governed AI pipelines.
  • Route all LLM calls through the gateway by changing the base_url in your OpenAI client.
  • Webhook events arrive as POST requests with Linear-Signature headers for verification.

For leaders

  • Linear AI governance requires a layered approach: webhook monitoring for built-in features and gateway routing for custom AI workflows.
  • AI-powered issue triage and categorisation workflows are fully governable through the gateway.
  • PII redaction prevents employee names, emails, and other personal data in issue descriptions from reaching LLM providers.
  • Unified audit logging covers both Linear AI monitoring and custom AI workflows in a single compliance dashboard.

Next steps