Cache Routing Rules for Different Team Policies

Not every team needs the same caching strategy. Security teams may require complete cache isolation, while platform teams benefit from shared caching. Cache routing rules let you direct traffic to different cache tiers based on team, repo, or policy classification.

Use this page when

  • You need to route specific teams, repos, or agents to private edge cache instead of the org-shared pool.
  • You are implementing data classification policies that require cache isolation for confidential workloads.
  • You want to verify that routing rules are correctly directing traffic to the intended cache tier.

Primary audience

  • Primary: AI Agents, Technical Engineers
  • Secondary: Technical Leaders

Cache Tiers

org_shared_cache

The organization-wide shared cache. All engineers with matching permissions can hit entries in this tier. Best for maximizing cost savings and hit rates across teams.

private_edge_cache

A per-gateway-instance cache that is not shared across the organization. Only the same gateway instance can hit its own entries. Best for sensitive workloads or when you need deterministic isolation.

Default Routing

By default, all traffic routes to the tier specified in default_tier:

workflow_cache:
  enabled: true
  default_tier: org_shared_cache

This means every request uses the org-shared cache unless a routing rule overrides it.

Routing Rules

Routing rules override the default tier for specific teams, repos, or agents:

workflow_cache:
  enabled: true
  default_tier: org_shared_cache
  routing_rules:
    - match:
        team_id: security-team
      tier: private_edge_cache
    - match:
        repo_id: secret-rotation
      tier: private_edge_cache
    - match:
        agent_id: penetration-tester
      tier: private_edge_cache

Rule Evaluation Order

Rules are evaluated top-to-bottom. The first matching rule determines the cache tier. If no rule matches, default_tier applies.
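The evaluation order above can be sketched as a small function (a minimal illustration, not the gateway's actual implementation; the rule and request shapes mirror the config examples on this page):

```python
# Sketch of first-match-wins rule evaluation. Not the gateway's real code;
# rules and requests are plain dicts shaped like the YAML config above.

def resolve_tier(request: dict, rules: list[dict], default_tier: str) -> str:
    """Return the cache tier for a request: the first matching rule wins."""
    for rule in rules:
        # A rule applies only if every condition in its match block holds.
        if all(request.get(key) == value for key, value in rule["match"].items()):
            return rule["tier"]
    # No rule matched: fall back to the configured default tier.
    return default_tier

rules = [
    {"match": {"team_id": "security-team"}, "tier": "private_edge_cache"},
    {"match": {"repo_id": "secret-rotation"}, "tier": "private_edge_cache"},
]

print(resolve_tier({"team_id": "security-team"}, rules, "org_shared_cache"))
# -> private_edge_cache
print(resolve_tier({"team_id": "platform-team"}, rules, "org_shared_cache"))
# -> org_shared_cache
```

Note that `all()` over the match block also captures the combined-condition behavior described below: a rule with both `team_id` and `repo_id` only fires when the request satisfies both.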

Match Conditions

Each rule can match on one or more conditions:

Field      Description
team_id    Match requests from members of this team
repo_id    Match requests from this repository
agent_id   Match requests handled by this agent
model_id   Match requests targeting this model
label      Match requests with this metadata label

You can combine conditions — all must match for the rule to apply:

routing_rules:
  - match:
      team_id: platform-team
      repo_id: api
    tier: org_shared_cache
  - match:
      team_id: security-team
      repo_id: api
    tier: private_edge_cache

Policy-Based Isolation

For organizations with strict data classification requirements, you can enforce isolation by policy label:

workflow_cache:
  default_tier: org_shared_cache
  routing_rules:
    - match:
        label: classification:confidential
      tier: private_edge_cache
    - match:
        label: classification:restricted
      tier: private_edge_cache

Requests tagged with classification:confidential or classification:restricted are routed to the private edge cache. All other requests use the shared cache.

Team-Specific Override Examples

Security team uses private cache

workflow_cache:
  default_tier: org_shared_cache
  routing_rules:
    - match:
        team_id: security-team
      tier: private_edge_cache

The security team's requests never enter the shared cache. Their responses are isolated to the gateway instance processing the request. All other teams share cache normally.

Platform team uses shared cache, everyone else uses private

workflow_cache:
  default_tier: private_edge_cache
  routing_rules:
    - match:
        team_id: platform-engineering
      tier: org_shared_cache
    - match:
        team_id: backend-services
      tier: org_shared_cache

Only the platform and backend teams contribute to and read from the shared cache. All other teams get private edge caching by default.

Different tiers per repo

workflow_cache:
  default_tier: org_shared_cache
  routing_rules:
    - match:
        repo_id: compliance-engine
      tier: private_edge_cache
    - match:
        repo_id: audit-trails
      tier: private_edge_cache
    - match:
        repo_id: public-docs
      tier: org_shared_cache

Combining Routing Rules with Semantic Replay

Routing rules determine which cache tier a request uses. Semantic replay settings determine whether similarity-based matching is allowed within that tier.

You can combine them:

workflow_cache:
  default_tier: org_shared_cache
  direct_semantic_replay_enabled: true
  similarity_threshold: 0.95
  routing_rules:
    - match:
        team_id: security-team
      tier: private_edge_cache

The security team uses private edge cache but still benefits from semantic replay within their own gateway instance. To fully disable semantic replay for them, add an agent or repo-level override.

Performance Considerations

org_shared_cache

  • Higher hit rates due to larger pool of contributing engineers.
  • Slightly higher latency on lookup due to network round-trip to shared cache service.
  • Best for cost optimization.

private_edge_cache

  • Lower hit rates — only the same instance contributes and reads.
  • Lower latency — lookup is local to the gateway process.
  • Best for latency-sensitive or isolation-critical workloads.

Validation

After deploying routing rules:

  1. Send a request from a team or repo matched by a rule.
  2. Check the response header x-keeptrusts-cache-tier — it should reflect the expected tier.
  3. Send the same request from a different team not matched by the rule.
  4. Verify it routes to the default tier.
  5. Check the event log for cache_tier field confirming correct routing.

Updating Rules

You can update routing rules at any time by deploying a new config version. Changes take effect on the next request after the gateway reloads the config. Existing cache entries are not moved between tiers — they remain in place and expire via TTL.

For engineers

  • Rules evaluate top-to-bottom; first match wins. If no rule matches, default_tier applies.
  • Combine match conditions (all must match) for granular routing: e.g., team_id + repo_id.
  • Validate after deploy: send a request from a matched team/repo, check x-keeptrusts-cache-tier response header.
  • Changes take effect on next request after gateway config reload. Existing entries stay in place and expire via TTL.
  • Use label: classification:confidential for policy-based isolation of sensitive workloads.

For leaders

  • Cache routing rules enforce data classification boundaries — confidential or restricted workloads stay isolated without disabling caching entirely.
  • Security teams get private edge cache (no cross-team leakage) while platform teams share cache for maximum savings.
  • Routing rules are additive and non-destructive — removing a rule simply falls back to the default tier.
  • Monitor routing correctness via the event log cache_tier field.
