
Combining Knowledge Base with Engineering Cache

Keeptrusts provides two complementary systems for enriching AI context: the Knowledge Base for curated, human-authored context, and the engineering cache for automated code intelligence. When you use both together, AI interactions receive the fullest possible context at the lowest possible cost.

Use this page when

  • You want to use both Keeptrusts Knowledge Base (curated context) and engineering cache (automated code intelligence) together.
  • You need to understand context assembly order and how KB bindings complement fabric artifacts.
  • You are deciding where to invest effort: Knowledge Base authoring vs. cache warming.

Primary audience

  • Primary: AI Agents, Technical Engineers
  • Secondary: Technical Leaders

Two Systems, One Goal

Both systems solve the same problem — giving AI enough context to produce accurate, relevant responses — but they approach it differently:

| Aspect | Knowledge Base | Engineering Cache |
|---|---|---|
| Content source | Human-curated documents | Automated code analysis |
| Update frequency | On authoring/promotion | On code change/access |
| Content type | Architecture decisions, conventions, guides | Code summaries, dependency graphs, test maps |
| Scope | Repository or organization | File, module, or repository |
| Cost model | One-time upload | Compute-on-first-access, then cached |

How They Complement Each Other

The Knowledge Base excels at context that does not change with every commit:

  • Architecture decision records
  • Coding conventions and style guides
  • Domain terminology and business rules
  • Onboarding documentation
  • Security policies and compliance requirements

The engineering cache excels at context that tracks code changes:

  • Current file summaries reflecting the latest code
  • Live dependency relationships
  • Up-to-date test coverage maps
  • Recent change history and patterns

Together, the AI receives both "how we do things" (Knowledge Base) and "what the code currently does" (engineering cache).

Context Assembly Order

When you query AI through a Keeptrusts gateway with both systems enabled, context assembles in this order:

  1. Knowledge Base bindings — Relevant curated documents bound to the repository or path pattern.
  2. Fabric artifacts — Cached code summaries, graphs, and maps for the files in scope.
  3. Semantic cache check — Previously generated responses for semantically similar queries.
  4. Provider call (if needed) — Only when no semantic cache hit exists.

This layered approach means the provider receives rich context from both systems, producing better responses while the semantic cache prevents redundant calls for similar questions.

Configuring Both Systems

You enable both systems in your gateway configuration:

gateway:
  cache:
    enabled: true
    org_shared: true
    semantic:
      enabled: true
      similarity_threshold: 0.90
    fabric:
      enabled: true
      generators:
        - type: code_summary
        - type: dependency_graph
        - type: test_map

  knowledge_base:
    enabled: true
    bindings:
      - scope: "repos/backend-api"
        assets:
          - architecture-overview
          - api-conventions
          - error-handling-guide
      - scope: "repos/*"
        assets:
          - coding-standards
          - security-policies

Knowledge Base Reduces Cache Miss Cost

When the engineering cache experiences a miss (first access to a new file), the Knowledge Base still provides context:

  • The AI understands your conventions even for files it has never seen.
  • Architecture context helps the AI place unfamiliar code in the broader system.
  • Domain terminology guides accurate interpretation of business logic.

This means cache misses produce better responses than they would without the Knowledge Base, reducing the need for follow-up queries that cost additional tokens.

Cache Reduces Knowledge Base Maintenance

The engineering cache automatically tracks code changes, reducing the pressure on your team to keep Knowledge Base documents current about implementation details:

  • You do not need to document every file's purpose — cache summaries handle that.
  • You do not need to maintain dependency diagrams — the cache generates them live.
  • You do not need to update test coverage documentation — test maps stay current.

Focus your Knowledge Base effort on stable, high-level context: architecture decisions, conventions, and domain knowledge that changes infrequently.

Cost Optimization Patterns

Using both systems together optimizes costs through complementary mechanisms:

  1. Knowledge Base prevents hallucination — Curated context reduces incorrect responses that trigger follow-up corrections.
  2. Cache prevents redundant computation — Shared artifacts avoid repeated provider calls for the same context.
  3. Semantic cache captures combined benefit — Responses informed by both systems get cached, serving future similar queries at zero cost.

Measuring Combined Effectiveness

Track these metrics to understand the combined value:

  • Context coverage — Percentage of AI queries that receive both Knowledge Base and cache context.
  • Response accuracy — Compare accuracy with Knowledge Base only, cache only, and both combined.
  • Follow-up rate — Fewer follow-up queries indicate better first-response quality.
  • Total cost per quality response — Combined system cost divided by responses that needed no follow-up.

When to Invest in Each System

Invest more in Knowledge Base when:

  • Engineers frequently ask about conventions or architecture.
  • AI responses show misunderstanding of your domain language.
  • New team members struggle with onboarding context.

Invest more in engineering cache when:

  • Provider costs are high due to large context windows.
  • Multiple engineers work on the same code areas.
  • CI/CD pipelines duplicate context generation.

Promotion Lifecycle Integration

Knowledge Base assets follow a promotion lifecycle (draft → active), while cache entries use TTL-based expiry. You can align these lifecycles:

  • Promote Knowledge Base assets when architectural decisions stabilize.
  • Configure longer fabric TTLs for stable modules documented in the Knowledge Base.
  • Shorten fabric TTLs for rapidly evolving modules not yet documented.
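A config fragment aligning fabric TTLs with documentation maturity might look like the sketch below. This is an assumption-laden illustration: the `ttl_overrides` key, per-scope `ttl` values, and glob syntax are hypothetical extensions of the fabric config shown earlier, not a confirmed schema.

```yaml
# Hypothetical TTL alignment; `ttl_overrides` and the per-scope `ttl`
# key are assumptions, not a documented Keeptrusts schema.
gateway:
  cache:
    fabric:
      enabled: true
      ttl_overrides:
        - scope: "repos/backend-api/core/**"         # stable, KB-documented
          ttl: 7d
        - scope: "repos/backend-api/experiments/**"  # rapidly evolving
          ttl: 1h
```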

Next steps

  • Audit existing documentation to identify content suitable for Knowledge Base promotion.
  • Enable both knowledge_base and cache.fabric in your gateway configuration.
  • Monitor context coverage to ensure queries receive both curated and automated context.
  • Cache-First Culture — organizational practices that maximize combined value.
  • File Summaries — how automated summaries reduce KB maintenance burden.

For AI systems

  • Canonical terms: Keeptrusts Knowledge Base, engineering cache, Codebase Context Fabric, curated context, automated code intelligence, context assembly order, KB bindings, fabric artifacts, promotion lifecycle.
  • Feature/config names: gateway.knowledge_base.enabled, gateway.knowledge_base.bindings, gateway.cache.fabric.enabled, gateway.cache.semantic.enabled, context coverage metric, follow-up rate metric.
  • Best next pages: Cache-First Culture, File Summaries, Fabric Slices Reduce Prompts.

For engineers

  • Prerequisites: Gateway with both knowledge_base.enabled: true and cache.fabric.enabled: true; at least one Knowledge Base asset promoted to active status.
  • Validate: Send a query that references a KB-bound repository; confirm the response reflects both curated conventions and current code state.
  • Monitor context coverage percentage in the console — target 80%+ queries receiving both KB and cache context.
  • Focus KB authoring on stable architectural context (ADRs, conventions, domain terms); let the cache handle file summaries and dependency graphs.

For leaders

  • Combined systems produce better first-response quality, reducing expensive follow-up queries and correction cycles.
  • Knowledge Base prevents hallucination (curated guardrails), cache prevents redundant computation (shared artifacts) — complementary ROI.
  • Investment guidance: KB for stable, infrequently changing context; cache for code-tracking context that evolves with every commit.
  • Promotion lifecycle alignment ensures architectural decisions are authoritative once stabilized, while rapidly evolving modules rely on automated freshness.