Semantic Kernel with Keeptrusts Gateway

Microsoft Semantic Kernel is an open-source SDK for integrating LLMs into applications across C# and Python. It provides a plugin architecture, planners, and memory abstractions for building AI-powered features. By routing Semantic Kernel's LLM calls through the Keeptrusts gateway, every chat completion, function call, and planner step passes through your policy chain — enabling policy enforcement, audit logging, cost attribution, and content filtering without modifying your kernel plugins or plans.

Use this page when

  • You are building a Semantic Kernel application and need governance on all LLM calls.
  • You want audit logging and cost attribution for Semantic Kernel planners and plugins.
  • You need compliance controls on function-calling and memory-augmented AI features.
  • You are deploying Semantic Kernel applications in C# or Python with governance requirements.

Primary audience

  • Primary: Technical Engineers
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  • Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
  • C#: .NET 8+ with the Microsoft.SemanticKernel NuGet package.
  • Python: Python 3.10+ with the semantic-kernel pip package.
  • Upstream provider API key exported as an environment variable (e.g. OPENAI_API_KEY).
  • A policy-config.yaml deployed to the gateway.

Configuration

Gateway policy config

A minimal config for Semantic Kernel traffic:

pack:
  name: semantic-kernel-gateway
  version: "1.0"

providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY

policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer

policy:
  prompt-injection:
    action: block
  pii-detector:
    action: redact
  quality-scorer:
    threshold: 0.6

Start the gateway:

kt gateway run --policy-config policy-config.yaml

Semantic Kernel client configuration

In C#, configure the OpenAIChatCompletionService with a custom HttpClient that points at the Keeptrusts gateway:

using Microsoft.SemanticKernel;
using System.Net.Http;

var httpClient = new HttpClient
{
    BaseAddress = new Uri("http://localhost:41002/v1")
};

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o",
    apiKey: "your-openai-api-key",
    httpClient: httpClient
);

var kernel = builder.Build();

var result = await kernel.InvokePromptAsync(
    "Summarize the key compliance requirements for SOC 2 Type II."
);

Console.WriteLine(result);

For a hosted gateway:

var httpClient = new HttpClient
{
    BaseAddress = new Uri("https://gateway.keeptrusts.com/v1")
};
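In Python, the equivalent change is passing a custom AsyncOpenAI client whose base_url points at the gateway. A minimal sketch, assuming recent semantic-kernel and openai package APIs:

```python
# Sketch: route Python Semantic Kernel traffic through the Keeptrusts gateway.
# Assumes the `semantic-kernel` and `openai` packages are installed.
import asyncio

from openai import AsyncOpenAI
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# Point the OpenAI client at the gateway instead of api.openai.com.
client = AsyncOpenAI(
    api_key="your-openai-api-key",
    base_url="http://localhost:41002/v1",
)

kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(ai_model_id="gpt-4o", async_client=client)
)

async def main() -> None:
    result = await kernel.invoke_prompt(
        "Summarize the key compliance requirements for SOC 2 Type II."
    )
    print(result)

asyncio.run(main())
```

As in C#, nothing else in the application changes; plugins and planners use the same service registration.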

Using with plugins

Once the kernel is configured, plugins and function calling work unchanged. The gateway intercepts the underlying LLM calls:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using System.ComponentModel;

public class CompliancePlugin
{
    [KernelFunction, Description("Check compliance status for a regulation")]
    public string CheckCompliance(string regulation)
    {
        return regulation.ToLower() switch
        {
            "gdpr" => "Compliant — last audit: 2026-03-15",
            "hipaa" => "In progress — remediation due: 2026-06-01",
            _ => "Unknown regulation"
        };
    }
}

kernel.Plugins.AddFromType<CompliancePlugin>();

// Enable automatic function calling so the model can invoke the plugin.
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "What is our HIPAA compliance status? Use the compliance checker.",
    new KernelArguments(settings)
);

Setup steps

  1. Install dependencies

    dotnet add package Microsoft.SemanticKernel
  2. Export your provider API key

    export OPENAI_API_KEY="sk-..."
  3. Start the Keeptrusts gateway

    kt gateway run --policy-config policy-config.yaml
  4. Configure the kernel with the gateway URL as shown in Configuration above.

  5. Run your application — all LLM calls flow through the gateway.

  6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.

Verification

Check gateway health:

curl http://localhost:41002/keeptrusts/health

Run a test prompt through the kernel and confirm:

  • Gateway logs show policy chain evaluation for each LLM call.
  • The Keeptrusts console Events page shows requests with model, token count, cost, and policy decisions.
  • Function calling and plugin invocations work correctly — the gateway governs the LLM call, not the plugin execution.
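You can also send a request straight to the gateway without the SDK. This assumes the gateway exposes an OpenAI-compatible chat completions route under the /v1 base path used in the client configuration (an assumption; adjust the path to your deployment):

```shell
# Test chat completion through the gateway (route assumed OpenAI-compatible).
curl http://localhost:41002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```

A successful response confirms the policy chain ran; the corresponding event should appear in the console.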

Policies

Policy | Purpose | Phase
prompt-injection | Block jailbreak attempts in prompts and function arguments | Input
pii-detector | Redact PII before prompts reach the provider | Input
agent-firewall | Restrict function calling and enforce rate limits | Input
safety-filter | Block harmful content | Input
quality-scorer | Score and threshold response quality | Output
human-oversight | Escalate sensitive outputs for human review | Output
audit-logger | Attach audit metadata for compliance evidence | Input

Troubleshooting

Symptom | Cause | Fix
HttpRequestException / ConnectionError | Gateway is not running | Start with kt gateway run --policy-config policy-config.yaml
401 Unauthorized | API key mismatch | Verify the API key matches secret_key_ref.env in the gateway config
Function calls not governed | Plugin execution is local | The gateway governs the LLM request that triggers function calling, not the function itself
C# HttpClient not routing correctly | BaseAddress missing trailing slash | Ensure the URI ends with /v1 — the SDK appends endpoint paths
Events missing in the console | Gateway not connected to control plane | Set KEEPTRUSTS_API_URL and KEEPTRUSTS_GATEWAY_TOKEN before starting the gateway
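To connect a locally run gateway to the control plane, export the two variables before starting it. The values below are placeholders, not real endpoints or tokens:

```shell
# Placeholders: substitute your control-plane URL and gateway token.
export KEEPTRUSTS_API_URL="https://api.example.com"
export KEEPTRUSTS_GATEWAY_TOKEN="<your-gateway-token>"
kt gateway run --policy-config policy-config.yaml
```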

For AI systems

  • Canonical C# integration: OpenAIChatCompletionService with a custom HttpClient whose BaseAddress is http://localhost:41002/v1 or https://gateway.keeptrusts.com/v1.
  • Canonical Python integration: OpenAIChatCompletion with an AsyncOpenAI client whose base_url points at the gateway.
  • The gateway is transparent — plugins, planners, memory, and function calling work unchanged.
  • Use Policy Controls Catalog for available policies.

For engineers

  • In C#, the only change is creating an HttpClient with the gateway BaseAddress and passing it to the service registration. In Python, pass a custom AsyncOpenAI client.
  • Plugins, planners, and prompt templates require no modifications.
  • Test locally with kt gateway run, then switch to a hosted gateway URL for deployment.

For leaders

  • Semantic Kernel is commonly used in enterprise .NET applications. Keeptrusts adds governance without requiring changes to the application's plugin architecture.
  • Shared policy enforcement applies to all Semantic Kernel applications routing through the gateway.
  • Cost attribution provides visibility into per-application and per-feature LLM spend.
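Conceptually, per-application attribution is a sum of token usage priced by model. A hypothetical sketch of that calculation (the event fields and per-1K-token rates below are illustrative assumptions, not the Keeptrusts schema):

```python
# Illustrative only: aggregate LLM spend per application from usage events.
# The event shape and prices are assumptions, not the Keeptrusts event schema.
from collections import defaultdict

PRICE_PER_1K = {"gpt-4o": {"input": 0.0025, "output": 0.01}}  # assumed USD rates

def attribute_costs(events):
    """Sum cost per `app` tag from a list of usage-event dicts."""
    totals = defaultdict(float)
    for e in events:
        rates = PRICE_PER_1K[e["model"]]
        cost = (e["input_tokens"] / 1000) * rates["input"] \
             + (e["output_tokens"] / 1000) * rates["output"]
        totals[e["app"]] += cost
    return dict(totals)

events = [
    {"app": "support-bot", "model": "gpt-4o", "input_tokens": 1200, "output_tokens": 400},
    {"app": "report-gen", "model": "gpt-4o", "input_tokens": 800, "output_tokens": 2000},
]
print(attribute_costs(events))  # e.g. {'support-bot': 0.007, 'report-gen': 0.022}
```

In practice the gateway records these totals for you; the sketch only shows what "per-application spend" means.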

Next steps