Semantic Kernel with Keeptrusts Gateway
Microsoft Semantic Kernel is an open-source SDK for integrating LLMs into applications across C# and Python. It provides a plugin architecture, planners, and memory abstractions for building AI-powered features. By routing Semantic Kernel's LLM calls through the Keeptrusts gateway, every chat completion, function call, and planner step passes through your policy chain — enabling policy enforcement, audit logging, cost attribution, and content filtering without modifying your kernel plugins or plans.
Use this page when
- You are building a Semantic Kernel application and need governance on all LLM calls.
- You want audit logging and cost attribution for Semantic Kernel planners and plugins.
- You need compliance controls on function-calling and memory-augmented AI features.
- You are deploying Semantic Kernel applications in C# or Python with governance requirements.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts CLI installed and a gateway running locally or centrally (Quickstart).
- C#: .NET 8+ with the `Microsoft.SemanticKernel` NuGet package, or Python: Python 3.10+ with the `semantic-kernel` pip package.
- Upstream provider API key exported as an environment variable (e.g. `OPENAI_API_KEY`).
- A `policy-config.yaml` deployed to the gateway.
Configuration
Gateway policy config
A minimal config for Semantic Kernel traffic:
```yaml
pack:
  name: semantic-kernel-gateway
  version: "1.0"
providers:
  - name: openai
    model: gpt-4o
    secret_key_ref:
      env: OPENAI_API_KEY
policies:
  chain:
    - prompt-injection
    - pii-detector
    - quality-scorer
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    quality-scorer:
      threshold: 0.6
```
Start the gateway:
```shell
kt gateway run --policy-config policy-config.yaml
```
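The `chain` list sets the order in which policies evaluate each request. As a rough mental model only (the policy internals are Keeptrusts implementation details, and every rule below is invented for illustration), a chain combining a `block` action with a `redact` action behaves roughly like this sketch:

```python
import re

# Hypothetical stand-ins for the prompt-injection (action: block) and
# pii-detector (action: redact) policies; the matching rules are invented.
def prompt_injection(text: str) -> str:
    if "ignore previous instructions" in text.lower():
        raise PermissionError("blocked by prompt-injection")
    return text

def pii_detector(text: str) -> str:
    # redact: mask email addresses before the prompt reaches the provider
    return re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[EMAIL]", text)

def run_input_chain(prompt: str) -> str:
    # Policies run in the order listed under `chain`; a block raises,
    # a redact rewrites the prompt and passes it along.
    for policy in (prompt_injection, pii_detector):
        prompt = policy(prompt)
    return prompt

print(run_input_chain("Summarize the ticket filed by jane@example.com"))
# -> Summarize the ticket filed by [EMAIL]
```

A blocked request never reaches the provider; a redacted one continues through the rest of the chain with the rewritten prompt.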
Semantic Kernel client configuration
- C#
- Python
In C#, configure the `OpenAIChatCompletionService` with a custom `HttpClient` that points at the Keeptrusts gateway:

```csharp
using Microsoft.SemanticKernel;
using System.Net.Http;

// Note the trailing slash: the SDK appends endpoint paths relative to
// BaseAddress, and without the slash the /v1 segment is dropped.
var httpClient = new HttpClient
{
    BaseAddress = new Uri("http://localhost:41002/v1/")
};

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o",
    apiKey: "your-openai-api-key",
    httpClient: httpClient
);
var kernel = builder.Build();

var result = await kernel.InvokePromptAsync(
    "Summarize the key compliance requirements for SOC 2 Type II."
);
Console.WriteLine(result);
```
For a hosted gateway:

```csharp
var httpClient = new HttpClient
{
    BaseAddress = new Uri("https://gateway.keeptrusts.com/v1/")
};
```
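The trailing slash on `BaseAddress` is load-bearing: .NET resolves the SDK's relative endpoint paths per RFC 3986, which replaces the last path segment when the base URI lacks a trailing slash. Python's stdlib `urljoin` follows the same resolution rules, so it can demonstrate the difference:

```python
from urllib.parse import urljoin

# RFC 3986 relative resolution, the same rules .NET's Uri class applies
print(urljoin("http://localhost:41002/v1", "chat/completions"))
# -> http://localhost:41002/chat/completions   (the /v1 segment is dropped)
print(urljoin("http://localhost:41002/v1/", "chat/completions"))
# -> http://localhost:41002/v1/chat/completions
```

This is why the troubleshooting table below flags a missing trailing slash as a routing failure.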
In Python, configure the `OpenAIChatCompletion` service with a custom `async_client`:

```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="http://localhost:41002/v1",
    api_key="your-openai-api-key",
)

kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        ai_model_id="gpt-4o",
        async_client=client,
    )
)
```
For a hosted gateway:

```python
client = AsyncOpenAI(
    base_url="https://gateway.keeptrusts.com/v1",
    api_key="your-openai-api-key",
)
```
Using with plugins
Once the kernel is configured, plugins and function calling work unchanged. The gateway intercepts the underlying LLM calls:
- C# plugin
- Python plugin
```csharp
using Microsoft.SemanticKernel;
using System.ComponentModel;

public class CompliancePlugin
{
    [KernelFunction, Description("Check compliance status for a regulation")]
    public string CheckCompliance(string regulation)
    {
        return regulation.ToLower() switch
        {
            "gdpr" => "Compliant — last audit: 2026-03-15",
            "hipaa" => "In progress — remediation due: 2026-06-01",
            _ => "Unknown regulation"
        };
    }
}
```

```csharp
kernel.Plugins.AddFromType<CompliancePlugin>();

// Opt in to automatic function calling so the model can invoke the plugin.
var settings = new PromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};
var result = await kernel.InvokePromptAsync(
    "What is our HIPAA compliance status? Use the compliance checker.",
    new KernelArguments(settings)
);
```
```python
from semantic_kernel.functions import kernel_function

class CompliancePlugin:
    @kernel_function(description="Check compliance status for a regulation")
    def check_compliance(self, regulation: str) -> str:
        statuses = {"gdpr": "compliant", "hipaa": "in-progress", "sox": "non-compliant"}
        return statuses.get(regulation.lower(), "unknown")

kernel.add_plugin(CompliancePlugin(), plugin_name="compliance")
```
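The gateway governs the LLM traffic around function calling, not the plugin code itself. A toy trace of one function-calling turn (every function here is an invented stand-in, not a Keeptrusts or Semantic Kernel API) makes that boundary explicit:

```python
def run_turn(call_llm_via_gateway, execute_plugin_locally, prompt):
    """Return the hops in one function-calling turn, labeling which are governed."""
    hops = []
    reply = call_llm_via_gateway(prompt)                     # governed: policy chain runs
    hops.append("gateway: prompt + tool schemas")
    if reply.get("tool_call"):
        execute_plugin_locally(reply["tool_call"])           # local: not governed
        hops.append("local: plugin execution")
        call_llm_via_gateway("tool result")                  # governed again
        hops.append("gateway: tool result -> final answer")
    return hops

# Fake model: the first call requests the tool, any later call just answers.
calls = {"n": 0}
def fake_llm(_msg):
    calls["n"] += 1
    return {"tool_call": "check_compliance"} if calls["n"] == 1 else {}

hops = run_turn(fake_llm, lambda name: f"{name} -> in-progress", "HIPAA status?")
print(hops)
```

Both LLM hops cross the gateway and are policy-checked; the plugin execution in between runs inside your process and never does.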
Setup steps
1. Install dependencies:
   - C#: `dotnet add package Microsoft.SemanticKernel`
   - Python: `pip install semantic-kernel`
2. Export your provider API key: `export OPENAI_API_KEY="sk-..."`
3. Start the Keeptrusts gateway: `kt gateway run --policy-config policy-config.yaml`
4. Configure the kernel with the gateway URL as shown in Configuration above.
5. Run your application — all LLM calls flow through the gateway.
6. Verify in the Keeptrusts console — open Events to confirm requests appear with policy outcomes.
Verification
Check gateway health:
```shell
curl http://localhost:41002/keeptrusts/health
```
Run a test prompt through the kernel and confirm:
- Gateway logs show policy chain evaluation for each LLM call.
- The Keeptrusts console Events page shows requests with model, token count, cost, and policy decisions.
- Function calling and plugin invocations work correctly — the gateway governs the LLM call, not the plugin execution.
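The curl check above can also be scripted for CI or startup probes. This is a stdlib-only sketch; only the `/keeptrusts/health` path comes from this page, and since the response body's shape is not documented here, it is returned raw:

```python
import urllib.request

def health_url(base: str) -> str:
    """Build the health endpoint URL from a gateway base URL."""
    return base.rstrip("/") + "/keeptrusts/health"

def check_gateway(base: str = "http://localhost:41002", timeout: float = 5.0):
    """Return (status_code, body) from the gateway health endpoint.

    Raises urllib.error.URLError if the gateway is not running.
    """
    with urllib.request.urlopen(health_url(base), timeout=timeout) as resp:
        return resp.status, resp.read().decode()

print(health_url("http://localhost:41002"))
# -> http://localhost:41002/keeptrusts/health
```

With the gateway running, `check_gateway()` returns the HTTP status and raw body; a `URLError` means the gateway is down.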
Recommended policies
| Policy | Purpose | Phase |
|---|---|---|
| `prompt-injection` | Block jailbreak attempts in prompts and function arguments | Input |
| `pii-detector` | Redact PII before prompts reach the provider | Input |
| `agent-firewall` | Restrict function calling and enforce rate limits | Input |
| `safety-filter` | Block harmful content | Input |
| `quality-scorer` | Score and threshold response quality | Output |
| `human-oversight` | Escalate sensitive outputs for human review | Output |
| `audit-logger` | Attach audit metadata for compliance evidence | Input |
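Applied to the gateway config, the full recommended chain might look like the fragment below. Only the keys shown in Configuration above are documented on this page; the ordering and any per-policy options beyond those are assumptions:

```yaml
policies:
  chain:
    - prompt-injection
    - pii-detector
    - agent-firewall
    - safety-filter
    - audit-logger
    - quality-scorer
    - human-oversight
  policy:
    prompt-injection:
      action: block
    pii-detector:
      action: redact
    quality-scorer:
      threshold: 0.6
```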
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `HttpRequestException` / `ConnectionError` | Gateway is not running | Start it with `kt gateway run --policy-config policy-config.yaml` |
| `401 Unauthorized` | API key mismatch | Verify the API key matches `secret_key_ref.env` in the gateway config |
| Function calls not governed | Plugin execution is local | The gateway governs the LLM request that triggers function calling, not the function itself |
| C# `HttpClient` not routing correctly | `BaseAddress` missing trailing slash | Ensure the URI ends with `/v1/` — the SDK appends endpoint paths relative to it |
| Events missing in the console | Gateway not connected to control plane | Set `KEEPTRUSTS_API_URL` and `KEEPTRUSTS_GATEWAY_TOKEN` before starting the gateway |
For AI systems
- Canonical C# integration: `OpenAIChatCompletionService` with a custom `HttpClient` whose `BaseAddress` is `http://localhost:41002/v1/` or `https://gateway.keeptrusts.com/v1/` (note the trailing slash).
- Canonical Python integration: `OpenAIChatCompletion` with an `AsyncOpenAI` client whose `base_url` points at the gateway.
- The gateway is transparent — plugins, planners, memory, and function calling work unchanged.
- Use the Policy Controls Catalog for available policies.
For engineers
- In C#, the only change is creating an `HttpClient` with the gateway `BaseAddress` and passing it to the service registration. In Python, pass a custom `AsyncOpenAI` client.
- Plugins, planners, and prompt templates require no modifications.
- Test locally with `kt gateway run`, then switch to a hosted gateway URL for deployment.
For leaders
- Semantic Kernel is commonly used in enterprise .NET applications. Keeptrusts adds governance without requiring changes to the application's plugin architecture.
- Shared policy enforcement applies to all Semantic Kernel applications routing through the gateway.
- Cost attribution provides visibility into per-application and per-feature LLM spend.
Next steps
- Quickstart — set up your first gateway and policy config.
- Policy Controls Catalog — full inventory of available policies.
- Events and Traces — understand the audit trail.
- Agents — register agent identities for per-agent policy scoping.
- Gateway Runtime Features — advanced gateway capabilities.