Governed Function Calling & Tool Use
When LLMs call functions or use tools, the Keeptrusts gateway inspects every tool invocation in the completion response. This lets you block dangerous tools, enforce budget limits on agent loops, and audit every tool call — without modifying your application code.
Use this page when
- You are using LLM function calling or tool use and need governance on tool invocations.
- You want to configure allow-lists or block-lists for specific tools at the gateway level.
- You need to set budget limits on agent loops that make repeated tool calls.
- You are implementing agent firewall policies to block dangerous tool arguments (SQL injection, etc.).
Primary audience
- Primary: AI Engineers building agentic applications with tool use
- Secondary: Security Engineers implementing tool restrictions, Platform Engineers configuring agent firewalls
How It Works
The gateway intercepts the standard OpenAI function-calling flow:
- Your application sends a request with tools definitions.
- The model responds with tool_calls in the assistant message.
- The gateway evaluates tool-level policies on the response before returning it.
- If a policy blocks a tool call, the gateway returns a 409 instead of the tool call.
Your application never sees blocked tool calls — the gateway prevents them from reaching your execution layer.
Basic Function Calling Through the Gateway
from openai import OpenAI

# Point the OpenAI client at the Keeptrusts gateway instead of the API directly.
client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="sk-...",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "execute_sql",
            "description": "Execute a SQL query against the database",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                },
                "required": ["query"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(f"Tool: {call.function.name}")
        print(f"Args: {call.function.arguments}")
Tool Validation Policies
Block Dangerous Tools
policies:
  - name: block-sql-execution
    type: tool_filter
    action: block
    blocked_tools:
      - "execute_sql"
      - "run_shell_command"
      - "delete_record"
    message: "Tool call blocked: restricted operation"
If the model tries to call execute_sql, the gateway returns a 409 before your application can execute it.
Allow-List Pattern
Restrict agents to an explicit set of approved tools:
policies:
  - name: allow-only-safe-tools
    type: tool_filter
    action: allow
    allowed_tools:
      - "get_weather"
      - "lookup_order"
      - "search_knowledge_base"
    message: "Tool call blocked: tool not in approved list"
Any tool call not in allowed_tools is blocked. This is the recommended pattern for production agents.
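As a complement, you can filter the tools you offer the model against the same approved set client-side, so the model never sees tools the gateway would block anyway. A small sketch; the APPROVED_TOOLS set simply mirrors the policy above:

# Mirror of the gateway allow-list; the gateway remains the enforcement point.
APPROVED_TOOLS = {"get_weather", "lookup_order", "search_knowledge_base"}

safe_tools = [t for t in tools if t["function"]["name"] in APPROVED_TOOLS]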
Argument Inspection
Block tool calls based on argument patterns:
policies:
  - name: block-sql-injection-in-tools
    type: tool_argument_filter
    action: block
    tool_name: "search_database"
    argument: "query"
    pattern: "(DROP|DELETE|TRUNCATE|ALTER)\\s"
    message: "Tool argument blocked: contains destructive SQL"
Budget Limits for Agent Loops
Agents can enter loops where they repeatedly call tools, burning tokens and cost. Set budget policies to cap agent behavior:
policies:
  - name: limit-tool-calls-per-request
    type: budget
    action: block
    max_tool_calls: 10
    message: "Agent exceeded maximum tool calls per request"

  - name: limit-tokens-per-request
    type: budget
    action: block
    max_tokens: 50000
    message: "Request exceeded token budget"
Per-Session Budget
policies:
  - name: session-cost-cap
    type: budget
    action: block
    max_cost_per_session_usd: 5.00
    message: "Session cost limit exceeded"
Agent Firewall Configuration
The agent firewall is a comprehensive policy set for multi-step agent workflows:
policies:
  # Block known-dangerous tools
  - name: agent-tool-blocklist
    type: tool_filter
    action: block
    blocked_tools:
      - "execute_code"
      - "send_email"
      - "modify_user"
      - "delete_file"
    message: "Blocked: tool not permitted for this agent"

  # Cap agent loop depth
  - name: agent-loop-limit
    type: budget
    action: block
    max_tool_calls: 15
    message: "Agent loop limit reached"

  # Block prompt injection in tool arguments
  - name: tool-arg-injection-guard
    type: tool_argument_filter
    action: block
    pattern: "(ignore previous|system prompt|you are now)"
    message: "Blocked: suspected prompt injection in tool argument"

  # Log every tool call for audit
  - name: log-tool-calls
    type: observe
    action: log
    match: tool_call
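Behind this firewall, a multi-step agent loop might look like the sketch below. The TOOL_HANDLERS dispatch table and local step bound are illustrative; the gateway's policies remain the enforcement point:

import json
from openai import APIStatusError

# Hypothetical local dispatch table for approved tools.
TOOL_HANDLERS = {
    "get_weather": lambda args: {"temperature": 18, "units": "celsius"},
}

def run_agent(messages, tools, max_steps=15):
    # max_steps mirrors the agent-loop-limit policy; the gateway enforces the hard cap.
    for _ in range(max_steps):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
                tools=tools,
            )
        except APIStatusError as e:
            if e.status_code == 409:
                return "Stopped: tool call blocked by gateway policy"
            raise
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # final answer, no more tools requested
        messages.append(message)
        for call in message.tool_calls:
            handler = TOOL_HANDLERS.get(call.function.name)
            result = (
                handler(json.loads(call.function.arguments))
                if handler
                else {"error": "no local handler"}
            )
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
    return "Stopped: local step limit reached"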
Handling Blocked Tool Calls
When a tool policy blocks a call, your application receives a 409:
from openai import APIStatusError

def execute_with_tools(messages, tools):
    try:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        return response
    except APIStatusError as e:
        if e.status_code == 409:
            error_body = e.response.json()
            blocked_policy = error_body.get("error", {}).get("policy", "unknown")
            print(f"Tool call blocked by policy: {blocked_policy}")
            # Fall back to a response without tool use
            return client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
            )
        raise
TypeScript Error Handling
import OpenAI from "openai";

// The client must point at the gateway, as in the Python example above.
const client = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "sk-...",
});

async function executeWithTools(
  messages: OpenAI.ChatCompletionMessageParam[],
  tools: OpenAI.ChatCompletionTool[]
): Promise<OpenAI.ChatCompletion> {
  try {
    return await client.chat.completions.create({
      model: "gpt-4o",
      messages,
      tools,
      tool_choice: "auto",
    });
  } catch (err) {
    if (err instanceof OpenAI.APIError && err.status === 409) {
      console.warn("Tool call blocked by governance policy");
      return client.chat.completions.create({ model: "gpt-4o", messages });
    }
    throw err;
  }
}
Inspecting Tool Call Events
Every tool call produces a decision event with tool-specific metadata:
curl http://localhost:8080/v1/events?limit=5 \
-H "Authorization: Bearer $KEEPTRUSTS_API_TOKEN"
{
  "events": [
    {
      "id": "evt_01J...",
      "tool_calls": [
        {
          "name": "get_weather",
          "arguments": "{\"city\": \"London\"}",
          "policy_result": "allowed"
        }
      ],
      "policy_decisions": [
        {"policy": "allow-only-safe-tools", "action": "allow", "matched": true}
      ]
    }
  ]
}
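To pull the same data programmatically, here is a minimal sketch using the requests library against the events endpoint above; the payload shape follows the example response and may vary by gateway version:

import os
import requests

resp = requests.get(
    "http://localhost:8080/v1/events",
    params={"limit": 50},
    headers={"Authorization": f"Bearer {os.environ['KEEPTRUSTS_API_TOKEN']}"},
)
resp.raise_for_status()

# Surface any tool call that was not allowed.
for event in resp.json().get("events", []):
    for call in event.get("tool_calls", []):
        if call.get("policy_result") != "allowed":
            print(f"{event['id']}: {call['name']} -> {call['policy_result']}")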
Best Practices
- Use allow-lists, not block-lists — explicitly approve tools rather than trying to block everything dangerous.
- Set budget limits on all agent workflows — prevent runaway loops from burning tokens and cost.
- Inspect tool arguments — block SQL injection and prompt injection in tool parameters.
- Log every tool call — even allowed calls should produce audit events.
- Fall back gracefully on 409 — retry without tools or return a safe default response.
- Test agent firewall policies with observe-only first — log what would be blocked before enforcing.
Next steps
- Streaming Patterns — streaming with tool calls
- Error Handling — full error envelope reference
- Developer Quick Start — if you haven't set up the gateway yet
For AI systems
- Canonical terms: function calling, tool use, agent firewall, tool validation, budget limits, tool_calls, tools, allow-list, block-list, tool_blocked (409).
- Gateway intercepts tool_calls in assistant responses before returning them to the application. Blocked tool calls return 409.
- Policy types: tool_filter (action: allow or block), tool_argument_filter, budget, observe. Config in policy-config.yaml under policies.
- Best next pages: Streaming Patterns, Error Handling, Developer Quick Start.
For engineers
- The gateway evaluates tool-level policies on the response before returning tool_calls to your application.
- Use allow-lists (tool_filter with action: allow) rather than block-lists: explicitly approve safe tools.
- Set budget policies (type: budget) to prevent runaway agent loops from burning tokens and cost.
- Inspect tool arguments with tool_argument_filter policies to block injection attacks in tool parameters.
- Fall back gracefully on a 409 tool_blocked response: retry without tools or return a safe default response.
- Test agent firewall policies with an observe/log action first to record what would be blocked before enforcing.
For leaders
- Agent tool use is a high-risk surface — unrestricted tools can execute SQL, send emails, or access filesystems.
- Gateway-level tool governance prevents dangerous tool execution without requiring application-level guardrails.
- Budget limits on agent loops provide a hard cost cap on runaway LLM tool-calling chains.
- All tool calls (allowed and blocked) produce audit events for compliance visibility into agent behavior.
- Agent firewall policies are a key control for organizations deploying autonomous AI agents at scale.