Emacs with the Gateway
Emacs has a rich ecosystem of AI assistant packages that support custom OpenAI-compatible endpoints. You can route any of these through the Keeptrusts gateway for policy enforcement, audit logging, and cost control.
Use this page when
- You are working through Emacs with the Gateway as an implementation or operating workflow in Keeptrusts.
- You need the practical steps, expected outcomes, and related validation guidance in one place.
- If you need exact field-by-field reference instead of a workflow page, use the linked reference pages in Next steps.
Primary audience
- Primary: Technical Engineers
- Secondary: AI Agents, Technical Leaders
Prerequisites
- Keeptrusts gateway running locally (`kt gateway run`)
- Emacs 28+ with `use-package` or `straight.el`
- A Keeptrusts access key or provider API key
Supported Packages
| Package | Features | Gateway Compatible |
|---|---|---|
| gptel | Chat, inline completion, multi-backend | Yes — native custom endpoint |
| ellama | Chat, summarize, translate, code | Yes — uses llama.cpp/OpenAI endpoints |
| org-ai | Org-mode AI blocks, image gen | Yes — custom API URL |
| copilot.el | GitHub Copilot completions | Partial — proxy-based |
gptel Configuration
gptel is the most popular Emacs AI package and supports custom OpenAI-compatible endpoints natively.
Add to your `init.el` or Emacs config:

```elisp
(use-package gptel
  :config
  ;; Resolve the key at request time from the environment
  (setq gptel-api-key
        (lambda () (getenv "KEEPTRUSTS_ACCESS_KEY")))
  ;; Route through the Keeptrusts gateway
  (setq gptel-backend
        (gptel-make-openai "keeptrusts"
          :host "localhost:41002"
          :protocol "http"
          :key (lambda () (getenv "KEEPTRUSTS_ACCESS_KEY"))
          :models '(gpt-4o gpt-4o-mini claude-sonnet-4-20250514)
          :stream t))
  ;; Set the default model (a symbol, not a string, in gptel 0.9+)
  (setq gptel-model 'gpt-4o))
```
Set your access key as an environment variable before launching Emacs:
```sh
export KEEPTRUSTS_ACCESS_KEY="your-access-key"
emacs &
```
Use `M-x gptel-send` to send a prompt, or `C-c RET` in a gptel buffer.
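Beyond the interactive commands, gptel also exposes a programmatic entry point you can script against. A minimal sketch using `gptel-request` (the callback signature shown reflects gptel 0.9+; check it against your installed version):

```elisp
;; Send a one-off prompt through the configured Keeptrusts backend
;; and log the response. The callback receives the response text,
;; or nil on error, plus an info plist with status details.
(gptel-request
 "Summarize the last compiler error in one sentence."
 :callback (lambda (response info)
             (if response
                 (message "LLM: %s" response)
               (message "gptel error: %s" (plist-get info :status)))))
```

Because the request goes through the same backend object, it is subject to the same gateway policies and shows up in the same audit log as interactive use.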
ellama Configuration
ellama works with OpenAI-compatible endpoints through its provider system:
```elisp
(use-package ellama
  :init
  (require 'llm-openai)
  (setopt ellama-provider
          (make-llm-openai-compatible
           :key (getenv "KEEPTRUSTS_ACCESS_KEY")
           :chat-model "gpt-4o"
           :url "http://localhost:41002/v1")))
```
Use `M-x ellama-chat` for interactive chat, or `M-x ellama-code-complete` for code completions.
org-ai Configuration
org-ai integrates with Org-mode and supports custom API endpoints:
```elisp
(use-package org-ai
  :config
  (setq org-ai-openai-api-token
        (getenv "KEEPTRUSTS_ACCESS_KEY"))
  (setq org-ai-openai-api-base
        "http://localhost:41002/v1"))
```
Use `#+begin_ai ... #+end_ai` blocks in Org files to interact with the LLM through the gateway.
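For example, an Org buffer might contain a block like the following (the prompt text is illustrative):

```org
#+begin_ai
[SYS]: You are a concise code reviewer.

[ME]: Explain what a dynamic-extent binding is in Emacs Lisp.
#+end_ai
```

org-ai uses the `[SYS]:` and `[ME]:` markers to build the system and user messages; the model's reply is inserted into the block, so the full exchange lives in the Org file alongside your notes.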
copilot.el (GitHub Copilot)
copilot.el communicates with GitHub's Copilot servers using their proprietary auth flow. Direct endpoint redirection is not supported, but you can use a proxy:
```elisp
(use-package copilot
  :config
  ;; Route through the gateway via proxy
  (setenv "HTTP_PROXY" "http://localhost:41002")
  (setenv "HTTPS_PROXY" "http://localhost:41002"))
```
Note: Proxy-based interception has limitations. See VS Code: GitHub Copilot Through the Gateway for details on what traffic can be intercepted.
Multiple Backends with gptel
gptel supports switching between backends. Configure multiple providers through the gateway:
```elisp
;; OpenAI models through the gateway
(setq gptel-backend
      (gptel-make-openai "keeptrusts-openai"
        :host "localhost:41002"
        :protocol "http"
        :key (lambda () (getenv "KEEPTRUSTS_ACCESS_KEY"))
        :models '(gpt-4o gpt-4o-mini)
        :stream t))

;; Anthropic models through the same gateway
(gptel-make-openai "keeptrusts-anthropic"
  :host "localhost:41002"
  :protocol "http"
  :key (lambda () (getenv "KEEPTRUSTS_ACCESS_KEY"))
  :models '(claude-sonnet-4-20250514 claude-haiku)
  :stream t)
```
Switch backends with `C-c C-b` in a gptel buffer.
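Since `gptel-make-openai` returns the backend object it registers, you can also keep a reference and switch without the menu. A sketch, where `my/kt-anthropic` and `my/use-anthropic` are hypothetical names, not part of gptel:

```elisp
;; Capture the backend object at registration time...
(defvar my/kt-anthropic
  (gptel-make-openai "keeptrusts-anthropic"
    :host "localhost:41002"
    :protocol "http"
    :key (lambda () (getenv "KEEPTRUSTS_ACCESS_KEY"))
    :models '(claude-sonnet-4-20250514)
    :stream t))

;; ...then switch the default backend and model in one command
(defun my/use-anthropic ()
  "Point gptel at the Anthropic backend through the Keeptrusts gateway."
  (interactive)
  (setq gptel-backend my/kt-anthropic
        gptel-model 'claude-sonnet-4-20250514))
```

This is handy if you want a keybinding per provider instead of cycling through the `C-c C-b` prompt.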
Verify Traffic
Open a terminal alongside Emacs and run:
```sh
kt events tail
```
Send a prompt from any of the configured packages. Events appear in real time:
```
2024-01-15T10:32:15Z ALLOW gpt-4o tokens=280 latency=0.9s
```
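If no events appear, you can exercise the gateway directly from a shell, independent of Emacs. A hedged sketch, assuming the gateway exposes the standard OpenAI-compatible `/v1/chat/completions` route that the package configurations on this page imply:

```sh
# Smoke test: send one chat completion through the gateway and watch
# for the corresponding ALLOW event in `kt events tail`.
curl -s http://localhost:41002/v1/chat/completions \
  -H "Authorization: Bearer $KEEPTRUSTS_ACCESS_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```

If this produces an event but Emacs traffic does not, the problem is in the Emacs configuration rather than the gateway.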
Storing Keys Securely
Avoid hardcoding API keys in your Emacs config. Use one of these approaches:
- Environment variable — set `KEEPTRUSTS_ACCESS_KEY` in your shell profile.
- auth-source — store credentials in `~/.authinfo.gpg`:

  ```
  machine localhost:41002 login apikey password your-access-key
  ```

- pass — use the `pass` password manager:

  ```elisp
  (setq gptel-api-key
        (lambda ()
          (string-trim
           (shell-command-to-string "pass show keeptrusts/access-key"))))
  ```
Troubleshooting
| Symptom | Solution |
|---|---|
| Connection refused | Verify the gateway is running on port 41002 |
| 401 Unauthorized | Check the access key value and environment variable |
| Streaming not working | Ensure `:stream t` is set and the gateway supports streaming |
| Slow responses | Check policy chain complexity; enable the engineering cache |
| Wrong model used | Verify `gptel-model` matches a model in your policy config |
For AI systems
- Canonical terms: Keeptrusts, Emacs with the Gateway, ide-integration.
- Exact feature, config, command, or page names: Emacs with the Gateway.
- Use the linked audience and reference pages in Next steps when you need deeper source material.
For engineers
- Use the commands, configuration examples, API payloads, or UI steps in this page as the working baseline for Emacs with the Gateway.
- Validate the result with the expected outcomes, troubleshooting notes, or linked workflow pages in this page and Next steps.
For leaders
- This page matters when planning rollout, governance, support ownership, or operating decisions for Emacs with the Gateway.
- Use the linked audience, architecture, and workflow pages in Next steps to connect this detail to broader implementation choices.
Next steps
- Access Keys and Authentication — create and manage access keys
- Recommended Policies for IDE Traffic — optimize policies for code completions
- Monitoring IDE AI Usage — track usage and cost across your team