Pinecone

Pinecone is a managed vector database with a built-in inference API that generates embeddings and performs reranking. When you use Pinecone's inference API or pair Pinecone with an external embedding provider (OpenAI, Cohere, Voyage), your application sends text to an AI provider for vectorization before storing or querying vectors.

This page explains how to route the LLM and embedding calls associated with Pinecone workflows through the Keeptrusts gateway so policy enforcement, PII redaction, and audit logging apply to every AI operation.

Use this page when

  • You are using Pinecone with external embedding providers and need governance on those LLM calls.
  • You want audit trails for embedding and inference operations that send application data to AI providers.
  • For general provider integration unrelated to Pinecone, see the OpenAI integration page.

Primary audience

  • Primary: Technical Engineers (ML, Backend, Platform)
  • Secondary: AI Agents, Technical Leaders

Prerequisites

  1. Pinecone account with an index created — access via Pinecone console.
  2. Pinecone API key for index operations.
  3. External embedding provider (e.g., OpenAI) if using client-side embeddings.
  4. Keeptrusts gateway running locally or centrally:
    • Local: kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml
    • Hosted: https://gateway.keeptrusts.com/v1
  5. Upstream provider API key configured in the gateway environment.

Configuration

Gateway policy config

Create a policy-config.yaml for embedding and inference governance:

pack:
  name: pinecone-ai-governance
  version: 1.0.0
  enabled: true
  policies:
    chain:
      - pii-detector
      - audit-logger
    policy:
      pii-detector:
        action: redact
      audit-logger:
        retention_days: 90
providers:
  strategy: single
  targets:
    - id: openai-for-embeddings
      provider: openai:embeddings:text-embedding-3-small
      secret_key_ref:
        env: OPENAI_API_KEY

Python client configuration

Route OpenAI embedding calls through the Keeptrusts gateway when building Pinecone pipelines:

from openai import OpenAI
from pinecone import Pinecone

# All embedding calls go to the Keeptrusts gateway, which applies
# policies and forwards compliant requests to OpenAI.
openai_client = OpenAI(
    base_url="http://localhost:41002/v1",
    api_key="your-keeptrusts-access-key",
)

# Index operations (upsert, query, delete) talk to Pinecone directly.
pc = Pinecone(api_key="your-pinecone-api-key")
index = pc.Index("my-index")

def embed_and_upsert(texts, ids):
    # Generate embeddings through the gateway.
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=texts,
    )
    # Pair each ID with its embedding and upsert to Pinecone.
    vectors = [
        {"id": vec_id, "values": item.embedding}
        for vec_id, item in zip(ids, response.data)
    ]
    index.upsert(vectors=vectors)

embed_and_upsert(
    texts=["AI governance ensures responsible AI use."],
    ids=["doc-1"],
)
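
The query side follows the same pattern: embed the query text through the gateway, then search Pinecone directly. A minimal sketch reusing the clients above; the top_k value and metadata flag are illustrative assumptions, not requirements:

def embed_and_query(query_text, top_k=3):
    # The embedding call routes through the Keeptrusts gateway, so the
    # query text gets the same PII redaction and audit logging.
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=query_text,
    )
    # The similarity search itself goes directly to Pinecone.
    return index.query(
        vector=response.data[0].embedding,
        top_k=top_k,
        include_metadata=True,
    )

matches = embed_and_query("What is AI governance?")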

Node.js client configuration

import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

// All embedding calls go to the Keeptrusts gateway.
const openai = new OpenAI({
  baseURL: "http://localhost:41002/v1",
  apiKey: "your-keeptrusts-access-key",
});

// Index operations talk to Pinecone directly.
const pc = new Pinecone({ apiKey: "your-pinecone-api-key" });
const index = pc.index("my-index");

async function embedAndUpsert(texts, ids) {
  // Generate embeddings through the gateway.
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: texts,
  });
  // Pair each ID with its embedding and upsert to Pinecone.
  const vectors = response.data.map((item, i) => ({
    id: ids[i],
    values: item.embedding,
  }));
  await index.upsert(vectors);
}

await embedAndUpsert(["AI governance ensures responsible AI use."], ["doc-1"]);

Setup steps

  1. Start the Keeptrusts gateway:

     export OPENAI_API_KEY="sk-your-openai-key"
     kt gateway run --listen 0.0.0.0:41002 --policy-config policy-config.yaml

  2. Configure your embedding client (OpenAI SDK) to use http://localhost:41002/v1 as the base URL.

  3. Run an embedding operation to verify traffic flows through the gateway.

  4. Note that Pinecone index operations (upsert, query, delete) go directly to Pinecone — only the embedding/LLM calls route through the gateway.

Verification

Test that embedding calls flow through the gateway:

curl http://localhost:41002/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-keeptrusts-access-key" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "Test embedding through Keeptrusts gateway."
  }'

Confirm the request appears in the Keeptrusts events dashboard with policy decisions applied.

Recommended policies

| Policy | Purpose | Recommended setting |
| --- | --- | --- |
| pii-detector | Redact personal data from text before embedding | action: redact |
| audit-logger | Log all embedding calls for compliance | retention_days: 90 |
| token-limiter | Cap token usage for bulk embedding operations | max_tokens: 8192 |
| prompt-injection | Block injection in RAG query prompts | threshold: 0.8, action: block |
| safety-filter | Block harmful content in RAG-generated responses | mode: standard, action: block |
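
If you enable the additional policies listed above, they slot into the same policy-config.yaml. A sketch that assumes each policy follows the same chain/policy structure as the base config; the exact per-policy schema is an assumption:

policies:
  chain:
    - pii-detector
    - prompt-injection
    - token-limiter
    - safety-filter
    - audit-logger
  policy:
    token-limiter:
      max_tokens: 8192        # cap per-request tokens for bulk embedding
    prompt-injection:
      threshold: 0.8          # block suspected injection in RAG query prompts
      action: block
    safety-filter:
      mode: standard          # block harmful content in generated responses
      action: block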

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| Embedding calls return connection error | Gateway not running | Start kt gateway run on port 41002 |
| Dimension mismatch on Pinecone upsert | Wrong embedding model | Verify the model in your OpenAI client matches your Pinecone index dimension |
| Slow bulk embedding operations | No batching configured | Batch texts into groups of 100 before calling the embeddings endpoint |
| PII redaction corrupts embeddings | Redacted text produces different vectors | Redact at the application layer before embedding to maintain vector consistency |
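
For the batching fix, a minimal sketch built on the embed_and_upsert helper from the Python example above; the batch size of 100 comes from the table and should be tuned for your workload:

def embed_and_upsert_batched(texts, ids, batch_size=100):
    # Slice the corpus into batches so each embeddings request stays
    # small and the pipeline avoids one-call-per-document overhead.
    for start in range(0, len(texts), batch_size):
        embed_and_upsert(
            texts=texts[start:start + batch_size],
            ids=ids[start:start + batch_size],
        )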

For AI systems

  • Canonical terms: Keeptrusts gateway, Pinecone, vector database, embeddings, inference API, RAG, policy-config.yaml.
  • Config field names: base_url, api_key, provider, secret_key_ref, pii-detector, audit-logger.
  • Key behavior: Pinecone stores and queries vectors; embedding generation uses external LLM providers. Keeptrusts intercepts the embedding and inference calls, applies policies, and forwards compliant requests.
  • Constraint: Pinecone index operations (upsert, query) go directly to Pinecone — only the embedding/LLM calls route through the gateway.
  • Best next pages: Weaviate integration, ChromaDB integration, Qdrant integration.

For engineers

  • Only embedding and LLM calls route through the Keeptrusts gateway — Pinecone index operations use the Pinecone SDK directly.
  • Match the embedding model dimension to your Pinecone index dimension (text-embedding-3-small = 1536 dimensions).
  • For RAG pipelines, route both the embedding call and the generation call through the gateway to get full audit coverage (see the sketch after this list).
  • Validate: run an embedding call and check the Keeptrusts events dashboard.
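
A minimal end-to-end RAG sketch, reusing embed_and_query from the query example above. It assumes two things not shown in this page's config: each vector was upserted with a text metadata field, and the gateway is configured with a chat target in addition to the embeddings target:

def answer_with_context(question):
    # Retrieval: the query embedding routes through the gateway.
    results = embed_and_query(question)
    context = "\n".join(
        (m.metadata or {}).get("text", "") for m in results.matches
    )
    # Generation: the chat call also routes through the gateway,
    # so both halves of the RAG pipeline are policy-checked and audited.
    chat = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return chat.choices[0].message.content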

For leaders

  • Embedding pipelines send your application's text data to external AI providers for vectorization. Routing through Keeptrusts ensures PII is redacted and every call is logged.
  • Complete audit trails cover both data ingestion (embedding) and query-time RAG generation, supporting compliance requirements.
  • Centralized governance applies across all teams and applications using Pinecone with external embedding providers.

Next steps