docs: clarify memory search auth

This commit is contained in:
Peter Steinberger
2026-01-13 00:25:54 +00:00
parent 4f3bedfdb7
commit 1f3ae2346e
2 changed files with 14 additions and 0 deletions


@@ -79,6 +79,12 @@ Defaults:
- Uses remote embeddings (OpenAI) unless configured for local.
- Local mode uses node-llama-cpp and may require `pnpm approve-builds`.
Remote embeddings **require** an OpenAI API key (`OPENAI_API_KEY` or
`models.providers.openai.apiKey`). Codex OAuth only covers chat/completions and
does **not** cover embeddings, so it cannot be used for memory search. If you
don't want to set an API key, use `memorySearch.provider = "local"` or set
`memorySearch.fallback = "none"`.
Config example:
```json5


@@ -231,6 +231,14 @@ Clawdbot also runs a **silent pre-compaction memory flush** to remind the model
to write durable notes before auto-compaction. This only runs when the workspace
is writable (read-only sandboxes skip it). See [Memory](/concepts/memory).
### Why does memory search need an OpenAI API key if I already signed in with Codex?
Vector memory search uses **embeddings**. Codex OAuth only covers
chat/completions and does **not** grant embeddings access, so the upstream
memory indexer needs a real OpenAI API key (`OPENAI_API_KEY` or
`models.providers.openai.apiKey`). If you don't want to set a key, switch to
`memorySearch.provider = "local"` or set `memorySearch.fallback = "none"`.
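For example, a minimal config sketch covering both options (the key names come
from the docs above, but the exact nesting of your Clawdbot config may differ):
```json5
{
  // Option A: keep remote embeddings and supply a real OpenAI key
  // (or export OPENAI_API_KEY in the environment instead).
  models: {
    providers: {
      openai: { apiKey: "sk-..." },
    },
  },

  // Option B: drop the key requirement for memory search.
  memorySearch: {
    provider: "local",   // embed locally via node-llama-cpp
    // fallback: "none", // or disable the remote fallback instead
  },
}
```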
## Where things live on disk
### Where does Clawdbot store its data?