feat: add auth-aware cache defaults

This commit is contained in:
Peter Steinberger
2026-01-21 20:23:30 +00:00
parent e4b3c8b98d
commit 6492e90c1b
6 changed files with 245 additions and 4 deletions

View File

@@ -15,6 +15,17 @@ Session pruning trims **old tool results** from the in-memory context right befo
- For best results, match `ttl` to your model `cacheControlTtl`.
- After a prune, the TTL window resets so subsequent requests keep cache until `ttl` expires again.
## Smart defaults (Anthropic)
- **OAuth or setup-token** profiles: enable `cache-ttl` pruning and set heartbeat to `1h`.
- **API key** profiles: enable `cache-ttl` pruning, set heartbeat to `30m`, and default `cacheControlTtl` to `1h` on Anthropic models.
- If you set any of these values explicitly, Clawdbot does **not** override them.
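
If you want explicit control, set the values yourself; a minimal sketch (key paths mirror those used elsewhere in these docs, the `cache-ttl` mode string is assumed from the pruning mode's name, and the model id is illustrative):

```json5
{
  agents: {
    defaults: {
      // explicit values like these are left untouched by the smart defaults
      contextPruning: { mode: "cache-ttl", ttl: "1h" },
      heartbeat: { every: "45m" },
      models: {
        "anthropic/claude-opus-4-5": {
          params: { cacheControlTtl: "1h" },
        },
      },
    },
  },
}
```
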
## What this improves (cost + cache behavior)
- **Why prune:** Anthropic prompt caching only applies within the TTL. If a session goes idle past the TTL, the next request re-caches the full prompt unless you trim it first.
- **What gets cheaper:** pruning reduces the **cacheWrite** size for that first request after the TTL expires. As an illustrative example, if an idle session holds 100k tokens of history and pruning trims 40k tokens of old tool results, the first post-TTL request cache-writes roughly 60k tokens instead of 100k.
- **Why the TTL reset matters:** once pruning runs, the cache window resets, so follow-up requests can reuse the freshly cached prompt instead of re-caching the full history again.
- **What it does not do:** pruning doesn't add tokens or “double” costs; it only changes what gets cached on that first post-TTL request.
## What can be pruned
- Only `toolResult` messages.
- User + assistant messages are **never** modified.
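
To make the rule concrete, here is a hypothetical context snapshot (message shapes are simplified for illustration, not Clawdbot's literal internal format):

```json5
[
  { role: "user", content: "deploy the staging build" },        // never modified
  { role: "assistant", content: "Deploying via the CI tool." }, // never modified
  { type: "toolResult", content: "…4,000 lines of CI logs…" },  // eligible for pruning
  { role: "user", content: "now tail the health checks" },      // never modified
]
```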

View File

@@ -1600,7 +1600,7 @@ Notes / current limitations:
- After a prune, the TTL window resets so subsequent requests keep cache until `ttl` expires again.
- For best results, match `contextPruning.ttl` to the model `cacheControlTtl` you set in `agents.defaults.models.*.params`.
Default (off):
Default (off, unless Anthropic auth profiles are detected):
```json5
{
  agents: { defaults: { contextPruning: { mode: "off" } } }
}
```

View File

@@ -10,7 +10,7 @@ surface anything that needs attention without spamming you.
## Quick start (beginner)
1. Leave heartbeats enabled (default is `30m`) or set your own cadence.
1. Leave heartbeats enabled (default is `30m`, or `1h` for Anthropic OAuth/setup-token) or set your own cadence.
2. Create a tiny `HEARTBEAT.md` checklist in the agent workspace (optional but recommended).
3. Decide where heartbeat messages should go (`target: "last"` is the default).
4. Optional: enable heartbeat reasoning delivery for transparency.
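
A minimal sketch covering steps 1 and 3 (this assumes `target` sits alongside `every` under `agents.defaults.heartbeat`, per the default noted in step 3):

```json5
{
  agents: {
    defaults: {
      heartbeat: {
        every: "30m",   // step 1: keep the default cadence or set your own; "0m" disables
        target: "last", // step 3: where heartbeat messages go (the default)
      },
    },
  },
}
```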
@@ -33,7 +33,7 @@ Example config:
## Defaults
- Interval: `30m` (set `agents.defaults.heartbeat.every` or per-agent `agents.list[].heartbeat.every`; use `0m` to disable).
- Interval: `30m` (or `1h` when Anthropic OAuth/setup-token is the detected auth mode). Set `agents.defaults.heartbeat.every` or per-agent `agents.list[].heartbeat.every`; use `0m` to disable.
- Prompt body (configurable via `agents.defaults.heartbeat.prompt`):
`Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.`
- The heartbeat prompt is sent **verbatim** as the user message. The system

View File

@@ -65,6 +65,41 @@ These are **USD per 1M tokens** for `input`, `output`, `cacheRead`, and
`cacheWrite`. If pricing is missing, Clawdbot shows tokens only. OAuth tokens
never show dollar cost.
## Cache TTL and pruning impact
Provider prompt caching only applies within the cache TTL window. Clawdbot can
optionally run **cache-ttl pruning**: it prunes the session once the cache TTL
has expired, then resets the cache window so subsequent requests can reuse the
freshly cached context instead of re-caching the full history. This keeps cache
write costs lower when a session goes idle past the TTL.

Configure it in [Gateway configuration](/gateway/configuration) and see the
behavior details in [Session pruning](/concepts/session-pruning).
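
For example, a sketch that matches the pruning TTL to the model's cache TTL (the `cache-ttl` mode string is assumed from the feature name; see [Session pruning](/concepts/session-pruning) for the authoritative keys):

```json5
{
  agents: {
    defaults: {
      // prune once the cache TTL lapses, then let the cache window reset
      contextPruning: { mode: "cache-ttl", ttl: "1h" },
      models: {
        "anthropic/claude-opus-4-5": {
          params: { cacheControlTtl: "1h" }, // keep ttl matched to this value
        },
      },
    },
  },
}
```
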
Heartbeat can keep the cache **warm** across idle gaps. If your model cache TTL
is `1h`, setting the heartbeat interval just under that (e.g., `55m`) can avoid
re-caching the full prompt, reducing cache write costs.

For Anthropic API pricing, cache reads are significantly cheaper than input
tokens, while cache writes are billed at a higher multiplier. See [Anthropic's
prompt caching pricing](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
for the latest rates and TTL multipliers.
### Example: keep 1h cache warm with heartbeat
```yaml
agents:
  defaults:
    model:
      primary: "anthropic/claude-opus-4-5"
    models:
      "anthropic/claude-opus-4-5":
        params:
          cacheControlTtl: "1h"
    heartbeat:
      every: "55m"
```
## Tips for reducing token pressure
- Use `/compact` to summarize long sessions.