From edfc71a47e7b96d440fdc722354fb5bd4e3eb267 Mon Sep 17 00:00:00 2001
From: Peter Steinberger
Date: Tue, 6 Jan 2026 23:48:25 +0100
Subject: [PATCH] docs: update model guidance

---
 docs/models.md   | 4 ++++
 docs/security.md | 1 +
 2 files changed, 5 insertions(+)

diff --git a/docs/models.md b/docs/models.md
index 0b8d6206a..6739bdd4e 100644
--- a/docs/models.md
+++ b/docs/models.md
@@ -12,6 +12,10 @@ See [`docs/model-failover.md`](https://docs.clawd.bot/model-failover) for how au
 
 Goal: give clear model visibility + control (configured vs available), plus scan tooling that prefers tool-call + image-capable models and maintains ordered fallbacks.
 
+## Model recommendations
+
+Through testing, we’ve found Anthropic Opus 4.5 is the most useful general-purpose model for anything coding-related. We suggest GPT 5.2 Codex as another strong option. For personal assistant work, nothing comes close to Opus. If you’re going all-in on Claude, we recommend the Max $200 subscription: https://claude.com/pricing
+
 ## Command tree (draft)
 
 - `clawdbot models list`
diff --git a/docs/security.md b/docs/security.md
index d0883237f..d8bfdee87 100644
--- a/docs/security.md
+++ b/docs/security.md
@@ -75,6 +75,7 @@ Even with strong system prompts, **prompt injection is not solved**. What helps
 - Prefer mention gating in groups; avoid “always-on” bots in public rooms.
 - Treat links and pasted instructions as hostile by default.
 - Run sensitive tool execution in a sandbox; keep secrets out of the agent’s reachable filesystem.
+- **Model choice matters:** we recommend Anthropic Opus 4.5 because it’s quite good at recognizing prompt injections (see [“A step forward on safety”](https://www.anthropic.com/news/claude-opus-4-5)). Using weaker models increases risk.
 
 ## Lessons Learned (The Hard Way)