diff --git a/docs/gateway/security.md b/docs/gateway/security.md
index 564b248fe..3b8f9f036 100644
--- a/docs/gateway/security.md
+++ b/docs/gateway/security.md
@@ -199,6 +199,7 @@ Even with strong system prompts, **prompt injection is not solved**. What helps
 - Prefer mention gating in groups; avoid “always-on” bots in public rooms.
 - Treat links, attachments, and pasted instructions as hostile by default.
 - Run sensitive tool execution in a sandbox; keep secrets out of the agent’s reachable filesystem.
+- Note: sandboxing is opt-in; if sandbox mode is off, `exec` runs on the gateway host even though `tools.exec.host` defaults to `sandbox`.
 - Limit high-risk tools (`exec`, `browser`, `web_fetch`, `web_search`) to trusted agents or explicit allowlists.
 - **Model choice matters:** older/legacy models can be less robust against prompt injection and tool misuse. Prefer modern, instruction-hardened models for any bot with tools. We recommend Anthropic Opus 4.5 because it’s quite good at recognizing prompt injections (see [“A step forward on safety”](https://www.anthropic.com/news/claude-opus-4-5)).
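
To make the new bullet concrete, here is a minimal sketch of what an explicit sandbox opt-in could look like. Only `tools.exec.host` and its `sandbox` default come from the docs above; the file layout and the `sandbox.enabled` key are assumptions for illustration, so defer to the gateway's configuration reference for the actual schema.

```json5
// Hypothetical gateway config sketch: key names other than tools.exec.host are assumptions.
{
  sandbox: {
    // Assumed opt-in switch. Until something like this is enabled,
    // exec runs directly on the gateway host.
    enabled: true,
  },
  tools: {
    exec: {
      // Documented default; only takes effect once sandbox mode is actually on.
      host: "sandbox",
    },
  },
}
```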