docs: add local models guide
@@ -1813,48 +1813,7 @@ Notes:
### Local models (LM Studio) — recommended setup
Best current local setup (what we’re running): **MiniMax M2.1** on a powerful local machine
via **LM Studio** using the **Responses API**.
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: {
        "anthropic/claude-opus-4-5": { alias: "Opus" },
        "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" }
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
Notes:
- LM Studio must have the model loaded and the local server enabled (default URL above).
- Responses API enables clean reasoning/output separation; WhatsApp sees only final text.
- Adjust `contextWindow`/`maxTokens` if your LM Studio context length differs.
See [/gateway/local-models](/gateway/local-models) for the full local-model guide. TL;DR: run MiniMax M2.1 via the LM Studio Responses API on serious hardware; keep hosted models merged as a fallback.
### MiniMax M2.1
docs/gateway/local-models.md (new file, 94 lines)
@@ -0,0 +1,94 @@
---
summary: "Run Clawdbot on local LLMs (LM Studio, vLLM, LiteLLM, custom OpenAI endpoints)"
read_when:
- You want to serve models from your own GPU box
- You are wiring LM Studio or an OpenAI-compatible proxy
- You need the safest local model guidance
---
# Local models
Local is doable, but Clawdbot expects a large context window plus strong defenses against prompt injection. Small cards truncate context and weaken those defenses. Aim high: **≥2 maxed-out Mac Studios or an equivalent GPU rig (~$30k+)**. A single **24 GB** GPU works only for lighter prompts, with higher latency.
## Recommended: LM Studio + MiniMax M2.1 (Responses API)
This is the best current local stack. Load MiniMax M2.1 in LM Studio, enable the local server (default `http://127.0.0.1:1234`), and use the Responses API to keep reasoning separate from the final text.
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: {
        "anthropic/claude-opus-4-5": { alias: "Opus" },
        "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" }
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
**Setup checklist**
- Install LM Studio: https://lmstudio.ai
- In LM Studio, download MiniMax M2.1, start the server, and confirm `http://127.0.0.1:1234/v1/models` lists it (quick check below).
- Keep the model loaded; cold-load adds startup latency.
- Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
- For WhatsApp, stick to the Responses API so only the final text is sent.
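
A quick way to run that check, assuming LM Studio's default server address and the model id from the config above:

```bash
# List the models LM Studio is currently serving (default local server address).
curl -s http://127.0.0.1:1234/v1/models
# Expect an OpenAI-style model list containing the configured id,
# e.g. "minimax-m2.1-gs32". An empty list usually means the model
# is not loaded in LM Studio yet.
```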
## Other OpenAI-compatible local proxies
vLLM, LiteLLM, OAI-proxy, or custom gateways work if they expose an OpenAI-style `/v1` endpoint. Replace the provider block above with your endpoint and model ID:
```json5
{
  models: {
    mode: "merge",
    providers: {
      local: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "sk-local",
        api: "openai-responses",
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 120000,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
Keep `models.mode: "merge"` so hosted models stay available as fallbacks.
## Troubleshooting
- Can the gateway reach the proxy? `curl http://127.0.0.1:1234/v1/models` (fuller check below).
- LM Studio model unloaded? Reload; cold start is a common “hanging” cause.
- Context errors? Lower `contextWindow` or raise your server limit.
- Safety: local models skip provider-side filters; keep agents narrow and compaction on to limit prompt injection blast radius.
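
If the model list looks right but replies still stall, here is a minimal end-to-end sketch of a generation request, assuming the proxy implements the OpenAI-style Responses API configured above (swap the host, key, and model id for other setups):

```bash
# Minimal generation round-trip via the Responses API.
# Host, placeholder API key, and model id match the LM Studio example above;
# adjust them for vLLM/LiteLLM or other proxies.
curl -s http://127.0.0.1:1234/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer lmstudio" \
  -d '{
    "model": "minimax-m2.1-gs32",
    "input": "Reply with one short sentence."
  }'
# A JSON body with generated text means the local stack is healthy;
# a long hang here usually points at a cold model load in LM Studio.
```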
@@ -116,6 +116,10 @@ Not currently. Clawdbot doesn’t ship a Bedrock provider today. If you must use
Clawdbot supports **OpenAI Code (Codex)** via OAuth or by reusing your Codex CLI login (`~/.codex/auth.json`). The wizard can import the CLI login or run the OAuth flow and will set the default model to `openai-codex/gpt-5.2` when appropriate. See [Model providers](/concepts/model-providers) and [Wizard](/start/wizard).
### Is a local model OK for casual chats?
Usually no. Clawdbot needs large context + strong safety; small cards truncate. See [/gateway/local-models](/gateway/local-models) for hardware expectations and the LM Studio MiniMax M2.1 setup.
### Can I use Bun?
Bun is supported for faster TypeScript execution, but **WhatsApp requires Node** in this ecosystem. The wizard lets you pick the runtime; choose **Node** if you use WhatsApp.