docs: add local models guide
### Local models (LM Studio) — recommended setup
Best current local setup (what we’re running): **MiniMax M2.1** on a powerful local machine via **LM Studio** using the **Responses API**.

```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: {
        "anthropic/claude-opus-4-5": { alias: "Opus" },
        "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" }
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```

Notes:

- LM Studio must have the model loaded and its local server enabled (default URL above).
- The Responses API enables clean reasoning/output separation; WhatsApp sees only the final text.
- Adjust `contextWindow`/`maxTokens` if your LM Studio context length differs.
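
To confirm LM Studio is actually serving the model, you can query its OpenAI-compatible `/v1/models` endpoint directly. A minimal sketch (Node 18+; assumes the default `baseUrl` from the config above, and `checkLmStudio` is just an illustrative name):

```typescript
// Sanity check: confirm LM Studio's local server is reachable and the model id
// from the config above is actually loaded. Uses LM Studio's OpenAI-compatible
// GET /v1/models endpoint. Node 18+ (built-in fetch).
const BASE_URL = "http://127.0.0.1:1234/v1"; // same baseUrl as the config above

async function checkLmStudio(expectedId: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/models`, {
    headers: { Authorization: "Bearer lmstudio" }, // local server accepts any key
  });
  if (!res.ok) throw new Error(`LM Studio not reachable: HTTP ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  const ids = body.data.map((m) => m.id);
  console.log("Models served by LM Studio:", ids);
  if (!ids.includes(expectedId)) {
    console.warn(`"${expectedId}" is not loaded; load it in LM Studio first.`);
  }
}

checkLmStudio("minimax-m2.1-gs32").catch(console.error);
```

If the expected id is missing, load the model in LM Studio before pointing the gateway at it.
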
See [/gateway/local-models](/gateway/local-models) for the current local guidance. TL;DR: run MiniMax M2.1 via the LM Studio Responses API on serious hardware, and keep hosted models merged in as a fallback.
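
For the Responses path specifically (`api: "openai-responses"` above), a quick smoke test is to POST a trivial request to `/v1/responses`. The request shape below is the standard OpenAI Responses API; whether your LM Studio build mirrors it exactly (e.g. the `output_text` convenience field) may vary, so treat this as a sketch:

```typescript
// Smoke-test the Responses API path (api: "openai-responses" in the config).
// Assumes LM Studio's /v1/responses follows the standard OpenAI Responses
// shape; adjust if your build differs. Node 18+ (built-in fetch).
const BASE_URL = "http://127.0.0.1:1234/v1";

async function smokeTest(): Promise<void> {
  const res = await fetch(`${BASE_URL}/responses`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer lmstudio", // any key works locally
    },
    body: JSON.stringify({
      model: "minimax-m2.1-gs32",
      input: "Reply with the single word: ok",
    }),
  });
  if (!res.ok) throw new Error(`Responses call failed: HTTP ${res.status}`);
  const data = await res.json();
  // output_text is a convenience field some servers include; fall back to the
  // raw body so you can inspect the output array either way.
  console.log(data.output_text ?? JSON.stringify(data, null, 2));
}

smokeTest().catch(console.error);
```
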
### MiniMax M2.1