docs: refresh minimax setup docs
@@ -9,7 +9,9 @@
 - Agents: automatic pre-compaction memory flush turn to store durable memories before compaction.

 ### Changes
-- CLI: simplify configure section selection (single-select with optional add-more).
+- CLI/Onboarding: simplify MiniMax auth choice to a single M2.1 option.
+- CLI: configure section selection now loops until Continue.
+- Docs: explain MiniMax vs MiniMax Lightning (speed vs cost).
 - Onboarding/CLI: group model/auth choice by provider and label Z.AI as GLM 4.7.
 - Auto-reply: add compact `/model` picker (models + available providers) and show provider endpoints in `/model status`.
 - Plugins: add extension loader (tools/RPC/CLI/services), discovery paths, and config schema + Control UI labels (uiHints).
@@ -221,7 +221,7 @@ Options:
 - `--non-interactive`
 - `--mode <local|remote>`
 - `--flow <quickstart|advanced>`
-- `--auth-choice <setup-token|claude-cli|token|openai-codex|openai-api-key|codex-cli|antigravity|gemini-api-key|zai-api-key|apiKey|minimax-cloud|minimax-api|minimax|opencode-zen|skip>`
+- `--auth-choice <setup-token|claude-cli|token|openai-codex|openai-api-key|codex-cli|antigravity|gemini-api-key|zai-api-key|apiKey|minimax-api|opencode-zen|skip>`
 - `--token-provider <id>` (non-interactive; used with `--auth-choice token`)
 - `--token <token>` (non-interactive; used with `--auth-choice token`)
 - `--token-profile-id <id>` (non-interactive; default: `<provider>:manual`)
@@ -112,8 +112,7 @@ OpenAI/Anthropic‑compatible proxies.

 MiniMax is configured via `models.providers` because it uses custom endpoints:

-- MiniMax (Anthropic‑compatible): `--auth-choice minimax-cloud`
-  - `--auth-choice minimax-api` is a legacy alias.
+- MiniMax (Anthropic‑compatible): `--auth-choice minimax-api`
 - Auth: `MINIMAX_API_KEY`

 See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
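For readers following along, here is a minimal sketch of the provider entry this setup produces. It assumes the Anthropic-compatible `/anthropic` endpoint path and the `anthropic-messages` API named elsewhere in these docs; the exact keys that onboarding writes may differ, so verify against the generated config:

```json5
// Hypothetical sketch, not the literal output of `--auth-choice minimax-api`.
{
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic", // assumed endpoint path
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          { id: "MiniMax-M2.1", name: "MiniMax M2.1", contextWindow: 200000, maxTokens: 8192 }
        ]
      }
    }
  }
}
```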
@@ -1807,9 +1807,9 @@ Notes:
 - Responses API enables clean reasoning/output separation; WhatsApp sees only final text.
 - Adjust `contextWindow`/`maxTokens` if your LM Studio context length differs.

-### MiniMax API (platform.minimax.io)
+### MiniMax M2.1

-Use MiniMax's Anthropic-compatible API directly without LM Studio:
+Use MiniMax M2.1 directly without LM Studio:

 ```json5
 {
@@ -1833,25 +1833,7 @@ Use MiniMax's Anthropic-compatible API directly without LM Studio:
         name: "MiniMax M2.1",
         reasoning: false,
         input: ["text"],
-        // Pricing: MiniMax doesn't publish public rates. Override in models.json for accurate costs.
+        // Pricing: update in models.json if you need exact cost tracking.
         cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
         contextWindow: 200000,
         maxTokens: 8192
-      },
-      {
-        id: "MiniMax-M2.1-lightning",
-        name: "MiniMax M2.1 Lightning",
-        reasoning: false,
-        input: ["text"],
-        cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
-        contextWindow: 200000,
-        maxTokens: 8192
-      },
-      {
-        id: "MiniMax-M2",
-        name: "MiniMax M2",
-        reasoning: true,
-        input: ["text"],
-        cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
-        contextWindow: 200000,
-        maxTokens: 8192
@@ -1864,9 +1846,9 @@ Use MiniMax's Anthropic-compatible API directly without LM Studio:
 ```

 Notes:
-- Set `MINIMAX_API_KEY` environment variable or use `clawdbot onboard --auth-choice minimax-cloud`
-- Available models: `MiniMax-M2.1` (default), `MiniMax-M2.1-lightning` (~100 tps), `MiniMax-M2` (reasoning)
-- Pricing is a placeholder; MiniMax doesn't publish public rates. Override in `models.json` for accurate cost tracking.
+- Set `MINIMAX_API_KEY` environment variable or use `clawdbot onboard --auth-choice minimax-api`.
+- Available model: `MiniMax-M2.1` (default).
+- Update pricing in `models.json` if you need exact cost tracking.

 Notes:
 - Supported APIs: `openai-completions`, `openai-responses`, `anthropic-messages`,
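The pricing note above can be made concrete. A hedged sketch of what such an override could look like in `models.json`, assuming it mirrors the provider/model shape used in the config snippets in this commit (the numbers are placeholders from those snippets, not published rates):

```json5
// Hypothetical models.json override, verify the file's actual schema first.
{
  providers: {
    minimax: {
      models: [
        {
          id: "MiniMax-M2.1",
          // Placeholder rates; substitute your observed or negotiated pricing.
          cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 }
        }
      ]
    }
  }
}
```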
@@ -1,8 +1,8 @@
 ---
-summary: "Use MiniMax M2.1 in Clawdbot (cloud, API, or LM Studio)"
+summary: "Use MiniMax M2.1 in Clawdbot"
 read_when:
   - You want MiniMax models in Clawdbot
-  - You need MiniMax cloud/API setup or LM Studio config
+  - You need MiniMax setup guidance
 ---

 # MiniMax
@@ -25,16 +25,21 @@ MiniMax highlights these improvements in M2.1:
   Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
 - Higher-quality **dialogue and technical writing** outputs.

+## MiniMax M2.1 vs MiniMax M2.1 Lightning
+
+- **Speed:** MiniMax docs list ~60 tps output for M2.1 and ~100 tps for Lightning.
+- **Cost:** Pricing shows the same input cost, but Lightning has higher output cost.
+
 ## Choose a setup

-### Option A: MiniMax (Anthropic-compatible `/anthropic`) — recommended
+### MiniMax M2.1 — recommended

 **Best for:** hosted MiniMax with Anthropic-compatible API.

 Configure via CLI:
 - Run `clawdbot configure`
 - Select **Model/auth**
-- Choose **MiniMax M2.1 (minimax.io)**
+- Choose **MiniMax M2.1**

 ```json5
 {
@@ -64,89 +69,13 @@ Configure via CLI:
 }
 ```

-### Option B: MiniMax OpenAI-compatible `/v1` (manual)
-
-**Best for:** setups that require OpenAI-compatible payloads.
-
-```json5
-{
-  env: { MINIMAX_API_KEY: "sk-..." },
-  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
-  models: {
-    mode: "merge",
-    providers: {
-      minimax: {
-        baseUrl: "https://api.minimax.io/v1",
-        apiKey: "${MINIMAX_API_KEY}",
-        api: "openai-completions",
-        models: [
-          {
-            id: "MiniMax-M2.1",
-            name: "MiniMax M2.1",
-            reasoning: false,
-            input: ["text"],
-            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
-            contextWindow: 200000,
-            maxTokens: 8192
-          }
-        ]
-      }
-    }
-  }
-}
-```
-
-### Option C: Local via LM Studio
-
-**Best for:** local inference with LM Studio.
-We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
-desktop/server) using LM Studio's local server.
-
-Configure via CLI:
-- Run `clawdbot configure`
-- Select **Model/auth**
-- Choose **MiniMax M2.1 (LM Studio)**
-
-```json5
-{
-  agents: {
-    defaults: {
-      model: { primary: "lmstudio/minimax-m2.1-gs32" },
-      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } }
-    }
-  },
-  models: {
-    mode: "merge",
-    providers: {
-      lmstudio: {
-        baseUrl: "http://127.0.0.1:1234/v1",
-        apiKey: "lmstudio",
-        api: "openai-responses",
-        models: [
-          {
-            id: "minimax-m2.1-gs32",
-            name: "MiniMax M2.1 GS32",
-            reasoning: false,
-            input: ["text"],
-            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
-            contextWindow: 196608,
-            maxTokens: 8192
-          }
-        ]
-      }
-    }
-  }
-}
-```
-
 ## Configure via `clawdbot configure`

 Use the interactive config wizard to set MiniMax without editing JSON:

 1) Run `clawdbot configure`.
 2) Select **Model/auth**.
-3) Choose **MiniMax M2.1 (minimax.io)**, **MiniMax API (platform.minimax.io)**,
-   or **MiniMax M2.1 (LM Studio)**.
+3) Choose **MiniMax M2.1**.
 4) Pick your default model when prompted.

 ## Configuration options
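Step 4 of the wizard ("pick your default model") maps to a one-line config value. A minimal sketch, assuming the `minimax/<model>` ref format and the `agents.defaults.model` shape shown in this commit's other snippets:

```json5
// Hypothetical fragment equivalent to picking MiniMax-M2.1 as the default.
{
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } }
}
```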
@@ -160,8 +89,7 @@ Use the interactive config wizard to set MiniMax without editing JSON:

 ## Notes

-- Model refs are `minimax/<model>` or `lmstudio/<model>`.
-- MiniMax pricing is not published; the costs above are placeholders.
-  Override in `models.json` for accurate tracking.
+- Model refs are `minimax/<model>`.
+- Update pricing values in `models.json` if you need exact cost tracking.
 - See [/concepts/model-providers](/concepts/model-providers) for provider rules.
 - Use `clawdbot models list` and `clawdbot models set minimax/MiniMax-M2.1` to switch.
@@ -80,8 +80,7 @@ Tip: `--json` does **not** imply non-interactive mode. Use `--non-interactive` (
 - **OpenAI API key**: uses `OPENAI_API_KEY` if present or prompts for a key, then saves it to `~/.clawdbot/.env` so launchd can read it.
 - **OpenCode Zen (multi-model proxy)**: prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`, get it at https://opencode.ai/auth).
 - **API key**: stores the key for you.
-- **MiniMax M2.1 (minimax.io)**: config is auto‑written for the Anthropic-compatible `/anthropic` endpoint.
-- **MiniMax M2.1 (LM Studio)**: config is auto‑written for the LM Studio endpoint.
+- **MiniMax M2.1**: config is auto‑written.
   - More detail: [MiniMax](/providers/minimax)
 - **Skip**: no auth configured yet.
 - Pick a default model from detected options (or enter provider/model manually).