---
summary: "Use MiniMax M2.1 in Clawdbot"
read_when:
- You want MiniMax models in Clawdbot
- You need MiniMax setup guidance
---
# MiniMax
MiniMax is an AI company that builds the **M2/M2.1** model family. The current
coding-focused release is **MiniMax M2.1** (December 23, 2025), built for
complex real-world tasks.
Source: [MiniMax M2.1 release note](https://www.minimax.io/news/minimax-m21)
## Model overview (M2.1)
MiniMax highlights these improvements in M2.1:
- Stronger **multi-language coding** (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
- Better **web/app development** and aesthetic output quality (including native mobile).
- Improved **composite instruction** handling for office-style workflows, building on
interleaved thinking and integrated constraint execution.
- **More concise responses** with lower token usage and faster iteration loops.
- Stronger **tool/agent framework** compatibility and context management (Claude Code,
Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
- Higher-quality **dialogue and technical writing** outputs.
## MiniMax M2.1 vs MiniMax M2.1 Lightning
- **Speed:** Lightning is the “fast” variant in MiniMax's pricing docs.
- **Cost:** Pricing shows the same input cost, but Lightning has higher output cost.
- **Coding plan routing:** The Lightning back-end isn't directly available on the MiniMax
coding plan. MiniMax auto-routes most requests to Lightning, but falls back to the
regular M2.1 back-end during traffic spikes.
## Choose a setup
### MiniMax M2.1 — recommended
**Best for:** hosted MiniMax with Anthropic-compatible API.
Configure via CLI:
- Run `clawdbot configure`
- Select **Model/auth**
- Choose **MiniMax M2.1**

Or add the provider manually in `clawdbot.json`:
```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
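After saving this config, you can sanity-check the setup from a shell. A minimal sequence, assuming the placeholder key below is replaced with your real MiniMax key:
```bash
# Make the key available to the gateway (placeholder value).
export MINIMAX_API_KEY="sk-..."
# Confirm the provider's models are registered, then switch to M2.1.
clawdbot models list
clawdbot models set minimax/MiniMax-M2.1
```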
### MiniMax M2.1 as fallback (Opus primary)
**Best for:** keeping Opus 4.5 as the primary and failing over to MiniMax M2.1.
```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "minimax/MiniMax-M2.1": { alias: "minimax" }
      },
      model: {
        primary: "anthropic/claude-opus-4-5",
        fallbacks: ["minimax/MiniMax-M2.1"]
      }
    }
  }
}
```
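With this config, requests go to Opus 4.5 first and fall back to MiniMax M2.1 only when the primary fails. To confirm both aliases are registered:
```bash
# Both entries should appear, with Opus as the primary.
clawdbot models list
```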
### Optional: Local via LM Studio (manual)
**Best for:** local inference with LM Studio.
We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
desktop/server) using LM Studio's local server.
Configure manually via `clawdbot.json`:
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } }
    }
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
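Before pointing Clawdbot at LM Studio, it helps to confirm the local server is actually listening and serving the model. A quick check against LM Studio's OpenAI-compatible endpoint on the port used above:
```bash
# Should return a JSON model list that includes the loaded MiniMax model.
curl http://127.0.0.1:1234/v1/models
```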
## Configure via `clawdbot configure`
Use the interactive config wizard to set MiniMax without editing JSON:
1) Run `clawdbot configure`.
2) Select **Model/auth**.
3) Choose **MiniMax M2.1**.
4) Pick your default model when prompted.
## Configuration options
- `models.providers.minimax.baseUrl`: prefer `https://api.minimax.io/anthropic` (Anthropic-compatible); `https://api.minimax.io/v1` is optional for OpenAI-compatible payloads.
- `models.providers.minimax.api`: prefer `anthropic-messages`; `openai-completions` is optional for OpenAI-compatible payloads.
- `models.providers.minimax.apiKey`: MiniMax API key (`MINIMAX_API_KEY`).
- `models.providers.minimax.models`: define `id`, `name`, `reasoning`, `contextWindow`, `maxTokens`, `cost`.
- `agents.defaults.models`: alias models you want in the allowlist.
- `models.mode`: keep `merge` if you want to add MiniMax alongside built-ins.
## Notes
- Model refs are `minimax/<model>`.
- Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key; see the example after this list).
- Update pricing values in `models.json` if you need exact cost tracking.
- Referral link for MiniMax Coding Plan (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
- Use `clawdbot models list` and `clawdbot models set minimax/MiniMax-M2.1` to switch.
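To check remaining Coding Plan quota from a shell, something like the sketch below should work against the usage API listed above; the bearer-token header is an assumption, so check MiniMax's docs if the request is rejected:
```bash
# Requires a coding-plan key; the Authorization header format is an assumption.
curl -H "Authorization: Bearer $MINIMAX_API_KEY" \
  "https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains"
```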
## Troubleshooting
### “Unknown model: minimax/MiniMax-M2.1”
This usually means the **MiniMax provider isn't configured** (no provider entry
and no MiniMax auth profile/env key found). A fix for this detection ships in
**2026.1.12** (unreleased at the time of writing). To resolve it:
- Upgrade to **2026.1.12** (or run from source `main`), then restart the gateway.
- Run `clawdbot configure` and select **MiniMax M2.1**, or
- Add the `models.providers.minimax` block manually, or
- Set `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider can be injected.
Model ids are **case-sensitive**; make sure you use the exact casing:
- `minimax/MiniMax-M2.1`
- `minimax/MiniMax-M2.1-lightning`
Then recheck with:
```bash
clawdbot models list
```