| summary | read_when |
|---|---|
| Use MiniMax M2.1 in Clawdbot | |
MiniMax
MiniMax is an AI company that builds the M2/M2.1 model family. The current coding-focused release is MiniMax M2.1 (December 23, 2025), built for real-world complex tasks.
Source: MiniMax M2.1 release note
Model overview (M2.1)
MiniMax highlights these improvements in M2.1:
- Stronger multi-language coding (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
- Better web/app development and aesthetic output quality (including native mobile).
- Improved composite instruction handling for office-style workflows, building on interleaved thinking and integrated constraint execution.
- More concise responses with lower token usage and faster iteration loops.
- Stronger tool/agent framework compatibility and context management (Claude Code, Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
- Higher-quality dialogue and technical writing outputs.
MiniMax M2.1 vs MiniMax M2.1 Lightning
- Speed: Lightning is the “fast” variant in MiniMax’s pricing docs.
- Cost: Pricing shows the same input cost, but Lightning has higher output cost.
- Coding plan routing: The Lightning back-end isn’t directly available on the MiniMax coding plan. MiniMax auto-routes most requests to Lightning, but falls back to the regular M2.1 back-end during traffic spikes.
Choose a setup
MiniMax M2.1 — recommended
Best for: hosted MiniMax with Anthropic-compatible API.
Configure via CLI:
- Run `clawdbot configure`
- Select Model/auth
- Choose MiniMax M2.1

Or configure manually via clawdbot.json:
```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
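The `apiKey` field above uses an environment-variable placeholder rather than an inline secret. A minimal sketch of that kind of `${VAR}` substitution (the helper name `resolveEnvRefs` is hypothetical; Clawdbot's actual resolver may differ):

```typescript
// Resolve "${VAR}" placeholders in a config value against an env map.
// Hypothetical helper for illustration; not Clawdbot's real implementation.
function resolveEnvRefs(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{([A-Z0-9_]+)\}/g, (_whole, name: string) => {
    const resolved = env[name];
    if (resolved === undefined) throw new Error(`Missing env var: ${name}`);
    return resolved;
  });
}

// Example: resolving the provider's apiKey field.
const apiKey = resolveEnvRefs("${MINIMAX_API_KEY}", { MINIMAX_API_KEY: "sk-..." });
console.log(apiKey); // "sk-..."
```

Keeping the key in `env` (or the process environment) means the rest of the config can be committed or shared without leaking the secret.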
Optional: Local via LM Studio (manual)
Best for: local inference with LM Studio. We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a desktop/server) using LM Studio's local server.
Configure manually via clawdbot.json:
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } }
    }
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
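With a 196,608-token context window and 8,192 max output tokens, long local sessions can run out of room. A rough budget check, assuming the crude ~4-characters-per-token heuristic (real counts come from the model's tokenizer, so treat this as an estimate only):

```typescript
// Rough context-budget check for the local LM Studio config above.
// The 4-chars-per-token estimate is a heuristic, not the real tokenizer.
const CONTEXT_WINDOW = 196608;
const MAX_OUTPUT_TOKENS = 8192;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsContext(promptText: string): boolean {
  return estimateTokens(promptText) + MAX_OUTPUT_TOKENS <= CONTEXT_WINDOW;
}

console.log(fitsContext("hello world")); // true for a short prompt
```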
Configure via clawdbot configure
Use the interactive config wizard to set MiniMax without editing JSON:
- Run `clawdbot configure`.
- Select Model/auth.
- Choose MiniMax M2.1.
- Pick your default model when prompted.
Configuration options
- `models.providers.minimax.baseUrl`: prefer `https://api.minimax.io/anthropic` (Anthropic-compatible); `https://api.minimax.io/v1` is optional for OpenAI-compatible payloads.
- `models.providers.minimax.api`: prefer `anthropic-messages`; `openai-completions` is optional for OpenAI-compatible payloads.
- `models.providers.minimax.apiKey`: MiniMax API key (`MINIMAX_API_KEY`).
- `models.providers.minimax.models`: define `id`, `name`, `reasoning`, `contextWindow`, `maxTokens`, `cost`.
- `agents.defaults.models`: alias models you want in the allowlist.
- `models.mode`: keep `merge` if you want to add MiniMax alongside built-ins.
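The model fields above can be captured in a small type for sanity-checking a hand-edited entry. The type and function names here are illustrative, not Clawdbot's internal types:

```typescript
// Illustrative shape of one provider model entry; field names follow the config docs above.
interface ModelEntry {
  id: string;
  name: string;
  reasoning: boolean;
  input: string[];
  cost: { input: number; output: number; cacheRead: number; cacheWrite: number };
  contextWindow: number;
  maxTokens: number;
}

// Return a list of problems; empty means the entry looks sane.
function validateModelEntry(m: ModelEntry): string[] {
  const problems: string[] = [];
  if (!m.id) problems.push("id is required");
  if (m.maxTokens > m.contextWindow) problems.push("maxTokens exceeds contextWindow");
  if (m.cost.input < 0 || m.cost.output < 0) problems.push("cost values must be non-negative");
  return problems;
}

const entry: ModelEntry = {
  id: "MiniMax-M2.1",
  name: "MiniMax M2.1",
  reasoning: false,
  input: ["text"],
  cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
  contextWindow: 200000,
  maxTokens: 8192,
};
console.log(validateModelEntry(entry)); // []
```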
Notes
- Model refs are `minimax/<model>`.
- Update pricing values in `models.json` if you need exact cost tracking.
- See /concepts/model-providers for provider rules.
- Use `clawdbot models list` and `clawdbot models set minimax/MiniMax-M2.1` to switch.
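Model refs like `minimax/MiniMax-M2.1` split into a provider prefix and a model id at the first slash. A minimal parser sketch (the function name is hypothetical; model ids may themselves contain dots and dashes, so only the first slash separates):

```typescript
// Split a "provider/model" ref at the first slash.
function parseModelRef(ref: string): { provider: string; model: string } {
  const i = ref.indexOf("/");
  if (i <= 0 || i === ref.length - 1) throw new Error(`Invalid model ref: ${ref}`);
  return { provider: ref.slice(0, i), model: ref.slice(i + 1) };
}

console.log(parseModelRef("minimax/MiniMax-M2.1"));
// { provider: "minimax", model: "MiniMax-M2.1" }
```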