---
summary: "Model provider overview with example configs + CLI flows"
read_when:
- You need a provider-by-provider model setup reference
- You want example configs or CLI onboarding commands for model providers
---
# Model providers
This page covers **LLM/model providers** (not chat channels like WhatsApp/Telegram).
For model selection rules, see [/concepts/models](/concepts/models).
## Quick rules
- Model refs use `provider/model` (example: `opencode/claude-opus-4-5`).
- If you set `agents.defaults.models`, it becomes the allowlist (see the sketch below).
- CLI helpers: `moltbot onboard`, `moltbot models list`, `moltbot models set <provider/model>`.
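A minimal sketch of the first two rules together (assuming empty options objects are accepted for allowlisted refs; the `alias` option appears in the local-proxy example later on):
```json5
{
  agents: {
    defaults: {
      // Model refs are always provider/model.
      model: { primary: "opencode/claude-opus-4-5" },
      // Because this map is set, only these two refs are allowed.
      models: {
        "opencode/claude-opus-4-5": {},
        "openai/gpt-5.2": { alias: "GPT" }
      }
    }
  }
}
```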
## Built-in providers (pi-ai catalog)
Moltbot ships with the pi-ai catalog. These providers require **no**
`models.providers` config; just set auth + pick a model.
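Auth can come from your shell environment or from the config's `env` block (the same pattern the Kimi Code example uses later on); a minimal sketch with a placeholder key:
```json5
{
  // Placeholder value; any built-in provider just needs its env var set.
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.2" } } }
}
```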
### OpenAI
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Example model: `openai/gpt-5.2`
- CLI: `moltbot onboard --auth-choice openai-api-key`
```json5
{
  agents: { defaults: { model: { primary: "openai/gpt-5.2" } } }
}
```
### Anthropic
- Provider: `anthropic`
- Auth: `ANTHROPIC_API_KEY` or `claude setup-token`
- Example model: `anthropic/claude-opus-4-5`
- CLI: `moltbot onboard --auth-choice token` (paste setup-token) or `moltbot models auth paste-token --provider anthropic`
```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } }
}
```
### OpenAI Code (Codex)
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Example model: `openai-codex/gpt-5.2`
- CLI: `moltbot onboard --auth-choice openai-codex` or `moltbot models auth login --provider openai-codex`
```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.2" } } }
}
```
### OpenCode Zen
- Provider: `opencode`
- Auth: `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`)
- Example model: `opencode/claude-opus-4-5`
- CLI: `moltbot onboard --auth-choice opencode-zen`
```json5
{
  agents: { defaults: { model: { primary: "opencode/claude-opus-4-5" } } }
}
```
### Google Gemini (API key)
- Provider: `google`
- Auth: `GEMINI_API_KEY`
- Example model: `google/gemini-3-pro-preview`
- CLI: `moltbot onboard --auth-choice gemini-api-key`
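A minimal config, following the same pattern as the providers above:
```json5
{
  agents: { defaults: { model: { primary: "google/gemini-3-pro-preview" } } }
}
```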
### Google Vertex / Antigravity / Gemini CLI
- Providers: `google-vertex`, `google-antigravity`, `google-gemini-cli`
- Auth: Vertex uses gcloud ADC; Antigravity/Gemini CLI use their respective auth flows
- Antigravity OAuth is shipped as a bundled plugin (`google-antigravity-auth`, disabled by default).
  - Enable: `moltbot plugins enable google-antigravity-auth`
  - Login: `moltbot models auth login --provider google-antigravity --set-default`
- Gemini CLI OAuth is shipped as a bundled plugin (`google-gemini-cli-auth`, disabled by default).
  - Enable: `moltbot plugins enable google-gemini-cli-auth`
  - Login: `moltbot models auth login --provider google-gemini-cli --set-default`
- Note: you do **not** paste a client id or secret into `moltbot.json`. The CLI login flow stores
  tokens in auth profiles on the gateway host.
### Z.AI (GLM)
- Provider: `zai`
- Auth: `ZAI_API_KEY`
- Example model: `zai/glm-4.7`
- CLI: `moltbot onboard --auth-choice zai-api-key`
- Aliases: `z.ai/*` and `z-ai/*` normalize to `zai/*`
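A minimal config using the canonical `zai/*` form:
```json5
{
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } }
}
```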
### Vercel AI Gateway
- Provider: `vercel-ai-gateway`
- Auth: `AI_GATEWAY_API_KEY`
- Example model: `vercel-ai-gateway/anthropic/claude-opus-4.5`
- CLI: `moltbot onboard --auth-choice ai-gateway-api-key`
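A minimal config (note the ref nests the upstream `provider/model` after the gateway name):
```json5
{
  agents: { defaults: { model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.5" } } }
}
```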
### Other built-in providers
- OpenRouter: `openrouter` (`OPENROUTER_API_KEY`)
  - Example model: `openrouter/anthropic/claude-sonnet-4-5` (see the sketch below)
- xAI: `xai` (`XAI_API_KEY`)
- Groq: `groq` (`GROQ_API_KEY`)
- Cerebras: `cerebras` (`CEREBRAS_API_KEY`)
  - GLM models on Cerebras use ids `zai-glm-4.7` and `zai-glm-4.6`.
  - OpenAI-compatible base URL: `https://api.cerebras.ai/v1`.
- Mistral: `mistral` (`MISTRAL_API_KEY`)
- GitHub Copilot: `github-copilot` (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`)
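These follow the same pattern as the providers above; for example, pointing the default at the OpenRouter model:
```json5
{
  agents: { defaults: { model: { primary: "openrouter/anthropic/claude-sonnet-4-5" } } }
}
```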
## Providers via `models.providers` (custom/base URL)
Use `models.providers` (or `models.json`) to add **custom** providers or
OpenAI- or Anthropic-compatible proxies.
### Moonshot AI (Kimi)
Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
- Provider: `moonshot`
- Auth: `MOONSHOT_API_KEY`
- Example model: `moonshot/kimi-k2-0905-preview`
- Kimi K2 model IDs:
  {/* moonshot-kimi-k2-model-refs:start */}
  - `moonshot/kimi-k2-0905-preview`
  - `moonshot/kimi-k2-turbo-preview`
  - `moonshot/kimi-k2-thinking`
  - `moonshot/kimi-k2-thinking-turbo`
  {/* moonshot-kimi-k2-model-refs:end */}
```json5
{
  agents: {
    defaults: { model: { primary: "moonshot/kimi-k2-0905-preview" } }
  },
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        baseUrl: "https://api.moonshot.ai/v1",
        apiKey: "${MOONSHOT_API_KEY}",
        api: "openai-completions",
        models: [{ id: "kimi-k2-0905-preview", name: "Kimi K2 0905 Preview" }]
      }
    }
  }
}
```
### Kimi Code
Kimi Code uses a dedicated endpoint and key (separate from Moonshot):
- Provider: `kimi-code`
- Auth: `KIMICODE_API_KEY`
- Example model: `kimi-code/kimi-for-coding`
```json5
{
  env: { KIMICODE_API_KEY: "sk-..." },
  agents: {
    defaults: { model: { primary: "kimi-code/kimi-for-coding" } }
  },
  models: {
    mode: "merge",
    providers: {
      "kimi-code": {
        baseUrl: "https://api.kimi.com/coding/v1",
        apiKey: "${KIMICODE_API_KEY}",
        api: "openai-completions",
        models: [{ id: "kimi-for-coding", name: "Kimi For Coding" }]
      }
    }
  }
}
```
### Qwen OAuth (free tier)
Qwen provides OAuth access to Qwen Coder + Vision via a device-code flow.
Enable the bundled plugin, then log in:
```bash
moltbot plugins enable qwen-portal-auth
moltbot models auth login --provider qwen-portal --set-default
```
Model refs:
- `qwen-portal/coder-model`
- `qwen-portal/vision-model`
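After login you can also pin one of these refs explicitly (a sketch; `--set-default` above should make this unnecessary):
```json5
{
  agents: { defaults: { model: { primary: "qwen-portal/coder-model" } } }
}
```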
See [/providers/qwen](/providers/qwen) for setup details and notes.
### Synthetic
Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
- Provider: `synthetic`
- Auth: `SYNTHETIC_API_KEY`
- Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.1`
- CLI: `moltbot onboard --auth-choice synthetic-api-key`
```json5
{
  agents: {
    defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" } }
  },
  models: {
    mode: "merge",
    providers: {
      synthetic: {
        baseUrl: "https://api.synthetic.new/anthropic",
        apiKey: "${SYNTHETIC_API_KEY}",
        api: "anthropic-messages",
        models: [{ id: "hf:MiniMaxAI/MiniMax-M2.1", name: "MiniMax M2.1" }]
      }
    }
  }
}
```
### MiniMax
MiniMax is configured via `models.providers` because it uses custom endpoints:
- MiniMax (Anthropic-compatible): `--auth-choice minimax-api`
- Auth: `MINIMAX_API_KEY`
See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
### Ollama
Ollama is a local LLM runtime that provides an OpenAI-compatible API:
- Provider: `ollama`
- Auth: None required (local server)
- Example model: `ollama/llama3.3`
- Installation: https://ollama.ai
```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```
```json5
{
  agents: {
    defaults: { model: { primary: "ollama/llama3.3" } }
  }
}
```
Ollama is automatically detected when running locally at `http://127.0.0.1:11434/v1`. See [/providers/ollama](/providers/ollama) for model recommendations and custom configuration.
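If your Ollama server runs on another host (so auto-detection does not apply), a hedged sketch following the same custom-provider pattern as the local proxies below — the host address is an assumption for illustration:
```json5
{
  agents: {
    defaults: { model: { primary: "ollama/llama3.3" } }
  },
  models: {
    mode: "merge",
    providers: {
      ollama: {
        // Assumed LAN address; Ollama's OpenAI-compatible API is served under /v1.
        baseUrl: "http://192.168.1.50:11434/v1",
        api: "openai-completions",
        // No apiKey: Ollama requires no auth (see above).
        models: [{ id: "llama3.3", name: "Llama 3.3" }]
      }
    }
  }
}
```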
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
Example (OpenAI-compatible):
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } }
    }
  },
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "LMSTUDIO_KEY",
        api: "openai-completions",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 200000,
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```
Notes:
- For custom providers, `reasoning`, `input`, `cost`, `contextWindow`, and `maxTokens` are optional.
  When omitted, Moltbot defaults to:
  - `reasoning: false`
  - `input: ["text"]`
  - `cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }`
  - `contextWindow: 200000`
  - `maxTokens: 8192`
- Recommended: set explicit values that match your proxy/model limits.
## CLI examples
```bash
moltbot onboard --auth-choice opencode-zen
moltbot models set opencode/claude-opus-4-5
moltbot models list
```
See also: [/gateway/configuration](/gateway/configuration) for full configuration examples.