refactor: rename clawdbot to moltbot with legacy compat

This commit is contained in:
Peter Steinberger
2026-01-27 12:19:58 +00:00
parent 83460df96f
commit 6d16a658e5
1839 changed files with 11250 additions and 11199 deletions

View File

@@ -1,13 +1,13 @@
---
-summary: "Use Anthropic Claude via API keys or setup-token in Clawdbot"
+summary: "Use Anthropic Claude via API keys or setup-token in Moltbot"
read_when:
-- You want to use Anthropic models in Clawdbot
+- You want to use Anthropic models in Moltbot
- You want setup-token instead of API keys
---
# Anthropic (Claude)
Anthropic builds the **Claude** model family and provides access via an API.
-In Clawdbot you can authenticate with an API key or a **setup-token**.
+In Moltbot you can authenticate with an API key or a **setup-token**.
## Option A: Anthropic API key
@@ -17,11 +17,11 @@ Create your API key in the Anthropic Console.
### CLI setup
```bash
-clawdbot onboard
+moltbot onboard
# choose: Anthropic API key
# or non-interactive
-clawdbot onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
+moltbot onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
```
### Config snippet
@@ -35,7 +35,7 @@ clawdbot onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
## Prompt caching (Anthropic API)
-Clawdbot does **not** override Anthropic's default cache TTL unless you set it.
+Moltbot does **not** override Anthropic's default cache TTL unless you set it.
This is **API-only**; subscription auth does not honor TTL settings.
To set the TTL per model, use `cacheControlTtl` in the model `params`:
@@ -54,7 +54,7 @@ To set the TTL per model, use `cacheControlTtl` in the model `params`:
}
```
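The hunk above elides the snippet itself; a minimal sketch of what a per-model `cacheControlTtl` override could look like (the provider/model layout and the example model ID are assumptions; only the `cacheControlTtl` param name comes from this page):

```json5
{
  models: {
    providers: {
      anthropic: {
        models: [
          {
            id: "claude-sonnet-4-5", // hypothetical example ID
            params: {
              cacheControlTtl: "1h" // assumed TTL string; Anthropic's API accepts 5m/1h
            }
          }
        ]
      }
    }
  }
}
```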
-Clawdbot includes the `extended-cache-ttl-2025-04-11` beta flag for Anthropic API
+Moltbot includes the `extended-cache-ttl-2025-04-11` beta flag for Anthropic API
requests; keep it if you override provider headers (see [/gateway/configuration](/gateway/configuration)).
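If you do override provider headers, a hedged sketch of keeping the flag (the `headers` key and its placement are assumptions; only the beta flag value comes from this page):

```json5
{
  models: {
    providers: {
      anthropic: {
        // Assumed shape: preserve the documented beta flag when overriding headers.
        headers: {
          "anthropic-beta": "extended-cache-ttl-2025-04-11"
        }
      }
    }
  }
}
```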
## Option B: Claude setup-token
@@ -69,23 +69,23 @@ Setup-tokens are created by the **Claude Code CLI**, not the Anthropic Console.
claude setup-token
```
-Paste the token into Clawdbot (wizard: **Anthropic token (paste setup-token)**), or run it on the gateway host:
+Paste the token into Moltbot (wizard: **Anthropic token (paste setup-token)**), or run it on the gateway host:
```bash
-clawdbot models auth setup-token --provider anthropic
+moltbot models auth setup-token --provider anthropic
```
If you generated the token on a different machine, paste it:
```bash
-clawdbot models auth paste-token --provider anthropic
+moltbot models auth paste-token --provider anthropic
```
### CLI setup
```bash
# Paste a setup-token during onboarding
-clawdbot onboard --auth-choice setup-token
+moltbot onboard --auth-choice setup-token
```
### Config snippet
@@ -98,7 +98,7 @@ clawdbot onboard --auth-choice setup-token
## Notes
-- Generate the setup-token with `claude setup-token` and paste it, or run `clawdbot models auth setup-token` on the gateway host.
+- Generate the setup-token with `claude setup-token` and paste it, or run `moltbot models auth setup-token` on the gateway host.
- If you see “OAuth token refresh failed …” on a Claude subscription, re-auth with a setup-token. See [/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription](/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription).
- Auth details + reuse rules are in [/concepts/oauth](/concepts/oauth).
@@ -108,19 +108,19 @@ clawdbot onboard --auth-choice setup-token
- Claude subscription auth can expire or be revoked. Re-run `claude setup-token`
and paste it into the **gateway host**.
- If the Claude CLI login lives on a different machine, use
-  `clawdbot models auth paste-token --provider anthropic` on the gateway host.
+  `moltbot models auth paste-token --provider anthropic` on the gateway host.
**No API key found for provider "anthropic"**
- Auth is **per agent**. New agents don't inherit the main agent's keys.
- Re-run onboarding for that agent, or paste a setup-token / API key on the
-  gateway host, then verify with `clawdbot models status`.
+  gateway host, then verify with `moltbot models status`.
**No credentials found for profile `anthropic:default`**
-- Run `clawdbot models status` to see which auth profile is active.
+- Run `moltbot models status` to see which auth profile is active.
- Re-run onboarding, or paste a setup-token / API key for that profile.
**No available auth profile (all in cooldown/unavailable)**
-- Check `clawdbot models status --json` for `auth.unusableProfiles`.
+- Check `moltbot models status --json` for `auth.unusableProfiles`.
- Add another Anthropic profile or wait for cooldown.
More: [/gateway/troubleshooting](/gateway/troubleshooting) and [/help/faq](/help/faq).

View File

@@ -67,9 +67,9 @@ curl http://localhost:3456/v1/chat/completions \
}'
```
-### With Clawdbot
+### With Moltbot
-You can point Clawdbot at the proxy as a custom OpenAI-compatible endpoint:
+You can point Moltbot at the proxy as a custom OpenAI-compatible endpoint:
```json5
{
@@ -134,12 +134,12 @@ launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
## Notes
-- This is a **community tool**, not officially supported by Anthropic or Clawdbot
+- This is a **community tool**, not officially supported by Anthropic or Moltbot
- Requires an active Claude Max/Pro subscription with Claude Code CLI authenticated
- The proxy runs locally and does not send data to any third-party servers
- Streaming responses are fully supported
## See Also
-- [Anthropic provider](/providers/anthropic) - Native Clawdbot integration with Claude setup-token or API keys
+- [Anthropic provider](/providers/anthropic) - Native Moltbot integration with Claude setup-token or API keys
- [OpenAI provider](/providers/openai) - For OpenAI/Codex subscriptions

View File

@@ -6,10 +6,10 @@ read_when:
---
# Deepgram (Audio Transcription)
-Deepgram is a speech-to-text API. In Clawdbot it is used for **inbound audio/voice note
+Deepgram is a speech-to-text API. In Moltbot it is used for **inbound audio/voice note
transcription** via `tools.media.audio`.
-When enabled, Clawdbot uploads the audio file to Deepgram and injects the transcript
+When enabled, Moltbot uploads the audio file to Deepgram and injects the transcript
into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**;
it uses the pre-recorded transcription endpoint.
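The `tools.media.audio` wiring might look roughly like this (the field names below are illustrative guesses, not confirmed by this page; only the `tools.media.audio` path comes from the docs):

```json5
{
  tools: {
    media: {
      audio: {
        // Hypothetical fields: enable inbound transcription and select Deepgram.
        enabled: true,
        provider: "deepgram"
      }
    }
  }
}
```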

View File

@@ -1,28 +1,28 @@
---
-summary: "Sign in to GitHub Copilot from Clawdbot using the device flow"
+summary: "Sign in to GitHub Copilot from Moltbot using the device flow"
read_when:
- You want to use GitHub Copilot as a model provider
-- You need the `clawdbot models auth login-github-copilot` flow
+- You need the `moltbot models auth login-github-copilot` flow
---
# GitHub Copilot
## What is GitHub Copilot?
GitHub Copilot is GitHub's AI coding assistant. It provides access to Copilot
-models for your GitHub account and plan. Clawdbot can use Copilot as a model
+models for your GitHub account and plan. Moltbot can use Copilot as a model
provider in two different ways.
-## Two ways to use Copilot in Clawdbot
+## Two ways to use Copilot in Moltbot
### 1) Built-in GitHub Copilot provider (`github-copilot`)
Use the native device-login flow to obtain a GitHub token, then exchange it for
-Copilot API tokens when Clawdbot runs. This is the **default** and simplest path
+Copilot API tokens when Moltbot runs. This is the **default** and simplest path
because it does not require VS Code.
### 2) Copilot Proxy plugin (`copilot-proxy`)
-Use the **Copilot Proxy** VS Code extension as a local bridge. Clawdbot talks to
+Use the **Copilot Proxy** VS Code extension as a local bridge. Moltbot talks to
the proxy's `/v1` endpoint and uses the model list you configure there. Choose
this when you already run Copilot Proxy in VS Code or need to route through it.
You must enable the plugin and keep the VS Code extension running.
@@ -34,7 +34,7 @@ profile.
## CLI setup
```bash
-clawdbot models auth login-github-copilot
+moltbot models auth login-github-copilot
```
You'll be prompted to visit a URL and enter a one-time code. Keep the terminal
@@ -43,14 +43,14 @@ open until it completes.
### Optional flags
```bash
-clawdbot models auth login-github-copilot --profile-id github-copilot:work
-clawdbot models auth login-github-copilot --yes
+moltbot models auth login-github-copilot --profile-id github-copilot:work
+moltbot models auth login-github-copilot --yes
```
## Set a default model
```bash
-clawdbot models set github-copilot/gpt-4o
+moltbot models set github-copilot/gpt-4o
```
### Config snippet
@@ -67,4 +67,4 @@ clawdbot models set github-copilot/gpt-4o
- Copilot model availability depends on your plan; if a model is rejected, try
another ID (for example `github-copilot/gpt-4.1`).
- The login stores a GitHub token in the auth profile store and exchanges it for a
-  Copilot API token when Clawdbot runs.
+  Copilot API token when Moltbot runs.

View File

@@ -1,18 +1,18 @@
---
-summary: "GLM model family overview + how to use it in Clawdbot"
+summary: "GLM model family overview + how to use it in Moltbot"
read_when:
-- You want GLM models in Clawdbot
+- You want GLM models in Moltbot
- You need the model naming convention and setup
---
# GLM models
-GLM is a **model family** (not a company) available through the Z.AI platform. In Clawdbot, GLM
+GLM is a **model family** (not a company) available through the Z.AI platform. In Moltbot, GLM
models are accessed via the `zai` provider and model IDs like `zai/glm-4.7`.
## CLI setup
```bash
-clawdbot onboard --auth-choice zai-api-key
+moltbot onboard --auth-choice zai-api-key
```
## Config snippet

View File

@@ -1,12 +1,12 @@
---
-summary: "Model providers (LLMs) supported by Clawdbot"
+summary: "Model providers (LLMs) supported by Moltbot"
read_when:
- You want to choose a model provider
- You need a quick overview of supported LLM backends
---
# Model Providers
-Clawdbot can use many LLM providers. Pick a provider, authenticate, then set the
+Moltbot can use many LLM providers. Pick a provider, authenticate, then set the
default model as `provider/model`.
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).
@@ -22,7 +22,7 @@ See [Venice AI](/providers/venice).
## Quick start
-1) Authenticate with the provider (usually via `clawdbot onboard`).
+1) Authenticate with the provider (usually via `moltbot onboard`).
2) Set the default model:
```json5

View File

@@ -1,7 +1,7 @@
---
-summary: "Use MiniMax M2.1 in Clawdbot"
+summary: "Use MiniMax M2.1 in Moltbot"
read_when:
-- You want MiniMax models in Clawdbot
+- You want MiniMax models in Moltbot
- You need MiniMax setup guidance
---
# MiniMax
@@ -40,7 +40,7 @@ MiniMax highlights these improvements in M2.1:
**Best for:** hosted MiniMax with Anthropic-compatible API.
Configure via CLI:
-- Run `clawdbot configure`
+- Run `moltbot configure`
- Select **Model/auth**
- Choose **MiniMax M2.1**
@@ -100,7 +100,7 @@ Configure via CLI:
We have seen strong results with MiniMax M2.1 on powerful hardware (e.g. a
desktop/server) using LM Studio's local server.
-Configure manually via `clawdbot.json`:
+Configure manually via `moltbot.json`:
```json5
{
@@ -134,11 +134,11 @@ Configure manually via `clawdbot.json`:
}
```
-## Configure via `clawdbot configure`
+## Configure via `moltbot configure`
Use the interactive config wizard to set MiniMax without editing JSON:
-1) Run `clawdbot configure`.
+1) Run `moltbot configure`.
2) Select **Model/auth**.
3) Choose **MiniMax M2.1**.
4) Pick your default model when prompted.
@@ -159,7 +159,7 @@ Use the interactive config wizard to set MiniMax without editing JSON:
- Update pricing values in `models.json` if you need exact cost tracking.
- Referral link for MiniMax Coding Plan (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
-- Use `clawdbot models list` and `clawdbot models set minimax/MiniMax-M2.1` to switch.
+- Use `moltbot models list` and `moltbot models set minimax/MiniMax-M2.1` to switch.
## Troubleshooting
@@ -169,7 +169,7 @@ This usually means the **MiniMax provider isn't configured** (no provider entr
and no MiniMax auth profile/env key found). A fix for this detection is in
**2026.1.12** (unreleased at the time of writing). Fix by:
- Upgrading to **2026.1.12** (or run from source `main`), then restarting the gateway.
-- Running `clawdbot configure` and selecting **MiniMax M2.1**, or
+- Running `moltbot configure` and selecting **MiniMax M2.1**, or
- Adding the `models.providers.minimax` block manually, or
- Setting `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider can be injected.
@@ -179,5 +179,5 @@ Make sure the model id is **case-sensitive**:
Then recheck with:
```bash
-clawdbot models list
+moltbot models list
```

View File

@@ -1,12 +1,12 @@
---
-summary: "Model providers (LLMs) supported by Clawdbot"
+summary: "Model providers (LLMs) supported by Moltbot"
read_when:
- You want to choose a model provider
- You want quick setup examples for LLM auth + model selection
---
# Model Providers
-Clawdbot can use many LLM providers. Pick one, authenticate, then set the default
+Moltbot can use many LLM providers. Pick one, authenticate, then set the default
model as `provider/model`.
## Highlight: Venius (Venice AI)
@@ -20,7 +20,7 @@ See [Venice AI](/providers/venice).
## Quick start (two steps)
-1) Authenticate with the provider (usually via `clawdbot onboard`).
+1) Authenticate with the provider (usually via `moltbot onboard`).
2) Set the default model:
```json5

View File

@@ -21,13 +21,13 @@ Current Kimi K2 model IDs:
{/* moonshot-kimi-k2-ids:end */}
```bash
-clawdbot onboard --auth-choice moonshot-api-key
+moltbot onboard --auth-choice moonshot-api-key
```
Kimi Code:
```bash
-clawdbot onboard --auth-choice kimi-code-api-key
+moltbot onboard --auth-choice kimi-code-api-key
```
Note: Moonshot and Kimi Code are separate providers. Keys are not interchangeable, endpoints differ, and model refs differ (Moonshot uses `moonshot/...`, Kimi Code uses `kimi-code/...`).

View File

@@ -1,12 +1,12 @@
---
-summary: "Run Clawdbot with Ollama (local LLM runtime)"
+summary: "Run Moltbot with Ollama (local LLM runtime)"
read_when:
-- You want to run Clawdbot with local models via Ollama
+- You want to run Moltbot with local models via Ollama
- You need Ollama setup and configuration guidance
---
# Ollama
-Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Clawdbot integrates with Ollama's OpenAI-compatible API and can **auto-discover tool-capable models** when you opt in with `OLLAMA_API_KEY` (or an auth profile) and do not define an explicit `models.providers.ollama` entry.
+Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Moltbot integrates with Ollama's OpenAI-compatible API and can **auto-discover tool-capable models** when you opt in with `OLLAMA_API_KEY` (or an auth profile) and do not define an explicit `models.providers.ollama` entry.
## Quick start
@@ -22,14 +22,14 @@ ollama pull qwen2.5-coder:32b
ollama pull deepseek-r1:32b
```
-3) Enable Ollama for Clawdbot (any value works; Ollama doesn't require a real key):
+3) Enable Ollama for Moltbot (any value works; Ollama doesn't require a real key):
```bash
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
-clawdbot config set models.providers.ollama.apiKey "ollama-local"
+moltbot config set models.providers.ollama.apiKey "ollama-local"
```
4) Use Ollama models:
@@ -46,7 +46,7 @@ clawdbot config set models.providers.ollama.apiKey "ollama-local"
## Model discovery (implicit provider)
-When you set `OLLAMA_API_KEY` (or an auth profile) and **do not** define `models.providers.ollama`, Clawdbot discovers models from the local Ollama instance at `http://127.0.0.1:11434`:
+When you set `OLLAMA_API_KEY` (or an auth profile) and **do not** define `models.providers.ollama`, Moltbot discovers models from the local Ollama instance at `http://127.0.0.1:11434`:
- Queries `/api/tags` and `/api/show`
- Keeps only models that report `tools` capability
@@ -61,7 +61,7 @@ To see what models are available:
```bash
ollama list
-clawdbot models list
+moltbot models list
```
To add a new model, simply pull it with Ollama:
@@ -117,7 +117,7 @@ Use explicit config when:
}
```
-If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and Clawdbot will fill it for availability checks.
+If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and Moltbot will fill it for availability checks.
### Custom base URL (explicit config)
@@ -157,7 +157,7 @@ Once configured, all your Ollama models are available:
### Reasoning models
-Clawdbot marks models as reasoning-capable when Ollama reports `thinking` in `/api/show`:
+Moltbot marks models as reasoning-capable when Ollama reports `thinking` in `/api/show`:
```bash
ollama pull deepseek-r1:32b
@@ -169,7 +169,7 @@ Ollama is free and runs locally, so all model costs are set to $0.
### Context windows
-For auto-discovered models, Clawdbot uses the context window reported by Ollama when available, otherwise it defaults to `8192`. You can override `contextWindow` and `maxTokens` in explicit provider config.
+For auto-discovered models, Moltbot uses the context window reported by Ollama when available, otherwise it defaults to `8192`. You can override `contextWindow` and `maxTokens` in explicit provider config.
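A hedged sketch of such an override (the provider/models layout is assumed from context; the model ID is just an example):

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local", // any value works locally
        models: [
          {
            id: "qwen2.5-coder:32b", // example model pulled earlier
            contextWindow: 32768,    // override the 8192 fallback
            maxTokens: 8192
          }
        ]
      }
    }
  }
}
```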
## Troubleshooting
@@ -189,7 +189,7 @@ curl http://localhost:11434/api/tags
### No models available
-Clawdbot only auto-discovers models that report tool support. If your model isn't listed, either:
+Moltbot only auto-discovers models that report tool support. If your model isn't listed, either:
- Pull a tool-capable model, or
- Define the model explicitly in `models.providers.ollama`.

View File

@@ -1,7 +1,7 @@
---
-summary: "Use OpenAI via API keys or Codex subscription in Clawdbot"
+summary: "Use OpenAI via API keys or Codex subscription in Moltbot"
read_when:
-- You want to use OpenAI models in Clawdbot
+- You want to use OpenAI models in Moltbot
- You want Codex subscription auth instead of API keys
---
# OpenAI
@@ -17,9 +17,9 @@ Get your API key from the OpenAI dashboard.
### CLI setup
```bash
-clawdbot onboard --auth-choice openai-api-key
+moltbot onboard --auth-choice openai-api-key
# or non-interactive
-clawdbot onboard --openai-api-key "$OPENAI_API_KEY"
+moltbot onboard --openai-api-key "$OPENAI_API_KEY"
```
### Config snippet
@@ -40,10 +40,10 @@ Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or AP
```bash
# Run Codex OAuth in the wizard
-clawdbot onboard --auth-choice openai-codex
+moltbot onboard --auth-choice openai-codex
# Or run OAuth directly
-clawdbot models auth login --provider openai-codex
+moltbot models auth login --provider openai-codex
```
### Config snippet

View File

@@ -1,5 +1,5 @@
---
-summary: "Use OpenCode Zen (curated models) with Clawdbot"
+summary: "Use OpenCode Zen (curated models) with Moltbot"
read_when:
- You want OpenCode Zen for model access
- You want a curated list of coding-friendly models
@@ -13,9 +13,9 @@ Zen is currently in beta.
## CLI setup
```bash
-clawdbot onboard --auth-choice opencode-zen
+moltbot onboard --auth-choice opencode-zen
# or non-interactive
-clawdbot onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
+moltbot onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
```
## Config snippet

View File

@@ -1,8 +1,8 @@
---
-summary: "Use OpenRouter's unified API to access many models in Clawdbot"
+summary: "Use OpenRouter's unified API to access many models in Moltbot"
read_when:
- You want a single API key for many LLMs
-- You want to run models via OpenRouter in Clawdbot
+- You want to run models via OpenRouter in Moltbot
---
# OpenRouter
@@ -12,7 +12,7 @@ endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switc
## CLI setup
```bash
-clawdbot onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
+moltbot onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
```
## Config snippet

View File

@@ -1,7 +1,7 @@
---
-summary: "Use Qwen OAuth (free tier) in Clawdbot"
+summary: "Use Qwen OAuth (free tier) in Moltbot"
read_when:
-- You want to use Qwen with Clawdbot
+- You want to use Qwen with Moltbot
- You want free-tier OAuth access to Qwen Coder
---
# Qwen
@@ -12,7 +12,7 @@ Qwen provides a free-tier OAuth flow for Qwen Coder and Qwen Vision models
## Enable the plugin
```bash
-clawdbot plugins enable qwen-portal-auth
+moltbot plugins enable qwen-portal-auth
```
Restart the Gateway after enabling.
@@ -20,7 +20,7 @@ Restart the Gateway after enabling.
## Authenticate
```bash
-clawdbot models auth login --provider qwen-portal --set-default
+moltbot models auth login --provider qwen-portal --set-default
```
This runs the Qwen device-code OAuth flow and writes a provider entry to your
@@ -34,12 +34,12 @@ This runs the Qwen device-code OAuth flow and writes a provider entry to your
Switch models with:
```bash
-clawdbot models set qwen-portal/coder-model
+moltbot models set qwen-portal/coder-model
```
## Reuse Qwen Code CLI login
-If you already logged in with the Qwen Code CLI, Clawdbot will sync credentials
+If you already logged in with the Qwen Code CLI, Moltbot will sync credentials
from `~/.qwen/oauth_creds.json` when it loads the auth store. You still need a
`models.providers.qwen-portal` entry (use the login command above to create one).

View File

@@ -1,12 +1,12 @@
---
-summary: "Use Synthetic's Anthropic-compatible API in Clawdbot"
+summary: "Use Synthetic's Anthropic-compatible API in Moltbot"
read_when:
- You want to use Synthetic as a model provider
- You need a Synthetic API key or base URL setup
---
# Synthetic
-Synthetic exposes Anthropic-compatible endpoints. Clawdbot registers it as the
+Synthetic exposes Anthropic-compatible endpoints. Moltbot registers it as the
`synthetic` provider and uses the Anthropic Messages API.
## Quick setup
@@ -15,7 +15,7 @@ Synthetic exposes Anthropic-compatible endpoints. Clawdbot registers it as the
2) Run onboarding:
```bash
-clawdbot onboard --auth-choice synthetic-api-key
+moltbot onboard --auth-choice synthetic-api-key
```
The default model is set to:
@@ -59,7 +59,7 @@ synthetic/hf:MiniMaxAI/MiniMax-M2.1
}
```
-Note: Clawdbot's Anthropic client appends `/v1` to the base URL, so use
+Note: Moltbot's Anthropic client appends `/v1` to the base URL, so use
`https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic changes
its base URL, override `models.providers.synthetic.baseUrl`.
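Spelled out as a config fragment (a sketch; only the `models.providers.synthetic.baseUrl` path and the URL itself come from this page):

```json5
{
  models: {
    providers: {
      synthetic: {
        // The Anthropic client appends /v1, so do not include it here.
        baseUrl: "https://api.synthetic.new/anthropic"
      }
    }
  }
}
```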

View File

@@ -1,7 +1,7 @@
---
-summary: "Use Venice AI privacy-focused models in Clawdbot"
+summary: "Use Venice AI privacy-focused models in Moltbot"
read_when:
-- You want privacy-focused inference in Clawdbot
+- You want privacy-focused inference in Moltbot
- You want Venice AI setup guidance
---
# Venice AI (Venius highlight)
@@ -10,7 +10,7 @@ read_when:
Venice AI provides privacy-focused AI inference with support for uncensored models and access to major proprietary models through their anonymized proxy. All inference is private by default—no training on your data, no logging.
-## Why Venice in Clawdbot
+## Why Venice in Moltbot
- **Private inference** for open-source models (no logging).
- **Uncensored models** when you need them.
@@ -45,7 +45,7 @@ Venice offers two privacy levels — understanding this is key to choosing your
2. Go to **Settings → API Keys → Create new key**
3. Copy your API key (format: `vapi_xxxxxxxxxxxx`)
-### 2. Configure Clawdbot
+### 2. Configure Moltbot
**Option A: Environment Variable**
@@ -56,7 +56,7 @@ export VENICE_API_KEY="vapi_xxxxxxxxxxxx"
**Option B: Interactive Setup (Recommended)**
```bash
-clawdbot onboard --auth-choice venice-api-key
+moltbot onboard --auth-choice venice-api-key
```
This will:
@@ -68,7 +68,7 @@ This will:
**Option C: Non-interactive**
```bash
-clawdbot onboard --non-interactive \
+moltbot onboard --non-interactive \
--auth-choice venice-api-key \
--venice-api-key "vapi_xxxxxxxxxxxx"
```
@@ -76,12 +76,12 @@ clawdbot onboard --non-interactive \
### 3. Verify Setup
```bash
-clawdbot chat --model venice/llama-3.3-70b "Hello, are you working?"
+moltbot chat --model venice/llama-3.3-70b "Hello, are you working?"
```
## Model Selection
-After setup, Clawdbot shows all available Venice models. Pick based on your needs:
+After setup, Moltbot shows all available Venice models. Pick based on your needs:
- **Default (our pick)**: `venice/llama-3.3-70b` for private, balanced performance.
- **Best overall quality**: `venice/claude-opus-45` for hard jobs (Opus remains the strongest).
@@ -91,19 +91,19 @@ After setup, Clawdbot shows all available Venice models. Pick based on your need
Change your default model anytime:
```bash
-clawdbot models set venice/claude-opus-45
-clawdbot models set venice/llama-3.3-70b
+moltbot models set venice/claude-opus-45
+moltbot models set venice/llama-3.3-70b
```
List all available models:
```bash
-clawdbot models list | grep venice
+moltbot models list | grep venice
```
-## Configure via `clawdbot configure`
+## Configure via `moltbot configure`
-1. Run `clawdbot configure`
+1. Run `moltbot configure`
2. Select **Model/auth**
3. Choose **Venice AI**
@@ -159,7 +159,7 @@ clawdbot models list | grep venice
## Model Discovery
-Clawdbot automatically discovers models from the Venice API when `VENICE_API_KEY` is set. If the API is unreachable, it falls back to a static catalog.
+Moltbot automatically discovers models from the Venice API when `VENICE_API_KEY` is set. If the API is unreachable, it falls back to a static catalog.
The `/models` endpoint is public (no auth needed for listing), but inference requires a valid API key.
@@ -192,19 +192,19 @@ Venice uses a credit-based system. Check [venice.ai/pricing](https://venice.ai/p
```bash
# Use default private model
-clawdbot chat --model venice/llama-3.3-70b
+moltbot chat --model venice/llama-3.3-70b
# Use Claude via Venice (anonymized)
-clawdbot chat --model venice/claude-opus-45
+moltbot chat --model venice/claude-opus-45
# Use uncensored model
-clawdbot chat --model venice/venice-uncensored
+moltbot chat --model venice/venice-uncensored
# Use vision model with image
-clawdbot chat --model venice/qwen3-vl-235b-a22b
+moltbot chat --model venice/qwen3-vl-235b-a22b
# Use coding model
-clawdbot chat --model venice/qwen3-coder-480b-a35b-instruct
+moltbot chat --model venice/qwen3-coder-480b-a35b-instruct
```
## Troubleshooting
@@ -213,14 +213,14 @@ clawdbot chat --model venice/qwen3-coder-480b-a35b-instruct
```bash
echo $VENICE_API_KEY
-clawdbot models list | grep venice
+moltbot models list | grep venice
```
Ensure the key starts with `vapi_`.
### Model not available
-The Venice model catalog updates dynamically. Run `clawdbot models list` to see currently available models. Some models may be temporarily offline.
+The Venice model catalog updates dynamically. Run `moltbot models list` to see currently available models. Some models may be temporarily offline.
### Connection issues

View File

@@ -2,7 +2,7 @@
title: "Vercel AI Gateway"
summary: "Vercel AI Gateway setup (auth + model selection)"
read_when:
-- You want to use Vercel AI Gateway with Clawdbot
+- You want to use Vercel AI Gateway with Moltbot
- You need the API key env var or CLI auth choice
---
# Vercel AI Gateway
@@ -19,7 +19,7 @@ The [Vercel AI Gateway](https://vercel.com/ai-gateway) provides a unified API to
1) Set the API key (recommended: store it for the Gateway):
```bash
-clawdbot onboard --auth-choice ai-gateway-api-key
+moltbot onboard --auth-choice ai-gateway-api-key
```
2) Set a default model:
@@ -37,7 +37,7 @@ clawdbot onboard --auth-choice ai-gateway-api-key
## Non-interactive example
```bash
-clawdbot onboard --non-interactive \
+moltbot onboard --non-interactive \
--mode local \
--auth-choice ai-gateway-api-key \
--ai-gateway-api-key "$AI_GATEWAY_API_KEY"

View File

@@ -1,21 +1,21 @@
---
-summary: "Use Z.AI (GLM models) with Clawdbot"
+summary: "Use Z.AI (GLM models) with Moltbot"
read_when:
-- You want Z.AI / GLM models in Clawdbot
+- You want Z.AI / GLM models in Moltbot
- You need a simple ZAI_API_KEY setup
---
# Z.AI
Z.AI is the API platform for **GLM** models. It provides REST APIs for GLM and uses API keys
-for authentication. Create your API key in the Z.AI console. Clawdbot uses the `zai` provider
+for authentication. Create your API key in the Z.AI console. Moltbot uses the `zai` provider
with a Z.AI API key.
## CLI setup
```bash
-clawdbot onboard --auth-choice zai-api-key
+moltbot onboard --auth-choice zai-api-key
# or non-interactive
-clawdbot onboard --zai-api-key "$ZAI_API_KEY"
+moltbot onboard --zai-api-key "$ZAI_API_KEY"
```
## Config snippet