Venice AI is a privacy-focused AI inference provider with support for uncensored models and access to major proprietary models via their anonymized proxy.

This integration adds:

- Complete model catalog with 25 models:
  - 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
  - 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from the Venice API with fallback to a static catalog
- `VENICE_API_KEY` environment variable support
- Interactive onboarding via the 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when an API key is detected
- Comprehensive documentation covering:
  - Privacy modes (private vs anonymized)
  - All 25 models with context windows and features
  - Streaming, function calling, and vision support
  - Model selection recommendations

Privacy modes:

- Private: fully private, no logging (open-source models)
- Anonymized: proxied through Venice (proprietary models)

Default model: `venice/llama-3.3-70b` (good balance of capability + privacy)

Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
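Because the Venice API is OpenAI-compatible, a request against it looks like any OpenAI-style chat completion. The sketch below only builds the request; the `chat_request` helper is hypothetical, and the assumption that Clawdbot's `venice/llama-3.3-70b` id maps to the bare `llama-3.3-70b` model name on the API side is illustrative, not confirmed by the integration:

```python
# Sketch: build an OpenAI-style chat completion request for Venice's
# OpenAI-compatible endpoint (https://api.venice.ai/api/v1). Nothing is sent.
import os


def chat_request(prompt: str, model: str = "llama-3.3-70b") -> dict:
    """Assemble URL, headers, and body for a Venice chat completion call."""
    base_url = "https://api.venice.ai/api/v1"
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            # Venice keys are read from the VENICE_API_KEY environment variable.
            "Authorization": f"Bearer {os.environ.get('VENICE_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Any OpenAI client library could be pointed at the same base URL instead of assembling the request by hand.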
47 lines
1.4 KiB
Markdown
---
summary: "Model providers (LLMs) supported by Clawdbot"
read_when:
  - You want to choose a model provider
  - You need a quick overview of supported LLM backends
---

# Model Providers

Clawdbot can use many LLM providers. Pick a provider, authenticate, then set the
default model as `provider/model`.

Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).

## Quick start

1) Authenticate with the provider (usually via `clawdbot onboard`).
2) Set the default model:

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } }
}
```

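The same shape works for any provider. For example, using the Venice integration's default model (`venice/llama-3.3-70b`):

```json5
{
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } }
}
```

Venice reads its key from the `VENICE_API_KEY` environment variable and auto-registers the provider when the key is detected, so export the key before starting Clawdbot.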
## Provider docs

- [OpenAI (API + Codex)](/providers/openai)
- [Anthropic (API + Claude Code CLI)](/providers/anthropic)
- [Qwen (OAuth)](/providers/qwen)
- [OpenRouter](/providers/openrouter)
- [Vercel AI Gateway](/providers/vercel-ai-gateway)
- [Moonshot AI (Kimi + Kimi Code)](/providers/moonshot)
- [OpenCode Zen](/providers/opencode)
- [Amazon Bedrock](/bedrock)
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venice AI (privacy-focused)](/providers/venice)
- [Ollama (local models)](/providers/ollama)

## Transcription providers

- [Deepgram (audio transcription)](/providers/deepgram)

For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,
see [Model providers](/concepts/model-providers).