docs: refine venice highlight

Peter Steinberger
2026-01-25 01:49:31 +00:00
parent 9205ee55de
commit b9dc117309
4 changed files with 80 additions and 6 deletions


@@ -6,6 +6,7 @@ Docs: https://docs.clawd.bot
### Highlights
- Ollama: provider discovery + docs. (#1606) Thanks @abhaymundhara. https://docs.clawd.bot/providers/ollama
- Venius (Venice AI): highlight provider guide + cross-links + expanded guidance. https://docs.clawd.bot/providers/venice
### Changes
- TTS: add Edge TTS provider fallback, defaulting to keyless Edge with MP3 retry on format failures. (#1668) Thanks @steipete. https://docs.clawd.bot/tts


@@ -11,6 +11,15 @@ default model as `provider/model`.
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).
## Highlight: Venius (Venice AI)
Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)
See [Venice AI](/providers/venice).
## Quick start
1) Authenticate with the provider (usually via `clawdbot onboard`).
@@ -35,7 +44,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugi
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venius (Venice AI, privacy-focused)](/providers/venice)
- [Ollama (local models)](/providers/ollama)
## Transcription providers


@@ -9,6 +9,15 @@ read_when:
Clawdbot can use many LLM providers. Pick one, authenticate, then set the default
model as `provider/model`.
## Highlight: Venius (Venice AI)
Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for the hardest tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)
See [Venice AI](/providers/venice).
## Quick start (two steps)
1) Authenticate with the provider (usually via `clawdbot onboard`).
@@ -32,6 +41,7 @@ model as `provider/model`.
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venius (Venice AI)](/providers/venice)
- [Amazon Bedrock](/bedrock)
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,


@@ -1,7 +1,22 @@
---
summary: "Use Venice AI privacy-focused models in Clawdbot"
read_when:
- You want privacy-focused inference in Clawdbot
- You want Venice AI setup guidance
---
# Venice AI (Venius highlight)
**Venius** is our highlighted Venice setup: privacy-first inference with optional anonymized access to proprietary models.
Venice AI provides privacy-focused AI inference with support for uncensored models and access to major proprietary models through their anonymized proxy. All inference is private by default—no training on your data, no logging.
## Why Venice in Clawdbot
- **Private inference** for open-source models (no logging).
- **Uncensored models** when you need them.
- **Anonymized access** to proprietary models (Opus/GPT/Gemini) when quality matters.
- OpenAI-compatible `/v1` endpoints.
## Privacy Modes
Venice offers two privacy levels — understanding this is key to choosing your model:
@@ -20,6 +35,7 @@ Venice offers two privacy levels — understanding this is key to choosing your
- **Streaming**: ✅ Supported on all models
- **Function calling**: ✅ Supported on select models (check model capabilities)
- **Vision**: ✅ Supported on models with vision capability
- **No hard rate limits**: Fair-use throttling may apply for extreme usage
## Setup
@@ -54,8 +70,7 @@ This will:
```bash
clawdbot onboard --non-interactive \
  --auth-choice venice-api-key \
  --venice-api-key "vapi_xxxxxxxxxxxx"
```
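If you prefer not to pass the key on the command line, it can also be supplied via the environment. This is a sketch assuming the same `VENICE_API_KEY` variable that the config file example on this page reads:

```shell
# Assumption: clawdbot picks up VENICE_API_KEY from the environment,
# as referenced by the env block in the config file example below.
export VENICE_API_KEY="vapi_xxxxxxxxxxxx"
```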
### 3. Verify Setup
@@ -68,8 +83,10 @@ clawdbot chat --model venice/llama-3.3-70b "Hello, are you working?"
After setup, Clawdbot shows all available Venice models. Pick based on your needs:
- **Default (our pick)**: `venice/llama-3.3-70b` for private, balanced performance.
- **Best overall quality**: `venice/claude-opus-45` for hard jobs (Opus remains the strongest).
- **Privacy**: Choose "private" models for fully private inference.
- **Capability**: Choose "anonymized" models to access Claude, GPT, Gemini via Venice's proxy.
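The default can also be pinned in the config file rather than chosen interactively. A minimal fragment, matching the shape of the full config example later on this page:

```json5
{
  // assumed shape; see the complete provider block in the config file example
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } }
}
```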
Change your default model anytime:
@@ -84,11 +101,18 @@ List all available models:
clawdbot models list | grep venice
```
## Configure via `clawdbot configure`
1. Run `clawdbot configure`
2. Select **Model/auth**
3. Choose **Venice AI**
## Which Model Should I Use?
| Use Case | Recommended Model | Why |
|----------|-------------------|-----|
| **General chat** | `llama-3.3-70b` | Good all-around, fully private |
| **Best overall quality** | `claude-opus-45` | Opus remains the strongest for hard tasks |
| **Privacy + Claude quality** | `claude-opus-45` | Best reasoning via anonymized proxy |
| **Coding** | `qwen3-coder-480b-a35b-instruct` | Code-optimized, 262k context |
| **Vision tasks** | `qwen3-vl-235b-a22b` | Best private vision model |
@@ -202,6 +226,36 @@ The Venice model catalog updates dynamically. Run `clawdbot models list` to see
Venice API is at `https://api.venice.ai/api/v1`. Ensure your network allows HTTPS connections.
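A quick connectivity check can be sketched as below. The `/models` route is an assumption based on Venice's OpenAI-compatible `/v1` endpoints, so verify the exact path against Venice's API reference:

```shell
# Hedged sketch: probe the Venice endpoint if a key is available.
# Assumes VENICE_API_KEY holds your key (vapi_...).
BASE_URL="https://api.venice.ai/api/v1"
if [ -n "${VENICE_API_KEY:-}" ]; then
  if curl -sf "$BASE_URL/models" -H "Authorization: Bearer $VENICE_API_KEY" >/dev/null; then
    STATUS="reachable"
  else
    STATUS="unreachable"
  fi
else
  STATUS="missing-key"
fi
echo "venice: $STATUS"
```

If this prints `venice: unreachable`, check firewalls or proxies blocking HTTPS to `api.venice.ai`.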
## Config file example
```json5
{
env: { VENICE_API_KEY: "vapi_..." },
agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } },
models: {
mode: "merge",
providers: {
venice: {
baseUrl: "https://api.venice.ai/api/v1",
apiKey: "${VENICE_API_KEY}",
api: "openai-completions",
models: [
{
id: "llama-3.3-70b",
name: "Llama 3.3 70B",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 131072,
maxTokens: 8192
}
]
}
}
}
}
```
## Links
- [Venice AI](https://venice.ai)