docs: refine venice highlight
@@ -9,6 +9,15 @@ read_when:
Clawdbot can use many LLM providers. Pick one, authenticate, then set the default
model as `provider/model`.
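As an illustrative sketch of the `provider/model` form only (the file name and the `model` key here are assumptions, not documented on this page), a configured default might look like:

```json
{
  "model": "venice/llama-3.3-70b"
}
```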
## Highlight: Venius (Venice AI)
Venius is our recommended Venice AI setup: privacy-first inference by default, with the option of using Opus for the hardest tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)
See [Venice AI](/providers/venice).
## Quick start (two steps)
1) Authenticate with the provider (usually via `clawdbot onboard`).
@@ -32,6 +41,7 @@ model as `provider/model`.
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venius (Venice AI)](/providers/venice)
- [Amazon Bedrock](/bedrock)
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,