Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via their
anonymized proxy.
This integration adds:
- Complete model catalog with 25 models:
- 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
- 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from Venice API with fallback to static catalog
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
- Privacy modes (private vs anonymized)
- All 25 models with context windows and features
- Streaming, function calling, and vision support
- Model selection recommendations
Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)
Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
* feat: add chunking mode for outbound messages
- Introduced `chunkMode` option in various account configurations to allow splitting messages by "length" or "newline".
- Updated message processing to handle chunking based on the selected mode.
- Added tests for new chunking functionality, ensuring correct behavior for both modes.
* feat: enhance chunking mode documentation and configuration
- Added `chunkMode` option to the BlueBubbles account configuration, allowing users to choose between "length" and "newline" for message chunking.
- Updated documentation to clarify the behavior of the `chunkMode` setting.
- Adjusted account merging logic to incorporate the new `chunkMode` configuration.
* refactor: simplify chunk mode handling for BlueBubbles
- Removed `chunkMode` configuration from various account schemas and types, centralizing chunk mode logic to BlueBubbles only.
- Updated `processMessage` to default to "newline" for BlueBubbles chunking.
- Adjusted tests to reflect changes in chunk mode handling for BlueBubbles, ensuring proper functionality.
* fix: update default chunk mode to 'length' for BlueBubbles
- Changed the default value of `chunkMode` from 'newline' to 'length' in the BlueBubbles configuration and related processing functions.
- Updated documentation to reflect the new default behavior for chunking messages.
- Adjusted tests to ensure the correct default value is returned for BlueBubbles chunk mode.
* feat: Add Ollama provider with automatic model discovery
- Add Ollama provider builder with automatic model detection
- Discover available models from local Ollama instance via /api/tags API
- Make resolveImplicitProviders async to support dynamic model discovery
- Add comprehensive Ollama documentation with setup and usage guide
- Add tests for Ollama provider integration
- Update provider index and model providers documentation
Closes#1531
* fix: Correct Ollama provider type definitions and error handling
- Fix input property type to match ModelDefinitionConfig
- Import ModelDefinitionConfig type properly
- Fix error template literal to use String() for type safety
- Simplify return type signature of discoverOllamaModels
* fix: Suppress unhandled promise warnings from ensureClawdbotModelsJson in tests
- Cast unused promise returns to 'unknown' to suppress TypeScript warnings
- Tests that don't await the promise are intentionally not awaiting it
- This fixes the failing test suite caused by unawaited async calls
* fix: Skip Ollama model discovery during tests
- Check for VITEST or NODE_ENV=test before making HTTP requests
- Prevents test timeouts and hangs from network calls
- Ollama discovery will still work in production/normal usage
* fix: Set VITEST environment variable in test setup
- Ensures Ollama discovery is skipped in all test runs
- Prevents network calls during tests that could cause timeouts
* test: Temporarily skip Ollama provider tests to diagnose CI failures
* fix: Make Ollama provider opt-in to avoid breaking existing tests
**Root Cause:**
The Ollama provider was being added to ALL configurations by default
(with a fallback API key of 'ollama-local'), which broke tests that
expected NO providers when no API keys were configured.
**Solution:**
- Removed the default fallback API key for Ollama
- Ollama provider now requires explicit configuration via:
- OLLAMA_API_KEY environment variable, OR
- Ollama profile in auth store
- Updated documentation to reflect the explicit configuration requirement
- Added a test to verify Ollama is not added by default
This fixes all 4 failing test suites:
- checks (node, test, pnpm test)
- checks (bun, test, bunx vitest run)
- checks-windows (node, test, pnpm test)
- checks-macos (test, pnpm test)
Closes#1531
- Add EC2 Instance Roles section with workaround for IMDS credential detection
- Include step-by-step IAM role and instance profile setup
- Document required permissions (bedrock:InvokeModel, ListFoundationModels)
- Update example model to Claude Opus 4.5 (latest)
The AWS SDK auto-detects EC2 instance roles via IMDS, but Clawdbot's
credential detection only checks environment variables. The workaround
is to set AWS_PROFILE=default to signal credentials are available.
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat(discord): add exec approval forwarding to DMs
Add support for forwarding exec approval requests to Discord DMs,
allowing users to approve/deny command execution via interactive buttons.
Features:
- New DiscordExecApprovalHandler that connects to gateway and listens
for exec.approval.requested/resolved events
- Sends DMs with embeds showing command details and 3 buttons:
Allow once, Always allow, Deny
- Configurable via channels.discord.execApprovals with:
- enabled: boolean
- approvers: Discord user IDs to notify
- agentFilter: only forward for specific agents
- sessionFilter: only forward for matching session patterns
- Updates message embed when approval is resolved or expires
Also fixes exec completion routing: when async exec completes after
approval, the heartbeat now uses a specialized prompt to ensure the
model relays the result to the user instead of responding HEARTBEAT_OK.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* feat: generic exec approvals forwarding (#1621) (thanks @czekaj)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>