| summary | read_when |
| --- | --- |
| Run Clawdbot with Ollama (local LLM runtime) | |
# Ollama
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Clawdbot integrates with Ollama's OpenAI-compatible API and automatically discovers the models you have installed.
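Under the hood, "OpenAI-compatible" means requests go to Ollama's `/v1` endpoints on the default local port. The TypeScript sketch below is an illustration of such a raw request, not Clawdbot code; the model name and prompt are placeholders, and Ollama does not validate the API key value for local use:

```typescript
// Illustration only: a direct request to Ollama's OpenAI-compatible API.
// Clawdbot issues equivalent requests for you once the provider is configured.
const response = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Ollama ignores the key locally, but OpenAI-style clients still expect one.
    Authorization: "Bearer ollama-local",
  },
  body: JSON.stringify({
    model: "llama3.3", // any model you have pulled
    messages: [{ role: "user", content: "Say hello" }],
  }),
});

const completion = await response.json();
console.log(completion.choices?.[0]?.message?.content);
```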
## Quick start
1. Install Ollama: https://ollama.ai

2. Pull a model:

   ```bash
   ollama pull llama3.3
   # or
   ollama pull qwen2.5-coder:32b
   # or
   ollama pull deepseek-r1:32b
   ```

3. Configure Clawdbot with an Ollama API key:

   ```bash
   # Set the environment variable
   export OLLAMA_API_KEY="ollama-local"

   # Or configure it in your config file
   clawdbot config set models.providers.ollama.apiKey "ollama-local"
   ```

4. Use Ollama models:

   ```
   {
     agents: {
       defaults: {
         model: { primary: "ollama/llama3.3" }
       }
     }
   }
   ```
## Model Discovery

When the Ollama provider is configured, Clawdbot automatically detects all models installed on your Ollama instance by querying the `/api/tags` endpoint at `http://localhost:11434`. You don't need to manually configure individual models in your config file.
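For a sense of what that discovery step involves, here is a minimal TypeScript sketch, not Clawdbot's actual implementation, that queries `/api/tags` and maps installed models to `ollama/<name>` identifiers; the `:latest` suffix handling is an assumption:

```typescript
// Sketch of model discovery against a local Ollama instance (illustrative only).
interface OllamaTagsResponse {
  models?: { name: string }[]; // e.g. "llama3.3:latest", "qwen2.5-coder:32b"
}

async function discoverOllamaModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama discovery failed: ${String(res.status)}`);

  const body = (await res.json()) as OllamaTagsResponse;
  // Assumption: the default ":latest" tag is dropped, so "llama3.3:latest"
  // becomes "ollama/llama3.3", matching the model ids used in this doc.
  return (body.models ?? []).map((m) => `ollama/${m.name.replace(/:latest$/, "")}`);
}
```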
To see what models are available:

```bash
ollama list
clawdbot models list
```
To add a new model, simply pull it with Ollama:

```bash
ollama pull mistral
```

The new model will be automatically discovered and available to use.
## Configuration

### Basic Setup

The simplest way to enable Ollama is via environment variable:

```bash
export OLLAMA_API_KEY="ollama-local"
```
### Custom Base URL

If Ollama is running on a different host or port:

```
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://192.168.1.100:11434/v1"
      }
    }
  }
}
```
### Model Selection

Once configured, all your Ollama models are available:

```
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"]
      }
    }
  }
}
```
## Advanced

### Reasoning Models

Models with "r1" or "reasoning" in their name are automatically detected as reasoning models and will use extended thinking features:

```bash
ollama pull deepseek-r1:32b
```
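The detection is purely name-based; conceptually it amounts to a check like the following sketch (the exact heuristic in Clawdbot may differ):

```typescript
// Illustrative name-based check for reasoning models (not the exact Clawdbot heuristic).
function isReasoningModel(modelId: string): boolean {
  const name = modelId.toLowerCase();
  return name.includes("r1") || name.includes("reasoning");
}

isReasoningModel("deepseek-r1:32b"); // true  -> extended thinking features enabled
isReasoningModel("llama3.3");        // false -> standard completion behavior
```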
### Model Costs

Ollama is free and runs locally, so all model costs are set to $0.
### Context Windows

Ollama models use default context windows. You can customize these in your provider configuration if needed.
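If the defaults don't suit a model you rely on, the override lives alongside the provider settings. The snippet below is only a sketch of the shape: the `models` block and `contextWindow` key are hypothetical field names, so check the configuration reference for the exact schema.

```
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        // Hypothetical per-model override; verify field names against the config reference.
        models: [
          { id: "llama3.3", contextWindow: 128000 }
        ]
      }
    }
  }
}
```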
## Troubleshooting

### Ollama not detected

Make sure Ollama is running:

```bash
ollama serve
```

And that the API is accessible:

```bash
curl http://localhost:11434/api/tags
```
### No models available

Pull at least one model:

```bash
ollama list          # See what's installed
ollama pull llama3.3 # Pull a model
```
### Connection refused

Check that Ollama is running on the correct port:

```bash
# Check if Ollama is running
ps aux | grep ollama

# Or restart Ollama
ollama serve
```
## See Also
- Model Providers - Overview of all providers
- Model Selection - How to choose models
- Configuration - Full config reference