feat: Add Ollama provider with automatic model discovery (#1606)

* feat: Add Ollama provider with automatic model discovery

- Add Ollama provider builder with automatic model detection
- Discover available models from local Ollama instance via /api/tags API
- Make resolveImplicitProviders async to support dynamic model discovery
- Add comprehensive Ollama documentation with setup and usage guide
- Add tests for Ollama provider integration
- Update provider index and model providers documentation

Closes #1531

* fix: Correct Ollama provider type definitions and error handling

- Fix input property type to match ModelDefinitionConfig
- Import ModelDefinitionConfig type properly
- Fix error template literal to use String() for type safety
- Simplify return type signature of discoverOllamaModels

* fix: Suppress unhandled promise warnings from ensureClawdbotModelsJson in tests

- Cast unused promise returns to 'unknown' to suppress TypeScript warnings
- Tests that skip awaiting the promise do so intentionally
- This fixes the failing test suite caused by unawaited async calls
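
A minimal illustration of the pattern (the call site here is hypothetical; only the function name comes from this change):

```ts
declare function ensureClawdbotModelsJson(): Promise<void>;

// Intentionally not awaited: casting the returned promise to `unknown`
// marks it as deliberately unused, silencing the unhandled-promise warning.
ensureClawdbotModelsJson() as unknown;
```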

* fix: Skip Ollama model discovery during tests

- Check for VITEST or NODE_ENV=test before making HTTP requests
- Prevents test timeouts and hangs from network calls
- Ollama discovery will still work in production/normal usage
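
A sketch of the guard this describes (placement and exact shape in the real code may differ):

```ts
// Skip network-backed model discovery under a test harness,
// checking the same signals named above.
function shouldSkipOllamaDiscovery(): boolean {
  return Boolean(process.env.VITEST) || process.env.NODE_ENV === "test";
}
```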

* fix: Set VITEST environment variable in test setup

- Ensures Ollama discovery is skipped in all test runs
- Prevents network calls during tests that could cause timeouts

* test: Temporarily skip Ollama provider tests to diagnose CI failures

* fix: Make Ollama provider opt-in to avoid breaking existing tests

**Root Cause:**
The Ollama provider was being added to ALL configurations by default
(with a fallback API key of 'ollama-local'), which broke tests that
expected NO providers when no API keys were configured.

**Solution:**
- Removed the default fallback API key for Ollama
- Ollama provider now requires explicit configuration via:
  - OLLAMA_API_KEY environment variable, OR
  - Ollama profile in auth store
- Updated documentation to reflect the explicit configuration requirement
- Added a test to verify Ollama is not added by default
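
A sketch of the resulting opt-in resolution (names other than OLLAMA_API_KEY are hypothetical):

```ts
type OllamaConfig = { apiKey: string; baseUrl: string };

// Hypothetical sketch: with no fallback key, an unconfigured environment
// resolves to "no Ollama provider" rather than a default one.
function maybeOllamaProvider(authProfileKey?: string): OllamaConfig | undefined {
  const apiKey = process.env.OLLAMA_API_KEY ?? authProfileKey;
  if (apiKey === undefined) return undefined; // not configured -> not added
  return { apiKey, baseUrl: "http://localhost:11434/v1" };
}
```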

This fixes all 4 failing test suites:
- checks (node, test, pnpm test)
- checks (bun, test, bunx vitest run)
- checks-windows (node, test, pnpm test)
- checks-macos (test, pnpm test)

Closes #1531
Authored by Abhay, committed by GitHub, 2026-01-24 22:38:52 +00:00
commit 51e3d16be9 (parent c00cbd080d)
15 changed files with 306 additions and 10 deletions


@@ -35,6 +35,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugi
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Ollama (local models)](/providers/ollama)
## Transcription providers

docs/providers/ollama.md (new file, 169 lines)

@@ -0,0 +1,169 @@
---
summary: "Run Clawdbot with Ollama (local LLM runtime)"
read_when:
- You want to run Clawdbot with local models via Ollama
- You need Ollama setup and configuration guidance
---
# Ollama
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Clawdbot integrates with Ollama's OpenAI-compatible API and **automatically discovers models** installed on your machine.
## Quick start
1) Install Ollama: https://ollama.ai
2) Pull a model:
```bash
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b
```
3) Configure Clawdbot with an Ollama API key:
```bash
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
clawdbot config set models.providers.ollama.apiKey "ollama-local"
```
4) Use Ollama models:
```json5
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" }
    }
  }
}
```
## Model Discovery
When the Ollama provider is configured, Clawdbot automatically detects all models installed on your Ollama instance by querying the `/api/tags` endpoint at `http://localhost:11434`. You don't need to manually configure individual models in your config file.
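For illustration, discovery amounts to something like this sketch (not Clawdbot's actual implementation; the `/api/tags` response shape is Ollama's documented format):
```ts
// Ask a local Ollama instance which models are installed.
async function discoverOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama /api/tags failed: ${res.status}`);
  const data = (await res.json()) as { models: Array<{ name: string }> };
  return data.models.map((m) => m.name); // e.g. ["llama3.3:latest"]
}
```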
To see what models are available:
```bash
ollama list
clawdbot models list
```
To add a new model, simply pull it with Ollama:
```bash
ollama pull mistral
```
The new model will be automatically discovered and available to use.
## Configuration
### Basic Setup
The simplest way to enable Ollama is via environment variable:
```bash
export OLLAMA_API_KEY="ollama-local"
```
### Custom Base URL
If Ollama is running on a different host or port:
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://192.168.1.100:11434/v1"
      }
    }
  }
}
```
### Model Selection
Once configured, all your Ollama models are available:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"]
      }
    }
  }
}
```
## Advanced
### Reasoning Models
Models with "r1" or "reasoning" in their name are automatically detected as reasoning models and will use extended thinking features:
```bash
ollama pull deepseek-r1:32b
```
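The heuristic amounts to a simple substring check on the model name, roughly like this sketch (the real detection logic may differ):
```ts
// Names containing "r1" or "reasoning" are treated as reasoning models.
function isReasoningModel(name: string): boolean {
  const n = name.toLowerCase();
  return n.includes("r1") || n.includes("reasoning");
}

isReasoningModel("deepseek-r1:32b"); // true
```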
### Model Costs
Ollama is free and runs locally, so all model costs are set to $0.
### Context Windows
Ollama models use default context windows. You can customize these in your provider configuration if needed.
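For example, an override might look roughly like this (the `models` and `contextWindow` keys here are illustrative, not confirmed schema; see [Configuration](/configuration) for the exact shape):
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        // hypothetical per-model override, shown for shape only
        models: [{ id: "llama3.3", contextWindow: 128000 }]
      }
    }
  }
}
```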
## Troubleshooting
### Ollama not detected
Make sure Ollama is running:
```bash
ollama serve
```
And that the API is accessible:
```bash
curl http://localhost:11434/api/tags
```
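A healthy instance responds with a JSON list of installed models, roughly of this shape (fields abbreviated):
```json
{ "models": [{ "name": "llama3.3:latest", "size": 42520413916 }] }
```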
### No models available
Pull at least one model:
```bash
ollama list # See what's installed
ollama pull llama3.3 # Pull a model
```
### Connection refused
Check that Ollama is running on the correct port:
```bash
# Check if Ollama is running
ps aux | grep ollama
# Or restart Ollama
ollama serve
```
## See Also
- [Model Providers](/concepts/model-providers) - Overview of all providers
- [Model Selection](/agents/model-selection) - How to choose models
- [Configuration](/configuration) - Full config reference