feat: Add Ollama provider with automatic model discovery (#1606)
* feat: Add Ollama provider with automatic model discovery - Add Ollama provider builder with automatic model detection - Discover available models from local Ollama instance via /api/tags API - Make resolveImplicitProviders async to support dynamic model discovery - Add comprehensive Ollama documentation with setup and usage guide - Add tests for Ollama provider integration - Update provider index and model providers documentation Closes #1531 * fix: Correct Ollama provider type definitions and error handling - Fix input property type to match ModelDefinitionConfig - Import ModelDefinitionConfig type properly - Fix error template literal to use String() for type safety - Simplify return type signature of discoverOllamaModels * fix: Suppress unhandled promise warnings from ensureClawdbotModelsJson in tests - Cast unused promise returns to 'unknown' to suppress TypeScript warnings - Tests that don't await the promise are intentionally not awaiting it - This fixes the failing test suite caused by unawaited async calls * fix: Skip Ollama model discovery during tests - Check for VITEST or NODE_ENV=test before making HTTP requests - Prevents test timeouts and hangs from network calls - Ollama discovery will still work in production/normal usage * fix: Set VITEST environment variable in test setup - Ensures Ollama discovery is skipped in all test runs - Prevents network calls during tests that could cause timeouts * test: Temporarily skip Ollama provider tests to diagnose CI failures * fix: Make Ollama provider opt-in to avoid breaking existing tests **Root Cause:** The Ollama provider was being added to ALL configurations by default (with a fallback API key of 'ollama-local'), which broke tests that expected NO providers when no API keys were configured. **Solution:** - Removed the default fallback API key for Ollama - Ollama provider now requires explicit configuration via: - OLLAMA_API_KEY environment variable, OR - Ollama profile in auth store - Updated documentation to reflect the explicit configuration requirement - Added a test to verify Ollama is not added by default This fixes all 4 failing test suites: - checks (node, test, pnpm test) - checks (bun, test, bunx vitest run) - checks-windows (node, test, pnpm test) - checks-macos (test, pnpm test) Closes #1531
This commit is contained in:
@@ -236,6 +236,30 @@ MiniMax is configured via `models.providers` because it uses custom endpoints:
|
||||
|
||||
See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
|
||||
|
||||
### Ollama
|
||||
|
||||
Ollama is a local LLM runtime that provides an OpenAI-compatible API:
|
||||
|
||||
- Provider: `ollama`
|
||||
- Auth: None required (local server)
|
||||
- Example model: `ollama/llama3.3`
|
||||
- Installation: https://ollama.ai
|
||||
|
||||
```bash
|
||||
# Install Ollama, then pull a model:
|
||||
ollama pull llama3.3
|
||||
```
|
||||
|
||||
```json5
|
||||
{
|
||||
agents: {
|
||||
defaults: { model: { primary: "ollama/llama3.3" } }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Ollama is automatically detected when running locally at `http://127.0.0.1:11434/v1`. See [/providers/ollama](/providers/ollama) for model recommendations and custom configuration.
|
||||
|
||||
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
|
||||
|
||||
Example (OpenAI‑compatible):
|
||||
|
||||
Reference in New Issue
Block a user