Commit Graph

23 Commits

Author SHA1 Message Date
Peter Steinberger
2e3b14187b fix: stabilize venice model discovery 2026-01-25 02:43:08 +00:00
jonisjongithub
7540d1e8c1 feat: add Venice AI provider integration
Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via its
anonymized proxy.

This integration adds:

- Complete model catalog with 25 models:
  - 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
  - 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from Venice API with fallback to static catalog
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
  - Privacy modes (private vs anonymized)
  - All 25 models with context windows and features
  - Streaming, function calling, and vision support
  - Model selection recommendations

Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)

Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
2026-01-25 01:11:57 +00:00
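As a rough illustration of the auto-discovery-with-fallback behavior described in the commit above, a minimal TypeScript sketch might look like the following. The VENICE_API_KEY variable, the https://api.venice.ai/api/v1 base URL, and the llama-3.3-70b default come from the commit message; the /models listing path, the response shape, and the STATIC_VENICE_CATALOG placeholder are assumptions for illustration only.

```typescript
// Minimal sketch of Venice model discovery with a static fallback.
// Grounded in the commit: VENICE_API_KEY, the base URL, and the
// llama-3.3-70b default. Assumed: the /models listing path, the
// response shape, and the STATIC_VENICE_CATALOG placeholder.

interface VeniceModel {
  id: string;
}

const VENICE_BASE_URL = "https://api.venice.ai/api/v1";

// Placeholder for the bundled 25-model catalog shipped with the provider.
const STATIC_VENICE_CATALOG: VeniceModel[] = [
  { id: "llama-3.3-70b" }, // default model (venice/llama-3.3-70b)
  // ...remaining private and anonymized models
];

async function discoverVeniceModels(): Promise<VeniceModel[]> {
  const apiKey = process.env.VENICE_API_KEY;
  // The provider is only auto-registered when an API key is detected.
  if (!apiKey) return [];

  try {
    // Assumed OpenAI-compatible model listing endpoint.
    const res = await fetch(`${VENICE_BASE_URL}/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`Venice API returned ${res.status}`);
    const body = (await res.json()) as { data?: VeniceModel[] };
    return body.data?.length ? body.data : STATIC_VENICE_CATALOG;
  } catch {
    // Network or auth failure: fall back to the static catalog.
    return STATIC_VENICE_CATALOG;
  }
}
```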
Abhay
51e3d16be9 feat: Add Ollama provider with automatic model discovery (#1606)
* feat: Add Ollama provider with automatic model discovery

- Add Ollama provider builder with automatic model detection
- Discover available models from local Ollama instance via /api/tags API
- Make resolveImplicitProviders async to support dynamic model discovery
- Add comprehensive Ollama documentation with setup and usage guide
- Add tests for Ollama provider integration
- Update provider index and model providers documentation

Closes #1531

* fix: Correct Ollama provider type definitions and error handling

- Fix input property type to match ModelDefinitionConfig
- Import ModelDefinitionConfig type properly
- Fix error template literal to use String() for type safety
- Simplify return type signature of discoverOllamaModels

* fix: Suppress unhandled promise warnings from ensureClawdbotModelsJson in tests

- Cast unused promise returns to 'unknown' to suppress TypeScript warnings
- Tests that don't await the promise omit the await intentionally
- This fixes the failing test suite caused by unawaited async calls

* fix: Skip Ollama model discovery during tests

- Check for VITEST or NODE_ENV=test before making HTTP requests
- Prevents test timeouts and hangs from network calls
- Ollama discovery will still work in production/normal usage

* fix: Set VITEST environment variable in test setup

- Ensures Ollama discovery is skipped in all test runs
- Prevents network calls during tests that could cause timeouts

* test: Temporarily skip Ollama provider tests to diagnose CI failures

* fix: Make Ollama provider opt-in to avoid breaking existing tests

**Root Cause:**
The Ollama provider was being added to ALL configurations by default
(with a fallback API key of 'ollama-local'), which broke tests that
expected NO providers when no API keys were configured.

**Solution:**
- Removed the default fallback API key for Ollama
- Ollama provider now requires explicit configuration via:
  - OLLAMA_API_KEY environment variable, OR
  - Ollama profile in auth store
- Updated documentation to reflect the explicit configuration requirement
- Added a test to verify Ollama is not added by default

This fixes all 4 failing test suites:
- checks (node, test, pnpm test)
- checks (bun, test, bunx vitest run)
- checks-windows (node, test, pnpm test)
- checks-macos (test, pnpm test)

Closes #1531
2026-01-24 22:38:52 +00:00
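A rough sketch of the opt-in discovery flow this series of fixes converged on. The /api/tags endpoint, the VITEST/NODE_ENV=test guard, and the OLLAMA_API_KEY requirement come from the commit messages; the default base URL and the string[] return type are assumptions, and the auth-store profile path mentioned in the commit is omitted here.

```typescript
// Minimal sketch of opt-in Ollama model discovery.
// Grounded in the commits: the /api/tags endpoint, the VITEST/NODE_ENV=test
// guard, and the OLLAMA_API_KEY opt-in. Assumed: the default base URL and
// the return type; the auth-store profile check is left out.

interface OllamaTag {
  name: string; // e.g. "llama3.2:latest"
}

async function discoverOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  // Never hit the network during test runs (prevents timeouts and hangs).
  if (process.env.VITEST || process.env.NODE_ENV === "test") return [];

  // Opt-in: without explicit configuration, no Ollama provider is registered.
  if (!process.env.OLLAMA_API_KEY) return [];

  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    if (!res.ok) return [];
    const body = (await res.json()) as { models?: OllamaTag[] };
    return (body.models ?? []).map((m) => m.name);
  } catch {
    // Local Ollama instance not running; behave as if no models exist.
    return [];
  }
}
```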
Peter Steinberger
4e77483051 fix: refine bedrock discovery defaults (#1543) (thanks @fal3) 2026-01-24 01:18:33 +00:00
Alex Fallah
8effb557d5 feat: add dynamic Bedrock model discovery
Add automatic discovery of AWS Bedrock models using the ListFoundationModels API.
When AWS credentials are detected, models that support streaming and text output
are automatically discovered and made available.

- Add @aws-sdk/client-bedrock dependency
- Add discoverBedrockModels() with caching (default 1 hour)
- Add resolveImplicitBedrockProvider() for auto-registration
- Add BedrockDiscoveryConfig for optional filtering by provider/region
- Filter to active, streaming, text-output models only
- Update docs/bedrock.md with auto-discovery documentation
2026-01-24 01:15:06 +00:00
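A sketch of the discovery-and-cache behavior described above. BedrockClient, ListFoundationModelsCommand, and the model-summary fields are the real @aws-sdk/client-bedrock API; the module-level cache and this exact filter expression are illustrative rather than the repository's implementation, and the BedrockDiscoveryConfig provider/region filtering is omitted.

```typescript
// Sketch of Bedrock model discovery with simple in-memory caching.
// The client, command, and summary fields are the real @aws-sdk/client-bedrock
// API; the cache shape and filter below are illustrative only.
import {
  BedrockClient,
  ListFoundationModelsCommand,
} from "@aws-sdk/client-bedrock";

const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour, matching the commit's default
let cache: { ids: string[]; fetchedAt: number } | undefined;

async function discoverBedrockModels(region?: string): Promise<string[]> {
  if (cache && Date.now() - cache.fetchedAt < CACHE_TTL_MS) return cache.ids;

  const client = new BedrockClient(region ? { region } : {});
  const res = await client.send(new ListFoundationModelsCommand({}));

  // Keep only active models that support streaming and emit text output.
  const ids = (res.modelSummaries ?? [])
    .filter(
      (m) =>
        m.modelLifecycle?.status === "ACTIVE" &&
        m.responseStreamingSupported === true &&
        (m.outputModalities ?? []).includes("TEXT"),
    )
    .map((m) => m.modelId)
    .filter((id): id is string => Boolean(id));

  cache = { ids, fetchedAt: Date.now() };
  return ids;
}
```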
Peter Steinberger
f1afc722da Revert "fix: improve GitHub Copilot integration"
This reverts commit 21a9b3b66f.
2026-01-23 07:14:00 +00:00
Peter Steinberger
21a9b3b66f fix: improve GitHub Copilot integration 2026-01-23 02:51:33 +00:00
Peter Steinberger
a5adedea91 refactor: add aws-sdk auth mode and tighten provider auth 2026-01-20 08:28:40 +00:00
Peter Steinberger
fabc2882aa fix: avoid keychain prompts in embedded runner 2026-01-18 04:19:28 +00:00
Muhammed Mukhthar CM
b56b67cdbd UI: label Qwen provider 2026-01-18 01:03:08 +00:00
Muhammed Mukhthar CM
8eb80ee40a Models: add Qwen Portal OAuth support 2026-01-18 01:03:08 +00:00
Peter Steinberger
7876679c5d style: apply oxfmt 2026-01-17 17:44:54 +00:00
Peter Steinberger
4a987c836d fix: add Kimi Code docs + defaults (#1085) (thanks @dan-dr) 2026-01-17 17:35:40 +00:00
ddyo
e93a1d8138 feat: add kimi code provider onboarding 2026-01-17 17:25:07 +00:00
Peter Steinberger
c379191f80 chore: migrate to oxlint and oxfmt
Co-authored-by: Christoph Nakazawa <christoph.pojer@gmail.com>
2026-01-14 15:02:19 +00:00
Peter Steinberger
61b7398cb7 refactor: centralize auth-choice model defaults 2026-01-13 05:25:16 +00:00
Peter Steinberger
66ad8a9289 fix: apply lint fixes 2026-01-13 03:36:53 +00:00
Peter Steinberger
df6634727e fix: refine synthetic provider + minimax probes 2026-01-13 03:36:53 +00:00
Travis Hinton
8b5cd97ceb Add Synthetic provider support 2026-01-13 03:36:53 +00:00
Peter Steinberger
8ff09f8337 feat(image): auto-pair image model 2026-01-12 17:50:47 +00:00
Peter Steinberger
e91aa0657e fix: add copilot tests and lint fixes 2026-01-12 17:48:08 +00:00
Peter Steinberger
fd1e959c2d fix: clean up models-config provider normalization 2026-01-12 17:14:04 +00:00
Peter Steinberger
05ac67c520 refactor: split models-config provider helpers 2026-01-12 17:05:53 +00:00