revert: remove auto-prefix, user specifies full provider/model
Support both OpenAI-compatible and Anthropic-compatible endpoints.
User must specify full model name with provider prefix.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -39,25 +39,17 @@
# LLM_MODEL=ollama/llama2

# =============================================================================
# Option 7: Custom/Self-hosted (OpenAI-compatible endpoint)
# Option 7: Custom/Self-hosted endpoint
# See: https://docs.litellm.ai/docs/providers
# =============================================================================
# LLM_API_BASE=http://localhost:8000/v1
# LLM_API_KEY=your-key
# LLM_MODEL=qwen2.5
# Note: When LLM_API_BASE is set, model is auto-prefixed as "openai/qwen2.5"

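The custom-endpoint variables above map onto a LiteLLM-style call in a straightforward way. The sketch below collects them into keyword arguments; `load_endpoint_config` is a hypothetical helper (not part of this project), and since auto-prefixing is removed, the model name must already carry its provider prefix:

```python
import os


def load_endpoint_config(environ=os.environ):
    """Collect the LLM_* variables into keyword arguments for a
    LiteLLM-style completion call. Hypothetical helper: the name and
    structure are illustrative, not part of the project."""
    config = {"model": environ.get("LLM_MODEL", "")}
    if environ.get("LLM_API_BASE"):  # e.g. http://localhost:8000/v1
        config["api_base"] = environ["LLM_API_BASE"]
    if environ.get("LLM_API_KEY"):
        config["api_key"] = environ["LLM_API_KEY"]
    return config


# With the example values from the section above (full provider/model,
# no auto-prefixing):
cfg = load_endpoint_config({
    "LLM_MODEL": "openai/qwen2.5",
    "LLM_API_BASE": "http://localhost:8000/v1",
    "LLM_API_KEY": "your-key",
})
```

The resulting dict could then be passed along to a completion call such as `litellm.completion(messages=..., **cfg)`, which accepts `api_base` and `api_key` as keyword arguments.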
# =============================================================================
# Model naming convention (LiteLLM requires provider prefix)
# See: https://docs.litellm.ai/docs/providers
# =============================================================================
# Format: provider/model-name
# Examples:
# openai/gpt-4
# anthropic/claude-3-haiku-20240307
# gemini/gemini-pro
# ollama/llama2
# huggingface/starcoder
# azure/your-deployment-name
#
# For OpenAI-compatible API:
# LLM_MODEL=openai/your-model-name
#
# For Anthropic-compatible API:
# LLM_MODEL=anthropic/your-model-name
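Because LiteLLM routes requests based on the provider prefix, a config loader might fail fast when the prefix is missing rather than letting the request fail later. This check is a sketch under the `provider/model-name` convention documented above, not project code:

```python
def has_provider_prefix(model: str) -> bool:
    """Return True if the model name follows LiteLLM's
    "provider/model-name" convention (non-empty on both sides
    of the first slash)."""
    provider, sep, name = model.partition("/")
    return bool(provider) and bool(sep) and bool(name)


assert has_provider_prefix("anthropic/claude-3-haiku-20240307")
assert has_provider_prefix("azure/your-deployment-name")
assert not has_provider_prefix("qwen2.5")  # bare name: prefix missing
assert not has_provider_prefix("/gpt-4")   # empty provider
```

A startup check like this turns a misconfigured `LLM_MODEL` into an immediate, readable error instead of a routing failure deep inside the first completion call.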

# =============================================================================
# Force mock mode (no API calls, uses predefined responses)