revert: remove auto-prefix, user specifies full provider/model

Support both OpenAI-compatible and Anthropic-compatible endpoints.
The user must now specify the full model name with a provider prefix
(e.g. openai/qwen2.5 instead of qwen2.5).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 14:07:17 +08:00
parent 7c8337821e
commit 8bb78a808d
2 changed files with 8 additions and 23 deletions
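
For context, this is the convention the revert returns to: LiteLLM routes each call on the `provider/` prefix, so the caller passes the full `provider/model` string through verbatim. A minimal sketch of both endpoint styles, assuming `pip install litellm`; the URLs, key, and prompt are illustrative placeholders, not values from this repo:

```python
from litellm import completion

# OpenAI-compatible endpoint: the "openai/" prefix selects LiteLLM's
# OpenAI adapter, and api_base points it at the self-hosted server.
resp = completion(
    model="openai/qwen2.5",
    api_base="http://localhost:8000/v1",  # illustrative URL
    api_key="your-key",                   # illustrative key
    messages=[{"role": "user", "content": "ping"}],
)

# Anthropic-compatible endpoint: "anthropic/" selects the Anthropic adapter.
resp = completion(
    model="anthropic/claude-3-haiku-20240307",
    api_base="http://localhost:8001",     # illustrative URL
    api_key="your-key",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```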


@@ -39,25 +39,17 @@
 # LLM_MODEL=ollama/llama2
 # =============================================================================
-# Option 7: Custom/Self-hosted (OpenAI-compatible endpoint)
+# Option 7: Custom/Self-hosted endpoint
+# See: https://docs.litellm.ai/docs/providers
 # =============================================================================
 # LLM_API_BASE=http://localhost:8000/v1
 # LLM_API_KEY=your-key
 # LLM_MODEL=qwen2.5
-# Note: When LLM_API_BASE is set, model is auto-prefixed as "openai/qwen2.5"
-# =============================================================================
-# Model naming convention (LiteLLM requires provider prefix)
-# See: https://docs.litellm.ai/docs/providers
-# =============================================================================
-# Format: provider/model-name
-# Examples:
-#   openai/gpt-4
-#   anthropic/claude-3-haiku-20240307
-#   gemini/gemini-pro
-#   ollama/llama2
-#   huggingface/starcoder
-#   azure/your-deployment-name
 #
+# For OpenAI-compatible API:
+#   LLM_MODEL=openai/your-model-name
+#
+# For Anthropic-compatible API:
+#   LLM_MODEL=anthropic/your-model-name
 # =============================================================================
 # Force mock mode (no API calls, uses predefined responses)
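
A hedged sketch of how these variables are presumably consumed after the revert: the model string is passed to LiteLLM untouched, so it must already carry its provider prefix. The variable names match the config above; the `ask` helper and the exact service wiring are assumptions for illustration:

```python
import os
from litellm import acompletion

# Read the variables documented above; LLM_MODEL is used verbatim,
# so it must already be fully qualified (e.g. "openai/qwen2.5").
MODEL = os.environ["LLM_MODEL"]
API_BASE = os.environ.get("LLM_API_BASE")  # None for hosted providers
API_KEY = os.environ.get("LLM_API_KEY")

async def ask(prompt: str) -> str:
    resp = await acompletion(
        model=MODEL,
        api_base=API_BASE,
        api_key=API_KEY,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```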


@@ -76,13 +76,6 @@ class LLMService:
         self._mock_mode = os.environ.get("LLM_MOCK_MODE", "").lower() == "true"
         self._acompletion = None
-        # Auto-add provider prefix for custom endpoints
-        # LiteLLM requires format: provider/model (e.g., openai/gpt-4)
-        # See: https://docs.litellm.ai/docs/providers
-        if self._api_base and "/" not in self._model:
-            self._model = f"openai/{self._model}"
-            logger.info(f"Auto-prefixed model name: {self._model} (custom endpoint detected)")
         if self._mock_mode:
             logger.info("LLMService running in MOCK mode (forced by LLM_MOCK_MODE)")
             return
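
With the auto-prefix block gone, a bare model name is no longer rewritten at startup; it reaches LiteLLM, which cannot map it to a provider and rejects the call. A sketch of the expected failure mode (that LiteLLM raises `BadRequestError` here is my reading of its provider-resolution errors, not something this diff states):

```python
import litellm

try:
    # Before this revert, LLMService would have rewritten "qwen2.5" to
    # "openai/qwen2.5" whenever LLM_API_BASE was set. Now the bare name
    # is passed through unchanged.
    litellm.completion(
        model="qwen2.5",  # missing provider prefix
        api_base="http://localhost:8000/v1",
        messages=[{"role": "user", "content": "ping"}],
    )
except litellm.BadRequestError as exc:
    print(f"rejected as expected: {exc}")  # "LLM Provider NOT provided..."
```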