fix: auto-add provider prefix for custom LLM endpoints

LiteLLM requires model names in the format "provider/model-name".
When LLM_API_BASE is set, automatically prefix the model with "openai/"
if no provider prefix is present.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 14:05:45 +08:00
parent fbf3de0238
commit 7c8337821e
2 changed files with 21 additions and 0 deletions

@@ -76,6 +76,13 @@ class LLMService:
         self._mock_mode = os.environ.get("LLM_MOCK_MODE", "").lower() == "true"
         self._acompletion = None
+        # Auto-add provider prefix for custom endpoints
+        # LiteLLM requires format: provider/model (e.g., openai/gpt-4)
+        # See: https://docs.litellm.ai/docs/providers
+        if self._api_base and "/" not in self._model:
+            self._model = f"openai/{self._model}"
+            logger.info(f"Auto-prefixed model name: {self._model} (custom endpoint detected)")
         if self._mock_mode:
             logger.info("LLMService running in MOCK mode (forced by LLM_MOCK_MODE)")
             return