- Add per-model provider configuration in config.json
- Implement getModelProvider() to fetch provider from model config
- Update all header generators to accept a dynamic provider parameter
- Add reasoning_effort field handling for common endpoint type
- Support auto/low/medium/high/off reasoning levels for common models
This enables flexible multi-provider support and reasoning control
across different endpoint types (anthropic, openai, common).
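
As a rough illustration, the per-model lookup and reasoning-effort handling could look like the sketch below; the config.json schema, the default_provider fallback, and the treatment of `off` are assumptions, not the project's actual code.

```js
// A sketch of per-model provider lookup and reasoning-effort handling.
const fs = require('fs');

// Assumed config.json shape (the real schema may differ):
// { "default_provider": "anthropic",
//   "models": { "glm-4.6": { "type": "common", "provider": "zhipu",
//                            "reasoning_effort": "medium" } } }
const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));

const REASONING_LEVELS = ['auto', 'low', 'medium', 'high', 'off'];

// Resolve the provider for a model from its config entry.
function getModelProvider(modelName) {
  const model = config.models?.[modelName];
  return model?.provider ?? config.default_provider;
}

// For "common"-type models, copy a validated reasoning_effort onto the
// outgoing request body; in this sketch, "off" strips the field instead.
function applyReasoningEffort(body, modelName) {
  const model = config.models?.[modelName];
  if (model?.type !== 'common') return body;
  const effort = model.reasoning_effort;
  if (!REASONING_LEVELS.includes(effort) || effort === 'off') {
    const { reasoning_effort, ...rest } = body;
    return rest;
  }
  return { ...body, reasoning_effort: effort };
}

module.exports = { getModelProvider, applyReasoningEffort };
```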
- Add user-agent-updater.js to automatically fetch latest factory-cli version
- Fetch version from https://downloads.factory.ai/factory-cli/LATEST on startup
- Automatically refresh version every hour
- Implement retry mechanism: max 3 retries with 1-minute intervals on failure
- Use user_agent from config.json as fallback value
- Update config.js to use dynamic user-agent
- Initialize updater in server.js startup sequence
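
A minimal sketch of the updater's shape, assuming Node 18+ global fetch; the exported API, the composed user-agent format, and the keep-last-value failure behavior are guesses.

```js
// user-agent-updater.js (sketch) — fetches the latest factory-cli version,
// refreshes hourly, and retries up to 3 times at 1-minute intervals.
const LATEST_URL = 'https://downloads.factory.ai/factory-cli/LATEST';
const REFRESH_MS = 60 * 60 * 1000; // re-fetch every hour
const RETRY_MS = 60 * 1000;        // 1-minute gap between retries
const MAX_RETRIES = 3;

let latestVersion = null;

async function refresh(attempt = 1) {
  try {
    const res = await fetch(LATEST_URL); // global fetch, Node 18+
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    latestVersion = (await res.text()).trim();
  } catch {
    // Up to 3 attempts; afterwards keep the last known (or fallback) value.
    if (attempt < MAX_RETRIES) setTimeout(() => refresh(attempt + 1), RETRY_MS);
  }
}

// Called once from server startup; fallback is the user_agent from config.json.
function startUserAgentUpdater(fallback) {
  refresh();
  setInterval(refresh, REFRESH_MS).unref();
  // The "factory-cli/<version>" format is a guess at the composed UA string.
  return () => (latestVersion ? `factory-cli/${latestVersion}` : fallback);
}

module.exports = { startUserAgentUpdater };
```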
- Add common endpoint type for GLM-4.6 model
- Implement automatic system prompt injection for all requests (see the sketch below)
- Simplify README documentation for better user focus
- Update version to 1.1.0
- Add *.txt to .gitignore
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
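
One plausible shape for the injection step referenced above; the prompt text, the OpenAI-style messages array, and the always-prepend behavior are assumptions.

```js
// Hypothetical injection step for "common"-type requests; the real prompt
// text is not shown in the notes.
const SYSTEM_PROMPT = '...'; // placeholder for the injected prompt

function injectSystemPrompt(body) {
  const messages = Array.isArray(body.messages) ? body.messages : [];
  // Prepend the system prompt to every request; whether an existing system
  // message should be replaced is unspecified, so this sketch keeps it.
  return {
    ...body,
    messages: [{ role: 'system', content: SYSTEM_PROMPT }, ...messages],
  };
}

module.exports = { injectSystemPrompt };
```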
Features:
- Add new /v1/messages endpoint for transparent Anthropic request/response forwarding
- Only supports anthropic-type endpoints (rejects openai with 400 error)
- No request transformation - forwards original request body as-is
- No response transformation - streaming and non-streaming responses are forwarded directly
Now supports three endpoint patterns:
- /v1/chat/completions: Universal endpoint with format conversion (anthropic, openai)
- /v1/responses: Direct proxy for openai endpoints only
- /v1/messages: Direct proxy for anthropic endpoints only
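
A sketch of how such a passthrough route could be wired with Express and Node 18+ fetch; getEndpoint, the header filtering, and the upstream URL construction are assumptions, not the project's code.

```js
// Transparent /v1/messages passthrough (sketch).
const express = require('express');
const { Readable } = require('stream');

const app = express();
// Keep the raw bytes so the request body can be forwarded exactly as received.
app.use('/v1/messages', express.raw({ type: '*/*', limit: '10mb' }));

// Hypothetical lookup; the real project resolves this from config.json.
function getEndpoint(model) {
  return { type: 'anthropic', baseUrl: 'https://api.anthropic.com' };
}

app.post('/v1/messages', async (req, res) => {
  const { model } = JSON.parse(req.body.toString());
  const endpoint = getEndpoint(model);
  if (endpoint.type !== 'anthropic') {
    return res
      .status(400)
      .json({ error: '/v1/messages only supports anthropic-type endpoints' });
  }
  const upstream = await fetch(`${endpoint.baseUrl}/v1/messages`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' }, // plus provider auth headers
    body: req.body, // original bytes, no transformation
  });
  res.status(upstream.status);
  upstream.headers.forEach((value, name) => {
    // Skip framing/encoding headers; fetch already decompressed the body.
    if (!['content-length', 'transfer-encoding', 'content-encoding'].includes(name)) {
      res.setHeader(name, value);
    }
  });
  // Piping the upstream web stream handles SSE and plain JSON alike.
  if (upstream.body) Readable.fromWeb(upstream.body).pipe(res);
  else res.end();
});
```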
Features:
- Add new /v1/responses endpoint for transparent request/response forwarding
- Only supports openai-type endpoints (rejects anthropic with 400 error)
- No request transformation - forwards original request body as-is
- No response transformation - streaming and non-streaming responses are forwarded directly
- /v1/chat/completions keeps original behavior with format conversion
Differences between endpoints:
- /v1/chat/completions: Converts formats for all endpoint types (anthropic, openai)
- /v1/responses: Direct proxy for openai endpoints only, zero transformation
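
Since the /v1/responses rejection logic mirrors /v1/messages with the types swapped, a shared guard is one plausible shape; requireEndpointType, getEndpointType, and the model table are hypothetical names, and the body is assumed already parsed by express.json().

```js
// Hypothetical table; the real project reads endpoint types from config.json.
const ENDPOINT_TYPES = { 'gpt-4o': 'openai', 'claude-sonnet-4': 'anthropic' };

function getEndpointType(model) {
  return ENDPOINT_TYPES[model];
}

// Shared 400 guard, parameterized by the endpoint type a route accepts.
function requireEndpointType(required) {
  return (req, res, next) => {
    if (getEndpointType(req.body?.model) !== required) {
      return res.status(400).json({
        error: `${req.path} only supports ${required}-type endpoints`,
      });
    }
    next();
  };
}

// Usage sketch (forwardAsIs is the hypothetical passthrough handler):
// app.post('/v1/responses', requireEndpointType('openai'), forwardAsIs);
// app.post('/v1/messages', requireEndpointType('anthropic'), forwardAsIs);
```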
- Log the method, URL, path, and parameters of invalid requests
- Display query parameters and request body if present
- Show client IP and User-Agent information
- Return helpful error message with available endpoints
- Format console output with clear visual separators
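
A sketch of what the catch-all handler might look like in Express; the separator style, log wording, and the endpoint list in the error payload follow the notes above but the exact output is a guess.

```js
// Catch-all for unmatched routes; runs after all registered endpoints.
const express = require('express');
const app = express();
app.use(express.json());

app.use((req, res) => {
  const sep = '='.repeat(60);
  console.log(sep);
  console.log(`Invalid request: ${req.method} ${req.originalUrl}`);
  console.log(`Path: ${req.path}`);
  if (Object.keys(req.query).length > 0) {
    console.log(`Query: ${JSON.stringify(req.query)}`);
  }
  if (req.body && Object.keys(req.body).length > 0) {
    console.log(`Body: ${JSON.stringify(req.body)}`);
  }
  console.log(`Client: ${req.ip}  User-Agent: ${req.get('user-agent')}`);
  console.log(sep);
  res.status(404).json({
    error: `Unknown endpoint: ${req.method} ${req.path}`,
    available_endpoints: [
      '/v1/chat/completions',
      '/v1/responses',
      '/v1/messages',
    ],
  });
});
```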