92183b083b | 2026-01-07 09:29:43 +08:00 | feat: Add Qwen VL support for character analysis, configurable via VLM_PROVIDER
be216eacad | 2026-01-07 03:37:55 +08:00 | fix: Increase VLM max_tokens to 2000 to avoid response truncation
8d82cf91d5 | 2026-01-07 03:33:56 +08:00 | fix: Auto-detect and use GLM-4V vision model for character analysis
8c35b0066f | 2026-01-07 03:31:42 +08:00 | fix: Enhance VLM response parsing to handle markdown code blocks
b3cf9e64e5 | 2026-01-07 03:08:29 +08:00 | feat: Implement Character Memory V1 - VLM analysis and prompt injection