let5see/AI-Video
320 Commits · 1 Branch · 0 Tags
Head: be216eacad49e3fc7c0fe8c9b0f855e6fab72bfb
Commit Graph

4 Commits

SHA1        Message                                                                   Date
be216eacad  fix: Increase VLM max_tokens to 2000 to avoid response truncation         2026-01-07 03:37:55 +08:00
8d82cf91d5  fix: Auto-detect and use GLM-4V vision model for character analysis       2026-01-07 03:33:56 +08:00
8c35b0066f  fix: Enhance VLM response parsing to handle markdown code blocks          2026-01-07 03:31:42 +08:00
b3cf9e64e5  feat: Implement Character Memory V1 - VLM analysis and prompt injection   2026-01-07 03:08:29 +08:00
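The parsing fix in commit 8c35b0066f (handling VLM replies wrapped in markdown code blocks) typically corresponds to something like the sketch below. The function name, and the assumption that the VLM returns a JSON payload, are illustrative guesses; the repo's actual implementation is not shown here.

```python
import json
import re

def parse_vlm_response(text: str) -> dict:
    """Extract a JSON object from a VLM reply that may be wrapped
    in a markdown code fence (```json ... ```) or returned bare.

    Note: function name and JSON payload shape are assumptions for
    illustration, not taken from the AI-Video source.
    """
    # Strip an optional markdown fence around the payload.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```

A parser like this accepts both `{"name": "Ada"}` and the same payload fenced as a ```json block, which is why truncated responses (addressed in be216eacad by raising `max_tokens`) are a separate failure mode: a cut-off fence leaves no closing ``` to match, and `json.loads` then fails on the partial text.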