diff --git a/README.md b/README.md
index 34b14c5..b4494b7 100644
--- a/README.md
+++ b/README.md
@@ -102,7 +102,7 @@
 python3 -m vllm.entrypoints.openai.api_server \
     --port 8000
 ```
 
-- 该模型结构与 `GLM-4.1V-9B-Thinking` 相同, 关于模型部署的详细内容,你也以查看 [GLM-V](https://github.com/zai-org/GLM-V)
+- 该模型结构与 `GLM-4.1V-9B-Thinking` 相同, 关于模型部署的详细内容,你也可以查看 [GLM-V](https://github.com/zai-org/GLM-V)
   获取模型部署和使用指南。
 - 运行成功后,将可以通过 `http://localhost:8000/v1` 访问模型服务。 如果您在远程服务器部署模型, 使用该服务器的IP访问模型.
diff --git a/resources/WECHAT.md b/resources/WECHAT.md
index 34b8316..58c1663 100644
--- a/resources/WECHAT.md
+++ b/resources/WECHAT.md
@@ -1,5 +1,5 @@
-<img src="wechat.png" />
+<img src="wechat.jpeg" />
扫码关注公众号,加入「Open-AutoGLM 交流群」
Scan the QR code to follow the official account and join the "Open-AutoGLM Discussion Group"
diff --git a/resources/wechat.jpeg b/resources/wechat.jpeg
new file mode 100644
index 0000000..77ae66a
Binary files /dev/null and b/resources/wechat.jpeg differ
diff --git a/resources/wechat.png b/resources/wechat.png
deleted file mode 100644
index 2855cb3..0000000
Binary files a/resources/wechat.png and /dev/null differ
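
The README hunk above notes that, once the vLLM server starts successfully, the model service is reachable at `http://localhost:8000/v1` via the OpenAI-compatible API (use the server's IP instead of `localhost` for a remote deployment). A minimal client sketch, assuming the `openai` Python package is installed; the model name below is a placeholder, not the actual served name:

```python
from openai import OpenAI

# Point the OpenAI client at the vLLM server started in the README snippet.
# The api_key is a dummy value; vLLM does not require one unless configured to.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# "MODEL_NAME" is a placeholder -- pass the name the server was launched with
# (e.g. the path given to --model); client.models.list() shows what is served.
response = client.chat.completions.create(
    model="MODEL_NAME",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```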