zR committed
Commit
d5275af
1 Parent(s): 7d23e9e
Files changed (2)
  1. README.md +4 -0
  2. README_en.md +7 -2
README.md CHANGED
@@ -17,6 +17,10 @@ inference: false
 
 Read this in [English](README_en.md).
 
+**2024/07/24: We have released our latest technical deep dive on long text. See [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) for our technical report on the long-context techniques used in training the open-source GLM-4-9B model.**
+
+## Model Introduction
+
 GLM-4-9B is the open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI.
 In dataset evaluations covering semantics, mathematics, reasoning, code, and knowledge, GLM-4-9B and its human-preference-aligned version GLM-4-9B-Chat both exhibit strong performance.
 Beyond multi-turn dialogue, GLM-4-9B-Chat also supports web browsing, code execution, custom tool calling (Function Call), and long-text reasoning (supporting a maximum of 128K
README_en.md CHANGED
@@ -1,5 +1,9 @@
 # GLM-4-9B-Chat-1M
 
+**On July 24, 2024, we released the latest technical interpretation related to long texts. Check
+out [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) to view our
+technical report on long context technology in the training of the open-source GLM-4-9B model.**
+
 ## Model Introduction
 
 GLM-4-9B is the open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu
@@ -32,7 +36,8 @@ The long text capability was further evaluated on LongBench, and the results are
 
 **For more inference code and requirements, please visit our [github page](https://github.com/THUDM/GLM-4).**
 
-**Please strictly follow the [dependencies](https://github.com/THUDM/GLM-4/blob/main/basic_demo/requirements.txt) to install, otherwise it will not run properly**
+**Please strictly follow the [dependencies](https://github.com/THUDM/GLM-4/blob/main/basic_demo/requirements.txt) to
+install, otherwise it will not run properly**
 
 ### Use the following method to quickly call the GLM-4-9B-Chat-1M language model
 
@@ -47,7 +52,7 @@ from transformers import (
 
 device = "cuda"
 
-tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat-1m",trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat-1m", trust_remote_code=True)
 
 query = "你好"
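
For context, the fragments in the last hunk (`from transformers import (`, `device = "cuda"`, the tokenizer call, and `query = "你好"`) belong to the README's quick-start snippet. Below is a minimal sketch of how they fit together, assuming the standard `transformers` chat-template API; the model class, dtype, and generation settings (`AutoModelForCausalLM`, `bfloat16`, `max_new_tokens`) are illustrative assumptions, not taken from this commit.

```python
# Minimal quick-start sketch for GLM-4-9B-Chat-1M (assumed details marked below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat-1m", trust_remote_code=True)

query = "你好"

# Render a single-turn chat into model inputs via the tokenizer's chat template.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": query}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(device)

# Assumption: AutoModelForCausalLM in bfloat16; the diff does not show the model-loading line.
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4-9b-chat-1m",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to(device).eval()

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)  # max_new_tokens is an arbitrary choice
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```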