wenge-research committed
Commit 940e2d8
1 Parent(s): 2bb9b54

Update README.md

Files changed (1): README.md (+3 −3)

README.md CHANGED
@@ -18,7 +18,7 @@ tags:
 The following is a simple example of running downstream-task inference with `yayi-7b`. It runs on a single GPU such as an A100/A800/3090 and uses about 20 GB of GPU memory with FP16 inference. To obtain the training data or fine-tune a model based on `yayi-7b`, please see our [💻Github Repo](https://github.com/wenge-research/YaYi).
 
 ```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
+from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
 
 yayi_7b_path = "wenge-research/yayi-7b"
 tokenizer = AutoTokenizer.from_pretrained(yayi_7b_path)
@@ -53,9 +53,9 @@ class KeywordsStoppingCriteria(StoppingCriteria):
 ```
 
 ```python
-stop_criteria_7b = KeywordsStoppingCriteria([yayi_7b_tokenizer.encode(w)[0] for w in ["<|End|>"]])
+stop_criteria = KeywordsStoppingCriteria([tokenizer.encode(w)[0] for w in ["<|End|>"]])
 ...
-response = model.generate(**inputs, generation_config=generation_config, stop_criteria=stop_criteria_7b)
+response = model.generate(**inputs, generation_config=generation_config, stop_criteria=stop_criteria)
 ```
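The body of `KeywordsStoppingCriteria` is elided in this diff; only the class header appears in the hunk context. A minimal sketch of how such a criterion is commonly written against the `transformers` `StoppingCriteria` interface (the README's actual implementation may differ — the constructor signature and stop condition here are assumptions):

```python
import torch
from transformers import StoppingCriteria

class KeywordsStoppingCriteria(StoppingCriteria):
    """Stop generation as soon as the most recently generated token
    is one of the given keyword token ids (e.g. the id of "<|End|>")."""

    def __init__(self, keyword_ids):
        # keyword_ids: list of token ids that should terminate generation
        self.keyword_ids = keyword_ids

    def __call__(self, input_ids, scores, **kwargs):
        # input_ids has shape (batch, seq_len); inspect the last token
        # of the first sequence, as in a single-prompt inference setting.
        return input_ids[0][-1].item() in self.keyword_ids
```

Note that the stock `transformers` `generate` API accepts the criterion via `stopping_criteria=StoppingCriteriaList([stop_criteria])`; the `stop_criteria=` keyword in the diff above follows the README's own snippet.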