Update README.md
README.md CHANGED
@@ -176,14 +176,19 @@ pip install flash-attn
 ```
 Then load the model in transformers:
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-model = AutoModelForCausalLM.from_pretrained(
-
-
-
+from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
+tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
+
+# You can also use a pipeline for generation
+generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
+generator(
+    "請問台灣最高的山是",  # "What is the highest mountain in Taiwan?"
+    max_length=30,
+    num_return_sequences=1,
 )
+
 ```
 
 The structure of the query template follows that of Mistral-7B-Instruct, as shown below.
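For readers who want to try the updated snippet end to end, here is a minimal sketch that mirrors the added lines. It is not part of the commit: the `torch_dtype` and `device_map` arguments and the final `print` are illustrative assumptions (a text-generation pipeline returns a list of dicts with a `generated_text` key).

```python
# Minimal end-to-end sketch of the updated example; not part of the commit.
# Assumes transformers, torch, and accelerate are installed and the checkpoint
# can be downloaded from the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    "MediaTek-Research/Breeze-7B-Instruct-v0.1",
    torch_dtype=torch.bfloat16,  # assumption: bf16 keeps the 7B weights within GPU memory
    device_map="auto",           # assumption: requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(
    "請問台灣最高的山是",  # "What is the highest mountain in Taiwan?"
    max_length=30,
    num_return_sequences=1,
)
# The pipeline returns one dict per returned sequence
print(outputs[0]["generated_text"])
```

Note that `max_length=30` counts the prompt tokens as well, so the continuation stays short; `max_new_tokens` is the usual alternative when you want to bound only the newly generated tokens.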
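The query template itself appears further down in the README and is not part of this hunk. As a hedged sketch of how a Mistral-style `[INST] ... [/INST]` wrapping is typically applied, assuming the Breeze tokenizer ships a chat template:

```python
# Hedged sketch, not from the commit: build a Mistral-style instruct prompt,
# assuming the Breeze tokenizer ships a chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
messages = [{"role": "user", "content": "請問台灣最高的山是"}]  # "What is the highest mountain in Taiwan?"
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Under a Mistral-style template this renders roughly as:
#   <s>[INST] 請問台灣最高的山是 [/INST]
print(prompt)
```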