Upload README.md
README.md CHANGED
@@ -104,9 +104,9 @@ The model is licensed under the [Research License](https://huggingface.co/micros
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-torch.set_default_device(
-model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True
-tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True
+torch.set_default_device("cuda")
+model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
 inputs = tokenizer('''```python
 def print_prime(n):
    """
@@ -118,8 +118,14 @@ text = tokenizer.batch_decode(outputs)[0]
 print(text)
 ```
 
-
-
+If you need to use the model in a lower precision (e.g., FP16), please wrap the model's forward pass with `torch.autocast()`, as follows:
+```python
+with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
+    outputs = model.generate(**inputs, max_length=200)
+```
+
+**Remark.** In the generation function, our model currently does not support beam search (`num_beams` > 1).
+Furthermore, in the forward pass of the model, we currently do not support attention mask during training, outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
 
 ### Citation
 
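For reference, here is a minimal end-to-end sketch that stitches the changed lines together with the surrounding README example. It is not the verbatim file: the prompt string and the tokenizer keyword arguments (`return_tensors`, `return_attention_mask`) are assumptions filled in from context, since the diff elides that part of the README; only the loading lines, the `torch.autocast()` block, and the decode/print lines come directly from the hunks above.

```python
# Sketch of the updated README usage example (prompt and tokenizer kwargs assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Create new tensors (including the model weights) on the GPU by default.
torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# Code-completion style prompt; no attention mask, per the remark that the
# model's forward pass does not support one.
prompt = "def print_prime(n):\n    \"\"\"Print all primes between 1 and n.\"\"\"\n"
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)

# Optional FP16 inference: wrap the forward pass in torch.autocast, as the added
# README text describes. Greedy decoding only (beam search is unsupported).
with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
    outputs = model.generate(**inputs, max_length=200)

text = tokenizer.batch_decode(outputs)[0]
print(text)
```

Note that the `autocast` context leaves the weights as loaded and only runs eligible operations of the forward pass in FP16, which is the behaviour the added README paragraph describes.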