Update README.md

- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit

## Inference with Unsloth
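
The snippet below assumes `model`, `tokenizer`, and `alpaca_prompt` are already defined, as they are at the end of an Unsloth fine-tuning run. If you are starting from a fresh session, a minimal loading sketch might look like the following; the `max_seq_length`, `dtype`, and `load_in_4bit` values, and the two-slot `alpaca_prompt` template, are illustrative assumptions rather than settings confirmed by this card.

```py
from unsloth import FastLanguageModel

# Assumed loading step: the model name below is the base model listed above;
# swap in this fine-tuned repo id as appropriate.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-v0.3-bnb-4bit",
    max_seq_length = 2048,   # assumed context length
    dtype = None,            # auto-detect (float16 / bfloat16)
    load_in_4bit = True,
)

# Assumed prompt template: a two-slot Alpaca-style prompt (instruction,
# response), matching the two arguments passed to .format() below.
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:
{}"""
```

With those in place, run inference:
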
```py
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer([
    alpaca_prompt.format(
        "Inki bima ke salaka ba gâteau ya pomme ya nsungi ?",  # instruction (roughly: "What ingredients go into a seasonal apple cake?")
        "",  # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```
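
To watch tokens appear as they are generated instead of waiting for `batch_decode`, you can pass a streamer to `generate`. This optional variant uses `TextStreamer` from the `transformers` library; it is a standard API, but not something this card itself specifies.

```py
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```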

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)