juliadollis committed
Commit 5799947 · 1 Parent(s): 00974c1
End of training
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
 library_name: transformers
-model_name:
+model_name: mistral_engt2pt_v1
 tags:
 - generated_from_trainer
 - unsloth
@@ -10,7 +10,7 @@ tags:
 licence: license
 ---
 
-# Model Card for
+# Model Card for mistral_engt2pt_v1
 
 This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3-bnb-4bit).
 It has been trained using [TRL](https://github.com/huggingface/trl).
@@ -21,7 +21,7 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="SEMEVAL-11/
+generator = pipeline("text-generation", model="SEMEVAL-11/mistral_engt2pt_v1", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
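The updated quick-start relies on the `pipeline()` helper. For readers who want the same generation call with more control, here is a minimal sketch without the helper; it assumes the `SEMEVAL-11/mistral_engt2pt_v1` repo is accessible, a CUDA GPU is available, and that `accelerate` (for `device_map="auto"`) plus `bitsandbytes` (for the 4-bit bnb base weights) are installed.

```
# Sketch: the README quick-start without pipeline(), assuming the repo id
# from the diff above, a CUDA GPU, and accelerate + bitsandbytes installed
# for the 4-bit bnb base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SEMEVAL-11/mistral_engt2pt_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Build the chat-formatted prompt the same way pipeline() does internally.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, mirroring return_full_text=False.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

On a single GPU, `device_map="auto"` places the weights roughly as `device="cuda"` does in the pipeline call above; it additionally allows sharding across devices when the model does not fit on one.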