Model save
- README.md +6 -6
- last_checkpoint/config.json +1 -1
- last_checkpoint/model-00001-of-00004.safetensors +1 -1
- last_checkpoint/model-00002-of-00004.safetensors +1 -1
- last_checkpoint/model-00003-of-00004.safetensors +1 -1
- last_checkpoint/model-00004-of-00004.safetensors +1 -1
- last_checkpoint/training_args.bin +1 -1
README.md
CHANGED
@@ -1,7 +1,7 @@
 ---
-base_model: RyanYr/
+base_model: RyanYr/reflect_mini8Bit_om2-460k_sft-t1
 library_name: transformers
-model_name:
+model_name: reflect_mini8B_Om2SftT1-Om2G8kIpsdpIter1T1_b0.5
 tags:
 - generated_from_trainer
 - trl
@@ -9,9 +9,9 @@ tags:
 licence: license
 ---
 
-# Model Card for
+# Model Card for reflect_mini8B_Om2SftT1-Om2G8kIpsdpIter1T1_b0.5
 
-This model is a fine-tuned version of [RyanYr/
+This model is a fine-tuned version of [RyanYr/reflect_mini8Bit_om2-460k_sft-t1](https://huggingface.co/RyanYr/reflect_mini8Bit_om2-460k_sft-t1).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="RyanYr/
+generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2G8kIpsdpIter1T1_b0.5", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/s7qpejxw)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
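The updated card says the model was trained with DPO via TRL, but the commit does not include the training script itself. The following is a minimal, hypothetical sketch of what such a run could look like, assuming a recent TRL release that provides `DPOConfig`/`DPOTrainer` and a placeholder preference dataset with `prompt`/`chosen`/`rejected` columns; the `beta=0.5` value is only inferred from the `_b0.5` suffix in the model name and is not confirmed by the card.

```python
# Hypothetical sketch of DPO fine-tuning with TRL (not the author's actual script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/reflect_mini8Bit_om2-460k_sft-t1"  # base model named in the card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the card does not name the preference dataset used.
train_dataset = load_dataset("your_preference_dataset", split="train")

args = DPOConfig(
    output_dir="reflect_mini8B_Om2SftT1-Om2G8kIpsdpIter1T1_b0.5",
    beta=0.5,  # guessed from the "_b0.5" suffix in the model name
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL versions
)
trainer.train()
```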
last_checkpoint/config.json
CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "RyanYr/
+  "_name_or_path": "RyanYr/reflect_mini8Bit_om2-460k_sft-t1",
   "architectures": [
     "MistralForCausalLM"
   ],
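The config change above points the checkpoint at the new base model and confirms a `MistralForCausalLM` architecture. A minimal sketch, assuming the repository is cloned locally and the checkpoint directory also contains tokenizer files, of loading the sharded safetensors weights under `last_checkpoint/`:

```python
# Minimal sketch: load the sharded checkpoint saved in this commit with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt_dir = "last_checkpoint"  # directory updated in this commit
model = AutoModelForCausalLM.from_pretrained(ckpt_dir, torch_dtype="auto")  # resolves to MistralForCausalLM
tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)
```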
last_checkpoint/model-00001-of-00004.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:87a6984b3f24e942756124aded24eace23b40852bf0b347ed5dbdbeaa34eecb8
 size 4983016096
last_checkpoint/model-00002-of-00004.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6d9b27c9d6314b9961bd27b3f5e899da8950777fe90ffd3f31c1e952e4a27c38
 size 4999836776
last_checkpoint/model-00003-of-00004.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:df90c24a377f2b3a715bf3ba6c4c0fd1a1492201b2a7f63b938c138a818591a4
 size 4983067960
last_checkpoint/model-00004-of-00004.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:69fd87cd7a1421ed15922b06cae3920fabadc6b724bf7dd4cc05ea0b3ed1fe56
 size 1073750144
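The four `.safetensors` entries above are git-LFS pointer files: each records only the spec version, the SHA-256 oid of the real weight shard, and its size in bytes. A minimal sketch, assuming the actual shard (not just the pointer) has been downloaded locally, of checking a shard against the oid recorded in this commit:

```python
# Minimal sketch: recompute a shard's SHA-256 and compare it to the LFS pointer's oid.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB shards are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid recorded above for shard 1 of 4
expected = "87a6984b3f24e942756124aded24eace23b40852bf0b347ed5dbdbeaa34eecb8"
assert sha256_of("last_checkpoint/model-00001-of-00004.safetensors") == expected
```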
last_checkpoint/training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2bc84dc3524f2207c43fe080346b8f00d9ae5d1e806cc96e5dac8726b54c4c08
 size 8056