Update README.md
README.md
CHANGED
@@ -1,16 +1,23 @@
 ---
 library_name: transformers
-
+datasets:
+- mlabonne/orpo-dpo-mix-40k
+metrics:
+- accuracy
+base_model:
+- EleutherAI/gpt-neo-125m
 ---
 
 # Model Card for Model ID
 
+Fine-tuned EleutherAI/gpt-neo-125M using the dataset https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k:
+
+
 <!-- Provide a quick summary of what the model is/does. -->
-Passed argument batch_size = auto:4.0.
+Passed argument batch_size = auto:4.0.
 Determined largest batch size: 64
-Passed argument batch_size = auto:4.0.
+Passed argument batch_size = auto:4.0.
 Determined largest batch size: 64
-hf (pretrained=./merged_model,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto:4 (64,64,64,64,64)
 | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
 |---------|------:|------|-----:|--------|---|-----:|---|-----:|
 |hellaswag| 1|none | 0|acc |↑ |0.2868|± |0.0045|
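The evaluation log embedded in the README (`pretrained=./merged_model,dtype=float`, `batch_size: auto:4`, zero-shot hellaswag) matches the output format of EleutherAI's lm-evaluation-harness. A run along these lines could produce that table, assuming the `lm-eval` CLI is installed and `./merged_model` holds the fine-tuned weights (both are assumptions, not stated in the commit):

```shell
# Hedged sketch: zero-shot hellaswag evaluation with lm-evaluation-harness
# (pip install lm-eval). Assumes ./merged_model contains the fine-tuned
# gpt-neo-125M checkpoint; "auto:4" re-detects the largest batch size up
# to 4 times during the run, matching the "Determined largest batch
# size: 64" lines in the log.
lm_eval --model hf \
  --model_args pretrained=./merged_model,dtype=float \
  --tasks hellaswag \
  --num_fewshot 0 \
  --batch_size auto:4
```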