10 epochs
- README.md +15 -16
- mergekit_config.yml +3 -2
- model-00001-of-00002.safetensors +1 -1
- model-00002-of-00002.safetensors +1 -1
README.md
CHANGED
@@ -37,24 +37,22 @@ Since [gemma-2-2b-jpn-it-ablitered-18](https://huggingface.co/ymcki/gemma-2-2b-j

Using the [gemma-2-2b base model](https://huggingface.co/google/gemma-2-2b), I employed the ORPO method described by [mlabonne](https://towardsdatascience.com/fine-tune-llama-3-with-orpo-56cfab2f9ada) but the input model was read into VRAM by [unsloth](https://github.com/unslothai/unsloth) to allow using the full 40k dataset to run on a single 3090.

-
-Checkpoint at epoch 7.
+Ten epochs were run. The smallest eval_loss was achieved at epoch 7.00.
+The checkpoint at epoch 7.00 was used to obtain a model adapter, and I then
applied it to [gemma-2-2b-jpn-it-ablitered-18](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-abliterated-18) to obtain [gemma-2-2b-ORPO-jpn-it-ablitered-18](https://huggingface.co/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18).

| Epoch | loss | eval_loss | eval_logps/rejected | eval_logps/chosen |
| ----- | ---- | --------- | ------------------- | ----------------- |
-| 1.00 |
-| 2.00 | 0.
-| 3.00 |
-| 4.00 | 1.
-
-
-
-
-
-
-| 9.00 | 1.1939 | 0.9934 | -1.2703 | -0.6852 |
-| 10.00 | 0.7421 | 1.0269 | -1.2552 | -0.7395 |
+| 1.00 | 0.9754 | 1.0344 | -1.1506 | -0.7516 |
+| 2.00 | 0.9629 | 1.0173 | -1.2694 | -0.7351 |
+| 3.00 | 0.7435 | 1.0087 | -1.4922 | -0.7388 |
+| 4.00 | 1.0595 | 1.0026 | -1.5920 | -0.7310 |
+| 5.00 | 1.0525 | 1.0000 | -1.6313 | -0.7311 |
+| 6.00 | 1.1628 | 1.0014 | -1.7263 | -0.7393 |
+| 7.00 | 0.8994 | 0.9971 | -1.7264 | -0.7324 |
+| 8.00 | 0.7448 | 1.0056 | -1.7790 | -0.7482 |
+| 9.00 | 0.6801 | 1.0028 | -1.7794 | -0.7429 |
+| 10.00 | 0.9868 | 1.0069 | -1.8065 | -0.7505 |

Then I followed Rombodawg's [suggestion](https://www.reddit.com/r/LocalLLaMA/comments/1fyx27y/im_pretty_happy_with_how_my_method_worked_out/) to merge [gemma-2-2b](https://huggingface.co/google/gemma-2-2b), [gemma-2-2b-ORPO-jpn-it-ablitered-18](https://huggingface.co/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18) and [gemma-2-2b-jpn-it-ablitered-18](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-abliterated-18) to obtain this model.

@@ -69,11 +67,12 @@ Click on the model name go to the raw score json generated by Open LLM Leaderboard
| [gemma-2-2b-jpn-it](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
| [gemma-2-2b-ORPO-jpn-it-abliterated-18-merge (5 epoches)](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18-merge/results_2024-10-30T17-06-58.119904.json) | 29.26 | 49.16 | 38.15 | 2.49 | 28.19 | 33.07 | 24.51 |
| gemma-2-2b-ORPO-jpn-it-abliterated-18-merge (10 epoches) | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
-| [gemma-2-2b-ORPO-jpn-it-abliterated-18 (5 epoches)](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18/results_2024-10-30T22-19-29.202883.json) | 29.57 | 48.05 | 41.26 | 0.0 | 27.18 | 36.51 | 24.43
-| gemma-2-2b-ORPO-jpn-it-abliterated-18 (10 epoches) |
+| [gemma-2-2b-ORPO-jpn-it-abliterated-18 (5 epoches)](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18/results_2024-10-30T22-19-29.202883.json) | 29.57 | 48.05 | 41.26 | 0.0 | 27.18 | 36.51 | 24.43 |
+| [gemma-2-2b-ORPO-jpn-it-abliterated-18 (10 epoches)](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18/results_2024-11-06T18-34-02.426259.json) | 29.72 | 47.80 | 40.76 | 0.0 | 28.52 | 36.64 | 24.60 |
| [gemma-2-2b-jpn-it-abliterated-17](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-17/results_2024-10-18T15-18-46.821674.json) | 30.29 | 52.65 | 40.46 | 0.0 | 27.18 | 36.90 | 24.55 |
| [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-18T15-41-42.399571.json) | 30.61 | 53.02 | 40.96 | 0.0 | 27.35 | 37.30 | 25.05 |
| [gemma-2-2b-jpn-it-abliterated-24](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-24/results_2024-10-25T16-29-46.542899.json) | 30.61 | 51.37 | 40.77 | 0.0 | 27.77 | 39.02 | 24.73 |
+| [gemma-2-2b-jpn-it-abliterated-17-18-24](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-17-18-24/results_2024-11-06T19-05-49.169139.json) | 29.17 | 51.33 | 37.82 | 0.0 | 28.10 | 34.92 | 22.82 |

## How to run this model

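For readers who want to reproduce the two steps described above (ORPO fine-tuning of gemma-2-2b loaded through unsloth, then applying the epoch-7 adapter to gemma-2-2b-jpn-it-abliterated-18), here is a minimal sketch. The dataset name (mlabonne/orpo-dpo-mix-40k), the LoRA/ORPO hyperparameters, and the checkpoint path are illustrative assumptions, not the exact settings used for this model.

```python
# Sketch only: assumed dataset, hyperparameters, and checkpoint path.
from unsloth import FastLanguageModel  # import unsloth first so its patches apply

import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import AutoModelForCausalLM
from trl import ORPOConfig, ORPOTrainer

# 1. Load the gemma-2-2b base model with unsloth so the full 40k set fits on one 3090.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-2-2b",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: 4-bit loading to fit 24 GB of VRAM
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumed preference dataset; recent TRL applies the chat template to conversational
# chosen/rejected pairs itself, older versions need them pre-rendered to plain text.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
dataset = dataset.train_test_split(test_size=0.01)

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        output_dir="orpo-gemma-2-2b",
        num_train_epochs=10,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        beta=0.1,
        max_length=1024,
        max_prompt_length=512,
        eval_strategy="epoch",
        save_strategy="epoch",
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    processing_class=tokenizer,  # tokenizer= on older TRL releases
)
trainer.train()

# 2. Apply the adapter from the epoch-7 checkpoint (lowest eval_loss) to the abliterated
#    Japanese model and merge it into the weights.
base = AutoModelForCausalLM.from_pretrained(
    "ymcki/gemma-2-2b-jpn-it-abliterated-18", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "orpo-gemma-2-2b/checkpoint-epoch7")  # hypothetical path
merged = merged.merge_and_unload()
merged.save_pretrained("gemma-2-2b-ORPO-jpn-it-abliterated-18")
```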
mergekit_config.yml
CHANGED
@@ -4,7 +4,7 @@ models:
parameters:
density: 1.0
weight: 1.0
-- model
+- model: ./gemma-2-2b-jpn-it-abliterated-18
dtype: bfloat16
parameters:
density: 1.0
@@ -16,5 +16,6 @@ parameters:
weight: 1.0
normalize: true
int8_mask: true
-dtype:
+dtype: float32
+out_dtype: bfloat16
tokenizer_source: ./gemma-2-2b-ORPO-jpn-it-abliterated-18
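The added lines set mergekit to do the merge arithmetic in float32 (`dtype: float32`) while still writing bfloat16 weights (`out_dtype: bfloat16`), and name `./gemma-2-2b-jpn-it-abliterated-18` explicitly as one of the input models. A minimal sketch of running the merge with this config, assuming mergekit is installed and the source models are present at the local paths referenced in the YAML; the output directory name is an assumption:

```python
# Sketch: invoke the mergekit CLI on the config shown above.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "mergekit_config.yml",                              # the config in this commit
        "./gemma-2-2b-ORPO-jpn-it-abliterated-18-merge",    # assumed output directory
        "--cuda",                                           # run the merge on GPU
    ],
    check=True,
)
```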
model-00001-of-00002.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ef71b7b48085a63148f8e243bec7acdca00a4294ddc0ff680dc6a45acd05d40e
size 4959727696
model-00002-of-00002.safetensors
CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:8f4b146399931f912669a08f51cbcfd9f072ac275dc2c13eaef0cf07ba56474b
size 268999016
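The two model-*.safetensors entries are Git LFS pointer files, so the diff only records the new sha256 oid and byte size of each re-uploaded shard. A small sketch for checking a downloaded shard against its pointer; the local file path is an assumption:

```python
# Sketch: verify a downloaded safetensors shard against the LFS pointer above.
import hashlib
import os

path = "model-00002-of-00002.safetensors"  # assumed local path to the downloaded shard
expected_oid = "8f4b146399931f912669a08f51cbcfd9f072ac275dc2c13eaef0cf07ba56474b"
expected_size = 268999016

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert sha256.hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches downloaded file")
```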