Who needs em, we all have em, they're just like us. Unusable models, compute opt…

The B, C, and D classes are derived from the tokens-per-parameter ratio of the LLaMA models: LLaMA 65B is nearly Chinchilla-optimal, trained at a ratio of roughly 21 tokens per parameter (21 million tokens per million parameters). Descending through the LLaMA model sizes at each training-set size gives the ratios that define these classes.
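In other words, a class's token budget is just the ratio times the parameter count. As a minimal illustration (the helper below is mine, not part of this repo):

```python
# Illustrative helper (not part of this repo): token budget implied by a class ratio.
def token_budget_millions(params_millions: float, ratio: float = 20) -> float:
    """Training-token budget in millions of tokens: ratio * parameter count."""
    return ratio * params_millions

# A-Class (ratio 20) examples, roughly matching the table below.
for params in (3.3, 15, 32):
    print(f"{params}M params -> ~{token_budget_millions(params):.0f}M tokens")
# 3.3M -> ~66M  (table lists 60M)
# 15M  -> ~300M (table lists 280M)
# 32M  -> ~640M (matches the table)
```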
I also came up with a new pretraining method inspired by UL2; the only difference is that it's 10:48pm, so I don't have the patience to go back through the implementation and check that every detail matches. Calling it Blender in case it turns out too different.
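For a sense of the kind of objective this builds on, here is a rough, simplified mixture-of-denoisers sketch in the spirit of UL2. The mode tokens, corruption rates, and sampling weights are assumptions borrowed from the UL2 paper, and the single-span masking is a simplification; none of it is Blender's actual recipe.

```python
import random

# Assumed UL2-style mixture (illustrative values only; not Blender's actual settings).
# Each entry: (mode token, corruption rate, sampling weight).
MIXTURE = [
    ("[R]", 0.15, 0.50),  # regular span corruption
    ("[S]", 0.25, 0.25),  # sequential / prefix-LM style denoising
    ("[X]", 0.50, 0.25),  # extreme corruption (much heavier masking)
]

def make_example(tokens, rng):
    """Sample a denoiser and turn one token sequence into an (input, target) pair."""
    mode, rate, _ = rng.choices(MIXTURE, weights=[m[2] for m in MIXTURE], k=1)[0]
    if mode == "[S]":
        # Prefix-LM: keep a prefix as the input, predict the continuation.
        cut = rng.randint(1, len(tokens) - 1)
        return [mode] + tokens[:cut], tokens[cut:]
    # Span corruption: mask one contiguous chunk covering ~rate of the sequence.
    n = max(1, int(len(tokens) * rate))
    start = rng.randint(0, len(tokens) - n)
    inputs = [mode] + tokens[:start] + ["<extra_id_0>"] + tokens[start + n:]
    target = ["<extra_id_0>"] + tokens[start:start + n]
    return inputs, target

rng = random.Random(0)
print(make_example("the quick brown fox jumps over the lazy dog".split(), rng))
```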
| Model Name | Parameters | Class | Ratio (Tokens/Param) | Training Tokens | Batch Size (Tokens) | Training Loss |
| --- | --- | --- | --- | --- | --- | --- |
| GerbilLab/Gerbil-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.6644 |
| ... | ... | ... | ... | ... | ... | ... |
| GerbilLab/Gerbil-A-15m | 15m | A-Class | 20 | 280M | 131k | 4.9999 |
| GerbilLab/Gerbil-A-32m | 32m | A-Class | 20 | 640M | 262k | 4.0487 |
| --- | --- | --- | --- | --- | --- | --- |
| GerbilLab/Gerbil-Blender-A-15m | 15m | A-Class | 20 | 280M | 131k | coming soon |

The only application where I can imagine these being useful in the slightest is warm-starting very small encoder-decoder models, or fitting a new scaling law that takes smaller models into account. Every model was trained on a single GPU: an RTX 2060, an RTX 3060, or a T4.
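On the scaling-law point, here is a minimal curve-fit sketch that uses only the three A-Class losses from the table above; the pure power-law form and the extrapolation at the end are illustrative only (a serious fit would use more models and an irreducible-loss term).

```python
import numpy as np

# (parameters, final training loss) for the A-Class models in the table above.
params = np.array([3.3e6, 15e6, 32e6])
loss = np.array([6.6644, 4.9999, 4.0487])

# Crude power-law fit, loss ~ a * N**b, via linear regression in log-log space.
b, log_a = np.polyfit(np.log(params), np.log(loss), deg=1)
a = np.exp(log_a)
print(f"loss ~ {a:.2f} * N**{b:.3f}")

# Extrapolate (very roughly) to a hypothetical 100M-parameter A-Class model.
print("predicted loss at 100M params:", a * 100e6 ** b)
```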