Update README.md
README.md
CHANGED
@@ -20,7 +20,7 @@ The B, C, and D classes are derived from the tokens per model ratio from LLaMA,
 | --- | --- | --- | --- | --- | --- | --- |
 | GerbilLab/Gerbil-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.6644 |
 | GerbilLab/Gerbil-B-3.3m | 3.3m | B-Class | 42 | 126M | 65.5k | 6.0822 |
-| GerbilLab/Gerbil-C-3.3m | 3.3m | C-Class | 76 | 228M | 65.5k |
+| GerbilLab/Gerbil-C-3.3m | 3.3m | C-Class | 76 | 228M | 65.5k | 5.7934 |
 | --- | --- | --- | --- | --- | --- | --- |
 | GerbilLab/Gerbil-A-6.7m | 6.7m | A-Class | 20 | 134M | 131k | 6.074100 |
 | GerbilLab/Gerbil-B-6.7m | 6.7m | B-Class | 42 | 281M | 131k | 5.513200 |
@@ -29,6 +29,6 @@ The B, C, and D classes are derived from the tokens per model ratio from LLaMA,
 | --- | --- | --- | --- | --- | --- | --- |
 | GerbilLab/Gerbil-A-15m | 15m | A-Class | 20 | 280M | 131k | 4.9999 |
 | --- | --- | --- | --- | --- | --- | --- |
-| GerbilLab/Gerbil-A-32m | 32m | A-Class | 20 | 640M | 262K | 4.
+| GerbilLab/Gerbil-A-32m | 32m | A-Class | 20 | 640M | 262K | 4.0487 |
 
 The only application where I can imagine these being useful in the slightest is warm-starting very small encoder-decoder models or fitting a new scaling law that takes into account smaller models. Every model was trained on a singular GPU, either a RTX2060, RTX3060, or a T4.
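The second use case mentioned in the context line above (fitting a scaling law over small models) can be sketched directly from the A-Class rows in the table. The snippet below is a minimal, illustrative example only, not part of this repository: the saturating power-law form L(N) = c + a·N^(-b), the starting values, and the 125M-parameter extrapolation are all assumptions made for the demonstration.

```python
# Sketch: fit a saturating power law to the A-Class (20 tokens-per-parameter)
# results from the README table. Functional form and hyperparameters are
# assumptions for illustration, not anything used to train the Gerbil models.
import numpy as np
from scipy.optimize import curve_fit

params_m = np.array([3.3, 6.7, 15.0, 32.0])          # model size, millions of parameters
loss = np.array([6.6644, 6.0741, 4.9999, 4.0487])    # final losses from the table

def scaling_law(n, a, b, c):
    # Irreducible loss c plus a power-law term that shrinks with model size.
    return c + a * n ** (-b)

(a, b, c), _ = curve_fit(scaling_law, params_m, loss, p0=(10.0, 0.5, 1.0), maxfev=10000)
print(f"L(N) ~= {c:.3f} + {a:.3g} * N^(-{b:.3f})  (N in millions of params)")

# Hypothetical extrapolation to a 125M-parameter A-Class model (no such model exists here).
print(f"Predicted loss at 125M params: {scaling_law(125.0, a, b, c):.3f}")
```

With only four points and three free parameters this is a toy fit; it is meant to show the mechanics of using these results as extra small-scale anchors, not to produce a trustworthy scaling law on its own.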