crumb committed on
Commit 0619eb6 · 1 Parent(s): aa8e9ca

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -21,14 +21,12 @@ The B, C, and D classes are derived from the tokens per model ratio from LLaMA,
  | GerbilLab/Gerbil-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.6644 |
  | GerbilLab/Gerbil-B-3.3m | 3.3m | B-Class | 42 | 126M | 65.5k | 6.0822 |
  | GerbilLab/Gerbil-C-3.3m | 3.3m | C-Class | 76 | 228M | 65.5k | 5.7934 |
- | --- | --- | --- | --- | --- | --- | --- |
  | GerbilLab/Gerbil-A-6.7m | 6.7m | A-Class | 20 | 134M | 131k | 6.074100 |
  | GerbilLab/Gerbil-B-6.7m | 6.7m | B-Class | 42 | 281M | 131k | 5.513200 |
  | GerbilLab/Gerbil-C-6.7m | 6.7m | C-Class | 76 | 509M | 131k | coming soon |
  | GerbilLab/Gerbil-D-6.7m | 6.7m | D-Class | 142 | 951M | 131k | 4.8186 |
- | --- | --- | --- | --- | --- | --- | --- |
  | GerbilLab/Gerbil-A-15m | 15m | A-Class | 20 | 280M | 131k | 4.9999 |
- | --- | --- | --- | --- | --- | --- | --- |
  | GerbilLab/Gerbil-A-32m | 32m | A-Class | 20 | 640M | 262K | 4.0487 |
+ | --- | --- | --- | --- | --- | --- | --- |

  The only application where I can imagine these being useful in the slightest is warm-starting very small encoder-decoder models or fitting a new scaling law that takes into account smaller models. Every model was trained on a singular GPU, either a RTX2060, RTX3060, or a T4.
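For quickly inspecting any of the checkpoints listed in the table, a minimal sketch along these lines should work, assuming each repo ships standard `transformers` config and weights so the Auto classes can resolve the architecture (the chosen model ID and prompt below are illustrative, not prescribed by this README):

```python
# Minimal sketch: load one Gerbil checkpoint from the Hub and run a single
# forward pass. Assumes the repo is loadable via transformers' Auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GerbilLab/Gerbil-A-3.3m"  # any model name from the table above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The gerbil ran", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

The same pattern can be used to warm-start a larger model or to collect loss numbers when fitting a small-model scaling law, as suggested above.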