Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ Who needs em, we all have em, they're just like us. Unusable models, compute opt
 
 The B, C, and D classes are derived from the tokens-per-parameter ratio of LLaMA, as LLaMA 65B is nearly Chinchilla-optimal at roughly 21 training tokens per parameter. Stepping down the model sizes for each training-set size gives us these classes.
 
-
+Mixer models are trained equally on fill-in-the-middle, causal language modelling, and masked language modelling tasks.
 
 | Model Name | Parameters | Class | Ratio (tokens/param) | Tokens | Batch Size (Tokens) | Training Loss |
 | --- | --- | --- | --- | --- | --- | --- |
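
For concreteness, the class arithmetic above works out as follows. A minimal sketch using only the publicly reported LLaMA 65B figures (65B parameters, ~1.4T training tokens); the `tokens_for` helper is illustrative, not part of this repo:

```python
# Worked example of the tokens-per-parameter ratio behind the classes.
# Figures are the publicly reported LLaMA 65B numbers; `tokens_for` is
# a hypothetical helper for illustration only.

def tokens_for(params: float, ratio: float = 21.0) -> float:
    """Token budget implied by a tokens-per-parameter ratio."""
    return params * ratio

llama_65b_params = 65e9
llama_65b_tokens = 1.4e12

# LLaMA 65B's actual ratio, ~21.5 tokens per parameter:
print(llama_65b_tokens / llama_65b_params)  # 21.53...

# Budget a smaller model at the same ratio, e.g. 1B parameters:
print(tokens_for(1e9))  # 2.1e10, i.e. a 21B-token training set
```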
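And a rough sketch of what training "equally" on the three objectives could look like at the data level. This is an assumption about the setup, not this repo's pipeline; the objective names and `make_*` helpers are hypothetical, and FIM sentinel tokens are omitted for brevity:

```python
import random

# Hypothetical per-example objective mixer: each example is routed to
# one of the three objectives with equal probability, so the model
# sees FIM, causal LM, and masked LM in equal measure on average.

OBJECTIVES = ("fill_in_middle", "causal_lm", "masked_lm")

def make_fim_example(tokens):
    # Split into prefix/middle/suffix and rearrange as
    # <prefix><suffix><middle> so the model infills the middle span
    # (sentinel tokens between segments omitted for brevity).
    a, b = sorted(random.sample(range(1, len(tokens)), 2))
    return tokens[:a] + tokens[b:] + tokens[a:b]

def make_causal_example(tokens):
    # Plain left-to-right next-token prediction: sequence as-is.
    return tokens

def make_masked_example(tokens, mask_id=0, p=0.15):
    # BERT-style masking: replace ~15% of positions with a mask token.
    return [mask_id if random.random() < p else t for t in tokens]

def mix_objective(tokens):
    objective = random.choice(OBJECTIVES)  # uniform: equal thirds
    if objective == "fill_in_middle":
        return objective, make_fim_example(tokens)
    if objective == "causal_lm":
        return objective, make_causal_example(tokens)
    return objective, make_masked_example(tokens)

print(mix_objective(list(range(10))))
```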