crumb committed
Commit 7bfaed9 · 1 Parent(s): 26c506a

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ Who needs em, we all have em, they're just like us. Unusable models, compute opt
 
 The B, C, and D classes are derived from the tokens per model ratio from LLaMA, as LLaMA 65B is nearly Chinchilla-optimal with a ratio of 21 x Million Params tokens in training. Descending down the model sizes per training set for each model gives us these classes.
 
-I also come up with a new pretraining method inspired by UL2, the only difference is it's 10:48pm so I don't have the patience to look at the implementation to see if every detail is correct. Calling it Blender in case it's too different. I think R-Denoising is wrong but I'm too tired to read.
+Mixer models are trained equally in fill-in-the-middle, causal modelling, and masked language modelling tasks.
 
 | Model Name | Parameters | Class | Ratio | Tokens | Batch Size (Tokens) | Training Loss |
 | --- | --- | --- | --- | --- | --- | --- |
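The added README line describes an equal three-way objective mix (fill-in-the-middle, causal modelling, masked language modelling). Below is a minimal sketch of how such a per-example mix could be sampled; the sentinel strings, 15% mask rate, and split logic are illustrative assumptions, not the actual Mixer training code behind this commit.

```python
# Sketch only: sample one of three objectives per example with equal probability.
# Sentinel tokens and masking rate are placeholders, not the repo's real values.
import random

FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX = "<fim_prefix>", "<fim_middle>", "<fim_suffix>"
MASK_TOKEN = "<mask>"

def format_example(tokens: list[str]) -> list[str]:
    """Return a training sequence formatted for one randomly chosen objective."""
    objective = random.choice(["fim", "causal", "mlm"])  # equal 1/3 weighting

    if objective == "causal":
        # Plain left-to-right language modelling: sequence is left unchanged.
        return tokens

    if objective == "fim":
        # Split into prefix / middle / suffix and move the middle to the end,
        # so the model learns to infill given the surrounding context.
        i, j = sorted(random.sample(range(1, len(tokens)), 2))
        prefix, middle, suffix = tokens[:i], tokens[i:j], tokens[j:]
        return [FIM_PREFIX, *prefix, FIM_SUFFIX, *suffix, FIM_MIDDLE, *middle]

    # Masked language modelling: replace ~15% of tokens with a mask sentinel.
    return [MASK_TOKEN if random.random() < 0.15 else t for t in tokens]

# Example usage:
print(format_example("the quick brown fox jumps over the lazy dog".split()))
```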