Update README.md
README.md (CHANGED)
@@ -15,7 +15,7 @@ The model is a good building block for any down-stream task requiring autoregres
Along with the model, we also provide a tokenizer (vocab and merges) with a vocab size of 50257 that was used during the pre-training phase. It is the byte-level BPE tokenizer used in the original paper and was trained on the whole 5 GB train set.
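Since the tokenizer is shipped as standard GPT-2-style vocab and merges files, it should be loadable with the Hugging Face `tokenizers` library. A minimal sketch, assuming the files use the usual `vocab.json` / `merges.txt` names (adjust to whatever this repository actually contains):

```python
from tokenizers import ByteLevelBPETokenizer

# Assumed file names; point these at the vocab and merges files from this repository.
tokenizer = ByteLevelBPETokenizer("vocab.json", "merges.txt")

enc = tokenizer.encode("Dnes je krásný den.")  # a short Czech sample sentence
print(enc.tokens)                              # byte-level BPE pieces
print(enc.ids)                                 # ids into the 50257-item vocabulary
print(tokenizer.decode(enc.ids))               # round-trips back to the original text
```

The same files can also be consumed by `transformers` (e.g. `GPT2TokenizerFast`) when the model is used from that library.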
# Training results
-The model's perplexity on a 250 MB random slice of csTenTen17 dataset is 42.12
+The model's perplexity on a 250 MB random slice of the csTenTen17 dataset is **42.12**. This value is unfortunately not directly comparable to any other model, since there are no competing Czech autoregressive models yet (and comparison with models for other languages is meaningless because of differences in tokenization and test data).
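For context, perplexity here is the exponentiated average token-level cross-entropy over the evaluation text. The following is only an illustrative sketch of how such a number can be computed with `transformers`; the model ID and evaluation file are placeholders, and the chunking details are not necessarily those used for the reported 42.12:

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "path/to/model-repo"   # placeholder, not the actual model ID
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id).eval()

text = open("eval_slice.txt", encoding="utf-8").read()   # placeholder evaluation file
ids = tokenizer(text, return_tensors="pt").input_ids

max_len = model.config.n_positions   # context window, 1024 for GPT-2-sized models
nll, n_tokens = 0.0, 0
with torch.no_grad():
    for start in range(0, ids.size(1) - 1, max_len):
        chunk = ids[:, start:start + max_len]
        # With labels == input_ids the model returns the mean cross-entropy
        # over the predicted tokens of this chunk.
        loss = model(chunk, labels=chunk).loss
        nll += loss.item() * (chunk.size(1) - 1)
        n_tokens += chunk.size(1) - 1

print("perplexity:", math.exp(nll / n_tokens))
```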
# Running the predictions
The repository includes a simple Jupyter Notebook that can help with the first steps when using the model. (TODO)
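Until the notebook is added, predictions can presumably be run directly with `transformers` along the following lines; the model ID is a placeholder and the sampling parameters are only illustrative:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "path/to/model-repo"   # placeholder, not the actual model ID
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id).eval()

prompt = "Praha je"   # Czech prompt: "Prague is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,        # sample instead of greedy decoding
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```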