---
language: cs
license: cc-by-nc-sa-4.0
datasets:
- csTenTen17
---

# CzeGPT-2
CzeGPT-2 is a Czech version of the GPT-2 language model by OpenAI with an LM head on top. The model has the same architectural dimensions as GPT-2 small (12 layers, 12 heads, a 1024-token input/output context, and 768-dimensional embedding vectors), resulting in 124M trainable parameters. It was trained on a 5 GB slice of the cleaned csTenTen17 dataset.

The model is a good building block for any downstream task requiring autoregressive text generation.
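
As a quick illustration, below is a minimal sketch of loading the model and generating text with the Hugging Face `transformers` library; the repository identifier and the sampling parameters are placeholders rather than values specified by this card.

```python
# Minimal sketch: load CzeGPT-2 and generate text with transformers.
# The model identifier below is a placeholder; replace it with the actual
# Hugging Face repository path of this model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "CzeGPT-2"  # placeholder: substitute the real repository path

tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

prompt = "Praha je hlavní město"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation; the sampling parameters are illustrative defaults.
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```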

# Tokenizer
Along with the model, we also provide the tokenizer (vocab and merges) with a vocabulary size of 50,257 that was used during the pre-training phase. It is the byte-level BPE tokenizer used in the original GPT-2 paper and was trained on the whole 5 GB train set.
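
If you want to load the tokenizer directly from the provided files, a minimal sketch follows; it assumes the files use the standard GPT-2 naming (`vocab.json` and `merges.txt`), so adjust the paths to whatever the repository actually ships.

```python
# Minimal sketch: build the byte-level BPE tokenizer from vocab and merges
# files. File names are assumptions based on GPT-2 conventions.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer(vocab_file="vocab.json", merges_file="merges.txt")

ids = tokenizer.encode("Dobrý den, jak se máte?")
print(ids)
print(tokenizer.decode(ids))
```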

# Training results
The model reaches a perplexity of 42.12 on a 250 MB random slice of the csTenTen17 dataset. This number is not directly comparable to any other model, since there are no competing Czech models yet, and comparison with models for other languages is meaningless because of different tokenization and test data.
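
For reference, token-level perplexity can be computed as the exponential of the mean cross-entropy loss, as in the rough sketch below; this is an illustration only, not the exact evaluation setup behind the number reported above.

```python
# Rough sketch of token-level perplexity for a single text.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "CzeGPT-2"  # placeholder: substitute the real repository path
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)
model.eval()

text = "Ukázkový český text pro výpočet perplexity."
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    # Passing the inputs as labels makes the model return the mean
    # cross-entropy loss over the predicted tokens.
    loss = model(input_ids, labels=input_ids).loss

print("perplexity:", math.exp(loss.item()))
```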

# Running the predictions
The repository includes a simple Jupyter Notebook that can help with the first steps when using the model.