hajekad committed on
Commit 002da38 · 1 Parent(s): d71564a

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -8,13 +8,14 @@ datasets:
 
 # CzeGPT-2_summarizer
 CzeGPT-2_summarizer is a Czech summarizer built upon the <a href="https://huggingface.co/MU-NLPC/CzeGPT-2">CzeGPT-2</a> model. The model has the same architectural dimensions as GPT-2 small (12 layers, 12 heads, a 1024-token input/output context, and 768-dimensional embedding vectors), resulting in 124M trainable parameters. It was fine-tuned and evaluated on the <a href="https://aclanthology.org/L18-1551.pdf">SumeCzech</a> summarization dataset, which contains about 1M Czech news articles.
+The model is trained to keep generating the summary for as long as it is allowed (or until it runs out of sequence length). This leaves room for developers to set their own length constraints.
 
 ## Tokenizer
 Alongside the model, we also provide a Czech-trained tokenizer (vocab and merges) with a vocabulary size of 50257 that was used during both pre-training and fine-tuning. It is the byte-level BPE tokenizer used in the original GPT-2 paper.
 
 ## Training results
 The model was evaluated on the *test* and *ood-test* partitions of the SumeCzech dataset and compared to the best summarizers evaluated on this benchmark so far (results taken from <a href="https://ufal.mff.cuni.cz/sumeczech">here</a>).
-The abstract generator yields three sentences, which roughly corresponds to the 40-token average length of abstracts. This summary length was also confirmed by tuning on the validation set.
+The abstract generator yields three sentences, which roughly corresponds to the 40-token average length of abstracts in SumeCzech. This summary length was also confirmed by tuning on the validation set.
 
 We manage to reach state-of-the-art on most metrics and
 
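
To make the length-control point added above concrete, here is a minimal usage sketch. It assumes the summarizer is published on the Hugging Face Hub under a repo id like `MU-NLPC/CzeGPT-2_summarizer` and that the article text is fed directly as the prompt (both are assumptions, not confirmed by this card), and it uses the standard `transformers` causal-LM API. The ~40-token generation budget mirrors the average abstract length mentioned in the Training results section.

```python
# Minimal sketch: constraining summary length at generation time.
# Assumptions (not confirmed by this card): the repo id below and the
# "article text as plain prompt" input format used during fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MU-NLPC/CzeGPT-2_summarizer"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)  # byte-level BPE, vocab size 50257
model = AutoModelForCausalLM.from_pretrained(model_id)

article = "..."  # Czech news article to summarize

# Leave room in the 1024-token context window for the generated summary.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024 - 64)

# The model keeps generating until it is stopped, so the caller sets the budget;
# ~40 new tokens matches the average abstract length reported for SumeCzech.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    num_beams=4,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,
)

# Drop the prompt tokens and keep only the newly generated summary.
summary = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```

Raising or lowering `max_new_tokens` (or adding custom stopping criteria) is where developers impose their own constraints, as the new README line describes.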