SumeCzech Corpus
These are the accompanying materials of the paper:
@inproceedings{straka-etal-2018-sumeczech,
title = "{S}ume{C}zech: Large {C}zech News-Based Summarization Dataset",
author = "Straka, Milan and Mediankin, Nikita and Kocmi, Tom and
{\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Hude{\v{c}}ek, Vojt{\v{e}}ch and Haji{\v{c}}, Jan",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC}-2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
}
SumeCzech Download Script
To download the SumeCzech dataset, use the downloader.py script.
The script has several dependencies (some of them pinned to an exact version) listed in requirements.txt; you can install them using pip3 install -r requirements.txt.
You can start the script using python3 downloader.py. By default, 16 parallel processes are used to download the data (you can override this number using the --parallel N option).
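For example, a complete setup and a download with fewer parallel processes could look as follows (the value 8 is only an illustrative choice for the documented --parallel N option):

```
pip3 install -r requirements.txt
python3 downloader.py --parallel 8
```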
During download, the MD5 hash of every document's headline, abstract, and text is checked against the official one, which allows detecting possible errors during download and extraction. Although not recommended, the check can be bypassed using the --no_verify_md5 option.
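As an illustration only (this is not the actual downloader.py code; the field names, text normalization, and digest format below are assumptions), such a per-field check boils down to comparing MD5 hex digests:

```python
import hashlib

def md5_matches(field_text, expected_md5):
    """Check one field (headline, abstract, or text) against its official MD5.

    Assumes UTF-8 encoding and lowercase hexadecimal digests; downloader.py
    may normalize the extracted text differently before hashing.
    """
    return hashlib.md5(field_text.encode("utf-8")).hexdigest() == expected_md5.lower()

# Hypothetical usage with a downloaded document and its official hashes:
# for field in ("headline", "abstract", "text"):
#     if not md5_matches(document[field], official_md5[field]):
#         raise ValueError(f"MD5 mismatch in {field}: likely a download/extraction error")
```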
The validated documents are saved during download. If the download script is interrupted and run again, it will reuse the already processed documents and only download new ones.
Changelog:
13 Feb 2018: The original download script was released.
25 Feb 2023: An update with the following changes:
- use the new domain https://data.commoncrawl.org for the CommonCrawl downloads;
- support Python 3.10 and 3.11, where collections.Callable was removed.
SumeCzech ROUGE_RAW Evaluation Metric
The RougeRAW metric is implemented in the rouge_raw.py module, which can compute the RougeRAW-1, RougeRAW-2, and RougeRAW-L metrics either for a single pair of documents or for a pair of corpora.
Unfortunately, the original paper used slightly different tokenization than the released rouge_raw.py module. Therefore, we provide here the results of the systems from the paper re-evaluated using the rouge_raw.py module.
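As a rough illustration of what the reported precision (P), recall (R), and F-score (F) numbers measure, the sketch below computes a plain unigram overlap P/R/F over raw (unstemmed) tokens. This is a standalone example, not the rouge_raw.py implementation, and its naive whitespace tokenization is exactly the kind of detail the module handles differently:

```python
from collections import Counter

def unigram_prf(gold_tokens, system_tokens):
    """Precision/recall/F1 of clipped unigram overlap on raw (unstemmed) tokens."""
    overlap = sum((Counter(gold_tokens) & Counter(system_tokens)).values())
    precision = overlap / len(system_tokens) if system_tokens else 0.0
    recall = overlap / len(gold_tokens) if gold_tokens else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Naive whitespace tokenization, for demonstration only:
print(unigram_prf("vláda schválila nový rozpočet".split(),
                  "vláda předložila rozpočet".split()))
```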
Results for abstract-headline on test
Method      RougeRAW-1 (P / R / F)   RougeRAW-2 (P / R / F)   RougeRAW-L (P / R / F)
first       13.9 / 23.6 / 16.5       04.1 / 07.4 / 05.0       12.2 / 20.7 / 14.5
random      11.0 / 17.8 / 12.8       02.6 / 04.5 / 03.1       09.6 / 15.5 / 11.1
textrank    13.3 / 22.8 / 15.9       03.7 / 06.8 / 04.6       11.6 / 19.9 / 13.8
t2t         20.2 / 15.9 / 17.2       06.7 / 05.1 / 05.6       18.6 / 14.7 / 15.8
Results for abstract-headline on oodtest
Method      RougeRAW-1 (P / R / F)   RougeRAW-2 (P / R / F)   RougeRAW-L (P / R / F)
first       13.3 / 26.5 / 16.7       04.7 / 10.0 / 06.0       11.6 / 23.3 / 14.7
random      10.6 / 20.7 / 13.1       03.2 / 06.9 / 04.1       09.3 / 18.2 / 11.5
textrank    12.8 / 25.9 / 16.3       04.5 / 09.6 / 05.7       11.3 / 22.7 / 14.2
t2t         19.4 / 15.1 / 16.3       07.1 / 05.2 / 05.7       18.1 / 14.1 / 15.2
Results for text-headline on test
Method      RougeRAW-1 (P / R / F)   RougeRAW-2 (P / R / F)   RougeRAW-L (P / R / F)
first       07.4 / 13.5 / 08.9       01.1 / 02.2 / 01.3       06.5 / 11.7 / 07.7
random      05.9 / 10.3 / 06.9       00.5 / 01.0 / 00.6       05.2 / 08.9 / 06.0
textrank    06.0 / 16.5 / 08.3       00.8 / 02.3 / 01.1       05.0 / 13.8 / 06.9
t2t         08.8 / 07.0 / 07.5       00.8 / 00.6 / 00.7       08.1 / 06.5 / 07.0
Results for text-headline on oodtest
Method      RougeRAW-1 (P / R / F)   RougeRAW-2 (P / R / F)   RougeRAW-L (P / R / F)
first       06.7 / 13.6 / 08.3       01.3 / 02.8 / 01.6       05.9 / 12.0 / 07.4
random      05.2 / 10.0 / 06.3       00.6 / 01.4 / 00.8       04.6 / 08.9 / 05.6
textrank    05.8 / 16.9 / 08.1       01.1 / 03.4 / 01.5       05.0 / 14.5 / 06.9
t2t         06.3 / 05.1 / 05.5       00.5 / 00.4 / 00.4       05.9 / 04.8 / 05.1
Results for text-abstract on test
Method      RougeRAW-1 (P / R / F)   RougeRAW-2 (P / R / F)   RougeRAW-L (P / R / F)
first       13.1 / 17.9 / 14.4       01.9 / 02.8 / 02.1       08.8 / 12.0 / 09.6
random      11.7 / 15.5 / 12.7       01.2 / 01.7 / 01.3       07.7 / 10.3 / 08.4
textrank    11.1 / 20.8 / 13.8       01.6 / 03.1 / 02.0       07.1 / 13.4 / 08.9
t2t         13.2 / 10.5 / 11.3       01.2 / 00.9 / 01.0       10.2 / 08.1 / 08.7
Results for text-abstract on oodtest
Method      RougeRAW-1 (P / R / F)   RougeRAW-2 (P / R / F)   RougeRAW-L (P / R / F)
first       11.1 / 17.1 / 12.7       01.6 / 02.7 / 01.9       07.6 / 11.7 / 08.7
random      10.1 / 15.1 / 11.4       01.0 / 01.7 / 01.2       06.9 / 10.3 / 07.8
textrank    09.8 / 19.9 / 12.5       01.5 / 03.3 / 02.0       06.6 / 13.3 / 08.4
t2t         12.5 / 09.4 / 10.3       00.8 / 00.6 / 00.6       09.8 / 07.5 / 08.1