IlyaGusev committed
Commit
eb0f2f0
1 Parent(s): 70cad03

Update README.md

Files changed (1)
  1. README.md +20 -2
README.md CHANGED
@@ -59,11 +59,29 @@ task_ids:
 
 ### Dataset Summary
 
- Dataset for automatic summarization of Russian news. News and their summaries are from the [Gazeta](gazeta.ru) website. Summaries were parsed as the content of an HTML tag with “description” property. Additional selection of good summaries was performed. The resulting dataset consists of 63435 text-summary pairs. To form training, validation, and test datasets, these pairs were sorted by time. The first 52400 pairs are the training dataset, the proceeding 5265 pairs are the validation dataset, and the remaining 5770 pairs are the test dataset.
+ Dataset for automatic summarization of Russian news. News and their summaries are from the [Gazeta](gazeta.ru) website. Summaries were parsed as the content of an HTML tag with “description” property. Additional selection of good summaries was performed. There are two versions of this dataset.
+
+ Loading version 1.0:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset('IlyaGusev/gazeta', script_version="v1.0")
+ ```
+
+ Loading version 2.0:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset('IlyaGusev/gazeta', script_version="v2.0")
+ ```
 
 ### Supported Tasks and Leaderboards
 
- [More Information Needed]
+ Leaderboard on Papers With Code: [text-summarization-on-gazeta](https://paperswithcode.com/sota/text-summarization-on-gazeta).
+
+ Please use the original [evaluation script](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py) with the same parameters. Example:
+ ```
+ python3 evaluate.py --predicted-path predictions.txt \
+     --gold-path targets.txt --language ru --tokenize-after --lower
+ ```
 
 ### Languages
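As a quick illustration of how the loading snippet and the evaluation command added in this commit fit together, here is a minimal sketch: it loads version 2.0, prints the split sizes described in the dataset summary, and writes a `targets.txt` reference file for `evaluate.py`. The split names, the `summary` field name, and the one-summary-per-line file format are assumptions not stated in this commit; `script_version` follows the README snippet, although newer `datasets` releases use `revision` instead.

```python
# Minimal sketch (not part of this commit): load Gazeta and prepare input
# files for the evaluation command shown above. Split names, the "summary"
# field, and the one-summary-per-line file format are assumptions.
from datasets import load_dataset

# script_version matches the README snippet; recent `datasets` versions
# expect `revision` instead.
dataset = load_dataset('IlyaGusev/gazeta', script_version="v2.0")

# The pairs were sorted by time before splitting, so each split covers a
# contiguous time range; print the sizes to sanity-check the download.
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))

# Write one reference summary per line; predictions.txt would be produced by
# whichever summarization model is being evaluated.
with open("targets.txt", "w", encoding="utf-8") as f:
    for example in dataset["test"]:
        f.write(example["summary"].replace("\n", " ") + "\n")
```

With `targets.txt` written this way, the `evaluate.py` command from the diff can be run unchanged once a matching `predictions.txt` has been generated.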