jordiclive committed · Commit b49a36a · 1 parent: 38da7cc
Update README.md

README.md CHANGED
@@ -1,11 +1,82 @@

Previous README.md:

---
task_categories:
- summarization
---

Dataset collated from https://github.com/tscheepers/Wikipedia-Summary-Dataset.

Updated README.md:

---
language:
- en
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
  task: summarization
  task_id: summarization
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    document: text
    summary: target
  metrics:
  - type: rouge
    name: Rouge
dataset_info:
  features:
  - name: document
    dtype: string
  - name: summary
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: withDescription
  - name: withoutDescription
---

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:[email protected])
- **Size of downloaded dataset files:** 245.38 MB
- **Size of the generated dataset:** 507.60 MB
- **Total amount of disk used:** 752.98 MB

### Dataset Summary

There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.

This dataset can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September 2017.

It differs from the regular Wikipedia dump, and from the datasets that can be created with gensim, in that it contains the extracted summaries rather than the entire unprocessed page body. This is useful if you want to work with the smaller, more concise, and more definitional summaries, or if you simply need a smaller but still diverse dataset for efficient training under resource constraints.

A summary or introduction of an article is everything starting from the page title up to the content outline.

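A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub; the repo id `jordiclive/wikipedia-summary-dataset` below is a placeholder guess based on the committer's namespace, not something stated on this card. Per the metadata above, loading should expose the `withDescription` and `withoutDescription` splits, each with `document`, `summary`, and `id` string features:

```python
# Minimal sketch of loading and inspecting the dataset with the `datasets` library.
# NOTE: the repo id below is an assumption; replace it with the actual Hub id of this repository.
from datasets import load_dataset

dataset = load_dataset("jordiclive/wikipedia-summary-dataset")  # assumed repo id

# The card's metadata declares two splits and three string features.
print(dataset)
for split_name, split in dataset.items():
    print(split_name, split.num_rows, split.column_names)
```
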
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.

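As a sketch of how these fields might be used, the snippet below reads one record and renames the columns to match the `col_mapping` declared in the card's `train-eval-index` (`document` to `text`, `summary` to `target`). The repo id and the `withDescription` split name are assumptions taken from the metadata above, not guarantees of this card:

```python
# Hedged sketch: access one record and remap columns per the card's col_mapping.
from datasets import load_dataset

# Assumed repo id and split name (see the YAML metadata above).
ds = load_dataset("jordiclive/wikipedia-summary-dataset", split="withDescription")

example = ds[0]
print(example["id"])              # article identifier (string)
print(example["summary"])         # summary text (string)
print(example["document"][:200])  # start of the full document text (string)

# Rename columns to the names used by the train-eval-index col_mapping.
ds = ds.rename_column("document", "text")
ds = ds.rename_column("summary", "target")
print(ds.column_names)  # e.g. ['text', 'target', 'id']
```
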
### Citation Information

```
@mastersthesis{scheepers2017compositionality,
  author  = {Scheepers, Thijs},
  title   = {Improving the Compositionality of Word Embeddings},
  school  = {Universiteit van Amsterdam},
  year    = {2017},
  month   = {11},
  address = {Science Park 904, Amsterdam, Netherlands}
}
```