Update README.md
README.md
CHANGED
@@ -41,7 +41,13 @@ More information needed
 
 ## Intended uses & limitations
 
-
+The Extreme Summarization (XSum) dataset is a dataset for evaluating abstractive single-document summarization systems. The goal is to create a short, one-sentence news summary answering the question "What is the article about?". The dataset consists of 226,711 news articles, each accompanied by a one-sentence summary. The articles were collected from BBC articles (2010 to 2017) and cover a wide variety of domains (e.g., News, Politics, Sports, Weather, Business, Technology, Science, Health, Family, Education, Entertainment and Arts). The official random split contains 204,045 (90%), 11,332 (5%) and 11,334 (5%) documents in the training, validation and test sets, respectively.
+
+T5, or Text-to-Text Transfer Transformer, is a Transformer-based architecture that uses a text-to-text approach: every task, including translation, question answering, and classification, is cast as feeding the model text as input and training it to generate some target text. This allows the same model, loss function, hyperparameters, etc. to be used across a diverse set of tasks. The changes compared to BERT include:
+
+- adding a causal decoder to the bidirectional architecture.
+- replacing the fill-in-the-blank cloze task with a mix of alternative pre-training tasks.
+
 
 ## Training and evaluation data
 
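As a quick sanity check on the XSum split sizes quoted in the added paragraph (a minimal sketch; the numbers come from the text above, not recomputed from the dataset itself):

```python
# Verify that the official XSum split sizes quoted above sum to the
# stated total and match the stated 90/5/5 percentage split.
train, val, test = 204_045, 11_332, 11_334
total = train + val + test

assert total == 226_711
shares = [round(n / total * 100) for n in (train, val, test)]
print(total, shares)  # → 226711 [90, 5, 5]
```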
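The text-to-text framing described above can be sketched in a few lines. The `summarize:` and `translate English to German:` prefixes are conventions from the T5 paper; the helper function itself is hypothetical, for illustration only:

```python
# Hypothetical helper: T5 casts every task as text generation by
# prepending a plain-text task prefix to the input string.
def to_text_to_text(task_prefix: str, text: str) -> str:
    return f"{task_prefix}: {text}"

# The same model, loss, and hyperparameters then serve different
# tasks purely via the prefix on the input text.
print(to_text_to_text("summarize", "Article body ..."))
# → summarize: Article body ...
print(to_text_to_text("translate English to German", "Hello"))
# → translate English to German: Hello
```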