Datasets: GEM / xlsum

Files changed (1)
  1. README.md +9 -49
README.md CHANGED
@@ -3,63 +3,21 @@ annotations_creators:
  - none
  language_creators:
  - unknown
- language:
- - am
- - ar
- - az
- - bn
- - my
- - zh
- - en
- - fr
- - gu
- - ha
- - hi
- - ig
- - id
- - ja
- - rn
- - ko
- - ky
- - mr
- - ne
- - om
- - ps
- - fa
- - gpe
- - pt
- - pa
- - ru
- - gd
- - sr
- - rsb
- - si
- - so
- - es
- - sw
- - ta
- - te
- - th
- - ti
- - tr
- - uk
- - ur
- - uz
- - vi
- - cy
- - yo
- license:
+ languages:
+ - unknown
+ licenses:
  - cc-by-nc-sa-4.0
  multilinguality:
  - unknown
+ pretty_name: xlsum
  size_categories:
  - unknown
  source_datasets:
  - original
  task_categories:
  - summarization
- task_ids: []
- pretty_name: xlsum
+ task_ids:
+ - unknown
  ---

  # Dataset Card for GEM/xlsum
@@ -692,4 +650,6 @@ The dataset is limited to news domain only. Hence it wouldn't be advisable to us

  <!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public. -->
  <!-- scope: microscope -->
- ROUGE evaluates the quality of the summary as a whole by considering up to 4-gram overlaps. Therefore, in an article about India if the word "India" in the generated summary gets replaced by "Pakistan" due to model hallucination, the overall score wouldn't be reduced significantly, but the entire meaning could get changed.
+ ROUGE evaluates the quality of the summary as a whole by considering up to 4-gram overlaps. Therefore, in an article about India if the word "India" in the generated summary gets replaced by "Pakistan" due to model hallucination, the overall score wouldn't be reduced significantly, but the entire meaning could get changed.
+
+
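
The ROUGE caveat in the card text above can be made concrete. Below is a minimal sketch, assuming the `rouge-score` package (`pip install rouge-score`) and two invented one-sentence summaries; it only illustrates the point made in the card text and is not part of the dataset card or of this change.

```python
# Illustration only: a single hallucinated entity swap ("India" -> "Pakistan")
# barely moves ROUGE, even though the meaning of the summary flips.
# Assumes the `rouge-score` package: pip install rouge-score
from rouge_score import rouge_scorer

# Invented example sentences (not taken from the dataset).
reference = "India reported record monsoon rainfall across several northern states this week."
faithful = "India saw record monsoon rainfall in several northern states this week."
hallucinated = "Pakistan saw record monsoon rainfall in several northern states this week."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

for name, candidate in [("faithful", faithful), ("hallucinated", hallucinated)]:
    scores = scorer.score(reference, candidate)
    report = ", ".join(f"{k}={v.fmeasure:.3f}" for k, v in scores.items())
    print(f"{name:13s} {report}")

# The two candidates differ by one token, so their ROUGE scores are nearly
# identical -- the metric cannot tell that the second one is factually wrong.
```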