yonatanko committed
Commit 1167c78 · verified · 1 Parent(s): 4278a5c

Update README.md

Files changed (1):
  1. README.md +131 -80
README.md CHANGED
@@ -11,137 +11,188 @@ tags:
  size_categories:
  - n<1K
  ---
- # Dataset Card for Relationship Advcie
-
- # Table of Contents
- - [Dataset Description] (#dataset-description)
  ## Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
-
-
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the dataset is intended to be used. -->

- ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

- [More Information Needed]

  ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
  ## Dataset Creation
62
 
63
  ### Curation Rationale
64
 
65
- <!-- Motivation for the creation of this dataset. -->
66
-
67
- [More Information Needed]
68
 
69
  ### Source Data
70
 
71
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
72
-
73
- #### Data Collection and Processing
74
-
75
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
76
 
77
- [More Information Needed]
78
 
79
- #### Who are the source data producers?
80
 
81
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
82
 
83
- [More Information Needed]
84
 
85
- ### Annotations [optional]
86
 
87
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
88
 
89
  #### Annotation process
90
 
91
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
92
-
93
- [More Information Needed]
94
 
95
  #### Who are the annotators?
96
 
97
- <!-- This section describes the people or systems who created the annotations. -->
98
-
99
- [More Information Needed]
100
-
101
- #### Personal and Sensitive Information
102
-
103
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
104
 
105
- [More Information Needed]
106
 
107
- ## Bias, Risks, and Limitations
108
 
109
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
110
 
111
- [More Information Needed]
112
 
113
- ### Recommendations
114
 
115
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
116
 
117
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
118
 
119
- ## Citation [optional]
120
 
121
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
122
 
123
- **BibTeX:**
124
 
125
- [More Information Needed]
126
 
127
- **APA:**
128
 
129
- [More Information Needed]
130
 
131
- ## Glossary [optional]
132
 
133
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
134
 
135
- [More Information Needed]
136
 
137
- ## More Information [optional]
138
 
139
- [More Information Needed]
140
 
141
- ## Dataset Card Authors [optional]
142
 
143
- [More Information Needed]
144
 
145
- ## Dataset Card Contact

- [More Information Needed]
+ # Dataset Card for ELI5
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)

  ## Dataset Description

+ - **Homepage:** [ELI5 homepage](https://facebookresearch.github.io/ELI5/explore.html)
+ - **Repository:** [ELI5 repository](https://github.com/facebookresearch/ELI5)
+ - **Paper:** [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190)
+ - **Point of Contact:** [Yacine Jernite](mailto:[email protected])

+ ### Dataset Summary

+ The ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subset, science in its [r/askscience](https://www.reddit.com/r/askscience/) subset, and history in its [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subset.

+ ### Supported Tasks and Leaderboards

+ - `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. Model performance is measured by how closely its output matches the reference, using the [ROUGE](https://huggingface.co/metrics/rouge) metric. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation).
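For intuition about the metric, ROUGE-L scores a candidate answer by the longest common subsequence (LCS) it shares with the reference. A minimal pure-Python sketch of the F1 variant with whitespace tokenization (an illustration only; use an established ROUGE implementation such as the Hugging Face metric linked above for real evaluation):

```python
def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if tok_a == tok_b else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0; the 0.149 figure above gives a sense of how far current long-form generations are from the human references.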

+ ### Languages

+ The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. The associated BCP-47 code is `en`.

  ## Dataset Structure

+ ### Data Instances
+
+ A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.
+
+ An example from the ELI5 test set looks as follows:
+ ```
+ {'q_id': '8houtx',
+ 'title': 'Why does water heated to room temperature feel colder than the air around it?',
+ 'selftext': '',
+ 'document': '',
+ 'subreddit': 'explainlikeimfive',
+ 'answers': {'a_id': ['dylcnfk', 'dylcj49'],
+ 'text': ["Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder.",
+ "Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold).\n\nWhen you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you."],
+ 'score': [5, 2]},
+ 'title_urls': {'url': []},
+ 'selftext_urls': {'url': []},
+ 'answers_urls': {'url': []}}
+ ```
+
+ ### Data Fields
+
+ - `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps
+ - `subreddit`: one of `explainlikeimfive`, `askscience`, or `AskHistorians`, indicating which subreddit the question came from
+ - `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
+ - `title_urls`: list of the extracted URLs; the `n`th element of the list was replaced by `URL_n`
+ - `selftext`: either an empty string or an elaboration of the question
+ - `selftext_urls`: similar to `title_urls` but for `selftext`
+ - `answers`: a list of answers, where each answer has:
+   - `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comment dumps
+   - `text`: the answer text with the URLs normalized
+   - `score`: the number of upvotes the answer had received when the dumps were created
+ - `answers_urls`: a list of the extracted URLs; all answers use the same list, and the numbering of the normalization tokens continues across answer texts
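Because the answer texts share one URL list, a small helper can splice the extracted URLs back into the normalized text. This is a sketch under the `URL_n` token convention described above; the function name `restore_urls` is ours, not part of any ELI5 tooling:

```python
import re

def restore_urls(text: str, urls: list[str]) -> str:
    """Replace each URL_n placeholder with the n-th extracted URL.

    Assumes the URL_n convention from the data fields description;
    placeholders with out-of-range indices are left untouched.
    """
    def repl(match: re.Match) -> str:
        n = int(match.group(1))
        return urls[n] if n < len(urls) else match.group(0)

    return re.sub(r"URL_(\d+)", repl, text)
```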
+
+ ### Data Splits
+
+ The data is split into a training, validation, and test set for each of the three subreddits. In order to avoid duplicate questions across sets, the `title` field of each question was ranked by its tf-idf match to its nearest neighbor, and the ones with the smallest values were used in the test and validation sets. The final split sizes are as follows:
+
+ | | Train | Valid | Test |
+ | ----- | ------ | ----- | ---- |
+ | r/explainlikeimfive examples| 272634 | 9812 | 24512|
+ | r/askscience examples | 131778 | 2281 | 4462 |
+ | r/AskHistorians examples | 98525 | 4901 | 9764 |
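The deduplication heuristic above can be sketched in pure Python: build a tf-idf vector per title, score each title by cosine similarity to its nearest neighbor, and route the least-similar titles to the held-out sets. This is our illustrative reconstruction under those stated assumptions, not the authors' actual splitting code:

```python
import math
from collections import Counter

def tfidf_vectors(titles):
    """One tf-idf vector (term -> weight) per whitespace-tokenized title."""
    docs = [t.lower().split() for t in titles]
    df = Counter(term for doc in docs for term in set(doc))
    n = len(docs)
    return [{term: count / len(doc) * math.log(n / df[term])
             for term, count in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

def least_duplicated(titles, k):
    """Indices of the k titles least similar to their nearest neighbor:
    per the heuristic above, the best candidates for validation/test."""
    vecs = tfidf_vectors(titles)
    nearest = [max(cosine(v, w) for j, w in enumerate(vecs) if j != i)
               for i, v in enumerate(vecs)]
    return sorted(range(len(titles)), key=lambda i: nearest[i])[:k]
```

The real dataset applies this idea at the scale of the table above; the brute-force nearest-neighbor search here is quadratic and only meant to convey the selection criterion.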

  ## Dataset Creation

  ### Curation Rationale

+ ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.

  ### Source Data

+ #### Initial Data Collection and Normalization

+ The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).

+ In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from August 2012 to August 2019.
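The score thresholds can be expressed as a simple predicate over the dump records. This is a schematic reconstruction assuming question and answer records carry a `score` field as in the Pushshift dumps; the names `MIN_SCORE` and `keep_question` are ours, not from the authors' pipeline:

```python
MIN_SCORE = 2  # threshold applied to both the question and its answers

def keep_question(question: dict, answers: list[dict]) -> bool:
    """Keep a question only if it scores >= 2 and at least one of its
    answers also scores >= 2, per the filtering rule described above."""
    return (question.get("score", 0) >= MIN_SCORE
            and any(a.get("score", 0) >= MIN_SCORE for a in answers))
```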
+ #### Who are the source language producers?

+ The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits between 2012 and 2019. No further demographic information was available from the data source.

+ ### Annotations

+ The dataset does not contain any additional annotations.

  #### Annotation process

+ [N/A]

  #### Who are the annotators?

+ [N/A]

+ ### Personal and Sensitive Information

+ The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.

+ ## Considerations for Using the Data

+ ### Social Impact of Dataset

+ The purpose of this dataset is to help develop better question answering systems.

+ A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the largest existing models. The task is also thought of as a test-bed for retrieval models, which can show users which source text was used in generating the answer and allow them to confirm the information provided to them.

+ It should be noted, however, that the provided answers were written by Reddit users; this context may be lost if models trained on the dataset are deployed in downstream applications and presented to users without attribution. The specific biases this may introduce are discussed in the next section.

+ ### Discussion of Biases

+ While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).

+ While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.

+ We still note some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/#data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "_racism, sexism, or any other forms of bigotry_". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.

+ We also note that given the audience of the Reddit website, which is more broadly used in the US and Europe, the answers will likely present a Western perspective, which is particularly important to note when dealing with historical topics.

+ ### Other Known Limitations

+ The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth.

+ ## Additional Information

+ ### Dataset Curators

+ The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).

+ ### Licensing Information

+ The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.

+ ### Citation Information

+ ```
+ @inproceedings{eli5_lfqa,
+     author = {Angela Fan and
+               Yacine Jernite and
+               Ethan Perez and
+               David Grangier and
+               Jason Weston and
+               Michael Auli},
+     editor = {Anna Korhonen and
+               David R. Traum and
+               Llu{\'{\i}}s M{\`{a}}rquez},
+     title = {{ELI5:} Long Form Question Answering},
+     booktitle = {Proceedings of the 57th Conference of the Association for Computational
+                  Linguistics, {ACL} 2019, Florence, Italy, July 28 - August 2, 2019,
+                  Volume 1: Long Papers},
+     pages = {3558--3567},
+     publisher = {Association for Computational Linguistics},
+     year = {2019},
+     url = {https://doi.org/10.18653/v1/p19-1346},
+     doi = {10.18653/v1/p19-1346}
+ }
+ ```

+ ### Contributions