Yusuf5 committed
Commit c0408cd · verified · 1 Parent(s): 57db383

Finish Dataset Card

Files changed (1):
README.md +124 -70
README.md CHANGED
@@ -1,21 +1,21 @@
---
license: mit
task_categories:
- - text-generation
- - summarization
- - question-answering
- - text-classification
- - feature-extraction
- - fill-mask
language:
- - en
tags:
- - debate
- - argument
- - argument mining
pretty_name: Open Caselist
size_categories:
- - 1M<n<10M
---

# Dataset Card for OpenCaselist
@@ -32,9 +32,9 @@ A collection of Evidence used in Collegiate and High School debate competitions.

This dataset is a follow-up to [DebateSum](https://huggingface.co/datasets/Hellisotherpeople/DebateSum), increasing its scope and amount of metadata collected.

- It expands the dataset to include evidence used during debate tournaments, rather than just evidence produced during preseason debate "camps." The total amount of evidence is approximetly 20x larger than DebateSum. It currently includes evidence from the Policy, Lincoln Douglas, and Public Forum styles of debate from the years 2013-2024.

- Evidence was deduplicated to detect the use of the same piece of evidence in different debates. Additional metadata about the debates evidence was used in is available in [DebateRounds](https://www.kaggle.com/datasets/yu5uf5/debate-rounds)

The parsing of debate documents was also improved, most notably now detecting highlighted text, which is typically used to denote what parts of the evidence are read out loud during the debate.

@@ -42,14 +42,14 @@ The parsing of debate documents was also improved, most notably now detecting hi
- **Language(s) (NLP):** English
- **License:** MIT

- ### Dataset Sources

<!-- Provide the basic links for the dataset. -->

[openCaselist](https://opencaselist.com/)

- **Repository:** [Data Processing Code and GraphQL Search API](https://github.com/OpenDebate/debate-cards/tree/v3)
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

## Uses

@@ -59,104 +59,158 @@ The parsing of debate documents was also improved, most notably now detecting hi

<!-- This section describes suitable use cases for the dataset. -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

- [More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- [More Information Needed]

## Dataset Creation

### Curation Rationale

- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- [More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

- [More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

- ## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
---
license: mit
task_categories:
+ - text-generation
+ - summarization
+ - question-answering
+ - text-classification
+ - feature-extraction
+ - fill-mask
language:
+ - en
tags:
+ - debate
+ - argument
+ - argument mining
pretty_name: Open Caselist
size_categories:
+ - 1M<n<10M
---

# Dataset Card for OpenCaselist

This dataset is a follow-up to [DebateSum](https://huggingface.co/datasets/Hellisotherpeople/DebateSum), increasing its scope and amount of metadata collected.

+ It expands the dataset to include evidence used during debate tournaments, rather than just evidence produced during preseason debate "camps." The total amount of evidence is approximately 20x larger than DebateSum. It currently includes evidence from the Policy, Lincoln Douglas, and Public Forum styles of debate from the years 2013-2024.

+ Evidence was deduplicated to detect the use of the same piece of evidence in different debates. Additional metadata about the debates in which each piece of evidence was used is available in [DebateRounds](https://www.kaggle.com/datasets/yu5uf5/debate-rounds).

The parsing of debate documents was also improved, most notably now detecting highlighted text, which is typically used to denote what parts of the evidence are read out loud during the debate.

- **Language(s) (NLP):** English
- **License:** MIT

+ ### Dataset Sources

<!-- Provide the basic links for the dataset. -->
+
[openCaselist](https://opencaselist.com/)

- **Repository:** [Data Processing Code and GraphQL Search API](https://github.com/OpenDebate/debate-cards/tree/v3)
+ - **Paper:** [OpenDebateEvidence](https://openreview.net/pdf?id=43s8hgGTOX)

## Uses


<!-- This section describes suitable use cases for the dataset. -->

+ The most direct use of the dataset is research in argument mining. The dataset contains rich metadata about the argumentative content of its text and can be used to train models for identifying arguments, classifying arguments, detecting relations between arguments, generating arguments, and other related tasks. It also contains annotations useful for abstractive and extractive summarization and natural language inference, and is suitable as a pretraining corpus for many tasks.
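+
+ For example, here is a minimal sketch of loading the data with the `datasets` library; the repo id `Yusuf5/OpenCaselist` and the single `train` split are assumptions based on this card's location:
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream rather than download everything at once; the corpus is large.
+ ds = load_dataset("Yusuf5/OpenCaselist", split="train", streaming=True)
+
+ for record in ds.take(3):
+     print(record["tag"])                                     # debater-written summary
+     print(record["pocket"], record["hat"], record["block"])  # argument hierarchy
+ ```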

+ The dataset is also useful for studying competitive debate itself, for example searching for common supporting evidence or counterarguments to an argument, or analyzing trends in arguments across tournaments, especially when used with the `DebateRounds` dataset.

+ ### Out-of-Scope Use

+ The abstractive summaries written by debaters make heavy use of jargon and are often framed in the context of a broader argument, so they may not be directly useful as general-purpose summaries.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

+ #### Summaries
+
+ Debate evidence is typically annotated with three different summaries:
+
+ - `tag`: a one-sentence abstractive summary written by the debater. It often makes heavy use of jargon.
+ - `summary`: the underlined text in the original document, typically a phrase-level extractive summary of the evidence.
+ - `spoken`: the highlighted text in the original document. This is the text the debater plans to read out loud during the debate; it sometimes contains abbreviations or acronyms.
+
+ Note that the `tag` will be biased to support some broader argument, and the `summary` and `spoken` summaries will be biased to support the `tag`. Some arguments, called "analytics," do not contain any body text and only have the debater-written `tag`.
+
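+ As an illustration, a hedged sketch (not from the card) of turning these fields into abstractive summarization pairs, skipping analytics that have no body text:
+
+ ```python
+ def to_summarization_pair(record: dict):
+     """Map one dataset row to a (document, summary) training pair."""
+     if not record["fulltext"]:  # analytics carry only a debater-written tag
+         return None
+     return {"document": record["fulltext"], "summary": record["tag"]}
+ ```
+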
+ #### Headings
+
+ Debate documents are usually organized with a hierarchy of headings:
+
+ - `pocket`: top-level heading, usually the speech name.
+ - `hat`: mid-level heading, usually the broad type of argument.
+ - `block`: low-level heading, usually the specific argument.
+
+ Headings for speeches later in the debate will often contain the abbreviations "A2" or "AT" ("answer to") when responding to an argument made in a previous speech.
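+
+ For browsing, the three levels can be joined into one path; a small illustrative sketch (the example output is invented):
+
+ ```python
+ def heading_path(record: dict) -> str:
+     """Render a record's heading hierarchy, e.g. "1NC > Topicality > AT: Reasonability"."""
+     parts = (record["pocket"], record["hat"], record["block"])
+     return " > ".join(p for p in parts if p)
+ ```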
+
+ #### Deduplication
+
+ Debaters will often reuse the same evidence across different debates and repurpose evidence used by other debaters for their own arguments. We performed a deduplication procedure to detect this repeated use.
+
+ - `bucketId`: duplicate pieces of evidence share the same `bucketId`.
+ - `duplicateCount`: the number of other pieces of evidence in the same bucket. This is useful as a rough measure of argument quality, since good arguments tend to be reused more often.
+
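+ A sketch of using these fields to surface heavily reused evidence; the repo id is assumed as above, and the sample size is arbitrary:
+
+ ```python
+ from collections import defaultdict
+
+ from datasets import load_dataset
+
+ ds = load_dataset("Yusuf5/OpenCaselist", split="train", streaming=True)
+
+ # Group evidence ids by deduplication bucket over a sample of rows.
+ buckets = defaultdict(list)
+ for record in ds.take(10_000):
+     buckets[record["bucketId"]].append(record["id"])
+
+ # Buckets with many members correspond to heavily reused arguments.
+ most_reused = sorted(buckets.values(), key=len, reverse=True)[:10]
+ ```
+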
+ #### Files
+
+ Evidence is uploaded as .docx files. For evidence uploaded during the debate season, each file typically contains all the evidence used in a specific speech. For evidence uploaded to the preseason "Open Evidence" project, each file typically contains a collection of evidence relating to a specific argument.
+
+ #### Round Information
+
+ The remaining fields contain metadata about the debate round in which the evidence was used. For more extensive metadata, see the `tabroom_section_id` field in the `round` table of `caselist.sqlite` in the DebateRounds dataset. Notably, this extra metadata can be used to find the evidence read by the opponent for around 25% of records.
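+
+ A hedged sketch of that lookup; joining an OpenCaselist `roundId` against the `round` table's primary key is an assumption about the DebateRounds schema, so verify the column names before relying on it:
+
+ ```python
+ import sqlite3
+
+ round_id = 12345  # e.g. record["roundId"] from an OpenCaselist row
+
+ con = sqlite3.connect("caselist.sqlite")
+ row = con.execute(
+     "SELECT tabroom_section_id FROM round WHERE id = ?", (round_id,)
+ ).fetchone()
+ print(row)
+ ```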
+
+ ### Dataset Fields
+
+ | Column Name | Description |
+ | --- | --- |
+ | `id` | Unique identifier for the evidence |
+ | `tag` | Abstractive summary of the evidence/argument, written by the debater |
+ | `cite` | Short citation of the source used for the evidence |
+ | `fullcite` | Full citation of the source used for the evidence |
+ | `summary` | Longer extractive summary of the evidence |
+ | `spoken` | Shorter extractive summary of the evidence |
+ | `fulltext` | The full text of the evidence |
+ | `textlength` | The length of the evidence text, in characters |
+ | `markup` | The full text of the evidence with HTML markup for parsing/visualization purposes |
+ | `pocket` | String indicating the virtual "pocket" |
+ | `hat` | String indicating the virtual "hat" |
+ | `block` | String indicating the virtual "block" |
+ | `bucketId` | Unique identifier for the deduplication bucket of the evidence |
+ | `duplicateCount` | The number of duplicates of the evidence |
+ | `fileId` | Unique identifier for the file in which the evidence is stored |
+ | `filePath` | The path of the file in which the evidence is stored |
+ | `roundId` | Unique identifier for the debate round in which the evidence was used |
+ | `side` | The debate side on which the evidence was used (Affirmative or Negative) |
+ | `tournament` | The name of the tournament in which the evidence was used |
+ | `round` | The round number in which the evidence was used |
+ | `opponent` | The name of the opposing team in the debate round in which the evidence was used |
+ | `judge` | The name of the judge in the debate round in which the evidence was used |
+ | `report` | A report associated with the debate round filled out by one of the debaters, usually summarizing the arguments presented |
+ | `opensourcePath` | The path to the open-source repository in which the evidence is stored |
+ | `caselistUpdatedAt` | The date on which the caselist was last updated |
+ | `teamId` | Unique identifier for the team that uploaded the evidence |
+ | `teamName` | The name of the team |
+ | `teamDisplayName` | The display name of the team |
+ | `teamNotes` | Notes associated with the team |
+ | `debater1First` | The first name of the team's first debater (all names are truncated to their first two characters) |
+ | `debater1Last` | The last name of the first debater of the team |
+ | `debater2First` | The first name of the second debater of the team |
+ | `debater2Last` | The last name of the second debater of the team |
+ | `schoolId` | Unique identifier for the school of the team that uploaded the evidence |
+ | `schoolName` | The name of the school |
+ | `schoolDisplayName` | The display name of the school |
+ | `state` | The state in which the school is located |
+ | `chapterId` | Unique identifier for the school on Tabroom. Rarely used |
+ | `caselistId` | Unique identifier for the caselist the evidence was uploaded to |
+ | `caselistName` | The name of the caselist |
+ | `caselistDisplayName` | The display name of the caselist |
+ | `year` | The year in which the debate round took place |
+ | `event` | The event in which the debate round took place |
+ | `level` | The level of the debate (college, high school) |
+ | `teamSize` | The number of debaters on the team |

## Dataset Creation

### Curation Rationale

+ Existing argument mining datasets are limited in scope or contain limited metadata. This dataset collects evidence produced by high school and collegiate competitive debaters to create a large corpus annotated for argumentative content.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

+ Debaters typically source evidence from news articles, journal articles, books, and other published material. Many debaters upload annotated excerpts from these sources to the openCaselist website, from which this data was collected.
+
#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

+ This dataset was collected from Word documents uploaded to openCaselist. Each .docx file is unzipped to access its internal XML, and the individual pieces of evidence in the file are extracted and organized using the formatting and structure of the document. The deduplication procedure works by identifying other evidence that contains identical sentences; once potential matches are found, various heuristics group duplicate pieces of evidence together. The primary data processing code is written in TypeScript, with the data stored in a PostgreSQL database and a Redis database used for deduplication. More details about the data processing are included in the related paper and documented in the source code repository.
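+
+ As an illustration of the extraction step (the real pipeline is TypeScript; this Python sketch only shows the idea):
+
+ ```python
+ import zipfile
+ import xml.etree.ElementTree as ET
+
+ # A .docx file is a zip archive; the body text lives in word/document.xml.
+ W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"
+
+ with zipfile.ZipFile("speech.docx") as zf:
+     root = ET.fromstring(zf.read("word/document.xml"))
+
+ # Paragraph runs carry the text; run properties (w:rPr) hold the underline
+ # and highlight formatting used to recover `summary` and `spoken`.
+ for paragraph in root.iter(f"{W}p"):
+     print("".join(t.text or "" for t in paragraph.iter(f"{W}t")))
+ ```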

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

+ The source data is produced by high school and collegiate competitive debaters.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

+ The dataset contains the names of debaters and the schools they attend. As partial anonymization, only the first two characters of each debater's first and last name are included.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ The dataset inherits the biases of the types of arguments favored in a time-limited debate format, and the biases of academic literature as a whole. Arguments tend to be mildly hyperbolic and one-sided, and they are on average "left"-leaning, so certain positions may be overrepresented. Because each debate season centers on a broad topic, those topics are heavily represented in the dataset; a list of previous topics is available from the [National Speech and Debate Association](https://www.speechanddebate.org/topics/). There may also be occasional parsing errors that merge multiple pieces of evidence into one record.

+ <!-- ### Recommendations -->

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ <!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->

+ ## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

+ ```bibtex
+ @inproceedings{
+ roush2024opendebateevidence,
+ title={OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset},
+ author={Allen G Roush and Yusuf Shabazz and Arvind Balaji and Peter Zhang and Stefano Mezza and Markus Zhang and Sanjay Basu and Sriram Vishwanath and Ravid Shwartz-Ziv},
+ booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
+ year={2024},
+ url={https://openreview.net/forum?id=43s8hgGTOX}
+ }
+ ```