---
license: mit
task_categories:
- text-generation
- summarization
- question-answering
- text-classification
- feature-extraction
- fill-mask
language:
- en
tags:
- debate
- argument
- argument mining
pretty_name: Open Caselist
size_categories:
- 1M<n<10M
---

# Dataset Card for OpenCaselist

<!-- Provide a quick summary of the dataset. -->

A collection of evidence used in collegiate and high school debate competitions.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

This dataset is a follow-up to [DebateSum](https://huggingface.co/datasets/Hellisotherpeople/DebateSum), increasing its scope and the amount of metadata collected.

It expands the dataset to include evidence used during debate tournaments, rather than just evidence produced during preseason debate "camps." The total amount of evidence is approximately 20x larger than DebateSum. It currently includes evidence from the Policy, Lincoln-Douglas, and Public Forum styles of debate from the years 2013-2024.

Evidence was deduplicated to detect the use of the same piece of evidence in different debates. Additional metadata about the debates in which evidence was used is available in [DebateRounds](https://www.kaggle.com/datasets/yu5uf5/debate-rounds).

The parsing of debate documents was also improved, most notably by detecting highlighted text, which typically denotes which parts of the evidence are read out loud during the debate.

- **Curated by:** Yusuf Shabazz
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

[openCaselist](https://opencaselist.com/)

- **Repository:** [Data Processing Code and GraphQL Search API](https://github.com/OpenDebate/debate-cards/tree/v3)
- **Paper:** [OpenDebateEvidence](https://openreview.net/pdf?id=43s8hgGTOX)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

The most direct use of the dataset is for research in argument mining. The dataset contains rich metadata about the argumentative content of the text it contains. It can be used to train models for identifying arguments, classifying arguments, detecting relations between arguments, generating arguments, and other related tasks. It also contains metadata useful for abstractive and extractive summarization and natural language inference, and it is useful as a pretraining dataset for many tasks.

The dataset is also useful for studying competitive debate, for example by searching for common supporting evidence or counterarguments to an argument, or by analyzing trends in arguments across tournaments, especially when used with the `DebateRounds` dataset.

### Out-of-Scope Use

The abstractive summaries written by debaters make heavy use of jargon and are often written in the context of a broader argument, so they may not be useful directly.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

#### Summaries

Debate evidence is typically annotated with three different summaries.

- `tag` is typically a one-sentence abstractive summary written by the debater. It often makes heavy use of jargon.
- `summary` is the underlined text in the original document, typically a phrase-level summary of the evidence.
- `spoken` is the highlighted text in the original document. This is the text the debater plans to read out loud during the debate. It sometimes contains abbreviations or acronyms.

Note that the `tag` will be biased to support some broader argument, and the `summary` and `spoken` summaries will be biased to support the `tag`. Some arguments, called "analytics," do not contain any body text and only have the debater-written `tag`.

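The extractive summaries can also be recovered directly from the `markup` field. A minimal sketch, assuming underlined text is wrapped in `<u>` tags and highlighted text in `<mark>` tags (hypothetical tag names — inspect actual `markup` values before relying on them):

```python
from html.parser import HTMLParser

class SpanExtractor(HTMLParser):
    """Collect text inside <u> (underlined) and <mark> (highlighted) spans."""
    def __init__(self):
        super().__init__()
        self.depth = {"u": 0, "mark": 0}       # nesting depth per tracked tag
        self.underlined, self.highlighted = [], []

    def handle_starttag(self, tag, attrs):
        if tag in self.depth:
            self.depth[tag] += 1

    def handle_endtag(self, tag):
        if tag in self.depth and self.depth[tag]:
            self.depth[tag] -= 1

    def handle_data(self, data):
        if self.depth["u"]:
            self.underlined.append(data)
        if self.depth["mark"]:
            self.highlighted.append(data)

# Toy markup, not an actual record.
markup = "<p><u>Rising seas <mark>displace millions</mark> by 2050</u>, scientists warn.</p>"
parser = SpanExtractor()
parser.feed(markup)
summary = "".join(parser.underlined)    # -> underlined text ("summary")
spoken = "".join(parser.highlighted)    # -> highlighted text ("spoken")
```
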
#### Headings

Debate documents are usually organized with a hierarchy of headings.

- `pocket` Top-level heading. Usually the speech name.
- `hat` Medium-level heading. Usually the broad type of argument.
- `block` Low-level heading. Usually the specific type of argument.

Headings for speeches later in the debate will often contain the abbreviations "A2" or "AT" (answer to) when responding to an argument in a previous speech.

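The hierarchy can be reconstructed from the flat fields, for example by grouping evidence under its `pocket`/`hat`/`block` path and flagging answer blocks. A sketch with toy records (the heading values here are illustrative, not actual rows):

```python
import re
from collections import defaultdict

# Toy records carrying the three heading fields.
records = [
    {"pocket": "1NC", "hat": "Topicality", "block": "T - Substantial"},
    {"pocket": "2NC", "hat": "Topicality", "block": "A2 Reasonability"},
]

# Group evidence under its heading path: pocket -> hat -> block.
tree = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
for rec in records:
    tree[rec["pocket"]][rec["hat"]][rec["block"]].append(rec)

def is_answer(heading):
    """Headings beginning with "A2" or "AT" respond to an earlier speech."""
    return re.match(r"(A2|AT)\b", heading, re.IGNORECASE) is not None
```
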
#### Deduplication

Debaters will often reuse the same evidence across different debates and repurpose evidence used by other debaters for their own arguments. We performed a deduplication procedure to detect this repeated use.

- `bucketId` Duplicate evidence will share the same `bucketId`.
- `duplicateCount` is the number of other pieces of evidence in the same bucket. This is useful as a rough measure of argument quality, since good arguments tend to be used more.

<img src="./deduplication.svg" alt="deduplication diagram" height="600px">

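Under this scheme, `duplicateCount` is simply the size of the evidence's bucket minus one, so it can be recomputed from `bucketId` alone. A sketch with toy ids:

```python
from collections import Counter

# Toy assignments: evidence id -> deduplication bucket.
bucket_ids = {"ev1": "b1", "ev2": "b1", "ev3": "b1", "ev4": "b2"}

# Count how many pieces of evidence fall in each bucket.
bucket_sizes = Counter(bucket_ids.values())

# duplicateCount = number of *other* pieces of evidence in the same bucket.
duplicate_counts = {ev: bucket_sizes[b] - 1 for ev, b in bucket_ids.items()}
```
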
#### Files

Evidence is uploaded in .docx files. For evidence uploaded during the debate season, each file typically contains all the evidence used in a specific speech. For evidence uploaded to the preseason "Open Evidence" project, each file typically contains a collection of evidence relating to a specific argument.

#### Round Information

The remaining fields contain metadata relating to the debate round the evidence was used in. For more extensive metadata, see the `tabroom_section_id` field in the `round` table of `caselist.sqlite` in the DebateRounds dataset. Notably, this extra metadata can be used to find the evidence read by the opponent for around 25% of evidence.

### Dataset Fields

| Column Name           | Description                                                                                |
| --------------------- | ------------------------------------------------------------------------------------------ |
| `id`                  | Unique identifier for the evidence                                                          |
| `tag`                 | Abstract summary of the evidence/argument made by the debater with evidence                 |
| `cite`                | String indicating the short citation of the source used for the evidence                    |
| `fullcite`            | Full citation of the source used for the evidence                                           |
| `summary`             | Longer extractive summary of the evidence                                                   |
| `spoken`              | Shorter extractive summary of the evidence                                                  |
| `fulltext`            | The full text of the evidence                                                               |
| `textlength`          | The length of the text in the evidence in characters                                        |
| `markup`              | The full text of the evidence with HTML markup for parsing/visualization purposes           |
| `pocket`              | String indicating the virtual "pocket"                                                      |
| `hat`                 | String indicating the virtual "hat"                                                         |
| `block`               | String indicating the virtual "block"                                                       |
| `bucketId`            | Unique identifier for the deduplication bucket of the evidence                              |
| `duplicateCount`      | The number of duplicates of the evidence                                                    |
| `fileId`              | Unique identifier for the file in which the evidence is stored                              |
| `filePath`            | The file path of the file in which the evidence is stored                                   |
| `roundId`             | Unique identifier for the debate round in which the evidence was used                       |
| `side`                | The debate side on which the evidence was used (Affirmative or Negative)                    |
| `tournament`          | The name of the tournament in which the evidence was used                                   |
| `round`               | The round number in which the evidence was used                                             |
| `opponent`            | The name of the opposing team in the debate round in which the evidence was used            |
| `judge`               | The name of the judge in the debate round in which the evidence was used                    |
| `report`              | A report on the debate round filled out by one of the debaters, usually summarizing the arguments presented |
| `opensourcePath`      | The path to the open-source repository in which the evidence is stored                      |
| `caselistUpdatedAt`   | The date on which the caselist was last updated                                             |
| `teamId`              | Unique identifier for the team that uploaded the evidence                                   |
| `teamName`            | The name of the team                                                                        |
| `teamDisplayName`     | The display name of the team                                                                |
| `teamNotes`           | Notes associated with the team                                                              |
| `debater1First`       | The first name of the first debater on the team; all names include only the first 2 characters |
| `debater1Last`        | The last name of the first debater on the team                                              |
| `debater2First`       | The first name of the second debater on the team                                            |
| `debater2Last`        | The last name of the second debater on the team                                             |
| `schoolId`            | Unique identifier for the school of the team that uploaded the evidence                     |
| `schoolName`          | The name of the school                                                                      |
| `schoolDisplayName`   | The display name of the school                                                              |
| `state`               | The state in which the school is located                                                    |
| `chapterId`           | Unique identifier for the school on Tabroom; rarely used                                    |
| `caselistId`          | Unique identifier for the caselist the evidence was uploaded to                             |
| `caselistName`        | The name of the caselist                                                                    |
| `caselistDisplayName` | The display name of the caselist                                                            |
| `year`                | The year in which the debate round took place                                               |
| `event`               | The event in which the debate round took place                                              |
| `level`               | The level of the debate (college, high school)                                              |
| `teamSize`            | The number of debaters on the team                                                          |

## Dataset Creation

### Curation Rationale

Existing argument mining datasets are limited in scope or contain limited metadata. This dataset collects debate evidence produced by high school and collegiate competitive debaters to create a large dataset annotated for argumentative content.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

Debaters typically source evidence from news articles, journal articles, books, and other sources. Many debaters upload annotated excerpts from these sources to the openCaselist website, from which this data is collected.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

This dataset was collected from Word document files uploaded to openCaselist. Each .docx is unzipped to access the internal XML file, then the individual pieces of evidence in the file are extracted and organized using the formatting and structure of the document. The deduplication procedure works by identifying other evidence that contains identical sentences. Once potential matches are found, various heuristics are used to group duplicate pieces of evidence together. The primary data processing code is written in TypeScript, with the data stored in a PostgreSQL database and a Redis database used in deduplication. More details about the data processing are included in the related paper and documented in the source code repository.

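The sentence-matching step can be sketched as follows: any two pieces of evidence sharing an identical (normalized) sentence end up in the same bucket, which a union-find structure makes efficient. This is a simplified illustration, not the actual TypeScript implementation, and it omits the grouping heuristics:

```python
import re
from collections import defaultdict

def sentences(text):
    # Crude sentence split + normalization; the real pipeline is more careful.
    return [s.strip().lower() for s in re.split(r"[.!?]", text) if s.strip()]

def bucket(evidence):
    """Group evidence ids that share at least one identical sentence."""
    parent = {eid: eid for eid in evidence}   # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}  # normalized sentence -> first evidence id containing it
    for eid, text in evidence.items():
        for sent in sentences(text):
            if sent in seen:
                union(eid, seen[sent])        # shared sentence -> same bucket
            else:
                seen[sent] = eid

    buckets = defaultdict(set)
    for eid in evidence:
        buckets[find(eid)].add(eid)
    return list(buckets.values())

# Toy evidence: "a" and "b" share the sentence "Seas will rise".
evidence = {
    "a": "Warming is accelerating. Seas will rise.",
    "b": "Seas will rise! Coastal cities flood.",
    "c": "An unrelated argument entirely.",
}
groups = bucket(evidence)
```
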
#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

The source data is produced by high school and collegiate competitive debaters.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

The dataset contains the names of debaters and the schools they attend. Only the first 2 characters of each debater's first and last name are included in the dataset.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The dataset inherits the biases in the types of arguments used in a time-limited debate format, and the biases of academic literature as a whole. Arguments tend to be mildly hyperbolic and one-sided, and the arguments are on average "left"-leaning and may overrepresent certain positions. As each debate was about a certain broad topic, those topics will be represented heavily in the dataset. A list of previous topics is available from the [National Speech and Debate Association](https://www.speechanddebate.org/topics/). There may be occasional parsing errors that lead to multiple pieces of evidence being combined into one record.

<!-- ### Recommendations -->

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

<!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@inproceedings{
roush2024opendebateevidence,
title={OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset},
author={Allen G Roush and Yusuf Shabazz and Arvind Balaji and Peter Zhang and Stefano Mezza and Markus Zhang and Sanjay Basu and Sriram Vishwanath and Ravid Shwartz-Ziv},
booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=43s8hgGTOX}
}
```