---
license: mit
task_categories:
  - text-generation
  - summarization
  - question-answering
  - text-classification
  - feature-extraction
  - fill-mask
language:
  - en
tags:
  - debate
  - argument
  - argument mining
pretty_name: Open Caselist
size_categories:
  - 1M<n<10M
---

Dataset Card for OpenCaselist

A collection of evidence used in collegiate and high school debate competitions.

Dataset Details

Dataset Description

This dataset is a follow-up to DebateSum, increasing its scope and the amount of metadata collected.

It expands the dataset to include evidence used during debate tournaments, rather than just evidence produced during preseason debate "camps." The total amount of evidence is approximately 20x larger than DebateSum. It currently includes evidence from the Policy, Lincoln Douglas, and Public Forum styles of debate from the years 2013-2024.

Evidence was deduplicated to detect reuse of the same piece of evidence across different debates. Additional metadata about the debates in which evidence was used is available in DebateRounds.

The parsing of debate documents was also improved, most notably by detecting highlighted text, which is typically used to denote which parts of the evidence are read aloud during the debate.

  • Curated by: Yusuf Shabazz
  • Language(s) (NLP): English
  • License: MIT

Dataset Sources

openCaselist

Uses

Direct Use

The most direct use of the dataset is for research in Argument Mining. The dataset contains rich metadata about the argumentative content of the text it contains. It can be used to train models for identifying arguments, classifying arguments, detecting relations between arguments, generating arguments, and other related tasks. It also contains metadata useful for abstractive and extractive summarization and natural language inference, and it is useful as a pretraining dataset for many tasks.
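As a quick start, here is a minimal loading sketch using the Hugging Face `datasets` library. The Hub ID (`Yusuf5/OpenCaselist`) and split name are assumptions; check the repository page for the exact values.

```python
# Minimal loading sketch; the Hub ID and split name are assumptions.
# Streaming avoids downloading the full multi-million-row dataset up front.
from datasets import load_dataset

ds = load_dataset("Yusuf5/OpenCaselist", split="train", streaming=True)

for row in ds.take(3):
    print(row["tag"])             # debater-written abstractive summary
    print(row["fulltext"][:200])  # start of the full evidence text
```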

The dataset is also useful for studying competitive debate, for example searching for common supporting evidence or counterarguments to an argument, or analyzing trends in arguments across tournaments, especially when combined with the DebateRounds dataset.

Out-of-Scope Use

The abstractive summaries generated by debaters make heavy use of jargon and are often written in the context of a broader argument, so they may not be directly useful on their own.

Dataset Structure

Summaries

Debate evidence is typically annotated with three different summaries.

  • tag is typically a one-sentence abstractive summary written by the debater. It often makes heavy use of jargon.
  • summary is the underlined text in the original document. It is typically a phrase-level summary of the evidence.
  • spoken is the highlighted text in the original document. This is the text the debater plans to read out loud during the debate. It sometimes contains abbreviations or acronyms.

Note that the tag will be biased to support some broader argument, and the summary and spoken summaries will be biased to support the tag. Some arguments, called "analytics", do not contain any body text and only have the debater-written tag.
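For instance, a minimal sketch of turning these annotation layers into summarization training pairs (assuming the dataset has been loaded into `ds` as above) might look like this:

```python
# Sketch: build summarization pairs from the three annotation layers.
# Analytics (tag-only arguments with no body text) are skipped.
def to_pairs(row):
    if not row["fulltext"]:  # analytic: no body text to summarize
        return None
    return {
        "abstractive": (row["fulltext"], row["tag"]),      # debater-written tag
        "extractive":  (row["fulltext"], row["summary"]),  # underlined text
        "spoken":      (row["fulltext"], row["spoken"]),   # highlighted text
    }
```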

Headings

Debate documents are usually organized with a hierarchy of headings.

  • pocket: Top-level heading. Usually the speech name.
  • hat: Mid-level heading. Usually the broad type of argument.
  • block: Low-level heading. Usually the specific type of argument.

Headings for speeches later in the debate will often contain the abbreviations "A2" or "AT" (Answer to) when responding to an argument in a previous speech.
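A rough way to flag such response arguments is to look for those abbreviations in the block heading. The regex below is an assumption about how the abbreviations are typically written, not a rule guaranteed by the dataset.

```python
# Sketch: flag evidence whose block heading marks it as an answer ("A2"/"AT").
# "AT" is matched only in uppercase to avoid hitting the ordinary word "at".
import re

ANSWER_RE = re.compile(r"\b(?:[Aa]2|AT)\b")

def is_answer(row):
    heading = row.get("block") or ""
    return bool(ANSWER_RE.search(heading))
```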

Deduplication

Debaters will often reuse the same evidence across different debates and repurpose evidence used by other debaters for their own arguments. We performed a deduplication procedure to detect this repeated use.

  • bucketId: Duplicate evidence shares the same bucketId.
  • duplicateCount is the number of other pieces of evidence in the same bucket. This is useful as a rough measure of argument quality, since good arguments tend to be used more (see the sketch below).

[Deduplication diagram]
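As an illustration, assuming the data is loaded into a pandas DataFrame (the Hub ID is an assumption, and materializing the full dataset in memory may be heavy), duplicates can be grouped and ranked like this:

```python
from datasets import load_dataset

# Hub ID and split are assumptions; consider filtering before materializing.
df = load_dataset("Yusuf5/OpenCaselist", split="train").to_pandas()

popular = df.sort_values("duplicateCount", ascending=False).head(100)
buckets = df.groupby("bucketId")["id"].apply(list)  # bucketId -> evidence ids
```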

Files

Evidence is uploaded in .docx files. For evidence uploaded during the debate season, each file typically contains all of the evidence used in a specific speech. For evidence uploaded to the preseason "Open Evidence" project, each file typically contains a collection of evidence relating to a specific argument.
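A short sketch of regrouping evidence by source file (assuming the `df` DataFrame from the deduplication sketch above) can approximate those per-speech or per-argument files:

```python
# Sketch: regroup rows by the .docx file they were extracted from.
# Assumes `df` is the pandas DataFrame built in the previous sketch.
files = df.groupby("fileId").agg(
    path=("filePath", "first"),
    cards=("id", "size"),  # number of pieces of evidence in the file
)
```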

Round Information

The remaining fields contain metadata relating to the debate round the evidence was used in. For more extensive metadata, see the tabroom_section_id field in the round table of caselist.sqlite in the DebateRounds dataset. Notably, this extra metadata can be used to find the evidence read by the opponent for around 25% of evidence.
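A hedged sketch of that lookup, assuming the caselist.sqlite file from DebateRounds has been downloaded locally; the round table and tabroom_section_id column come from the description above, while the key column name is a guess and should be verified against the actual schema:

```python
import sqlite3

con = sqlite3.connect("caselist.sqlite")  # file from the DebateRounds dataset
round_id = 12345                          # hypothetical roundId from this dataset

# The "round" table and tabroom_section_id are named above; the key column
# ("round_id") is a guess and should be checked against the schema.
row = con.execute(
    "SELECT tabroom_section_id FROM round WHERE round_id = ?", (round_id,)
).fetchone()
```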

Dataset Fields

| Column Name | Description |
| --- | --- |
| id | Unique identifier for the evidence |
| tag | Abstractive summary of the argument the debater makes with the evidence |
| cite | Short citation of the source used for the evidence |
| fullcite | Full citation of the source used for the evidence |
| summary | Longer extractive summary of the evidence |
| spoken | Shorter extractive summary of the evidence |
| fulltext | The full text of the evidence |
| textlength | The length of the evidence text, in characters |
| markup | The full text of the evidence with HTML markup for parsing/visualization purposes |
| pocket | String indicating the virtual "pocket" |
| hat | String indicating the virtual "hat" |
| block | String indicating the virtual "block" |
| bucketId | Unique identifier for the deduplication bucket of the evidence |
| duplicateCount | The number of duplicates of the evidence |
| fileId | Unique identifier for the file in which the evidence is stored |
| filePath | The file path of the file in which the evidence is stored |
| roundId | Unique identifier for the debate round in which the evidence was used |
| side | The debate side on which the evidence was used (Affirmative or Negative) |
| tournament | The name of the tournament in which the evidence was used |
| round | The round number in which the evidence was used |
| opponent | The name of the opposing team in the debate round |
| judge | The name of the judge in the debate round |
| report | A report about the round filled out by one of the debaters, usually summarizing the arguments presented |
| opensourcePath | The path to the open-source repository in which the evidence is stored |
| caselistUpdatedAt | The date on which the caselist was last updated |
| teamId | Unique identifier for the team that uploaded the evidence |
| teamName | The name of the team |
| teamDisplayName | The display name of the team |
| teamNotes | Notes associated with the team |
| debater1First | The first name of the first debater on the team (names include only the first 2 characters) |
| debater1Last | The last name of the first debater on the team |
| debater2First | The first name of the second debater on the team |
| debater2Last | The last name of the second debater on the team |
| schoolId | Unique identifier for the school of the team that uploaded the evidence |
| schoolName | The name of the school |
| schoolDisplayName | The display name of the school |
| state | The state in which the school is located |
| chapterId | Unique identifier for the school on Tabroom (rarely used) |
| caselistId | Unique identifier for the caselist the evidence was uploaded to |
| caselistName | The name of the caselist |
| caselistDisplayName | The display name of the caselist |
| year | The year in which the debate round took place |
| event | The event in which the debate round took place |
| level | The level of the debate (college or high school) |
| teamSize | The number of debaters on the team |

Dataset Creation

Curation Rationale

Existing argument mining datasets are limited in scope or contain limited metadata. This dataset collects debate evidence produced by high school and collegiate competitive debaters to create a large dataset annotated for argumentative content.

Source Data

Debaters typically source evidence from news articles, journal articles, books, and other sources. Many debaters upload annotated excerpts from these sources to the OpenCaselist website, from which this data is collected.

Data Collection and Processing

This dataset was collected from Word document files uploaded to openCaselist. Each .docx file is unzipped to access the internal XML, and the individual pieces of evidence in the file are extracted and organized using the formatting and structure of the document. The deduplication procedure works by identifying other evidence that contains identical sentences; once potential matches are found, various heuristics are used to group duplicate pieces of evidence together. The primary data processing code is written in TypeScript, with the data stored in a PostgreSQL database and a Redis database used for deduplication. More details about the data processing are included in the related paper and documented in the source code repository.
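As a rough illustration of the sentence-matching idea (a simplified Python sketch, not the project's actual TypeScript/Redis pipeline):

```python
# Simplified sketch of sentence-level duplicate candidate detection:
# hash each sufficiently long sentence and index documents by those hashes.
import hashlib
import re
from collections import defaultdict

def sentence_hashes(text: str) -> set[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return {
        hashlib.sha1(s.strip().lower().encode()).hexdigest()
        for s in sentences
        if len(s.split()) > 5  # skip very short sentences
    }

def candidate_buckets(docs: dict[str, str]) -> dict[str, set[str]]:
    # docs maps evidence id -> fulltext; returns sentence hash -> ids sharing it
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for h in sentence_hashes(text):
            index[h].add(doc_id)
    return {h: ids for h, ids in index.items() if len(ids) > 1}
```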

Who are the source data producers?

The source data is produced by high school and collegiate competitive debaters.

Personal and Sensitive Information

The dataset contains the names of debaters and the school they attend. Only the first 2 characters of the first and last name of debaters are included in the dataset.

Bias, Risks, and Limitations

The dataset inherits the biases in the types of arguments used in a time-limited debate format, and the biases of academic literature as a whole. Arguments tend to be mildly hyperbolic and one-sided, and the arguments are on average "left"-leaning and may overrepresent certain positions. Because each debate season centers on a broad topic, those topics are heavily represented in the dataset. A list of previous topics is available from the National Speech and Debate Association. There may be occasional parsing errors that lead to multiple pieces of evidence being combined into one record.

Citation

BibTeX:

@inproceedings{
    roush2024opendebateevidence,
    title={OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset},
    author={Allen G Roush and Yusuf Shabazz and Arvind Balaji and Peter Zhang and Stefano Mezza and Markus Zhang and Sanjay Basu and Sriram Vishwanath and Ravid Shwartz-Ziv},
    booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2024},
    url={https://openreview.net/forum?id=43s8hgGTOX}
}