orionweller committed
Commit 8e19795 (1 parent: a749bda)

Update README.md

Files changed (1): README.md (+17 -24)
README.md CHANGED
@@ -20,7 +20,7 @@ tags:
 
 ## Dataset Description
 
-- **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/orionw/NevIR)
+- **Repository:** [https://github.com/orionw/NevIR](https://github.com/orionw/NevIR)
 - **Paper:** []()
 - **Point of Contact:** [email protected]
 
@@ -39,11 +39,11 @@ If you use this dataset, we would appreciate you citing our work:
 }
 ```
 
-From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
+From the paper: "Negation is a common everyday phenomena and has been a consistent area of weakness for language models (LMs). Although the Information Retrieval (IR) community has adopted LMs as the backbone of modern IR architectures, there has been little to no research in understanding how negation impacts neural IR. We therefore construct a straightforward benchmark on this theme: asking IR models to rank two documents that differ only by negation. We show that the results vary widely according to the type of IR architecture: cross-encoders perform best, followed by late-interaction models, and in last place are bi-encoder and sparse neural architectures. We find that most current information retrieval models do not consider negation, performing similarly or worse than randomly ranking. We show that although the obvious approach of continued fine-tuning on a dataset of contrastive documents containing negations increases performance (as does model size), there is still a large gap between machine and human performance."
 
 ### Supported Tasks and Leaderboards
 
-The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
+The task is to rank the two documents correctly for each query in the pair, where each query is relevant to only one of the two documents. There is no official leaderboard.
 
 ### Language
 English
@@ -54,32 +54,25 @@ English
 Here's an example instance:
 
 ```
-{"QuestionID": "q10",
-"original cue": "rarely",
-"PassageEditID": 0,
-"original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
-"SampleID": 5294,
-"label": "YES",
-"original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
-"sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
-"PassageID": 444,
-"sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
+{
+"id": "1-2",
+"WorkerId": 0,
+"q1": "Which mayor did more vetoing than anticipated?",
+"q2": "Which mayor did less vetoing than anticipated?",
+"doc1": "In his first year as mayor, Medill received very little legislative resistance from the Chicago City Council. While he vetoed what was an unprecedented eleven City Council ordinances that year, most narrowly were involved with specific financial practices considered wasteful and none of the vetoes were overridden. He used his new powers to appoint the members of the newly constituted Chicago Board of Education and the commissioners of its constituted public library. His appointments were approved unanimously by the City Council.",
+"doc2": "In his first year as mayor, Medill received very little legislative resistance from the Chicago City Council. While some expected an unprecedented number of vetoes, in actuality he only vetoed eleven City Council ordinances that year, and most of those were narrowly involved with specific financial practices he considered wasteful and none of the vetoes were overridden. He used his new powers to appoint the members of the newly constituted Chicago Board of Education and the commissioners of its constituted public library. His appointments were approved unanimously by the City Council."
 }
 
 ```
 
 ### Data Fields
 
-* `QuestionID`: unique ID for this question (might be asked for multiple passages)
-* `original cue`: Negation cue that was used to select this passage from Wikipedia
-* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
-* `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
-* `SampleID`: unique ID for this passage-question pair
-* `label`: answer
-* `original sentence`: Sentence that contains the negated statement
-* `sentence2`: question
-* `PassageID`: unique ID for the Wikipedia passage
-* `sentence1`: passage
+* `id`: unique ID for the pair; the first number is the document pair number in CondaQA and the second number is the PassageEditID in CondaQA
+* `WorkerId`: the ID of the worker who created the queries for the pair
+* `q1`: the query that is only relevant to `doc1`
+* `q2`: the query that is only relevant to `doc2`
+* `doc1`: the original document, from CondaQA
+* `doc2`: the edited document, from CondaQA
 
 ### Data Splits
 
@@ -87,7 +80,7 @@ Data splits can be accessed as:
 ```
 from datasets import load_dataset
 train_set = load_dataset("nevir", "train")
-dev_set = load_dataset("nevir", "dev")
+dev_set = load_dataset("nevir", "validation")
 test_set = load_dataset("nevir", "test")
 ```
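The updated card exposes train/validation/test splits. A minimal loading sketch is shown below; the Hub identifier `orionweller/NevIR` is an assumption (substitute the actual dataset id for this card), and it passes the split via the `split=` argument, which is the idiomatic `datasets` usage for the snippet above.

```python
from datasets import load_dataset

# Hypothetical Hub id; replace with the dataset id this card is published under.
test_set = load_dataset("orionweller/NevIR", split="test")

# Each row carries the fields listed under "Data Fields".
example = test_set[0]
print(example["id"], example["q1"])
print(example["id"], example["q2"])
```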
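The card frames NevIR as pairwise ranking: each query in a pair should prefer its own document. Below is a minimal sketch of one way to compute a pairwise score, assuming the same hypothetical Hub id as above and an off-the-shelf cross-encoder from `sentence-transformers` (the model name is only an example, not the paper's setup). It counts a pair as solved only when `q1` scores `doc1` above `doc2` and `q2` scores `doc2` above `doc1`, one plausible reading of the task description.

```python
from datasets import load_dataset
from sentence_transformers import CrossEncoder

# Hypothetical dataset id and an example reranker; neither is prescribed by the card.
pairs = load_dataset("orionweller/NevIR", split="test")
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

correct = 0
for row in pairs:
    # Score every query against both documents in the pair.
    scores = model.predict([
        (row["q1"], row["doc1"]), (row["q1"], row["doc2"]),
        (row["q2"], row["doc1"]), (row["q2"], row["doc2"]),
    ])
    # Solved only if q1 prefers doc1 AND q2 prefers doc2.
    if scores[0] > scores[1] and scores[3] > scores[2]:
        correct += 1

print(f"pairwise accuracy: {correct / len(pairs):.3f}")
```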