ag2435 committed
Commit 8a13e6e · Parent: 1cbe2e1

removed original data

README.md CHANGED
@@ -8,14 +8,6 @@ configs:
     path: "data/dev_instances.json"
   - split: test
     path: "data/test_instances.json"
- # - config_name: original
- #   data_files:
- #   - split: train
- #     path: "original_data/qasper-train-dev-v0.3/qasper-train-v0.3.json"
- #   - split: dev
- #     path: "original_data/qasper-train-dev-v0.3/qasper-dev-v0.3.json"
- #   - split: test
- #     path: "original_data/qasper-test-and-evaluator-v0.3/qasper-test-v0.3.json"
 ---

 # Preprocessed QASPER dataset
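With the commented-out `original` config removed from the dataset card, only the preprocessed files under `data/` remain configured. As a minimal sketch, the remaining `test` split can be loaded with the `datasets` library; the repository id below is a placeholder, not the actual Hub id:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub id.
test = load_dataset("<namespace>/<preprocessed-qasper>", split="test")
print(test)
```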
 
original_data/qasper-test-and-evaluator-v0.3/README-test.md DELETED
@@ -1,26 +0,0 @@
- Dataset of Information Seeking Questions and Answers Anchored in Research Papers: Test Set and Evaluator
- --------------------------------------------------------------------------------------------------------
-
- ## Version: 0.3
-
- The tarball you found this file in should contain the test split of the Qasper dataset version 0.3 and the official evaluator script.
-
- Please make sure you access the test file only to evaluate your finalized model.
-
- ## Images of tables and figures
-
- You can download them here: https://qasper-dataset.s3.us-west-2.amazonaws.com/test_figures_and_tables.tgz
-
- ## Evaluation
-
- You can evaluate your model using the stand-alone evaluator as follows:
-
- ```
- python qasper_evaluator.py --predictions predictions.jsonl --gold qasper-test-v0.3.json [--text_evidence_only]
- ```
-
- Run the following to understand the arguments:
-
- ```
- python qasper_evaluator.py -h
- ```
original_data/qasper-test-and-evaluator-v0.3/qasper-test-v0.3.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6e29ad410e6e39aa1936017fb965b30a20eb2e7751997f55b97c9d281aa884e5
- size 18078957
original_data/qasper-test-and-evaluator-v0.3/qasper_evaluator.py DELETED
@@ -1,167 +0,0 @@
- """
- Official script for evaluating models built for the Qasper dataset. The script
- outputs Answer F1 and Evidence F1 reported in the paper.
- """
- from collections import Counter
- import argparse
- import string
- import re
- import json
-
-
- def normalize_answer(s):
-     """
-     Taken from the official evaluation script for v1.1 of the SQuAD dataset.
-     Lower text and remove punctuation, articles and extra whitespace.
-     """
-
-     def remove_articles(text):
-         return re.sub(r"\b(a|an|the)\b", " ", text)
-
-     def white_space_fix(text):
-         return " ".join(text.split())
-
-     def remove_punc(text):
-         exclude = set(string.punctuation)
-         return "".join(ch for ch in text if ch not in exclude)
-
-     def lower(text):
-         return text.lower()
-
-     return white_space_fix(remove_articles(remove_punc(lower(s))))
-
-
- def token_f1_score(prediction, ground_truth):
-     """
-     Taken from the official evaluation script for v1.1 of the SQuAD dataset.
-     """
-     prediction_tokens = normalize_answer(prediction).split()
-     ground_truth_tokens = normalize_answer(ground_truth).split()
-     common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
-     num_same = sum(common.values())
-     if num_same == 0:
-         return 0
-     precision = 1.0 * num_same / len(prediction_tokens)
-     recall = 1.0 * num_same / len(ground_truth_tokens)
-     f1 = (2 * precision * recall) / (precision + recall)
-     return f1
-
-
- def paragraph_f1_score(prediction, ground_truth):
-     if not ground_truth and not prediction:
-         # The question is unanswerable and the prediction is empty.
-         return 1.0
-     num_same = len(set(ground_truth).intersection(set(prediction)))
-     if num_same == 0:
-         return 0.0
-     precision = num_same / len(prediction)
-     recall = num_same / len(ground_truth)
-     f1 = (2 * precision * recall) / (precision + recall)
-     return f1
-
-
- def get_answers_and_evidence(data, text_evidence_only):
-     answers_and_evidence = {}
-     for paper_data in data.values():
-         for qa_info in paper_data["qas"]:
-             question_id = qa_info["question_id"]
-             references = []
-             for annotation_info in qa_info["answers"]:
-                 answer_info = annotation_info["answer"]
-                 if answer_info["unanswerable"]:
-                     references.append({"answer": "Unanswerable", "evidence": [], "type": "none"})
-                 else:
-                     if answer_info["extractive_spans"]:
-                         answer = ", ".join(answer_info["extractive_spans"])
-                         answer_type = "extractive"
-                     elif answer_info["free_form_answer"]:
-                         answer = answer_info["free_form_answer"]
-                         answer_type = "abstractive"
-                     elif answer_info["yes_no"]:
-                         answer = "Yes"
-                         answer_type = "boolean"
-                     elif answer_info["yes_no"] is not None:
-                         answer = "No"
-                         answer_type = "boolean"
-                     else:
-                         raise RuntimeError(f"Annotation {answer_info['annotation_id']} does not contain an answer")
-                     if text_evidence_only:
-                         evidence = [text for text in answer_info["evidence"] if "FLOAT SELECTED" not in text]
-                     else:
-                         evidence = answer_info["evidence"]
-                     references.append({"answer": answer, "evidence": evidence, "type": answer_type})
-             answers_and_evidence[question_id] = references
-
-     return answers_and_evidence
-
-
- def evaluate(gold, predicted):
-     max_answer_f1s = []
-     max_evidence_f1s = []
-     max_answer_f1s_by_type = {
-         "extractive": [],
-         "abstractive": [],
-         "boolean": [],
-         "none": [],
-     }
-     num_missing_predictions = 0
-     for question_id, references in gold.items():
-         if question_id not in predicted:
-             num_missing_predictions += 1
-             max_answer_f1s.append(0.0)
-             max_evidence_f1s.append(0.0)
-             continue
-         answer_f1s_and_types = [
-             (token_f1_score(predicted[question_id]["answer"], reference["answer"]),
-              reference["type"])
-             for reference in gold[question_id]
-         ]
-         max_answer_f1, answer_type = sorted(answer_f1s_and_types, key=lambda x: x[0], reverse=True)[0]
-         max_answer_f1s.append(max_answer_f1)
-         max_answer_f1s_by_type[answer_type].append(max_answer_f1)
-         evidence_f1s = [
-             paragraph_f1_score(predicted[question_id]["evidence"], reference["evidence"])
-             for reference in gold[question_id]
-         ]
-         max_evidence_f1s.append(max(evidence_f1s))
-
-     mean = lambda x: sum(x) / len(x) if x else 0.0
-     return {
-         "Answer F1": mean(max_answer_f1s),
-         "Answer F1 by type": {key: mean(value) for key, value in max_answer_f1s_by_type.items()},
-         "Evidence F1": mean(max_evidence_f1s),
-         "Missing predictions": num_missing_predictions
-     }
-
- if __name__ == "__main__":
-     parser = argparse.ArgumentParser()
-     parser.add_argument(
-         "--predictions",
-         type=str,
-         required=True,
-         help="""JSON lines file with each line in format:
-                 {'question_id': str, 'predicted_answer': str, 'predicted_evidence': List[str]}"""
-     )
-     parser.add_argument(
-         "--gold",
-         type=str,
-         required=True,
-         help="Test or dev set from the released dataset"
-     )
-     parser.add_argument(
-         "--text_evidence_only",
-         action="store_true",
-         help="If set, the evaluator will ignore evidence in figures and tables while reporting evidence f1"
-     )
-     args = parser.parse_args()
-     gold_data = json.load(open(args.gold))
-     gold_answers_and_evidence = get_answers_and_evidence(gold_data, args.text_evidence_only)
-     predicted_answers_and_evidence = {}
-     for line in open(args.predictions):
-         prediction_data = json.loads(line)
-         predicted_answers_and_evidence[prediction_data["question_id"]] = {
-             "answer": prediction_data["predicted_answer"],
-             "evidence": prediction_data["predicted_evidence"]
-         }
-     evaluation_output = evaluate(gold_answers_and_evidence, predicted_answers_and_evidence)
-     print(json.dumps(evaluation_output, indent=2))
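For reference, the evaluator prints the dictionary returned by `evaluate()` as indented JSON. The keys below follow that return value; the numbers are illustrative only, not real results:

```
{
  "Answer F1": 0.42,
  "Answer F1 by type": {
    "extractive": 0.45,
    "abstractive": 0.18,
    "boolean": 0.70,
    "none": 0.55
  },
  "Evidence F1": 0.30,
  "Missing predictions": 0
}
```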
original_data/qasper-train-dev-v0.3/README.md DELETED
@@ -1,71 +0,0 @@
- A Dataset of Information Seeking Questions and Answers Anchored in Research Papers
- ----------------------------------------------------------------------------------
-
- ## Version 0.3
-
- The tarball you found this README in should contain the training and development sets of Qasper version 0.3. The images of the tables and figures in the associated papers can be found here: https://qasper-dataset.s3.us-west-2.amazonaws.com/train_dev_figures_and_tables.tgz
-
- The full text of the papers is extracted from S2ORC (Lo et al., 2020).
-
- Each file is in JSON format, where the keys are arxiv ids, and the values are dicts containing `title`, `abstract`, `full_text`, `figures_and_tables`, and `qas` (QA pairs).
-
- ## Differences from v0.2
-
- Due to an issue in the annotation interface, a small number of annotations (about 0.6%) had multiple answer types in v0.2 (e.g., unanswerable and boolean; see more information on answer types in the final section of this README). These were manually fixed to create v0.3. The fixes affected the train, development, and test sets.
-
- ## Figures and tables
-
- These are new starting with version 0.2. The actual images of the figures and tables can be downloaded from the link above. The JSON files contain the captions to those images in the `figure_and_table_captions` field.
-
- This field is a dict whose keys are the file names of the images of tables and figures, and whose values are their captions.
-
- For example, the paper with arxiv id `1811.00942` is in the training set and contains the following `figures_and_tables` field:
-
- ```
- "figures_and_tables": [
-   {
-     "file": "3-Table1-1.png",
-     "caption": "Table 1: Comparison of neural language models on Penn Treebank and WikiText-103."
-   },
-   {
-     "file": "4-Figure1-1.png",
-     "caption": "Figure 1: Log perplexity\u2013recall error with KN-5."
-   },
-   {
-     "file": "4-Figure2-1.png",
-     "caption": "Figure 2: Log perplexity\u2013recall error with QRNN."
-   },
-   {
-     "file": "4-Table2-1.png",
-     "caption": "Table 2: Language modeling results on performance and model quality."
-   }
- ]
- ```
-
- When you download the `train_dev_figures_and_tables` tarball, you will see four files in `train/1811.00942`, with file names corresponding to the `file` fields in the list above.
-
- ## Fields specific to questions
-
- - `nlp_background` shows the experience the question writer had. The values can be `zero` (no experience), `two` (0 - 2 years of experience), `five` (2 - 5 years of experience), and `infinity` (> 5 years of experience). The field may also be empty, indicating that the writer chose not to share this information.
-
- - `topic_background` shows how familiar the question writer was with the topic of the paper. The values are `unfamiliar`, `familiar`, `research` (meaning that the topic is the research area of the writer), or null.
-
- - `paper_read`, when specified, shows whether the question writer had read the paper.
-
- - `search_query`, if not empty, is the query the question writer used to find the abstract of the paper from a large pool of abstracts we made available to them.
-
- ## Fields specific to answers
-
- Unanswerable answers have `unanswerable` set to true. The remaining answers have exactly one of the following fields non-empty.
-
- - `extractive_spans` are spans in the paper which serve as the answer.
- - `free_form_answer` is a written-out answer.
- - `yes_no` is true iff the answer is Yes, and false iff the answer is No.
-
- `evidence` is the set of paragraphs, figures, or tables used to arrive at the answer. When the evidence is a table or a figure, it starts with the string `FLOAT SELECTED` and contains the caption of the corresponding table or figure.
-
- `highlighted_evidence` is the set of sentences the answer providers selected as evidence if they chose textual evidence. The text in the `evidence` field is a mapping from these sentences to the paragraph level. That is, if you see textual evidence in the `evidence` field, it is guaranteed to be entire paragraphs, while that is not the case with `highlighted_evidence`.
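Taken together, the fields above describe the JSON structure that `qasper_evaluator.py` consumes. Below is a minimal, illustrative sketch of walking one of these files; the file path and printed format are placeholders, and only fields named in this README and in the evaluator are accessed:

```python
import json

# Illustrative path; the dev and test files follow the same structure.
with open("qasper-train-v0.3.json") as f:
    papers = json.load(f)

for arxiv_id, paper in papers.items():
    print(arxiv_id, paper["title"])
    for qa in paper["qas"]:
        for annotation in qa["answers"]:
            answer = annotation["answer"]
            if answer["unanswerable"]:
                label = "Unanswerable"
            elif answer["extractive_spans"]:
                label = ", ".join(answer["extractive_spans"])
            elif answer["free_form_answer"]:
                label = answer["free_form_answer"]
            elif answer["yes_no"] is not None:
                label = "Yes" if answer["yes_no"] else "No"
            else:
                label = "(no answer recorded)"
            print("  ", qa["question_id"], "->", label,
                  "| evidence paragraphs:", len(answer["evidence"]))
    break  # look at the first paper only
```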
original_data/qasper-train-dev-v0.3/qasper-dev-v0.3.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2ae7ee62a65b1c4225791c70de80c2aad4e8998cf1fd4f09a53103db4f21af93
- size 11398686
original_data/qasper-train-dev-v0.3/qasper-train-v0.3.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:9458bfe76074a8fa8d1685af02bcc73537aa6d338ad20591dfaff1946bc88bf4
- size 31969387