Dataset: fgrezes/WIESP2022-NER
Tasks:
Token Classification
Modalities:
Text
Formats:
parquet
Sub-tasks:
named-entity-recognition
Languages:
English
Size:
1K - 10K
License:
cc-by-4.0
Commit 49d1ede
Parent(s): 5a1e54e
Update parquet files
- .gitattributes +0 -38
- README.md +0 -85
- WIESP2022-NER-DEV-sample-predictions.jsonl +0 -0
- WIESP2022-NER-DEV.jsonl +0 -0
- WIESP2022-NER-TESTING-NO-LABELS.jsonl +0 -0
- WIESP2022-NER-TESTING.jsonl +0 -3
- WIESP2022-NER-TRAINING.jsonl +0 -0
- WIESP2022-NER-VALIDATION-NO-LABELS.jsonl +0 -0
- WIESP2022-NER-VALIDATION.jsonl +0 -0
- dataset_infos.json +0 -1
- data/test-00000-of-00001.parquet → fgrezes--WIESP2022-NER/parquet-test.parquet +2 -2
- data/train-00000-of-00001.parquet → fgrezes--WIESP2022-NER/parquet-train.parquet +2 -2
- data/validation-00000-of-00001.parquet → fgrezes--WIESP2022-NER/parquet-validation.parquet +2 -2
- ner_tags.json +0 -1
- scoring-scripts/compute_MCC.py +0 -31
- scoring-scripts/compute_seqeval.py +0 -49
- tag_definitions.md +0 -38
- tag_definitions.txt +0 -38
.gitattributes
DELETED
@@ -1,38 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-# Audio files - uncompressed
-*.pcm filter=lfs diff=lfs merge=lfs -text
-*.sam filter=lfs diff=lfs merge=lfs -text
-*.raw filter=lfs diff=lfs merge=lfs -text
-# Audio files - compressed
-*.aac filter=lfs diff=lfs merge=lfs -text
-*.flac filter=lfs diff=lfs merge=lfs -text
-*.mp3 filter=lfs diff=lfs merge=lfs -text
-*.ogg filter=lfs diff=lfs merge=lfs -text
-*.wav filter=lfs diff=lfs merge=lfs -text
-WIESP2022-NER-TESTING.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,85 +0,0 @@
----
-annotations_creators:
-- expert-generated
-language_creators:
-- found
-language:
-- en
-license:
-- cc-by-4.0
-multilinguality:
-- monolingual
-pretty_name: 'WIESP2022-NER'
-size_categories:
-- 1K<n<10K
-source_datasets: []
-task_categories:
-- token-classification
-task_ids:
-- named-entity-recognition
----
-# Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>
-
-
-## Dataset Description
-Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/), with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
-Datasets are in JSON Lines format (each line is a JSON dictionary).
-The datasets are formatted similarly to the CoNLL2003 format: each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
-
-Each entry consists of a dictionary with the following keys:
-- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
-- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
-- `"ner_tags"`: the list of NER tags (in IOB2 format).
-
-The following keys are not strictly needed by the participants:
-- `"ner_ids"`: the pre-computed list of ids corresponding to the ner_tags, as given by the dictionary in ner_tags.json.
-- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
-
-## Instructions for Workshop participants:
-How to load the data using the Huggingface library:
-```python
-from datasets import load_dataset
-dataset = load_dataset("adsabs/WIESP2022-NER")
-```
-
-How to load the data if you cloned the repository locally
-(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory; change as needed):
-- in python (as a list of dictionaries):
-```python
-import json
-with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
-    wiesp_dev_json = [json.loads(l) for l in list(f)]
-```
-- into Huggingface (as a Huggingface Dataset):
-```python
-from datasets import Dataset
-wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
-```
-
-
-How to compute your scores on the training data:
-1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
-2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
-
-Requirements to run the scoring scripts:
-[NumPy](https://numpy.org/install/)
-[scikit-learn](https://scikit-learn.org/stable/install.html)
-[seqeval](https://github.com/chakki-works/seqeval#installation)
-
-To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
-
-## File list
-```
-├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
-├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
-├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
-├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
-├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation.
-├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
-├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing.
-├── README.MD : this file.
-├── tag_definitions.md : short descriptions and examples of the tags used in the task.
-└── scoring-scripts/ : scripts used to evaluate submissions.
-    ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
-    └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
-```
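For reference, a minimal sketch of the scoring workflow the deleted README describes, assuming a local clone with the JSONL files present and the `scoring-scripts/` directory on the Python path; the all-`"O"` tagger is only a placeholder baseline, not a real model:

```python
# Sketch of the README's scoring workflow (assumes a local clone and that
# scoring-scripts/ is importable; the all-"O" tagger is a placeholder).
import json
from compute_MCC import compute_MCC_jsonl
from compute_seqeval import compute_seqeval_jsonl

with open("./WIESP2022-NER-TRAINING.jsonl") as f:
    references = [json.loads(line) for line in f]

# Predictions keep "unique_id" and "tokens" and add "pred_ner_tags".
predictions = [
    {"unique_id": ref["unique_id"],
     "tokens": ref["tokens"],
     "pred_ner_tags": ["O"] * len(ref["tokens"])}  # trivial baseline
    for ref in references
]

print(compute_MCC_jsonl(references, predictions))
print(compute_seqeval_jsonl(references, predictions)["overall_f1"])
```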
WIESP2022-NER-DEV-sample-predictions.jsonl
DELETED
The diff for this file is too large to render.
WIESP2022-NER-DEV.jsonl
DELETED
The diff for this file is too large to render.
WIESP2022-NER-TESTING-NO-LABELS.jsonl
DELETED
The diff for this file is too large to render.
WIESP2022-NER-TESTING.jsonl
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:69c832d9a687765c297960096a5d6b9013dc228f8dd64ccb01f698f186e90753
-size 13930556
WIESP2022-NER-TRAINING.jsonl
DELETED
The diff for this file is too large to render.
WIESP2022-NER-VALIDATION-NO-LABELS.jsonl
DELETED
The diff for this file is too large to render.
WIESP2022-NER-VALIDATION.jsonl
DELETED
The diff for this file is too large to render.
dataset_infos.json
DELETED
@@ -1 +0,0 @@
-{"fgrezes--WIESP2022-NER": {"description": "", "citation": "", "homepage": "", "license": "", "features": {"bibcode": {"dtype": "string", "id": null, "_type": "Value"}, "label_studio_id": {"dtype": "int64", "id": null, "_type": "Value"}, "ner_ids": {"feature": {"dtype": "int64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "ner_tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "section": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "unique_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "json", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 13805842, "num_examples": 1753, "dataset_name": "WIESP2022-NER"}, "validation": {"name": "validation", "num_bytes": 10803621, "num_examples": 1366, "dataset_name": "WIESP2022-NER"}, "test": {"name": "test", "num_bytes": 19270355, "num_examples": 2505, "dataset_name": "WIESP2022-NER"}}, "download_checksums": null, "download_size": 5518829, "post_processing_size": null, "dataset_size": 43879818, "size_in_bytes": 49398647}}
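The split sizes recorded above can be pulled back out in a few lines; a sketch assuming a local copy of the deleted file:

```python
# Sketch: recover the recorded split sizes from a local dataset_infos.json.
import json

with open("dataset_infos.json") as f:
    info = json.load(f)["fgrezes--WIESP2022-NER"]

for split, meta in info["splits"].items():
    print(split, meta["num_examples"])  # train 1753, validation 1366, test 2505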
data/test-00000-of-00001.parquet → fgrezes--WIESP2022-NER/parquet-test.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ed3267d8c12d2d6f8aaac50aada06c55dd9c9f5dd781c2dd44087615e3045bb7
+size 2566381
data/train-00000-of-00001.parquet → fgrezes--WIESP2022-NER/parquet-train.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:fc3dfd28a3781060df502fb9ffeb9b8f725f4b0bbffa35d645ffd8123c139848
+size 1898176
data/validation-00000-of-00001.parquet → fgrezes--WIESP2022-NER/parquet-validation.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8ec6d2978c219bb9eba5b40bbdc2af479ef1b61682723000d292d8c0a70f78f7
+size 1493619
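This commit replaces the raw JSONL files with these auto-converted Parquet shards. A hedged sketch of reading one split directly, assuming a local clone where `git lfs pull` has replaced the pointers with the real files:

```python
# Sketch: load the auto-converted train split with pandas.
import pandas as pd

train_df = pd.read_parquet("fgrezes--WIESP2022-NER/parquet-train.parquet")
print(len(train_df))               # expected: 1753 training samples
print(train_df.columns.tolist())   # tokens, ner_tags, ner_ids, unique_id, ...
```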
ner_tags.json
DELETED
@@ -1 +0,0 @@
-{"B-Archive": 0, "B-CelestialObject": 1, "B-CelestialObjectRegion": 2, "B-CelestialRegion": 3, "B-Citation": 4, "B-Collaboration": 5, "B-ComputingFacility": 6, "B-Database": 7, "B-Dataset": 8, "B-EntityOfFutureInterest": 9, "B-Event": 10, "B-Fellowship": 11, "B-Formula": 12, "B-Grant": 13, "B-Identifier": 14, "B-Instrument": 15, "B-Location": 16, "B-Mission": 17, "B-Model": 18, "B-ObservationalTechniques": 19, "B-Observatory": 20, "B-Organization": 21, "B-Person": 22, "B-Proposal": 23, "B-Software": 24, "B-Survey": 25, "B-Tag": 26, "B-Telescope": 27, "B-TextGarbage": 28, "B-URL": 29, "B-Wavelength": 30, "I-Archive": 31, "I-CelestialObject": 32, "I-CelestialObjectRegion": 33, "I-CelestialRegion": 34, "I-Citation": 35, "I-Collaboration": 36, "I-ComputingFacility": 37, "I-Database": 38, "I-Dataset": 39, "I-EntityOfFutureInterest": 40, "I-Event": 41, "I-Fellowship": 42, "I-Formula": 43, "I-Grant": 44, "I-Identifier": 45, "I-Instrument": 46, "I-Location": 47, "I-Mission": 48, "I-Model": 49, "I-ObservationalTechniques": 50, "I-Observatory": 51, "I-Organization": 52, "I-Person": 53, "I-Proposal": 54, "I-Software": 55, "I-Survey": 56, "I-Tag": 57, "I-Telescope": 58, "I-TextGarbage": 59, "I-URL": 60, "I-Wavelength": 61, "O": 62}
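A small sketch showing how the `ner_tags` ↔ `ner_ids` mapping above can be rebuilt, assuming a local copy of the deleted file:

```python
# Sketch: rebuild the tag <-> id mappings from a local ner_tags.json.
import json

with open("ner_tags.json") as f:
    tag2id = json.load(f)                 # {"B-Archive": 0, ..., "O": 62}

id2tag = {i: t for t, i in tag2id.items()}
assert id2tag[62] == "O" and len(tag2id) == 63  # 31 entity types x B/I, plus O
```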
scoring-scripts/compute_MCC.py
DELETED
@@ -1,31 +0,0 @@
-from sklearn.metrics import matthews_corrcoef
-import numpy as np
-def compute_MCC_jsonl(references_jsonl, predictions_jsonl, ref_col='ner_tags', pred_col='pred_ner_tags'):
-    '''
-    Computes the Matthews correlation coeff between two datasets in jsonl format (list of dicts each with same keys).
-    Sorts the datasets by 'unique_id' and verifies that the tokens match.
-    '''
-    # reverse the dict
-    ref_dict = {k:[e[k] for e in references_jsonl] for k in references_jsonl[0].keys()}
-    pred_dict = {k:[e[k] for e in predictions_jsonl] for k in predictions_jsonl[0].keys()}
-
-    # sort by unique_id
-    ref_idx = np.argsort(ref_dict['unique_id'])
-    pred_idx = np.argsort(pred_dict['unique_id'])
-    ref_ner_tags = np.array(ref_dict[ref_col], dtype=object)[ref_idx]
-    pred_ner_tags = np.array(pred_dict[pred_col], dtype=object)[pred_idx]
-    ref_tokens = np.array(ref_dict['tokens'], dtype=object)[ref_idx]
-    pred_tokens = np.array(pred_dict['tokens'], dtype=object)[pred_idx]
-
-    # check that tokens match
-    for t1,t2 in zip(ref_tokens, pred_tokens):
-        assert(t1==t2)
-
-    # the lists have to be flattened
-    flat_ref_tags = np.concatenate(ref_ner_tags)
-    flat_pred_tags = np.concatenate(pred_ner_tags)
-
-    mcc_score = matthews_corrcoef(y_true=flat_ref_tags,
-                                  y_pred=flat_pred_tags)
-
-    return(mcc_score)
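A tiny worked example of calling `compute_MCC_jsonl` on hand-made, hypothetical two-sample data; note that the function sorts both sides by `unique_id`, so the order of the prediction list does not matter:

```python
# Hypothetical two-sample references/predictions for compute_MCC_jsonl.
from compute_MCC import compute_MCC_jsonl  # assumes the script above is local

refs = [
    {"unique_id": "a", "tokens": ["Hubble", "data"], "ner_tags": ["B-Telescope", "O"]},
    {"unique_id": "b", "tokens": ["SDSS"], "ner_tags": ["B-Survey"]},
]
preds = [  # deliberately out of order; the function sorts by unique_id
    {"unique_id": "b", "tokens": ["SDSS"], "pred_ner_tags": ["B-Survey"]},
    {"unique_id": "a", "tokens": ["Hubble", "data"], "pred_ner_tags": ["B-Telescope", "B-Telescope"]},
]
print(compute_MCC_jsonl(refs, preds))  # token-level MCC over the 3 flattened tags
```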
scoring-scripts/compute_seqeval.py
DELETED
@@ -1,49 +0,0 @@
-from seqeval.metrics import classification_report, f1_score, precision_score, recall_score, accuracy_score
-from seqeval.scheme import IOB2
-import numpy as np
-def compute_seqeval_jsonl(references_jsonl, predictions_jsonl, ref_col='ner_tags', pred_col='pred_ner_tags'):
-    '''
-    Computes the seqeval scores between two datasets loaded from jsonl (list of dicts with same keys).
-    Sorts the datasets by 'unique_id' and verifies that the tokens match.
-    '''
-    # extract the tags and reverse the dict
-    ref_dict = {k:[e[k] for e in references_jsonl] for k in references_jsonl[0].keys()}
-    pred_dict = {k:[e[k] for e in predictions_jsonl] for k in predictions_jsonl[0].keys()}
-
-    # sort by unique_id
-    ref_idx = np.argsort(ref_dict['unique_id'])
-    pred_idx = np.argsort(pred_dict['unique_id'])
-    ref_ner_tags = np.array(ref_dict[ref_col], dtype=object)[ref_idx]
-    pred_ner_tags = np.array(pred_dict[pred_col], dtype=object)[pred_idx]
-    ref_tokens = np.array(ref_dict['tokens'], dtype=object)[ref_idx]
-    pred_tokens = np.array(pred_dict['tokens'], dtype=object)[pred_idx]
-
-    # check that tokens match
-    assert((ref_tokens==pred_tokens).all())
-
-
-    # get report
-    report = classification_report(y_true=ref_ner_tags, y_pred=pred_ner_tags,
-                                   scheme=IOB2, output_dict=True,
-                                   )
-
-    # extract values we care about
-    report.pop("macro avg")
-    report.pop("weighted avg")
-    overall_score = report.pop("micro avg")
-
-    seqeval_results = {
-        type_name: {
-            "precision": score["precision"],
-            "recall": score["recall"],
-            "f1": score["f1-score"],
-            "support": score["support"],
-        }
-        for type_name, score in report.items()
-    }
-    seqeval_results["overall_precision"] = overall_score["precision"]
-    seqeval_results["overall_recall"] = overall_score["recall"]
-    seqeval_results["overall_f1"] = overall_score["f1-score"]
-    seqeval_results["overall_accuracy"] = accuracy_score(y_true=ref_ner_tags, y_pred=pred_ner_tags)
-
-    return(seqeval_results)
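And the entity-level counterpart, reusing the hypothetical `refs`/`preds` from the MCC example above:

```python
# Inspect entity-level scores from compute_seqeval_jsonl (same toy data).
from compute_seqeval import compute_seqeval_jsonl

scores = compute_seqeval_jsonl(refs, preds)
print(scores["overall_precision"], scores["overall_recall"], scores["overall_f1"])
print(scores["Telescope"])  # per-class precision/recall/f1/support
```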
tag_definitions.md
DELETED
@@ -1,38 +0,0 @@
-## Definitions of the Labels used in the DEAL Shared Task
-Below you will find the full list of labels used in the DEAL shared task, along with their definitions and some examples.
-In the dataset, tokens are labeled using the [IOB2 format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (e.g., B-Archive, I-Archive).
-
-
-| **Label** (abbreviation) | **Definition** | **Example** |
-| --- | --- | --- |
-| Archive (Arc) | A curated collection of the literature or data; very similar to Database | NASA ADS, MAST |
-| CelestialObject (COb) | A named object in the sky | ONC, Andromeda galaxy |
-| CelestialObjectRegion (COR) | A named area on/in a celestial body | Inner galaxy |
-| CelestialRegion (CeR) | A defined region projected onto the sky, or celestial coordinates | GOODS field, l=2, b=15 |
-| Citation (Cit) | A reference to previous work in the literature | Allen et al. 2012 |
-| Collaboration (Col) | An organizational entity containing multiple organizations, observatories, and/or countries | the Planck Collaboration |
-| ComputingFacility (CoF) | A facility whose primary purpose is to operate computational resources | Australian National Computational Infrastructure high-performance computing centre, CINECA supercomputing Centre |
-| Database (DaB) | A curated and searchable set of related data tables (very similar to Archive) | Simbad |
-| Dataset (DaS) | A curated set of data (essentially a single data table) | Gaia EDR3 |
-| EntityOfFutureInterest (EFI) | A general catch-all for things that may be worth considering in the future (often terms related to gravitational waves) | |
-| Event (Eve) | A conference, workshop or other event that brings scientists together | Protostars and Planets VI |
-| Fellowship (Fel) | A grant aimed at students and/or early-career researchers | Hubble Fellowship |
-| Formula (For) | A mathematical formula or equation | F = Gm1m2/r^2, z=2.3 |
-| Grant (Gra) | An allocation of money and/or time for a research project | grant No. 12345, ADAP grant 12345 |
-| Identifier (Ide) | A unique identifier for data, images, etc. | ALMA 123.12345 |
-| Instrument (Ins) | A device, often (but not always) placed on a telescope, used to make a measurement | Infrared Array Camera, NIRCam |
-| Location (Loc) | A named location on Earth | Canada |
-| Mission (Mis) | A spacecraft, not itself a telescope or observatory, that carries multiple instruments | WIND |
-| Model (Mod) | A named scientific or computational model | TNG100, Salpeter IMF |
-| ObservationalTechniques (ObT) | A method used to observe celestial objects | Resolved Long-slit Spectroscopy, adaptive optics imaging |
-| Observatory (Obs) | A group of telescopes, often at the same site | Keck Observatory, Fermi |
-| Organization (Org) | A named organization that is not an observatory | NASA, University of Toledo |
-| Person (Per) | A named person or their initials | A. Einstein |
-| Proposal (Pro) | A request for telescope time or funding | 74.C-0209(A), GN-2014B-Q-26 |
-| Software (Sof) | Named computer code or language | Python, Numpy, ClumpFind |
-| Survey (Sur) | An organized search of the sky, often dedicated to large-scale science projects | 2MASS, SDSS |
-| Tag (Tag) | An HTML tag | <bold></bold> |
-| Telescope (Tel) | A "bucket" to catch light | Hubble Space Telescope, Discovery Channel Telescope |
-| TextGarbage (TeG) | Incorrect text, often multiple punctuation marks with no inner text | ',,,,,' |
-| URL (URL) | A link to a website | https://www.astropy.org/ |
-| Wavelength (Wav) | A portion of the electromagnetic spectrum (can be given as a wavelength value, a name, or a particular transition) | 656.46 nm, H-alpha |
tag_definitions.txt
DELETED
(Contents identical to tag_definitions.md above; see that diff for the full table.)