Tasks: Token Classification · Sub-tasks: parsing

Commit bb8c37d (parent: 844de61) by ArneBinder: recent changes in the context of https://github.com/ArneBinder/pie-datasets/pull/61

README.md
---
annotations_creators:
- expert-generated
language_creators:
- found
license: []
task_categories:
- token-classification
task_ids:
- parsing
---

# Information Card for Brat

## Table of Contents

- [Description](#description)
  - [Summary](#summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Usage](#usage)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Description

- **Homepage:** https://brat.nlplab.org
- **Paper:** https://aclanthology.org/E12-2021/
- **Leaderboard:** \[Needs More Information\]
- **Point of Contact:** \[Needs More Information\]

### Summary

Brat is an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology. It has been developed for rich structured annotation for a variety of NLP tasks and aims to support manual curation efforts and increase annotator productivity using NLP techniques. brat is designed in particular for structured annotation, where the notes are not free-form text but have a fixed form that can be automatically processed and interpreted by a computer.

## Dataset Structure

Datasets annotated in the brat format are processed by this script. Annotations created in brat are stored on disk in a standoff format: annotations are stored separately from the annotated document text, which is never modified by the tool. For each text document in the system, there is a corresponding annotation file. The two are associated by the file naming convention that their base name (the file name without suffix) is the same: for example, the file DOC-1000.ann contains annotations for the file DOC-1000.txt. More information can be found [here](https://brat.nlplab.org/standoff.html).

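To make the standoff convention concrete, here is a minimal parsing sketch. The file contents are hypothetical, and only the simplest case is covered: real `.ann` files also contain relation (`R`), event (`E`), attribute (`A`), and note (`#`) lines, and spans may be discontinuous.

```python
# Hypothetical single line from a .ann file (a text-bound annotation).
# This sketch handles only simple single-fragment "T" lines.
ann_line = "T1\tbackground_claim 2417 2522\tcomplicated 3D character models"

def parse_text_bound(line: str) -> dict:
    """Parse a simple text-bound ("T") line into id, type, offsets, and covered text."""
    ann_id, type_and_offsets, text = line.split("\t")
    ann_type, start, end = type_and_offsets.split(" ")
    return {"id": ann_id, "type": ann_type, "start": int(start), "end": int(end), "text": text}

print(parse_text_bound(ann_line))
```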
### Data Instances

```
{
    "context": '<?xml version="1.0" encoding="UTF-8" standalone="no"?>\n<Document xmlns:gate="http://www.gat...',
    "file_name": "A01",
    "spans": {
        'id': ['T1', 'T2', 'T4', 'T5', 'T6', 'T3', 'T7', 'T8', 'T9', 'T10', 'T11', 'T12', ...],
        'type': ['background_claim', 'background_claim', 'background_claim', 'own_claim', ...],
        'locations': [{'start': [2417], 'end': [2522]}, {'start': [2524], 'end': [2640]}, ...],
        'text': ['complicated 3D character models...', 'The range of breathtaking realistic...', ...],
    },
    "relations": {
        'id': ['R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'R9', 'R10', 'R11', 'R12', ...],
        'type': ['supports', 'supports', 'supports', 'supports', 'contradicts', 'contradicts', ...],
        'arguments': [{'type': ['Arg1', 'Arg2'], 'target': ['T4', 'T5']}, ...],
    },
    "equivalence_relations": {'type': [], 'targets': []},
    "events": {'id': [], 'type': [], 'trigger': [], 'arguments': []},
    "attributions": {'id': [], 'type': [], 'target': [], 'value': []},
    "normalizations": {'id': [], 'type': [], 'target': [], 'resource_id': [], 'entity_id': []},
    "notes": {'id': [], 'type': [], 'target': [], 'note': []},
}
```

### Data Fields

- `context` (`str`): the textual content of the data file
- `file_name` (`str`): the name of the data / annotation file without extension
- `spans` (`dict`): span annotations of the `context` string
  - `id` (`str`): the id of the span, starts with `T`
  - `type` (`str`): the label of the span
  - `locations` (`list`): the indices indicating the span's locations (multiple in the case of span fragments), consisting of `dict`s with
    - `start` (`list` of `int`): the inclusive character start positions of the span fragments
    - `end` (`list` of `int`): the exclusive character end positions of the span fragments
  - `text` (`list` of `str`): the texts of the span fragments
- `relations` (`dict`): a sequence of relations between elements of `spans`
  - `id` (`str`): the id of the relation, starts with `R`
  - `type` (`str`): the label of the relation
  - `arguments` (`list` of `dict`): the spans related by the relation, consisting of `dict`s with
    - `type` (`list` of `str`): the argument roles of the spans in the relation, either `Arg1` or `Arg2`
    - `target` (`list` of `str`): the ids of the spans that are the arguments of the relation
- `equivalence_relations` (`dict`): contains `type` and `targets` (more information needed)
- `events` (`dict`): contains `id`, `type`, `trigger`, and `arguments` (more information needed)
- `attributions` (`dict`): attribute annotations of any other annotation
  - `id` (`str`): the instance id of the attribution
  - `type` (`str`): the type of the attribution
  - `target` (`str`): the id of the annotation the attribution refers to
  - `value` (`str`): the attribution's value or mark
- `normalizations` (`dict`): the unique identification of the real-world entities referred to by specific text expressions
  - `id` (`str`): the instance id of the normalized entity
  - `type` (`str`): the type of the normalized entity
  - `target` (`str`): the id of the annotation the normalized entity refers to
  - `resource_id` (`str`): the id of the resource associated with the normalized entity
  - `entity_id` (`str`): the instance id of the normalized entity
- `notes` (`dict`): freeform text attached to an annotation
  - `id` (`str`): the instance id of the note
  - `type` (`str`): the type of the note
  - `target` (`str`): the id of the related annotation
  - `note` (`str`): the text body of the note

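As an illustration of how these fields fit together, the sketch below recovers a span's surface text by slicing `context` with its `locations`: the `start` and `end` lists line up pairwise, one entry per fragment. The example values are made up to mirror the schema, not actual dataset content.

```python
# Made-up example mirroring the schema above (not actual dataset content).
example = {
    "context": "Cats purr. Dogs bark loudly.",
    "spans": {
        "id": ["T1", "T2"],
        "type": ["background_claim", "own_claim"],
        "locations": [
            {"start": [0], "end": [9]},            # a single fragment
            {"start": [11, 21], "end": [15, 27]},  # two fragments
        ],
    },
}

def span_fragments(context: str, location: dict) -> list:
    # Pair up starts (inclusive) and ends (exclusive) and slice the context.
    return [context[s:e] for s, e in zip(location["start"], location["end"])]

for span_id, location in zip(example["spans"]["id"], example["spans"]["locations"]):
    print(span_id, span_fragments(example["context"], location))
# T1 ['Cats purr']
# T2 ['Dogs', 'loudly']
```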
### Usage

The `brat` dataset script can be used by calling the `load_dataset()` method and passing any arguments that are accepted by the `BratConfig` (which is a special [BuilderConfig](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/builder_classes#datasets.BuilderConfig)). It requires at least the `url` argument. The full list of arguments is as follows:

- `url` (`str`): the url of the dataset, which should point to either a zip file or a directory containing the brat data (`*.txt`) and annotation (`*.ann`) files
- `description` (`str`, optional): the description of the dataset
- `citation` (`str`, optional): the citation of the dataset
- `homepage` (`str`, optional): the homepage of the dataset
- `split_paths` (`dict`, optional): a mapping of (arbitrary) split names to subdirectories or lists of files (without extension), e.g. `{"train": "path/to/train_directory", "test": "path/to/test_directory"}` or `{"train": ["path/to/train_file1", "path/to/train_file2"]}`. In both cases (subdirectory paths or file paths), the paths are relative to the url. If `split_paths` is not provided, the dataset will be loaded from the root directory and all direct subfolders will be considered as splits.
- `file_name_blacklist` (`list`, optional): a list of file names (without extension) that should be ignored, e.g. `["A28"]`. This is useful if the dataset contains files that are not valid brat files.

Important: using the `data_dir` parameter of the `load_dataset()` method overrides the `url` parameter of the `BratConfig`.

We provide an example for the [SciArg](https://aclanthology.org/W18-5206.pdf) dataset below:

```python
from datasets import load_dataset

kwargs = {
    # "description" and "citation" entries not shown here
    "homepage": "https://github.com/anlausch/ArguminSci",
    "url": "http://data.dws.informatik.uni-mannheim.de/sci-arg/compiled_corpus.zip",
    "split_paths": {
        "train": "compiled_corpus",
    },
    "file_name_blacklist": ['A28'],
}

dataset = load_dataset('dfki-nlp/brat', **kwargs)
```
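Once loaded, the relation `arguments` reference spans by id, so a typical post-processing step resolves them back to the span annotations. The sketch below runs on a made-up instance shaped like the schema above (the real corpus would first have to be downloaded via `load_dataset` as shown):

```python
# Made-up instance shaped like the dataset schema (not actual corpus content).
instance = {
    "spans": {
        "id": ["T4", "T5"],
        "type": ["own_claim", "background_claim"],
    },
    "relations": {
        "id": ["R1"],
        "type": ["supports"],
        "arguments": [{"type": ["Arg1", "Arg2"], "target": ["T4", "T5"]}],
    },
}

# Map each span id to its label, then resolve every relation's arguments.
span_type = dict(zip(instance["spans"]["id"], instance["spans"]["type"]))

resolved = []
for rel_type, args in zip(instance["relations"]["type"], instance["relations"]["arguments"]):
    roles = dict(zip(args["type"], args["target"]))  # e.g. {"Arg1": "T4", "Arg2": "T5"}
    resolved.append((span_type[roles["Arg1"]], rel_type, span_type[roles["Arg2"]]))

print(resolved)
# [('own_claim', 'supports', 'background_claim')]
```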
|
### Licensing Information

\[Needs More Information\]

### Citation Information

```
...
url = "https://aclanthology.org/E12-2021",
pages = "102--107",
}
```