---
license: mit
task_categories:
- summarization
language:
- de
tags:
- wikipedia
- wikidata
- Relation Extraction
- REBEL
pretty_name: German REBEL Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for German REBEL Dataset
### Dataset Summary
This dataset is the German version of [Babelscape/rebel-dataset](https://huggingface.co/datasets/Babelscape/rebel-dataset). It has been generated using [CROCODILE](https://github.com/Babelscape/crocodile).
The Wikipedia dump it is based on is from November 2022.
### Languages
- German
## Dataset Structure
```
{"docid": "9400003",
"title": "Odin-Gletscher",
"uri": "Q7077818",
"text": "Der Odin-Gletscher ist ein kleiner Gletscher im ostantarktischen Viktorialand. Er fließt von den Westhängen des Mount Odin in der Asgard Range.\n\nDas New Zealand Antarctic Place-Names Committee benannte ihn in Anlehnung an die Benennung des Mount Odin nach Odin, Göttervater, Kriegs- und Totengott der nordischen Mythologie.",
"entities": [{"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, ... ],
"triples": [{"subject": {"uri": "Q7077818", "boundaries": [4, 18], "surfaceform": "Odin-Gletscher", "annotator": "Me"},
"predicate": {"uri": "P31", "boundaries": null, "surfaceform": "ist ein(e)", "annotator": "NoSubject-Triple-aligner"},
"object": {"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, "sentence_id": 0,
"dependency_path": null,
"confidence": 0.99560546875,
"annotator": "NoSubject-Triple-aligner"}, ...]
}
```
### Data Instances
The dataset is 1.1 GB uncompressed and 195 MB zipped.
### Data Fields
"docid": "9644601",
"title": Wikipedia Title
"uri": "Q4290759",
"text": Wikipedia Abstract
"entities": A list of Entities
- uri: Wikidata URI
- boundaries: Tuple of indices of the entity in the abstract
- surfaceform: text form of entity
- annotator: different annotator classes
"triples": List of Triples as dictionaries
- sentence_id: Sentence number the triple appears in.
- "confidence": float, the confidence of the NLI Model
- subject
- uri: Wikidata Entity URI
- boundaries
- surfaceform
- annotator
- predicate
- uri: Wikidata Relation URI
- boundaries: always null,
- surfaceform: Wikidata Relation Name
- annotator
- object:
- uri: Wikidata Entity URI
- boundaries
- surfaceform
- annotator
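For illustration, here is how a single JSONL line can be read and its `boundaries` offsets verified (a minimal sketch using only the standard library; the record is abridged from the example above):

```python
import json

# One (abridged) line of the dataset file, in JSONL format: one article per line.
line = ('{"docid": "9400003", "title": "Odin-Gletscher", "uri": "Q7077818", '
        '"text": "Der Odin-Gletscher ist ein kleiner Gletscher im ostantarktischen Viktorialand.", '
        '"entities": [{"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}], '
        '"triples": [{"subject": {"uri": "Q7077818", "boundaries": [4, 18], "surfaceform": "Odin-Gletscher", "annotator": "Me"}, '
        '"predicate": {"uri": "P31", "boundaries": null, "surfaceform": "ist ein(e)", "annotator": "NoSubject-Triple-aligner"}, '
        '"object": {"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, '
        '"sentence_id": 0, "dependency_path": null, "confidence": 0.99560546875, '
        '"annotator": "NoSubject-Triple-aligner"}]}')

article = json.loads(line)
for triple in article["triples"]:
    # `boundaries` are (start, end) character offsets into the abstract.
    start, end = triple["object"]["boundaries"]
    assert article["text"][start:end] == triple["object"]["surfaceform"]
    print(triple["subject"]["surfaceform"],
          triple["predicate"]["surfaceform"],
          triple["object"]["surfaceform"])
```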
### Data Splits
No splits are provided for now, since the relation classes are quite imbalanced.
To read the dataset, you can adapt the `_generate_examples` function provided by https://github.com/Babelscape/rebel:
```python
import json
import logging
import re

import pandas as pd

# Method of a `datasets` loading script (hence `self`); adapted from the REBEL repo.
def _generate_examples(self, filepath):
"""This function returns the examples in the raw (text) form."""
logging.info("generating examples from = %s", filepath)
relations_df = pd.read_csv(self.config.data_files['relations'], header = None, sep='\t')
relations = list(relations_df[0])
with open(filepath, encoding="utf-8") as f:
for id_, row in enumerate(f):
article = json.loads(row)
prev_len = 0
if len(article['triples']) == 0:
continue
count = 0
for text_paragraph in article['text'].split('\n'):
if len(text_paragraph) == 0:
continue
sentences = re.split(r'(?<=[.])\s', text_paragraph)
text = ''
for sentence in sentences:
text += sentence + ' '
if any([entity['boundaries'][0] < len(text) + prev_len < entity['boundaries'][1] for entity in article['entities']]):
continue
entities = sorted([entity for entity in article['entities'] if prev_len < entity['boundaries'][1] <= len(text)+prev_len], key=lambda tup: tup['boundaries'][0])
decoder_output = '<triplet> '
for int_ent, entity in enumerate(entities):
triplets = sorted([triplet for triplet in article['triples'] if triplet['subject'] == entity and prev_len< triplet['subject']['boundaries'][1]<=len(text) + prev_len and prev_len< triplet['object']['boundaries'][1]<=len(text)+ prev_len and triplet['predicate']['surfaceform'] in relations], key=lambda tup: tup['object']['boundaries'][0])
if len(triplets) == 0:
continue
decoder_output += entity['surfaceform'] + ' <subj> '
for triplet in triplets:
decoder_output += triplet['object']['surfaceform'] + ' <obj> ' + triplet['predicate']['surfaceform'] + ' <subj> '
decoder_output = decoder_output[:-len(' <subj> ')]
decoder_output += ' <triplet> '
decoder_output = decoder_output[:-len(' <triplet> ')]
count += 1
prev_len += len(text)
if len(decoder_output) == 0:
text = ''
continue
                    text = re.sub(r'([\[\].,!?()])', r' \1 ', text.replace('()', ''))
                    text = re.sub(r'\s{2,}', ' ', text)
yield article['uri'] + '-' + str(count), {
"title": article['title'],
"context": text,
"id": article['uri'] + '-' + str(count),
"triplets": decoder_output,
}
text = ''
```
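The `decoder_output` built above linearizes the triples of a passage as `<triplet> subject <subj> object <obj> relation`, appending further `<subj> object <obj> relation` pairs for triples that share the same subject. A minimal sketch of decoding this format back into tuples (the `parse_triplets` helper is my own illustration, not part of the REBEL codebase):

```python
def parse_triplets(linearized: str) -> list[tuple[str, str, str]]:
    """Decode a REBEL-style linearized string into (subject, relation, object) tuples."""
    triples = []
    for chunk in linearized.split("<triplet>"):
        chunk = chunk.strip()
        if not chunk:
            continue
        # The first segment is the subject; each following segment holds one object/relation pair.
        subject, *pairs = chunk.split("<subj>")
        for pair in pairs:
            obj, _, relation = pair.partition("<obj>")
            triples.append((subject.strip(), relation.strip(), obj.strip()))
    return triples

print(parse_triplets("<triplet> Odin-Gletscher <subj> Gletscher <obj> ist ein(e)"))
# [('Odin-Gletscher', 'ist ein(e)', 'Gletscher')]
```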
## Dataset Creation
### Curation Rationale
This dataset was created to enable the training of a German BART-based model, as a pre-training phase for Relation Extraction.
### Source Data
#### Who are the source language producers?
Any Wikipedia and Wikidata contributor.
### Annotations
#### Annotation process
The dataset was extracted with the cRocoDiLe pipeline (Automatic Relation Extraction Dataset with NLI filtering).
#### Who are the annotators?
The annotations are generated automatically by the extraction pipeline.
### Personal and Sensitive Information
All text comes from Wikipedia; any personal or sensitive information present there may also be present in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset serves as a pre-training step for Relation Extraction models. Since it is distantly (silver) annotated, it should only be used as such. A model trained solely on this dataset may produce hallucinations stemming from the silver nature of the annotations.
### Discussion of Biases
Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases within those sources.
For Wikipedia text, see for example Dinan et al. (2020) on biases in Wikipedia (esp. Table 1), or Blodgett et al. (2020) for a more general discussion of the topic.
For Wikidata, there are class imbalances, partly resulting from Wikipedia.
### Other Known Limitations
None known for now.
## Additional Information
### Dataset Curators
Me
### Licensing Information
Since anyone can recreate the dataset on their own using the linked GitHub repository, this dataset is released under the MIT License.
### Citation Information
Inspiration by:
```
@inproceedings{huguet-cabot-navigli-2021-rebel,
title = "REBEL: Relation Extraction By End-to-end Language generation",
author = "Huguet Cabot, Pere-Llu{\'\i}s and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
}
```
### Contributions
None for now.