entity_linking_data_loader #3
.gitattributes CHANGED
@@ -139,27 +139,3 @@ token_classification/roles_multi/validation.jsonl filter=lfs diff=lfs merge=lfs
 token_classification/roles_small_mol/test.jsonl filter=lfs diff=lfs merge=lfs -text
 token_classification/roles_small_mol/train.jsonl filter=lfs diff=lfs merge=lfs -text
 token_classification/roles_small_mol/validation.jsonl filter=lfs diff=lfs merge=lfs -text
-image_segmentation/train/labels.cache filter=lfs diff=lfs merge=lfs -text
-image_segmentation/clip_panel_image_captions/ filter=lfs diff=lfs merge=lfs -text
-. filter=lfs diff=lfs merge=lfs -text
-information_extraction/ filter=lfs diff=lfs merge=lfs -text
-information_extraction/experimental_assay.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/hypothesis_tested.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/is_experiment.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/ncbi_gene_linking.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/where_was_tested.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/where_was_tested.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/controlled_entities.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/diseases.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/chemicals.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/is_experiment.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/diseases.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/experimental_assay.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/assayed_entities.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/controlled_entities.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/hypothesis_tested.csv filter=lfs diff=lfs merge=lfs -text
-information_extraction/ncbi_gene_linking.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/assayed_entities.json filter=lfs diff=lfs merge=lfs -text
-information_extraction/chemicals.json filter=lfs diff=lfs merge=lfs -text
-compound_image_segmentation/segmented_images.zip filter=lfs diff=lfs merge=lfs -text
-compound_image_segmentation/soda_panelization_figures.zip filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -172,39 +172,6 @@ The text in the dataset is English.
 })
 ```
 
-### Information Extraction
-
-This folder contains `caption`-`answer` pairs intended for information extraction. Each file contains the answers to a given question about the captions.
-Each file is provided in both `csv` and `json` format for convenience in different use cases.
-The files, and the questions they answer, are listed below:
-
-* `assayed_entities`: What is the assayed/measured entity?
-
-* `chemicals`: Are there any chemical compounds or small molecules mentioned?
-
-* `diseases`: Is there any disease term mentioned, or inferable, in the figure legend?
-
-* `experimental_assay`: What kind of experimental assay was used for this experiment?
-
-* `hypothesis_tested`: Can you formulate the hypothesis that this experiment tested?
-
-* `is_experiment`: Does the legend describe an experiment or not?
-
-* `ncbi_gene_linking`: Can you link the identified genes to their NCBI gene identifiers?
-
-* `where_was_tested`: In what kind of cell/tissue/organism/subcellular component was the experiment performed?
-
-We refer the interested reader to the [cypher queries](https://github.com/source-data/soda-data/blob/master/src/soda_data/sdneo/info_extraction_queries.py) used to generate this data for further information.
-
-
-### Compound Image Segmentation
-
-This folder contains the data for the compound image segmentation task. The data is provided in a format compatible with training `YOLOv10` models.
-
-The file `soda_panelization_figures.zip` contains 13039 figures extracted from scientific manuscripts, labeled so that object detection algorithms can split each figure into its panels. The dataset is divided into train, validation and test sets.
-
-The file `segmented_images.zip` contains `panel`-`caption` pairs. These have been used, together with multimodal LLMs, to assign the correct panel label and caption to each panel in the figure.
-
 ## Dataset Creation
 
 ### Curation Rationale
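
Note for users of the removed `information_extraction` files: they remain available in earlier revisions of this repository. Below is a minimal loading sketch; the exact record schema is not shown in this diff, so the reader is deliberately defensive and you should inspect the first record before relying on field names.

```python
import json

def read_qa(path):
    """Best-effort reader for the removed information_extraction files.

    Tries whole-file JSON first, then falls back to JSON lines.
    The record layout (e.g. caption/answer fields) is an assumption,
    not guaranteed by this PR.
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return [json.loads(line) for line in text.splitlines() if line.strip()]

records = read_qa("information_extraction/hypothesis_tested.json")
print(len(records))
print(records[0] if isinstance(records, list) else next(iter(records)))
```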
SourceData.py CHANGED
@@ -19,12 +19,10 @@
 from __future__ import absolute_import, division, print_function
 
 import json
-import os
 import datasets
 
 _BASE_URL = "https://huggingface.co/datasets/EMBO/SourceData/resolve/main/"
 
-
 class SourceData(datasets.GeneratorBasedBuilder):
     """SourceDataNLP provides datasets to train NLP tasks in cell and molecular biology."""
 
@@ -47,26 +45,19 @@ class SourceData(datasets.GeneratorBasedBuilder):
         "B-DISEASE",
         "I-DISEASE",
         "B-CELL_LINE",
-        "I-CELL_LINE",
-    ]
-    _SEMANTIC_ROLES = [
-        "O",
-        "B-CONTROLLED_VAR",
-        "I-CONTROLLED_VAR",
-        "B-MEASURED_VAR",
-        "I-MEASURED_VAR",
+        "I-CELL_LINE"
     ]
+    _SEMANTIC_ROLES = ["O", "B-CONTROLLED_VAR", "I-CONTROLLED_VAR", "B-MEASURED_VAR", "I-MEASURED_VAR"]
     _PANEL_START_NAMES = ["O", "B-PANEL_START", "I-PANEL_START"]
     _ROLES_MULTI = ["O", "GENEPROD", "SMALL_MOLECULE"]
 
     _CITATION = """\
-    @article{abreu2023sourcedata,
-        title={The SourceData-NLP dataset: integrating curation into scientific publishing
-               for training large language models},
-        author={Abreu-Vicente, Jorge and Sonntag, Hannah and Eidens, Thomas and Lemberger, Thomas},
-        journal={arXiv preprint arXiv:2310.20440},
-        year={2023}
-    }
+    @Unpublished{
+        huggingface: dataset,
+        title = {SourceData NLP},
+        authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO},
+        year={2023}
+    }
     """
 
     _DESCRIPTION = """\
@@ -79,73 +70,32 @@ class SourceData(datasets.GeneratorBasedBuilder):
 
     DEFAULT_CONFIG_NAME = "NER"
 
-    _LATEST_VERSION = "2.0.3"  # Should this be updated to 2.0.3
+    _LATEST_VERSION = "1.0.0"
 
     def _info(self):
-        VERSION = (
-            self.config.version
-            if self.config.version not in ["0.0.0", "latest"]
-            else self._LATEST_VERSION
-        )
+        VERSION = self.config.version if self.config.version not in ["0.0.0", "latest"] else self._LATEST_VERSION
         self._URLS = {
             "NER": f"{_BASE_URL}token_classification/v_{VERSION}/ner/",
             "PANELIZATION": f"{_BASE_URL}token_classification/v_{VERSION}/panelization/",
             "ROLES_GP": f"{_BASE_URL}token_classification/v_{VERSION}/roles_gene/",
             "ROLES_SM": f"{_BASE_URL}token_classification/v_{VERSION}/roles_small_mol/",
             "ROLES_MULTI": f"{_BASE_URL}token_classification/v_{VERSION}/roles_multi/",
-            "FULL": os.path.join(
-                _BASE_URL,
-                "bigbio",
-                # f"v_{VERSION}",
-            ),
         }
         self.BUILDER_CONFIGS = [
-            datasets.BuilderConfig(
-                name="NER",
-                version=VERSION,
-                description="Dataset for named-entity recognition.",
-            ),
-            datasets.BuilderConfig(
-                name="PANELIZATION",
-                version=VERSION,
-                description="Dataset to separate figure captions into panels.",
-            ),
-            datasets.BuilderConfig(
-                name="ROLES_GP",
-                version=VERSION,
-                description="Dataset for semantic roles of gene products.",
-            ),
-            datasets.BuilderConfig(
-                name="ROLES_SM",
-                version=VERSION,
-                description="Dataset for semantic roles of small molecules.",
-            ),
-            datasets.BuilderConfig(
-                name="ROLES_MULTI",
-                version=VERSION,
-                description="Dataset to train roles. ROLES_GP and ROLES_SM at once.",
-            ),
-            datasets.BuilderConfig(
-                name="FULL",
-                version=VERSION,
-                description="Full dataset including all NER + entity linking annotations, links to figure images, etc.",
-            ),
-            # datasets.BuilderConfig(
-            #     name="BIGBIO_KB",
-            #     version=VERSION,
-            #     description="Full dataset formatted according to BigBio KB schema (see https://huggingface.co/bigbio). Includes all NER + entity linking annotations.",
-            # ),
+            datasets.BuilderConfig(name="NER", version=VERSION, description="Dataset for named-entity recognition."),
+            datasets.BuilderConfig(name="PANELIZATION", version=VERSION, description="Dataset to separate figure captions into panels."),
+            datasets.BuilderConfig(name="ROLES_GP", version=VERSION, description="Dataset for semantic roles of gene products."),
+            datasets.BuilderConfig(name="ROLES_SM", version=VERSION, description="Dataset for semantic roles of small molecules."),
+            datasets.BuilderConfig(name="ROLES_MULTI", version=VERSION, description="Dataset to train roles. ROLES_GP and ROLES_SM at once."),
         ]
-
+
         if self.config.name in ["NER", "default"]:
             features = datasets.Features(
                 {
                     "words": datasets.Sequence(feature=datasets.Value("string")),
                     "labels": datasets.Sequence(
-                        feature=datasets.ClassLabel(
-                            num_classes=len(self._NER_LABEL_NAMES),
-                            names=self._NER_LABEL_NAMES,
-                        )
+                        feature=datasets.ClassLabel(num_classes=len(self._NER_LABEL_NAMES),
+                                                    names=self._NER_LABEL_NAMES)
                     ),
                     # "is_category": datasets.Sequence(feature=datasets.Value("int8")),
                     "tag_mask": datasets.Sequence(feature=datasets.Value("int8")),
@@ -159,7 +109,7 @@ class SourceData(datasets.GeneratorBasedBuilder):
                     "labels": datasets.Sequence(
                         feature=datasets.ClassLabel(
                             num_classes=len(self._SEMANTIC_ROLES),
-                            names=self._SEMANTIC_ROLES,
+                            names=self._SEMANTIC_ROLES
                         )
                     ),
                     # "is_category": datasets.Sequence(feature=datasets.Value("int8")),
@@ -174,7 +124,7 @@ class SourceData(datasets.GeneratorBasedBuilder):
                     "labels": datasets.Sequence(
                         feature=datasets.ClassLabel(
                             num_classes=len(self._SEMANTIC_ROLES),
-                            names=self._SEMANTIC_ROLES,
+                            names=self._SEMANTIC_ROLES
                         )
                     ),
                     # "is_category": datasets.Sequence(feature=datasets.Value("int8")),
@@ -189,12 +139,13 @@ class SourceData(datasets.GeneratorBasedBuilder):
                     "labels": datasets.Sequence(
                         feature=datasets.ClassLabel(
                             num_classes=len(self._SEMANTIC_ROLES),
-                            names=self._SEMANTIC_ROLES,
+                            names=self._SEMANTIC_ROLES
                         )
                     ),
                     "is_category": datasets.Sequence(
                         feature=datasets.ClassLabel(
-                            num_classes=len(self._ROLES_MULTI), names=self._ROLES_MULTI
+                            num_classes=len(self._ROLES_MULTI),
+                            names=self._ROLES_MULTI
                         )
                     ),
                     "tag_mask": datasets.Sequence(feature=datasets.Value("int8")),
@@ -206,57 +157,13 @@ class SourceData(datasets.GeneratorBasedBuilder):
                 {
                     "words": datasets.Sequence(feature=datasets.Value("string")),
                     "labels": datasets.Sequence(
-                        feature=datasets.ClassLabel(
-                            num_classes=len(self._PANEL_START_NAMES),
-                            names=self._PANEL_START_NAMES,
-                        )
+                        feature=datasets.ClassLabel(num_classes=len(self._PANEL_START_NAMES),
+                                                    names=self._PANEL_START_NAMES)
                     ),
                     "tag_mask": datasets.Sequence(feature=datasets.Value("int8")),
                 }
             )
 
-        elif self.config.name == "FULL":
-            features = datasets.Features(
-                {
-                    "doi": datasets.Value("string"),
-                    "abstract": datasets.Value("string"),
-                    # "split": datasets.Value("string"),
-                    "figures": [
-                        {
-                            "fig_id": datasets.Value("string"),
-                            "label": datasets.Value("string"),
-                            "fig_graphic_url": datasets.Value("string"),
-                            "panels": [
-                                {
-                                    "panel_id": datasets.Value("string"),
-                                    "text": datasets.Value("string"),
-                                    "panel_graphic_url": datasets.Value("string"),
-                                    "entities": [
-                                        {
-                                            "annotation_id": datasets.Value("string"),
-                                            "source": datasets.Value("string"),
-                                            "category": datasets.Value("string"),
-                                            "entity_type": datasets.Value("string"),
-                                            "role": datasets.Value("string"),
-                                            "text": datasets.Value("string"),
-                                            "ext_ids": datasets.Value("string"),
-                                            "norm_text": datasets.Value("string"),
-                                            "ext_dbs": datasets.Value("string"),
-                                            "in_caption": datasets.Value("bool"),
-                                            "ext_names": datasets.Value("string"),
-                                            "ext_tax_ids": datasets.Value("string"),
-                                            "ext_tax_names": datasets.Value("string"),
-                                            "ext_urls": datasets.Value("string"),
-                                            "offsets": [datasets.Value("int64")],
-                                        }
-                                    ],
-                                }
-                            ],
-                        }
-                    ],
-                }
-            )
-
         return datasets.DatasetInfo(
             description=self._DESCRIPTION,
             features=features,
@@ -265,49 +172,38 @@ class SourceData(datasets.GeneratorBasedBuilder):
             license=self._LICENSE,
             citation=self._CITATION,
         )
-
+
     def _split_generators(self, dl_manager: datasets.DownloadManager):
         """Returns SplitGenerators.
-        Uses local files if a data_dir is specified. Otherwise downloads the files from their official url.
-        """
+        Uses local files if a data_dir is specified. Otherwise downloads the files from their official url."""
 
         try:
            config_name = self.config.name if self.config.name != "default" else "NER"
-
-            if config_name == "FULL":
-                url = os.path.join(
-                    self._URLS[config_name],
-                    # "source_data_full.zip"
-                    "source_data_json_splits_2.0.2.zip",
-                )
-                data_dir = dl_manager.download_and_extract(url)
-                data_files = [
-                    os.path.join(data_dir, filename)
-                    for filename in ["train.jsonl", "test.jsonl", "validation.jsonl"]
-                ]
-            else:
-                urls = [
-                    os.path.join(self._URLS[config_name], "train.jsonl"),
-                    os.path.join(self._URLS[config_name], "test.jsonl"),
-                    os.path.join(self._URLS[config_name], "validation.jsonl"),
-                ]
-                data_files = dl_manager.download(urls)
+            urls = [
+                self._URLS[config_name] + "train.jsonl",
+                self._URLS[config_name] + "test.jsonl",
+                self._URLS[config_name] + "validation.jsonl"
+            ]
+            data_files = dl_manager.download(urls)
         except:
             raise ValueError(f"unknown config name: {self.config.name}")
-
+
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
                 # These kwargs will be passed to _generate_examples
-                gen_kwargs={"filepath": data_files[0]},
+                gen_kwargs={
+                    "filepath": data_files[0]},
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
-                gen_kwargs={"filepath": data_files[1]},
+                gen_kwargs={
+                    "filepath": data_files[1]},
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.VALIDATION,
-                gen_kwargs={"filepath": data_files[2]},
+                gen_kwargs={
+                    "filepath": data_files[2]},
             ),
         ]
 
@@ -316,45 +212,40 @@ class SourceData(datasets.GeneratorBasedBuilder):
         It is in charge of opening the given file and yielding (key, example) tuples from the dataset
         The key is not important, it's more here for legacy reason (legacy from tfds)"""
 
-        no_panels = 0
-        no_entities = 0
-        has_panels = 0
-        has_entities = 0
-
         with open(filepath, encoding="utf-8") as f:
             # logger.info("⏳ Generating examples from = %s", filepath)
             for id_, row in enumerate(f):
-                data = json.loads(row.strip())
+                data = json.loads(row)
                 if self.config.name in ["NER", "default"]:
                     yield id_, {
                         "words": data["words"],
                         "labels": data["labels"],
                         "tag_mask": data["is_category"],
-                        "text": data["text"],
+                        "text": data["text"]
                     }
                 elif self.config.name == "ROLES_GP":
                     yield id_, {
                         "words": data["words"],
                         "labels": data["labels"],
                         "tag_mask": data["is_category"],
-                        "text": data["text"],
+                        "text": data["text"]
                     }
                 elif self.config.name == "ROLES_MULTI":
                     labels = data["labels"]
-                    tag_mask = [1 if t != 0 else 0 for t in labels]
+                    tag_mask = [1 if t!=0 else 0 for t in labels]
                     yield id_, {
                         "words": data["words"],
                         "labels": data["labels"],
                         "tag_mask": tag_mask,
                         "is_category": data["is_category"],
-                        "text": data["text"],
+                        "text": data["text"]
                     }
                 elif self.config.name == "ROLES_SM":
                     yield id_, {
                         "words": data["words"],
                         "labels": data["labels"],
                         "tag_mask": data["is_category"],
-                        "text": data["text"],
+                        "text": data["text"]
                     }
                 elif self.config.name == "PANELIZATION":
                     labels = data["labels"]
@@ -365,66 +256,4 @@
                     "tag_mask": tag_mask,
                 }
 
-                elif self.config.name == "FULL":
-                    doc_figs = data["figures"]
-                    all_figures = []
-                    for fig in doc_figs:
-                        all_panels = []
-                        figure = {
-                            "fig_id": fig["fig_id"],
-                            "label": fig["label"],
-                            "fig_graphic_url": fig["fig_graphic_url"],
-                        }
-
-                        for p in fig["panels"]:
-                            panel = {
-                                "panel_id": p["panel_id"],
-                                "text": p["text"].strip(),
-                                "panel_graphic_url": p["panel_graphic_url"],
-                                "entities": [
-                                    {
-                                        "annotation_id": t["tag_id"],
-                                        "source": t["source"],
-                                        "category": t["category"],
-                                        "entity_type": t["entity_type"],
-                                        "role": t["role"],
-                                        "text": t["text"],
-                                        "ext_ids": t["ext_ids"],
-                                        "norm_text": t["norm_text"],
-                                        "ext_dbs": t["ext_dbs"],
-                                        "in_caption": bool(t["in_caption"]),
-                                        "ext_names": t["ext_names"],
-                                        "ext_tax_ids": t["ext_tax_ids"],
-                                        "ext_tax_names": t["ext_tax_names"],
-                                        "ext_urls": t["ext_urls"],
-                                        "offsets": t["local_offsets"],
-                                    }
-                                    for t in p["tags"]
-                                ],
-                            }
-                            for e in panel["entities"]:
-                                assert type(e["offsets"]) == list
-                            if len(panel["entities"]) == 0:
-                                no_entities += 1
-                                continue
-                            else:
-                                has_entities += 1
-                            all_panels.append(panel)
-
-                        figure["panels"] = all_panels
-
-                        # Pass on all figures that aren't split into panels
-                        if len(all_panels) == 0:
-                            no_panels += 1
-                            continue
-                        else:
-                            has_panels += 1
-                        all_figures.append(figure)
-
-                    output = {
-                        "doi": data["doi"],
-                        "abstract": data["abstract"],
-                        "figures": all_figures,
-                    }
-                    yield id_, output
 
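One behavioral detail worth noting in `_generate_examples`: for `ROLES_MULTI` the tag mask is recomputed from the integer labels rather than taken from `is_category`. A small worked example of that step (values are illustrative):

```python
# Illustrative values: integer class ids, where 0 is the "O" (outside) label.
labels = [0, 3, 4, 0, 1]
tag_mask = [1 if t != 0 else 0 for t in labels]
assert tag_mask == [0, 1, 1, 1, 1]  # 1 wherever the token carries a role tag
```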
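With this change the loader exposes only the five token-classification configurations (`NER`, `PANELIZATION`, `ROLES_GP`, `ROLES_SM`, `ROLES_MULTI`); the `FULL` config is removed. A minimal usage sketch, assuming the script is consumed through `datasets.load_dataset` as usual for script-backed Hub datasets:

```python
from datasets import load_dataset

# "NER" is also the default config. Recent versions of the `datasets` library
# may additionally require trust_remote_code=True for script-backed datasets.
ds = load_dataset("EMBO/SourceData", name="NER")

train = ds["train"]
print(train.features["labels"])  # Sequence of ClassLabel with the NER tag names
example = train[0]
print(example["words"][:10])
print(example["labels"][:10])
print(example["tag_mask"][:10])
```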
compound_image_segmentation/segmented_images.zip DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c556df88e3aa04cdce6159c4751e3fcaf6648620c79f8b0d08daf3a1f90262c7
-size 14901995938
compound_image_segmentation/soda_panelization_figures.zip DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3a760898184b7d26e4a25332029a80f33f387457896531d34b326eebf6c03b68
-size 2169668963
information_extraction/assayed_entities.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:467c1527dd8c9e81d38af5998dd7a9901f5b9b5492f2bb0936cf4678c68ac867
-size 24151715
information_extraction/assayed_entities.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ff44baebda0620dc93695fea3f57bf38cfb714a954db133b5d50d476f88a63c0
-size 26060444
information_extraction/chemicals.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e6ed74da08bd0d24c9774689e13601353a4ae23ec71b89d6f812358ec2d3ea11
-size 13170975
information_extraction/chemicals.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:734bcb8e16386f7f00462c79db8f0b85e4c070d837c71aa1e06ccb3e72b654ec
-size 14247814
information_extraction/controlled_entities.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c11db3b159d40662674f9c9e38253eec04116c0ddf4c4a8197c2701b4bdba098
-size 19378411
information_extraction/controlled_entities.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d72815932fb96031f66526b042da397faa1b118513f764e4fa270f9c53340b62
-size 20885680
information_extraction/diseases.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:041230d5f10a439338212d9d082d4c1aef7e6517270c60b8cff947c1c55a1dbf
-size 1608318
information_extraction/diseases.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f0804a3001b56709e60ed979150f80bcd04ef69413521f9387b0d0508bde3fbc
-size 1744745
information_extraction/experimental_assay.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8acf6507b2cde49f5c5d8ddeedb727ae6a2f61876e08c0a38188359aa15267f3
-size 23098602
information_extraction/experimental_assay.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ed49fcc7e5d364406711071c29918dcc6ba5316d383e2981ca9633a1af2c70d4
-size 24906564
information_extraction/hypothesis_tested.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:908ba978a2fdec0f5b7754b6fbdcfb73608324e4f2b30cd426f06fe6b6adc48e
-size 19158853
information_extraction/hypothesis_tested.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f09b4302edc858c1698b44794b41c63edd62c151b3dcedd40cd460fe07b64429
-size 20591703
information_extraction/is_experiment.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7df5fb80a83953193b2106c999091e9e876cb956c6428ebdafc5106713676189
-size 27867907
information_extraction/is_experiment.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e866bb90f1f56b32a3ae706ba212751a321643e9a3fec6765d8ca0a04347ba75
-size 31185074
information_extraction/ncbi_gene_linking.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3ce76524d01b6a36365abe45729f315d1fd96524b5380eb903eb2e032a341874
-size 16616842
information_extraction/ncbi_gene_linking.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f08e84eae4935749f5a5b3191dc2fedd21e99a45f574eafae007893b645a2dc2
-size 18004635
information_extraction/where_was_tested.csv DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f6a6ac5a5e76cba4f4a0d0c2848e65243b81c9b9f5c6fe73567f7064dc7ea1f9
-size 58760
information_extraction/where_was_tested.json DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:523257469b4f337b5e6d641999f7c11c0414f61922c76681308c3a0ff19608d3
-size 61848
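
The `DELETED` entries above remove Git LFS pointer files, not inline data: each pointer records only the `oid` (a sha256 digest) and `size` of the real binary. As a hedged sketch, here is how such a pointer could be checked against a locally downloaded copy of the binary (paths are illustrative):

```python
import hashlib

def verify_lfs_pointer(pointer_path: str, blob_path: str) -> bool:
    """Compare a Git LFS pointer's oid/size fields against a local file."""
    with open(pointer_path, encoding="utf-8") as f:
        fields = dict(line.split(" ", 1) for line in f.read().splitlines() if line)
    expected_oid = fields["oid"].split(":", 1)[1]  # "sha256:<hex>" -> "<hex>"
    expected_size = int(fields["size"])

    digest = hashlib.sha256()
    size = 0
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size

# Example (illustrative paths):
# verify_lfs_pointer("pointers/segmented_images.zip", "downloads/segmented_images.zip")
```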