Datasets

Modalities: Text
Languages: English
Size: < 1K
Libraries: Datasets
License: cc-by-nc-3.0 (Creative Commons Attribution Non-Commercial 3.0 Unported)
gabrielaltay committed
Commit
90d8750
1 Parent(s): eefbc1f

upload hubscripts/mirna_hub.py to hub from bigbio repo

Files changed (1)
  1. mirna.py +383 -0
mirna.py ADDED
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import xml.etree.ElementTree as ET
from typing import Dict, Iterator, List, Tuple

import datasets

from .bigbiohub import kb_features
from .bigbiohub import BigBioConfig
from .bigbiohub import Tasks

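# Note: bigbiohub (imported above via a relative import) is a helper module that
# ships next to this script in the dataset repository; it supplies the shared
# BigBio KB feature schema (kb_features), the BigBioConfig dataclass, and the
# Tasks enum used below.
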
_LANGUAGES = ['English']
_PUBMED = True
_LOCAL = False
_CITATION = """\
@Article{Bagewadi2014,
author={Bagewadi, Shweta
and Bobi{\'{c}}, Tamara
and Hofmann-Apitius, Martin
and Fluck, Juliane
and Klinger, Roman},
title={Detecting miRNA Mentions and Relations in Biomedical Literature},
journal={F1000Research},
year={2014},
month={Aug},
day={28},
publisher={F1000Research},
volume={3},
pages={205-205},
keywords={MicroRNAs; corpus; prediction algorithms},
abstract={
INTRODUCTION: MicroRNAs (miRNAs) have demonstrated their potential as post-transcriptional
gene expression regulators, participating in a wide spectrum of regulatory events such as
apoptosis, differentiation, and stress response. Apart from the role of miRNAs in normal
physiology, their dysregulation is implicated in a vast array of diseases. Dissection of
miRNA-related associations are valuable for contemplating their mechanism in diseases,
leading to the discovery of novel miRNAs for disease prognosis, diagnosis, and therapy.
MOTIVATION: Apart from databases and prediction tools, miRNA-related information is largely
available as unstructured text. Manual retrieval of these associations can be labor-intensive
due to steadily growing number of publications. Additionally, most of the published miRNA
entity recognition methods are keyword based, further subjected to manual inspection for
retrieval of relations. Despite the fact that several databases host miRNA-associations
derived from text, lower sensitivity and lack of published details for miRNA entity
recognition and associated relations identification has motivated the need for developing
comprehensive methods that are freely available for the scientific community. Additionally,
the lack of a standard corpus for miRNA-relations has caused difficulty in evaluating the
available systems. We propose methods to automatically extract mentions of miRNAs, species,
genes/proteins, disease, and relations from scientific literature. Our generated corpora,
along with dictionaries, and miRNA regular expression are freely available for academic
purposes. To our knowledge, these resources are the most comprehensive developed so far.
RESULTS: The identification of specific miRNA mentions reaches a recall of 0.94 and
precision of 0.93. Extraction of miRNA-disease and miRNA-gene relations lead to an
F1 score of up to 0.76. A comparison of the information extracted by our approach to
the databases miR2Disease and miRSel for the extraction of Alzheimer's disease
related relations shows the capability of our proposed methods in identifying correct
relations with improved sensitivity. The published resources and described methods can
help the researchers for maximal retrieval of miRNA-relations and generation of
miRNA-regulatory networks. AVAILABILITY: The training and test corpora, annotation
guidelines, developed dictionaries, and supplementary files are available at
http://www.scai.fraunhofer.de/mirna-corpora.html.
},
note={26535109[pmid]},
note={PMC4602280[pmcid]},
issn={2046-1402},
url={https://pubmed.ncbi.nlm.nih.gov/26535109},
language={eng}
}
"""

_DATASETNAME = "mirna"
_DISPLAYNAME = "miRNA"

_DESCRIPTION = """\
The corpus consists of 301 Medline citations. The documents were screened for
mentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually
annotated. The corpus comprises two separate files, a train and a test set, coming
from 201 and 100 documents respectively.
"""

_HOMEPAGE = "https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html"

_LICENSE = 'Creative Commons Attribution Non Commercial 3.0 Unported'

_BASE = "https://www.scai.fraunhofer.de/content/dam/scai/de/downloads/bioinformatik/miRNA/miRNA-"

_URLs = {
    "source": {
        "train": _BASE + "Train-Corpus.xml",
        "test": _BASE + "Test-Corpus.xml",
    },
    "bigbio_kb": {
        "train": _BASE + "Train-Corpus.xml",
        "test": _BASE + "Test-Corpus.xml",
    },
}

_SUPPORTED_TASKS = [Tasks.NAMED_ENTITY_RECOGNITION, Tasks.NAMED_ENTITY_DISAMBIGUATION]
_SOURCE_VERSION = "1.0.0"
_BIGBIO_VERSION = "1.0.0"


class miRNADataset(datasets.GeneratorBasedBuilder):
    """Loader for the miRNA corpus: gene, disease and miRNA annotations over 301 Medline abstracts."""

    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
    BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)

    BUILDER_CONFIGS = [
        BigBioConfig(
            name="mirna_source",
            version=SOURCE_VERSION,
            description="mirna source schema",
            schema="source",
            subset_id="mirna",
        ),
        BigBioConfig(
            name="mirna_bigbio_kb",
            version=BIGBIO_VERSION,
            description="mirna BigBio schema",
            schema="bigbio_kb",
            subset_id="mirna",
        ),
    ]

    DEFAULT_CONFIG_NAME = "mirna_source"

    def _info(self):

        if self.config.schema == "source":

            features = datasets.Features(
                {
                    "passages": [
                        {
                            "document_id": datasets.Value("string"),
                            "type": datasets.Value("string"),
                            "text": datasets.Value("string"),
                            "offset": datasets.Value("int32"),
                            "entities": [
                                {
                                    "id": datasets.Value("string"),
                                    "offsets": [[datasets.Value("int32")]],
                                    "text": [datasets.Value("string")],
                                    "type": datasets.Value("string"),
                                    "normalized": [
                                        {
                                            "db_name": datasets.Value("string"),
                                            "db_id": datasets.Value("string"),
                                        }
                                    ],
                                }
                            ],
                        }
                    ]
                }
            )

        elif self.config.schema == "bigbio_kb":
            features = kb_features

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=str(_LICENSE),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""

        my_urls = _URLs[self.config.schema]

        path_xml_train = dl_manager.download(my_urls["train"])
        path_xml_test = dl_manager.download(my_urls["test"])

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": path_xml_train,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": path_xml_test,
                    "split": "test",
                },
            ),
        ]

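    # The corpus XML nests <document> -> <sentence> elements; each sentence
    # carries its text and id as attributes and contains <entity> children
    # (with a type, the surface text, and a "start-end" charOffset relative to
    # the sentence) plus <pair> children linking two entity ids (e1, e2) with a
    # relation type. Sentence and entity offsets are re-based onto document-level
    # coordinates by accumulating sentence lengths (+1 for a joining space).
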
    def _get_passages_and_entities(
        self, d
    ) -> Tuple[List[Dict], List[List[Dict]], List[List[Dict]]]:

        sentences: List[Dict] = []
        entities: List[List[Dict]] = []
        relations: List[List[Dict]] = []

        po_start = 0

        # Get sentences of the document
        for s in d:

            # annotation used only for document indexing
            if s.attrib["text"] is None or len(s.attrib["text"]) <= 0:
                continue

            # annotation used only for document indexing
            if len(s) <= 0:
                continue

            po_end = po_start + len(s.attrib["text"])

            start = po_start

            dp = {
                "text": s.attrib["text"],
                "type": "title" if ".s0" in s.attrib["id"] else "abstract",
                "offsets": [(po_start, po_end)],
                "offset": 0,  # original offset
            }

            po_start = po_end + 1

            sentences.append(dp)

            pe = []  # entities of this sentence
            re = []  # relations of this sentence

            # For each annotation in the sentence
            for a in s:

                # If it corresponds to an entity
                if a.tag == "entity":

                    text = a.attrib.get("text")

                    # skip entities with missing or empty surface text
                    if text is None or len(text) <= 0:
                        continue

                    # no in-text annotation: only for document indexing
                    if a.attrib["type"] in ["MeSH_Indexing_Chemical", "OTHER"]:
                        continue

                    startOffset, endOffset = a.attrib["charOffset"].split("-")
                    startOffset, endOffset = int(startOffset), int(endOffset)

                    pe.append(
                        {
                            "id": a.attrib["id"],
                            "type": a.attrib["type"],
                            "text": (text,),
                            "offsets": [(start + startOffset, start + endOffset + 1)],
                            "normalized": [
                                {"db_name": "miRNA-corpus", "db_id": a.attrib["id"]}
                            ],
                        }
                    )

                # If it corresponds to a relation pair
                elif a.tag == "pair":

                    re.append(
                        {
                            "id": a.attrib["id"],
                            "type": a.attrib["type"],
                            "arg1_id": a.attrib["e1"],
                            "arg2_id": a.attrib["e2"],
                            "normalized": [],
                        }
                    )

            entities.append(pe)
            relations.append(re)

        return sentences, entities, relations

    def _generate_examples(
        self,
        filepath: str,
        split: str,
    ) -> Iterator[Tuple[int, Dict]]:
        """Yields examples as (key, example) tuples."""

        with open(str(filepath), "r") as f:
            reader = ET.fromstring(f.read())

        if self.config.schema == "source":

            for uid, doc in enumerate(reader):

                (
                    sentences,
                    sentences_entities,
                    relations,
                ) = self._get_passages_and_entities(doc)

                if (
                    len(sentences) < 1
                    or len(sentences_entities) < 1
                    or len(sentences_entities) != len(sentences)
                ):
                    continue

                # relations are not part of the source schema
                for p, pe in zip(sentences, sentences_entities):

                    p.pop("offsets")  # source schema keeps only the passage start offset

                    p["document_id"] = doc.attrib["id"]
                    p["entities"] = pe  # entities are stored per passage

                yield uid, {"passages": sentences}

        elif self.config.schema == "bigbio_kb":

            uid = 0

            for idx, doc in enumerate(reader):

                (
                    sentences,
                    sentences_entities,
                    relations,
                ) = self._get_passages_and_entities(doc)

                if (
                    len(sentences) < 1
                    or len(sentences_entities) < 1
                    or len(sentences_entities) != len(sentences)
                ):
                    continue

                # global id
                uid += 1

                # unpack per-sentence entities
                entities = [e for pe in sentences_entities for e in pe]

                for p in sentences:
                    p.pop("offset")  # drop original offset
                    p["text"] = (p["text"],)  # text in a passage is a Sequence
                    p["id"] = uid
                    uid += 1

                for e in entities:
                    e["id"] = uid
                    uid += 1

                # unpack per-sentence relations
                relations = [r for re in relations for r in re]

                for r in relations:
                    r["id"] = uid
                    uid += 1

                yield idx, {
                    "id": uid,
                    "document_id": doc.attrib["id"],
                    "passages": sentences,
                    "entities": entities,
                    "events": [],
                    "coreferences": [],
                    "relations": relations,
                }
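
For reference, a minimal usage sketch. It assumes the script is published as the
bigbio/mirna dataset repository on the Hugging Face Hub; the repo id, and the
trust_remote_code flag that newer datasets releases require for script-based
datasets, are assumptions here, not part of the commit:

import datasets

# Load the source schema (per-passage entities, original offsets).
source = datasets.load_dataset(
    "bigbio/mirna", name="mirna_source", trust_remote_code=True
)

# Load the unified BigBio KB schema (document-level passages/entities/relations).
kb = datasets.load_dataset(
    "bigbio/mirna", name="mirna_bigbio_kb", trust_remote_code=True
)

print(source["train"][0]["passages"][0]["text"])
print(kb["test"][0]["entities"][0])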