Commit 49c3010 (1 parent: cc6a2a8), committed by gabrielaltay

upload hubscripts/n2c2_2018_track2_hub.py to hub from bigbio repo

Files changed (1): n2c2_2018_track2.py (+448 -0)
n2c2_2018_track2.py ADDED
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
A dataset loader for the n2c2 2018 Adverse Drug Events and Medication Extraction dataset.

The dataset consists of multiple archive files, two of which are used by this script:
├── training_20180910.zip
└── gold-standard-test-data.zip

The individual data files (inside the zip archives) come in two types:

* docs (*.txt files): text of a patient record
* annotations (*.ann files): entities and relations along with offsets used as input to a NER / RE model

The files comprising this dataset must be on the user's local machine
in a single directory that is passed to `datasets.load_dataset` via
the `data_dir` kwarg. This loader script will read the archive files
directly (i.e. the user should not uncompress, untar or unzip any of
the files).

Data Access: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/

[bigbio_schema_name] = kb
"""

import os
import zipfile
from collections import defaultdict
from typing import Dict, List, Tuple

import datasets

from .bigbiohub import kb_features
from .bigbiohub import BigBioConfig
from .bigbiohub import Tasks

_LANGUAGES = ['English']
_PUBMED = False
_LOCAL = True
_CITATION = """\
@article{DBLP:journals/jamia/HenryBFSU20,
  author    = {
               Sam Henry and
               Kevin Buchan and
               Michele Filannino and
               Amber Stubbs and
               Ozlem Uzuner
              },
  title     = {2018 n2c2 shared task on adverse drug events and medication extraction
               in electronic health records},
  journal   = {J. Am. Medical Informatics Assoc.},
  volume    = {27},
  number    = {1},
  pages     = {3--12},
  year      = {2020},
  url       = {https://doi.org/10.1093/jamia/ocz166},
  doi       = {10.1093/jamia/ocz166},
  timestamp = {Sat, 30 May 2020 19:53:56 +0200},
  biburl    = {https://dblp.org/rec/journals/jamia/HenryBFSU20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""

_DATASETNAME = "n2c2_2018_track2"
_DISPLAYNAME = "n2c2 2018 ADE"

_DESCRIPTION = """\
The National NLP Clinical Challenges (n2c2), organized in 2018, continued the
legacy of i2b2 (Informatics for Integrating Biology and the Bedside), adding 2 new tracks and 2
new sets of data to the shared tasks organized since 2006. Track 2 of 2018
n2c2 shared tasks focused on the extraction of medications, with their signature
information, and adverse drug events (ADEs) from clinical narratives.
This track built on our previous medication challenge, but added a special focus on ADEs.

ADEs are injuries resulting from a medical intervention related to a drug and
can include allergic reactions, drug interactions, overdoses, and medication errors.
Collectively, ADEs are estimated to account for 30% of all hospital adverse
events; however, ADEs are preventable. Identifying potential drug interactions,
overdoses, allergies, and errors at the point of care and alerting the caregivers of
potential ADEs can improve health delivery, reduce the risk of ADEs, and improve health
outcomes.

A step in this direction requires processing narratives of clinical records
that often elaborate on the medications given to a patient, as well as the known
allergies, reactions, and adverse events of the patient. Extraction of this information
from narratives complements the structured medication information that can be
obtained from prescriptions, allowing a more thorough assessment of potential ADEs
before they happen.

The 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,
tackled these natural language processing tasks in 3 different steps,
which we refer to as tasks:
1. Concept Extraction: identification of concepts related to medications,
   their signature information, and ADEs
2. Relation Classification: linking the previously mentioned concepts to
   their medication by identifying relations on gold standard concepts
3. End-to-End: building end-to-end systems that process raw narrative text
   to discover concepts and find relations of those concepts to their medications

Shared tasks provide a venue for head-to-head comparison of systems developed
for the same task and on the same data, allowing researchers to identify the state
of the art in a particular task, learn from it, and build on it.
"""

_HOMEPAGE = "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/"

_LICENSE = 'Data User Agreement'

_SUPPORTED_TASKS = [Tasks.NAMED_ENTITY_RECOGNITION, Tasks.RELATION_EXTRACTION]

_SOURCE_VERSION = "1.0.0"  # 2018-09-10
_BIGBIO_VERSION = "1.0.0"

# Constants
DELIMITER = "||"
SOURCE = "source"
BIGBIO_KB = "bigbio_kb"
ID = "id"
ANNOTATIONS_EXT = "ann"
TEXT, TEXT_EXT = "text", "txt"
TAG, TAGS = "tag", "tags"
RELATION, RELATIONS = "relation", "relations"
START, END = "start", "end"

N2C2_2018_NER_LABELS = sorted(
    [
        "Drug",
        "Frequency",
        "Reason",
        "ADE",
        "Dosage",
        "Duration",
        "Form",
        "Route",
        "Strength",
    ]
)
N2C2_2018_RELATION_LABELS = sorted(
    [
        "Frequency-Drug",
        "Strength-Drug",
        "Route-Drug",
        "Dosage-Drug",
        "ADE-Drug",
        "Reason-Drug",
        "Duration-Drug",
        "Form-Drug",
    ]
)


def _form_id(sample_id, entity_id, split):
    return "{}-{}-{}".format(sample_id, entity_id, split)


def _build_concept_dict(tag_id, tag_start, tag_end, tag_type, tag_text):
    return {
        ID: tag_id,
        TEXT: tag_text,
        START: int(tag_start),
        END: int(tag_end),
        TAG: tag_type,
    }


def _build_relation_dict(rel_id, arg1, arg2, rel_type):
    return {
        ID: rel_id,
        "arg1_id": arg1,
        "arg2_id": arg2,
        RELATION: rel_type,
    }

def _get_annotations(annotation_file):
    """Return the concept (tag) and relation annotations from the .ann file.

    A typical line has either of the following forms:
    1. 'T41 Form 8977 8990 ophthalmology' -> '<ID> <CONCEPT> <START CHAR OFFSET> <END CHAR OFFSET> <TEXT>'
    2. 'R22 Form-Drug Arg1:T41 Arg2:T40' -> '<ID> <RELATION> <CONCEPT_1_ID> <CONCEPT_2_ID>'

    """
    tags, relations = {}, {}
    lines = annotation_file.splitlines()
    for line_num, line in enumerate(filter(lambda l: l.strip().startswith("T"), lines)):
        try:
            tag_id, tag_m, tag_text = line.strip().split("\t")
        except ValueError:
            print(line)
            continue  # skip malformed concept lines instead of reusing stale values

        if len(tag_m.split(" ")) == 3:
            tag_type, tag_start, tag_end = tag_m.split(" ")
        elif len(tag_m.split(" ")) == 4:
            tag_type, tag_start, _, tag_end = tag_m.split(" ")
        elif len(tag_m.split(" ")) == 5:
            tag_type, tag_start, _, _, tag_end = tag_m.split(" ")
        else:
            print(line)
            continue  # unsupported offset format; skip this line
        tags[tag_id] = _build_concept_dict(
            tag_id, tag_start, tag_end, tag_type, tag_text
        )

    for line_num, line in enumerate(filter(lambda l: l.strip().startswith("R"), lines)):
        rel_id, rel_m = line.strip().split("\t")
        rel_type, rel_arg1, rel_arg2 = rel_m.split(" ")
        rel_arg1 = rel_arg1.split(":")[1]
        rel_arg2 = rel_arg2.split(":")[1]
        arg1 = tags[rel_arg1][ID]
        arg2 = tags[rel_arg2][ID]
        relations[rel_id] = _build_relation_dict(rel_id, arg1, arg2, rel_type)

    return tags.values(), relations.values()


def _read_zip(file_path):
    samples = defaultdict(dict)
    with zipfile.ZipFile(file_path) as zf:
        for info in zf.infolist():

            base, filename = os.path.split(info.filename)
            _, ext = os.path.splitext(filename)
            ext = ext[1:]  # get rid of dot
            sample_id = filename.split(".")[0]

            if ext in [TEXT_EXT, ANNOTATIONS_EXT] and not filename.startswith("."):
                content = zf.read(info).decode("utf-8")
                if ext == TEXT_EXT:
                    samples[sample_id][ext] = content
                else:
                    (
                        samples[sample_id][TAGS],
                        samples[sample_id][RELATIONS],
                    ) = _get_annotations(content)

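    # Each entry of `samples` is keyed by record id and holds up to three
    # fields: "txt" (the note text) plus "tags" and "relations" parsed from
    # the matching .ann file.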
    return samples


def _get_entities_from_sample(sample_id, sample, split):
    entities = []
    entity_ids = set()
    text = sample[TEXT_EXT]
    for entity in sample[TAGS]:
        text_slice = text[entity[START] : entity[END]]
        text_slice_norm_1 = text_slice.replace("\n", "").lower()
        text_slice_norm_2 = text_slice.replace("\n", " ").lower()
        match = text_slice_norm_1 == entity[TEXT] or text_slice_norm_2 == entity[TEXT]
        if not match:
            continue

        entity_id = _form_id(sample_id, entity[ID], split)
        entity_ids.add(entity_id)
        entities.append(
            {
                ID: entity_id,
                "type": entity[TAG],
                TEXT: [text_slice],
                "offsets": [(entity[START], entity[END])],
                "normalized": [],
            }
        )

    return entities, entity_ids

def _get_relations_from_sample(sample_id, sample, split, entity_ids):
    """
    A small number of relations from the *.ann files could not be
    aligned with the text and were excluded. For this reason we
    pass in the full set of matched entity IDs and ensure that
    no relation refers to an excluded entity.
    """
    relations = []
    for relation in sample[RELATIONS]:
        arg1_id = _form_id(sample_id, relation["arg1_id"], split)
        arg2_id = _form_id(sample_id, relation["arg2_id"], split)
        if arg1_id in entity_ids and arg2_id in entity_ids:
            relations.append(
                {
                    ID: _form_id(sample_id, relation[ID], split),
                    "type": relation[RELATION],
                    "arg1_id": _form_id(sample_id, relation["arg1_id"], split),
                    "arg2_id": _form_id(sample_id, relation["arg2_id"], split),
                    "normalized": [],
                }
            )

    return relations


class N2C2AdverseDrugEventsMedicationExtractionDataset(datasets.GeneratorBasedBuilder):
    """n2c2 2018 Track 2 concept and relation task"""

    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
    BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)

    SOURCE_CONFIG_NAME = _DATASETNAME + "_" + SOURCE
    BIGBIO_CONFIG_NAME = _DATASETNAME + "_" + BIGBIO_KB
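    # These resolve to "n2c2_2018_track2_source" and "n2c2_2018_track2_bigbio_kb";
    # one of them is passed as the `name` argument to `datasets.load_dataset`.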

    BUILDER_CONFIGS = [
        BigBioConfig(
            name=SOURCE_CONFIG_NAME,
            version=SOURCE_VERSION,
            description=_DATASETNAME + " source schema",
            schema=SOURCE,
            subset_id=_DATASETNAME,
        ),
        BigBioConfig(
            name=BIGBIO_CONFIG_NAME,
            version=BIGBIO_VERSION,
            description=_DATASETNAME + " BigBio schema",
            schema=BIGBIO_KB,
            subset_id=_DATASETNAME,
        ),
    ]

    DEFAULT_CONFIG_NAME = SOURCE_CONFIG_NAME

    def _info(self) -> datasets.DatasetInfo:

        if self.config.schema == SOURCE:
            features = datasets.Features(
                {
                    "doc_id": datasets.Value("string"),
                    TEXT: datasets.Value("string"),
                    TAGS: [
                        {
                            ID: datasets.Value("string"),
                            TEXT: datasets.Value("string"),
                            START: datasets.Value("int64"),
                            END: datasets.Value("int64"),
                            TAG: datasets.ClassLabel(names=N2C2_2018_NER_LABELS),
                        }
                    ],
                    RELATIONS: [
                        {
                            ID: datasets.Value("string"),
                            "arg1_id": datasets.Value("string"),
                            "arg2_id": datasets.Value("string"),
                            RELATION: datasets.ClassLabel(
                                names=N2C2_2018_RELATION_LABELS
                            ),
                        }
                    ],
                }
            )

        elif self.config.schema == BIGBIO_KB:
            features = kb_features

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=str(_LICENSE),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""
        if self.config.data_dir is None or self.config.name is None:
            raise ValueError(
                "This is a local dataset. Please pass the data_dir and name kwarg to load_dataset."
            )
        else:
            data_dir = self.config.data_dir

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "file_path": os.path.join(data_dir, "training_20180910.zip"),
                    "split": datasets.Split.TRAIN,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "file_path": os.path.join(data_dir, "gold-standard-test-data.zip"),
                    "split": datasets.Split.TEST,
                },
            ),
        ]

    @staticmethod
    def _get_source_sample(sample_id, sample):
        return {
            "doc_id": sample_id,
            TEXT: sample.get(TEXT_EXT, ""),
            TAGS: sample.get(TAGS, []),
            RELATIONS: sample.get(RELATIONS, []),
        }

    @staticmethod
    def _get_bigbio_sample(sample_id, sample, split) -> dict:

        passage_text = sample.get("txt", "")
        entities, entity_ids = _get_entities_from_sample(sample_id, sample, split)
        relations = _get_relations_from_sample(sample_id, sample, split, entity_ids)
        return {
            "id": sample_id,
            "document_id": sample_id,
            "passages": [
                {
                    "id": f"{sample_id}-passage-0",
                    "type": "discharge summary",
                    "text": [passage_text],
                    "offsets": [(0, len(passage_text))],
                }
            ],
            "entities": entities,
            "relations": relations,
            "events": [],
            "coreferences": [],
        }

    def _generate_examples(self, file_path, split: str) -> Tuple[int, Dict]:
        """Yields examples as (key, example) tuples."""
        samples = _read_zip(file_path)

        _id = 0
        for sample_id, sample in samples.items():

            if (
                self.config.name
                == N2C2AdverseDrugEventsMedicationExtractionDataset.SOURCE_CONFIG_NAME
            ):
                yield _id, self._get_source_sample(sample_id, sample)
            elif (
                self.config.name
                == N2C2AdverseDrugEventsMedicationExtractionDataset.BIGBIO_CONFIG_NAME
            ):
                yield _id, self._get_bigbio_sample(sample_id, sample, split)

            _id += 1
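As a quick sanity check after loading (an illustrative sketch, not part of the committed file; the `data_dir` path is a placeholder), the BigBio config yields one record per discharge summary with its matched entities and relations:

    from datasets import load_dataset

    dset = load_dataset(
        "n2c2_2018_track2.py",
        name="n2c2_2018_track2_bigbio_kb",
        data_dir="/path/to/n2c2_2018_track2_data",
    )
    example = dset["train"][0]
    print(example["document_id"], len(example["entities"]), len(example["relations"]))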