Dr. Jorge Abreu Vicente committed
Commit f8fe24e
Parent(s): 84e6a0b

BioASQ added to BLURB

Files changed (1):
  1. README.md +26 -8
README.md CHANGED
@@ -202,11 +202,12 @@ We introduce PubMedQA, a novel biomedical question answering (QA) dataset collec
  - **Point of Contact:**

  #### BioASQ
- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
+
+ Task 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous years) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries).
+
+ - **Homepage:** http://bioasq.org/
+ - **Repository:** http://participants-area.bioasq.org/datasets/
+ - **Paper:** [Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783?login=false)

  ### Supported Tasks and Leaderboards

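The Task 7b paragraph added in this hunk describes each training record in prose: a question plus gold-standard concepts, articles, snippets, RDF triples, and exact and 'ideal' answers. Below is a minimal sketch of reading one such record, assuming the field names of the publicly distributed BioASQ training JSON ("questions", "body", "type", "documents", "snippets", "exact_answer", "ideal_answer"); the README itself does not spell out the on-disk schema.

```python
import json

# Sketch only: the schema below follows the public BioASQ training JSON
# and is an assumption; this README does not define the on-disk format.
with open("BioASQ-training7b.json") as f:  # hypothetical filename
    data = json.load(f)

for q in data["questions"][:3]:
    # Every training question ships with its gold-standard annotations.
    print(q["type"], "-", q["body"])
    print("  relevant articles:", len(q.get("documents", [])))
    print("  relevant snippets:", len(q.get("snippets", [])))
    # 'ideal_answer' holds paragraph-sized reference summaries;
    # 'exact_answer' is present for yes/no, factoid and list questions.
    ideal = q.get("ideal_answer", [""])
    print("  ideal answer:", ideal[0][:80])
```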
 
@@ -221,9 +222,9 @@ We introduce PubMedQA, a novel biomedical question answering (QA) dataset collec
  | ChemProt | Relation Extraction | 18035 | 11268 | 15745 | Micro F1 | No |
  | DDI | Relation Extraction | 25296 | 2496 | 5716 | Micro F1 | No |
  | GAD | Relation Extraction | 4261 | 535 | 534 | Micro F1 | No |
- | BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | No |
+ | BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | Yes |
  | HoC | Document Classification | 1295 | 186 | 371 | Average Micro F1 | No |
- | PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | No |
+ | PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | Yes |
  | BioASQ | Question Answering | 670 | 75 | 140 | Accuracy | No |

  Datasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB.
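Given the split sizes the table above records (e.g. 670/75/140 for BioASQ), a quick sanity check after loading is to compare split lengths against the table. Here is a sketch using the `datasets` library; the Hub repository path, configuration name, and split names are placeholders rather than values confirmed by this commit.

```python
from datasets import load_dataset

# Placeholder Hub path and config name; substitute this dataset's
# actual repository id and the BioASQ configuration it exposes.
blurb = load_dataset("some-org/blurb", name="bioasq")

# Expected sizes from the README table: Train 670, Dev 75, Test 140.
# Split names ("train"/"validation"/"test") are assumed conventions.
expected = {"train": 670, "validation": 75, "test": 140}
for split, n in expected.items():
    print(f"{split}: {len(blurb[split])} rows (table says {n})")
```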
@@ -458,7 +459,24 @@ All the datasets have been obtained and annotated by experts in the biomedical do
  pages={2567--2577},
  year={2019}
  }
- """
+ """,
+ "BioASQ":"""@article{10.1093/bioinformatics/btv585,
+ author = {Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and Högberg, Johan and Stenius, Ulla and Korhonen, Anna},
+ title = "{Automatic semantic classification of scientific literature according to the hallmarks of cancer}",
+ journal = {Bioinformatics},
+ volume = {32},
+ number = {3},
+ pages = {432-440},
+ year = {2015},
+ month = {10},
+ abstract = "{Motivation: The hallmarks of cancer have become highly influential in cancer research. They reduce the complexity of cancer into 10 principles (e.g. resisting cell death and sustaining proliferative signaling) that explain the biological capabilities acquired during the development of human tumors. Since new research depends crucially on existing knowledge, technology for semantic classification of scientific literature according to the hallmarks of cancer could greatly support literature review, knowledge discovery and applications in cancer research.Results: We present the first step toward the development of such technology. We introduce a corpus of 1499 PubMed abstracts annotated according to the scientific evidence they provide for the 10 currently known hallmarks of cancer. We use this corpus to train a system that classifies PubMed literature according to the hallmarks. The system uses supervised machine learning and rich features largely based on biomedical text mining. We report good performance in both intrinsic and extrinsic evaluations, demonstrating both the accuracy of the methodology and its potential in supporting practical cancer research. We discuss how this approach could be developed and applied further in the future.Availability and implementation: The corpus of hallmark-annotated PubMed abstracts and the software for classification are available at: http://www.cl.cam.ac.uk/∼sb895/HoC.html .Contact:[email protected]}",
+ issn = {1367-4803},
+ doi = {10.1093/bioinformatics/btv585},
+ url = {https://doi.org/10.1093/bioinformatics/btv585},
+ eprint = {https://academic.oup.com/bioinformatics/article-pdf/32/3/432/19568147/btv585.pdf},
+ }
+
+ """

  }
  ```
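The trailing `"""` and closing `}` patched in this hunk suggest a Python dictionary in the loading script that maps each configuration name to a BibTeX string, to which the commit appends a `"BioASQ"` key. A hedged sketch of that pattern follows; the dict and function names are illustrative, not taken from the repo.

```python
# Illustrative reconstruction of the structure the diff is editing: a
# module-level dict of BibTeX strings keyed by configuration name.
_CITATIONS = {
    "PubMedQA": """@inproceedings{jin2019pubmedqa, ...}""",        # elided
    "BioASQ": """@article{10.1093/bioinformatics/btv585, ...}""",  # elided
}

def citation_for(config: str) -> str:
    # Fall back to an empty string for configs without an entry yet.
    return _CITATIONS.get(config, "")

print(citation_for("BioASQ").splitlines()[0])
```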
 