sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|
376f8f130939ea4c01e718c71e2cf8f88577e5ef | # GEM Submission
Submission name: SeqPlan - RotoWire
| GEM-submissions/ratishsp__seqplan__1646397829 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-04T12:43:49+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "SeqPlan - RotoWire", "tags": ["evaluation", "benchmark"]} | 2022-03-14T09:21:16+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: SeqPlan - RotoWire
| [
"# GEM Submission\n\nSubmission name: SeqPlan - RotoWire"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: SeqPlan - RotoWire"
] |
4bbf7c8537c8d75ea9b57ec23b4e33505d365cce |
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Prompts/sentence selection](#prompts/sentence-selection)
- [Recording](#recording)
- [Evaluation](#evaluation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://alvenir.ai
- **Repository:** https://github.com/danspeech/alvenir-asr-da-eval/
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training, but the amount of data is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours, spread across 50 speakers aged 20-60. The data was collected by a third-party vendor through their own software and recruited speakers. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of the path to the audio file, called `path`, and its `sentence`. Additional fields, such as age and gender, will eventually be added.
```
{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}
```
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence the user was prompted to speak
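The snippet below is a minimal sketch of how these fields can be accessed with the `datasets` library (it assumes the `Alvenir/alvenir_asr_da_eval` repository with its single test split, and an audio backend such as `soundfile` installed):
```py
from datasets import load_dataset

# Load the evaluation data (this dataset only ships a test split).
ds = load_dataset("Alvenir/alvenir_asr_da_eval", split="test")

# Query the sample index first, then access the audio column so that only
# this one file is decoded and resampled.
sample = ds[0]
audio = sample["audio"]

print(audio["sampling_rate"])  # 16000
print(len(audio["array"]))     # number of samples in the decoded waveform
print(sample["sentence"])      # reference text the speaker was prompted with
```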
### Data Splits
Since the idea behind the dataset is for it to be used as a test/eval ASR dataset for Danish, there is only a test split.
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of Open Subtitles (OSS) (need reference) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal amount of unique sentences from each topic. All sentences were manually inspected.
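As an illustration of this kind of selection procedure (not the exact pipeline used by Alvenir), a topic-model-based sampler could look roughly like the sketch below; the candidate sentences, tokenization and per-topic sample count are placeholders:
```py
import random
from collections import defaultdict

from gensim import corpora, models

# Placeholder candidate pool; in practice this would be Danish Wikipedia sentences.
wiki_sentences = ["en sætning om sport", "en artikel om dansk historie", "noget om musik i danmark"]

tokenized = [s.lower().split() for s in wiki_sentences]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

# Train a 30-topic LDA model on the candidate sentences.
lda = models.LdaModel(bow_corpus, num_topics=30, id2word=dictionary, random_state=0)

# Assign every sentence to its most probable topic.
by_topic = defaultdict(list)
for sentence, bow in zip(wiki_sentences, bow_corpus):
    topics = lda.get_document_topics(bow, minimum_probability=0.0)
    best_topic = max(topics, key=lambda t: t[1])[0]
    by_topic[best_topic].append(sentence)

# Sample the same number of unique sentences from each topic.
per_topic = 1
prompts = []
for sentences in by_topic.values():
    unique = sorted(set(sentences))
    prompts.extend(random.sample(unique, k=min(per_topic, len(unique))))
```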
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by a third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of recordings from people who have given their voice for ASR purposes. You agree not to attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
| Alvenir/alvenir_asr_da_eval | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-04T13:14:47+00:00 | {"license": "cc-by-4.0"} | 2022-06-16T08:13:33+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- Dataset Description
- Dataset Summary
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Prompts/sentence selection
- Recording
- Evaluation
- Personal and Sensitive Information
- Licensing Information
## Dataset Description
- Homepage: URL
- Repository: URL
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours spread across 50 speakers with age in the interval 20-60 years old. The data was collected by a third party vendor through their software and people. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of a path to the audio file, called path and its sentence. Additional fields will eventually be added such as age and gender.
'
{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}
'
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
sentence: The sentence the user was prompted to speak
### Data Splits
Since the idea behind the dataset is for it to be used as a test/eval ASR dataset for Danish, there is only a test split.
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of Open Subtitles (OSS) (need reference) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal amount of unique sentences from each topic. All sentences were manually inspected.
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by a third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
cc-by-4.0
| [
"# Dataset Card alvenir_asr_da_eval",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Prompts/sentence selection\r\n - Recording\r\n - Evaluation\r\n - Personal and Sensitive Information\r\n - Licensing Information",
"## Dataset Description\r\n\r\n- Homepage: URL\r\n- Repository: URL",
"### Dataset Summary\r\n\r\nThis dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.\r\n\r\nThe dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours spread across 50 speakers with age in the interval 20-60 years old. The data was collected by a third party vendor through their software and people. All recordings have been validated.",
"## Dataset Structure",
"### Data Instances\r\n\r\nA data point consists of a path to the audio file, called path and its sentence. Additional fields will eventually be added such as age and gender.\r\n\r\n'\r\n{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}\r\n'",
"### Data Fields\r\n\r\naudio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\r\n\r\nsentence: The sentence the user was prompted to speak",
"### Data Splits\r\nSince the idea behind the dataset is for it to be used as a test/eval ASR dataset for Danish, there is only test split.",
"## Dataset Creation",
"### Prompts/sentence selection\r\n\r\nThe sentences used for prompts were gathered from the danish part of open subtitles (OSS) (need reference) and wikipedia (WIKI). The OSS prompts sampled randomly across the dataset making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on wikipedia and than randomly sampling an equal amount of unique sentences from each topic. All sentences were manually inspected.",
"### Recording \r\n\r\n50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.",
"### Evaluation\r\n\r\nAll recordings were evaluated by third party to confirm alignment between audio and text.",
"### Personal and Sensitive Information\r\n\r\nThe dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.",
"### Licensing Information\r\n\r\ncc-by-4.0"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Dataset Card alvenir_asr_da_eval",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Prompts/sentence selection\r\n - Recording\r\n - Evaluation\r\n - Personal and Sensitive Information\r\n - Licensing Information",
"## Dataset Description\r\n\r\n- Homepage: URL\r\n- Repository: URL",
"### Dataset Summary\r\n\r\nThis dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.\r\n\r\nThe dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours spread across 50 speakers with age in the interval 20-60 years old. The data was collected by a third party vendor through their software and people. All recordings have been validated.",
"## Dataset Structure",
"### Data Instances\r\n\r\nA data point consists of a path to the audio file, called path and its sentence. Additional fields will eventually be added such as age and gender.\r\n\r\n'\r\n{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}\r\n'",
"### Data Fields\r\n\r\naudio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\r\n\r\nsentence: The sentence the user was prompted to speak",
"### Data Splits\r\nSince the idea behind the dataset is for it to be used as a test/eval ASR dataset for Danish, there is only test split.",
"## Dataset Creation",
"### Prompts/sentence selection\r\n\r\nThe sentences used for prompts were gathered from the danish part of open subtitles (OSS) (need reference) and wikipedia (WIKI). The OSS prompts sampled randomly across the dataset making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on wikipedia and than randomly sampling an equal amount of unique sentences from each topic. All sentences were manually inspected.",
"### Recording \r\n\r\n50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.",
"### Evaluation\r\n\r\nAll recordings were evaluated by third party to confirm alignment between audio and text.",
"### Personal and Sensitive Information\r\n\r\nThe dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.",
"### Licensing Information\r\n\r\ncc-by-4.0"
] |
3cf59334aa52a74c008a67a3de30f98dd8a28118 |
# XTREME-S
## Dataset Description
- **Fine-Tuning script:** [research-projects/xtreme-s](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s)
- **Paper:** [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752)
- **Leaderboard:** [TODO(PVP)]()
- **FLEURS amount of disk used:** 350 GB
- **Multilingual Librispeech amount of disk used:** 2700 GB
- **Voxpopuli amount of disk used:** 400 GB
- **Covost2 amount of disk used:** 70 GB
- **Minds14 amount of disk used:** 5 GB
- **Total amount of disk used:** ca. 3500 GB
The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.
***TLDR; XTREME-S is the first speech benchmark that is diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code.
An easy-to-use and flexible fine-tuning script is provided and actively maintained.***
XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
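Before diving into the individual tasks, every task/language configuration of this repository can be listed up front; a minimal sketch (assuming a recent version of the `datasets` library):
```py
from datasets import get_dataset_config_names

# Each task/language pair is exposed as its own configuration, e.g.
# "fleurs.af_za", "mls.pl", "voxpopuli.ro", "covost2.id.en", "minds14.fr-FR".
configs = get_dataset_config_names("google/xtreme_s")
print(len(configs))
print([name for name in configs if name.startswith("fleurs.")][:5])
```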
## Design principles
### Diversity
XTREME-S aims for task, domain and language
diversity. Tasks should be diverse and cover several domains to
provide a reliable evaluation of model generalization and
robustness to noisy naturally-occurring speech in different
environments. Languages should be diverse to ensure that
models can adapt to a wide range of linguistic and phonological
phenomena.
### Accessibility
The sub-dataset for each task can be downloaded
with a **single line of code** as shown in [Supported Tasks](#supported-tasks).
Each task is available under a permissive license that allows the use and redistribution
of the data for research purposes. Tasks have been selected based on their usage by
pre-existing multilingual pre-trained models, for simplicity.
### Reproducibility
We produce fully **open-sourced, maintained and easy-to-use** fine-tuning scripts
for each task as shown under [Fine-tuning Example](#fine-tuning-and-evaluation-example).
XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use.
In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.
## Fine-tuning and Evaluation Example
We provide a fine-tuning script under [**research-projects/xtreme-s**](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s).
The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any [Hugging Face model](https://huggingface.co/models) on XTREME-S.
The example script is actively maintained by [@anton-l](https://github.com/anton-l) and [@patrickvonplaten](https://github.com/patrickvonplaten). Feel free
to reach out via issues or pull requests on GitHub if you have any questions.
## Leaderboards
The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().
## Supported Tasks
Note that the supported tasks focus particularly on the linguistic aspects of speech,
while nonlinguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are **not** evaluated.
<p align="center">
<img src="https://github.com/patrickvonplaten/scientific_images/raw/master/xtreme_s.png" alt="Datasets used in XTREME"/>
</p>
### 1. Speech Recognition (ASR)
We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.
#### FLEURS-ASR
*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
#### Multilingual LibriSpeech (MLS)
*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge, the training data is limited to 10-hour splits.
```py
from datasets import load_dataset
mls = load_dataset("google/xtreme_s", "mls.pl") # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")
# see structure
print(mls)
# load audio sample on the fly
audio_input = mls["train"][0]["audio"] # first decoded audio sample
transcription = mls["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### VoxPopuli
*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.
**VoxPopuli requires downloading the whole 100 GB dataset, since the languages are entangled with each other - it may not be worth testing here due to the size.**
```py
from datasets import load_dataset
voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro") # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")
# see structure
print(voxpopuli)
# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"] # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### (Optionally) BABEL
*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is less easily accessible, so you will need to sign in on LDC to get access to it. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as")
```
**The above command is expected to fail with a nice error message,
explaining how to download BABEL**
The following should work:
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")
# see structure
print(babel)
# load audio sample on the fly
audio_input = babel["train"][0]["audio"] # first decoded audio sample
transcription = babel["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
### 2. Speech Translation (ST)
We include the CoVoST-2 dataset for automatic speech translation.
#### CoVoST-2
The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/mid/low-resource language pairs as explained in the [paper (TODO(PVP))].
```py
from datasets import load_dataset
covost_2 = load_dataset("google/xtreme_s", "covost2.id.en") # for Indonesian to English
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")
# see structure
print(covost_2)
# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"] # first decoded audio sample
transcription = covost_2["train"][0]["transcription"] # first transcription
translation = covost_2["train"][0]["translation"] # first translation
# use audio_input and translation to fine-tune your model for AST
```
### 3. Speech Classification
We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.
#### Language Identification - FLEURS-LangID
LangID can often be a domain classification task, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all of them.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
#### Intent classification - Minds-14
Minds-14 is an intent classification dataset built from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.
```py
from datasets import load_dataset
minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first intent class
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent_class to fine-tune your model for audio classification
```
### 4. (Optionally) Speech Retrieval
We optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the [FLEURS paper](https://arxiv.org/abs/2205.12446).
#### FLEURS-Retrieval
FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
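One common choice for such a ranking objective is an InfoNCE-style contrastive loss over fixed-size speech and text embeddings; the sketch below uses random placeholder embeddings instead of real encoder outputs:
```py
import torch
import torch.nn.functional as F

# Placeholder fixed-size embeddings; in practice these would come from a
# speech encoder (queries) and an encoder over the English "key" utterances.
speech_emb = F.normalize(torch.randn(32, 512, requires_grad=True), dim=-1)
key_emb = F.normalize(torch.randn(32, 512, requires_grad=True), dim=-1)

# Temperature-scaled cosine similarities; the i-th query should rank the
# i-th key above every other key in the batch.
logits = speech_emb @ key_emb.T / 0.07
targets = torch.arange(logits.size(0))
loss = F.cross_entropy(logits, targets)
loss.backward()
```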
## Dataset Structure
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-structure)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-structure)
Note that for MLS, XTREME-S uses `path` instead of `file` and `transcription` instead of `text`.
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-structure)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-structure)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-structure)
Note that for Covost2, XTREME-S uses `path` instead of `file` and `transcription` instead of `sentence`.
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-structure)
Please click on the link of the dataset cards to get more information about its dataset structure.
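As a quick sanity check of the renamed columns mentioned above (a sketch that downloads only the Polish MLS subset):
```py
from datasets import load_dataset

# XTREME-S exposes `path` / `transcription` instead of MLS's original
# `file` / `text` column names.
mls = load_dataset("google/xtreme_s", "mls.pl")
print(mls["train"].column_names)  # expected to include "path" and "transcription"
```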
## Dataset Creation
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-creation)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-creation)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-creation)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-creation)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-creation)
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-creation)
Please visit the corresponding dataset cards to get more information about the source data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.
### Other Known Limitations
The benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
#### XTREME-S
```
@article{conneau2022xtreme,
title={XTREME-S: Evaluating Cross-lingual Speech Representations},
author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
journal={arXiv preprint arXiv:2203.10752},
year={2022}
}
```
#### MLS
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
#### VoxPopuli
```
@article{wang2021voxpopuli,
title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
journal={arXiv preprint arXiv:2101.00390},
year={2021}
}
```
#### CoVoST 2
```
@article{DBLP:journals/corr/abs-2007-10310,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino},
title = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
journal = {CoRR},
volume = {abs/2007.10310},
year = {2020},
url = {https://arxiv.org/abs/2007.10310},
eprinttype = {arXiv},
eprint = {2007.10310},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Minds14
```
@article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@anton-l](https://github.com/anton-l), [@aconneau](https://github.com/aconneau) for adding this dataset
| google/xtreme_s | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|multilingual_librispeech",
"source_datasets:extended|covost2",
"language:afr",
"language:amh",
"language:ara",
"language:asm",
"language:ast",
"language:azj",
"language:bel",
"language:ben",
"language:bos",
"language:cat",
"language:ceb",
"language:cmn",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:spa",
"language:est",
"language:fas",
"language:ful",
"language:fin",
"language:tgl",
"language:fra",
"language:gle",
"language:glg",
"language:guj",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ind",
"language:ibo",
"language:isl",
"language:ita",
"language:jpn",
"language:jav",
"language:kat",
"language:kam",
"language:kea",
"language:kaz",
"language:khm",
"language:kan",
"language:kor",
"language:ckb",
"language:kir",
"language:ltz",
"language:lug",
"language:lin",
"language:lao",
"language:lit",
"language:luo",
"language:lav",
"language:mri",
"language:mkd",
"language:mal",
"language:mon",
"language:mar",
"language:msa",
"language:mlt",
"language:mya",
"language:nob",
"language:npi",
"language:nld",
"language:nso",
"language:nya",
"language:oci",
"language:orm",
"language:ory",
"language:pan",
"language:pol",
"language:pus",
"language:por",
"language:ron",
"language:rus",
"language:bul",
"language:snd",
"language:slk",
"language:slv",
"language:sna",
"language:som",
"language:srp",
"language:swe",
"language:swh",
"language:tam",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:ukr",
"language:umb",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yor",
"language:yue",
"language:zul",
"license:cc-by-4.0",
"arxiv:2203.10752",
"arxiv:2205.12446",
"arxiv:2007.10310",
"region:us"
] | 2022-03-04T14:10:40+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["afr", "amh", "ara", "asm", "ast", "azj", "bel", "ben", "bos", "cat", "ceb", "cmn", "ces", "cym", "dan", "deu", "ell", "eng", "spa", "est", "fas", "ful", "fin", "tgl", "fra", "gle", "glg", "guj", "hau", "heb", "hin", "hrv", "hun", "hye", "ind", "ibo", "isl", "ita", "jpn", "jav", "kat", "kam", "kea", "kaz", "khm", "kan", "kor", "ckb", "kir", "ltz", "lug", "lin", "lao", "lit", "luo", "lav", "mri", "mkd", "mal", "mon", "mar", "msa", "mlt", "mya", "nob", "npi", "nld", "nso", "nya", "oci", "orm", "ory", "pan", "pol", "pus", "por", "ron", "rus", "bul", "snd", "slk", "slv", "sna", "som", "srp", "swe", "swh", "tam", "tel", "tgk", "tha", "tur", "ukr", "umb", "urd", "uzb", "vie", "wol", "xho", "yor", "yue", "zul"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|multilingual_librispeech", "extended|covost2"], "task_categories": ["automatic-speech-recognition", "speech-processing"], "task_ids": ["speech-recognition"], "paperswithcode_id": "librispeech-1", "pretty_name": "The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval."} | 2022-07-28T11:47:02+00:00 | [
"2203.10752",
"2205.12446",
"2007.10310"
] | [
"afr",
"amh",
"ara",
"asm",
"ast",
"azj",
"bel",
"ben",
"bos",
"cat",
"ceb",
"cmn",
"ces",
"cym",
"dan",
"deu",
"ell",
"eng",
"spa",
"est",
"fas",
"ful",
"fin",
"tgl",
"fra",
"gle",
"glg",
"guj",
"hau",
"heb",
"hin",
"hrv",
"hun",
"hye",
"ind",
"ibo",
"isl",
"ita",
"jpn",
"jav",
"kat",
"kam",
"kea",
"kaz",
"khm",
"kan",
"kor",
"ckb",
"kir",
"ltz",
"lug",
"lin",
"lao",
"lit",
"luo",
"lav",
"mri",
"mkd",
"mal",
"mon",
"mar",
"msa",
"mlt",
"mya",
"nob",
"npi",
"nld",
"nso",
"nya",
"oci",
"orm",
"ory",
"pan",
"pol",
"pus",
"por",
"ron",
"rus",
"bul",
"snd",
"slk",
"slv",
"sna",
"som",
"srp",
"swe",
"swh",
"tam",
"tel",
"tgk",
"tha",
"tur",
"ukr",
"umb",
"urd",
"uzb",
"vie",
"wol",
"xho",
"yor",
"yue",
"zul"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended|multilingual_librispeech #source_datasets-extended|covost2 #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-North Azerbaijani #language-Belarusian #language-Bengali #language-Bosnian #language-Catalan #language-Cebuano #language-Mandarin Chinese #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Persian #language-Fulah #language-Finnish #language-Tagalog #language-French #language-Irish #language-Galician #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kamba (Kenya) #language-Kabuverdianu #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Central Kurdish #language-Kirghiz #language-Luxembourgish #language-Ganda #language-Lingala #language-Lao #language-Lithuanian #language-Luo (Kenya and Tanzania) #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Norwegian Bokmål #language-Nepali (individual language) #language-Dutch #language-Pedi #language-Nyanja #language-Occitan (post 1500) #language-Oromo #language-Odia #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Bulgarian #language-Sindhi #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Serbian #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkish #language-Ukrainian #language-Umbundu #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Yue Chinese #language-Zulu #license-cc-by-4.0 #arxiv-2203.10752 #arxiv-2205.12446 #arxiv-2007.10310 #region-us
|
# XTREME-S
## Dataset Description
- Fine-Tuning script: research-projects/xtreme-s
- Paper: XTREME-S: Evaluating Cross-lingual Speech Representations
- Leaderboard: [TODO(PVP)]()
- FLEURS amount of disk used: 350 GB
- Multilingual Librispeech amount of disk used: 2700 GB
- Voxpopuli amount of disk used: 400 GB
- Covost2 amount of disk used: 70 GB
- Minds14 amount of disk used: 5 GB
- Total amount of disk used: ca. 3500 GB
The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.
*TLDR; XTREME-S is the first speech benchmark that is both diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code.
An easy-to-use and flexible fine-tuning script is provided and actively maintained.*
XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:
- Western Europe: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- Eastern Europe: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- Central-Asia/Middle-East/North-Africa: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- Sub-Saharan Africa: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- South-Asia: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- South-East Asia: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- CJK languages: *Cantonese and Mandarin Chinese, Japanese, Korean*
## Design principles
### Diversity
XTREME-S aims for task, domain and language
diversity. Tasks should be diverse and cover several domains to
provide a reliable evaluation of model generalization and
robustness to noisy naturally-occurring speech in different
environments. Languages should be diverse to ensure that
models can adapt to a wide range of linguistic and phonological
phenomena.
### Accessibility
The sub-dataset for each task can be downloaded
with a single line of code as shown in Supported Tasks.
Each task is available under a permissive license that allows the use and redistribution
of the data for research purposes. Tasks have been selected based on their usage by
pre-existing multilingual pre-trained models, for simplicity.
### Reproducibility
We produce fully open-sourced, maintained and easy-to-use fine-tuning scripts
for each task as shown under Fine-tuning Example.
XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use.
In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.
## Fine-tuning and Evaluation Example
We provide a fine-tuning script under research-projects/xtreme-s.
The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any Hugging Face model on XTREME-S.
The example script is actively maintained by @anton-l and @patrickvonplaten. Feel free
to reach out via issues or pull requests on GitHub if you have any questions.
## Leaderboards
The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().
## Supported Tasks
Note that the supported tasks focus particularly on the linguistic aspects of speech,
while nonlinguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are not evaluated.
<p align="center">
<img src="URL alt="Datasets used in XTREME"/>
</p>
### 1. Speech Recognition (ASR)
We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.
#### FLEURS-ASR
*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
#### Multilingual LibriSpeech (MLS)
*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge, the training data is limited to 10-hour splits.
#### VoxPopuli
*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.
VoxPopuli requires downloading the whole 100 GB dataset, since the languages are entangled with each other - it may not be worth testing here due to the size.
#### (Optionally) BABEL
*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is less easily accessible, so you will need to sign in on LDC to get access to it. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
The above command is expected to fail with a nice error message,
explaining how to download BABEL
The following should work:
### 2. Speech Translation (ST)
We include the CoVoST-2 dataset for automatic speech translation.
#### CoVoST-2
The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/mid/low-resource language pairs as explained in the [paper (TODO(PVP))].
### 3. Speech Classification
We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.
#### Language Identification - FLEURS-LangID
LangID can often be a domain classification task, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all of them.
#### Intent classification - Minds-14
Minds-14 is an intent classification dataset built from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.
### 4. (Optionally) Speech Retrieval
We optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the FLEURS paper.
#### FLEURS-Retrieval
FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
## Dataset Structure
The XTREME-S benchmark is composed of the following datasets:
- FLEURS
- Multilingual Librispeech (MLS)
Note that for MLS, XTREME-S uses 'path' instead of 'file' and 'transcription' instead of 'text'.
- Voxpopuli
- Minds14
- Covost2
Note that for Covost2, XTREME-S uses 'path' instead of 'file' and 'transcription' instead of 'sentence'.
- BABEL
Please click on the link of the dataset cards to get more information about its dataset structure.
## Dataset Creation
The XTREME-S benchmark is composed of the following datasets:
- FLEURS
- Multilingual Librispeech (MLS)
- Voxpopuli
- Minds14
- Covost2
- BABEL
Please visit the corresponding dataset cards to get more information about the source data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.
### Other Known Limitations
The benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the Creative Commons license (CC-BY).
#### XTREME-S
#### MLS
#### VoxPopuli
#### CoVoST 2
#### Minds14
### Contributions
Thanks to @patrickvonplaten, @anton-l, @aconneau for adding this dataset
| [
"# XTREME-S",
"## Dataset Description\n\n- Fine-Tuning script: research-projects/xtreme-s\n- Paper: XTREME-S: Evaluating Cross-lingual Speech Representations\n- Leaderboard: [TODO(PVP)]()\n- FLEURS amount of disk used: 350 GB\n- Multilingual Librispeech amount of disk used: 2700 GB \n- Voxpopuli amount of disk used: 400 GB \n- Covost2 amount of disk used: 70 GB \n- Minds14 amount of disk used: 5 GB \n- Total amount of disk used: ca. 3500 GB \n\nThe Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.\n\n*TLDR; XTREME-S is the first speech benchmark that is both diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code. \nAn easy-to-use and flexible fine-tuning script is provided and actively maintained.*\n\nXTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions: \n\n- Western Europe: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh* \n- Eastern Europe: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*\n- Central-Asia/Middle-East/North-Africa: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*\n- Sub-Saharan Africa: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*\n- South-Asia: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*\n- South-East Asia: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*\n- CJK languages: *Cantonese and Mandarin Chinese, Japanese, Korean*",
"## Design principles",
"### Diversity\n\nXTREME-S aims for task, domain and language\ndiversity. Tasks should be diverse and cover several domains to\nprovide a reliable evaluation of model generalization and\nrobustness to noisy naturally-occurring speech in different\nenvironments. Languages should be diverse to ensure that\nmodels can adapt to a wide range of linguistic and phonological\nphenomena.",
"### Accessibility\n\nThe sub-dataset for each task can be downloaded \nwith a single line of code as shown in Supported Tasks.\nEach task is available under a permissive license that allows the use and redistribution \nof the data for research purposes. Tasks have been selected based on their usage by \npre-existing multilingual pre-trained models, for simplicity.",
"### Reproducibility\n\nWe produce fully open-sourced, maintained and easy-to-use fine-tuning scripts \nfor each task as shown under Fine-tuning Example.\nXTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use. \nIn general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.",
"## Fine-tuning and Evaluation Example\n\nWe provide a fine-tuning script under research-projects/xtreme-s.\nThe fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any Hugging Face model on XTREME-S.\nThe example script is actively maintained by @anton-l and @patrickvonplaten. Feel free \nto reach out via issues or pull requests on GitHub if you have any questions.",
"## Leaderboards\n\nThe leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().",
"## Supported Tasks\n\nNote that the suppoprted tasks are focused particularly on linguistic aspect of speech,\nwhile nonlinguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are not evaluated.\n\n<p align=\"center\">\n <img src=\"URL alt=\"Datasets used in XTREME\"/>\n</p>",
"### 1. Speech Recognition (ASR)\n\nWe include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.",
"#### FLEURS-ASR\n\n*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.",
"#### Multilingual LibriSpeech (MLS)\n\n*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge the training data is limited to 10-hours splits.",
"#### VoxPopuli\n\n*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.\n\nVoxPopuli has to download the whole dataset 100GB since languages \nare entangled into each other - maybe not worth testing here due to the size",
"#### (Optionally) BABEL\n\n*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark who is less easily accessible, so you will need to sign in to get access to it on LDC. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).\n\n\n\nThe above command is expected to fail with a nice error message,\nexplaining how to download BABEL\n\nThe following should work:",
"### 2. Speech Translation (ST)\n\nWe include the CoVoST-2 dataset for automatic speech translation.",
"#### CoVoST-2\n\nThe *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the \"X->En\" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multiligual fine-tuning for simplicity. Results are splitted in high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].",
"### 3. Speech Classification\n\nWe include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.",
"#### Language Identification - FLEURS-LangID\n\nLangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple, FLEURS-LangID is splitted in train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all.",
"#### Intent classification - Minds-14\n\nMinds-14 is an intent classification made from e-banking speech datasets in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.",
"### 4. (Optionally) Speech Retrieval \n\nWe optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the FLEURS paper.",
"#### FLEURS-Retrieval\n\nFLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoder for speech retrieval. The system has to retrieve the English \"key\" utterance corresponding to the speech translation of \"queries\" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.\n\n\n\nUsers can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.",
"## Dataset Structure\n\nThe XTREME-S benchmark is composed of the following datasets:\n\n- FLEURS\n- Multilingual Librispeech (MLS)\n Note that for MLS, XTREME-S uses 'path' instead of 'file' and 'transcription' instead of 'text'.\n- Voxpopuli\n- Minds14\n- Covost2\n Note that for Covost2, XTREME-S uses 'path' instead of 'file' and 'transcription' instead of 'sentence'.\n- BABEL\n\nPlease click on the link of the dataset cards to get more information about its dataset structure.",
"## Dataset Creation\n\nThe XTREME-S benchmark is composed of the following datasets:\n\n- FLEURS\n- Multilingual Librispeech (MLS)\n- Voxpopuli\n- Minds14\n- Covost2\n- BABEL\n\nPlease visit the corresponding dataset cards to get more information about the source data.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThis dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goal is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).",
"### Discussion of Biases\nMost datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.",
"### Other Known Limitations\nThe benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.",
"## Additional Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).",
"#### XTREME-S",
"#### MLS",
"#### VoxPopuli",
"#### CoVoST 2",
"#### Minds14",
"### Contributions\n\nThanks to @patrickvonplaten, @anton-l, @aconneau for adding this dataset"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended|multilingual_librispeech #source_datasets-extended|covost2 #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-North Azerbaijani #language-Belarusian #language-Bengali #language-Bosnian #language-Catalan #language-Cebuano #language-Mandarin Chinese #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Persian #language-Fulah #language-Finnish #language-Tagalog #language-French #language-Irish #language-Galician #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kamba (Kenya) #language-Kabuverdianu #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Central Kurdish #language-Kirghiz #language-Luxembourgish #language-Ganda #language-Lingala #language-Lao #language-Lithuanian #language-Luo (Kenya and Tanzania) #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Norwegian Bokmål #language-Nepali (individual language) #language-Dutch #language-Pedi #language-Nyanja #language-Occitan (post 1500) #language-Oromo #language-Odia #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Bulgarian #language-Sindhi #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Serbian #language-Swedish #language-Swahili (individual language) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkish #language-Ukrainian #language-Umbundu #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Yue Chinese #language-Zulu #license-cc-by-4.0 #arxiv-2203.10752 #arxiv-2205.12446 #arxiv-2007.10310 #region-us \n",
"# XTREME-S",
"## Dataset Description\n\n- Fine-Tuning script: research-projects/xtreme-s\n- Paper: XTREME-S: Evaluating Cross-lingual Speech Representations\n- Leaderboard: [TODO(PVP)]()\n- FLEURS amount of disk used: 350 GB\n- Multilingual Librispeech amount of disk used: 2700 GB \n- Voxpopuli amount of disk used: 400 GB \n- Covost2 amount of disk used: 70 GB \n- Minds14 amount of disk used: 5 GB \n- Total amount of disk used: ca. 3500 GB \n\nThe Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.\n\n*TLDR; XTREME-S is the first speech benchmark that is both diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code. \nAn easy-to-use and flexible fine-tuning script is provided and actively maintained.*\n\nXTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions: \n\n- Western Europe: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh* \n- Eastern Europe: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*\n- Central-Asia/Middle-East/North-Africa: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*\n- Sub-Saharan Africa: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*\n- South-Asia: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*\n- South-East Asia: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*\n- CJK languages: *Cantonese and Mandarin Chinese, Japanese, Korean*",
"## Design principles",
"### Diversity\n\nXTREME-S aims for task, domain and language\ndiversity. Tasks should be diverse and cover several domains to\nprovide a reliable evaluation of model generalization and\nrobustness to noisy naturally-occurring speech in different\nenvironments. Languages should be diverse to ensure that\nmodels can adapt to a wide range of linguistic and phonological\nphenomena.",
"### Accessibility\n\nThe sub-dataset for each task can be downloaded \nwith a single line of code as shown in Supported Tasks.\nEach task is available under a permissive license that allows the use and redistribution \nof the data for research purposes. Tasks have been selected based on their usage by \npre-existing multilingual pre-trained models, for simplicity.",
"### Reproducibility\n\nWe produce fully open-sourced, maintained and easy-to-use fine-tuning scripts \nfor each task as shown under Fine-tuning Example.\nXTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use. \nIn general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.",
"## Fine-tuning and Evaluation Example\n\nWe provide a fine-tuning script under research-projects/xtreme-s.\nThe fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any Hugging Face model on XTREME-S.\nThe example script is actively maintained by @anton-l and @patrickvonplaten. Feel free \nto reach out via issues or pull requests on GitHub if you have any questions.",
"## Leaderboards\n\nThe leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().",
"## Supported Tasks\n\nNote that the suppoprted tasks are focused particularly on linguistic aspect of speech,\nwhile nonlinguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are not evaluated.\n\n<p align=\"center\">\n <img src=\"URL alt=\"Datasets used in XTREME\"/>\n</p>",
"### 1. Speech Recognition (ASR)\n\nWe include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.",
"#### FLEURS-ASR\n\n*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.",
"#### Multilingual LibriSpeech (MLS)\n\n*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge the training data is limited to 10-hours splits.",
"#### VoxPopuli\n\n*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.\n\nVoxPopuli has to download the whole dataset 100GB since languages \nare entangled into each other - maybe not worth testing here due to the size",
"#### (Optionally) BABEL\n\n*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark who is less easily accessible, so you will need to sign in to get access to it on LDC. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).\n\n\n\nThe above command is expected to fail with a nice error message,\nexplaining how to download BABEL\n\nThe following should work:",
"### 2. Speech Translation (ST)\n\nWe include the CoVoST-2 dataset for automatic speech translation.",
"#### CoVoST-2\n\nThe *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the \"X->En\" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multiligual fine-tuning for simplicity. Results are splitted in high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].",
"### 3. Speech Classification\n\nWe include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.",
"#### Language Identification - FLEURS-LangID\n\nLangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple, FLEURS-LangID is splitted in train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all.",
"#### Intent classification - Minds-14\n\nMinds-14 is an intent classification made from e-banking speech datasets in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.",
"### 4. (Optionally) Speech Retrieval \n\nWe optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the FLEURS paper.",
"#### FLEURS-Retrieval\n\nFLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoder for speech retrieval. The system has to retrieve the English \"key\" utterance corresponding to the speech translation of \"queries\" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.\n\n\n\nUsers can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.",
"## Dataset Structure\n\nThe XTREME-S benchmark is composed of the following datasets:\n\n- FLEURS\n- Multilingual Librispeech (MLS)\n Note that for MLS, XTREME-S uses 'path' instead of 'file' and 'transcription' instead of 'text'.\n- Voxpopuli\n- Minds14\n- Covost2\n Note that for Covost2, XTREME-S uses 'path' instead of 'file' and 'transcription' instead of 'sentence'.\n- BABEL\n\nPlease click on the link of the dataset cards to get more information about its dataset structure.",
"## Dataset Creation\n\nThe XTREME-S benchmark is composed of the following datasets:\n\n- FLEURS\n- Multilingual Librispeech (MLS)\n- Voxpopuli\n- Minds14\n- Covost2\n- BABEL\n\nPlease visit the corresponding dataset cards to get more information about the source data.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThis dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goal is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).",
"### Discussion of Biases\nMost datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.",
"### Other Known Limitations\nThe benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.",
"## Additional Information\n\nAll datasets are licensed under the Creative Commons license (CC-BY).",
"#### XTREME-S",
"#### MLS",
"#### VoxPopuli",
"#### CoVoST 2",
"#### Minds14",
"### Contributions\n\nThanks to @patrickvonplaten, @anton-l, @aconneau for adding this dataset"
] |
b46e2b76a97206642c5af891b8eb9bc6dad228b7 |
# Dataset Card for ElkarHizketak
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ElkarHizketak homepage](http://ixa.si.ehu.es/node/12934)
- **Paper:** [Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque](https://aclanthology.org/2020.lrec-1.55/)
- **Point of Contact:** [Arantxa Otegi](mailto:[email protected])
### Dataset Summary
ElkarHizketak is a low-resource conversational Question Answering (QA) dataset in Basque created by Basque-speaking volunteers. The dataset contains close to 400 dialogues and more than 1,600 questions and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
### Supported Tasks and Leaderboards
- `extractive-qa`: The dataset can be used to train a model for Conversational Question Answering.
### Languages
The text in the dataset is in Basque.
## Dataset Structure
### Data Instances
An example from the train split:
```
{'dialogue_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d',
'wikipedia_page_title': 'Howard Becker',
'background': 'Howard Saul Becker (Chicago,Illinois, 1928ko apirilaren 18an) Estatu Batuetako soziologoa bat da. Bere ekarpen handienak desbiderakuntzaren soziologian, artearen soziologian eta musikaren soziologian egin ditu. "Outsiders" (1963) bere lanik garrantzitsuetako da eta bertan garatu zuen bere etiketatze-teoria. Nahiz eta elkarrekintza sinbolikoaren edo gizarte-konstruktibismoaren korronteen barruan sartu izan, berak ez du bere burua inongo paradigman kokatzen. Chicagoko Unibertsitatean graduatua, Becker Chicagoko Soziologia Eskolako bigarren belaunaldiaren barruan kokatu ohi da, Erving Goffman eta Anselm Strauss-ekin batera.',
'section_title': 'Hastapenak eta hezkuntza.',
'context': 'Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an. Oso gazte zelarik piano jotzen asi zen eta 15 urte zituenean dagoeneko tabernetan aritzen zen pianoa jotzen. Beranduago Northwestern Unibertsitateko banda batean jo zuen. Beckerren arabera, erdi-profesional gisa aritu ahal izan zen Bigarren Mundu Gerra tokatu eta musikari gehienak soldadugai zeudelako. Musikari bezala egin zuen lan horretan egin zuen lehen aldiz drogaren kulturaren ezagutza, aurrerago ikerketa-gai hartuko zuena. 1946an bere graduazpiko soziologia titulua lortu zuen Chicagoko Unibertsitatean. Ikasten ari zen bitartean, pianoa jotzen jarraitu zuen modu erdi-profesionalean. Hala ere, soziologiako masterra eta doktoretza eskuratu zituen Chicagoko Unibertsitatean. Unibertsitate horretan Chicagoko Soziologia Eskolaren jatorrizko tradizioaren barruan hezia izan zen. Chicagoko Soziologia Eskolak garrantzi berezia ematen zion datu kualitatiboen analisiari eta Chicagoko hiria hartzen zuen ikerketa eremu bezala. Beckerren hasierako lan askok eskola honen tradizioaren eragina dute, bereziko Everett C. Hughes-en eragina, bere tutore eta gidari izan zena. Askotan elkarrekintzaile sinboliko bezala izendatua izan da, nahiz eta Beckerek berak ez duen gogoko izendapen hori. Haren arabera, bere leinu akademikoa Georg Simmel, Robert E. Park eta Everett Hughes dira. Doktoretza lortu ostean, 23 urterekin, Beckerrek marihuanaren erabilpena ikertu zuen "Institut for Juvenil Reseac"h-en. Ondoren Illinoisko Unibertsitatean eta Standfor Unibertsitateko ikerketa institutu batean aritu zen bere irakasle karrera hasi aurretik. CANNOTANSWER',
'turn_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d_q#0',
'question': 'Zer da desbiderakuntzaren soziologia?',
'yesno': 2,
'answers': {'text': ['CANNOTANSWER'],
'answer_start': [1601],
'input_text': ['CANNOTANSWER']},
'orig_answer': {'text': 'CANNOTANSWER', 'answer_start': 1601}}
```
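
A minimal loading sketch with the Hugging Face `datasets` library is shown below. The `elkarhizketak` Hub identifier and the `train` split name are assumptions based on this card; adjust them if you host a local copy.

```
from datasets import load_dataset

# The Hub identifier and split name below are assumptions based on this card.
dataset = load_dataset("elkarhizketak")

print(dataset)                        # available splits and their sizes
example = dataset["train"][0]         # one dialogue turn from the training split
print(sorted(example.keys()))         # field names, as documented under Data Fields
print(example["wikipedia_page_title"])
```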
### Data Fields
The different fields are:
- `dialogue_id`: string,
- `wikipedia_page_title`: title of the wikipedia page as a string,
- `background`: string,
- `section_title`: title of the section as a string,
- `context`: context of the question as a string,
- `turn_id`: string,
- `question`: question as a string,
- `yesno`: Class label that represents if the question is a yes/no question. Possible values are "y" (0), "n" (1), "x" (2),
- `answers`: a dictionary with three fields (see the sketch after this list):
  - `text`: list of texts of the answer as a string,
  - `answer_start`: list of positions of the answers in the context as an int32,
  - `input_text`: list of strings,
- `orig_answer`: a dictionary with two fields:
  - `text`: original answer text as a string,
  - `answer_start`: original position of the answer as an int32.
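
Because this is an extractive QA task, an answer span can be recovered directly from `context` using the `answer_start` offset. The sketch below follows the field layout described above; the values are illustrative rather than taken from a real record, and unanswerable turns are marked with the literal string `CANNOTANSWER`, as in the instance shown earlier.

```
def recover_span(context: str, answer_text: str, answer_start: int) -> str:
    """Slice the answer out of the context and check it matches the stored text."""
    span = context[answer_start:answer_start + len(answer_text)]
    assert span == answer_text, (span, answer_text)
    return span

# Illustrative values following the layout above (not a real record).
context = "Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an."
answers = {"text": ["Chicagon jaio zen"], "answer_start": [19]}

if answers["text"][0] != "CANNOTANSWER":
    print(recover_span(context, answers["text"][0], answers["answer_start"][0]))
```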
### Data Splits
The data is split into a training, development and test set. The split sizes are as follows:
- train: 1,306 questions / 301 dialogues
- development: 161 questions / 38 dialogues
- test: 167 questions / 38 dialogues
## Dataset Creation
### Curation Rationale
This is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.
### Source Data
#### Initial Data Collection and Normalization
First, we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than about other categories. In order to retrieve articles, we selected the following categories in Basque Wikipedia: Biografia (Biography is the equivalent category in English Wikipedia), Biografiak (People) and Gizabanako biziak (Living people). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and thresholds were set after some pilot studies in which we checked the adequacy of the people involved in the selected articles and the length of the passages, in order to have enough but not too much information to hold a conversation.
Then, dialogues were collected during online sessions that we arranged with Basque-speaking volunteers. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
#### Who are the source language producers?
The language producers are Basque-speaking volunteers who hold a conversation using a text-based chat interface developed for this purpose.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the [HiTZ Basque Center for Language Technologies](https://www.hitz.eus/) and [Ixa NLP Group](https://www.ixa.eus/) at the University of the Basque Country (UPV/EHU).
### Licensing Information
Copyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.
This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).
To view a copy of this license, visit [https://creativecommons.org/licenses/by-sa/4.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
### Citation Information
If you are using this dataset in your work, please cite this publication:
```bibtex
@inproceedings{otegi-etal-2020-conversational,
title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque}",
author = "Otegi, Arantxa and
Agirre, Aitor and
Campos, Jon Ander and
Soroa, Aitor and
Agirre, Eneko",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.55",
pages = "436--442"
}
```
### Contributions
Thanks to [@antxa](https://github.com/antxa) for adding this dataset. | elkarhizketak | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:eu",
"license:cc-by-sa-4.0",
"dialogue-qa",
"region:us"
] | 2022-03-04T19:04:55+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["eu"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "ElkarHizketak", "tags": ["dialogue-qa"], "dataset_info": {"features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "wikipedia_page_title", "dtype": "string"}, {"name": "background", "dtype": "string"}, {"name": "section_title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "turn_ids", "sequence": "string"}, {"name": "questions", "sequence": "string"}, {"name": "yesnos", "sequence": {"class_label": {"names": {"0": "y", "1": "n", "2": "x"}}}}, {"name": "answers", "sequence": [{"name": "texts", "sequence": "string"}, {"name": "answer_starts", "sequence": "int32"}, {"name": "input_texts", "sequence": "string"}]}, {"name": "orig_answers", "struct": [{"name": "texts", "sequence": "string"}, {"name": "answer_starts", "sequence": "int32"}]}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 1024378, "num_examples": 301}, {"name": "validation", "num_bytes": 125667, "num_examples": 38}, {"name": "test", "num_bytes": 127640, "num_examples": 38}], "download_size": 1927474, "dataset_size": 1277685}} | 2024-01-18T11:18:59+00:00 | [] | [
"eu"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Basque #license-cc-by-sa-4.0 #dialogue-qa #region-us
|
# Dataset Card for ElkarHizketak
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Initial Data Collection and Normalization
- Who are the source language producers?
- Annotations
- Annotation process
- Who are the annotators?
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: ElkarHizketak homepage
- Paper: Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque
- Point of Contact: Arantxa Otegi
### Dataset Summary
ElkarHizketak is a low resource conversational Question Answering (QA) dataset in Basque created by Basque speaker volunteers. The dataset contains close to 400 dialogues and more than 1600 question and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student ask questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions selecting a span of text of the section.
### Supported Tasks and Leaderboards
- 'extractive-qa': The dataset can be used to train a model for Conversational Question Answering.
### Languages
The text in the dataset is in Basque.
## Dataset Structure
### Data Instances
An example from the train split:
### Data Fields
The different fields are:
- 'dialogue_id': string,
- 'wikipedia_page_title': title of the wikipedia page as a string,
- 'background': string,
- 'section_title': title of the section as a string,
- 'context': context of the question as a string,
- 'turn_id': string,
- 'question': question as a string,
- 'yesno': Class label that represents if the question is a yes/no question. Possible values are "y" (0), "n" (1), "x" (2),
- 'answers': a dictionary with three fields:
  - 'text': list of texts of the answer as a string,
  - 'answer_start': list of positions of the answers in the context as an int32,
  - 'input_text': list of strings,
- 'orig_answer': a dictionary with two fields:
  - 'text': original answer text as a string,
  - 'answer_start': original position of the answer as an int32.
### Data Splits
The data is split into a training, development and test set. The split sizes are as follows:
- train: 1,306 questions / 301 dialogues
- development: 161 questions / 38 dialogues
- test: 167 questions / 38 dialogues
## Dataset Creation
### Curation Rationale
This is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.
### Source Data
#### Initial Data Collection and Normalization
First we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than other categories. In order to retrieve articles we selected the following categories in Basque Wikipedia: Biografia (Biography is the equivalent category in English Wikipedia), Biografiak (People) and Gizabanako biziak (Living people). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and threshold were set after some pilot studies where we check the adequacy of the people involved in the selected articles and the length of the passages in order to have enough but not to much information to hold a conversation.
Then, dialogues were collected during online sessions that we arranged with Basque-speaking volunteers. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
#### Who are the source language producers?
The language producers are Basque-speaking volunteers who hold a conversation using a text-based chat interface developed for this purpose.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the HiTZ Basque Center for Language Technologies and Ixa NLP Group at the University of the Basque Country (UPV/EHU).
### Licensing Information
Copyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.
This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).
To view a copy of this license, visit URL
If you are using this dataset in your work, please cite this publication:
### Contributions
Thanks to @antxa for adding this dataset. | [
"# Dataset Card for ElkarHizketak",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: ElkarHizketak homepage\n- Paper: Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque\n- Point of Contact: Arantxa Otegi",
"### Dataset Summary\n\nElkarHizketak is a low resource conversational Question Answering (QA) dataset in Basque created by Basque speaker volunteers. The dataset contains close to 400 dialogues and more than 1600 question and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student ask questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions selecting a span of text of the section.",
"### Supported Tasks and Leaderboards\n\n- 'extractive-qa': The dataset can be used to train a model for Conversational Question Answering.",
"### Languages\n\nThe text in the dataset is in Basque.",
"## Dataset Structure",
"### Data Instances\n\nAn example from the train split:",
"### Data Fields\n\nThe different fields are:\n\n- 'dialogue_id': string,\n- 'wikipedia_page_title': title of the wikipedia page as a string,\n- 'background': string,\n- 'section_title': title os the section as a string,\n- 'context': context of the question as a string string,\n- 'turn_id': string,\n- 'question': question as a string,\n- 'yesno': Class label that represents if the question is a yes/no question. Possible values are \"y\" (0), \"n\" (1), \"x\" (2),\n- 'answers': a dictionary with three fields:\n - 'text': list of texts of the answer as a string,\n - 'answer_start': list of positions of the answers in the context as an int32,\n - 'input_text': list of strings,\n }\n),\n- 'orig_answer': {\n - 'text': original answer text as a string,\n - 'answer_start': original position of the answer as an int32,\n},",
"### Data Splits\n\nThe data is split into a training, development and test set. The split sizes are as follow:\n\n- train: 1,306 questions / 301 dialogues\n- development: 161 questions / 38 dialogues\n- test: 167 questions / 38 dialogues",
"## Dataset Creation",
"### Curation Rationale\n\nThis is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFirst we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than other categories. In order to retrieve articles we selected the following categories in Basque Wikipedia: Biografia (\u0019Biography\u0019 is the equivalent category in English Wikipedia), Biografiak (\u0019People\u0019) and Gizabanako biziak (\u0019Living people\u0019). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and threshold were set after some pilot studies where we check the adequacy of the people involved in the selected articles and the length of the passages in order to have enough but not to much information to hold a conversation.\n\nThen, dialogues were collected during some online sessions that we arranged with Basque speaking volunteers. The dialogues involve two crowd workers: (1) a student ask questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions selecting a span of text of the section.",
"#### Who are the source language producers?\n\nThe language producers are Basque speaking volunteers which hold a conversation using a text-based chat interface developed for those purposes.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the HiTZ Basque Center for Language Technologies and Ixa NLP Group at the University of the Basque Country (UPV/EHU).",
"### Licensing Information\n\nCopyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.\n\nThis dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).\nTo view a copy of this license, visit URL\n\n\n\nIf you are using this dataset in your work, please cite this publication:",
"### Contributions\n\nThanks to @antxa for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Basque #license-cc-by-sa-4.0 #dialogue-qa #region-us \n",
"# Dataset Card for ElkarHizketak",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: ElkarHizketak homepage\n- Paper: Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque\n- Point of Contact: Arantxa Otegi",
"### Dataset Summary\n\nElkarHizketak is a low resource conversational Question Answering (QA) dataset in Basque created by Basque speaker volunteers. The dataset contains close to 400 dialogues and more than 1600 question and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student ask questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions selecting a span of text of the section.",
"### Supported Tasks and Leaderboards\n\n- 'extractive-qa': The dataset can be used to train a model for Conversational Question Answering.",
"### Languages\n\nThe text in the dataset is in Basque.",
"## Dataset Structure",
"### Data Instances\n\nAn example from the train split:",
"### Data Fields\n\nThe different fields are:\n\n- 'dialogue_id': string,\n- 'wikipedia_page_title': title of the wikipedia page as a string,\n- 'background': string,\n- 'section_title': title os the section as a string,\n- 'context': context of the question as a string string,\n- 'turn_id': string,\n- 'question': question as a string,\n- 'yesno': Class label that represents if the question is a yes/no question. Possible values are \"y\" (0), \"n\" (1), \"x\" (2),\n- 'answers': a dictionary with three fields:\n - 'text': list of texts of the answer as a string,\n - 'answer_start': list of positions of the answers in the context as an int32,\n - 'input_text': list of strings,\n }\n),\n- 'orig_answer': {\n - 'text': original answer text as a string,\n - 'answer_start': original position of the answer as an int32,\n},",
"### Data Splits\n\nThe data is split into a training, development and test set. The split sizes are as follow:\n\n- train: 1,306 questions / 301 dialogues\n- development: 161 questions / 38 dialogues\n- test: 167 questions / 38 dialogues",
"## Dataset Creation",
"### Curation Rationale\n\nThis is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nFirst we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than other categories. In order to retrieve articles we selected the following categories in Basque Wikipedia: Biografia (\u0019Biography\u0019 is the equivalent category in English Wikipedia), Biografiak (\u0019People\u0019) and Gizabanako biziak (\u0019Living people\u0019). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and threshold were set after some pilot studies where we check the adequacy of the people involved in the selected articles and the length of the passages in order to have enough but not to much information to hold a conversation.\n\nThen, dialogues were collected during some online sessions that we arranged with Basque speaking volunteers. The dialogues involve two crowd workers: (1) a student ask questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions selecting a span of text of the section.",
"#### Who are the source language producers?\n\nThe language producers are Basque speaking volunteers which hold a conversation using a text-based chat interface developed for those purposes.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the HiTZ Basque Center for Language Technologies and Ixa NLP Group at the University of the Basque Country (UPV/EHU).",
"### Licensing Information\n\nCopyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.\n\nThis dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).\nTo view a copy of this license, visit URL\n\n\n\nIf you are using this dataset in your work, please cite this publication:",
"### Contributions\n\nThanks to @antxa for adding this dataset."
] |
fb8b329c87153970e0d65e79f8b50220cc2b5ed9 |
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.

HashSet Distant Sampled is a sample of 20,000 camel-cased hashtags from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
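
A minimal loading sketch is shown below; the `ruanchaves/hashset_distant_sampled` Hub identifier is taken from this card, while the `train` split name is an assumption.

```
from datasets import load_dataset

# The split name "train" is an assumption; inspect the loaded object if it differs.
dataset = load_dataset("ruanchaves/hashset_distant_sampled", split="train")

for record in dataset.select(range(3)):
    print(record["hashtag"], "->", record["segmentation"])
```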
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters (see the sketch after this list). Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
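
As a concrete check of the whitespace-only relationship stated above, the sketch below strips whitespace from a `segmentation` and compares it against the corresponding `hashtag`; the example pair is the one shown under Data Instances.

```
def is_consistent(hashtag: str, segmentation: str) -> bool:
    """The segmentation should reduce to the hashtag once whitespace is removed."""
    return "".join(segmentation.split()) == hashtag

# Pair taken from the Data Instances example above.
print(is_consistent("Youth4Nation", "Youth 4 Nation"))  # True
```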
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/hashset_distant_sampled | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-03-04T22:13:50+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["hi", "en"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "HashSet Distant Sampled", "tags": ["word-segmentation"]} | 2022-10-20T18:13:24+00:00 | [
"2201.06741"
] | [
"hi",
"en"
] | TAGS
#annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-unknown #source_datasets-original #language-Hindi #language-English #license-unknown #word-segmentation #arxiv-2201.06741 #region-us
|
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- Repository: prashantkodali/HashSet
- Paper: HashSet -- A Dataset For Hashtag Segmentation
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
HashSet Distant Sampled is a sample of 20,000 camel cased hashtags from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for HashSet Distant Sampled",
"## Dataset Description\n\n- Repository: prashantkodali/HashSet\n- Paper: HashSet -- A Dataset For Hashtag Segmentation",
"### Dataset Summary\n\nHashset is a new dataset consisting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the \nefficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other \nbaseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act \nas a good benchmark for hashtag segmentation tasks.\n\nHashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.\n\nHashSet Distant Sampled is a sample of 20,000 camel cased hashtags from the HashSet Distant dataset.",
"### Languages\n\nHindi and English.",
"## Dataset Structure",
"### Data Instances",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-unknown #source_datasets-original #language-Hindi #language-English #license-unknown #word-segmentation #arxiv-2201.06741 #region-us \n",
"# Dataset Card for HashSet Distant Sampled",
"## Dataset Description\n\n- Repository: prashantkodali/HashSet\n- Paper: HashSet -- A Dataset For Hashtag Segmentation",
"### Dataset Summary\n\nHashset is a new dataset consisting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the \nefficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other \nbaseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act \nas a good benchmark for hashtag segmentation tasks.\n\nHashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.\n\nHashSet Distant Sampled is a sample of 20,000 camel cased hashtags from the HashSet Distant dataset.",
"### Languages\n\nHindi and English.",
"## Dataset Structure",
"### Data Instances",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
0df29003f66c0cb4e17e908cb42e3843d4bd6b11 |
# Dataset Card for HashSet Distant
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags together with their segmentations.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
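
For reference, a minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id comes from this card; the split name `"train"` is an assumption and should be checked against the repository.

```
from datasets import load_dataset

# Minimal usage sketch (not part of the original card).
# The split name "train" is an assumption; check the repository for the
# actual split names.
dataset = load_dataset("ruanchaves/hashset_distant", split="train")

# Print a few rows to see the hashtag/segmentation pairs.
for example in dataset.select(range(3)):
    print(example["index"], example["hashtag"], "->", example["segmentation"])
```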
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
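
The whitespace-only relationship between `hashtag` and `segmentation` described above can be sanity-checked with a one-line helper. This is only an illustrative sketch (using the instance shown earlier), not part of the dataset tooling.

```
# Illustrative check of the invariant stated above: stripping the spaces from
# the gold segmentation should recover the original hashtag verbatim.
def matches(hashtag, segmentation):
    return segmentation.replace(" ", "") == hashtag

assert matches("Youth4Nation", "Youth 4 Nation")
```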
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/hashset_distant | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-03-04T22:36:15+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["hi", "en"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "HashSet Distant", "tags": ["word-segmentation"]} | 2022-10-20T18:13:21+00:00 | [
"2201.06741"
] | [
"hi",
"en"
] | TAGS
#annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-unknown #source_datasets-original #language-Hindi #language-English #license-unknown #word-segmentation #arxiv-2201.06741 #region-us
|
# Dataset Card for HashSet Distant
## Dataset Description
- Repository: prashantkodali/HashSet
- Paper: HashSet -- A Dataset For Hashtag Segmentation
### Dataset Summary
Hashset is a new dataset consisiting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for HashSet Distant",
"## Dataset Description\n\n- Repository: prashantkodali/HashSet\n- Paper: HashSet -- A Dataset For Hashtag Segmentation",
"### Dataset Summary\n\nHashset is a new dataset consisiting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the \nefficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other \nbaseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act \nas a good benchmark for hashtag segmentation tasks.\n\nHashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.",
"### Languages\n\nHindi and English.",
"## Dataset Structure",
"### Data Instances",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-unknown #source_datasets-original #language-Hindi #language-English #license-unknown #word-segmentation #arxiv-2201.06741 #region-us \n",
"# Dataset Card for HashSet Distant",
"## Dataset Description\n\n- Repository: prashantkodali/HashSet\n- Paper: HashSet -- A Dataset For Hashtag Segmentation",
"### Dataset Summary\n\nHashset is a new dataset consisiting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the \nefficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other \nbaseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act \nas a good benchmark for hashtag segmentation tasks.\n\nHashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.",
"### Languages\n\nHindi and English.",
"## Dataset Structure",
"### Data Instances",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
0a9c9a5d4ce9c5607c1939227efded92d225b28d | Edited version of cited dataset
Citation: Gupta, Raj, Vishwanath, Ajay, and Yang, Yinping. Global Reactions to COVID-19 on Twitter: A Labelled Dataset with Latent Topic, Sentiment and Emotion Attributes: Twitter COVID dataset Jan 2021. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2021-06-20. https://doi.org/10.3886/E120321V8-89860 | chiarab/covid-tweet-sentiment | [
"region:us"
] | 2022-03-04T22:56:30+00:00 | {} | 2022-03-04T23:35:22+00:00 | [] | [] | TAGS
#region-us
| Edited version of cited dataset
Citation: Gupta, Raj, Vishwanath, Ajay, and Yang, Yinping. Global Reactions to COVID-19 on Twitter: A Labelled Dataset with Latent Topic, Sentiment and Emotion Attributes: Twitter COVID dataset Jan 2021. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2021-06-20. URL | [] | [
"TAGS\n#region-us \n"
] |
d5aeed029db258e17d93b7e2bf0d1a84ff4f56e5 |
# Dataset Card for HashSet Manual
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented hashtag, named entity annotations, and flags indicating whether the hashtag contains a mix of Hindi and English tokens and/or non-English tokens.
### Languages
Mostly Hindi and English.
## Dataset Structure
### Data Instances
```
{
"index": 10,
"hashtag": "goodnewsmegan",
"segmentation": "good news megan",
"spans": {
"start": [
8
],
"end": [
13
],
"text": [
"megan"
]
},
"source": "roman",
"gold_position": null,
"mix": false,
"other": false,
"ner": true,
"annotator_id": 1,
"annotation_id": 2088,
"created_at": "2021-12-30 17:10:33.800607",
"updated_at": "2021-12-30 17:10:59.714840",
"lead_time": 3896.182,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"candidate": [
"goodnewsmegan",
"goodnewsmeg an",
"goodnews megan",
"goodnewsmega n",
"go odnewsmegan",
"good news megan",
"good newsmegan",
"g oodnewsmegan",
"goodnewsme gan",
"goodnewsm egan"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `spans`: named entity spans.
- `source`: data source.
- `gold_position`: position of the gold segmentation (i.e. the `segmentation` field) within the `rank` candidate list.
- `mix`: The hashtag has a mix of English and Hindi tokens.
- `other`: The hashtag has non-English tokens.
- `ner`: The hashtag has named entities.
- `annotator_id`: annotator ID.
- `annotation_id`: annotation ID.
- `created_at`: Creation date timestamp.
- `updated_at`: Update date timestamp.
- `lead_time`: Lead time field annotated by Kodali et al.
- `rank`: Rank of each candidate selected by a baseline word segmenter (WordBreaker).
- `candidates`: Candidates selected by a baseline word segmenter (WordBreaker).
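
The fields above can be inspected with a short `datasets` sketch such as the one below. The split name `"test"` is an assumption, and the filtering logic is only an illustration of how the boolean flags and the `rank` structure might be used.

```
from datasets import load_dataset

# Usage sketch (assumption: split name "test"; not part of the original card).
dataset = load_dataset("ruanchaves/hashset_manual", split="test")

# Keep only hashtags annotated as containing named entities.
with_entities = dataset.filter(lambda row: row["ner"])

row = with_entities[0]
print(row["hashtag"], "->", row["segmentation"])
print("entity spans:", row["spans"]["text"])
print("gold position among ranked candidates:", row["gold_position"])
print("top-ranked candidate:", row["rank"]["candidate"][0])
```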
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/hashset_manual | [
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-03-05T05:52:48+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["hi", "en"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": ["named-entity-recognition"], "pretty_name": "HashSet Manual", "tags": ["word-segmentation"]} | 2022-10-20T18:13:18+00:00 | [
"2201.06741"
] | [
"hi",
"en"
] | TAGS
#task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-unknown #source_datasets-original #language-Hindi #language-English #license-unknown #word-segmentation #arxiv-2201.06741 #region-us
|
# Dataset Card for HashSet Manual
## Dataset Description
- Repository: prashantkodali/HashSet
- Paper: HashSet -- A Dataset For Hashtag Segmentation
### Dataset Summary
Hashset is a new dataset consisting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, segmented hashtag ,named entity annotations, whether the hashtag contains mix of hindi and english tokens and/or contains non-english tokens.
### Languages
Mostly Hindi and English.
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index annotated by Kodali et al..
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
- 'spans': named entity spans.
- 'source': data source.
- 'gold_position': position of the gold segmentation on the 'segmentation' field inside the 'rank'.
- 'mix': The hashtag has a mix of English and Hindi tokens.
- 'other': The hashtag has non-English tokens.
- 'ner': The hashtag has named entities.
- 'annotator_id': annotator ID.
- 'annotation_id': annotation ID.
- 'created_at': Creation date timestamp.
- 'updated_at': Update date timestamp.
- 'lead_time': Lead time field annotated by Kodali et al..
- 'rank': Rank of each candidate selected by a baseline word segmenter ( WordBreaker ).
- 'candidates': Candidates selected by a baseline word segmenter ( WordBreaker ).
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for HashSet Manual",
"## Dataset Description\n\n- Repository: prashantkodali/HashSet\n- Paper: HashSet -- A Dataset For Hashtag Segmentation",
"### Dataset Summary\n\nHashset is a new dataset consisting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the \nefficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other \nbaseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act \nas a good benchmark for hashtag segmentation tasks.\n\nHashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, segmented hashtag ,named entity annotations, whether the hashtag contains mix of hindi and english tokens and/or contains non-english tokens.",
"### Languages\n\nMostly Hindi and English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index annotated by Kodali et al..\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'spans': named entity spans. \n- 'source': data source.\n- 'gold_position': position of the gold segmentation on the 'segmentation' field inside the 'rank'.\n- 'mix': The hashtag has a mix of English and Hindi tokens.\n- 'other': The hashtag has non-English tokens. \n- 'ner': The hashtag has named entities.\n- 'annotator_id': annotator ID.\n- 'annotation_id': annotation ID.\n- 'created_at': Creation date timestamp.\n- 'updated_at': Update date timestamp.\n- 'lead_time': Lead time field annotated by Kodali et al..\n- 'rank': Rank of each candidate selected by a baseline word segmenter ( WordBreaker ).\n- 'candidates': Candidates selected by a baseline word segmenter ( WordBreaker ).",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-unknown #source_datasets-original #language-Hindi #language-English #license-unknown #word-segmentation #arxiv-2201.06741 #region-us \n",
"# Dataset Card for HashSet Manual",
"## Dataset Description\n\n- Repository: prashantkodali/HashSet\n- Paper: HashSet -- A Dataset For Hashtag Segmentation",
"### Dataset Summary\n\nHashset is a new dataset consisting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the \nefficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other \nbaseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act \nas a good benchmark for hashtag segmentation tasks.\n\nHashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, segmented hashtag ,named entity annotations, whether the hashtag contains mix of hindi and english tokens and/or contains non-english tokens.",
"### Languages\n\nMostly Hindi and English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index annotated by Kodali et al..\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'spans': named entity spans. \n- 'source': data source.\n- 'gold_position': position of the gold segmentation on the 'segmentation' field inside the 'rank'.\n- 'mix': The hashtag has a mix of English and Hindi tokens.\n- 'other': The hashtag has non-English tokens. \n- 'ner': The hashtag has named entities.\n- 'annotator_id': annotator ID.\n- 'annotation_id': annotation ID.\n- 'created_at': Creation date timestamp.\n- 'updated_at': Update date timestamp.\n- 'lead_time': Lead time field annotated by Kodali et al..\n- 'rank': Rank of each candidate selected by a baseline word segmenter ( WordBreaker ).\n- 'candidates': Candidates selected by a baseline word segmenter ( WordBreaker ).",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
926842c8fbeadabe99a88d30d4b7ce06a42fb64c |
# Dataset Card for STAN Large
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"
by Maddela et al.:
"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their
associated tweets from the same Stanford dataset.
STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation
errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art
models is only around 10%. Most of the errors were related to named entities. For example, #lionhead,
which refers to the “Lionhead” video game company, was labeled as “lion head”.
We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations."
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "PokemonPlatinum",
"segmentation": "Pokemon Platinum",
"alternatives": {
"segmentation": [
"Pokemon platinum"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
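
Because `alternatives` lists additional accepted segmentations, a natural way to score predictions is to accept a match against either the gold segmentation or any alternative. The helper below is only a sketch of that idea, built on the instance shown above; it is not the evaluation script used in the paper.

```
# Illustrative scoring helper: a prediction counts as correct if it matches the
# gold segmentation or any accepted alternative.
def is_correct(prediction, example):
    gold = [example["segmentation"]] + example["alternatives"]["segmentation"]
    return prediction in gold

example = {
    "hashtag": "PokemonPlatinum",
    "segmentation": "Pokemon Platinum",
    "alternatives": {"segmentation": ["Pokemon platinum"]},
}
assert is_correct("Pokemon platinum", example)
```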
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{maddela-etal-2019-multi,
title = "Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
author = "Maddela, Mounica and
Xu, Wei and
Preo{\c{t}}iuc-Pietro, Daniel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1242",
doi = "10.18653/v1/P19-1242",
pages = "2538--2549",
abstract = "Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate 24.6{\%} error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6{\%} increase in average recall on the SemEval 2017 sentiment analysis dataset.",
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/stan_large | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:agpl-3.0",
"word-segmentation",
"region:us"
] | 2022-03-05T06:47:42+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["agpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "STAN Large", "tags": ["word-segmentation"]} | 2022-10-20T18:13:15+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-agpl-3.0 #word-segmentation #region-us
|
# Dataset Card for STAN Large
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Additional Information
- Citation Information
- Contributions
## Dataset Description
- Repository: mounicam/hashtag_master
- Paper: Multi-task Pairwise Neural Ranking for Hashtag Segmentation
### Dataset Summary
The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"
by Maddela et al..
"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their
associated tweets from the same Stanford dataset.
STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation
errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art
models is only around 10%. Most of the errors were related to named entities. For example, #lionhead,
which refers to the “Lionhead” video game company, was labeled as “lion head”.
We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations."
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
- 'alternatives': other segmentations that are also accepted as a gold segmentation.
Although 'segmentation' has exactly the same characters as 'hashtag' except for the spaces, the segmentations inside 'alternatives' may have characters corrected to uppercase.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for STAN Large",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: mounicam/hashtag_master\n- Paper: Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
"### Dataset Summary\n\nThe description below was taken from the paper \"Multi-task Pairwise Neural Ranking for Hashtag Segmentation\"\nby Maddela et al..\n\n\"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their \nassociated tweets from the same Stanford dataset.\n\nSTAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation \nerrors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art \nmodels is only around 10%. Most of the errors were related to named entities. For example, #lionhead, \nwhich refers to the “Lionhead” video game company, was labeled as “lion head”.\n\nWe therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations.\"",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'alternatives': other segmentations that are also accepted as a gold segmentation.\n\nAlthough 'segmentation' has exactly the same characters as 'hashtag' except for the spaces, the segmentations inside 'alternatives' may have characters corrected to uppercase.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-agpl-3.0 #word-segmentation #region-us \n",
"# Dataset Card for STAN Large",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: mounicam/hashtag_master\n- Paper: Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
"### Dataset Summary\n\nThe description below was taken from the paper \"Multi-task Pairwise Neural Ranking for Hashtag Segmentation\"\nby Maddela et al..\n\n\"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their \nassociated tweets from the same Stanford dataset.\n\nSTAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation \nerrors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art \nmodels is only around 10%. Most of the errors were related to named entities. For example, #lionhead, \nwhich refers to the “Lionhead” video game company, was labeled as “lion head”.\n\nWe therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations.\"",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'alternatives': other segmentations that are also accepted as a gold segmentation.\n\nAlthough 'segmentation' has exactly the same characters as 'hashtag' except for the spaces, the segmentations inside 'alternatives' may have characters corrected to uppercase.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
af6d38e28c5033a1f89b50b9e26950fe73550e29 |
# Dataset Card for STAN Small
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 300,
"hashtag": "microsoftfail",
"segmentation": "microsoft fail",
"alternatives": {
"segmentation": [
"Microsoft fail"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
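
As a rough usage sketch (the split name `"test"` is an assumption, not taken from this card), one can load the dataset and count how many hashtags carry alternative segmentations:

```
from datasets import load_dataset

# Sketch only; the split name "test" is an assumption.
dataset = load_dataset("ruanchaves/stan_small", split="test")

with_alternatives = dataset.filter(
    lambda row: len(row["alternatives"]["segmentation"]) > 0
)
print(len(with_alternatives), "of", len(dataset),
      "hashtags have alternative segmentations")
```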
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/stan_small | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:1501.03210",
"region:us"
] | 2022-03-05T07:02:09+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction", "conditional-text-generation"], "task_ids": [], "pretty_name": "STAN Small", "tags": ["word-segmentation"]} | 2022-10-20T18:13:12+00:00 | [
"1501.03210"
] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #arxiv-1501.03210 #region-us
|
# Dataset Card for STAN Small
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Additional Information
- Citation Information
- Contributions
## Dataset Description
- Repository: mounicam/hashtag_master
- Paper: Multi-task Pairwise Neural Ranking for Hashtag Segmentation
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al..
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
- 'alternatives': other segmentations that are also accepted as a gold segmentation.
Although 'segmentation' has exactly the same characters as 'hashtag' except for the spaces, the segmentations inside 'alternatives' may have characters corrected to uppercase.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for STAN Small",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: mounicam/hashtag_master\n- Paper: Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
"### Dataset Summary\n\nManually Annotated Stanford Sentiment Analysis Dataset by Bansal et al..",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'alternatives': other segmentations that are also accepted as a gold segmentation.\n\nAlthough 'segmentation' has exactly the same characters as 'hashtag' except for the spaces, the segmentations inside 'alternatives' may have characters corrected to uppercase.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #arxiv-1501.03210 #region-us \n",
"# Dataset Card for STAN Small",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: mounicam/hashtag_master\n- Paper: Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
"### Dataset Summary\n\nManually Annotated Stanford Sentiment Analysis Dataset by Bansal et al..",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'alternatives': other segmentations that are also accepted as a gold segmentation.\n\nAlthough 'segmentation' has exactly the same characters as 'hashtag' except for the spaces, the segmentations inside 'alternatives' may have characters corrected to uppercase.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
27f9f67d4662570c17e251438164c3508643c32d |
# Dataset Card for BOUN
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
Dev-BOUN is a development set that includes 500 manually segmented hashtags, selected from tweets about movies,
TV shows, popular people, sports teams, etc.
Test-BOUN is a test set that includes 500 manually segmented hashtags, selected from tweets about movies, TV shows, popular people, sports teams, etc.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "tryingtosleep",
"segmentation": "trying to sleep"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
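
The summary describes separate Dev-BOUN and Test-BOUN portions; how they are exposed on the Hub (as configurations or as splits, and under which names) is not stated here, so the sketch below only prints whatever splits are available before reading a few rows.

```
from datasets import load_dataset

# Usage sketch; split/configuration names are assumptions. Inspect the printed
# DatasetDict for the actual layout.
boun = load_dataset("ruanchaves/boun")
print(boun)

first_split = next(iter(boun.values()))
for example in first_split.select(range(3)):
    print(example["hashtag"], "->", example["segmentation"])
```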
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/boun | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T07:17:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "BOUN", "tags": ["word-segmentation"]} | 2022-10-20T18:13:09+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #region-us
|
# Dataset Card for BOUN
## Dataset Description
- Repository: ardax/hashtag-segmentor
- Paper: Segmenting Hashtags and Analyzing Their Grammatical Structure
### Dataset Summary
Dev-BOUN is a Development set that includes 500 manually segmented hashtags. These are selected from tweets about movies,
tv shows, popular people, sports teams etc.
Test-BOUN is a Test set that includes 500 manually segmented hashtags. These are selected from tweets about movies, tv shows, popular people, sports teams etc.
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for BOUN",
"## Dataset Description\n\n- Repository: ardax/hashtag-segmentor\n- Paper: Segmenting Hashtags and Analyzing Their Grammatical Structure",
"### Dataset Summary\n\nDev-BOUN is a Development set that includes 500 manually segmented hashtags. These are selected from tweets about movies, \ntv shows, popular people, sports teams etc. \n\nTest-BOUN is a Test set that includes 500 manually segmented hashtags. These are selected from tweets about movies, tv shows, popular people, sports teams etc.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for BOUN",
"## Dataset Description\n\n- Repository: ardax/hashtag-segmentor\n- Paper: Segmenting Hashtags and Analyzing Their Grammatical Structure",
"### Dataset Summary\n\nDev-BOUN is a Development set that includes 500 manually segmented hashtags. These are selected from tweets about movies, \ntv shows, popular people, sports teams etc. \n\nTest-BOUN is a Test set that includes 500 manually segmented hashtags. These are selected from tweets about movies, tv shows, popular people, sports teams etc.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
292e00146ecc1be6feefdb52362eace417791f4f |
# Dataset Card for Dev-Stanford
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "marathonmonday",
"segmentation": "marathon monday"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
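
Since `segmentation` differs from `hashtag` only by whitespace, the gold segmentation can be turned into per-character boundary labels, which is one common way to frame hashtag segmentation as sequence labelling. The sketch below is purely illustrative and not part of the original resources.

```
# Illustrative sketch: derive "insert a break after this character?" labels
# from the gold segmentation.
def boundary_labels(hashtag, segmentation):
    labels, pos = [], 0
    for ch in hashtag:
        pos += 1
        is_break = pos < len(segmentation) and segmentation[pos] == " "
        labels.append(1 if is_break else 0)
        if is_break:
            pos += 1  # skip the space in the segmented string
    return labels

print(boundary_labels("marathonmonday", "marathon monday"))
# [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```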
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/dev_stanford | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T07:28:41+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Dev-Stanford", "tags": ["word-segmentation"]} | 2022-10-20T18:13:37+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #region-us
|
# Dataset Card for Dev-Stanford
## Dataset Description
- Repository: ardax/hashtag-segmentor
- Paper: Segmenting Hashtags and Analyzing Their Grammatical Structure
### Dataset Summary
1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for Dev-Stanford",
"## Dataset Description\n\n- Repository: ardax/hashtag-segmentor\n- Paper: Segmenting Hashtags and Analyzing Their Grammatical Structure",
"### Dataset Summary\n\n1000 hashtags manually segmented by Çelebi et al. for development purposes, \nrandomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for Dev-Stanford",
"## Dataset Description\n\n- Repository: ardax/hashtag-segmentor\n- Paper: Segmenting Hashtags and Analyzing Their Grammatical Structure",
"### Dataset Summary\n\n1000 hashtags manually segmented by Çelebi et al. for development purposes, \nrandomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
48f64996c295b22e76cec4454362babfad31f581 |
# Dataset Card for Test-Stanford
## Dataset Description
- **Paper:** [Towards Deep Semantic Analysis Of Hashtags](https://arxiv.org/abs/1501.03210)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 1467856821,
"hashtag": "therapyfail",
"segmentation": "therapy fail",
"gold_position": 8,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20
],
"candidate": [
"therap y fail",
"the rap y fail",
"t her apy fail",
"the rap yfail",
"t he rap y fail",
"thera py fail",
"ther apy fail",
"th era py fail",
"therapy fail",
"therapy fai l",
"the r apy fail",
"the rapyfa il",
"the rapy fail",
"t herapy fail",
"the rapyfail",
"therapy f ai l",
"therapy fa il",
"the rapyf a il",
"therapy f ail",
"the ra py fail"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `gold_position`: position of the gold segmentation (the `segmentation` field) inside the `rank`.
- `rank`: Rank of each candidate selected by a baseline word segmenter (Segmentations Seeder Module); see the sketch below for how it relates to `gold_position`.
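The sketch below is an illustration based only on the fields described above, not code from the original paper. In the instance above, `rank["candidate"][8]` (zero-based) is `"therapy fail"`, which matches the gold segmentation, so the code treats `gold_position` as a zero-based index into the candidate list — an assumption worth verifying on more examples.
```
# Hedged sketch: relate `gold_position` to the ranked candidate list of an instance.
def gold_candidate(example):
    # Assumes `gold_position` indexes `rank["candidate"]` zero-based (holds for the instance above).
    return example["rank"]["candidate"][example["gold_position"]]

def seeder_top1_correct(example):
    # Did the baseline Segmentations Seeder Module rank the gold segmentation first?
    return example["rank"]["candidate"][0] == example["segmentation"]
```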
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/test_stanford | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:1501.03210",
"region:us"
] | 2022-03-05T08:26:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Test-Stanford", "tags": ["word-segmentation"]} | 2022-10-20T18:13:07+00:00 | [
"1501.03210"
] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #arxiv-1501.03210 #region-us
|
# Dataset Card for Test-Stanford
## Dataset Description
- Paper: Towards Deep Semantic Analysis Of Hashtags
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al..
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index annotated by Kodali et al..
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
- 'gold_position': position of the gold segmentation on the 'segmentation' field inside the 'rank'.
- 'rank': Rank of each candidate selected by a baseline word segmenter ( Segmentations Seeder Module ).
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for Test-Stanford",
"## Dataset Description\n\n- Paper: Towards Deep Semantic Analysis Of Hashtags",
"### Dataset Summary\n\nManually Annotated Stanford Sentiment Analysis Dataset by Bansal et al..",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index annotated by Kodali et al..\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'gold_position': position of the gold segmentation on the 'segmentation' field inside the 'rank'.\n- 'rank': Rank of each candidate selected by a baseline word segmenter ( Segmentations Seeder Module ).",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #arxiv-1501.03210 #region-us \n",
"# Dataset Card for Test-Stanford",
"## Dataset Description\n\n- Paper: Towards Deep Semantic Analysis Of Hashtags",
"### Dataset Summary\n\nManually Annotated Stanford Sentiment Analysis Dataset by Bansal et al..",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index annotated by Kodali et al..\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.\n- 'gold_position': position of the gold segmentation on the 'segmentation' field inside the 'rank'.\n- 'rank': Rank of each candidate selected by a baseline word segmenter ( Segmentations Seeder Module ).",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
2d33f11d465c83eb043544177daceb8f4d508343 |
# Battery Abstracts Dataset
This dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled according to the journals to which they belong. 14 battery journals and 1,044 non-battery journals were selected to form this database.
- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.
- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.
- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.
# Usage
```
from datasets import load_dataset
dataset = load_dataset("batterydata/paper-abstracts")
```
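The snippet below is only a sketch for inspecting the loaded dataset; the split names exposed by the loader and the column layout are assumptions, since the card above only documents the CSV file names and paper counts.
```
from datasets import load_dataset

dataset = load_dataset("batterydata/paper-abstracts")
# Split keys are an assumption; per the counts above they should total 32,663 / 9,333 / 4,667 examples.
for split_name, split in dataset.items():
    print(split_name, len(split), split.column_names)
```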
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` | batterydata/paper-abstracts | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-03-05T13:55:17+00:00 | {"language": ["en"], "license": ["apache-2.0"], "task_categories": ["text-classification"], "pretty_name": "Battery Abstracts Dataset"} | 2022-09-05T14:54:02+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #license-apache-2.0 #region-us
|
# Battery Abstracts Dataset
This dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled in terms of the journals to which they belong. 14 battery journals and 1,044 non battery journals were selected to form this database.
- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.
- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.
- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.
# Usage
| [
"# Battery Abstracts Dataset\nThis dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled in terms of the journals to which they belong. 14 battery journals and 1,044 non battery journals were selected to form this database. \n\n\n- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.\n- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.\n- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.",
"# Usage"
] | [
"TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #region-us \n",
"# Battery Abstracts Dataset\nThis dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled in terms of the journals to which they belong. 14 battery journals and 1,044 non battery journals were selected to form this database. \n\n\n- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.\n- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.\n- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.",
"# Usage"
] |
586ba42e6c8a76b305b4e27fc20ce99226a2c1d4 | A new Swahili tweet dataset for sentiment analysis.
## Issues ⚠️
In case you have any difficulties or issues while trying to run the script,
you can raise them in the issues section.
## Pull Requests 🔧
If you have something to add or a new idea to implement, you are welcome to create a pull request with your improvement.
## Give it a Like 👍
If you find this dataset useful, give it a like so that more people can get to know it.
## Credits
All the credits to [Davis David ](https://twitter.com/Davis_McDavid), [Zephania Reuben](https://twitter.com/nsomazr) & [Eliya Masesa](https://twitter.com/eliya_masesa) | Davis/Swahili-tweet-sentiment | [
"license:mit",
"region:us"
] | 2022-03-05T16:03:06+00:00 | {"license": "mit"} | 2022-03-05T17:58:17+00:00 | [] | [] | TAGS
#license-mit #region-us
| A new Swahili tweet dataset for sentiment analysis.
## Issues ️
Incase you have any difficulties or issues while trying to run the script
you can raise it on the issues section.
## Pull Requests
If you have something to add or new idea to implement, you are welcome to create a pull requests on improvement.
## Give it a Like
If you find this dataset useful, give it a like so as many people can get to know it.
## Credits
All the credits to Davis David , Zephania Reuben & Eliya Masesa | [
"## Issues ️\r\n\r\nIncase you have any difficulties or issues while trying to run the script\r\nyou can raise it on the issues section.",
"## Pull Requests \r\n\r\nIf you have something to add or new idea to implement, you are welcome to create a pull requests on improvement.",
"## Give it a Like \r\n\r\nIf you find this dataset useful, give it a like so as many people can get to know it.",
"## Credits \r\n\r\nAll the credits to Davis David , Zephania Reuben & Eliya Masesa"
] | [
"TAGS\n#license-mit #region-us \n",
"## Issues ️\r\n\r\nIncase you have any difficulties or issues while trying to run the script\r\nyou can raise it on the issues section.",
"## Pull Requests \r\n\r\nIf you have something to add or new idea to implement, you are welcome to create a pull requests on improvement.",
"## Give it a Like \r\n\r\nIf you find this dataset useful, give it a like so as many people can get to know it.",
"## Credits \r\n\r\nAll the credits to Davis David , Zephania Reuben & Eliya Masesa"
] |
4fb954beab9774a12cac3a13ee08616d5e10df6d |
# Dataset Card for NRU-HSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [glushkovato/hashtag_segmentation](https://github.com/glushkovato/hashtag_segmentation/)
- **Paper:** [Char-RNN and Active Learning for Hashtag Segmentation](https://arxiv.org/abs/1911.03270)
### Dataset Summary
Real hashtags collected from several pages about civil services on vk.com (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "ЁлкаВЗазеркалье",
"segmentation": "Ёлка В Зазеркалье"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters (see the sketch below). Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
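The same whitespace convention holds for the Cyrillic hashtags in this dataset; a quick hedged check using the instance shown above (not code from the original paper):
```
# Stripping spaces from the gold segmentation recovers the original Cyrillic hashtag,
# and the number of words is simply the number of whitespace-separated tokens.
example = {"index": 0, "hashtag": "ЁлкаВЗазеркалье", "segmentation": "Ёлка В Зазеркалье"}
assert example["segmentation"].replace(" ", "") == example["hashtag"]
print(len(example["segmentation"].split()))  # 3 words
```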
## Additional Information
### Citation Information
```
@article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/nru_hse | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:ru",
"license:unknown",
"word-segmentation",
"arxiv:1911.03270",
"region:us"
] | 2022-03-05T17:40:41+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["ru"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "NRU-HSE", "tags": ["word-segmentation"]} | 2022-10-20T18:12:59+00:00 | [
"1911.03270"
] | [
"ru"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Russian #license-unknown #word-segmentation #arxiv-1911.03270 #region-us
|
# Dataset Card for NRU-HSE
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Additional Information
- Citation Information
- Contributions
## Dataset Description
- Repository: glushkovato/hashtag_segmentation
- Paper: Char-RNN and Active Learning for Hashtag Segmentation
### Dataset Summary
Real hashtags collected from several pages about civil services on URL (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for NRU-HSE",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: glushkovato/hashtag_segmentation\n- Paper: Char-RNN and Active Learning for Hashtag Segmentation",
"### Dataset Summary\n\nReal hashtags collected from several pages about civil services on URL (a Russian social network) and then segmented manually.",
"### Languages\n\nRussian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Russian #license-unknown #word-segmentation #arxiv-1911.03270 #region-us \n",
"# Dataset Card for NRU-HSE",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: glushkovato/hashtag_segmentation\n- Paper: Char-RNN and Active Learning for Hashtag Segmentation",
"### Dataset Summary\n\nReal hashtags collected from several pages about civil services on URL (a Russian social network) and then segmented manually.",
"### Languages\n\nRussian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
e51544fd07e72dfa6bf830b56e417adba8dc50ba |
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Loyola University of Delaware Identifier Splitting Oracle](http://www.cs.loyola.edu/~binkley/ludiso/)
- **Paper:** [An empirical study of identifier splitting techniques](https://dl.acm.org/doi/10.1007/s10664-013-9261-0)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "::CreateProcess",
"segmentation": ":: Create Process",
"language": "cpp",
"source": "mozilla-source-1.1"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
- `language`: the programming language of the source.
- `source`: the source of the identifier.
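For illustration, a naive rule-based splitter (a hedged baseline sketch — not one of the techniques evaluated in the paper) can be compared against the gold segmentation; it handles camel case and special-character runs such as the `::CreateProcess` instance above, but misses same-case boundaries, which is exactly why a human-built oracle is useful.
```
import re

def naive_split(identifier):
    s = re.sub(r"([^0-9A-Za-z]+)", r" \1 ", identifier)  # put spaces around runs of special characters
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", s)        # split at lowercase-to-uppercase transitions
    return " ".join(s.split())                           # collapse repeated whitespace

example = {"identifier": "::CreateProcess", "segmentation": ":: Create Process"}
print(naive_split(example["identifier"]) == example["segmentation"])  # True for this instance
```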
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/loyola | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T19:23:21+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "The Loyola University of Delaware Identifier Splitting Oracle", "tags": ["word-segmentation"]} | 2022-10-20T18:13:04+00:00 | [] | [
"code"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us
|
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Additional Information
- Citation Information
- Contributions
## Dataset Description
- Repository: Loyola University of Delaware Identifier Splitting Oracle
- Paper: An empirical study of identifier splitting techniques
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words on a identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'identifier': the original identifier.
- 'segmentation': the gold segmentation for the identifier.
- 'language': the programming language of the source.
- 'source': the source of the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: Loyola University of Delaware Identifier Splitting Oracle\n- Paper: An empirical study of identifier splitting techniques",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nThe Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation, \ni.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- Java\n- C\n- C++",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.\n- 'language': the programming language of the source.\n- 'source': the source of the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: Loyola University of Delaware Identifier Splitting Oracle\n- Paper: An empirical study of identifier splitting techniques",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nThe Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation, \ni.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- Java\n- C\n- C++",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.\n- 'language': the programming language of the source.\n- 'source': the source of the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
f47b2a116e3e6ad75fc4dbf17a4c8527d0fb0126 | This dataset is presented for the task of Answering Questions on the Holy Qur'an.
https://sites.google.com/view/quran-qa-2022
QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. It is split into training (65%), development (10%), and test (25%) sets.
QRCD is a JSON Lines (JSONL) file; each line is a JSON object that comprises a question-passage pair, along with its answers extracted from the accompanying passage. The dataset adopts the format shown below. The sample below has two JSON objects, one for each of the above two questions. | AhmedSSoliman/QRCD | [
"region:us"
] | 2022-03-05T20:46:25+00:00 | {} | 2022-03-06T18:58:06+00:00 | [] | [] | TAGS
#region-us
| This dataset is presented for the task of Answering Questions on the Holy Qur'an.
URL
QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. It is split into training (65%), development (10%), and test (25%) sets.
QRCD is a JSON Lines (JSONL) file; each line is a JSON object that comprises a question-passage pair, along with its answers extracted from the accompanying passage. The dataset adopts the format shown below. The sample below has two JSON objects, one for each of the above two questions. | [] | [
"TAGS\n#region-us \n"
] |
f60c3e93c0985c90741d15948afc694f9460b3d9 |
# Dataset Card for synQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [synQA homepage](https://github.com/maxbartolo/improving-qa-model-robustness)
- **Paper:** [Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation](https://aclanthology.org/2021.emnlp-main.696/)
- **Point of Contact:** [Max Bartolo]([email protected])
### Dataset Summary
SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper.
### Supported Tasks
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists of selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). This task is available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models based on F1 score.
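For reference, a minimal sketch of the word-overlap F1 (simplified — the standard SQuAD evaluation additionally normalises case, punctuation and articles, and takes the maximum over multiple gold answers):
```
from collections import Counter

def word_overlap_f1(prediction, gold):
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(word_overlap_f1("copper statue", "a copper statue of Christ"))  # ~0.57
```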
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "None",
"paragraphs": [
{
"context": "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"qas": [
{
"id": "689f275aacba6c43ff112b2c7cb16129bfa934fa",
"question": "What material is the statue of Christ made of?",
"answers": [
{
"answer_start": 190,
"text": "organic copper"
}
]
},
{
"id": "73bd3f52f5934e02332787898f6e568d04bc5403",
"question": "Who is on the Main Building's gold dome?",
"answers": [
{
"answer_start": 111,
"text": "the Virgin Mary."
}
]
},
{
"id": "4d459d5b75fd8a6623446290c542f99f1538cf84",
"question": "What kind of statue is at the end of the main drive?",
"answers": [
{
"answer_start": 667,
"text": "modern stone"
}
]
},
{
"id": "987a1e469c5b360f142b0a171e15cef17cd68ea6",
"question": "What type of dome is on the Main Building at Notre Dame?",
"answers": [
{
"answer_start": 79,
"text": "gold"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: all "None" in this dataset
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
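A hedged sketch of flattening the nested SQuAD-format structure into individual records and checking that `answer_start` indexes the answer span in the context (field names follow the example instance above; the file name is an assumption):
```
import json

def iter_examples(squad_like):
    # Walk the SQuAD-style structure shown above and yield flat question-answer records.
    for article in squad_like["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                for answer in qa["answers"]:
                    start = answer["answer_start"]
                    yield {
                        "id": qa["id"],
                        "question": qa["question"],
                        "answer_text": answer["text"],
                        # Character-offset sanity check: does the span at answer_start match the text?
                        "span_matches": context[start:start + len(answer["text"])] == answer["text"],
                    }

# with open("synQA.json") as f:  # the file name is an assumption
#     records = list(iter_examples(json.load(f)))
```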
### Data Splits
The dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).
## Dataset Creation
### Curation Rationale
This dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and a BART-Large generative model for the questions.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource for improving the ability of systems to handle questions that contemporary state-of-the-art models struggle to answer correctly, which often requires more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).
### Licensing Information
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@inproceedings{bartolo-etal-2021-improving,
title = "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation",
author = "Bartolo, Max and
Thrush, Tristan and
Jia, Robin and
Riedel, Sebastian and
Stenetorp, Pontus and
Kiela, Douwe",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.696",
doi = "10.18653/v1/2021.emnlp-main.696",
pages = "8830--8848",
abstract = "Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the scale of the collected data. In this work, we are the first to use synthetic adversarial data generation to make question answering models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation and show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8{\%} of the time on average, compared to 17.6{\%} for a model trained without synthetic data.",
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset.
| mbartolo/synQA | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:1606.05250",
"region:us"
] | 2022-03-05T21:24:45+00:00 | {"annotations_creators": ["generated"], "language_creators": ["found"], "language": ["en"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "pretty_name": "synQA"} | 2022-10-25T09:02:24+00:00 | [
"1606.05250"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #task_ids-open-domain-qa #annotations_creators-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #arxiv-1606.05250 #region-us
|
# Dataset Card for synQA
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: synQA homepage
- Paper: Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation
- Point of Contact: Max Bartolo
### Dataset Summary
SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (URL
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (URL training set.
In this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (URL dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper.
### Supported Tasks
'extractive-qa': The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap F1 URL as round 1 of the QA task on Dynabench and ranks models based on F1 score.
### Languages
The text in the dataset is in English. The associated BCP-47 code is 'en'.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
### Data Fields
- title: all "None" in this dataset
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an 'answer_start' field which is the character index of the start of the answer span, and a 'text' field which is the answer text.
### Data Splits
The dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).
## Dataset Creation
### Curation Rationale
This dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in SQuAD v1.1.
#### Who are the source language producers?
The source language produces are Wikipedia editors for the passages, and a BART-Large generative model for the questions.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource for improve the ability of systems t handle questions that contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than say detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the the source passages are both domain-restricted and linguistically specific, and that provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).
### Licensing Information
This dataset is distributed under the MIT License.
### Contributions
Thanks to @maxbartolo for adding this dataset.
| [
"# Dataset Card for synQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: synQA homepage\n- Paper: Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation\n- Point of Contact: Max Bartolo",
"### Dataset Summary\n\nSynQA is a Reading Comprehension dataset created in the work \"Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation\" (URL\nIt consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (URL training set.\n\nIn this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (URL dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.\n\nFor full details on how the dataset was created, kindly refer to the paper.",
"### Supported Tasks\n\n'extractive-qa': The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap F1 URL as round 1 of the QA task on Dynabench and ranks models based on F1 score.",
"### Languages\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'.",
"## Dataset Structure",
"### Data Instances\n\nData is provided in the same format as SQuAD 1.1. An example is shown below:",
"### Data Fields\n\n- title: all \"None\" in this dataset\n- context: the context/passage\n- id: a string identifier for each question\n- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an 'answer_start' field which is the character index of the start of the answer span, and a 'text' field which is the answer text.",
"### Data Splits\n\nThe dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe source passages are from Wikipedia and are the same as those used in SQuAD v1.1.",
"#### Who are the source language producers?\n\nThe source language produces are Wikipedia editors for the passages, and a BART-Large generative model for the questions.",
"### Personal and Sensitive Information\n\nNo annotator identifying details are provided.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop better question answering systems.\n\nA system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource for improve the ability of systems t handle questions that contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than say detecting phrases explicitly mentioned in the passage with high overlap to the question.\n\nIt should be noted, however, that the the source passages are both domain-restricted and linguistically specific, and that provided questions and answers do not constitute any particular social application.",
"### Discussion of Biases\n\nThe dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.",
"### Other Known Limitations\n\nN/a",
"## Additional Information",
"### Dataset Curators\n\nThis dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).",
"### Licensing Information\n\nThis dataset is distributed under the MIT License.",
"### Contributions\n\nThanks to @maxbartolo for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #task_ids-open-domain-qa #annotations_creators-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #arxiv-1606.05250 #region-us \n",
"# Dataset Card for synQA",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: synQA homepage\n- Paper: Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation\n- Point of Contact: Max Bartolo",
"### Dataset Summary\n\nSynQA is a Reading Comprehension dataset created in the work \"Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation\" (URL\nIt consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (URL training set.\n\nIn this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (URL dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.\n\nFor full details on how the dataset was created, kindly refer to the paper.",
"### Supported Tasks\n\n'extractive-qa': The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap F1 URL as round 1 of the QA task on Dynabench and ranks models based on F1 score.",
"### Languages\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'.",
"## Dataset Structure",
"### Data Instances\n\nData is provided in the same format as SQuAD 1.1. An example is shown below:",
"### Data Fields\n\n- title: all \"None\" in this dataset\n- context: the context/passage\n- id: a string identifier for each question\n- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an 'answer_start' field which is the character index of the start of the answer span, and a 'text' field which is the answer text.",
"### Data Splits\n\nThe dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe source passages are from Wikipedia and are the same as those used in SQuAD v1.1.",
"#### Who are the source language producers?\n\nThe source language produces are Wikipedia editors for the passages, and a BART-Large generative model for the questions.",
"### Personal and Sensitive Information\n\nNo annotator identifying details are provided.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop better question answering systems.\n\nA system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource for improve the ability of systems t handle questions that contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than say detecting phrases explicitly mentioned in the passage with high overlap to the question.\n\nIt should be noted, however, that the the source passages are both domain-restricted and linguistically specific, and that provided questions and answers do not constitute any particular social application.",
"### Discussion of Biases\n\nThe dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.",
"### Other Known Limitations\n\nN/a",
"## Additional Information",
"### Dataset Curators\n\nThis dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).",
"### Licensing Information\n\nThis dataset is distributed under the MIT License.",
"### Contributions\n\nThanks to @maxbartolo for adding this dataset."
] |
1877395c47bcf77735761c694234dd55d3598bc5 |
# Dataset Card for BT11
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 20170,
"identifier": "currentLineHighlight",
"segmentation": "current Line Highlight"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
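The fields above can be consumed directly with the Hugging Face `datasets` library. The snippet below is a minimal usage sketch: the dataset id `ruanchaves/bt11` comes from this card, while the `train` split name is an assumption that should be checked against the released configuration.

```python
# Minimal usage sketch (assumes the `datasets` library; the "train" split name
# is an assumption, not confirmed by this card).
from datasets import load_dataset

bt11 = load_dataset("ruanchaves/bt11", split="train")

example = bt11[0]
print(example["identifier"])    # original identifier, e.g. "currentLineHighlight"
print(example["segmentation"])  # gold segmentation, e.g. "current Line Highlight"
```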
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{butler2011improving,
title={Improving the tokenisation of identifier names},
author={Butler, Simon and Wermelinger, Michel and Yu, Yijun and Sharp, Helen},
booktitle={European Conference on Object-Oriented Programming},
pages={130--154},
year={2011},
organization={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/bt11 | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T22:41:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "BT11", "tags": ["word-segmentation"]} | 2022-10-20T18:13:02+00:00 | [] | [
"code"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us
|
# Dataset Card for BT11
## Dataset Description
- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'identifier': the original identifier.
- 'segmentation': the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for BT11",
"## Dataset Description\n\n- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nBT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- Java",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for BT11",
"## Dataset Description\n\n- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nBT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- Java",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
5ccd62cfd185abd77dffc846d2cd3499e0c286c9 |
# Dataset Card for Binkley
## Dataset Description
- **Paper:** [Normalizing Source Code Vocabulary](https://www.researchgate.net/publication/224198190_Normalizing_Source_Code_Vocabulary)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Binkley is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- C
- C++
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "init_g16_i",
"segmentation": "init _ g 16 _ i"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
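To illustrate how `identifier` maps to `segmentation` (including the whitespace convention around special characters described under Dataset Creation below), here is a minimal rule-based splitter sketch. It is not part of the dataset and only handles underscore, camelCase, and letter/digit boundaries.

```python
import re

def naive_split(identifier: str) -> str:
    """Toy baseline: isolate runs of special characters and break camelCase
    and letter/digit boundaries. Real splitters do considerably more."""
    s = re.sub(r"([^0-9A-Za-z]+)", r" \1 ", identifier)  # "_" becomes its own token
    s = re.sub(r"([a-z])([A-Z])", r"\1 \2", s)           # camelCase boundary
    s = re.sub(r"([A-Za-z])([0-9])", r"\1 \2", s)        # letter -> digit
    s = re.sub(r"([0-9])([A-Za-z])", r"\1 \2", s)        # digit -> letter
    return " ".join(s.split())

print(naive_split("init_g16_i"))  # -> "init _ g 16 _ i", matching the instance above
```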
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{inproceedings,
author = {Lawrie, Dawn and Binkley, David and Morrell, Christopher},
year = {2010},
month = {11},
pages = {3 - 12},
title = {Normalizing Source Code Vocabulary},
journal = {Proceedings - Working Conference on Reverse Engineering, WCRE},
doi = {10.1109/WCRE.2010.10}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/binkley | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T22:56:51+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Binkley", "tags": ["word-segmentation"]} | 2022-10-20T18:12:56+00:00 | [] | [
"code"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us
|
# Dataset Card for Binkley
## Dataset Description
- Paper: Normalizing Source Code Vocabulary
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Binkley is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- C
- C++
- Java
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'identifier': the original identifier.
- 'segmentation': the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for Binkley",
"## Dataset Description\n\n- Paper: Normalizing Source Code Vocabulary",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nBinkley is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- C\n- C++\n- Java",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for Binkley",
"## Dataset Description\n\n- Paper: Normalizing Source Code Vocabulary",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nBinkley is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- C\n- C++\n- Java",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
df859ecce54578af17e873cf79438b082632de1d |
# Dataset Card for Jhotdraw
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "abstractconnectorserializeddataversion",
"segmentation": "abstract connector serialized data version"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
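Splitters evaluated on data in this format are commonly scored with exact-match accuracy against the gold segmentation. The helper below is a hedged sketch; the function and variable names are illustrative and not part of any library.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of identifiers whose predicted token sequence equals the gold one."""
    assert len(predictions) == len(references)
    hits = sum(p.split() == r.split() for p, r in zip(predictions, references))
    return hits / len(references)

gold = ["abstract connector serialized data version"]
pred = ["abstractconnector serialized data version"]
print(exact_match_accuracy(pred, gold))  # 0.0 -- the first boundary was missed
```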
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/jhotdraw | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T23:13:37+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Jhotdraw", "tags": ["word-segmentation"]} | 2022-10-20T18:12:53+00:00 | [] | [
"code"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us
|
# Dataset Card for Jhotdraw
## Dataset Description
- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'identifier': the original identifier.
- 'segmentation': the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for Jhotdraw",
"## Dataset Description\n\n- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nJhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- Java",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for Jhotdraw",
"## Dataset Description\n\n- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nJhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.",
"### Languages\n\n- Java",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
9046da8c9a595ead11d7d243780db677f2ce9618 |
# Dataset Card for Lynx
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
Besides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.
### Languages
- C
## Dataset Structure
### Data Instances
```
{
"index": 3,
"identifier": "abspath",
"segmentation": "abs path",
"expansion": "absolute path",
"spans": {
"text": [
"abs"
],
"expansion": [
"absolute"
],
"start": [
0
],
"end": [
4
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier, without abbreviation expansion.
- `expansion`: the gold segmentation for the identifier, with abbreviation expansion.
- `spans`: the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.
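Because `spans` aligns each abbreviation with its expansion, the pairs can be read off directly. The sketch below is built from the instance shown above; only the field layout is taken from this card, and whether `start`/`end` index into the identifier or into the segmentation is an assumption that should be verified against the data.

```python
# Hedged sketch using the instance above; the start/end semantics are an assumption.
example = {
    "identifier": "abspath",
    "segmentation": "abs path",
    "expansion": "absolute path",
    "spans": {"text": ["abs"], "expansion": ["absolute"], "start": [0], "end": [4]},
}

spans = example["spans"]
for abbr, full, start, end in zip(spans["text"], spans["expansion"], spans["start"], spans["end"]):
    print(f"{abbr!r} ({start}:{end}) -> {full!r}")  # 'abs' (0:4) -> 'absolute'
```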
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/lynx | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T23:19:48+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction", "code-generation", "conditional-text-generation"], "task_ids": [], "pretty_name": "Lynx", "tags": ["word-segmentation"]} | 2022-10-20T18:12:51+00:00 | [] | [
"code"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us
|
# Dataset Card for Lynx
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Additional Information
- Citation Information
- Contributions
## Dataset Description
- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
Besides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.
### Languages
- C
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'identifier': the original identifier.
- 'segmentation': the gold segmentation for the identifier, without abbreviation expansion.
- 'expansion': the gold segmentation for the identifier, with abbreviation expansion.
- 'spans': the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library. | [
"# Dataset Card for Lynx",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nLynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.\n\nBesides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.",
"### Languages\n\n- C",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier, without abbreviation expansion.\n- 'expansion': the gold segmentation for the identifier, with abbreviation expansion.\n- 'spans': the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-code #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for Lynx",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Paper: Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF",
"### Dataset Summary\n\nIn programming languages, identifiers are tokens (also called symbols) which name language entities.\nSome of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.\n\nLynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words on a identifier.\n\nBesides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.",
"### Languages\n\n- C",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'identifier': the original identifier.\n- 'segmentation': the gold segmentation for the identifier, without abbreviation expansion.\n- 'expansion': the gold segmentation for the identifier, with abbreviation expansion.\n- 'spans': the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
dec0e19ff4bab5b5b1a972909b2ea38118644d0f |
# Dataset Card for SNAP
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)
### Dataset Summary
Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "BrandThunder",
"segmentation": "Brand Thunder"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
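Since `segmentation` differs from `hashtag` only by whitespace (see Dataset Creation below), loaded rows are easy to sanity-check. A minimal sketch, assuming the `datasets` library and that the single split is named `train`:

```python
from datasets import load_dataset

snap = load_dataset("ruanchaves/snap", split="train")  # split name is an assumption

row = snap[0]
# Per the convention documented below, stripping whitespace from the gold
# segmentation should recover the original hashtag.
assert row["segmentation"].replace(" ", "") == row["hashtag"]
print(row["hashtag"], "->", row["segmentation"])
```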
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{celebi2016segmenting,
title={Segmenting hashtags using automatically created training data},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
pages={2981--2985},
year={2016}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
| ruanchaves/snap | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-06T00:17:23+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "SNAP", "tags": ["word-segmentation"]} | 2022-10-20T18:12:47+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #region-us
|
# Dataset Card for SNAP
## Dataset Description
- Repository: ardax/hashtag-segmentor
- Paper: Segmenting hashtags using automatically created training data
### Dataset Summary
Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- 'index': a numerical index.
- 'hashtag': the original hashtag.
- 'segmentation': the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.
- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.
## Additional Information
### Contributions
This dataset was added by @ruanchaves while developing the hashformers library.
| [
"# Dataset Card for SNAP",
"## Dataset Description\n\n- Repository: ardax/hashtag-segmentor\n- Paper: Segmenting hashtags using automatically created training data",
"### Dataset Summary\n\nAutomatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper \"Segmenting hashtags using automatically created training data\".",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-unknown #word-segmentation #region-us \n",
"# Dataset Card for SNAP",
"## Dataset Description\n\n- Repository: ardax/hashtag-segmentor\n- Paper: Segmenting hashtags using automatically created training data",
"### Dataset Summary\n\nAutomatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper \"Segmenting hashtags using automatically created training data\".",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'index': a numerical index.\n- 'hashtag': the original hashtag.\n- 'segmentation': the gold segmentation for the hashtag.",
"## Dataset Creation\n\n- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: 'hashtag' and 'segmentation' or 'identifier' and 'segmentation'.\n\n- The only difference between 'hashtag' and 'segmentation' or between 'identifier' and 'segmentation' are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.\n\n- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as '_' , ':', '~' ). \n\n- If there are any annotations for named entity recognition and other token classification tasks, they are given in a 'spans' field.",
"## Additional Information",
"### Contributions\n\nThis dataset was added by @ruanchaves while developing the hashformers library."
] |
0a295fc67ae9892cf83d9f585fbd5f29330bf502 | A collection of 38,176 emoji images from Facebook, Google, Apple, WhatsApp, Samsung, [JoyPixels](https://www.joypixels.com/), Twitter, [emojidex](https://www.emojidex.com/), LG, [OpenMoji](https://openmoji.org/), and Microsoft. It includes all the emojis for these apps/platforms as of early 2022.
* Counts: Facebook=3664, Google=3664, Apple=3961, WhatsApp=3519, Samsung=3752, JoyPixels=3538, Twitter=3544, emojidex=2040, LG=3051, OpenMoji=3512, Microsoft=3931.
* Sizes: Facebook=144x144, Google=144x144, Apple=144x144, WhatsApp=144x144, Samsung=108x108, JoyPixels=144x144, Twitter=144x144, emojidex=144x144, LG=136x128, OpenMoji=144x144, Microsoft=144x144.
* The tar files directly contain the image files (they're not inside a parent folder).
* The emoji code points are at the end of the filename, but there are some adjustments needed to parse them into the Unicode character consistently across all sets of emojis in this dataset. Here's some JavaScript code to convert the file name of an emoji image into the actual Unicode emoji character:
```js
// `filename` is the name of one of the image files from the tar archives.
let filename = ...;
// Strip skin-tone descriptors and collapse the doubled "__"/"--" separators this
// can leave behind, so the code-point portion of the name parses cleanly.
let fixedFilename = filename.replace(/(no|light|medium|medium-light|medium-dark|dark)-skin-tone/, "").replace(/__/, "_").replace(/--/, "-");
// Take the hex code points after the underscore (dropping the file extension),
// parse each as hex, and build the emoji character from them.
let emoji = String.fromCodePoint(...fixedFilename.split("_")[1].split(".")[0].split("-").map(hex => parseInt(hex, 16)));
```
## Facebook examples:

## Google examples:

## Apple examples:

## WhatsApp examples:

## Samsung examples:

## JoyPixels examples:

## Twitter examples:

## emojidex examples:

## LG examples:

## OpenMoji examples:

## Microsoft examples:
 | rocca/emojis | [
"region:us"
] | 2022-03-06T02:31:30+00:00 | {} | 2022-04-29T08:37:55+00:00 | [] | [] | TAGS
#region-us
| A collection of 38,176 emoji images from Facebook, Google, Apple, WhatsApp, Samsung, JoyPixels, Twitter, emojidex, LG, OpenMoji, and Microsoft. It includes all the emojis for these apps/platforms as of early 2022.
* Counts: Facebook=3664, Google=3664, Apple=3961, WhatsApp=3519, Samsung=3752, JoyPixels=3538, Twitter=3544, emojidex=2040, LG=3051, OpenMoji=3512, Microsoft=3931.
* Sizes: Facebook=144x144, Google=144x144, Apple=144x144, WhatsApp=144x144, Samsung=108x108, JoyPixels=144x144, Twitter=144x144, emojidex=144x144, LG=136x128, OpenMoji=144x144, Microsoft=144x144.
* The tar files directly contain the image files (they're not inside a parent folder).
* The emoji code points are at the end of the filename, but there are some adjustments needed to parse them into the Unicode character consistently across all sets of emojis in this dataset. Here's some JavaScript code to convert the file name of an emoji image into the actual Unicode emoji character:
## Facebook examples:
!Facebook emoji grid
## Google examples:
!Google emoji grid
## Apple examples:
!Apple emoji grid
## WhatsApp examples:
!WhatsApp emoji grid
## Samsung examples:
!Samsung emoji grid
## JoyPixels examples:
!JoyPixels emoji grid
## Twitter examples:
!Twitter emoji grid
## emojidex examples:
!emojidex emoji grid
## LG examples:
!LG emoji grid
## OpenMoji examples:
!OpenMoji emoji grid
## Microsoft examples:
!Microsoft emoji grid | [
"## Facebook examples:\n!Facebook emoji grid",
"## Google examples:\n!Google emoji grid",
"## Apple examples:\n!Apple emoji grid",
"## WhatsApp examples:\n!WhatsApp emoji grid",
"## Samsung examples:\n!Samsung emoji grid",
"## JoyPixels examples:\n!JoyPixels emoji grid",
"## Twitter examples:\n!Twitter emoji grid",
"## emojidex examples:\n!emojidex emoji grid",
"## LG examples:\n!LG emoji grid",
"## OpenMoji examples:\n!OpenMoji emoji grid",
"## Microsoft examples:\n!Microsoft emoji grid"
] | [
"TAGS\n#region-us \n",
"## Facebook examples:\n!Facebook emoji grid",
"## Google examples:\n!Google emoji grid",
"## Apple examples:\n!Apple emoji grid",
"## WhatsApp examples:\n!WhatsApp emoji grid",
"## Samsung examples:\n!Samsung emoji grid",
"## JoyPixels examples:\n!JoyPixels emoji grid",
"## Twitter examples:\n!Twitter emoji grid",
"## emojidex examples:\n!emojidex emoji grid",
"## LG examples:\n!LG emoji grid",
"## OpenMoji examples:\n!OpenMoji emoji grid",
"## Microsoft examples:\n!Microsoft emoji grid"
] |
72047fee5890ca82c752902aedb138cc72c6fb96 |
# Dataset Card for COVID-19 French News dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. This dataset card is not finished yet.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `title`: title of the article
- `description`: description or a summary of the article
- `text`: the actual article text in raw form
- `domain`: source domain of the article (i.e. lemonde.fr)
- `url`: article URL, the original URL where it was scraped
- `labels`: classification labels
### Data Splits
The COVID-19 French News dataset has only a training split, i.e. it has to be loaded with the train split specified: `fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split="train")`
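Expanding the one-liner above into a slightly fuller sketch (field names are the ones listed under Data Fields; everything else is a plain `datasets` call):

```python
from datasets import load_dataset

fr_covid_news = load_dataset("gustavecortal/fr_covid_news", split="train")

article = fr_covid_news[0]
print(article["title"])   # article title
print(article["domain"])  # source domain, e.g. "lemonde.fr"
print(article["labels"])  # classification labels
```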
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
### Annotations
#### Annotation process
[More Information Needed]
### Personal and Sensitive Information
As one can imagine, the data contains contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was originally collected by Gustave Cortal ([email protected])
### Licensing Information
Usage of the dataset is restricted to non-commercial research purposes only.
### Citation Information
```
@dataset{fr_covid_news,
author = {Gustave Cortal},
year = {2022},
title = {COVID-19 - French News Dataset},
url = {https://www.gustavecortal.com}
}
```
### Contributions
[@gustavecortal](https://github.com/gustavecortal) | gustavecortal/fr_covid_news | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fr",
"license:unknown",
"region:us"
] | 2022-03-06T21:28:35+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["fr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification", "sequence-modeling", "conditional-text-generation"], "task_ids": ["topic-classification", "multi-label-classification", "multi-class-classification", "language-modeling", "summarization", "other-stuctured-to-text"], "pretty_name": "COVID-19 French News dataset", "language_bcp47": ["fr-FR"]} | 2022-10-20T18:01:24+00:00 | [] | [
"fr"
] | TAGS
#task_categories-text-classification #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-multi-class-classification #task_ids-language-modeling #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-French #license-unknown #region-us
|
# Dataset Card for COVID-19 French News dataset
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using news-please - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. Dataset Card not finished yet.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
### Data Fields
- 'title': title of the article
- 'description': description or a summary of the article
- 'text': the actual article text in raw form
- 'domain': source domain of the article (i.e. URL)
- 'url': article URL, the original URL where it was scraped
- 'labels': classification labels
### Data Splits
COVID-19 French News dataset has only the training set, i.e. it has to be loaded with train split specified: fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split="train")
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
### Personal and Sensitive Information
As one can imagine, the data contains mentions of contemporary public figures and other individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The data was originally collected by Gustave Cortal (gustavecortal@URL)
### Licensing Information
Usage of the dataset is restricted to non-commercial research purposes only.
### Contributions
@gustavecortal | [
"# Dataset Card for COVID-19 French News dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using news-please - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. Dataset Card not finished yet.",
"### Languages\n\nThe text in the dataset is in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'title': title of the article\n- 'description': description or a summary of the article\n- 'text': the actual article text in raw form\n- 'domain': source domain of the article (i.e. URL)\n- 'url': article URL, the original URL where it was scraped\n- 'labels': classification labels",
"## Data Splits\n\nCOVID-19 French News dataset has only the training set, i.e. it has to be loaded with train split specified: fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split=\"train\")",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"### Personal and Sensitive Information\n\nAs one can imagine, data contains contemporary public figures or individuals who appeared in the news.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe data was originally collected by Gustave Cortal (gustavecortal@URL)",
"### Licensing Information\n\nUsage of the dataset is restricted to non-commercial research purposes only.",
"### Contributions\n\n@gustavecortal"
] | [
"TAGS\n#task_categories-text-classification #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-multi-class-classification #task_ids-language-modeling #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-French #license-unknown #region-us \n",
"# Dataset Card for COVID-19 French News dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using news-please - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. Dataset Card not finished yet.",
"### Languages\n\nThe text in the dataset is in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'title': title of the article\n- 'description': description or a summary of the article\n- 'text': the actual article text in raw form\n- 'domain': source domain of the article (i.e. URL)\n- 'url': article URL, the original URL where it was scraped\n- 'labels': classification labels",
"## Data Splits\n\nCOVID-19 French News dataset has only the training set, i.e. it has to be loaded with train split specified: fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split=\"train\")",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"### Personal and Sensitive Information\n\nAs one can imagine, data contains contemporary public figures or individuals who appeared in the news.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe data was originally collected by Gustave Cortal (gustavecortal@URL)",
"### Licensing Information\n\nUsage of the dataset is restricted to non-commercial research purposes only.",
"### Contributions\n\n@gustavecortal"
] |
e5322fec79e6702f69d79829efdc7853f1853802 | ---
annotations_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- | FinScience/FS-distilroberta-fine-tuned | [
"language:en",
"region:us"
] | 2022-03-07T17:24:39+00:00 | {"language": ["en"]} | 2022-10-25T09:02:42+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| ---
annotations_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- | [] | [
"TAGS\n#language-English #region-us \n"
] |
87615eac7add0a10355c50b25b5cff17e782cad3 |
# Dataset Card for "MIMICause"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additinal-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/)
- **Paper:** [MIMICause: Representation and automatic extraction of causal relation types from clinical notes](https://arxiv.org/abs/2110.07090)
- **Size of downloaded dataset files:** 333.4 KB
- **Size of the generated dataset:** 491.2 KB
- **Total amount of disk used:** 668.2 KB
### Dataset Summary
MIMICause Dataset is a dataset for representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires manual download of the mimicause.zip file from the **Community Annotations Downloads** section of the n2c2 dataset on the [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) after signing their agreement forms, which is a quick and easy procedure.
The dataset has 2714 samples having both explicit and implicit causality in which entities are in the same sentence or different sentences. The nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet are -- (1) Cause(E1,E2) (2) Cause(E2,E1) (3) Enable(E1,E2) (4) Enable(E2,E1) (5) Prevent(E1,E2) (6) Prevent(E2,E1) (7) Hinder(E1,E2) (8) Hinder(E2,E1) (9) Other.
### Supported Tasks
Causal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.
## Dataset Structure
### Data Instances
An example of a data sample looks as follows:
```
{
"E1": "Florinef",
"E2": "fluid retention",
"Text": "Treated with <e1>Florinef</e1> in the past, was d/c'd due to <e2>fluid retention</e2>.",
"Label": 0
}
```
### Data Fields
The data fields are the same among all the splits.
- `E1`: a `string` value.
- `E2`: a `string` value.
- `Text`: a `large_string` value.
- `Label`: a `ClassLabel` categorical value.
### Data Splits
The original dataset that gets downloaded from the [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) has all the data in a single split. The dataset loading provided here through huggingface datasets splits the data into the following train, validation and test splits for convenience.
| name |train|validation|test|
|---------|----:|---------:|---:|
|mimicause| 1953| 489 | 272|
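As a rough loading sketch (hedged: the exact `data_dir` handling depends on the dataset script; here it is assumed to point at the directory containing the manually downloaded `mimicause.zip`):

```python
from datasets import load_dataset

# mimicause.zip must first be downloaded manually from the DBMI portal;
# data_dir below is assumed to be the folder that holds it.
dataset = load_dataset("pensieves/mimicause", data_dir="path/to/manual/download")

sample = dataset["train"][0]
print(sample["Text"])

# "Label" is a ClassLabel feature, so the integer can be mapped back to
# its relation name (e.g. Cause(E1,E2)).
label_feature = dataset["train"].features["Label"]
print(label_feature.int2str(sample["Label"]))
```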
## Additional Information
### Citation Information
```
@inproceedings{khetan-etal-2022-mimicause,
title={MIMICause: Representation and automatic extraction of causal relation types from clinical notes},
author={Vivek Khetan and Md Imbesat Hassan Rizvi and Jessica Huber and Paige Bartusiak and Bogdan Sacaleanu and Andrew Fano},
booktitle ={Findings of the Association for Computational Linguistics: ACL 2022},
month={may},
year={2022},
publisher={Association for Computational Linguistics},
address={Dublin, The Republic of Ireland},
url={},
doi={},
pages={},
}
``` | pensieves/mimicause | [
"license:apache-2.0",
"arxiv:2110.07090",
"region:us"
] | 2022-03-07T20:33:38+00:00 | {"license": "apache-2.0", "pretty_name": "MIMICause"} | 2022-03-29T13:54:48+00:00 | [
"2110.07090"
] | [] | TAGS
#license-apache-2.0 #arxiv-2110.07090 #region-us
| Dataset Card for "MIMICause"
============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Additional Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Paper: MIMICause: Representation and automatic extraction of causal relation types from clinical notes
* Size of downloaded dataset files: 333.4 KB
* Size of the generated dataset: 491.2 KB
* Total amount of disk used: 668.2 KB
### Dataset Summary
MIMICause Dataset is a dataset for representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires manual download of the URL file from the Community Annotations Downloads section of the n2c2 dataset on the Harvard's DBMI Data Portal after signing their agreement forms, which is a quick and easy procedure.
The dataset has 2714 samples having both explicit and implicit causality in which entities are in the same sentence or different sentences. The nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet are -- (1) Cause(E1,E2) (2) Cause(E2,E1) (3) Enable(E1,E2) (4) Enable(E2,E1) (5) Prevent(E1,E2) (6) Prevent(E2,E1) (7) Hinder(E1,E2) (8) Hinder(E2,E1) (9) Other.
### Supported Tasks
Causal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.
Dataset Structure
-----------------
### Data Instances
An example of a data sample looks as follows:
### Data Fields
The data fields are the same among all the splits.
* 'E1': a 'string' value.
* 'E2': a 'string' value.
* 'Text': a 'large\_string' value.
* 'Label': a 'ClassLabel' categorical value.
### Data Splits
The original dataset that gets downloaded from the Harvard's DBMI Data Portal has all the data in a single split. The dataset loading provided here through huggingface datasets splits the data into the following train, validation and test splits for convenience.
Additional Information
----------------------
| [
"### Dataset Summary\n\n\nMIMICause Dataset is a dataset for representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires manual download of the URL file from the Community Annotations Downloads section of the n2c2 dataset on the Harvard's DBMI Data Portal after signing their agreement forms, which is a quick and easy procedure.\n\n\nThe dataset has 2714 samples having both explicit and implicit causality in which entities are in the same sentence or different sentences. The nine semantic causal relations (with directionality) between entitities E1 and E2 in a text snippets are -- (1) Cause(E1,E2) (2) Cause(E2,E1) (3) Enable(E1,E2) (4) Enable(E2,E1) (5) Prevent(E1,E2) (6) Prevent(E2,E1) (7) Hinder(E1,E2) (8) Hinder(E2,E1) (9) Other.",
"### Supported Tasks\n\n\nCausal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of a data sample looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all the splits.\n\n\n* 'E1': a 'string' value.\n* 'E2': a 'string' value.\n* 'Text': a 'large\\_string' value.\n* 'Label': a 'ClassLabel' categorical value.",
"### Data Splits\n\n\nThe original dataset that gets downloaded from the Harvard's DBMI Data Portal have all the data in a single split. The dataset loading provided here through huggingface datasets splits the data into the following train, validation and test splits for convenience.\n\n\n\nAdditional Information\n----------------------"
] | [
"TAGS\n#license-apache-2.0 #arxiv-2110.07090 #region-us \n",
"### Dataset Summary\n\n\nMIMICause Dataset is a dataset for representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires manual download of the URL file from the Community Annotations Downloads section of the n2c2 dataset on the Harvard's DBMI Data Portal after signing their agreement forms, which is a quick and easy procedure.\n\n\nThe dataset has 2714 samples having both explicit and implicit causality in which entities are in the same sentence or different sentences. The nine semantic causal relations (with directionality) between entitities E1 and E2 in a text snippets are -- (1) Cause(E1,E2) (2) Cause(E2,E1) (3) Enable(E1,E2) (4) Enable(E2,E1) (5) Prevent(E1,E2) (6) Prevent(E2,E1) (7) Hinder(E1,E2) (8) Hinder(E2,E1) (9) Other.",
"### Supported Tasks\n\n\nCausal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of a data sample looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all the splits.\n\n\n* 'E1': a 'string' value.\n* 'E2': a 'string' value.\n* 'Text': a 'large\\_string' value.\n* 'Label': a 'ClassLabel' categorical value.",
"### Data Splits\n\n\nThe original dataset that gets downloaded from the Harvard's DBMI Data Portal have all the data in a single split. The dataset loading provided here through huggingface datasets splits the data into the following train, validation and test splits for convenience.\n\n\n\nAdditional Information\n----------------------"
] |
86d2ca7da33fbef822c6a0786c12eaa8cb3772fa |
# Qasper into squad version
This is a change of format of [qasper](https://huggingface.co/datasets/qasper) dataset into squad format. | z-uo/qasper-squad | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2022-03-08T09:20:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "qasper-squad", "language_bcp47": ["en-US"]} | 2022-10-25T09:02:49+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-English #region-us
|
# Qasper into squad version
This is a change of format of qasper dataset into squad format. | [
"# Quasper into squad version\n\nThis is a change of format of qasper dataset into squad format."
] | [
"TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-English #region-us \n",
"# Quasper into squad version\n\nThis is a change of format of qasper dataset into squad format."
] |
4a906f0b97bc7341bfc5d4453ae23a78edefc0b3 |
# Dataset Card for the-antiwork-subreddit-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-antiwork-subreddit-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
### Dataset Summary
This corpus contains the complete data for the activity of the /r/Antiwork subreddit until 2022-02-18.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
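As an illustrative sketch of working with the fields above (the configuration names `posts` and `comments` are assumptions about how the two files are exposed, not something this card confirms):

```python
from datasets import load_dataset

# Hypothetical configuration names for the two files described above.
posts = load_dataset("SocialGrep/the-antiwork-subreddit-dataset", "posts", split="train")
comments = load_dataset("SocialGrep/the-antiwork-subreddit-dataset", "comments", split="train")

# Post-only fields such as 'title' and comment-only fields such as 'body'
# live in their respective files; shared fields such as 'score' exist in both.
print(posts[0]["title"], posts[0]["score"])
print(comments[0]["body"], comments[0]["sentiment"])
```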
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/the-antiwork-subreddit-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-08T21:09:51+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T16:57:34+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for the-antiwork-subreddit-dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains the complete data for the activity of the /r/Antiwork subreddit until 2022-02-18.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for the-antiwork-subreddit-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity of the /r/Antiwork subreddit until 2022-02-18.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for the-antiwork-subreddit-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity of the /r/Antiwork subreddit until 2022-02-18.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
90b930b5609f5f668c765a5d23f9610d5d0dbcf1 | Dataset for Loyal Health Inc Software Engineer Machine Learning Interview | christianloyal/loyal_clinc_MLE | [
"license:mit",
"region:us"
] | 2022-03-09T00:42:08+00:00 | {"license": "mit"} | 2022-03-10T17:50:54+00:00 | [] | [] | TAGS
#license-mit #region-us
| Dataset for Loyal Health Inc Software Engineer Machine Learning Interview | [] | [
"TAGS\n#license-mit #region-us \n"
] |
1b9776677fd2d5b21056e200089942709d0c3206 | This is my first dataset | hadehuang/testdataset | [
"region:us"
] | 2022-03-09T08:20:00+00:00 | {} | 2022-03-09T08:24:49+00:00 | [] | [] | TAGS
#region-us
| This is my first dataset | [] | [
"TAGS\n#region-us \n"
] |
09dbf84b296f8ecf26bed37536f39a14a2048657 | # Dataset Card for Emoevent
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [EmoEvent dataset repository](https://github.com/fmplaza/EmoEvent)
- **Paper: EmoEvent:** [A Multilingual Emotion Corpus based on different Events](https://aclanthology.org/2020.lrec-1.186.pdf)
- **Leaderboard:** [Leaderboard for EmoEvent / Spanish version](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
- **Point of Contact: [email protected]**
### Dataset Summary
EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following Ekman’s six basic emotion model (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Moreover, the tweets are annotated as offensive (OFF) or non-offensive (NO).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class emotion classification and binary offensive classification.
Competition [EmoEvalEs task on emotion detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
### Languages
- Spanish
- English
## Dataset Structure
### Data Instances
For each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event. Two examples are shown below.
```
{'id': 'a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'ARYA DE MI VIDA. ERES MAS ÉPICA QUE EL GOL DE INIESTA JODER #JuegodeTronos #VivePoniente',
'offensive': 'NO',
'emotion': 'joy',
}
```
```
{'id': '3YCT0L9OMMFP7KWKQSTJRJO0YHUSN2a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'The #NotreDameCathedralFire is indeed sad and people call all offered donations humane acts, but please if you have money to donate, donate to humans and help bring food to their tables and affordable education first. What more humane than that? #HumanityFirst',
'offensive': 'NO',
'emotion': 'sadness',
}
```
### Data Fields
- `id`: a string to identify the tweet
- `event`: a string containing the event associated with the tweet
- `tweet`: a string containing the text of the tweet
- `offensive`: a string containing the offensive gold label
- `emotion`: a string containing the emotion gold label
### Data Splits
The EmoEvent dataset has 2 subsets: EmoEvent_es (Spanish version) and EmoEvent_en (English version)
Each subset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics subsets.
| EmoEvent_es | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,723 |
| Validation | 844 |
| Test | 1,656 |
| EmoEvent_en | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,112 |
| Validation | 744 |
| Test | 1,447 |
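A hedged loading sketch (the configuration name below is assumed to match the subset names above; check the repository for the exact identifiers):

```python
from datasets import load_dataset

# Assumed configuration name for the Spanish subset; "EmoEvent_en" for English.
emoevent_es = load_dataset("fmplaza/EmoEvent", "EmoEvent_es")

tweet = emoevent_es["train"][0]
print(tweet["tweet"])      # text of the tweet
print(tweet["emotion"])    # one of the seven emotion labels
print(tweet["offensive"])  # OFF / NO
```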
## Dataset Creation
### Source Data
Twitter
#### Who are the annotators?
Amazon Mechanical Turkers
## Additional Information
### Licensing Information
The EmoEvent dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{plaza-del-arco-etal-2020-emoevent,
title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}",
author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\’o}pez}, L. Alfonso and {Mart{\’i}n-Valdivia}, M. Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France", publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.186", pages = "1492--1498",
language = "English",
ISBN = "979-10-95546-34-4"
}
``` | fmplaza/EmoEvent | [
"language:en",
"language:es",
"license:apache-2.0",
"region:us"
] | 2022-03-09T10:17:46+00:00 | {"language": ["en", "es"], "license": "apache-2.0"} | 2024-02-06T14:28:03+00:00 | [] | [
"en",
"es"
] | TAGS
#language-English #language-Spanish #license-apache-2.0 #region-us
| Dataset Card for Emoevent
=========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Source Data
+ Annotations
* Additional Information
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Repository: EmoEvent dataset repository
* Paper: EmoEvent: A Multilingual Emotion Corpus based on different Events
* Leaderboard: Leaderboard for EmoEvent / Spanish version
* Point of Contact: fmplaza@URL
### Dataset Summary
EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following Ekman’s six basic emotion model (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Moreover, the tweets are annotated as offensive (OFF) or non-offensive (NO).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class emotion classification and binary offensive classification.
Competition EmoEvalEs task on emotion detection for Spanish at IberLEF 2021
### Languages
* Spanish
* English
Dataset Structure
-----------------
### Data Instances
For each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event.
### Data Fields
* 'id': a string to identify the tweet
* 'event': a string containing the event associated with the tweet
* 'tweet': a string containing the text of the tweet
* 'offensive': a string containing the offensive gold label
* 'emotion': a string containing the emotion gold label
### Data Splits
The EmoEvent dataset has 2 subsets: EmoEvent\_es (Spanish version) and EmoEvent\_en (English version)
Each subset contains 3 splits: *train*, *validation*, and *test*. Below are the statistics subsets.
Dataset Creation
----------------
### Source Data
Twitter
#### Who are the annotators?
Amazon Mechanical Turkers
Additional Information
----------------------
### Licensing Information
The EmoEvent dataset is released under the Apache-2.0 License.
| [
"### Dataset Summary\n\n\nEmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.\nThree annotators labeled the tweets following the six Ekman’s basic emotion model (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Morevoer, the tweets are annotated as offensive (OFF) or non-offensive (NO).",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for multi-class emotion classification and binary offensive classification.\n\n\nCompetition EmoEvalEs task on emotion detection for Spanish at IberLEF 2021",
"### Languages\n\n\n* Spanish\n* English\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event. See the to explore more examples.",
"### Data Fields\n\n\n* 'id': a string to identify the tweet\n* 'event': a string containing the event associated with the tweet\n* 'tweet': a string containing the text of the tweet\n* 'offensive': a string containing the offensive gold label\n* 'emotion': a string containing the emotion gold label",
"### Data Splits\n\n\nThe EmoEvent dataset has 2 subsets: EmoEvent\\_es (Spanish version) and EmoEvent\\_en (English version)\n\n\nEach subset contains 3 splits: *train*, *validation*, and *test*. Below are the statistics subsets.\n\n\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nTwitter",
"#### Who are the annotators?\n\n\nAmazon Mechanical Turkers\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe EmoEvent dataset is released under the Apache-2.0 License."
] | [
"TAGS\n#language-English #language-Spanish #license-apache-2.0 #region-us \n",
"### Dataset Summary\n\n\nEmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.\nThree annotators labeled the tweets following the six Ekman’s basic emotion model (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Morevoer, the tweets are annotated as offensive (OFF) or non-offensive (NO).",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for multi-class emotion classification and binary offensive classification.\n\n\nCompetition EmoEvalEs task on emotion detection for Spanish at IberLEF 2021",
"### Languages\n\n\n* Spanish\n* English\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event. See the to explore more examples.",
"### Data Fields\n\n\n* 'id': a string to identify the tweet\n* 'event': a string containing the event associated with the tweet\n* 'tweet': a string containing the text of the tweet\n* 'offensive': a string containing the offensive gold label\n* 'emotion': a string containing the emotion gold label",
"### Data Splits\n\n\nThe EmoEvent dataset has 2 subsets: EmoEvent\\_es (Spanish version) and EmoEvent\\_en (English version)\n\n\nEach subset contains 3 splits: *train*, *validation*, and *test*. Below are the statistics subsets.\n\n\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nTwitter",
"#### Who are the annotators?\n\n\nAmazon Mechanical Turkers\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe EmoEvent dataset is released under the Apache-2.0 License."
] |
d74c67aec2ac5a2f561bcb30aa8e1fc7d7d88b92 |
# Dataset Card for "IndicParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicParaphrase is the paraphrasing dataset released as part of IndicNLG Suite. Each
input is paired with up to 5 references. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 5.57M.
### Supported Tasks and Leaderboards
**Tasks:** Paraphrase generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One example from the `hi` dataset is given below in JSON format.
```
{
'id': '1',
'input': 'निजी क्षेत्र में प्रदेश की 75 प्रतिशत नौकरियां हरियाणा के युवाओं के लिए आरक्षित की जाएगी।',
'references': ['प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।',
'युवाओं के लिए हरियाणा की सभी प्राइवेट नौकरियों में 75 प्रतिशत आरक्षण लागू किया जाएगा।',
'निजी क्षेत्र में 75 प्रतिशत आरक्षित लागू कर प्रदेश के युवाओं का रोजगार सुनिश्चत किया जाएगा।',
'प्राईवेट कम्पनियों में हरियाणा के नौजवानों को 75 प्रतिशत नौकरियां में आरक्षित की जाएगी।',
'प्रदेश की प्राइवेट फैक्टरियों में 75 फीसदी रोजगार हरियाणा के युवाओं के लिए आरक्षित किए जाएंगे।'],
'target': 'प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `pivot (string)`: English sentence used as the pivot
- `input (string)`: Input sentence
- `references (list of strings)`: Paraphrases of `input`, ordered according to the least n-gram overlap
- `target (string)`: The first reference (most dissimilar paraphrase)
### Data Splits
We first select 10K instances each for the validation and test sets and put the remaining instances in the training set. `Assamese (as)`, due to its low-resource nature, could only be split into validation and test sets with 4,420 examples each.
Per-language train-dev-test example counts are given below:
Language | ISO 639-1 Code |Train | Dev | Test |
--------------|----------------|-------|-----|------|
Assamese | as | - | 4,420 | 4,420 |
Bengali | bn | 890,445 | 10,000 | 10,000 |
Gujarati | gu | 379,202 | 10,000 | 10,000 |
Hindi | hi | 929,507 | 10,000 | 10,000 |
Kannada | kn | 522,148 | 10,000 | 10,000 |
Malayalam | ml |761,933 | 10,000 | 10,000 |
Marathi | mr |406,003 | 10,000 | 10,000 |
Oriya | or | 105,970 | 10,000 | 10,000 |
Punjabi | pa | 266,704 | 10,000 | 10,000 |
Tamil | ta | 497,798 | 10,000 | 10,000 |
Telugu | te | 596,283 | 10,000 | 10,000 |
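A loading sketch (assuming each language is exposed as its own configuration keyed by the ISO 639-1 code from the table above):

```python
from datasets import load_dataset

# Hindi subset; other languages would use their ISO 639-1 codes.
hi_paraphrase = load_dataset("ai4bharat/IndicParaphrase", "hi")

example = hi_paraphrase["validation"][0]
print(example["input"])            # input sentence
print(example["target"])           # most dissimilar reference paraphrase
print(len(example["references"]))  # up to 5 reference paraphrases
```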
## Dataset Creation
### Curation Rationale
[More information needed]
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
| ai4bharat/IndicParaphrase | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-09T11:28:53+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-paraphrase-generation"], "pretty_name": "IndicParaphrase"} | 2022-10-13T05:08:55+00:00 | [
"2203.05437"
] | [
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us
| Dataset Card for "IndicParaphrase"
==================================
Table of Contents
-----------------
* Dataset Card Creation Guide
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
* Point of Contact:
### Dataset Summary
IndicParaphrase is the paraphrasing dataset released as part of IndicNLG Suite. Each
input is paired with up to 5 references. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 5.57M.
### Supported Tasks and Leaderboards
Tasks: Paraphrase generation
Leaderboards: Currently there is no Leaderboard for this dataset.
### Languages
* 'Assamese (as)'
* 'Bengali (bn)'
* 'Gujarati (gu)'
* 'Kannada (kn)'
* 'Hindi (hi)'
* 'Malayalam (ml)'
* 'Marathi (mr)'
* 'Oriya (or)'
* 'Punjabi (pa)'
* 'Tamil (ta)'
* 'Telugu (te)'
Dataset Structure
-----------------
### Data Instances
One example from the 'hi' dataset is given below in JSON format.
### Data Fields
* 'id (string)': Unique identifier.
* 'pivot (string)': English sentence used as the pivot
* 'input (string)': Input sentence
* 'references (list of strings)': Paraphrases of 'input', ordered according to the least n-gram overlap
* 'target (string)': The first reference (most dissimilar paraphrase)
### Data Splits
We first select 10K instances each for the validation and test sets and put the remaining instances in the training set. 'Assamese (as)', due to its low-resource nature, could only be split into validation and test sets with 4,420 examples each.
Per-language train-dev-test example counts are given below:
Dataset Creation
----------------
### Curation Rationale
[More information needed]
### Source Data
Samanantar dataset
#### Initial Data Collection and Normalization
Detailed in the paper
#### Who are the source language producers?
Detailed in the paper
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
Additional Information
----------------------
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.
If you use any of the datasets, models or code modules, please cite the following paper:
### Contributions
# Comparing model predictions and ground truth labels with Rubrix and Hugging Face
## Build dataset
You can skip this step if you run:
```python
from datasets import load_dataset
import rubrix as rb
ds = rb.DatasetForTextClassification.from_datasets(load_dataset("rubrix/sst2_with_predictions", split="train"))
```
Otherwise, the following cell will run the pipeline over the training set and store labels and predictions.
```python
from datasets import load_dataset
from transformers import pipeline, AutoModelForSequenceClassification
import rubrix as rb
name = "distilbert-base-uncased-finetuned-sst-2-english"
# Override id2label so labels are lowercase (the model config ships uppercase NEGATIVE/POSITIVE names)
model = AutoModelForSequenceClassification.from_pretrained(name, id2label={0: 'negative', 1: 'positive'})
nlp = pipeline("sentiment-analysis", model=model, tokenizer=name, return_all_scores=True)
dataset = load_dataset("glue", "sst2", split="train")
# batch predict
def predict(example):
return {"prediction": nlp(example["sentence"])}
# add predictions to the dataset
dataset = dataset.map(predict, batched=True).rename_column("sentence", "text")
# build rubrix dataset from hf dataset
ds = rb.DatasetForTextClassification.from_datasets(dataset, annotation="label")
```
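For reference, with `return_all_scores=True` each mapped row stores the raw pipeline output as a list of label/score dictionaries; a quick sanity check looks like this (the scores shown are illustrative):

```python
# Inspect the raw pipeline output stored by the map step above (scores are illustrative).
print(dataset[0]["prediction"])
# e.g. [{'label': 'negative', 'score': 0.0012}, {'label': 'positive', 'score': 0.9988}]
```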
```python
# Log the dataset to a running Rubrix instance and start exploring and sharing URLs with interesting subsets, etc.
rb.log(ds, "sst2")
```
```python
ds.to_datasets().push_to_hub("rubrix/sst2_with_predictions")
```
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]
## Analyze mispredictions and ambiguous labels
### With the UI
With Rubrix's UI you can:
- Combine filters and full-text/DSL queries to quickly find important samples
- All URLs contain the state, so you can share specific dataset regions with collaborators and annotators to work on.
- Sort examples by score, as well as by custom metadata fields.

### Programmatically
Let's find all the wrong predictions from Python. This is useful for bulk operations (relabelling, discarding, etc.) as well as for further programmatic analysis; a sketch of a bulk operation is shown after the first example below.
```python
import pandas as pd
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>this particular , anciently demanding métier</td>
<td>[(negative, 0.9386059045791626), (positive, 0.06139408051967621)]</td>
<td>positive</td>
</tr>
<tr>
<th>1</th>
<td>under our skin</td>
<td>[(positive, 0.7508484721183777), (negative, 0.24915160238742828)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>evokes a palpable sense of disconnection , made all the more poignant by the incessant use of cell phones .</td>
<td>[(negative, 0.6634528636932373), (positive, 0.3365470767021179)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>into a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(positive, 0.6178210377693176), (negative, 0.3821789622306824)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>transcends ethnic lines .</td>
<td>[(positive, 0.9758220314979553), (negative, 0.024177948012948036)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>is barely</td>
<td>[(negative, 0.9922297596931458), (positive, 0.00777028314769268)]</td>
<td>positive</td>
</tr>
<tr>
<th>7</th>
<td>a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(negative, 0.9738760590553284), (positive, 0.026123959571123123)]</td>
<td>positive</td>
</tr>
<tr>
<th>8</th>
<td>of hollywood heart-string plucking</td>
<td>[(positive, 0.9889695644378662), (negative, 0.011030420660972595)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>the intimate , unguarded moments of folks who live in unusual homes --</td>
<td>[(positive, 0.9967381358146667), (negative, 0.0032618637196719646)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>steals the show</td>
<td>[(negative, 0.8031412363052368), (positive, 0.1968587338924408)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>enough</td>
<td>[(positive, 0.7941301465034485), (negative, 0.2058698982000351)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>this is the kind of movie that you only need to watch for about thirty seconds before you say to yourself , ` ah , yes ,</td>
<td>[(negative, 0.7889454960823059), (positive, 0.21105451881885529)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>troubled and determined homicide cop</td>
<td>[(negative, 0.6632784008979797), (positive, 0.33672159910202026)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>human nature is a goofball movie , in the way that malkovich was , but it tries too hard</td>
<td>[(positive, 0.5959018468856812), (negative, 0.40409812331199646)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to watch too many barney videos</td>
<td>[(negative, 0.9909896850585938), (positive, 0.00901023019105196)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
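As a minimal sketch of such a bulk operation (assuming the `rubrix` client used above, where `rb.load` returns records with a mutable `status` attribute), confidently mispredicted records can be reloaded and sent back to the annotation queue:

```python
# Minimal sketch of a bulk operation over mispredicted records.
# Assumes the rubrix 0.x client API used elsewhere in this tutorial,
# where rb.load returns records with a mutable `status` attribute.
import rubrix as rb

# Reload the slice as records instead of converting it to pandas
records = rb.load("sst2", query="predicted:ko and score:{0.99 TO *}")

for record in records:
    # Send these records back to the annotation queue for a second look
    record.status = "Default"

rb.log(records, "sst2")
```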
```python
df.annotation.hist()
```
<AxesSubplot:>

```python
# Get dataset slice with wrong predictions annotated as negative
df = rb.load("sst2", query="predicted:ko and annotated_as:negative").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>and social commentary</td>
<td>[(positive, 0.7863275408744812), (negative, 0.2136724889278412)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>before pulling the plug on the conspirators and averting an american-russian armageddon</td>
<td>[(positive, 0.6992855072021484), (negative, 0.30071452260017395)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>in tight pants and big tits</td>
<td>[(positive, 0.7850217819213867), (negative, 0.2149781733751297)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>that it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(positive, 0.6591460108757019), (negative, 0.3408539891242981)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>actress-producer and writer</td>
<td>[(positive, 0.8167378306388855), (negative, 0.1832621842622757)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>gives devastating testimony to both people 's capacity for evil and their heroic capacity for good .</td>
<td>[(positive, 0.8960123062133789), (negative, 0.10398765653371811)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>deep into the girls ' confusion and pain as they struggle tragically to comprehend the chasm of knowledge that 's opened between them</td>
<td>[(positive, 0.9729612469673157), (negative, 0.027038726955652237)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>a younger lad in zen and the art of getting laid in this prickly indie comedy of manners and misanthropy</td>
<td>[(positive, 0.9875985980033875), (negative, 0.012401451356709003)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>15</th>
<td>so preachy-keen and</td>
<td>[(positive, 0.9644021391868591), (negative, 0.035597823560237885)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>` christian bale 's quinn ( is ) a leather clad grunge-pirate with a hairdo like gandalf in a wind-tunnel and a simply astounding cor-blimey-luv-a-duck cockney accent . '</td>
<td>[(positive, 0.9713286757469177), (negative, 0.028671346604824066)]</td>
<td>negative</td>
</tr>
<tr>
<th>18</th>
<td>passion , grief and fear</td>
<td>[(positive, 0.9849751591682434), (negative, 0.015024829655885696)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to keep the extremes of screwball farce and blood-curdling family intensity on one continuum</td>
<td>[(positive, 0.8838250637054443), (negative, 0.11617499589920044)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions made with high confidence (score above 0.99)
df = rb.load("sst2", query="predicted:ko and score:{0.99 TO *}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>will no doubt rally to its cause , trotting out threadbare standbys like ` masterpiece ' and ` triumph ' and all that malarkey ,</td>
<td>[(negative, 0.9936562180519104), (positive, 0.006343740504235029)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>somehow manages to bring together kevin pollak , former wrestler chyna and dolly parton</td>
<td>[(negative, 0.9979034662246704), (positive, 0.002096540294587612)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>the bottom line with nemesis is the same as it has been with all the films in the series : fans will undoubtedly enjoy it , and the uncommitted need n't waste their time on it</td>
<td>[(positive, 0.995850682258606), (negative, 0.004149340093135834)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>is genial but never inspired , and little</td>
<td>[(negative, 0.9921030402183533), (positive, 0.007896988652646542)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>heaped upon a project of such vast proportions need to reap more rewards than spiffy bluescreen technique and stylish weaponry .</td>
<td>[(negative, 0.9958089590072632), (positive, 0.004191054962575436)]</td>
<td>positive</td>
</tr>
<tr>
<th>10</th>
<td>than recommended -- as visually bland as a dentist 's waiting room , complete with soothing muzak and a cushion of predictable narrative rhythms</td>
<td>[(negative, 0.9988711476325989), (positive, 0.0011287889210507274)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>spectacle and</td>
<td>[(positive, 0.9941601753234863), (negative, 0.005839805118739605)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>groan and</td>
<td>[(negative, 0.9987359642982483), (positive, 0.0012639997294172645)]</td>
<td>positive</td>
</tr>
<tr>
<th>13</th>
<td>'re not likely to have seen before , but beneath the exotic surface ( and exotic dancing ) it 's surprisingly old-fashioned .</td>
<td>[(positive, 0.9908103942871094), (negative, 0.009189637377858162)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>its metaphors are opaque enough to avoid didacticism , and</td>
<td>[(negative, 0.990602970123291), (positive, 0.00939704105257988)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>by kevin bray , whose crisp framing , edgy camera work , and wholesale ineptitude with acting , tone and pace very obviously mark him as a video helmer making his feature debut</td>
<td>[(positive, 0.9973387122154236), (negative, 0.0026612314395606518)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>evokes the frustration , the awkwardness and the euphoria of growing up , without relying on the usual tropes .</td>
<td>[(positive, 0.9989104270935059), (negative, 0.0010896018939092755)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>, incoherence and sub-sophomoric</td>
<td>[(negative, 0.9962475895881653), (positive, 0.003752368036657572)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>seems intimidated by both her subject matter and the period trappings of this debut venture into the heritage business .</td>
<td>[(negative, 0.9923072457313538), (positive, 0.007692818529903889)]</td>
<td>positive</td>
</tr>
<tr>
<th>19</th>
<td>despite downplaying her good looks , carries a little too much ai n't - she-cute baggage into her lead role as a troubled and determined homicide cop to quite pull off the heavy stuff .</td>
<td>[(negative, 0.9948075413703918), (positive, 0.005192441400140524)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions made with low confidence (score below 0.6)
df = rb.load("sst2", query="predicted:ko and score:{* TO 0.6}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>is , truly and thankfully , a one-of-a-kind work</td>
<td>[(positive, 0.5819814801216125), (negative, 0.41801854968070984)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>starts as a tart little lemon drop of a movie and</td>
<td>[(negative, 0.5641832947731018), (positive, 0.4358167052268982)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>between flaccid satire and what</td>
<td>[(negative, 0.5532692074775696), (positive, 0.44673076272010803)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(negative, 0.5386656522750854), (positive, 0.46133431792259216)]</td>
<td>positive</td>
</tr>
<tr>
<th>5</th>
<td>who liked there 's something about mary and both american pie movies</td>
<td>[(negative, 0.5086333751678467), (positive, 0.4913666248321533)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>many good ideas as bad is the cold comfort that chin 's film serves up with style and empathy</td>
<td>[(positive, 0.557632327079773), (negative, 0.44236767292022705)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>about its ideas and</td>
<td>[(positive, 0.518638551235199), (negative, 0.48136141896247864)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>of a sick and evil woman</td>
<td>[(negative, 0.5554516315460205), (positive, 0.4445483684539795)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>though this rude and crude film does deliver a few gut-busting laughs</td>
<td>[(positive, 0.5045541524887085), (negative, 0.4954459071159363)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>to squeeze the action and our emotions into the all-too-familiar dramatic arc of the holocaust escape story</td>
<td>[(negative, 0.5050069093704224), (positive, 0.49499306082725525)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>that throws a bunch of hot-button items in the viewer 's face and asks to be seen as hip , winking social commentary</td>
<td>[(negative, 0.5873904228210449), (positive, 0.41260960698127747)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>'s soulful and unslick</td>
<td>[(positive, 0.5931627750396729), (negative, 0.40683719515800476)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
from rubrix.metrics.commons import *
```
```python
text_length("sst2", query="predicted:ko").visualize()
```
(negative, 0.03245845437049866)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>16</th>\n <td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>\n <td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>17</th>\n <td>troubled and determined homicide cop</td>\n <td>[(negative, 0.6632784008979797), (positive, 0.33672159910202026)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>18</th>\n <td>human nature is a goofball movie , in the way that malkovich was , but it tries too hard</td>\n <td>[(positive, 0.5959018468856812), (negative, 0.40409812331199646)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>19</th>\n <td>to watch too many barney videos</td>\n <td>[(negative, 0.9909896850585938), (positive, 0.00901023019105196)]</td>\n <td>positive</td>\n </tr>\n </tbody>\n</table>\n</div>\n\n\n\n\n\n\n\n\n <AxesSubplot:>\n\n\n\n\n \n!png\n \n\n\n\n\n\n\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>text</th>\n <th>prediction</th>\n <th>annotation</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>\n <td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>1</th>\n <td>a minimalist beauty and the beast</td>\n <td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>2</th>\n <td>accept it as life and</td>\n <td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>3</th>\n <td>plunges you into a reality that is , more often then not , difficult and sad ,</td>\n <td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>4</th>\n <td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>\n <td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>5</th>\n <td>and social commentary</td>\n <td>[(positive, 0.7863275408744812), (negative, 0.2136724889278412)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>6</th>\n <td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>\n <td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>7</th>\n <td>before pulling the plug on the conspirators and averting an american-russian armageddon</td>\n <td>[(positive, 0.6992855072021484), (negative, 0.30071452260017395)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>8</th>\n <td>in tight pants and big tits</td>\n <td>[(positive, 0.7850217819213867), (negative, 0.2149781733751297)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>9</th>\n <td>that it certainly does n't feel like a film that strays past the two and a half mark</td>\n <td>[(positive, 0.6591460108757019), (negative, 0.3408539891242981)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>10</th>\n <td>actress-producer and writer</td>\n <td>[(positive, 0.8167378306388855), 
(negative, 0.1832621842622757)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>11</th>\n <td>gives devastating testimony to both people 's capacity for evil and their heroic capacity for good .</td>\n <td>[(positive, 0.8960123062133789), (negative, 0.10398765653371811)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>12</th>\n <td>deep into the girls ' confusion and pain as they struggle tragically to comprehend the chasm of knowledge that 's opened between them</td>\n <td>[(positive, 0.9729612469673157), (negative, 0.027038726955652237)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>13</th>\n <td>a younger lad in zen and the art of getting laid in this prickly indie comedy of manners and misanthropy</td>\n <td>[(positive, 0.9875985980033875), (negative, 0.012401451356709003)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>14</th>\n <td>get on a board and , uh , shred ,</td>\n <td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>15</th>\n <td>so preachy-keen and</td>\n <td>[(positive, 0.9644021391868591), (negative, 0.035597823560237885)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>16</th>\n <td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>\n <td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>17</th>\n <td>' christian bale 's quinn ( is ) a leather clad grunge-pirate with a hairdo like gandalf in a wind-tunnel and a simply astounding cor-blimey-luv-a-duck cockney accent . '</td>\n <td>[(positive, 0.9713286757469177), (negative, 0.028671346604824066)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>18</th>\n <td>passion , grief and fear</td>\n <td>[(positive, 0.9849751591682434), (negative, 0.015024829655885696)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>19</th>\n <td>to keep the extremes of screwball farce and blood-curdling family intensity on one continuum</td>\n <td>[(positive, 0.8838250637054443), (negative, 0.11617499589920044)]</td>\n <td>negative</td>\n </tr>\n </tbody>\n</table>\n</div>\n\n\n\n\n\n\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>text</th>\n <th>prediction</th>\n <th>annotation</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>\n <td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>1</th>\n <td>accept it as life and</td>\n <td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>2</th>\n <td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>\n <td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>3</th>\n <td>will no doubt rally to its cause , trotting out threadbare standbys like ' masterpiece ' and ' triumph ' and all that malarkey ,</td>\n <td>[(negative, 0.9936562180519104), (positive, 0.006343740504235029)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>4</th>\n <td>we do n't get williams ' usual tear and a smile , just sneers 
and bile , and the spectacle is nothing short of refreshing .</td>\n <td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>5</th>\n <td>somehow manages to bring together kevin pollak , former wrestler chyna and dolly parton</td>\n <td>[(negative, 0.9979034662246704), (positive, 0.002096540294587612)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>6</th>\n <td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>\n <td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>7</th>\n <td>the bottom line with nemesis is the same as it has been with all the films in the series : fans will undoubtedly enjoy it , and the uncommitted need n't waste their time on it</td>\n <td>[(positive, 0.995850682258606), (negative, 0.004149340093135834)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>8</th>\n <td>is genial but never inspired , and little</td>\n <td>[(negative, 0.9921030402183533), (positive, 0.007896988652646542)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>9</th>\n <td>heaped upon a project of such vast proportions need to reap more rewards than spiffy bluescreen technique and stylish weaponry .</td>\n <td>[(negative, 0.9958089590072632), (positive, 0.004191054962575436)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>10</th>\n <td>than recommended -- as visually bland as a dentist 's waiting room , complete with soothing muzak and a cushion of predictable narrative rhythms</td>\n <td>[(negative, 0.9988711476325989), (positive, 0.0011287889210507274)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>11</th>\n <td>spectacle and</td>\n <td>[(positive, 0.9941601753234863), (negative, 0.005839805118739605)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>12</th>\n <td>groan and</td>\n <td>[(negative, 0.9987359642982483), (positive, 0.0012639997294172645)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>13</th>\n <td>'re not likely to have seen before , but beneath the exotic surface ( and exotic dancing ) it 's surprisingly old-fashioned .</td>\n <td>[(positive, 0.9908103942871094), (negative, 0.009189637377858162)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>14</th>\n <td>its metaphors are opaque enough to avoid didacticism , and</td>\n <td>[(negative, 0.990602970123291), (positive, 0.00939704105257988)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>15</th>\n <td>by kevin bray , whose crisp framing , edgy camera work , and wholesale ineptitude with acting , tone and pace very obviously mark him as a video helmer making his feature debut</td>\n <td>[(positive, 0.9973387122154236), (negative, 0.0026612314395606518)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>16</th>\n <td>evokes the frustration , the awkwardness and the euphoria of growing up , without relying on the usual tropes .</td>\n <td>[(positive, 0.9989104270935059), (negative, 0.0010896018939092755)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>17</th>\n <td>, incoherence and sub-sophomoric</td>\n <td>[(negative, 0.9962475895881653), (positive, 0.003752368036657572)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>18</th>\n <td>seems intimidated by both her subject matter and the period trappings of this debut venture into the heritage business .</td>\n <td>[(negative, 0.9923072457313538), (positive, 0.007692818529903889)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>19</th>\n <td>despite downplaying her good looks , carries a little too much ai n't - she-cute 
baggage into her lead role as a troubled and determined homicide cop to quite pull off the heavy stuff .</td>\n <td>[(negative, 0.9948075413703918), (positive, 0.005192441400140524)]</td>\n <td>positive</td>\n </tr>\n </tbody>\n</table>\n</div>\n\n\n\n\n\n\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>text</th>\n <th>prediction</th>\n <th>annotation</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>get on a board and , uh , shred ,</td>\n <td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>1</th>\n <td>is , truly and thankfully , a one-of-a-kind work</td>\n <td>[(positive, 0.5819814801216125), (negative, 0.41801854968070984)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>2</th>\n <td>starts as a tart little lemon drop of a movie and</td>\n <td>[(negative, 0.5641832947731018), (positive, 0.4358167052268982)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>3</th>\n <td>between flaccid satire and what</td>\n <td>[(negative, 0.5532692074775696), (positive, 0.44673076272010803)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>4</th>\n <td>it certainly does n't feel like a film that strays past the two and a half mark</td>\n <td>[(negative, 0.5386656522750854), (positive, 0.46133431792259216)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>5</th>\n <td>who liked there 's something about mary and both american pie movies</td>\n <td>[(negative, 0.5086333751678467), (positive, 0.4913666248321533)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>6</th>\n <td>many good ideas as bad is the cold comfort that chin 's film serves up with style and empathy</td>\n <td>[(positive, 0.557632327079773), (negative, 0.44236767292022705)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>7</th>\n <td>about its ideas and</td>\n <td>[(positive, 0.518638551235199), (negative, 0.48136141896247864)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>8</th>\n <td>of a sick and evil woman</td>\n <td>[(negative, 0.5554516315460205), (positive, 0.4445483684539795)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>9</th>\n <td>though this rude and crude film does deliver a few gut-busting laughs</td>\n <td>[(positive, 0.5045541524887085), (negative, 0.4954459071159363)]</td>\n <td>negative</td>\n </tr>\n <tr>\n <th>10</th>\n <td>to squeeze the action and our emotions into the all-too-familiar dramatic arc of the holocaust escape story</td>\n <td>[(negative, 0.5050069093704224), (positive, 0.49499306082725525)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>11</th>\n <td>that throws a bunch of hot-button items in the viewer 's face and asks to be seen as hip , winking social commentary</td>\n <td>[(negative, 0.5873904228210449), (positive, 0.41260960698127747)]</td>\n <td>positive</td>\n </tr>\n <tr>\n <th>12</th>\n <td>'s soulful and unslick</td>\n <td>[(positive, 0.5931627750396729), (negative, 0.40683719515800476)]</td>\n <td>negative</td>\n </tr>\n </tbody>\n</table>\n</div>\n\n\n\n\n\n\n\n!URL"
] |
8279d43fc305c5248886d841cb49bd8380456ec9 |
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts and debug codebases that would eventually use the original OSCAR dataset.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
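As a rough illustration, this extract can be loaded with the Hugging Face `datasets` library like the original corpus. The subset name below follows the original OSCAR naming convention and is only an assumption; list the repository's configurations to get the exact names.

```python
from datasets import get_dataset_config_names, load_dataset

# List the available subsets of this extract (names are repository-specific).
print(get_dataset_config_names("nthngdy/oscar-mini"))

# Load one subset; "unshuffled_deduplicated_en" mirrors the original OSCAR
# config names and is an assumption here.
dataset = load_dataset("nthngdy/oscar-mini", "unshuffled_deduplicated_en", split="train")

# Each example is a plain-text document extracted from Common Crawl.
print(dataset[0]["text"][:200])
```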
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed with a new pipeline derived from [fastText's one](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of possible parallel operations at a given time bounded by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/) so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
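To make the scheduling idea above concrete, here is a hypothetical Python analogue (goclassy itself is written in Go and is not reproduced here): each WET file is an independent job, and a fixed-size worker pool bounds how many files are processed at the same time.

```python
import gzip
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_WORKERS = 8  # bound on parallel operations, analogous to the available threads

def process_wet_file(path):
    """Hypothetical stand-in for one unit of work on a local, gzip-compressed
    WET file: decompress it and count its lines. The real goclassy workers also
    download the file and classify each line by language."""
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        n_lines = sum(1 for _ in f)
    return path, n_lines

def run_pipeline(wet_paths):
    results = []
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        # Every file is submitted as its own job; a new file is picked up as
        # soon as the scheduler (here, the thread pool) frees a worker, instead
        # of waiting for the previous file to finish completely.
        futures = [pool.submit(process_wet_file, p) for p in wet_paths]
        for future in as_completed(futures):
            results.append(future.result())
    return results
```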
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
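The line-level filter is simple enough to sketch directly. A rough Python equivalent of the two stated rules (the exact goclassy implementation may differ) is:

```python
def keep_line(raw_line: bytes) -> bool:
    """Line-level filter described above: drop lines with invalid UTF-8 and
    lines shorter than 100 UTF-8 characters; everything else is classified."""
    try:
        decoded = raw_line.decode("utf-8")
    except UnicodeDecodeError:
        return False  # invalid UTF-8: discarded, never classified
    return len(decoded.rstrip("\n")) >= 100

# Only lines passing the filter would be fed to the language classifier.
sample = [b"too short", b"a" * 150, b"\xff\xfe invalid bytes"]
kept = [line for line in sample if keep_line(line)]
```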
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| nthngdy/oscar-mini | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"region:us"
] | 2022-03-09T14:18:51+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "am", "ar", "arz", "as", "az", "azb", "ba", "be", "bg", "bn", "bo", "br", "ca", "ce", "ceb", "ckb", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mhr", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nds", "ne", "nl", "nn", "no", "or", "os", "pa", "pl", "pnb", "ps", "pt", "ro", "ru", "sa", "sah", "sd", "sh", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "yi", "zh"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["oscar"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR"} | 2022-12-06T11:05:51+00:00 | [
"2010.14571"
] | [
"af",
"am",
"ar",
"arz",
"as",
"az",
"azb",
"ba",
"be",
"bg",
"bn",
"bo",
"br",
"ca",
"ce",
"ceb",
"ckb",
"cs",
"cv",
"cy",
"da",
"de",
"dv",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gl",
"gu",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mhr",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"nds",
"ne",
"nl",
"nn",
"no",
"or",
"os",
"pa",
"pl",
"pnb",
"ps",
"pt",
"ro",
"ru",
"sa",
"sah",
"sd",
"sh",
"si",
"sk",
"sl",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tk",
"tl",
"tr",
"tt",
"ug",
"uk",
"ur",
"uz",
"vi",
"yi",
"zh"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #source_datasets-oscar #language-Afrikaans #language-Amharic #language-Arabic #language-Egyptian Arabic #language-Assamese #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Belarusian #language-Bulgarian #language-Bengali #language-Tibetan #language-Breton #language-Catalan #language-Chechen #language-Cebuano #language-Central Kurdish #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Luxembourgish #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Low German #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Western Panjabi #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Yakut #language-Sindhi #language-Serbo-Croatian #language-Sinhala #language-Slovak #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkmen #language-Tagalog #language-Turkish #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Yiddish #language-Chinese #license-cc0-1.0 #arxiv-2010.14571 #region-us
|
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts and debug codebases that would eventually use the original OSCAR dataset.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Point of Contact:
### Dataset Summary
OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed with a new pipeline derived from fastText's one, called _goclassy_. Goclassy reuses the fastText linear classifier and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of possible parallel operations at a given time bounded by the number of available threads instead of the number of CPUs. Goclassy is implemented in the Go programming language so it lets the Go runtime handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
### Source Data
#### Initial Data Collection and Normalization
Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and URL policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the November 2018 snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The fastText linear classifier is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by third parties.
## Additional Information
### Dataset Curators
The corpus was put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") URL
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Contributions
Thanks to @pjox and @lhoestq for adding this dataset.
| [
"## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts and debug codebases that would eventually use the original OSCAR dataset.\nUsing this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.",
"# Dataset Card for \"oscar\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact:",
"### Dataset Summary\n\nOSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form.",
"### Supported Tasks and Leaderboards\n\nOSCAR is mainly intended to pretrain language models and word represantations.",
"### Languages\n\nAll the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.",
"## Dataset Structure\n\nWe show detailed information for all the configurations of the dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nOSCAR was constructed new pipeline derived from the fastText's one, called _goclassy_. Goclassy reuses the fastText linear classifier and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.\n\nThe order of operations is more or less the same as in the fastText pre-processing pipeline but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation but bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the Go programming language so it lets the Go runtime handle the scheduling of the processes. Thus the goclassy's pipeline one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one, a new file will start downloading and processing as soon as the scheduler is able to allocate a new process.\n\nFiltering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarted and are not classified. After all files are proccesed the deduplicated versions are constructed and everything is then splitted in shards and compressed.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nCommon Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and URL policies.\n\nEach monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.\n\nTo construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the November 2018 snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.",
"#### Who are the source language producers?\n\nThe data comes from multiple web pages in a large variety of languages.",
"### Annotations\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nBeing constructed from Common Crawl, Personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, specially in the case of text-generation models.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.",
"### Discussion of Biases\n\nOSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.",
"### Other Known Limitations\n\nThe fastText linear classifier is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by third parties.",
"## Additional Information",
"### Dataset Curators\n\nThe corpus was put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.",
"### Licensing Information\n\n These data are released under this licensing scheme\n We do not own any of the text from which these data has been extracted.\n We license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\n This work is published from: France.\n\n Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\n We will comply to legitimate requests by removing the affected sources from the next release of the corpus.",
"### Contributions\n\nThanks to @pjox and @lhoestq for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #source_datasets-oscar #language-Afrikaans #language-Amharic #language-Arabic #language-Egyptian Arabic #language-Assamese #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Belarusian #language-Bulgarian #language-Bengali #language-Tibetan #language-Breton #language-Catalan #language-Chechen #language-Cebuano #language-Central Kurdish #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Luxembourgish #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Low German #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Western Panjabi #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Yakut #language-Sindhi #language-Serbo-Croatian #language-Sinhala #language-Slovak #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkmen #language-Tagalog #language-Turkish #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Yiddish #language-Chinese #license-cc0-1.0 #arxiv-2010.14571 #region-us \n",
"## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts and debug codebases that would eventually use the original OSCAR dataset.\nUsing this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.",
"# Dataset Card for \"oscar\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact:",
"### Dataset Summary\n\nOSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form.",
"### Supported Tasks and Leaderboards\n\nOSCAR is mainly intended to pretrain language models and word represantations.",
"### Languages\n\nAll the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.",
"## Dataset Structure\n\nWe show detailed information for all the configurations of the dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nOSCAR was constructed new pipeline derived from the fastText's one, called _goclassy_. Goclassy reuses the fastText linear classifier and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.\n\nThe order of operations is more or less the same as in the fastText pre-processing pipeline but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation but bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the Go programming language so it lets the Go runtime handle the scheduling of the processes. Thus the goclassy's pipeline one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one, a new file will start downloading and processing as soon as the scheduler is able to allocate a new process.\n\nFiltering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarted and are not classified. After all files are proccesed the deduplicated versions are constructed and everything is then splitted in shards and compressed.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nCommon Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and URL policies.\n\nEach monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.\n\nTo construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the November 2018 snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.",
"#### Who are the source language producers?\n\nThe data comes from multiple web pages in a large variety of languages.",
"### Annotations\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nBeing constructed from Common Crawl, Personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, specially in the case of text-generation models.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.",
"### Discussion of Biases\n\nOSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.",
"### Other Known Limitations\n\nThe fastText linear classifier is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by third parties.",
"## Additional Information",
"### Dataset Curators\n\nThe corpus was put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.",
"### Licensing Information\n\n These data are released under this licensing scheme\n We do not own any of the text from which these data has been extracted.\n We license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\n This work is published from: France.\n\n Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\n We will comply to legitimate requests by removing the affected sources from the next release of the corpus.",
"### Contributions\n\nThanks to @pjox and @lhoestq for adding this dataset."
] |
f1cb70125a6b1ad5dd0cc97501476309cf540b3d | # Dataset Card for Contextualized CommonGen(C2Gen)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
  - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [Non-Residual Prompting](https://github.com/FreddeFrallan/Non-Residual-Prompting)
- **Paper:** [Fine-Grained Controllable Text Generation Using Non-Residual Prompting](https://aclanthology.org/2022.acl-long.471)
- **Point of Contact:** [Fredrik Carlsson](mailto:[email protected])
### Dataset Summary
CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion. However, the task does not allow context to be included. Therefore, to complement CommonGen, we provide an extended test set C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471) where an additional context is provided for each set of target words. The task is therefore reformulated to both generate commonsensical text that includes the given words, and to have the generated text adhere to the given context.
### Languages
English
## Dataset Structure
### Data Instances
{"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]}
### Data Fields
- context: the generated text by the model should adhere to this text
- words: the words that should be included in the generated continuation
### Data Splits
Test
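
Since only a test split is provided, evaluation code typically just iterates over these examples. The snippet below is a minimal loading sketch using the `datasets` library; the repository id and split name are taken from this card and should be treated as assumptions to verify on the Hub.

```python
from datasets import load_dataset

# Minimal loading sketch. Repository id and split name are assumptions taken from
# this card (Non-Residual-Prompting/C2Gen, test-only split); verify them on the Hub.
c2gen = load_dataset("Non-Residual-Prompting/C2Gen", split="test")

for example in c2gen.select(range(3)):
    # each example pairs a context passage with the set of words to include
    print(example)
```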
## Dataset Creation
### Curation Rationale
C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers
to focus on methods that do not support context. This is orthogonal to their belief that many application areas necessitate the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.
### Initial Data Collection and Normalization
The dataset was constructed with the help of the crowdsourcing platform Mechanical Turk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a high recorded acceptance rate were allowed to participate. Finally, all contexts were manually verified and fixed in terms of typos and poor quality. Furthermore, we want to raise awareness that C2Gen can contain personal data or offensive content. If you encounter such a sample, please reach out to us.
## Licensing Information
license: cc-by-sa-4.0
| Non-Residual-Prompting/C2Gen | [
"task_categories:text-generation",
"size_categories:<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1911.03705",
"region:us"
] | 2022-03-09T16:09:50+00:00 | {"language": ["en"], "license": ["cc-by-sa-4.0"], "size_categories": ["<100K"], "task_categories": ["text-generation"]} | 2022-10-25T09:02:58+00:00 | [
"1911.03705"
] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-<100K #language-English #license-cc-by-sa-4.0 #arxiv-1911.03705 #region-us
| # Dataset Card for Contextualized CommonGen(C2Gen)
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Initial Data Collection and Normalization
- Licensing Information
## Dataset Description
- Repository: Non-Residual Prompting
- Paper: Fine-Grained Controllable Text Generation Using Non-Residual Prompting
- Point of Contact: Fredrik Carlsson
### Dataset Summary
CommonGen Lin et al., 2020 is a dataset for the constrained text generation task of word inclusion. But the task does not allow to include context. Therefore, to complement CommonGen, we provide an extended test set C2Gen Carlsson et al., 2022 where an additional context is provided for each set of target words. The task is therefore reformulated to both generate commonsensical text which include the given words, and also have the generated text adhere to the given context.
### Languages
English
## Dataset Structure
### Data Instances
{"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]}
### Data Fields
- context: the generated text by the model should adhere to this text
- words: the words that should be included in the generated continuation
### Data Splits
Test
## Dataset Creation
### Curation Rationale
C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers
to focus on methods that do not support context. Which is orthogonal to their belief that many application areas necessitates the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.
### Initial Data Collection and Normalization
The dataset was constructed with the help the crowd sourcing platform MechanicalTurk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a recorded high acceptance were allowed to participate. Finally, all contexts were manually verified, and fixed in terms of typos and poor quality. Furthermore we want to raise awareness that C2GEN can contain personal data or offensive content. If you would encounter such a sample, please reach out to us.
## Licensing Information
license: cc-by-sa-4.0
| [
"# Dataset Card for Contextualized CommonGen(C2Gen)",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Initial Data Collection and Normalization\n- Licensing Information",
"## Dataset Description\n\n- Repository: Non-Residual Prompting\n- Paper: Fine-Grained Controllable Text Generation Using Non-Residual Prompting\n- Point of Contact: Fredrik Carlsson",
"### Dataset Summary\n\nCommonGen Lin et al., 2020 is a dataset for the constrained text generation task of word inclusion. But the task does not allow to include context. Therefore, to complement CommonGen, we provide an extended test set C2Gen Carlsson et al., 2022 where an additional context is provided for each set of target words. The task is therefore reformulated to both generate commonsensical text which include the given words, and also have the generated text adhere to the given context.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\n{\"Context\": \"The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.\", \"Words\": [\"follow\", \"series\", \"voice\"]}",
"### Data Fields\n\n- context: the generated text by the model should adhere to this text\n- words: the words that should be included in the generated continuation",
"### Data Splits\n\nTest",
"## Dataset Creation",
"### Curation Rationale\n\nC2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers\nto focus on methods that do not support context. Which is orthogonal to their belief that many application areas necessitates the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.",
"### Initial Data Collection and Normalization\n\nThe dataset was constructed with the help the crowd sourcing platform MechanicalTurk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a recorded high acceptance were allowed to participate. Finally, all contexts were manually verified, and fixed in terms of typos and poor quality. Furthermore we want to raise awareness that C2GEN can contain personal data or offensive content. If you would encounter such a sample, please reach out to us.",
"## Licensing Information\n\nlicense: cc-by-sa-4.0"
] | [
"TAGS\n#task_categories-text-generation #size_categories-<100K #language-English #license-cc-by-sa-4.0 #arxiv-1911.03705 #region-us \n",
"# Dataset Card for Contextualized CommonGen(C2Gen)",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Initial Data Collection and Normalization\n- Licensing Information",
"## Dataset Description\n\n- Repository: Non-Residual Prompting\n- Paper: Fine-Grained Controllable Text Generation Using Non-Residual Prompting\n- Point of Contact: Fredrik Carlsson",
"### Dataset Summary\n\nCommonGen Lin et al., 2020 is a dataset for the constrained text generation task of word inclusion. But the task does not allow to include context. Therefore, to complement CommonGen, we provide an extended test set C2Gen Carlsson et al., 2022 where an additional context is provided for each set of target words. The task is therefore reformulated to both generate commonsensical text which include the given words, and also have the generated text adhere to the given context.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\n{\"Context\": \"The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.\", \"Words\": [\"follow\", \"series\", \"voice\"]}",
"### Data Fields\n\n- context: the generated text by the model should adhere to this text\n- words: the words that should be included in the generated continuation",
"### Data Splits\n\nTest",
"## Dataset Creation",
"### Curation Rationale\n\nC2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers\nto focus on methods that do not support context. Which is orthogonal to their belief that many application areas necessitates the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.",
"### Initial Data Collection and Normalization\n\nThe dataset was constructed with the help the crowd sourcing platform MechanicalTurk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a recorded high acceptance were allowed to participate. Finally, all contexts were manually verified, and fixed in terms of typos and poor quality. Furthermore we want to raise awareness that C2GEN can contain personal data or offensive content. If you would encounter such a sample, please reach out to us.",
"## Licensing Information\n\nlicense: cc-by-sa-4.0"
] |
a8158d1fac10864c3424d53662fe63bf7d82dd87 |
# Dataset Card for CLUTRR
## Table of Contents
## Dataset Description
### Dataset Summary
**CLUTRR** (**C**ompositional **L**anguage **U**nderstanding and **T**ext-based **R**elational **R**easoning), a diagnostic benchmark suite, was first introduced in [Sinha et al. (2019)](https://arxiv.org/abs/1908.06177) to test the systematic generalization and inductive reasoning capabilities of NLU systems.
The CLUTRR benchmark allows us to test a model’s ability for **systematic generalization** by testing on stories that contain unseen combinations of logical rules, and test for the various forms of **model robustness** by adding different kinds of superfluous noise facts to the stories.
### Dataset Task
CLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story.
Join the CLUTRR community in https://www.cs.mcgill.ca/~ksinha4/clutrr/
## Dataset Structure
We show detailed information for all 14 configurations of the dataset.
### configurations:
**id**: a unique series of characters and numbers that identify each instance <br>
**story**: one semi-synthetic story involving hypothetical families<br>
**query**: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities<br>
**target**: indicator for the correct relation for the query <br>
**target_text**: text for the correct relation for the query <br>
the indicator-to-relation mapping is as follows (also reproduced as a Python dictionary after this field list): <br> "aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3,
"sister": 4,
"father": 5,
"mother": 6,
"grandmother": 7,
"uncle": 8,
"daughter-in-law": 9,
"grandson": 10,
"granddaughter": 11,
"father-in-law": 12,
"mother-in-law": 13,
"nephew": 14,
"son": 15,
"daughter": 16,
"niece": 17,
"husband": 18,
"wife": 19,
"sister-in-law": 20 <br>
**clean\_story**: the story without noise factors<br>
**proof\_state**: the logical rule of the kinship generation <br>
**f\_comb**: the kinships of the query followed by the logical rule<br>
**task\_name**: the task of the sub-dataset, in the form "task_[num1].[num2]"<br>
The first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts*; 3- Supporting facts*; 4- Disconnected facts*.<br>
The second number [num2] directly indicates the length of clauses for the task target.<br>
*for example:*<br>
*task_1.2 -- task requiring clauses of length 2 without adding noise facts*<br>
*task_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story*<br>
**story\_edges**: all the edges in the kinship graph<br>
**edge\_types**: similar to the f\_comb, another form of the query's kinships followed by the logical rule <br>
**query\_edge**: the corresponding edge of the target query in the kinship graph<br>
**genders**: genders of the names appearing in the story<br>
**task\_split**: train,test <br>
*Further explanation of Irrelevant facts, Supporting facts and Disconnected facts can be found in the 3.5 Robust Reasoning section in https://arxiv.org/abs/1908.06177
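
For convenience, the label mapping listed above can be written directly as a Python dictionary. This is simply a transcription of the indicator list; whether the released dataset also exposes these names through a feature object is not stated here, so decode defensively.

```python
# Reproduced from the indicator list above: relation name -> target index.
RELATION_TO_TARGET = {
    "aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3,
    "sister": 4, "father": 5, "mother": 6, "grandmother": 7,
    "uncle": 8, "daughter-in-law": 9, "grandson": 10, "granddaughter": 11,
    "father-in-law": 12, "mother-in-law": 13, "nephew": 14, "son": 15,
    "daughter": 16, "niece": 17, "husband": 18, "wife": 19,
    "sister-in-law": 20,
}

# Inverse mapping, useful for decoding the integer `target` field back to text.
TARGET_TO_RELATION = {idx: name for name, idx in RELATION_TO_TARGET.items()}
```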
### Data Instances
An example of 'train' in Task 1.2 looks as follows.
```
{
"id": b2b9752f-d7fa-46a9-83ae-d474184c35b6,
"story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.",
"query": ('April', 'Ashley'),
"target": 7,
"target_text": "grandmother",
"clean_story": [Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.,
"proof_state": [{('April', 'grandmother', 'Ashley'): [('April', 'mother', 'Lillian'), ('Lillian', 'mother', 'Ashley')]}],
"f_comb": "mother-mother",
"task_name": "task_1.2",
"story_edges": [(0, 1), (1, 2)],
"edge_types": ['mother', 'mother'],
"query_edge": (0, 2),
"genders": "April:female,Lillian:female,Ashley:female",
"task_split": trian
}
```
### Data Splits
#### Data Split Name
(corresponding with the name used in the paper)
| task_split | split name in paper | train & validation task | test task |
| :---: | :---: | :-: | :-: |
| gen_train23_test2to10 | data_089907f8 | 1.2, 1.3 | 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 |
| gen_train234_test2to10 | data_db9b8f04 | 1.2, 1.3, 1.4| 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 |
| rob_train_clean_23_test_all_23 | data_7c5b0e70 | 1.2,1.3 | 1.2, 1.3, 2.3, 3.3, 4.3 |
| rob_train_sup_23_test_all_23 | data_06b8f2a1 | 2.2, 2.3 | 2.2, 2.3, 1.3, 3.3, 4.3 |
| rob_train_irr_23_test_all_23 | data_523348e6 | 3.2, 3.3 | 3.2, 3.3, 1.3, 2.3, 4.3 |
| rob_train_disc_23_test_all_23 | data_d83ecc3e | 4.2, 4.3 | 4.2, 4.3, 1.3, 2.3, 3.3 |
#### Data Split Summary
Number of Instances in each split
| task_split | train | validation | test |
| :-: | :---: | :---: | :---: |
| gen_train23_test2to10 | 9074 | 2020 | 1146 |
| gen_train234_test2to10 | 12064 | 3019 | 1048 |
| rob_train_clean_23_test_all_23 | 8098 | 2026 | 447 |
| rob_train_disc_23_test_all_23 | 8080 | 2020 | 445 |
| rob_train_irr_23_test_all_23 | 8079 | 2020 | 444 |
| rob_train_sup_23_test_all_23 | 8123 | 2031 | 447 |
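
To experiment with a particular generalization or robustness setting, the corresponding `task_split` name can be passed as a configuration. The sketch below shows one possible way to do this with the `datasets` library; the repository id and configuration name are taken from this card and are assumptions to verify on the Hub. Field types (for example whether `query` is stored as a string or a sequence) should be checked from `clutrr["train"].features`.

```python
from datasets import load_dataset

# Illustrative sketch only: the configuration name is assumed to match the
# task_split column above (e.g. "gen_train23_test2to10"); check the Hub repo CLUTRR/v1.
clutrr = load_dataset("CLUTRR/v1", "gen_train23_test2to10")

print(clutrr)              # expected: train / validation / test splits
print(clutrr["train"][0])  # one story with its query, target and graph metadata
```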
## Citation Information
```
@article{sinha2019clutrr,
Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton},
Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text},
Year = {2019},
journal = {Empirical Methods of Natural Language Processing (EMNLP)},
arxiv = {1908.06177}
}
``` | CLUTRR/v1 | [
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"arxiv:1908.06177",
"region:us"
] | 2022-03-09T19:33:00+00:00 | {"language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"]} | 2022-10-25T09:03:19+00:00 | [
"1908.06177"
] | [
"en"
] | TAGS
#multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-unknown #arxiv-1908.06177 #region-us
| Dataset Card for CLUTRR
=======================
Table of Contents
-----------------
Dataset Description
-------------------
### Dataset Summary
CLUTRR (Compositional Language Understanding and Text-based Relational Reasoning), a diagnostic benchmark suite, is first introduced in (URL to test the systematic generalization and inductive reasoning capabilities of NLU systems.
The CLUTRR benchmark allows us to test a model’s ability for systematic generalization by testing on stories that contain unseen combinations of logical rules, and test for the various forms of model robustness by adding different kinds of superfluous noise facts to the stories.
### Dataset Task
CLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story.
Join the CLUTRR community in URL
Dataset Structure
-----------------
We show detailed information for all 14 configurations of the dataset.
### configurations:
id: a unique series of characters and numbers that identify each instance
story: one semi-synthetic story involving hypothetical families
query: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities
target: indicator for the correct relation for the query
target\_text: text for the correct relation for the query
the indicator follows the rule as follows:
"aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3,
"sister": 4,
"father": 5,
"mother": 6,
"grandmother": 7,
"uncle": 8,
"daughter-in-law": 9,
"grandson": 10,
"granddaughter": 11,
"father-in-law": 12,
"mother-in-law": 13,
"nephew": 14,
"son": 15,
"daughter": 16,
"niece": 17,
"husband": 18,
"wife": 19,
"sister-in-law": 20
clean\_story: the story without noise factors
proof\_state: the logical rule of the kinship generation
f\_comb: the kinships of the query followed by the logical rule
task\_name: the task of the sub-dataset in a form of "task\_[num1].[num2]"
The first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts\*; 3- Supporting facts\*; 4- Disconnected facts\*.
The second number [num2] directly indicates the length of clauses for the task target.
*for example:*
*task\_1.2 -- task requiring clauses of length 2 without adding noise facts*
*task\_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story*
story\_edges: all the edges in the kinship graph
edge\_types: similar to the f\_comb, another form of the query's kinships followed by the logical rule
query\_edge: the corresponding edge of the target query in the kinship graph
genders: genders of names appeared in the story
task\_split: train,test
\*Further explanation of Irrelevant facts, Supporting facts and Disconnected facts can be found in the 3.5 Robust Reasoning section in URL
### Data Instances
An example of 'train'in Task 1.2 looks as follows.
### Data Splits
#### Data Split Name
(corresponding with the name used in the paper)
#### Data Split Summary
Number of Instances in each split
| [
"### Dataset Summary\n\n\nCLUTRR (Compositional Language Understanding and Text-based Relational Reasoning), a diagnostic benchmark suite, is first introduced in (URL to test the systematic generalization and inductive reasoning capabilities of NLU systems.\n\n\nThe CLUTRR benchmark allows us to test a model’s ability for systematic generalization by testing on stories that contain unseen combinations of logical rules, and test for the various forms of model robustness by adding different kinds of superfluous noise facts to the stories.",
"### Dataset Task\n\n\nCLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story.\n\n\nJoin the CLUTRR community in URL\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all 14 configurations of the dataset.",
"### configurations:\n\n\nid: a unique series of characters and numbers that identify each instance \n\nstory: one semi-synthetic story involving hypothetical families \n\nquery: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities \n\ntarget: indicator for the correct relation for the query \n\ntarget\\_text: text for the correct relation for the query \n\nthe indicator follows the rule as follows: \n \"aunt\": 0, \"son-in-law\": 1, \"grandfather\": 2, \"brother\": 3,\n\"sister\": 4,\n\"father\": 5,\n\"mother\": 6,\n\"grandmother\": 7,\n\"uncle\": 8,\n\"daughter-in-law\": 9,\n\"grandson\": 10,\n\"granddaughter\": 11,\n\"father-in-law\": 12,\n\"mother-in-law\": 13,\n\"nephew\": 14,\n\"son\": 15,\n\"daughter\": 16,\n\"niece\": 17,\n\"husband\": 18,\n\"wife\": 19,\n\"sister-in-law\": 20 \n\nclean\\_story: the story without noise factors \n\nproof\\_state: the logical rule of the kinship generation \n\nf\\_comb: the kinships of the query followed by the logical rule \n\ntask\\_name: the task of the sub-dataset in a form of \"task\\_[num1].[num2]\" \n\nThe first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts\\*; 3- Supporting facts\\*; 4- Disconnected facts\\*. \n\nThe second number [num2] directly indicates the length of clauses for the task target. \n\n*for example:* \n\n*task\\_1.2 -- task requiring clauses of length 2 without adding noise facts* \n\n*task\\_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story* \n\nstory\\_edges: all the edges in the kinship graph \n\nedge\\_types: similar to the f\\_comb, another form of the query's kinships followed by the logical rule \n\nquery\\_edge: the corresponding edge of the target query in the kinship graph \n\ngenders: genders of names appeared in the story \n\ntask\\_split: train,test \n\n\n\n\\*Further explanation of Irrelevant facts, Supporting facts and Disconnected facts can be found in the 3.5 Robust Reasoning section in URL",
"### Data Instances\n\n\nAn example of 'train'in Task 1.2 looks as follows.",
"### Data Splits",
"#### Data Split Name\n\n\n(corresponding with the name used in the paper)",
"#### Data Split Summary\n\n\nNumber of Instances in each split"
] | [
"TAGS\n#multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-unknown #arxiv-1908.06177 #region-us \n",
"### Dataset Summary\n\n\nCLUTRR (Compositional Language Understanding and Text-based Relational Reasoning), a diagnostic benchmark suite, is first introduced in (URL to test the systematic generalization and inductive reasoning capabilities of NLU systems.\n\n\nThe CLUTRR benchmark allows us to test a model’s ability for systematic generalization by testing on stories that contain unseen combinations of logical rules, and test for the various forms of model robustness by adding different kinds of superfluous noise facts to the stories.",
"### Dataset Task\n\n\nCLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story.\n\n\nJoin the CLUTRR community in URL\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all 14 configurations of the dataset.",
"### configurations:\n\n\nid: a unique series of characters and numbers that identify each instance \n\nstory: one semi-synthetic story involving hypothetical families \n\nquery: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities \n\ntarget: indicator for the correct relation for the query \n\ntarget\\_text: text for the correct relation for the query \n\nthe indicator follows the rule as follows: \n \"aunt\": 0, \"son-in-law\": 1, \"grandfather\": 2, \"brother\": 3,\n\"sister\": 4,\n\"father\": 5,\n\"mother\": 6,\n\"grandmother\": 7,\n\"uncle\": 8,\n\"daughter-in-law\": 9,\n\"grandson\": 10,\n\"granddaughter\": 11,\n\"father-in-law\": 12,\n\"mother-in-law\": 13,\n\"nephew\": 14,\n\"son\": 15,\n\"daughter\": 16,\n\"niece\": 17,\n\"husband\": 18,\n\"wife\": 19,\n\"sister-in-law\": 20 \n\nclean\\_story: the story without noise factors \n\nproof\\_state: the logical rule of the kinship generation \n\nf\\_comb: the kinships of the query followed by the logical rule \n\ntask\\_name: the task of the sub-dataset in a form of \"task\\_[num1].[num2]\" \n\nThe first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts\\*; 3- Supporting facts\\*; 4- Disconnected facts\\*. \n\nThe second number [num2] directly indicates the length of clauses for the task target. \n\n*for example:* \n\n*task\\_1.2 -- task requiring clauses of length 2 without adding noise facts* \n\n*task\\_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story* \n\nstory\\_edges: all the edges in the kinship graph \n\nedge\\_types: similar to the f\\_comb, another form of the query's kinships followed by the logical rule \n\nquery\\_edge: the corresponding edge of the target query in the kinship graph \n\ngenders: genders of names appeared in the story \n\ntask\\_split: train,test \n\n\n\n\\*Further explanation of Irrelevant facts, Supporting facts and Disconnected facts can be found in the 3.5 Robust Reasoning section in URL",
"### Data Instances\n\n\nAn example of 'train'in Task 1.2 looks as follows.",
"### Data Splits",
"#### Data Split Name\n\n\n(corresponding with the name used in the paper)",
"#### Data Split Summary\n\n\nNumber of Instances in each split"
] |
095f98c5853b271b00c05bbe4f2167ecdbe8951f |
# Dataset Description
## Dataset Summary
This dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins.
This dataset was parsed from the FASTA file at https://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.fasta.gz.
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
Data Fields: id, description, sequence
Data Splits: None
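
For completeness, the following is a minimal sketch of how the records could be inspected with the `datasets` library. The repository id is taken from this card; since no split layout is documented here, the code simply takes the first available split.

```python
from datasets import load_dataset

# Hedged example: the Hub id (damlab/uniprot) is an assumption taken from this card.
uniprot = load_dataset("damlab/uniprot")

first_split = next(iter(uniprot.values()))
print(first_split[0])  # expected fields: id, description, sequence
```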
## Dataset Creation
The dataset was downloaded and parsed into a `dataset` object and uploaded unchanged.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.
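
The actual parsing code is not included in this card. As an assumption-laden sketch, a FASTA record could be mapped onto the id/description/sequence fields roughly as follows; the real pipeline may differ, for example in how it handles the UniProt header format (`sp|ACCESSION|NAME_SPECIES description`), which could be split further.

```python
def parse_fasta(path):
    """Yield (record_id, description, sequence) tuples from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.rstrip()
            if line.startswith(">"):
                # flush the previous record before starting a new one
                if header is not None:
                    yield (*header, "".join(chunks))
                name, _, desc = line[1:].partition(" ")
                header, chunks = (name, desc), []
            elif line:
                chunks.append(line)
    if header is not None:
        yield (*header, "".join(chunks))
```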
## Considerations for Using the Data
Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV.
Protease inhibitors are a class of drugs to which HIV is known to develop resistance via mutations.
Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of genes from "well studied" genomes. This may impact the "broadness" of the genes contained.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
| damlab/uniprot | [
"region:us"
] | 2022-03-09T20:00:12+00:00 | {"liscence": "mit"} | 2022-03-12T12:08:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Description
## Dataset Summary
This dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins.
This dataset was parsed from the FASTA file at URL
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
Data Fields: id, description, sequence
Data Splits: None
## Dataset Creation
The dataset was downloaded and parsed into a 'dataset' object and uploaded unchanged.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.
## Considerations for Using the Data
Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV.
Protease inhibitors are a class of drugs that HIV is known to develop resistance via mutations.
Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed genes from "well studied" genomes. This may impact the "broadness" of the genes contained.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
| [
"# Dataset Description",
"## Dataset Summary\n\nThis dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins. \n\nThis dataset was parsed from the FASTA file at URL\n\nSupported Tasks and Leaderboards: None \n\nLanguages: English",
"## Dataset Structure",
"### Data Instances\n\nData Fields: id, description, sequence\n\nData Splits: None",
"## Dataset Creation\n\nThe dataset was downloaded and parsed into a 'dataset' object and uploaded unchanged. \n\nInitial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.",
"## Considerations for Using the Data\n\nSocial Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV. \nProtease inhibitors are a class of drugs that HIV is known to develop resistance via mutations. \nThus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations. \n\nDiscussion of Biases: Due to the sampling nature of this database, it is predominantly composed genes from \"well studied\" genomes. This may impact the \"broadness\" of the genes contained.",
"## Additional Information: \n - Dataset Curators: Will Dampier \n - Citation Information: TBA"
] | [
"TAGS\n#region-us \n",
"# Dataset Description",
"## Dataset Summary\n\nThis dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins. \n\nThis dataset was parsed from the FASTA file at URL\n\nSupported Tasks and Leaderboards: None \n\nLanguages: English",
"## Dataset Structure",
"### Data Instances\n\nData Fields: id, description, sequence\n\nData Splits: None",
"## Dataset Creation\n\nThe dataset was downloaded and parsed into a 'dataset' object and uploaded unchanged. \n\nInitial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.",
"## Considerations for Using the Data\n\nSocial Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV. \nProtease inhibitors are a class of drugs that HIV is known to develop resistance via mutations. \nThus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations. \n\nDiscussion of Biases: Due to the sampling nature of this database, it is predominantly composed genes from \"well studied\" genomes. This may impact the \"broadness\" of the genes contained.",
"## Additional Information: \n - Dataset Curators: Will Dampier \n - Citation Information: TBA"
] |
4887946743ee9325f7597ddadb72ece8b74a8105 | annotations_creators:
- Parth Parekh
languages:
- en
licenses:
- MIT
multilinguality:
- monolingual
size_categories:
- 0<n<100
source_datasets:
- original
task_categories:
- sentence-categorization
# Dataset Card for spotifinders
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | juched/spotifinders | [
"region:us"
] | 2022-03-10T01:44:44+00:00 | {} | 2022-03-10T01:46:51+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- Parth Parekh
languages:
- en
licenses:
- MIT
multilinguality:
- monolingual
size_categories:
- 0<n<100
source_datasets:
- original
task_categories:
- sentence-categorization
# Dataset Card for spotifinders
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for spotifinders",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: \r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for spotifinders",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: \r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
51f31e2aa96a98b68b3595acca660904a3ffca33 | # AutoNLP Dataset for project: cat33
## Table of content
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project cat33.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\"\u5341\u56db\u4e94\"\u65f6\u671f\uff0c\u4f9d\u6258\u6d77\u5357\u5730\u7406\u533a\u4f4d\u4f18\u52bf\u548c\u6d77\u6d0b\u8d44\u6e90\u4f18\u52bf\uff0c\u52a0\u5feb\u57f9\u80b2\u58ee\u5927\u6d77\u6d0b\u7ecf\u6d4e\uff0c\u62d3\u5c55\u6d77\u5357\u7ecf\u6d4e\u53d1\u5c55\u84dd\u8272\u7a7a\u95f4\uff0c\u5bf9\u670d\u52a1\u6d77\u6d0b\u5f3a\u56fd\u6218\u7565\u3001\u63a8\u52a8\u6d77\u5357\u81ea\u7531\u8d38\u6613\u6e2f\u5efa\u8bbe\u53ca\u5b9e\u73b0\u81ea\u8eab\u53d1\u5c55\u5177\u6709\u91cd\u8981\u610f\u4e49",
"target": 9
},
{
"text": "\u9010\u6b65\u5b9e\u65bd\u533b\u7597\u5668\u68b0\u552f\u4e00\u6807\u8bc6\uff0c\u52a0\u5f3a\u4e0e\u533b\u7597\u7ba1\u7406\u3001\u533b\u4fdd\u7ba1\u7406\u7b49\u8854\u63a5",
"target": 8
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=32, names=['\u4e92\u8054\u7f51\u670d\u52a1', '\u4ea4\u901a\u8fd0\u8f93', '\u4f11\u95f2\u670d\u52a1', '\u4f20\u5a92', '\u4fe1\u606f\u6280\u672f', '\u516c\u7528\u4e8b\u4e1a', '\u519c\u4e1a', '\u5316\u5de5\u5236\u9020', '\u533b\u836f\u751f\u7269', '\u5546\u4e1a\u8d38\u6613', '\u56fd\u9632\u519b\u5de5', '\u5bb6\u7528\u7535\u5668', '\u5efa\u7b51\u4e1a', '\u623f\u5730\u4ea7', '\u6559\u80b2', '\u6587\u5316', '\u6709\u8272\u91d1\u5c5e', '\u673a\u68b0\u88c5\u5907\u5236\u9020', '\u6797\u4e1a', '\u6c7d\u8f66\u5236\u9020', '\u6e14\u4e1a', '\u7535\u5b50\u5236\u9020', '\u7535\u6c14\u8bbe\u5907', '\u755c\u7267\u4e1a', '\u7eba\u7ec7\u670d\u88c5\u5236\u9020', '\u8f7b\u5de5\u5236\u9020', '\u901a\u4fe1', '\u91c7\u77ff\u4e1a', '\u94a2\u94c1', '\u94f6\u884c', '\u975e\u94f6\u91d1\u878d', '\u98df\u54c1\u996e\u6599'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1836 |
| valid | 460 |
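
As a quick sanity check, the sketch below shows one way a text/target pair could be decoded back to its label name using the `ClassLabel` feature described above. The repository id and split names ("train", "valid") are assumptions based on this card; verify them on the Hub.

```python
from datasets import load_dataset

# Sketch only: repository id and split names are assumptions taken from this card.
cat33 = load_dataset("kyleinincubated/autonlp-data-cat33")

example = cat33["train"][0]
label_names = cat33["train"].features["target"].names  # ClassLabel index -> name
print(example["text"][:50], "->", label_names[example["target"]])
```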
| kyleinincubated/autonlp-data-cat33 | [
"task_categories:text-classification",
"language:zh",
"region:us"
] | 2022-03-10T05:59:36+00:00 | {"language": ["zh"], "task_categories": ["text-classification"]} | 2022-10-25T09:03:04+00:00 | [] | [
"zh"
] | TAGS
#task_categories-text-classification #language-Chinese #region-us
| AutoNLP Dataset for project: cat33
==================================
Table of content
----------------
* Dataset Description
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Descritpion
-------------------
This dataset has been automatically processed by AutoNLP for project cat33.
### Languages
The BCP-47 code for the dataset's language is zh.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is zh.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-Chinese #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is zh.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
ad1f65afa83d161c5860ad126ab75c4287fb6cbe | en poems and genres test | Georgii/poetry-genre | [
"region:us"
] | 2022-03-10T08:09:08+00:00 | {} | 2022-03-10T08:12:23+00:00 | [] | [] | TAGS
#region-us
| en poems and genres test | [] | [
"TAGS\n#region-us \n"
] |
d9845634dc0f9cb48d4a26c9f6d8986fb87d2027 |
# Dataset Card for "IndicHeadlineGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicHeadlineGeneration is the news headline generation dataset released as part of the IndicNLG Suite. Each
input document is paired with its headline as the output. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 1.4M examples.
### Supported Tasks and Leaderboards
**Tasks:** Headline Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '14',
'input': "अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।अरियाना ग्रांडे नई दिल्लीः अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।वहीं इस वीडियो पर कमेंट्स की बाढ़ आ गई है।गाने में मीन गर्ल्स, ब्रिंग इट ऑन, लीगली ब्लॉंड और 13 गोइंग 30 के कुछ फेमस सीन्स को दिखाया गया है।गाने में क्रिस जैनर का कैमियो भी है।बता दें अभी कुछ महीने पहले ही अरियाना के एक्स ब्वॉयफ्रेंड मैक मिलर का 26 साल की उम्र में निधन हो गया था।इस खबर को सुनकर अरियाना टूट सी गई थीं।उन्होंने सोशल मीडिया पर पोस्ट कर कई बार अपनी भावनाएं व्यक्त की।अरियाना ग्रांडे और रैपर मैक मिलर ने करीब 2 साल तक एक दूसरे को डेट किया।मैक के निधन की वजह ड्रग्स की ओवरडोज बताई गई।दोनों की मुलाकात साल 2012 में हुई थी।दोनों ने एक कंसर्ट में साथ कई गानों पर परफॉर्म भी किया था।जिसके बाद दोनों एक दूसरे को डेट करने लगे लेकिन नशे की लत के कारण अरियाना ने उनसे ब्रेकअप कर लिया।पर देश-विदेश की ताजा और स्पेशल स्टोरी पढ़ते हुए अपने आप को रखिए अप-टू-डेट।के लिए क्लिक करें सिनेमा सेक्शन",
'target': 'अरियाना ग्रांडे का नया गाना रिलीज, सोशल मीडिया पर वायरल',
'url': 'https://www.indiatv.in/entertainment/hollywood-ariana-grande-shatters-24-hour-views-record-612835'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: News article as input.
- `target (strings)`: Output as headline of the news article.
- `url (string)`: Source web link of the news article.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 29,631 | 14,592 | 14,808 |
Bengali | bn | 113,424 | 14,739 | 14,568 |
Gujarati | gu | 199,972 | 31,270 | 31,215 |
Hindi | hi | 208,221 | 44,738 | 44,514 |
Kannada | kn | 132,380 | 19,416 | 3,261 |
Malayalam | ml | 10,358 | 5,388 | 5,220 |
Marathi | mr | 114,042 | 14,253 | 14,340 |
Oriya | or | 58,225 | 7,484 | 7,137 |
Punjabi | pa | 48,441 | 6,108 | 6,086 |
Tamil | ta | 60,650 | 7,616 | 7,688 |
Telugu | te | 21,352 | 2,690 | 2,675 |
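
Assuming each language is exposed as its own configuration named after the ISO code in the table above, one language can be loaded as sketched below. Treat the configuration name as an assumption and confirm it on the Hub.

```python
from datasets import load_dataset

# Illustrative only: per-language configurations are assumed to use the ISO codes
# from the table above (e.g. "hi" for Hindi); confirm the exact names on the Hub.
hi_headlines = load_dataset("ai4bharat/IndicHeadlineGeneration", "hi")

sample = hi_headlines["train"][0]
print(sample["input"][:100])  # news article body
print(sample["target"])       # reference headline
```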
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
For Hindi, web sources like [Dainik Bhaskar](https://www.bhaskar.com), [Naidunia](https://www.naidunia.com/), [NDTV](https://ndtv.in/), [Business Standard](https://hindi.business-standard.com/) and [IndiaTV](https://www.indiatv.in/) were used. For the other languages, a modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) dataset was used.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | ai4bharat/IndicHeadlineGeneration | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:27K<n<341K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:58:27+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["27K<n<341K"], "source_datasets": ["original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-headline-generation"], "pretty_name": "IndicHeadlineGeneration"} | 2022-10-13T05:08:20+00:00 | [
"2203.05437"
] | [
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-27K<n<341K #source_datasets-original for Hindi, and modified [IndicGLUE](https-//indicnlp.ai4bharat.org/indic-glue/) for other languages. #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us
| Dataset Card for "IndicHeadlineGeneration"
==========================================
Table of Contents
-----------------
* Dataset Card Creation Guide
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
* Point of Contact:
### Dataset Summary
IndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each
input document is paired with an output as title. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 1.4M.
### Supported Tasks and Leaderboards
Tasks: Headline Generation
Leaderboards: Currently there is no Leaderboard for this dataset.
### Languages
* 'Assamese (as)'
* 'Bengali (bn)'
* 'Gujarati (gu)'
* 'Kannada (kn)'
* 'Hindi (hi)'
* 'Malayalam (ml)'
* 'Marathi (mr)'
* 'Oriya (or)'
* 'Punjabi (pa)'
* 'Tamil (ta)'
* 'Telugu (te)'
Dataset Structure
-----------------
### Data Instances
One random example from the 'hi' dataset is given below in JSON format.
### Data Fields
* 'id (string)': Unique identifier.
* 'input (string)': News article as input.
* 'target (strings)': Output as headline of the news article.
* 'url (string)': Source web link of the news article.
### Data Splits
Here is the number of samples in each split for all the languages.
Dataset Creation
----------------
### Curation Rationale
Detailed in the paper
### Source Data
For hindi, web sources like Dainik Bhaskar, Naidunia, NDTV, Business Standard and IndiaTV. For other languages, modified IndicGLUE dataset.
#### Initial Data Collection and Normalization
Detailed in the paper
#### Who are the source language producers?
Detailed in the paper
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
Additional Information
----------------------
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.
If you use any of the datasets, models or code modules, please cite the following paper:
### Contributions
Detailed in the paper
| [
"### Dataset Summary\n\n\nIndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each\ninput document is paired with an output as title. We create this dataset in eleven\nlanguages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total\nsize of the dataset is 1.4M.",
"### Supported Tasks and Leaderboards\n\n\nTasks: Headline Generation\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'input (string)': News article as input.\n* 'target (strings)': Output as headline of the news article.\n* 'url (string)': Source web link of the news article.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nFor hindi, web sources like Dainik Bhaskar, Naidunia, NDTV, Business Standard and IndiaTV. For other languages, modified IndicGLUE dataset.",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-27K<n<341K #source_datasets-original for Hindi, and modified [IndicGLUE](https-//indicnlp.ai4bharat.org/indic-glue/) for other languages. #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us \n",
"### Dataset Summary\n\n\nIndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each\ninput document is paired with an output as title. We create this dataset in eleven\nlanguages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total\nsize of the dataset is 1.4M.",
"### Supported Tasks and Leaderboards\n\n\nTasks: Headline Generation\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'input (string)': News article as input.\n* 'target (strings)': Output as headline of the news article.\n* 'url (string)': Source web link of the news article.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nFor hindi, web sources like Dainik Bhaskar, Naidunia, NDTV, Business Standard and IndiaTV. For other languages, modified IndicGLUE dataset.",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] |
53cfce5e0ca8da828ee1b6223dcf3ea986582812 |
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '5',
'input': 'जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।',
'target': 'जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर',
'url': 'https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529'
}
```
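Records like the one above can be inspected with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the per-language configuration name (`hi`) is assumed to follow the ISO 639-1 codes used in this card and may differ in the actual loader.

```
from datasets import load_dataset

# Config name assumed to be the ISO 639-1 code ("hi"); adjust if the loader
# exposes languages differently.
dataset = load_dataset("ai4bharat/IndicSentenceSummarization", "hi", split="test")

example = dataset[0]
print(example["input"])   # long input sentence
print(example["target"])  # shorter summary sentence
```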
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (strings)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of the [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | ai4bharat/IndicSentenceSummarization | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:5K<n<112K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:59:05+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["5K<n<112K"], "source_datasets": ["original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-sentence-summarization"], "pretty_name": "IndicSentenceSummarization"} | 2022-10-13T05:08:31+00:00 | [
"2203.05437"
] | [
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-5K<n<112K #source_datasets-original for Hindi, and modified [IndicGLUE](https-//indicnlp.ai4bharat.org/indic-glue/) for other languages. #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us
| Dataset Card for "IndicSentenceSummarization"
=============================================
Table of Contents
-----------------
* Dataset Card Creation Guide
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
* Point of Contact:
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output as summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K.
### Supported Tasks and Leaderboards
Tasks: Sentence Summarization
Leaderboards: Currently there is no Leaderboard for this dataset.
### Languages
* 'Assamese (as)'
* 'Bengali (bn)'
* 'Gujarati (gu)'
* 'Kannada (kn)'
* 'Hindi (hi)'
* 'Malayalam (ml)'
* 'Marathi (mr)'
* 'Oriya (or)'
* 'Punjabi (pa)'
* 'Tamil (ta)'
* 'Telugu (te)'
Dataset Structure
-----------------
### Data Instances
One random example from the 'hi' dataset is given below in JSON format.
### Data Fields
* 'id (string)': Unique identifier.
* 'input (string)': Input sentence.
* 'target (strings)': Output summary.
* 'url (string)': Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Dataset Creation
----------------
### Curation Rationale
Detailed in the paper
### Source Data
It is a modified subset of IndicHeadlineGeneration dataset.
#### Initial Data Collection and Normalization
Detailed in the paper
#### Who are the source language producers?
Detailed in the paper
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
Additional Information
----------------------
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.
If you use any of the datasets, models or code modules, please cite the following paper:
### Contributions
Detailed in the paper
| [
"### Dataset Summary\n\n\nIndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each\ninput sentence is paired with an output as summary. We create this dataset in eleven\nlanguages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total\nsize of the dataset is 431K.",
"### Supported Tasks and Leaderboards\n\n\nTasks: Sentence Summarization\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'input (string)': Input sentence.\n* 'target (strings)': Output summary.\n* 'url (string)': Source web link of the sentence.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nIt is a modified subset of IndicHeadlineGeneration dataset.",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-5K<n<112K #source_datasets-original for Hindi, and modified [IndicGLUE](https-//indicnlp.ai4bharat.org/indic-glue/) for other languages. #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us \n",
"### Dataset Summary\n\n\nIndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each\ninput sentence is paired with an output as summary. We create this dataset in eleven\nlanguages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total\nsize of the dataset is 431K.",
"### Supported Tasks and Leaderboards\n\n\nTasks: Sentence Summarization\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'input (string)': Input sentence.\n* 'target (strings)': Output summary.\n* 'url (string)': Source web link of the sentence.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nIt is a modified subset of IndicHeadlineGeneration dataset.",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] |
9b177ff8d3eeaf8d07d2918546e9b79ee655e29b |
# Dataset Card for "IndicWikiBio"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicWikiBio is the WikiBio dataset released as part of IndicNLG Suite. Each
example has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine
languages including as, bn, hi, kn, ml, or, pa, ta, te. The total
size of the dataset is 57,426.
### Supported Tasks and Leaderboards
**Tasks:** WikiBio
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 26,
"infobox": "name_1:सी॰\tname_2:एल॰\tname_3:रुआला\toffice_1:सांसद\toffice_2:-\toffice_3:मिजोरम\toffice_4:लोक\toffice_5:सभा\toffice_6:निर्वाचन\toffice_7:क्षेत्र\toffice_8:।\toffice_9:मिजोरम\tterm_1:2014\tterm_2:से\tterm_3:2019\tnationality_1:भारतीय",
"serialized_infobox": "<TAG> name </TAG> सी॰ एल॰ रुआला <TAG> office </TAG> सांसद - मिजोरम लोक सभा निर्वाचन क्षेत्र । मिजोरम <TAG> term </TAG> 2014 से 2019 <TAG> nationality </TAG> भारतीय",
"summary": "सी॰ एल॰ रुआला भारत की सोलहवीं लोक सभा के सांसद हैं।"
}
```
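The `serialized_infobox` field flattens the infobox into `<TAG> key </TAG> value` segments, as in the example above. A small helper like the following can recover key/value pairs from that string; this is an illustrative sketch based only on the format shown here, not code from the IndicNLG release.

```
import re

def parse_serialized_infobox(serialized):
    # Split on "<TAG> key </TAG>" markers; the resulting list alternates
    # between keys and the value text that follows each key.
    parts = re.split(r"<TAG>\s*(.*?)\s*</TAG>", serialized)
    fields = {}
    for key, value in zip(parts[1::2], parts[2::2]):
        fields[key] = value.strip()
    return fields

serialized = ("<TAG> name </TAG> सी॰ एल॰ रुआला <TAG> office </TAG> सांसद - मिजोरम लोक सभा "
              "निर्वाचन क्षेत्र । मिजोरम <TAG> term </TAG> 2014 से 2019 <TAG> nationality </TAG> भारतीय")
print(parse_serialized_infobox(serialized))
# {'name': 'सी॰ एल॰ रुआला', 'office': '...', 'term': '2014 से 2019', 'nationality': 'भारतीय'}
```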
### Data Fields
- `id (string)`: Unique identifier.
- `infobox (string)`: Raw Infobox.
- `serialized_infobox (string)`: Serialized Infobox as input.
- `summary (string)`: Summary of Infobox/First line of Wikipedia page.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Test | Val |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 1,300 | 391 | 381 |
Bengali | bn | 4,615 | 1,521 | 1,567 |
Hindi | hi | 5,684 | 1,919 | 1,853 |
Kannada | kn | 1,188 | 389 | 383 |
Malayalam | ml | 5,620 | 1,835 | 1,896 |
Oriya | or | 1,687 | 558 | 515 |
Punjabi | pa | 3,796 | 1,227 | 1,331 |
Tamil | ta | 8,169 | 2,701 | 2,632 |
Telugu | te | 2,594 | 854 | 820 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
None
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
| ai4bharat/IndicWikiBio | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1960<n<11,502",
"source_datasets:none. Originally generated from www.wikimedia.org.",
"language:as",
"language:bn",
"language:hi",
"language:kn",
"language:ml",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:59:23+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "hi", "kn", "ml", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1960<n<11,502"], "source_datasets": ["none. Originally generated from www.wikimedia.org."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-wikibio"], "pretty_name": "IndicWikiBio"} | 2022-10-13T05:08:34+00:00 | [
"2203.05437"
] | [
"as",
"bn",
"hi",
"kn",
"ml",
"or",
"pa",
"ta",
"te"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-1960<n<11,502 #source_datasets-none. Originally generated from www.wikimedia.org. #language-Assamese #language-Bengali #language-Hindi #language-Kannada #language-Malayalam #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us
| Dataset Card for "IndicWikiBio"
===============================
Table of Contents
-----------------
* Dataset Card Creation Guide
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
* Point of Contact:
### Dataset Summary
The WikiBio dataset released as part of IndicNLG Suite. Each
example has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine
languages including as, bn, hi, kn, ml, or, pa, ta, te. The total
size of the dataset is 57,426.
### Supported Tasks and Leaderboards
Tasks: WikiBio
Leaderboards: Currently there is no Leaderboard for this dataset.
### Languages
* 'Assamese (as)'
* 'Bengali (bn)'
* 'Kannada (kn)'
* 'Hindi (hi)'
* 'Malayalam (ml)'
* 'Oriya (or)'
* 'Punjabi (pa)'
* 'Tamil (ta)'
* 'Telugu (te)'
Dataset Structure
-----------------
### Data Instances
One random example from the 'hi' dataset is given below in JSON format.
### Data Fields
* 'id (string)': Unique identifier.
* 'infobox (string)': Raw Infobox.
* 'serialized\_infobox (string)': Serialized Infobox as input.
* 'summary (string)': Summary of Infobox/First line of Wikipedia page.
### Data Splits
Here is the number of samples in each split for all the languages.
Dataset Creation
----------------
### Curation Rationale
Detailed in the paper
### Source Data
None
#### Initial Data Collection and Normalization
Detailed in the paper
#### Who are the source language producers?
Detailed in the paper
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
Additional Information
----------------------
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.
If you use any of the datasets, models or code modules, please cite the following paper:
### Contributions
Detailed in the paper
| [
"### Dataset Summary\n\n\nThe WikiBio dataset released as part of IndicNLG Suite. Each\nexample has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine\nlanguages including as, bn, hi, kn, ml, or, pa, ta, te. The total\nsize of the dataset is 57,426.",
"### Supported Tasks and Leaderboards\n\n\nTasks: WikiBio\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'infobox (string)': Raw Infobox.\n* 'serialized\\_infobox (string)': Serialized Infobox as input.\n* 'summary (string)': Summary of Infobox/First line of Wikipedia page.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nNone",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-1960<n<11,502 #source_datasets-none. Originally generated from www.wikimedia.org. #language-Assamese #language-Bengali #language-Hindi #language-Kannada #language-Malayalam #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us \n",
"### Dataset Summary\n\n\nThe WikiBio dataset released as part of IndicNLG Suite. Each\nexample has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine\nlanguages including as, bn, hi, kn, ml, or, pa, ta, te. The total\nsize of the dataset is 57,426.",
"### Supported Tasks and Leaderboards\n\n\nTasks: WikiBio\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'infobox (string)': Raw Infobox.\n* 'serialized\\_infobox (string)': Serialized Infobox as input.\n* 'summary (string)': Summary of Infobox/First line of Wikipedia page.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nNone",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] |
3c9cfa7c513097aa3e475ad34d8578c52b48514f |
# Dataset Card for "IndicQuestionGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each
example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven
languages, including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. This is translated data: the examples are parallel across languages, so the same example appears in every language.
The number of examples in each language is 98,027.
### Supported Tasks and Leaderboards
**Tasks:** Question Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 8,
"squad_id": "56be8e613aeaaa14008c90d3",
"answer": "अमेरिकी फुटबॉल सम्मेलन",
"context": "अमेरिकी फुटबॉल सम्मेलन (एएफसी) के चैंपियन डेनवर ब्रोंकोस ने नेशनल फुटबॉल कांफ्रेंस (एनएफसी) की चैंपियन कैरोलिना पैंथर्स को 24-10 से हराकर अपना तीसरा सुपर बाउल खिताब जीता।",
"question": "एएफसी का मतलब क्या है?"
}
```
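Because the data is translated from SQuAD, the same `squad_id` identifies the same underlying example in every language, so question-generation examples can be aligned across languages. The sketch below assumes access through the Hugging Face `datasets` library with per-language config names matching the ISO codes; both of these are assumptions.

```
from datasets import load_dataset

# Config names are assumed to be the ISO 639-1 codes listed in the Data Splits table.
hi = load_dataset("ai4bharat/IndicQuestionGeneration", "hi", split="test")
ta = load_dataset("ai4bharat/IndicQuestionGeneration", "ta", split="test")

# Align the Hindi and Tamil versions of the same SQuAD example via squad_id.
ta_by_id = {ex["squad_id"]: ex for ex in ta}
for ex in hi:
    parallel = ta_by_id.get(ex["squad_id"])
    if parallel is not None:
        print(ex["question"], "||", parallel["question"])
        break
```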
### Data Fields
- `id (string)`: Unique identifier.
- `squad_id (string)`: Unique identifier in Squad dataset.
- `answer (strings)`: Answer as one of the two inputs.
- `context (string)`: Context, the other input.
- `question (string)`: Question, the output.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 69,979 | 17,495 | 10,553 |
Bengali | bn | 69,979 | 17,495 | 10,553 |
Gujarati | gu | 69,979 | 17,495 | 10,553 |
Hindi | hi | 69,979 | 17,495 | 10,553 |
Kannada | kn | 69,979 | 17,495 | 10,553 |
Malayalam | ml | 69,979 | 17,495 | 10,553 |
Marathi | mr | 69,979 | 17,495 | 10,553 |
Oriya | or | 69,979 | 17,495 | 10,553 |
Punjabi | pa | 69,979 | 17,495 | 10,553 |
Tamil | ta | 69,979 | 17,495 | 10,553 |
Telugu | te | 69,979 | 17,495 | 10,553 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
The SQuAD question answering dataset (https://rajpurkar.github.io/SQuAD-explorer/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | ai4bharat/IndicQuestionGeneration | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:98K<n<98K",
"source_datasets:we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:59:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["98K<n<98K"], "source_datasets": ["we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-question-generation"], "pretty_name": "IndicQuestionGeneration"} | 2022-10-13T05:08:25+00:00 | [
"2203.05437"
] | [
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-98K<n<98K #source_datasets-we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages. #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us
| Dataset Card for "IndicQuestionGeneration"
==========================================
Table of Contents
-----------------
* Dataset Card Creation Guide
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
* Point of Contact:
### Dataset Summary
IndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each
example has five fields: id, squad\_id, answer, context and question. We create this dataset in eleven
languages, including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. This is translated data. The examples in each language are exactly similar but in different languages.
The number of examples in each language is 98,027.
### Supported Tasks and Leaderboards
Tasks: Question Generation
Leaderboards: Currently there is no Leaderboard for this dataset.
### Languages
* 'Assamese (as)'
* 'Bengali (bn)'
* 'Gujarati (gu)'
* 'Kannada (kn)'
* 'Hindi (hi)'
* 'Malayalam (ml)'
* 'Marathi (mr)'
* 'Oriya (or)'
* 'Punjabi (pa)'
* 'Tamil (ta)'
* 'Telugu (te)'
Dataset Structure
-----------------
### Data Instances
One random example from the 'hi' dataset is given below in JSON format.
### Data Fields
* 'id (string)': Unique identifier.
* 'squad\_id (string)': Unique identifier in Squad dataset.
* 'answer (strings)': Answer as one of the two inputs.
* 'context (string)': Context, the other input.
* 'question (string)': Question, the output.
### Data Splits
Here is the number of samples in each split for all the languages.
Dataset Creation
----------------
### Curation Rationale
Detailed in the paper
### Source Data
Squad Dataset(URL
#### Initial Data Collection and Normalization
Detailed in the paper
#### Who are the source language producers?
Detailed in the paper
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
Additional Information
----------------------
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.
If you use any of the datasets, models or code modules, please cite the following paper:
### Contributions
Detailed in the paper
| [
"### Dataset Summary\n\n\nIndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each\nexample has five fields: id, squad\\_id, answer, context and question. We create this dataset in eleven\nlanguages, including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. This is translated data. The examples in each language are exactly similar but in different languages.\nThe number of examples in each language is 98,027.",
"### Supported Tasks and Leaderboards\n\n\nTasks: Question Generation\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'squad\\_id (string)': Unique identifier in Squad dataset.\n* 'answer (strings)': Answer as one of the two inputs.\n* 'context (string)': Context, the other input.\n* 'question (string)': Question, the output.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nSquad Dataset(URL",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-98K<n<98K #source_datasets-we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages. #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Panjabi #language-Tamil #language-Telugu #license-cc-by-nc-4.0 #arxiv-2203.05437 #region-us \n",
"### Dataset Summary\n\n\nIndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each\nexample has five fields: id, squad\\_id, answer, context and question. We create this dataset in eleven\nlanguages, including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. This is translated data. The examples in each language are exactly similar but in different languages.\nThe number of examples in each language is 98,027.",
"### Supported Tasks and Leaderboards\n\n\nTasks: Question Generation\n\n\nLeaderboards: Currently there is no Leaderboard for this dataset.",
"### Languages\n\n\n* 'Assamese (as)'\n* 'Bengali (bn)'\n* 'Gujarati (gu)'\n* 'Kannada (kn)'\n* 'Hindi (hi)'\n* 'Malayalam (ml)'\n* 'Marathi (mr)'\n* 'Oriya (or)'\n* 'Punjabi (pa)'\n* 'Tamil (ta)'\n* 'Telugu (te)'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nOne random example from the 'hi' dataset is given below in JSON format.",
"### Data Fields\n\n\n* 'id (string)': Unique identifier.\n* 'squad\\_id (string)': Unique identifier in Squad dataset.\n* 'answer (strings)': Answer as one of the two inputs.\n* 'context (string)': Context, the other input.\n* 'question (string)': Question, the output.",
"### Data Splits\n\n\nHere is the number of samples in each split for all the languages.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nDetailed in the paper",
"### Source Data\n\n\nSquad Dataset(URL",
"#### Initial Data Collection and Normalization\n\n\nDetailed in the paper",
"#### Who are the source language producers?\n\n\nDetailed in the paper",
"### Annotations\n\n\n[More information needed]",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\n[More information needed]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[More information needed]",
"### Discussion of Biases\n\n\n[More information needed]",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n[More information needed]",
"### Licensing Information\n\n\nContents of this repository are restricted to only non-commercial research purposes under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Copyright of the dataset contents belongs to the original copyright holders.\n\n\nIf you use any of the datasets, models or code modules, please cite the following paper:",
"### Contributions\n\n\nDetailed in the paper"
] |
9e3533eec643aebede8aaa7ea781c9b58f721dd8 |
Singapore's holiday data from 2017 to 2022. | Mulin/sg-holiday | [
"license:mit",
"region:us"
] | 2022-03-10T14:22:27+00:00 | {"license": "mit"} | 2022-03-14T10:44:11+00:00 | [] | [] | TAGS
#license-mit #region-us
|
Singapore's holiday data from 2017 to 2022. | [] | [
"TAGS\n#license-mit #region-us \n"
] |
153f48ba973d1b1f88cf97ec4d986bc13ffc9e63 |
<p align="center">
<br>
<img src="https://orca.dlnlp.ai/assets/orca_logo.png" width="55%"/>
<br>
<p>
<p align="center">
<!-- <a href="https://github.com/UBC-NLP/orca/releases"> -->
<!-- <img alt="GitHub release" src="https://img.shields.io/github/release/UBC-NLP/orca.svg"> </a>-->
<a href="https://orca.dlnlp.ai/">
<img alt="Documentation" src="https://img.shields.io/website.svg?down_color=red&down_message=offline&up_message=online&url=https://orca.dlnlp.ai">
</a>
<!-- <a href="https://github.com/UBC-NLP/orca/blob/main/LICENSE"><img alt="GitHub license" src="https://img.shields.io/github/license/UBC-NLP/orca?logoColor=blue"></a> -->
<!-- <a href='https://orca.readthedocs.io/en/latest/?badge=latest'><img src='https://readthedocs.org/projects/orca/badge/?version=latest' alt='Documentation Status' /></a> -->
<!-- <a href="https://github.com/UBC-NLP/orca/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/UBC-NLP/orca"></a>
<!-- <a href="https://github.com/UBC-NLP/orca/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/UBC-NLP/orca"></a> -->
</p>
In this work, we introduce [**ORCA**](https://arxiv.org/abs/2212.10758), a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models.
# ORCA Task Cluster
We arrange [**ORCA**](https://arxiv.org/abs/2212.10758) into seven NLU task clusters. These are (1) sentence classification, (2) structured prediction, (3) semantic textual similarity and paraphrase, (4) text classification, (5) natural language inference, (6) word sense disambiguation, and (7) question answering.
### (1) Natural Language Inference (NLI)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|------|
|[ANS Stance](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) |
|[Baly Stance](https://aclanthology.org/N18-2004/) |MSA | Macro F1 | [(Baly et al., 2018)](https://aclanthology.org/N18-2004/) |
|[XNLI](https://github.com/facebookresearch/XNLI) |MSA | Macro F1 | [(Conneau et al., 2018)](https://github.com/facebookresearch/XNLI)|
### (2) Question Answering (QA)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|------|
|[Question Answering](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abdul-Mageed et al., 2020a)](https://aclanthology.org/2021.acl-long.551/) |
### (3) Semantic Textual Similarity and Paraphrase (STSP)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Emotion Regression](https://aclanthology.org/S18-1001/) |MSA | Spearman Correlation| [(Saif et al., 2018)](https://aclanthology.org/S18-1001/) |
|[MQ2Q](https://aclanthology.org/2019.nsurl-1.1) |MSA | Macro F1 | [(Seelawi et al., 2019)](https://aclanthology.org/2019.nsurl-1.1) |
|[STS](https://aclanthology.org/S17-2001/) |MSA | Macro F1 | [(Cer et al., 2017)](https://aclanthology.org/S17-2001/) |
### (4) Sentence Classification (SC)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Abusive](https://aclanthology.org/W19-3512/) |DA | Macro F1 | [(Mulki et al., 2019)](https://aclanthology.org/W19-3512/) |
|[Adult](https://aclanthology.org/2021.wanlp-1.14) |DA | Macro F1 | [(Mubarak et al., 2021)](https://aclanthology.org/2021.wanlp-1.14) |
|[Age](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[ANS Claim](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) |
|[Dangerous ](https://aclanthology.org/N18-2004/) |DA | Macro F1 | [(Alshehri et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.6)|
|[Dialect Binary](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Dialect Country](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Dialect Region](https://github.com/facebookresearch/XNLI) |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Emotion](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[Gender](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[Hate Speech](https://www.aclweb.org/anthology/2020.osact-1.7) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)|
|[Irony](https://dl.acm.org/doi/10.1145/3368567.3368585) |DA | Macro F1 | [(Ghanem et al., 2019)](https://dl.acm.org/doi/10.1145/3368567.3368585) |
|[Machine Generation](https://aclanthology.org/2020.wanlp-1.7/) |MSA | Macro F1 | [(Nagoudi et al., 2020)](https://aclanthology.org/2020.wanlp-1.7/) |
|[Offensive](https://aclanthology.org/2020.osact-1.8/) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)|
|[Sarcasm](https://aclanthology.org/N18-2004/) |DA | Macro F1 | [(Farha and Magdy, 2020)](https://aclanthology.org/2020.osact-1.5/) |
|[Sentiment Analysis](https://aclanthology.org/2021.acl-long.551/) |DA | Macro F1 | [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/) |
### (5) Structure Predictions (SP)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Aqmar NER](https://www.cs.cmu.edu/~ark/ArabicNER/) |MSA | Macro F1 | [(Mohit, 2012)](https://www.cs.cmu.edu/~ark/ArabicNER/) |
|[Arabic NER Corpus](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |MSA | Macro F1 | [(Benajiba and Rosso, 2007)](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |
|[Dialect Part Of Speech](https://aclanthology.org/L18-1015.pdf) |DA | Macro F1 | [(Darwish et al., 2018)](https://aclanthology.org/L18-1015.pdf) |
|[MSA Part Of Speech](https://arxiv.org/abs/2004.01401) |MSA | Macro F1 | [(Liang et al., 2020)](https://arxiv.org/abs/2004.01401) |
### (6) Topic Classification (TC)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Topic](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abbas et al.,2011)](https://www.dline.info/fpaper/jdim/v9i5/1.pdf), [(Chouigui et al.,2017)](https://www.researchgate.net/publication/320871871_Poster_ANT_Corpus_An_Arabic_News_Text_Collection_for_Textual_Classification), [(Saad, 2010)](http://site.iugaza.edu.ps/wp-content/uploads/mksaad-OSAC-OpenSourceArabicCorpora-EECS10-rev9(1).pdf). |
### (7) Word Sense Disambiguation (WSD)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Word Sense Disambiguation](https://www.mdpi.com/2076-3417/11/6/2567) |MSA | Macro F1 | [(El-Razzaz, 2021)](https://www.mdpi.com/2076-3417/11/6/2567) |
# How to Use ORCA
### Request Access ###
To obtain access to the ORCA benchmark on Hugging Face, follow these steps:
- Log in to your Hugging Face account
<img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request1.png" width="70%"/>
- Request access
<img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request2.png" width="70%"/>
### Install Requirements
```shell
pip install datasets transformers seqeval
```
### Log in with the Hugging Face CLI ###
You can get/manage your access tokens in your [settings](https://huggingface.co/docs/hub/security-tokens).
```shell
export HUGGINGFACE_TOKEN=""
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
### Fine-tuning a model on ORCA tasks
We provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href="https://colab.research.google.com/github/UBC-NLP/orca/blob/main/Finetuning_ORCA.ipynb"><img alt="colab" src="https://colab.research.google.com/assets/colab-badge.svg">
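For a quick orientation before opening the notebook, the sketch below shows one way to load a single ORCA task with the `datasets` library and fine-tune an Arabic encoder for sequence classification. It is only a minimal illustration under assumptions: the config name (`sentiment`), the column names (`text`, `label`), and the split names are placeholders, so check the notebook for the exact identifiers and preprocessing.
```python
# Minimal sketch, not the official recipe: the ORCA config name, column names,
# and split names below are assumptions; see the Colab notebook for the exact ones.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

task_name = "sentiment"                          # hypothetical ORCA config name
raw = load_dataset("UBC-NLP/orca", task_name)    # gated dataset: requires prior login

model_name = "UBC-NLP/MARBERT"                   # any Arabic/multilingual encoder should work
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

encoded = raw.map(tokenize, batched=True)
num_labels = len(set(encoded["train"]["label"]))
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)

args = TrainingArguments(output_dir="orca-finetune", num_train_epochs=3,
                         per_device_train_batch_size=16, evaluation_strategy="epoch")
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=encoded["train"], eval_dataset=encoded["validation"])
trainer.train()
```
The same pattern applies to any other task cluster; only the config name, number of labels, and evaluation metric change.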
### Submitting your results on ORCA test
We design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.
You can evaluate your models using the **ORCA** leaderboard: **[https://orca.dlnlp.ai](https://orca.dlnlp.ai/)**
---
## Citation
If you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
```
@inproceedings{elmadany-etal-2023-orca,
title = "{ORCA}: A Challenging Benchmark for {A}rabic Language Understanding",
author = "Elmadany, AbdelRahim and
Nagoudi, ElMoatez Billah and
Abdul-Mageed, Muhammad",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.609",
pages = "9559--9586",
}
```
---
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
| UBC-NLP/orca | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"language:ara",
"Arabic",
"NLU Benchmark",
"Natural Language Inference (NLI)",
"Question Answering (QA)",
"Semantic Textual Similarity and and Paraphrase (STSP)",
"Sentence Classification (SC)",
"Structure Predictions (SP)",
"Topic Classification (TC)",
"Word Sense Disambiguation (WSD)",
"arxiv:2212.10758",
"arxiv:2004.01401",
"region:us"
] | 2022-03-10T19:45:30+00:00 | {"language": ["ara"], "task_categories": ["text-classification", "token-classification", "question-answering"], "viewer": false, "tags": ["Arabic", "NLU Benchmark", "Natural Language Inference (NLI)", "Question Answering (QA)", "Semantic Textual Similarity and and Paraphrase (STSP)", "Sentence Classification (SC)", "Structure Predictions (SP)", "Topic Classification (TC)", "Word Sense Disambiguation (WSD)"], "extra_gated_fields": {"Name": "text", "Official Email (email of your organization)": "text", "Affilation": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox", "I agree to cite the ORCA paper and all original papers": "checkbox"}} | 2023-11-22T17:56:13+00:00 | [
"2212.10758",
"2004.01401"
] | [
"ara"
] | TAGS
#task_categories-text-classification #task_categories-token-classification #task_categories-question-answering #language-Arabic #Arabic #NLU Benchmark #Natural Language Inference (NLI) #Question Answering (QA) #Semantic Textual Similarity and and Paraphrase (STSP) #Sentence Classification (SC) #Structure Predictions (SP) #Topic Classification (TC) #Word Sense Disambiguation (WSD) #arxiv-2212.10758 #arxiv-2004.01401 #region-us
|

<a href="URL

In this work, we introduce ORCA, a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models.
ORCA Task Cluster
=================
We arrange ORCA, into seven NLU task clusters. These are (1) sentence classification, (2) structured prediction (3) semantic textual similarity and paraphrase, (4) text classification, (5) natural language inference, (6) word sense disambiguation, and (7) question answering.
### (1) Natural Language Inference (NLI)
### (2) Question Answering (QA)
### (3) Semantic Textual Similarity and Paraphrase (STSP)
### (4) Sentence Classification (SC)
### (5) Structure Predictions (SP)
### (6) Topic Classification (TC)
### (7) Word Sense Disambiguation (WSD)
How to Use ORCA
===============
### Request Access
To obtain access to the ORCA benchmark on Hugging Face, follow these steps:
* Log in to your Hugging Face account
<img src="URL width="70%"/>
* Request access
<img src="URL width="70%"/>
### Install Requirements
### Log in with the Hugging Face CLI
You can get/manage your access tokens in your settings.
### Fine-tuning a model on ORCA tasks
We provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href="URL alt="colab" src="URL
### Submitting your results on ORCA test
We design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.
You can evaluate your models using the ORCA leaderboard: URL
---
If you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
---
Acknowledgments
---------------
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, ComputeCanada and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access.
| [
"### (1) Natural Language Inference (NLI)",
"### (2) Question Answering (QA)",
"### (3) Semantic Textual Similarity and Paraphrase (STSP)",
"### (4) Sentence Classification (SC)",
"### (5) Structure Predictions (SP)",
"### (6) Topic Classification (TC)",
"### (7) Word Sense Disambiguation (WSD)\n\n\n\nHow to Use ORCA\n===============",
"### Request Access\n\n\nTo obtain access to the ORCA benchmark on Huggingface, follow the following steps:\n\n\n* Login on your Haggingface account\n\n\n<img src=\"URL width=\"70%\"/>\n* Request access\n\n\n<img src=\"URL width=\"70%\"/>",
"### Install Requirments",
"### Login with your Huggingface CLI\n\n\nYou can get/manage your access tokens in your settings.",
"### Fine-tuning a model on ORCA tasks\n\n\nWe provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href=\"URL alt=\"colab\" src=\"URL",
"### Submitting your results on ORCA test\n\n\nWe design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.\n\n\nYou can evalute your models using ORCA leaderboard: URL\n\n\n\n\n---\n\n\nIf you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:\n\n\n\n\n---\n\n\nAcknowledgments\n---------------\n\n\nWe gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, ComputeCanada and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access."
] | [
"TAGS\n#task_categories-text-classification #task_categories-token-classification #task_categories-question-answering #language-Arabic #Arabic #NLU Benchmark #Natural Language Inference (NLI) #Question Answering (QA) #Semantic Textual Similarity and and Paraphrase (STSP) #Sentence Classification (SC) #Structure Predictions (SP) #Topic Classification (TC) #Word Sense Disambiguation (WSD) #arxiv-2212.10758 #arxiv-2004.01401 #region-us \n",
"### (1) Natural Language Inference (NLI)",
"### (2) Question Answering (QA)",
"### (3) Semantic Textual Similarity and Paraphrase (STSP)",
"### (4) Sentence Classification (SC)",
"### (5) Structure Predictions (SP)",
"### (6) Topic Classification (TC)",
"### (7) Word Sense Disambiguation (WSD)\n\n\n\nHow to Use ORCA\n===============",
"### Request Access\n\n\nTo obtain access to the ORCA benchmark on Huggingface, follow the following steps:\n\n\n* Login on your Haggingface account\n\n\n<img src=\"URL width=\"70%\"/>\n* Request access\n\n\n<img src=\"URL width=\"70%\"/>",
"### Install Requirments",
"### Login with your Huggingface CLI\n\n\nYou can get/manage your access tokens in your settings.",
"### Fine-tuning a model on ORCA tasks\n\n\nWe provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href=\"URL alt=\"colab\" src=\"URL",
"### Submitting your results on ORCA test\n\n\nWe design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.\n\n\nYou can evalute your models using ORCA leaderboard: URL\n\n\n\n\n---\n\n\nIf you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:\n\n\n\n\n---\n\n\nAcknowledgments\n---------------\n\n\nWe gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, ComputeCanada and UBC ARC-Sockeye. We also thank the Google TensorFlow Research Cloud (TFRC) program for providing us with free TPU access."
] |
de9bf1404880f4b7225e1cc0e9268192e57fefca |
## Description
**Gold standard annotations for profession detection in Spanish COVID-19 tweets**
The entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test (60-20-20). The current version contains the training and development sets of the shared task with Gold Standard annotations. In addition, it contains the unannotated test and background sets.
For the Named Entity Recognition (profession detection) subtask, annotations are distributed in 2 formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (https://brat.nlplab.org/standoff.html).
The TSV format follows the format employed in SMM4H 2019 Task 2:
tweet_id | begin | end | type | extraction
In addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CONLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 Spacy model for tokenization.
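As a quick illustration of the TSV layout described above, a minimal reader might look like the following sketch. The file name is a placeholder, the code assumes one annotation per line with no header row, and the entity-type comment is only indicative.

```python
# Minimal sketch: "profner_train.tsv" is a placeholder name; columns follow the
# order described above (tweet_id, begin, end, type, extraction). Skip the first
# line instead if the file turns out to ship with a header row.
def read_profner_tsv(path):
    spans = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if not line.strip():
                continue
            tweet_id, begin, end, span_type, extraction = line.rstrip("\n").split("\t")
            spans.append({
                "tweet_id": tweet_id,
                "begin": int(begin),        # character offset where the mention starts
                "end": int(end),            # character offset where the mention ends
                "type": span_type,          # annotation class (e.g. a profession label)
                "extraction": extraction,   # the mention text itself
            })
    return spans

annotations = read_profner_tsv("profner_train.tsv")
print(len(annotations), "annotated spans")
```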
## Files of Named Entity Recognition subtask.
Content:
- One TSV file per corpus split (train and valid).
- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid)
- BIO: folder with corpus in BIO tagging. One file per corpus split (train and valid)
- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-directory per corpus split (train and valid)
- train-valid-txt-files-english: folder with training and validation text files Machine Translated to English.
- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab. | Biomedical-TeMU/ProfNER_corpus_NER | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-10T21:34:00+00:00 | {"license": "cc-by-4.0"} | 2022-03-10T21:50:30+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
## Description
Gold standard annotations for profession detection in Spanish COVID-19 tweets
The entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test (60-20-20). The current version contains the training and development sets of the shared task with Gold Standard annotations. In addition, it contains the unannotated test and background sets.
For the Named Entity Recognition (profession detection) subtask, annotations are distributed in 2 formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (URL
The TSV format follows the format employed in SMM4H 2019 Task 2:
tweet_id | begin | end | type | extraction
In addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CONLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 Spacy model for tokenization.
## Files of Named Entity Recognition subtask.
Content:
- One TSV file per corpus split (train and valid).
- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid)
- BIO: folder with corpus in BIO tagging. One file per corpus split (train and valid)
- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-directory per corpus split (train and valid)
- train-valid-txt-files-english: folder with training and validation text files Machine Translated to English.
- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab. | [
"## Description\r\n\r\nGold standard annotations for profession detection in Spanish COVID-19 tweets\r\n\r\nThe entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test (60-20-20). The current version contains the training and development set of the shared task with Gold Standard annotations. In addition, it contains the unannotated test, and background sets will be released.\r\n\r\nFor Named Entity Recognition, profession detection, annotations are distributed in 2 formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (URL \r\n\r\nThe TSV format follows the format employed in SMM4H 2019 Task 2:\r\ntweet_id | begin | end | type | extraction\r\n\r\nIn addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CONLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 Spacy model for tokenization.",
"## Files of Named Entity Recognition subtask. \r\n\r\nContent:\r\n\r\n- One TSV file per corpus split (train and valid).\r\n- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid)\r\n- BIO: folder with corpus in BIO tagging. One file per corpus split (train and valid)\r\n- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-- directory per corpus split (train and valid)\r\n- train-valid-txt-files-english: folder with training and validation text files Machine Translated to English.\r\n- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## Description\r\n\r\nGold standard annotations for profession detection in Spanish COVID-19 tweets\r\n\r\nThe entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test (60-20-20). The current version contains the training and development set of the shared task with Gold Standard annotations. In addition, it contains the unannotated test, and background sets will be released.\r\n\r\nFor Named Entity Recognition, profession detection, annotations are distributed in 2 formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (URL \r\n\r\nThe TSV format follows the format employed in SMM4H 2019 Task 2:\r\ntweet_id | begin | end | type | extraction\r\n\r\nIn addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CONLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 Spacy model for tokenization.",
"## Files of Named Entity Recognition subtask. \r\n\r\nContent:\r\n\r\n- One TSV file per corpus split (train and valid).\r\n- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid)\r\n- BIO: folder with corpus in BIO tagging. One file per corpus split (train and valid)\r\n- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-- directory per corpus split (train and valid)\r\n- train-valid-txt-files-english: folder with training and validation text files Machine Translated to English.\r\n- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab."
] |
41ea0e39f062f9ca791fd5ec95c364a22150b56e |
# Dataset Card for FeedbackQA
[📄 Read](https://arxiv.org/pdf/2204.03025.pdf)<br>
[💾 Code](https://github.com/McGill-NLP/feedbackqa)<br>
[🔗 Webpage](https://mcgill-nlp.github.io/feedbackqa/)<br>
[💻 Demo](http://206.12.100.48:8080/)<br>
[🤗 Huggingface Dataset](https://huggingface.co/datasets/McGill-NLP/feedbackQA)<br>
[💬 Discussions](https://github.com/McGill-NLP/feedbackqa/discussions)
## Dataset Description
- **Homepage: https://mcgill-nlp.github.io/feedbackqa-data/**
- **Repository: https://github.com/McGill-NLP/feedbackqa-data/**
- **Paper:**
- **Leaderboard:**
- **Tasks: Question Answering**
### Dataset Summary
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users.
It has two parts: the first part is a conventional RQA dataset,
whilst this repo contains the second part, which contains feedback (ratings and natural language explanations) for QA pairs.
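Since the feedback data is hosted on the Hugging Face Hub under the ID shown above, it can be loaded directly with the `datasets` library. The sketch below is only a minimal illustration; it prints the available splits and column names rather than assuming a particular schema.

```python
# Minimal sketch: prints splits and column names instead of assuming field
# names, since the exact schema should be checked on the loaded dataset.
from datasets import load_dataset

dataset = load_dataset("McGill-NLP/feedbackQA")
print(dataset)                               # available splits and their sizes
first_split = list(dataset.keys())[0]
print(dataset[first_split].column_names)     # actual field names
print(dataset[first_split][0])               # one full example
```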
### Languages
English
## Dataset Creation
For each question-answer pair, we collected multiple feedback annotations, each of which consists of a rating
(selected from excellent, good, could be improved, or bad) and a natural language explanation
elaborating on the strengths and/or weaknesses of the answer.
#### Initial Data Collection and Normalization
We scraped Covid-19-related content from official websites.
### Annotations
#### Who are the annotators?
Crowd-workers
### Licensing Information
Apache 2.0
### Contributions
[McGill-NLP](https://github.com/McGill-NLP)
| McGill-NLP/feedbackQA | [
"license:apache-2.0",
"arxiv:2204.03025",
"region:us"
] | 2022-03-10T23:50:07+00:00 | {"license": "apache-2.0"} | 2023-06-14T16:27:23+00:00 | [
"2204.03025"
] | [] | TAGS
#license-apache-2.0 #arxiv-2204.03025 #region-us
|
# Dataset Card for FeedbackQA
Read<br>
Code<br>
Webpage<br>
Demo<br>
Huggingface Dataset<br>
Discussions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Leaderboard:
- Tasks: Question Answering
### Dataset Summary
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users.
It has two parts: the first part contains a conventional RQA dataset,
whilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs.
### Languages
English
## Dataset Creation
For each question-answer pair, we collected multiple feedback, each of which consists of a rating, selected
from excellent, good, could be improved, bad, and a natural language explanation
elaborating on the strengths and/or weaknesses of the answer.
#### Initial Data Collection and Normalization
We scraped Covid-19-related content from official websites.
### Annotations
#### Who are the annotators?
Crowd-workers
### Licensing Information
Apache 2.0
### Contributions
McGill-NLP
| [
"# Dataset Card for FeedbackQA\n\n Read<br>\n Code<br>\n Webpage<br>\n Demo<br>\n Huggingface Dataset<br>\n Discussions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Tasks: Question Answering",
"### Dataset Summary\n\nFeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users. \nIt has two parts: the first part contains a conventional RQA dataset, \nwhilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs.",
"### Languages\n\nEnglish",
"## Dataset Creation\nFor each question-answer pair, we collected multiple feedback, each of which consists of a rating, selected\nfrom excellent, good, could be improved, bad, and a natural language explanation \nelaborating on the strengths and/or weaknesses of the answer.",
"#### Initial Data Collection and Normalization\nWe scraped Covid-19-related content from official websites.",
"### Annotations",
"#### Who are the annotators?\n\nCrowd-workers",
"### Licensing Information\n\nApache 2.0",
"### Contributions\n\nMcGill-NLP"
] | [
"TAGS\n#license-apache-2.0 #arxiv-2204.03025 #region-us \n",
"# Dataset Card for FeedbackQA\n\n Read<br>\n Code<br>\n Webpage<br>\n Demo<br>\n Huggingface Dataset<br>\n Discussions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Tasks: Question Answering",
"### Dataset Summary\n\nFeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users. \nIt has two parts: the first part contains a conventional RQA dataset, \nwhilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs.",
"### Languages\n\nEnglish",
"## Dataset Creation\nFor each question-answer pair, we collected multiple feedback, each of which consists of a rating, selected\nfrom excellent, good, could be improved, bad, and a natural language explanation \nelaborating on the strengths and/or weaknesses of the answer.",
"#### Initial Data Collection and Normalization\nWe scraped Covid-19-related content from official websites.",
"### Annotations",
"#### Who are the annotators?\n\nCrowd-workers",
"### Licensing Information\n\nApache 2.0",
"### Contributions\n\nMcGill-NLP"
] |
393badffe34773d1536cfedfdc2abe14317d38e7 |
# The Sentence Splitter (SS) for Clinical Cases Written in Spanish
## Introduction
This repository contains the sentence splitting model trained using the SPACCC_SPLIT corpus (https://github.com/PlanTL-SANIDAD/SPACCC_SPLIT). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). This model is a great resource to split sentences in biomedical documents, especially clinical cases written in Spanish. It obtains an F-Measure of 98.75%.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and the source code.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the sentence splitter to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The sentence splitting model, "es-sentence-splitter-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelSS.java) and evaluate it (EvaluateModelSS.java).
The directory includes an example about how to use the model inside your code (SentenceSplitter.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatented.
train_set_docs/
The clinical cases used to build the model. For each record, the sentences are already split.
</pre>
## Usage
The executable file *SentenceSplitter.jar* is the program you need to split the sentences of a document. For this program, two arguments are needed: (1) the text file whose sentences will be split, and (2) the model file (*es-sentence-splitter-model-spaccc.bin*). The program will display all the split sentences in the terminal, with one sentence per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar SentenceSplitter.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar SentenceSplitter.jar file_with_sentences_not_splitted.txt es-sentence-splitter-model-spaccc.bin
</pre>
## Model creation
To create this sentence splitting model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 4000.
- Cutoff parameter: 3.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the sentence split builder (class *SentenceDetectorFactory* in OpenNLP) to get the best performance:
- Subclass name: null value.
- Language code: *es* (for Spanish).
- Use token end: true.
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- End of file characters: ".", "?" and "!".
## Model evaluation
After tuning the model with different values for each of the parameters described above, we obtained the best performance with the values listed there.
| | Value |
| ----------------------------------------: | :------ |
| Number of sentences in the gold standard | 1445 |
| Number of sentences generated | 1447 |
| Number of sentences correctly splitted | 1428 |
| Number of sentences wrongly splitted | 12 |
| Number of sentences missed | 5 |
| **Precision** | **98.69%** |
| **Recall** | **98.82%** |
| **F-Measure** | **98.75%**|
Table 1: Evaluation statistics for the sentence splitting model.
## Contact
Ander Intxaurrondo ([email protected])
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| Biomedical-TeMU/SPACCC_Sentence-Splitter | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T01:59:57+00:00 | {"license": "cc-by-4.0"} | 2022-03-11T02:09:00+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| The Sentence Splitter (SS) for Clinical Cases Written in Spanish
================================================================
Introduction
------------
This repository contains the sentence splitting model trained using the SPACCC\_SPLIT corpus (URL The model was trained using the 90% of the corpus (900 clinical cases) and tested against the 10% (100 clinical cases). This model is a great resource to split sentences in biomedical documents, specially clinical cases written in Spanish. This model obtains a F-Measure of 98.75%.
This model was created using the Apache OpenNLP machine learning toolkit (URL with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and the source code.
Prerequisites
-------------
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: URL
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: URL
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE\_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
Directory structure
-------------------
```
exec/
An executable file that can be used to apply the sentence splitter to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The sentence splitting model, "URL", a binary file.
src/
The source code to create the model (URL) and evaluate it (URL).
The directory includes an example about how to use the model inside your code (URL).
File "URL" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatented.
train_set_docs/
The clinical cases used to build the model. For each record, the sentences are already split.
```
Usage
-----
The executable file *URL* is the program you need to split the sentences of the document. For this program, two arguments are needed: (1) the text file to split the sentences, and (2) the model file (*URL*). The program will display all sentences splitted in the terminal, with one sentence per line.
From the 'exec' folder, type the following command in your terminal:
```
$ java -jar URL INPUT_FILE MODEL_FILE
```
Examples
--------
Assuming you have the executable file, the input file and the model file in the same directory:
```
$ java -jar URL file_with_sentences_not_splitted.txt URL
```
Model creation
--------------
To create this sentence splitting model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
* Number of iterations: 4000.
* Cutoff parameter: 3.
* Trainer type parameter: *EventTrainer.EVENT\_VALUE*.
* Algorithm: Maximum Entropy (*URL()*).
Meanwhile, we used the following parameters for the sentence split builder (class *SentenceDetectorFactory* in OpenNLP) to get the best performance:
* Subclass name: null value.
* Language code: *es* (for Spanish).
* Use token end: true.
* Abbreviation dictionary: file "URL" (included in the 'src/' directory).
* End of file characters: ".", "?" and "!".
Model evaluation
----------------
After tuning the model using different values for each parameter mentioned above, we got the best performance with the values mentioned above.
Table 1: Evaluation statistics for the sentence splitting model.
Contact
-------
Ander Intxaurrondo (ander.intxaurrondo@URL)
License
-------
<a rel="license" href="URL alt="Creative Commons License" style="border-width:0" src="https://i.URL />
This work is licensed under a <a rel="license" href="URL Commons Attribution 4.0 International License.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] |
b80bc1594c34c07cee7888a0c741ae41ac06b274 | # The Tokenizer for Clinical Cases Written in Spanish
## Introduction
This repository contains the tokenization model trained using the SPACCC_TOKEN corpus (https://github.com/PlanTL-SANIDAD/SPACCC_TOKEN). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). This model is a great resource to tokenize biomedical documents, especially clinical cases written in Spanish.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and source code.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the tokenization to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The tokenization model, "es-tokenization-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelTok.java) and evaluate it (EvaluateModelTok.java).
The directory includes an example about how to use the model inside your code (Tokenization.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatented.
train_set_docs/
The clinical cases used to build the model. For each record, the sentences are already split.
</pre>
## Usage
The executable file *Tokenizer.jar* is the program you need to tokenize the text in your document. For this program, two arguments are needed: (1) the text file to tokenize, and (2) the model file (*es-tokenization-model-spaccc.bin*). The program will display all tokens in the terminal, with one token per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar Tokenizer.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar Tokenizer.jar file.txt es-tokenization-model-spaccc.bin
</pre>
## Model creation
To create this tokenization model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 1500.
- Cutoff parameter: 4.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the tokenizer builder (class *TokenizerFactory* in OpenNLP) to get the best performance:
- Language code: *es* (for Spanish).
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- Use alphanumeric optimization: false
- Alphanumeric pattern: null
## Model evaluation
After tuning the model with different values for each of the parameters described above, we obtained the best performance with the values listed there.
| | Value |
| ----------------------------------------: | :------ |
| Number of tokens in the gold standard | 38247 |
| Number of tokens generated | 38227 |
| Number of words correctly tokenized | 38182 |
| Number of words wrongly tokenized | 35 |
| Number of tokens missed | 30 |
| **Precision** | **99.88%** |
| **Recall** | **99.83%** |
| **F-Measure** | **99.85%**|
Table 1: Evaluation statistics for the tokenization model.
## Contact
Ander Intxaurrondo ([email protected])
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| Biomedical-TeMU/SPACCC_Tokenizer | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T02:14:02+00:00 | {"license": "cc-by-4.0"} | 2022-03-11T02:18:16+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| The Tokenizer for Clinical Cases Written in Spanish
===================================================
Introduction
------------
This repository contains the tokenization model trained using the SPACCC\_TOKEN corpus (URL The model was trained using the 90% of the corpus (900 clinical cases) and tested against the 10% (100 clinical cases). This model is a great resource to tokenize biomedical documents, specially clinical cases written in Spanish.
This model was created using the Apache OpenNLP machine learning toolkit (URL with the release number 1.8.4, released in December 2017.
This repository contains the training set, testing set, Gold Standard.
Prerequisites
-------------
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: URL
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: URL
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE\_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
Directory structure
-------------------
```
exec/
An executable file that can be used to apply the tokenization to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The tokenization model, "URL", a binary file.
src/
The source code to create the model (URL) and evaluate it (URL).
The directory includes an example about how to use the model inside your code (URL).
File "URL" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatented.
train_set_docs/
The clinical cases used to build the model. For each record, the sentences are already split.
```
Usage
-----
The executable file *URL* is the program you need to tokenize the text in your document. For this program, two arguments are needed: (1) the text file to tokenize, and (2) the model file (*URL*). The program will display all tokens in the terminal, with one token per line.
From the 'exec' folder, type the following command in your terminal:
```
$ java -jar URL INPUT_FILE MODEL_FILE
```
Examples
--------
Assuming you have the executable file, the input file and the model file in the same directory:
```
$ java -jar URL URL URL
```
Model creation
--------------
To create this tokenization model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
* Number of iterations: 1500.
* Cutoff parameter: 4.
* Trainer type parameter: *EventTrainer.EVENT\_VALUE*.
* Algorithm: Maximum Entropy (*URL()*).
Meanwhile, we used the following parameters for the tokenizer builder (class *TokenizerFactory* in OpenNLP) to get the best performance:
* Language code: *es* (for Spanish).
* Abbreviation dictionary: file "URL" (included in the 'src/' directory).
* Use alphanumeric optimization: false
* Alphanumeric pattern: null
Model evaluation
----------------
After tuning the model using different values for each parameter mentioned above, we got the best performance with the values mentioned above.
Table 1: Evaluation statistics for the tokenization model.
Contact
-------
Ander Intxaurrondo (ander.intxaurrondo@URL)
License
-------
<a rel="license" href="URL alt="Creative Commons License" style="border-width:0" src="https://i.URL />
This work is licensed under a <a rel="license" href="URL Commons Attribution 4.0 International License.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] |
5ff2b006ea74699eccd393a5a0f3b99396d01e0c |
## Introduction
These are the train, development, test and background sets of the CodiEsp corpus. Train and development have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF ehealth 2020 (http://temu.bsc.es/codiesp/).
The CodiEsp corpus contains manually coded clinical cases. All documents are in the Spanish language and CIE10 is the coding terminology (it is the Spanish version of ICD10-CM and ICD10-PCS). The CodiEsp corpus has been randomly sampled into three subsets: the train, the development, and the test set. The train set contains 500 clinical cases, and the development and test sets contain 250 clinical cases each. The test set is released together with the background set (with 2,751 clinical cases). CodiEsp participants must submit predictions for the test and background sets, but they will only be evaluated on the test set.
## Structure
Three folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.
+ train and dev folders have:
+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.
+ A subfolder named text_files with the plain text files of the clinical cases.
+ A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-split.
+ The test folder has only text_files and text_files_en subfolders with the plain text files.
## Corpus format description
The CodiEsp corpus is distributed in plain text in UTF8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it.
For the sub-tracks CodiEsp-D and CodiEsp-P, the file has the following fields:
articleID ICD10-code
Tab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text-reference and its position:
articleID label ICD10-code text-reference reference-position
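For orientation, the sketch below reads both annotation layouts into Python dictionaries. The file names are placeholders, and the code assumes the column order given above with no header row.

```python
# Minimal sketch: file paths are placeholders; columns follow the order above.
def read_codiesp_dp(path):
    """CodiEsp-D / CodiEsp-P files: articleID <TAB> ICD10-code."""
    rows = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if not line.strip():
                continue
            article_id, code = line.rstrip("\n").split("\t")
            rows.append({"articleID": article_id, "code": code})
    return rows

def read_codiesp_x(path):
    """CodiEsp-X files: articleID, label, ICD10-code, text-reference, reference-position."""
    rows = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if not line.strip():
                continue
            article_id, label, code, reference, position = line.rstrip("\n").split("\t")
            rows.append({"articleID": article_id, "label": label, "code": code,
                         "text_reference": reference, "reference_position": position})
    return rows

diagnosis_codes = read_codiesp_dp("train/trainD.tsv")   # placeholder path
print(len(diagnosis_codes), "article-code pairs")
```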
## Corpus summary statistics
The final collection of 1,000 clinical cases that make up the corpus has a total of 16,504 sentences, with an average of 16.5 sentences per clinical case. It contains a total of 396,988 words, with an average of 396.2 words per clinical case.
For more information, visit the track webpage: http://temu.bsc.es/codiesp/ | Biomedical-TeMU/CodiEsp_corpus | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T02:19:32+00:00 | {"license": "cc-by-4.0"} | 2022-03-11T02:24:53+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
## Introduction
These are the train, development, test and background sets of the CodiEsp corpus. Train and development have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF ehealth 2020 (URL
The CodiEsp corpus contains manually coded clinical cases. All documents are in Spanish language and CIE10 is the coding terminology (it is the Spanish version of ICD10-CM and ICD10-PCS). The CodiEsp corpus has been randomly sampled into three subsets: the train, the development, and the test set. The train set contains 500 clinical cases, and the development and test set 250 clinical cases each. The test set contains 250 clinical cases and it is released together with the background set (with 2751 clinical cases). CodiEsp participants must submit predictions for the test and background set, but they will only be evaluated on the test set.
## Structure
Three folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.
+ train and dev folders have:
+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.
+ A subfolder named text_files with the plain text files of the clinical cases.
+ A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-splitted.
+ The test folder has only text_files and text_files_en subfolders with the plain text files.
## Corpus format description
The CodiEsp corpus is distributed in plain text in UTF8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it.
For the sub-tracks CodiEsp-D and CodiEsp-P, the file has the following fields:
articleID ICD10-code
Tab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text-reference and its position:
articleID label ICD10-code text-reference reference-position
## Corpus summary statistics
The final collection of 1000 clinical cases that make up the corpus had a total of 16504 sentences, with an average of 16.5 sentences per clinical case. It contains a total of 396,988 words, with an average of 396.2 words per clinical case.
For more information, visit the track webpage: URL | [
"## Introduction\r\nThese are the train, development, test and background sets of the CodiEsp corpus. Train and development have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF ehealth 2020 (URL\r\n\r\nThe CodiEsp corpus contains manually coded clinical cases. All documents are in Spanish language and CIE10 is the coding terminology (it is the Spanish version of ICD10-CM and ICD10-PCS). The CodiEsp corpus has been randomly sampled into three subsets: the train, the development, and the test set. The train set contains 500 clinical cases, and the development and test set 250 clinical cases each. The test set contains 250 clinical cases and it is released together with the background set (with 2751 clinical cases). CodiEsp participants must submit predictions for the test and background set, but they will only be evaluated on the test set.",
"## Structure\r\nThree folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.\r\n+ train and dev folders have:\r\n\t+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.\r\n\t+ A subfolder named text_files with the plain text files of the clinical cases.\r\n\t+ A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-splitted.\r\n+ The test folder has only text_files and text_files_en subfolders with the plain text files.",
"## Corpus format description\r\nThe CodiEsp corpus is distributed in plain text in UTF8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it. \r\n\r\nFor the sub-tracks CodiEsp-D and CodiEsp-P, the file has the following fields:\r\narticleID\tICD10-code\r\n\r\nTab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text-reference and its position:\r\narticleID\tlabel\tICD10-code\ttext-reference\treference-position",
"## Corpus summary statistics\r\nThe final collection of 1000 clinical cases that make up the corpus had a total of 16504 sentences, with an average of 16.5 sentences per clinical case. It contains a total of 396,988 words, with an average of 396.2 words per clinical case.\r\n\r\nFor more information, visit the track webpage: URL"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## Introduction\r\nThese are the train, development, test and background sets of the CodiEsp corpus. Train and development have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF ehealth 2020 (URL\r\n\r\nThe CodiEsp corpus contains manually coded clinical cases. All documents are in Spanish language and CIE10 is the coding terminology (it is the Spanish version of ICD10-CM and ICD10-PCS). The CodiEsp corpus has been randomly sampled into three subsets: the train, the development, and the test set. The train set contains 500 clinical cases, and the development and test set 250 clinical cases each. The test set contains 250 clinical cases and it is released together with the background set (with 2751 clinical cases). CodiEsp participants must submit predictions for the test and background set, but they will only be evaluated on the test set.",
"## Structure\r\nThree folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.\r\n+ train and dev folders have:\r\n\t+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.\r\n\t+ A subfolder named text_files with the plain text files of the clinical cases.\r\n\t+ A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-splitted.\r\n+ The test folder has only text_files and text_files_en subfolders with the plain text files.",
"## Corpus format description\r\nThe CodiEsp corpus is distributed in plain text in UTF8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it. \r\n\r\nFor the sub-tracks CodiEsp-D and CodiEsp-P, the file has the following fields:\r\narticleID\tICD10-code\r\n\r\nTab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text-reference and its position:\r\narticleID\tlabel\tICD10-code\ttext-reference\treference-position",
"## Corpus summary statistics\r\nThe final collection of 1000 clinical cases that make up the corpus had a total of 16504 sentences, with an average of 16.5 sentences per clinical case. It contains a total of 396,988 words, with an average of 396.2 words per clinical case.\r\n\r\nFor more information, visit the track webpage: URL"
] |
38ccb945600346d52580891d6d77f5c2bfaae069 | # PersianNER
Named-Entity Recognition in Persian Language
## ArmanPersoNERCorpus
This is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format.
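A minimal sketch of reading one fold into sentences of (token, tag) pairs follows; the fold file name is hypothetical and the exact field delimiter should be checked against the released files.

```python
def read_fold(path):
    """Read one fold: one 'token tag' pair per line, blank line between sentences."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if not line:                      # blank line marks the end of a sentence
                if current:
                    sentences.append(current)
                    current = []
                continue
            parts = line.split()              # delimiter (space vs. tab) is an assumption
            token, tag = " ".join(parts[:-1]), parts[-1]
            current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences

sentences = read_fold("train_fold1.txt")      # hypothetical file name
```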
According to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); the remaining tokens are tagged as other.
| Khedesh/ArmanNER | [
"region:us"
] | 2022-03-11T08:13:29+00:00 | {} | 2022-03-11T10:42:30+00:00 | [] | [] | TAGS
#region-us
| # PersianNER
Named-Entity Recognition in Persian Language
## ArmanPersoNERCorpus
This is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format.
According to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); other are the remaining tokens.
| [
"# PersianNER\nNamed-Entity Recognition in Persian Language",
"## ArmanPersoNERCorpus \nThis is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.\n\nThe dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format. \n\nAccording to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); other are the remaining tokens."
] | [
"TAGS\n#region-us \n",
"# PersianNER\nNamed-Entity Recognition in Persian Language",
"## ArmanPersoNERCorpus \nThis is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.\n\nThe dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format. \n\nAccording to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); other are the remaining tokens."
] |
04bb1414d14d63bffc026c6f12d047b7a3232930 | ## Dataset Description
- **Homepage:** https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
- **Paper:** https://arxiv.org/abs/1703.10593
### Dataset Summary
This dataset was obtained from the original CycleGAN Datasets directory available on [Berkeley's website](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For more details about the dataset you can refer to the [original CycleGAN publication](https://arxiv.org/abs/1703.10593).
### How to use
You can easily load the dataset with the following lines :
```python
from datasets import load_dataset
data_horses = load_dataset("gigant/horse2zebra", name="horse", split="train")
data_zebras = load_dataset("gigant/horse2zebra", name="zebra", split="train")
```
Two splits are available: `"train"` and `"test"`.
### Citation Information
```
@inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
}
``` | gigant/horse2zebra | [
"task_categories:image-to-image",
"license:cc",
"GAN",
"unpaired-image-to-image-translation",
"arxiv:1703.10593",
"region:us"
] | 2022-03-11T09:59:03+00:00 | {"license": "cc", "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Horse2Zebra", "tags": ["GAN", "unpaired-image-to-image-translation"]} | 2022-10-24T16:37:53+00:00 | [
"1703.10593"
] | [] | TAGS
#task_categories-image-to-image #license-cc #GAN #unpaired-image-to-image-translation #arxiv-1703.10593 #region-us
| ## Dataset Description
- Homepage: URL
- Paper: URL
### Dataset Summary
This dataset was obtained from the original CycleGAN Datasets directory available on Berkeley's website.
For more details about the dataset you can refer to the original CycleGAN publication.
### How to use
You can easily load the dataset with the following lines :
Two splits are available, '"train"' and '"test"'
| [
"## Dataset Description\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nThis dataset was obtained from the original CycleGAN Datasets directory available on Berkeley's website.\n\n\nFor more details about the dataset you can refer to the original CycleGAN publication.",
"### How to use\n\nYou can easily load the dataset with the following lines :\n\n\nTwo splits are available, '\"train\"' and '\"test\"'"
] | [
"TAGS\n#task_categories-image-to-image #license-cc #GAN #unpaired-image-to-image-translation #arxiv-1703.10593 #region-us \n",
"## Dataset Description\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nThis dataset was obtained from the original CycleGAN Datasets directory available on Berkeley's website.\n\n\nFor more details about the dataset you can refer to the original CycleGAN publication.",
"### How to use\n\nYou can easily load the dataset with the following lines :\n\n\nTwo splits are available, '\"train\"' and '\"test\"'"
] |
f90b0fced2b6b7d1fb3fcdb04cb5b754eafab378 | # GEM Submission
Submission name: Macro
| GEM-submissions/ratishsp__macro__1646998904 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-11T11:41:45+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "Macro", "tags": ["evaluation", "benchmark"]} | 2022-03-11T11:41:47+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: Macro
| [
"# GEM Submission\n\nSubmission name: Macro"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: Macro"
] |
b8e66595f3f7e20f5c2a6f69be3504d2e97d790b |
# Dataset Card for Zeel/common
| Zeel/common | [
"language:en",
"region:us"
] | 2022-03-12T00:18:08+00:00 | {"language": ["en"], "pretty_name": "common"} | 2022-10-25T09:22:40+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
|
# Dataset Card for Zeel/common
| [
"# Dataset Card for Zeel/common"
] | [
"TAGS\n#language-English #region-us \n",
"# Dataset Card for Zeel/common"
] |
ce7b8f1a30bfae5184e554a5bf44b76b9e8fc011 |
# CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
This repo contains the data for the NeurIPS 2021 benchmark [Constrained Language Understanding Evaluation Standard (CLUES)](https://openreview.net/pdf?id=VhIIQBm00VI).
## Leaderboard
We maintain a [Leaderboard](https://github.com/microsoft/CLUES) allowing researchers to submit their results as entries.
### Submission Instructions
- Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.
- The submission must attach an accompanying public paper and public source code for reproducing their results on our dataset.
- A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.
- For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.
- Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).
- The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.
- However, we allow external data, labeled or unlabeled, to be used for such purposes.
Each submission using external data must mark the corresponding columns "external labeled" and/or "external unlabeled".
Note, in this context, "external data" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from where we sampled the few-shot CLUES data.
- In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.
### Abbreviations
- FT = (classic) finetuning
- PT = prompt based tuning
- ICL = in-context learning, in the style of GPT-3
- μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.
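The sum-of-variance aggregation can be reproduced directly from the per-task standard deviations reported in the tables below; the sketch that follows is an illustration of that formula, not the benchmark's official scoring script.

```python
import math

def aggregate_std(task_stds):
    # Sum-of-variance rule: the aggregate variance is the sum of the per-task variances.
    return math.sqrt(sum(s ** 2 for s in task_stds))

# Reproducing the 30-shot aggregate deviation for T5-Large-770M-FT from the table below:
print(round(aggregate_std([2.9, 3.8, 0.1, 0.6, 2.7, 3.8]), 1))  # 6.7
```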
### Benchmarking CLUES for Aggregate 30-shot Evaluation
| Shots (K=30) | external labeled | external unlabeled | Average ▼ | SST-2 | MNLI | CoNLL03 | WikiANN | SQuAD-v2 | ReCoRD |
|-----------------------------------------------------------|-------------|---------------|-----------|-----------|----------|----------|----------|----------|----------|
| **Human** | N | N | 81.4 | 83.7 | 69.4 | 87.4 | 82.6 | 73.5 | 91.9 |
| T5-Large-770M-FT | N | N | 43.1±6.7 | 52.3±2.9 | 36.8±3.8 | 51.2±0.1 | 62.4±0.6 | 43.7±2.7 | 12±3.8 |
| BERT-Large-336M-FT | N | N | 42.1±7.8 | 55.4±2.5 | 33.3±1.4 | 51.3±0 | 62.5±0.6 | 35.3±6.4 | 14.9±3.4 |
| BERT-Base-110M-FT | N | N | 41.5±9.2 | 53.6±5.5 | 35.4±3.2 | 51.3±0 | 62.8±0 | 32.6±5.8 | 13.1±3.3 |
| DeBERTa-Large-400M-FT | N | N | 40.1±17.8 | 47.7±9.0 | 26.7±11 | 48.2±2.9 | 58.3±6.2 | 38.7±7.4 | 21.1±3.6 |
| RoBERTa-Large-355M-FT | N | N | 40.0±10.6 | 53.2±5.6 | 34.0±1.1 | 44.7±2.6 | 48.4±6.7 | 43.5±4.4 | 16±2.8 |
| RoBERTa-Large-355M-PT | N | N | | 90.2±1.8 | 61.6±3.5 | | | | |
| DeBERTa-Large-400M-PT | N | N | | 88.4±3.3 | 62.9±3.1 | | | | |
| BERT-Large-336M-PT | N | N | | 82.7±4.1 | 45.3±2.0 | | | | |
| GPT3-175B-ICL | N | N | | 91.0±1.6 | 33.2±0.2 | | | | |
| BERT-Base-110M-PT | N | N | | 79.4±5.6 | 42.5±3.2 | | | | |
| [LiST (Wang et al.)](https://github.com/microsoft/LiST) | N | Y | | 91.3 ±0.7 | 67.9±3.0 | | | | |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 |
### Individual Task Performance over Multiple Shots
#### SST-2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|----------------------------------------|------------------|--------------------|-----------|-----------|----------|------|
| GPT-3 (175B) ICL | N | N | 85.9±3.7 | 92.0±0.7 | 91.0±1.6 | - |
| RoBERTa-Large PT | N | N | 88.8±3.9 | 89.0±1.1 | 90.2±1.8 | 93.8 |
| DeBERTa-Large PT | N | N | 83.4±5.3 | 87.8±3.5 | 88.4±3.3 | 91.9 |
| **Human** | N | N | 79.8 | 83 | 83.7 | - |
| BERT-Large PT | N | N | 63.2±11.3 | 78.2±9.9 | 82.7±4.1 | 91 |
| BERT-Base PT | N | N | 63.9±10.0 | 76.7±6.6 | 79.4±5.6 | 91.9 |
| BERT-Large FT | N | N | 46.3±5.5 | 55.5±3.4 | 55.4±2.5 | 99.1 |
| BERT-Base FT | N | N | 46.2±5.6 | 54.0±2.8 | 53.6±5.5 | 98.1 |
| RoBERTa-Large FT | N | N | 38.4±21.7 | 52.3±5.6 | 53.2±5.6 | 98.6 |
| T5-Large FT | N | N | 51.2±1.8 | 53.4±3.2 | 52.3±2.9 | 97.6 |
| DeBERTa-Large FT | N | N | 43.0±11.9 | 40.8±22.6 | 47.7±9.0 | 100 |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | - |
#### MNLI
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---------------------------------------------------------|------------------|--------------------|-----------|-----------|-----------|------|
| **Human** | N | Y | 78.1 | 78.6 | 69.4 | - |
| [LiST (wang et al.)](https://github.com/microsoft/LiST) | N | N | 60.5±8.3 | 67.2±4.5 | 67.9±3.0 | - |
| DeBERTa-Large PT | N | N | 44.5±8.2 | 60.7±5.3 | 62.9±3.1 | 88.1 |
| RoBERTa-Large PT | N | N | 57.7±3.6 | 58.6±2.9 | 61.6±3.5 | 87.1 |
| BERT-Large PT | N | N | 41.7±1.0 | 43.7±2.1 | 45.3±2.0 | 81.9 |
| BERT-Base PT | N | N | 40.4±1.8 | 42.1±4.4 | 42.5±3.2 | 81 |
| T5-Large FT | N | N | 39.8±3.3 | 37.9±4.3 | 36.8±3.8 | 85.9 |
| BERT-Base FT | N | N | 37.0±5.2 | 35.2±2.7 | 35.4±3.2 | 81.6 |
| RoBERTa-Large FT | N | N | 34.3±2.8 | 33.4±0.9 | 34.0±1.1 | 85.5 |
| BERT-Large FT | N | N | 33.7±0.4 | 28.2±14.8 | 33.3±1.4 | 80.9 |
| GPT-3 (175B) ICL | N | N | 33.5±0.7 | 33.1±0.3 | 33.2±0.2 | - |
| DeBERTa-Large FT | N | N | 27.4±14.1 | 33.6±2.5 | 26.7±11.0 | 87.6 |
#### CoNLL03
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 87.7 | 89.7 | 87.4 | - |
| BERT-Base FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | - |
| BERT-Large FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | 89.3 |
| T5-Large FT | N | N | 46.3±6.9 | 50.0±0.7 | 51.2±0.1 | 92.2 |
| DeBERTa-Large FT | N | N | 50.1±1.2 | 47.8±2.5 | 48.2±2.9 | 93.6 |
| RoBERTa-Large FT | N | N | 50.8±0.5 | 44.6±5.1 | 44.7±2.6 | 93.2 |
#### WikiANN
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 81.4 | 83.5 | 82.6 | - |
| BERT-Base FT | N | N | 62.8±0 | 62.8±0 | 62.8±0 | 88.8 |
| BERT-Large FT | N | N | 62.8±0 | 62.6±0.4 | 62.5±0.6 | 91 |
| T5-Large FT | N | N | 61.7±0.7 | 62.1±0.2 | 62.4±0.6 | 87.4 |
| DeBERTa-Large FT | N | N | 58.5±3.3 | 57.9±5.8 | 58.3±6.2 | 91.1 |
| RoBERTa-Large FT | N | N | 58.5±8.8 | 56.9±3.4 | 48.4±6.7 | 91.2 |
#### SQuAD v2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|-----------|----------|------|
| **Human** | N | N | 71.9 | 76.4 | 73.5 | - |
| T5-Large FT | N | N | 43.6±3.5 | 28.7±13.0 | 43.7±2.7 | 87.2 |
| RoBERTa-Large FT | N | N | 38.1±7.2 | 40.1±6.4 | 43.5±4.4 | 89.4 |
| DeBERTa-Large FT | N | N | 41.4±7.3 | 44.4±4.5 | 38.7±7.4 | 90 |
| BERT-Large FT | N | N | 42.3±5.6 | 35.8±9.7 | 35.3±6.4 | 81.8 |
| BERT-Base FT | N | N | 46.0±2.4 | 34.9±9.0 | 32.6±5.8 | 76.3 |
#### ReCoRD
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 94.1 | 94.2 | 91.9 | - |
| DeBERTa-Large FT | N | N | 15.7±5.0 | 16.8±5.7 | 21.1±3.6 | 80.7 |
| RoBERTa-Large FT | N | N | 12.0±1.9 | 9.9±6.2 | 16.0±2.8 | 80.3 |
| BERT-Large FT | N | N | 9.9±5.2 | 11.8±4.9 | 14.9±3.4 | 66 |
| BERT-Base FT | N | N | 10.3±1.8 | 11.7±2.4 | 13.1±3.3 | 54.4 |
| T5-Large FT | N | N | 11.9±2.7 | 11.7±1.5 | 12.0±3.8 | 77.3 |
## How do I cite CLUES?
```
@article{cluesteam2021,
title={Few-Shot Learning Evaluation in Natural Language Understanding},
author={Mukherjee, Subhabrata and Liu, Xiaodong and Zheng, Guoqing and Hosseini, Saghar and Cheng, Hao and Yang, Greg and Meek, Christopher and Awadallah, Ahmed Hassan and Gao, Jianfeng},
booktitle = {NeurIPS 2021},
year = {2021},
month = {December},
url = {https://www.microsoft.com/en-us/research/publication/clues-few-shot-learning-evaluation-in-natural-language-understanding/},
}
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
| microsoft/CLUES | [
"license:mit",
"region:us"
] | 2022-03-12T01:26:23+00:00 | {"license": "mit"} | 2022-03-25T22:05:58+00:00 | [] | [] | TAGS
#license-mit #region-us
| CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
=====================================================================
This repo contains the data for the NeurIPS 2021 benchmark Constrained Language Understanding Evaluation Standard (CLUES).
Leaderboard
-----------
We maintain a Leaderboard allowing researchers to submit their results as entries.
### Submission Instructions
* Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.
* The submission must attach an accompanying public paper and public source code for reproducing their results on our dataset.
* A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.
* For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.
* Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).
* The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.
* However, we allow external data, labeled or unlabeled, to be used for such purposes.
Each submission using external data must mark the corresponding columns "external labeled" and/or "external unlabeled".
Note, in this context, "external data" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from where we sampled the few-shot CLUES data.
* In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.
### Abbreviations
* FT = (classic) finetuning
* PT = prompt based tuning
* ICL = in-context learning, in the style of GPT-3
* μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.
### Benchmarking CLUES for Aggregate 30-shot Evaluation
### Individual Task Performance over Multiple Shots
#### SST-2
#### MNLI
#### CoNLL03
#### WikiANN
#### SQuAD v2
#### ReCoRD
How do I cite CLUES?
--------------------
Contributing
------------
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit URL.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or
contact opencode@URL with any additional questions or comments.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
Microsoft's Trademark & Brand Guidelines.
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
| [
"### Submission Instructions\n\n\n* Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.\n* The submission must attach an accompanying public paper and public source code for reproducing their results on our dataset.\n* A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.\n* For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.\n* Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).\n* The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.\n* However, we allow external data, labeled or unlabeled, to be used for such purposes.\nEach submission using external data must mark the corresponding columns \"external labeled\" and/or \"external unlabeled\".\nNote, in this context, \"external data\" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from where we sampled the few-shot CLUES data.\n* In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.",
"### Abbreviations\n\n\n* FT = (classic) finetuning\n* PT = prompt based tuning\n* ICL = in-context learning, in the style of GPT-3\n* μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.",
"### Benchmarking CLUES for Aggregate 30-shot Evaluation",
"### Individual Task Performance over Multiple Shots",
"#### SST-2",
"#### MNLI",
"#### CoNLL03",
"#### WikiANN",
"#### SQuAD v2",
"#### ReCoRD\n\n\n\nHow do I cite CLUES?\n--------------------\n\n\nContributing\n------------\n\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit URL.\n\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\n\nThis project has adopted the Microsoft Open Source Code of Conduct.\nFor more information see the Code of Conduct FAQ or\ncontact opencode@URL with any additional questions or comments.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft\ntrademarks or logos is subject to and must follow\nMicrosoft's Trademark & Brand Guidelines.\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies."
] | [
"TAGS\n#license-mit #region-us \n",
"### Submission Instructions\n\n\n* Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.\n* The submission must attach an accompanying public paper and public source code for reproducing their results on our dataset.\n* A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.\n* For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.\n* Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).\n* The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.\n* However, we allow external data, labeled or unlabeled, to be used for such purposes.\nEach submission using external data must mark the corresponding columns \"external labeled\" and/or \"external unlabeled\".\nNote, in this context, \"external data\" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from where we sampled the few-shot CLUES data.\n* In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.",
"### Abbreviations\n\n\n* FT = (classic) finetuning\n* PT = prompt based tuning\n* ICL = in-context learning, in the style of GPT-3\n* μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.",
"### Benchmarking CLUES for Aggregate 30-shot Evaluation",
"### Individual Task Performance over Multiple Shots",
"#### SST-2",
"#### MNLI",
"#### CoNLL03",
"#### WikiANN",
"#### SQuAD v2",
"#### ReCoRD\n\n\n\nHow do I cite CLUES?\n--------------------\n\n\nContributing\n------------\n\n\nThis project welcomes contributions and suggestions. Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit URL.\n\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\n\nThis project has adopted the Microsoft Open Source Code of Conduct.\nFor more information see the Code of Conduct FAQ or\ncontact opencode@URL with any additional questions or comments.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft\ntrademarks or logos is subject to and must follow\nMicrosoft's Trademark & Brand Guidelines.\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos are subject to those third-party's policies."
] |
3c70f2fe25f7c73d2460f77a4c3f8b1aa8a6e819 |
# Review Hotel in Indonesia
### Dataset Summary
Data about reviews of hotels in Indonesia
### Languages
Indonesian
## Dataset Structure
### Data Fields
- review_id : unique identification code of each review
- review_text : the main text of the review
- category : label for each review, positive (1) or negative (0)
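A minimal loading sketch with the Hugging Face `datasets` library is shown below; the repository id is taken from this card, while the split name is an assumption to verify against the hosted files.

```python
from datasets import load_dataset

# Split name is assumed; adjust to whatever splits the repository actually exposes.
reviews = load_dataset("rakkaalhazimi/hotel-review", split="train")

example = reviews[0]
print(example["review_id"], example["category"])   # category: 1 = positive, 0 = negative
print(example["review_text"][:100])
```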
| rakkaalhazimi/hotel-review | [
"license:gpl-3.0",
"region:us"
] | 2022-03-12T05:52:57+00:00 | {"license": "gpl-3.0"} | 2022-03-12T07:23:47+00:00 | [] | [] | TAGS
#license-gpl-3.0 #region-us
|
# Review Hotel in Indonesia
### Dataset Summary
Data about reviews of hotels in Indonesia
### Languages
Indonesia
## Dataset Structure
### Data Fields
- review_id : unique identification code of each review
- review_text : the main review of text
- category : label for each review, positive (1) or negative (0)
| [
"# Review Hotel in Indonesia",
"### Dataset Summary\r\n\r\nData about reviews of hotels in Indonesia",
"### Languages\r\n\r\nIndonesia",
"## Dataset Structure",
"### Data Fields\r\n\r\n- review_id : unique identification code of each review\r\n- review_text : the main review of text\r\n- category : label for each review, positive (1) or negative (0)"
] | [
"TAGS\n#license-gpl-3.0 #region-us \n",
"# Review Hotel in Indonesia",
"### Dataset Summary\r\n\r\nData about reviews of hotels in Indonesia",
"### Languages\r\n\r\nIndonesia",
"## Dataset Structure",
"### Data Fields\r\n\r\n- review_id : unique identification code of each review\r\n- review_text : the main review of text\r\n- category : label for each review, positive (1) or negative (0)"
] |
67f9dbf9e17ada0dcdc47e05ad9b37ed01f8e82f |
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities and protein-ligand contacts (ligand atom/SMILES token vs. Calpha within 5 Angstrom). These
are represented by a list that contains the positions of non-zero elements of the flattened, sparse
sequence x smiles tokens (2048x512) matrix. The first and last entries in both dimensions
are padded to zero, they correspond to [CLS] and [SEP].
It can be used for fine-tuning a language model.
The data is sourced solely from PDBbind-CN.
Contacts are calculated at four cut-off distances: 5, 8, 11A and 15A.
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/protein_ligand_contacts",split='train[:90%]')
validation = load_dataset("jglaser/protein_ligand_contacts",split='train[90%:]')
```
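Given a loaded example, the flattened contact indices described above can be turned back into a dense 2048 x 512 sequence-by-SMILES-token matrix. The sketch below assumes the column holding those indices is named `contacts`; the actual feature name should be checked against the dataset.

```python
import numpy as np

SEQ_LEN, SMILES_LEN = 2048, 512

def to_dense(flat_indices):
    """Rebuild the sparse contact map from the flattened non-zero positions."""
    dense = np.zeros((SEQ_LEN, SMILES_LEN), dtype=np.int8)
    rows, cols = np.unravel_index(np.asarray(flat_indices, dtype=np.int64),
                                  (SEQ_LEN, SMILES_LEN))
    dense[rows, cols] = 1
    return dense

example = train[0]                              # from the split loaded above
contact_map = to_dense(example["contacts"])     # column name is an assumption
print(contact_map.sum(), "contacts in this complex")
```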
### Pre-process yourself
To manually perform the preprocessing, download the data sets from PDBbind-CN.
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
Perform the steps in the notebook `pdbbind.ipynb`
| jglaser/protein_ligand_contacts | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | 2022-03-12T07:09:53+00:00 | {"tags": ["molecules", "chemistry", "SMILES"]} | 2022-03-15T21:17:32+00:00 | [] | [] | TAGS
#molecules #chemistry #SMILES #region-us
|
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities and protein-ligand contacts (ligand atom/SMILES token vs. Calpha within 5 Angstrom). These
are represented by a list that contains the positions of non-zero elements of the flattened, sparse
sequence x smiles tokens (2048x512) matrix. The first and last entries in both dimensions
are padded to zero, they correspond to [CLS] and [SEP].
It can be used for fine-tuning a language model.
The data solely uses data from PDBind-cn.
Contacts are calculated at four cut-off distances: 5, 8, 11A and 15A.
### Use the already preprocessed data
Load a test/train split using
### Pre-process yourself
To manually perform the preprocessing, download the data sets from P.DBBind-cn
Register for an account at <URL confirm the validation
email, then login and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in 'pdbbind/data'
Run the script 'URL' in a compute job on an MPI-enabled cluster
(e.g., 'mpirun -n 64 URL').
Perform the steps in the notebook 'URL'
| [
"## How to use the data sets\n\nThis dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES with experimentally determined\nbinding affinities and protein-ligand contacts (ligand atom/SMILES token vs. Calpha within 5 Angstrom). These\nare represented by a list that contains the positions of non-zero elements of the flattened, sparse\nsequence x smiles tokens (2048x512) matrix. The first and last entries in both dimensions\nare padded to zero, they correspond to [CLS] and [SEP].\n\nIt can be used for fine-tuning a language model.\n\nThe data solely uses data from PDBind-cn.\n\nContacts are calculated at four cut-off distances: 5, 8, 11A and 15A.",
"### Use the already preprocessed data\n\nLoad a test/train split using",
"### Pre-process yourself\n\nTo manually perform the preprocessing, download the data sets from P.DBBind-cn\n\nRegister for an account at <URL confirm the validation\nemail, then login and download \n\n- the Index files (1)\n- the general protein-ligand complexes (2)\n- the refined protein-ligand complexes (3)\n\nExtract those files in 'pdbbind/data'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'"
] | [
"TAGS\n#molecules #chemistry #SMILES #region-us \n",
"## How to use the data sets\n\nThis dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES with experimentally determined\nbinding affinities and protein-ligand contacts (ligand atom/SMILES token vs. Calpha within 5 Angstrom). These\nare represented by a list that contains the positions of non-zero elements of the flattened, sparse\nsequence x smiles tokens (2048x512) matrix. The first and last entries in both dimensions\nare padded to zero, they correspond to [CLS] and [SEP].\n\nIt can be used for fine-tuning a language model.\n\nThe data solely uses data from PDBind-cn.\n\nContacts are calculated at four cut-off distances: 5, 8, 11A and 15A.",
"### Use the already preprocessed data\n\nLoad a test/train split using",
"### Pre-process yourself\n\nTo manually perform the preprocessing, download the data sets from P.DBBind-cn\n\nRegister for an account at <URL confirm the validation\nemail, then login and download \n\n- the Index files (1)\n- the general protein-ligand complexes (2)\n- the refined protein-ligand complexes (3)\n\nExtract those files in 'pdbbind/data'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL').\n\nPerform the steps in the notebook 'URL'"
] |
eeaa09638c5722e13fffd2daeaba4c2bec824d41 | # Dataset Card for Genecorpus-30M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Species](#species)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Point of Contact:** [email protected]
### Dataset Summary
We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
See [our manuscript](https://rdcu.be/ddrx0) for details.
### Supported Tasks
This corpus was used for pretraining [Geneformer](https://rdcu.be/ddrx0) and is compatible with pretraining or fine-tuning Geneformer or similar models.
### Species
Homo sapiens
## Dataset Structure
### Data Instances
Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.
To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.
The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Genecorpus-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).
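As a rough sketch of the procedure described above (not the official tokenizer), rank value encoding of one cell could look like the following; the nonzero-median lookup and variable names are assumptions, and only token_dictionary.pkl is named in this card, so its key orientation should be verified.

```python
import pickle
import numpy as np

# Token dictionary shipped with the corpus; assumed here to map Ensembl gene IDs to token ids.
with open("token_dictionary.pkl", "rb") as handle:
    token_dictionary = pickle.load(handle)

def rank_value_encode(gene_ids, counts, nonzero_medians):
    """Depth-normalize a cell, divide by each gene's corpus-wide nonzero median,
    then order genes from highest to lowest normalized value and map to tokens."""
    counts = np.asarray(counts, dtype=np.float64)
    normalized = counts / counts.sum()                                   # per-cell depth normalization
    normalized = normalized / np.asarray([nonzero_medians[g] for g in gene_ids])
    order = np.argsort(-normalized)                                      # highest value first
    return [token_dictionary[gene_ids[i]] for i in order]
```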
### Data Fields
- `input_ids`: rank value encoding for an example cell
- `lengths`: length of rank value encoding for that example cell
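Because the corpus is distributed in the Hugging Face Datasets (Apache Arrow) format, a local copy can be inspected roughly as follows; the on-disk path is an assumption, and `load_from_disk` is simply the usual entry point for a saved Arrow dataset.

```python
from datasets import load_from_disk

corpus = load_from_disk("genecorpus_30M_2048.dataset")   # local path is an assumption
cell = corpus[0]
print(cell["lengths"], cell["input_ids"][:10])           # rank value encoding, highest-ranked genes first
```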
### Data Splits
The dataset does not contain any predefined splits.
## Dataset Creation
### Curation Rationale
Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
### Source Data
#### Initial Data Collection and Normalization
Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with less than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in [Data Instances](#data-instances).
#### Who are the source data producers?
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, Refine.bio, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023.
### Annotations
#### Annotation process
Genecorpus-30M does not contain annotations.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.
## Considerations for Using the Data
### Social Impact of Dataset
Genecorpus-30M enabled the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.
### Discussion of Biases
We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.
### Other Known Limitations
Genecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.
## Additional Information
### Dataset Curators
Christina Theodoris, MD, PhD
### Citation Information
Theodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print.
(*co-corresponding authors)
| ctheodoris/Genecorpus-30M | [
"license:apache-2.0",
"region:us"
] | 2022-03-12T21:21:46+00:00 | {"license": "apache-2.0"} | 2023-11-11T06:42:26+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| # Dataset Card for Genecorpus-30M
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Species
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Citation Information
## Dataset Description
- Point of Contact: christina.theodoris@URL
### Dataset Summary
We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
See our manuscript for details.
### Supported Tasks
This corpus was used for pretraining Geneformer and is compatible with pretraining or fine-tuning Geneformer or similar models.
### Species
Homo sapiens
## Dataset Structure
### Data Instances
Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.
To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.
The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Geneformer-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).
### Data Fields
- 'input_ids': rank value encoding for an example cell
- 'lengths': length of rank value encoding for that example cell
### Data Splits
The dataset does not contain any predefined splits.
## Dataset Creation
### Curation Rationale
Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
### Source Data
#### Initial Data Collection and Normalization
Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with less than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in Data Instances.
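Expressed as code, the per-cell quality filters described above look roughly like the sketch below; the function and variable names are assumptions for illustration, and the actual corpus assembly pipeline is not reproduced here.

```python
def passes_quality_filters(total_counts, mito_counts, n_detected_genes,
                           counts_mean, counts_std, mito_mean, mito_std):
    """Per-cell quality filters sketched from the description above."""
    # total read counts within three standard deviations of the dataset mean
    counts_ok = abs(total_counts - counts_mean) <= 3 * counts_std
    # mitochondrial reads within three standard deviations of the dataset mean
    mito_ok = abs(mito_counts - mito_mean) <= 3 * mito_std
    # at least seven detected protein-coding or miRNA genes
    genes_ok = n_detected_genes >= 7
    return counts_ok and mito_ok and genes_ok
```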
#### Who are the source data producers?
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, URL, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023.
### Annotations
#### Annotation process
Genecorpus-30M does not contain annotations.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.
## Considerations for Using the Data
### Social Impact of Dataset
Genecorpus-30M enabled the large-scale pretraining of Geneformer, a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.
### Discussion of Biases
We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.
### Other Known Limitations
Genecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.
## Additional Information
### Dataset Curators
Christina Theodoris, MD, PhD
Theodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print.
(*co-corresponding authors)
| [
"# Dataset Card for Genecorpus-30M",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Species\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Citation Information",
"## Dataset Description\n\n\n- Point of Contact: christina.theodoris@URL",
"### Dataset Summary\n\nWe assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.\n\nSee our manuscript for details.",
"### Supported Tasks\n\nThis corpus was used for pretraining Geneformer and is compatible with pretraining or fine-tuning Geneformer or similar models.",
"### Species\n\nHomo sapiens",
"## Dataset Structure",
"### Data Instances\n\nGenecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.\n\nTo accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.\n\nThe rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Geneformer-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).",
"### Data Fields\n\n- 'input_ids': rank value encoding for an example cell\n- 'lengths': length of rank value encoding for that example cell",
"### Data Splits\n\nThe dataset does not contain any predefined splits.",
"## Dataset Creation",
"### Curation Rationale\n\nMapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nSource data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with less than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in Data Instances.",
"#### Who are the source data producers?\n\nPublicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, URL, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023.",
"### Annotations",
"#### Annotation process\n\nGeneformer-30M does not contain annotations.",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nThere is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nGenecorpus-30M enabled the large-scale pretraining of Geneformer, a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.",
"### Discussion of Biases\n\nWe excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.",
"### Other Known Limitations\n\nGenecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.",
"## Additional Information",
"### Dataset Curators\n\nChristina Theodoris, MD, PhD\n\n\n\nTheodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print. \n(*co-corresponding authors)"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset Card for Genecorpus-30M",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Species\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Citation Information",
"## Dataset Description\n\n\n- Point of Contact: christina.theodoris@URL",
"### Dataset Summary\n\nWe assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.\n\nSee our manuscript for details.",
"### Supported Tasks\n\nThis corpus was used for pretraining Geneformer and is compatible with pretraining or fine-tuning Geneformer or similar models.",
"### Species\n\nHomo sapiens",
"## Dataset Structure",
"### Data Instances\n\nGenecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.\n\nTo accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.\n\nThe rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Geneformer-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).",
"### Data Fields\n\n- 'input_ids': rank value encoding for an example cell\n- 'lengths': length of rank value encoding for that example cell",
"### Data Splits\n\nThe dataset does not contain any predefined splits.",
"## Dataset Creation",
"### Curation Rationale\n\nMapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nSource data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with less than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in Data Instances.",
"#### Who are the source data producers?\n\nPublicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, URL, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023.",
"### Annotations",
"#### Annotation process\n\nGeneformer-30M does not contain annotations.",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nThere is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nGenecorpus-30M enabled the large-scale pretraining of Geneformer, a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.",
"### Discussion of Biases\n\nWe excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.",
"### Other Known Limitations\n\nGenecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.",
"## Additional Information",
"### Dataset Curators\n\nChristina Theodoris, MD, PhD\n\n\n\nTheodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print. \n(*co-corresponding authors)"
] |
9d24e08b068f24f80d9b3679e3806fe1c1be8fb3 | # catalonian independence tweet dataset
This dataset is a port of the official ['catalonia_independence' dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It has just the Catalan language version. | SetFit/catalonia_independence_ca | [
"region:us"
] | 2022-03-13T02:43:15+00:00 | {} | 2022-03-13T09:10:29+00:00 | [] | [] | TAGS
#region-us
| #catalonian independence tweet dataset
This dataset is a port of the official ['catalonia_independence' dataset] (URL on the Hub. It has just the Catalan language version. | [] | [
"TAGS\n#region-us \n"
] |
4d0ae2a3df2769cd4eff981ae8184b9fd72b0798 | # catalonian independence tweet dataset
This dataset is a port of the official ['catalonia_independence' dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It has just the Spanish language version. | SetFit/catalonia_independence_es | [
"region:us"
] | 2022-03-13T02:44:02+00:00 | {} | 2022-03-13T09:11:31+00:00 | [] | [] | TAGS
#region-us
| #catalonian independence tweet dataset
This dataset is a port of the official ['catalonia_independence' dataset] (URL on the Hub. It has just the Spanish language version. | [] | [
"TAGS\n#region-us \n"
] |
7fa32cf76b45dceb224903152c34dfa13718dfb2 | # xglue nc
This dataset is a port of the official ['xglue' dataset](https://huggingface.co/datasets/xglue) on the Hub. It has just the news category classification section. It has been reduced to just 3 columns (plus a text label) that are relevant to the SetFit task. Validation and test sets are in English, Spanish, French, Russian, and German. | SetFit/xglue_nc | [
"region:us"
] | 2022-03-13T02:44:23+00:00 | {} | 2022-03-14T03:27:58+00:00 | [] | [] | TAGS
#region-us
| #xglue nc
This dataset is a port of the official ['xglue' dataset] (URL on the Hub. It has just the news category classification section. It has been reduced to just 3 columns (plus text label) that are relevant to the SetFit task. Validation and test in English, Spanish, French, Russian, and German. | [] | [
"TAGS\n#region-us \n"
] |
bb25d49f17c86f7affb193c18e0511afcd51b933 | # amazon reviews multi german
This dataset is a port of the official ['amazon_reviews_multi' dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the German language version. It has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_de | [
"region:us"
] | 2022-03-13T02:45:18+00:00 | {} | 2022-03-23T15:34:53+00:00 | [] | [] | TAGS
#region-us
| #amazon reviews multi german
This dataset is a port of the official ['amazon_reviews_multi' dataset] (URL on the Hub. It has just the German language version. It has been reduced to just 3 columns (and 4th "label_text") that are relevant to the SetFit task. | [] | [
"TAGS\n#region-us \n"
] |
16015418b488c9186fce74b058877ea939ca934d | # amazon reviews multi spanish
This dataset is a port of the official ['amazon_reviews_multi' dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the Spanish language version. It has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_es | [
"region:us"
] | 2022-03-13T02:45:47+00:00 | {} | 2022-03-23T15:43:09+00:00 | [] | [] | TAGS
#region-us
| #amazon reviews multi spanish
This dataset is a port of the official ['amazon_reviews_multi' dataset] (URL on the Hub. It has just the Spanish language version. It has been reduced to just 3 columns (and 4th "label_text") that are relevant to the SetFit task. | [] | [
"TAGS\n#region-us \n"
] |
77676678b2e9e03265aae02823ba2f77b531d11a | # amazon reviews multi japanese
This dataset is a port of the official ['amazon_reviews_multi' dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the Japanese language version. It has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_ja | [
"region:us"
] | 2022-03-13T02:46:28+00:00 | {} | 2022-03-23T15:40:06+00:00 | [] | [] | TAGS
#region-us
| #amazon reviews multi japanese
This dataset is a port of the official ['amazon_reviews_multi' dataset] (URL on the Hub. It has just the Japanese language version. It has been reduced to just 3 columns (and 4th "label_text") that are relevant to the SetFit task. | [] | [
"TAGS\n#region-us \n"
] |
184ac90d5511a7f6801cba99688892f440ece660 | # amazon reviews multi chinese
This dataset is a port of the official ['amazon_reviews_multi' dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the Chinese language version. It has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_zh | [
"region:us"
] | 2022-03-13T02:46:40+00:00 | {} | 2022-03-23T15:30:49+00:00 | [] | [] | TAGS
#region-us
| #amazon reviews multi chinese
This dataset is a port of the official ['amazon_reviews_multi' dataset] (URL on the Hub. It has just the Chinese language version. It has been reduced to just 3 columns (and 4th "label_text") that are relevant to the SetFit task. | [] | [
"TAGS\n#region-us \n"
] |
3a43b31171a667fb0bb7a298e143fd022266f78b | # amazon reviews multi french
This dataset is a port of the official ['amazon_reviews_multi' dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the French language version. It has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_fr | [
"region:us"
] | 2022-03-13T02:48:20+00:00 | {} | 2022-03-23T15:45:44+00:00 | [] | [] | TAGS
#region-us
| #amazon reviews multi french
This dataset is a port of the official ['amazon_reviews_multi' dataset] (URL on the Hub. It has just the French language version. It has been reduced to just 3 columns (and 4th "label_text") that are relevant to the SetFit task. | [] | [
"TAGS\n#region-us \n"
] |
7c9a79666d13e6d27ee74279fccdca11decbfb5d | # toy dataset
This is a small portion of the full dataset, used for testing and formatting purposes.
| multiIR/toy_data | [
"region:us"
] | 2022-03-13T04:08:34+00:00 | {} | 2022-03-14T10:33:27+00:00 | [] | [] | TAGS
#region-us
| #toy dataset
This is a small portion of the full dataset, used for testing and formatting purposes.
| [] | [
"TAGS\n#region-us \n"
] |
f9dd0d78228c6840ae9d97ffb7b8d6dfbbbc8634 |
The `post-data-by-subreddit.tar` file contains 5000 gzipped json files - one for each of the top 5000 subreddits (as roughly measured by subscriber count and comment activity). Each of those json files (e.g. `askreddit.json`) contains an array of the data for the top 1000 posts of all time.
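A minimal sketch of walking through the archive is shown below; the member naming and compression handling are assumptions based only on the description above (5000 gzipped JSON files, one array of top posts per subreddit).

```python
import gzip
import json
import tarfile

# Assumes the archive layout described above; whether members carry a .gz suffix
# is not specified, so decompression falls back to plain JSON if needed.
with tarfile.open("post-data-by-subreddit.tar") as archive:
    for member in archive.getmembers():
        if not member.isfile():
            continue
        raw = archive.extractfile(member).read()
        try:
            raw = gzip.decompress(raw)
        except gzip.BadGzipFile:
            pass  # member was stored uncompressed after all
        posts = json.loads(raw)  # array of a subreddit's top posts
        print(member.name, len(posts))
```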
Notes:
* I stopped crawling a subreddit's top-posts list if I reached a batch that had a post with a score less than 5, so some subreddits won't have the full 1000 posts.
* No post comments are included. Only the posts themselves.
* See the example file `askreddit.json` in this repo if you want to see what you're getting before downloading all the data.
* The list of subreddits included are listed in `top-5k-subreddits.json`.
* NSFW subreddits have been included in the crawl, so you might have to filter them out depending on your use case.
* The Deno scraping/crawling script is included as `crawl.js`, and can be started with `deno run --allow-net --allow-read=. --allow-write=. crawl.js` once you've [installed Deno](https://deno.land/manual/getting_started/installation) and have downloaded `top-5k-subreddits.json` into the same folder as `crawl.js`. | rocca/top-reddit-posts | [
"license:mit",
"region:us"
] | 2022-03-13T05:06:55+00:00 | {"license": "mit"} | 2022-03-23T05:16:33+00:00 | [] | [] | TAGS
#license-mit #region-us
|
The 'URL' file contains 5000 gzipped json files - one for each of the top 5000 subreddits (as roughly measured by subscriber count and comment activity). Each of those json files (e.g. 'URL') contains an array of the data for the top 1000 posts of all time.
Notes:
* I stopped crawling a subreddit's top-posts list if I reached a batch that had a post with a score less than 5, so some subreddits won't have the full 1000 posts.
* No post comments are included. Only the posts themselves.
* See the example file 'URL' in this repo if you want to see what you're getting before downloading all the data.
* The list of subreddits included are listed in 'URL'.
* NSFW subreddits have been included in the crawl, so you might have to filter them out depending on your use case.
* The Deno scraping/crawling script is included as 'URL', and can be started with 'deno run --allow-net --allow-read=. --allow-write=. URL' once you've installed Deno and have downloaded 'URL' into the same folder as 'URL'. | [] | [
"TAGS\n#license-mit #region-us \n"
] |
dd5650eb094112f8913c5c9f907e43008aeb52cf | From the Evaluating Student Writing Kaggle competition. | carbon12/evaluating_student_writing | [
"region:us"
] | 2022-03-13T05:16:30+00:00 | {} | 2022-03-13T13:03:06+00:00 | [] | [] | TAGS
#region-us
| From the Evaluating Student Writing Kaggle competition. | [] | [
"TAGS\n#region-us \n"
] |
749b7eac6d013c77d95ba1b744bb88ac436ca48b | This dataset contains MFCC features extracted from 646 short speech audio clips | Parmann/speech_classification | [
"region:us"
] | 2022-03-13T08:30:16+00:00 | {} | 2022-03-13T08:32:04+00:00 | [] | [] | TAGS
#region-us
| This dataset contains MFCC features extracted from 646 short speech audio clips | [] | [
"TAGS\n#region-us \n"
] |
088baa7f2aa235290fb8a35850cee1e70bd5ce25 | # text-to-text format from superglue axg
# Note that the RTE train and val sets have been added
axg: DatasetDict({
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 356
})
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 2490
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 277
})
}) | stjokerli/TextToText_axg_seqio | [
"region:us"
] | 2022-03-13T10:08:17+00:00 | {} | 2022-04-04T09:24:18+00:00 | [] | [] | TAGS
#region-us
| # text-to-text format from superglue axg
# Note that the RTE train and val sets have been added
axg: DatasetDict({
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 356
})
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 2490
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 277
})
}) | [
"# text-to-text format from superglue axg",
"# Note that RTE train and val set has been added\n\naxg: DatasetDict({\n test: Dataset({\n features: ['idx', 'inputs', 'targets'],\n num_rows: 356\n })\n train: Dataset({\n features: ['idx', 'inputs', 'targets'],\n num_rows: 2490\n })\n validation: Dataset({\n features: ['idx', 'inputs', 'targets'],\n num_rows: 277\n })\n })"
] | [
"TAGS\n#region-us \n",
"# text-to-text format from superglue axg",
"# Note that RTE train and val set has been added\n\naxg: DatasetDict({\n test: Dataset({\n features: ['idx', 'inputs', 'targets'],\n num_rows: 356\n })\n train: Dataset({\n features: ['idx', 'inputs', 'targets'],\n num_rows: 2490\n })\n validation: Dataset({\n features: ['idx', 'inputs', 'targets'],\n num_rows: 277\n })\n })"
] |
aa9340e5512f9d1c196b34645346db83107a0cd3 | axb: DatasetDict({
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 1104
})
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 2490
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 277
})
})
Text-to-text implementation of T5
Note that the RTE train and validation sets have been added | stjokerli/TextToText_axb_seqio | [
"region:us"
] | 2022-03-13T10:08:23+00:00 | {} | 2022-04-04T09:25:39+00:00 | [] | [] | TAGS
#region-us
| axb: DatasetDict({
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 1104
})
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 2490
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 277
})
})
Text-to-text implementation of T5
Note that the RTE train and validation sets have been added | [] | [
"TAGS\n#region-us \n"
] |
4dc1c8da193d078c788bccf7eebbc301c754b121 | [Needs More Information]
# Dataset Card for ph-en-text
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/joypersicanon/ph-en-text/tree/main
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Mary Joy P. Canon
### Dataset Summary
PhEnText is a large-scale, multi-domain lexical dataset written in Philippine English.
It is composed of 20,562,265 lines from news articles, religious articles, and court decisions.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
ph-en
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
id: "3128940",
text: "Why this happened should be the focus of inquiry."
### Data Splits
80:20 split for train and test data
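As a hedged illustration, an 80:20 split of the corpus could be reproduced with the `datasets` library as sketched below; the file name is a placeholder and the seed is arbitrary, since the card does not state how the split was generated.

```python
from datasets import load_dataset

# Placeholder data file; only the 80:20 proportion follows the description above.
corpus = load_dataset("json", data_files="ph_en_text.jsonl", split="train")
splits = corpus.train_test_split(test_size=0.2, seed=42)
train_data, test_data = splits["train"], splits["test"]
print(len(train_data), len(test_data))
```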
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | joypersicanon/ph-en-text | [
"region:us"
] | 2022-03-13T10:16:38+00:00 | {} | 2022-03-17T13:30:52+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for ph-en-text
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard:
- Point of Contact: Mary Joy P. Canon
### Dataset Summary
PhEnText is a large-scale, multi-domain lexical dataset written in Philippine English.
It is composed of 20,562,265 lines from news articles, religious articles, and court decisions.
### Supported Tasks and Leaderboards
### Languages
ph-en
## Dataset Structure
### Data Instances
### Data Fields
id: "3128940",
text: "Why this happened should be the focus of inquiry."
### Data Splits
80:20 split for train and test data
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for ph-en-text",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: URL\r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact: Mary Joy P. Canon",
"### Dataset Summary\r\n\r\nPhEnText is a large-scale and multi-domain lexical data written in Philippine English text.\r\nIt is composed of 20, 562, 265 lines from news articles, religious articles and court decisions.",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nph-en",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\r\n\r\nid: \"3128940\",\r\ntext: \"Why this happened should be the focus of inquiry.\"",
"### Data Splits\r\n\r\n80:20 split for train and test data",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for ph-en-text",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: URL\r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact: Mary Joy P. Canon",
"### Dataset Summary\r\n\r\nPhEnText is a large-scale and multi-domain lexical data written in Philippine English text.\r\nIt is composed of 20, 562, 265 lines from news articles, religious articles and court decisions.",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nph-en",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\r\n\r\nid: \"3128940\",\r\ntext: \"Why this happened should be the focus of inquiry.\"",
"### Data Splits\r\n\r\n80:20 split for train and test data",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
cc60812b3dc5abb00043962616195c023c7c27a2 |
# Top Quark Tagging Reference Dataset
A set of MC simulated training/testing events for the evaluation of top quark tagging architectures.
In total 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during training, and “test” for final testing and reporting results.
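A minimal loading sketch for the split names listed above is given below; it assumes the data can be consumed directly through the `datasets` library under the repository ID used for this dataset, and that the field names match the description that follows.

```python
from datasets import load_dataset

# Split names follow the description above ("train", "val", "test").
train = load_dataset("lewtun/top_quark_tagging", split="train")
val = load_dataset("lewtun/top_quark_tagging", split="val")

jet = train[0]
print(jet["is_signal_new"])  # 1 for a top jet, 0 for QCD
```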
## Description
* 14 TeV, hadronic tops for signal, qcd dijets background, Delphes ATLAS detector card with Pythia8
* No MPI/pile-up included
* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV
* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8
* Jets are required to have |eta| < 2
* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200
* Constituents are sorted by pT, with the highest pT one first
* The truth top four-momentum is stored as truth_px etc.
* A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new
* The variable "ttv" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files. | lewtun/top_quark_tagging | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-13T16:55:31+00:00 | {"license": "cc-by-4.0"} | 2022-04-03T13:26:05+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# Top Quark Tagging Reference Dataset
A set of MC simulated training/testing events for the evaluation of top quark tagging architectures.
In total 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during training, and “test” for final testing and reporting results.
## Description
* 14 TeV, hadronic tops for signal, qcd dijets background, Delphes ATLAS detector card with Pythia8
* No MPI/pile-up included
* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV
* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8
* Jets are required to have |eta| < 2
* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200
* Constituents are sorted by pT, with the highest pT one first
* The truth top four-momentum is stored as truth_px etc.
* A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new
* The variable "ttv" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files. | [
"# Top Quark Tagging Reference Dataset\r\n\r\nA set of MC simulated training/testing events for the evaluation of top quark tagging architectures.\r\n\r\nIn total 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during the training and “test” for final testing and reporting results.",
"## Description\r\n\r\n* 14 TeV, hadronic tops for signal, qcd diets background, Delphes ATLAS detector card with Pythia8\r\n* No MPI/pile-up included\r\n* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV\r\n* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8\r\n* Jets are required to have |eta| < 2\r\n* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200\r\n* Constituents are sorted by pT, with the highest pT one first\r\n* The truth top four-momentum is stored as truth_px etc.\r\n* A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new\r\n* The variable \"ttv\" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Top Quark Tagging Reference Dataset\r\n\r\nA set of MC simulated training/testing events for the evaluation of top quark tagging architectures.\r\n\r\nIn total 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during the training and “test” for final testing and reporting results.",
"## Description\r\n\r\n* 14 TeV, hadronic tops for signal, qcd diets background, Delphes ATLAS detector card with Pythia8\r\n* No MPI/pile-up included\r\n* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV\r\n* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8\r\n* Jets are required to have |eta| < 2\r\n* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200\r\n* Constituents are sorted by pT, with the highest pT one first\r\n* The truth top four-momentum is stored as truth_px etc.\r\n* A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new\r\n* The variable \"ttv\" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files."
] |
845aaad797f618d1f8c9b42c3cb5919f0becdb2a |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_full_sent | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T19:29:50+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_full_sent", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:37+00:00 | [
"2203.03802"
] | [
"en"
] | TAGS
#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us
|
Paper: Understanding Iterative Revision from Human-Written Text
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: URL
| [] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us \n"
] |
792d5310cc82446cccfd3cd8953893b831538976 |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_full_doc | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T20:41:13+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_full_doc", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:30+00:00 | [
"2203.03802"
] | [
"en"
] | TAGS
#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us
|
Paper: Understanding Iterative Revision from Human-Written Text
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: URL
| [] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us \n"
] |
e22e0371dac444239b944f9293f5b491d62b73f0 |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_human_sent | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T20:46:23+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_human_sent", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:22+00:00 | [
"2203.03802"
] | [
"en"
] | TAGS
#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us
|
Paper: Understanding Iterative Revision from Human-Written Text
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: URL
| [] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us \n"
] |
3b0bdabb090d04062ebc17e54ac889a64f5cb791 |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_human_doc | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T20:48:31+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR-human-doc", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:15+00:00 | [
"2203.03802"
] | [
"en"
] | TAGS
#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us
|
Paper: Understanding Iterative Revision from Human-Written Text
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: URL
| [] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #text-editing #arxiv-2203.03802 #region-us \n"
] |
1a2b7bc94feea59665740ea295e504c41b8f9c39 | # AutoNLP Dataset for project: ALBERTFINALYEAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project ALBERTFINALYEAR.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Hasidic or Chasidic Judaism overlaps significantly with Haredi Judaism in its engagement with the se[...]",
"question": "What overlaps significantly with Haredi Judiasm?",
"answers.text": [
"Chasidic Judaism"
],
"answers.answer_start": [
11
]
},
{
"context": "Data compression can be viewed as a special case of data differencing: Data differencing consists of[...]",
"question": "What can classified as data differencing with empty source data?",
"answers.text": [
"Data compression",
"data compression"
],
"answers.answer_start": [
0,
400
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 87433 |
| valid | 10544 |
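The fields follow the SQuAD-style extractive QA layout, where each answer string starts at the character offset given in `answers.answer_start`. The snippet below is a minimal loading sketch; the repository id `Aclairs/ALBERTFINALYEAR` and the split names are taken from this card and may need adjusting.
```python
from datasets import load_dataset

# Repository id and split names ("train"/"valid") are taken from this card
# and may differ on the Hub; adjust them as needed.
dataset = load_dataset("Aclairs/ALBERTFINALYEAR")
print(dataset)  # shows the available splits and their sizes

sample = dataset["train"][0]
print(sample["question"])

# "answers.text" and "answers.answer_start" are parallel sequences: each answer
# string begins at the corresponding character offset inside the context.
for text, start in zip(sample["answers.text"], sample["answers.answer_start"]):
    print(f"{text!r} -> context[{start}:{start + len(text)}]")
```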
| Aclairs/ALBERTFINALYEAR | [
"region:us"
] | 2022-03-14T05:29:43+00:00 | {} | 2022-03-14T05:56:07+00:00 | [] | [] | TAGS
#region-us
| AutoNLP Dataset for project: ALBERTFINALYEAR
============================================
Table of Contents
----------------
* Dataset Description
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Description
-------------------
This dataset has been automatically processed by AutoNLP for project ALBERTFINALYEAR.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
e0536f5bfc7c35bb62f104bb2400c2b36b6029ef | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647246406 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T08:26:50+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T08:26:51+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test
| [
"# GEM Submission\n\nSubmission name: This is a test"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test"
] |
1d84bb9af6e19a7cd6860f4e3149f951e7c1c018 | # GEM Submission
Submission name: mT5_xl
| GEM-submissions/lewtun__mt5_xl__1647246454 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T08:27:38+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "mT5_xl", "tags": ["evaluation", "benchmark"]} | 2022-03-14T08:27:39+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: mT5_xl
| [
"# GEM Submission\n\nSubmission name: mT5_xl"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: mT5_xl"
] |
2bd261e242dd6801c5bf27ed6dfbe28309ba0387 | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647247409 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T08:43:33+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T08:43:34+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test
| [
"# GEM Submission\n\nSubmission name: This is a test"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test"
] |
39719e276a1e76288e53e4ab8743ffb0ceb7bbe0 |
# Dataset Card for BLURB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://microsoft.github.io/BLURB/index.html
- **Paper:** [Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing](https://arxiv.org/pdf/2007.15779.pdf)
- **Leaderboard:** https://microsoft.github.io/BLURB/leaderboard.html
- **Point of Contact:**
### Dataset Summary
BLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.
Inspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.
#### BC5-chem
The corpus consists of three separate sets of
articles with diseases, chemicals and their relations annotated.
The training (500 articles) and development (500 articles) sets
were released to task participants in advance to support text-mining
method development. The test set (500 articles) was used for final
system performance evaluation.
- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/)
#### BC5-disease
The corpus consists of three separate sets of
articles with diseases, chemicals and their relations annotated.
The training (500 articles) and development (500 articles) sets
were released to task participants in advance to support text-mining
method development. The test set (500 articles) was used for final
system performance evaluation.
- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/)
#### BC2GM
The BioCreative II Gene Mention task.
The training corpus for the current task consists mainly of
the training and testing corpora (text collections) from the
BCI task, and the testing corpus for the current task
consists of an additional 5,000 sentences that were held
'in reserve' from the previous task.
In the current corpus, tokenization is not provided;
instead participants are asked to identify a gene mention
in a sentence by giving its start and end characters.
As before, the training set consists of a set of sentences,
and for each sentence a set of gene mentions
(GENE annotations).
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [Overview of BioCreative II gene mention recognition](https://link.springer.com/article/10.1186/gb-2008-9-s2-s2)
#### NCBI Disease
The NCBI disease corpus is fully annotated at the mention
and concept level to serve as a research resource for the biomedical natural
language processing community.
Corpus Characteristics
----------------------
* 793 PubMed abstracts
* 6,892 disease mentions
* 790 unique disease concepts
* Medical Subject Headings (MeSH®)
* Online Mendelian Inheritance in Man (OMIM®)
* 91% of the mentions map to a single disease concept
The corpus is divided into training, developing and testing sets.
Corpus Annotation
* Fourteen annotators
* Two-annotators per document (randomly paired)
* Three annotation phases
* Checked for corpus-wide consistency of annotations
- **Homepage:** https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [NCBI disease corpus: a resource for disease name recognition and concept normalization](https://pubmed.ncbi.nlm.nih.gov/24393765/)
#### JNLPBA
The BioNLP / JNLPBA Shared Task 2004 involves the identification
and classification of technical terms referring to concepts of interest to
biologists in the domain of molecular biology. The task was organized by GENIA
Project based on the annotations of the GENIA Term corpus (version 3.02).
Corpus format: The JNLPBA corpus is distributed in IOB format, with each line
containing a single token and its tag, separated by a tab character.
Sentences are separated by blank lines.
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [Introduction to the Bio-entity Recognition Task at JNLPBA](https://aclanthology.org/W04-1213)
#### EBM PICO
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
#### ChemProt
- **Homepage:**
- **Repository:**
- **Paper:**
#### DDI
- **Homepage:**
- **Repository:**
- **Paper:**
#### GAD
- **Homepage:**
- **Repository:**
- **Paper:**
#### BIOSSES
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu]([email protected]) and [Arzucan Özgür]([email protected])
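Since model quality on BIOSSES is reported as the Pearson correlation between gold and predicted similarity scores, a minimal scoring sketch is shown below; the two score lists are placeholder values, not actual annotations.
```python
from scipy.stats import pearsonr

# Placeholder gold scores (0-4 scale) and model predictions; replace with the
# test-set annotations and your system's outputs.
gold_scores = [2.2, 3.6, 0.4, 1.8, 4.0]
predicted_scores = [2.0, 3.1, 0.9, 2.2, 3.7]

r, p_value = pearsonr(gold_scores, predicted_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
# By the Evans (1996) guideline above, r >= 0.80 would count as "very strong".
```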
#### HoC
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
#### PubMedQA
We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at this https URL.
- **Homepage:** https://pubmedqa.github.io/
- **Repository:** https://github.com/pubmedqa/pubmedqa
- **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/pdf/1909.06146.pdf)
- **Leaderboard:** [Question answering](https://pubmedqa.github.io/)
- **Point of Contact:**
#### BioASQ
Task 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous year) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries).
- **Homepage:** http://bioasq.org/
- **Repository:** http://participants-area.bioasq.org/datasets/
- **Paper:** [Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783?login=false)
### Supported Tasks and Leaderboards
| **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** |
|:------------:|:-----------------------:|:---------:|:-------:|:--------:|:----------------------:|-----------|
| BC5-chem | NER | 5203 | 5347 | 5385 | F1 entity-level | **Yes** |
| BC5-disease | NER | 4182 | 4244 | 4424 | F1 entity-level | **Yes** |
| NCBI-disease | NER | 5134 | 787 | 960 | F1 entity-level | **Yes** |
| BC2GM | NER | 15197 | 3061 | 6325 | F1 entity-level | **Yes** |
| JNLPBA | NER | 46750 | 4551 | 8662 | F1 entity-level | **Yes** |
| EBM PICO | PICO | 339167 | 85321 | 16364 | Macro F1 word-level | No |
| ChemProt | Relation Extraction | 18035 | 11268 | 15745 | Micro F1 | No |
| DDI | Relation Extraction | 25296 | 2496 | 5716 | Micro F1 | No |
| GAD | Relation Extraction | 4261 | 535 | 534 | Micro F1 | No |
| BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | **Yes** |
| HoC | Document Classification | 1295 | 186 | 371 | Average Micro F1 | No |
| PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | **Yes** |
| BioASQ | Question Answering | 670 | 75 | 140 | Accuracy | No |
Datasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB.
This is something to be checked.
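For the NER datasets in the table above, "F1 entity-level" means F1 computed over complete entity spans rather than individual tokens. The sketch below uses the `seqeval` package with toy IOB2 sequences; the real label inventories differ per dataset.
```python
from seqeval.metrics import f1_score

# Toy IOB2 sequences; the actual entity types depend on the dataset
# (Chemical/Disease for BC5, Gene for BC2GM, etc.).
gold = [["O", "B-Chemical", "I-Chemical", "O", "B-Disease"]]
pred = [["O", "B-Chemical", "I-Chemical", "O", "O"]]

# seqeval scores complete spans: the missed "B-Disease" entity lowers recall.
print(f"entity-level F1: {f1_score(gold, pred):.3f}")
```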
### Languages
English from biomedical texts
## Dataset Structure
### Data Instances
* **NER**
```json
{
'id': 0,
 'tokens': [ "DPP6", "as", "a", "candidate", "gene", "for", "neuroleptic", "-", "induced", "tardive", "dyskinesia", "." ],
'ner_tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
}
```
* **PICO**
```json
{
'TBD'
}
```
* **Relation Extraction**
```json
{
'TBD'
}
```
* **Sentence Similarity**
```json
{'sentence 1': 'Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation'
'sentence 2': 'Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase'
'score': 2.2}
```
* **Document Classification**
```json
{
'TBD'
}
```
* **Question Answering**
* PubMedQA
```json
{'context': {'contexts': ['Programmed cell death (PCD) is the regulated death of cells within an organism. The lace plant (Aponogeton madagascariensis) produces perforations in its leaves through PCD. The leaves of the plant consist of a latticework of longitudinal and transverse veins enclosing areoles. PCD occurs in the cells at the center of these areoles and progresses outwards, stopping approximately five cells from the vasculature. The role of mitochondria during PCD has been recognized in animals; however, it has been less studied during PCD in plants.',
'The following paper elucidates the role of mitochondrial dynamics during developmentally regulated PCD in vivo in A. madagascariensis. A single areole within a window stage leaf (PCD is occurring) was divided into three areas based on the progression of PCD; cells that will not undergo PCD (NPCD), cells in early stages of PCD (EPCD), and cells in late stages of PCD (LPCD). Window stage leaves were stained with the mitochondrial dye MitoTracker Red CMXRos and examined. Mitochondrial dynamics were delineated into four categories (M1-M4) based on characteristics including distribution, motility, and membrane potential (ΔΨm). A TUNEL assay showed fragmented nDNA in a gradient over these mitochondrial stages. Chloroplasts and transvacuolar strands were also examined using live cell imaging. The possible importance of mitochondrial permeability transition pore (PTP) formation during PCD was indirectly examined via in vivo cyclosporine A (CsA) treatment. This treatment resulted in lace plant leaves with a significantly lower number of perforations compared to controls, and that displayed mitochondrial dynamics similar to that of non-PCD cells.'],
'labels': ['BACKGROUND', 'RESULTS'],
'meshes': ['Alismataceae',
'Apoptosis',
'Cell Differentiation',
'Mitochondria',
'Plant Leaves'],
'reasoning_free_pred': ['y', 'e', 's'],
'reasoning_required_pred': ['y', 'e', 's']},
'final_decision': 'yes',
'long_answer': 'Results depicted mitochondrial dynamics in vivo as PCD progresses within the lace plant, and highlight the correlation of this organelle with other organelles during developmental PCD. To the best of our knowledge, this is the first report of mitochondria and chloroplasts moving on transvacuolar strands to form a ring structure surrounding the nucleus during developmental PCD. Also, for the first time, we have shown the feasibility for the use of CsA in a whole plant system. Overall, our findings implicate the mitochondria as playing a critical and early role in developmentally regulated PCD in the lace plant.',
'pubid': 21645374,
'question': 'Do mitochondria play a role in remodelling lace plant leaves during programmed cell death?'}
```
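A loading sketch for this repository is given below. The configuration names are defined by the loading script and are not guaranteed here, so they are listed programmatically first; `"NCBI-disease"` is used only as a placeholder.
```python
from datasets import get_dataset_config_names, load_dataset

# List the configurations this repository's loading script actually defines
# instead of guessing their names.
configs = get_dataset_config_names("EMBO/BLURB")
print(configs)

# "NCBI-disease" is a placeholder; substitute one of the printed names.
ncbi = load_dataset("EMBO/BLURB", "NCBI-disease")
print(ncbi["train"][0])  # for NER configs: id, tokens, ner_tags
```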
### Data Fields
* **NER**
* `id`: string
* `ner_tags`: Sequence[ClassLabel]
* `tokens`: Sequence[String]
* **PICO**
* To be added
* **Relation Extraction**
* To be added
* **Sentence Similarity**
* `sentence 1`: string
* `sentence 2`: string
* `score`: float ranging from 0 (no relation) to 4 (equivalent)
* **Document Classification**
* To be added
* **Question Answering**
* PubMedQA
* `pubid`: integer
* `question`: string
* `context`: sequence of strings [`contexts`, `labels`, `meshes`, `reasoning_required_pred`, `reasoning_free_pred`]
* `long_answer`: string
* `final_decision`: string
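For the NER configurations, `ner_tags` stores integer `ClassLabel` ids. The sketch below maps them back to tag strings; the configuration name is again a placeholder.
```python
from datasets import load_dataset

# "NCBI-disease" is a placeholder configuration name; substitute one of the
# NER configurations this repository defines.
ner = load_dataset("EMBO/BLURB", "NCBI-disease")

# ner_tags is a Sequence of ClassLabel ids; the inner feature converts the
# integers back to their tag strings.
tag_feature = ner["train"].features["ner_tags"].feature
example = ner["train"][0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_feature.int2str(tag_id)}")
```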
### Data Splits
Shown in the table of supported tasks.
## Dataset Creation
### Curation Rationale
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES
* HoC
* PubMedQA
* BioASQ
### Source Data
[More Information Needed]
### Annotations
All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
#### Annotation process
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
* HoC
* PubMedQA
* BioASQ
### Dataset Curators
All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
### Licensing Information
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES - BIOSSES is made available under the terms of [The GNU Common Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
* HoC
* PubMedQA - MIT License Copyright (c) 2019 pubmedqa
* BioASQ
### Citation Information
* BC5-chem & BC5-disease
```latex
@article{article,
author = {Li, Jiao and Sun, Yueping and Johnson, Robin and Sciaky, Daniela and Wei, Chih-Hsuan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn and Wiegers, Thomas and lu, Zhiyong},
year = {2016},
month = {05},
pages = {baw068},
title = {BioCreative V CDR task corpus: a resource for chemical disease relation extraction},
volume = {2016},
journal = {Database},
doi = {10.1093/database/baw068}
}
```
* BC2GM
```latex
@article{article,
author = {Smith, Larry and Tanabe, Lorraine and Ando, Rie and Kuo, Cheng-Ju and Chung, I-Fang and Hsu, Chun-Nan and Lin, Yu-Shi and Klinger, Roman and Friedrich, Christoph and Ganchev, Kuzman and Torii, Manabu and Liu, Hongfang and Haddow, Barry and Struble, Craig and Povinelli, Richard and Vlachos, Andreas and Baumgartner Jr, William and Hunter, Lawrence and Carpenter, Bob and Wilbur, W.},
year = {2008},
month = {09},
pages = {S2},
title = {Overview of BioCreative II gene mention recognition},
volume = {9 Suppl 2},
journal = {Genome biology},
doi = {10.1186/gb-2008-9-s2-s2}
}
```
* JNLPBA
```latex
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
```
* NCBI Disease
```latex
@article{10.5555/2772763.2772800,
author = {Dogan, Rezarta Islamaj and Leaman, Robert and Lu, Zhiyong},
title = {NCBI Disease Corpus},
year = {2014},
issue_date = {February 2014},
publisher = {Elsevier Science},
address = {San Diego, CA, USA},
volume = {47},
number = {C},
issn = {1532-0464},
abstract = {Graphical abstractDisplay Omitted NCBI disease corpus is built as a gold-standard resource for disease recognition.793 PubMed abstracts are annotated with disease mentions and concepts (MeSH/OMIM).14 Annotators produced high consistency level and inter-annotator agreement.Normalization benchmark results demonstrate the utility of the corpus.The corpus is publicly available to the community. Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora.This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH ) or Online Mendelian Inheritance in Man (OMIM ). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency.The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks.The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/.},
journal = {J. of Biomedical Informatics},
month = {feb},
pages = {1–10},
numpages = {10}}
```
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES
```latex
@article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
}
```
* HoC
* PubMedQA
```latex
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
* BioASQ
```latex
@article{10.1093/bioinformatics/btv585,
author = {Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and Högberg, Johan and Stenius, Ulla and Korhonen, Anna},
title = "{Automatic semantic classification of scientific literature according to the hallmarks of cancer}",
journal = {Bioinformatics},
volume = {32},
number = {3},
pages = {432-440},
year = {2015},
month = {10},
abstract = "{Motivation: The hallmarks of cancer have become highly influential in cancer research. They reduce the complexity of cancer into 10 principles (e.g. resisting cell death and sustaining proliferative signaling) that explain the biological capabilities acquired during the development of human tumors. Since new research depends crucially on existing knowledge, technology for semantic classification of scientific literature according to the hallmarks of cancer could greatly support literature review, knowledge discovery and applications in cancer research.Results: We present the first step toward the development of such technology. We introduce a corpus of 1499 PubMed abstracts annotated according to the scientific evidence they provide for the 10 currently known hallmarks of cancer. We use this corpus to train a system that classifies PubMed literature according to the hallmarks. The system uses supervised machine learning and rich features largely based on biomedical text mining. We report good performance in both intrinsic and extrinsic evaluations, demonstrating both the accuracy of the methodology and its potential in supporting practical cancer research. We discuss how this approach could be developed and applied further in the future.Availability and implementation: The corpus of hallmark-annotated PubMed abstracts and the software for classification are available at: http://www.cl.cam.ac.uk/∼sb895/HoC.html .Contact:[email protected]}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btv585},
url = {https://doi.org/10.1093/bioinformatics/btv585},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/32/3/432/19568147/btv585.pdf},
}
```
### Contributions
* This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.
* Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them.
* I am not part of the team that generated BLURB. This dataset is intended to help researchers use the BLURB benchmark for biomedical NLP.
* Thanks to [@bwang482](https://github.com/bwang482) for uploading the [BIOSSES dataset](https://github.com/bwang482/datasets/tree/master/datasets/biosses). We forked the [BIOSSES 🤗 dataset](https://huggingface.co/datasets/biosses) to add it to this BLURB benchmark.
* Thank you to [@tuner007](https://github.com/tuner007) for adding this dataset to the 🤗 hub | EMBO/BLURB | [
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:closed-domain-qa",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2007.15779",
"arxiv:1909.06146",
"region:us"
] | 2022-03-14T10:29:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "token-classification", "sentence-similarity", "text-classification"], "task_ids": ["closed-domain-qa", "named-entity-recognition", "parsing", "semantic-similarity-scoring", "text-scoring", "topic-classification"], "pretty_name": "BLURB (Biomedical Language Understanding and Reasoning Benchmark.)"} | 2022-12-09T07:57:37+00:00 | [
"2007.15779",
"1909.06146"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-token-classification #task_categories-sentence-similarity #task_categories-text-classification #task_ids-closed-domain-qa #task_ids-named-entity-recognition #task_ids-parsing #task_ids-semantic-similarity-scoring #task_ids-text-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2007.15779 #arxiv-1909.06146 #region-us
| Dataset Card for BLURB
======================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing
* Leaderboard: URL
* Point of Contact:
### Dataset Summary
BLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.
Inspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.
#### BC5-chem
The corpus consists of three separate sets of
articles with diseases, chemicals and their relations annotated.
The training (500 articles) and development (500 articles) sets
were released to task participants in advance to support text-mining
method development. The test set (500 articles) was used for final
system performance evaluation.
* Homepage: URL
* Repository: NER GitHub repo by @GamalC
* Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction
#### BC5-disease
The corpus consists of three separate sets of
articles with diseases, chemicals and their relations annotated.
The training (500 articles) and development (500 articles) sets
were released to task participants in advance to support text-mining
method development. The test set (500 articles) was used for final
system performance evaluation.
* Homepage: URL
* Repository: NER GitHub repo by @GamalC
* Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction
#### BC2GM
The BioCreative II Gene Mention task.
The training corpus for the current task consists mainly of
the training and testing corpora (text collections) from the
BCI task, and the testing corpus for the current task
consists of an additional 5,000 sentences that were held
'in reserve' from the previous task.
In the current corpus, tokenization is not provided;
instead participants are asked to identify a gene mention
in a sentence by giving its start and end characters.
As before, the training set consists of a set of sentences,
and for each sentence a set of gene mentions
(GENE annotations).
* Homepage: URL
* Repository: NER GitHub repo by @GamalC
* Paper: Overview of BioCreative II gene mention recognition
#### NCBI Disease
The NCBI disease corpus is fully annotated at the mention
and concept level to serve as a research resource for the biomedical natural
language processing community.
Corpus Characteristics
----------------------
\* 793 PubMed abstracts
\* 6,892 disease mentions
\* 790 unique disease concepts
\* Medical Subject Headings (MeSH®)
\* Online Mendelian Inheritance in Man (OMIM®)
\* 91% of the mentions map to a single disease concept
divided into training, developing and testing sets.
Corpus Annotation
\* Fourteen annotators
\* Two-annotators per document (randomly paired)
\* Three annotation phases
\* Checked for corpus-wide consistency of annotations
* Homepage: URL
* Repository: NER GitHub repo by @GamalC
* Paper: NCBI disease corpus: a resource for disease name recognition and concept normalization
#### JNLPBA
The BioNLP / JNLPBA Shared Task 2004 involves the identification
and classification of technical terms referring to concepts of interest to
biologists in the domain of molecular biology. The task was organized by GENIA
Project based on the annotations of the GENIA Term corpus (version 3.02).
Corpus format: The JNLPBA corpus is distributed in IOB format, with each line
containing a single token and its tag, separated by a tab character.
Sentences are separated by blank lines.
* Homepage: URL
* Repository: NER GitHub repo by @GamalC
* Paper: Introduction to the Bio-entity Recognition Task at JNLPBA
#### EBM PICO
* Homepage:
* Repository:
* Paper:
* Leaderboard:
#### ChemProt
* Homepage:
* Repository:
* Paper:
#### DDI
* Homepage:
* Repository:
* Paper:
#### GAD
* Homepage:
* Repository:
* Paper:
#### BIOSSES
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
* very strong: 0.80–1.00
* strong: 0.60–0.79
* moderate: 0.40–0.59
* weak: 0.20–0.39
* very weak: 0.00–0.19
* Homepage: URL
* Repository: URL
* Paper: BIOSSES: a semantic sentence similarity estimation system for the biomedical domain
* Point of Contact: Gizem Soğancıoğlu and Arzucan Özgür
#### HoC
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
#### PubMedQA
We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at this https URL.
* Homepage: URL
* Repository: URL
* Paper: PubMedQA: A Dataset for Biomedical Research Question Answering
* Leaderboard: Question answering
* Point of Contact:
#### BioASQ
Task 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous year) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries).
* Homepage: URL
* Repository: URL
* Paper: Automatic semantic classification of scientific literature according to the hallmarks of cancer
### Supported Tasks and Leaderboards
Datasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB.
This is something to be checked.
### Languages
English from biomedical texts
Dataset Structure
-----------------
### Data Instances
* NER
* PICO
* Relation Extraction
* Sentence Similarity
* Document Classification
* Question Answering
+ PubMedQA
### Data Fields
* NER
+ 'id': string
+ 'ner\_tags': Sequence[ClassLabel]
+ 'tokens': Sequence[String]
* PICO
+ To be added
* Relation Extraction
+ To be added
* Sentence Similarity
+ 'sentence 1': string
+ 'sentence 2': string
+ 'score': float ranging from 0 (no relation) to 4 (equivalent)
* Document Classification
+ To be added
* Question Answering
+ PubMedQA
- 'pubid': integer
- 'question': string
- 'context': sequence of strings ['contexts', 'labels', 'meshes', 'reasoning\_required\_pred', 'reasoning\_free\_pred']
- 'long\_answer': string
- 'final\_decision': string
### Data Splits
Shown in the table of supported tasks.
Dataset Creation
----------------
### Curation Rationale
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES
* HoC
* PubMedQA
* BioASQ
### Source Data
### Annotations
All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
#### Annotation process
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
* HoC
* PubMedQA
* BioASQ
### Dataset Curators
All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
### Licensing Information
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES - BIOSSES is made available under the terms of The GNU Common Public License v.3.0.
* HoC
* PubMedQA - MIT License Copyright (c) 2019 pubmedqa
* BioASQ
* BC5-chem & BC5-disease
* BC2GM
* JNLPBA
* NCBI Disease
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES
* HoC
* PubMedQA
* BioASQ
### Contributions
* This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.
* Thanks to @GamalC for uploading the NER datasets to GitHub, from where I got them.
* I am not part of the team that generated BLURB. This dataset is intended to help researchers use the BLURB benchmark for biomedical NLP.
* Thanks to @bwang482 for uploading the BIOSSES dataset. We forked the BIOSSES dataset to add it to this BLURB benchmark.
* Thank you to @tuner007 for adding this dataset to the hub
| [
"### Dataset Summary\n\n\nBLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.\n\n\nInspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.",
"#### BC5-chem\n\n\nThe corpus consists of three separate sets of\narticles with diseases, chemicals and their relations annotated.\nThe training (500 articles) and development (500 articles) sets\nwere released to task participants in advance to support text-mining\nmethod development. The test set (500 articles) was used for final\nsystem performance evaluation.\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction",
"#### BC5-disease\n\n\nThe corpus consists of three separate sets of\narticles with diseases, chemicals and their relations annotated.\nThe training (500 articles) and development (500 articles) sets\nwere released to task participants in advance to support text-mining\nmethod development. The test set (500 articles) was used for final\nsystem performance evaluation.\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction",
"#### BC2GM\n\n\nThe BioCreative II Gene Mention task.\nThe training corpus for the current task consists mainly of\nthe training and testing corpora (text collections) from the\nBCI task, and the testing corpus for the current task\nconsists of an additional 5,000 sentences that were held\n'in reserve' from the previous task.\nIn the current corpus, tokenization is not provided;\ninstead participants are asked to identify a gene mention\nin a sentence by giving its start and end characters.\nAs before, the training set consists of a set of sentences,\nand for each sentence a set of gene mentions\n(GENE annotations).\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: verview of BioCreative II gene mention recognition",
"#### NCBI Disease\n\n\nThe NCBI disease corpus is fully annotated at the mention\nand concept level to serve as a research resource for the biomedical natural\nlanguage processing community.\nCorpus Characteristics\n----------------------\n\\* 793 PubMed abstracts\n\\* 6,892 disease mentions\n\\* 790 unique disease concepts\n\\* Medical Subject Headings (MeSH®)\n\\* Online Mendelian Inheritance in Man (OMIM®)\n\\* 91% of the mentions map to a single disease concept\ndivided into training, developing and testing sets.\nCorpus Annotation\n\\* Fourteen annotators\n\\* Two-annotators per document (randomly paired)\n\\* Three annotation phases\n\\* Checked for corpus-wide consistency of annotations\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: NCBI disease corpus: a resource for disease name recognition and concept normalization",
"#### JNLPBA\n\n\nThe BioNLP / JNLPBA Shared Task 2004 involves the identification\nand classification of technical terms referring to concepts of interest to\nbiologists in the domain of molecular biology. The task was organized by GENIA\nProject based on the annotations of the GENIA Term corpus (version 3.02).\nCorpus format: The JNLPBA corpus is distributed in IOB format, with each line\ncontaining a single token and its tag, separated by a tab character.\nSentences are separated by blank lines.\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: Introduction to the Bio-entity Recognition Task at JNLPBA",
"#### EBM PICO\n\n\n* Homepage:\n* Repository:\n* Paper:\n* Leaderboard:",
"#### ChemProt\n\n\n* Homepage:\n* Repository:\n* Paper:",
"#### DDI\n\n\n* Homepage:\n* Repository:\n* Paper:",
"#### GAD\n\n\n* Homepage:\n* Repository:\n* Paper:",
"#### BIOSSES\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19\n* Homepage: URL\n* Repository: URL\n* Paper: BIOSSES: a semantic sentence similarity estimation system for the biomedical domain\n* Point of Contact: Gizem Soğancıoğlu and Arzucan Özgür",
"#### HoC\n\n\n* Homepage:\n* Repository:\n* Paper:\n* Leaderboard:\n* Point of Contact:",
"#### PubMedQA\n\n\nWe introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at this https URL.\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: PubMedQA: A Dataset for Biomedical Research Question Answering\n* Leaderboard: Question answering\n* Point of Contact:",
"#### BioASQ\n\n\nTask 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous year) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries).\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: Automatic semantic classification of scientific literature according to the hallmarks of cancer",
"### Supported Tasks and Leaderboards\n\n\n\nDatasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB.\nThis is something to be checked.",
"### Languages\n\n\nEnglish from biomedical texts\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* NER\n* PICO\n* Relation Extraction\n* Sentence Similarity\n* Document Classification\n* Question Answering\n\n\n\t+ PubMedQA",
"### Data Fields\n\n\n* NER\n\t+ 'id': string\n\t+ 'ner\\_tags': Sequence[ClassLabel]\n\t+ 'tokens': Sequence[String]\n* PICO\n\t+ To be added\n* Relation Extraction\n\t+ To be added\n* Sentence Similarity\n\t+ 'sentence 1': string\n\t+ 'sentence 2': string\n\t+ 'score': float ranging from 0 (no relation) to 4 (equivalent)\n* Document Classification\n\t+ To be added\n* Question Answering\n\t+ PubMedQA\n\t\t- 'pubid': integer\n\t\t- 'question': string\n\t\t- 'context': sequence of strings ['contexts', 'labels', 'meshes', 'reasoning\\_required\\_pred', 'reasoning\\_free\\_pred']\n\t\t- 'long\\_answer': string\n\t\t- 'final\\_decision': string",
"### Data Splits\n\n\nShown in the table of supported tasks.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n* BC5-chem\n* BC5-disease\n* BC2GM\n* JNLPBA\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES\n* HoC\n* PubMedQA\n* BioASQ",
"### Source Data",
"### Annotations\n\n\nAll the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.",
"#### Annotation process\n\n\n* BC5-chem\n* BC5-disease\n* BC2GM\n* JNLPBA\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n* HoC\n* PubMedQA\n* BioASQ",
"### Dataset Curators\n\n\nAll the datasets have been obtained and annotated by experts in thebiomedical domain. Check the different citations for further details.",
"### Licensing Information\n\n\n* BC5-chem\n* BC5-disease\n* BC2GM\n* JNLPBA\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES - BIOSSES is made available under the terms of The GNU Common Public License v.3.0.\n* HoC\n* PubMedQA - MIT License Copyright (c) 2019 pubmedqa\n* BioASQ\n* BC5-chem & BC5-disease\n* BC2GM\n* JNLPBA\n* NCBI Disiease\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES\n* HoC\n* PubMedQA\n* BioASQ",
"### Contributions\n\n\n* This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.\n* Thanks to @GamalC for uploading the NER datasets to GitHub, from where I got them.\n* I am not part of the team that generated BLURB. This dataset is intended to help researchers to usethe BLURB benchmarking for NLP in Biomedical NLP.\n* Thanks to @bwang482 for uploading the BIOSSES dataset. We forked the BIOSSES dataset to add it to this BLURB benchmark.\n* Thank you to @tuner007 for adding this dataset to the hub"
] | [
"TAGS\n#task_categories-question-answering #task_categories-token-classification #task_categories-sentence-similarity #task_categories-text-classification #task_ids-closed-domain-qa #task_ids-named-entity-recognition #task_ids-parsing #task_ids-semantic-similarity-scoring #task_ids-text-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2007.15779 #arxiv-1909.06146 #region-us \n",
"### Dataset Summary\n\n\nBLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.\n\n\nInspired by prior efforts toward this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.",
"#### BC5-chem\n\n\nThe corpus consists of three separate sets of\narticles with diseases, chemicals and their relations annotated.\nThe training (500 articles) and development (500 articles) sets\nwere released to task participants in advance to support text-mining\nmethod development. The test set (500 articles) was used for final\nsystem performance evaluation.\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction",
"#### BC5-disease\n\n\nThe corpus consists of three separate sets of\narticles with diseases, chemicals and their relations annotated.\nThe training (500 articles) and development (500 articles) sets\nwere released to task participants in advance to support text-mining\nmethod development. The test set (500 articles) was used for final\nsystem performance evaluation.\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: BioCreative V CDR task corpus: a resource for chemical disease relation extraction",
"#### BC2GM\n\n\nThe BioCreative II Gene Mention task.\nThe training corpus for the current task consists mainly of\nthe training and testing corpora (text collections) from the\nBCI task, and the testing corpus for the current task\nconsists of an additional 5,000 sentences that were held\n'in reserve' from the previous task.\nIn the current corpus, tokenization is not provided;\ninstead participants are asked to identify a gene mention\nin a sentence by giving its start and end characters.\nAs before, the training set consists of a set of sentences,\nand for each sentence a set of gene mentions\n(GENE annotations).\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: verview of BioCreative II gene mention recognition",
"#### NCBI Disease\n\n\nThe NCBI disease corpus is fully annotated at the mention\nand concept level to serve as a research resource for the biomedical natural\nlanguage processing community.\nCorpus Characteristics\n----------------------\n\\* 793 PubMed abstracts\n\\* 6,892 disease mentions\n\\* 790 unique disease concepts\n\\* Medical Subject Headings (MeSH®)\n\\* Online Mendelian Inheritance in Man (OMIM®)\n\\* 91% of the mentions map to a single disease concept\ndivided into training, developing and testing sets.\nCorpus Annotation\n\\* Fourteen annotators\n\\* Two-annotators per document (randomly paired)\n\\* Three annotation phases\n\\* Checked for corpus-wide consistency of annotations\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: NCBI disease corpus: a resource for disease name recognition and concept normalization",
"#### JNLPBA\n\n\nThe BioNLP / JNLPBA Shared Task 2004 involves the identification\nand classification of technical terms referring to concepts of interest to\nbiologists in the domain of molecular biology. The task was organized by GENIA\nProject based on the annotations of the GENIA Term corpus (version 3.02).\nCorpus format: The JNLPBA corpus is distributed in IOB format, with each line\ncontaining a single token and its tag, separated by a tab character.\nSentences are separated by blank lines.\n\n\n* Homepage: URL\n* Repository: NER GitHub repo by @GamalC\n* Paper: Introduction to the Bio-entity Recognition Task at JNLPBA",
"#### EBM PICO\n\n\n* Homepage:\n* Repository:\n* Paper:\n* Leaderboard:",
"#### ChemProt\n\n\n* Homepage:\n* Repository:\n* Paper:",
"#### DDI\n\n\n* Homepage:\n* Repository:\n* Paper:",
"#### GAD\n\n\n* Homepage:\n* Repository:\n* Paper:",
"#### BIOSSES\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19\n* Homepage: URL\n* Repository: URL\n* Paper: BIOSSES: a semantic sentence similarity estimation system for the biomedical domain\n* Point of Contact: Gizem Soğancıoğlu and Arzucan Özgür",
"#### HoC\n\n\n* Homepage:\n* Repository:\n* Paper:\n* Leaderboard:\n* Point of Contact:",
"#### PubMedQA\n\n\nWe introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at this https URL.\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: PubMedQA: A Dataset for Biomedical Research Question Answering\n* Leaderboard: Question answering\n* Point of Contact:",
"#### BioASQ\n\n\nTask 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous year) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries).\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: Automatic semantic classification of scientific literature according to the hallmarks of cancer",
"### Supported Tasks and Leaderboards\n\n\n\nDatasets used in the BLURB biomedical NLP benchmark. The Train, Dev, and test splits might not be exactly identical to those proposed in BLURB.\nThis is something to be checked.",
"### Languages\n\n\nEnglish from biomedical texts\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* NER\n* PICO\n* Relation Extraction\n* Sentence Similarity\n* Document Classification\n* Question Answering\n\n\n\t+ PubMedQA",
"### Data Fields\n\n\n* NER\n\t+ 'id': string\n\t+ 'ner\\_tags': Sequence[ClassLabel]\n\t+ 'tokens': Sequence[String]\n* PICO\n\t+ To be added\n* Relation Extraction\n\t+ To be added\n* Sentence Similarity\n\t+ 'sentence 1': string\n\t+ 'sentence 2': string\n\t+ 'score': float ranging from 0 (no relation) to 4 (equivalent)\n* Document Classification\n\t+ To be added\n* Question Answering\n\t+ PubMedQA\n\t\t- 'pubid': integer\n\t\t- 'question': string\n\t\t- 'context': sequence of strings ['contexts', 'labels', 'meshes', 'reasoning\\_required\\_pred', 'reasoning\\_free\\_pred']\n\t\t- 'long\\_answer': string\n\t\t- 'final\\_decision': string",
"### Data Splits\n\n\nShown in the table of supported tasks.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n* BC5-chem\n* BC5-disease\n* BC2GM\n* JNLPBA\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES\n* HoC\n* PubMedQA\n* BioASQ",
"### Source Data",
"### Annotations\n\n\nAll the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.",
"#### Annotation process\n\n\n* BC5-chem\n* BC5-disease\n* BC2GM\n* JNLPBA\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n* HoC\n* PubMedQA\n* BioASQ",
"### Dataset Curators\n\n\nAll the datasets have been obtained and annotated by experts in thebiomedical domain. Check the different citations for further details.",
"### Licensing Information\n\n\n* BC5-chem\n* BC5-disease\n* BC2GM\n* JNLPBA\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES - BIOSSES is made available under the terms of The GNU Common Public License v.3.0.\n* HoC\n* PubMedQA - MIT License Copyright (c) 2019 pubmedqa\n* BioASQ\n* BC5-chem & BC5-disease\n* BC2GM\n* JNLPBA\n* NCBI Disiease\n* EBM PICO\n* ChemProt\n* DDI\n* GAD\n* BIOSSES\n* HoC\n* PubMedQA\n* BioASQ",
"### Contributions\n\n\n* This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.\n* Thanks to @GamalC for uploading the NER datasets to GitHub, from where I got them.\n* I am not part of the team that generated BLURB. This dataset is intended to help researchers to usethe BLURB benchmarking for NLP in Biomedical NLP.\n* Thanks to @bwang482 for uploading the BIOSSES dataset. We forked the BIOSSES dataset to add it to this BLURB benchmark.\n* Thank you to @tuner007 for adding this dataset to the hub"
] |
2e7a18495a4a6b869d49c68c6def0bffc7e1135e | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647256250 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T11:10:54+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T11:10:55+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test
| [
"# GEM Submission\n\nSubmission name: This is a test"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test"
] |
fac45b3184e0ce9b79eecac454acf17e0a51f94e |
# Dataset Card for WikiTableQuestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WikiTableQuestions homepage](https://nlp.stanford.edu/software/sempre/wikitable)
- **Repository:** [WikiTableQuestions repository](https://github.com/ppasupat/WikiTableQuestions)
- **Paper:** [Compositional Semantic Parsing on Semi-Structured Tables](https://arxiv.org/abs/1508.00305)
- **Leaderboard:** [WikiTableQuestions leaderboard on PaperWithCode](https://paperswithcode.com/dataset/wikitablequestions)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
### Supported Tasks and Leaderboards
question-answering, table-question-answering
### Languages
en
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 29.27 MB
- **Size of the generated dataset:** 47.90 MB
- **Total amount of disk used:** 77.18 MB
An example of 'validation' looks as follows:
```
{
"id": "nt-0",
"question": "what was the last year where this team was a part of the usl a-league?",
"answers": ["2004"],
"table": {
"header": ["Year", "Division", "League", ...],
"name": "csv/204-csv/590.csv",
"rows": [
["2001", "2", "USL A-League", ...],
["2002", "2", "USL A-League", ...],
...
]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a `list` of `string` feature.
- `table`: a dictionary feature containing:
- `header`: a `list` of `string` features.
- `rows`: a `list` of `list` of `string` features:
- `name`: a `string` feature.
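For a quick look at how these fields fit together, the sketch below loads the dataset with the 🤗 Datasets library and inspects one validation example. The dataset identifier and the `random-split-1` configuration name are taken from this card's metadata; treat them as assumptions if you are working from a different copy of the data.
```python
from datasets import load_dataset

# Load WikiTableQuestions (identifier and config name as listed in this card's metadata)
data = load_dataset("wikitablequestions", "random-split-1")

example = data["validation"][0]
print(example["question"])       # natural-language question
print(example["answers"])        # list of gold answer strings
print(example["table"]["name"])  # source CSV name, e.g. "csv/204-csv/590.csv"

# Rebuild the table rows as dictionaries keyed by the header
header = example["table"]["header"]
rows = [dict(zip(header, row)) for row in example["table"]["rows"]]
print(rows[0])
```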
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|11321| 2831|4344|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Panupong Pasupat and Percy Liang
### Licensing Information
Creative Commons Attribution Share Alike 4.0 International
### Citation Information
```
@inproceedings{pasupat-liang-2015-compositional,
title = "Compositional Semantic Parsing on Semi-Structured Tables",
author = "Pasupat, Panupong and Liang, Percy",
booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = jul,
year = "2015",
address = "Beijing, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P15-1142",
doi = "10.3115/v1/P15-1142",
pages = "1470--1480",
}
```
### Contributions
Thanks to [@SivilTaram](https://github.com/SivilTaram) for adding this dataset. | wikitablequestions | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"table-question-answering",
"arxiv:1508.00305",
"region:us"
] | 2022-03-14T11:16:52+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "WikiTableQuestions", "tags": ["table-question-answering"], "dataset_info": [{"config_name": "random-split-1", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30364389, "num_examples": 11321}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7145768, "num_examples": 2831}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-2", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30098954, "num_examples": 11314}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7411203, "num_examples": 2838}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-3", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 28778697, "num_examples": 11314}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 8731460, "num_examples": 2838}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-4", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30166421, "num_examples": 11321}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7343736, "num_examples": 2831}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-5", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30333964, "num_examples": 11316}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7176193, "num_examples": 2836}], "download_size": 29267445, "dataset_size": 48933663}]} | 2024-01-18T11:19:00+00:00 | [
"1508.00305"
] | [
"en"
] | TAGS
#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #table-question-answering #arxiv-1508.00305 #region-us
| Dataset Card for WikiTableQuestions
===================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: WikiTableQuestions homepage
* Repository: WikiTableQuestions repository
* Paper: Compositional Semantic Parsing on Semi-Structured Tables
* Leaderboard: WikiTableQuestions leaderboard on PaperWithCode
* Point of Contact:
### Dataset Summary
The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
### Supported Tasks and Leaderboards
question-answering, table-question-answering
### Languages
en
Dataset Structure
-----------------
### Data Instances
#### default
* Size of downloaded dataset files: 29.27 MB
* Size of the generated dataset: 47.90 MB
* Total amount of disk used: 77.18 MB
An example of 'validation' looks as follows:
### Data Fields
The data fields are the same among all splits.
#### default
* 'id': a 'string' feature.
* 'question': a 'string' feature.
* 'answers': a 'list' of 'string' feature.
* 'table': a dictionary feature containing:
+ 'header': a 'list' of 'string' features.
+ 'rows': a 'list' of 'list' of 'string' features:
+ 'name': a 'string' feature.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Panupong Pasupat and Percy Liang
### Licensing Information
Creative Commons Attribution Share Alike 4.0 International
### Contributions
Thanks to @SivilTaram for adding this dataset.
| [
"### Dataset Summary\n\n\nThe WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.",
"### Supported Tasks and Leaderboards\n\n\nquestion-answering, table-question-answering",
"### Languages\n\n\nen\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 29.27 MB\n* Size of the generated dataset: 47.90 MB\n* Total amount of disk used: 77.18 MB\n\n\nAn example of 'validation' looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a 'list' of 'string' feature.\n* 'table': a dictionary feature containing:\n\t+ 'header': a 'list' of 'string' features.\n\t+ 'rows': a 'list' of 'list' of 'string' features:\n\t+ 'name': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nPanupong Pasupat and Percy Liang",
"### Licensing Information\n\n\nCreative Commons Attribution Share Alike 4.0 International",
"### Contributions\n\n\nThanks to @SivilTaram for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #table-question-answering #arxiv-1508.00305 #region-us \n",
"### Dataset Summary\n\n\nThe WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.",
"### Supported Tasks and Leaderboards\n\n\nquestion-answering, table-question-answering",
"### Languages\n\n\nen\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 29.27 MB\n* Size of the generated dataset: 47.90 MB\n* Total amount of disk used: 77.18 MB\n\n\nAn example of 'validation' looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a 'list' of 'string' feature.\n* 'table': a dictionary feature containing:\n\t+ 'header': a 'list' of 'string' features.\n\t+ 'rows': a 'list' of 'list' of 'string' features:\n\t+ 'name': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nPanupong Pasupat and Percy Liang",
"### Licensing Information\n\n\nCreative Commons Attribution Share Alike 4.0 International",
"### Contributions\n\n\nThanks to @SivilTaram for adding this dataset."
] |
b6ef2478821cfd61a28b32b10598cf2d23608d33 |
# UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018 to 2021.
Time granularity varies from 2 minutes to 30 minutes.
This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with [email protected].
## Files
- metadata.csv: Data about the PV systems, e.g. location
- 2min.parquet: Power output for PV systems every 2 minutes.
- 5min.parquet: Power output for PV systems every 5 minutes.
- 30min.parquet: Power output for PV systems every 30 minutes.
- pv.netcdf: (legacy) Time series of PV solar generation every 5 minutes
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the PV time-series data.
The csv columns are:
- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the PV system
- tilt: The tilt of the PV system
- kwp: The capacity of the PV system
- operational_at: the datetime the PV system started working
### {2,5,30}min.parquet
Time series of solar generation for a number of systems.
Each file includes the systems for which there is enough granularity.
In particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.
The files contain 3 columns:
- ss_id: the id of the system
- timestamp: the timestamp
- generation_wh: the generated power (in kW) at the given timestamp for the given system
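As a rough sketch of how these files can be used together, the snippet below reads the 5-minute parquet file and the metadata CSV with pandas and joins them on `ss_id`. It assumes the files have been downloaded locally under the names shown in this card and that pandas with a parquet engine (e.g. pyarrow) is installed.
```python
import pandas as pd

# Time-series readings: columns ss_id, timestamp, generation_wh
generation = pd.read_parquet("5min.parquet")

# Per-system metadata: ss_id, latitude_rounded, longitude_rounded, orientation, tilt, kwp, ...
metadata = pd.read_csv("metadata.csv")

# Attach system metadata to every reading
df = generation.merge(metadata, on="ss_id", how="left")

# Example: the full time series for one system, ordered in time
some_id = df["ss_id"].iloc[0]
one_system = df[df["ss_id"] == some_id].sort_values("timestamp")
print(one_system[["timestamp", "generation_wh", "kwp"]].head())
```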
### pv.netcdf (legacy)
Time series data of PV solar generation data is in an [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kW) for that PV system.
The ss_id's here are a subset of all the ss_id's in the metadata
The coordinates of the date are tagged as 'datetime' which is the datetime of the solar generation reading.
This is a subset of the more recent `5min.parquet` file.
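A minimal sketch of reading the legacy file with xarray is shown below. It assumes the file has been downloaded locally as `pv.netcdf`, that xarray with a NetCDF backend (e.g. netCDF4 or h5netcdf) is installed, and the date range used is purely illustrative.
```python
import xarray as xr

# Each data variable is one PV system (named by its ss_id);
# the shared coordinate is 'datetime'.
ds = xr.open_dataset("pv.netcdf")

first_ss_id = list(ds.data_vars)[0]
series = ds[first_ss_id]

# Example: generation for that system over one (illustrative) day
day = series.sel(datetime=slice("2020-06-01", "2020-06-02"))
print(day.to_pandas().head())
```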
## example
using Hugging Face Datasets
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/uk_pv")
```
## useful links
https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial | openclimatefix/uk_pv | [
"task_categories:time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:en",
"license:mit",
"pv",
"photovoltaic",
"environment",
"climate",
"energy",
"electricity",
"doi:10.57967/hf/0878",
"region:us"
] | 2022-03-14T12:20:19+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["time-series-forecasting"], "task_ids": ["multivariate-time-series-forecasting"], "pretty_name": "United Kingdom PV Solar generation", "tags": ["pv", "photovoltaic", "environment", "climate", "energy", "electricity"]} | 2022-11-30T17:02:42+00:00 | [] | [
"en"
] | TAGS
#task_categories-time-series-forecasting #task_ids-multivariate-time-series-forecasting #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1B<n<10B #source_datasets-original #language-English #license-mit #pv #photovoltaic #environment #climate #energy #electricity #doi-10.57967/hf/0878 #region-us
|
# UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018 to 2021.
Time granularity varies from 2 minutes to 30 minutes.
This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with info@URL.
## Files
- URL: Data about the PV systems, e.g. location
- 2min.parquet: Power output for PV systems every 2 minutes.
- 5min.parquet: Power output for PV systems every 5 minutes.
- 30min.parquet: Power output for PV systems every 30 minutes.
- URL: (legacy) Time series of PV solar generation every 5 minutes
### URL
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the PV time-series data.
The csv columns are:
- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the PV system
- tilt: The tilt of the PV system
- kwp: The capacity of the PV system
- operational_at: the datetime the PV system started working
### {2,5,30}min.parquet
Time series of solar generation for a number of systems.
Each file includes the systems for which there is enough granularity.
In particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.
The files contain 3 columns:
- ss_id: the id of the system
- timestamp: the timestamp
- generation_wh: the generated power (in kW) at the given timestamp for the given system
### URL (legacy)
Time series data of PV solar generation data is in an xarray format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kW) for that PV system.
The ss_id's here are a subset of all the ss_id's in the metadata
The coordinates of the date are tagged as 'datetime' which is the datetime of the solar generation reading.
This is a subset of the more recent '5min.parquet' file.
## example
using Hugging Face Datasets
## useful links
URL - this repo was made by following this tutorial | [
"# UK PV dataset\n\nPV solar generation data from the UK. \nThis dataset contains data from 1311 PV systems from 2018 to 2021.\nTime granularity varies from 2 minutes to 30 minutes.\n\nThis data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.\nIf you are the owner of a PV system in the dataset, and do not want this data to be shared, \nplease do get in contact with info@URL.",
"## Files\n\n- URL: Data about the PV systems, e.g location\n- 2min.parquet: Power output for PV systems every 2 minutes.\n- 5min.parquet: Power output for PV systems every 5 minutes.\n- 30min.parquet: Power output for PV systems every 30 minutes.\n- URL: (legacy) Time series of PV solar generation every 5 minutes",
"### URL\n\nMetadata of the different PV systems. \n\nNote that there are extra PV systems in this metadata that do not appear in the PV time-series data.\n\nThe csv columns are:\n- ss_id: the id of the system\n- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km\n- longitude_rounded: latitude of the PV system, but rounded to approximately the nearest km\n- llsoacd: TODO\n- orientation: The orientation of the PV system\n- tilt: The tilt of the PV system\n- kwp: The capacity of the PV system\n- operational_at: the datetime the PV system started working",
"### {2,5,30}min.parquet\n\nTime series of solar generation for a number of sytems.\nEach file includes the systems for which there is enough granularity.\nIn particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.\n\nThe files contain 3 columns:\n- ss_id: the id of the system\n- timestamp: the timestamp\n- generation_wh: the generated power (in kW) at the given timestamp for the given system",
"### URL (legacy)\n\nTime series data of PV solar generation data is in an xarray format.\n\nThe data variables are the same as 'ss_id' in the metadata. \nEach data variable contains the solar generation (in kW) for that PV system. \nThe ss_id's here are a subset of all the ss_id's in the metadata \nThe coordinates of the date are tagged as 'datetime' which is the datetime of the solar generation reading.\n\nThis is a subset of the more recent '5min.parquet' file.",
"## example\n\nusing Hugging Face Datasets",
"## useful links\n\nURL - this repo was made by following this tutorial"
] | [
"TAGS\n#task_categories-time-series-forecasting #task_ids-multivariate-time-series-forecasting #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1B<n<10B #source_datasets-original #language-English #license-mit #pv #photovoltaic #environment #climate #energy #electricity #doi-10.57967/hf/0878 #region-us \n",
"# UK PV dataset\n\nPV solar generation data from the UK. \nThis dataset contains data from 1311 PV systems from 2018 to 2021.\nTime granularity varies from 2 minutes to 30 minutes.\n\nThis data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.\nIf you are the owner of a PV system in the dataset, and do not want this data to be shared, \nplease do get in contact with info@URL.",
"## Files\n\n- URL: Data about the PV systems, e.g location\n- 2min.parquet: Power output for PV systems every 2 minutes.\n- 5min.parquet: Power output for PV systems every 5 minutes.\n- 30min.parquet: Power output for PV systems every 30 minutes.\n- URL: (legacy) Time series of PV solar generation every 5 minutes",
"### URL\n\nMetadata of the different PV systems. \n\nNote that there are extra PV systems in this metadata that do not appear in the PV time-series data.\n\nThe csv columns are:\n- ss_id: the id of the system\n- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km\n- longitude_rounded: latitude of the PV system, but rounded to approximately the nearest km\n- llsoacd: TODO\n- orientation: The orientation of the PV system\n- tilt: The tilt of the PV system\n- kwp: The capacity of the PV system\n- operational_at: the datetime the PV system started working",
"### {2,5,30}min.parquet\n\nTime series of solar generation for a number of sytems.\nEach file includes the systems for which there is enough granularity.\nIn particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.\n\nThe files contain 3 columns:\n- ss_id: the id of the system\n- timestamp: the timestamp\n- generation_wh: the generated power (in kW) at the given timestamp for the given system",
"### URL (legacy)\n\nTime series data of PV solar generation data is in an xarray format.\n\nThe data variables are the same as 'ss_id' in the metadata. \nEach data variable contains the solar generation (in kW) for that PV system. \nThe ss_id's here are a subset of all the ss_id's in the metadata \nThe coordinates of the date are tagged as 'datetime' which is the datetime of the solar generation reading.\n\nThis is a subset of the more recent '5min.parquet' file.",
"## example\n\nusing Hugging Face Datasets",
"## useful links\n\nURL - this repo was made by following this tutorial"
] |
090cbc0841fe628b18037e73de742959bffaec77 | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647263213 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T13:06:57+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T13:06:58+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test
| [
"# GEM Submission\n\nSubmission name: This is a test"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test"
] |
d2146561ecc7df707d9e6b8318885fe6a39668a2 |
# Dataset Card for GTZAN
## Table of Contents
- [Dataset Card for GTZAN](#dataset-card-for-gtzan)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://marsyas.info/downloads/datasets.html](http://marsyas.info/downloads/datasets.html)
- **Paper:** [http://ismir2001.ismir.net/pdf/tzanetakis.pdf](http://ismir2001.ismir.net/pdf/tzanetakis.pdf)
- **Point of Contact:**
### Dataset Summary
GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
### Languages
English
## Dataset Structure
GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single `train` split that is assigned by default.
### Data Instances
An example of GTZAN looks as follows:
```python
{
"file": "/path/to/cache/genres/blues/blues.00000.wav",
"audio": {
"path": "/path/to/cache/genres/blues/blues.00000.wav",
"array": array(
[
0.00732422,
0.01660156,
0.00762939,
...,
-0.05560303,
-0.06106567,
-0.06417847,
],
dtype=float32,
),
"sampling_rate": 22050,
},
"genre": 0,
}
```
### Data Fields
The types associated with each of the data fields are as follows:
* `file`: a `string` feature.
* `audio`: an `Audio` feature containing the `path` of the sound file, the decoded waveform in the `array` field, and the `sampling_rate`.
* `genre`: a `ClassLabel` feature.
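As a small example of working with these fields, the snippet below loads the dataset with the 🤗 Datasets library under the `marsyas/gtzan` identifier this card is published with, and decodes one track; audio decoding assumes the optional audio dependencies of the library are installed.
```python
from datasets import load_dataset

# Single "train" split, as described in this card
gtzan = load_dataset("marsyas/gtzan", split="train")

example = gtzan[0]
waveform = example["audio"]["array"]               # decoded samples (float32)
sampling_rate = example["audio"]["sampling_rate"]  # 22050 Hz per this card
genre = gtzan.features["genre"].int2str(example["genre"])

print(genre, len(waveform), sampling_rate)
```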
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{tzanetakis_essl_cook_2001,
author = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title = "Automatic Musical Genre Classification Of Audio Signals",
url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year = "2001"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. | marsyas/gtzan | [
"region:us"
] | 2022-03-14T14:54:59+00:00 | {"pretty_name": "GTZAN"} | 2023-11-26T18:57:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for GTZAN
## Table of Contents
- Dataset Card for GTZAN
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Initial Data Collection and Normalization
- Who are the source language producers?
- Annotations
- Annotation process
- Who are the annotators?
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Paper: URL
- Point of Contact:
### Dataset Summary
GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
### Languages
English
## Dataset Structure
GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single 'train' split that is assigned by default.
### Data Instances
An example of GTZAN looks as follows:
### Data Fields
The types associated with each of the data fields are as follows:
* 'file': a 'string' feature.
* 'audio': an 'Audio' feature containing the 'path' of the sound file, the decoded waveform in the 'array' field, and the 'sampling_rate'.
* 'genre': a 'ClassLabel' feature.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @lewtun for adding this dataset. | [
"# Dataset Card for GTZAN",
"## Table of Contents\n- Dataset Card for GTZAN\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact:",
"### Dataset Summary\n\nGTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\nGTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single 'train' split that is assigned by default.",
"### Data Instances\n\nAn example of GTZAN looks as follows:",
"### Data Fields\n\nThe types associated with each of the data fields is as follows:\n\n* 'file': a 'string' feature.\n* 'audio': an 'Audio' feature containing the 'path' of the sound file, the decoded waveform in the 'array' field, and the 'sampling_rate'.\n* 'genre': a 'ClassLabel' feature.",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @lewtun for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for GTZAN",
"## Table of Contents\n- Dataset Card for GTZAN\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Point of Contact:",
"### Dataset Summary\n\nGTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each of 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050Hz Mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\nGTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single 'train' split that is assigned by default.",
"### Data Instances\n\nAn example of GTZAN looks as follows:",
"### Data Fields\n\nThe types associated with each of the data fields is as follows:\n\n* 'file': a 'string' feature.\n* 'audio': an 'Audio' feature containing the 'path' of the sound file, the decoded waveform in the 'array' field, and the 'sampling_rate'.\n* 'genre': a 'ClassLabel' feature.",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @lewtun for adding this dataset."
] |
73a091b01dfbf7865ee2d1ebef45f2e0cc7c6f73 |
# Dataset Card for GEM/xwikis
## Dataset Description
- **Homepage:** https://github.com/lauhaide/clads
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2202.09583
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xwikis')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
#### website
[Github](https://github.com/lauhaide/clads)
#### paper
https://arxiv.org/abs/2202.09583
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/clads)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://arxiv.org/abs/2202.09583
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{clads-emnlp,
author = "Laura Perez-Beltrachini and Mirella Lapata",
title = "Models and Datasets for Cross-Lingual Summarisation",
booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ",
year = "2021",
address = "Punta Cana, Dominican Republic",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `English`, `French`, `Czech`, `Chinese`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).
Train/valid are randomly split.
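As a rough, illustrative sketch (not part of the original card), the snippet below shows one way to list the available language-pair configurations and check the split sizes with the `datasets` library. The configuration naming (e.g. `de-en`) and split names are assumptions; rely on the output of `get_dataset_config_names` for the actual values.

```
import datasets

# List the language-pair/direction configurations exposed by the loader.
configs = datasets.get_dataset_config_names('GEM/xwikis')
print(configs)  # e.g. ['de-en', 'fr-en', ...] (naming assumed, check the output)

# Load one configuration and report the size of each split.
data = datasets.load_dataset('GEM/xwikis', configs[0])
print({split: len(ds) for split, ds in data.items()})
```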
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE-1/2/L
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
other
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
found
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The input documents have section structure information.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Bilingual annotators assessed the content overlap of source document and target summaries.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
| GEM/xwikis | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583",
"region:us"
] | 2022-03-14T15:31:48+00:00 | {"annotations_creators": ["found"], "language_creators": ["unknown"], "language": ["de", "en", "fr", "cs"], "license": ["cc-by-sa-4.0"], "multilinguality": ["unknown"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "xwikis"} | 2023-02-22T13:05:19+00:00 | [
"2202.09583"
] | [
"de",
"en",
"fr",
"cs"
] | TAGS
#task_categories-summarization #annotations_creators-found #language_creators-unknown #multilinguality-unknown #size_categories-unknown #source_datasets-original #language-German #language-English #language-French #language-Czech #license-cc-by-sa-4.0 #arxiv-2202.09583 #region-us
|
# Dataset Card for GEM/xwikis
## Dataset Description
- Homepage: URL
- Repository:
- Paper: URL
- Leaderboard: N/A
- Point of Contact: Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the GEM Website.
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
The data loader can be found here.
#### website
Github
#### paper
URL
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
Github
#### Paper
URL
#### BibTex
#### Contact Name
Laura Perez-Beltrachini
#### Contact Email
lperez@URL
#### Has a Leaderboard?
no
### Languages and Intended Use
#### Multilingual?
yes
#### Covered Languages
'German', 'English', 'French', 'Czech', 'Chinese'
#### License
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
Summarization
#### Communicative Goal
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
'academic'
#### Dataset Creators
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).
Train/valid are randomly split.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
no
### GEM-Specific Curation
#### Modified for GEM?
no
#### Additional Splits?
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
'ROUGE'
#### Previous results available?
yes
#### Other Evaluation Approaches
ROUGE-1/2/L
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
no
### Language Data
#### How was Language Data Obtained?
'Found'
#### Where was it found?
'Single website'
#### Data Validation
other
#### Was Data Filtered?
not filtered
### Structured Annotations
#### Additional Annotations?
found
#### Annotation Service?
no
#### Annotation Values
The input documents have section structure information.
#### Any Quality Control?
validated by another rater
#### Quality Control Details
Bilingual annotators assessed the content overlap of source document and target summaries.
### Consent
#### Any Consent Policy?
no
### Private Identifying Information (PII)
#### Contains PII?
no PII
### Maintenance
#### Any Maintenance Plan?
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
no
### Discussion of Biases
#### Any Documented Social Biases?
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
'public domain'
#### Copyright Restrictions on the Language Data
'public domain'
### Known Technical Limitations
| [
"# Dataset Card for GEM/xwikis",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Laura Perez-Beltrachini",
"### Link to Main Data Card\n\nYou can find the main data card on the GEM Website.",
"### Dataset Summary \n\nThe XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation. \n\nYou can load the dataset via:\n\nThe data loader can be found here.",
"#### website\nGithub",
"#### paper\nURL",
"#### authors\nLaura Perez-Beltrachini (University of Edinburgh)",
"## Dataset Overview",
"### Where to find the Data and its Documentation",
"#### Webpage\n\n\n\nGithub",
"#### Paper\n\n\n\nURL",
"#### BibTex",
"#### Contact Name\n\n\n\n\nLaura Perez-Beltrachini",
"#### Contact Email\n\n\n\nlperez@URL",
"#### Has a Leaderboard?\n\n\n\nno",
"### Languages and Intended Use",
"#### Multilingual?\n\n\n\n\nyes",
"#### Covered Languages\n\n\n\n\n'German', 'English', 'French', 'Czech', 'Chinese'",
"#### License\n\n\n\n\ncc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International",
"#### Intended Use\n\n\n\nCross-lingual and Multi-lingual single long input document abstractive summarisation.",
"#### Primary Task\n\n\n\nSummarization",
"#### Communicative Goal\n\n\n\n\nEntity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.",
"### Credit",
"#### Curation Organization Type(s)\n\n\n\n'academic'",
"#### Dataset Creators\n\n\n\nLaura Perez-Beltrachini (University of Edinburgh)",
"#### Who added the Dataset to GEM?\n\n\n\nLaura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)",
"### Dataset Structure",
"#### Data Splits\n\n\n\nFor each language pair and direction there exists a train/valid/test split. \nThe test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).\nTrain/valid are randomly split.",
"## Dataset in GEM",
"### Rationale for Inclusion in GEM",
"#### Similar Datasets\n\n\n\nno",
"### GEM-Specific Curation",
"#### Modificatied for GEM?\n\n\n\nno",
"#### Additional Splits?\n\n\n\nno",
"### Getting Started with the Task",
"## Previous Results",
"### Previous Results",
"#### Measured Model Abilities\n\n\n\n- identification of entity salient information\n- translation\n- multi-linguality\n- cross-lingual transfer, zero-shot, few-shot",
"#### Metrics\n\n\n\n'ROUGE'",
"#### Previous results available?\n\n\n\nyes",
"#### Other Evaluation Approaches\n\n\n\nROUGE-1/2/L",
"## Dataset Curation",
"### Original Curation",
"#### Sourced from Different Sources\n\n\n\nno",
"### Language Data",
"#### How was Language Data Obtained?\n\n\n\n'Found'",
"#### Where was it found?\n\n\n\n'Single website'",
"#### Data Validation\n\n\n\nother",
"#### Was Data Filtered?\n\n\n\nnot filtered",
"### Structured Annotations",
"#### Additional Annotations?\n\n\n\n\nfound",
"#### Annotation Service?\n\n\n\nno",
"#### Annotation Values\n\n\n\nThe input documents have section structure information.",
"#### Any Quality Control?\n\n\n\nvalidated by another rater",
"#### Quality Control Details\n\n\n\nBilingual annotators assessed the content overlap of source document and target summaries.",
"### Consent",
"#### Any Consent Policy?\n\n\n\nno",
"### Private Identifying Information (PII)",
"#### Contains PII?\n\n\n\n\nno PII",
"### Maintenance",
"#### Any Maintenance Plan?\n\n\n\nno",
"## Broader Social Context",
"### Previous Work on the Social Impact of the Dataset",
"#### Usage of Models based on the Data\n\n\n\nno",
"### Impact on Under-Served Communities",
"#### Addresses needs of underserved Communities?\n\n\n\nno",
"### Discussion of Biases",
"#### Any Documented Social Biases?\n\n\n\nno",
"## Considerations for Using the Data",
"### PII Risks and Liability",
"### Licenses",
"#### Copyright Restrictions on the Dataset\n\n\n\n'public domain'",
"#### Copyright Restrictions on the Language Data\n\n\n\n'public domain'",
"### Known Technical Limitations"
] | [
"TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-unknown #multilinguality-unknown #size_categories-unknown #source_datasets-original #language-German #language-English #language-French #language-Czech #license-cc-by-sa-4.0 #arxiv-2202.09583 #region-us \n",
"# Dataset Card for GEM/xwikis",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Laura Perez-Beltrachini",
"### Link to Main Data Card\n\nYou can find the main data card on the GEM Website.",
"### Dataset Summary \n\nThe XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation. \n\nYou can load the dataset via:\n\nThe data loader can be found here.",
"#### website\nGithub",
"#### paper\nURL",
"#### authors\nLaura Perez-Beltrachini (University of Edinburgh)",
"## Dataset Overview",
"### Where to find the Data and its Documentation",
"#### Webpage\n\n\n\nGithub",
"#### Paper\n\n\n\nURL",
"#### BibTex",
"#### Contact Name\n\n\n\n\nLaura Perez-Beltrachini",
"#### Contact Email\n\n\n\nlperez@URL",
"#### Has a Leaderboard?\n\n\n\nno",
"### Languages and Intended Use",
"#### Multilingual?\n\n\n\n\nyes",
"#### Covered Languages\n\n\n\n\n'German', 'English', 'French', 'Czech', 'Chinese'",
"#### License\n\n\n\n\ncc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International",
"#### Intended Use\n\n\n\nCross-lingual and Multi-lingual single long input document abstractive summarisation.",
"#### Primary Task\n\n\n\nSummarization",
"#### Communicative Goal\n\n\n\n\nEntity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.",
"### Credit",
"#### Curation Organization Type(s)\n\n\n\n'academic'",
"#### Dataset Creators\n\n\n\nLaura Perez-Beltrachini (University of Edinburgh)",
"#### Who added the Dataset to GEM?\n\n\n\nLaura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)",
"### Dataset Structure",
"#### Data Splits\n\n\n\nFor each language pair and direction there exists a train/valid/test split. \nThe test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).\nTrain/valid are randomly split.",
"## Dataset in GEM",
"### Rationale for Inclusion in GEM",
"#### Similar Datasets\n\n\n\nno",
"### GEM-Specific Curation",
"#### Modificatied for GEM?\n\n\n\nno",
"#### Additional Splits?\n\n\n\nno",
"### Getting Started with the Task",
"## Previous Results",
"### Previous Results",
"#### Measured Model Abilities\n\n\n\n- identification of entity salient information\n- translation\n- multi-linguality\n- cross-lingual transfer, zero-shot, few-shot",
"#### Metrics\n\n\n\n'ROUGE'",
"#### Previous results available?\n\n\n\nyes",
"#### Other Evaluation Approaches\n\n\n\nROUGE-1/2/L",
"## Dataset Curation",
"### Original Curation",
"#### Sourced from Different Sources\n\n\n\nno",
"### Language Data",
"#### How was Language Data Obtained?\n\n\n\n'Found'",
"#### Where was it found?\n\n\n\n'Single website'",
"#### Data Validation\n\n\n\nother",
"#### Was Data Filtered?\n\n\n\nnot filtered",
"### Structured Annotations",
"#### Additional Annotations?\n\n\n\n\nfound",
"#### Annotation Service?\n\n\n\nno",
"#### Annotation Values\n\n\n\nThe input documents have section structure information.",
"#### Any Quality Control?\n\n\n\nvalidated by another rater",
"#### Quality Control Details\n\n\n\nBilingual annotators assessed the content overlap of source document and target summaries.",
"### Consent",
"#### Any Consent Policy?\n\n\n\nno",
"### Private Identifying Information (PII)",
"#### Contains PII?\n\n\n\n\nno PII",
"### Maintenance",
"#### Any Maintenance Plan?\n\n\n\nno",
"## Broader Social Context",
"### Previous Work on the Social Impact of the Dataset",
"#### Usage of Models based on the Data\n\n\n\nno",
"### Impact on Under-Served Communities",
"#### Addresses needs of underserved Communities?\n\n\n\nno",
"### Discussion of Biases",
"#### Any Documented Social Biases?\n\n\n\nno",
"## Considerations for Using the Data",
"### PII Risks and Liability",
"### Licenses",
"#### Copyright Restrictions on the Dataset\n\n\n\n'public domain'",
"#### Copyright Restrictions on the Language Data\n\n\n\n'public domain'",
"### Known Technical Limitations"
] |
648664f0f63aa5901cc1bcdc2922558433c07dc7 |
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [github.com/oscar-corpus/corpus](https://github.com/oscar-corpus/corpus)
- **Paper:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://oscar-corpus.com/publication/2022/arxiv/towards/)
- **Point of Contact:** [Contact](https://oscar-corpus.com/#contact)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [ungoliant](https://github.com/oscar-corpus/ungoliant) architecture. Data is distributed by language in both original and deduplicated form.
**We are aware of the virus warnings issue. See discussion [here](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/discussions/12) for more info!**
### Usage
```py
from datasets import load_dataset
dataset = load_dataset("oscar-corpus/OSCAR-2201",
use_auth_token=True, # required
language="ar",
streaming=True, # optional
split="train") # optional, but the dataset only has a train split
for d in dataset:
print(d) # prints documents
```
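For a quick first look at a subcorpus, sampling a few documents from the stream is usually enough. The sketch below is an illustration rather than an official recipe: it uses `itertools.islice` so that only a handful of records are fetched, and the nested field names follow the Data Fields section below (the document body may be exposed as `content` or `text` depending on the loader version).

```py
from itertools import islice
from datasets import load_dataset

dataset = load_dataset("oscar-corpus/OSCAR-2201",
                       use_auth_token=True,   # required
                       language="br",         # e.g. Breton, a small subcorpus
                       streaming=True,
                       split="train")

# Inspect the first three documents without iterating over the full stream.
for doc in islice(dataset, 3):
    body = doc.get("content") or doc.get("text")   # body field name may vary
    uri = doc["warc_headers"]["warc-target-uri"]    # assumed nesting, see Data Fields
    print(uri, len(body))
```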
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
OSCAR 22.01 may have quality issues on low-size subcorpora, as has been the case in previous versions.
Note that since the documents are identified as a whole, it is expected to have lines in other languages in a given language subcorpus.
As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
|Language code|Language|Issues|
|-------------|--------|------|
| | | |
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Data Fields
* `id`: a `int64` feature.
* `content`: `string` Newline-separated content
* `warc_headers`: WARC Headers
* `warc_headers.content-length`: `int64` Content length (in bytes) **before** cleaning
* `warc_headers.content-type`: `string` MIME type
* `warc_headers.warc-block-digest`:`string` Algorithm name and calculated value of a digest applied to the full block of the record
* `warc_headers.warc-date`: `string` Crawl date (YYYY-MM-DDThh:mm:ssZ)
* `warc_headers.warc-identified-content-language`: `string` Comma-separated list of language identifications done by CommonCrawl (uses CLD3)
* `warc_headers.warc-record-id`: `string` Record ID
* `warc_headers.warc-refers-to`: `string` Record-ID of a single record for which the present record holds additional content
* `warc_headers.warc-target-uri`: `string` URI from where the content has been fetched
* `warc_headers.warc-type`: `string` Type of the WARC Record
* `metadata`: Metadata
* `metadata.identification.label`: `string` Language identification of the document
* `metadata.identification.prob`: `float` Confidence of the identification
* `metadata.annotation`: `[string]` Annotations of the document. `null` if none present. (Is `None` if using `datasets`)
* `metadata.sentence_identifications`: `[string]` List of line identifications. `null`/`None` can be present for lines that failed the identification step.
* `meta.offset`: `int64` line offset where the related text begins. Should be used with `meta.nb_sentences` when reading the source files rather than using iterators to get related data.
* `text`: `string` content
See the [WARC Format standard](https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#warc-type-mandatory) for more details on the `warc_headers` fields, and our [website](https://oscar-corpus.com/post/oscar-v22-01/) for more details about the format in general.
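As a hedged illustration of how these fields might be used (this is not taken from the official documentation), the sketch below keeps only documents that carry no quality annotation and whose language identification confidence exceeds a threshold. The exact nesting of the `metadata` fields should be verified against the loader version you use, since the description above also refers to a `meta` prefix; `filter` on streamed datasets also requires a reasonably recent `datasets` release.

```py
from datasets import load_dataset

dataset = load_dataset("oscar-corpus/OSCAR-2201",
                       use_auth_token=True,
                       language="gl",        # e.g. Galician
                       streaming=True,
                       split="train")

def is_clean(doc, min_prob=0.9):
    # Keep documents with no quality annotation and a confident identification.
    meta = doc["metadata"]                 # top-level key name is an assumption
    return meta["annotation"] is None and meta["identification"]["prob"] >= min_prob

clean_docs = dataset.filter(is_clean)      # lazy; evaluated as the stream is consumed
```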
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
</details>
## Table
| lang | size | docs | words |
|:----------------------------|:----------|:------------|:----------------|
| _Multilingual_ | 12.1 GB | 1,210,685 | 936,187,711 |
| Afrikaans | 47.0 MB | 12,393 | 6,227,310 |
| Albanian | 3.0 GB | 437,287 | 326,325,149 |
| Alemannic / Swiss German | 363.6 kB | 139 | 37,381 |
| Amharic | 461.0 MB | 37,513 | 30,481,153 |
| Arabic | 84.2 GB | 8,718,929 | 6,103,711,887 |
| Aragonese | 10.6 kB | 12 | 51 |
| Armenian | 4.7 GB | 379,267 | 268,031,270 |
| Assamese | 221.2 MB | 17,084 | 11,109,557 |
| Asturian | 73.6 kB | 77 | 3,919 |
| Avaric | 18.6 kB | 14 | 582 |
| Azerbaijani | 3.5 GB | 491,847 | 291,927,692 |
| Bangla | 15.1 GB | 1,171,501 | 751,877,226 |
| Bashkir | 95.5 MB | 11,198 | 5,418,474 |
| Basque | 1.1 GB | 233,658 | 97,092,942 |
| Belarusian | 1.8 GB | 180,046 | 107,227,860 |
| Bihari languages | 24.2 kB | 27 | 569 |
| Bishnupriya | 2.0 MB | 271 | 98,419 |
| Bosnian | 10.3 kB | 10 | 422 |
| Breton | 33.7 MB | 16,119 | 3,111,619 |
| Bulgarian | 35.1 GB | 2,887,115 | 2,405,981,285 |
| Burmese | 1.9 GB | 158,733 | 44,835,970 |
| Catalan | 13.9 GB | 2,627,307 | 1,508,919,864 |
| Cebuano | 44.6 MB | 5,742 | 5,253,785 |
| Central Kurdish | 716.4 MB | 84,950 | 43,913,025 |
| Chechen | 14.0 MB | 4,086 | 798,766 |
| Chinese | 900.9 GB | 56,524,518 | 23,149,203,886 |
| Chuvash | 41.8 MB | 4,750 | 2,465,782 |
| Cornish | 1.4 kB | 2 | 55 |
| Croatian | 11.2 MB | 11,462 | 505,369 |
| Czech | 58.6 GB | 10,381,916 | 5,452,724,456 |
| Danish | 12.6 GB | 2,265,479 | 1,454,439,292 |
| Dimli (individual language) | 706 Bytes | 1 | 19 |
| Divehi | 217.2 MB | 24,067 | 10,112,205 |
| Dutch | 114.0 GB | 20,206,532 | 12,329,127,151 |
| Eastern Mari | 11.3 MB | 1,612 | 641,525 |
| Egyptian Arabic | 2.8 MB | 1,256 | 176,096 |
| English | 3.2 TB | 431,992,659 | 377,376,402,775 |
| Esperanto | 558.3 MB | 111,932 | 58,416,628 |
| Estonian | 9.2 GB | 1,362,524 | 820,975,443 |
| Filipino | 646.5 MB | 70,394 | 81,881,278 |
| Finnish | 37.8 GB | 4,948,961 | 2,900,615,928 |
| French | 382.2 GB | 52,037,098 | 41,713,990,658 |
| Galician | 255.2 MB | 88,803 | 27,051,212 |
| Georgian | 7.1 GB | 488,588 | 281,430,479 |
| German | 496.7 GB | 70,075,424 | 46,826,676,844 |
| Goan Konkani | 787.2 kB | 46 | 38,831 |
| Greek | 78.3 GB | 6,738,546 | 5,031,242,803 |
| Guarani | 9.0 kB | 10 | 374 |
| Gujarati | 4.8 GB | 136,467 | 301,170,777 |
| Hebrew | 30.3 GB | 3,132,396 | 2,249,377,984 |
| Hindi | 23.3 GB | 1,529,907 | 1,534,799,198 |
| Hungarian | 53.9 GB | 6,866,062 | 4,598,787,907 |
| Icelandic | 2.0 GB | 396,183 | 210,365,124 |
| Ido | 77.3 kB | 105 | 2,690 |
| Iloko | 97.9 kB | 75 | 8,592 |
| Indonesian | 17.4 GB | 2,244,622 | 1,984,195,207 |
| Interlingua | 40.2 kB | 6 | 10,125 |
| Irish | 45.6 MB | 12,233 | 4,877,850 |
| Italian | 229.3 GB | 28,502,092 | 24,294,684,830 |
| Japanese | 258.7 GB | 36,328,931 | 5,592,948,356 |
| Javanese | 152.7 kB | 70 | 10,441 |
| Kalmyk | 9.3 kB | 9 | 250 |
| Kannada | 2.6 GB | 150,850 | 108,450,571 |
| Karachay-Balkar | 119.6 kB | 91 | 4,089 |
| Kazakh | 2.9 GB | 261,085 | 157,267,307 |
| Khmer | 1.9 GB | 121,910 | 30,564,131 |
| Komi | 119.9 kB | 127 | 3,335 |
| Korean | 51.8 GB | 5,881,481 | 3,854,968,649 |
| Kurdish | 150.3 MB | 29,906 | 17,390,759 |
| Kyrgyz | 518.6 MB | 62,244 | 28,028,986 |
| Lao | 337.1 MB | 28,914 | 6,682,982 |
| Latin | 4.1 MB | 4,397 | 187,446 |
| Latvian | 8.2 GB | 1,032,987 | 707,361,898 |
| Lezghian | 375.5 kB | 124 | 19,250 |
| Limburgish | 1.4 kB | 2 | 41 |
| Lithuanian | 20.0 GB | 2,303,070 | 1,712,802,056 |
| Lojban | 1.9 MB | 570 | 260,542 |
| Lombard | 2.6 kB | 2 | 225 |
| Low German | 9.0 MB | 1,938 | 1,012,561 |
| Lower Sorbian | 707 Bytes | 1 | 17 |
| Luxembourgish | 15.8 MB | 5,108 | 1,545,946 |
| Macedonian | 3.6 GB | 341,775 | 244,058,579 |
| Maithili | 21.6 kB | 23 | 483 |
| Malagasy | 57.3 MB | 3,028 | 7,279,056 |
| Malay | 5.3 MB | 5,228 | 217,818 |
| Malayalam | 4.1 GB | 250,972 | 137,831,247 |
| Maltese | 2.5 MB | 2,208 | 118,190 |
| Marathi | 3.3 GB | 250,376 | 160,179,233 |
| Mazanderani | 128.2 kB | 76 | 7,337 |
| Minangkabau | 6.0 MB | 585 | 614,613 |
| Mingrelian | 7.6 MB | 2,550 | 253,333 |
| Mongolian | 2.8 GB | 237,719 | 176,405,432 |
| Nahuatl languages | 8.7 kB | 12 | 179 |
| Nepali | 3.7 GB | 391,947 | 177,885,116 |
| Newari | 5.7 MB | 1,134 | 273,837 |
| Norwegian | 2.8 GB | 973,188 | 279,182,902 |
| Norwegian Nynorsk | 6.8 MB | 5,835 | 459,183 |
| Occitan | 2.1 MB | 373 | 31,061 |
| Odia | 487.9 MB | 52,942 | 23,755,902 |
| Ossetic | 13.9 MB | 3,560 | 800,430 |
| Pashto | 490.3 MB | 50,312 | 46,293,249 |
| Persian | 77.4 GB | 7,665,871 | 6,430,164,396 |
| Piedmontese | 1.7 MB | 698 | 188,270 |
| Polish | 139.0 GB | 19,301,137 | 12,584,498,906 |
| Portuguese | 170.3 GB | 23,735,707 | 18,441,864,893 |
| Punjabi | 1.1 GB | 68,094 | 70,068,604 |
| Quechua | 744 Bytes | 1 | 14 |
| Romanian | 49.2 GB | 4,624,764 | 5,261,803,995 |
| Russia Buriat | 32.9 kB | 39 | 785 |
| Russian | 1.1 TB | 76,060,844 | 62,811,122,663 |
| Sakha | 65.6 MB | 6,284 | 3,473,813 |
| Sanskrit | 136.0 MB | 4,472 | 5,671,369 |
| Scottish Gaelic | 137.7 kB | 136 | 7,769 |
| Serbian | 6.9 GB | 577,472 | 482,932,670 |
| Serbian (Latin) | 931.8 kB | 738 | 92,875 |
| Sicilian | 1.5 kB | 2 | 50 |
| Sindhi | 117.1 MB | 15,516 | 10,685,611 |
| Sinhala | 2.0 GB | 108,593 | 113,179,741 |
| Slovak | 16.5 GB | 2,409,555 | 1,619,121,944 |
| Slovenian | 1.2 GB | 351,894 | 118,400,246 |
| Somali | 2.1 kB | 3 | 109 |
| South Azerbaijani | 14.1 MB | 5,381 | 693,746 |
| Spanish | 381.9 GB | 51,386,247 | 42,829,835,316 |
| Sundanese | 5.0 MB | 263 | 547,145 |
| Swahili | 1.3 MB | 462 | 123,050 |
| Swedish | 48.0 GB | 7,541,278 | 5,078,331,128 |
| Tajik | 870.9 MB | 46,366 | 56,627,727 |
| Tamil | 11.4 GB | 556,772 | 452,343,748 |
| Tatar | 915.3 MB | 76,398 | 51,875,265 |
| Telugu | 3.4 GB | 249,756 | 137,752,065 |
| Thai | 66.1 GB | 5,030,254 | 1,626,779,846 |
| Tibetan | 234.5 MB | 18,683 | 2,286,269 |
| Turkish | 75.1 GB | 10,826,031 | 6,421,221,358 |
| Turkmen | 4.4 MB | 2,485 | 276,632 |
| Ukrainian | 48.8 GB | 4,558,214 | 2,879,585,992 |
| Emiliano-Romagnolo[eml] | 901 Bytes | 1 | 53 |
| Upper Sorbian | 132.8 kB | 110 | 8,825 |
| Urdu | 3.4 GB | 336,994 | 332,816,354 |
| Uyghur | 201.9 MB | 18,556 | 11,240,889 |
| Uzbek | 19.9 MB | 9,526 | 1,370,842 |
| Vietnamese | 98.9 GB | 9,587,233 | 12,283,185,482 |
| Volapük | 825.9 kB | 661 | 57,039 |
| Walloon | 105.7 kB | 138 | 4,386 |
| Waray | 7.6 MB | 933 | 830,872 |
| Welsh | 409.3 MB | 90,378 | 49,488,495 |
| Western Frisian | 75.3 MB | 21,946 | 6,357,929 |
| Western Mari | 743.5 kB | 155 | 43,916 |
| Western Panjabi | 46.7 MB | 6,790 | 4,060,419 |
| Wu Chinese | 137.2 kB | 88 | 3,056 |
| Yiddish | 232.5 MB | 23,418 | 15,809,780 |
| Yoruba | 24.7 kB | 26 | 1,042 |
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), which was itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText).
The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post on our [website](https://oscar-corpus.com).
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the plain text extracted from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the **November/December 2021** snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Julien Abadji](https://ujj.space), [Pedro Ortiz Suarez](https://portizs.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:

We do not own any of the text from which these data have been extracted.

We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/

To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR.
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@ARTICLE{2022arXiv220106642A,
author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = jan,
eid = {arXiv:2201.06642},
pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
eprint = {2201.06642},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@ARTICLE{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
  author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox), [@Uinelj](https://github.com/Uinelj) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| oscar-corpus/OSCAR-2201 | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:sq",
"language:am",
"language:ar",
"language:an",
"language:hy",
"language:as",
"language:ast",
"language:av",
"language:az",
"language:bn",
"language:ba",
"language:eu",
"language:be",
"language:bh",
"language:bpy",
"language:bs",
"language:br",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:ckb",
"language:ce",
"language:zh",
"language:cv",
"language:kw",
"language:hr",
"language:cs",
"language:da",
"language:diq",
"language:dv",
"language:nl",
"language:mhr",
"language:arz",
"language:en",
"language:eo",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:gl",
"language:ka",
"language:de",
"language:gom",
"language:el",
"language:gn",
"language:gu",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:io",
"language:ilo",
"language:id",
"language:ia",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:xal",
"language:kn",
"language:krc",
"language:kk",
"language:km",
"language:kv",
"language:ko",
"language:ku",
"language:ky",
"language:lo",
"language:la",
"language:lv",
"language:lez",
"language:li",
"language:lt",
"language:jbo",
"language:lmo",
"language:nds",
"language:dsb",
"language:lb",
"language:mk",
"language:mai",
"language:mg",
"language:ms",
"language:ml",
"language:mt",
"language:mr",
"language:mzn",
"language:min",
"language:xmf",
"language:mn",
"language:nah",
"language:ne",
"language:new",
"language:no",
"language:nn",
"language:oc",
"language:or",
"language:os",
"language:ps",
"language:fa",
"language:pms",
"language:pl",
"language:pt",
"language:pa",
"language:qu",
"language:ro",
"language:bxr",
"language:ru",
"language:sah",
"language:sa",
"language:gd",
"language:sr",
"language:sh",
"language:scn",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:azb",
"language:es",
"language:su",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:tt",
"language:te",
"language:th",
"language:bo",
"language:als",
"language:tr",
"language:tk",
"language:uk",
"language:eml",
"language:hsb",
"language:ur",
"language:ug",
"language:uz",
"language:vi",
"language:vo",
"language:wa",
"language:war",
"language:cy",
"language:fy",
"language:mrj",
"language:pnb",
"language:wuu",
"language:yi",
"language:yo",
"language:mul",
"license:cc0-1.0",
"arxiv:2010.14571",
"arxiv:2201.06642",
"arxiv:2103.12028",
"region:us"
] | 2022-03-14T23:09:14+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "sq", "am", "ar", "an", "hy", "as", "ast", "av", "az", "bn", "ba", "eu", "be", "bh", "bpy", "bs", "br", "bg", "my", "ca", "ceb", "ckb", "ce", "zh", "cv", "kw", "hr", "cs", "da", "diq", "dv", "nl", "mhr", "arz", "en", "eo", "et", "tl", "fi", "fr", "gl", "ka", "de", "gom", "el", "gn", "gu", "he", "hi", "hu", "is", "io", "ilo", "id", "ia", "ga", "it", "ja", "jv", "xal", "kn", "krc", "kk", "km", "kv", "ko", "ku", "ky", "lo", "la", "lv", "lez", "li", "lt", "jbo", "lmo", "nds", "dsb", "lb", "mk", "mai", "mg", "ms", "ml", "mt", "mr", "mzn", "min", "xmf", "mn", "nah", "ne", "new", false, "nn", "oc", "or", "os", "ps", "fa", "pms", "pl", "pt", "pa", "qu", "ro", "bxr", "ru", "sah", "sa", "gd", "sr", "sh", "scn", "sd", "si", "sk", "sl", "so", "azb", "es", "su", "sw", "sv", "tg", "ta", "tt", "te", "th", "bo", "als", "tr", "tk", "uk", "eml", "hsb", "ur", "ug", "uz", "vi", "vo", "wa", "war", "cy", "fy", "mrj", "pnb", "wuu", "yi", "yo", "mul"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR"} | 2023-05-30T06:48:15+00:00 | [
"2010.14571",
"2201.06642",
"2103.12028"
] | [
"af",
"sq",
"am",
"ar",
"an",
"hy",
"as",
"ast",
"av",
"az",
"bn",
"ba",
"eu",
"be",
"bh",
"bpy",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ckb",
"ce",
"zh",
"cv",
"kw",
"hr",
"cs",
"da",
"diq",
"dv",
"nl",
"mhr",
"arz",
"en",
"eo",
"et",
"tl",
"fi",
"fr",
"gl",
"ka",
"de",
"gom",
"el",
"gn",
"gu",
"he",
"hi",
"hu",
"is",
"io",
"ilo",
"id",
"ia",
"ga",
"it",
"ja",
"jv",
"xal",
"kn",
"krc",
"kk",
"km",
"kv",
"ko",
"ku",
"ky",
"lo",
"la",
"lv",
"lez",
"li",
"lt",
"jbo",
"lmo",
"nds",
"dsb",
"lb",
"mk",
"mai",
"mg",
"ms",
"ml",
"mt",
"mr",
"mzn",
"min",
"xmf",
"mn",
"nah",
"ne",
"new",
"no",
"nn",
"oc",
"or",
"os",
"ps",
"fa",
"pms",
"pl",
"pt",
"pa",
"qu",
"ro",
"bxr",
"ru",
"sah",
"sa",
"gd",
"sr",
"sh",
"scn",
"sd",
"si",
"sk",
"sl",
"so",
"azb",
"es",
"su",
"sw",
"sv",
"tg",
"ta",
"tt",
"te",
"th",
"bo",
"als",
"tr",
"tk",
"uk",
"eml",
"hsb",
"ur",
"ug",
"uz",
"vi",
"vo",
"wa",
"war",
"cy",
"fy",
"mrj",
"pnb",
"wuu",
"yi",
"yo",
"mul"
] | TAGS
#task_categories-fill-mask #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #source_datasets-original #language-Afrikaans #language-Albanian #language-Amharic #language-Arabic #language-Aragonese #language-Armenian #language-Assamese #language-Asturian #language-Avaric #language-Azerbaijani #language-Bengali #language-Bashkir #language-Basque #language-Belarusian #language-bh #language-Bishnupriya #language-Bosnian #language-Breton #language-Bulgarian #language-Burmese #language-Catalan #language-Cebuano #language-Central Kurdish #language-Chechen #language-Chinese #language-Chuvash #language-Cornish #language-Croatian #language-Czech #language-Danish #language-Dimli (individual language) #language-Dhivehi #language-Dutch #language-Eastern Mari #language-Egyptian Arabic #language-English #language-Esperanto #language-Estonian #language-Tagalog #language-Finnish #language-French #language-Galician #language-Georgian #language-German #language-Goan Konkani #language-Modern Greek (1453-) #language-Guarani #language-Gujarati #language-Hebrew #language-Hindi #language-Hungarian #language-Icelandic #language-Ido #language-Iloko #language-Indonesian #language-Interlingua (International Auxiliary Language Association) #language-Irish #language-Italian #language-Japanese #language-Javanese #language-Kalmyk #language-Kannada #language-Karachay-Balkar #language-Kazakh #language-Khmer #language-Komi #language-Korean #language-Kurdish #language-Kirghiz #language-Lao #language-Latin #language-Latvian #language-Lezghian #language-Limburgan #language-Lithuanian #language-Lojban #language-Lombard #language-Low German #language-Lower Sorbian #language-Luxembourgish #language-Macedonian #language-Maithili #language-Malagasy #language-Malay (macrolanguage) #language-Malayalam #language-Maltese #language-Marathi #language-Mazanderani #language-Minangkabau #language-Mingrelian #language-Mongolian #language-nah #language-Nepali (macrolanguage) #language-Newari #language-Norwegian #language-Norwegian Nynorsk #language-Occitan (post 1500) #language-Oriya (macrolanguage) #language-Ossetian #language-Pushto #language-Persian #language-Piemontese #language-Polish #language-Portuguese #language-Panjabi #language-Quechua #language-Romanian #language-Russia Buriat #language-Russian #language-Yakut #language-Sanskrit #language-Scottish Gaelic #language-Serbian #language-Serbo-Croatian #language-Sicilian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-South Azerbaijani #language-Spanish #language-Sundanese #language-Swahili (macrolanguage) #language-Swedish #language-Tajik #language-Tamil #language-Tatar #language-Telugu #language-Thai #language-Tibetan #language-Tosk Albanian #language-Turkish #language-Turkmen #language-Ukrainian #language-Emiliano-Romagnolo #language-Upper Sorbian #language-Urdu #language-Uighur #language-Uzbek #language-Vietnamese #language-Volapük #language-Walloon #language-Waray (Philippines) #language-Welsh #language-Western Frisian #language-Western Mari #language-Western Panjabi #language-Wu Chinese #language-Yiddish #language-Yoruba #language-Multiple languages #license-cc0-1.0 #arxiv-2010.14571 #arxiv-2201.06642 #arxiv-2103.12028 #region-us
| Dataset Card for "oscar"
========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: Towards a Cleaner Document-Oriented Multilingual Crawled Corpus
* Point of Contact: Contact
### Dataset Summary
OSCAR or Open Super-large Crawled Aggregated coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the ungoliant architecture. Data is distributed by language in both original and deduplicated form.
We are aware of the virus warnings issue. See discussion here for more info!
### Usage
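The corpus can be loaded with the `datasets` library. The snippet below is only a minimal sketch: the repository id (`oscar-corpus/OSCAR-2201`), the per-language configuration name (`gl` is used as an example) and the need for authentication are assumptions that should be checked against the dataset page. Streaming is recommended given the size of the corpus.
```python
from datasets import load_dataset

# Assumptions: the 22.01 release is published as "oscar-corpus/OSCAR-2201" and
# exposes one configuration per language (here "gl", Galician). Access may be
# gated, in which case you need to be logged in (huggingface-cli login).
dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    "gl",
    split="train",
    streaming=True,        # iterate without downloading the whole subcorpus
    use_auth_token=True,
)

# Peek at the first document.
for document in dataset:
    print(document["text"][:200])
    break
```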
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
OSCAR 22.01 may have quality issues on low-size subcorpora, as has been the case before.
Note that since the documents are identified as a whole, a given language subcorpus is expected to contain lines in other languages.
As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
If you encounter something that is unexpected, please file an issue here: URL
Language code: , Language: , Issues:
Dataset Structure
-----------------
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Data Fields
* 'id': a 'int64' feature.
* 'content': 'string' Newline-separated content
* 'warc\_headers': WARC Headers
* 'warc\_headers.content-length': 'int64' Content length (in bytes) before cleaning
* 'warc\_headers.content-type': 'string' MIME type
* 'warc\_headers.warc-block-digest':'string' Algorithm name and calculated value of a digest applied to the full block of the record
* 'warc\_headers.warc-date': 'string' Crawl date (YYYY-MM-DDThh:mm:ssZ)
* 'warc\_headers.warc-identified-content-language': 'string' Comma-separated list of language identifications done by CommonCrawl (uses CLD3)
* 'warc\_headers.warc-record-id': 'string' Record ID
* 'warc\_headers.warc-refers-to': 'string' Record-ID of a single record for which the present record holds additional content
* 'warc\_headers.warc-target-uri': 'string' URI from where the content has been fetched
* 'warc\_headers.warc-type': 'string' Type of the WARC Record
* 'metadata': Metadata
* 'URL': 'string' Language identification of the document
* 'URL': 'float' Confidence of the identification
* 'metadata.annotation': '[string]' Annotations of the document. 'null' if none present. (Is 'None' if using 'datasets')
* 'metadata.sentence\_identifications': '[string]' List of line identifications. 'null'/'None' can be present for lines that failed the identification step.
* 'URL': 'int64' line offset where the related text begins. Should be used with 'meta.nb\_sentences' when reading the source files rather than using iterators to get related data.
* 'text': 'string' content
See the WARC Format standard for more details on the 'warc\_headers' fields, and our website for more details about the format in general.
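As an illustration of how these fields can be used (a sketch only: the exact nesting of the metadata in the loaded dataset should be verified against a real sample, and the key names below are assumptions), one can inspect a document and keep only those that carry no quality annotation:
```python
from datasets import load_dataset

dataset = load_dataset("oscar-corpus/OSCAR-2201", "gl", split="train",
                       streaming=True, use_auth_token=True)

# Check the actual field layout first; the nesting below is an assumption.
sample = next(iter(dataset))
print(sample.keys())

def is_unannotated(document):
    # Documents with no quality flag carry a null/None annotation field.
    meta = document.get("meta") or document.get("metadata") or {}
    return meta.get("annotation") is None

clean_documents = (doc for doc in dataset if is_unannotated(doc))
```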
### Data Splits
Click to expand the number of samples per configuration
Table
-----
Dataset Creation
----------------
### Curation Rationale
OSCAR was constructed using 'Ungoliant', a new pipeline derived from goclassy, which was in turn derived from fastText's pipeline.
The pipeline works on documents rather than lines.
'Ungoliant' is implemented in the Rust programming language, and uses rayon as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post at our website
### Source Data
#### Initial Data Collection and Normalization
Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and URL policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the plain texts extracted from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the November/December 2021 snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The fastText linear classifier is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by third parties.
Additional Information
----------------------
### Dataset Curators
The corpus was put together by Julien Abadji, Pedro Ortiz Suarez, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.
### Licensing Information
```
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") URL
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
```
### Contributions
Thanks to @pjox, @Uinelj and @lhoestq for adding this dataset.
| [
"### Dataset Summary\n\n\nOSCAR or Open Super-large Crawled Aggregated coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the ungoliant architecture. Data is distributed by language in both original and deduplicated form.\n\n\nWe are aware of the virus warnings issue. See discussion here for more info!",
"### Usage",
"### Supported Tasks and Leaderboards\n\n\nOSCAR is mainly intended to pretrain language models and word representations.",
"### Languages\n\n\nAll the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.",
"### Issues\n\n\nOSCAR 22.01 may have quality issues on low size subcorpora, as it has been the case before.\n\n\nNote that since the documents are identified as a whole, it is expected to have lines in other languages in a given language subcorpus.\nAs an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.\n\n\nIf you encounter something that is unexpected, please file an issue here: URL\n\n\nLanguage code: , Language: , Issues: \n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all the configurations of the dataset.",
"### Data Instances\n\n\nTODO",
"### Data Fields\n\n\n* 'id': a 'int64' feature.\n* 'content': 'string' Newline-separated content\n* 'warc\\_headers': WARC Headers\n* 'warc\\_headers.content-length': 'int64' Content length (in bytes) before cleaning\n* 'warc\\_headers.content-type': 'string' MIME type\n* 'warc\\_headers.warc-block-digest':'string' Algorithm name and calculated value of a digest applied to the full block of the record\n* 'warc\\_headers.warc-date': 'string' Crawl date (YYYY-MM-DDThh:mm:ssZ)\n* 'warc\\_headers.warc-identified-content-language': 'string' Comma-separated list of language identifications done by CommonCrawl (uses CLD3)\n* 'warc\\_headers.warc-record-id': 'string' Record ID\n* 'warc\\_headers.warc-refers-to': 'string' Record-ID of a single record for which the present record holds additional content\n* 'warc\\_headers.warc-target-uri': 'string' URI from where the content has been fetched\n* 'warc\\_headers.warc-type': 'string' Type of the WARC Record\n* 'metadata': Metadata\n* 'URL': 'string' Language identification of the document\n* 'URL': 'float' Confidence of the identification\n* 'metadata.annotation': '[string]' Annnotations of the document. 'null' if none present. (Is 'None' if using 'datasets')\n* 'metadata.sentence\\_identifications': '[string]' List of line identifications. 'null'/'None' can be present for lines that failed the identification step.\n* 'URL': 'int64' line offset where the related text begins. Should be used with 'meta.nb\\_sentences' when reading the source files rather than using iterators to get related data.\n* 'text': 'string' content\n\n\nSee the WARC Format standard for more details on the 'warc\\_headers' fields, and our website for more details about the format in general.",
"### Data Splits\n\n\n\nClick to expand the number of samples per configuration\n\nTable\n-----\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nOSCAR was constructed using 'Ungoliant', a new pipeline derived from goclassy, itself being derived from fastText's one.\n\n\nThe pipeline works on documents rather than lines.\n'Ungoliant' is implemented in the Rust programming language, and uses rayon as its data parallelism strategy.\nThreading is done at shard, record and sentence level, making the whole generation process much more efficient.\n\n\nFiltering will be explained in a future blog post at our website",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nCommon Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and URL policies.\n\n\nEach monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.\n\n\nTo construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the November/December 2021 snapshot was used. It is composed by 64 000 compressed text files containing documents and their headers.",
"#### Who are the source language producers?\n\n\nThe data comes from multiple web pages in a large variety of languages.",
"### Annotations\n\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\n\nN/A",
"#### Who are the annotators?\n\n\nN/A",
"### Personal and Sensitive Information\n\n\nBeing constructed from Common Crawl, Personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nOSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.",
"### Discussion of Biases\n\n\nOSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.",
"### Other Known Limitations\n\n\nThe fastText linear classifier is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by third parties.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe corpus was put together by Julien Abadji, Pedro Ortiz Suarez, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.",
"### Licensing Information\n\n\n\n```\nThese data are released under this licensing scheme\nWe do not own any of the text from which these data has been extracted.\nWe license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\nTo the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\nThis work is published from: France.\n\nShould you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n* Clearly identify the copyrighted work claimed to be infringed.\n* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\nWe will comply to legitimate requests by removing the affected sources from the next release of the corpus.\n\n```",
"### Contributions\n\n\nThanks to @pjox, @Uinelj and @lhoestq for adding this dataset."
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #source_datasets-original #language-Afrikaans #language-Albanian #language-Amharic #language-Arabic #language-Aragonese #language-Armenian #language-Assamese #language-Asturian #language-Avaric #language-Azerbaijani #language-Bengali #language-Bashkir #language-Basque #language-Belarusian #language-bh #language-Bishnupriya #language-Bosnian #language-Breton #language-Bulgarian #language-Burmese #language-Catalan #language-Cebuano #language-Central Kurdish #language-Chechen #language-Chinese #language-Chuvash #language-Cornish #language-Croatian #language-Czech #language-Danish #language-Dimli (individual language) #language-Dhivehi #language-Dutch #language-Eastern Mari #language-Egyptian Arabic #language-English #language-Esperanto #language-Estonian #language-Tagalog #language-Finnish #language-French #language-Galician #language-Georgian #language-German #language-Goan Konkani #language-Modern Greek (1453-) #language-Guarani #language-Gujarati #language-Hebrew #language-Hindi #language-Hungarian #language-Icelandic #language-Ido #language-Iloko #language-Indonesian #language-Interlingua (International Auxiliary Language Association) #language-Irish #language-Italian #language-Japanese #language-Javanese #language-Kalmyk #language-Kannada #language-Karachay-Balkar #language-Kazakh #language-Khmer #language-Komi #language-Korean #language-Kurdish #language-Kirghiz #language-Lao #language-Latin #language-Latvian #language-Lezghian #language-Limburgan #language-Lithuanian #language-Lojban #language-Lombard #language-Low German #language-Lower Sorbian #language-Luxembourgish #language-Macedonian #language-Maithili #language-Malagasy #language-Malay (macrolanguage) #language-Malayalam #language-Maltese #language-Marathi #language-Mazanderani #language-Minangkabau #language-Mingrelian #language-Mongolian #language-nah #language-Nepali (macrolanguage) #language-Newari #language-Norwegian #language-Norwegian Nynorsk #language-Occitan (post 1500) #language-Oriya (macrolanguage) #language-Ossetian #language-Pushto #language-Persian #language-Piemontese #language-Polish #language-Portuguese #language-Panjabi #language-Quechua #language-Romanian #language-Russia Buriat #language-Russian #language-Yakut #language-Sanskrit #language-Scottish Gaelic #language-Serbian #language-Serbo-Croatian #language-Sicilian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-South Azerbaijani #language-Spanish #language-Sundanese #language-Swahili (macrolanguage) #language-Swedish #language-Tajik #language-Tamil #language-Tatar #language-Telugu #language-Thai #language-Tibetan #language-Tosk Albanian #language-Turkish #language-Turkmen #language-Ukrainian #language-Emiliano-Romagnolo #language-Upper Sorbian #language-Urdu #language-Uighur #language-Uzbek #language-Vietnamese #language-Volapük #language-Walloon #language-Waray (Philippines) #language-Welsh #language-Western Frisian #language-Western Mari #language-Western Panjabi #language-Wu Chinese #language-Yiddish #language-Yoruba #language-Multiple languages #license-cc0-1.0 #arxiv-2010.14571 #arxiv-2201.06642 #arxiv-2103.12028 #region-us \n",
"### Dataset Summary\n\n\nOSCAR or Open Super-large Crawled Aggregated coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the ungoliant architecture. Data is distributed by language in both original and deduplicated form.\n\n\nWe are aware of the virus warnings issue. See discussion here for more info!",
"### Usage",
"### Supported Tasks and Leaderboards\n\n\nOSCAR is mainly intended to pretrain language models and word representations.",
"### Languages\n\n\nAll the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.",
"### Issues\n\n\nOSCAR 22.01 may have quality issues on low size subcorpora, as it has been the case before.\n\n\nNote that since the documents are identified as a whole, it is expected to have lines in other languages in a given language subcorpus.\nAs an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.\n\n\nIf you encounter something that is unexpected, please file an issue here: URL\n\n\nLanguage code: , Language: , Issues: \n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all the configurations of the dataset.",
"### Data Instances\n\n\nTODO",
"### Data Fields\n\n\n* 'id': a 'int64' feature.\n* 'content': 'string' Newline-separated content\n* 'warc\\_headers': WARC Headers\n* 'warc\\_headers.content-length': 'int64' Content length (in bytes) before cleaning\n* 'warc\\_headers.content-type': 'string' MIME type\n* 'warc\\_headers.warc-block-digest':'string' Algorithm name and calculated value of a digest applied to the full block of the record\n* 'warc\\_headers.warc-date': 'string' Crawl date (YYYY-MM-DDThh:mm:ssZ)\n* 'warc\\_headers.warc-identified-content-language': 'string' Comma-separated list of language identifications done by CommonCrawl (uses CLD3)\n* 'warc\\_headers.warc-record-id': 'string' Record ID\n* 'warc\\_headers.warc-refers-to': 'string' Record-ID of a single record for which the present record holds additional content\n* 'warc\\_headers.warc-target-uri': 'string' URI from where the content has been fetched\n* 'warc\\_headers.warc-type': 'string' Type of the WARC Record\n* 'metadata': Metadata\n* 'URL': 'string' Language identification of the document\n* 'URL': 'float' Confidence of the identification\n* 'metadata.annotation': '[string]' Annnotations of the document. 'null' if none present. (Is 'None' if using 'datasets')\n* 'metadata.sentence\\_identifications': '[string]' List of line identifications. 'null'/'None' can be present for lines that failed the identification step.\n* 'URL': 'int64' line offset where the related text begins. Should be used with 'meta.nb\\_sentences' when reading the source files rather than using iterators to get related data.\n* 'text': 'string' content\n\n\nSee the WARC Format standard for more details on the 'warc\\_headers' fields, and our website for more details about the format in general.",
"### Data Splits\n\n\n\nClick to expand the number of samples per configuration\n\nTable\n-----\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nOSCAR was constructed using 'Ungoliant', a new pipeline derived from goclassy, itself being derived from fastText's one.\n\n\nThe pipeline works on documents rather than lines.\n'Ungoliant' is implemented in the Rust programming language, and uses rayon as its data parallelism strategy.\nThreading is done at shard, record and sentence level, making the whole generation process much more efficient.\n\n\nFiltering will be explained in a future blog post at our website",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nCommon Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and URL policies.\n\n\nEach monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.\n\n\nTo construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the November/December 2021 snapshot was used. It is composed by 64 000 compressed text files containing documents and their headers.",
"#### Who are the source language producers?\n\n\nThe data comes from multiple web pages in a large variety of languages.",
"### Annotations\n\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\n\nN/A",
"#### Who are the annotators?\n\n\nN/A",
"### Personal and Sensitive Information\n\n\nBeing constructed from Common Crawl, Personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nOSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.",
"### Discussion of Biases\n\n\nOSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.",
"### Other Known Limitations\n\n\nThe fastText linear classifier is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by third parties.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe corpus was put together by Julien Abadji, Pedro Ortiz Suarez, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.",
"### Licensing Information\n\n\n\n```\nThese data are released under this licensing scheme\nWe do not own any of the text from which these data has been extracted.\nWe license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\nTo the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\nThis work is published from: France.\n\nShould you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n* Clearly identify the copyrighted work claimed to be infringed.\n* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\nWe will comply to legitimate requests by removing the affected sources from the next release of the corpus.\n\n```",
"### Contributions\n\n\nThanks to @pjox, @Uinelj and @lhoestq for adding this dataset."
] |
6e8665ced0dc6c8f274e1e496a2187b11fe0832d | # Dataset Card for Cartoon Set
## Table of Contents
- [Dataset Card for Cartoon Set](#dataset-card-for-cartoon-set)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://google.github.io/cartoonset/
- **Repository:** https://github.com/google/cartoonset/
- **Paper:** XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary

[Cartoon Set](https://google.github.io/cartoonset/) is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.
#### Usage
`cartoonset` provides the images as PNG byte strings, which gives you a bit more flexibility in how to load the data. Here we show 2 ways:
**Using PIL:**
```python
import datasets
from io import BytesIO
from PIL import Image
ds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k"
def process_fn(sample):
img = Image.open(BytesIO(sample["img_bytes"]))
...
return {"img": img}
ds = ds.map(process_fn, remove_columns=["img_bytes"])
```
**Using TensorFlow:**
```python
import datasets
import tensorflow as tf
hfds = datasets.load_dataset("cgarciae/cartoonset", "10k", split="train") # or "100k"
ds = tf.data.Dataset.from_generator(
lambda: hfds,
output_signature={
"img_bytes": tf.TensorSpec(shape=(), dtype=tf.string),
},
)
def process_fn(sample):
img = tf.image.decode_png(sample["img_bytes"], channels=3)
...
return {"img": img}
ds = ds.map(process_fn)
```
**Additional features:**
You can also access the features that generated each sample e.g:
```python
ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features") # or "100k+features"
```
Apart from `img_bytes` these configurations add a total of 18 * 2 additional `int` features, these come in `{feature}`, `{feature}_num_categories` pairs where `num_categories` indicates the number of categories for that feature. See [Data Fields](#data-fields) for the complete list of features.
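For example, the `*_num_categories` companions make it easy to one-hot encode an attribute without hard-coding its cardinality. A small sketch using the `eye_angle` attribute listed below:
```python
import numpy as np
import datasets

ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features", split="train")

def one_hot_eye_angle(sample):
    # The width of the one-hot vector is read from the matching
    # *_num_categories field rather than hard-coded.
    n = sample["eye_angle_num_categories"]
    one_hot = np.zeros(n, dtype=np.float32)
    one_hot[sample["eye_angle"]] = 1.0
    return {"eye_angle_one_hot": one_hot}

ds = ds.map(one_hot_eye_angle)
```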
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'img_bytes': b'0x...',
}
```
If `+features` is added to the dataset name, the following additional fields are provided:
```python
{
'img_bytes': b'0x...',
'eye_angle': 0,
'eye_angle_num_categories': 3,
'eye_lashes': 0,
'eye_lashes_num_categories': 2,
'eye_lid': 0,
'eye_lid_num_categories': 2,
'chin_length': 2,
'chin_length_num_categories': 3,
...
}
```
### Data Fields
- `img_bytes`: A byte string containing the raw data of a 500x500 PNG image.
If `+features` is appended to the dataset name, the following additional `int32` fields are provided:
- `eye_angle`
- `eye_angle_num_categories`
- `eye_lashes`
- `eye_lashes_num_categories`
- `eye_lid`
- `eye_lid_num_categories`
- `chin_length`
- `chin_length_num_categories`
- `eyebrow_weight`
- `eyebrow_weight_num_categories`
- `eyebrow_shape`
- `eyebrow_shape_num_categories`
- `eyebrow_thickness`
- `eyebrow_thickness_num_categories`
- `face_shape`
- `face_shape_num_categories`
- `facial_hair`
- `facial_hair_num_categories`
- `hair`
- `hair_num_categories`
- `eye_color`
- `eye_color_num_categories`
- `face_color`
- `face_color_num_categories`
- `hair_color`
- `hair_color_num_categories`
- `glasses`
- `glasses_num_categories`
- `glasses_color`
- `glasses_color_num_categories`
- `eyes_slant`
- `eye_slant_num_categories`
- `eyebrow_width`
- `eyebrow_width_num_categories`
- `eye_eyebrow_distance`
- `eye_eyebrow_distance_num_categories`
### Data Splits
Train
## Dataset Creation
### Licensing Information
This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@article{DBLP:journals/corr/abs-1711-05139,
author = {Amelie Royer and
Konstantinos Bousmalis and
Stephan Gouws and
Fred Bertsch and
Inbar Mosseri and
Forrester Cole and
Kevin Murphy},
title = {{XGAN:} Unsupervised Image-to-Image Translation for many-to-many Mappings},
journal = {CoRR},
volume = {abs/1711.05139},
year = {2017},
url = {http://arxiv.org/abs/1711.05139},
eprinttype = {arXiv},
eprint = {1711.05139},
timestamp = {Mon, 13 Aug 2018 16:47:38 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1711-05139.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
| cgarciae/cartoonset | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"arxiv:1711.05139",
"region:us"
] | 2022-03-14T23:35:29+00:00 | {"license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image", "computer-vision", "generative-modelling"], "pretty_name": "Cartoon Set"} | 2022-03-23T19:12:10+00:00 | [
"1711.05139"
] | [] | TAGS
#size_categories-10K<n<100K #license-cc-by-4.0 #arxiv-1711.05139 #region-us
| # Dataset Card for Cartoon Set
## Table of Contents
- Dataset Card for Cartoon Set
- Table of Contents
- Dataset Description
- Dataset Summary
- Usage
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
- Leaderboard:
- Point of Contact:
### Dataset Summary
!Cartoon Set sample image
Cartoon Set is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.
#### Usage
'cartoonset' provides the images as PNG byte strings, this gives you a bit more flexibility into how to load the data. Here we show 2 ways:
Using PIL:
Using TensorFlow:
Additional features:
You can also access the features that generated each sample e.g:
Apart from 'img_bytes' these configurations add a total of 18 * 2 additional 'int' features, these come in '{feature}', '{feature}_num_categories' pairs where 'num_categories' indicates the number of categories for that feature. See Data Fields for the complete list of features.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
If '+features' is added to the dataset name, the following additional fields are provided:
### Data Fields
- 'img_bytes': A byte string containing the raw data of a 500x500 PNG image.
If '+features' is appended to the dataset name, the following additional 'int32' fields are provided:
- 'eye_angle'
- 'eye_angle_num_categories'
- 'eye_lashes'
- 'eye_lashes_num_categories'
- 'eye_lid'
- 'eye_lid_num_categories'
- 'chin_length'
- 'chin_length_num_categories'
- 'eyebrow_weight'
- 'eyebrow_weight_num_categories'
- 'eyebrow_shape'
- 'eyebrow_shape_num_categories'
- 'eyebrow_thickness'
- 'eyebrow_thickness_num_categories'
- 'face_shape'
- 'face_shape_num_categories'
- 'facial_hair'
- 'facial_hair_num_categories'
- 'facial_hair_num_categories'
- 'facial_hair_num_categories'
- 'hair'
- 'hair_num_categories'
- 'hair_num_categories'
- 'hair_num_categories'
- 'eye_color'
- 'eye_color_num_categories'
- 'face_color'
- 'face_color_num_categories'
- 'hair_color'
- 'hair_color_num_categories'
- 'glasses'
- 'glasses_num_categories'
- 'glasses_color'
- 'glasses_color_num_categories'
- 'eyes_slant'
- 'eye_slant_num_categories'
- 'eyebrow_width'
- 'eyebrow_width_num_categories'
- 'eye_eyebrow_distance'
- 'eye_eyebrow_distance_num_categories'
### Data Splits
Train
## Dataset Creation
### Licensing Information
This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.
### Contributions
| [
"# Dataset Card for Cartoon Set",
"## Table of Contents\r\n- Dataset Card for Cartoon Set\r\n - Table of Contents\r\n - Dataset Description\r\n - Dataset Summary\r\n - Usage\r\n - Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n - Dataset Creation\r\n - Licensing Information\r\n - Citation Information\r\n - Contributions",
"## Dataset Description\r\n- Homepage: URL\r\n- Repository: URL\r\n- Paper: XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings\r\n- Leaderboard:\r\n- Point of Contact:",
"### Dataset Summary\r\n\r\n!Cartoon Set sample image\r\n\r\nCartoon Set is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.",
"#### Usage\r\n'cartoonset' provides the images as PNG byte strings, this gives you a bit more flexibility into how to load the data. Here we show 2 ways:\r\n\r\nUsing PIL:\r\n\r\n\r\nUsing TensorFlow:\r\n\r\n\r\nAdditional features:\r\nYou can also access the features that generated each sample e.g:\r\n\r\n\r\n\r\nApart from 'img_bytes' these configurations add a total of 18 * 2 additional 'int' features, these come in '{feature}', '{feature}_num_categories' pairs where 'num_categories' indicates the number of categories for that feature. See Data Fields for the complete list of features.",
"## Dataset Structure",
"### Data Instances\r\nA sample from the training set is provided below:\r\n\r\nIf '+features' is added to the dataset name, the following additional fields are provided:",
"### Data Fields\r\n- 'img_bytes': A byte string containing the raw data of a 500x500 PNG image.\r\n\r\nIf '+features' is appended to the dataset name, the following additional 'int32' fields are provided:\r\n\r\n- 'eye_angle'\r\n- 'eye_angle_num_categories'\r\n- 'eye_lashes'\r\n- 'eye_lashes_num_categories'\r\n- 'eye_lid'\r\n- 'eye_lid_num_categories'\r\n- 'chin_length'\r\n- 'chin_length_num_categories'\r\n- 'eyebrow_weight'\r\n- 'eyebrow_weight_num_categories'\r\n- 'eyebrow_shape'\r\n- 'eyebrow_shape_num_categories'\r\n- 'eyebrow_thickness'\r\n- 'eyebrow_thickness_num_categories'\r\n- 'face_shape'\r\n- 'face_shape_num_categories'\r\n- 'facial_hair'\r\n- 'facial_hair_num_categories'\r\n- 'facial_hair_num_categories'\r\n- 'facial_hair_num_categories'\r\n- 'hair'\r\n- 'hair_num_categories'\r\n- 'hair_num_categories'\r\n- 'hair_num_categories'\r\n- 'eye_color'\r\n- 'eye_color_num_categories'\r\n- 'face_color'\r\n- 'face_color_num_categories'\r\n- 'hair_color'\r\n- 'hair_color_num_categories'\r\n- 'glasses'\r\n- 'glasses_num_categories'\r\n- 'glasses_color'\r\n- 'glasses_color_num_categories'\r\n- 'eyes_slant'\r\n- 'eye_slant_num_categories'\r\n- 'eyebrow_width'\r\n- 'eyebrow_width_num_categories'\r\n- 'eye_eyebrow_distance'\r\n- 'eye_eyebrow_distance_num_categories'",
"### Data Splits\r\nTrain",
"## Dataset Creation",
"### Licensing Information\r\nThis data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.",
"### Contributions"
] | [
"TAGS\n#size_categories-10K<n<100K #license-cc-by-4.0 #arxiv-1711.05139 #region-us \n",
"# Dataset Card for Cartoon Set",
"## Table of Contents\r\n- Dataset Card for Cartoon Set\r\n - Table of Contents\r\n - Dataset Description\r\n - Dataset Summary\r\n - Usage\r\n - Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n - Dataset Creation\r\n - Licensing Information\r\n - Citation Information\r\n - Contributions",
"## Dataset Description\r\n- Homepage: URL\r\n- Repository: URL\r\n- Paper: XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings\r\n- Leaderboard:\r\n- Point of Contact:",
"### Dataset Summary\r\n\r\n!Cartoon Set sample image\r\n\r\nCartoon Set is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.",
"#### Usage\r\n'cartoonset' provides the images as PNG byte strings, this gives you a bit more flexibility into how to load the data. Here we show 2 ways:\r\n\r\nUsing PIL:\r\n\r\n\r\nUsing TensorFlow:\r\n\r\n\r\nAdditional features:\r\nYou can also access the features that generated each sample e.g:\r\n\r\n\r\n\r\nApart from 'img_bytes' these configurations add a total of 18 * 2 additional 'int' features, these come in '{feature}', '{feature}_num_categories' pairs where 'num_categories' indicates the number of categories for that feature. See Data Fields for the complete list of features.",
"## Dataset Structure",
"### Data Instances\r\nA sample from the training set is provided below:\r\n\r\nIf '+features' is added to the dataset name, the following additional fields are provided:",
"### Data Fields\r\n- 'img_bytes': A byte string containing the raw data of a 500x500 PNG image.\r\n\r\nIf '+features' is appended to the dataset name, the following additional 'int32' fields are provided:\r\n\r\n- 'eye_angle'\r\n- 'eye_angle_num_categories'\r\n- 'eye_lashes'\r\n- 'eye_lashes_num_categories'\r\n- 'eye_lid'\r\n- 'eye_lid_num_categories'\r\n- 'chin_length'\r\n- 'chin_length_num_categories'\r\n- 'eyebrow_weight'\r\n- 'eyebrow_weight_num_categories'\r\n- 'eyebrow_shape'\r\n- 'eyebrow_shape_num_categories'\r\n- 'eyebrow_thickness'\r\n- 'eyebrow_thickness_num_categories'\r\n- 'face_shape'\r\n- 'face_shape_num_categories'\r\n- 'facial_hair'\r\n- 'facial_hair_num_categories'\r\n- 'facial_hair_num_categories'\r\n- 'facial_hair_num_categories'\r\n- 'hair'\r\n- 'hair_num_categories'\r\n- 'hair_num_categories'\r\n- 'hair_num_categories'\r\n- 'eye_color'\r\n- 'eye_color_num_categories'\r\n- 'face_color'\r\n- 'face_color_num_categories'\r\n- 'hair_color'\r\n- 'hair_color_num_categories'\r\n- 'glasses'\r\n- 'glasses_num_categories'\r\n- 'glasses_color'\r\n- 'glasses_color_num_categories'\r\n- 'eyes_slant'\r\n- 'eye_slant_num_categories'\r\n- 'eyebrow_width'\r\n- 'eyebrow_width_num_categories'\r\n- 'eye_eyebrow_distance'\r\n- 'eye_eyebrow_distance_num_categories'",
"### Data Splits\r\nTrain",
"## Dataset Creation",
"### Licensing Information\r\nThis data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.",
"### Contributions"
] |
f887b0aa23f386116e46690f4630b2f2c204a880 |
# Dataset Card for "Hebrew_Squad_v1"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/TechnionTDK/hebwiki-qa/](https://github.com/TechnionTDK/hebwiki-qa/)
- **Size of train dataset files:** 62.3 MB
- **Size of validation dataset files:** 9.48 MB
- **Total amount of disk used:** 71.78 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Hebrew dataset is an automatic translation of the English SQuAD dataset https://huggingface.co/datasets/squad.
### Supported Tasks and Leaderboards
Extractive Question-Answering
### Languages
Hebrew
## Dataset Structure
Follows the standard SQuAD format.
### Data Instances
#### plain_text
- **Size of train dataset files:** 62.3 MB
- **Size of validation dataset files:** 9.48 MB
- **Total amount of disk used:** 71.78 MB
An example of 'train' looks as follows.
```
{
"id": "56be4db0acb8001400a502ee",
"title": "Super_Bowl_50",
"context": "סופרבול 50 היה משחק כדורגל אמריקאי כדי לקבוע את אלופת ליגת הפוטבול הלאומית (NFL) לעונת 2015. אלופת ועידת הכדורגל האמריקאית (AFC) דנבר ברונקוס ניצחה את אלופת ועידת הכדורגל הלאומית (NFC) קרולינה פנתרס 24–10 כדי לזכות בתואר הסופרבול השלישי שלה. המשחק נערך ב-7 בפברואר 2016 באצטדיון ליווי'ס באזור מפרץ סן פרנסיסקו בסנטה קלרה, קליפורניה. מכיוון שזה היה הסופרבול ה-50, הליגה הדגישה את יום השנה הזהב עם יוזמות שונות בנושא זהב, כמו גם השעיה זמנית את המסורת של שם כל משחק סופרבול עם ספרות רומיות (שתחתן המשחק היה ידוע בתור סופרבול L ), כך שהלוגו יוכל להציג באופן בולט את הספרות הערביות 50.",
"question": "היכן התקיים סופרבול 50?",
"answers": {
"text": ["סנטה קלרה, קליפורניה", "אצטדיון ליווי"],
"answer_start": [311, 271]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### Hebrew_Squad_v1
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----|---------|
|Hebrew_Squad_v1|52405| 7455|
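A minimal loading sketch (split names as in the table above):
```python
from datasets import load_dataset

dataset = load_dataset("tdklab/Hebrew_Squad_v1")

print(dataset["train"].num_rows)       # 52405
print(dataset["validation"].num_rows)  # 7455

sample = dataset["train"][0]
print(sample["question"])
print(sample["answers"]["text"])
```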
### Contributions
Created by Matan Ben-chorin and May Flaster, guided by Dr. Oren Mishali.
This is our final project as part of our computer engineering B.Sc. studies in the Faculty of Electrical Engineering combined with Computer Science at the Technion – Israel Institute of Technology.
For further collaboration, please contact:
Matan Ben-chorin: [email protected]
May Flaster: [email protected]
| tdklab/Hebrew_Squad_v1 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:auto_translation",
"language_creators:auto_translation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad",
"region:us"
] | 2022-03-15T00:43:59+00:00 | {"annotations_creators": ["auto_translation"], "language_creators": ["auto_translation"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["squad"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Hebrew_Squad_v1", "languages": ["Hebrew", "he"], "licenses": ["cc-by-4-0"]} | 2022-08-04T03:59:05+00:00 | [] | [] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-auto_translation #language_creators-auto_translation #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-squad #region-us
| Dataset Card for "Hebrew\_Squad\_v1"
====================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Size of train dataset files: 62.3 MB
* Size of validation dataset files: 9.48 MB
* Total amount of disk used: 71.78 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Hebrew dataset is an automatic translation of the English SQuAD dataset URL
### Supported Tasks and Leaderboards
Extractive Question-Answering
### Languages
Hebrew
Dataset Structure
-----------------
Follows the standars SQuAD format.
### Data Instances
#### plain\_text
* Size of train dataset files: 62.3 MB
* Size of validation dataset files: 9.48 MB
* Total amount of disk used: 71.78 MB
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### Hebrew\_Squad\_v1
* 'id': a 'string' feature.
* 'title': a 'string' feature.
* 'context': a 'string' feature.
* 'question': a 'string' feature.
* 'answers': a dictionary feature containing:
+ 'text': a 'string' feature.
+ 'answer\_start': a 'int32' feature.
### Data Splits
name: Hebrew\_Squad\_v1, train: 52405, validation: 7455
### Contributions
Created by Matan Ben-chorin, May Flaster, Guided by Dr. Oren Mishali.
This is our final project as part of computer engineering B.Sc studies in the Faculty of Electrical Engineering combined with Computer Science at Technion, Israel Institute of Technology.
For more cooperation, please contact email:
Matan Ben-chorin: matan.bh1@URL
May Flaster: mayflaster96@URL
| [
"### Dataset Summary\n\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Hebrew dataset is an automatic translation of the English SQuAD dataset URL",
"### Supported Tasks and Leaderboards\n\n\nExtractive Question-Answering",
"### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------\n\n\nFollows the standars SQuAD format.",
"### Data Instances",
"#### plain\\_text\n\n\n* Size of train dataset files: 62.3 MB\n* Size of validation dataset files: 9.48 MB\n* Total amount of disk used: 71.78 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### Hebrew\\_Squad\\_v1\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits\n\n\nname: Hebrew\\_Squad\\_v1, train: 52405, validation: 7455",
"### Contributions\n\n\nCreated by Matan Ben-chorin, May Flaster, Guided by Dr. Oren Mishali.\nThis is our final project as part of computer engineering B.Sc studies in the Faculty of Electrical Engineering combined with Computer Science at Technion, Israel Institute of Technology.\nFor more cooperation, please contact email:\nMatan Ben-chorin: matan.bh1@URL\nMay Flaster: mayflaster96@URL"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-auto_translation #language_creators-auto_translation #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-squad #region-us \n",
"### Dataset Summary\n\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Hebrew dataset is an automatic translation of the English SQuAD dataset URL",
"### Supported Tasks and Leaderboards\n\n\nExtractive Question-Answering",
"### Languages\n\n\nHebrew\n\n\nDataset Structure\n-----------------\n\n\nFollows the standars SQuAD format.",
"### Data Instances",
"#### plain\\_text\n\n\n* Size of train dataset files: 62.3 MB\n* Size of validation dataset files: 9.48 MB\n* Total amount of disk used: 71.78 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### Hebrew\\_Squad\\_v1\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits\n\n\nname: Hebrew\\_Squad\\_v1, train: 52405, validation: 7455",
"### Contributions\n\n\nCreated by Matan Ben-chorin, May Flaster, Guided by Dr. Oren Mishali.\nThis is our final project as part of computer engineering B.Sc studies in the Faculty of Electrical Engineering combined with Computer Science at Technion, Israel Institute of Technology.\nFor more cooperation, please contact email:\nMatan Ben-chorin: matan.bh1@URL\nMay Flaster: mayflaster96@URL"
] |
1161216f7e7185a4b2f4d0a4e0734dc7919dfa15 |
# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9)
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task.
It includes v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected v12 train/dev/test data (English only).
The source of the data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be the same as the official data, but users should use this dataset at their own risk.
See also the summaries from Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)
For more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
- ...
### Languages
V4 data for Arabic, Chinese, English, and V12 data for English
## Dataset Structure
### Data Instances
```
{'document_id': 'nw/wsj/23/wsj_2311',
 'sentences': [{'part_id': 0,
    'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2,400', 'to', 'Paris', 'and', '$', '3,200', 'to', 'London', '.'],
    'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7],
    'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))',
    'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None],
    'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None],
    'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None],
    'speaker': None,
    'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0],
    'srl_frames': [{'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'],
                    'verb': 'are'}],
    'coref_spans': []},
   {'part_id': 0,
    'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.'],
    'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7],
    'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) ))',
    'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None],
    'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None],
    'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None],
    'speaker': None,
    'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0],
    'srl_frames': [{'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O'],
                    'verb': 'reversed'}],
    'coref_spans': []}]}
```
### Data Fields
- **`document_id`** (*`str`*): This is a variation on the document filename
- **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences.
Every element in `sentences` is a *`Dict`* composed of the following data fields:
- **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- **`words`** (*`List[str]`*) : The tokens of the sentence.
- **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag.
- tag set : Note that the tag sets below were found by scanning all the data, and they appear to differ slightly from the officially stated tag sets. See the official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9)
- arabic : str. Because POS tags in Arabic are compound and complex, they are hard to represent with `ClassLabel`
- chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` is for pos tag missing
- english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
- english v12 : `datasets.ClassLabel(num_classes=51, names="english_v12": ["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
- **`parse_tree`** (*`Optional[str]`*) : A serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`.
- **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`.
- **`predicate_framenet_ids`** (*`List[Optional[int]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`.
- **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1.
- **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`.
- **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
- **`srl_frames`** (*`List[{"verb":str, "frames":List[str]}]`*) : A list of dictionaries, one per verb in the sentence, giving the PropBank frame labels for that verb in a BIO format.
- **`coref_spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.
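To make these fields concrete, here is a minimal sketch (not part of the original card) showing how the integer `named_entities` labels of one sentence can be decoded back into BIO tag strings with the Hugging Face `datasets` library. The dataset id and the feature-access path are assumptions based on this repository and the schema above, so they may need adjusting.

```python
from datasets import load_dataset

# Assumed dataset id and configuration, based on this repository and the card above.
ds = load_dataset("conll2012_ontonotesv5", "english_v4", split="train")

# named_entities is stored as ClassLabel ids; the feature-access path below is an
# assumption derived from the schema described in this section.
ner_names = ds.features["sentences"][0]["named_entities"].feature.names

doc = ds[0]                     # one document
sentence = doc["sentences"][0]  # its first sentence
tags = [ner_names[i] for i in sentence["named_entities"]]

for token, tag in zip(sentence["words"], tags):
    print(f"{token}\t{tag}")
```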
### Data Splits
Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_
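As a quick sanity check of these splits, the sketch below (an illustration, not part of the original card) loads one configuration and prints how many documents and sentences each split contains; the dataset id is assumed from this repository.

```python
from datasets import load_dataset

# Dataset id and configuration name assumed from this card; any of the four configs works.
dataset = load_dataset("conll2012_ontonotesv5", "english_v4")

for split_name, split in dataset.items():
    # Each example is one document whose "sentences" list holds all of its sentences.
    n_sentences = sum(len(doc["sentences"]) for doc in split)
    print(f"{split_name}: {len(split)} documents, {n_sentences} sentences")
```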
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset. | conll2012_ontonotesv5 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:coreference-resolution",
"task_ids:parsing",
"task_ids:lemmatization",
"task_ids:word-sense-disambiguation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"language:en",
"language:zh",
"license:cc-by-nc-nd-4.0",
"semantic-role-labeling",
"region:us"
] | 2022-03-15T10:48:28+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ar", "en", "zh"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech", "coreference-resolution", "parsing", "lemmatization", "word-sense-disambiguation"], "paperswithcode_id": "ontonotes-5-0", "pretty_name": "CoNLL2012 shared task data based on OntoNotes 5.0", "tags": ["semantic-role-labeling"], "dataset_info": [{"config_name": "english_v4", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "XX", "1": "``", "2": "$", "3": "''", "4": ",", "5": "-LRB-", "6": "-RRB-", "7": ".", "8": ":", "9": "ADD", "10": "AFX", "11": "CC", "12": "CD", "13": "DT", "14": "EX", "15": "FW", "16": "HYPH", "17": "IN", "18": "JJ", "19": "JJR", "20": "JJS", "21": "LS", "22": "MD", "23": "NFP", "24": "NN", "25": "NNP", "26": "NNPS", "27": "NNS", "28": "PDT", "29": "POS", "30": "PRP", "31": "PRP$", "32": "RB", "33": "RBR", "34": "RBS", "35": "RP", "36": "SYM", "37": "TO", "38": "UH", "39": "VB", "40": "VBD", "41": "VBG", "42": "VBN", "43": "VBP", "44": "VBZ", "45": "WDT", "46": "WP", "47": "WP$", "48": "WRB"}}}}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 112246121, "num_examples": 1940}, {"name": "validation", "num_bytes": 14116925, "num_examples": 222}, {"name": "test", "num_bytes": 14709044, "num_examples": 222}], "download_size": 193644139, "dataset_size": 141072090}, {"config_name": "chinese_v4", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "X", "1": "AD", "2": "AS", "3": "BA", "4": "CC", "5": "CD", "6": "CS", "7": "DEC", "8": "DEG", "9": "DER", "10": "DEV", "11": "DT", "12": "ETC", "13": "FW", "14": "IJ", "15": "INF", "16": "JJ", "17": "LB", "18": "LC", "19": "M", "20": "MSP", "21": "NN", "22": "NR", "23": "NT", "24": "OD", "25": "ON", "26": "P", "27": "PN", "28": "PU", "29": "SB", "30": "SP", "31": "URL", "32": "VA", "33": "VC", 
"34": "VE", "35": "VV"}}}}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 77195698, "num_examples": 1391}, {"name": "validation", "num_bytes": 10828169, "num_examples": 172}, {"name": "test", "num_bytes": 9585138, "num_examples": 166}], "download_size": 193644139, "dataset_size": 97609005}, {"config_name": "arabic_v4", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 42017761, "num_examples": 359}, {"name": "validation", "num_bytes": 4859292, "num_examples": 44}, {"name": "test", "num_bytes": 4900664, "num_examples": 44}], "download_size": 193644139, "dataset_size": 51777717}, {"config_name": "english_v12", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "XX", "1": "``", "2": "$", "3": "''", "4": "*", "5": ",", "6": "-LRB-", "7": "-RRB-", "8": ".", "9": ":", "10": "ADD", "11": "AFX", "12": "CC", "13": "CD", "14": "DT", "15": "EX", "16": "FW", "17": "HYPH", "18": "IN", "19": "JJ", "20": "JJR", 
"21": "JJS", "22": "LS", "23": "MD", "24": "NFP", "25": "NN", "26": "NNP", "27": "NNPS", "28": "NNS", "29": "PDT", "30": "POS", "31": "PRP", "32": "PRP$", "33": "RB", "34": "RBR", "35": "RBS", "36": "RP", "37": "SYM", "38": "TO", "39": "UH", "40": "VB", "41": "VBD", "42": "VBG", "43": "VBN", "44": "VBP", "45": "VBZ", "46": "VERB", "47": "WDT", "48": "WP", "49": "WP$", "50": "WRB"}}}}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 174173192, "num_examples": 10539}, {"name": "validation", "num_bytes": 24264804, "num_examples": 1370}, {"name": "test", "num_bytes": 18254144, "num_examples": 1200}], "download_size": 193644139, "dataset_size": 216692140}]} | 2024-01-18T09:34:57+00:00 | [] | [
"ar",
"en",
"zh"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #task_ids-coreference-resolution #task_ids-parsing #task_ids-lemmatization #task_ids-word-sense-disambiguation #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Arabic #language-English #language-Chinese #license-cc-by-nc-nd-4.0 #semantic-role-labeling #region-us
|
# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: CoNLL-2012 Shared Task, Author's page
- Repository: Mendeley
- Paper: Towards Robust Linguistic Analysis using OntoNotes
- Leaderboard:
- Point of Contact:
### Dataset Summary
OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.
It includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).
The source of data is the Mendeley Data repo ontonotes-conll2012, which seems to be as the same as the official data, but users should use this dataset on their own responsibility.
See also summaries from paperwithcode, OntoNotes 5.0 and CoNLL-2012
For more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above.
### Supported Tasks and Leaderboards
- Named Entity Recognition on Ontonotes v5 (English)
- Coreference Resolution on OntoNotes
- Semantic Role Labeling on OntoNotes
- ...
### Languages
V4 data for Arabic, Chinese, English, and V12 data for English
## Dataset Structure
### Data Instances
### Data Fields
- 'document_id' (*'str'*): This is a variation on the document filename
- 'sentences' (*'List[Dict]'*): All sentences of the same document are in a single example for the convenience of concatenating sentences.
Every element in 'sentences' is a *'Dict'* composed of the following data fields:
- 'part_id' (*'int'*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- 'words' (*'List[str]'*) :
- 'pos_tags' (*'List[ClassLabel]' or 'List[str]'*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag.
- tag set : Note tag sets below are founded by scanning all the data, and I found it seems to be a little bit different from officially stated tag sets. See official documents in the Mendeley repo
- arabic : str. Because pos tag in Arabic is compounded and complex, hard to represent it by 'ClassLabel'
- chinese v4 : 'datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])', where 'X' is for pos tag missing
- english v4 : 'datasets.ClassLabel(num_classes=49, names=["XX", "''", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])', where 'XX' is for pos tag missing, and '-LRB-'/'-RRB-' is "'('" / "')'".
- english v12 : 'datasets.ClassLabel(num_classes=51, names="english_v12": ["XX", "''", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])', where 'XX' is for pos tag missing, and '-LRB-'/'-RRB-' is "'('" / "')'".
- 'parse_tree' (*'Optional[str]'*) : An serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be 'None'.
- 'predicate_lemmas' (*'List[Optional[str]]'*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are 'None'.
- 'predicate_framenet_ids' (*'List[Optional[int]]'*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or 'None'.
- 'word_senses' (*'List[Optional[float]]'*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1.
- 'speaker' (*'Optional[str]'*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be 'None'.
- 'named_entities' (*'List[ClassLabel]'*) : The BIO tags for named entities in the sentence.
- tag set : 'datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])'
- 'srl_frames' (*'List[{"word":str, "frames":List[str]}]'*) : A dictionary keyed by the verb in the sentence for the given Propbank frame labels, in a BIO format.
- 'coref spans' (*'List[List[int]]'*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.
### Data Splits
Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @richarddwang for adding this dataset. | [
"# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: CoNLL-2012 Shared Task, Author's page\n- Repository: Mendeley\n- Paper: Towards Robust Linguistic Analysis using OntoNotes\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nOntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,\nmultilingual corpus manually annotated with syntactic, semantic and discourse information.\n\nThis dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.\nIt includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).\n\nThe source of data is the Mendeley Data repo ontonotes-conll2012, which seems to be as the same as the official data, but users should use this dataset on their own responsibility.\n\nSee also summaries from paperwithcode, OntoNotes 5.0 and CoNLL-2012\n\nFor more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above.",
"### Supported Tasks and Leaderboards\n\n- Named Entity Recognition on Ontonotes v5 (English)\n- Coreference Resolution on OntoNotes\n- Semantic Role Labeling on OntoNotes\n- ...",
"### Languages\n\nV4 data for Arabic, Chinese, English, and V12 data for English",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'document_id' (*'str'*): This is a variation on the document filename\n- 'sentences' (*'List[Dict]'*): All sentences of the same document are in a single example for the convenience of concatenating sentences.\n\nEvery element in 'sentences' is a *'Dict'* composed of the following data fields:\n- 'part_id' (*'int'*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.\n- 'words' (*'List[str]'*) :\n- 'pos_tags' (*'List[ClassLabel]' or 'List[str]'*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag.\n - tag set : Note tag sets below are founded by scanning all the data, and I found it seems to be a little bit different from officially stated tag sets. See official documents in the Mendeley repo \n - arabic : str. Because pos tag in Arabic is compounded and complex, hard to represent it by 'ClassLabel'\n - chinese v4 : 'datasets.ClassLabel(num_classes=36, names=[\"X\", \"AD\", \"AS\", \"BA\", \"CC\", \"CD\", \"CS\", \"DEC\", \"DEG\", \"DER\", \"DEV\", \"DT\", \"ETC\", \"FW\", \"IJ\", \"INF\", \"JJ\", \"LB\", \"LC\", \"M\", \"MSP\", \"NN\", \"NR\", \"NT\", \"OD\", \"ON\", \"P\", \"PN\", \"PU\", \"SB\", \"SP\", \"URL\", \"VA\", \"VC\", \"VE\", \"VV\",])', where 'X' is for pos tag missing\n - english v4 : 'datasets.ClassLabel(num_classes=49, names=[\"XX\", \"''\", \"$\", \"''\", \",\", \"-LRB-\", \"-RRB-\", \".\", \":\", \"ADD\", \"AFX\", \"CC\", \"CD\", \"DT\", \"EX\", \"FW\", \"HYPH\", \"IN\", \"JJ\", \"JJR\", \"JJS\", \"LS\", \"MD\", \"NFP\", \"NN\", \"NNP\", \"NNPS\", \"NNS\", \"PDT\", \"POS\", \"PRP\", \"PRP$\", \"RB\", \"RBR\", \"RBS\", \"RP\", \"SYM\", \"TO\", \"UH\", \"VB\", \"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\", \"WDT\", \"WP\", \"WP$\", \"WRB\",])', where 'XX' is for pos tag missing, and '-LRB-'/'-RRB-' is \"'('\" / \"')'\".\n - english v12 : 'datasets.ClassLabel(num_classes=51, names=\"english_v12\": [\"XX\", \"''\", \"$\", \"''\", \"*\", \",\", \"-LRB-\", \"-RRB-\", \".\", \":\", \"ADD\", \"AFX\", \"CC\", \"CD\", \"DT\", \"EX\", \"FW\", \"HYPH\", \"IN\", \"JJ\", \"JJR\", \"JJS\", \"LS\", \"MD\", \"NFP\", \"NN\", \"NNP\", \"NNPS\", \"NNS\", \"PDT\", \"POS\", \"PRP\", \"PRP$\", \"RB\", \"RBR\", \"RBS\", \"RP\", \"SYM\", \"TO\", \"UH\", \"VB\", \"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\", \"VERB\", \"WDT\", \"WP\", \"WP$\", \"WRB\",])', where 'XX' is for pos tag missing, and '-LRB-'/'-RRB-' is \"'('\" / \"')'\".\n- 'parse_tree' (*'Optional[str]'*) : An serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be 'None'.\n- 'predicate_lemmas' (*'List[Optional[str]]'*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are 'None'.\n- 'predicate_framenet_ids' (*'List[Optional[int]]'*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or 'None'.\n- 'word_senses' (*'List[Optional[float]]'*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1.\n- 'speaker' (*'Optional[str]'*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. 
When it is not available, it will be 'None'.\n- 'named_entities' (*'List[ClassLabel]'*) : The BIO tags for named entities in the sentence. \n - tag set : 'datasets.ClassLabel(num_classes=37, names=[\"O\", \"B-PERSON\", \"I-PERSON\", \"B-NORP\", \"I-NORP\", \"B-FAC\", \"I-FAC\", \"B-ORG\", \"I-ORG\", \"B-GPE\", \"I-GPE\", \"B-LOC\", \"I-LOC\", \"B-PRODUCT\", \"I-PRODUCT\", \"B-DATE\", \"I-DATE\", \"B-TIME\", \"I-TIME\", \"B-PERCENT\", \"I-PERCENT\", \"B-MONEY\", \"I-MONEY\", \"B-QUANTITY\", \"I-QUANTITY\", \"B-ORDINAL\", \"I-ORDINAL\", \"B-CARDINAL\", \"I-CARDINAL\", \"B-EVENT\", \"I-EVENT\", \"B-WORK_OF_ART\", \"I-WORK_OF_ART\", \"B-LAW\", \"I-LAW\", \"B-LANGUAGE\", \"I-LANGUAGE\",])'\n- 'srl_frames' (*'List[{\"word\":str, \"frames\":List[str]}]'*) : A dictionary keyed by the verb in the sentence for the given Propbank frame labels, in a BIO format.\n- 'coref spans' (*'List[List[int]]'*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.",
"### Data Splits\n\nEach dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @richarddwang for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #task_ids-coreference-resolution #task_ids-parsing #task_ids-lemmatization #task_ids-word-sense-disambiguation #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Arabic #language-English #language-Chinese #license-cc-by-nc-nd-4.0 #semantic-role-labeling #region-us \n",
"# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: CoNLL-2012 Shared Task, Author's page\n- Repository: Mendeley\n- Paper: Towards Robust Linguistic Analysis using OntoNotes\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nOntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,\nmultilingual corpus manually annotated with syntactic, semantic and discourse information.\n\nThis dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.\nIt includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).\n\nThe source of data is the Mendeley Data repo ontonotes-conll2012, which seems to be as the same as the official data, but users should use this dataset on their own responsibility.\n\nSee also summaries from paperwithcode, OntoNotes 5.0 and CoNLL-2012\n\nFor more detailed info of the dataset like annotation, tag set, etc., you can refer to the documents in the Mendeley repo mentioned above.",
"### Supported Tasks and Leaderboards\n\n- Named Entity Recognition on Ontonotes v5 (English)\n- Coreference Resolution on OntoNotes\n- Semantic Role Labeling on OntoNotes\n- ...",
"### Languages\n\nV4 data for Arabic, Chinese, English, and V12 data for English",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'document_id' (*'str'*): This is a variation on the document filename\n- 'sentences' (*'List[Dict]'*): All sentences of the same document are in a single example for the convenience of concatenating sentences.\n\nEvery element in 'sentences' is a *'Dict'* composed of the following data fields:\n- 'part_id' (*'int'*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.\n- 'words' (*'List[str]'*) :\n- 'pos_tags' (*'List[ClassLabel]' or 'List[str]'*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with a XX tag. The verb is marked with just a VERB tag.\n - tag set : Note tag sets below are founded by scanning all the data, and I found it seems to be a little bit different from officially stated tag sets. See official documents in the Mendeley repo \n - arabic : str. Because pos tag in Arabic is compounded and complex, hard to represent it by 'ClassLabel'\n - chinese v4 : 'datasets.ClassLabel(num_classes=36, names=[\"X\", \"AD\", \"AS\", \"BA\", \"CC\", \"CD\", \"CS\", \"DEC\", \"DEG\", \"DER\", \"DEV\", \"DT\", \"ETC\", \"FW\", \"IJ\", \"INF\", \"JJ\", \"LB\", \"LC\", \"M\", \"MSP\", \"NN\", \"NR\", \"NT\", \"OD\", \"ON\", \"P\", \"PN\", \"PU\", \"SB\", \"SP\", \"URL\", \"VA\", \"VC\", \"VE\", \"VV\",])', where 'X' is for pos tag missing\n - english v4 : 'datasets.ClassLabel(num_classes=49, names=[\"XX\", \"''\", \"$\", \"''\", \",\", \"-LRB-\", \"-RRB-\", \".\", \":\", \"ADD\", \"AFX\", \"CC\", \"CD\", \"DT\", \"EX\", \"FW\", \"HYPH\", \"IN\", \"JJ\", \"JJR\", \"JJS\", \"LS\", \"MD\", \"NFP\", \"NN\", \"NNP\", \"NNPS\", \"NNS\", \"PDT\", \"POS\", \"PRP\", \"PRP$\", \"RB\", \"RBR\", \"RBS\", \"RP\", \"SYM\", \"TO\", \"UH\", \"VB\", \"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\", \"WDT\", \"WP\", \"WP$\", \"WRB\",])', where 'XX' is for pos tag missing, and '-LRB-'/'-RRB-' is \"'('\" / \"')'\".\n - english v12 : 'datasets.ClassLabel(num_classes=51, names=\"english_v12\": [\"XX\", \"''\", \"$\", \"''\", \"*\", \",\", \"-LRB-\", \"-RRB-\", \".\", \":\", \"ADD\", \"AFX\", \"CC\", \"CD\", \"DT\", \"EX\", \"FW\", \"HYPH\", \"IN\", \"JJ\", \"JJR\", \"JJS\", \"LS\", \"MD\", \"NFP\", \"NN\", \"NNP\", \"NNPS\", \"NNS\", \"PDT\", \"POS\", \"PRP\", \"PRP$\", \"RB\", \"RBR\", \"RBS\", \"RP\", \"SYM\", \"TO\", \"UH\", \"VB\", \"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\", \"VERB\", \"WDT\", \"WP\", \"WP$\", \"WRB\",])', where 'XX' is for pos tag missing, and '-LRB-'/'-RRB-' is \"'('\" / \"')'\".\n- 'parse_tree' (*'Optional[str]'*) : An serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be 'None'.\n- 'predicate_lemmas' (*'List[Optional[str]]'*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are 'None'.\n- 'predicate_framenet_ids' (*'List[Optional[int]]'*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or 'None'.\n- 'word_senses' (*'List[Optional[float]]'*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1.\n- 'speaker' (*'Optional[str]'*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. 
When it is not available, it will be 'None'.\n- 'named_entities' (*'List[ClassLabel]'*) : The BIO tags for named entities in the sentence. \n - tag set : 'datasets.ClassLabel(num_classes=37, names=[\"O\", \"B-PERSON\", \"I-PERSON\", \"B-NORP\", \"I-NORP\", \"B-FAC\", \"I-FAC\", \"B-ORG\", \"I-ORG\", \"B-GPE\", \"I-GPE\", \"B-LOC\", \"I-LOC\", \"B-PRODUCT\", \"I-PRODUCT\", \"B-DATE\", \"I-DATE\", \"B-TIME\", \"I-TIME\", \"B-PERCENT\", \"I-PERCENT\", \"B-MONEY\", \"I-MONEY\", \"B-QUANTITY\", \"I-QUANTITY\", \"B-ORDINAL\", \"I-ORDINAL\", \"B-CARDINAL\", \"I-CARDINAL\", \"B-EVENT\", \"I-EVENT\", \"B-WORK_OF_ART\", \"I-WORK_OF_ART\", \"B-LAW\", \"I-LAW\", \"B-LANGUAGE\", \"I-LANGUAGE\",])'\n- 'srl_frames' (*'List[{\"word\":str, \"frames\":List[str]}]'*) : A dictionary keyed by the verb in the sentence for the given Propbank frame labels, in a BIO format.\n- 'coref spans' (*'List[List[int]]'*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.",
"### Data Splits\n\nEach dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @richarddwang for adding this dataset."
] |
80ce985b32bd618df18f86436893249c60add630 | # AutoNLP Dataset for project: tweet-sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project tweet-sentiment.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "I am going to see how long I can do this for.",
"target": 8
},
{
"text": "@anitabora yeah, right. What if our politicians start using uploading their pics, lots of inside sto[...]",
"target": 8
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=13, names=['anger', 'boredom', 'empty', 'enthusiasm', 'fun', 'happiness', 'hate', 'love', 'neutral', 'relief', 'sadness', 'surprise', 'worry'], id=None)"
}
```
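Because `target` is a `ClassLabel`, the integer values can be mapped back to their emotion names. The snippet below is a small illustrative sketch that only uses the class names listed above; it does not assume anything else about the (private) AutoNLP data repo.

```python
from datasets import ClassLabel

# Label names exactly as listed in the dataset fields above.
sentiment = ClassLabel(names=[
    "anger", "boredom", "empty", "enthusiasm", "fun", "happiness", "hate",
    "love", "neutral", "relief", "sadness", "surprise", "worry",
])

# The sample instances above both have target = 8.
print(sentiment.int2str(8))        # -> "neutral"
print(sentiment.str2int("worry"))  # -> 12
```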
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 31995 |
| valid | 8005 |
| victor/autonlp-data-tweet-sentiment | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-15T11:10:29+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T09:03:17+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoNLP Dataset for project: tweet-sentiment
============================================
Table of content
----------------
* Dataset Description
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Descritpion
-------------------
This dataset has been automatically processed by AutoNLP for project tweet-sentiment.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
a6f9aa7bda62c328bd642d32316c63e3387210ec | # BWNS: The Baha'i World News Service dataset.
BWNS articles from 2000 to 2022.
| Dayyan/bwns | [
"region:us"
] | 2022-03-15T19:45:05+00:00 | {} | 2022-03-17T14:41:53+00:00 | [] | [] | TAGS
#region-us
| # BWNS: The Baha'i World News Service dataset.
BWNS articles from 2000 to 2022.
| [
"# BWNS: The Baha'i World News Service dataset.\n\nBWNS articles from 2000 to 2022."
] | [
"TAGS\n#region-us \n",
"# BWNS: The Baha'i World News Service dataset.\n\nBWNS articles from 2000 to 2022."
] |
689f949a36ec83a2a6f14e1fc4a52cf22a704d56 | # DISCO: Diachronic Spanish Sonnet Corpus
[](https://zenodo.org/badge/latestdoi/103841064)
The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets in Spanish in CSV format, written between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age and text metadata.
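For local inspection, a small pandas sketch like the one below could be used. Note that the CSV file name used here is a hypothetical placeholder, and the column names are only inferred from the description above, not taken from the repository.

```python
import pandas as pd

# Hypothetical file name: replace it with the actual CSV shipped in this repository.
df = pd.read_csv("disco_sonnets.csv")

# Columns are expected to cover title, author, age and text, per the description above.
print(df.columns.tolist())
print(df.head())
```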
<br><br>
| jorge-henao/disco_poetry_spanish | [
"region:us"
] | 2022-03-16T03:42:59+00:00 | {} | 2022-03-17T03:19:06+00:00 | [] | [] | TAGS
#region-us
| # DISCO: Diachronic Spanish Sonnet Corpus
 contains sonnets in Spanish in CSV, between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain text corpus v4 published on git URL It includes the title, author, age and text metadata.
<br><br>
| [
"# DISCO: Diachronic Spanish Sonnet Corpus\n contains sonnets in Spanish in CSV, between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones. \n\nThis is a CSV compilation taken from the plain text corpus v4 published on git URL It includes the title, author, age and text metadata.\n<br><br>"
] | [
"TAGS\n#region-us \n",
"# DISCO: Diachronic Spanish Sonnet Corpus\n contains sonnets in Spanish in CSV, between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones. \n\nThis is a CSV compilation taken from the plain text corpus v4 published on git URL It includes the title, author, age and text metadata.\n<br><br>"
] |
fbeac939f336b47d75f06167cf339f6706fbafdc |
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [enwiki_el](https://github.com/GaaH/enwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://[email protected])
### Dataset Summary
It is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.
### Languages
- English
## Dataset Structure
```
{
"title": "Title of the page",
"qid": "QID of the corresponding Wikidata entity",
"words": ["tokens"],
"wikipedia": ["Wikipedia description of each entity"],
"labels": ["NER labels"],
"titles": ["Wikipedia title of each entity"],
"qids": ["QID of each entity"],
}
```
The `words` field contains the article’s text split on white-space. The other fields are lists with the same length as `words` and contain data only when the respective token in `words` is the __start of an entity__. For instance, if the _i-th_ token in `words` is an entity, then the _i-th_ element of `wikipedia` contains a description, extracted from Wikipedia, of this entity. The same applies for the other fields. If the entity spans multiple words, then only the index of the first word contains data.
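Given this alignment, entity mentions can be reconstructed by walking the `labels` field, whose IOB convention is spelled out in the next paragraph. The function below is a minimal illustrative sketch (not part of the original card); it only assumes the field names shown in the structure above.

```python
def extract_entities(example):
    """Group tokens into (mention, wikipedia_title, qid) triples using the IOB labels.

    `example` is assumed to be a single record with the fields described above.
    """
    entities = []
    tokens, title, qid = [], None, None

    for word, label, ent_title, ent_qid in zip(
        example["words"], example["labels"], example["titles"], example["qids"]
    ):
        if label == "B":                    # start of a new entity
            if tokens:
                entities.append((" ".join(tokens), title, qid))
            tokens, title, qid = [word], ent_title, ent_qid
        elif label == "I":                  # inside an entity
            if tokens:
                tokens.append(word)
            else:                           # tolerate entities that do not open with "B"
                tokens, title, qid = [word], ent_title, ent_qid
        else:                               # "O": outside any entity
            if tokens:
                entities.append((" ".join(tokens), title, qid))
            tokens, title, qid = [], None, None

    if tokens:
        entities.append((" ".join(tokens), title, qid))
    return entities
```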
The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`. | gcaillaut/enwiki_el | [
"task_categories:other",
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:wtfpl",
"region:us"
] | 2022-03-16T10:16:09+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": [], "language": ["en-EN"], "license": ["wtfpl"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "test"} | 2022-07-04T11:36:35+00:00 | [] | [
"en-EN"
] | TAGS
#task_categories-other #annotations_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #license-wtfpl #region-us
|
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: enwiki_el
- Point of Contact: Gaëtan Caillaut
### Dataset Summary
It is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.
### Languages
- English
## Dataset Structure
The 'words' field contains the article’s text splitted on white-spaces. The other fields are list with same length as 'words' and contains data only when the respective token in 'words' is the __start of an entity__. For instance, if the _i-th_ token in 'words' is an entity, then the _i-th_ element of 'wikipedia' contains a description, extracted from Wikipedia, of this entity. The same applies for the other fields. If the entity spans multiple words, then only the index of the first words contains data.
The only exception is the 'labels' field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is '"O"'; if it is the first word of a multi-word entity, the label is '"B"'; otherwise the label is '"I"'. | [
"# Dataset Card for frwiki_good_pages_el",
"## Dataset Description\n\n- Repository: enwiki_el\n- Point of Contact: Gaëtan Caillaut",
"### Dataset Summary\n\nIt is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.",
"### Languages\n\n- English",
"## Dataset Structure\n\n\n\nThe 'words' field contains the article’s text splitted on white-spaces. The other fields are list with same length as 'words' and contains data only when the respective token in 'words' is the __start of an entity__. For instance, if the _i-th_ token in 'words' is an entity, then the _i-th_ element of 'wikipedia' contains a description, extracted from Wikipedia, of this entity. The same applies for the other fields. If the entity spans multiple words, then only the index of the first words contains data.\n\nThe only exception is the 'labels' field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is '\"O\"'; if it is the first word of a multi-word entity, the label is '\"B\"'; otherwise the label is '\"I\"'."
] | [
"TAGS\n#task_categories-other #annotations_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #license-wtfpl #region-us \n",
"# Dataset Card for frwiki_good_pages_el",
"## Dataset Description\n\n- Repository: enwiki_el\n- Point of Contact: Gaëtan Caillaut",
"### Dataset Summary\n\nIt is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.",
"### Languages\n\n- English",
"## Dataset Structure\n\n\n\nThe 'words' field contains the article’s text splitted on white-spaces. The other fields are list with same length as 'words' and contains data only when the respective token in 'words' is the __start of an entity__. For instance, if the _i-th_ token in 'words' is an entity, then the _i-th_ element of 'wikipedia' contains a description, extracted from Wikipedia, of this entity. The same applies for the other fields. If the entity spans multiple words, then only the index of the first words contains data.\n\nThe only exception is the 'labels' field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is '\"O\"'; if it is the first word of a multi-word entity, the label is '\"B\"'; otherwise the label is '\"I\"'."
] |