| sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| stringlengths 40-40 | stringlengths 1-13.4M | stringlengths 2-117 | sequencelengths 1-7.91k | stringlengths 25-25 | stringlengths 2-875k | stringlengths 25-25 | sequencelengths 0-25 | sequencelengths 0-7.91k | stringlengths 17-159k | stringlengths 1-447k | sequencelengths 0-352 | sequencelengths 1-353 |
db42e5b3fc768e45fb1376dce4367f8f692143cf |
# Dataset Card for Monolingual-Quechua-IIC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://llamacha.pe](https://llamacha.pe)
- **Paper:** [Introducing QuBERT: A Large Monolingual Corpus and BERT Model for
Southern Quechua](https://aclanthology.org/2022.deeplo-1.1.pdf)
- **Point of Contact:** [Rodolfo Zevallos](mailto:[email protected])
- **Size of downloaded dataset files:** 373.28 MB
### Dataset Summary
We present Monolingual-Quechua-IIC, a monolingual corpus of Southern Quechua that can be used to build Transformer-based language models. The corpus also includes the Wiki and OSCAR corpora. We used this corpus to build Llama-RoBERTa-Quechua, the first Transformer-based language model for Southern Quechua.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Southern Quechua
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Apache-2.0
### Citation Information
```
@inproceedings{zevallos2022introducing,
title={Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua},
author={Zevallos, Rodolfo and Ortega, John and Chen, William and Castro, Richard and Bel, Nuria and Toshio, Cesar and Venturas, Renzo and Aradiel, Hilario and Melgarejo, Nelsi},
booktitle={Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing},
pages={1--13},
year={2022}
}
```
### Contributions
Thanks to [@rjzevallos](https://github.com/rjzevallos) for adding this dataset.
| Llamacha/monolingual-quechua-iic | [
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<5M",
"source_datasets:original",
"language:qu",
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["qu"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<5M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"]} | 2022-10-02T13:40:00+00:00 | [] | [
"qu"
] |
b947ddc419921265fe8ba8aba625102e4efbe63b | #Title | LoganKells/amazon_product_reviews_video_games | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-07T01:42:37+00:00 | [] | [] | TAGS
#region-us
| #Title | [] | [
"TAGS\n#region-us \n"
] |
f70ba107784a66526d326cf1c82bb90bf6c8034a | https://mahoningmed.org/docs/123movies-watch-after-we-fell-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cinderella-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cryptozoo-2021full-movie-hd-free/
https://mahoningmed.org/docs/123movieswatch-breathless-2021hd-full-movie-online/
https://mahoningmed.org/docs/123movieswatch-firebird-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-paw-patrol-the-movie-2021-full-hd-movie-online-free/
https://mahoningmed.org/docs/atch-sweet-girl-2021free-hd-full-movie-online/
https://mahoningmed.org/docs/123movies-watch-hes-all-that-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-im-your-man-2020hd-full-movie-online-free/
https://mahoningmed.org/docs/watchcrazy-fist-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watchpaw-patrol-the-movie-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-black-widow-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/full-watch-dont-breathe-2-2021-hd-movie-online-free/
https://mahoningmed.org/docs/watchthe-tomorrow-war-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-jurassic-hunt-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-after-we-fell-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/23movies-watch-free-guy-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-candyman-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-the-night-house-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watchsas-red-notice-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchshang-chi-and-the-legend-of-the-ten-rings-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-luca-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-escape-room-tournament-of-champions-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watchnarco-sub-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-malignant-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-mortal-kombat-legends-battle-of-the-realms-2021full-online-movie-free-hd/
https://mahoningmed.org/docs/watch-space-jam-a-new-legacy-2021-hd-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cinderella-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/watcheggs-run-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-f9-2021full-online-movie-free-hd-1080p/
https://mahoningmed.org/docs/123movies-watch-jurassic-hunt-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/123movies-watch-vacation-friends-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-silent-night-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-card-counter-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-silent-night-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/123movies-watch-jolt-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-last-mercenary-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-beckett-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-rogue-hostage-2018-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-boss-baby-family-business-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cruella-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-the-manson-brothers-midnight-zombie-massacre-2021full-hd-movie-online-free-123movies/
https://mahoningmed.org/docs/watchthe-suicide-squad-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-jungle-cruise-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watch-after-we-fell-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-the-last-warrior-root-of-evil-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-kate-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/23movieswatch-wrath-of-man-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/watchthe-forever-purge-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchafterlife-of-the-party-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchthe-conjuring-the-devil-made-me-do-it-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchold-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchinsensate-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchdreamcatcher-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchthe-kissing-booth-3-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchjj-plus-e-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchyoung-sister-in-law-3-2019-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchjurassic-world-fallen-kingdom-2018-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movieswatch-danny-boy-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/watchsnake-eyes-g-i-joe-origins-2021-hd-full-movie-online-for-free/ | MBAH/MOVIESON | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-17T23:04:34+00:00 | [] | [] | TAGS
#region-us
| [] | [] |
77ec81315e332a2c07f8aa0edf6ba4f90883332f | # <a href='https://arxiv.org/pdf/1906.01155.pdf'>ShEMO: a large-scale validated database for Persian speech emotion detection</a><br>
## Abstract
<div align="justify"> This paper introduces a large-scale, validated database for Persian called Sharif Emotional Speech Database (ShEMO). The database includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online radio plays. The ShEMO covers speech samples of 87 native-Persian speakers for five basic emotions including <i>anger</i>, <i>fear</i>, <i>happiness</i>, <i>sadness</i> and <i>surprise</i>, as well as neutral state. Twelve annotators label the underlying emotional state of utterances and majority voting is used to decide on the final labels. According to the kappa measure,
the inter-annotator agreement is 64%, which is interpreted as "substantial agreement". We also present benchmark results based on common classification methods in the speech emotion detection task. According to the experiments, a support vector machine achieves the best results for both gender-independent (58.2%) and gender-dependent models (female=59.4%, male=57.6%). The ShEMO is available for academic purposes free of charge to provide a baseline for further research on Persian emotional speech.</div>
## Download Dataset
To download female utterances (zip file):
```bash
wget -O female.zip "https://www.dropbox.com/s/42okby6c40w3j2x/female.zip?dl=0"
```
To download male utterances (zip file):
```bash
wget -O male.zip "https://www.dropbox.com/s/5ebs8hq1zm0qkp6/male.zip?dl=0"
```
To download labels & transcripts (json file):
```bash
wget https://github.com/pariajm/sharif-emotional-speech-dataset/raw/master/shemo.json
```
## Models Trained or Fine-tuned on ShEMO
Credits to [Mehrdad Farahani](https://github.com/m3hrdadfi/soxan)
- [Speech emotion detection in Persian (fa) using wav2vec 2.0](https://huggingface.co/m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition)
- [Speech emotion detection in Persian (fa) using HuBERT](https://huggingface.co/m3hrdadfi/hubert-base-persian-speech-emotion-recognition)
- [Speech gender detection in Persian (fa) using HuBERT](https://huggingface.co/m3hrdadfi/hubert-base-persian-speech-gender-recognition)
- [Automatic speech recognition in Persian (fa) using XLSR-53](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo)
## Overview of ShEMO
Feature | Status
------------- | ----------
**access** | open source
**language** | Persian (fa)
**modality** | speech
**duration** | 3 hours and 25 minutes
**#utterances** | 3000
**#speakers** | 87 (31 females, 56 males)
**#emotions** | 5 basic emotions (anger, fear, happiness, sadness and surprise) and neutral state
**orthographic transcripts** | available
**phonetic transcripts** | available
Read our paper on <a href='https://link.springer.com/article/10.1007/s10579-018-9427-x'>Springer</a> or [arxiv](https://arxiv.org/pdf/1906.01155.pdf)
## Description of Filenames
The characters used in the filenames and their corresponding meaning:
- **A**: angry
- **F**: female speaker (if used at the beginning of the label e.g.`F14A09`) or fearful (if used in the middle of the label e.g. `M02F01`)
- **H** : happy
- **M** : male speaker
- **N** : neutral
- **S** : sad
- **W** : surprised
e.g. `F03S02` **F** means the speaker is **female**, **03** denotes **the speaker code**, **S** refers to the underlying emotion of the utterance which is **sadness**, **02** means this is the **second utterance for this speaker in sad emotion**.
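This naming scheme can be decoded mechanically. The sketch below is not part of the official release; the function name, the returned keys, and the assumption of two-digit speaker and utterance codes (consistent with every example shown here) are our own:

```python
import re

# Emotion letters as documented above. Note the ambiguity of "F":
# it means "female" at the start of an id and "fearful" in the middle.
EMOTIONS = {"A": "anger", "F": "fear", "H": "happiness",
            "N": "neutral", "S": "sadness", "W": "surprise"}
GENDERS = {"F": "female", "M": "male"}

def parse_shemo_filename(name):
    """Decode a ShEMO utterance id such as 'F03S02' into its parts."""
    m = re.fullmatch(r"([FM])(\d{2})([AFHNSW])(\d{2})", name)
    if m is None:
        raise ValueError(f"unexpected ShEMO filename: {name!r}")
    gender, speaker, emotion, index = m.groups()
    return {
        "speaker_id": gender + speaker,   # e.g. 'F03'
        "gender": GENDERS[gender],
        "emotion": EMOTIONS[emotion],
        "utterance_index": int(index),    # 'S02' -> 2nd sad utterance
    }

print(parse_shemo_filename("F03S02"))
# -> {'speaker_id': 'F03', 'gender': 'female', 'emotion': 'sadness', 'utterance_index': 2}
```

Applied to `M02F01`, the same function returns the male speaker `M02` with emotion `fear`, resolving the ambiguity of `F` described above by position.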
## Data Instances
Here is a sample of data instances:
```json
"F21N37": {
"speaker_id": "F21",
"gender": "female",
"emotion": "neutral",
"transcript": "مگه من به تو نگفته بودم که باید راجع به دورانت سکوت کنی؟",
"ipa": "mӕge mæn be to nægofte budӕm ke bɑyæd rɑdʒeʔ be dorɑnt sokut koni"
}
```
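Because `shemo.json` maps each utterance id to a flat record like the one above, basic corpus statistics take only a few lines of standard-library Python. This is a sketch: only the field names shown in the sample are assumed.

```python
import json
from collections import Counter

def summarize(shemo):
    """Count utterances per emotion and per gender in a ShEMO-style dict."""
    emotions = Counter(u["emotion"] for u in shemo.values())
    genders = Counter(u["gender"] for u in shemo.values())
    return emotions, genders

# Inline stand-in mirroring the record shown above; with the real file use:
#   with open("shemo.json", encoding="utf-8") as f:
#       shemo = json.load(f)
sample = {
    "F21N37": {"speaker_id": "F21", "gender": "female", "emotion": "neutral",
               "transcript": "...", "ipa": "..."},
    "M02F01": {"speaker_id": "M02", "gender": "male", "emotion": "fear",
               "transcript": "...", "ipa": "..."},
}
emotions, genders = summarize(sample)
print(emotions["neutral"], genders["male"])  # -> 1 1
```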
## Sharif Emotional Speech Dataset (ShEMO)
To download the paper, click <a href='https://arxiv.org/pdf/1906.01155.pdf'>here</a>
## Citation
If you use this dataset, please cite the following paper:
~~~~
@Article{MohamadNezami2019,
author = {Mohamad Nezami, Omid and Jamshid Lou, Paria and Karami, Mansoureh},
title = {ShEMO: a large-scale validated database for Persian speech emotion detection},
journal = {Language Resources and Evaluation},
year = {2019},
volume = {53},
number = {1},
pages = {1--16},
issn = {1574-0218},
doi = {10.1007/s10579-018-9427-x},
url = {https://doi.org/10.1007/s10579-018-9427-x}
}
~~~~
### Contact
Paria Jamshid Lou <[email protected]>
Omid Mohamad Nezami <[email protected]> | Mansooreh/sharif-emotional-speech-dataset | [
"arxiv:1906.01155",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-10-19T22:33:59+00:00 | [
"1906.01155"
] | [] | TAGS
#arxiv-1906.01155 #region-us
| [] | [] |
2b1de7c5e588ec70afa1e753394ae25531e0d182 | ## QA4PC Dataset (paper: Cross-Policy Compliance Detection via Question Answering)
### Train Sets
To create the training sets for the entailment and QA tasks, download the ShARC data from https://sharc-data.github.io/data.html. Then run the script _create_train_from_sharc.py_, providing the paths to the ShARC train and development sets.
### Evaluation Sets
#### Entailment Data
The following files contain the data for the entailment task. This includes the policy + questions, a scenario, and an answer (_Yes, No, Maybe_). Each data point also contains information from the ShARC dataset such as tree_id and source_url.
- __dev_entailment_qa4pc.json__
- __test_entailment_qa4pc.json__
#### QA Data
The following files contain the data for the QA task.
- __dev_sc_qa4pc.json__
- __test_sc_qa4pc.json__
The following file contains the expression tree data for the dev and test sets. Each tree includes a policy, a set of questions and a logical expression.
- __trees_dev_test_qa4pc.json__
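A quick sanity check on any of these splits is to tally the gold answer labels. The field names used below (`answer`, `tree_id`) are hypothetical; inspect the JSON keys in the released files before relying on them.

```python
import json
from collections import Counter

def answer_distribution(examples, answer_key="answer"):
    """Tally gold labels (Yes / No / Maybe) in an entailment split."""
    return Counter(ex[answer_key] for ex in examples)

# Hypothetical records standing in for dev_entailment_qa4pc.json; real use:
#   with open("dev_entailment_qa4pc.json") as f:
#       examples = json.load(f)
examples = [
    {"answer": "Yes", "tree_id": "t1"},
    {"answer": "Maybe", "tree_id": "t2"},
    {"answer": "Yes", "tree_id": "t3"},
]
print(answer_distribution(examples))
```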
| Marzipan/QA4PC | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-16T13:45:34+00:00 | [] | [] | TAGS
#region-us
| [] | [] |
83b690cb666c5a8869e7f213a877bbd24a642d7c | # Dataset Card for mlquestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/McGill-NLP/MLQuestions
- **Repository:** https://github.com/McGill-NLP/MLQuestions
- **Paper:** https://aclanthology.org/2021.emnlp-main.566.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Devang Kulshreshtha](mailto:[email protected])
### Dataset Summary
The MLQuestions dataset consists of questions from Google search queries and passages from Wikipedia pages related to the machine learning domain. The dataset was created to support research in domain adaptation of question generation and passage retrieval models.
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
We release development and test sets in which a typical data point comprises a passage, denoted by the `input_text` label, and a question, denoted by the `target_text` label.
An example from the MLQuestions test set looks as follows:
{
  "input_text": "Bayesian learning uses Bayes' theorem to determine the conditional probability of a hypotheses given some evidence or observations.",
  "target_text": "What is Bayesian learning in machine learning"
}
We also provide unsupervised questions and passages in two separate files - 'passages_unaligned.csv' and 'questions_unaligned.csv' with labels `input_text` and `target_text` respectively.
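Since each unaligned file is a CSV with a single labeled column, the standard-library `csv` module is enough to load it. This is a sketch; the file and column names follow the description above.

```python
import csv
import io

def load_column(fh, column):
    """Read one labeled column (e.g. 'input_text') from an unaligned CSV."""
    return [row[column] for row in csv.DictReader(fh)]

# Inline stand-in; with the released files use e.g.:
#   with open("passages_unaligned.csv", encoding="utf-8") as fh:
#       passages = load_column(fh, "input_text")
sample = io.StringIO(
    "input_text\n"
    '"Bayesian learning uses Bayes\' theorem to determine conditional probabilities."\n'
)
print(load_column(sample, "input_text"))
```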
## Additional Information
### Licensing Information
https://github.com/McGill-NLP/MLQuestions/blob/main/LICENSE.md
### Citation Information
If you find this useful in your research, please consider citing:
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
} | McGill-NLP/mlquestions | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-11T10:01:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for mlquestions
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: Devang Kulshreshtha
### Dataset Summary
The MLQuestions dataset consists of questions from Google search queries and passages from Wikipedia pages related to the machine learning domain. The dataset was created to support research in domain adaptation of question generation and passage retrieval models.
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
We release development and test sets where a typical data point comprises a passage, denoted by the 'input_text' label, and a question, denoted by the 'target_text' label.
An example from the MLQuestions test set looks as follows:
{
  "input_text": "Bayesian learning uses Bayes' theorem to determine the conditional probability of a hypotheses given some evidence or observations.",
  "target_text": "What is Bayesian learning in machine learning"
}
We also provide unsupervised questions and passages in two separate files - 'passages_unaligned.csv' and 'questions_unaligned.csv' with labels 'input_text' and 'target_text' respectively.
## Additional Information
### Licensing Information
URL
If you find this useful in your research, please consider citing:
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "URL
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
} | [
"# Dataset Card for mlquestions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Devang Kulshreshtha",
"### Dataset Summary\n\nThe MLQuestions dataset consists of questions from Google search queries and passages from Wikipedia pages related to Machine learning domain . The dataset was created to support research in domain adaptation of question generation and passage retrieval models.",
"### Languages\n\nThe text in the dataset is in English",
"## Dataset Structure",
"### Data Instances\n\nWe release development and test sets where a typical data point comprises a passage denoted by 'input_text' label and a question, with a 'target_text' label. \n\nAn example from the MLQuestions test set looks as follows:\n\n{\n 'input_text': 'Bayesian learning uses Bayes' theorem to determine the conditional probability of a hypotheses given some evidence or observations.'\n 'target_text': 'What is Bayesian learning in machine learning'\n}\n\nWe also provide unsupervised questions and passages in two separate files - 'passages_unaligned.csv' and 'questions_unaligned.csv' with labels 'input_text' and 'target_text' respectively.",
"## Additional Information",
"### Licensing Information\n\nURL\n\n\n\nIf you find this useful in your research, please consider citing:\n\n @inproceedings{kulshreshtha-etal-2021-back,\n title = \"Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval\",\n author = \"Kulshreshtha, Devang and\n Belfer, Robert and\n Serban, Iulian Vlad and\n Reddy, Siva\",\n booktitle = \"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing\",\n month = nov,\n year = \"2021\",\n address = \"Online and Punta Cana, Dominican Republic\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n pages = \"7064--7078\",\n abstract = \"In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.\",\n }"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for mlquestions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Devang Kulshreshtha",
"### Dataset Summary\n\nThe MLQuestions dataset consists of questions from Google search queries and passages from Wikipedia pages related to Machine learning domain . The dataset was created to support research in domain adaptation of question generation and passage retrieval models.",
"### Languages\n\nThe text in the dataset is in English",
"## Dataset Structure",
"### Data Instances\n\nWe release development and test sets where a typical data point comprises a passage denoted by 'input_text' label and a question, with a 'target_text' label. \n\nAn example from the MLQuestions test set looks as follows:\n\n{\n 'input_text': 'Bayesian learning uses Bayes' theorem to determine the conditional probability of a hypotheses given some evidence or observations.'\n 'target_text': 'What is Bayesian learning in machine learning'\n}\n\nWe also provide unsupervised questions and passages in two separate files - 'passages_unaligned.csv' and 'questions_unaligned.csv' with labels 'input_text' and 'target_text' respectively.",
"## Additional Information",
"### Licensing Information\n\nURL\n\n\n\nIf you find this useful in your research, please consider citing:\n\n @inproceedings{kulshreshtha-etal-2021-back,\n title = \"Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval\",\n author = \"Kulshreshtha, Devang and\n Belfer, Robert and\n Serban, Iulian Vlad and\n Reddy, Siva\",\n booktitle = \"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing\",\n month = nov,\n year = \"2021\",\n address = \"Online and Punta Cana, Dominican Republic\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n pages = \"7064--7078\",\n abstract = \"In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.\",\n }"
] |
33aa688c5b795d2d2dee21da2e46490f72e57ceb | My third Dataset
- for wolf classification | Mulin/my_third_dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-09-19T00:36:15+00:00 | [] | [] | TAGS
#region-us
| My third Dataset
- for wolf classification | [] | [
"TAGS\n#region-us \n"
] |
142f1238199fa4a7c76524de93744349092799d7 | # English-Tamil parallel Corpus prepared by the National Languages Processing Center, University of Moratuwa. The data has been cleaned and then aligned.
#En-Ta Glossary Line Count : 22477
#En-Ta Corpus Line Count : 8950
#Source: Data extracted from publicly available government resources such as annual reports, procurement reports, circulars and websites.
#Processing: Each Word/PDF file was converted to a text file, and Unicode errors were fixed using a custom tool. Then the Tamil and English files were manually sentence-aligned. All spelling and grammatical errors were manually fixed.
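The custom cleanup tool mentioned above is not public; purely as an illustration, one typical step in this kind of Unicode fixing is canonical (NFC) normalization:

```python
# Illustrative only: the card's custom tool is not public. NFC normalization
# collapses decomposed character sequences (base letter + combining mark)
# into single composed code points, a common step when cleaning extracted text.
import unicodedata

def clean_line(line: str) -> str:
    return unicodedata.normalize("NFC", line).strip()

print(clean_line("cafe\u0301 "))  # 'café' (combining acute merged into 'é')
```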
#If you use this dataset, kindly cite the following publication:
Fernando, A., Ranathunga, S., & Dias, G. (2020). Data Augmentation and Terminology Integration for Domain-Specific Sinhala-English-Tamil Statistical Machine Translation. arXiv preprint arXiv:2011.02821.
| NLPC-UOM/English-Tamil-Parallel-Corpus | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": [], "languages": ["si"], "licenses": ["mit"]} | 2022-05-20T02:28:51+00:00 | [] | [] | TAGS
#region-us
| # English-Tamil parallel Corpus prepared by the National Languages Processing Center, University of Moratuwa. The data has been cleaned and then aligned.
#En-Ta Glossary Line Count : 22477
#En-Ta Corpus Line Count : 8950
#Source: Data extracted from publicly available government resources such as annual reports, procurement reports, circulars and websites.
#Processing: Each Word/PDF file was converted to a text file, and Unicode errors were fixed using a custom tool. Then the Tamil and English files were manually sentence-aligned. All spelling and grammatical errors were manually fixed.
#If you use this dataset, kindly cite the following publication:
Fernando, A., Ranathunga, S., & Dias, G. (2020). Data Augmentation and Terminology Integration for Domain-Specific Sinhala-English-Tamil Statistical Machine Translation. arXiv preprint arXiv:2011.02821.
| [
"# English-Tamil parallel Corpus prepared by the National Languages Processing Center, University of Moratuwa. The data has been cleaned and then aligned."
] | [
"TAGS\n#region-us \n",
"# English-Tamil parallel Corpus prepared by the National Languages Processing Center, University of Moratuwa. The data has been cleaned and then aligned."
] |
a27f748842d87dec8a8e5d700d6bc106804a5ad9 | # Sinhala-POS-Data
POS tagged Sinhala text
news- verified- final level.txt file contains the first version of our annotated data. There are 253,636 words in it.
TagList.txt contains the tag list.
Tagging Guide.pdf contains a detailed description of the tags.
If you use this data set or the tag set, please cite one of these as appropriate:
Fernando, S., & Ranathunga, S. (2018, May). Evaluation of Different Classifiers for Sinhala POS Tagging. In 2018 Moratuwa Engineering Research Conference (MERCon) (pp. 96-101). IEEE.
Dilshani, N., Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2017). A Comprehensive Part of Speech (POS) Tag Set for Sinhala Language. The Third International Conference on Linguistics in Sri Lanka, ICLSL 2017. Department of Linguistics, University of Kelaniya, Sri Lanka.
Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2016, December). Comprehensive Part-Of-Speech Tag Set and SVM Based POS Tagger for Sinhala. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016) (pp. 173-182).
| NLPC-UOM/Sinhala-POS-Data | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": [], "languages": ["si"], "licenses": ["mit"]} | 2022-05-20T02:25:59+00:00 | [] | [] | TAGS
#region-us
| # Sinhala-POS-Data
POS tagged Sinhala text
news- verified- final URL file contains the first version of our annotated data. There are 253,636 words in it.
URL contains the tag list.
Tagging URL contains a detailed description of the tags.
If you use this data set or the tag set, please cite one of these as appropriate:
Fernando, S., & Ranathunga, S. (2018, May). Evaluation of Different Classifiers for Sinhala POS Tagging. In 2018 Moratuwa Engineering Research Conference (MERCon) (pp. 96-101). IEEE.
Dilshani, N., Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2017). A Comprehensive Part of Speech (POS) Tag Set for Sinhala Language. The Third International Conference on Linguistics in Sri Lanka, ICLSL 2017. Department of Linguistics, University of Kelaniya, Sri Lanka.
Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2016, December). Comprehensive Part-Of-Speech Tag Set and SVM Based POS Tagger for Sinhala. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016) (pp. 173-182).
| [
"# Sinhala-POS-Data\nPOS tagged Sinhala text\n\nnews- verified- final URL file contains the first version of our annotated data. There are 253636 word in it.\nURL contains the tag list.\nTagging URL contains a detailed description of the tags.\n\n\nIf you use this data set or the tag set, please cite one of these as apropriate:\n\nFernando, S., & Ranathunga, S. (2018, May). Evaluation of Different Classifiers for Sinhala POS Tagging. In 2018 Moratuwa Engineering Research Conference (MERCon) (pp. 96-101). IEEE.\n\n\nDilshani, N., Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2017). A Comprehensive Part of Speech (POS) Tag Set for Sinhala Language. The Third International Conference on Linguistics in Sri Lanka, ICLSL 2017. Department of Linguistics, University of Kelaniya, Sri Lanka.\n\n\nFernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2016, December). Comprehensive Part-Of-Speech Tag Set and SVM Based POS Tagger for Sinhala. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016) (pp. 173-182)."
] | [
"TAGS\n#region-us \n",
"# Sinhala-POS-Data\nPOS tagged Sinhala text\n\nnews- verified- final URL file contains the first version of our annotated data. There are 253636 word in it.\nURL contains the tag list.\nTagging URL contains a detailed description of the tags.\n\n\nIf you use this data set or the tag set, please cite one of these as apropriate:\n\nFernando, S., & Ranathunga, S. (2018, May). Evaluation of Different Classifiers for Sinhala POS Tagging. In 2018 Moratuwa Engineering Research Conference (MERCon) (pp. 96-101). IEEE.\n\n\nDilshani, N., Fernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2017). A Comprehensive Part of Speech (POS) Tag Set for Sinhala Language. The Third International Conference on Linguistics in Sri Lanka, ICLSL 2017. Department of Linguistics, University of Kelaniya, Sri Lanka.\n\n\nFernando, S., Ranathunga, S., Jayasena, S., & Dias, G. (2016, December). Comprehensive Part-Of-Speech Tag Set and SVM Based POS Tagger for Sinhala. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016) (pp. 173-182)."
] |
907b92006761244648449ffa55d88ce42942d793 | # Dataset Card for HuCOLA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuCOLA dataset](https://github.com/nytud/HuCOLA)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:[email protected])
### Dataset Summary
This is the dataset card for the Hungarian Corpus of Linguistic Acceptability (HuCOLA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](https://hulu.nlp.nytud.hu).
### Supported Tasks and Leaderboards
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a sentence and a label.
An example:
```
{"Sent_id": "dev_0",
"Sent": "A földek eláradtak.",
"Label": "0"}
```
### Data Fields
- Sent_id: unique id of the instances, an integer between 1 and 1000;
- Sent: a Hungarian sentence;
- label: '0' for wrong, '1' for good sentences.
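The example above is ordinary JSON; a toy schema check is shown below, where the field names and the sentence are taken from the card's own example rather than the released data files:

```python
# Parse one HuCOLA-style record; field names and the example sentence come
# from the card above, not from the released data files.
import json

record = json.loads(
    '{"Sent_id": "dev_0", "Sent": "A f\\u00f6ldek el\\u00e1radtak.", "Label": "0"}'
)
is_acceptable = record["Label"] == "1"
print(record["Sent_id"], "acceptable:", is_acceptable)
```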
### Data Splits
HuCOLA has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of sentences in the split | Proportion of the split |
|---------------|----------------------------------|-------------------------|
| train         | 7276                             | 80%                     |
| validation    | 900                              | 10%                     |
| test          | 900                              | 10%                     |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:[email protected]), or check [HuLU's website](https://hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment). The evaluation metric is the Matthews correlation coefficient.
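The Matthews correlation coefficient used for scoring can be computed directly from the binary confusion matrix; a minimal pure-Python sketch for 0/1 labels, shown with toy predictions rather than real model output:

```python
# Matthews correlation coefficient for binary labels (0/1); the labels below
# are toy values, not HuCOLA predictions.
import math

def matthews_corrcoef(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 1]))  # 0.0
```

By convention the sketch returns 0.0 when any confusion-matrix margin is empty (zero denominator), matching the usual library behaviour.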
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was collected by two human annotators from 3 main linguistic books on Hungarian language:
- Kiefer Ferenc (ed.) (1992), Strukturális magyar nyelvtan 1. Mondattan. Budapest, Akadémiai Kiadó.
- Alberti, Gábor and Laczkó, Tibor (eds) (2018), Syntax of Hungarian Nouns and Noun Phrases. I., II. Comprehensive grammar resources. Amsterdam University Press, Amsterdam.
- Katalin É. Kiss and Veronika Hegedűs (eds) (2021), Postpositions and Postpositional Phrases. Amsterdam: Amsterdam University Press.
The process of collecting sentences partly followed the one described in Warstadt et al. (2018). The guideline for our process is available in the repository of [HuCOLA](https://github.com/nytud/HuCOLA).
### Annotations
#### Annotation process
Each instance was annotated by 4 human annotators for its acceptability (see the annotation guidelines in the repository of [HuCOLA](https://github.com/nytud/HuCOLA)).
#### Who are the annotators?
The annotators were native Hungarian speakers (of various ages, from 20 to 67) without any linguistic background.
## Additional Information
### Licensing Information
HuCOLA is released under the CC-BY-SA 4.0 licence.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. | NYTK/HuCOLA | [
"task_ids:text-simplification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:hu",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["hu"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation", "text-simplification"], "pretty_name": "HuCOLA"} | 2024-01-12T15:51:28+00:00 | [] | [
"hu"
] | TAGS
#task_ids-text-simplification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Hungarian #license-cc-by-sa-4.0 #region-us
| Dataset Card for HuCOLA
=======================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
HuCOLA dataset
* Paper:
* Leaderboard:
* Point of Contact:
lnnoemi
### Dataset Summary
This is the dataset card for the Hungarian Corpus of Linguistic Acceptability (HuCOLA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.
### Supported Tasks and Leaderboards
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
Dataset Structure
-----------------
### Data Instances
For each instance, there is an id, a sentence and a label.
An example:
### Data Fields
* Sent\_id: unique id of the instances, an integer between 1 and 1000;
* Sent: a Hungarian sentence;
* label: '0' for wrong, '1' for good sentences.
### Data Splits
HuCOLA has 3 splits: *train*, *validation* and *test*.
Dataset split: train, Number of sentences in the split: 7276, Proportion of the split: 80%
Dataset split: validation, Number of sentences in the split: 900, Proportion of the split: 10%
Dataset split: test, Number of sentences in the split: 900, Proportion of the split: 10%
The test data is distributed without the labels. To evaluate your model, please contact us, or check HuLU's website for an automatic evaluation (this feature is under construction at the moment). The evaluation metric is the Matthews correlation coefficient.
Dataset Creation
----------------
### Source Data
#### Initial Data Collection and Normalization
The data was collected by two human annotators from 3 main linguistic books on Hungarian language:
* Kiefer Ferenc (ed.) (1992), Strukturális magyar nyelvtan 1. Mondattan. Budapest, Akadémiai Kiadó.
* Alberti, Gábor and Laczkó, Tibor (eds) (2018), Syntax of Hungarian Nouns and Noun Phrases. I., II. Comprehensive grammar resources. Amsterdam University Press, Amsterdam.
* Katalin É. Kiss and Veronika Hegedűs (eds) (2021), Postpositions and Postpositional Phrases. Amsterdam: Amsterdam University Press.
The process of collecting sentences partly followed the one described in Warstadt et al. (2018). The guideline for our process is available in the repository of HuCOLA.
### Annotations
#### Annotation process
Each instance was annotated by 4 human annotators for its acceptability (see the annotation guidelines in the repository of HuCOLA).
#### Who are the annotators?
The annotators were native Hungarian speakers (of various ages, from 20 to 67) without any linguistic background.
Additional Information
----------------------
### Licensing Information
HuCOLA is released under the CC-BY-SA 4.0 licence.
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
### Contributions
Thanks to lnnoemi for adding this dataset.
| [
"### Dataset Summary\n\n\nThis is the dataset card for the Hungarian Corpus of Linguistic Acceptability (HuCOLA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is aN id, a sentence and a label.\n\n\nAn example:",
"### Data Fields\n\n\n* Sent\\_id: unique id of the instances, an integer between 1 and 1000;\n* Sent: a Hungarian sentence;\n* label: '0' for wrong, '1' for good sentences.",
"### Data Splits\n\n\nHuCOLA has 3 splits: *train*, *validation* and *test*.\n\n\nDataset split: train, Number of sentences in the split: 7276, Proportion of the split: 80%\nDataset split: validation, Number of sentences in the split: 900, Proportion of the split: 10%\nDataset split: test, Number of sentences in the split: 900, Proportion of the split: 10%\n\n\nThe test data is distributed without the labels. To evaluate your model, please contact us, or check HuLU's website for an automatic evaluation (this feature is under construction at the moment). The evaluation metric is Matthew's correlation coefficient.\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was collected by two human annotators from 3 main linguistic books on Hungarian language:\n\n\n* Kiefer Ferenc (ed.) (1992), Strukturális magyar nyelvtan 1. Mondattan. Budapest, Akadémiai Kiadó.\n* Alberti, Gábor and Laczkó, Tibor (eds) (2018), Syntax of Hungarian Nouns and Noun Phrases. I., II. Comprehensive grammar resources. Amsterdam University Press, Amsterdam.\n* Katalin É. Kiss and Veronika Hegedűs (eds) (2021), Postpositions and Postpositional Phrases. Amsterdam: Amsterdam University Press.\n\n\nThe process of collecting sentences partly followed the one described in Warstadt et. al (2018). The guideline of our process is available in the repository of HuCOLA.",
"### Annotations",
"#### Annotation process\n\n\nEach instance was annotated by 4 human annotators for its acceptability (see the annotation guidelines in the repository of HuCOLA).",
"#### Who are the annotators?\n\n\nThe annotators were native Hungarian speakers (of various ages, from 20 to 67) without any linguistic backround.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nHuCOLA is released under the CC-BY-SA 4.0 licence.\n\n\nIf you use this resource or any part of its documentation, please refer to:\n\n\nLigeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis\nkiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)",
"### Contributions\n\n\nThanks to lnnoemi for adding this dataset."
] | [
"TAGS\n#task_ids-text-simplification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Hungarian #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nThis is the dataset card for the Hungarian Corpus of Linguistic Acceptability (HuCOLA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is aN id, a sentence and a label.\n\n\nAn example:",
"### Data Fields\n\n\n* Sent\\_id: unique id of the instances, an integer between 1 and 1000;\n* Sent: a Hungarian sentence;\n* label: '0' for wrong, '1' for good sentences.",
"### Data Splits\n\n\nHuCOLA has 3 splits: *train*, *validation* and *test*.\n\n\nDataset split: train, Number of sentences in the split: 7276, Proportion of the split: 80%\nDataset split: validation, Number of sentences in the split: 900, Proportion of the split: 10%\nDataset split: test, Number of sentences in the split: 900, Proportion of the split: 10%\n\n\nThe test data is distributed without the labels. To evaluate your model, please contact us, or check HuLU's website for an automatic evaluation (this feature is under construction at the moment). The evaluation metric is Matthew's correlation coefficient.\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was collected by two human annotators from 3 main linguistic books on Hungarian language:\n\n\n* Kiefer Ferenc (ed.) (1992), Strukturális magyar nyelvtan 1. Mondattan. Budapest, Akadémiai Kiadó.\n* Alberti, Gábor and Laczkó, Tibor (eds) (2018), Syntax of Hungarian Nouns and Noun Phrases. I., II. Comprehensive grammar resources. Amsterdam University Press, Amsterdam.\n* Katalin É. Kiss and Veronika Hegedűs (eds) (2021), Postpositions and Postpositional Phrases. Amsterdam: Amsterdam University Press.\n\n\nThe process of collecting sentences partly followed the one described in Warstadt et. al (2018). The guideline of our process is available in the repository of HuCOLA.",
"### Annotations",
"#### Annotation process\n\n\nEach instance was annotated by 4 human annotators for its acceptability (see the annotation guidelines in the repository of HuCOLA).",
"#### Who are the annotators?\n\n\nThe annotators were native Hungarian speakers (of various ages, from 20 to 67) without any linguistic background.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nHuCOLA is released under the CC-BY-SA 4.0 licence.\n\n\nIf you use this resource or any part of its documentation, please refer to:\n\n\nLigeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis\nkiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)",
"### Contributions\n\n\nThanks to lnnoemi for adding this dataset."
] |
61bc890939bb58017be92123cf85b9b556ce1bea |
# Dataset Card for HuCoPA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuCoPA dataset](https://github.com/nytud/HuCoPA)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:[email protected])
### Dataset Summary
This is the dataset card for the Hungarian Choice of Plausible Alternatives Corpus (HuCoPA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original English CoPA corpus (Roemmele et al., 2011).
### Supported Tasks and Leaderboards
'commonsense reasoning'
'question answering'
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a premise, a question ('cause' or 'effect'), two alternatives and a label (1 or 2).
An example:
```
{"idx": "1",
"question": "cause",
"label": "1",
"premise": "A testem árnyékot vetett a fűre.",
"choice1": "Felkelt a nap.",
"choice2": "A füvet lenyírták."}
```
### Data Fields
- id: unique id of the instances, an integer between 1 and 1000;
- question: "cause" or "effect". It indicates what kind of causal relation we are looking for: in the case of "cause" we search for the more plausible alternative that may be a cause of the premise. In the case of "effect" we are looking for a plausible result of the premise;
- premise: the premise, a sentence;
- choice1: the first alternative, a sentence;
- choice2: the second alternative, a sentence;
- label: the number of the more plausible alternative (1 or 2).
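Since instances are simple JSON records, evaluating a model on the labeled splits reduces to comparing predicted alternative numbers with the gold labels. A minimal sketch of such a comparison (the instance below is the card's own example; the prediction list is a hypothetical placeholder for a model's output):

```python
# Minimal sketch: accuracy scoring for HuCoPA-style instances.
# Field names follow the schema above; the prediction list stands in
# for a model's output and is purely illustrative.

def copa_accuracy(instances, predictions):
    """Fraction of instances whose predicted alternative (1 or 2)
    matches the gold label."""
    correct = sum(
        int(inst["label"]) == int(pred)
        for inst, pred in zip(instances, predictions)
    )
    return correct / len(instances)

instances = [
    {
        "idx": "1",
        "question": "cause",
        "label": "1",
        "premise": "A testem árnyékot vetett a fűre.",
        "choice1": "Felkelt a nap.",
        "choice2": "A füvet lenyírták.",
    }
]

print(copa_accuracy(instances, predictions=[1]))  # 1.0
```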
### Data Splits
HuCoPA has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train | 400 |
| validation | 100 |
| test | 500 |
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:[email protected]), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the CoPA corpus. Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The instances initially inherited their original labels from the CoPA dataset. Each instance was annotated by a human annotator. If the original label and the human annotator's label did not match, we manually curated the instance and assigned a final label to it. This step was necessary to ensure that the causal relationship had not been changed or lost during the translation process.
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
## Additional Information
The human performance on the test set is 96% (accuracy).
### Licensing Information
HuCoPA is released under the BSD 2-Clause License.
Copyright (c) 2010, University of Southern California
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
editors = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {431–446}
}
```
and to:
Roemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011.
```
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset.
| NYTK/HuCoPA | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu",
"license:bsd-2-clause",
"commonsense-reasoning",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found", "expert-generated"], "language": ["hu"], "license": ["bsd-2-clause"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["other"], "task_ids": [], "pretty_name": "HuCoPA", "tags": ["commonsense-reasoning"]} | 2023-03-27T08:54:02+00:00 | [] | [
"hu"
] | TAGS
#task_categories-other #annotations_creators-found #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|other #language-Hungarian #license-bsd-2-clause #commonsense-reasoning #region-us
| Dataset Card for HuCoPA
=======================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
HuCoPA dataset
* Paper:
* Leaderboard:
* Point of Contact:
lnnoemi
### Dataset Summary
This is the dataset card for the Hungarian Choice of Plausible Alternatives Corpus (HuCoPA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU. The corpus was created by translating and re-annotating the original English CoPA corpus (Roemmele et al., 2011).
### Supported Tasks and Leaderboards
'commonsense reasoning'
'question answering'
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
Dataset Structure
-----------------
### Data Instances
For each instance, there is an id, a premise, a question ('cause' or 'effect'), two alternatives and a label (1 or 2).
An example:
### Data Fields
* id: unique id of the instances, an integer between 1 and 1000;
* question: "cause" or "effect". It indicates what kind of causal relation we are looking for: in the case of "cause" we search for the more plausible alternative that may be a cause of the premise. In the case of "effect" we are looking for a plausible result of the premise;
* premise: the premise, a sentence;
* choice1: the first alternative, a sentence;
* choice2: the second alternative, a sentence;
* label: the number of the more plausible alternative (1 or 2).
### Data Splits
HuCoPA has 3 splits: *train*, *validation* and *test*.
The test data is distributed without the labels. To evaluate your model, please contact us, or check HuLU's website for an automatic evaluation (this feature is under construction at the moment).
Dataset Creation
----------------
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the CoPA corpus. Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The instances initially inherited their original labels from the CoPA dataset. Each instance was annotated by a human annotator. If the original label and the human annotator's label did not match, we manually curated the instance and assigned a final label to it. This step was necessary to ensure that the causal relationship had not been changed or lost during the translation process.
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
Additional Information
----------------------
The human performance on the test set is 96% (accuracy).
### Licensing Information
HuCoPA is released under the BSD 2-Clause License.
Copyright (c) 2010, University of Southern California
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis
kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
and to:
Roemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011.
### Contributions
Thanks to lnnoemi for adding this dataset.
| [
"### Dataset Summary\n\n\nThis is the dataset card for the Hungarian Choice of Plausible Alternatives Corpus (HuCoPA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU. The corpus was created by translating and re-annotating the original English CoPA corpus (Roemmele et al., 2011).",
"### Supported Tasks and Leaderboards\n\n\n'commonsense reasoning'\n'question answering'",
"### Languages\n\n\nThe BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is an id, a premise, a question ('cause' or 'effect'), two alternatives and a label (1 or 2).\n\n\nAn example:",
"### Data Fields\n\n\n* id: unique id of the instances, an integer between 1 and 1000;\n* question: \"cause\" or \"effect\". It indicates what kind of causal relation we are looking for: in the case of \"cause\" we search for the more plausible alternative that may be a cause of the premise. In the case of \"effect\" we are looking for a plausible result of the premise;\n* premise: the premise, a sentence;\n* choice1: the first alternative, a sentence;\n* choice2: the second alternative, a sentence;\n* label: the number of the more plausible alternative (1 or 2).",
"### Data Splits\n\n\nHuCoPA has 3 splits: *train*, *validation* and *test*.\n\n\n\nThe test data is distributed without the labels. To evaluate your model, please contact us, or check HuLU's website for an automatic evaluation (this feature is under construction at the moment).\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is a translation of the content of the CoPA corpus. Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.",
"### Annotations",
"#### Annotation process\n\n\nThe instances initially inherited their original labels from the CoPA dataset. Each instance was annotated by a human annotator. If the original label and the human annotator's label did not match, we manually curated the instance and assigned a final label to it. This step was necessary to ensure that the causal relationship had not been changed or lost during the translation process.",
"#### Who are the annotators?\n\n\nThe translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.\n\n\nAdditional Information\n----------------------\n\n\nThe human performance on the test set is 96% (accuracy).",
"### Licensing Information\n\n\nHuCoPA is released under the BSD 2-Clause License.\n\n\nCopyright (c) 2010, University of Southern California\nAll rights reserved.\n\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n\n* Redistributions of source code must retain the above copyright notice, this\nlist of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the above copyright notice,\nthis list of conditions and the following disclaimer in the documentation\nand/or other materials provided with the distribution.\n\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nIf you use this resource or any part of its documentation, please refer to:\n\n\nLigeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis\nkiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 
431–446.\n\n\nand to:\n\n\nRoemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011.",
"### Contributions\n\n\nThanks to lnnoemi for adding this dataset."
] | [
"TAGS\n#task_categories-other #annotations_creators-found #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|other #language-Hungarian #license-bsd-2-clause #commonsense-reasoning #region-us \n",
"### Dataset Summary\n\n\nThis is the dataset card for the Hungarian Choice of Plausible Alternatives Corpus (HuCoPA), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU. The corpus was created by translating and re-annotating the original English CoPA corpus (Roemmele et al., 2011).",
"### Supported Tasks and Leaderboards\n\n\n'commonsense reasoning'\n'question answering'",
"### Languages\n\n\nThe BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is an id, a premise, a question ('cause' or 'effect'), two alternatives and a label (1 or 2).\n\n\nAn example:",
"### Data Fields\n\n\n* id: unique id of the instances, an integer between 1 and 1000;\n* question: \"cause\" or \"effect\". It indicates what kind of causal relation we are looking for: in the case of \"cause\" we search for the more plausible alternative that may be a cause of the premise. In the case of \"effect\" we are looking for a plausible result of the premise;\n* premise: the premise, a sentence;\n* choice1: the first alternative, a sentence;\n* choice2: the second alternative, a sentence;\n* label: the number of the more plausible alternative (1 or 2).",
"### Data Splits\n\n\nHuCoPA has 3 splits: *train*, *validation* and *test*.\n\n\n\nThe test data is distributed without the labels. To evaluate your model, please contact us, or check HuLU's website for an automatic evaluation (this feature is under construction at the moment).\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data is a translation of the content of the CoPA corpus. Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.",
"### Annotations",
"#### Annotation process\n\n\nThe instances initially inherited their original labels from the CoPA dataset. Each instance was annotated by a human annotator. If the original label and the human annotator's label did not match, we manually curated the instance and assigned a final label to it. This step was necessary to ensure that the causal relationship had not been changed or lost during the translation process.",
"#### Who are the annotators?\n\n\nThe translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.\n\n\nAdditional Information\n----------------------\n\n\nThe human performance on the test set is 96% (accuracy).",
"### Licensing Information\n\n\nHuCoPA is released under the BSD 2-Clause License.\n\n\nCopyright (c) 2010, University of Southern California\nAll rights reserved.\n\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n\n* Redistributions of source code must retain the above copyright notice, this\nlist of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the above copyright notice,\nthis list of conditions and the following disclaimer in the documentation\nand/or other materials provided with the distribution.\n\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nIf you use this resource or any part of its documentation, please refer to:\n\n\nLigeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis\nkiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 
431–446.\n\n\nand to:\n\n\nRoemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011.",
"### Contributions\n\n\nThanks to lnnoemi for adding this dataset."
] |
b45ea6dbdec3b8692f02df89e8f943fa8d84e5bf |
# Dataset Card for HuRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuRC dataset](https://github.com/nytud/HuRC)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:[email protected])
### Dataset Summary
This is the dataset card for the Hungarian Corpus for Reading Comprehension with Commonsense Reasoning (HuRC), which is also part of the Hungarian Language Understanding Evaluation Benchmark Kit HuLU.
The dataset contains 80 614 instances. Each instance is composed of a lead, a passage and a cloze-style query with a masked entity. The task is to select the named entity that is being masked in the query.
The data was automatically collected from the online news of Népszabadság online (nol.hu).
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a lead, a passage, a query and a MASK.
An example:
```
{
"id": "1",
"lead": ["A Közigazgatási és Igazságügyi Minisztérium szerint a Bárka Színház esetében felmerült a felelőtlen gazdálkodás gyanúja, egyes értesülések szerint pedig ebben \"a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\""],
"passage": [
"A teátrumnak Navracsics Tibor közigazgatási és igazságügyi miniszterhez és Kocsis Máté VIII. kerületi polgármesterhez",
"reagálva a tárca azt írta, hogy a felelőtlen gazdálkodás gyanújában \"egyes értesülések szerint a színház igazgatójának és gazdasági vezetőjének felelőssége is felmerül\". A KIM \"éppen ezért nagyon várja az Állami Számvevőszék készülő jelentését, hogy tiszta képet kaphasson a színház működéséről\".",
"A minisztérium hangsúlyozta, hogy az elmúlt évben is mindent elkövetett azért, hogy a Bárka Színház \"valós, rangos művészeti térként\" működjön, és a továbbiakban is ez a szándéka, de jelenleg a társulat működtetését a minisztérium fenntartói támogatás formájában jogszerűen még nem tudja megoldani.",
"A teátrum az átadás-átvétel elhúzódásának okát keresve tette közzé nyílt levelét, amelyben elmaradó fizetésekre, előadásokra és bemutatókra hívta fel a figyelmet, és jelezte, hogy várja a helyzet megoldását.",
"A színház átadás-átvétele jelenleg zajlik, a folyamat végeztével a Bárka a józsefvárosi önkormányzattól állami tulajdonba, a tervek szerint a Közigazgatási és Igazságügyi Minisztérium fenntartásába kerül."
],
"query": "A KIM 2014-es költségvetésében szerepel a Bárka Színház, de amíg nem a minisztérium a [MASK] fenntartója, addig ez a költségvetési keret nem nyitható meg.",
"MASK": "Bárka",
}
```
### Data Fields
- id: unique id of the instances;
- lead: a short summary of the article as it was extracted from the source texts;
- passage: 3-6 paragraphs of texts as the body of the article;
- query: the last paragraph of an article, some kind of summary or conclusion, with a named entity masked (with [MASK]) in it;
- MASK: the masked named entity.
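To illustrate the task format, here is a minimal sketch of filling the [MASK] slot with a candidate entity and scoring predictions by exact match against the gold entity (field names follow the schema above; the instance is shortened from the card's example, and the candidate entity is illustrative):

```python
# Minimal sketch: cloze filling and exact-match scoring for
# HuRC-style instances. Field names follow the schema above.

def fill_query(instance, entity):
    """Substitute a candidate entity into the cloze-style query."""
    return instance["query"].replace("[MASK]", entity)

def exact_match(instances, predictions):
    """Fraction of predicted entities identical to the gold MASK."""
    hits = sum(inst["MASK"] == pred
               for inst, pred in zip(instances, predictions))
    return hits / len(instances)

instance = {
    "id": "1",
    "query": ("A KIM 2014-es költségvetésében szerepel a Bárka Színház, "
              "de amíg nem a minisztérium a [MASK] fenntartója, addig ez "
              "a költségvetési keret nem nyitható meg."),
    "MASK": "Bárka",
}

print(fill_query(instance, "Bárka"))
print(exact_match([instance], ["Bárka"]))  # 1.0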
### Data Splits
HuRC has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split | Proportion of the split
|---------------|----------------------------------| ---------|
| train | 64614 | 80%|
| validation | 8000 |10%|
| test | 8000 |10%|
The test data is distributed without the MASK fields. To evaluate your model, please [contact us](mailto:[email protected]), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
To produce the Hungarian material, we used the daily articles from Népszabadság Online that had both titles and summaries. From each such article, we selected 3-6 paragraphs from among those containing proper nouns in both the main part and the summary. We trained a NER model based on huBERT (Nemeskey 2021) to recognize proper nouns; NerKor (Simon and Vadász 2021) and Hugging Face's token-level classification library were used to fine-tune the model. Our model achieved an F-score of 90.18 on the test material. As a final step, we found pairs of proper names present in both the main article and the summary. Some articles contained more than one such pair, so we used those articles more than once. This resulted in a database of 88655 instances (from 49782 articles).
The quantitative properties of our corpus are as follows:

- Number of articles: 88655
- Number of different articles (type): 49782
- Tokens: 27703631
- Types: 1115.260
- Average text length (tokens): 249.42 (median: 229)
- Average question length (tokens): 63.07 (median: 56)

We manually refined the corpus.
One annotator per 100 units checked and validated the dataset, using a demo interface we built for this purpose. The automatic masking and the previous occurrence of the entity were checked. This resulted in a database of 80 614 validated entries.
## Additional Information
### Licensing Information
HuRC is released under the cc-by-4.0 license.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. (in press)
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. | NYTK/HuRC | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found", "expert-generated"], "language": ["hu"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "abstractive-qa"], "pretty_name": "HuRC"} | 2022-07-07T12:03:49+00:00 | [] | [
"hu"
] |
14215f45b4c630c03dbed18508b18176b691449b |
# Dataset Card for HuSST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuSST dataset](https://github.com/nytud/HuSST)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:[email protected])
### Dataset Summary
This is the dataset card for the Hungarian version of the Stanford Sentiment Treebank. This dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](https://hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original SST (Socher et al., 2013).
### Supported Tasks and Leaderboards
'sentiment classification'
'sentiment scoring'
### Language
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an id, a sentence and a sentiment label.
An example:
```
{
"Sent_id": "dev_0",
"Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).",
"Label": "neutral"
}
```
### Data Fields
- Sent_id: unique id of the instances;
- Sent: the sentence, translation of an instance of the SST dataset;
- Label: "negative", "neutral", or "positive".
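For model training, the three string labels typically need an integer encoding. A minimal mapping is sketched below; the id order is an arbitrary choice for illustration, not something prescribed by the dataset.

```python
# Fixed label inventory of HuSST, in an arbitrary (alphabetical) id order.
LABELS = ["negative", "neutral", "positive"]

label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}

print(label2id["neutral"])   # 1
print(id2label[2])           # positive
```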
### Data Splits
HuSST has 3 splits: *train*, *validation* and *test*.
| Dataset split | Number of instances in the split |
|---------------|----------------------------------|
| train | 9344 |
| validation | 1168 |
| test | 1168 |
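The split sizes in the table correspond to an 80/10/10 partition, which can be verified quickly:

```python
# Split sizes as given in the table above.
splits = {"train": 9344, "validation": 1168, "test": 1168}

total = sum(splits.values())  # 11680
for name, size in splits.items():
    print(f"{name}: {size} ({size / total:.0%})")
```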
The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:[email protected]), or check [HuLU's website](https://hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the content of the SST dataset (only the whole sentences were used). Each sentence was translated by a human translator. Each translation was manually checked and further refined by another annotator.
### Annotations
#### Annotation process
The translated sentences were annotated by three human annotators with one of the following labels: negative, neutral and positive. Each sentence was then curated by a fourth annotator (the 'curator'). The final label is the decision of the curator based on the three labels of the annotators.
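The aggregation scheme above — three annotator labels plus a curator who makes the final call — can be sketched like this. The helper below is an illustration of the design, not the annotation tool itself; the curator decides with the annotators' tally in view.

```python
from collections import Counter

def final_label(annotator_labels, curator_choice):
    """Return the curator's decision together with the tally of the
    three annotator labels that informed it."""
    tally = Counter(annotator_labels)
    return curator_choice, dict(tally)

label, votes = final_label(["positive", "neutral", "positive"], "positive")
print(label, votes)  # positive {'positive': 2, 'neutral': 1}
```

Letting a curator arbitrate (rather than a plain majority vote) means ties and borderline cases get a consistent human decision.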
#### Who are the annotators?
The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.
## Additional Information
### Licensing Information
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. XVIII. Magyar Számítógépes Nyelvészeti Konferencia. pp. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
pages = {431--446}
}
```
and to:
Socher et al. (2013), Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 1631--1642.
```
@inproceedings{socher-etal-2013-recursive,
title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
author = "Socher, Richard and
Perelygin, Alex and
Wu, Jean and
Chuang, Jason and
Manning, Christopher D. and
Ng, Andrew and
Potts, Christopher",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1170",
pages = "1631--1642",
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. | NYTK/HuSST | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu",
"license:bsd-2-clause",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found", "expert-generated"], "language": ["hu"], "license": ["bsd-2-clause"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "sentiment-scoring", "text-scoring"], "pretty_name": "HuSST"} | 2023-03-27T08:54:13+00:00 | [] | [
"hu"
] |
b30ddf235604143d10ac9bcd027028863b8d89d4 |
# Dataset Card for HuWNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
[HuWNLI dataset](https://github.com/nytud/HuWNLI)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
[lnnoemi](mailto:[email protected])
### Dataset Summary
This is the dataset card for the Hungarian translation of the Winograd schemata formatted as an inference task. A Winograd schema is a pair of sentences that differ in only one or two words and contain an ambiguity that is resolved in opposite ways in the two sentences, requiring world knowledge and reasoning for its resolution (Levesque et al. 2012). This dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](https://hulu.nlp.nytud.hu). The corpus was created by translating and manually curating the original English Winograd schemata. The NLI format was created by replacing the ambiguous pronoun with each possible referent (the method is described in GLUE's paper, Wang et al. 2019). We extended the set of sentence pairs derived from the schemata with the translation of the sentence pairs that, together with the Winograd schema sentences, make up the WNLI dataset of GLUE.
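The pronoun-substitution step can be illustrated with a simplified sketch. This is not the original conversion script: the real procedure (following the GLUE paper) builds the hypothesis from the relevant clause rather than substituting into the full sentence, and the English example sentence and function name here are purely illustrative.

```python
import re

def schema_to_nli(premise, pronoun, candidates, correct_referent):
    """Build one premise-hypothesis pair per candidate referent by
    substituting the referent for the ambiguous pronoun (word-boundary
    match, first occurrence only)."""
    pattern = rf"\b{re.escape(pronoun)}\b"
    pairs = []
    for cand in candidates:
        hypothesis = re.sub(pattern, cand, premise, count=1)
        label = 1 if cand == correct_referent else 0
        pairs.append({"sentence1": premise, "sentence2": hypothesis, "label": label})
    return pairs

pairs = schema_to_nli(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "it",
    ["the trophy", "the suitcase"],
    "the trophy",
)
print(pairs[0]["sentence2"])
# The trophy doesn't fit in the suitcase because the trophy is too big.
```

Each schema sentence thus yields one entailed pair (label 1) and one non-entailed pair (label 0).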
### Languages
The BCP-47 code for Hungarian, the only represented language in this dataset, is hu-HU.
## Dataset Structure
### Data Instances
For each instance, there is an orig_id, an id, two sentences and a label.
An example:
```
{"orig_id": "4",
"id": "4",
"sentence1": "A férfi nem tudta felemelni a fiát, mert olyan nehéz volt.",
"sentence2": "A fia nehéz volt.",
"label": "1"
}
```
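The pronoun-replacement step described in the summary can be sketched as follows. This is an illustrative reconstruction, not part of HuLU's actual build scripts; the function name, its arguments, and the templating scheme are our own:

```python
def schema_to_nli_pairs(premise, hypothesis_template, referents, correct):
    """Replace the ambiguous pronoun with each candidate referent,
    yielding (sentence1, sentence2, label) triples as in HuWNLI."""
    for referent in referents:
        hypothesis = hypothesis_template.format(referent=referent)
        label = "1" if referent == correct else "0"  # "1" = entailed
        yield premise, hypothesis, label

pairs = list(schema_to_nli_pairs(
    premise="A férfi nem tudta felemelni a fiát, mert olyan nehéz volt.",
    hypothesis_template="{referent} nehéz volt.",
    referents=["A férfi", "A fia"],
    correct="A fia",
))
# pairs[1] reproduces the example instance above, with label "1".
```

Each schema thus contributes one entailed and one non-entailed sentence pair, which is what makes the task solvable only by resolving the original ambiguity.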
### Data Fields
- orig_id: the original id of this sentence pair (more precisely, its English counterpart's) in GLUE's WNLI dataset;
- id: unique id of the instances;
- sentence1: the premise;
- sentence2: the hypothesis;
- label: "1" if sentence2 is entailed by sentence1, and "0" otherwise.
### Data Splits
The data is distributed in three splits: training set (562), development set (59) and test set (134). The splits follow GLUE's WNLI splits but contain fewer instances, as many sentence pairs had to be discarded for being untranslatable into Hungarian. The train and development sets have been extended with NLI sentence pairs derived from the Hungarian translation of 6 Winograd schemata left out of the original WNLI dataset.
The test set's sentence pairs are translated from GLUE's WNLI test set. As that set was distributed without labels, 3 annotators annotated the Hungarian sentence pairs.
The test set of HuWNLI is also distributed without labels. To evaluate your model, please [contact us](mailto:[email protected]), or check [HuLU's website](https://hulu.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data is a translation of the English Winograd schemata and the additional sentence pairs of GLUE's WNLI. Each schema and sentence pair was translated by a human translator. Each schema was manually curated by a linguistic expert, and the schemata were transformed into the NLI format by a linguistic expert.
During the adaptation process, we found two erroneous labels in GLUE's WNLI train set (id 347 and id 464). We corrected them in our dataset.
## Additional Information
Average human performance on the test set is 92.78% (accuracy).
### Licensing Information
HuWNLI is released under the Creative Commons Attribution-ShareAlike 4.0 International License.
### Citation Information
If you use this resource or any part of its documentation, please refer to:
Ligeti-Nagy, N., Héja, E., Laki, L. J., Takács, D., Yang, Z. Gy. and Váradi, T. (2023) Hát te mekkorát nőttél! - A HuLU első életéve új adatbázisokkal és webszolgáltatással \[Look at how much you have grown! - The first year of HuLU with new databases and with webservice\]. In: Berend, G., Gosztolya, G. and Vincze, V. (eds), XIX. Magyar Számítógépes Nyelvészeti Konferencia. Szeged, Szegedi Tudományegyetem, Informatikai Intézet. 217-230.
```
@inproceedings{ligetinagy2023hulu,
title={Hát te mekkorát nőttél! - A HuLU első életéve új adatbázisokkal és webszolgáltatással},
author={Ligeti-Nagy, N. and Héja, E. and Laki, L. J. and Takács, D. and Yang, Z. Gy. and Váradi, T.},
booktitle={XIX. Magyar Számítógépes Nyelvészeti Konferencia},
year={2023},
editor = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {217–230}
}
```
Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Váradi, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából \[HuLU: Hungarian benchmark dataset to evaluate neural language models\]. In: Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika (eds), XVIII. Magyar Számítógépes Nyelvészeti Konferencia. JATEPress, Szeged. 431–446.
```
@inproceedings{ligetinagy2022hulu,
title={HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
author={Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Váradi, T.},
booktitle={XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year={2022},
editor = {Berend, Gábor and Gosztolya, Gábor and Vincze, Veronika},
address = {Szeged},
publisher = {JATEPress},
pages = {431–446}
}
```
and to:
Levesque, Hector, Davis, Ernest, Morgenstern, Leora (2012) The Winograd Schema Challenge. In: Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.
```
@inproceedings{levesque2012winograd,
title={The Winograd Schema Challenge},
author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
year={2012},
organization={Citeseer}
}
```
### Contributions
Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset. | NYTK/HuWNLI | [
"task_categories:other",
"task_ids:coreference-resolution",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:hu",
"license:cc-by-sa-4.0",
"structure-prediction",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found", "expert-generated"], "language": ["hu"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["other"], "task_ids": ["coreference-resolution"], "pretty_name": "HuWNLI", "tags": ["structure-prediction"]} | 2023-03-27T08:53:33+00:00 | [] | [
"hu"
] | TAGS
#task_categories-other #task_ids-coreference-resolution #annotations_creators-found #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|other #language-Hungarian #license-cc-by-sa-4.0 #structure-prediction #region-us
|
2c789aab0a4ee94ce96bcb370eca2054d23317ec | TEST | Nathanael/NPS | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-17T15:14:19+00:00 | [] | [] | TAGS
#region-us
| TEST | [] | [
"TAGS\n#region-us \n"
] |
eaafe98fa72f1c4b7431e2dc38207560b07948cc | # Dataset Card for NbAiLab/NCC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/NbAiLab/notram
- **Repository:** https://github.com/NbAiLab/notram
- **Paper:** https://arxiv.org/abs/2104.09617
- **Point of Contact:** [Freddy Wetjen](mailto:[email protected])
The Norwegian Colossal Corpus (NCC) is a collection of several smaller Norwegian corpora suitable for training large language models. We have done extensive cleaning on the datasets and have made them available in a common format. The total size of the NCC is currently 45GB.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("NbAiLab/NCC", streaming=True)
```
## Download Data
If you do not want to use the HuggingFace Dataset-library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
```bash
# Clone the training set
git clone https://huggingface.co/datasets/NbAiLab/NCC
# Create one large training file of all shards without unpacking
cat NCC/data/train*.gz > onefile.json.gz
```
<details>
<summary>List of all the files.</summary>
* [train-shard-0001-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0001-of-0046.json.gz)
* [train-shard-0002-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0002-of-0046.json.gz)
* [train-shard-0003-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0003-of-0046.json.gz)
* [train-shard-0004-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0004-of-0046.json.gz)
* [train-shard-0005-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0005-of-0046.json.gz)
* [train-shard-0006-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0006-of-0046.json.gz)
* [train-shard-0007-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0007-of-0046.json.gz)
* [train-shard-0008-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0008-of-0046.json.gz)
* [train-shard-0009-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0009-of-0046.json.gz)
* [train-shard-0010-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0010-of-0046.json.gz)
* [train-shard-0011-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0011-of-0046.json.gz)
* [train-shard-0012-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0012-of-0046.json.gz)
* [train-shard-0013-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0013-of-0046.json.gz)
* [train-shard-0014-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0014-of-0046.json.gz)
* [train-shard-0015-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0015-of-0046.json.gz)
* [train-shard-0016-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0016-of-0046.json.gz)
* [train-shard-0017-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0017-of-0046.json.gz)
* [train-shard-0018-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0018-of-0046.json.gz)
* [train-shard-0019-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0019-of-0046.json.gz)
* [train-shard-0020-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0020-of-0046.json.gz)
* [train-shard-0021-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0021-of-0046.json.gz)
* [train-shard-0022-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0022-of-0046.json.gz)
* [train-shard-0023-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0023-of-0046.json.gz)
* [train-shard-0024-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0024-of-0046.json.gz)
* [train-shard-0025-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0025-of-0046.json.gz)
* [train-shard-0026-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0026-of-0046.json.gz)
* [train-shard-0027-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0027-of-0046.json.gz)
* [train-shard-0028-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0028-of-0046.json.gz)
* [train-shard-0029-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0029-of-0046.json.gz)
* [train-shard-0030-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0030-of-0046.json.gz)
* [train-shard-0031-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0031-of-0046.json.gz)
* [train-shard-0032-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0032-of-0046.json.gz)
* [train-shard-0033-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0033-of-0046.json.gz)
* [train-shard-0034-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0034-of-0046.json.gz)
* [train-shard-0035-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0035-of-0046.json.gz)
* [train-shard-0036-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0036-of-0046.json.gz)
* [train-shard-0037-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0037-of-0046.json.gz)
* [train-shard-0038-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0038-of-0046.json.gz)
* [train-shard-0039-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0039-of-0046.json.gz)
* [train-shard-0040-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0040-of-0046.json.gz)
* [train-shard-0041-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0041-of-0046.json.gz)
* [train-shard-0042-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0042-of-0046.json.gz)
* [train-shard-0043-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0043-of-0046.json.gz)
* [train-shard-0044-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0044-of-0046.json.gz)
* [train-shard-0045-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0045-of-0046.json.gz)
* [train-shard-0046-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0046-of-0046.json.gz)
* [validation-shard-0001-of-0001](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/validation-shard-0001-of-0001.json.gz)
</details>
### Dataset Summary
The NCC dataset contains JSON lines with language training data. Here is an example JSON line:
```json
{
"id": "1006205",
"doc_type": "cc100",
"publish_year": 2021,
"lang_fasttext": "nn",
"lang_fasttext_conf": "0.641",
"text": "Eg har ein PLAN! KOS deg og ha ei fin helg"
}
```
## Data Fields
| Field | Description |
|:-----------|:------------|
| **id** | String with an id pointing to the source of the line, plus a unique identifier |
| **doc_type** | String describing the type of media the text was extracted from (i.e. book, newspaper, etc.) |
| **publish_year** | Integer. The year the text was published. When the year is undetermined, it is set to 2021. |
| **lang_fasttext** | String. Language of the text as identified by FastText |
| **lang_fasttext_conf** | String. Confidence score calculated by FastText |
| **text** | String. The complete UTF-8 document. If longer than 1M characters, it is split. |
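Note that `lang_fasttext_conf` is stored as a string, so consumers typically cast it to `float` before filtering. A hypothetical filter applied to the example record above; the helper name and the 0.5 threshold are our own choices, not something the corpus prescribes:

```python
import json

def is_confident_norwegian(doc, threshold=0.5):
    """Keep documents that FastText tagged as Bokmål ("nb") or
    Nynorsk ("nn") with at least the given confidence."""
    return (doc["lang_fasttext"] in {"nb", "nn"}
            and float(doc["lang_fasttext_conf"]) >= threshold)

record = json.loads(
    '{"id": "1006205", "doc_type": "cc100", "publish_year": 2021, '
    '"lang_fasttext": "nn", "lang_fasttext_conf": "0.641", '
    '"text": "Eg har ein PLAN! KOS deg og ha ei fin helg"}'
)
```

At the default threshold this record passes; raising the threshold above its 0.641 confidence would drop it.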
### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single 1GB file, while the train split is sharded into 1GB chunks.
All files are gzipped.
Build date: 21012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
### Summary
| Words | Documents | Words/Document |
|--------------:|------------:|-----------------:|
| 6,905,570,165 | 20,830,348 | 331 |
### Document Types
| Source | Words | Documents | Words/Document |
|--------------------------------------:|--------------:|------------:|-----------------:|
| newspaper_ocr | 1,974,452,883 | 9,872,470 | 199 |
| parliament | 1,273,353,169 | 9,321 | 136,611 |
| books | 842,936,050 | 23,708 | 35,554 |
| newspapers_online_nb | 487,189,627 | 3,446,348 | 141 |
| maalfrid_regjeringen | 360,349,242 | 919,902 | 391 |
| maalfrid_ssb | 279,732,847 | 851,982 | 328 |
| maalfrid_uio | 181,916,296 | 771,480 | 235 |
| government_nb | 134,127,104 | 3,476 | 38,586 |
| wikipedia_download_nbo | 110,845,615 | 523,593 | 211 |
| maalfrid_fylkesmannen | 102,849,898 | 463,021 | 222 |
| publicreports | 78,347,879 | 3,298 | 23,756 |
| maalfrid_nve | 66,656,315 | 301,966 | 220 |
| maalfrid_patentstyret | 64,985,154 | 213,991 | 303 |
| maalfrid_ntnu | 57,803,460 | 199,307 | 290 |
| newspapers_online_nn | 42,205,558 | 167,347 | 252 |
| lovdata_cd_odelsting_2005 | 36,370,948 | 1,933 | 18,815 |
| maalfrid_vegvesen | 33,431,887 | 166,203 | 201 |
| maalfrid_fhi | 32,784,098 | 144,363 | 227 |
| maalfrid_norad | 32,720,034 | 93,097 | 351 |
| maalfrid_skatteetaten | 32,567,691 | 82,589 | 394 |
| maalfrid_uib | 28,425,322 | 115,729 | 245 |
| wikipedia_download_nno | 27,061,858 | 143,265 | 188 |
| maalfrid_forskningsradet | 24,076,984 | 73,368 | 328 |
| maalfrid_nasjonalparkstyre | 21,309,995 | 93,871 | 227 |
| government_nn | 18,316,345 | 1,063 | 17,230 |
| maalfrid_nmbu | 18,082,476 | 69,719 | 259 |
| maalfrid_oslomet | 17,710,771 | 47,022 | 376 |
| maalfrid_domstol | 16,678,270 | 51,038 | 326 |
| maalfrid_banenor | 16,445,420 | 70,360 | 233 |
| maalfrid_nav | 16,272,635 | 74,101 | 219 |
| maalfrid_landbruksdirektoratet | 13,119,567 | 47,983 | 273 |
| maalfrid_helsedirektoratet | 13,008,787 | 49,344 | 263 |
| maalfrid_nokut | 10,101,424 | 38,552 | 262 |
| maalfrid_hi | 10,046,751 | 39,065 | 257 |
| maalfrid_norges-bank | 9,924,489 | 37,171 | 266 |
| maalfrid_udir | 9,868,345 | 38,736 | 254 |
| maalfrid_vkm | 9,824,529 | 32,230 | 304 |
| maalfrid_nbim | 9,629,725 | 18,131 | 531 |
| maalfrid_miljodirektoratet | 9,496,631 | 34,711 | 273 |
| maalfrid_distriktssenteret | 9,375,506 | 38,525 | 243 |
| maalfrid_ngu | 9,231,905 | 34,619 | 266 |
| maalfrid_ptil | 9,214,434 | 34,250 | 269 |
| maalfrid_nord | 8,992,352 | 44,800 | 200 |
| maalfrid_fiskeridir | 8,297,897 | 33,446 | 248 |
| maalfrid_hivolda | 7,820,709 | 26,473 | 295 |
| maalfrid_difi | 7,789,290 | 35,733 | 217 |
| maalfrid_mattilsynet | 7,492,831 | 27,002 | 277 |
| maalfrid_havarikommisjonen | 7,440,410 | 24,989 | 297 |
| maalfrid_kulturradet | 7,196,423 | 22,437 | 320 |
| maalfrid_ks | 6,915,503 | 27,439 | 252 |
| maalfrid_kystverket | 6,713,070 | 30,975 | 216 |
| maalfrid_udi | 6,433,540 | 19,134 | 336 |
| maalfrid_uia | 5,964,644 | 23,861 | 249 |
| maalfrid_hjelpemiddeldatabasen | 5,892,662 | 34,192 | 172 |
| maalfrid_khrono | 5,859,186 | 19,970 | 293 |
| maalfrid_helsetilsynet | 5,803,000 | 18,365 | 315 |
| maalfrid_moreforsk | 5,622,025 | 21,579 | 260 |
| maalfrid_jernbanedirektoratet | 5,461,268 | 21,666 | 252 |
| maalfrid_veiviseren | 5,316,521 | 18,026 | 294 |
| lovdata_cd_somb_rundskriv_2005 | 5,264,746 | 3,215 | 1,637 |
| maalfrid_dsb | 5,199,259 | 17,814 | 291 |
| lovdata_cd_sentrale_forskrifter_2005 | 5,037,694 | 11,467 | 439 |
| maalfrid_husbanken | 4,711,069 | 15,053 | 312 |
| maalfrid_legemiddelverket | 4,689,988 | 20,192 | 232 |
| maalfrid_vetinst | 4,674,951 | 14,492 | 322 |
| maalfrid_imdi | 4,636,355 | 15,290 | 303 |
| maalfrid_forsvarsbygg | 4,567,318 | 18,886 | 241 |
| maalfrid_sdir | 4,540,110 | 15,202 | 298 |
| maalfrid_konkurransetilsynet | 4,512,807 | 12,617 | 357 |
| maalfrid_dsa | 4,498,837 | 15,898 | 282 |
| maalfrid_arkivverket | 4,493,280 | 16,515 | 272 |
| maalfrid_hiof | 4,473,731 | 23,119 | 193 |
| maalfrid_ehelse | 4,379,984 | 22,553 | 194 |
| maalfrid_inn | 4,326,704 | 26,277 | 164 |
| maalfrid_klagenemndssekretariatet | 4,181,685 | 11,916 | 350 |
| maalfrid_sprakradet | 4,097,815 | 15,187 | 269 |
| maalfrid_dibk | 3,967,428 | 15,509 | 255 |
| maalfrid_nhh | 3,962,033 | 15,678 | 252 |
| maalfrid_kartverket | 3,732,184 | 18,710 | 199 |
| maalfrid_riksrevisjonen | 3,680,555 | 10,922 | 336 |
| maalfrid_toll | 3,510,061 | 13,777 | 254 |
| maalfrid_nibio | 3,456,026 | 17,104 | 202 |
| maalfrid_met | 3,446,762 | 18,282 | 188 |
| maalfrid_bufdir | 3,354,740 | 11,470 | 292 |
| maalfrid_artsdatabanken | 3,193,511 | 9,009 | 354 |
| maalfrid_politiet | 3,167,395 | 10,501 | 301 |
| maalfrid_nkom | 3,127,687 | 10,002 | 312 |
| maalfrid_vestlandfylke | 3,060,166 | 12,075 | 253 |
| maalfrid_uis | 2,924,821 | 9,838 | 297 |
| maalfrid_sykkelbynettverket | 2,820,702 | 11,818 | 238 |
| maalfrid_nlr | 2,646,014 | 15,851 | 166 |
| maalfrid_seniorporten | 2,616,054 | 8,111 | 322 |
| maalfrid_npd | 2,597,831 | 10,742 | 241 |
| maalfrid_aldringoghelse | 2,430,767 | 6,788 | 358 |
| maalfrid_custompublish | 2,430,747 | 9,184 | 264 |
| maalfrid_bioteknologiradet | 2,393,891 | 5,996 | 399 |
| maalfrid_arbeidstilsynet | 2,379,597 | 6,882 | 345 |
| maalfrid_nyemetoder | 2,376,468 | 10,771 | 220 |
| maalfrid_riksantikvaren | 2,257,491 | 8,756 | 257 |
| maalfrid_sjt | 2,238,168 | 11,189 | 200 |
| lovdata_cd_lokaleforskrifter_2005 | 2,176,221 | 22,274 | 97 |
| maalfrid_hvl | 2,149,292 | 9,395 | 228 |
| maalfrid_luftfartstilsynet | 2,101,272 | 9,866 | 212 |
| maalfrid_dfo | 2,073,203 | 9,165 | 226 |
| maalfrid_ldo | 2,047,969 | 7,299 | 280 |
| maalfrid_kompetansenorge | 1,952,035 | 10,245 | 190 |
| maalfrid_forbrukerradet | 1,945,089 | 7,330 | 265 |
| maalfrid_himolde | 1,913,699 | 9,975 | 191 |
| maalfrid_usn | 1,793,297 | 7,403 | 242 |
| lovdata_cd_norgeslover_2005 | 1,760,884 | 1,386 | 1,270 |
| maalfrid_naku | 1,754,510 | 5,239 | 334 |
| maalfrid_medietilsynet | 1,608,424 | 6,611 | 243 |
| maalfrid_matematikksenteret | 1,567,505 | 7,298 | 214 |
| maalfrid_forskningsetikk | 1,545,336 | 5,545 | 278 |
| maalfrid_diku | 1,542,929 | 6,241 | 247 |
| maalfrid_godeidrettsanlegg | 1,506,577 | 6,115 | 246 |
| maalfrid_dirmin | 1,467,255 | 5,303 | 276 |
| maalfrid_diskrimineringsnemnda | 1,463,291 | 4,168 | 351 |
| maalfrid_naturfag | 1,450,662 | 5,976 | 242 |
| maalfrid_arbeidsretten | 1,440,074 | 4,754 | 302 |
| lovdata_cd_rtv_rundskriv_2005 | 1,366,872 | 9,596 | 142 |
| maalfrid_fellesstudentsystem | 1,359,292 | 10,321 | 131 |
| maalfrid_nupi | 1,286,395 | 5,491 | 234 |
| maalfrid_kriminalitetsforebygging | 1,201,477 | 4,667 | 257 |
| maalfrid_anskaffelser | 1,187,544 | 5,479 | 216 |
| maalfrid_folketrygdfondet | 1,183,502 | 4,253 | 278 |
| maalfrid_miljopakken | 1,170,252 | 5,513 | 212 |
| maalfrid_nih | 1,116,791 | 5,271 | 211 |
| maalfrid_statsbygg | 1,103,635 | 4,439 | 248 |
| lovdata_cd_skatt_rundskriv_2005 | 1,102,142 | 398 | 2,769 |
| maalfrid_nb | 1,055,200 | 4,135 | 255 |
| maalfrid_npolar | 1,051,181 | 2,653 | 396 |
| maalfrid_unit | 1,049,621 | 6,329 | 165 |
| maalfrid_valgdirektoratet | 1,009,941 | 9,131 | 110 |
| maalfrid_barneombudet | 980,751 | 2,807 | 349 |
| maalfrid_datatilsynet | 974,679 | 2,965 | 328 |
| maalfrid_lottstift | 959,590 | 3,578 | 268 |
| maalfrid_aho | 953,568 | 4,528 | 210 |
| maalfrid_sykehuspartner | 939,625 | 4,579 | 205 |
| maalfrid_naturfagsenteret | 897,049 | 3,859 | 232 |
| maalfrid_khio | 849,973 | 3,377 | 251 |
| maalfrid_spesialenheten | 824,209 | 2,127 | 387 |
| maalfrid_xn--miljlftet-o8ab | 803,011 | 3,384 | 237 |
| maalfrid_samordnaopptak | 792,595 | 2,368 | 334 |
| maalfrid_helsenorge | 780,465 | 3,034 | 257 |
| maalfrid_skrivesenteret | 777,204 | 4,161 | 186 |
| maalfrid_mareano | 760,645 | 3,724 | 204 |
| maalfrid_fiskeridirektoratet | 755,997 | 2,444 | 309 |
| maalfrid_sykehusinnkjop | 738,720 | 4,340 | 170 |
| maalfrid_matportalen | 630,990 | 2,370 | 266 |
| maalfrid_spk | 613,180 | 2,152 | 284 |
| maalfrid_justervesenet | 595,014 | 1,904 | 312 |
| maalfrid_pasientsikkerhetsprogrammet | 594,399 | 4,684 | 126 |
| maalfrid_nhn | 579,713 | 3,581 | 161 |
| maalfrid_sshf | 572,570 | 1,897 | 301 |
| maalfrid_bibliotekutvikling | 560,126 | 3,216 | 174 |
| maalfrid_nysgjerrigper | 559,207 | 3,019 | 185 |
| maalfrid_nodnett | 538,021 | 2,689 | 200 |
| maalfrid_une | 513,586 | 1,255 | 409 |
| maalfrid_giek | 512,569 | 1,796 | 285 |
| maalfrid_samas | 501,177 | 2,548 | 196 |
| maalfrid_kriminalomsorgen | 496,062 | 1,951 | 254 |
| maalfrid_kjonnsforskning | 483,376 | 1,426 | 338 |
| maalfrid_kunstkultursenteret | 470,009 | 1,435 | 327 |
| lovdata_cd_rundskriv_lovavdeling_2005 | 469,295 | 405 | 1,158 |
| maalfrid_nynorsksenteret | 460,165 | 2,085 | 220 |
| maalfrid_ceres | 448,920 | 1,950 | 230 |
| maalfrid_stami | 445,031 | 1,160 | 383 |
| maalfrid_nsm | 442,110 | 1,536 | 287 |
| maalfrid_gjenopptakelse | 420,205 | 1,467 | 286 |
| maalfrid_nfi | 420,128 | 1,523 | 275 |
| maalfrid_nidsenter | 410,785 | 1,631 | 251 |
| maalfrid_nasjonalmuseet | 390,036 | 1,087 | 358 |
| maalfrid_forbrukertilsynet | 387,579 | 1,227 | 315 |
| maalfrid_natursekken | 378,442 | 3,563 | 106 |
| maalfrid_fordelingsutvalget | 355,121 | 1,385 | 256 |
| maalfrid_digdir | 349,548 | 2,105 | 166 |
| maalfrid_forsvaret | 331,183 | 1,215 | 272 |
| maalfrid_beccle | 329,568 | 1,517 | 217 |
| maalfrid_romsenter | 329,304 | 1,133 | 290 |
| maalfrid_geonorge | 301,869 | 1,622 | 186 |
| maalfrid_universell | 263,740 | 2,155 | 122 |
| maalfrid_ovf | 262,542 | 930 | 282 |
| maalfrid_forbrukereuropa | 259,420 | 1,018 | 254 |
| maalfrid_politihogskolen | 258,615 | 1,229 | 210 |
| maalfrid_vinmonopolet | 245,685 | 671 | 366 |
| maalfrid_energimerking | 237,243 | 1,033 | 229 |
| maalfrid_ombudsmann | 225,947 | 418 | 540 |
| maalfrid_vea-fs | 224,712 | 1,261 | 178 |
| maalfrid_traumebevisst | 224,297 | 2,417 | 92 |
| maalfrid_npe | 205,102 | 1,000 | 205 |
| maalfrid_pkh | 201,503 | 791 | 254 |
| maalfrid_helfo | 193,880 | 988 | 196 |
| maalfrid_opplaringslovutvalget | 193,590 | 549 | 352 |
| maalfrid_regionaleforskningsfond | 187,261 | 989 | 189 |
| maalfrid_nafkam | 177,295 | 571 | 310 |
| maalfrid_jernbanemagasinet | 174,152 | 412 | 422 |
| maalfrid_polarhistorie | 171,386 | 382 | 448 |
| maalfrid_aasentunet | 161,626 | 529 | 305 |
| maalfrid_riksteatret | 159,991 | 798 | 200 |
| maalfrid_realfagsloyper | 157,166 | 748 | 210 |
| maalfrid_koro | 153,304 | 574 | 267 |
| maalfrid_squarespace | 146,931 | 504 | 291 |
| maalfrid_politietssikkerhetstjeneste | 143,781 | 469 | 306 |
| maalfrid_unknown | 139,263 | 700 | 198 |
| maalfrid_whocc | 121,616 | 656 | 185 |
| maalfrid_konfliktraadet | 120,258 | 372 | 323 |
| maalfrid_okokrim | 115,842 | 372 | 311 |
| maalfrid_brreg | 112,787 | 571 | 197 |
| maalfrid_riksmekleren | 110,737 | 558 | 198 |
| maalfrid_sismo | 110,700 | 309 | 358 |
| maalfrid_radetfordyreetikk | 99,241 | 441 | 225 |
| maalfrid_akkreditert | 99,040 | 503 | 196 |
| maalfrid_sivilforsvaret | 97,679 | 514 | 190 |
| maalfrid_lanekassen | 95,286 | 301 | 316 |
| maalfrid_digidel | 95,140 | 607 | 156 |
| maalfrid_generaladvokaten | 91,385 | 294 | 310 |
| maalfrid_uit | 90,273 | 602 | 149 |
| maalfrid_nyinorge | 88,466 | 199 | 444 |
| maalfrid_lokforerskolen | 87,224 | 468 | 186 |
| maalfrid_varsom | 85,382 | 563 | 151 |
| maalfrid_ffi | 80,137 | 220 | 364 |
| maalfrid_kulturminnefondet | 79,767 | 411 | 194 |
| maalfrid_unesco | 76,951 | 382 | 201 |
| maalfrid_yrkesfisker | 74,807 | 501 | 149 |
| maalfrid_dekom | 72,148 | 1,307 | 55 |
| maalfrid_omsorgsforskning | 71,675 | 321 | 223 |
| maalfrid_lektor2 | 67,385 | 549 | 122 |
| maalfrid_openaccess | 63,554 | 192 | 331 |
| maalfrid_ssn | 63,036 | 302 | 208 |
| maalfrid_lokalhistorie | 59,854 | 241 | 248 |
| maalfrid_nlb | 57,872 | 200 | 289 |
| maalfrid_riksadvokaten | 57,563 | 155 | 371 |
| maalfrid_laudim | 57,500 | 393 | 146 |
| maalfrid_denkulturelleskolesekken | 46,018 | 243 | 189 |
| maalfrid_sivilrett | 44,062 | 142 | 310 |
| maalfrid_htu | 43,330 | 169 | 256 |
| maalfrid_yr | 40,646 | 562 | 72 |
| maalfrid_informasjonskompetanse | 40,351 | 330 | 122 |
| maalfrid_dep | 38,882 | 126 | 308 |
| maalfrid_finansportalen | 38,506 | 180 | 213 |
| maalfrid_feide | 36,715 | 267 | 137 |
| maalfrid_kulturped | 36,013 | 96 | 375 |
| maalfrid_fug | 34,158 | 120 | 284 |
| maalfrid_kulturoghelse | 33,424 | 184 | 181 |
| maalfrid_helseklage | 32,756 | 124 | 264 |
| maalfrid_nbsk | 30,674 | 211 | 145 |
| maalfrid_matogindustri | 29,922 | 194 | 154 |
| maalfrid_sinn | 27,541 | 150 | 183 |
| maalfrid_transport21 | 25,317 | 90 | 281 |
| maalfrid_konkursradet | 23,505 | 76 | 309 |
| maalfrid_vergemal | 23,271 | 77 | 302 |
| maalfrid_norec | 22,496 | 78 | 288 |
| maalfrid_pts | 20,459 | 78 | 262 |
| maalfrid_nasjonaleturistveger | 19,922 | 110 | 181 |
| maalfrid_iearth | 19,281 | 146 | 132 |
| maalfrid_hjelpelinjen | 19,209 | 85 | 225 |
| maalfrid_russamtalen | 17,999 | 65 | 276 |
| maalfrid_xn--kvinneligomskjring-1ub | 17,701 | 77 | 229 |
| maalfrid_nynorskbok | 17,600 | 96 | 183 |
| maalfrid_regjeringsadvokaten | 17,416 | 55 | 316 |
| maalfrid_memu | 17,311 | 98 | 176 |
| maalfrid_xn--tilbakefring-2jb | 15,814 | 49 | 322 |
| maalfrid_xn--forskerfr-t8a | 15,724 | 172 | 91 |
| maalfrid_ringerikefengsel | 15,669 | 28 | 559 |
| maalfrid_skeivtarkiv | 15,537 | 69 | 225 |
| maalfrid_samfunnskunnskap | 15,110 | 60 | 251 |
| maalfrid_fordelingsutvalet | 15,017 | 34 | 441 |
| maalfrid_skattefunn | 14,599 | 51 | 286 |
| maalfrid_shiprep | 14,165 | 142 | 99 |
| maalfrid_haldenfengsel | 13,625 | 37 | 368 |
| maalfrid_sevuppt | 13,332 | 52 | 256 |
| maalfrid_forbrukerklageutvalget | 12,698 | 49 | 259 |
| maalfrid_mhfa | 11,999 | 144 | 83 |
| maalfrid_ah | 11,787 | 36 | 327 |
| maalfrid_nettvett | 11,002 | 43 | 255 |
| maalfrid_uh-it | 10,828 | 273 | 39 |
| maalfrid_fishgen | 10,199 | 28 | 364 |
| maalfrid_designavgang | 10,164 | 75 | 135 |
| maalfrid_global | 9,051 | 41 | 220 |
| maalfrid_havmiljo | 8,607 | 68 | 126 |
| maalfrid_valg | 8,516 | 47 | 181 |
| maalfrid_miljoklagenemnda | 7,797 | 35 | 222 |
| maalfrid_altinn | 7,695 | 49 | 157 |
| maalfrid_spinn-inn | 7,674 | 47 | 163 |
| maalfrid_kantinekurset | 7,217 | 53 | 136 |
| maalfrid_bastoyfengsel | 7,142 | 56 | 127 |
| maalfrid_norskpetroleum | 6,083 | 119 | 51 |
| maalfrid_voldsoffererstatning | 5,827 | 26 | 224 |
| maalfrid_musikkbasertmiljobehandling | 5,186 | 39 | 132 |
| maalfrid_prosjektveiviseren | 5,019 | 14 | 358 |
| maalfrid_aldersvennlig | 4,919 | 32 | 153 |
| maalfrid_barentswatch | 4,829 | 32 | 150 |
| maalfrid_fmfiavo@fylkesmannen | 4,702 | 68 | 69 |
| maalfrid_kk-utvalget | 4,697 | 19 | 247 |
| maalfrid_agropub | 4,434 | 17 | 260 |
| maalfrid_utdanningiverden | 4,266 | 13 | 328 |
| maalfrid_overgangsbolig | 3,769 | 35 | 107 |
| maalfrid_forsvaretsmuseer | 3,706 | 34 | 109 |
| maalfrid_okopark | 3,282 | 12 | 273 |
| maalfrid_pst | 2,866 | 14 | 204 |
| maalfrid_sikkerhverdag | 2,697 | 18 | 149 |
| maalfrid_arkitektur | 2,436 | 15 | 162 |
| maalfrid_velgekte | 2,287 | 10 | 228 |
| maalfrid_addlab | 2,109 | 12 | 175 |
| maalfrid_romerikefengsel | 2,088 | 19 | 109 |
| maalfrid_utdanning | 2,009 | 12 | 167 |
| maalfrid_grunderskolen | 1,994 | 7 | 284 |
| maalfrid_umb | 1,934 | 8 | 241 |
| maalfrid_oslofengsel | 1,756 | 8 | 219 |
| maalfrid_hjorteviltregisteret | 1,600 | 5 | 320 |
| maalfrid_alleteller | 1,511 | 7 | 215 |
| maalfrid_webhuset | 1,409 | 5 | 281 |
| maalfrid_lykillinn | 1,349 | 4 | 337 |
| maalfrid_kulturfag | 1,215 | 6 | 202 |
| maalfrid_unimus | 940 | 4 | 235 |
| maalfrid_anleggsregisteret | 928 | 5 | 185 |
| maalfrid_mangfoldsprisen | 597 | 3 | 199 |
| maalfrid_algae2future | 456 | 8 | 57 |
| maalfrid_mammapresenterer | 447 | 2 | 223 |
| maalfrid_karriereveiledning | 391 | 27 | 14 |
| maalfrid_nodsms | 351 | 4 | 87 |
| maalfrid_kildekompasset | 302 | 1 | 302 |
| maalfrid_praksisfou | 297 | 1 | 297 |
| maalfrid_retttilaalese | 246 | 3 | 82 |
| maalfrid_indreostfoldfengsel | 215 | 3 | 71 |
| maalfrid_xn--kroppsvingsforskning-gcc | 205 | 2 | 102 |
| maalfrid_pahoyden | 154 | 1 | 154 |
| maalfrid_norren | 42 | 1 | 42 |
### Languages
| Language | Words | Documents | Words/Document |
|-----------:|--------------:|------------:|-----------------:|
| no | 5,050,752,505 | 17,177,223 | 294 |
| da | 940,216,574 | 574,211 | 1,637 |
| en | 474,855,361 | 1,526,795 | 311 |
| nn | 299,753,996 | 987,701 | 303 |
| fr | 49,409,701 | 108,071 | 457 |
| de | 27,159,878 | 85,230 | 318 |
| sv | 18,773,092 | 118,753 | 158 |
| es | 10,057,791 | 42,177 | 238 |
| fi | 8,104,322 | 46,710 | 173 |
| et | 3,309,661 | 24,183 | 136 |
| cs | 2,652,151 | 21,793 | 121 |
| pt | 2,550,218 | 16,407 | 155 |
| oc | 2,123,730 | 4,927 | 431 |
| nl | 1,984,501 | 11,813 | 167 |
| zh | 1,470,751 | 8,146 | 180 |
| uk | 1,459,484 | 5,096 | 286 |
| ca | 1,370,260 | 4,476 | 306 |
| it | 1,293,230 | 8,479 | 152 |
| la | 1,281,920 | 797 | 1,608 |
| ru | 1,231,482 | 6,796 | 181 |
| pl | 852,304 | 9,396 | 90 |
| eu | 831,276 | 3,195 | 260 |
| hu | 659,973 | 8,499 | 77 |
| fa | 494,551 | 2,047 | 241 |
| ja | 351,634 | 4,994 | 70 |
| is | 309,422 | 1,207 | 256 |
| id | 226,296 | 2,033 | 111 |
| ar | 205,632 | 1,173 | 175 |
| sl | 140,913 | 1,858 | 75 |
| vi | 139,122 | 982 | 141 |
| so | 128,303 | 592 | 216 |
| hr | 124,033 | 1,081 | 114 |
| el | 117,624 | 618 | 190 |
| lv | 106,626 | 123 | 866 |
| tr | 92,680 | 1,630 | 56 |
| ro | 80,804 | 635 | 127 |
| sr | 71,953 | 970 | 74 |
| lt | 70,148 | 869 | 80 |
| gl | 65,152 | 692 | 94 |
| war | 56,369 | 274 | 205 |
| ko | 56,057 | 1,006 | 55 |
| th | 54,067 | 367 | 147 |
| am | 44,818 | 317 | 141 |
| sk | 39,416 | 1,000 | 39 |
| ml | 35,575 | 156 | 228 |
| ceb | 35,337 | 331 | 106 |
| sq | 34,461 | 238 | 144 |
| tl | 30,839 | 177 | 174 |
| kk | 27,827 | 72 | 386 |
| eo | 24,187 | 859 | 28 |
| mn | 21,540 | 22 | 979 |
| sw | 18,670 | 72 | 259 |
| pnb | 18,403 | 80 | 230 |
| sh | 17,807 | 213 | 83 |
| gu | 16,973 | 13 | 1,305 |
| bg | 16,495 | 100 | 164 |
| ur | 15,650 | 169 | 92 |
| mk | 13,305 | 65 | 204 |
| ckb | 9,119 | 43 | 212 |
| ku | 9,071 | 57 | 159 |
| ast | 7,919 | 73 | 108 |
| az | 7,907 | 59 | 134 |
| ms | 7,051 | 483 | 14 |
| uz | 6,924 | 56 | 123 |
| ta | 4,180 | 60 | 69 |
| fy | 3,841 | 68 | 56 |
| ga | 3,761 | 174 | 21 |
| hy | 3,456 | 43 | 80 |
| pa | 3,299 | 17 | 194 |
| hi | 2,783 | 39 | 71 |
| be | 2,556 | 62 | 41 |
| bo | 2,551 | 1 | 2,551 |
| ht | 2,534 | 11 | 230 |
| jv | 2,341 | 91 | 25 |
| min | 2,206 | 18 | 122 |
| cy | 2,052 | 52 | 39 |
| bs | 2,047 | 66 | 31 |
| als | 1,918 | 66 | 29 |
| su | 1,888 | 29 | 65 |
| nds | 1,869 | 162 | 11 |
| ps | 1,832 | 15 | 122 |
| bn | 1,797 | 22 | 81 |
| qu | 1,498 | 14 | 107 |
| ilo | 1,126 | 25 | 45 |
| mt | 968 | 16 | 60 |
| si | 942 | 29 | 32 |
| te | 888 | 18 | 49 |
| my | 784 | 15 | 52 |
| af | 741 | 32 | 23 |
| io | 715 | 15 | 47 |
| tt | 684 | 22 | 31 |
| km | 674 | 11 | 61 |
| br | 645 | 40 | 16 |
| gn | 638 | 11 | 58 |
| jbo | 611 | 27 | 22 |
| as | 584 | 2 | 292 |
| ug | 581 | 6 | 96 |
| kv | 562 | 3 | 187 |
| kn | 544 | 22 | 24 |
| pam | 480 | 2 | 240 |
| kw | 475 | 19 | 25 |
| vep | 419 | 34 | 12 |
| he | 412 | 18 | 22 |
| ka | 351 | 20 | 17 |
| yo | 281 | 9 | 31 |
| wa | 268 | 38 | 7 |
| ky | 228 | 10 | 22 |
| azb | 216 | 1 | 216 |
| ba | 203 | 5 | 40 |
| gom | 174 | 12 | 14 |
| ia | 140 | 15 | 9 |
| mr | 138 | 10 | 13 |
| lmo | 134 | 27 | 4 |
| tg | 129 | 3 | 43 |
| lb | 115 | 26 | 4 |
| pms | 115 | 16 | 7 |
| vec | 67 | 3 | 22 |
| rue | 67 | 2 | 33 |
| sco | 61 | 6 | 10 |
| ie | 59 | 11 | 5 |
| hsb | 57 | 3 | 19 |
| ne | 56 | 6 | 9 |
| bar | 46 | 7 | 6 |
| cbk | 46 | 2 | 23 |
| or | 44 | 2 | 22 |
| mg | 38 | 8 | 4 |
| os | 36 | 3 | 12 |
| tk | 36 | 4 | 9 |
| arz | 31 | 1 | 31 |
| li | 29 | 6 | 4 |
| gd | 29 | 2 | 14 |
| eml | 24 | 5 | 4 |
| diq | 20 | 2 | 10 |
| lrc | 20 | 1 | 20 |
| dsb | 19 | 1 | 19 |
| yue | 19 | 1 | 19 |
| nap | 16 | 1 | 16 |
| nah | 14 | 2 | 7 |
| wuu | 14 | 1 | 14 |
| sd | 14 | 1 | 14 |
| frr | 13 | 3 | 4 |
| rm | 12 | 2 | 6 |
| cv | 12 | 1 | 12 |
| scn | 9 | 2 | 4 |
| bh | 8 | 1 | 8 |
| bcl | 8 | 1 | 8 |
| co | 7 | 1 | 7 |
| ce | 4 | 1 | 4 |
| new | 4 | 1 | 4 |
| vo | 3 | 2 | 1 |
| mzn | 3 | 1 | 3 |
| gv | 3 | 1 | 3 |
| lo | 2 | 1 | 2 |
### Publish Periode
| Decade | Words | Documents | Words/Document |
|---------:|--------------:|------------:|-----------------:|
| 2020 | 4,090,213,596 | 10,934,550 | 523 |
| 2010 | 355,391,417 | 2,415,563 | 1,511 |
| 2000 | 447,853,330 | 1,705,354 | 2,773 |
| 1990 | 767,392,364 | 2,513,364 | 3,051 |
| 1980 | 160,980,586 | 538,665 | 3,011 |
| 1970 | 186,113,674 | 829,646 | 2,222 |
| 1960 | 149,421,535 | 834,219 | 1,807 |
| 1950 | 97,863,608 | 478,628 | 2,041 |
| 1940 | 122,648,278 | 570,154 | 2,307 |
| 1930 | 35,635,053 | 697 | 508,420 |
| 1920 | 50,381,418 | 1,049 | 484,836 |
| 1910 | 62,599,984 | 1,221 | 504,678 |
| 1900 | 60,019,080 | 1,130 | 527,329 |
| 1890 | 86,781,861 | 1,777 | 485,878 |
| 1880 | 58,546,570 | 1,064 | 553,442 |
| 1870 | 26,492,662 | 632 | 407,191 |
| 1860 | 39,176,930 | 698 | 543,151 |
| 1850 | 53,801,490 | 846 | 634,038 |
| 1840 | 30,434,939 | 522 | 581,593 |
| 1830 | 18,189,838 | 368 | 481,719 |
| 1820 | 4,721,154 | 144 | 338,350 |
| 1810 | 910,798 | 57 | 124,880 |
## Considerations for Using the Data
This corpus contains data under copyright and may not be used outside the National Library of Norway. The dataset should not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
[Freddy Wetjen](mailto:[email protected]) and [Per Egil Kummervold](mailto:[email protected])
## License
Various licences apply to different parts of the corpus. Every document in the corpus has a tag indicating which **"doc_type"** it belongs to. If you are unable to accept any of the licenses, you should filter out the **"doc_type"** with a conflicting license.
| Doc_type | License |
| :-------- | :------------- |
| government_nb, government_nn, parliament, publicreports, lovdata_cd_\*, maalfrid_\* | [NLOD 2.0](https://data.norge.no/nlod/en/2.0/)|
| newspapers_ocr, newspapers_pdf, books| [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)|
| newspapers_online_nb, newspapers_online_nn | [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/)|
| opensubtitles, wikipedia | [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) |
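Filtering on **"doc_type"** can be done with a simple prefix test, since the wildcard rows in the table above (`lovdata_cd_*`, `maalfrid_*`) map naturally to prefix matching. A minimal sketch, here dropping the CC BY-NC 2.0 sources as an example (adjust the prefixes to your own license requirements):

```python
# Keep only documents whose doc_type carries an acceptable license.
# Example: exclude the CC BY-NC 2.0 sources (newspapers_online_nb/nn).
EXCLUDED_PREFIXES = ("newspapers_online_",)

def is_allowed(doc):
    return not doc["doc_type"].startswith(EXCLUDED_PREFIXES)

docs = [
    {"doc_type": "parliament", "text": "..."},
    {"doc_type": "newspapers_online_nb", "text": "..."},
    {"doc_type": "maalfrid_uio", "text": "..."},
]
kept = [d for d in docs if is_allowed(d)]
print([d["doc_type"] for d in kept])  # ['parliament', 'maalfrid_uio']
```

With the 🤗 Datasets library, the same predicate can be passed to `dataset.filter(is_allowed)`.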
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = {Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model},
    author = {Kummervold, Per E and
      De la Rosa, Javier and
      Wetjen, Freddy and
      Brygfjeld, Svein Arne},
    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
    year = {2021},
    address = {Reykjavik, Iceland (Online)},
    publisher = {Link{\"o}ping University Electronic Press, Sweden},
    url = {https://aclanthology.org/2021.nodalida-main.3},
    pages = {20--29},
    abstract = {In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus, such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.},
}
```
| NbAiLab/NCC | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:en",
"language:nb",
"language:no",
"language:nn",
"language:sv",
"language:da",
"language:is",
"language:fo",
"license:other",
"arxiv:2104.09617",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en", "nb", false, "nn", "sv", "da", "is", "fo"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["2G<n<1B"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "NCC", "extra_gated_prompt": "The Directive on Copyright in the Digital Single Market, which came into force on June 6 2019, amends the European Union copyright and database legislation and allows for Text and Data Mining (TDM) activities for research organizations and cultural heritage institutions. Under the terms of the aforementioned directive, by clicking on 'Access repository' you agree on using the text and data contained in this dataset for non-commercial scientific purposes only."} | 2023-11-17T12:48:38+00:00 | [
"2104.09617"
] | [
"en",
"nb",
"no",
"nn",
"sv",
"da",
"is",
"fo"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-2G<n<1B #source_datasets-original #language-English #language-Norwegian Bokmål #language-Norwegian #language-Norwegian Nynorsk #language-Swedish #language-Danish #language-Icelandic #language-Faroese #license-other #arxiv-2104.09617 #region-us
| Dataset Card for NbAiLab/NCC
============================
Table of Contents
-----------------
* Dataset Description
* Dataset Summary
* Data Fields
* Dataset Creation
* Statistics
* Document Types
* Languages
* Publish Periode
* Considerations for Using the Data
* Social Impact of Dataset
* Discussion of Biases
* Other Known Limitations
* Additional Information
* Dataset Curators
* Licensing Information
* Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Freddy Wetjen
The Norwegian Colossal Corpus is a collection of multiple smaller Norwegian corpora suitable for training large language models. We have done extensive cleaning on the datasets, and have made them available in a common format. The total size of the NCC is currently 45GB.
How to Use
----------
Download Data
-------------
If you do not want to use the HuggingFace Dataset-library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
List of all the files.
* train-shard-0001-of-0046
* train-shard-0002-of-0046
* train-shard-0003-of-0046
* train-shard-0004-of-0046
* train-shard-0005-of-0046
* train-shard-0006-of-0046
* train-shard-0007-of-0046
* train-shard-0008-of-0046
* train-shard-0009-of-0046
* train-shard-0010-of-0046
* train-shard-0011-of-0046
* train-shard-0012-of-0046
* train-shard-0013-of-0046
* train-shard-0014-of-0046
* train-shard-0015-of-0046
* train-shard-0016-of-0046
* train-shard-0017-of-0046
* train-shard-0018-of-0046
* train-shard-0019-of-0046
* train-shard-0020-of-0046
* train-shard-0021-of-0046
* train-shard-0022-of-0046
* train-shard-0023-of-0046
* train-shard-0024-of-0046
* train-shard-0025-of-0046
* train-shard-0026-of-0046
* train-shard-0027-of-0046
* train-shard-0028-of-0046
* train-shard-0029-of-0046
* train-shard-0030-of-0046
* train-shard-0031-of-0046
* train-shard-0032-of-0046
* train-shard-0033-of-0046
* train-shard-0034-of-0046
* train-shard-0035-of-0046
* train-shard-0036-of-0046
* train-shard-0037-of-0046
* train-shard-0038-of-0046
* train-shard-0039-of-0046
* train-shard-0040-of-0046
* train-shard-0041-of-0046
* train-shard-0042-of-0046
* train-shard-0043-of-0046
* train-shard-0044-of-0046
* train-shard-0045-of-0046
* train-shard-0046-of-0046
* validation-shard-0001-of-0001
### Dataset Summary
The NCC dataset contains json lines with language training data. Here is an example json line:
Data Fields
-----------
### Dataset Creation
We are providing a train and a validation split. The validation split is a single 1GB file, while the train split is sharded into 1GB chunks.
All files are gzipped.
All files are gzipped.
Build date: 21012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
### Summary
### Document Types
### Languages
### Publish Periode
Considerations for Using the Data
---------------------------------
This corpus contains data under copyright and may not be used outside the National Library of Norway. The dataset should not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
Freddy Wetjen and Per Egil Kummervold
License
-------
Various licences apply to different parts of the corpus. Every document in the corpus has a tag indicating which "doc\_type" it belongs to. If you are unable to accept any of the licenses, you should filter out the "doc\_type" with a conflicting license.
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
| [] | [] |
026b1dfb9e5c256ff47163e04a686c2332603868 | # Dataset Card for NbAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Periode](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:[email protected])
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset.
This repository contains a version of the NPSC in the 🤗 Dataset Format. Note that the official release of the dataset, which can be found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/), contains more information than the version found here, including word-level metadata, metadata about the speakers, and detailed documentation.
## How to Use
```python
# Loads the 16K Bokmål corpus in streaming mode
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)
```
## Dataset Summary
The NPSC dataset contains JSON lines with language training data. The data loader will add audio data to this structure. Here is an example json object:
```json
{
"sentence_id": 49853,
"sentence_order": 0,
"speaker_id": 32,
"meeting_date": "20170110",
"speaker_name": "Olemic Thommessen",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 320246,
"end_time": 323590,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {"path": "audio/20170110-095504_320246_323590.wav","array": [.......]}
}
```
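As a quick sanity check when browsing records, the two timing fields give the clip length directly (both are in milliseconds, relative to the full meeting audio). A minimal sketch using the example record above, hard-coded here for illustration:

```python
# Derive the clip duration from the NPSC timing fields.
# This is the example record from this card, with the audio array omitted.
record = {
    "sentence_id": 49853,
    "sentence_text": "Stortingets møte er lovlig satt",
    "start_time": 320246,  # ms, relative to the start of the meeting audio
    "end_time": 323590,    # ms
}

duration_ms = record["end_time"] - record["start_time"]
duration_s = duration_ms / 1000
print(f"Sentence {record['sentence_id']} lasts {duration_s:.3f} s")  # 3.344 s
```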
## Data Fields
|**Key** | **Type** | **Description** |
|:-----------|:------------|:------------|
|**sentence_id:** | Integer | Unique identifier of the sentence |
|**sentence_order** | Integer | A number indicating the order of the sentences in the meeting |
|**speaker_id** | Integer | The ID of the speaker. This can be linked to the original dataset containing thorough demographic and dialectal information about the speaker. |
|**meeting_date** | String | The date for the meeting in the format __yyyymmdd__ |
| **speaker_name** | String | Name of the speaker. All speakers were members of the Norwegian Parliament or members of the Norwegian Government at the meeting date |
| **sentence_text** | String | The sentence text. The transcribed text string of the sentence in non-normalized form. This is the text of the manual transcriptions, without any postprocessing (apart from corrections of known errors). It may contain interrupted words, non-standard words and function words with a pronunciation deviating from the written form. Detailed metadata about the words in the sentence can be found in the word-tokenized version of the corpus in the official release of the dataset. |
| **sentence_language_code** | String | The language code of the sentence. The following alternatives exist in the file: ['nb-NO', 'nn-NO', 'en-US']|
| **text** | String | sentence text. This is a copy of "sentence_text". It is included here to make it more convenient to interleave with other datasets.|
| **start_time** | Integer | The start time of the sentence in milliseconds. This time is relative to the start of audiofile of the entire meeting, which can be accessed in the official release |
| **end_time** | Integer | End time. See comment above. |
| **normsentence_text** | String | Normalized sentence text. In this version of the transcription, numbers and dates are written in digits in standardized formats, and common abbreviations are used. These modifications to the original transcriptions are produced automatically using normalization grammars |
| **transsentence_text** | String | Translated sentence text. Whenever the original transcription is in Bokmål (nb-NO), this field contains a machine-translated version in Nynorsk (nn-NO), and vice versa |
| **translated** | Integer | A flag indicating whether a machine-translated version has been produced or not. Sentences in en-US have not been translated |
| **audio** | Array | The dataloader will encode the associated audio files and provide them as an array containing 'path', 'sound array', 'sampling_rate' |
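Because every Bokmål sentence carries a machine-translated Nynorsk counterpart (and vice versa), a Nynorsk-only text stream can be assembled regardless of the original transcription language. Here is a minimal sketch of that selection logic; the records below are illustrative stand-ins, not real corpus rows:

```python
# Sketch: pick the Nynorsk form of each sentence, falling back to the
# original text when the sentence was transcribed in Nynorsk already.
# The records are illustrative stand-ins for real NPSC rows.
records = [
    {"sentence_language_code": "nb-NO", "text": "Stortingets møte er lovlig satt",
     "transsentence_text": "Stortingets møte er lovleg sett", "translated": 1},
    {"sentence_language_code": "nn-NO", "text": "Møtet er heva",
     "transsentence_text": "Møtet er hevet", "translated": 1},
    {"sentence_language_code": "en-US", "text": "Order, please",
     "transsentence_text": "", "translated": 0},
]

def nynorsk_text(rec):
    """Return the Nynorsk rendering of a record, or None if unavailable."""
    if rec["sentence_language_code"] == "nn-NO":
        return rec["text"]  # already Nynorsk
    if rec["translated"]:   # a machine-translated counterpart exists
        return rec["transsentence_text"]
    return None             # e.g. en-US sentences are never translated

nynorsk = [t for t in map(nynorsk_text, records) if t is not None]
print(nynorsk)
```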
#### Initial Data Collection
The procedure for the dataset creation is described in detail in our paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140,3 hours|
| Duration, pauses not included | 125,7 hours |
| Word count | 1,2 million |
| Sentence count | 64.531 |
| Language distribution | Nynorsk: 12,8%|
| | Bokmål: 87,2%|
| Gender distribution | Female: 38,3% |
| | Male: 61,7% |
## Considerations for Using the Data
This corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restraints.
### Dataset Creators and Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. [Javier de la Rosa](mailto:[email protected]), [Freddy Wetjen](mailto:[email protected]), [Per Egil Kummervold](mailto:[email protected]), and [Andre Kaasen](mailto:[email protected]) all contributed to making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
## License
The sound and the transcriptions are released under the [CC-ZERO-license](https://creativecommons.org/publicdomain/zero/1.0/). The curation of the HuggingFace Dataset is released under [CC-BY-SA-3-license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
The following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:
```
@inproceedings{solberg2022norwegian,
title={The Norwegian Parliamentary Speech Corpus},
author={Solberg, Per Erik and Ortiz, Pablo},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
url={http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.106.pdf},
year={2022}
}
```
| NbAiLab/NPSC | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:no",
"language:nb",
"language:nn",
"license:cc0-1.0",
"speech-modeling",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["no", "nb", "nn"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["2G<n<1B"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "pretty_name": "NPSC", "tags": ["speech-modeling"]} | 2023-04-25T08:52:08+00:00 | [] | [
"no",
"nb",
"nn"
] | TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-2G<n<1B #source_datasets-original #language-Norwegian #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc0-1.0 #speech-modeling #region-us
| Dataset Card for NbAiLab/NPSC
=============================
Table of Contents
-----------------
* Dataset Description
* Dataset Summary
* Data Fields
* Dataset Creation
* Statistics
* Document Types
* Languages
* Publish Periode
* Considerations for Using the Data
* Social Impact of Dataset
* Discussion of Biases
* Other Known Limitations
* Additional Information
* Dataset Curators
* Licensing Information
* Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Per Erik Solberg
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset.
This repository contains a version of the NPSC in the Dataset Format. Note that the official release of the dataset, which can be found in the repository of the Norwegian Language Bank, contains more information than the version found here, including word-level metadata, metadata about the speakers, and detailed documentation.
How to Use
----------
Dataset Summary
---------------
The NPSC dataset contains JSON lines with language training data. The data loader will add audio data to this structure. Here is an example json object:
Data Fields
-----------
#### Initial Data Collection
The procedure for the dataset creation is described in detail in our paper.
Statistics
----------
Considerations for Using the Data
---------------------------------
This corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restraints.
### Dataset Creators and Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. Javier de la Rosa, Freddy Wetjen, Per Egil Kummervold, and Andre Kaasen all contributed to making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
License
-------
The sound and the transcriptions are released under the CC-ZERO-license. The curation of the HuggingFace Dataset is released under CC-BY-SA-3-license.
The following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:
| [
"#### Initial Data Collection\n\n\nThe procedure for the dataset creation is described in detail in our paper.\n\n\nStatistics\n----------\n\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThis corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restraints.",
"### Dataset Creators and Curators\n\n\nThe content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. Javier de la Rosa, Freddy Wetjen, Per Egil Kummervold, and Andre Kaasen all contributed to making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.\n\n\nLicense\n-------\n\n\nThe sound and the transcriptions are released under the CC-ZERO-license. The curation of the HuggingFace Dataset is released under CC-BY-SA-3-license.\n\n\nThe following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-2G<n<1B #source_datasets-original #language-Norwegian #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc0-1.0 #speech-modeling #region-us \n",
"#### Initial Data Collection\n\n\nThe procedure for the dataset creation is described in detail in our paper.\n\n\nStatistics\n----------\n\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThis corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restraints.",
"### Dataset Creators and Curators\n\n\nThe content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. Javier de la Rosa, Freddy Wetjen, Per Egil Kummervold, and Andre Kaasen all contributed to making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.\n\n\nLicense\n-------\n\n\nThe sound and the transcriptions are released under the CC-ZERO-license. The curation of the HuggingFace Dataset is released under CC-BY-SA-3-license.\n\n\nThe following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:"
] |
0c85e09ec2892d276417ea695f7a25db525cc687 | # Dataset Card for NBAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Periode](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:[email protected])
The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway.
NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed to either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, there is a significant amount of metadata included in the original corpus. Through the speaker id there is additional information about the speaker, such as gender, age, and place of birth (i.e., dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings.
In total, the corpus contains sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences or 1.2 million words.
This corpus is an adaptation of the original corpus made for efficient ASR training. For simplicity and portability, a few of the original dataset's features, like the token transcription, are omitted. You can find the complete dataset at [the Resource Catalogue at Språkbanken](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/).
## How to Use (This needs to be edited of course)
```python
from datasets import load_dataset
data = load_dataset("nb/NPSC", streaming=True)
```
## Data Fields
Currently there are two versions included in this repo.
### Version A
This version has a short list of the metadata and includes the audio (48k mp3) encoded as a float32 array in the dataset itself.
The current dataloader script is associated with this version.
One line in train.json looks like this:
```json
{
"sentence_id": 7309,
"sentence_order": 0,
"speaker_id": 1,
"speaker_name": "Marit Nybakk",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 302650,
"end_time": 306000,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {
"path": "audio/20170207-095506_302650_306000.wav",
"array": [
24,
25,
50,
(...)
],
"sampling_rate": 48000
}
}
```
### Version B
This version does not contain the audio encoded in the dataset. Instead, it has the audio files placed in sub-directories. There are currently samples in both clips_48k_wav and clips_16k_mp3. Only the base filename is referenced in the dataset. Please note that there are both sentence-based audio clips as well as meeting-based audio clips. The dataset contains referrals to both; the latter referral has start and stop time as well.
One line in the train/metadata.json looks like this:
```json
{
"meeting_date": "20170207",
"full_audio_file": "20170207-095506",
"proceedings_file": "20170207-095506.ref",
"duration": 4442474,
"transcriber_id": 1,
"reviewer_id": 2,
"data_split": "test",
"speaker_name": "Marit Nybakk",
"speaker_id": 1,
"sentence_id": 7309,
"sentence_language_code": "nb-NO",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_order": 0,
"audio_file": "20170207-095506_302650_306000",
"start_time": 302650,
"end_time": 306000,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1
}
```
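The per-sentence audio filename in Version B follows directly from the meeting recording name and the timing fields. A minimal sketch reconstructing it, assuming the `{full_audio_file}_{start_time}_{end_time}` pattern visible in the example above; the directory and extension are illustrative:

```python
# Sketch: rebuild the sentence-level clip filename from Version B metadata.
# Assumes the "<full_audio_file>_<start_time>_<end_time>" naming pattern seen
# in the example record; the clips_48k_wav directory and .wav extension are
# assumptions for illustration.
meta = {
    "full_audio_file": "20170207-095506",
    "start_time": 302650,
    "end_time": 306000,
}

clip_stem = f"{meta['full_audio_file']}_{meta['start_time']}_{meta['end_time']}"
wav_path = f"clips_48k_wav/{clip_stem}.wav"
print(wav_path)  # clips_48k_wav/20170207-095506_302650_306000.wav
```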
### Dataset Creation
We are providing a **train**, **dev** and **test** split. These are the same as in the original corpus.
Build date: 20012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in the paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140,3 hours|
| Duration, pauses not included | 125,7 hours |
| Word count | 1,2 million |
| Sentence count | 64.531 |
| Language distribution | Nynorsk: 12,8%|
| | Bokmål: 87,2%|
| Gender distribution | Female: 38,3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data and is allowed to be used outside the National Library of Norway for speech recognition technology purposes.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
[Per Erik Solberg](mailto:[email protected])
[Freddy Wetjen](mailto:[email protected]), [Andre Kaasen](mailto:[email protected]) and [Per Egil Kummervold](mailto:[email protected]) have contributed to porting it to the Hugging Face Dataset format.
### Licensing Information
Licensed for use outside the National Library of Norway.
## License
[CC-ZERO](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
ANDRE: TO BE DONE
```
| NbAiLab/NPSC_test | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:nb",
"language:no",
"language:nn",
"license:cc0-1.0",
"speech-modeling",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["nb", "no", "nn"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["2G<n<1B"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "task_ids": ["speech-modeling"], "pretty_name": "NPSC", "tags": ["speech-modeling"]} | 2022-11-07T12:37:31+00:00 | [] | [
"nb",
"no",
"nn"
] | TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-2G<n<1B #source_datasets-original #language-Norwegian Bokmål #language-Norwegian #language-Norwegian Nynorsk #license-cc0-1.0 #speech-modeling #region-us
| Dataset Card for NBAiLab/NPSC
=============================
Table of Contents
-----------------
* Dataset Description
* Dataset Summary
* Data Fields
* Dataset Creation
* Statistics
* Document Types
* Languages
* Publish Periode
* Considerations for Using the Data
* Social Impact of Dataset
* Discussion of Biases
* Other Known Limitations
* Additional Information
* Dataset Curators
* Licensing Information
* Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Per Erik Solberg
The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway.
NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed to either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, there is a significant amount of metadata included in the original corpus. Through the speaker id there is additional information about the speaker, such as gender, age, and place of birth (i.e., dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings.
In total, the corpus contains sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences or 1.2 million words.
This corpus is an adaptation of the original corpus made for efficient ASR training. For simplicity and portability, a few of the original dataset's features, like the token transcription, are omitted. You can find the complete dataset at the Resource Catalogue at Språkbanken.
How to Use (This needs to be edited of course)
----------------------------------------------
Data Fields
-----------
Currently there are two versions included in this repo.
### Version A
This version has a short list of the metadata and includes the audio (48k mp3) encoded as a float32 array in the dataset itself.
The current dataloader script is associated with this version.
One line in URL looks like this:
### Version B
This version does not contain the audio encoded in the dataset. Instead, it has the audio files placed in sub-directories. There are currently samples in both clips\_48k\_wav and clips\_16k\_mp3. Only the base filename is referenced in the dataset. Please note that there are both sentence-based audio clips as well as meeting-based audio clips. The dataset contains referrals to both; the latter referral has start and stop time as well.
One line in the train/URL looks like this:
### Dataset Creation
We are providing a train, dev and test split. These are the same as in the original corpus.
Build date: 20012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in the paper.
Statistics
----------
Considerations for Using the Data
---------------------------------
This corpus contains speech data and is allowed to be used outside the National Library of Norway for speech recognition technology purposes.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
Per Erik Solberg
Freddy Wetjen, Andre Kaasen and Per Egil Kummervold have contributed to porting it to the Hugging Face Dataset format.
### Licensing Information
Licensed for use outside the National Library of Norway.
License
-------
CC-ZERO(URL
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
| [
"### Version A\n\n\nThis version has a short list of the metadata and includes the audio (48k mp3) encoded as a float32 array in the dataset itself.\n\n\nThe current dataloader script is associated with this version.\n\n\nOne line in URL looks like this:",
"### Version B\n\n\nThis version does not contain the audio encoded in the dataset. Instead, it has the audio files placed in sub-directories. There are currently samples in both clips\\_48k\\_wav and clips\\_16k\\_mp3. Only the base filename is referenced in the dataset. Please note that there are both sentence-based audio clips as well as meeting-based audio clips. The dataset contains referrals to both; the latter referral has start and stop time as well.\n\n\nOne line in the train/URL looks like this:",
"### Dataset Creation\n\n\nWe are providing a train, dev and test split. These are the same as in the original corpus.\n\n\nBuild date: 20012022",
"#### Initial Data Collection and Curation\n\n\nThe procedure for the dataset creation is described in detail in the paper.\n\n\nStatistics\n----------\n\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThis corpus contains speech data and is allowed to be used outside the National Library of Norway for speech recognition technology purposes.",
"### Discussion of Biases\n\n\nPlease refer to our paper.",
"### Dataset Curators\n\n\nPer Erik Solberg\n\n\nFreddy Wetjen, Andre Kaasen and Per Egil Kummervold have contributed to porting it to the Hugging Face Dataset format.",
"### Licensing Information\n\n\nLicensed for use outside the National Library of Norway.\n\n\nLicense\n-------\n\n\nCC-ZERO(URL\n\n\nWe are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-2G<n<1B #source_datasets-original #language-Norwegian Bokmål #language-Norwegian #language-Norwegian Nynorsk #license-cc0-1.0 #speech-modeling #region-us \n",
"### Version A\n\n\nThis version has a short list of the metadata and includes the audio (48k mp3) encoded as a float32 array in the dataset itself.\n\n\nThe current dataloader script is associated with this version.\n\n\nOne line in URL looks like this:",
"### Version B\n\n\nThis version does not contain the audio encoded in the dataset. Instead, it has the audio files placed in sub-directories. There are currently samples in both clips\\_48k\\_wav and clips\\_16k\\_mp3. Only the base filename is referenced in the dataset. Please note that there are both sentence-based audio clips as well as meeting-based audio clips. The dataset contains referrals to both; the latter referral has start and stop time as well.\n\n\nOne line in the train/URL looks like this:",
"### Dataset Creation\n\n\nWe are providing a train, dev and test split. These are the same as in the original corpus.\n\n\nBuild date: 20012022",
"#### Initial Data Collection and Curation\n\n\nThe procedure for the dataset creation is described in detail in the paper.\n\n\nStatistics\n----------\n\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThis corpus contains speech data and is allowed to be used outside the National Library of Norway for speech recognition technology purposes.",
"### Discussion of Biases\n\n\nPlease refer to our paper.",
"### Dataset Curators\n\n\nPer Erik Solberg\n\n\nFreddy Wetjen, Andre Kaasen and Per Egil Kummervold have contributed to porting it to the Hugging Face Dataset format.",
"### Licensing Information\n\n\nLicensed for use outside the National Library of Norway.\n\n\nLicense\n-------\n\n\nCC-ZERO(URL\n\n\nWe are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:"
] |
539cf25078d1136058e2580552acf5d8b5f26248 |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** [A Fine-grained Sentiment Dataset for Norwegian](https://www.aclweb.org/anthology/2020.lrec-1.618/)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
Aggregated NoReC_fine: A Fine-grained Sentiment Dataset for Norwegian.
This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC_fine and removing sentences with conflicting or no sentiment.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Norwegian.
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```{'label': 0, 'text': 'Verre er det med slagsmålene .'}```
### Data Fields
- `id`: index of the example
- `text`: Text of a sentence
- `label`: The sentiment label. Here
- 0 = negative
- 1 = positive
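When inspecting examples, the integer label can be mapped back to its name; a minimal sketch using the instance shown above, with the mapping taken from this card:

```python
# Sketch: map the integer sentiment label back to a readable name.
# The label set (0 = negative, 1 = positive) is taken from this card.
LABELS = {0: "negative", 1: "positive"}

example = {"label": 0, "text": "Verre er det med slagsmålene ."}
print(f"{LABELS[example['label']]}: {example['text']}")
```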
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | Train | Valid | Test |
| ----- | ------ | ----- | ----- |
| Number of examples | 2675 | 516 | 417 |
## Dataset Creation
This dataset is based largely on the original data described in the paper _A Fine-Grained Sentiment Dataset for Norwegian_ by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, [paper available](https://www.aclweb.org/anthology/2020.lrec-1.618). However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```latex
@misc{sheng2020investigating,
title={Investigating Societal Biases in a Poetry Composition System},
author={Emily Sheng and David Uthus},
year={2020},
eprint={2011.02686},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| NbAiLab/norec_agg | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2011.02686",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"]} | 2022-07-01T18:53:24+00:00 | [
"2011.02686"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2011.02686 #region-us
| Dataset Card Creation Guide
===========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: N/A
* Repository: GitHub
* Paper: A Fine-grained Sentiment Dataset for Norwegian
* Leaderboard: N/A
* Point of Contact: -
### Dataset Summary
Aggregated NoReC\_fine: A Fine-grained Sentiment Dataset for Norwegian.
This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC\_fine and removing sentences with conflicting or no sentiment.
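The aggregation rule can be sketched as follows. This is a hypothetical illustration of the rule described above, not the laboratory's actual script; the `aggregate` helper and its polarity-string input format are assumptions made for the example:

```python
# Hypothetical sketch of the aggregation described above: a sentence is kept
# only if all of its fine-grained annotations agree on a single polarity;
# sentences with conflicting or no sentiment are dropped (returned as None).
def aggregate(sentence_polarities):
    unique = set(sentence_polarities)
    if len(unique) != 1:
        return None  # conflicting or no sentiment -> sentence is removed
    return 1 if unique.pop() == "positive" else 0

print(aggregate(["positive", "positive"]))  # 1 (positive)
print(aggregate(["positive", "negative"]))  # None (conflicting, dropped)
```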
### Supported Tasks and Leaderboards
### Languages
The text in the dataset is in Norwegian.
Dataset Structure
-----------------
### Data Instances
Example of one instance in the dataset.
### Data Fields
* 'id': index of the example
* 'text': Text of a sentence
* 'label': The sentiment label. Here
+ 0 = negative
+ 1 = positive
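For illustration, the numeric labels above can be mapped back to readable names like so (a minimal sketch; the `decode_example` helper is not part of the dataset):

```python
# Map the numeric sentiment labels described above back to readable names.
LABEL_NAMES = {0: "negative", 1: "positive"}

def decode_example(example):
    """Return a copy of the example with a human-readable label name added."""
    return {**example, "label_name": LABEL_NAMES[example["label"]]}

sample = {"id": 0, "text": "Dette er en flott film", "label": 1}
print(decode_example(sample)["label_name"])  # positive
```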
### Data Splits
The dataset is split into a 'train', 'validation', and 'test' split with the following sizes:
Dataset Creation
----------------
This dataset is based largely on the original data described in the paper *A Fine-Grained Sentiment Dataset for Norwegian* by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, paper available. However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset.
Additional Information
----------------------
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
| [
"### Dataset Summary\n\n\nAggregated NoRec\\_fine: A Fine-grained Sentiment Dataset for Norwegian.\nThis dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC\\_fine and removing sentences with conflicting or no sentiment.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe text in the dataset is in Norwegian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample of one instance in the dataset.",
"### Data Fields\n\n\n* 'id': index of the example\n* 'text': Text of a sentence\n* 'label': The sentiment label. Here\n\t+ 0 = negative\n\t+ 1 = positive",
"### Data Splits\n\n\nThe dataset is split into a 'train', 'validation', and 'test' split with the following sizes:\n\n\n\nDataset Creation\n----------------\n\n\nThis dataset is based largely on the original data described in the paper *A Fine-Grained Sentiment Dataset for Norwegian* by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, paper available. However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International License"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2011.02686 #region-us \n",
"### Dataset Summary\n\n\nAggregated NoRec\\_fine: A Fine-grained Sentiment Dataset for Norwegian.\nThis dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC\\_fine and removing sentences with conflicting or no sentiment.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe text in the dataset is in Norwegian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample of one instance in the dataset.",
"### Data Fields\n\n\n* 'id': index of the example\n* 'text': Text of a sentence\n* 'label': The sentiment label. Here\n\t+ 0 = negative\n\t+ 1 = positive",
"### Data Splits\n\n\nThe dataset is split into a 'train', 'validation', and 'test' split with the following sizes:\n\n\n\nDataset Creation\n----------------\n\n\nThis dataset is based largely on the original data described in the paper *A Fine-Grained Sentiment Dataset for Norwegian* by L. Øvrelid, P. Mæhlum, J. Barnes, and E. Velldal, accepted at LREC 2020, paper available. However, we have since added annotations for another 3476 sentences, increasing the overall size and scope of the dataset.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International License"
] |
270602110690593f77f68735c8b17b5a50ce9182 |
# Dataset Card for NorNE: Norwegian Named Entities
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [NorNE](https://github.com/ltgoslo/norne/)
- **Repository:** [Github](https://github.com/ltgoslo/norne/)
- **Paper:** https://arxiv.org/abs/1911.12146
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
### Supported Tasks and Leaderboards
NorNE adds named entity annotations on top of the Norwegian Dependency Treebank.
### Languages
Both Norwegian Bokmål (`bokmaal`) and Nynorsk (`nynorsk`) are supported as different configs in this dataset. An extra config for the combined languages is also included (`combined`). See the Annotation section for details on accessing reduced tag sets for the NER feature.
## Dataset Structure
Each entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.
### Data Instances
An example of the `train` split of the `bokmaal` config.
```python
{'idx': '000001',
'lang': 'bokmaal',
'lemmas': ['lam', 'og', 'piggvar', 'på', 'bryllupsmeny'],
'ner_tags': [0, 0, 0, 0, 0],
'pos_tags': [0, 9, 0, 5, 0],
'text': 'Lam og piggvar på bryllupsmenyen',
'tokens': ['Lam', 'og', 'piggvar', 'på', 'bryllupsmenyen']}
```
### Data Fields
Each entry is annotated with the next fields:
- `idx` (`int`), text (sentence) identifier from the NorNE dataset
- `lang` (`str`), language variety, either `bokmaal`, `nynorsk` or `combined`
- `text` (`str`), plain text
- `tokens` (`List[str]`), list of tokens extracted from `text`
- `lemmas` (`List[str]`), list of lemmas extracted from `tokens`
- `ner_tags` (`List[int]`), list of numeric NER tags for each token in `tokens`
- `pos_tags` (`List[int]`), list of numeric PoS tags for each token in `tokens`
An example DataFrame obtained from the dataset:
<table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>idx</th>
<th>lang</th>
<th>text</th>
<th>tokens</th>
<th>lemmas</th>
<th>ner_tags</th>
<th>pos_tags</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>000001</td>
<td>bokmaal</td>
<td>Lam og piggvar på bryllupsmenyen</td>
<td>[Lam, og, piggvar, på, bryllupsmenyen]</td>
<td>[lam, og, piggvar, på, bryllupsmeny]</td>
<td>[0, 0, 0, 0, 0]</td>
<td>[0, 9, 0, 5, 0]</td>
</tr>
<tr>
<th>1</th>
<td>000002</td>
<td>bokmaal</td>
<td>Kamskjell, piggvar og lammefilet sto på menyen...</td>
<td>[Kamskjell, ,, piggvar, og, lammefilet, sto, p...</td>
<td>[kamskjell, $,, piggvar, og, lammefilet, stå, ...</td>
<td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]</td>
<td>[0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1]</td>
</tr>
<tr>
<th>2</th>
<td>000003</td>
<td>bokmaal</td>
<td>Og til dessert: Parfait à la Mette-Marit.</td>
<td>[Og, til, dessert, :, Parfait, à, la, Mette-Ma...</td>
<td>[og, til, dessert, $:, Parfait, à, la, Mette-M...</td>
<td>[0, 0, 0, 0, 7, 8, 8, 8, 0]</td>
<td>[9, 2, 0, 1, 10, 12, 12, 10, 1]</td>
</tr>
</tbody>
</table>
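As a minimal illustration of how the parallel lists line up token by token, the first example above can be zipped (plain Python, no extra dependencies; the numeric-to-name tag mapping is omitted here):

```python
# Values taken from the first example row above.
tokens = ["Lam", "og", "piggvar", "på", "bryllupsmenyen"]
pos_tags = [0, 9, 0, 5, 0]

# Each token is aligned with the PoS tag at the same list position.
for token, tag in zip(tokens, pos_tags):
    print(f"{token}\t{tag}")
```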
### Data Splits
There are three splits: `train`, `validation` and `test`.
| Config | Split | Total |
| :---------|-------------:|-------:|
| `bokmaal` | `train` | 15696 |
| `bokmaal` | `validation` | 2410 |
| `bokmaal` | `test` | 1939 |
| `nynorsk` | `train` | 14174 |
| `nynorsk` | `validation` | 1890 |
| `nynorsk` | `test` | 1511 |
| `combined`| `train`      | 29870  |
| `combined`| `validation` | 4300 |
| `combined`| `test` | 3450 |
## Dataset Creation
### Curation Rationale
1. A _name_ in this context is close to [Saul Kripke's definition of a name](https://en.wikipedia.org/wiki/Saul_Kripke#Naming_and_Necessity),
in that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. "Regjeringen" (en. "Government")).
2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,
3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen
(following [Markert and Nissim, 2002](http://www.lrec-conf.org/proceedings/lrec2002/pdf/11.pdf)).
For more details, see the "Annotation Guidelines.pdf" distributed with the corpus.
### Source Data
Data was collected using blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.
#### Initial Data Collection and Normalization
The texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions
and hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.
The treebank consists of two parts, one part in Norwegian Bokmål (`nob`) and one part in Norwegian Nynorsk (`nno`).
Both parts contain around 300,000 tokens and are a mix of different non-fictional genres.
See the [NDT webpage](https://www.nb.no/sprakbanken/show?serial=sbr-10) for more details.
### Annotations
The following types of entities are annotated:
- **Person (`PER`):** Real or fictional characters and animals
- **Organization (`ORG`):** Any collection of people, such as firms, institutions, organizations, music groups,
sports teams, unions, political parties etc.
- **Location (`LOC`):** Geographical places, buildings and facilities
- **Geo-political entity (`GPE`):** Geographical regions defined by political and/or social groups.
A GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people
- **Product (`PROD`):** Artificially produced entities are regarded as products. This may include more abstract entities, such as speeches,
radio shows, programming languages, contracts, laws and ideas.
- **Event (`EVT`):** Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.
- **Derived (`DRV`):** Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are "Brann-treneren" ("the Brann coach") or "Oslo-mannen" ("the man from Oslo").
- **Miscellaneous (`MISC`):** Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Products, whereas things occurring naturally or spontaneously are of type Miscellaneous.
Furthermore, all `GPE` entities are additionally sub-categorized as being either `ORG` or `LOC`, with the two annotation levels separated by an underscore:
- `GPE_LOC`: Geo-political entity, with a locative sense (e.g. "John lives in _Spain_")
- `GPE_ORG`: Geo-political entity, with an organisation sense (e.g. "_Spain_ declined to meet with Belgium")
The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:
- 7 types, deleting `_GPE`: **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 8 types, deleting `LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 9 types, keeping all types: **`ORG`**, **`LOC`**, **`GPE_LOC`**, **`GPE_ORG`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
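The conflation described above can be sketched as a small helper. This is an illustrative assumption, not code shipped with the corpus, and IOB prefixes such as `B-`/`I-` are ignored for simplicity:

```python
def reduce_tag(tag: str, num_types: int = 9) -> str:
    """Reduce a 9-type NER tag to the 7- or 8-type scheme described above."""
    if not tag.startswith("GPE_"):
        return tag
    if num_types == 7:
        return tag.split("_", 1)[1]  # GPE_LOC -> LOC, GPE_ORG -> ORG
    if num_types == 8:
        return "GPE"                 # GPE_LOC -> GPE, GPE_ORG -> GPE
    return tag                       # keep the full 9-type tag

print(reduce_tag("GPE_LOC", num_types=7))  # LOC
print(reduce_tag("GPE_ORG", num_types=8))  # GPE
```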
The class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotations):
| Type | Train | Dev | Test | Total |
| :--------|-------:|-------:|-------:|-------:|
| `PER` | 4033 | 607 | 560 | 5200 |
| `ORG` | 2828 | 400 | 283 | 3511 |
| `GPE_LOC`| 2132 | 258 | 257 | 2647 |
| `PROD` | 671 | 162 | 71 | 904 |
| `LOC` | 613 | 109 | 103 | 825 |
| `GPE_ORG`| 388 | 55 | 50 | 493 |
| `DRV` | 519 | 77 | 48 | 644 |
| `EVT` | 131 | 9 | 5 | 145 |
| `MISC` | 8 | 0 | 0 | 8 |
To access these reduced versions of the dataset, use the configs `bokmaal-7`, `nynorsk-7`, and `combined-7` for the NER tag set with 7 tags (**`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, and `combined-8` for the NER tag set with 8 tags (**`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full set (9 tags) will be used.
## Additional Information
### Dataset Curators
NorNE was created as a collaboration between [Schibsted Media Group](https://schibsted.com/), [Språkbanken](https://www.nb.no/forskning/sprakbanken/) at the [National Library of Norway](https://www.nb.no) and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo.
NorNE was added to Huggingface Datasets by the AI-Lab at the National Library of Norway.
### Licensing Information
The NorNE corpus is published under the same [license](https://github.com/ltgoslo/norne/blob/master/LICENSE_NDT.txt) as the Norwegian Dependency Treebank
### Citation Information
This dataset is described in the paper _NorNE: Annotating Named Entities for Norwegian_ by
Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: https://arxiv.org/abs/1911.12146.
| NbAiLab/norne | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:no",
"license:other",
"structure-prediction",
"arxiv:1911.12146",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["no"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech"], "tags": ["structure-prediction"]} | 2022-11-07T12:41:46+00:00 | [
"1911.12146"
] | [
"no"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Norwegian #license-other #structure-prediction #arxiv-1911.12146 #region-us
| Dataset Card for NorNE: Norwegian Named Entities
================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: NorNE
* Repository: Github
* Paper: URL
* Leaderboard:
* Point of Contact:
### Dataset Summary
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
### Supported Tasks and Leaderboards
NorNE adds named entity annotations on top of the Norwegian Dependency Treebank.
### Languages
Both Norwegian Bokmål ('bokmaal') and Nynorsk ('nynorsk') are supported as different configs in this dataset. An extra config for the combined languages is also included ('combined'). See the Annotation section for details on accessing reduced tag sets for the NER feature.
Dataset Structure
-----------------
Each entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.
### Data Instances
An example of the 'train' split of the 'bokmaal' config.
### Data Fields
Each entry is annotated with the next fields:
* 'idx' ('int'), text (sentence) identifier from the NorNE dataset
* 'lang' ('str'), language variety, either 'bokmaal', 'nynorsk' or 'combined'
* 'text' ('str'), plain text
* 'tokens' ('List[str]'), list of tokens extracted from 'text'
* 'lemmas' ('List[str]'), list of lemmas extracted from 'tokens'
* 'ner\_tags' ('List[int]'), list of numeric NER tags for each token in 'tokens'
* 'pos\_tags' ('List[int]'), list of numeric PoS tags for each token in 'tokens'
An example DataFrame obtained from the dataset:
| | idx | lang | text | tokens | lemmas | ner\_tags | pos\_tags |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 000001 | bokmaal | Lam og piggvar på bryllupsmenyen | [Lam, og, piggvar, på, bryllupsmenyen] | [lam, og, piggvar, på, bryllupsmeny] | [0, 0, 0, 0, 0] | [0, 9, 0, 5, 0] |
| 1 | 000002 | bokmaal | Kamskjell, piggvar og lammefilet sto på menyen... | [Kamskjell, ,, piggvar, og, lammefilet, sto, p... | [kamskjell, $,, piggvar, og, lammefilet, stå, ... | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | [0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1] |
| 2 | 000003 | bokmaal | Og til dessert: Parfait à la Mette-Marit. | [Og, til, dessert, :, Parfait, à, la, Mette-Ma... | [og, til, dessert, $:, Parfait, à, la, Mette-M... | [0, 0, 0, 0, 7, 8, 8, 8, 0] | [9, 2, 0, 1, 10, 12, 12, 10, 1] |
### Data Splits
There are three splits: 'train', 'validation' and 'test'.
Dataset Creation
----------------
### Curation Rationale
1. A *name* in this context is close to Saul Kripke's definition of a name,
in that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. "Regjeringen" (en. "Government")).
2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,
3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen
(following Markert and Nissim, 2002).
For more details, see the "Annotation URL" distributed with the corpus.
### Source Data
Data was collected using blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.
#### Initial Data Collection and Normalization
The texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions
and hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.
The treebank consists of two parts, one part in Norwegian Bokmål ('nob') and one part in Norwegian Nynorsk ('nno').
Both parts contain around 300,000 tokens and are a mix of different non-fictional genres.
See the NDT webpage for more details.
### Annotations
The following types of entities are annotated:
* Person ('PER'): Real or fictional characters and animals
* Organization ('ORG'): Any collection of people, such as firms, institutions, organizations, music groups,
sports teams, unions, political parties etc.
* Location ('LOC'): Geographical places, buildings and facilities
* Geo-political entity ('GPE'): Geographical regions defined by political and/or social groups.
A GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people
* Product ('PROD'): Artificially produced entities are regarded as products. This may include more abstract entities, such as speeches,
radio shows, programming languages, contracts, laws and ideas.
* Event ('EVT'): Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.
* Derived ('DRV'): Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are "Brann-treneren" ("the Brann coach") or "Oslo-mannen" ("the man from Oslo").
* Miscellaneous ('MISC'): Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Products, whereas things occurring naturally or spontaneously are of type Miscellaneous.
Furthermore, all 'GPE' entities are additionally sub-categorized as being either 'ORG' or 'LOC', with the two annotation levels separated by an underscore:
* 'GPE\_LOC': Geo-political entity, with a locative sense (e.g. "John lives in *Spain*")
* 'GPE\_ORG': Geo-political entity, with an organisation sense (e.g. "*Spain* declined to meet with Belgium")
The two special types 'GPE\_LOC' and 'GPE\_ORG' can easily be altered depending on the task, choosing either the more general 'GPE' tag or the more specific 'LOC'/'ORG' tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:
* 7 types, deleting '\_GPE': 'ORG', 'LOC', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'
* 8 types, deleting 'LOC\_' and 'ORG\_': 'ORG', 'LOC', 'GPE', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'
* 9 types, keeping all types: 'ORG', 'LOC', 'GPE\_LOC', 'GPE\_ORG', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'
The class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotations):
To access these reduced versions of the dataset, you can use the configs 'bokmaal-7', 'nynorsk-7', 'combined-7' for the NER tag set with 7 tags ('ORG', 'LOC', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'), and 'bokmaal-8', 'nynorsk-8', 'combined-8' for the NER tag set with 8 tags ('ORG', 'LOC', 'GPE', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'). By default, the full set (9 tags) will be used.
Additional Information
----------------------
### Dataset Curators
NorNE was created as a collaboration between Schibsted Media Group, Språkbanken at the National Library of Norway and the Language Technology Group at the University of Oslo.
NorNE was added to Huggingface Datasets by the AI-Lab at the National Library of Norway.
### Licensing Information
The NorNE corpus is published under the same license as the Norwegian Dependency Treebank
This dataset is described in the paper *NorNE: Annotating Named Entities for Norwegian* by
Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: URL
| [
"### Dataset Summary\n\n\nNorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons,organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.",
"### Supported Tasks and Leaderboards\n\n\nNorNE ads named entity annotations on top of the Norwegian Dependency Treebank.",
"### Languages\n\n\nBoth Norwegian Bokmål ('bokmaal') and Nynorsk ('nynorsk') are supported as different configs in this dataset. An extra config for the combined languages is also included ('combined'). See the Annotation section for details on accessing reduced tag sets for the NER feature.\n\n\nDataset Structure\n-----------------\n\n\nEach entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.",
"### Data Instances\n\n\nAn example of the 'train' split of the 'bokmaal' config.",
"### Data Fields\n\n\nEach entry is annotated with the next fields:\n\n\n* 'idx' ('int'), text (sentence) identifier from the NorNE dataset\n* 'lang' ('str'), language variety, either 'bokmaal', 'nynorsk' or 'combined'\n* 'text' ('str'), plain text\n* 'tokens' ('List[str]'), list of tokens extracted from 'text'\n* 'lemmas' ('List[str]'), list of lemmas extracted from 'tokens'\n* 'ner\\_tags' ('List[int]'), list of numeric NER tags for each token in 'tokens'\n* 'pos\\_tags' ('List[int]'), list of numeric PoS tags for each token in 'tokens'\n\n\nAn example DataFrame obtained from the dataset:\n\n\n\n\n| | idx | lang | text | tokens | lemmas | ner\\_tags | pos\\_tags |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| 0 | 000001 | bokmaal | Lam og piggvar på bryllupsmenyen | [Lam, og, piggvar, på, bryllupsmenyen] | [lam, og, piggvar, på, bryllupsmeny] | [0, 0, 0, 0, 0] | [0, 9, 0, 5, 0] |\n| 1 | 000002 | bokmaal | Kamskjell, piggvar og lammefilet sto på menyen... | [Kamskjell, ,, piggvar, og, lammefilet, sto, p... | [kamskjell, $,, piggvar, og, lammefilet, stå, ... | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | [0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1] |\n| 2 | 000003 | bokmaal | Og til dessert: Parfait à la Mette-Marit. | [Og, til, dessert, :, Parfait, à, la, Mette-Ma... | [og, til, dessert, $:, Parfait, à, la, Mette-M... | [0, 0, 0, 0, 7, 8, 8, 8, 0] | [9, 2, 0, 1, 10, 12, 12, 10, 1] |",
"### Data Splits\n\n\nThere are three splits: 'train', 'validation' and 'test'.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n1. A *name* in this context is close to Saul Kripke's definition of a name,\nin that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. \"Regjeringen\" (en. \"Government\")).\n2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,\n3. If there is an ambiguity in the type/sense of a name, then the the default/literal sense of the name is chosen\n(following Markert and Nissim, 2002).\n\n\nFor more details, see the \"Annotation URL\" distributed with the corpus.",
"### Source Data\n\n\nData was collected using blogs and newspapers in Norwegian, as well as parliament speeches and governamental reports.",
"#### Initial Data Collection and Normalization\n\n\nThe texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions\nand hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.\n\n\nThe treebanks consists of two parts, one part in Norwegian Bokmål ('nob') and one part in Norwegian Nynorsk ('nno').\nBoth parts contain around 300.000 tokens, and are a mix of different non-fictional genres.\n\n\nSee the NDT webpage for more details.",
"### Annotations\n\n\nThe following types of entities are annotated:\n\n\n* Person ('PER'): Real or fictional characters and animals\n* Organization ('ORG'): Any collection of people, such as firms, institutions, organizations, music groups,\nsports teams, unions, political parties etc.\n* Location ('LOC'): Geographical places, buildings and facilities\n* Geo-political entity ('GPE'): Geographical regions defined by political and/or social groups.\nA GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people\n* Product ('PROD'): Artificially produced entities are regarded products. This may include more abstract entities, such as speeches,\nradio shows, programming languages, contracts, laws and ideas.\n* Event ('EVT'): Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.\n* Derived ('DRV'): Words (and phrases?) that are dervied from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are \"Brann-treneren\" (\"the Brann coach\") or \"Oslo-mannen\" (\"the man from Oslo\").\n* Miscellaneous ('MISC'): Names that do not belong in the other categories. Examples are animals species and names of medical conditions. Entities that are manufactured or produced are of type Products, whereas thing naturally or spontaneously occurring are of type Miscellaneous.\n\n\nFurthermore, all 'GPE' entities are additionally sub-categorized as being either 'ORG' or 'LOC', with the two annotation levels separated by an underscore:\n\n\n* 'GPE\\_LOC': Geo-political entity, with a locative sense (e.g. \"John lives in *Spain*\")\n* 'GPE\\_ORG': Geo-political entity, with an organisation sense (e.g. 
\"*Spain* declined to meet with Belgium\")\n\n\nThe two special types 'GPE\\_LOC' and 'GPE\\_ORG' can easily be altered depending on the task, choosing either the more general 'GPE' tag or the more specific 'LOC'/'ORG' tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:\n\n\n* 7 types, deleting '\\_GPE': 'ORG', 'LOC', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'\n* 8 types, deleting 'LOC\\_' and 'ORG\\_': 'ORG', 'LOC', 'GPE', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'\n* 9 types, keeping all types: 'ORG', 'LOC', 'GPE\\_LOC', 'GPE\\_ORG', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'\n\n\nThe class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotatons):\n\n\n\nTo access these reduce versions of the dataset, you can use the configs 'bokmaal-7', 'nynorsk-7', 'combined-7' for the NER tag set with 7 tags ( 'ORG', 'LOC', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'), and 'bokmaal-8', 'nynorsk-8', 'combined-8' for the NER tag set with 8 tags ('LOC\\_' and 'ORG\\_': 'ORG', 'LOC', 'GPE', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'). By default, the full set (9 tags) will be used.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nNorNE was created as a collaboration between Schibsted Media Group, Språkbanken at the National Library of Norway and the Language Technology Group at the University of Oslo.\n\n\nNorNE was added to Huggingface Datasets by the AI-Lab at the National Library of Norway.",
"### Licensing Information\n\n\nThe NorNE corpus is published under the same license as the Norwegian Dependency Treebank\n\n\nThis dataset is described in the paper *NorNE: Annotating Named Entities for Norwegian* by\nFredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: URL"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Norwegian #license-other #structure-prediction #arxiv-1911.12146 #region-us \n",
"### Dataset Summary\n\n\nNorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.",
"### Supported Tasks and Leaderboards\n\n\nNorNE adds named entity annotations on top of the Norwegian Dependency Treebank.",
"### Languages\n\n\nBoth Norwegian Bokmål ('bokmaal') and Nynorsk ('nynorsk') are supported as different configs in this dataset. An extra config for the combined languages is also included ('combined'). See the Annotation section for details on accessing reduced tag sets for the NER feature.\n\n\nDataset Structure\n-----------------\n\n\nEach entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.",
"### Data Instances\n\n\nAn example of the 'train' split of the 'bokmaal' config.",
"### Data Fields\n\n\nEach entry is annotated with the next fields:\n\n\n* 'idx' ('int'), text (sentence) identifier from the NorNE dataset\n* 'lang' ('str'), language variety, either 'bokmaal', 'nynorsk' or 'combined'\n* 'text' ('str'), plain text\n* 'tokens' ('List[str]'), list of tokens extracted from 'text'\n* 'lemmas' ('List[str]'), list of lemmas extracted from 'tokens'\n* 'ner\\_tags' ('List[int]'), list of numeric NER tags for each token in 'tokens'\n* 'pos\\_tags' ('List[int]'), list of numeric PoS tags for each token in 'tokens'\n\n\nAn example DataFrame obtained from the dataset:\n\n\n\n\n| | idx | lang | text | tokens | lemmas | ner\\_tags | pos\\_tags |\n| --- | --- | --- | --- | --- | --- | --- | --- |\n| 0 | 000001 | bokmaal | Lam og piggvar på bryllupsmenyen | [Lam, og, piggvar, på, bryllupsmenyen] | [lam, og, piggvar, på, bryllupsmeny] | [0, 0, 0, 0, 0] | [0, 9, 0, 5, 0] |\n| 1 | 000002 | bokmaal | Kamskjell, piggvar og lammefilet sto på menyen... | [Kamskjell, ,, piggvar, og, lammefilet, sto, p... | [kamskjell, $,, piggvar, og, lammefilet, stå, ... | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | [0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1] |\n| 2 | 000003 | bokmaal | Og til dessert: Parfait à la Mette-Marit. | [Og, til, dessert, :, Parfait, à, la, Mette-Ma... | [og, til, dessert, $:, Parfait, à, la, Mette-M... | [0, 0, 0, 0, 7, 8, 8, 8, 0] | [9, 2, 0, 1, 10, 12, 12, 10, 1] |",
"### Data Splits\n\n\nThere are three splits: 'train', 'validation' and 'test'.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n1. A *name* in this context is close to Saul Kripke's definition of a name,\nin that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. \"Regjeringen\" (en. \"Government\")).\n2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,\n3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen\n(following Markert and Nissim, 2002).\n\n\nFor more details, see the \"Annotation URL\" distributed with the corpus.",
"### Source Data\n\n\nData was collected from blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.",
"#### Initial Data Collection and Normalization\n\n\nThe texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions\nand hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.\n\n\nThe treebank consists of two parts, one part in Norwegian Bokmål ('nob') and one part in Norwegian Nynorsk ('nno').\nBoth parts contain around 300,000 tokens, and are a mix of different non-fictional genres.\n\n\nSee the NDT webpage for more details.",
"### Annotations\n\n\nThe following types of entities are annotated:\n\n\n* Person ('PER'): Real or fictional characters and animals\n* Organization ('ORG'): Any collection of people, such as firms, institutions, organizations, music groups,\nsports teams, unions, political parties etc.\n* Location ('LOC'): Geographical places, buildings and facilities\n* Geo-political entity ('GPE'): Geographical regions defined by political and/or social groups.\nA GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people\n* Product ('PROD'): Artificially produced entities are regarded as products. This may include more abstract entities, such as speeches,\nradio shows, programming languages, contracts, laws and ideas.\n* Event ('EVT'): Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.\n* Derived ('DRV'): Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are \"Brann-treneren\" (\"the Brann coach\") or \"Oslo-mannen\" (\"the man from Oslo\").\n* Miscellaneous ('MISC'): Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Products, whereas things naturally or spontaneously occurring are of type Miscellaneous.\n\n\nFurthermore, all 'GPE' entities are additionally sub-categorized as being either 'ORG' or 'LOC', with the two annotation levels separated by an underscore:\n\n\n* 'GPE\\_LOC': Geo-political entity, with a locative sense (e.g. \"John lives in *Spain*\")\n* 'GPE\\_ORG': Geo-political entity, with an organisation sense (e.g.
\"*Spain* declined to meet with Belgium\")\n\n\nThe two special types 'GPE\\_LOC' and 'GPE\\_ORG' can easily be altered depending on the task, choosing either the more general 'GPE' tag or the more specific 'LOC'/'ORG' tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:\n\n\n* 7 types, deleting '\\_GPE': 'ORG', 'LOC', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'\n* 8 types, deleting 'LOC\\_' and 'ORG\\_': 'ORG', 'LOC', 'GPE', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'\n* 9 types, keeping all types: 'ORG', 'LOC', 'GPE\\_LOC', 'GPE\\_ORG', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'\n\n\nThe class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotatons):\n\n\n\nTo access these reduce versions of the dataset, you can use the configs 'bokmaal-7', 'nynorsk-7', 'combined-7' for the NER tag set with 7 tags ( 'ORG', 'LOC', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'), and 'bokmaal-8', 'nynorsk-8', 'combined-8' for the NER tag set with 8 tags ('LOC\\_' and 'ORG\\_': 'ORG', 'LOC', 'GPE', 'PER', 'PROD', 'EVT', 'DRV', 'MISC'). By default, the full set (9 tags) will be used.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nNorNE was created as a collaboration between Schibsted Media Group, Språkbanken at the National Library of Norway and the Language Technology Group at the University of Oslo.\n\n\nNorNE was added to Huggingface Datasets by the AI-Lab at the National Library of Norway.",
"### Licensing Information\n\n\nThe NorNE corpus is published under the same license as the Norwegian Dependency Treebank\n\n\nThis dataset is described in the paper *NorNE: Annotating Named Entities for Norwegian* by\nFredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: URL"
] |
f7393532774c66312378d30b197610b43d751972 |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
The Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016, delivered in the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in Norwegian.
## Dataset Structure
### Data Instances
Example of one instance in the dataset.
```python
{'label': 0, 'text': 'Verre er det med slagsmålene .'}
```
### Data Fields
- `id`: index of the example
- `text`: Text of a speech
- `date`: Date (`YYYY-MM-DD`) the speech was produced
- `label`: Political party the speaker was associated with at the time
- 0 = Fremskrittspartiet
- 1 = Sosialistisk Venstreparti
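
As a quick illustration, the integer `label` can be decoded back into the party name. This is a minimal sketch (the `decode_example` helper and the added `party` key are illustrative, not part of the dataset), using the label encoding documented above:

```python
# Map the numeric `label` field back to the party name,
# following the encoding documented above.
PARTY_NAMES = {
    0: "Fremskrittspartiet",
    1: "Sosialistisk Venstreparti",
}

def decode_example(example):
    """Return a copy of the example with a human-readable `party` key added."""
    decoded = dict(example)
    decoded["party"] = PARTY_NAMES[example["label"]]
    return decoded

example = {"label": 0, "text": "Verre er det med slagsmålene ."}
print(decode_example(example)["party"])  # Fremskrittspartiet
```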
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | Train | Valid | Test |
| ----- | ------ | ----- | ----- |
| Number of examples | 3600 | 1200 | 1200 |
The dataset is balanced on political party.
## Dataset Creation
This dataset is based on publicly available information from the Norwegian Parliament (Storting) and was created by the National Library of Norway AI-Lab to benchmark their language models.
## Additional Information
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
### Citation Information
```latex
@misc{--,
title={--},
author={--},
year={2021},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| NbAiLab/norwegian_parliament | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:no",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": [false], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"]} | 2022-07-01T18:51:13+00:00 | [] | [
"no"
] | TAGS
#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Norwegian #license-cc-by-4.0 #region-us
| Dataset Card Creation Guide
===========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: N/A
* Repository: GitHub
* Paper: N/A
* Leaderboard: N/A
* Point of Contact: -
### Dataset Summary
The Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016, delivered in the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included).
### Supported Tasks and Leaderboards
### Languages
The text in the dataset is in Norwegian.
Dataset Structure
-----------------
### Data Instances
Example of one instance in the dataset.
### Data Fields
* 'id': index of the example
* 'text': Text of a speech
* 'date': Date ('YYYY-MM-DD') the speech was produced
* 'label': Political party the speaker was associated with at the time
+ 0 = Fremskrittspartiet
+ 1 = Sosialistisk Venstreparti
### Data Splits
The dataset is split into a 'train', 'validation', and 'test' split with the following sizes:
The dataset is balanced on political party.
Dataset Creation
----------------
This dataset is based on publicly available information from the Norwegian Parliament (Storting) and was created by the National Library of Norway AI-Lab to benchmark their language models.
Additional Information
----------------------
### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License
| [
"### Dataset Summary\n\n\nThe Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016, delivered in the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included).",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe text in the dataset is in Norwegian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample of one instance in the dataset.",
"### Data Fields\n\n\n* 'id': index of the example\n* 'text': Text of a speech\n* 'date': Date ('YYYY-MM-DD') the speech was produced\n* 'label': Political party the speaker was associated with at the time\n\t+ 0 = Fremskrittspartiet\n\t+ 1 = Sosialistisk Venstreparti",
"### Data Splits\n\n\nThe dataset is split into a 'train', 'validation', and 'test' split with the following sizes:\n\n\n\nThe dataset is balanced on political party.\n\n\nDataset Creation\n----------------\n\n\nThis dataset is based on publicly available information from the Norwegian Parliament (Storting) and was created by the National Library of Norway AI-Lab to benchmark their language models.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International License"
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Norwegian #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nThe Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016, delivered in the Norwegian Parliament (Storting) by members of the two major parties: Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the party the speaker was associated with at the time (dates of speeches are also included).",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe text in the dataset is in Norwegian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample of one instance in the dataset.",
"### Data Fields\n\n\n* 'id': index of the example\n* 'text': Text of a speech\n* 'date': Date ('YYYY-MM-DD') the speech was produced\n* 'label': Political party the speaker was associated with at the time\n\t+ 0 = Fremskrittspartiet\n\t+ 1 = Sosialistisk Venstreparti",
"### Data Splits\n\n\nThe dataset is split into a 'train', 'validation', and 'test' split with the following sizes:\n\n\n\nThe dataset is balanced on political party.\n\n\nDataset Creation\n----------------\n\n\nThis dataset is based on the publicly available information by Norwegian Parliament (Storting) and created by the National Library of Norway AI-Lab to benchmark their language models.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThis work is licensed under a Creative Commons Attribution 4.0 International License"
] |
5b43dd1c2294a9dcd5e87c40fc75e2638ca7063b | restsfds | Niciu/github-issues | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-03-01T13:52:02+00:00 | [] | [] | TAGS
#region-us
| restsfds | [] | [
"TAGS\n#region-us \n"
] |
8d1fa0a3369522cb04e3d427293f7b5f48103f94 | This dataset can be found in Kaggle | Nuwaisir/Quran_speech_recognition_kaggle | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-20T13:07:06+00:00 | [] | [] | TAGS
#region-us
| This dataset can be found in Kaggle | [] | [
"TAGS\n#region-us \n"
] |
08e4869ec670f065a83a9b805d5dcfdaddc59f6b |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | OmarN121/train | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"YAML tags": [{"copy-paste the tags obtained with the online tagging app": "https://huggingface.co/spaces/huggingface/datasets-tagging"}]} | 2022-01-03T18:49:44+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
9d2ac89df04254e5c427bcc8d61b6d6c83a1f59b |
# Dataset Card for HateCheck
## Dataset Description
HateCheck is a suite of functional tests for hate speech detection models.
The dataset contains 3,728 validated test cases in 29 functional tests.
19 functional tests correspond to distinct types of hate. The other 11 functional tests cover challenging types of non-hate.
This allows for targeted diagnostic insights into model performance.
In our ACL paper, we found critical weaknesses in all commercial and academic hate speech detection models that we tested with HateCheck.
Please refer to the paper (linked below) for results and further discussion, as well as further information on the dataset and a full data statement.
- **Paper:** Röttger et al. (2021) - HateCheck: Functional Tests for Hate Speech Detection Models. https://aclanthology.org/2021.acl-long.4/ or https://arxiv.org/abs/2012.15606
- **Repository:** https://github.com/paul-rottger/hatecheck-data
- **Point of Contact:** [email protected]
## Dataset Structure
"test.csv" contains all 3,728 validated test cases. Each test case (row) has the following attributes:
**functionality**
The shorthand for the functionality tested by the test case.
**case_id**
The unique ID of the test case (assigned to each of the 3,901 cases we initially generated)
**test_case**
The text of the test case.
**label_gold**
The gold standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group targeted or referenced by the test case. We cover seven protected groups in the test suite: women, trans people, gay people, black people, disabled people, Muslims and immigrants.
**direction**
For hateful cases, the binary secondary label indicating whether they are *directed* at an individual as part of a protected group or aimed at the group in *general*.
**focus_words**
Where applicable, the key word or phrase in a given test case (e.g. "cut their throats").
**focus_lemma**
Where applicable, the corresponding lemma (e.g. "cut sb. throat").
**ref_case_id**
For hateful cases, where applicable, the ID of the simpler hateful case which was perturbed to generate them.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted.
**ref_templ_id**
The equivalent, but for template IDs.
**templ_id**
The unique ID of the template from which the test case was generated (assigned to each of the 866 cases and templates from which we generated the 3,901 initial cases).
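
Because every test case carries its `functionality` and `label_gold`, results on HateCheck are usually read as per-functionality accuracies rather than a single aggregate score. The sketch below shows one way to compute this; the prediction triples are made up for illustration, and only the `functionality` and `label_gold` fields correspond to columns documented above:

```python
from collections import defaultdict

def per_functionality_accuracy(rows):
    """Accuracy per functionality from (functionality, label_gold, label_pred) triples."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for functionality, label_gold, label_pred in rows:
        totals[functionality] += 1
        if label_gold == label_pred:
            correct[functionality] += 1
    return {f: correct[f] / totals[f] for f in totals}

# Hypothetical predictions for three test cases:
rows = [
    ("derog_neg_emote_h", "hateful", "hateful"),
    ("slur_homonym_nh", "non-hateful", "hateful"),
    ("slur_homonym_nh", "non-hateful", "non-hateful"),
]
print(per_functionality_accuracy(rows))
# {'derog_neg_emote_h': 1.0, 'slur_homonym_nh': 0.5}
```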
## Citation Information
When using HateCheck, please cite our ACL paper:
@inproceedings{rottger-etal-2021-hatecheck,
title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
author = {R{\"o}ttger, Paul and
Vidgen, Bertie and
Nguyen, Dong and
Waseem, Zeerak and
Margetts, Helen and
Pierrehumbert, Janet},
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.4",
doi = "10.18653/v1/2021.acl-long.4",
pages = "41--58",
abstract = "Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.",
}
| Paul/hatecheck | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2012.15606",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "HateCheck"} | 2022-07-05T09:27:25+00:00 | [
"2012.15606"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2012.15606 #region-us
|
# Dataset Card for HateCheck
## Dataset Description
HateCheck is a suite of functional tests for hate speech detection models.
The dataset contains 3,728 validated test cases in 29 functional tests.
19 functional tests correspond to distinct types of hate. The other 11 functional tests cover challenging types of non-hate.
This allows for targeted diagnostic insights into model performance.
In our ACL paper, we found critical weaknesses in all commercial and academic hate speech detection models that we tested with HateCheck.
Please refer to the paper (linked below) for results and further discussion, as well as further information on the dataset and a full data statement.
- Paper: Röttger et al. (2021) - HateCheck: Functional Tests for Hate Speech Detection Models. URL or URL
- Repository: URL
- Point of Contact: paul.rottger@URL
## Dataset Structure
"URL" contains all 3,728 validated test cases. Each test case (row) has the following attributes:
functionality
The shorthand for the functionality tested by the test case.
case_id
The unique ID of the test case (assigned to each of the 3,901 cases we initially generated)
test_case
The text of the test case.
label_gold
The gold standard label (hateful/non-hateful) of the test case. All test cases within a given functionality have the same gold standard label.
target_ident
Where applicable, the protected group targeted or referenced by the test case. We cover seven protected groups in the test suite: women, trans people, gay people, black people, disabled people, Muslims and immigrants.
direction
For hateful cases, the binary secondary label indicating whether they are *directed* at an individual as part of a protected group or aimed at the group in *general*.
focus_words
Where applicable, the key word or phrase in a given test case (e.g. "cut their throats").
focus_lemma
Where applicable, the corresponding lemma (e.g. "cut sb. throat").
ref_case_id
For hateful cases, where applicable, the ID of the simpler hateful case which was perturbed to generate them.
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted.
ref_templ_id
The equivalent, but for template IDs.
templ_id
The unique ID of the template from which the test case was generated (assigned to each of the 866 cases and templates from which we generated the 3,901 initial cases).
When using HateCheck, please cite our ACL paper:
@inproceedings{rottger-etal-2021-hatecheck,
title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
author = {R{\"o}ttger, Paul and
Vidgen, Bertie and
Nguyen, Dong and
Waseem, Zeerak and
Margetts, Helen and
Pierrehumbert, Janet},
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
    url = "URL",
doi = "10.18653/v1/URL-long.4",
pages = "41--58",
abstract = "Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.",
}
14cf38fdcc9469dff7c340e5c5a6385991e6457a | See https://github.com/PaulLerner/ViQuAE
Get the original dataset there: http://nlp.cs.washington.edu/triviaqa/ (or via HF: https://huggingface.co/datasets/trivia_qa) | PaulLerner/triviaqa_for_viquae | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-08-02T07:24:26+00:00 | [] | [] | TAGS
#region-us
800a1b34736e3f105621b578e15cc7de2a834eb8 | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
| PaulLerner/viquae_all_images | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-03-10T10:59:10+00:00 | [] | [] | TAGS
#region-us
779bec48cd16da60a0f217c39cb6b57f16cbc06b | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
| PaulLerner/viquae_dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-15T17:43:12+00:00 | [] | [] | TAGS
#region-us
0f9179e71a05d365e485aeefe07f19d041d7e76c | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-4.0
---
| PaulLerner/viquae_images | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-15T17:29:53+00:00 | [] | [] | TAGS
#region-us
003b63177d1d4a2865d1e437b7a84817e42bb32d | See https://github.com/PaulLerner/ViQuAE
---
license: cc-by-3.0
---
| PaulLerner/viquae_wikipedia | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-18T11:34:47+00:00 | [] | [] | TAGS
#region-us
bb1d91e8434fb11bbb1006a63af37cf9509f40e2 | This is the dataset | Pengfei/test1 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-05T00:46:07+00:00 | [] | [] | TAGS
#region-us
f9928e8819596a601b8887cc5f8598b15d589a82 |
# SQAC (Spanish Question-Answering Corpus)
## Dataset Description
SQAC is an extractive QA dataset for the Spanish language.
- **Paper:** [MarIA: Spanish Language Models](https://upcommons.upc.edu/bitstream/handle/2117/367156/6405-5863-1-PB%20%281%29.pdf?sequence=1)
- **Point of Contact:** [email protected]
- **Leaderboard:** [EvalEs](https://plantl-gob-es.github.io/spanish-benchmark/)
### Dataset Summary
Contains 6,247 contexts and 18,817 questions with their respective answers, 1 to 5 questions per context.
The sources of the contexts are:
* Encyclopedic articles from the [Spanish Wikipedia](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News articles from [Wikinews](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Newswire and literature text from the [AnCora corpus](http://clic.ub.edu/corpus/en), used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
### Supported Tasks
Extractive-QA
### Languages
- Spanish (es)
### Directory Structure
- README.md
- SQAC.py
- dev.json
- test.json
- train.json
## Dataset Structure
### Data Instances
<pre>
{
'id': '6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628',
'title': 'Historia de Japón',
'context': 'La historia de Japón (日本の歴史 o 日本史, Nihon no rekishi / Nihonshi?) es la sucesión de hechos acontecidos dentro del archipiélago japonés. Algunos de estos hechos aparecen aislados e influenciados por la naturaleza geográfica de Japón como nación insular, en tanto que otra serie de hechos, obedece a influencias foráneas como en el caso del Imperio chino, el cual definió su idioma, su escritura y, también, su cultura política. Asimismo, otra de las influencias foráneas fue la de origen occidental, lo que convirtió al país en una nación industrial, ejerciendo con ello una esfera de influencia y una expansión territorial sobre el área del Pacífico. No obstante, dicho expansionismo se detuvo tras la Segunda Guerra Mundial y el país se posicionó en un esquema de nación industrial con vínculos a su tradición cultural.',
'question': '¿Qué influencia convirtió Japón en una nación industrial?',
'answers': {
'text': ['la de origen occidental'],
'answer_start': [473]
}
}
</pre>
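The `answer_start` field is a 0-based character offset into `context`, as in SQuAD. A minimal sanity check with a toy record (invented for illustration, not taken from the corpus):

```python
# Toy record in the schema above (invented values for illustration).
record = {
    "id": "toy-0",
    "title": "Francia",
    "context": "La capital de Francia es París.",
    "question": "¿Cuál es la capital de Francia?",
    "answers": {"text": ["París"], "answer_start": [25]},
}

start = record["answers"]["answer_start"][0]
answer = record["answers"]["text"][0]
# The offset must point at the answer span inside the context.
assert record["context"][start:start + len(answer)] == answer
```

The same check can be run over the full corpus to verify that every gold answer is an exact extractive span of its context.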
### Data Fields
<pre>
{
id: str
title: str
context: str
question: str
answers: {
answer_start: [int]
text: [str]
}
}
</pre>
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 15,036 |
| `dev` | 1,864 |
| `test`         | 1,910    |
## Content analysis
### Number of articles, paragraphs and questions
* Number of articles: 3,834
* Number of contexts: 6,247
* Number of questions: 18,817
* Number of sentences: 48,026
* Questions/Context ratio: 3.01
* Sentences/Context ratio: 7.70
### Number of tokens
* Total tokens in context: 1,561,616
* Average tokens/context: 250
* Total tokens in questions: 203,235
* Average tokens/question: 10.80
* Total tokens in answers: 90,307
* Average tokens/answer: 4.80
### Lexical variation
46.38% of the words in the Question can be found in the Context.
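This statistic can be reproduced with a simple token-overlap function. A sketch follows; the authors' exact tokenization is not specified, so a plain `\w+` word tokenizer over lowercased text is assumed here:

```python
import re

def lexical_overlap(question: str, context: str) -> float:
    """Fraction of lowercased question word tokens that also occur in the context."""
    tokenize = lambda s: re.findall(r"\w+", s.lower())
    q_tokens = tokenize(question)
    c_tokens = set(tokenize(context))
    return sum(tok in c_tokens for tok in q_tokens) / len(q_tokens)

# Applied to (a fragment of) the example above:
print(lexical_overlap("¿Qué influencia convirtió Japón en una nación industrial?",
                      "otra de las influencias foráneas fue la de origen occidental, "
                      "lo que convirtió al país en una nación industrial"))
```

Averaging this value over all question/context pairs yields the corpus-level figure; a low overlap indicates that questions paraphrase rather than copy their contexts.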
### Question type
| Question | Count | % |
|----------|-------:|---:|
| qué | 6,381 | 33.91 % |
| quién/es | 2,952 | 15.69 % |
| cuál/es | 2,034 | 10.81 % |
| cómo | 1,949 | 10.36 % |
| dónde | 1,856 | 9.86 % |
| cuándo | 1,639 | 8.71 % |
| cuánto | 1,311 | 6.97 % |
| cuántos | 495 | 2.63 % |
| adónde | 100 | 0.53 % |
| cuánta | 49 | 0.26 % |
| no question mark | 43 | 0.23 % |
| cuántas | 19 | 0.10 % |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed the existing curation guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250) as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The source data are scraped articles from Wikinews, the Spanish Wikipedia and the AnCora corpus.
- [Spanish Wikipedia](https://es.wikipedia.org)
- [Spanish Wikinews](https://es.wikinews.org/)
- [AnCora corpus](http://clic.ub.edu/corpus/en)
#### Who are the source language producers?
Contributors to the aforementioned sites.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250).
#### Who are the annotators?
Native language speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of language models in Spanish.
### Discussion of Biases
No postprocessing steps were applied to mitigate potential social biases.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
For further information, send an email to ([email protected]).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing Information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```
@article{maria,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
### Contributions
[N/A]
| PlanTL-GOB-ES/SQAC | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"arxiv:1606.05250",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Spanish Question Answering Corpus (SQAC)"} | 2023-10-12T22:35:38+00:00 | [
"1606.05250"
] | [
"es"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Spanish #license-cc-by-sa-4.0 #arxiv-1606.05250 #region-us
6aa7eccb5dd39278ceaa47d4d2fdd8ffdc33a13b |
# CANTEMIST
## Dataset Description
Manually classified collection of Spanish oncological clinical case reports.
- **Homepage:** [zenodo](https://zenodo.org/record/3978041)
- **Paper:** [Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results](https://www.researchgate.net/profile/Antonio-Miranda-Escalada-2/publication/352786464_Named_Entity_Recognition_Concept_Normalization_and_Clinical_Coding_Overview_of_the_Cantemist_Track_for_Cancer_Text_Mining_in_Spanish_Corpus_Guidelines_Methods_and_Results/links/60d98a3b458515d6fbe382d8/Named-Entity-Recognition-Concept-Normalization-and-Clinical-Coding-Overview-of-the-Cantemist-Track-for-Cancer-Text-Mining-in-Spanish-Corpus-Guidelines-Methods-and-Results.pdf)
- **Point of Contact:** [email protected]
### Dataset Summary
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The training subset contains 501 documents, the development subset 500, and the test subset 300. The original dataset is distributed in [Brat](https://brat.nlplab.org/standoff.html) format.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For further information, please visit [the official website](https://temu.bsc.es/cantemist/).
### Supported Tasks
Named Entity Recognition (NER)
### Languages
- Spanish (es)
### Directory Structure
* README.md
* cantemist.py
* train.conll
* dev.conll
* test.conll
## Dataset Structure
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has 4 columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
<pre>
El cc_onco101 662_664 O
informe cc_onco101 665_672 O
HP cc_onco101 673_675 O
es cc_onco101 676_678 O
compatible cc_onco101 679_689 O
con cc_onco101 690_693 O
adenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA
moderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA
diferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA
que cc_onco101 736_739 O
afecta cc_onco101 740_746 O
a cc_onco101 747_748 O
grasa cc_onco101 749_754 O
peripancreática cc_onco101 755_770 O
sobrepasando cc_onco101 771_783 O
la cc_onco101 784_786 O
serosa cc_onco101 787_793 O
, cc_onco101 793_794 O
infiltración cc_onco101 795_807 O
perineural cc_onco101 808_818 O
. cc_onco101 818_819 O
</pre>
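The 4-column layout above is straightforward to read back into token/tag pairs. The following is an illustrative sketch (not an official loader): it assumes whitespace-separated columns and, as is conventional in CoNLL-style files, blank lines as sentence separators.

```python
def parse_conll(lines):
    """Parse 4-column rows (token, file, span, IOB tag) into sentences.

    Assumption: columns are whitespace-separated and blank lines mark
    sentence boundaries, as in most CoNLL-style distributions.
    """
    sentences, current = [], []
    for line in lines:
        if not line.strip():            # blank line: close the sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, _filename, _span, tag = line.split()
        current.append((token, tag))
    if current:                         # flush the last sentence
        sentences.append(current)
    return sentences

sample = [
    "adenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA",
    "moderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA",
    "diferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA",
    "que cc_onco101 736_739 O",
]
print(parse_conll(sample)[0][0])  # → ('adenocarcinoma', 'B-MORFOLOGIA_NEOPLASIA')
```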
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 19,397 |
| `dev` | 18,165 |
| `test` | 11,168 |
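Since the original distribution is in Brat standoff format, the character spans in the third column make it possible to approximately reconstruct Brat-style entity lines from the CoNLL rows. This is a hedged sketch, not the official conversion: it assumes `start_end` offsets and standard IOB semantics, and rebuilds the mention text by joining tokens with single spaces (which may not match the source text exactly).

```python
def rows_to_brat(rows):
    """Rebuild Brat-style entity lines ('T1\tTYPE start end\ttext') from
    4-column rows of (token, doc, span, IOB tag).

    Assumptions: spans are 'start_end' character offsets and tags follow
    standard IOB semantics; an illustrative sketch only.
    """
    entities, current = [], None
    for token, _doc, span, tag in rows:
        start, end = (int(x) for x in span.split("_"))
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = [tag[2:], start, end, [token]]   # open a new mention
        elif tag.startswith("I-") and current:
            current[2] = end                           # extend its end offset
            current[3].append(token)
        else:                                          # 'O' closes any open mention
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [
        f"T{i}\t{etype} {s} {e}\t{' '.join(toks)}"
        for i, (etype, s, e, toks) in enumerate(entities, start=1)
    ]

rows = [
    ("adenocarcinoma", "cc_onco101", "694_708", "B-MORFOLOGIA_NEOPLASIA"),
    ("moderadamente", "cc_onco101", "709_722", "I-MORFOLOGIA_NEOPLASIA"),
    ("diferenciado", "cc_onco101", "723_735", "I-MORFOLOGIA_NEOPLASIA"),
    ("que", "cc_onco101", "736_739", "O"),
]
print(rows_to_brat(rows))
# → ['T1\tMORFOLOGIA_NEOPLASIA 694 735\tadenocarcinoma moderadamente diferenciado']
```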
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The selected clinical case reports are fairly similar to hospital health records. To increase the usefulness and practical relevance of the CANTEMIST corpus, we selected clinical cases affecting all genders, comprising most ages (from children to the elderly), and covering various complexity levels (solid tumors, hemato-oncological malignancies, neuroendocrine cancer...).
The CANTEMIST cases include clinical signs and symptoms, personal and family history, current illness, physical examination, complementary tests (blood tests, imaging, pathology), diagnosis, treatment (including adverse effects of chemotherapy), evolution and outcome.
#### Who are the source language producers?
Humans; there is no machine-generated data.
### Annotations
#### Annotation process
The manual annotation of the Cantemist corpus was performed by clinical experts following the Cantemist guidelines (for more detail refer to this [paper](http://ceur-ws.org/Vol-2664/cantemist_overview.pdf)). These guidelines contain rules for annotating morphology neoplasms in Spanish oncology clinical cases, as well as for mapping these annotations to eCIE-O.
A medical doctor was regularly consulted by annotators (scientists with PhDs on cancer-related subjects) for the most difficult pathology expressions. This same doctor periodically checked a random selection of annotated clinical records and these annotations were compared and discussed with the annotators. To normalize a selection of very complex cases, MD specialists in pathology from one of the largest university hospitals in Spain were consulted.
#### Who are the annotators?
Clinical experts.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
Not applicable.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
For further information, send an email to ([email protected]).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```bibtex
@article{cantemist,
title={Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.},
author={Miranda-Escalada, Antonio and Farr{\'e}, Eul{\`a}lia and Krallinger, Martin},
journal={IberLEF@ SEPLN},
pages={303--323},
year={2020}
}
```
### Contributions
[N/A]
| PlanTL-GOB-ES/cantemist-ner | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"biomedical",
"clinical",
"spanish",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "tags": ["biomedical", "clinical", "spanish"]} | 2022-11-18T12:08:17+00:00 | [] | [
"es"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #biomedical #clinical #spanish #region-us
| CANTEMIST
=========
Dataset Description
-------------------
Manually classified collection of Spanish oncological clinical case reports.
* Homepage: zenodo
* Paper: Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results
* Point of Contact: encargo-pln-life@URL
### Dataset Summary
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The training subset contains 501 documents, the development subset 500, and the test subset 300. The original dataset is distributed in Brat format.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL.
For further information, please visit the official website.
### Supported Tasks
Named Entity Recognition (NER)
### Languages
* Spanish (es)
### Directory Structure
* URL
* URL
* URL
* URL
* URL
Dataset Structure
-----------------
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has 4 columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
```
El cc_onco101 662_664 O
informe cc_onco101 665_672 O
HP cc_onco101 673_675 O
es cc_onco101 676_678 O
compatible cc_onco101 679_689 O
con cc_onco101 690_693 O
adenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA
moderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA
diferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA
que cc_onco101 736_739 O
afecta cc_onco101 740_746 O
a cc_onco101 747_748 O
grasa cc_onco101 749_754 O
peripancreática cc_onco101 755_770 O
sobrepasando cc_onco101 771_783 O
la cc_onco101 784_786 O
serosa cc_onco101 787_793 O
, cc_onco101 793_794 O
infiltración cc_onco101 795_807 O
perineural cc_onco101 808_818 O
. cc_onco101 818_819 O
```
### Data Splits
Dataset Creation
----------------
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The selected clinical case reports are fairly similar to hospital health records. To increase the usefulness and practical relevance of the CANTEMIST corpus, we selected clinical cases affecting all genders, comprising most ages (from children to the elderly), and covering various complexity levels (solid tumors, hemato-oncological malignancies, neuroendocrine cancer...).
The CANTEMIST cases include clinical signs and symptoms, personal and family history, current illness, physical examination, complementary tests (blood tests, imaging, pathology), diagnosis, treatment (including adverse effects of chemotherapy), evolution and outcome.
#### Who are the source language producers?
Humans; there is no machine-generated data.
### Annotations
#### Annotation process
The manual annotation of the Cantemist corpus was performed by clinical experts following the Cantemist guidelines (for more detail refer to this paper). These guidelines contain rules for annotating morphology neoplasms in Spanish oncology clinical cases, as well as for mapping these annotations to eCIE-O.
A medical doctor was regularly consulted by annotators (scientists with PhDs on cancer-related subjects) for the most difficult pathology expressions. This same doctor periodically checked a random selection of annotated clinical records and these annotations were compared and discussed with the annotators. To normalize a selection of very complex cases, MD specialists in pathology from one of the largest university hospitals in Spain were consulted.
#### Who are the annotators?
Clinical experts.
### Personal and Sensitive Information
No personal or sensitive information included.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
Not applicable.
Additional Information
----------------------
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).
For further information, send an email to (plantl-gob-es@URL).
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Licensing information
This work is licensed under CC Attribution 4.0 International License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| [
"### Dataset Summary\n\n\nCollection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).\n\n\nThe training subset contains 501 documents, the development subset 500, and the test subset 300. The original dataset is distributed in Brat format.\n\n\nThis dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL.\n\n\nFor further information, please visit the official website.",
"### Supported Tasks\n\n\nNamed Entity Recognition (NER)",
"### Languages\n\n\n* Spanish (es)",
"### Directory Structure\n\n\n* URL\n* URL\n* URL\n* URL\n* URL\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThree four-column files, one for each split.",
"### Data Fields\n\n\nEvery file has 4 columns:\n\n\n* 1st column: Word form or punctuation symbol\n* 2nd column: Original BRAT file name\n* 3rd column: Spans\n* 4th column: IOB tag",
"#### Example\n\n\n\n```\n\nEl cc_onco101 662_664 O\ninforme cc_onco101 665_672 O\nHP cc_onco101 673_675 O\nes cc_onco101 676_678 O\ncompatible cc_onco101 679_689 O\ncon cc_onco101 690_693 O\nadenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA\nmoderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA\ndiferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA\nque cc_onco101 736_739 O\nafecta cc_onco101 740_746 O\na cc_onco101 747_748 O\ngrasa cc_onco101 749_754 O\nperipancreática cc_onco101 755_770 O\nsobrepasando cc_onco101 771_783 O\nla cc_onco101 784_786 O\nserosa cc_onco101 787_793 O\n, cc_onco101 793_794 O\ninfiltración cc_onco101 795_807 O\nperineural cc_onco101 808_818 O\n. cc_onco101 818_819 O\n\n```",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFor compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe selected clinical case reports are fairly similar to hospital health records. To increase the usefulness and practical relevance of the CANTEMIST corpus, we selected clinical cases affecting all genders, comprising most ages (from children to the elderly), and covering various complexity levels (solid tumors, hemato-oncological malignancies, neuroendocrine cancer...).\n\n\nThe CANTEMIST cases include clinical signs and symptoms, personal and family history, current illness, physical examination, complementary tests (blood tests, imaging, pathology), diagnosis, treatment (including adverse effects of chemotherapy), evolution and outcome.",
"#### Who are the source language producers?\n\n\nHumans; there is no machine-generated data.",
"### Annotations",
"#### Annotation process\n\n\nThe manual annotation of the Cantemist corpus was performed by clinical experts following the Cantemist guidelines (for more detail refer to this paper). These guidelines contain rules for annotating morphology neoplasms in Spanish oncology clinical cases, as well as for mapping these annotations to eCIE-O.\n\n\nA medical doctor was regularly consulted by annotators (scientists with PhDs on cancer-related subjects) for the most difficult pathology expressions. This same doctor periodically checked a random selection of annotated clinical records and these annotations were compared and discussed with the annotators. To normalize a selection of very complex cases, MD specialists in pathology from one of the largest university hospitals in Spain were consulted.",
"#### Who are the annotators?\n\n\nClinical experts.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThis corpus contributes to the development of medical language models in Spanish.",
"### Discussion of Biases\n\n\nNot applicable.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).\n\n\nFor further information, send an email to (plantl-gob-es@URL).\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.",
"### Licensing information\n\n\nThis work is licensed under CC Attribution 4.0 International License.\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)",
"### Contributions\n\n\n[N/A]"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #biomedical #clinical #spanish #region-us \n",
"### Dataset Summary\n\n\nCollection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).\n\n\nThe training subset contains 501 documents, the development subset 500, and the test subset 300. The original dataset is distributed in Brat format.\n\n\nThis dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL.\n\n\nFor further information, please visit the official website.",
"### Supported Tasks\n\n\nNamed Entity Recognition (NER)",
"### Languages\n\n\n* Spanish (es)",
"### Directory Structure\n\n\n* URL\n* URL\n* URL\n* URL\n* URL\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThree four-column files, one for each split.",
"### Data Fields\n\n\nEvery file has 4 columns:\n\n\n* 1st column: Word form or punctuation symbol\n* 2nd column: Original BRAT file name\n* 3rd column: Spans\n* 4th column: IOB tag",
"#### Example\n\n\n\n```\n\nEl cc_onco101 662_664 O\ninforme cc_onco101 665_672 O\nHP cc_onco101 673_675 O\nes cc_onco101 676_678 O\ncompatible cc_onco101 679_689 O\ncon cc_onco101 690_693 O\nadenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA\nmoderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA\ndiferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA\nque cc_onco101 736_739 O\nafecta cc_onco101 740_746 O\na cc_onco101 747_748 O\ngrasa cc_onco101 749_754 O\nperipancreática cc_onco101 755_770 O\nsobrepasando cc_onco101 771_783 O\nla cc_onco101 784_786 O\nserosa cc_onco101 787_793 O\n, cc_onco101 793_794 O\ninfiltración cc_onco101 795_807 O\nperineural cc_onco101 808_818 O\n. cc_onco101 818_819 O\n\n```",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFor compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe selected clinical case reports are fairly similar to hospital health records. To increase the usefulness and practical relevance of the CANTEMIST corpus, we selected clinical cases affecting all genders, comprising most ages (from children to the elderly), and covering various complexity levels (solid tumors, hemato-oncological malignancies, neuroendocrine cancer...).\n\n\nThe CANTEMIST cases include clinical signs and symptoms, personal and family history, current illness, physical examination, complementary tests (blood tests, imaging, pathology), diagnosis, treatment (including adverse effects of chemotherapy), evolution and outcome.",
"#### Who are the source language producers?\n\n\nHumans; there is no machine-generated data.",
"### Annotations",
"#### Annotation process\n\n\nThe manual annotation of the Cantemist corpus was performed by clinical experts following the Cantemist guidelines (for more detail refer to this paper). These guidelines contain rules for annotating morphology neoplasms in Spanish oncology clinical cases, as well as for mapping these annotations to eCIE-O.\n\n\nA medical doctor was regularly consulted by annotators (scientists with PhDs on cancer-related subjects) for the most difficult pathology expressions. This same doctor periodically checked a random selection of annotated clinical records and these annotations were compared and discussed with the annotators. To normalize a selection of very complex cases, MD specialists in pathology from one of the largest university hospitals in Spain were consulted.",
"#### Who are the annotators?\n\n\nClinical experts.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThis corpus contributes to the development of medical language models in Spanish.",
"### Discussion of Biases\n\n\nNot applicable.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).\n\n\nFor further information, send an email to (plantl-gob-es@URL).\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.",
"### Licensing information\n\n\nThis work is licensed under CC Attribution 4.0 International License.\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)",
"### Contributions\n\n\n[N/A]"
] |
0400dea295f345309e77f92656ef65b636fef238 |
# PharmaCoNER
## Dataset Description
Manually classified collection of Spanish clinical case studies.
- **Homepage:** [zenodo](https://zenodo.org/record/4270158)
- **Paper:** [PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track](https://aclanthology.org/D19-5701/)
- **Point of Contact:** [email protected]
### Dataset Summary
Manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from [SciELO](https://scielo.org/).
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.
The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
In terms of training examples, this translates to 8,129, 3,787, and 3,952 annotated sentences, respectively.
The original dataset is distributed in [Brat](https://brat.nlplab.org/standoff.html) format.
The annotation of the entire set of entity mentions was carried out by domain experts.
It includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
This dataset was designed for the PharmaCoNER task, sponsored by [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For further information, please visit [the official website](https://temu.bsc.es/pharmaconer/).
### Supported Tasks
Named Entity Recognition (NER)
### Languages
- Spanish (es)
### Directory Structure
* README.md
* pharmaconer.py
* dev-set_1.1.conll
* test-set_1.1.conll
* train-set_1.1.conll
## Dataset Structure
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has four columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
<pre>
La S0004-06142006000900008-1 123_125 O
paciente S0004-06142006000900008-1 126_134 O
tenía S0004-06142006000900008-1 135_140 O
antecedentes S0004-06142006000900008-1 141_153 O
de S0004-06142006000900008-1 154_156 O
hipotiroidismo S0004-06142006000900008-1 157_171 O
, S0004-06142006000900008-1 171_172 O
hipertensión S0004-06142006000900008-1 173_185 O
arterial S0004-06142006000900008-1 186_194 O
en S0004-06142006000900008-1 195_197 O
tratamiento S0004-06142006000900008-1 198_209 O
habitual S0004-06142006000900008-1 210_218 O
con S0004-06142006000900008-1 219-222 O
atenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES
y S0004-06142006000900008-1 232_233 O
enalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES
</pre>
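For evaluation or error analysis, it is often handy to decode the IOB tags back into entity mentions. The following is a minimal sketch, assuming standard IOB semantics (`B-X` opens a mention of type `X`, `I-X` continues it, `O` is outside any mention):

```python
def iob_to_mentions(tokens, tags):
    """Group IOB-tagged tokens into (entity_type, text) mentions.

    Assumes standard IOB semantics; mention text is rebuilt by joining
    tokens with single spaces, which is an approximation of the source.
    """
    mentions, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_tokens:                        # close the previous mention
                mentions.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_tokens:
            current_tokens.append(token)              # continue the open mention
        else:                                         # 'O' or stray 'I-'
            if current_tokens:
                mentions.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:                                # flush the last mention
        mentions.append((current_type, " ".join(current_tokens)))
    return mentions

tokens = ["con", "atenolol", "y", "enalapril"]
tags = ["O", "B-NORMALIZABLES", "O", "B-NORMALIZABLES"]
print(iob_to_mentions(tokens, tags))
# → [('NORMALIZABLES', 'atenolol'), ('NORMALIZABLES', 'enalapril')]
```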
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 8,129 |
| `dev` | 3,787 |
| `test` | 3,952 |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
Manually classified collection of clinical case report sections. The clinical cases were not restricted to a single discipline, covering a variety of medical fields including oncology, urology, cardiology, pneumology, and infectious diseases. This diversity is key to covering a broad set of chemicals and drugs.
#### Who are the source language producers?
Humans; there is no machine-generated data.
### Annotations
#### Annotation process
The annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks: the guidelines used for those tracks were translated into Spanish and adapted to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts. The adaptation, translation and refinement of the guidelines were done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through interannotator agreement (IAA) studies until a high annotation quality in terms of IAA was reached.
#### Who are the annotators?
Practicing physicians and medicinal chemistry experts.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
For further information, send an email to ([email protected]).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```bibtex
@inproceedings{,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
### Contributions
[N/A]
| PlanTL-GOB-ES/pharmaconer | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"biomedical",
"clinical",
"spanish",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "tags": ["biomedical", "clinical", "spanish"]} | 2022-11-18T12:06:36+00:00 | [] | [
"es"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #biomedical #clinical #spanish #region-us
| PharmaCoNER
===========
Dataset Description
-------------------
Manually classified collection of Spanish clinical case studies.
* Homepage: zenodo
* Paper: PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track
* Point of Contact: encargo-pln-life@URL
### Dataset Summary
Manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.
The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
In terms of training examples, this translates to 8,129, 3,787, and 3,952 annotated sentences, respectively.
The original dataset is distributed in Brat format.
The annotation of the entire set of entity mentions was carried out by domain experts.
It includes the following 4 entity types: NORMALIZABLES, NO\_NORMALIZABLES, PROTEINAS and UNCLEAR.
This dataset was designed for the PharmaCoNER task, sponsored by Plan-TL.
For further information, please visit the official website.
### Supported Tasks
Named Entity Recognition (NER)
### Languages
* Spanish (es)
### Directory Structure
* URL
* URL
* dev-set\_1.1.conll
* test-set\_1.1.conll
* train-set\_1.1.conll
Dataset Structure
-----------------
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has four columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
```
La S0004-06142006000900008-1 123_125 O
paciente S0004-06142006000900008-1 126_134 O
tenía S0004-06142006000900008-1 135_140 O
antecedentes S0004-06142006000900008-1 141_153 O
de S0004-06142006000900008-1 154_156 O
hipotiroidismo S0004-06142006000900008-1 157_171 O
, S0004-06142006000900008-1 171_172 O
hipertensión S0004-06142006000900008-1 173_185 O
arterial S0004-06142006000900008-1 186_194 O
en S0004-06142006000900008-1 195_197 O
tratamiento S0004-06142006000900008-1 198_209 O
habitual S0004-06142006000900008-1 210_218 O
con S0004-06142006000900008-1 219-222 O
atenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES
y S0004-06142006000900008-1 232_233 O
enalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES
```
### Data Splits
Dataset Creation
----------------
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
Manually classified collection of clinical case report sections. The clinical cases were not restricted to a single discipline, covering a variety of medical fields including oncology, urology, cardiology, pneumology, and infectious diseases. This diversity is key to covering a broad set of chemicals and drugs.
#### Who are the source language producers?
Humans; there is no machine-generated data.
### Annotations
#### Annotation process
The annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks: the guidelines used for those tracks were translated into Spanish and adapted to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts. The adaptation, translation and refinement of the guidelines were done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through interannotator agreement (IAA) studies until a high annotation quality in terms of IAA was reached.
#### Who are the annotators?
Practicing physicians and medicinal chemistry experts.
### Personal and Sensitive Information
No personal or sensitive information included.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
[N/A]
Additional Information
----------------------
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).
For further information, send an email to plantl-gob-es@URL.
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Licensing information
This work is licensed under CC Attribution 4.0 International License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| [
"### Dataset Summary\n\n\nManually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO.\n\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.\nThe training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\nIn terms of training examples, this translates to a total of 8129, 3787 and 3952 annotated sentences in each set.\nThe original dataset is distributed in Brat format.\n\n\nThe annotation of the entire set of entity mentions was carried out by domain experts.\nIt includes the following 4 entity types: NORMALIZABLES, NO\\_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\n\nThis dataset was designed for the PharmaCoNER task, sponsored by Plan-TL.\n\n\nFor further information, please visit the official website.",
"### Supported Tasks\n\n\nNamed Entity Recognition (NER)",
"### Languages\n\n\n* Spanish (es)",
"### Directory Structure\n\n\n* URL\n* URL\n* dev-set\\_1.1.conll\n* test-set\\_1.1.conll\n* train-set\\_1.1.conll\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThree four-column files, one for each split.",
"### Data Fields\n\n\nEvery file has four columns:\n\n\n* 1st column: Word form or punctuation symbol\n* 2nd column: Original BRAT file name\n* 3rd column: Spans\n* 4th column: IOB tag",
"#### Example\n\n\n\n```\n\nLa S0004-06142006000900008-1 123_125 O\npaciente S0004-06142006000900008-1 126_134 O\ntenía S0004-06142006000900008-1 135_140 O\nantecedentes S0004-06142006000900008-1 141_153 O\nde S0004-06142006000900008-1 154_156 O\nhipotiroidismo S0004-06142006000900008-1 157_171 O\n, S0004-06142006000900008-1 171_172 O\nhipertensión S0004-06142006000900008-1 173_185 O\narterial S0004-06142006000900008-1 186_194 O\nen S0004-06142006000900008-1 195_197 O\ntratamiento S0004-06142006000900008-1 198_209 O\nhabitual S0004-06142006000900008-1 210_218 O\ncon S0004-06142006000900008-1 219-222 O\natenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES\ny S0004-06142006000900008-1 232_233 O\nenalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES\n\n```",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFor compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nManually classified collection of clinical case report sections. The clinical cases were not restricted to a single medical discipline, covering a variety of medical disciplines, including oncology, urology, cardiology, pneumology or infectious diseases. This is key to cover a diverse set of chemicals and drugs.",
"#### Who are the source language producers?\n\n\nHumans, there is no machine generated data.",
"### Annotations",
"#### Annotation process\n\n\nThe annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks, translating the guidelines used for these tracks into Spanish and adapting them to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts. The adaptation, translation and refinement of the guidelines was done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through interannotator agreement (IAA) studies until a high annotation quality in terms of IAA was reached.",
"#### Who are the annotators?\n\n\nPracticing physicians and medicinal chemistry experts.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThis corpus contributes to the development of medical language models in Spanish.",
"### Discussion of Biases\n\n\n[N/A]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).\n\n\nFor further information, send an email to (plantl-gob-es@URL).\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.",
"### Licensing information\n\n\nThis work is licensed under CC Attribution 4.0 International License.\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)",
"### Contributions\n\n\n[N/A]"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #biomedical #clinical #spanish #region-us \n",
"### Dataset Summary\n\n\nManually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO.\n\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.\nThe training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\nIn terms of training examples, this translates to a total of 8129, 3787 and 3952 annotated sentences in each set.\nThe original dataset is distributed in Brat format.\n\n\nThe annotation of the entire set of entity mentions was carried out by domain experts.\nIt includes the following 4 entity types: NORMALIZABLES, NO\\_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\n\nThis dataset was designed for the PharmaCoNER task, sponsored by Plan-TL.\n\n\nFor further information, please visit the official website.",
"### Supported Tasks\n\n\nNamed Entity Recognition (NER)",
"### Languages\n\n\n* Spanish (es)",
"### Directory Structure\n\n\n* URL\n* URL\n* dev-set\\_1.1.conll\n* test-set\\_1.1.conll\n* train-set\\_1.1.conll\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThree four-column files, one for each split.",
"### Data Fields\n\n\nEvery file has four columns:\n\n\n* 1st column: Word form or punctuation symbol\n* 2nd column: Original BRAT file name\n* 3rd column: Spans\n* 4th column: IOB tag",
"#### Example\n\n\n\n```\n\nLa S0004-06142006000900008-1 123_125 O\npaciente S0004-06142006000900008-1 126_134 O\ntenía S0004-06142006000900008-1 135_140 O\nantecedentes S0004-06142006000900008-1 141_153 O\nde S0004-06142006000900008-1 154_156 O\nhipotiroidismo S0004-06142006000900008-1 157_171 O\n, S0004-06142006000900008-1 171_172 O\nhipertensión S0004-06142006000900008-1 173_185 O\narterial S0004-06142006000900008-1 186_194 O\nen S0004-06142006000900008-1 195_197 O\ntratamiento S0004-06142006000900008-1 198_209 O\nhabitual S0004-06142006000900008-1 210_218 O\ncon S0004-06142006000900008-1 219-222 O\natenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES\ny S0004-06142006000900008-1 232_233 O\nenalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES\n\n```",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nFor compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nManually classified collection of clinical case report sections. The clinical cases were not restricted to a single medical discipline, covering a variety of medical disciplines, including oncology, urology, cardiology, pneumology or infectious diseases. This is key to cover a diverse set of chemicals and drugs.",
"#### Who are the source language producers?\n\n\nHumans, there is no machine generated data.",
"### Annotations",
"#### Annotation process\n\n\nThe annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks, translating the guidelines used for these tracks into Spanish and adapting them to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts. The adaptation, translation and refinement of the guidelines was done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through interannotator agreement (IAA) studies until a high annotation quality in terms of IAA was reached.",
"#### Who are the annotators?\n\n\nPracticing physicians and medicinal chemistry experts.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThis corpus contributes to the development of medical language models in Spanish.",
"### Discussion of Biases\n\n\n[N/A]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).\n\n\nFor further information, send an email to (plantl-gob-es@URL).\n\n\nThis work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.",
"### Licensing information\n\n\nThis work is licensed under CC Attribution 4.0 International License.\n\n\nCopyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)",
"### Contributions\n\n\n[N/A]"
] |
3214aea668533584604213ce045faf6c31886643 | OpenSLR is a site devoted to hosting speech and language resources,
such as training corpora for speech recognition, and software related to speech recognition.
They intend to be a convenient place for anyone to put resources that they have created,
so that they can be downloaded publicly.
They aim to provide a central, hassle-free place for others to put their speech resources; see http://www.openslr.org/contributions.html.
# Supported Task
Automatic Speech Recognition
# Languages
Gujarati
Identifier: SLR78
Summary: Data set which contains recordings of native speakers of Gujarati.
Category: Speech
License: Attribution-ShareAlike 4.0 International
Downloads (use a mirror closer to you):
about.html [1.5K] (Information about the data set ) Mirrors: [China]
LICENSE [20K] (License information for the data set ) Mirrors: [China]
line_index_female.tsv [423K] (Lines recorded by the female speakers ) Mirrors: [China]
line_index_male.tsv [393K] (Lines recorded by the male speakers ) Mirrors: [China]
gu_in_female.zip [917M] (Archive containing recordings from female speakers ) Mirrors: [China]
gu_in_male.zip [825M] (Archive containing recordings from male speakers ) Mirrors: [China]
About this resource:
This data set contains transcribed high-quality audio of Gujarati sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in that file.
The data set has been manually quality checked, but there might still be errors.
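As a hedged sketch of pairing transcriptions with audio, the snippet below assumes line_index.tsv holds two tab-separated columns (anonymized FileID, transcription) and that each recording is stored as `<FileID>.wav`; verify against the downloaded archives, since the exact layout may differ:

```python
import csv
import pathlib


def load_index(tsv_path, audio_dir):
    """Pair each transcription in a line_index TSV with its wave file.

    Assumes two tab-separated columns per row (anonymized FileID,
    transcription) and audio files named <FileID>.wav in audio_dir.
    """
    pairs = []
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            file_id, transcription = row[0], row[1]
            wav_path = pathlib.Path(audio_dir) / f"{file_id}.wav"
            pairs.append((wav_path, transcription))
    return pairs
```

The resulting (path, transcription) pairs are the usual starting point for building an ASR training manifest.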
Please report any issues in the following issue tracker on GitHub. https://github.com/googlei18n/language-resources/issues
See LICENSE file for license information.
Copyright 2018, 2019 Google, Inc.
If you use this data in publications, please cite it as follows:
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = {979-10-95546-34-4},
}
| Pratik/Gujarati_OpenSLR | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-17T13:36:56+00:00 | [] | [] | TAGS
#region-us
| OpenSLR is a site devoted to hosting speech and language resources,
such as training corpora for speech recognition, and software related to speech recognition.
They intend to be a convenient place for anyone to put resources that they have created,
so that they can be downloaded publicly.
They aim to provide a central, hassle-free place for others to put their speech resources; see URL.
# Supported Task
Automatic Speech Recognition
# Languages
Gujarati
Identifier: SLR78
Summary: Data set which contains recordings of native speakers of Gujarati.
Category: Speech
License: Attribution-ShareAlike 4.0 International
Downloads (use a mirror closer to you):
URL [1.5K] (Information about the data set ) Mirrors: [China]
LICENSE [20K] (License information for the data set ) Mirrors: [China]
line_index_female.tsv [423K] (Lines recorded by the female speakers ) Mirrors: [China]
line_index_male.tsv [393K] (Lines recorded by the male speakers ) Mirrors: [China]
gu_in_female.zip [917M] (Archive containing recordings from female speakers ) Mirrors: [China]
gu_in_male.zip [825M] (Archive file recordings from male speakers ) Mirrors: [China]
About this resource:
This data set contains transcribed high-quality audio of Gujarati sentences recorded by volunteers. The data set consists of wave files and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and the transcription of the audio in that file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the following issue tracker on GitHub. URL
See LICENSE file for license information.
Copyright 2018, 2019 Google, Inc.
If you use this data in publications, please cite it as follows:
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {URL},
ISBN = {979-10-95546-34-4},
}
| [] | [
"TAGS\n#region-us \n"
] |
ced9365b1e8c7f3df9fd8cabc9c2a067cb958791 | # Dataset for annotation
## Files and their description
1. combined_all_stats_csv
- This is the CSV File containing statistics for the patent dataset
> can be downloaded from the Git Repo.
- What are the statistics it shows?
        > This file contains columns such as filename, total publications, total positive samples, total negative samples, and total neutral samples for every week of the year from 2010 to 2020.
2. combined_neut_data_csv
    - This is the CSV File that contains details regarding neutral samples
> Can be downloaded from [here](https://drive.google.com/file/d/1-YCku0qx74cmbSm7R_7t7C6K9jdmvTn1/view?usp=sharing)
    - What data does this file contain?
        > This file contains columns such as publication_number, patent_title, appl_type, and neutral_text for each patent.
3. combined_pos_data_csv
    - This is the CSV File that contains details regarding positive samples
> Can be downloaded from [here](https://drive.google.com/file/d/1-I_HVrIVVZ0hFPk32GcqTpUXdk-4Nr2l/view?usp=sharing).
    - What data does this file contain?
> This file contains columns such as publication_number, patent_title, appl_type, and positive_text for each patent.
4. combined_neg_data_csv
    - This is the CSV File that contains details regarding negative samples
> Can be downloaded from [here](https://drive.google.com/file/d/1-PDCkcrrDCtFnKy5MJFuaKJ0T8A4r1_S/view?usp=sharing)
    - What data does this file contain?
> This file contains columns such as publication_number, patent_title, appl_type, and negative_text for each patent.
5. combined_data_150k
    - This is the CSV File that contains 150k cleaned data samples (50k from each of pos, neg, and neut) required to train models.
> Can be downloaded from [here](https://drive.google.com/file/d/1nMdnmJ3_JdcgNFB8C9O8URUWEqFYg01m/view?usp=sharing)
    - What data does this file contain?
> This file contains two columns text and target.
> Text can be neutral, positive, or negative.
> Target can be label 0, 1, 2 for neutral, positive, and negative respectively. | Renukswamy/Patent_sentiment_analysis | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-26T09:18:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset for annotation
## Files and their description
1. combined_all_stats_csv
- This is the CSV File containing statistics for the patent dataset
> can be downloaded from the Git Repo.
- What are the statistics it shows?
> This file contains columns such as filename, total publications, total positive samples, total negative samples, and total neutral samples per every week of the year from 2010-2020.
2. combined_neut_data_csv
- This is the CSV File, that contains details regarding neutral samples
> Can be downloaded from here
- What data this file contains?
        > This file contains columns such as publication_number, patent_title, appl_type, and neutral_text for each patent.
3. combined_pos_data_csv
- This is the CSV File, that contains details regarding positive samples
> Can be downloaded from here.
- What data this file contains?
> This file contains columns such as publication_number, patent_title, appl_type, and positive_text for each patent.
4. combined_neg_data_csv
- This is the CSV File, that contains details regarding negative samples
> Can be downloaded from here
- What data this file contains?
> This file contains columns such as publication_number, patent_title, appl_type, and negative_text for each patent.
5. combined_data_150k
- This is the CSV File, that contains 150k cleaned data samples(50k from each pos, neg, and neut) required to train models.
> Can be downloaded from here
- What data this file contains?
> This file contains two columns text and target.
> Text can be neutral, positive, or negative.
> Target can be label 0, 1, 2 for neutral, positive, and negative respectively. | [
"# Dataset for annotation",
"## Files and their description\n\n1.\tcombined_all_stats_csv \n\t- This is the CSV File containing statistics for the patent dataset \n\t\t> can be downloaded from the Git Repo.\n\t- What are the statistics it shows?\t\n\t\t> This file contains columns such as filename, total publications, total positive samples, total negative samples, and total neutral samples per every week of the year from 2010-2020. \n\n2. combined_neut_data_csv \n\t- This is the CSV File, that contains details regarding neutral samples \n\t\t> Can be downloaded from here \t\n\t- What data this file contains?\n\t\t> This file contains columns such as publication_number, patent_title, appl_type, and neutral_text for each patents.\t\n\n3. combined_pos_data_csv \n\t- This is the CSV File, that contains details regarding positive samples \n\t\t> Can be downloaded from here. \t\n\t- What data this file contains?\n\t\t> This file contains columns such as publication_number, patent_title, appl_type, and positive_text for each patent.\t \n\n4. combined_neg_data_csv \n\t- This is the CSV File, that contains details regarding negative samples \n\t\t> Can be downloaded from here \t\n\t- What data this file contains?\n\t\t> This file contains columns such as publication_number, patent_title, appl_type, and negative_text for each patent.\t \n\n5. combined_data_150k\n\t- This is the CSV File, that contains 150k cleaned data samples(50k from each pos, neg, and neut) required to train models. \n\t\t> Can be downloaded from here \t\n\t- What data this file contains?\n\t\t> This file contains two columns text and target. \n\t\t> Text can be neutral, positive, or negative. \n\t\t> Target can be label 0, 1, 2 for neutral, positive, and negative respectively."
] | [
"TAGS\n#region-us \n",
"# Dataset for annotation",
"## Files and their description\n\n1.\tcombined_all_stats_csv \n\t- This is the CSV File containing statistics for the patent dataset \n\t\t> can be downloaded from the Git Repo.\n\t- What are the statistics it shows?\t\n\t\t> This file contains columns such as filename, total publications, total positive samples, total negative samples, and total neutral samples per every week of the year from 2010-2020. \n\n2. combined_neut_data_csv \n\t- This is the CSV File, that contains details regarding neutral samples \n\t\t> Can be downloaded from here \t\n\t- What data this file contains?\n\t\t> This file contains columns such as publication_number, patent_title, appl_type, and neutral_text for each patents.\t\n\n3. combined_pos_data_csv \n\t- This is the CSV File, that contains details regarding positive samples \n\t\t> Can be downloaded from here. \t\n\t- What data this file contains?\n\t\t> This file contains columns such as publication_number, patent_title, appl_type, and positive_text for each patent.\t \n\n4. combined_neg_data_csv \n\t- This is the CSV File, that contains details regarding negative samples \n\t\t> Can be downloaded from here \t\n\t- What data this file contains?\n\t\t> This file contains columns such as publication_number, patent_title, appl_type, and negative_text for each patent.\t \n\n5. combined_data_150k\n\t- This is the CSV File, that contains 150k cleaned data samples(50k from each pos, neg, and neut) required to train models. \n\t\t> Can be downloaded from here \t\n\t- What data this file contains?\n\t\t> This file contains two columns text and target. \n\t\t> Text can be neutral, positive, or negative. \n\t\t> Target can be label 0, 1, 2 for neutral, positive, and negative respectively."
] |
c81e931641843d953dc134ae4bcfc27866710dd2 | # Persian Blog
# Dataset Summary
persian_blog is a collection of 400k blog posts. These posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks like language modeling and text generation.
This effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset in datasets library:[Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As discussed above, this dataset contains 400k blog posts. Each post has a single attribute: text. Here is a sample from the dataset:
```
text : چرا کودکان به روانشناس نیاز دارند؟ روانشناسی کودکانکودکان همچون غنچههای زیبا هستند که برای شکوفایی و به ثمر رسیدن نیاز به مراقبت و رسیدگی دارند . روانشناس کودک فردیست که از زمان بدو تولد کودک در مراحل مختلف زندگی کودک در کنار والدین وی میباشد و به چگونگی تربیت کودک کمک میکند تا به بهترین شکل رشد کند . چرا که روانشناس کودک با روحیات ، نیازها و مشکلات کودکان و همچنین چگونگی برقراری ارتباط بین کودک و والدین وی آشنایی کامل دارد .بسیاری از کودکان در سنین مختلف بخاطر شرایط زندگی ، دچار انواع ناسازگاریها و مشکلات در زندگی خود میشود از ناسازگاری کودکان میتوان به موارد زیر اشاره کرد : 1 . پرخاشگری 2 . بد دهنی 3 . اختلال در خوابیدن 4 . اختلال در غذا خوردن و کم اشتهایی 5 . حالت افسردگی و اضطراب 6 . ترس از محیط پیرامون 7 . عدم آمادگی برای ورود به جامعه 8 . وجود مشکل در محیط مدرسه 9 . عدم تمرکز 10 . جویدن ناخن ها 11 . انزوا و گوشه گیری 12 . عدم هم بازی شدن با هم سن و سال هاو .این گونه ناسازگاریها در زندگی آینده کودک نقش به سزایی دارد .روانشناس کودکیک روانشناس کودک خوب ، با دلسوزی و با تکیه بر تجربیات و تخصص خود میکوشد تا رفتارهای کودک را مورد ارزیابی و بررسی قرار دهد سپس سعی میکند تا رفتارهای بعدی کودک را پیش بینی کند و منشاء این مشکلات و سطح پیشرفت آن را بیابد. سپس او بهترین روشهای درمان برای بهبود اختلال کودک را مییابد و با کمک والدین این ناسازگاریها ، مشکلات و ناهنجاریها را حل کرده و نهایتا رابطهای دوستانه و صمیمانه بین کودک و والدین وی ایجاد مینماید تاآیندهای درخشان در انتظار کودک شما باشد .
```
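Since the card targets language modeling, here is a minimal sketch of turning raw posts into fixed-length blocks for causal LM training. Whitespace tokenization and the `</s>` separator are illustrative assumptions, not properties of the dataset:

```python
def make_lm_blocks(posts, block_size=128):
    """Concatenate posts and cut the token stream into fixed-size blocks.

    Uses whitespace "tokens" as a stand-in for a real tokenizer and an
    assumed </s> marker between documents; trailing leftovers are dropped.
    """
    tokens = []
    for post in posts:
        tokens.extend(post.split())
        tokens.append("</s>")
    return [
        tokens[i:i + block_size]
        for i in range(0, len(tokens) - block_size + 1, block_size)
    ]


blocks = make_lm_blocks(["salam donya", "in yek nemune ast"], block_size=3)
```

In practice, the same chunking pattern is applied after a real subword tokenizer, with the block size matched to the model's context length.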
# Citation
```
@misc{persian_blog,
title={persian_blog},
author={Saied Alimoradi},
year={2021}
}
``` | RohanAiLab/persian_blog | [
"task_categories:text-generation",
"task_ids:language-modeling",
"source_datasets:original",
"language:fa",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["fa"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "persian_blog"} | 2022-10-25T08:06:36+00:00 | [] | [
"fa"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #source_datasets-original #language-Persian #region-us
| # Persian Blog
# Dataset Summary
persian_news_dataset is a collection of 400k blog posts. these posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks like language modeling and text generation tasks.
This effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets
# Description
As discussed above, this dataset contains 400k blog posts. Each post has a single attribute: text. Here is a sample from the dataset:
| [
"# Persian Blog",
"# Dataset Summary\n\npersian_news_dataset is a collection of 400k blog posts. these posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks like language modeling and text generation tasks.\n\nThis effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets",
"# Description\n\nAs discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of dataset:"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #source_datasets-original #language-Persian #region-us \n",
"# Persian Blog",
"# Dataset Summary\n\npersian_news_dataset is a collection of 400k blog posts. these posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks like language modeling and text generation tasks.\n\nThis effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets",
"# Description\n\nAs discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of dataset:"
] |
234cb96c879728e1cd24fbc535da0bef6098322b | # Persian Daily News
# Dataset Summary
persian_daily_news is a collection of 2 million unique news articles, each paired with its headline. The dataset can be used in abstractive summarization and paraphrasing tasks.
This effort is part of a bigger perspective to have several datasets in Persian language(and other low resources languages) for different tasks that have two important factors: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset in datasets library:[Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As discussed before, this dataset contains 2M news articles. Each article has these two attributes: text and summary. Here is a sample of dataset:
```
text: به گزارش گروه بین الملل ، خبرگزاری رسمی قطر اعلام کرد، بعد از امضای موافقتنامه همکاری نظامی بین قطر و روسیه این امکان فراهم شده است تا نظامیان قطری برای تکمیل آموزشهای نظامی خود عازم روسیه شده و در آنجا تعلیم ببینند.در چارچوب این قرارداد که امروز یک شنبه توسط سرتیپ ستاد عبدالعزیز صالح السلیطی رییس هییت همکاریهای بین المللی نظامی قطر و سرلشکر ویکتور جوریمیکین رییس اداره عمومی نیروی انسانی وزارت دفاع روسیه به امضا رسید، روابط نظامی بین دوحه و مسکو در زمینه موسسات آموزشهای نظامی شاهد توسه قابل توجهی خواهد شد.به نوشته این خبرگزاری روابط قطر و روسیه در حال گسترش بوده و به سوی شکلگیری مشارکت راهبردی در تمامی زمینهها پیش میرود.
summary: از این پس نظامیان قطری برای آموزش عازم روسیه شده و در موسسات آموزش نظامی این کشور تعلیم خواهند دید.
```
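For abstractive summarization, each record's text/summary pair maps naturally onto encoder-decoder input/target pairs. A hedged sketch follows; the "summarize: " task prefix and the character-level truncation are illustrative choices, not part of the dataset:

```python
def to_seq2seq_examples(records, prefix="summarize: ", max_chars=2000):
    """Map (text, summary) records to input/target pairs for a
    sequence-to-sequence summarizer. The prefix and truncation length
    are arbitrary illustrative defaults.
    """
    return [
        {"input": prefix + rec["text"][:max_chars], "target": rec["summary"]}
        for rec in records
    ]


examples = to_seq2seq_examples([{"text": "x" * 3000, "summary": "kholase"}])
```

A tokenizer-aware truncation (by tokens rather than characters) would normally replace the `max_chars` cut when preparing real training batches.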
# Citation
```
@misc{persian_daily_news,
title={persian_daily_news},
author={Saied Alimoradi},
year={2021}
}
``` | RohanAiLab/persian_daily_news | [
"source_datasets:original",
"language:fa",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["fa"], "source_datasets": ["original"], "task_categories": ["Summarization", "sequence-modeling"], "pretty_name": "Persian Daily News"} | 2022-10-21T15:13:09+00:00 | [] | [
"fa"
] | TAGS
#source_datasets-original #language-Persian #region-us
| # Persian Daily News
# Dataset Summary
persian_daily_news is a collection of 2 million of unique news articles with the headline for each article. dataset can be used in abstractive summarization and paraphrasing tasks.
This effort is part of a bigger perspective to have several datasets in Persian language(and other low resources languages) for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets
# Description
As discussed before, this dataset contains 2M news articles. Each article has these two attributes: text and summary. Here is a sample of dataset:
| [
"# Persian Daily News",
"# Dataset Summary\n\npersian_daily_news is a collection of 2 million of unique news articles with the headline for each article. dataset can be used in abstractive summarization and paraphrasing tasks.\n\nThis effort is part of a bigger perspective to have several datasets in Persian language(and other low resources languages) for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets",
"# Description\n\nAs discussed before, this dataset contains 2M news articles. Each article has these two attributes: text and summary. Here is a sample of dataset:"
] | [
"TAGS\n#source_datasets-original #language-Persian #region-us \n",
"# Persian Daily News",
"# Dataset Summary\n\npersian_daily_news is a collection of 2 million of unique news articles with the headline for each article. dataset can be used in abstractive summarization and paraphrasing tasks.\n\nThis effort is part of a bigger perspective to have several datasets in Persian language(and other low resources languages) for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets",
"# Description\n\nAs discussed before, this dataset contains 2M news articles. Each article has these two attributes: text and summary. Here is a sample of dataset:"
] |
883746f06ee0fb9ef98d37721c257d9287f1256e | # Persian_News_Dataset
# Dataset Summary
persian_news_dataset is a collection of 5 million news articles. The news articles have been gathered from more than 10 news agencies over the last 12 years. This dataset can be used in different NLP tasks such as language modeling, classification, and supervised topic modeling.

This effort is part of a bigger initiative to provide several datasets in Persian for different tasks, with two important properties: `free` and `easy-to-use`. Here is a quick HOW-TO for using this dataset with the datasets library: [Demo-datasets](https://saied71.github.io/RohanAiLab/2021/09/03/Demo-datasets.html)
# Description
As discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of the dataset:
```
text :سهشنبه شب از دور برگشت مرحله نیمهنهایی لیگ قهرمانان اروپا، منچسترسیتی در ورزشگاه «اتحاد» میزبان پاریسنژرمن بود و با ارائه نمایشی حساب شده و تحسین برانگیز به پیروزی دو بر صفر دست یافت.بازی رفت در پاریس با برتری دو بر یک سیتی به اتمام رسیده بود و با این اوصاف تیم تحت هدایت «پپ گواردیولا» در مجموع با پیروزی چهار بر یک، راهی فینال شد.بارش برف موجب سفیدپوش شدن زمین شده بود و همین امر بر عملکرد تیمها تاثیر گذاشت. دیدار در حالی آغاز به کار کرد که «امباپه» ستاره پاریسیها که به تازگی از مصدومیت رهایی پیدا کرده است، نیمکتنشین بود.بازی با حملات میهمان آغاز شد و در دقیقه هفتم داور هلندی با تصمیمی عجیب اعتقاد داشت توپ به دست «زینچنکو» مدافع سیتی برخورد کرده و نقطه پنالتی را نشان داد، اما با استفاده از سیستم کمک داور ویدئویی، پنالتی پس گرفته شد. سیتی خیلی زود به هدفش رسید و در دقیقه ۱۰ حرکت عالی او و پاس به «دیبروین» موجب شد تا توپ در یک رفت و برگشت به «ریاض محرز» رسیده و این بازیکن الجزایری گل نخست بازی را برای میزبان به ارمغان آورد.در دقیقه ۱۶ ضربه سر «مارکینیوش» مدافع پیشتاخته پاریسنژرمن با بدشانسی به تیرک دروازه سیتی برخورد کرد.در ادامه برای دقایقی، بازیکنان در میانه میدان خطاهای متعددی انجام دادند و این امر موجب ایجاد چند درگیری شد.هرچند نماینده فرانسه درپی جبران مافات بود اما برنامهای برای رسیدن به این مهم نداشت تا نیمه نخست با همین یک گل همراه شود.در نیمه دوم هم حملات پاریسیها سودی نداشت و در طرف مقابل منچسترسیتی، بازی بسیار هوشمندانهای ارائه کرد.در دقیقه ۶۲ و در ضد حملهای برق آسا، «فیل فودن» با پاسی عالی توپ را به «ریاض محرز» رساند تا این بازیکن گل دوم خود و تیمش را ثبت کرده و سند صعود سیتی به فینال را امضا کند.در دقیقه ۶۸ «آنخل دیماریا» وینگر آرژانتینی تیم پاریسنژرمن پس از درگیری با «فرناندینو» با کارت قرمز داور از زمین اخراج شد تا کار تیمش تمام شود.در این بازی پاریسنژرمن با تفکرات «پوچتینو»، طراحی حملات خود را به «نیمار» سپرده بود اما این بازیکن مطرح برزیلی با حرکات انفرادی بیش از از اندازه، عملکرد خوبی نداشت و حملات تیمش را خراب کرد.در نهایت بازی با پیروزی سیتی همراه شد و مالکان ثروتمند منچسترسیتی به آرزوی خود رسیده و پس از سالها سرمایهگذاری به دیدار نهایی 
رسیدند. این اولین حضور سیتی در فینال لیگ قهرمانان اروپا است.چهارشنبه شب در دیگر دیدار دور برگشت نیمهنهایی، چلسی انگلیس در ورزشگاه «استمفورد بریج» شهر لندن پذیرای رئالمادرید اسپانیا است. بازی رفت با تساوی یک بر یک به اتمام رسید
title:آرزوی سیتی برآورده شد؛ صعود شاگردان «گواردیولا» به فینال
category:ورزش
```
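For the multi-class classification use mentioned above, the free-text `category` field can be mapped to integer labels; a minimal sketch, assuming only the three attributes listed above (`text`, `title`, `category`):

```python
def build_label_map(records):
    # Sort for a deterministic category -> integer assignment.
    categories = sorted({rec["category"] for rec in records})
    return {cat: idx for idx, cat in enumerate(categories)}

def encode(records, label_map):
    # Keep the article body as the input and the encoded category as the label.
    return [
        {"text": rec["text"], "label": label_map[rec["category"]]}
        for rec in records
    ]

records = [
    {"text": "...", "title": "...", "category": "ورزش"},
    {"text": "...", "title": "...", "category": "سیاست"},
]
label_map = build_label_map(records)
encoded = encode(records, label_map)
```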
# Citation
```
[email protected]
title={persian_news_dataset},
author={Saied Alimoradi},
year={2021}
}
``` | RohanAiLab/persian_news_dataset | [
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:multi-class-classification",
"source_datasets:original",
"language:fa",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["fa"], "source_datasets": ["original"], "task_categories": ["text-classification", "sequence-modeling"], "task_ids": ["language-modeling", "multi-class-classification"], "pretty_name": "persian_news_datset"} | 2022-10-21T15:13:59+00:00 | [] | [
"fa"
] | TAGS
#task_categories-text-classification #task_ids-language-modeling #task_ids-multi-class-classification #source_datasets-original #language-Persian #region-us
| # Persian_News_Dataset
# Dataset Summary
persian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies for the last 12 years. This dataset can be used in different NLP tasks like language modeling, classification, supervised topic modeling,...
This effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets
# Description
As discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of dataset:
| [
"# Persian_News_Dataset",
"# Dataset Summary\n\npersian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies for the last 12 years. This dataset can be used in different NLP tasks like language modeling, classification, supervised topic modeling,...\n\nThis effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets",
"# Description\n\nAs discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of dataset:"
] | [
"TAGS\n#task_categories-text-classification #task_ids-language-modeling #task_ids-multi-class-classification #source_datasets-original #language-Persian #region-us \n",
"# Persian_News_Dataset",
"# Dataset Summary\n\npersian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies for the last 12 years. This dataset can be used in different NLP tasks like language modeling, classification, supervised topic modeling,...\n\nThis effort is part of a bigger perspective to have several datasets in Persian language for different tasks that have two important factors: 'free' and 'easy-to-use'. Here is a quick HOW-TO for using this dataset in datasets library:Demo-datasets",
"# Description\n\nAs discussed before, this dataset contains 5M news articles. Each article has these three attributes: text, title, category. Here is a sample of dataset:"
] |
5a314e73c690b159983b2e45b9d4c0500a80cfd0 | # PersianQA: a dataset for Persian Question Answering
Persian Question Answering (PersianQA) Dataset is a reading comprehension dataset on Persian Wikipedia. The crowd-sourced dataset consists of more than 9,000 entries. Each entry is either an impossible-to-answer question or a question with one or more answers spanning the passage (the context) from which the questioner proposed the question. Much like the SQuAD2.0 dataset, the impossible or unanswerable questions can be utilized to create a system which "knows that it doesn't know the answer".

On top of that, the dataset has roughly 900 test examples available. Moreover, the first models trained on the dataset, based on Transformers, are available.

All the crowd workers of the dataset are native Persian speakers. Also, it is worth mentioning that the contexts are collected from all categories of the Wiki (Historical, Religious, Geography, Science, etc.).
At the moment, each context has 7 pairs of questions with one answer and 3 impossible questions.
## Dataset
### Access/Download
- You can find the data under the [`dataset/`](https://github.com/sajjjadayobi/PersianQA/tree/main/dataset) directory and use it like this:
```python
from read_ds import read_qa  # available at src/read_ds.py
train_ds = read_qa('pqa_train.json')
test_ds = read_qa('pqa_test.json')
```
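The authoritative JSON schema is defined by `read_qa` in `src/read_ds.py`; as a rough, hypothetical sketch of such a reader — assuming the SQuAD2.0-style nesting the card compares itself to, which may differ from the real files:

```python
import json

def flatten_squad(raw):
    # Assumed SQuAD2.0-style nesting: data -> paragraphs -> qas.
    examples = []
    for article in raw["data"]:
        for para in article["paragraphs"]:
            for qa in para["qas"]:
                answers = qa.get("answers", [])
                examples.append({
                    "context": para["context"],
                    "question": qa["question"],
                    "answers": answers,
                    # Entries without answers are the unanswerable ones.
                    "is_impossible": qa.get("is_impossible", not answers),
                })
    return examples

def read_qa_sketch(path):
    with open(path, encoding="utf-8") as f:
        return flatten_squad(json.load(f))
```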
- Alternatively, you can also access the data through the HuggingFace🤗 datasets library
- First, you need to install datasets using this command in your terminal:
```sh
pip install -q datasets
```
- Then import `persian_qa` dataset using `load_dataset`:
```python
from datasets import load_dataset
dataset = load_dataset("SajjadAyoubi/persian_qa")
```
### Examples
| Title | Context | Question | Answer |
| :---: | :---------------------: | :--------: | :----: |
| خوب، بد، زشت | خوب، بد، زشت یک فیلم درژانر وسترن اسپاگتی حماسی است که توسط سرجو لئونه در سال ۱۹۶۶ در ایتالیا ساخته شد. زبانی که بازیگران این فیلم به آن تکلم میکنند مخلوطی از ایتالیایی و انگلیسی است. این فیلم سومین (و آخرین) فیلم از سهگانهٔ دلار (Dollars Trilogy) سرجو لئونه است. این فیلم در حال حاضر در فهرست ۲۵۰ فیلم برتر تاریخ سینما در وبگاه IMDB با امتیاز ۸٫۸ از ۱۰، رتبهٔ هشتم را به خود اختصاص دادهاست و به عنوان بهترین فیلم وسترن تاریخ سینمای جهان شناخته میشود. «خوب» (کلینت ایستوود، در فیلم، با نام «بلوندی») و «زشت» (ایلای والاک، در فیلم، با نام «توکو») با هم کار میکنند و با شگرد خاصی، به گول زدن کلانترهای مناطق مختلف و پول درآوردن از این راه میپردازند. «بد» (لی وان کلیف) آدمکشی حرفهای است که بهخاطر پول حاضر به انجام هر کاری است. «بد»، که در فیلم او را «اِنجل آیز (اِینجل آیز)» (به انگلیسی: Angel Eyes) صدا میکنند. بهدنبال گنجی است که در طی جنگهای داخلی آمریکا، به دست سربازی به نام «جکسون»، که بعدها به «کارسون» نامش را تغییر داده، مخفی شدهاست. | در فیلم خوب بد زشت شخصیت ها کجایی صحبت می کنند؟ | مخلوطی از ایتالیایی و انگلیسی |
| قرارداد کرسنت | قرارداد کرسنت قراردادی برای فروش روزانه معادل ۵۰۰ میلیون فوت مکعب، گاز ترش میدان سلمان است، که در سال ۱۳۸۱ و در زمان وزارت بیژن نامدار زنگنه در دولت هفتم مابین شرکت کرسنت پترولیوم و شرکت ملی نفت ایران منعقد گردید. مذاکرات اولیه این قرارداد از سال ۱۹۹۷ آغاز شد و در نهایت، سال ۲۰۰۱ (۱۳۸۱) به امضای این تفاهم نامه مشترک انجامید. بر اساس مفاد این قرارداد، مقرر شده بود که از سال ۲۰۰۵ با احداث خط لوله در خلیج فارس، گاز فرآورده نشده میدان سلمان (مخزن مشترک با ابوظبی)، به میزان روزانه ۵۰۰ میلیون فوت مکعب (به قول برخی منابع ۶۰۰ میلیون فوت مکعب) به امارات صادر شود. این قرارداد مطابق قوانین داخلی ایران بسته شده و تنها قرارداد نفتی ایران است که از طرف مقابل خود، تضمین گرفتهاست. اجرای این پروژه در سال ۱۳۸۴ با دلایل ارائه شده از سوی دیوان محاسبات ایران از جمله تغییر نیافتن بهای گاز صادراتی و ثابت ماندن آن در هفت سال اول اجرای قرارداد متوقف شد. این در حالی است که طبق تعریف حقوقی، دیوان محاسبات ایران، حق دخالت در قراردادها، پیش از آنکه قراردادها اجرایی و مالی شوند را ندارد. | طرفین قرار داد کرسنت کیا بودن؟ | کرسنت پترولیوم و شرکت ملی نفت ایران |
| چهارشنبهسوری | چهارشنبهسوری یکی از جشنهای ایرانی است که از غروب آخرین سهشنبه ی ماه اسفند، تا پس از نیمهشب تا آخرین چهارشنبه ی سال، برگزار میشود و برافروختن و پریدن از روی آتش مشخصهٔ اصلی آن است. این جشن، نخستین جشن از مجموعهٔ جشنها و مناسبتهای نوروزی است که با برافروختن آتش و برخی رفتارهای نمادین دیگر، بهصورت جمعی در فضای باز برگزار میشود. بهگفتهٔ ابراهیم پورداوود چهارشنبهسوری ریشه در گاهنبارِ هَمَسْپَتْمَدَم زرتشتیان و نیز جشن نزول فروهرها دارد که شش روز پیش از فرارسیدن نوروز برگزار میشد. احتمال دیگر این است که چهارشنبهسوری بازمانده و شکل تحولیافتهای از جشن سده باشد، که احتمال بعیدی است. علاوه برافروختن آتش، آیینهای مختلف دیگری نیز در بخشهای گوناگون ایران در زمان این جشن انجام میشوند. برای نمونه، در تبریز، مردم به چهارشنبهبازار میروند که با چراغ و شمع، بهطرز زیبایی چراغانی شدهاست. هر خانواده یک آینه، دانههای اسفند، و یک کوزه برای سال نو خریداری میکنند. همهساله شهروندانی از ایران در اثر انفجارهای ناخوشایند مربوط به این جشن، کشته یا مصدوم میشوند. | نام جشن اخرین شنبه ی سال چیست؟ | No Answer |
### Statistic
| Split | # of instances | # of unanswerables | avg. question length | avg. paragraph length | avg. answer length |
| :---: | :------------: | :----------------: | :------------------: | :-------------------: | :----------------: |
| Train | 9,000 | 2,700 | 8.39 | 224.58 | 9.61 |
| Test | 938 | 280 | 8.02 | 220.18 | 5.99 |
The lengths are on the token level.
- for more about data and more example see [here](https://github.com/sajjjadayobi/PersianQA/tree/main/dataset#readme)
## Models
Currently, two baseline models trained on the dataset are available on the [HuggingFace🤗](https://huggingface.co/SajjadAyoubi/) model hub.
## Citation
We have not published any papers on this work yet.
However, if you use the dataset, please cite us properly with an entry like the one below.
```bibtex
@misc{PersianQA,
author = {Ayoubi, Sajjad and Davoodeh, Mohammad Yasin},
title = {PersianQA: a dataset for Persian Question Answering},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SajjjadAyobi/PersianQA}},
}
```
| SajjadAyoubi/persian_qa | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-04-29T05:11:18+00:00 | [] | [] | TAGS
#region-us
| PersianQA: a dataset for Persian Question Answering
===================================================
Persian Question Answering (PersianQA) Dataset is a reading comprehension dataset on Persian Wikipedia. The crowd-sourced dataset consists of more than 9,000 entries. Each entry is either an impossible-to-answer question or a question with one or more answers spanning the passage (the context) from which the questioner proposed the question. Much like the SQuAD2.0 dataset, the impossible or unanswerable questions can be utilized to create a system which "knows that it doesn't know the answer".

On top of that, the dataset has roughly 900 test examples available. Moreover, the first models trained on the dataset, based on Transformers, are available.

All the crowd workers of the dataset are native Persian speakers. Also, it is worth mentioning that the contexts are collected from all categories of the Wiki (Historical, Religious, Geography, Science, etc.).
At the moment, each context has 7 pairs of questions with one answer and 3 impossible questions.
Dataset
-------
### Access/Download
* You can find the data under the 'dataset/' directory and use it like this:
* Alternatively, you can also access the data through the HuggingFace datasets library
+ First, you need to install datasets using this command in your terminal:
* Then import 'persian\_qa' dataset using 'load\_dataset':
### Examples
### Statistic
The lengths are on the token level.
* for more about data and more example see here
Models
------
Currently, two models (baseline) on HuggingFace model hub are using the dataset.
We have not published any papers on this work yet.
However, if you use the dataset, please cite us properly with an entry like the one below.
| [
"### Access/Download\n\n\n* You can find the data under the 'dataset/' directory. and use it like this\n* Alternatively, you can also access the data through the HuggingFace datasets library\n\n\n\t+ First, you need to install datasets using this command in your terminal:\n* Then import 'persian\\_qa' dataset using 'load\\_dataset':",
"### Examples",
"### Statistic\n\n\n\nThe lengths are on the token level.\n\n\n* for more about data and more example see here\n\n\nModels\n------\n\n\nCurrently, two models (baseline) on HuggingFace model hub are using the dataset.\n\n\nYet, we didn't publish any papers on the work.\nHowever, if you did, please cite us properly with an entry like one below."
] | [
"TAGS\n#region-us \n",
"### Access/Download\n\n\n* You can find the data under the 'dataset/' directory. and use it like this\n* Alternatively, you can also access the data through the HuggingFace datasets library\n\n\n\t+ First, you need to install datasets using this command in your terminal:\n* Then import 'persian\\_qa' dataset using 'load\\_dataset':",
"### Examples",
"### Statistic\n\n\n\nThe lengths are on the token level.\n\n\n* for more about data and more example see here\n\n\nModels\n------\n\n\nCurrently, two models (baseline) on HuggingFace model hub are using the dataset.\n\n\nYet, we didn't publish any papers on the work.\nHowever, if you did, please cite us properly with an entry like one below."
] |
78526be8ce25858cf5ce2c4719fd9d7af4d811ad |
# Dataset Card for "nepalitext-language-model-dataset"
### Dataset Summary
"NepaliText" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases/sentences/paragraphs) extracted by combining the datasets: [OSCAR](https://huggingface.co/datasets/oscar) , [cc100](https://huggingface.co/datasets/cc100) and a set of scraped Nepali articles on Wikipedia.
### Supported Tasks and Leaderboards
This dataset is intended to pre-train language models and word representations on Nepali Language.
### Languages
The data is focused on Nepali language, but may have instances of other languages as well.
## Dataset Structure
### Data Instances
An example:
```
{'text': 'घरेलु मैदानमा भएको च्याम्पियन्स लिगको दोस्रो लेगमा एथ्लेटिको मड्रिडले आर्सनललाई एक शून्यले हराउँदै समग्रमा दुई एकको अग्रताका साथ फाइनलमा प्रवेश गरेको हो ।\n'}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
### Data Splits
train|test|
----:|---:|
13141222|268189|
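For the language-modeling use described above, such text sequences are typically concatenated and re-chunked into fixed-length blocks before pretraining; a minimal sketch (whitespace splitting stands in for a real subword tokenizer, and the block size is illustrative):

```python
def group_into_blocks(texts, block_size=128):
    # Concatenate token streams, then cut into equal-sized blocks,
    # dropping the ragged tail (a common choice for LM pretraining).
    tokens = []
    for text in texts:
        tokens.extend(text.split())
    usable = (len(tokens) // block_size) * block_size
    return [tokens[i:i + block_size] for i in range(0, usable, block_size)]
```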
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Being extracted and scraped from a variety of internet sources, personal and sensitive information might be present. This must be considered before training deep learning models, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@Sakonii](https://github.com/Sakonii) for adding this dataset. | Sakonii/nepalitext-language-model-dataset | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:extended|oscar",
"source_datasets:extended|cc100",
"language:ne",
"license:cc0-1.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found", "other"], "language": ["ne"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "source_datasets": ["extended|oscar", "extended|cc100"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "nepalitext-language-model-dataset"} | 2022-10-25T05:14:22+00:00 | [] | [
"ne"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #language_creators-other #multilinguality-monolingual #source_datasets-extended|oscar #source_datasets-extended|cc100 #language-Nepali (macrolanguage) #license-cc0-1.0 #region-us
| Dataset Card for "nepalitext-language-model-dataset"
====================================================
### Dataset Summary
"NepaliText" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases/sentences/paragraphs) extracted by combining the datasets: OSCAR , cc100 and a set of scraped Nepali articles on Wikipedia.
### Supported Tasks and Leaderboards
This dataset is intended to pre-train language models and word representations on Nepali Language.
### Languages
The data is focused on Nepali language, but may have instances of other languages as well.
Dataset Structure
-----------------
### Data Instances
An example:
### Data Fields
The data fields are:
* 'text': a 'string' feature.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Being extracted and scraped from a variety of internet sources, personal and sensitive information might be present. This must be considered before training deep learning models, especially in the case of text-generation models.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @Sakonii for adding this dataset.
| [
"### Dataset Summary\n\n\n\"NepaliText\" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases/sentences/paragraphs) extracted by combining the datasets: OSCAR , cc100 and a set of scraped Nepali articles on Wikipedia.",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended to pre-train language models and word representations on Nepali Language.",
"### Languages\n\n\nThe data is focused on Nepali language, but may have instances of other languages as well.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example:",
"### Data Fields\n\n\nThe data fields are:\n\n\n* 'text': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations\n\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nBeing extracted and scraped from variety of internet sources, Personal and sensitive information might be present. This must be considered before training deep learning models, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @Sakonii for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #language_creators-other #multilinguality-monolingual #source_datasets-extended|oscar #source_datasets-extended|cc100 #language-Nepali (macrolanguage) #license-cc0-1.0 #region-us \n",
"### Dataset Summary\n\n\n\"NepaliText\" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases/sentences/paragraphs) extracted by combining the datasets: OSCAR , cc100 and a set of scraped Nepali articles on Wikipedia.",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended to pre-train language models and word representations on Nepali Language.",
"### Languages\n\n\nThe data is focused on Nepali language, but may have instances of other languages as well.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example:",
"### Data Fields\n\n\nThe data fields are:\n\n\n* 'text': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations\n\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nBeing extracted and scraped from variety of internet sources, Personal and sensitive information might be present. This must be considered before training deep learning models, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @Sakonii for adding this dataset."
] |
e4b5e7000236325106d5cb9f4d6ed874bbd18976 | Hello
| Samip/func | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-11-15T05:35:47+00:00 | [] | [] | TAGS
#region-us
| Hello
| [] | [
"TAGS\n#region-us \n"
] |
2b12fc88f0c661409b85a55136dfd1f030a9cf67 | This is a dataset extracted from the Natural Questions dataset
This dataset is currently under development | SaulLu/Natural_Questions_HTML | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-27T17:58:58+00:00 | [] | [] | TAGS
#region-us
| This is a dataset extracted from the Natural Questions dataset
This dataset is currently under development | [] | [
"TAGS\n#region-us \n"
] |
e73b4ff89f31e0fbbb720aae0d8079bdcdf93d7d |
# Dataset Card for GitHub Issues
## Dataset Description
This is an example dataset made as part of the Hugging Face course.
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["en-US"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "github-issues"} | 2022-10-25T08:06:38+00:00 | [] | [
"en-US"
] | TAGS
#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #region-us
|
# Dataset Card for GitHub Issues
## Dataset Description
This is an example dataset made as part of the Hugging Face course.
"# Dataset Card for GitHub Issues",
"## Dataset Description\nthis was an example dataset made from the huggingface course"
] | [
"TAGS\n#task_categories-text-classification #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #region-us \n",
"# Dataset Card for GitHub Issues",
"## Dataset Description\nthis was an example dataset made from the huggingface course"
] |
f1b91292074e7cfb69be58b642d583ec262f30ed | This is a version of the [20 newsgroups dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#the-20-newsgroups-text-dataset) that is provided in Scikit-learn. From the Scikit-learn docs:
> The 20 newsgroups dataset comprises around 18000 newsgroups posts on 20 topics split in two subsets: one for training (or development) and the other one for testing (or for performance evaluation). The split between the train and test set is based upon messages posted before and after a specific date.
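The recommended filtering for this dataset removes headers, signature blocks, and quotations from each post; a rough sketch of that kind of stripping (loosely modeled on scikit-learn's `strip_newsgroup_*` helpers, not their exact code):

```python
def strip_headers(text):
    # Everything before the first blank line is treated as the header block.
    _head, sep, body = text.partition("\n\n")
    return body if sep else text

def strip_quoting(text):
    # Drop lines that look like quoted replies ("> ..." or "... writes:").
    kept = [
        line for line in text.split("\n")
        if not (line.lstrip().startswith(">") or line.rstrip().endswith("writes:"))
    ]
    return "\n".join(kept)

def strip_signature(text):
    # Cut at the conventional "-- " signature delimiter, if present.
    body, _sep, _sig = text.partition("\n-- \n")
    return body
```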
We followed the [recommended practice](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html#filtering-text-for-more-realistic-training) to remove headers, signature blocks, and quotations from each news article. | SetFit/20_newsgroups | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-03T08:27:00+00:00 | [] | [] | TAGS
#region-us
| This is a version of the 20 newsgroups dataset that is provided in Scikit-learn. From the Scikit-learn docs:
> The 20 newsgroups dataset comprises around 18000 newsgroups posts on 20 topics split in two subsets: one for training (or development) and the other one for testing (or for performance evaluation). The split between the train and test set is based upon messages posted before and after a specific date.
We followed the recommended practice to remove headers, signature blocks, and quotations from each news article. | [] | [
"TAGS\n#region-us \n"
] |
0a34640b3cac3affef6f0abdfb81e3d0ecffcf92 | # TREC Question Classification
Question classification in coarse and fine-grained categories.
Source:
[Experimental Data for Question Classification](https://cogcomp.seas.upenn.edu/Data/QA/QC/)
Xin Li, Dan Roth, Learning Question Classifiers. COLING'02, Aug., 2002. | SetFit/TREC-QC | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-15T22:42:56+00:00 | [] | [] | TAGS
#region-us
| # TREC Question Classification
Question classification in coarse and fine-grained categories.
Source:
Experimental Data for Question Classification
Xin Li, Dan Roth, Learning Question Classifiers. COLING'02, Aug., 2002. | [
"# TREC Question Classification\r\n\r\nQuestion classification in coarse and fine-grained categories.\r\n\r\nSource: \r\n\r\nExperimental Data for Question Classification\r\n\r\nXin Li, Dan Roth, Learning Question Classifiers. COLING'02, Aug., 2002."
] | [
"TAGS\n#region-us \n",
"# TREC Question Classification\r\n\r\nQuestion classification in coarse and fine-grained categories.\r\n\r\nSource: \r\n\r\nExperimental Data for Question Classification\r\n\r\nXin Li, Dan Roth, Learning Question Classifiers. COLING'02, Aug., 2002."
] |
53e2aca73f7af37bfc24e0670c361d39809ad2ab | # Amazon Multilingual Counterfactual Dataset
The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.
Please see the [paper](https://arxiv.org/abs/2104.06893) for the data statistics, detailed description of data collection and annotation.
GitHub repo URL: https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset
## Usage
You can load each of the languages as follows:
```
from datasets import get_dataset_config_names, load_dataset
dataset_id = "SetFit/amazon_counterfactual"
# Returns ['de', 'en', 'en-ext', 'ja']
configs = get_dataset_config_names(dataset_id)
# Load English subset
dset = load_dataset(dataset_id, name="en")
``` | SetFit/amazon_counterfactual | [
"arxiv:2104.06893",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-08T10:15:40+00:00 | [
"2104.06893"
] | [] | TAGS
#arxiv-2104.06893 #region-us
| # Amazon Multilingual Counterfactual Dataset
The dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
The key features of this dataset are:
* The dataset is multilingual and contains sentences in English, German, and Japanese.
* The labeling was done by professional linguists and high quality was ensured.
* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.
Please see the paper for the data statistics, detailed description of data collection and annotation.
GitHub repo URL: URL
## Usage
You can load each of the languages as follows:
| [
"# Amazon Multilingual Counterfactual Dataset \n\nThe dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).\n\nThe key features of this dataset are:\n\n* The dataset is multilingual and contains sentences in English, German, and Japanese.\n* The labeling was done by professional linguists and high quality was ensured.\n* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.\n\nPlease see the paper for the data statistics, detailed description of data collection and annotation.\n\n\nGitHub repo URL: URL",
"## Usage\n\nYou can load each of the languages as follows:"
] | [
"TAGS\n#arxiv-2104.06893 #region-us \n",
"# Amazon Multilingual Counterfactual Dataset \n\nThe dataset contains sentences from Amazon customer reviews (sampled from Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form – If p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).\n\nThe key features of this dataset are:\n\n* The dataset is multilingual and contains sentences in English, German, and Japanese.\n* The labeling was done by professional linguists and high quality was ensured.\n* The dataset is supplemented with the annotation guidelines and definitions, which were worked out by professional linguists. We also provide the clue word lists, which are typical for counterfactual sentences and were used for initial data filtering. The clue word lists were also compiled by professional linguists.\n\nPlease see the paper for the data statistics, detailed description of data collection and annotation.\n\n\nGitHub repo URL: URL",
"## Usage\n\nYou can load each of the languages as follows:"
] |
bb7e3bd207c48e2c4046f0c16107beda80c91342 | # Amazon Counterfactual Statements
This dataset is the *en-ext* split from [SetFit/amazon_counterfactual](https://huggingface.co/datasets/SetFit/amazon_counterfactual). As the original test set is rather small (1333 examples), a different split was created with 50-50 for training & testing.
The dataset is described in [amazon-multilingual-counterfactual-dataset](https://github.com/amazon-research/amazon-multilingual-counterfactual-dataset) / [Paper](https://arxiv.org/pdf/2104.06893.pdf)
It contains statements from Amazon reviews about events that did not or cannot take place. | SetFit/amazon_counterfactual_en | [
"arxiv:2104.06893",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-11T13:03:45+00:00 | [
"2104.06893"
] | [] | TAGS
#arxiv-2104.06893 #region-us
| # Amazon Counterfactual Statements
This dataset is the *en-ext* split from SetFit/amazon_counterfactual. As the original test set is rather small (1333 examples), a different split was created with 50-50 for training & testing.
The dataset is described in amazon-multilingual-counterfactual-dataset / Paper
It contains statements from Amazon reviews about events that did not or cannot take place. | [
"# Amazon Counterfactual Statements\r\n\r\nThis dataset is the *en-ext* split from SetFit/amazon_counterfactual. As the original test set is rather small (1333 examples), a different split was created with 50-50 for training & testing.\r\n\r\nThe dataset is described in amazon-multilingual-counterfactual-dataset / Paper\r\n\r\nIt contains statements from Amazon reviews about events that did not or cannot take place."
] | [
"TAGS\n#arxiv-2104.06893 #region-us \n",
"# Amazon Counterfactual Statements\r\n\r\nThis dataset is the *en-ext* split from SetFit/amazon_counterfactual. As the original test set is rather small (1333 examples), a different split was created with 50-50 for training & testing.\r\n\r\nThe dataset is described in amazon-multilingual-counterfactual-dataset / Paper\r\n\r\nIt contains statements from Amazon reviews about events that did not or cannot take place."
] |
3587946a5360b8c81993d551b16f4951e025fb6d | # BBC News Topic Classification
Dataset on [BBC News Topic Classification](https://www.kaggle.com/yufengdev/bbc-text-categorization/data): 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech. | SetFit/bbc-news | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-18T05:58:34+00:00 | [] | [] | TAGS
#region-us
| # BBC News Topic Classification
Dataset on BBC News Topic Classification: 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech. | [
"# BBC News Topic Classification\r\n\r\nDataset on BBC News Topic Classification: 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech."
] | [
"TAGS\n#region-us \n",
"# BBC News Topic Classification\r\n\r\nDataset on BBC News Topic Classification: 2225 articles, each labeled under one of 5 categories: business, entertainment, politics, sport or tech."
] |
6c362e04d016f6b6a9377e85c3b944140f0b96c9 | ** Attention: There appears to be an overlap between train and test. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy)** | SetFit/emotion | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-04-03T19:47:37+00:00 | [] | [] | TAGS
#region-us
| Attention: There appears to be an overlap between train and test. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy) | [] | [
"TAGS\n#region-us \n"
] |
1916f66c89d52221ae33eb57d44498b4f3a5df22 | This is a version of the [Enron Spam Email Dataset](https://github.com/MWiechmann/enron_spam_data), containing emails (subject + message) and a label whether it is spam or ham. | SetFit/enron_spam | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-16T18:12:43+00:00 | [] | [] | TAGS
#region-us
| This is a version of the Enron Spam Email Dataset, containing emails (subject + message) and a label whether it is spam or ham. | [] | [
"TAGS\n#region-us \n"
] |
3816a4232f05ebec284ffb76a2b6565d1c858e2a | # Ethos
This dataset is a clone of the official [`ethos` dataset](https://huggingface.co/datasets/ethos) on the Hub. It contains both `binary` and `multilabel` subsets. | SetFit/ethos | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-03T08:31:19+00:00 | [] | [] | TAGS
#region-us
| # Ethos
This dataset is a clone of the official 'ethos' dataset on the Hub. It contains both 'binary' and 'multilabel' subsets. | [
"# Ethos\n\nThis dataset is a clone of the official 'ethos' dataset on the Hub. It contains both 'binary' and 'multilabel' subsets."
] | [
"TAGS\n#region-us \n",
"# Ethos\n\nThis dataset is a clone of the official 'ethos' dataset on the Hub. It contains both 'binary' and 'multilabel' subsets."
] |
3e20849175072aa95845981f27938ebeb4eb93e0 |
This is the binary split of [ethos](https://huggingface.co/datasets/ethos), split into train and test.
It contains comments annotated for hate speech or not. | SetFit/ethos_binary | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-16T17:54:54+00:00 | [] | [] | TAGS
#region-us
|
This is the binary split of ethos, split into train and test.
It contains comments annotated for hate speech or not. | [] | [
"TAGS\n#region-us \n"
] |
b83b20869f3b5eff89c0a375d8d015a878826094 | # GoEmotions
This dataset is a port of the official [`go_emotions` dataset](https://huggingface.co/datasets/go_emotions) on the Hub. It only contains the `simplified` subset as these are the only fields we need for text classification. | SetFit/go_emotions | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-09-08T14:41:33+00:00 | [] | [] | TAGS
#region-us
| # GoEmotions
This dataset is a port of the official 'go_emotions' dataset on the Hub. It only contains the 'simplified' subset as these are the only fields we need for text classification. | [
"# GoEmotions\n\nThis dataset is a port of the official 'go_emotions' dataset on the Hub. It only contains the 'simplified' subset as these are the only fields we need for text classification."
] | [
"TAGS\n#region-us \n",
"# GoEmotions\n\nThis dataset is a port of the official 'go_emotions' dataset on the Hub. It only contains the 'simplified' subset as these are the only fields we need for text classification."
] |
d0dafd0c98ed4f90649fbb1c77d5dee7857c0d49 | # hate_speech_offensive
This dataset is a version of [hate_speech_offensive](https://huggingface.co/datasets/hate_speech_offensive), split into train and test sets. | SetFit/hate_speech_offensive | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-15T21:47:31+00:00 | [] | [] | TAGS
#region-us
| # hate_speech_offensive
This dataset is a version of hate_speech_offensive, split into train and test sets. | [
"# hate_speech_offensive\r\nThis dataset is a version of hate_speech_offensive, split into train and test sets."
] | [
"TAGS\n#region-us \n",
"# hate_speech_offensive\r\nThis dataset is a version of hate_speech_offensive, split into train and test sets."
] |
cbc6d860e837a83dc17bc59ef1575825a979e19f | This is a version of the [Quora Insincere Questions Classification](https://www.kaggle.com/c/quora-insincere-questions-classification).
An insincere question is defined as a question intended to make a statement rather than look for helpful answers. About 6% of questions are labeled as insincere. | SetFit/insincere-questions | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-19T18:15:51+00:00 | [] | [] | TAGS
#region-us
| This is a version of the Quora Insincere Questions Classification.
An insincere question is defined as a question intended to make a statement rather than look for helpful answers. About 6% of questions are labeled as insincere. | [] | [
"TAGS\n#region-us \n"
] |
1c01c732e1efb5d5090c5618276c010aecebefa3 | # Glue MNLI
This dataset is a port of the official [`mnli` dataset](https://huggingface.co/datasets/glue/viewer/mnli/train) on the Hub.
It contains the matched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/mnli | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T13:53:53+00:00 | [] | [] | TAGS
#region-us
| # Glue MNLI
This dataset is a port of the official 'mnli' dataset on the Hub.
It contains the matched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue MNLI\r\n\r\nThis dataset is a port of the official 'mnli' dataset on the Hub. \r\nIt contains the matched version.\r\nNote that the premise and hypothesis columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue MNLI\r\n\r\nThis dataset is a port of the official 'mnli' dataset on the Hub. \r\nIt contains the matched version.\r\nNote that the premise and hypothesis columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
9a22592a7f30887c20b4f0c45570669002cdca6d | # Glue MNLI
This dataset is a port of the official [`mnli` dataset](https://huggingface.co/datasets/glue/viewer/mnli/train) on the Hub.
It contains the mismatched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/mnli_mm | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T13:56:44+00:00 | [] | [] | TAGS
#region-us
| # Glue MNLI
This dataset is a port of the official 'mnli' dataset on the Hub.
It contains the mismatched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue MNLI\r\n\r\nThis dataset is a port of the official 'mnli' dataset on the Hub. \r\nIt contains the mismatched version.\r\nNote that the premise and hypothesis columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue MNLI\r\n\r\nThis dataset is a port of the official 'mnli' dataset on the Hub. \r\nIt contains the mismatched version.\r\nNote that the premise and hypothesis columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
2e2058c90792fb6dc86125943e4d2f96a5155a6b | # Glue MRPC
This dataset is a port of the official [`mrpc` dataset](https://huggingface.co/datasets/glue/viewer/mrpc/train) on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/mrpc | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T13:18:30+00:00 | [] | [] | TAGS
#region-us
| # Glue MRPC
This dataset is a port of the official 'mrpc' dataset on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue MRPC\r\n\r\nThis dataset is a port of the official 'mrpc' dataset on the Hub. \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue MRPC\r\n\r\nThis dataset is a port of the official 'mrpc' dataset on the Hub. \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
e08ed8239c319cf7f2d2ab393d1b90e276c39570 | # Glue QNLI
This dataset is a port of the official [`qnli` dataset](https://huggingface.co/datasets/glue/viewer/qnli/train) on the Hub.
Note that the question and sentence columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/qnli | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T13:29:16+00:00 | [] | [] | TAGS
#region-us
| # Glue QNLI
This dataset is a port of the official 'qnli' dataset on the Hub.
Note that the question and sentence columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue QNLI\r\n\r\nThis dataset is a port of the official 'qnli' dataset on the Hub. \r\nNote that the question and sentence columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue QNLI\r\n\r\nThis dataset is a port of the official 'qnli' dataset on the Hub. \r\nNote that the question and sentence columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
df33eff016c496a5b944d026d5c0b511dc1f64f1 | # Glue QQP
This dataset is a port of the official [`qqp` dataset](https://huggingface.co/datasets/glue/viewer/qqp/train) on the Hub.
Note that the question1 and question2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/qqp | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T11:10:11+00:00 | [] | [] | TAGS
#region-us
| # Glue QQP
This dataset is a port of the official 'qqp' dataset on the Hub.
Note that the question1 and question2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue QQP\r\n\r\nThis dataset is a port of the official 'qqp' dataset on the Hub. \r\nNote that the question1 and question2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue QQP\r\n\r\nThis dataset is a port of the official 'qqp' dataset on the Hub. \r\nNote that the question1 and question2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
23f2a468b9bc13030f5595a2e5f9307cb165280c | # Glue RTE
This dataset is a port of the official [`rte` dataset](https://huggingface.co/datasets/glue/viewer/rte/train) on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/rte | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T12:46:43+00:00 | [] | [] | TAGS
#region-us
| # Glue RTE
This dataset is a port of the official 'rte' dataset on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue RTE\r\n\r\nThis dataset is a port of the official 'rte' dataset on the Hub. \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue RTE\r\n\r\nThis dataset is a port of the official 'rte' dataset on the Hub. \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
00ea8ccb7a54b4e3780a3e51aa3f80361ff849c0 | # Stanford Sentiment Treebank - Binary
[Stanford Sentiment Treebank](http://nlp.stanford.edu/sentiment/) with 2 labels: negative, positive
Splits are from:
[https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data](https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data)
Training data is on sentence level, not on phrase level!
| SetFit/sst2 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-25T06:16:15+00:00 | [] | [] | TAGS
#region-us
| # Stanford Sentiment Treebank - Binary
Stanford Sentiment Treebank with 2 labels: negative, positive
Splits are from:
URL
Training data is on sentence level, not on phrase level!
| [
"# Stanford Sentiment Treebank - Binary\r\n\r\nStanford Sentiment Treebank with 2 labels: negative, positive\r\n\r\nSplits are from: \r\nURL\r\n\r\nTraining data is on sentence level, not on phrase level!"
] | [
"TAGS\n#region-us \n",
"# Stanford Sentiment Treebank - Binary\r\n\r\nStanford Sentiment Treebank with 2 labels: negative, positive\r\n\r\nSplits are from: \r\nURL\r\n\r\nTraining data is on sentence level, not on phrase level!"
] |
e51bdcd8cd3a30da231967c1a249ba59361279a3 | # Stanford Sentiment Treebank - Fine-Grained
[Stanford Sentiment Treebank](http://nlp.stanford.edu/sentiment/) with 5 labels: very positive, positive, neutral, negative, very negative
Splits are from:
[https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data](https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data)
Training data is on sentence level, not on phrase level! | SetFit/sst5 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-12-25T06:10:36+00:00 | [] | [] | TAGS
#region-us
| # Stanford Sentiment Treebank - Fine-Grained
Stanford Sentiment Treebank with 5 labels: very positive, positive, neutral, negative, very negative
Splits are from:
URL
Training data is on sentence level, not on phrase level! | [
"# Stanford Sentiment Treebank - Fine-Grained\r\n\r\nStanford Sentiment Treebank with 5 labels: very positive, positive, neutral, negative, very negative\r\n\r\nSplits are from: \r\nURL\r\n\r\nTraining data is on sentence level, not on phrase level!"
] | [
"TAGS\n#region-us \n",
"# Stanford Sentiment Treebank - Fine-Grained\r\n\r\nStanford Sentiment Treebank with 5 labels: very positive, positive, neutral, negative, very negative\r\n\r\nSplits are from: \r\nURL\r\n\r\nTraining data is on sentence level, not on phrase level!"
] |
77bd85cf73a244d4503e477ce6de81196187839f | # Glue STS-B
This dataset is a port of the official [`sts-b` dataset](https://huggingface.co/datasets/glue/viewer/stsb/validation) on the Hub.
This is not a classification task, so the label_text column is only included for consistency.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/stsb | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T14:20:16+00:00 | [] | [] | TAGS
#region-us
| # Glue STS-B
This dataset is a port of the official 'sts-b' dataset on the Hub.
This is not a classification task, so the label_text column is only included for consistency.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue STS-B\r\n\r\nThis dataset is a port of the official 'sts-b' dataset on the Hub.\r\nThis is not a classification task, so the label_text column is only included for consistency \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue STS-B\r\n\r\nThis dataset is a port of the official 'sts-b' dataset on the Hub.\r\nThis is not a classification task, so the label_text column is only included for consistency \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
a0053782f27c4d68d642f4b9169cb1cb49088ee2 | This is the [IITJEE NEET AIIMS Students Questions Data](https://www.kaggle.com/mrutyunjaybiswal/iitjee-neet-aims-students-questions-data) dataset.
It categorizes university entry questions into 4 categories: Physics, Chemistry, Biology, and Mathematics. | SetFit/student-question-categories | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-16T18:32:48+00:00 | [] | [] | TAGS
#region-us
| This is the IITJEE NEET AIIMS Students Questions Data dataset.
It categorizes university entry questions into 4 categories: Physics, Chemistry, Biology, and Mathematics. | [] | [
"TAGS\n#region-us \n"
] |
f3c1162e678417f664d76b21864fdb87b0615fcf | # Subjective vs Objective
This is the SUBJ dataset as used in [SentEval](https://github.com/facebookresearch/SentEval). It contains sentences with an annotation indicating whether the sentence describes something subjective about a movie or something objective. | SetFit/subj | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-15T21:34:11+00:00 | [] | [] | TAGS
#region-us
| # Subjective vs Objective
This is the SUBJ dataset as used in SentEval. It contains sentences with an annotation indicating whether the sentence describes something subjective about a movie or something objective. | [
"# Subjective vs Objective\r\n\r\nThis is the SUBJ dataset as used in SentEval. It contains sentences with an annotation indicating whether the sentence describes something subjective about a movie or something objective"
] | [
"TAGS\n#region-us \n",
"# Subjective vs Objective\r\n\r\nThis is the SUBJ dataset as used in SentEval. It contains sentences with an annotation indicating whether the sentence describes something subjective about a movie or something objective"
] |
3953bb49397ac2ee228986b2f3080b20cbce1365 | # Toxic Conversation
This is a version of the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview). It contains comments from the Civil Comments platform together with annotations indicating whether the comment is toxic or not.
10 annotators annotated each example and, as recommended in the task page, set a comment as toxic when target >= 0.5
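The binarization rule just described can be sketched as follows; this is a minimal illustration of the stated threshold (target >= 0.5 → toxic), and the score values below are made up, not taken from the dataset:

```python
# Binarize Jigsaw-style toxicity target scores into toxic / not-toxic
# labels, following the threshold stated above (target >= 0.5).
def to_label(target: float) -> int:
    return 1 if target >= 0.5 else 0

# Illustrative scores only, not actual dataset values.
scores = [0.0, 0.2, 0.5, 0.83]
labels = [to_label(s) for s in scores]
print(labels)  # [0, 0, 1, 1]
```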
The dataset is imbalanced, with only about 8% of the comments marked as toxic. | SetFit/toxic_conversations | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-11T13:45:54+00:00 | [] | [] | TAGS
#region-us
| # Toxic Conversation
This is a version of the Jigsaw Unintended Bias in Toxicity Classification dataset. It contains comments from the Civil Comments platform together with annotations indicating whether the comment is toxic or not.
10 annotators annotated each example and, as recommended in the task page, set a comment as toxic when target >= 0.5
The dataset is imbalanced, with only about 8% of the comments marked as toxic. | [
"# Toxic Conversation\r\nThis is a version of the Jigsaw Unintended Bias in Toxicity Classification dataset. It contains comments from the Civil Comments platform together with annotations indicating whether the comment is toxic or not.\r\n\r\n10 annotators annotated each example and, as recommended in the task page, set a comment as toxic when target >= 0.5\r\n\r\nThe dataset is imbalanced, with only about 8% of the comments marked as toxic."
] | [
"TAGS\n#region-us \n",
"# Toxic Conversation\r\nThis is a version of the Jigsaw Unintended Bias in Toxicity Classification dataset. It contains comments from the Civil Comments platform together with annotations indicating whether the comment is toxic or not.\r\n\r\n10 annotators annotated each example and, as recommended in the task page, set a comment as toxic when target >= 0.5\r\n\r\nThe dataset is imbalanced, with only about 8% of the comments marked as toxic."
] |
568ea1093306f1b0ace849f1d703ad67525355ed | # tweet_eval_stance_abortion
This is the stance_abortion subset of [tweet_eval](https://huggingface.co/datasets/tweet_eval) | SetFit/tweet_eval_stance | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-17T13:01:36+00:00 | [] | [] | TAGS
#region-us
| # tweet_eval_stance_abortion
This is the stance_abortion subset of tweet_eval | [
"# tweet_eval_stance_abortion\r\n\r\nThis is the stance_abortion subset of tweet_eval"
] | [
"TAGS\n#region-us \n",
"# tweet_eval_stance_abortion\r\n\r\nThis is the stance_abortion subset of tweet_eval"
] |
0d5359cfbb2470332f2b82e62269cf755e0ac5c3 | # Tweet Sentiment Extraction
Source: https://www.kaggle.com/c/tweet-sentiment-extraction/data | SetFit/tweet_sentiment_extraction | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-05-12T18:52:02+00:00 | [] | [] | TAGS
#region-us
| # Tweet Sentiment Extraction
Source: URL | [
"# Tweet Sentiment Extraction\n\nSource: URL"
] | [
"TAGS\n#region-us \n",
"# Tweet Sentiment Extraction\n\nSource: URL"
] |
9ffaa43ad4f7a844512d06621fa8b63721232f1e | # Glue WNLI
This dataset is a port of the official [`wnli` dataset](https://huggingface.co/datasets/glue/viewer/wnli/train) on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| SetFit/wnli | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-28T13:48:16+00:00 | [] | [] | TAGS
#region-us
| # Glue WNLI
This dataset is a port of the official 'wnli' dataset on the Hub.
Note that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| [
"# Glue WNLI\r\n\r\nThis dataset is a port of the official 'wnli' dataset on the Hub. \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] | [
"TAGS\n#region-us \n",
"# Glue WNLI\r\n\r\nThis dataset is a port of the official 'wnli' dataset on the Hub. \r\nNote that the sentence1 and sentence2 columns have been renamed to text1 and text2 respectively.\r\nAlso, the test split is not labeled; the label column values are always -1."
] |
05600ff310a0970823e70f82f428893b85c71ffe |
# Dataset Card for JaQuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splitting](#data-splitting)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Dataset Description
- **Repository:** https://github.com/SkelterLabsInc/JaQuAD
- **Paper:** [JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension](https://arxiv.org/abs/2202.01764)
- **Point of Contact:** [[email protected]]([email protected])
- **Size of dataset files:** 24.6 MB
- **Size of the generated dataset:** 48.6 MB
- **Total amount of disk used:** 73.2 MB
### Dataset Summary
Japanese Question Answering Dataset (JaQuAD), released in 2022, is a
human-annotated dataset created for Japanese Machine Reading Comprehension.
JaQuAD is developed to provide a SQuAD-like QA dataset in Japanese.
JaQuAD contains 39,696 question-answer pairs.
Questions and answers are manually curated by human annotators.
Contexts are collected from Japanese Wikipedia articles.
Fine-tuning [BERT-Japanese](https://huggingface.co/cl-tohoku/bert-base-japanese)
on JaQuAD achieves an F1 score of 78.92% and an exact match score of 63.38%.
### Supported Tasks
- `extractive-qa`: This dataset is intended to be used for `extractive-qa`.
### Languages
Japanese (`ja`)
## Dataset Structure
### Data Instances
- **Size of dataset files:** 24.6 MB
- **Size of the generated dataset:** 48.6 MB
- **Total amount of disk used:** 73.2 MB
An example of 'validation':
```python
{
"id": "de-001-00-000",
"title": "イタセンパラ",
    "context": "イタセンパラ(板鮮腹、Acheilognathus longipinnis)は、コイ科のタナゴ亜科タナゴ属に分類される淡水魚の一種。\n別名はビワタナゴ(琵琶鱮、琵琶鰱)。",
"question": "ビワタナゴの正式名称は何?",
"question_type": "Multiple sentence reasoning",
"answers": {
"text": "イタセンパラ",
"answer_start": 0,
"answer_type": "Object",
},
},
```
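As a quick sanity check, the `answer_start` offset in an instance like the one above indexes directly into `context`, as in SQuAD-style extractive QA. A minimal sketch (using a shortened copy of the example's context):

```python
# Recover a JaQuAD answer span from the context via answer_start.
example = {
    "context": "イタセンパラ(板鮮腹、Acheilognathus longipinnis)は、"
               "コイ科のタナゴ亜科タナゴ属に分類される淡水魚の一種。",
    "answers": {"text": "イタセンパラ", "answer_start": 0},
}

def answer_span(ex: dict) -> str:
    """Slice the context at [answer_start, answer_start + len(text))."""
    start = ex["answers"]["answer_start"]
    return ex["context"][start:start + len(ex["answers"]["text"])]

# The recovered span matches the annotated answer text.
assert answer_span(example) == example["answers"]["text"]
print(answer_span(example))  # イタセンパラ
```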
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `question_type`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `answer_type`: a `string` feature.
### Data Splitting
JaQuAD consists of three sets, `train`, `validation`, and `test`. They were
created from disjoint sets of Wikipedia articles. The `test` set is not publicly
released yet. The following table shows statistics for each set.
Set | Number of Articles | Number of Contexts | Number of Questions
--------------|--------------------|--------------------|--------------------
Train | 691 | 9713 | 31748
Validation | 101 | 1431 | 3939
Test | 109 | 1479 | 4009
## Dataset Creation
### Curation Rationale
The JaQuAD dataset was created by [Skelter Labs](https://skelterlabs.com/) to
provide a SQuAD-like QA dataset in Japanese. Questions are original and based
on Japanese Wikipedia articles.
### Source Data
The articles used for the contexts are from [Japanese Wikipedia](https://ja.wikipedia.org/).
88.7% of articles are from the curated list of Japanese high-quality Wikipedia
articles, e.g., [featured articles](https://ja.wikipedia.org/wiki/Wikipedia:%E8%89%AF%E8%B3%AA%E3%81%AA%E8%A8%98%E4%BA%8B)
and [good articles](https://ja.wikipedia.org/wiki/Wikipedia:%E7%A7%80%E9%80%B8%E3%81%AA%E8%A8%98%E4%BA%8B).
### Annotations
Wikipedia articles were scraped and divided into one or more paragraphs
as contexts. Annotations (questions and answer spans) are written by fluent
Japanese speakers, including natives and non-natives. Annotators were given a
context and asked to generate non-trivial questions about information in the
context.
### Personal and Sensitive Information
No personal or sensitive information is included in this dataset. The dataset
annotators have manually verified this.
## Considerations for Using the Data
Users should consider that the articles are sampled from Wikipedia articles but
not representative of all Wikipedia articles.
### Social Impact of Dataset
The social biases of this dataset have not yet been investigated.
### Discussion of Biases
The social biases of this dataset have not yet been investigated. Articles and
questions have been selected for quality and diversity.
### Other Known Limitations
The JaQuAD dataset has the following limitations:
- Most answers are short spans.
- Each question is assumed to be answerable using its corresponding context.

This dataset is still incomplete. If you find any errors in JaQuAD, please
contact us.
## Additional Information
### Dataset Curators
Skelter Labs: [https://skelterlabs.com/](https://skelterlabs.com/)
### Licensing Information
The JaQuAD dataset is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```bibtex
@misc{so2022jaquad,
title={{JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension}},
author={ByungHoon So and Kyuhong Byun and Kyungwon Kang and Seongjin Cho},
year={2022},
eprint={2202.01764},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Acknowledgements
This work was supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/).
For training models, we used cloud TPUs provided by TRC. We also thank
annotators who generated JaQuAD.
| SkelterLabsInc/JaQuAD | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ja",
"license:cc-by-sa-3.0",
"arxiv:2202.01764",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["ja"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "JaQuAD: Japanese Question Answering Dataset"} | 2022-10-25T08:06:40+00:00 | [
"2202.01764"
] | [
"ja"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Japanese #license-cc-by-sa-3.0 #arxiv-2202.01764 #region-us
| Dataset Card for JaQuAD
=======================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splitting
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Acknowledgements
Dataset Description
-------------------
* Repository: URL
* Paper: JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension
* Point of Contact: jaquad@URL
* Size of dataset files: 24.6 MB
* Size of the generated dataset: 48.6 MB
* Total amount of disk used: 73.2 MB
### Dataset Summary
Japanese Question Answering Dataset (JaQuAD), released in 2022, is a
human-annotated dataset created for Japanese Machine Reading Comprehension.
JaQuAD is developed to provide a SQuAD-like QA dataset in Japanese.
JaQuAD contains 39,696 question-answer pairs.
Questions and answers are manually curated by human annotators.
Contexts are collected from Japanese Wikipedia articles.
Fine-tuning BERT-Japanese
on JaQuAD achieves 78.92% for an F1 score and 63.38% for an exact match.
### Supported Tasks
* 'extractive-qa': This dataset is intended to be used for 'extractive-qa'.
### Languages
Japanese ('ja')
Dataset Structure
-----------------
### Data Instances
* Size of dataset files: 24.6 MB
* Size of the generated dataset: 48.6 MB
* Total amount of disk used: 73.2 MB
An example of 'validation':
### Data Fields
* 'id': a 'string' feature.
* 'title': a 'string' feature.
* 'context': a 'string' feature.
* 'question': a 'string' feature.
* 'question\_type': a 'string' feature.
* 'answers': a dictionary feature containing:
+ 'text': a 'string' feature.
+ 'answer\_start': a 'int32' feature.
+ 'answer\_type': a 'string' feature.
### Data Splitting
JaQuAD consists of three sets, 'train', 'validation', and 'test'. They were
created from disjoint sets of Wikipedia articles. The 'test' set is not publicly
released yet. The following table shows statistics for each set.
Dataset Creation
----------------
### Curation Rationale
The JaQuAD dataset was created by Skelter Labs to
provide a SQuAD-like QA dataset in Japanese. Questions are original and based
on Japanese Wikipedia articles.
### Source Data
The articles used for the contexts are from Japanese Wikipedia.
88.7% of articles are from the curated list of Japanese high-quality Wikipedia
articles, e.g., featured articles
and good articles.
### Annotations
Wikipedia articles were scraped and divided into one or more paragraphs used
as contexts. Annotations (questions and answer spans) are written by fluent
Japanese speakers, including natives and non-natives. Annotators were given a
context and asked to generate non-trivial questions about information in the
context.
### Personal and Sensitive Information
No personal or sensitive information is included in this dataset. The dataset
annotators have manually verified this.
Considerations for Using the Data
---------------------------------
Users should consider that the articles are sampled from Wikipedia articles but
not representative of all Wikipedia articles.
### Social Impact of Dataset
The social biases of this dataset have not yet been investigated.
### Discussion of Biases
The social biases of this dataset have not yet been investigated. Articles and
questions have been selected for quality and diversity.
### Other Known Limitations
The JaQuAD dataset has the following limitations:

* Most answers are short spans.
* Each question is assumed to be answerable using its corresponding context.

This dataset is still incomplete. If you find any errors in JaQuAD, please
contact us.
Additional Information
----------------------
### Dataset Curators
Skelter Labs: URL
### Licensing Information
The JaQuAD dataset is licensed under the CC BY-SA 3.0 license.
### Acknowledgements
This work was supported by TPU Research Cloud (TRC) program.
For training models, we used cloud TPUs provided by TRC. We also thank
annotators who generated JaQuAD.
| [
"### Dataset Summary\n\n\nJapanese Question Answering Dataset (JaQuAD), released in 2022, is a\nhuman-annotated dataset created for Japanese Machine Reading Comprehension.\nJaQuAD is developed to provide a SQuAD-like QA dataset in Japanese.\nJaQuAD contains 39,696 question-answer pairs.\nQuestions and answers are manually curated by human annotators.\nContexts are collected from Japanese Wikipedia articles.\nFine-tuning BERT-Japanese\non JaQuAD achieves 78.92% for an F1 score and 63.38% for an exact match.",
"### Supported Tasks\n\n\n* 'extractive-qa': This dataset is intended to be used for 'extractive-qa'.",
"### Languages\n\n\nJapanese ('ja')\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of dataset files: 24.6 MB\n* Size of the generated dataset: 48.6 MB\n* Total amount of disk used: 73.2 MB\n\n\nAn example of 'validation':",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'question\\_type': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n\t+ 'answer\\_type': a 'string' feature.",
"### Data Splitting\n\n\nJaQuAD consists of three sets, 'train', 'validation', and 'test'. They were\ncreated from disjoint sets of Wikipedia articles. The 'test' set is not publicly\nreleased yet. The following table shows statistics for each set.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe JaQuAD dataset was created by Skelter Labs to\nprovide a SQuAD-like QA dataset in Japanese. Questions are original and based\non Japanese Wikipedia articles.",
"### Source Data\n\n\nThe articles used for the contexts are from Japanese Wikipedia.\n88.7% of articles are from the curated list of Japanese high-quality Wikipedia\narticles, e.g., featured articles\nand good articles.",
"### Annotations\n\n\nWikipedia articles were scrapped and divided into one more multiple paragraphs\nas contexts. Annotations (questions and answer spans) are written by fluent\nJapanese speakers, including natives and non-natives. Annotators were given a\ncontext and asked to generate non-trivial questions about information in the\ncontext.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information is included in this dataset. Dataset\nannotators has been manually verified it.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nUsers should consider that the articles are sampled from Wikipedia articles but\nnot representative of all Wikipedia articles.",
"### Social Impact of Dataset\n\n\nThe social biases of this dataset have not yet been investigated.",
"### Discussion of Biases\n\n\nThe social biases of this dataset have not yet been investigated. Articles and\nquestions have been selected for quality and diversity.",
"### Other Known Limitations\n\n\nThe JaQuAD dataset has limitations as follows:\n\n\n* Most of them are short answers.\n* Assume that a question is answerable using the corresponding context.\n\n\nThis dataset is incomplete yet. If you find any errors in JaQuAD, please contact\nus.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nSkelter Labs: URL",
"### Licensing Information\n\n\nThe JaQuAD dataset is licensed under the CC BY-SA 3.0 license.",
"### Acknowledgements\n\n\nThis work was supported by TPU Research Cloud (TRC) program.\nFor training models, we used cloud TPUs provided by TRC. We also thank\nannotators who generated JaQuAD."
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Japanese #license-cc-by-sa-3.0 #arxiv-2202.01764 #region-us \n",
"### Dataset Summary\n\n\nJapanese Question Answering Dataset (JaQuAD), released in 2022, is a\nhuman-annotated dataset created for Japanese Machine Reading Comprehension.\nJaQuAD is developed to provide a SQuAD-like QA dataset in Japanese.\nJaQuAD contains 39,696 question-answer pairs.\nQuestions and answers are manually curated by human annotators.\nContexts are collected from Japanese Wikipedia articles.\nFine-tuning BERT-Japanese\non JaQuAD achieves 78.92% for an F1 score and 63.38% for an exact match.",
"### Supported Tasks\n\n\n* 'extractive-qa': This dataset is intended to be used for 'extractive-qa'.",
"### Languages\n\n\nJapanese ('ja')\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of dataset files: 24.6 MB\n* Size of the generated dataset: 48.6 MB\n* Total amount of disk used: 73.2 MB\n\n\nAn example of 'validation':",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'question\\_type': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n\t+ 'answer\\_type': a 'string' feature.",
"### Data Splitting\n\n\nJaQuAD consists of three sets, 'train', 'validation', and 'test'. They were\ncreated from disjoint sets of Wikipedia articles. The 'test' set is not publicly\nreleased yet. The following table shows statistics for each set.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe JaQuAD dataset was created by Skelter Labs to\nprovide a SQuAD-like QA dataset in Japanese. Questions are original and based\non Japanese Wikipedia articles.",
"### Source Data\n\n\nThe articles used for the contexts are from Japanese Wikipedia.\n88.7% of articles are from the curated list of Japanese high-quality Wikipedia\narticles, e.g., featured articles\nand good articles.",
"### Annotations\n\n\nWikipedia articles were scrapped and divided into one more multiple paragraphs\nas contexts. Annotations (questions and answer spans) are written by fluent\nJapanese speakers, including natives and non-natives. Annotators were given a\ncontext and asked to generate non-trivial questions about information in the\ncontext.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information is included in this dataset. Dataset\nannotators has been manually verified it.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nUsers should consider that the articles are sampled from Wikipedia articles but\nnot representative of all Wikipedia articles.",
"### Social Impact of Dataset\n\n\nThe social biases of this dataset have not yet been investigated.",
"### Discussion of Biases\n\n\nThe social biases of this dataset have not yet been investigated. Articles and\nquestions have been selected for quality and diversity.",
"### Other Known Limitations\n\n\nThe JaQuAD dataset has limitations as follows:\n\n\n* Most of them are short answers.\n* Assume that a question is answerable using the corresponding context.\n\n\nThis dataset is incomplete yet. If you find any errors in JaQuAD, please contact\nus.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nSkelter Labs: URL",
"### Licensing Information\n\n\nThe JaQuAD dataset is licensed under the CC BY-SA 3.0 license.",
"### Acknowledgements\n\n\nThis work was supported by TPU Research Cloud (TRC) program.\nFor training models, we used cloud TPUs provided by TRC. We also thank\nannotators who generated JaQuAD."
] |
f89cf4c53510c2351d3f306dcedac046d9f68280 |
# Shellcode_IA32
___Shellcode_IA32___ is a dataset containing _20_ years of shellcodes from a variety of sources; it is the largest collection of shellcodes in assembly available to date.
This dataset consists of 3,200 examples of instructions in assembly language for _IA-32_ (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from [exploit-db](https://www.exploit-db.com/shellcodes?platform=linux_x86) and from [shell-storm](http://shell-storm.org/shellcode/).
We enriched the dataset by adding examples of assembly programs for the _IA-32_ architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. About 10% of the instructions in our dataset were collected from books and guidelines; the rest come from real shellcodes.
Our focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with _Netwide Assembler_ (NASM) for Linux.
Each line of the _Shellcode\_IA32_ dataset represents a snippet-intent pair. The _snippet_ is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The _intent_ is a comment in the English language.
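As a sketch, one such pair might look like the following (this exact pair is hypothetical and not drawn from the dataset; multi-line snippets combine several NASM instructions with newlines):

```python
# Hypothetical snippet-intent pair in the style of Shellcode_IA32.
# Multi-line snippets join several NASM instructions with "\n".
pair = {
    "intent": "zero out the eax register and push it onto the stack",
    "snippet": "xor eax, eax\npush eax",
}

# Splitting on newlines recovers the individual NASM instructions.
instructions = pair["snippet"].split("\n")
assert instructions == ["xor eax, eax", "push eax"]
```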
Further statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: [Shellcode_IA32: A Dataset for Automatic Shellcode Generation](https://arxiv.org/abs/2104.13100).
**Note**: This work was done in collaboration with the [DESSERT Lab](http://www.dessert.unina.it/).
The dataset is also hosted on the [DESSERT Lab Github](https://github.com/dessertlab/Shellcode_IA32).
Please consider citing our work:
```
@inproceedings{liguori-etal-2021-shellcode,
title = "{S}hellcode{\_}{IA}32: A Dataset for Automatic Shellcode Generation",
author = "Liguori, Pietro and
Al-Hossami, Erfan and
Cotroneo, Domenico and
Natella, Roberto and
Cukic, Bojan and
Shaikh, Samira",
booktitle = "Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nlp4prog-1.7",
doi = "10.18653/v1/2021.nlp4prog-1.7",
pages = "58--64",
abstract = "We take the first step to address the task of automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software vulnerability, starting from natural language comments. We assemble and release a novel dataset (Shellcode{\_}IA32), consisting of challenging but common assembly instructions with their natural language descriptions. We experiment with standard methods in neural machine translation (NMT) to establish baseline performance levels on this task.",
}
```
| SoLID/shellcode_i_a32 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"language:en",
"license:gpl-3.0",
"arxiv:2104.13100",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated", "found"], "language": ["code", "en"], "license": ["gpl-3.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "shellcode-ia32"} | 2022-11-17T19:53:43+00:00 | [
"2104.13100"
] | [
"code",
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-found #multilinguality-translation #size_categories-unknown #source_datasets-original #language-code #language-English #license-gpl-3.0 #arxiv-2104.13100 #region-us
|
# Shellcode_IA32
___Shellcode_IA32___ is a dataset containing _20_ years of shellcodes from a variety of sources; it is the largest collection of shellcodes in assembly available to date.
This dataset consists of 3,200 examples of instructions in assembly language for _IA-32_ (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from exploit-db and from shell-storm.
We enriched the dataset by adding examples of assembly programs for the _IA-32_ architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. About 10% of the instructions in our dataset were collected from books and guidelines; the rest come from real shellcodes.
Our focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with _Netwide Assembler_ (NASM) for Linux.
Each line of the _Shellcode\_IA32_ dataset represents a snippet-intent pair. The _snippet_ is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The _intent_ is a comment in the English language.
Further statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: Shellcode_IA32: A Dataset for Automatic Shellcode Generation.
Note: This work was done in collaboration with the DESSERT Lab.
The dataset is also hosted on the DESSERT Lab Github.
Please consider citing our work:
| [
"# Shellcode_IA32\n\n___Shellcode_IA32___ is a dataset containing _20_ years of shellcodes from a variety of sources is the largest collection of shellcodes in assembly available to date.\n\nThis dataset consists of 3,200 examples of instructions in assembly language for _IA-32_ (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from exploit-db and from shell-storm.\nWe enriched the dataset by adding examples of assembly programs for the _IA-32_ architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. Our dataset consists of 10% of instructions collected from books and guidelines, and the rest from real shellcodes. \n\nOur focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with _Netwide Assembler_ (NASM) for Linux.\n\nEach line of _Shellcode\\_IA32_ dataset represents a snippet - intent pair. The _snippet_ is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The _intent_ is a comment in the English language.\n\nFurther statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: Shellcode_IA32: A Dataset for Automatic Shellcode Generation.\n\n\nNote: This work was done in collaboration with the DESSERT Lab.\n\nThe dataset is also hosted on the DESSERT Lab Github.\n\n\nPlease consider citing our work:"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-found #multilinguality-translation #size_categories-unknown #source_datasets-original #language-code #language-English #license-gpl-3.0 #arxiv-2104.13100 #region-us \n",
"# Shellcode_IA32\n\n___Shellcode_IA32___ is a dataset containing _20_ years of shellcodes from a variety of sources is the largest collection of shellcodes in assembly available to date.\n\nThis dataset consists of 3,200 examples of instructions in assembly language for _IA-32_ (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from exploit-db and from shell-storm.\nWe enriched the dataset by adding examples of assembly programs for the _IA-32_ architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. Our dataset consists of 10% of instructions collected from books and guidelines, and the rest from real shellcodes. \n\nOur focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with _Netwide Assembler_ (NASM) for Linux.\n\nEach line of _Shellcode\\_IA32_ dataset represents a snippet - intent pair. The _snippet_ is a line or a combination of multiple lines of assembly code, built by following the NASM syntax. The _intent_ is a comment in the English language.\n\nFurther statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: Shellcode_IA32: A Dataset for Automatic Shellcode Generation.\n\n\nNote: This work was done in collaboration with the DESSERT Lab.\n\nThe dataset is also hosted on the DESSERT Lab Github.\n\n\nPlease consider citing our work:"
] |
55ea22daf606f8305cde921f5a60e9a1989272c5 |
# Dataset Card for one-million-reddit-confessions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionconfessions)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionconfessions)
### Dataset Summary
This corpus contains a million posts from the following subreddits:
- /r/trueoffmychest
- /r/confession
- /r/confessions
- /r/offmychest
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
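Taken together, a data point can be sketched as the following record (every value here is invented for illustration; only the field names come from the listing above):

```python
# Illustrative Reddit post data point; every value is invented,
# and only the field names match the dataset's listing.
post = {
    "type": "post",                 # 'post' or 'comment'
    "id": "abc123",                 # base-36 Reddit ID
    "subreddit.id": "2qh1o",
    "subreddit.name": "confession",
    "subreddit.nsfw": False,
    "created_utc": 1577836800,      # UTC timestamp
    "permalink": "/r/confession/comments/abc123/an_example_confession/",
    "score": 42,
    "domain": "self.confession",
    "url": None,                    # no external link for self-posts
    "selftext": "I have something to get off my chest...",
    "title": "An example confession",
}

assert post["type"] in ("post", "comment")
assert isinstance(post["subreddit.nsfw"], bool)
```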
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/one-million-reddit-confessions | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T17:48:52+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for one-million-reddit-confessions
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains a million posts from the following subreddits:
- /r/trueoffmychest
- /r/confession
- /r/confessions
- /r/offmychest
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for one-million-reddit-confessions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains a million posts from the following subreddits:\n - /r/trueoffmychest\n - /r/confession\n - /r/confessions\n - /r/offmychest\n\nPosts are annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit post.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'domain': the domain of the data point's link.\n- 'url': the destination of the data point's link, if any.\n- 'selftext': the self-text of the data point, if any.\n- 'title': the title of the post data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for one-million-reddit-confessions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains a million posts from the following subreddits:\n - /r/trueoffmychest\n - /r/confession\n - /r/confessions\n - /r/offmychest\n\nPosts are annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit post.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'domain': the domain of the data point's link.\n- 'url': the destination of the data point's link, if any.\n- 'selftext': the self-text of the data point, if any.\n- 'title': the title of the post data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
c3f8706ec95b94882246edd68b74410080911ecc |
# Dataset Card for one-million-reddit-jokes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionjokes)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionjokes)
### Dataset Summary
This corpus contains a million posts from /r/jokes.
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
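The field schema above can be illustrated with a plain-Python sketch. The sample record below is hypothetical (not drawn from the dataset); it only follows the documented field names and types:

```python
from datetime import datetime, timezone

# Hypothetical record following the documented schema.
record = {
    "type": "post",
    "id": "abc123",
    "subreddit.id": "2qh72",
    "subreddit.name": "jokes",
    "subreddit.nsfw": False,
    "created_utc": 1609459200,
    "permalink": "/r/Jokes/comments/abc123/",
    "score": 42,
    "domain": "self.Jokes",
    "url": None,
    "selftext": "...",
    "title": "Why did the chicken cross the road?",
}

# 'id' is base-36, so Python's int() can decode it directly.
numeric_id = int(record["id"], 36)

# 'created_utc' is a Unix timestamp in UTC.
created = datetime.fromtimestamp(record["created_utc"], tz=timezone.utc)

print(numeric_id, created.isoformat())
```

This makes the base-36 IDs sortable numerically and the timestamps comparable as timezone-aware datetimes.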
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/one-million-reddit-jokes | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T17:48:46+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for one-million-reddit-jokes
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains a million posts from /r/jokes.
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for one-million-reddit-jokes",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains a million posts from /r/jokes.\n\nPosts are annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit post.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'domain': the domain of the data point's link.\n- 'url': the destination of the data point's link, if any.\n- 'selftext': the self-text of the data point, if any.\n- 'title': the title of the post data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for one-million-reddit-jokes",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains a million posts from /r/jokes.\n\nPosts are annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit post.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'domain': the domain of the data point's link.\n- 'url': the destination of the data point's link, if any.\n- 'selftext': the self-text of the data point, if any.\n- 'title': the title of the post data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
e23c47a0aa8fbb2cd63538dea1dc977e2e0f2647 |
# Dataset Card for one-million-reddit-questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=onemillionquestions)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=onemillionquestions)
### Dataset Summary
This corpus contains a million posts on /r/AskReddit, annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
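Since every post is annotated with its Reddit score, a common first step is filtering on a score threshold. The records below are hypothetical examples that follow the documented schema, with unrelated fields omitted for brevity:

```python
# Hypothetical records following the documented schema (extra fields omitted).
posts = [
    {"type": "post", "id": "aaa111", "score": 5, "title": "What is your favorite book?"},
    {"type": "post", "id": "bbb222", "score": 1200, "title": "What's a fact that sounds fake?"},
]

# Keep only highly-scored questions.
popular = [p for p in posts if p["score"] >= 100]
print([p["id"] for p in popular])  # ['bbb222']
```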
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/one-million-reddit-questions | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-25T17:57:10+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for one-million-reddit-questions
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains a million posts on /r/AskReddit, annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for one-million-reddit-questions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains a million posts on /r/AskReddit, annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit post.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'domain': the domain of the data point's link.\n- 'url': the destination of the data point's link, if any.\n- 'selftext': the self-text of the data point, if any.\n- 'title': the title of the post data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for one-million-reddit-questions",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains a million posts on /r/AskReddit, annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit post.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'domain': the domain of the data point's link.\n- 'url': the destination of the data point's link, if any.\n- 'selftext': the self-text of the data point, if any.\n- 'title': the title of the post data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
83362992f86cdfe9cd057069407d943f1baa2976 |
# Dataset Card for one-year-of-r-india
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearofrindia)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearofrindia)
### Dataset Summary
This corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Because the two have different structures, they are stored in two separate files, even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
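Because posts and comments share some fields but differ in others, a loader typically dispatches on the 'type' field. The records below are hypothetical, schema-shaped examples (most shared fields omitted for brevity):

```python
# Hypothetical records: posts carry 'title'; comments carry 'body' and 'sentiment'.
records = [
    {"type": "post", "id": "p1", "score": 10, "title": "Monsoon photos"},
    {"type": "comment", "id": "c1", "score": 3, "body": "Lovely!", "sentiment": 0.8},
]

posts = [r for r in records if r["type"] == "post"]
comments = [r for r in records if r["type"] == "comment"]

# 'sentiment' exists only on comments, so aggregate it per-type.
avg_sentiment = sum(c["sentiment"] for c in comments) / len(comments)
print(len(posts), len(comments), avg_sentiment)  # 1 1 0.8
```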
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/one-year-of-r-india | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T17:48:19+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for one-year-of-r-india
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Because the two have different structures, they are stored in two separate files, even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for one-year-of-r-india",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for one-year-of-r-india",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
d6b971e2c735261ffba9ec44a60ff4ee492fc431 |
# Dataset Card for reddit-crypto-aug-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=crypto)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=crypto)
### Dataset Summary
This corpus contains the complete data for the activity on the following subreddits for the entire month of August 2021:
- /r/cryptocurrency
- /r/cryptocurrencyclassic
- /r/cryptocurrencyico
- /r/cryptomars
- /r/cryptomoon
- /r/cryptomoonshots
- /r/satoshistreetbets
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
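Because posts and comments share some fields but not others, downstream code typically dispatches on the `type` field. A minimal sketch in plain Python (the records below are invented examples following the schema above, not actual dataset rows):

```python
# Sketch: dispatching on the shared 'type' field to read type-specific columns.
# The two records below are fabricated to match the documented schema.
def summarize(point):
    """Return a one-line summary of a post or comment record."""
    if point["type"] == "post":
        return f"post {point['id']}: {point['title']}"
    elif point["type"] == "comment":
        return f"comment {point['id']}: {point['body'][:40]}"
    raise ValueError(f"unknown type: {point['type']!r}")

post = {"type": "post", "id": "abc123", "subreddit.name": "cryptocurrency",
        "created_utc": 1628000000, "permalink": "/r/...", "score": 10,
        "domain": "self.cryptocurrency", "url": None, "selftext": "",
        "title": "Example title"}
comment = {"type": "comment", "id": "def456", "subreddit.name": "cryptocurrency",
           "created_utc": 1628000100, "permalink": "/r/...", "score": 3,
           "body": "Example comment body.", "sentiment": 0.5}

print(summarize(post))     # post abc123: Example title
print(summarize(comment))  # comment def456: Example comment body.
```

The same dispatch works whether the rows come from the posts file or the comments file, since `type` is present in both.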
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/reddit-crypto-aug-2021 | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T18:08:05+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for reddit-crypto-aug-2021
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains the complete data for the activity on the following subreddits for the entire month of August 2021:
- /r/cryptocurrency
- /r/cryptocurrencyclassic
- /r/cryptocurrencyico
- /r/cryptomars
- /r/cryptomoon
- /r/cryptomoonshots
- /r/satoshistreetbets
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for reddit-crypto-aug-2021",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity on the following subreddits for the entire month of August 2021:\n- /r/cryptocurrency\n- /r/cryptocurrencyclassic\n- /r/cryptocurrencyico\n- /r/cryptomars\n- /r/cryptomoon\n- /r/cryptomoonshots\n- /r/satoshistreetbets",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for reddit-crypto-aug-2021",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity on the following subreddits for the entire month of August 2021:\n- /r/cryptocurrency\n- /r/cryptocurrencyclassic\n- /r/cryptocurrencyico\n- /r/cryptomars\n- /r/cryptomoon\n- /r/cryptomoonshots\n- /r/satoshistreetbets",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
70a0d87706eb429c9ecbefe862d8d7ef0e0c7837 |
# Dataset Card for reddit-nonewnormal-complete
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=nonewnormal)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=nonewnormal)
### Dataset Summary
This corpus contains the complete data for the activity on subreddit /r/NoNewNormal for the entire duration of its existence.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
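Since `created_utc` is a plain Unix timestamp in UTC, converting it to a readable date needs only the standard library. A minimal sketch (the timestamp value below is illustrative, not taken from the dataset):

```python
from datetime import datetime, timezone

# 'created_utc' is a Unix timestamp in UTC; passing an explicit timezone to
# fromtimestamp avoids an accidental conversion to the local machine's time.
def to_iso(created_utc):
    return datetime.fromtimestamp(created_utc, tz=timezone.utc).isoformat()

print(to_iso(1625097600))  # 2021-07-01T00:00:00+00:00
```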
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/reddit-nonewnormal-complete | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T18:02:06+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for reddit-nonewnormal-complete
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains the complete data for the activity on subreddit /r/NoNewNormal for the entire duration of its existence.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for reddit-nonewnormal-complete",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity on subreddit /r/NoNewNormal for the entire duration of its existence.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for reddit-nonewnormal-complete",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity on subreddit /r/NoNewNormal for the entire duration of its existence.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
0de5fd81c695f468b56d8274241e1ad3f40ae9ac |
# Dataset Card for reddit-wallstreetbets-aug-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=wallstreetbets)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=wallstreetbets)
### Dataset Summary
This corpus contains the complete data for the activity on subreddit /r/WallStreetBets for the entire month of August 2021.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
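The comment-only `sentiment` field lends itself to simple exploratory aggregation. A sketch over fabricated records (the values are made up; real scores come from the dataset's in-house pipeline):

```python
from collections import defaultdict

# Average the comment-only 'sentiment' field per subreddit name.
# Posts are skipped, since they carry no sentiment score.
def mean_sentiment(points):
    totals = defaultdict(lambda: [0.0, 0])
    for p in points:
        if p["type"] == "comment" and p.get("sentiment") is not None:
            bucket = totals[p["subreddit.name"]]
            bucket[0] += p["sentiment"]
            bucket[1] += 1
    return {name: total / count for name, (total, count) in totals.items()}

sample = [
    {"type": "comment", "subreddit.name": "wallstreetbets", "sentiment": 0.8},
    {"type": "comment", "subreddit.name": "wallstreetbets", "sentiment": -0.2},
    {"type": "post", "subreddit.name": "wallstreetbets"},  # no sentiment field
]
print(mean_sentiment(sample))
```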
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/reddit-wallstreetbets-aug-2021 | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T18:15:07+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for reddit-wallstreetbets-aug-2021
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains the complete activity data for the subreddit /r/WallStreetBets for the entire month of August 2021.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for reddit-wallstreetbets-aug-2021",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete activity data for the subreddit /r/WallStreetBets for the entire month of August 2021.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for reddit-wallstreetbets-aug-2021",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete activity data for the subreddit /r/WallStreetBets for the entire month of August 2021.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
077b6add1d663d3168679a0329eb13b110c3f79a |
# Dataset Card for ten-million-reddit-answers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
### Dataset Summary
This corpus contains ten million question-answer pairs, labeled with their score and pre-packaged with the results of a basic sentiment predictor.
The data was procured from /r/AskReddit using [SocialGrep](https://socialgrep.com/?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
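Since every comment carries the pre-computed `sentiment` field, a simple aggregate over the answers to a question is one natural exploratory use. A minimal sketch with fabricated records that follow the field layout above (the values are illustrative, not real data points):

```python
def mean_sentiment(comments):
    """Average the pipeline sentiment over comments that have one.

    Records whose sentiment is None (unscored) are skipped.
    """
    scored = [c["sentiment"] for c in comments if c.get("sentiment") is not None]
    return sum(scored) / len(scored) if scored else None

answers = [
    {"type": "comment", "sentiment": 0.8, "score": 120},
    {"type": "comment", "sentiment": -0.4, "score": 3},
    {"type": "comment", "sentiment": None, "score": 1},  # unscored, skipped
]
print(mean_sentiment(answers))  # ≈ 0.2 over the two scored comments
```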
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/ten-million-reddit-answers | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"]} | 2022-07-01T16:38:25+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for ten-million-reddit-answers
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains ten million question-answer pairs, labeled with score and pre-packaged with results of a basic sentiment predictor.
The data was procured from /r/AskReddit using SocialGrep.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for ten-million-reddit-answers",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains ten million question-answer pairs, labeled with score and pre-packaged with results of a basic sentiment predictor.\n\nThe data was procured from /r/AskReddit using SocialGrep.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for ten-million-reddit-answers",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains ten million question-answer pairs, labeled with score and pre-packaged with results of a basic sentiment predictor.\n\nThe data was procured from /r/AskReddit using SocialGrep.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
b4fd711a8bdf95379deddedec9c10abd428cf8ab |
# Dataset Card for the-2022-trucker-strike-on-reddit
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-2022-trucker-strike-on-reddit?utm_source=huggingface&utm_medium=link&utm_campaign=the2022truckerstrikeonreddit)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=the2022truckerstrikeonreddit)
### Dataset Summary
This corpus contains all the comments under the /r/Ottawa convoy megathreads.
Comments are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit comment.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'sentiment': the evaluated sentiment of the data point, if any.
- 'body': the text of the data point.
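With every comment timestamped in `created_utc`, bucketing activity by UTC calendar day is a common first analysis of an event corpus like this one. A minimal sketch (the timestamps below are fabricated for illustration):

```python
from collections import Counter
from datetime import datetime, timezone

def comments_per_day(comments):
    """Count comments per UTC calendar day from their 'created_utc' field."""
    days = (
        datetime.fromtimestamp(c["created_utc"], tz=timezone.utc).date()
        for c in comments
    )
    return Counter(days)

# Illustrative records; real rows carry the full field set described above.
sample = [
    {"created_utc": 1644105600},  # 2022-02-06 UTC
    {"created_utc": 1644192000},  # 2022-02-07 UTC
    {"created_utc": 1644192060},  # 2022-02-07 UTC
]
counts = comments_per_day(sample)
print(counts.most_common(1))
```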
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/the-2022-trucker-strike-on-reddit | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T17:00:49+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for the-2022-trucker-strike-on-reddit
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains all the comments under the /r/Ottawa convoy megathreads.
Comments are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit comment.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'sentiment': the evaluated sentiment of the data point, if any.
- 'body': the text of the data point.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for the-2022-trucker-strike-on-reddit",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains all the comments under the /r/Ottawa convoy megathreads.\n\nComments are annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit comment.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'sentiment': the evaluated sentiment of the data point, if any.\n- 'body': the text of the data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for the-2022-trucker-strike-on-reddit",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains all the comments under the /r/Ottawa convoy megathreads.\n\nComments are annotated with their score.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a Reddit comment.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n- 'sentiment': the evaluated sentiment of the data point, if any.\n- 'body': the text of the data point.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
63d8c86b1e3c714fd00c98c986eef9e5c6914b26 |
# Dataset Card for the-reddit-covid-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Disclaimer
Due to file size limitations, we are not able to provide the comments for this dataset. Please feel free to download them from the [website](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset#the-reddit-covid-dataset) - no registration required.
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset)
### Dataset Summary
This corpus contains all the mentions of the term `covid` in post titles on the social media platform Reddit, up until the 25th of October, 2021.
The data was procured from Reddit using [SocialGrep](https://socialgrep.com/?utm_source=huggingface&utm_medium=link&utm_campaign=theredditcoviddataset).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
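As a quick illustration of how the post fields above fit together, the following sketch filters hypothetical rows with plain Python; the field names follow this card, but every value (ids, titles, timestamps) is invented for illustration and is not part of the actual release:

```python
# Hypothetical rows shaped like the post schema described above; the field
# names follow this card, but every value here is made up.
posts = [
    {"type": "post", "id": "abc123", "subreddit.nsfw": False,
     "created_utc": 1635120000, "score": 42, "title": "COVID vaccine news"},
    {"type": "post", "id": "def456", "subreddit.nsfw": True,
     "created_utc": 1635120100, "score": 3, "title": "Unrelated post"},
]

# Keep safe-for-work posts whose titles mention "covid" (the corpus criterion).
sfw_covid = [p for p in posts
             if not p["subreddit.nsfw"] and "covid" in p["title"].lower()]
print([p["id"] for p in sfw_covid])  # ['abc123']
```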
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/the-reddit-covid-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T17:40:57+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for the-reddit-covid-dataset
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Disclaimer
Due to file size limitations, we are not able to provide the comments for this dataset. Please feel free to download them from the website - no registration required.
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains all the mentions of the term 'covid' in post titles on the social media platform Reddit, up until the 25th of October, 2021.
The data was procured from Reddit using SocialGrep.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for the-reddit-covid-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Disclaimer\nDue to file size limitations, we are not able to provide the comments for this dataset. Please feel free to download them from the website - no registration required.",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains all the mentions of the term 'covid' in post titles on the social media platform Reddit, up until the 25th of October, 2021.\n\nThe data was procured from Reddit using SocialGrep.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for the-reddit-covid-dataset",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Disclaimer\nDue to file size limitations, we are not able to provide the comments for this dataset. Please feel free to download them from the website - no registration required.",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains all the mentions of the term 'covid' in post titles on the social media platform Reddit, up until the 25th of October, 2021.\n\nThe data was procured from Reddit using SocialGrep.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
4e365e048efa0d81fd4ceb4bd79b0be8b9b69fe7 |
# Dataset Card for top-american-universities-on-reddit
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/top-american-universities-on-reddit?utm_source=huggingface&utm_medium=link&utm_campaign=topamericanuniversitiesonreddit)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=topamericanuniversitiesonreddit)
### Dataset Summary
This corpus contains the complete data for the activity of the subreddits of the top 10 US colleges, according to the [2019 Forbes listing](https://www.forbes.com/top-colleges/#1208425d1987).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/top-american-universities-on-reddit | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"]} | 2022-07-25T17:57:00+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for top-american-universities-on-reddit
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Point of Contact: Website
### Dataset Summary
This corpus contains the complete data for the activity of the subreddits of the top 10 US colleges, according to the 2019 Forbes listing.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'URL': the human-readable name of the data point's host subreddit.
- 'URL': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
CC-BY v4.0
### Contributions
| [
"# Dataset Card for top-american-universities-on-reddit",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity of the subreddits of the top 10 US colleges, according to the 2019 Forbes listing.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for top-american-universities-on-reddit",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nThis corpus contains the complete data for the activity of the subreddits of the top 10 US colleges, according to the 2019 Forbes listing.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nCC-BY v4.0",
"### Contributions"
] |
bfca53175d032213e8bb52537ef9c5c5c504e8e6 | A parallel text corpus, **SALT (Sunbird African Language Translation Dataset)**, was created for five Ugandan languages (Luganda,
Runyankore, Acholi, Lugbara and Ateso) and various methods were explored to train and evaluate translation models. | Sunbird/salt-dataset | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-03-28T12:04:56+00:00 | [] | [] | TAGS
#region-us
| A parallel text corpus, SALT (Sunbird African Language Translation Dataset), was created for five Ugandan languages (Luganda,
Runyankore, Acholi, Lugbara and Ateso) and various methods were explored to train and evaluate translation models. | [] | [
"TAGS\n#region-us \n"
] |
8156e2a3041b6fb41f722e237a96a3e12b3491e8 | [SuperAI Engineer Season 2](https://superai.aiat.or.th/), [Machima](https://machchima.superai.me/)
Machima_ThaiQA_LST20 is a dataset of questions and answers extracted from the articles in the LST20 dataset. A total of 7,642 question-answer pairs were extracted. The data has 4 columns: context, question, answer, and status, respectively.
An example is shown below:
context : ด.ต.ประสิทธิ์ ชาหอมชื่นอายุ 55 ปี ผบ.หมู่งาน ป.ตชด. 24 อุดรธานีถูกยิงด้วยอาวุธปืนอาก้าเข้าที่แขนซ้าย 3 นัดหน้าท้อง 1 นัดส.ต.อ.ประเสริฐ ใหญ่สูงเนินอายุ 35 ปี ผบ.หมู่กก. 1 ปส.2 บช.ปส. ถูกยิงเข้าที่แขนขวากระดูกแตกละเอียดร.ต.อ.ชวพล หมื่นโรจน์อายุ 32 ปีรอง สว.กก. 1 ปส. 2 บช.ปส. ถูกยิงเข้าที่แก้มและไหปลาร้าด้านขวา
question :ผบ.หมู่งาน ป.ตชด. 24 อุดรธานี ถูกยิงด้วยอาวุธปืนอะไรเข้าที่แขนซ้าย 3 นัดหน้าท้อง
answer : อาวุธปืนอาก้า
status : 1
Among the 7,642 questions, some of the extracted question-answer pairs are correct and some are not, for example when the answer does not match the question or the answer appears inside the question sentence itself.
The Machima team reviewed the question-answer pairs and attached a label to each pair, marking it as correct or incorrect, where 1 = correct and 0 = incorrect.
Of the 7,642 question-answer pairs,
4,438 questions were found to be correct
and 3,204 incorrect.
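The two per-label counts are consistent with the stated total; as a quick sanity check (plain arithmetic, nothing beyond the numbers above):

```python
correct, incorrect = 4438, 3204  # counts for label 1 and label 0 from this card
total = correct + incorrect
print(total)  # 7642, matching the stated number of question-answer pairs
```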
You can load the data with the following code:
```python
!pip install datasets -qq  # for loading the dataset
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("SuperAI2-Machima/ThaiQA_LST20")
train_df = pd.DataFrame(dataset['train'])
train_df
``` | SuperAI2-Machima/ThaiQA_LST20 | [
"license:mit",
"question-generation dataset",
"qa dataset",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["thai", "th"], "license": "mit", "tags": ["question-generation dataset", "qa dataset"], "datasets": ["LST20"]} | 2022-02-25T06:29:22+00:00 | [] | [
"thai",
"th"
] | TAGS
#license-mit #question-generation dataset #qa dataset #region-us
| SuperAI Engineer Season 2, Machima
Machima_ThaiQA_LST20 is a dataset of questions and answers extracted from the articles in the LST20 dataset. A total of 7,642 question-answer pairs were extracted. The data has 4 columns: context, question, answer, and status, respectively.
An example is shown below:
context : ด.ต.ประสิทธิ์ ชาหอมชื่นอายุ 55 ปี ผบ.หมู่งาน ป.ตชด. 24 อุดรธานีถูกยิงด้วยอาวุธปืนอาก้าเข้าที่แขนซ้าย 3 นัดหน้าท้อง 1 นัดส.ต.อ.ประเสริฐ ใหญ่สูงเนินอายุ 35 ปี ผบ.หมู่กก. 1 ปส.2 บช.ปส. ถูกยิงเข้าที่แขนขวากระดูกแตกละเอียดร.ต.อ.ชวพล หมื่นโรจน์อายุ 32 ปีรอง สว.กก. 1 ปส. 2 บช.ปส. ถูกยิงเข้าที่แก้มและไหปลาร้าด้านขวา
question :ผบ.หมู่งาน ป.ตชด. 24 อุดรธานี ถูกยิงด้วยอาวุธปืนอะไรเข้าที่แขนซ้าย 3 นัดหน้าท้อง
answer : อาวุธปืนอาก้า
status : 1
Among the 7,642 questions, some of the extracted question-answer pairs are correct and some are not, for example when the answer does not match the question or the answer appears inside the question sentence itself.
The Machima team reviewed the question-answer pairs and attached a label to each pair, marking it as correct or incorrect, where 1 = correct and 0 = incorrect.
Of the 7,642 question-answer pairs,
4,438 questions were found to be correct
and 3,204 incorrect.
You can load the data with the following code:
| [] | [
"TAGS\n#license-mit #question-generation dataset #qa dataset #region-us \n"
] |
3185d6f6dfc1ddf52fcc4361fe040e42089079e1 | Yord and the members of the Machima team jointly created a question-answer dataset from the LST-20 dataset,
using POS and NER to construct the question sentences,
obtaining approximately 1,000 question-answer rows in total.
| SuperAI2-Machima/Yord_ThaiQA_LST20 | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-25T06:31:36+00:00 | [] | [] | TAGS
#region-us
| Yord and the members of the Machima team jointly created a question-answer dataset from the LST-20 dataset,
using POS and NER to construct the question sentences,
obtaining approximately 1,000 question-answer rows in total.
| [] | [
"TAGS\n#region-us \n"
] |
ae9327c6e338e0dc74c821358d8f926ca99009e6 | # MASC
The dataset will be available soon. | TRoboto/masc | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-08-16T18:34:57+00:00 | [] | [] | TAGS
#region-us
| # MASC
The dataset will be available soon. | [] | [
"TAGS\n#region-us \n"
] |
8b370cd6d175cfdfaed3b978d6f583b3d0ebd801 |
## Dataset Summary
It includes a list of Arabic names with the meaning and origin of most names
| TRoboto/names | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"license": "cc-by-4.0", "project": "Maha"} | 2022-01-29T16:33:25+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
## Dataset Summary
It includes a list of Arabic names with the meaning and origin of most names
| [
"## Dataset Summary\r\n\r\nIt includes list of Arabic names with meaning and origin of most names"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## Dataset Summary\r\n\r\nIt includes list of Arabic names with meaning and origin of most names"
] |
e441169d5d9a7a666058f3b00f466bc824e2905f | # Overview
This dataset contains the data for the paper [Deep learning based question answering system in Bengali](https://www.tandfonline.com/doi/full/10.1080/24751839.2020.1833136). It is a version of the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset translated into Bengali. Preprocessing details can be found in the paper. | Tahsin-Mayeesha/Bengali-SQuAD | [
"task_categories:question-answering",
"multilinguality:monolingual",
"language:bn",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["bn"], "multilinguality": ["monolingual"], "task_categories": ["question-answering"]} | 2022-10-25T08:06:50+00:00 | [] | [
"bn"
] | TAGS
#task_categories-question-answering #multilinguality-monolingual #language-Bengali #region-us
| # Overview
This dataset contains the data for the paper Deep learning based question answering system in Bengali. It is a version of the SQuAD 2.0 dataset translated into Bengali. Preprocessing details can be found in the paper. | [
"# Overview \nThis dataset contains the data for the paper Deep learning based question answering system in Bengali. It is a translated version of SQuAD 2.0 dataset to bengali language. Preprocessing details can be found in the paper."
] | [
"TAGS\n#task_categories-question-answering #multilinguality-monolingual #language-Bengali #region-us \n",
"# Overview \nThis dataset contains the data for the paper Deep learning based question answering system in Bengali. It is a translated version of SQuAD 2.0 dataset to bengali language. Preprocessing details can be found in the paper."
] |
a66a66ee3a858c2b59e056b3fb7dfaf501fc5425 |
# Dataset with sentiment of Russian text
Contains an aggregated dataset of Russian texts drawn from 6 source datasets.
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
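A minimal helper for decoding these integer labels; the mapping mirrors the list above, but the helper itself is illustrative and not shipped with the dataset:

```python
# Label ids as documented in this card.
ID2LABEL = {0: "NEUTRAL", 1: "POSITIVE", 2: "NEGATIVE"}

def decode(label_id: int) -> str:
    """Map a dataset label id to its sentiment name."""
    return ID2LABEL[label_id]

print(decode(2))  # NEGATIVE
```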
## Datasets
**[Sentiment Analysis in Russian](https://www.kaggle.com/c/sentiment-analysis-in-russian/data)**
> Sentiments (positive, negative or neutral) of news in russian language from Kaggle competition.
**[Russian Language Toxic Comments](https://www.kaggle.com/blackmoon/russian-language-toxic-comments/)**
> Small dataset with labeled comments from 2ch.hk and pikabu.ru.
**[Dataset of car reviews for machine learning (sentiment analysis)](https://github.com/oldaandozerskaya/auto_reviews)**
> Glazkova A. The evaluation of the proximity of text categories for solving electronic documents classification tasks //VESTNIK TOMSKOGO GOSUDARSTVENNOGO UNIVERSITETA-UPRAVLENIE VYCHISLITELNAJA TEHNIKA I INFORMATIKA-TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE. – 2015. – Т. 31. – №. 2. – С. 18-25.
**[Sentiment datasets by Blinov](https://github.com/natasha/corus/issues/14)**
> Datasets contain reviews from different scopes.
**[LINIS Crowd](http://www.linis-crowd.org/)**
> The work «LINIS Crowd SENT - a sentiment lexicon and a collection of texts with sentiment annotation», created by Sergei Koltcov, Olessia Koltsova and Svetlana Alexeeva.
**[Russian Hotel Reviews Dataset](https://drive.google.com/drive/folders/17sa3h4XHcG0MJGrbfOsbL-kDW29CuJul)**
> Hotel reviews in Russian | MonoHime/ru_sentiment_dataset | [
"language:ru",
"sentiment",
"text-classification",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["ru"], "tags": ["sentiment", "text-classification"]} | 2021-05-19T23:57:22+00:00 | [] | [
"ru"
] | TAGS
#language-Russian #sentiment #text-classification #region-us
|
# Dataset with sentiment of Russian text
Contains an aggregated dataset of Russian texts from 6 source datasets.
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## Datasets
Sentiment Analysis in Russian
> Sentiments (positive, negative or neutral) of news in the Russian language from a Kaggle competition.
Russian Language Toxic Comments
> Small dataset with labeled comments from URL and URL.
Dataset of car reviews for machine learning (sentiment analysis)
> Glazkova A. The evaluation of the proximity of text categories for solving electronic documents classification tasks //VESTNIK TOMSKOGO GOSUDARSTVENNOGO UNIVERSITETA-UPRAVLENIE VYCHISLITELNAJA TEHNIKA I INFORMATIKA-TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE. – 2015. – Т. 31. – №. 2. – С. 18-25.
Sentiment datasets by Blinov
> Datasets contain reviews from different scopes.
LINIS Crowd
> The work «LINIS Crowd SENT - a sentiment lexicon and a collection of texts with sentiment annotation», created by Sergei Koltcov, Olessia Koltsova and Svetlana Alexeeva.
Russian Hotel Reviews Dataset
> Hotel reviews in Russian | [
"# Dataset with sentiment of Russian text\nContains aggregated dataset of Russian texts from 6 datasets.",
"## Labels meaning\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE",
"## Datasets\n\nSentiment Analysis in Russian\n\n> Sentiments (positive, negative or neutral) of news in russian language from Kaggle competition.\n\nRussian Language Toxic Comments\n\n> Small dataset with labeled comments from URL and URL.\n\nDataset of car reviews for machine learning (sentiment analysis)\n\n> Glazkova A. The evaluation of the proximity of text categories for solving electronic documents classification tasks //VESTNIK TOMSKOGO GOSUDARSTVENNOGO UNIVERSITETA-UPRAVLENIE VYCHISLITELNAJA TEHNIKA I INFORMATIKA-TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE. – 2015. – Т. 31. – №. 2. – С. 18-25.\n\nSentiment datasets by Blinov\n\n> Datasets contain reviews from different scopes.\n\nLINIS Crowd\n\n> Произведение «LINIS Crowd SENT - тональный словарь и коллекция текстов с тональной разметкой» созданное автором по имени Sergei Koltcov, Olessia Koltsova и Svetlana Alexeeva.\n\nRussian Hotel Reviews Dataset\n\n> Hotel reviews in Russian"
] | [
"TAGS\n#language-Russian #sentiment #text-classification #region-us \n",
"# Dataset with sentiment of Russian text\nContains aggregated dataset of Russian texts from 6 datasets.",
"## Labels meaning\n 0: NEUTRAL\n 1: POSITIVE\n 2: NEGATIVE",
"## Datasets\n\nSentiment Analysis in Russian\n\n> Sentiments (positive, negative or neutral) of news in russian language from Kaggle competition.\n\nRussian Language Toxic Comments\n\n> Small dataset with labeled comments from URL and URL.\n\nDataset of car reviews for machine learning (sentiment analysis)\n\n> Glazkova A. The evaluation of the proximity of text categories for solving electronic documents classification tasks //VESTNIK TOMSKOGO GOSUDARSTVENNOGO UNIVERSITETA-UPRAVLENIE VYCHISLITELNAJA TEHNIKA I INFORMATIKA-TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE. – 2015. – Т. 31. – №. 2. – С. 18-25.\n\nSentiment datasets by Blinov\n\n> Datasets contain reviews from different scopes.\n\nLINIS Crowd\n\n> Произведение «LINIS Crowd SENT - тональный словарь и коллекция текстов с тональной разметкой» созданное автором по имени Sergei Koltcov, Olessia Koltsova и Svetlana Alexeeva.\n\nRussian Hotel Reviews Dataset\n\n> Hotel reviews in Russian"
] |
c5e97b6dd4236a9868df73d6ae176b4ae3efe78c | # JSFakes (Dr. Tristan Behrens).
This is a tokenized version of the JS-Fakes dataset by Omar Peracha. The original dataset can be found here:
[js-fakes.git](https://github.com/omarperacha/js-fakes.git)
The representation is four tracks with four bars per track.
## Purpose.
This dataset is a good starting point for Music Generation. You could train GPT-2 on the samples to compose music.
## Contact.
Find me on [LinkedIn](https://www.linkedin.com/in/dr-tristan-behrens-734967a2/) and say hello.
If you find an issue or have a feature request, please contact me.
Please be so kind and like this dataset if you find it useful. | TristanBehrens/js-fakes-4bars | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-01-03T15:53:23+00:00 | [] | [] | TAGS
#region-us
| # JSFakes (Dr. Tristan Behrens).
This is a tokenized version of the JS-Fakes dataset by Omar Peracha. The original dataset can be found here:
URL
The representation is four tracks with four bars per track.
## Purpose.
This dataset is a good starting point for Music Generation. You could train GPT-2 on the samples to compose music.
## Contact.
Find me on LinkedIn and say hello.
If you find and issue or have a feature request, please contact me.
Please be so kind and like this dataset if you find it useful. | [
"# JSFakes (Dr. Tristan Behrens).\n\nThis is a tokenized version of the JS-Fakes dataset by Omar Peracha. The original dataset can be found here:\nURL\n\nThe representation is four tracks with four bars per track.",
"## Purpose.\n\nThis dataset is a good starting point for Music Generation. You could train GPT-2 on the samples to compose music.",
"## Contact.\n\nFind me on LinkedIn and say hello.\n\nIf you find and issue or have a feature request, please contact me. \n\nPlease be so kind and like this dataset if you find it useful."
] | [
"TAGS\n#region-us \n",
"# JSFakes (Dr. Tristan Behrens).\n\nThis is a tokenized version of the JS-Fakes dataset by Omar Peracha. The original dataset can be found here:\nURL\n\nThe representation is four tracks with four bars per track.",
"## Purpose.\n\nThis dataset is a good starting point for Music Generation. You could train GPT-2 on the samples to compose music.",
"## Contact.\n\nFind me on LinkedIn and say hello.\n\nIf you find and issue or have a feature request, please contact me. \n\nPlease be so kind and like this dataset if you find it useful."
] |
b2c27fa4cdd3354d835aec7970814ffb08dab0a9 | # Dataset Card for register_oscar
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
### Dataset Summary
The Register Oscar dataset is a multilingual dataset, containing languages from the Oscar dataset that have been tagged with register information.
8 main-level registers:
* Narrative (NA)
* Informational Description (IN)
* Opinion (OP)
* Interactive Discussion (ID)
* How-to/Instruction (HI)
* Informational Persuasion (IP)
* Lyrical (LY)
* Spoken (SP)
For further description of the labels, see (Douglas Biber and Jesse Egbert. 2018. Register variation online)
Code used to tag Register Oscar can be found at https://github.com/TurkuNLP/register-labeling
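The abbreviated register labels above can be expanded with a simple lookup (an illustration; the `labels` field in the data uses the abbreviated form):

```python
# Map the abbreviated register labels used in the data to their full names.
REGISTERS = {
    "NA": "Narrative",
    "IN": "Informational Description",
    "OP": "Opinion",
    "ID": "Interactive Discussion",
    "HI": "How-to/Instruction",
    "IP": "Informational Persuasion",
    "LY": "Lyrical",
    "SP": "Spoken",
}

def expand(labels):
    """Expand a list of abbreviated register labels to full register names."""
    return [REGISTERS[label] for label in labels]
```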
### Languages
Currently contains the following languages: Arabic, Bengali, Catalan, English, Spanish, Basque, French, Hindi, Indonesian, Portuguese, Swahili, Urdu, Vietnamese and Chinese.
For further information on the languages and data, see https://huggingface.co/datasets/oscar
## Dataset Structure
### Data Instances
```
{"id": "0", "labels": ["NA"], "text": "Zarif: Iran inajua mpango wa Saudia wa kufanya mauaji ya kigaidi dhidi ya maafisa wa ngazi za juu wa Iran\n"}
```
### Data Fields
* id: unique id of the document (from the Oscar dataset)
* labels: the list of labels assigned to the text
* text: the original text of the document (as appears in the Oscar dataset)
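The data instance above is a JSON object; assuming the files are distributed as JSON lines, each record can be parsed with the standard library (a minimal sketch):

```python
import json

# The example record from above, parsed from its JSON Lines representation.
line = ('{"id": "0", "labels": ["NA"], '
        '"text": "Zarif: Iran inajua mpango wa Saudia wa kufanya mauaji '
        'ya kigaidi dhidi ya maafisa wa ngazi za juu wa Iran\\n"}')
doc = json.loads(line)
print(doc["id"], doc["labels"], doc["text"].strip())
```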
### Citing
```
@inproceedings{laippala-etal-2022-towards,
title = "Towards better structured and less noisy Web data: Oscar with Register annotations",
author = {Laippala, Veronika and
Salmela, Anna and
R{\"o}nnqvist, Samuel and
Aji, Alham Fikri and
Chang, Li-Hsin and
Dhifallah, Asma and
Goulart, Larissa and
Kortelainen, Henna and
P{\`a}mies, Marc and
Prina Dutra, Deise and
Skantsi, Valtteri and
Sutawika, Lintang and
Pyysalo, Sampo},
booktitle = "Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wnut-1.23",
pages = "215--221",
abstract = {Web-crawled datasets are known to be noisy, as they feature a wide range of language use covering both user-generated and professionally edited content as well as noise originating from the crawling process. This article presents one solution to reduce this noise by using automatic register (genre) identification -whether the texts are, e.g., forum discussions, lyrical or how-to pages. We apply the multilingual register identification model by R{\"o}nnqvist et al. (2021) and label the widely used Oscar dataset. Additionally, we evaluate the model against eight new languages, showing that the performance is comparable to previous findings on a restricted set of languages. Finally, we present and apply a machine learning method for further cleaning text files originating from Web crawls from remains of boilerplate and other elements not belonging to the main text of the Web page. The register labeled and cleaned dataset covers 351 million documents in 14 languages and is available at https://huggingface.co/datasets/TurkuNLP/register{\_}oscar.},
}
``` | TurkuNLP/register_oscar | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2023-09-25T10:30:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for register_oscar
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
### Dataset Summary
The Register Oscar dataset is a multilingual dataset, containing languages from the Oscar dataset that have been tagged with register information.
8 main-level registers:
* Narrative (NA)
* Informational Description (IN)
* Opinion (OP)
* Interactive Discussion (ID)
* How-to/Instruction (HI)
* Informational Persuasion (IP)
* Lyrical (LY)
* Spoken (SP)
For further description of the labels, see (Douglas Biber and Jesse Egbert. 2018. Register variation online)
Code used to tag Register Oscar can be found at URL
### Languages
Currently contains the following languages: Arabic, Bengali, Catalan, English, Spanish, Basque, French, Hindi, Indonesian, Portuguese, Swahili, Urdu, Vietnamese and Chinese.
For further information on the languages and data, see URL
## Dataset Structure
### Data Instances
### Data Fields
* id: unique id of the document (from the Oscar dataset)
* labels: the list of labels assigned to the text
* text: the original text of the document (as appears in the Oscar dataset)
### Citing
| [
"# Dataset Card for register_oscar",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields",
"### Dataset Summary\n\nThe Register Oscar dataset is a multilingual dataset, containing languaegs from the Oscar dataset that have been tagged with register information. \n\n8 main-level registers:\n* Narrative (NA)\n* Informational Description (IN)\n* Opinion (OP)\n* Interactive Discussion (ID)\n* How-to/Instruction (HI)\n* Informational Persuasion (IP)\n* Lyrical (LY)\n* Spoken (SP)\n\nFor further description of the labels, see (Douglas Biber and Jesse Egbert. 2018. Register variation online)\n\nCode used to tag Register Oscar can be found at URL",
"### Languages\n\nCurrently contains the following languages: Arabic, Bengali, Catalan, English, Spanish, Basque, French, Hindi, Indonesian, Portuguese, Swahili, Urdu, Vietnamese and Chinese.\n\nFor further information on the languages and data, see URL",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n* id: unique id of the document (from the Oscar dataset)\n* labels: the list of labels assigned to the text\n* text: the original text of the document (as appears in the Oscar dataset)",
"### Citing"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for register_oscar",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields",
"### Dataset Summary\n\nThe Register Oscar dataset is a multilingual dataset, containing languaegs from the Oscar dataset that have been tagged with register information. \n\n8 main-level registers:\n* Narrative (NA)\n* Informational Description (IN)\n* Opinion (OP)\n* Interactive Discussion (ID)\n* How-to/Instruction (HI)\n* Informational Persuasion (IP)\n* Lyrical (LY)\n* Spoken (SP)\n\nFor further description of the labels, see (Douglas Biber and Jesse Egbert. 2018. Register variation online)\n\nCode used to tag Register Oscar can be found at URL",
"### Languages\n\nCurrently contains the following languages: Arabic, Bengali, Catalan, English, Spanish, Basque, French, Hindi, Indonesian, Portuguese, Swahili, Urdu, Vietnamese and Chinese.\n\nFor further information on the languages and data, see URL",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n* id: unique id of the document (from the Oscar dataset)\n* labels: the list of labels assigned to the text\n* text: the original text of the document (as appears in the Oscar dataset)",
"### Citing"
] |
e4428e399de70a21b8857464e76f0fe859cabe05 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://turkunlp.org/paraphrase.html
- **Repository:** https://github.com/TurkuNLP/Turku-paraphrase-corpus
- **Paper:** https://aclanthology.org/2021.nodalida-main.29
- **Leaderboard:** Not available
- **Point of Contact:** [Jenna Kanerva, Filip Ginter](mailto:[email protected],[email protected])
### Dataset Summary
The project gathered a large dataset of Finnish paraphrase pairs (over 100,000). The paraphrases are selected and classified manually, so as to minimize lexical overlap, and provide examples that are maximally structurally and lexically different. The objective is to create a dataset which is challenging and better tests the capabilities of natural language understanding. An important feature of the data is that most paraphrase pairs are distributed in their document context. The primary application for the dataset is the development and evaluation of deep language models, and representation learning in general.
Usage:
```
from datasets import load_dataset
dataset = load_dataset('TurkuNLP/turku_paraphrase_corpus', name="plain")
```
where `name` is one of the supported loading options: `plain`, `plain-context`, `classification`, `classification-context`, or `generation`. See Data Fields for more information.
### Supported Tasks and Leaderboards
* Paraphrase classification
* Paraphrase generation
### Languages
Finnish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The dataset consists of pairs of text passages. A typical passage is about a sentence long; however, a passage may also be longer or shorter than a sentence. Thus, each example includes two text passages (string), a manually annotated label to indicate the paraphrase type (string), and additional metadata. The dataset includes three different configurations: `plain`, `classification`, and `generation`. The `plain` configuration loads the original data without any additional preprocessing or transformations, while the `classification` configuration directly builds the data in a form suitable for training a paraphrase classifier, where each example is doubled in the data with different directions (text1, text2, label) --> (text2, text1, label), taking care of the label flipping as well if needed (paraphrases with directionality flag < or >). In the `generation` configuration, the examples are preprocessed to be directly suitable for the paraphrase generation task. Here, paraphrases not suitable for generation are discarded (negative, and highly context-dependent paraphrases), and directional paraphrases are provided so that the generation goes from the more detailed passage to the more general one in order to prevent model hallucination (i.e. the model learning to introduce new information). The rest of the paraphrases are provided in both directions (text1, text2, label) --> (text2, text1, label).
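The doubling-with-flip described for the `classification` configuration can be sketched as follows (an illustration, not the corpus's official preprocessing code; it assumes the directionality flags `<` and `>` appear as literal characters in the label):

```python
def flip_label(label: str) -> str:
    """Reverse a directional paraphrase flag in the label, if present."""
    if "<" in label:
        return label.replace("<", ">")
    if ">" in label:
        return label.replace(">", "<")
    return label

def doubled(pairs):
    """Yield each (text1, text2, label) pair in both directions."""
    for text1, text2, label in pairs:
        yield text1, text2, label
        yield text2, text1, flip_label(label)
```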
Each pair in the `plain` and `classification` configurations will include fields:
`id`:
Identifier of the paraphrase pair (string)
`gem_id`:
Identifier of the paraphrase pair in the GEM dataset (string)
`goeswith`:
Identifier of the document from which the paraphrase was extracted, can be `not available` in case the source of the paraphrase is not from document-structured data. All examples with the same `goeswith` value (other than `not available`) should be kept together in any train/dev/test split; most users won't need this (string)
`fold`:
0-99, data split into 100 parts respecting document boundaries, you can use this e.g. to implement crossvalidation safely as all paraphrases from one document are in one fold, most users won't need this (int)
`text1`:
First paraphrase passage (string)
`text2`:
Second paraphrase passage (string)
`label`:
Manually annotated labels (string)
`binary_label`:
Label turned into binary with values `positive` (paraphrase) and `negative` (not-paraphrase) (string)
`is_rewrite`:
Indicator whether the example is human produced rewrite or naturally occurring paraphrase (bool)
Each pair in the `generation` config will include the same fields except `text1` and `text2` are renamed to `input` and `output` in order to indicate the generation direction. Thus the fields are: `id`, `gem_id`, `goeswith`, `fold`, `input`, `output`, `label`, `binary_label`, and `is_rewrite`
**Context**: Most (but not all) of the paraphrase pairs are identified in their document context. By default, these contexts are not included to conserve memory, but can be accessed using the configurations `plain-context` and `classification-context`. These are exactly like `plain` and `classification` with these additional fields:
`context1`:
a dictionary with the fields `doctext` (string), `begin` (int), `end` (int). These mean that the paraphrase in `text1` was extracted from `doctext[begin:end]`. In most cases, `doctext[begin:end]` and `text1` are the exact same string, but occasionally that is not the case when e.g. intervening punctuation or other unrelated text was "cleaned" from `text1` during annotation. In case the context is not available, `doctext` is an empty string and `begin==end==0`
`context2`:
same as `context1` but for `text2`
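Under these definitions, the paraphrase span can be recovered from its context like this (a minimal sketch):

```python
def paraphrase_span(context: dict) -> str:
    """Extract the paraphrase passage from its document context."""
    return context["doctext"][context["begin"]:context["end"]]

# When no context is available, doctext is empty and begin == end == 0,
# so the function simply returns the empty string.
```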
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jmnybl](https://github.com/jmnybl) and [@fginter](https://github.com/fginter) for adding this dataset. | TurkuNLP/turku_paraphrase_corpus | [
"task_categories:text-classification",
"task_categories:sentence-similarity",
"task_categories:text2text-generation",
"task_categories:other",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:fi",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": ["fi"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "sentence-similarity", "text2text-generation", "other"], "task_ids": ["semantic-similarity-classification"], "pretty_name": "Turku Paraphrase Corpus"} | 2022-07-01T14:25:27+00:00 | [] | [
"fi"
] | TAGS
#task_categories-text-classification #task_categories-sentence-similarity #task_categories-text2text-generation #task_categories-other #task_ids-semantic-similarity-classification #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Finnish #license-cc-by-sa-4.0 #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: Not available
- Point of Contact: Jenna Kanerva, Filip Ginter
### Dataset Summary
The project gathered a large dataset of Finnish paraphrase pairs (over 100,000). The paraphrases are selected and classified manually, so as to minimize lexical overlap, and provide examples that are maximally structurally and lexically different. The objective is to create a dataset which is challenging and better tests the capabilities of natural language understanding. An important feature of the data is that most paraphrase pairs are distributed in their document context. The primary application for the dataset is the development and evaluation of deep language models, and representation learning in general.
Usage:
where 'name' is one of the supported loading options: 'plain', 'plain-context', 'classification', 'classification-context', or 'generation'. See Data Fields for more information.
### Supported Tasks and Leaderboards
* Paraphrase classification
* Paraphrase generation
### Languages
Finnish
## Dataset Structure
### Data Instances
### Data Fields
The dataset consists of pairs of text passages. A typical passage is about a sentence long; however, a passage may also be longer or shorter than a sentence. Thus, each example includes two text passages (string), a manually annotated label to indicate the paraphrase type (string), and additional metadata. The dataset includes three different configurations: 'plain', 'classification', and 'generation'. The 'plain' configuration loads the original data without any additional preprocessing or transformations, while the 'classification' configuration directly builds the data in a form suitable for training a paraphrase classifier, where each example is doubled in the data with different directions (text1, text2, label) --> (text2, text1, label), taking care of the label flipping as well if needed (paraphrases with directionality flag < or >). In the 'generation' configuration, the examples are preprocessed to be directly suitable for the paraphrase generation task. Here, paraphrases not suitable for generation are discarded (negative, and highly context-dependent paraphrases), and directional paraphrases are provided so that the generation goes from the more detailed passage to the more general one in order to prevent model hallucination (i.e. the model learning to introduce new information). The rest of the paraphrases are provided in both directions (text1, text2, label) --> (text2, text1, label).
Each pair in the 'plain' and 'classification' configurations will include fields:
'id':
Identifier of the paraphrase pair (string)
'gem_id':
Identifier of the paraphrase pair in the GEM dataset (string)
'goeswith':
Identifier of the document from which the paraphrase was extracted, can be 'not available' in case the source of the paraphrase is not from document-structured data. All examples with the same 'goeswith' value (other than 'not available') should be kept together in any train/dev/test split; most users won't need this (string)
'fold':
0-99, data split into 100 parts respecting document boundaries, you can use this e.g. to implement crossvalidation safely as all paraphrases from one document are in one fold, most users won't need this (int)
'text1':
First paraphrase passage (string)
'text2':
Second paraphrase passage (string)
'label':
Manually annotated labels (string)
'binary_label':
Label turned into binary with values 'positive' (paraphrase) and 'negative' (not-paraphrase) (string)
'is_rewrite':
Indicator whether the example is human produced rewrite or naturally occurring paraphrase (bool)
Each pair in the 'generation' config will include the same fields except 'text1' and 'text2' are renamed to 'input' and 'output' in order to indicate the generation direction. Thus the fields are: 'id', 'gem_id', 'goeswith', 'fold', 'input', 'output', 'label', 'binary_label', and 'is_rewrite'
Context: Most (but not all) of the paraphrase pairs are identified in their document context. By default, these contexts are not included to conserve memory, but can be accessed using the configurations 'plain-context' and 'classification-context'. These are exactly like 'plain' and 'classification' with these additional fields:
'context1':
a dictionary with the fields 'doctext' (string), 'begin' (int), 'end' (int). These mean that the paraphrase in 'text1' was extracted from 'doctext[begin:end]'. In most cases, 'doctext[begin:end]' and 'text1' are the exact same string, but occasionally that is not the case when e.g. intervening punctuation or other unrelated text was "cleaned" from 'text1' during annotation. In case the context is not available, 'doctext' is an empty string and 'begin==end==0'
'context2':
same as 'context1' but for 'text2'
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @jmnybl and @fginter for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: Not available\n- Point of Contact: Jenna Kanerva, Filip Ginter",
"### Dataset Summary\n\nThe project gathered a large dataset of Finnish paraphrase pairs (over 100,000). The paraphrases are selected and classified manually, so as to minimize lexical overlap, and provide examples that are maximally structurally and lexically different. The objective is to create a dataset which is challenging and better tests the capabilities of natural language understanding. An important feature of the data is that most paraphrase pairs are distributed in their document context. The primary application for the dataset is the development and evaluation of deep language models, and representation learning in general.\n\nUsage:\n\nwhere 'name' is one of the supported loading options: 'plain', 'plain-context', 'classification', 'classification-context', or 'generation'. See Data Fields for more information.",
"### Supported Tasks and Leaderboards\n\n* Paraphrase classification\n* Paraphrase generation",
"### Languages\n\nFinnish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nThe dataset consist of pairs of text passages, where a typical passage is about a sentence long, however, a passage may also be longer or shorter than a sentence. Thus, each example includes two text passages (string), a manually annotated label to indicate the paraphrase type (string), and additional metadata. The dataset includes three different configurations: 'plain', 'classification', and 'generation'. The 'plain' configuration loads the original data without any additional preprocessing or transformations, while the 'classification' configuration directly builds the data in a form suitable for training a paraphrase classifier, where each example is doubled in the data with different directions (text1, text2, label) --> (text2, text1, label) taking care of the label flipping as well if needed (paraphrases with directionality flag < or >). In the 'generation' configuration, the examples are preprocessed to be directly suitable for the paraphrase generation task. In here, paraphrases not suitable for generation are discarded (negative, and highly context-dependent paraphrases), and directional paraphrases are provided so that the generation goes from more detailed passage to the more general one in order to prevent model hallucination (i.e. model learning to introduce new information). The rest of the paraphrases are provided in both directions (text1, text2, label) --> (text2, text1, label).\n\nEach pair in the 'plain' and 'classification' configurations will include fields:\n\n'id': \nIdentifier of the paraphrase pair (string)\n\n'gem_id':\nIdentifier of the paraphrase pair in the GEM dataset (string)\n\n'goeswith': \nIdentifier of the document from which the paraphrase was extracted, can be 'not available' in case the source of the paraphrase is not from document-structured data. 
All examples with the same 'goeswith' value (other than 'not available') should be kept together in any train/dev/test split; most users won't need this (string)\n\n'fold': \n0-99, data split into 100 parts respecting document boundaries; you can use this e.g. to implement cross-validation safely, as all paraphrases from one document are in one fold; most users won't need this (int)\n\n'text1': \nFirst paraphrase passage (string)\n\n'text2': \nSecond paraphrase passage (string)\n\n'label': \nManually annotated labels (string)\n\n'binary_label': \nLabel turned into binary with values 'positive' (paraphrase) and 'negative' (not-paraphrase) (string)\n\n'is_rewrite': \nIndicator whether the example is a human-produced rewrite or a naturally occurring paraphrase (bool)\n\nEach pair in the 'generation' config will include the same fields, except 'text1' and 'text2' are renamed to 'input' and 'output' in order to indicate the generation direction. Thus the fields are: 'id', 'gem_id', 'goeswith', 'fold', 'input', 'output', 'label', 'binary_label', and 'is_rewrite'\n\nContext: Most (but not all) of the paraphrase pairs are identified in their document context. By default, these contexts are not included to conserve memory, but can be accessed using the configurations 'plain-context' and 'classification-context'. These are exactly like 'plain' and 'classification' with these additional fields:\n\n'context1': \na dictionary with the fields 'doctext' (string), 'begin' (int), 'end' (int). These mean that the paraphrase in 'text1' was extracted from 'doctext[begin:end]'. In most cases, 'doctext[begin:end]' and 'text1' are the exact same string, but occasionally that is not the case when e.g. intervening punctuation or other unrelated text was \"cleaned\" from 'text1' during annotation. In case the context is not available, 'doctext' is an empty string and 'begin==end==0'\n\n'context2': \nsame as 'context1' but for 'text2'",
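The direction-doubling described above for the 'classification' configuration can be sketched in a few lines of plain Python. This is an illustrative sketch only — the real preprocessing lives in the dataset's loading script, and the tuple layout and the handling of the '<'/'>' directionality flag here are assumptions based on the description:

```python
def double_for_classification(pairs):
    # Emit each (text1, text2, label) pair in both directions,
    # flipping a directionality flag ('<' or '>') in the label when present.
    flip = {"<": ">", ">": "<"}
    doubled = []
    for text1, text2, label in pairs:
        doubled.append((text1, text2, label))
        flipped_label = "".join(flip.get(ch, ch) for ch in label)
        doubled.append((text2, text1, flipped_label))
    return doubled

print(double_for_classification([("a", "b", "4>")]))
# [('a', 'b', '4>'), ('b', 'a', '4<')]
```

A label without a directionality flag (e.g. '2' or '4') passes through unchanged in both directions.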
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @jmnybl and @fginter for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-sentence-similarity #task_categories-text2text-generation #task_categories-other #task_ids-semantic-similarity-classification #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Finnish #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: Not available\n- Point of Contact: Jenna Kanerva, Filip Ginter",
"### Dataset Summary\n\nThe project gathered a large dataset of Finnish paraphrase pairs (over 100,000). The paraphrases are selected and classified manually, so as to minimize lexical overlap, and provide examples that are maximally structurally and lexically different. The objective is to create a dataset which is challenging and better tests the capabilities of natural language understanding. An important feature of the data is that most paraphrase pairs are distributed in their document context. The primary application for the dataset is the development and evaluation of deep language models, and representation learning in general.\n\nUsage:\n\nwhere 'name' is one of the supported loading options: 'plain', 'plain-context', 'classification', 'classification-context', or 'generation'. See Data Fields for more information.",
"### Supported Tasks and Leaderboards\n\n* Paraphrase classification\n* Paraphrase generation",
"### Languages\n\nFinnish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nThe dataset consists of pairs of text passages, where a typical passage is about a sentence long; however, a passage may also be longer or shorter than a sentence. Thus, each example includes two text passages (string), a manually annotated label to indicate the paraphrase type (string), and additional metadata. The dataset includes three different configurations: 'plain', 'classification', and 'generation'. The 'plain' configuration loads the original data without any additional preprocessing or transformations, while the 'classification' configuration directly builds the data in a form suitable for training a paraphrase classifier, where each example is doubled in the data with different directions (text1, text2, label) --> (text2, text1, label), taking care of the label flipping as well if needed (paraphrases with directionality flag < or >). In the 'generation' configuration, the examples are preprocessed to be directly suitable for the paraphrase generation task. Here, paraphrases not suitable for generation are discarded (negative and highly context-dependent paraphrases), and directional paraphrases are provided so that the generation goes from the more detailed passage to the more general one in order to prevent model hallucination (i.e. the model learning to introduce new information). The rest of the paraphrases are provided in both directions (text1, text2, label) --> (text2, text1, label).\n\nEach pair in the 'plain' and 'classification' configurations will include the fields:\n\n'id': \nIdentifier of the paraphrase pair (string)\n\n'gem_id':\nIdentifier of the paraphrase pair in the GEM dataset (string)\n\n'goeswith': \nIdentifier of the document from which the paraphrase was extracted, can be 'not available' in case the source of the paraphrase is not from document-structured data. 
All examples with the same 'goeswith' value (other than 'not available') should be kept together in any train/dev/test split; most users won't need this (string)\n\n'fold': \n0-99, data split into 100 parts respecting document boundaries; you can use this e.g. to implement cross-validation safely, as all paraphrases from one document are in one fold; most users won't need this (int)\n\n'text1': \nFirst paraphrase passage (string)\n\n'text2': \nSecond paraphrase passage (string)\n\n'label': \nManually annotated labels (string)\n\n'binary_label': \nLabel turned into binary with values 'positive' (paraphrase) and 'negative' (not-paraphrase) (string)\n\n'is_rewrite': \nIndicator whether the example is a human-produced rewrite or a naturally occurring paraphrase (bool)\n\nEach pair in the 'generation' config will include the same fields, except 'text1' and 'text2' are renamed to 'input' and 'output' in order to indicate the generation direction. Thus the fields are: 'id', 'gem_id', 'goeswith', 'fold', 'input', 'output', 'label', 'binary_label', and 'is_rewrite'\n\nContext: Most (but not all) of the paraphrase pairs are identified in their document context. By default, these contexts are not included to conserve memory, but can be accessed using the configurations 'plain-context' and 'classification-context'. These are exactly like 'plain' and 'classification' with these additional fields:\n\n'context1': \na dictionary with the fields 'doctext' (string), 'begin' (int), 'end' (int). These mean that the paraphrase in 'text1' was extracted from 'doctext[begin:end]'. In most cases, 'doctext[begin:end]' and 'text1' are the exact same string, but occasionally that is not the case when e.g. intervening punctuation or other unrelated text was \"cleaned\" from 'text1' during annotation. In case the context is not available, 'doctext' is an empty string and 'begin==end==0'\n\n'context2': \nsame as 'context1' but for 'text2'",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @jmnybl and @fginter for adding this dataset."
] |
0371e35968fd4adfa2d1f6bd5e009c6e2b842f81 | Transformation of AI.FB's Wikimatrix dataset. Combined rows containing translations of a single source sentence into one consolidated row, applying a score threshold of 1.03 to remove poor translations. | Tyler/wikimatrix_collapsed | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-04-13T18:54:24+00:00 | [] | [] | TAGS
#region-us
| Transformation of AI.FB's Wikimatrix dataset. Combined rows containing translations of a single source sentence into one consolidated row, applying a score threshold of 1.03 to remove poor translations. | [] | [
"TAGS\n#region-us \n"
] |
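The collapsing step described in the Wikimatrix row above — one consolidated row per source sentence, with a score threshold of 1.03 dropping poor translations — can be sketched as follows. This is a hypothetical reconstruction: the actual row layout, and whether the threshold comparison is strict, are not stated in the description:

```python
from collections import defaultdict

def collapse_translations(rows, threshold=1.03):
    # rows: iterable of (source_sentence, translation, score) triples (assumed layout).
    # Translations scoring at or below the threshold are treated as poor and dropped;
    # surviving translations are grouped into one consolidated entry per source sentence.
    grouped = defaultdict(list)
    for source, translation, score in rows:
        if score > threshold:
            grouped[source].append(translation)
    return dict(grouped)

rows = [
    ("hello world", "bonjour le monde", 1.25),
    ("hello world", "salut monde", 1.01),   # below threshold: dropped
    ("good night", "bonne nuit", 1.10),
]
print(collapse_translations(rows))
# {'hello world': ['bonjour le monde'], 'good night': ['bonne nuit']}
```

Source sentences whose translations all fall below the threshold simply produce no consolidated row.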
77547406a07141e688c3ec62fd1840f97b17adf9 | # AutoNLP Dataset for project: second
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project second.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "one hundred and forty-two minus fifty-three",
"target": "one hundred and ninety-five"
},
{
"text": "two hundred and twenty minus seventy-one",
"target": "two hundred and ninety-one"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 600000 |
| valid | 150000 |
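As a quick sanity check, the sample records and the field schema above can be validated with nothing but the standard library. The records below are copied verbatim from the sample shown earlier; nothing beyond what the card states is assumed:

```python
import json

sample = json.loads("""
[
    {"text": "one hundred and forty-two minus fifty-three",
     "target": "one hundred and ninety-five"},
    {"text": "two hundred and twenty minus seventy-one",
     "target": "two hundred and ninety-one"}
]
""")

# Each record should carry exactly the two string-valued fields from the schema.
for record in sample:
    assert set(record) == {"text", "target"}
    assert all(isinstance(v, str) for v in record.values())

train, valid = 600_000, 150_000
print(len(sample))                          # 2
print(round(valid / (train + valid), 2))    # 0.2 — an 80/20 train/validation split
```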
| VoidZeroe/autonlp-data-second | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"task_categories": ["conditional-text-generation"]} | 2021-11-20T06:51:45+00:00 | [] | [] | TAGS
#region-us
| AutoNLP Dataset for project: second
===================================
Table of Contents
----------------
* Dataset Description
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
Dataset Description
-------------------
This dataset has been automatically processed by AutoNLP for project second.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\nThis dataset is split into train and validation splits. The split sizes are as follows:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\nThis dataset is split into train and validation splits. The split sizes are as follows:"
] |
76133a5c0f1d27bfb8521dad6783490cd157e730 | # IndoParaCrawl
IndoParaCrawl is the ParaCrawl v7.1 dataset bulk-translated to Indonesian using Google Translate.
Thanks to HuggingFace for providing free storage for datasets <3.
| Wikidepia/IndoParaCrawl | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-04-13T09:22:22+00:00 | [] | [] | TAGS
#region-us
| # IndoParaCrawl
IndoParaCrawl is the ParaCrawl v7.1 dataset bulk-translated to Indonesian using Google Translate.
Thanks to HuggingFace for providing free storage for datasets <3.
| [
"# IndoParaCrawl\n\nIndoParaCrawl is the ParaCrawl v7.1 dataset bulk-translated to Indonesian using Google Translate.\n\nThanks to HuggingFace for providing free storage for datasets <3."
] | [
"TAGS\n#region-us \n",
"# IndoParaCrawl\n\nIndoParaCrawl is the ParaCrawl v7.1 dataset bulk-translated to Indonesian using Google Translate.\n\nThanks to HuggingFace for providing free storage for datasets <3."
] |
290dc889066d7127077a2bec7f0be726fdeaa760 |
### KR3: Korean Restaurant Reviews with Ratings
Korean sentiment classification dataset
- Size: 460K(+180K)
- Language: Korean-centric
### ⚠️ Caution with `Rating` Column
0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review.
**Note that rating 2 is not intended to be used directly for supervised learning (classification).** This data is included for additional pre-training purposes or other usage.
In other words, this dataset is basically a **binary** sentiment classification task where labels are 0 and 1.
### 🔍 See More
See all the code for crawling/preprocessing the dataset and the experiments with KR3 in the [GitHub Repo](https://github.com/Wittgensteinian/kr3).
See the Kaggle version at [Kaggle Dataset](https://www.kaggle.com/ninetyninenewton/kr3-korean-restaurant-reviews-with-ratings).
### Usage
```python
from datasets import load_dataset
kr3 = load_dataset("leey4n/KR3", name='kr3', split='train')
kr3 = kr3.remove_columns(['__index_level_0__']) # Original file didn't include this column. Suspect it's a hugging face issue.
```
```python
# drop reviews with ambiguous label
kr3_binary = kr3.filter(lambda example: example['Rating'] != 2)
```
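After filtering out the ambiguous rating 2 as above, the remaining ratings map directly onto binary labels. A small helper like the following could be passed to `datasets.Dataset.map` to attach readable label names — a sketch only: the column is named `Rating` per this card, but the helper itself is not part of the dataset:

```python
def rating_to_label(rating: int) -> str:
    # 0 -> negative, 1 -> positive; rating 2 (ambiguous) must be filtered out first.
    if rating not in (0, 1):
        raise ValueError("rating 2 (ambiguous) is not meant for direct classification")
    return "negative" if rating == 0 else "positive"

# e.g. kr3_binary = kr3_binary.map(lambda ex: {"label_name": rating_to_label(ex["Rating"])})
print(rating_to_label(0), rating_to_label(1))  # negative positive
```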
### License
**CC BY-NC-SA 4.0**
### Legal Issues
We concluded that the **non-commercial usage and release of KR3 fall into the range of fair use (공정 이용)** stated in the Korean copyright act (저작권법). We further clarify that we **did not agree to the terms of service** of any websites which might prohibit web crawling; in other words, the web crawling we did was performed without logging in to any website. Despite all of this, feel free to contact any of the contributors if you notice any legal issues.
### Contributors & Acknowledgement
(Alphabetical order)
[Dongin Jung](https://github.com/dongin1009)
[Hyunwoo Kwak](https://github.com/Kwak-Hyun-woo)
[Kaeun Lee](https://github.com/Kaeun-Lee)
[Yejoon Lee](https://github.com/wittgensteinian)
This work was done as part of DIYA's 4th cohort (4기). Compute resources needed for the work were provided by [DIYA](https://blog.diyaml.com) and surromind.ai.
| leey4n/KR3 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:100K<n<1m",
"language:ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["ko"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1m"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "KR3"} | 2023-07-19T07:35:54+00:00 | [] | [
"ko"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-100K<n<1m #language-Korean #license-cc-by-nc-sa-4.0 #region-us
|
### KR3: Korean Restaurant Reviews with Ratings
Korean sentiment classification dataset
- Size: 460K(+180K)
- Language: Korean-centric
### ️ Caution with 'Rating' Column
0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review.
Note that rating 2 is not intended to be used directly for supervised learning (classification). This data is included for additional pre-training purposes or other usage.
In other words, this dataset is basically a binary sentiment classification task where labels are 0 and 1.
### See More
See all the code for crawling/preprocessing the dataset and the experiments with KR3 in GitHub Repo.
See the Kaggle version at Kaggle Dataset.
### Usage
### License
CC BY-NC-SA 4.0
### Legal Issues
We concluded that the non-commercial usage and release of KR3 fall into the range of fair use (공정 이용) stated in the Korean copyright act (저작권법). We further clarify that we did not agree to the terms of service of any websites which might prohibit web crawling; in other words, the web crawling we did was performed without logging in to any website. Despite all of this, feel free to contact any of the contributors if you notice any legal issues.
### Contributors & Acknowledgement
(Alphabetical order)
Dongin Jung
Hyunwoo Kwak
Kaeun Lee
Yejoon Lee
This work was done as part of DIYA's 4th cohort (4기). Compute resources needed for the work were provided by DIYA and URL.
| [
"### KR3: Korean Restaurant Reviews with Ratings\nKorean sentiment classification dataset \n\n- Size: 460K(+180K)\n- Language: Korean-centric",
"### ️ Caution with 'Rating' Column\n0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review. \nNote that rating 2 is not intended to be used directly for supervised learning (classification). This data is included for additional pre-training purposes or other usage. \nIn other words, this dataset is basically a binary sentiment classification task where labels are 0 and 1.",
"### See More\nSee all the code for crawling/preprocessing the dataset and the experiments with KR3 in GitHub Repo. \nSee the Kaggle version at Kaggle Dataset.",
"### Usage",
"### License\nCC BY-NC-SA 4.0",
"### Legal Issues\nWe concluded that the non-commercial usage and release of KR3 fall into the range of fair use (공정 이용) stated in the Korean copyright act (저작권법). We further clarify that we did not agree to the terms of service of any websites which might prohibit web crawling; in other words, the web crawling we did was performed without logging in to any website. Despite all of this, feel free to contact any of the contributors if you notice any legal issues.",
"### Contributors & Acknowledgement\n(Alphabetical order)\n\nDongin Jung\n\nHyunwoo Kwak\n\nKaeun Lee\n\nYejoon Lee\n\nThis work was done as part of DIYA's 4th cohort (4기). Compute resources needed for the work were provided by DIYA and URL."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-100K<n<1m #language-Korean #license-cc-by-nc-sa-4.0 #region-us \n",
"### KR3: Korean Restaurant Reviews with Ratings\nKorean sentiment classification dataset \n\n- Size: 460K(+180K)\n- Language: Korean-centric",
"### ️ Caution with 'Rating' Column\n0 stands for negative review, 1 stands for positive review, and 2 stands for ambiguous review. \nNote that rating 2 is not intended to be used directly for supervised learning (classification). This data is included for additional pre-training purposes or other usage. \nIn other words, this dataset is basically a binary sentiment classification task where labels are 0 and 1.",
"### See More\nSee all the code for crawling/preprocessing the dataset and the experiments with KR3 in GitHub Repo. \nSee the Kaggle version at Kaggle Dataset.",
"### Usage",
"### License\nCC BY-NC-SA 4.0",
"### Legal Issues\nWe concluded that the non-commercial usage and release of KR3 fall into the range of fair use (공정 이용) stated in the Korean copyright act (저작권법). We further clarify that we did not agree to the terms of service of any websites which might prohibit web crawling; in other words, the web crawling we did was performed without logging in to any website. Despite all of this, feel free to contact any of the contributors if you notice any legal issues.",
"### Contributors & Acknowledgement\n(Alphabetical order)\n\nDongin Jung\n\nHyunwoo Kwak\n\nKaeun Lee\n\nYejoon Lee\n\nThis work was done as part of DIYA's 4th cohort (4기). Compute resources needed for the work were provided by DIYA and URL."
] |
cdd31747121aded91fb44cce3b7ed23fd42bbe93 | # Archive Of Our Own Original Works (AO4W)
**Warning! Many/most of these files may be NSFW!**
Approximately 2GB of text files from Archive of Our Own; specifically, files labeled "original work" or some variation. For training fiction models. I recommend that you clean the text as needed for your purposes. | WyrdCurt/AO4W | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-07-26T11:03:27+00:00 | [] | [] | TAGS
#region-us
| # Archive Of Our Own Original Works (AO4W)
Warning! Many/most of these files may be NSFW!
Approximately 2GB of text files from Archive of Our Own; specifically, files labeled "original work" or some variation. For training fiction models. I recommend that you clean the text as needed for your purposes. | [
"# Archive Of Our Own Original Works (AO4W)\n\nWarning! Many/most of these files may be NSFW!\n\nApproximately 2GB of text files from Archive of Our Own; specifically, files labeled \"original work\" or some variation. For training fiction models. I recommend that you clean the text as needed for your purposes."
] | [
"TAGS\n#region-us \n",
"# Archive Of Our Own Original Works (AO4W)\n\nWarning! Many/most of these files may be NSFW!\n\nApproximately 2GB of text files from Archive of Our Own; specifically, files labeled \"original work\" or some variation. For training fiction models. I recommend that you clean the text as needed for your purposes."
] |
dbf39a35fef11582622433b7f031c876c6b29d6f | My new dataset | XiangXiang/clt | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2021-04-28T01:08:29+00:00 | [] | [] | TAGS
#region-us
| My new dataset | [] | [
"TAGS\n#region-us \n"
] |