sha (stringlengths 40-40) | text (stringlengths 1-13.4M) | id (stringlengths 2-117) | tags (sequencelengths 1-7.91k) | created_at (stringlengths 25-25) | metadata (stringlengths 2-875k) | last_modified (stringlengths 25-25) | arxiv (sequencelengths 0-25) | languages (sequencelengths 0-7.91k) | tags_str (stringlengths 17-159k) | text_str (stringlengths 1-447k) | text_lists (sequencelengths 0-352) | processed_texts (sequencelengths 1-353) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1b1f1f2a456fc59a8c9260f800d7098a34183419 |
Retrieving the 50th example from the train set:
```
> print(dataset['train']['sentence1'][0][50])
Muž hrá na gitare.
> print(dataset['train']['sentence2'][0][50])
Chlapec hrá na gitare.
> print(dataset['train']['similarity_score'][0][50])
3.200000047683716
```
For score explanation see [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt).
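A minimal loading sketch for this dataset (the repo id comes from this card; the split/column access below assumes the standard `datasets` row layout and may need adjusting to match the indexing shown above):
```python
from datasets import load_dataset

# Slovak STS-B; repo id taken from this card.
dataset = load_dataset("crabz/stsb-sk")

# Retrieve the 50th training example, assuming standard row indexing.
example = dataset["train"][50]
print(example["sentence1"])
print(example["sentence2"])
print(example["similarity_score"])
```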
| crabz/stsb-sk | [
"task_ids:semantic-similarity-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|stsb_multi_mt",
"language:sk",
"license:unknown",
"region:us"
] | 2022-03-16T10:20:28+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["sk"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|stsb_multi_mt"], "task_categories": ["text-scoring"], "task_ids": ["semantic-similarity-scoring"], "pretty_name": "stsb-sk", "language_bcp47": ["sk-SK"]} | 2022-10-23T04:13:41+00:00 | [] | [
"sk"
] | TAGS
#task_ids-semantic-similarity-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|stsb_multi_mt #language-Slovak #license-unknown #region-us
|
Retrieving the 50th example from the train set:
For score explanation see stsb_multi_mt.
| [] | [
"TAGS\n#task_ids-semantic-similarity-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|stsb_multi_mt #language-Slovak #license-unknown #region-us \n"
] |
45c0c4a3404059175269c9dacfe00cb88b3a5a89 |
# Dataset Card for NMSQA (Natural Multi-speaker Spoken Question Answering)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Repository:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Paper:
https://arxiv.org/abs/2203.04911
- Leaderboard:
- Point of Contact:
Download audio data: [https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz](https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz)
Unzip audio data: `tar -xf nmsqa_audio.tar.gz`
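The same download and extraction can be scripted; a hedged sketch using `huggingface_hub` (the repo id and archive name are taken from the link above, the target directory is arbitrary):
```python
import tarfile

from huggingface_hub import hf_hub_download

# Fetch the audio archive from the dataset repository.
archive_path = hf_hub_download(
    repo_id="voidful/NMSQA",
    filename="nmsqa_audio.tar.gz",
    repo_type="dataset",
)

# Equivalent to `tar -xf nmsqa_audio.tar.gz`.
with tarfile.open(archive_path) as tar:
    tar.extractall("nmsqa_audio")
```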
### Dataset Summary
The Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.
### Supported Tasks and Leaderboards
The primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
Each instance in the dataset contains the following fields:
- id: Unique identifier for the instance
- title: The title of the passage
- context: The passage text
- question: The question text
- answers: The answer annotations, containing:
  - answer_start: The start index of the answer in the text
  - audio_full_answer_end: The end position of the audio answer in seconds
  - audio_full_answer_start: The start position of the audio answer in seconds
  - audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words
  - audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words
  - audio_segment_answer_end: The end position of the audio answer in seconds for the segment
  - audio_segment_answer_start: The start position of the audio answer in seconds for the segment
  - text: The answer text
- content_segment_audio_path: The audio path for the content segment
- content_full_audio_path: The complete audio path for the content
- content_audio_sampling_rate: The audio sampling rate
- content_audio_speaker: The audio speaker
- content_segment_text: The segment text of the content
- content_segment_normalized_text: The normalized text for generating audio
- question_audio_path: The audio path for the question
- question_audio_sampling_rate: The audio sampling rate
- question_audio_speaker: The audio speaker
- question_normalized_text: The normalized text for generating audio
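A minimal sketch of inspecting the fields listed above (this assumes the question/answer metadata loads directly with `load_dataset` under this card's repo id; the audio itself is distributed as a separate archive, see Dataset Description):
```python
from datasets import load_dataset

# Assumed to expose the textual metadata and audio paths described above.
nmsqa = load_dataset("voidful/NMSQA")

sample = nmsqa["train"][0]
print(sample["question"])
print(sample["content_segment_audio_path"])
print(sample["question_audio_path"])
```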
### Data Fields
The dataset includes the following data fields:
- id
- title
- context
- question
- answers
- content_segment_audio_path
- content_full_audio_path
- content_audio_sampling_rate
- content_audio_speaker
- content_segment_text
- content_segment_normalized_text
- question_audio_path
- question_audio_sampling_rate
- question_audio_speaker
- question_normalized_text
### Data Splits
The dataset is split into train, dev, and test sets.
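Continuing from the sketch above, the split sizes can be checked without hard-coding split names (whether the validation split is exposed as `dev` or `validation` is not stated here, so iterate over whatever is present):
```python
# `nmsqa` is the DatasetDict loaded in the previous sketch.
for split_name, split in nmsqa.items():
    print(split_name, len(split))
```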
## Dataset Creation
### Curation Rationale
The NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.
### Source Data
The NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.
#### Initial Data Collection and Normalization
The initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.
#### Who are the source language producers?
The source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.
### Annotations
#### Annotation process
The annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.
#### Who are the annotators?
The annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.
### Discussion of Biases
The dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.
### Other Known Limitations
As the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.
## Additional Information
### Dataset Curators
The NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.
### Licensing Information
The licensing information for the dataset is not explicitly mentioned.
### Citation Information
```bibtex
@article{lin2022dual,
title={DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning},
author={Lin, Guan-Ting and Chuang, Yung-Sung and Chung, Ho-Lam and Yang, Shu-wen and Chen, Hsuan-Jui and Li, Shang-Wen and Mohamed, Abdelrahman and Lee, Hung-yi and Lee, Lin-shan},
journal={arXiv preprint arXiv:2203.04911},
year={2022}
}
```
### Contributions
Thanks to [@voidful](https://github.com/voidful) for adding this dataset. | voidful/NMSQA | [
"task_categories:question-answering",
"task_categories:automatic-speech-recognition",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"speech-recognition",
"arxiv:2203.04911",
"region:us"
] | 2022-03-16T16:03:42+00:00 | {"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["expert-generated", "machine-generated", "crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering", "automatic-speech-recognition"], "task_ids": ["abstractive-qa"], "pretty_name": "NMSQA", "tags": ["speech-recognition"]} | 2023-04-04T03:46:23+00:00 | [
"2203.04911"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-automatic-speech-recognition #task_ids-abstractive-qa #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #speech-recognition #arxiv-2203.04911 #region-us
|
# Dataset Card for NMSQA (Natural Multi-speaker Spoken Question Answering)
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
URL
- Repository:
URL
- Paper:
URL
- Leaderboard:
- Point of Contact:
Download audio data: URL
Unzip audio data: 'tar -xf nmsqa_audio.URL'
### Dataset Summary
The Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.
### Supported Tasks and Leaderboards
The primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
Each instance in the dataset contains the following fields:
- id: Unique identifier for the instance
- title: The title of the passage
- context: The passage text
- question: The question text
- answers: The answer annotations, containing:
  - answer_start: The start index of the answer in the text
  - audio_full_answer_end: The end position of the audio answer in seconds
  - audio_full_answer_start: The start position of the audio answer in seconds
  - audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words
  - audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words
  - audio_segment_answer_end: The end position of the audio answer in seconds for the segment
  - audio_segment_answer_start: The start position of the audio answer in seconds for the segment
  - text: The answer text
- content_segment_audio_path: The audio path for the content segment
- content_full_audio_path: The complete audio path for the content
- content_audio_sampling_rate: The audio sampling rate
- content_audio_speaker: The audio speaker
- content_segment_text: The segment text of the content
- content_segment_normalized_text: The normalized text for generating audio
- question_audio_path: The audio path for the question
- question_audio_sampling_rate: The audio sampling rate
- question_audio_speaker: The audio speaker
- question_normalized_text: The normalized text for generating audio
### Data Fields
The dataset includes the following data fields:
- id
- title
- context
- question
- answers
- content_segment_audio_path
- content_full_audio_path
- content_audio_sampling_rate
- content_audio_speaker
- content_segment_text
- content_segment_normalized_text
- question_audio_path
- question_audio_sampling_rate
- question_audio_speaker
- question_normalized_text
### Data Splits
The dataset is split into train, dev, and test sets.
## Dataset Creation
### Curation Rationale
The NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.
### Source Data
The NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.
#### Initial Data Collection and Normalization
The initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.
#### Who are the source language producers?
The source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.
### Annotations
#### Annotation process
The annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.
#### Who are the annotators?
The annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.
### Discussion of Biases
The dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.
### Other Known Limitations
As the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.
## Additional Information
### Dataset Curators
The NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.
### Licensing Information
The licensing information for the dataset is not explicitly mentioned.
### Contributions
Thanks to @voidful for adding this dataset. | [
"# Dataset Card for NMSQA(Natural Multi-speaker Spoken Question Answering)",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\nURL\n- Leaderboard:\n- Point of Contact:\n\nDownload audio data: URL \nUnzip audio data: 'tar -xf nmsqa_audio.URL'",
"### Dataset Summary\n\nThe Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.",
"### Supported Tasks and Leaderboards\n\nThe primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.",
"### Languages\n\nThe dataset is in English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance in the dataset contains the following fields:\n\n- id: Unique identifier for the instance\n- title: The title of the passage\n- context: The passage text\n- question: The question text\n- - answer_start: The start index of the answer in the text\n- audio_full_answer_end: The end position of the audio answer in seconds\n- audio_full_answer_start: The start position of the audio answer in seconds\n- audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words\n- audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words\n- audio_segment_answer_end: The end position of the audio answer in seconds for the segment\n- audio_segment_answer_start: The start position of the audio answer in seconds for the segment\n- text: The answer text\n- content_segment_audio_path: The audio path for the content segment\n- content_full_audio_path: The complete audio path for the content\n- content_audio_sampling_rate: The audio sampling rate\n- content_audio_speaker: The audio speaker\n- content_segment_text: The segment text of the content\n- content_segment_normalized_text: The normalized text for generating audio\n- question_audio_path: The audio path for the question\n- question_audio_sampling_rate: The audio sampling rate\n- question_audio_speaker: The audio speaker\n- question_normalized_text: The normalized text for generating audio",
"### Data Fields\n\nThe dataset includes the following data fields:\n\n- id\n- title\n- context\n- question\n- answers\n- content_segment_audio_path\n- content_full_audio_path\n- content_audio_sampling_rate\n- content_audio_speaker\n- content_segment_text\n- content_segment_normalized_text\n- question_audio_path\n- question_audio_sampling_rate\n- question_audio_speaker\n- question_normalized_text",
"### Data Splits\n\nThe dataset is split into train, dev, and test sets.",
"## Dataset Creation",
"### Curation Rationale\n\nThe NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.",
"### Source Data\n\nThe NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.",
"#### Initial Data Collection and Normalization\n\nThe initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.",
"#### Who are the source language producers?\n\nThe source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.",
"### Annotations",
"#### Annotation process\n\nThe annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.",
"#### Who are the annotators?\n\nThe annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.",
"### Personal and Sensitive Information\n\nThe dataset does not contain any personal or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.",
"### Discussion of Biases\n\nThe dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.",
"### Other Known Limitations\n\nAs the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.",
"## Additional Information",
"### Dataset Curators\n\nThe NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.",
"### Licensing Information\n\nThe licensing information for the dataset is not explicitly mentioned.",
"### Contributions\n\nThanks to @voidful for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_categories-automatic-speech-recognition #task_ids-abstractive-qa #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #speech-recognition #arxiv-2203.04911 #region-us \n",
"# Dataset Card for NMSQA(Natural Multi-speaker Spoken Question Answering)",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\nURL\n- Leaderboard:\n- Point of Contact:\n\nDownload audio data: URL \nUnzip audio data: 'tar -xf nmsqa_audio.URL'",
"### Dataset Summary\n\nThe Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.",
"### Supported Tasks and Leaderboards\n\nThe primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.",
"### Languages\n\nThe dataset is in English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance in the dataset contains the following fields:\n\n- id: Unique identifier for the instance\n- title: The title of the passage\n- context: The passage text\n- question: The question text\n- - answer_start: The start index of the answer in the text\n- audio_full_answer_end: The end position of the audio answer in seconds\n- audio_full_answer_start: The start position of the audio answer in seconds\n- audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words\n- audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words\n- audio_segment_answer_end: The end position of the audio answer in seconds for the segment\n- audio_segment_answer_start: The start position of the audio answer in seconds for the segment\n- text: The answer text\n- content_segment_audio_path: The audio path for the content segment\n- content_full_audio_path: The complete audio path for the content\n- content_audio_sampling_rate: The audio sampling rate\n- content_audio_speaker: The audio speaker\n- content_segment_text: The segment text of the content\n- content_segment_normalized_text: The normalized text for generating audio\n- question_audio_path: The audio path for the question\n- question_audio_sampling_rate: The audio sampling rate\n- question_audio_speaker: The audio speaker\n- question_normalized_text: The normalized text for generating audio",
"### Data Fields\n\nThe dataset includes the following data fields:\n\n- id\n- title\n- context\n- question\n- answers\n- content_segment_audio_path\n- content_full_audio_path\n- content_audio_sampling_rate\n- content_audio_speaker\n- content_segment_text\n- content_segment_normalized_text\n- question_audio_path\n- question_audio_sampling_rate\n- question_audio_speaker\n- question_normalized_text",
"### Data Splits\n\nThe dataset is split into train, dev, and test sets.",
"## Dataset Creation",
"### Curation Rationale\n\nThe NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.",
"### Source Data\n\nThe NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.",
"#### Initial Data Collection and Normalization\n\nThe initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.",
"#### Who are the source language producers?\n\nThe source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.",
"### Annotations",
"#### Annotation process\n\nThe annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.",
"#### Who are the annotators?\n\nThe annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.",
"### Personal and Sensitive Information\n\nThe dataset does not contain any personal or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.",
"### Discussion of Biases\n\nThe dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.",
"### Other Known Limitations\n\nAs the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.",
"## Additional Information",
"### Dataset Curators\n\nThe NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.",
"### Licensing Information\n\nThe licensing information for the dataset is not explicitly mentioned.",
"### Contributions\n\nThanks to @voidful for adding this dataset."
] |
1190b855bc90372a9571b3c59847f42d1675a2fe | # AutoNLP Dataset for project: devign_raw_test
## Dataset Description
This dataset has been automatically processed by AutoNLP for project devign_raw_test.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "void ff_avg_h264_qpel16_mc32_msa ( uint8_t * dst , const uint8_t * src , ptrdiff_t stride ) { avc_lu[...]",
"target": 0
},
{
"text": "static void sd_cardchange ( void * opaque , bool load ) { SDState * sd = opaque ; qemu_set_irq ( sd [...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
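A hedged sketch of loading the data and confirming these fields (the repo id is taken from this card; split names follow the table in the next section):
```python
from datasets import load_dataset

ds = load_dataset("nimaster/autonlp-data-devign_raw_test")

print(ds["train"].features)          # expect a string "text" and a two-class "target"
print(ds["train"][0]["text"][:80])   # truncated C function source, as in the sample above
```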
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 21188 |
| valid | 5298 |
| nimaster/autonlp-data-devign_raw_test | [
"task_categories:text-classification",
"region:us"
] | 2022-03-17T13:06:22+00:00 | {"task_categories": ["text-classification"], "languages": ["en"]} | 2022-03-17T13:07:49+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoNLP Dataset for project: devign\_raw\_test
==============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoNLP for project devign\_raw\_test.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
43aa565bcc88b801013e7a3882eee40713e7c725 | **X-SCITLDR**: Cross-Lingual Extreme Summarization of Scholarly Documents
# X-SCITLDR
The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.
# Languages
- German
- Italian
- Chinese
- Japanese
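A hedged loading sketch for the four target languages above (the repo id comes from this card; the per-language configuration names are an assumption and should be checked against the repository):
```python
from datasets import load_dataset

# Hypothetical config name "de" for the German summaries; "it", "zh" and "ja"
# would be the analogous guesses for the other target languages.
xscitldr_de = load_dataset("umanlp/xscitldr", "de")

sample = xscitldr_de["train"][0]
print(sample)  # expected: an English source paper paired with a German summary
```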
# Related
- [Paper](https://dl.acm.org/doi/abs/10.1145/3529372.3530938)
- [Code](https://github.com/sobamchan/xscitldr/)
- [Contact](mailto:[email protected])
# Citation Information
```
@inproceedings{takeshita-etal-2022-xsci,
author = {Takeshita, Sotaro and Green, Tommaso and Friedrich, Niklas and Eckert, Kai and Ponzetto, Simone Paolo},
title = {X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents},
year = {2022},
isbn = {9781450393454},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3529372.3530938},
doi = {10.1145/3529372.3530938},
abstract = {The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage 'summarize and translate' approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.},
booktitle = {Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries},
articleno = {4},
numpages = {12},
keywords = {scholarly document processing, summarization, multilinguality},
location = {Cologne, Germany},
series = {JCDL '22}
}
``` | umanlp/xscitldr | [
"region:us"
] | 2022-03-17T14:30:16+00:00 | {} | 2022-07-04T12:49:25+00:00 | [] | [] | TAGS
#region-us
| X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents
# X-SCITLDR
The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.
# Languages
- German
- Italian
- Chinese
- Japanese
# Related
- Paper
- Code
- Contact
| [
"# X-SCITLDR\n\nThe number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.",
"# Languages\n\n- German\n- Italian\n- Chinese\n- Japanese",
"# Related\n\n- Paper\n- Code\n- Contact"
] | [
"TAGS\n#region-us \n",
"# X-SCITLDR\n\nThe number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.",
"# Languages\n\n- German\n- Italian\n- Chinese\n- Japanese",
"# Related\n\n- Paper\n- Code\n- Contact"
] |
f5b8eff44796cdd3a3c9ebb77383051adae4abc7 |
kaggle datasets | ttxy/kaggle | [
"license:apache-2.0",
"region:us"
] | 2022-03-17T15:02:27+00:00 | {"license": "apache-2.0"} | 2022-03-17T16:00:50+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
kaggle datasets | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
4d1d66c78bfe1ad870fb21f7e7837103b43c42c7 | - `tweet_disaster`, 8562 | ttxy/nlp | [
"region:us"
] | 2022-03-17T15:59:17+00:00 | {} | 2022-07-24T04:58:39+00:00 | [] | [] | TAGS
#region-us
| - 'tweet_disaster', 8562 | [] | [
"TAGS\n#region-us \n"
] |
adb147bd12398f9d56a652005f4895c6b7100ebe | Text belonging to all BOE (Boletin Oficial del Estado, Spain) issues from 13 January 2020 to 16 February 2022.
Separator: '|'
Columns: year | month | day | BOE text | size | BOE pdf filename | Paulosdeanllons/ODS_BOE | [
"license:afl-3.0",
"region:us"
] | 2022-03-18T08:48:15+00:00 | {"license": "afl-3.0"} | 2022-03-23T13:52:31+00:00 | [] | [] | TAGS
#license-afl-3.0 #region-us
| Text belonging to all BOE (Boletin Oficial del Estado, Spain) issues from 13 January 2020 to 16 February 2022.
Separator: '|'
Columns: year | month | day | BOE text | size | BOE pdf filename | [] | [
"TAGS\n#license-afl-3.0 #region-us \n"
] |
2e1dc06ac448fac1fe3c032a8919735353d80f58 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| malteos/test-ds | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"size_categories:unknown",
"region:us"
] | 2022-03-18T10:02:26+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en-US"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": [], "pretty_name": "test ds"} | 2022-10-25T09:03:23+00:00 | [] | [
"en-US"
] | TAGS
#task_categories-text-retrieval #multilinguality-monolingual #size_categories-unknown #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #size_categories-unknown #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
d62cc9c9bad06319b45ec81ba7d840fd1bc63894 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| malteos/test2 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-03-18T10:18:42+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["conditional-text-generation"], "task_ids": ["summarization"], "paperswithcode_id": "cnn-daily-mail-1", "pretty_name": "CNN / Daily Mail"} | 2022-10-23T04:14:36+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
f7a3fbcdaec21897a76a04cf78ecd94149444327 | This contains crawled ecommerce data from Common Crawl
| elena-soare/crawled-ecommerce | [
"region:us"
] | 2022-03-18T11:19:43+00:00 | {} | 2022-04-04T09:35:10+00:00 | [] | [] | TAGS
#region-us
| This contains crawled ecommerce data from Common Crawl
| [] | [
"TAGS\n#region-us \n"
] |
6dfbbdc8bf9da9500f8eaa2eeb13f150186941d0 |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# IWN Wordlists
[](https://creativecommons.org/licenses/by-nc-sa/4.0/) [](https://twitter.com/cfiltnlp) [](https://twitter.com/PeopleCentredAI)
We provide the unique word list from the [IndoWordnet (IWN)](https://www.cfilt.iitb.ac.in/indowordnet/) knowledge base.
## Usage
```python
from datasets import load_dataset
language = "hindi" // supported languages: assamese, bengali, bodo, gujarati, hindi, kannada, kashmiri, konkani, malayalam, manipuri, marathi, meitei, nepali, oriya, punjabi, sanskrit, tamil, telugu, urdu.
words = load_dataset("cfilt/iwn_wordlists", language)
word_list = words["train"]["word"]
```
## Citation
```latex
@inproceedings{bhattacharyya2010indowordnet,
title={IndoWordNet},
author={Bhattacharyya, Pushpak},
booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
year={2010}
}
``` | cfilt/iwn_wordlists | [
"task_categories:token-classification",
"annotations_creators:Shivam Mhaskar, Diptesh Kanojia",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:as",
"language:bn",
"language:mni",
"language:gu",
"language:hi",
"language:kn",
"language:ks",
"language:kok",
"language:ml",
"language:mr",
"language:or",
"language:ne",
"language:pa",
"language:sa",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-nc-sa-4.0",
"abbreviation-detection",
"region:us"
] | 2022-03-18T11:56:41+00:00 | {"annotations_creators": ["Shivam Mhaskar, Diptesh Kanojia"], "language_creators": ["found"], "language": ["as", "bn", "mni", "gu", "hi", "kn", "ks", "kok", "ml", "mr", "or", "ne", "pa", "sa", "ta", "te", "ur"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "plod-filtered", "pretty_name": "PLOD: An Abbreviation Detection Dataset", "tags": ["abbreviation-detection"]} | 2022-11-23T12:06:02+00:00 | [] | [
"as",
"bn",
"mni",
"gu",
"hi",
"kn",
"ks",
"kok",
"ml",
"mr",
"or",
"ne",
"pa",
"sa",
"ta",
"te",
"ur"
] | TAGS
#task_categories-token-classification #annotations_creators-Shivam Mhaskar, Diptesh Kanojia #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Assamese #language-Bengali #language-Manipuri #language-Gujarati #language-Hindi #language-Kannada #language-Kashmiri #language-Konkani (macrolanguage) #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Nepali (macrolanguage) #language-Panjabi #language-Sanskrit #language-Tamil #language-Telugu #language-Urdu #license-cc-by-nc-sa-4.0 #abbreviation-detection #region-us
|
<p align="center"><img src="URL alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# IWN Wordlists
 knowledge base.
## Usage
| [
"# IWN Wordlists\n\n knowledge base.",
"## Usage"
] | [
"TAGS\n#task_categories-token-classification #annotations_creators-Shivam Mhaskar, Diptesh Kanojia #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Assamese #language-Bengali #language-Manipuri #language-Gujarati #language-Hindi #language-Kannada #language-Kashmiri #language-Konkani (macrolanguage) #language-Malayalam #language-Marathi #language-Oriya (macrolanguage) #language-Nepali (macrolanguage) #language-Panjabi #language-Sanskrit #language-Tamil #language-Telugu #language-Urdu #license-cc-by-nc-sa-4.0 #abbreviation-detection #region-us \n",
"# IWN Wordlists\n\n knowledge base.",
"## Usage"
] |
329f8440b131659c97299b2a4cdf38779082e14f | # Parallel Sentences for Spanish language
This repository contains parallel sentences (English + same sentences in Spanish language) in a simple tsv.gz format:
```
english_sentences\tsentence_in_spanish_language
```
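A minimal sketch of reading such a file (the file name `train.tsv.gz` is an assumption; use whichever `.tsv.gz` file you downloaded from this repository):
```python
import gzip

# Read (English, Spanish) sentence pairs from a downloaded .tsv.gz file.
pairs = []
with gzip.open("train.tsv.gz", "rt", encoding="utf-8") as f:
    for line in f:
        english, spanish = line.rstrip("\n").split("\t", 1)
        pairs.append((english, spanish))

print(len(pairs), pairs[0])
```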
## Usage
These sentences can be used to train multi-lingual sentence embedding models. For more details, you could check out [SBERT.net - Multilingual-Model](https://www.sbert.net/examples/training/multilingual/README.html) | hackathon-pln-es/parallel-sentences | [
"region:us"
] | 2022-03-18T18:08:37+00:00 | {} | 2022-04-02T17:38:29+00:00 | [] | [] | TAGS
#region-us
| # Parallel Sentences for Spanish language
This repository contains parallel sentences (English + same sentences in Spanish language) in a simple URL format:
## Usage
These sentences can be used to train multi-lingual sentence embedding models. For more details, you could check out URL - Multilingual-Model | [
"# Parallel Sentences for Spanish language\n\nThis repository contains parallel sentences (English + same sentences in Spanish language) in a simple URL format:",
"## Usage\t\nThese sentences can be used to train multi-lingual sentence embedding models. For more details, you could check out URL - Multilingual-Model"
] | [
"TAGS\n#region-us \n",
"# Parallel Sentences for Spanish language\n\nThis repository contains parallel sentences (English + same sentences in Spanish language) in a simple URL format:",
"## Usage\t\nThese sentences can be used to train multi-lingual sentence embedding models. For more details, you could check out URL - Multilingual-Model"
] |
f888d2a1df5a5f11cde2832710cc0d9e59b3b132 | ## Generation procedure
The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored using the [LDNOOBW](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) wordlist (a score is the number of curses per character).
The procedure was the following:
1. The first half of the data are 100k documents randomly sampled from the Pile and assigned scores
2. The second half are the most cursing documents from the Pile, obtained by scoring the whole Pile and choosing the 100k documents with the highest scores
3. Then, the dataset was shuffled and a 9:1 train-test split was done
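A rough sketch of the scoring described above (illustrative only; the exact tokenization used is not documented, and `bad_words` is assumed to be the LDNOOBW word list loaded as a set of lowercase strings):
```python
import re

def curse_score(document: str, bad_words: set) -> float:
    """Number of curse-word occurrences per character of the document."""
    if not document:
        return 0.0
    tokens = re.findall(r"[\w']+", document.lower())
    n_curses = sum(token in bad_words for token in tokens)
    return n_curses / len(document)
```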
## Basic stats
The average and median scores are 0.013 and 0.019, respectively. | tomekkorbak/pile-curse-full | [
"region:us"
] | 2022-03-18T23:17:53+00:00 | {} | 2022-03-23T20:05:15+00:00 | [] | [] | TAGS
#region-us
| ## Generation procedure
The dataset was constructed using documents from the Pile scored using LDNOOBW wordlist (a score is number of curses per character).
The procedure was the following:
1. The first half of the data are 100k documents randomly sampled from the Pile and assigned scores
2. The second half are the most cursing document from the Pile, obtained by scoring the whole Pile and choosing 100k documents with highest scores
3. Then, the dataset was shuffled and a 9:1 train-test split was done
## Basic stats
The average and median scores are 0.013 and 0.019, respectively. | [
"## Generation procedure\n\nThe dataset was constructed using documents from the Pile scored using LDNOOBW wordlist (a score is number of curses per character). \n\nThe procedure was the following:\n1. The first half of the data are 100k documents randomly sampled from the Pile and assigned scores\n2. The second half are the most cursing document from the Pile, obtained by scoring the whole Pile and choosing 100k documents with highest scores\n3. Then, the dataset was shuffled and a 9:1 train-test split was done",
"## Basic stats\n\nThe average and median scores are 0.013 and 0.019, respectively."
] | [
"TAGS\n#region-us \n",
"## Generation procedure\n\nThe dataset was constructed using documents from the Pile scored using LDNOOBW wordlist (a score is number of curses per character). \n\nThe procedure was the following:\n1. The first half of the data are 100k documents randomly sampled from the Pile and assigned scores\n2. The second half are the most cursing document from the Pile, obtained by scoring the whole Pile and choosing 100k documents with highest scores\n3. Then, the dataset was shuffled and a 9:1 train-test split was done",
"## Basic stats\n\nThe average and median scores are 0.013 and 0.019, respectively."
] |
9d4d238fbdad8ccfc9058cdcda552527f54bca2a |
# Dataset Card for CCMatrix v1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/CCMatrix.php
- **Repository:** None
- **Paper:** https://arxiv.org/abs/1911.04944
### Dataset Summary
This corpus has been extracted from web crawls using the margin-based bitext mining techniques described at https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix.
* 90 languages, 1,197 bitexts
* total number of files: 90
* total number of tokens: 112.14G
* total number of sentence fragments: 7.37G
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Configs are generated for all language pairs in both directions.
You can find the valid pairs in the Homepage section of the Dataset Description: https://opus.nlpl.eu/CCMatrix.php
E.g.
```
from datasets import load_dataset
dataset = load_dataset("yhavinga/ccmatrix", "en-nl", streaming=True)
```
This will open the `en-nl` dataset in streaming mode. Without streaming, download and prepare will take tens of minutes.
You can inspect elements with:
```
print(next(iter(dataset['train'])))
{'id': 0, 'score': 1.2499677, 'translation': {'en': 'They come from all parts of Egypt, just like they will at the day of His coming.', 'nl': 'Zij kwamen uit alle delen van Egypte, evenals zij op de dag van Zijn komst zullen doen.'}}
```
## Dataset Structure
### Data Instances
For example:
```json
{
"id": 1,
"score": 1.2498379,
"translation": {
"nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”",
"en": "And we should call every truth false which was not accompanied by at least one laugh.”"
}
}
```
### Data Fields
Each example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and
language 2 texts.
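For instance, a minimal sketch of reading these fields from the streamed `train` split (building on the loading example above):
```python
from itertools import islice
from datasets import load_dataset

dataset = load_dataset("yhavinga/ccmatrix", "en-nl", streaming=True)
for example in islice(dataset["train"], 3):
    # Each example has an integer "id", a mining "score" and a "translation" dict.
    source = example["translation"]["en"]
    target = example["translation"]["nl"]
    print(example["id"], example["score"], source, "->", target)
```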
### Data Splits
Only a `train` split is provided.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
IMPORTANT: Please cite reference [2][3] if you use this data.
1. **[CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data](https://arxiv.org/abs/1911.00359)**
 by *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin
and Edouard Grave*.
2. **[CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB](https://arxiv.org/abs/1911.04944)** by *Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin*.
3. **[Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125)** by *Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines,
Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky,
Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.*
This HuggingFace CCMatrix dataset is a wrapper around the service and files prepared and hosted by OPUS:
* **[Parallel Data, Tools and Interfaces in OPUS](https://www.aclweb.org/anthology/L12-1246/)** by *Jörg Tiedemann*.
### Contributions
| yhavinga/ccmatrix | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:ast",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ceb",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:ilo",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:la",
"language:lb",
"language:lg",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mr",
"language:ms",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:oc",
"language:om",
"language:or",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:tl",
"language:tr",
"language:tt",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wo",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"language:se",
"license:unknown",
"conditional-text-generation",
"arxiv:1911.04944",
"arxiv:1911.00359",
"arxiv:2010.11125",
"region:us"
] | 2022-03-19T08:54:43+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["af", "am", "ar", "ast", "az", "be", "bg", "bn", "br", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "ha", "he", "hi", "hr", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "ko", "la", "lb", "lg", "lt", "lv", "mg", "mk", "ml", "mr", "ms", "my", "ne", "nl", "no", "oc", "om", "or", "pl", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "tl", "tr", "tt", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "se"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": {"en-nl": ["n<110M"], "en-af": ["n<9M"], "en-lt": ["<24M"]}, "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "paperswithcode_id": "ccmatrix", "pretty_name": "CCMatrixV1", "tags": ["conditional-text-generation"]} | 2023-03-09T07:44:58+00:00 | [
"1911.04944",
"1911.00359",
"2010.11125"
] | [
"af",
"am",
"ar",
"ast",
"az",
"be",
"bg",
"bn",
"br",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"ig",
"ilo",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"ko",
"la",
"lb",
"lg",
"lt",
"lv",
"mg",
"mk",
"ml",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"oc",
"om",
"or",
"pl",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"tl",
"tr",
"tt",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"se"
] | TAGS
#task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #source_datasets-original #language-Afrikaans #language-Amharic #language-Arabic #language-Asturian #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Cebuano #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Iloko #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Korean #language-Latin #language-Luxembourgish #language-Ganda #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-Occitan (post 1500) #language-Oromo #language-Oriya (macrolanguage) #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Tagalog #language-Turkish #language-Tatar #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Chinese #language-Zulu #language-Northern Sami #license-unknown #conditional-text-generation #arxiv-1911.04944 #arxiv-1911.00359 #arxiv-2010.11125 #region-us
|
# Dataset Card for CCMatrix v1
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: None
- Paper: URL
### Dataset Summary
This corpus has been extracted from web crawls using the margin-based bitext mining techniques described at URL
* 90 languages, 1,197 bitexts
* total number of files: 90
* total number of tokens: 112.14G
* total number of sentence fragments: 7.37G
### Supported Tasks and Leaderboards
### Languages
Configs are generated for all language pairs in both directions.
You can find the valid pairs in Homepage section of Dataset Description: URL
E.g.
This will open the 'en-nl' dataset in streaming mode. Without streaming, download and prepare will take tens of minutes.
You can inspect elements with:
## Dataset Structure
### Data Instances
For example:
### Data Fields
Each example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and
language 2 texts.
### Data Splits
Only a 'train' split is provided.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
IMPORTANT: Please cite reference [2][3] if you use this data.
1. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data
 by *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin
and Edouard Grave*.
2. CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB by *Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin*.
3. Beyond English-Centric Multilingual Machine Translation by *Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines,
Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky,
Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.*
This HuggingFace CCMatrix dataset is a wrapper around the service and files prepared and hosted by OPUS:
* Parallel Data, Tools and Interfaces in OPUS by *Jörg Tiedemann*.
### Contributions
| [
"# Dataset Card for CCMatrix v1",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: None\n- Paper: URL",
"### Dataset Summary\n\nThis corpus has been extracted from web crawls using the margin-based bitext mining techniques described at URL\n\n* 90 languages, 1,197 bitexts\n* total number of files: 90\n* total number of tokens: 112.14G\n* total number of sentence fragments: 7.37G",
"### Supported Tasks and Leaderboards",
"### Languages\n\nConfigs are generated for all language pairs in both directions.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n\n\nThis will open the 'en-nl' dataset in streaming mode. Without streaming, download and prepare will take tens of minutes.\nYou can inspect elements with:",
"## Dataset Structure",
"### Data Instances\nFor example:",
"### Data Fields\nEach example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and\nlanguage 2 texts.",
"### Data Splits\nOnly a 'train' split is provided.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\nIMPORTANT: Please cite reference [2][3] if you use this data.\n\n1. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data\n by *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Jouli\n and Edouard Grave*.\n2. CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB by *Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin*.\n3. Beyond English-Centric Multilingual Machine Translation by *Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines,\n Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky,\n Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.*\n\nThis HuggingFace CCMatrix dataset is a wrapper around the service and files prepared and hosted by OPUS:\n\n* Parallel Data, Tools and Interfaces in OPUS by *Jörg Tiedemann*.",
"### Contributions"
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #source_datasets-original #language-Afrikaans #language-Amharic #language-Arabic #language-Asturian #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Cebuano #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Iloko #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Korean #language-Latin #language-Luxembourgish #language-Ganda #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-Occitan (post 1500) #language-Oromo #language-Oriya (macrolanguage) #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Tagalog #language-Turkish #language-Tatar #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Chinese #language-Zulu #language-Northern Sami #license-unknown #conditional-text-generation #arxiv-1911.04944 #arxiv-1911.00359 #arxiv-2010.11125 #region-us \n",
"# Dataset Card for CCMatrix v1",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: None\n- Paper: URL",
"### Dataset Summary\n\nThis corpus has been extracted from web crawls using the margin-based bitext mining techniques described at URL\n\n* 90 languages, 1,197 bitexts\n* total number of files: 90\n* total number of tokens: 112.14G\n* total number of sentence fragments: 7.37G",
"### Supported Tasks and Leaderboards",
"### Languages\n\nConfigs are generated for all language pairs in both directions.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n\n\nThis will open the 'en-nl' dataset in streaming mode. Without streaming, download and prepare will take tens of minutes.\nYou can inspect elements with:",
"## Dataset Structure",
"### Data Instances\nFor example:",
"### Data Fields\nEach example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and\nlanguage 2 texts.",
"### Data Splits\nOnly a 'train' split is provided.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\nIMPORTANT: Please cite reference [2][3] if you use this data.\n\n1. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data\n by *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Jouli\n and Edouard Grave*.\n2. CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB by *Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin*.\n3. Beyond English-Centric Multilingual Machine Translation by *Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines,\n Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky,\n Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.*\n\nThis HuggingFace CCMatrix dataset is a wrapper around the service and files prepared and hosted by OPUS:\n\n* Parallel Data, Tools and Interfaces in OPUS by *Jörg Tiedemann*.",
"### Contributions"
] |
24d41c732de80b4b883f8e279d484a6d4b5eb017 |
# Dataset Card for MESD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/cy34mh68j9/5
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Contiene los datos de la base MESD procesados para hacer 'finetuning' de un modelo 'Wav2Vec' en el Hackaton organizado por 'Somos NLP'.
Ejemplo de referencia:
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb
Hemos accedido a la base MESD para obtener ejemplos.
Breve descripción de los autores de la base MESD:
"La Base de Datos del Discurso Emocional Mexicano (MESD en inglés) proporciona enunciados de una sola palabra para las prosodias afectivas de ira, asco, miedo, felicidad, neutro y tristeza con conformación cultural mexicana. El MESD ha sido pronunciado por actores adultos y niños no profesionales: Se dispone de 3 voces femeninas, 2 masculinas y 6 infantiles. Las palabras de los enunciados emocionales y neutros proceden de dos corpus: (corpus A) compuesto por sustantivos y adjetivos que se repiten a través de prosodias emocionales y tipos de voz (femenina, masculina, infantil), y (corpus B) que consiste en palabras controladas por edad de adquisición, frecuencia de uso, familiaridad, concreción, valencia, excitación y clasificaciones de dimensionalidad de emociones discretas.
Las grabaciones de audio se realizaron en un estudio profesional con los siguientes materiales (1) un micrófono Sennheiser e835 con una respuesta de frecuencia plana (100 Hz a 10 kHz), (2) una interfaz de audio Focusrite Scarlett 2i4 conectada al micrófono con un cable XLR y al ordenador, y (3) la estación de trabajo de audio digital REAPER (Rapid Environment for Audio Production, Engineering, and Recording). Los archivos de audio se almacenaron como una secuencia de 24 bits con una frecuencia de muestreo de 48000Hz. La amplitud de las formas de onda acústicas se reescaló entre -1 y 1.
Se crearon dos versiones con reducción de la naturalidad de los locutores a partir de expresiones emocionales humanas para voces femeninas del corpus B. En concreto, la naturalidad se redujo progresivamente de las voces humanas al nivel 1 al nivel 2. En particular, se editaron la duración y el tono medio en las sílabas acentuadas para reducir la diferencia entre las sílabas acentuadas y las no acentuadas. En los enunciados completos, se redujeron las relaciones F2/F1 y F3/F1 editando las frecuencias F2 y F3. También se redujo la intensidad de los armónicos 1 y 4. "
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Español
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Origen: texto que indica si se trata del conjunto de datos MESD original o los casos 'Speaker-embedded naturalness-reduced female voices' donde los autores han generado de forma sintética nuevos datos transformando algunas de las instancias de los audios originales.
Palabra: texto de la palabra que se ha leído.
Emoción: texto de la emoción a la que representa: Valores: 'Enojo', 'Felicidad', 'Miedo', 'Neutral', 'Disgusto', 'Tristeza'.
InfoActor: texto que indica si la voz es de 'Niño', 'Hombre', 'Mujer'.
AudioArray: audio array, remuestreado a 16 Khz.
### Data Splits
Train: 891 ejemplos, mezcla de casos MESD y 'Speaker-embedded naturalness-reduced female voices'.
Validation: 130 ejemplos, todos casos MESD.
Test: 129 ejemplos, todos casos MESD.
## Dataset Creation
### Curation Rationale
Unir los tres subconjuntos de datos y procesarlos para la tarea de finetuning, acorde al input esperado por el modelo Wav2Vec.
### Source Data
#### Initial Data Collection and Normalization
Acceso a los datos en bruto:
https://data.mendeley.com/datasets/cy34mh68j9/5
Conversión a audio array y remuestreo a 16 Khz.
#### Who are the source language producers?
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons, [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
```
| hackathon-pln-es/MESD | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-19T18:39:32+00:00 | {"license": "cc-by-4.0", "Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), \u201cMexican Emotional Speech Database (MESD)\u201d, Mendeley Data, V5, doi": "10.17632/cy34mh68j9.5"} | 2022-03-25T18:15:07+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# Dataset Card for MESD
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Contiene los datos de la base MESD procesados para hacer 'finetuning' de un modelo 'Wav2Vec' en el Hackaton organizado por 'Somos NLP'.
Ejemplo de referencia:
URL
Hemos accedido a la base MESD para obtener ejemplos.
Breve descripción de los autores de la base MESD:
"La Base de Datos del Discurso Emocional Mexicano (MESD en inglés) proporciona enunciados de una sola palabra para las prosodias afectivas de ira, asco, miedo, felicidad, neutro y tristeza con conformación cultural mexicana. El MESD ha sido pronunciado por actores adultos y niños no profesionales: Se dispone de 3 voces femeninas, 2 masculinas y 6 infantiles. Las palabras de los enunciados emocionales y neutros proceden de dos corpus: (corpus A) compuesto por sustantivos y adjetivos que se repiten a través de prosodias emocionales y tipos de voz (femenina, masculina, infantil), y (corpus B) que consiste en palabras controladas por edad de adquisición, frecuencia de uso, familiaridad, concreción, valencia, excitación y clasificaciones de dimensionalidad de emociones discretas.
Las grabaciones de audio se realizaron en un estudio profesional con los siguientes materiales (1) un micrófono Sennheiser e835 con una respuesta de frecuencia plana (100 Hz a 10 kHz), (2) una interfaz de audio Focusrite Scarlett 2i4 conectada al micrófono con un cable XLR y al ordenador, y (3) la estación de trabajo de audio digital REAPER (Rapid Environment for Audio Production, Engineering, and Recording). Los archivos de audio se almacenaron como una secuencia de 24 bits con una frecuencia de muestreo de 48000Hz. La amplitud de las formas de onda acústicas se reescaló entre -1 y 1.
Se crearon dos versiones con reducción de la naturalidad de los locutores a partir de expresiones emocionales humanas para voces femeninas del corpus B. En concreto, la naturalidad se redujo progresivamente de las voces humanas al nivel 1 al nivel 2. En particular, se editaron la duración y el tono medio en las sílabas acentuadas para reducir la diferencia entre las sílabas acentuadas y las no acentuadas. En los enunciados completos, se redujeron las relaciones F2/F1 y F3/F1 editando las frecuencias F2 y F3. También se redujo la intensidad de los armónicos 1 y 4. "
### Supported Tasks and Leaderboards
### Languages
Español
## Dataset Structure
### Data Instances
### Data Fields
Origen: texto que indica si se trata del conjunto de datos MESD original o los casos 'Speaker-embedded naturalness-reduced female voices' donde los autores han generado de forma sintética nuevos datos transformando algunas de las instancias de los audios originales.
Palabra: texto de la palabra que se ha leído.
Emoción: texto de la emoción a la que representa: Valores: 'Enojo', 'Felicidad', 'Miedo', 'Neutral', 'Disgusto', 'Tristeza'.
InfoActor: texto que indica si la voz es de 'Niño', 'Hombre', 'Mujer'.
AudioArray: audio array, remuestreado a 16 Khz.
### Data Splits
Train: 891 ejemplos, mezcla de casos MESD y 'Speaker-embedded naturalness-reduced female voices'.
Validation: 130 ejemplos, todos casos MESD.
Test: 129 ejemplos, todos casos MESD.
## Dataset Creation
### Curation Rationale
Unir los tres subconjuntos de datos y procesarlos para la tarea de finetuning, acorde al input esperado por el modelo Wav2Vec.
### Source Data
#### Initial Data Collection and Normalization
Acceso a los datos en bruto:
URL
Conversión a audio array y remuestreo a 16 Khz.
#### Who are the source language producers?
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Creative Commons, CC-BY-4.0
| [
"# Dataset Card for MESD",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: URL\r\n- Repository: \r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary\r\n\r\nContiene los datos de la base MESD procesados para hacer 'finetuning' de un modelo 'Wav2Vec' en el Hackaton organizado por 'Somos NLP'.\r\n\r\nEjemplo de referencia: \r\nURL\r\n\r\nHemos accedido a la base MESD para obtener ejemplos. \r\n\r\nBreve descripción de los autores de la base MESD:\r\n\"La Base de Datos del Discurso Emocional Mexicano (MESD en inglés) proporciona enunciados de una sola palabra para las prosodias afectivas de ira, asco, miedo, felicidad, neutro y tristeza con conformación cultural mexicana. El MESD ha sido pronunciado por actores adultos y niños no profesionales: Se dispone de 3 voces femeninas, 2 masculinas y 6 infantiles. Las palabras de los enunciados emocionales y neutros proceden de dos corpus: (corpus A) compuesto por sustantivos y adjetivos que se repiten a través de prosodias emocionales y tipos de voz (femenina, masculina, infantil), y (corpus B) que consiste en palabras controladas por edad de adquisición, frecuencia de uso, familiaridad, concreción, valencia, excitación y clasificaciones de dimensionalidad de emociones discretas. \r\n\r\nLas grabaciones de audio se realizaron en un estudio profesional con los siguientes materiales (1) un micrófono Sennheiser e835 con una respuesta de frecuencia plana (100 Hz a 10 kHz), (2) una interfaz de audio Focusrite Scarlett 2i4 conectada al micrófono con un cable XLR y al ordenador, y (3) la estación de trabajo de audio digital REAPER (Rapid Environment for Audio Production, Engineering, and Recording). Los archivos de audio se almacenaron como una secuencia de 24 bits con una frecuencia de muestreo de 48000Hz. La amplitud de las formas de onda acústicas se reescaló entre -1 y 1. \r\n\r\nSe crearon dos versiones con reducción de la naturalidad de los locutores a partir de expresiones emocionales humanas para voces femeninas del corpus B. En concreto, la naturalidad se redujo progresivamente de las voces humanas al nivel 1 al nivel 2. En particular, se editaron la duración y el tono medio en las sílabas acentuadas para reducir la diferencia entre las sílabas acentuadas y las no acentuadas. En los enunciados completos, se redujeron las relaciones F2/F1 y F3/F1 editando las frecuencias F2 y F3. También se redujo la intensidad de los armónicos 1 y 4. \"",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nEspañol",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\r\n\r\nOrigen: texto que indica si se trata del conjunto de datos MESD original o los casos 'Speaker-embedded naturalness-reduced female voices' donde los autores han generado de forma sintética nuevos datos transformando algunas de las instancias de los audios originales.\r\n\r\nPalabra: texto de la palabra que se ha leído.\r\n\r\nEmoción: texto de la emoción a la que representa: Valores: 'Enojo', 'Felicidad', 'Miedo', 'Neutral', 'Disgusto', 'Tristeza'.\r\n\r\nInfoActor: texto que indica si la voz es de 'Niño', 'Hombre', 'Mujer'.\r\n\r\nAudioArray: audio array, remuestreado a 16 Khz.",
"### Data Splits\r\n\r\nTrain: 891 ejemplos, mezcla de casos MESD y 'Speaker-embedded naturalness-reduced female voices'.\r\n\r\nValidation: 130 ejemplos, todos casos MESD.\r\n\r\nTest: 129 ejemplos, todos casos MESD.",
"## Dataset Creation",
"### Curation Rationale\r\n\r\nUnir los tres subconjuntos de datos y procesarlos para la tarea de finetuning, acorde al input esperado por el modelo Wav2Vec.",
"### Source Data",
"#### Initial Data Collection and Normalization\r\n\r\nAcceso a los datos en bruto:\r\nURL\r\n\r\nConversión a audio arra y remuestreo a 16 Khz.",
"#### Who are the source language producers?\r\n\r\nDuville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\r\n\r\nCreative Commons, CC-BY-4.0"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Dataset Card for MESD",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: URL\r\n- Repository: \r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary\r\n\r\nContiene los datos de la base MESD procesados para hacer 'finetuning' de un modelo 'Wav2Vec' en el Hackaton organizado por 'Somos NLP'.\r\n\r\nEjemplo de referencia: \r\nURL\r\n\r\nHemos accedido a la base MESD para obtener ejemplos. \r\n\r\nBreve descripción de los autores de la base MESD:\r\n\"La Base de Datos del Discurso Emocional Mexicano (MESD en inglés) proporciona enunciados de una sola palabra para las prosodias afectivas de ira, asco, miedo, felicidad, neutro y tristeza con conformación cultural mexicana. El MESD ha sido pronunciado por actores adultos y niños no profesionales: Se dispone de 3 voces femeninas, 2 masculinas y 6 infantiles. Las palabras de los enunciados emocionales y neutros proceden de dos corpus: (corpus A) compuesto por sustantivos y adjetivos que se repiten a través de prosodias emocionales y tipos de voz (femenina, masculina, infantil), y (corpus B) que consiste en palabras controladas por edad de adquisición, frecuencia de uso, familiaridad, concreción, valencia, excitación y clasificaciones de dimensionalidad de emociones discretas. \r\n\r\nLas grabaciones de audio se realizaron en un estudio profesional con los siguientes materiales (1) un micrófono Sennheiser e835 con una respuesta de frecuencia plana (100 Hz a 10 kHz), (2) una interfaz de audio Focusrite Scarlett 2i4 conectada al micrófono con un cable XLR y al ordenador, y (3) la estación de trabajo de audio digital REAPER (Rapid Environment for Audio Production, Engineering, and Recording). Los archivos de audio se almacenaron como una secuencia de 24 bits con una frecuencia de muestreo de 48000Hz. La amplitud de las formas de onda acústicas se reescaló entre -1 y 1. \r\n\r\nSe crearon dos versiones con reducción de la naturalidad de los locutores a partir de expresiones emocionales humanas para voces femeninas del corpus B. En concreto, la naturalidad se redujo progresivamente de las voces humanas al nivel 1 al nivel 2. En particular, se editaron la duración y el tono medio en las sílabas acentuadas para reducir la diferencia entre las sílabas acentuadas y las no acentuadas. En los enunciados completos, se redujeron las relaciones F2/F1 y F3/F1 editando las frecuencias F2 y F3. También se redujo la intensidad de los armónicos 1 y 4. \"",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nEspañol",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\r\n\r\nOrigen: texto que indica si se trata del conjunto de datos MESD original o los casos 'Speaker-embedded naturalness-reduced female voices' donde los autores han generado de forma sintética nuevos datos transformando algunas de las instancias de los audios originales.\r\n\r\nPalabra: texto de la palabra que se ha leído.\r\n\r\nEmoción: texto de la emoción a la que representa: Valores: 'Enojo', 'Felicidad', 'Miedo', 'Neutral', 'Disgusto', 'Tristeza'.\r\n\r\nInfoActor: texto que indica si la voz es de 'Niño', 'Hombre', 'Mujer'.\r\n\r\nAudioArray: audio array, remuestreado a 16 Khz.",
"### Data Splits\r\n\r\nTrain: 891 ejemplos, mezcla de casos MESD y 'Speaker-embedded naturalness-reduced female voices'.\r\n\r\nValidation: 130 ejemplos, todos casos MESD.\r\n\r\nTest: 129 ejemplos, todos casos MESD.",
"## Dataset Creation",
"### Curation Rationale\r\n\r\nUnir los tres subconjuntos de datos y procesarlos para la tarea de finetuning, acorde al input esperado por el modelo Wav2Vec.",
"### Source Data",
"#### Initial Data Collection and Normalization\r\n\r\nAcceso a los datos en bruto:\r\nURL\r\n\r\nConversión a audio arra y remuestreo a 16 Khz.",
"#### Who are the source language producers?\r\n\r\nDuville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\r\n\r\nCreative Commons, CC-BY-4.0"
] |
bc865c50d83a257b7458e3c97ad16533fb491287 | ACLED Dataset for Summarization Task - CSE635 (University at Buffalo)
Actor Description
- 0: N/A
- 1: State Forces
- 2: Rebel Groups
- 3: Political Militias
- 4: Identity Militias
- 5: Rioters
- 6: Protesters
- 7: Civilians
- 8: External/Other Forces | vinaykudari/acled-token-summary | [
"region:us"
] | 2022-03-20T00:39:18+00:00 | {} | 2022-03-20T00:47:22+00:00 | [] | [] | TAGS
#region-us
| ACLED Dataset for Summarization Task - CSE635 (University at Buffalo)
Actor Description
- 0: N/A
- 1: State Forces
- 2: Rebel Groups
- 3: Political Militias
- 4: Identity Militias
- 5: Rioters
- 6: Protesters
- 7: Civilians
- 8: External/Other Forces | [] | [
"TAGS\n#region-us \n"
] |
213f66c68c9410d717ec0b9ad13abd5c67100b7f |
# Dataset Card for PMC Open Access XML
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The XML Open Access includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets.
This version takes the XML version as its source, benefiting from the structured text
to split the articles into parts (introduction, methods, results,
discussion and conclusion) and to mark, with keywords in the text, the references to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, media).
The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of those
references (e.g. for a PMID, by joining the referred article's abstract from the pubmed dataset), but it aims, to a larger extent, to provide
a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Fields
- "accession_id": The PMC ID of the article
- "pmid": The PubMed ID of the article
- "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
- "methods": Same as introduction with "method" keyword.
- "results": Same as introduction with "result" keyword.
- "discussion": Same as introduction with "discussion" keyword.
- "conclusion": Same as introduction with "conclusion" keyword.
- "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
- "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
- "figure": List of \<fig\> elements of the article.
- "table": List of \<table-wrap\> and \<array\> elements of the article.
- "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
- "box": List of \<boxed-text\> elements of the article.
- "code": List of \<code\> elements of the article.
- "quote": List of \<disp-quote\> and \<speech\> elements of the article.
- "chemical": List of \<chem-struct-wrap\> elements of the article.
- "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
- "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
- "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
- "media": List of \<media\> and \<inline-media\> elements of the article.
- "glossary": Glossary if found in the XML
- "unknown_references": JSON of a dictionnary of each "tag":"text" for the reference that did not indicate a PMID
- "n_references": Total number of references and unknown references
- "license": The licence of the article
- "retracted": If the article was retracted or not
- "last_updated": Last update of the article
- "citation": Citation of the article
- "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/)
In the text, references appear in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with the keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referring respectively to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chemical", "supplementary", "footnote", "graphic" and "media".
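For example, these markers can be located with a small regular expression; the sketch below assumes every marker follows exactly the `##KEYWORD##IDX_REF##OLD_TEXT##` pattern described above (the precise marker grammar, e.g. whether the middle field is always numeric, is an assumption here, not something guaranteed by this card):

```python
import re

# A marker such as ##FIG##2##Figure 3## carries the keyword (which list or
# resource the reference points to), an identifier for the entry, and the text
# that originally appeared in the article at that position.
MARKER = re.compile(r"##([A-Z]+)##([^#]*)##(.*?)##")

def restore_display_text(text: str) -> str:
    """Replace every reference marker by the text it originally displayed."""
    return MARKER.sub(lambda m: m.group(3), text)

def iter_markers(text: str):
    """Yield (keyword, reference id, original text) for each marker in a passage."""
    for keyword, ref, old_text in MARKER.findall(text):
        yield keyword, ref, old_text
```

Whether the middle field should be read as a list index (e.g. into "figure") or as a key (e.g. into the "unknown_references" JSON) likely depends on the keyword, so it is kept as a plain string here.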
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation for the different kinds of possible usage.
Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in the titles were used. Because the XML has no rules for tagging those sections, matching keywords seemed like the most reliable approach. A drawback is that many sections do not carry those keywords in their titles even though they could be assimilated to those categories; however, the huge diversity of titles makes such sections harder to label. This could be the work of future versions of this dataset.
### Source Data
#### Initial Data Collection and Normalization
Data was obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/
Additional content for individual articles (graphics, media) can be obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file"
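As an illustration, the package location can be assembled from the `package_file` field like this (assuming the NCBI host is also reachable over HTTPS; otherwise the same path can be fetched with an FTP client):

```python
PMC_BASE = "https://ftp.ncbi.nlm.nih.gov/pub/pmc/"

def package_url(article: dict) -> str:
    """Build the download location of an article's graphics/media package."""
    return PMC_BASE + article["package_file"].lstrip("/")
```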
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The article XML is similar across collections. This means that if a certain collection handles the structure in unusual ways, that whole collection might not be as well annotated as the others. This concerns all the sections (intro, methods, ...), the external references (PMIDs) and the internal references (tables, figures, ...).
To illustrate this, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a future version.
### Other Known Limitations
[Needs More Information]
### Preprocessing recommendations
- Filter out empty contents.
- Remove unwanted references from the text, and replace them either by the "references_text" or by the reference content itself.
- Unescape HTML special characters: `import html; html.unescape(my_text)`
- Remove superfluous line breaks in the text.
- Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), or replace them by special tokens?
- Join the items of the contents' lists.
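A minimal sketch putting these recommendations together (the XML-tag regular expression and the whitespace handling are simplifications chosen for illustration, not the preprocessing used to build the dataset):

```python
import html
import re

XML_TAG = re.compile(r"</?[a-zA-Z][^>]*>")  # <italic>, <sup>, <sub>, ...

def clean_passage(passage: str) -> str:
    passage = html.unescape(passage)              # &lt; -> <, &amp; -> &, ...
    passage = XML_TAG.sub("", passage)            # drop inline XML tags
    passage = re.sub(r"\s*\n\s*", " ", passage)   # remove superfluous line breaks
    return passage.strip()

def join_section(content_list) -> str:
    """Join the <title>/<p> items of a section, skipping empty entries."""
    return "\n".join(clean_passage(p) for p in content_list if p and p.strip())
```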
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
[Needs More Information] | TomTBT/pmc_open_access_xml | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"license:cc-by-sa-4.0",
"license:cc-by-nc-4.0",
"license:cc-by-nd-4.0",
"license:cc-by-nc-nd-4.0",
"license:cc-by-nc-sa-4.0",
"license:unknown",
"license:other",
"research papers",
"biology",
"medecine",
"region:us"
] | 2022-03-20T09:47:21+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0", "cc-by-4.0", "cc-by-sa-4.0", "cc-by-nc-4.0", "cc-by-nd-4.0", "cc-by-nc-nd-4.0", "cc-by-nc-sa-4.0", "unknown", "other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification", "summarization", "other"], "task_ids": [], "pretty_name": "XML-parsed PMC", "tags": ["research papers", "biology", "medecine"]} | 2024-01-17T16:52:45+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-summarization #task_categories-other #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #license-cc-by-4.0 #license-cc-by-sa-4.0 #license-cc-by-nc-4.0 #license-cc-by-nd-4.0 #license-cc-by-nc-nd-4.0 #license-cc-by-nc-sa-4.0 #license-unknown #license-other #research papers #biology #medecine #region-us
|
# Dataset Card for PMC Open Access XML
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The XML version of the PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets.
This dataset takes the XML version as its source, benefiting from the structured text to split the articles into parts (introduction, methods, results, discussion and conclusion) and to reference external or internal resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, media) through keyword markers in the text.
The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of the references (e.g. for a PMID, by joining the referred article's abstract from the pubmed dataset), but it more broadly aims to provide a corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Fields
- "accession_id": The PMC ID of the article
- "pmid": The PubMed ID of the article
- "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
- "methods": Same as introduction with "method" keyword.
- "results": Same as introduction with "result" keyword.
- "discussion": Same as introduction with "discussion" keyword.
- "conclusion": Same as introduction with "conclusion" keyword.
- "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
- "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
- "figure": List of \<fig\> elements of the article.
- "table": List of \<table-wrap\> and \<array\> elements of the article.
- "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
- "box": List of \<boxed-text\> elements of the article.
- "code": List of \<code\> elements of the article.
- "quote": List of \<disp-quote\> and \<speech\> elements of the article.
- "chemical": List of \<chem-struct-wrap\> elements of the article.
- "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
- "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
- "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
- "media": List of \<media\> and \<inline-media\> elements of the article.
- "glossary": Glossary if found in the XML
- "unknown_references": JSON of a dictionnary of each "tag":"text" for the reference that did not indicate a PMID
- "n_references": Total number of references and unknown references
- "license": The licence of the article
- "retracted": If the article was retracted or not
- "last_updated": Last update of the article
- "citation": Citation of the article
- "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: URL
In the text, references appear in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with the keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referring respectively to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chemical", "supplementary", "footnote", "graphic" and "media".
### Data Splits
## Dataset Creation
### Curation Rationale
Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation for the different kinds of possible usage.
Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in the titles were used. Because the XML has no rules for tagging those sections, matching keywords seemed like the most reliable approach. A drawback is that many sections do not carry those keywords in their titles even though they could be assimilated to those categories; however, the huge diversity of titles makes such sections harder to label. This could be the work of future versions of this dataset.
### Source Data
#### Initial Data Collection and Normalization
Data was obtained from:
- URL
- URL
- URL
Additional content for individual articles (graphics, media) can be obtained from:
- URL + "package_file"
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
The article XML is similar across collections. This means that if a certain collection handles the structure in unusual ways, that whole collection might not be as well annotated as the others. This concerns all the sections (intro, methods, ...), the external references (PMIDs) and the internal references (tables, figures, ...).
To illustrate this, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a future version.
### Other Known Limitations
### Preprocessing recommendations
- Filter out empty contents.
- Remove unwanted references from the text, and replace them either by the "references_text" or by the reference content itself.
- Unescape HTML special characters: 'import html; html.unescape(my_text)'
- Remove superfluous line breaks in the text.
- Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), or replace them by special tokens?
- Join the items of the contents' lists.
## Additional Information
### Dataset Curators
### Licensing Information
URL
Within the PMC Open Access Subset, there are three groupings:
Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses
Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
Other - no machine-readable Creative Commons license, no license, or a custom license.
| [
"# Dataset Card for PMC Open Access XML",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe XML Open Access includes more than 3.4 million journal articles and preprints that are made available under\nlicense terms that allow reuse. \nNot all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles\nin the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more\nliberal redistribution and reuse than a traditional copyrighted work. \nThe PMC Open Access Subset is one part of the PMC Article Datasets\n\nThis version takes XML version as source, benefiting from the structured text\nto split the articles in parts, naming the introduction, methods, results,\ndiscussion and conclusion, and reference with keywords in the text to external or internal\nresources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).\n\nThe dataset was initially created with relation-extraction tasks in mind, between the references in text and the content of the\nreferences (e.g. for PMID, by joining the refered article abstract from the pubmed dataset), but aims in a larger extent to provide\na corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Fields\n\n- \"accession_id\": The PMC ID of the article\n- \"pmid\": The PubMed ID of the article\n- \"introduction\": List of \\<title\\> and \\<p\\> elements in \\<body\\>, sharing their root with a \\<title\\> containing \"introduction\" or \"background\".\n- \"methods\": Same as introduction with \"method\" keyword.\n- \"results\": Same as introduction with \"result\" keyword.\n- \"discussion\": Same as introduction with \"discussion\" keyword.\n- \"conclusion\": Same as introduction with \"conclusion\" keyword. \n- \"front\": List of \\<title\\> and \\<p\\> elements in \\<front\\> after everything else has been searched.\n- \"body\": List of \\<title\\> and \\<p\\> elements in \\<body\\> after everything else has been searched.\n- \"back\": List of \\<title\\> and \\<p\\> elements in \\<back\\> after everything else has been searched.\n- \"figure\": List of \\<fig\\> elements of the article.\n- \"table\": List of \\<table-wrap\\> and \\<array\\> elements of the article.\n- \"formula\": List of \\<disp-formula\\> and \\<inline-formula\\> elements of the article.\n- \"box\": List of \\<boxed-text\\> elements of the article.\n- \"code\": List of \\<code\\> elements of the article.\n- \"quote\": List of \\<disp-quote\\> and \\<speech\\> elements of the article.\n- \"chemical\": List of \\<chem-struct-wrap\\> elements of the article.\n- \"supplementary\": List of \\<supplementary-material\\> and \\<inline-supplementary-material\\> elements of the article.\n- \"footnote\": List of \\<fn-group\\> and \\<table-wrap-foot\\> elements of the article.\n- \"graphic\": List of \\<graphic\\> and \\<inline-graphic\\> elements of the article.\n- \"media\": List of \\<media\\> and \\<inline-media\\> elements of the article.\n- \"glossary\": Glossary if found in the XML\n- \"unknown_references\": JSON of a dictionnary of each \"tag\":\"text\" for the reference that did not indicate a PMID\n- \"n_references\": Total number of references and unknown references\n- \"license\": The licence of the article\n- \"retracted\": If the article was retracted or not\n- \"last_updated\": Last update of the article\n- \"citation\": Citation of the article\n- \"package_file\": path to the folder containing the graphics and media files of the article (to append to the base URL: URL\n\nIn text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referencing respectively to \"pubmed articles\" (external), \"unknown_references\", \"figure\", \"table\", \"formula\", \"box\", \"code\", \"quote\", \"chem\", \"supplementary\", \"footnote\", \"graphic\" and \"media\".",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nInternal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation\nfor the different kind of possible usage.\nThen, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in titles were used. Because there are no rules\nin this xml to tag those sections, finding the keyword seemed like the most reliable approach to do so. A drawback is that many section do not have those \nkeywords in the titles but could be assimilated to those. However, the huge diversity in the titles makes it harder to label such sections. This could be the\nwork of further versions of this dataset.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nData was obtained from:\n- URL\n- URL\n- URL\n\nAdditional content for individual articles (graphics, media) can be obtained from:\n- URL + \"package_file\"",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nThe articles XML are similar accross collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as\nwell annotated than others. This concerns all the sections (intro, methods, ...), the external references (pmids) and the internal references (tables, figures, ...).\nTo illustrate that, references are sometime given as a range (e.g. 10-15). In that case, only reference 10 and 15 are linked. This could potentially be handled in a\nfuture version.",
"### Other Known Limitations",
"### Preprocessing recommendations\n\n- Filter out empty contents.\n- Remove unwanted references from the text, and replace either by the \"references_text\" or by the reference content itself.\n- Unescape HTML special characters: 'import html; html.unescape(my_text)'\n- Remove superfluous line break in text.\n- Remove XML tags (\\<italic\\>, \\<sup\\>, \\<sub\\>, ...), replace by special tokens? \n- Join the items of the contents' lists.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nURL\n\nWithin the PMC Open Access Subset, there are three groupings:\n\nCommercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses\nNon-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and\nOther - no machine-readable Creative Commons license, no license, or a custom license."
] | [
"TAGS\n#task_categories-text-classification #task_categories-summarization #task_categories-other #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #license-cc-by-4.0 #license-cc-by-sa-4.0 #license-cc-by-nc-4.0 #license-cc-by-nd-4.0 #license-cc-by-nc-nd-4.0 #license-cc-by-nc-sa-4.0 #license-unknown #license-other #research papers #biology #medecine #region-us \n",
"# Dataset Card for PMC Open Access XML",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe XML Open Access includes more than 3.4 million journal articles and preprints that are made available under\nlicense terms that allow reuse. \nNot all articles in PMC are available for text mining and other reuse, many have copyright protection, however articles\nin the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more\nliberal redistribution and reuse than a traditional copyrighted work. \nThe PMC Open Access Subset is one part of the PMC Article Datasets\n\nThis version takes XML version as source, benefiting from the structured text\nto split the articles in parts, naming the introduction, methods, results,\ndiscussion and conclusion, and reference with keywords in the text to external or internal\nresources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).\n\nThe dataset was initially created with relation-extraction tasks in mind, between the references in text and the content of the\nreferences (e.g. for PMID, by joining the refered article abstract from the pubmed dataset), but aims in a larger extent to provide\na corpus of pre-annotated text for other tasks (e.g. figure caption to graphic, glossary definition detection, summarization).",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Fields\n\n- \"accession_id\": The PMC ID of the article\n- \"pmid\": The PubMed ID of the article\n- \"introduction\": List of \\<title\\> and \\<p\\> elements in \\<body\\>, sharing their root with a \\<title\\> containing \"introduction\" or \"background\".\n- \"methods\": Same as introduction with \"method\" keyword.\n- \"results\": Same as introduction with \"result\" keyword.\n- \"discussion\": Same as introduction with \"discussion\" keyword.\n- \"conclusion\": Same as introduction with \"conclusion\" keyword. \n- \"front\": List of \\<title\\> and \\<p\\> elements in \\<front\\> after everything else has been searched.\n- \"body\": List of \\<title\\> and \\<p\\> elements in \\<body\\> after everything else has been searched.\n- \"back\": List of \\<title\\> and \\<p\\> elements in \\<back\\> after everything else has been searched.\n- \"figure\": List of \\<fig\\> elements of the article.\n- \"table\": List of \\<table-wrap\\> and \\<array\\> elements of the article.\n- \"formula\": List of \\<disp-formula\\> and \\<inline-formula\\> elements of the article.\n- \"box\": List of \\<boxed-text\\> elements of the article.\n- \"code\": List of \\<code\\> elements of the article.\n- \"quote\": List of \\<disp-quote\\> and \\<speech\\> elements of the article.\n- \"chemical\": List of \\<chem-struct-wrap\\> elements of the article.\n- \"supplementary\": List of \\<supplementary-material\\> and \\<inline-supplementary-material\\> elements of the article.\n- \"footnote\": List of \\<fn-group\\> and \\<table-wrap-foot\\> elements of the article.\n- \"graphic\": List of \\<graphic\\> and \\<inline-graphic\\> elements of the article.\n- \"media\": List of \\<media\\> and \\<inline-media\\> elements of the article.\n- \"glossary\": Glossary if found in the XML\n- \"unknown_references\": JSON of a dictionnary of each \"tag\":\"text\" for the reference that did not indicate a PMID\n- \"n_references\": Total number of references and unknown references\n- \"license\": The licence of the article\n- \"retracted\": If the article was retracted or not\n- \"last_updated\": Last update of the article\n- \"citation\": Citation of the article\n- \"package_file\": path to the folder containing the graphics and media files of the article (to append to the base URL: URL\n\nIn text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referencing respectively to \"pubmed articles\" (external), \"unknown_references\", \"figure\", \"table\", \"formula\", \"box\", \"code\", \"quote\", \"chem\", \"supplementary\", \"footnote\", \"graphic\" and \"media\".",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nInternal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation\nfor the different kind of possible usage.\nThen, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in titles were used. Because there are no rules\nin this xml to tag those sections, finding the keyword seemed like the most reliable approach to do so. A drawback is that many section do not have those \nkeywords in the titles but could be assimilated to those. However, the huge diversity in the titles makes it harder to label such sections. This could be the\nwork of further versions of this dataset.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nData was obtained from:\n- URL\n- URL\n- URL\n\nAdditional content for individual articles (graphics, media) can be obtained from:\n- URL + \"package_file\"",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nThe articles XML are similar accross collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as\nwell annotated than others. This concerns all the sections (intro, methods, ...), the external references (pmids) and the internal references (tables, figures, ...).\nTo illustrate that, references are sometime given as a range (e.g. 10-15). In that case, only reference 10 and 15 are linked. This could potentially be handled in a\nfuture version.",
"### Other Known Limitations",
"### Preprocessing recommendations\n\n- Filter out empty contents.\n- Remove unwanted references from the text, and replace either by the \"references_text\" or by the reference content itself.\n- Unescape HTML special characters: 'import html; html.unescape(my_text)'\n- Remove superfluous line break in text.\n- Remove XML tags (\\<italic\\>, \\<sup\\>, \\<sub\\>, ...), replace by special tokens? \n- Join the items of the contents' lists.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nURL\n\nWithin the PMC Open Access Subset, there are three groupings:\n\nCommercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses\nNon-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and\nOther - no machine-readable Creative Commons license, no license, or a custom license."
] |
030caa10ace4b5c8f5084b70fe6bc281c44cc579 |
Cloud imagery from the Hyperion hyperspectral imager | jacobbieker/hyperion-clouds | [
"license:mit",
"region:us"
] | 2022-03-21T08:29:52+00:00 | {"license": "mit"} | 2023-12-23T11:28:31+00:00 | [] | [] | TAGS
#license-mit #region-us
|
Cloud imagery from the Hyperion hyperspectral imager | [] | [
"TAGS\n#license-mit #region-us \n"
] |
58aafbe2712ff481c014f562e42723f2820fd5d4 |
# Dataset Card for Monash Time Series Forecasting Repository
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Monash Time Series Forecasting Repository](https://forecastingdata.org/)
- **Repository:** [Monash Time Series Forecasting Repository code repository](https://github.com/rakshitha123/TSForecasting)
- **Paper:** [Monash Time Series Forecasting Archive](https://openreview.net/pdf?id=wEc1mgAjU-)
- **Leaderboard:** [Baseline Results](https://forecastingdata.org/#results)
- **Point of Contact:** [Rakshitha Godahewa](mailto:[email protected])
### Dataset Summary
The first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended to be used for research purposes only. Our repository contains 30 datasets including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, bringing the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains.
The following table shows a list of datasets available:
| Name | Domain | No. of series | Freq. | Pred. Len. | Source |
|-------------------------------|-----------|---------------|--------|------------|-------------------------------------------------------------------------------------------------------------------------------------|
| weather | Nature | 3010 | 1D | 30 | [Sparks et al., 2020](https://cran.r-project.org/web/packages/bomrang) |
| tourism_yearly | Tourism | 1311 | 1Y | 4 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| tourism_quarterly | Tourism | 1311 | 1Q-JAN | 8 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| tourism_monthly | Tourism | 1311 | 1M | 24 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| cif_2016 | Banking | 72 | 1M | 12 | [Stepnicka and Burda, 2017](https://doi.org/10.1109/FUZZ-IEEE.2017.8015455) |
| london_smart_meters | Energy | 5560 | 30T | 60 | [Jean-Michel, 2019](https://www.kaggle.com/jeanmidev/smart-meters-in-london) |
| australian_electricity_demand | Energy | 5 | 30T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| wind_farms_minutely | Energy | 339 | 1T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| bitcoin | Economic | 18 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| pedestrian_counts | Transport | 66 | 1H | 48 | [City of Melbourne, 2020](https://data.melbourne.vic.gov.au/Transport/Pedestrian-Counting-System-Monthly-counts-per-hour/b2ak-trbp) |
| vehicle_trips | Transport | 329 | 1D | 30 | [fivethirtyeight, 2015](https://github.com/fivethirtyeight/uber-tlc-foil-response) |
| kdd_cup_2018 | Nature | 270 | 1H | 48 | [KDD Cup, 2018](https://www.kdd.org/kdd2018/kdd-cup) |
| nn5_daily | Banking | 111 | 1D | 56 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) |
| nn5_weekly | Banking | 111 | 1W-MON | 8 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) |
| kaggle_web_traffic | Web | 145063 | 1D | 59 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) |
| kaggle_web_traffic_weekly | Web | 145063 | 1W-WED | 8 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) |
| solar_10_minutes | Energy | 137 | 10T | 60 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) |
| solar_weekly | Energy | 137 | 1W-SUN | 5 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) |
| car_parts | Sales | 2674 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) |
| fred_md | Economic | 107 | 1M | 12 | [McCracken and Ng, 2016](https://doi.org/10.1080/07350015.2015.1086655) |
| traffic_hourly | Transport | 862 | 1H | 48 | [Caltrans, 2020](http://pems.dot.ca.gov/) |
| traffic_weekly | Transport | 862 | 1W-WED | 8 | [Caltrans, 2020](http://pems.dot.ca.gov/) |
| hospital | Health | 767 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) |
| covid_deaths | Health | 266 | 1D | 30 | [Johns Hopkins University, 2020](https://github.com/CSSEGISandData/COVID-19) |
| sunspot | Nature | 1 | 1D | 30 | [Sunspot, 2015](http://www.sidc.be/silso/newdataset) |
| saugeenday | Nature | 1 | 1D | 30 | [McLeod and Gweon, 2013](http://www.jenvstat.org/v04/i11) |
| us_births | Health | 1 | 1D | 30 | [Pruim et al., 2020](https://cran.r-project.org/web/packages/mosaicData) |
| solar_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| wind_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| rideshare | Transport | 2304 | 1H | 48 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
| oikolab_weather | Nature | 8 | 1H | 48 | [Oikolab](https://oikolab.com/) |
| temperature_rain | Nature | 32072 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU- ) |
### Dataset Usage
To load a particular dataset just specify its name from the table above e.g.:
```python
load_dataset("monash_tsf", "nn5_daily")
```
> Notes:
> - Data might contain missing values as in the original datasets.
> - The prediction length is either specified in the dataset or a default value depending on the frequency is used as in the original repository benchmark.
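For datasets that do contain missing values, one possible way to handle them is to build an observation mask before modelling. The sketch below assumes missing observations are encoded as `NaN` in the `target` array, which should be verified for the dataset at hand:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("monash_tsf", "nn5_daily")
target = np.asarray(ds["train"][0]["target"], dtype=np.float32)

observed = ~np.isnan(target)            # mask of observed time steps
target_filled = np.nan_to_num(target)   # e.g. zero-fill before feeding a model
```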
### Supported Tasks and Leaderboards
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split.
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': '0'
}
```
### Data Fields
For the univariate regular time series each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
### Data Splits
The datasets are split in time according to the prediction length specified for each dataset. In particular, for each time series the validation split contains one additional window of `prediction_length` future values beyond the training split, and the test split contains one further `prediction_length` window beyond the validation split.
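As a rough sketch of this layout (using `nn5_daily`, whose prediction length is 56 according to the table above; the assertions are illustrative and the split layout should be verified against the loaded data):

```python
from datasets import load_dataset

ds = load_dataset("monash_tsf", "nn5_daily")
prediction_length = 56  # from the dataset table above

train = ds["train"][0]["target"]
validation = ds["validation"][0]["target"]
test = ds["test"][0]["target"]

# Each later split extends the same series by one more prediction window.
assert validation[: len(train)] == train
assert len(validation) == len(train) + prediction_length
assert len(test) == len(validation) + prediction_length
```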
## Dataset Creation
### Curation Rationale
The repository was created to facilitate the evaluation of global forecasting models. All datasets in it are intended for research purposes and for evaluating the performance of new forecasting algorithms.
### Source Data
#### Initial Data Collection and Normalization
Out of the 30 datasets, 23 were already publicly available on different platforms in different data formats. The original sources of all datasets are mentioned in the datasets table above.
After extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series belonging to different frequencies, and the archive contains a separate dataset for each frequency.
#### Who are the source language producers?
The data comes from the datasets listed in the table above.
### Annotations
#### Annotation process
The annotations come from the datasets listed in the table above.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Rakshitha Godahewa](mailto:[email protected])
* [Christoph Bergmeir](mailto:[email protected])
* [Geoff Webb](mailto:[email protected])
* [Rob Hyndman](mailto:[email protected])
* [Pablo Montero-Manso](mailto:[email protected])
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@InProceedings{godahewa2021monash,
author = "Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
title = "Monash Time Series Forecasting Archive",
booktitle = "Neural Information Processing Systems Track on Datasets and Benchmarks",
year = "2021",
note = "forthcoming"
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. | monash_tsf | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"region:us"
] | 2022-03-21T09:50:46+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["time-series-forecasting"], "task_ids": ["univariate-time-series-forecasting", "multivariate-time-series-forecasting"], "pretty_name": "Monash Time Series Forecasting Repository", "dataset_info": [{"config_name": "weather", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 176893738, "num_examples": 3010}, {"name": "test", "num_bytes": 177638713, "num_examples": 3010}, {"name": "validation", "num_bytes": 177266226, "num_examples": 3010}], "download_size": 38820451, "dataset_size": 531798677}, {"config_name": "tourism_yearly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54264, "num_examples": 518}, {"name": "test", "num_bytes": 71358, "num_examples": 518}, {"name": "validation", "num_bytes": 62811, "num_examples": 518}], "download_size": 36749, "dataset_size": 188433}, {"config_name": "tourism_quarterly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162738, "num_examples": 427}, {"name": "test", "num_bytes": 190920, "num_examples": 427}, {"name": "validation", "num_bytes": 176829, "num_examples": 427}], "download_size": 93833, "dataset_size": 530487}, {"config_name": "tourism_monthly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 391518, "num_examples": 366}, {"name": "test", "num_bytes": 463986, "num_examples": 366}, {"name": "validation", "num_bytes": 427752, "num_examples": 366}], "download_size": 199791, "dataset_size": 1283256}, {"config_name": "cif_2016", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24731, "num_examples": 72}, {"name": "test", "num_bytes": 31859, "num_examples": 72}, {"name": "validation", "num_bytes": 28295, "num_examples": 72}], "download_size": 53344, "dataset_size": 84885}, {"config_name": "london_smart_meters", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 684386194, "num_examples": 5560}, {"name": 
"test", "num_bytes": 687138394, "num_examples": 5560}, {"name": "validation", "num_bytes": 685762294, "num_examples": 5560}], "download_size": 219673439, "dataset_size": 2057286882}, {"config_name": "australian_electricity_demand", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4763162, "num_examples": 5}, {"name": "test", "num_bytes": 4765637, "num_examples": 5}, {"name": "validation", "num_bytes": 4764400, "num_examples": 5}], "download_size": 5770526, "dataset_size": 14293199}, {"config_name": "wind_farms_minutely", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 710078918, "num_examples": 339}, {"name": "test", "num_bytes": 710246723, "num_examples": 339}, {"name": "validation", "num_bytes": 710162820, "num_examples": 339}], "download_size": 71383130, "dataset_size": 2130488461}, {"config_name": "bitcoin", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 336511, "num_examples": 18}, {"name": "test", "num_bytes": 340966, "num_examples": 18}, {"name": "validation", "num_bytes": 338738, "num_examples": 18}], "download_size": 220403, "dataset_size": 1016215}, {"config_name": "pedestrian_counts", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12897120, "num_examples": 66}, {"name": "test", "num_bytes": 12923256, "num_examples": 66}, {"name": "validation", "num_bytes": 12910188, "num_examples": 66}], "download_size": 4587054, "dataset_size": 38730564}, {"config_name": "vehicle_trips", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 105261, "num_examples": 329}, {"name": "test", "num_bytes": 186688, "num_examples": 329}, {"name": "validation", "num_bytes": 145974, "num_examples": 329}], "download_size": 44914, "dataset_size": 437923}, {"config_name": "kdd_cup_2018", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12040046, "num_examples": 270}, {"name": "test", "num_bytes": 12146966, "num_examples": 270}, {"name": "validation", "num_bytes": 12093506, "num_examples": 270}], "download_size": 2456948, "dataset_size": 36280518}, {"config_name": "nn5_daily", "features": [{"name": "start", "dtype": 
"timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314828, "num_examples": 111}, {"name": "test", "num_bytes": 366110, "num_examples": 111}, {"name": "validation", "num_bytes": 340469, "num_examples": 111}], "download_size": 287708, "dataset_size": 1021407}, {"config_name": "nn5_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48344, "num_examples": 111}, {"name": "test", "num_bytes": 55670, "num_examples": 111}, {"name": "validation", "num_bytes": 52007, "num_examples": 111}], "download_size": 62043, "dataset_size": 156021}, {"config_name": "kaggle_web_traffic", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 415494391, "num_examples": 145063}, {"name": "test", "num_bytes": 486103806, "num_examples": 145063}, {"name": "validation", "num_bytes": 450799098, "num_examples": 145063}], "download_size": 145485324, "dataset_size": 1352397295}, {"config_name": "kaggle_web_traffic_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64242469, "num_examples": 145063}, {"name": "test", "num_bytes": 73816627, "num_examples": 145063}, {"name": "validation", "num_bytes": 69029548, "num_examples": 145063}], "download_size": 28930900, "dataset_size": 207088644}, {"config_name": "solar_10_minutes", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29640033, "num_examples": 137}, {"name": "test", "num_bytes": 29707848, "num_examples": 137}, {"name": "validation", "num_bytes": 29673941, "num_examples": 137}], "download_size": 4559353, "dataset_size": 89021822}, {"config_name": "solar_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28614, "num_examples": 137}, {"name": "test", "num_bytes": 34265, "num_examples": 137}, {"name": "validation", "num_bytes": 31439, "num_examples": 137}], "download_size": 24375, "dataset_size": 94318}, {"config_name": "car_parts", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": 
"train", "num_bytes": 396653, "num_examples": 2674}, {"name": "test", "num_bytes": 661379, "num_examples": 2674}, {"name": "validation", "num_bytes": 529016, "num_examples": 2674}], "download_size": 39656, "dataset_size": 1587048}, {"config_name": "fred_md", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314514, "num_examples": 107}, {"name": "test", "num_bytes": 325107, "num_examples": 107}, {"name": "validation", "num_bytes": 319811, "num_examples": 107}], "download_size": 169107, "dataset_size": 959432}, {"config_name": "traffic_hourly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62071974, "num_examples": 862}, {"name": "test", "num_bytes": 62413326, "num_examples": 862}, {"name": "validation", "num_bytes": 62242650, "num_examples": 862}], "download_size": 22868806, "dataset_size": 186727950}, {"config_name": "traffic_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 344154, "num_examples": 862}, {"name": "test", "num_bytes": 401046, "num_examples": 862}, {"name": "validation", "num_bytes": 372600, "num_examples": 862}], "download_size": 245126, "dataset_size": 1117800}, {"config_name": "hospital", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 217625, "num_examples": 767}, {"name": "test", "num_bytes": 293558, "num_examples": 767}, {"name": "validation", "num_bytes": 255591, "num_examples": 767}], "download_size": 78110, "dataset_size": 766774}, {"config_name": "covid_deaths", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 176352, "num_examples": 266}, {"name": "test", "num_bytes": 242187, "num_examples": 266}, {"name": "validation", "num_bytes": 209270, "num_examples": 266}], "download_size": 27335, "dataset_size": 627809}, {"config_name": "sunspot", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 304726, "num_examples": 1}, {"name": "test", "num_bytes": 304974, "num_examples": 1}, {"name": "validation", "num_bytes": 304850, "num_examples": 1}], "download_size": 68865, "dataset_size": 914550}, {"config_name": "saugeenday", "features": [{"name": "start", "dtype": 
"timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 97722, "num_examples": 1}, {"name": "test", "num_bytes": 97969, "num_examples": 1}, {"name": "validation", "num_bytes": 97845, "num_examples": 1}], "download_size": 28721, "dataset_size": 293536}, {"config_name": "us_births", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29923, "num_examples": 1}, {"name": "test", "num_bytes": 30171, "num_examples": 1}, {"name": "validation", "num_bytes": 30047, "num_examples": 1}], "download_size": 16332, "dataset_size": 90141}, {"config_name": "solar_4_seconds", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30513083, "num_examples": 1}, {"name": "test", "num_bytes": 30513578, "num_examples": 1}, {"name": "validation", "num_bytes": 30513331, "num_examples": 1}], "download_size": 794502, "dataset_size": 91539992}, {"config_name": "wind_4_seconds", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30512774, "num_examples": 1}, {"name": "test", "num_bytes": 30513269, "num_examples": 1}, {"name": "validation", "num_bytes": 30513021, "num_examples": 1}], "download_size": 2226184, "dataset_size": 91539064}, {"config_name": "rideshare", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": {"sequence": "float32"}}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4249051, "num_examples": 156}, {"name": "test", "num_bytes": 5161435, "num_examples": 156}, {"name": "validation", "num_bytes": 4705243, "num_examples": 156}], "download_size": 1031826, "dataset_size": 14115729}, {"config_name": "oikolab_weather", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3299142, "num_examples": 8}, {"name": "test", "num_bytes": 3302310, "num_examples": 8}, {"name": "validation", "num_bytes": 3300726, "num_examples": 8}], "download_size": 1326101, "dataset_size": 9902178}, {"config_name": "temperature_rain", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": {"sequence": "float32"}}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 88121466, 
"num_examples": 422}, {"name": "test", "num_bytes": 96059286, "num_examples": 422}, {"name": "validation", "num_bytes": 92090376, "num_examples": 422}], "download_size": 25747139, "dataset_size": 276271128}]} | 2023-06-13T12:26:34+00:00 | [] | [] | TAGS
#task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #region-us
| Dataset Card for Monash Time Series Forecasting Repository
==========================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: Monash Time Series Forecasting Repository
* Repository: Monash Time Series Forecasting Repository code repository
* Paper: Monash Time Series Forecasting Archive
* Leaderboard: Baseline Results
* Point of Contact: Rakshitha Godahewa
### Dataset Summary
The first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended to be used for research purposes only. Our repository contains 30 datasets, including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, bringing the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains.
The following table shows a list of datasets available:
### Dataset Usage
To load a particular dataset just specify its name from the table above e.g.:
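A minimal loading sketch with the `datasets` library is shown below (the Hub dataset id `monash_tsf` and the config name are assumptions for illustration — substitute any name from the table above):

```python
from datasets import load_dataset

# Assumption: the repository is published on the Hub under the id "monash_tsf";
# any config name from the table above (e.g. "covid_deaths", "traffic_hourly") can be used.
dataset = load_dataset("monash_tsf", "covid_deaths")

print(dataset)                         # DatasetDict with train/validation/test splits
print(dataset["train"][0]["item_id"])  # identifier of the first series
```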
>
> Notes:
>
>
> * Data might contain missing values as in the original datasets.
> * The prediction length is either specified in the dataset or a default value depending on the frequency is used as in the original repository benchmark.
>
>
>
### Supported Tasks and Leaderboards
#### 'time-series-forecasting'
##### 'univariate-time-series-forecasting'
The univariate time series forecasting task involves learning the future one-dimensional 'target' values of a time series in a dataset for some 'prediction\_length' time steps. The performance of the forecast models can then be validated via the ground truth in the 'validation' split and tested via the 'test' split.
##### 'multivariate-time-series-forecasting'
The multivariate time series forecasting task involves learning the future vector of 'target' values of a time series in a dataset for some 'prediction\_length' time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the 'validation' split and tested via the 'test' split.
### Languages
Dataset Structure
-----------------
### Data Instances
A sample from the training set is provided below:
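(The sample below is an illustrative sketch — the values are placeholders, but the keys follow the Data Fields description that follows.)

```python
{
  'start': datetime.datetime(2020, 1, 22, 0, 0),
  'target': [17.0, 22.0, 31.0, 46.0, 62.0],  # truncated for display
  'feat_static_cat': [0],
  'feat_dynamic_real': None,
  'item_id': 'T1'
}
```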
### Data Fields
For the univariate regular time series each series has the following keys:
* 'start': a datetime of the first entry of each time series in the dataset
* 'target': an array[float32] of the actual target values
* 'feat\_static\_cat': an array[uint64] which contains a categorical identifier of each time series in the dataset
* 'feat\_dynamic\_real': optional array of covariate features
* 'item\_id': a string identifier of each time series in a dataset for reference
For the multivariate time series the 'target' is a vector of the multivariate dimension for each time point.
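A quick way to inspect this nested layout is sketched below (the Hub id and the multivariate config name are assumptions for illustration):

```python
from datasets import load_dataset

# Assumption: Hub id "monash_tsf"; "rideshare" is one of the multivariate configs.
multivariate = load_dataset("monash_tsf", "rideshare", split="train")
target = multivariate[0]["target"]

# Outer and inner lengths of the nested multivariate target.
print(len(target), len(target[0]))
```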
### Data Splits
The datasets are split in time depending on the prediction length specified in the datasets. In particular for each time series in a dataset there is a prediction length window of the future in the validation split and another prediction length more in the test split.
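A self-contained sketch of checking this windowing on one series (same Hub-id assumption as above):

```python
from datasets import load_dataset

# Assumption: Hub id "monash_tsf"; any univariate config such as "covid_deaths".
dataset = load_dataset("monash_tsf", "covid_deaths")

train_len = len(dataset["train"][0]["target"])
val_len = len(dataset["validation"][0]["target"])
test_len = len(dataset["test"][0]["target"])

# Each later split extends the series by exactly one prediction window.
print(val_len - train_len, test_len - val_len)
```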
Dataset Creation
----------------
### Curation Rationale
The repository was curated to facilitate the evaluation of global forecasting models. All datasets in our repository are intended for research purposes and for evaluating the performance of new forecasting algorithms.
### Source Data
#### Initial Data Collection and Normalization
Out of the 30 datasets, 23 were already publicly available in different platforms with different data formats. The original sources of all datasets are mentioned in the datasets table above.
After extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series belonging to different frequencies, and the archive contains a separate dataset for each frequency.
#### Who are the source language producers?
The data comes from the datasets listed in the table above.
### Annotations
#### Annotation process
The annotations come from the datasets listed in the table above.
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
* Rakshitha Godahewa
* Christoph Bergmeir
* Geoff Webb
* Rob Hyndman
* Pablo Montero-Manso
### Licensing Information
Creative Commons Attribution 4.0 International
### Contributions
Thanks to @kashif for adding this dataset.
| [
"### Dataset Summary\n\n\nThe first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended to use only for research purpose. Our repository contains 30 datasets including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, making the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains.\n\n\nThe following table shows a list of datasets available:",
"### Dataset Usage\n\n\nTo load a particular dataset just specify its name from the table above e.g.:\n\n\n\n> \n> Notes:\n> \n> \n> * Data might contain missing values as in the original datasets.\n> * The prediction length is either specified in the dataset or a default value depending on the frequency is used as in the original repository benchmark.\n> \n> \n>",
"### Supported Tasks and Leaderboards",
"#### 'time-series-forecasting'",
"##### 'univariate-time-series-forecasting'\n\n\nThe univariate time series forecasting tasks involves learning the future one dimensional 'target' values of a time series in a dataset for some 'prediction\\_length' time steps. The performance of the forecast models can then be validated via the ground truth in the 'validation' split and tested via the 'test' split.",
"##### 'multivariate-time-series-forecasting'\n\n\nThe multivariate time series forecasting task involves learning the future vector of 'target' values of a time series in a dataset for some 'prediction\\_length' time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the 'validation' split and tested via the 'test' split.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from the training set is provided below:",
"### Data Fields\n\n\nFor the univariate regular time series each series has the following keys:\n\n\n* 'start': a datetime of the first entry of each time series in the dataset\n* 'target': an array[float32] of the actual target values\n* 'feat\\_static\\_cat': an array[uint64] which contains a categorical identifier of each time series in the dataset\n* 'feat\\_dynamic\\_real': optional array of covariate features\n* 'item\\_id': a string identifier of each time series in a dataset for reference\n\n\nFor the multivariate time series the 'target' is a vector of the multivariate dimension for each time point.",
"### Data Splits\n\n\nThe datasets are split in time depending on the prediction length specified in the datasets. In particular for each time series in a dataset there is a prediction length window of the future in the validation split and another prediction length more in the test split.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo facilitate the evaluation of global forecasting models. All datasets in our repository are intended for research purposes and to evaluate the performance of new forecasting algorithms.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nOut of the 30 datasets, 23 were already publicly available in different platforms with different data formats. The original sources of all datasets are mentioned in the datasets table above.\n\n\nAfter extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series belonging to different frequencies and the archive contains a separate dataset per each frequency.",
"#### Who are the source language producers?\n\n\nThe data comes from the datasets listed in the table above.",
"### Annotations",
"#### Annotation process\n\n\nThe annotations come from the datasets listed in the table above.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Rakshitha Godahewa\n* Christoph Bergmeir\n* Geoff Webb\n* Rob Hyndman\n* Pablo Montero-Manso",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\n\nThanks to @kashif for adding this dataset."
] | [
"TAGS\n#task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #task_ids-multivariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nThe first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended to use only for research purpose. Our repository contains 30 datasets including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, making the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains.\n\n\nThe following table shows a list of datasets available:",
"### Dataset Usage\n\n\nTo load a particular dataset just specify its name from the table above e.g.:\n\n\n\n> \n> Notes:\n> \n> \n> * Data might contain missing values as in the original datasets.\n> * The prediction length is either specified in the dataset or a default value depending on the frequency is used as in the original repository benchmark.\n> \n> \n>",
"### Supported Tasks and Leaderboards",
"#### 'time-series-forecasting'",
"##### 'univariate-time-series-forecasting'\n\n\nThe univariate time series forecasting tasks involves learning the future one dimensional 'target' values of a time series in a dataset for some 'prediction\\_length' time steps. The performance of the forecast models can then be validated via the ground truth in the 'validation' split and tested via the 'test' split.",
"##### 'multivariate-time-series-forecasting'\n\n\nThe multivariate time series forecasting task involves learning the future vector of 'target' values of a time series in a dataset for some 'prediction\\_length' time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the 'validation' split and tested via the 'test' split.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from the training set is provided below:",
"### Data Fields\n\n\nFor the univariate regular time series each series has the following keys:\n\n\n* 'start': a datetime of the first entry of each time series in the dataset\n* 'target': an array[float32] of the actual target values\n* 'feat\\_static\\_cat': an array[uint64] which contains a categorical identifier of each time series in the dataset\n* 'feat\\_dynamic\\_real': optional array of covariate features\n* 'item\\_id': a string identifier of each time series in a dataset for reference\n\n\nFor the multivariate time series the 'target' is a vector of the multivariate dimension for each time point.",
"### Data Splits\n\n\nThe datasets are split in time depending on the prediction length specified in the datasets. In particular for each time series in a dataset there is a prediction length window of the future in the validation split and another prediction length more in the test split.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo facilitate the evaluation of global forecasting models. All datasets in our repository are intended for research purposes and to evaluate the performance of new forecasting algorithms.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nOut of the 30 datasets, 23 were already publicly available in different platforms with different data formats. The original sources of all datasets are mentioned in the datasets table above.\n\n\nAfter extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series belonging to different frequencies and the archive contains a separate dataset per each frequency.",
"#### Who are the source language producers?\n\n\nThe data comes from the datasets listed in the table above.",
"### Annotations",
"#### Annotation process\n\n\nThe annotations come from the datasets listed in the table above.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Rakshitha Godahewa\n* Christoph Bergmeir\n* Geoff Webb\n* Rob Hyndman\n* Pablo Montero-Manso",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\n\nThanks to @kashif for adding this dataset."
] |
ccd927007e794cf0a8794aee8482c6dec66ff6fb | Twitter 3.21 | NoCaptain/MyTwitter | [
"region:us"
] | 2022-03-21T14:14:44+00:00 | {} | 2022-03-21T14:51:28+00:00 | [] | [] | TAGS
#region-us
| Twitter 3.21 | [] | [
"TAGS\n#region-us \n"
] |
b6a5fc413080ac48e2ad89fb86a0e4f624ec02e3 | Cleaned wikipedia dataset | blo05/cleaned_wiki_en | [
"region:us"
] | 2022-03-21T15:55:39+00:00 | {} | 2022-03-30T09:12:38+00:00 | [] | [] | TAGS
#region-us
| Cleaned wikipedia dataset | [] | [
"TAGS\n#region-us \n"
] |
f55977a7a91fb5d406494b6b98ff236b36dfb8a0 |
# Dataset Card for LFQA Discourse
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Repo](https://github.com/utcsnlp/lfqa_discourse)
- **Paper:** [How Do We Answer Complex Questions: Discourse Structure of Long-form Answers](https://arxiv.org/abs/2203.11048)
- **Point of Contact:** fangyuan[at]utexas.edu
### Dataset Summary
This dataset contains discourse annotation of long-form answers. There are two types of annotations:
* **Validity:** whether a <question, answer> pair is valid based on a set of invalid reasons defined.
* **Role:** sentence-level role annotation of functional roles for long-form answers.
### Languages
The dataset contains data in English.
## Dataset Structure
### Data Instances
Each instance is a (question, long-form answer) pair from one of the four data sources -- ELI5, WebGPT, NQ, and model-generated answers (denoted as ELI5-model), and our discourse annotation, which consists of QA-pair level validity label and sentence-level functional role label.
We provide all validity and role annotations here. For further train/val/test split, please refer to our [github repository](https://github.com/utcsnlp/lfqa_discourse).
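A minimal loading sketch with the `datasets` library (the split name below is an assumption — check the repository for the exact configuration of the validity and role annotation files):

```python
from datasets import load_dataset

# Assumption: the default configuration exposes a "train" split of annotations.
lfqa = load_dataset("fangyuan/lfqa_discourse")
example = lfqa["train"][0]

print(example["question"])
print(example["answer_sentences"][:2])  # first two tokenized answer sentences
```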
### Data Fields
For validity annotations, each instance contains the following fields:
* `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`]. Note that `ELI5` contains both human-written answers and model-generated answers, with model-generated answer distinguished with the `a_id` field mentioned below.
* `q_id`: The question id, same as the original NQ or ELI5 dataset.
* `a_id`: The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1). For machine generated answers, this field corresponds to the name of the model.
* `question`: The question.
* `answer_paragraph`: The answer paragraph.
* `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph.
* `is_valid`: A boolean value indicating whether the qa pair is valid, values: [`True`, `False`].
* `invalid_reason`: A list of list, each list contains the invalid reason the annotator selected. The invalid reason is one of [`no_valid_answer`, `nonsensical_question`, `assumptions_rejected`, `multiple_questions`].
For role annotations, each instance contains the following fields:
* `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`]. Note that `ELI5` contains both human-written answers and model-generated answers, with model-generated answer distinguished with the `a_id` field mentioned below.
* `q_id`: The question id, same as the original NQ or ELI5 dataset.
* `a_id`: The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1). For machine generated answers, this field corresponds to the name of the model.
* `question`: The question.
* `answer_paragraph`: The answer paragraph.
* `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph.
* `role_annotation`: The list of majority role (or adjudicated) role (if exists), for the sentences in `answer_sentences`. Each role is one of [`Answer`, `Answer - Example`, `Answer (Summary)`, `Auxiliary Information`, `Answer - Organizational sentence`, `Miscellaneous`]
* `raw_role_annotation`: A list of list, each list contains the raw role annotations for sentences in `answer_sentences`.
### Data Splits
For train/validation/test splits, please refer to our [repository](https://github.com/utcsnlp/lfqa_discourse).
## Dataset Creation
Please refer to our [paper](https://arxiv.org/abs/2203.11048) and datasheet for details on dataset creation, annotation process and discussion on limitations.
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-sa/4.0/legalcode
### Citation Information
```
@inproceedings{xu2022lfqadiscourse,
title = {How Do We Answer Complex Questions: Discourse Structure of Long-form Answers},
author = {Xu, Fangyuan and Li, Junyi Jessy and Choi, Eunsol},
year = 2022,
booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics},
note = {Long paper}
}
```
### Contributions
Thanks to [@carriex](https://github.com/carriex) for adding this dataset. | fangyuan/lfqa_discourse | [
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|natural_questions",
"source_datasets:extended|eli5",
"license:cc-by-sa-4.0",
"arxiv:2203.11048",
"region:us"
] | 2022-03-21T16:37:57+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["machine-generated", "found"], "language": ["en-US"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|natural_questions", "extended|eli5"], "task_categories": [], "task_ids": [], "pretty_name": "lfqa_discourse"} | 2023-06-08T03:55:00+00:00 | [
"2203.11048"
] | [
"en-US"
] | TAGS
#annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|natural_questions #source_datasets-extended|eli5 #license-cc-by-sa-4.0 #arxiv-2203.11048 #region-us
|
# Dataset Card for LFQA Discourse
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Repository: Repo
- Paper: How Do We Answer Complex Questions: Discourse Structure of Long-form Answers
- Point of Contact: fangyuan[at]URL
### Dataset Summary
This dataset contains discourse annotation of long-form answers. There are two types of annotations:
* Validity: whether a <question, answer> pair is valid based on a set of invalid reasons defined.
* Role: sentence-level role annotation of functional roles for long-form answers.
### Languages
The dataset contains data in English.
## Dataset Structure
### Data Instances
Each instance is a (question, long-form answer) pair from one of the four data sources -- ELI5, WebGPT, NQ, and model-generated answers (denoted as ELI5-model), and our discourse annotation, which consists of QA-pair level validity label and sentence-level functional role label.
We provide all validity and role annotations here. For further train/val/test split, please refer to our github repository.
### Data Fields
For validity annotations, each instance contains the following fields:
* 'dataset': The dataset this QA pair belongs to, one of ['NQ', 'ELI5', 'Web-GPT']. Note that 'ELI5' contains both human-written answers and model-generated answers, with model-generated answer distinguished with the 'a_id' field mentioned below.
* 'q_id': The question id, same as the original NQ or ELI5 dataset.
* 'a_id': The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy 'a_id' (1). For machine generated answers, this field corresponds to the name of the model.
* 'question': The question.
* 'answer_paragraph': The answer paragraph.
* 'answer_sentences': The list of answer sentences, tokenized from the answer paragraph.
* 'is_valid': A boolean value indicating whether the qa pair is valid, values: ['True', 'False'].
* 'invalid_reason': A list of list, each list contains the invalid reason the annotator selected. The invalid reason is one of ['no_valid_answer', 'nonsensical_question', 'assumptions_rejected', 'multiple_questions'].
For role annotations, each instance contains the following fields:
* 'dataset': The dataset this QA pair belongs to, one of ['NQ', 'ELI5', 'Web-GPT']. Note that 'ELI5' contains both human-written answers and model-generated answers, with model-generated answer distinguished with the 'a_id' field mentioned below.
* 'q_id': The question id, same as the original NQ or ELI5 dataset.
* 'a_id': The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy 'a_id' (1). For machine generated answers, this field corresponds to the name of the model.
* 'question': The question.
* 'answer_paragraph': The answer paragraph.
* 'answer_sentences': The list of answer sentences, tokenized from the answer paragraph.
* 'role_annotation': The list of majority role (or adjudicated) role (if exists), for the sentences in 'answer_sentences'. Each role is one of ['Answer', 'Answer - Example', 'Answer (Summary)', 'Auxiliary Information', 'Answer - Organizational sentence', 'Miscellaneous']
* 'raw_role_annotation': A list of list, each list contains the raw role annotations for sentences in 'answer_sentences'.
### Data Splits
For train/validation/test splits, please refer to our repository.
## Dataset Creation
Please refer to our paper and datasheet for details on dataset creation, annotation process and discussion on limitations.
## Additional Information
### Licensing Information
URL
### Contributions
Thanks to @carriex for adding this dataset. | [
"# Dataset Card for LFQA Discourse",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: Repo\n- Paper: How Do We Answer Complex Questions: Discourse Structure of Long-form Answers\n- Point of Contact: fangyuan[at]URL",
"### Dataset Summary\n\nThis dataset contains discourse annotation of long-form answers. There are two types of annotations:\n* Validity: whether a <question, answer> pair is valid based on a set of invalid reasons defined.\n* Role: sentence-level role annotation of functional roles for long-form answers.",
"### Languages\n\nThe dataset contains data in English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance is a (question, long-form answer) pair from one of the four data sources -- ELI5, WebGPT, NQ, and model-generated answers (denoted as ELI5-model), and our discourse annotation, which consists of QA-pair level validity label and sentence-level functional role label.\n\nWe provide all validity and role annotations here. For further train/val/test split, please refer to our github repository.",
"### Data Fields\n\nFor validity annotations, each instance contains the following fields:\n* 'dataset': The dataset this QA pair belongs to, one of ['NQ', 'ELI5', 'Web-GPT']. Note that 'ELI5' contains both human-written answers and model-generated answers, with model-generated answer distinguished with the 'a_id' field mentioned below.\n* 'q_id': The question id, same as the original NQ or ELI5 dataset.\n* 'a_id': The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy 'a_id' (1). For machine generated answers, this field corresponds to the name of the model. \n* 'question': The question.\n* 'answer_paragraph': The answer paragraph.\n* 'answer_sentences': The list of answer sentences, tokenized from the answer paragraph.\n* 'is_valid': A boolean value indicating whether the qa pair is valid, values: ['True', 'False'].\n* 'invalid_reason': A list of list, each list contains the invalid reason the annotator selected. The invalid reason is one of ['no_valid_answer', 'nonsensical_question', 'assumptions_rejected', 'multiple_questions'].\n\nFor role annotations, each instance contains the following fields:\n* \n* 'dataset': The dataset this QA pair belongs to, one of ['NQ', 'ELI5', 'Web-GPT']. Note that 'ELI5' contains both human-written answers and model-generated answers, with model-generated answer distinguished with the 'a_id' field mentioned below.\n* 'q_id': The question id, same as the original NQ or ELI5 dataset.\n* 'a_id': The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy 'a_id' (1). For machine generated answers, this field corresponds to the name of the model. \n* 'question': The question.\n* 'answer_paragraph': The answer paragraph.\n* 'answer_sentences': The list of answer sentences, tokenized from the answer paragraph.\n* 'role_annotation': The list of majority role (or adjudicated) role (if exists), for the sentences in 'answer_sentences'. Each role is one of ['Answer', 'Answer - Example', 'Answer (Summary)', 'Auxiliary Information', 'Answer - Organizational sentence', 'Miscellaneous']\n* 'raw_role_annotation': A list of list, each list contains the raw role annotations for sentences in 'answer_sentences'.",
"### Data Splits\n\nFor train/validation/test splits, please refer to our repository.",
"## Dataset Creation\n\nPlease refer to our paper and datasheet for details on dataset creation, annotation process and discussion on limitations.",
"## Additional Information",
"### Licensing Information\n\nURL",
"### Contributions\n\nThanks to @carriex for adding this dataset."
] | [
"TAGS\n#annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|natural_questions #source_datasets-extended|eli5 #license-cc-by-sa-4.0 #arxiv-2203.11048 #region-us \n",
"# Dataset Card for LFQA Discourse",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: Repo\n- Paper: How Do We Answer Complex Questions: Discourse Structure of Long-form Answers\n- Point of Contact: fangyuan[at]URL",
"### Dataset Summary\n\nThis dataset contains discourse annotation of long-form answers. There are two types of annotations:\n* Validity: whether a <question, answer> pair is valid based on a set of invalid reasons defined.\n* Role: sentence-level role annotation of functional roles for long-form answers.",
"### Languages\n\nThe dataset contains data in English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance is a (question, long-form answer) pair from one of the four data sources -- ELI5, WebGPT, NQ, and model-generated answers (denoted as ELI5-model), and our discourse annotation, which consists of QA-pair level validity label and sentence-level functional role label.\n\nWe provide all validity and role annotations here. For further train/val/test split, please refer to our github repository.",
"### Data Fields\n\nFor validity annotations, each instance contains the following fields:\n* 'dataset': The dataset this QA pair belongs to, one of ['NQ', 'ELI5', 'Web-GPT']. Note that 'ELI5' contains both human-written answers and model-generated answers, with model-generated answer distinguished with the 'a_id' field mentioned below.\n* 'q_id': The question id, same as the original NQ or ELI5 dataset.\n* 'a_id': The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy 'a_id' (1). For machine generated answers, this field corresponds to the name of the model. \n* 'question': The question.\n* 'answer_paragraph': The answer paragraph.\n* 'answer_sentences': The list of answer sentences, tokenized from the answer paragraph.\n* 'is_valid': A boolean value indicating whether the qa pair is valid, values: ['True', 'False'].\n* 'invalid_reason': A list of list, each list contains the invalid reason the annotator selected. The invalid reason is one of ['no_valid_answer', 'nonsensical_question', 'assumptions_rejected', 'multiple_questions'].\n\nFor role annotations, each instance contains the following fields:\n* \n* 'dataset': The dataset this QA pair belongs to, one of ['NQ', 'ELI5', 'Web-GPT']. Note that 'ELI5' contains both human-written answers and model-generated answers, with model-generated answer distinguished with the 'a_id' field mentioned below.\n* 'q_id': The question id, same as the original NQ or ELI5 dataset.\n* 'a_id': The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy 'a_id' (1). For machine generated answers, this field corresponds to the name of the model. \n* 'question': The question.\n* 'answer_paragraph': The answer paragraph.\n* 'answer_sentences': The list of answer sentences, tokenized from the answer paragraph.\n* 'role_annotation': The list of majority role (or adjudicated) role (if exists), for the sentences in 'answer_sentences'. Each role is one of ['Answer', 'Answer - Example', 'Answer (Summary)', 'Auxiliary Information', 'Answer - Organizational sentence', 'Miscellaneous']\n* 'raw_role_annotation': A list of list, each list contains the raw role annotations for sentences in 'answer_sentences'.",
"### Data Splits\n\nFor train/validation/test splits, please refer to our repository.",
"## Dataset Creation\n\nPlease refer to our paper and datasheet for details on dataset creation, annotation process and discussion on limitations.",
"## Additional Information",
"### Licensing Information\n\nURL",
"### Contributions\n\nThanks to @carriex for adding this dataset."
] |
b1cb0eb42393e09d5b9090c60a1f55d59273dbfb | # AutoNLP Dataset for project: pruebapoems
## Dataset Description
This dataset has been automatically processed by AutoNLP for project pruebapoems.
### Languages
The BCP-47 code for the dataset's language is es.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "When I was fair and young, then favor graced me.\r\nOf many was I sought their mistress for to be.\r\nBu[...]",
"target": 1
},
{
"text": "Sigh no more, ladies, sigh no more.\r\n Men were deceivers ever,\r\nOne foot in sea, and one on shore[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['Love', 'Mythology & Folklore', 'Nature'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 457 |
| valid | 116 |
| EALeon16/autonlp-data-pruebapoems | [
"task_categories:text-classification",
"language:es",
"region:us"
] | 2022-03-21T17:51:36+00:00 | {"language": ["es"], "task_categories": ["text-classification"]} | 2022-10-25T09:03:29+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #language-Spanish #region-us
| AutoNLP Dataset for project: pruebapoems
========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoNLP for project pruebapoems.
### Languages
The BCP-47 code for the dataset's language is es.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is es.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-Spanish #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is es.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
dc52efd76d818fcd4d0a3b4cc1d6579486b92a0a |
The dataset consists of 192,347 rows of data for training, 33,944 for testing, and 22,630 for validation. Its content is made up of suicidal comments and normal comments from the social network Reddit, translated into Spanish and obtained from the Suicide and Depression Detection dataset by Nikhileswar Komati, which can be viewed at the following address: https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch
Authors
- Danny Vásquez
- César Salazar
- Alexis Cañar
- Yannela Castro
- Daniel Patiño
| hackathon-pln-es/comentarios_depresivos | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-03-21T18:16:53+00:00 | {"license": "cc-by-sa-4.0"} | 2022-04-01T00:40:06+00:00 | [] | [] | TAGS
#license-cc-by-sa-4.0 #region-us
|
The dataset consists of 192,347 rows of data for training, 33,944 for testing, and 22,630 for validation. Its content is made up of suicidal comments and normal comments from the social network Reddit, translated into Spanish and obtained from the Suicide and Depression Detection dataset by Nikhileswar Komati, which can be viewed at the following address: URL
Authors
- Danny Vásquez
- César Salazar
- Alexis Cañar
- Yannela Castro
- Daniel Patiño
| [] | [
"TAGS\n#license-cc-by-sa-4.0 #region-us \n"
] |
1b7f73b6c66efd03e28c7f409895c878684675b5 | Dataset downloaded from kaggle.com.
The original file contained information in English and was subsequently translated for use.
The dataset contains the following columns:
- Autor: corresponds to the author of the poem.
- Contenido: contains the full poem.
- Nombre del poema: contains the title of the poem.
- Años: corresponds to the time when the poem was written.
- Tipo: contains the type the poem belongs to. | hackathon-pln-es/poems-es | [
"license:wtfpl",
"region:us"
] | 2022-03-21T18:36:23+00:00 | {"license": "wtfpl"} | 2022-03-27T17:39:08+00:00 | [] | [] | TAGS
#license-wtfpl #region-us
| Dataset downloaded from URL.
The original file contained information in English and was subsequently translated for use.
The dataset contains the following columns:
- Autor: corresponds to the author of the poem.
- Contenido: contains the full poem.
- Nombre del poema: contains the title of the poem.
- Años: corresponds to the time when the poem was written.
- Tipo: contains the type the poem belongs to. | [] | [
"TAGS\n#license-wtfpl #region-us \n"
] |
45c0b11a67f833a92e8e04fbaa2577e1c9f75a63 | # HourAI-data
Conversational data used to finetune HourAI
Parsed from: [omoito](https://dynasty-scans.com/series/omoito)
Added some testing conversations that looked ok as well. | archmagos/HourAI-data | [
"region:us"
] | 2022-03-22T11:40:11+00:00 | {} | 2022-03-22T20:26:21+00:00 | [] | [] | TAGS
#region-us
| # HourAI-data
Conversational data used to finetune HourAI
Parsed from: omoito
Added some testing conversations that looked ok as well. | [] | [
"TAGS\n#region-us \n"
] |
c35d0c6af81729ca8a1049b4c23674b276cb14ee | # NLI-TR for Supervised SimCSE
This dataset is a modified version of the [NLI-TR](https://huggingface.co/datasets/nli_tr) dataset. Its intended use is to train Supervised [SimCSE](https://github.com/princeton-nlp/SimCSE) models for sentence embeddings. The steps followed to produce this dataset are listed below:
1. Merge train split of snli_tr and multinli_tr subsets.
2. Find every premise that has an entailment hypothesis **and** a contradiction hypothesis.
3. Write found triplets into sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.
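A rough sketch of these three steps is given below (a simplification that assumes the `nli_tr` configs `snli_tr`/`multinli_tr` use the standard premise/hypothesis/label schema with 0 = entailment and 2 = contradiction — verify the label mapping against the NLI-TR card before relying on it):

```python
import csv
from collections import defaultdict
from datasets import load_dataset, concatenate_datasets

# Step 1: merge the train splits of the two subsets.
snli = load_dataset("nli_tr", "snli_tr", split="train")
mnli = load_dataset("nli_tr", "multinli_tr", split="train")
merged = concatenate_datasets([snli, mnli])

# Step 2: group hypotheses by premise and label (0 = entailment, 2 = contradiction assumed).
groups = defaultdict(lambda: {"entail": [], "contra": []})
for ex in merged:
    if ex["label"] == 0:
        groups[ex["premise"]]["entail"].append(ex["hypothesis"])
    elif ex["label"] == 2:
        groups[ex["premise"]]["contra"].append(ex["hypothesis"])

# Step 3: write premises that have both labels as (sent0, sent1, hard_neg) triplets.
with open("nli_tr_for_simcse.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["sent0", "sent1", "hard_neg"])
    for premise, hyps in groups.items():
        if hyps["entail"] and hyps["contra"]:
            writer.writerow([premise, hyps["entail"][0], hyps["contra"][0]])
```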
See this [Colab Notebook](https://colab.research.google.com/drive/1Ysq1SpFOa7n1X79x2HxyWjfKzuR_gDQV?usp=sharing) for training and evaluation on Turkish sentences. | emrecan/nli_tr_for_simcse | [
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"size_categories:100K<n<1M",
"source_datasets:nli_tr",
"language:tr",
"region:us"
] | 2022-03-22T12:01:59+00:00 | {"language": ["tr"], "size_categories": ["100K<n<1M"], "source_datasets": ["nli_tr"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-scoring", "text-scoring"]} | 2023-01-25T16:56:04+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-classification #task_ids-semantic-similarity-scoring #task_ids-text-scoring #size_categories-100K<n<1M #source_datasets-nli_tr #language-Turkish #region-us
| # NLI-TR for Supervised SimCSE
This dataset is a modified version of NLI-TR dataset. Its intended use is to train Supervised SimCSE models for sentence-embeddings. Steps followed to produce this dataset are listed below:
1. Merge train split of snli_tr and multinli_tr subsets.
2. Find every premise that has an entailment hypothesis and a contradiction hypothesis.
3. Write found triplets into sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.
See this Colab Notebook for training and evaluation on Turkish sentences. | [
"# NLI-TR for Supervised SimCSE\n\nThis dataset is a modified version of NLI-TR dataset. Its intended use is to train Supervised SimCSE models for sentence-embeddings. Steps followed to produce this dataset are listed below:\n1. Merge train split of snli_tr and multinli_tr subsets.\n2. Find every premise that has an entailment hypothesis and a contradiction hypothesis.\n3. Write found triplets into sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.\n\nSee this Colab Notebook for training and evaluation on Turkish sentences."
] | [
"TAGS\n#task_categories-text-classification #task_ids-semantic-similarity-scoring #task_ids-text-scoring #size_categories-100K<n<1M #source_datasets-nli_tr #language-Turkish #region-us \n",
"# NLI-TR for Supervised SimCSE\n\nThis dataset is a modified version of NLI-TR dataset. Its intended use is to train Supervised SimCSE models for sentence-embeddings. Steps followed to produce this dataset are listed below:\n1. Merge train split of snli_tr and multinli_tr subsets.\n2. Find every premise that has an entailment hypothesis and a contradiction hypothesis.\n3. Write found triplets into sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.\n\nSee this Colab Notebook for training and evaluation on Turkish sentences."
] |
de49bc6dc80030d41ac50d2f3e981bbe78f51e47 | # :newspaper: The Spanish Fake News Corpus




## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ Iberlef 2021 ]] :metal:
### Corpus Description
The Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them were written in Spanish) that were collected from **November 2020 to March 2021**. Different sources from the web were used to gather the information, but mainly of two types: 1) newspaper and media company websites, and 2) fact-checking websites. Most of the revised fact-checking sites used follow the recommendations of the International [Fact-Checking Network (IFCN)](https://ifcncodeofprinciples.poynter.org/) that seeks to promote good practice in fact-checking.
The assembled corpus has **572 instances** and the instances were labeled using two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pair of the test corpus, the following guidelines were followed:
- A fake news is added to the corpus if any of the selected fact-checking sites determines it.
- Given a fake news, its true news counterpart is added if there is evidence that it has been published in a reliable site (established newspaper site or media site).
The topics covered in the corpus are: **Science, Sport, Politics, Society, COVID-19, Environment, and International**. The corpus includes mostly news articles; however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73% of the total). These posts were recovered mainly from Facebook and WhatsApp. The use of the various fact-checking sites involved consulting pages from different countries that offer content in Spanish in addition to Mexico, so different variants of Spanish are included in the test corpus. These sites included countries like Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, United States, France, Peru, Uruguay, England and Venezuela.
The corpus is concentrated in the file test.xlsx. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assign an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
Note that some instances have an empty header intentionally because the source omitted it.
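A minimal sketch of reading the corpus with pandas (assuming the file is available locally under the name used in this card, test.xlsx):

```python
import pandas as pd

# Read the test corpus and inspect the class balance of the Category column.
test = pd.read_excel("test.xlsx")

print(test.columns.tolist())           # Id, Category, Topic, Source, Headline, Text, Link
print(test["Category"].value_counts())
```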
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### FakeDeS @ IberLef 2021
>> The corpus was used for the **Fake News Detection in Spanish (FakeDeS)** shared task at the IberLEF 2021 congress. The details of the competition can be viewed in the main page of the [competition](https://sites.google.com/view/fakedes).
### Organizers
- Helena Montserrat Gómez Adorno (IIMAS - UNAM)
- Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN)
- Gemma Bel Enguix (IINGEN - UNAM)
- Claudia Porto Capetillo (IIMAS - UNAM)
## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20)
### :page_facing_up: Corpus Description
<p style='text-align: justify;'>
The Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspapers websites, media companies’ websites, special websites dedicated to validating fake news and websites designated by different journalists as sites that regularly publish fake news. The news were collected from **January to July of 2018** and all of them were written in Spanish. The process of tagging the corpus was manually performed and the method followed is described in the paper.
The following aspects were considered: 1) news were tagged as true if there was evidence that it had been published on reliable sites, i.e., established newspaper websites or renowned journalists' websites; 2) news were tagged as fake if there was news from reliable sites, or from a specialized website in the detection of deceptive content such as VerificadoMX (https://verificado.mx), that contradicted it, or if no other evidence was found about the news besides the source; 3) the correlation between the news was kept by collecting the true-fake news pair of an event; 4) we tried to trace the source of the news.
</p>
The corpus contains 971 news items, divided into 491 real news and 480 fake news. The corpus covers news from 9 different topics: **Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society**. The corpus was split into train and test sets, using around 70% of the corpus for training and the rest for testing. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70%-30% ratio.
The corpus is concentrated in the files train.xlsx and development.xlsx. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assign an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### Fake News Detection Task at MEX-A3T
>> The Fake News Corpus in Spanish was used for the **Fake News Detection Task** in the **MEX-A3T** competition at the IberLEF 2020 congress. The details of the competition can be viewed in the main page of the [competition](https://sites.google.com/view/mex-a3t/).
### Authors of the corpus
Juan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN)
### Acknowledgments
The work was done with partial support of Red Temática de Tecnologías del Lenguaje, CONACYT project 240844 and SIP-IPN projects 20181849 and 20171813
## License
[CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/).
| sayalaruano/FakeNewsCorpusSpanish | [
"region:us"
] | 2022-03-22T14:20:00+00:00 | {} | 2022-03-22T14:37:06+00:00 | [] | [] | TAGS
#region-us
| # :newspaper: The Spanish Fake News Corpus
!GitHub
!GitHub repo size
!GitHub last commit
!GitHub stars
## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ Iberlef 2021 ]] :metal:
### Corpus Description
The Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them were written in Spanish) that were collected from November 2020 to March 2021. Different sources from the web were used to gather the information, but mainly of two types: 1) newspaper and media company websites, and 2) fact-checking websites. Most of the revised fact-checking sites used follow the recommendations of the International Fact-Checking Network (IFCN) that seeks to promote good practice in fact-checking.
The assembled corpus has 572 instances and the instances were labeled using two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pair of the test corpus, the following guidelines were followed:
- A fake news is added to the corpus if any of the selected fact-checking sites determines it.
- Given a fake news, its true news counterpart is added if there is evidence that it has been published in a reliable site (established newspaper site or media site).
The topics covered in the corpus are: Science, Sport, Politics, Society, COVID-19, Environment, and International. The corpus includes mostly news articles; however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73% of the total). These posts were recovered mainly from Facebook and WhatsApp. The use of the various fact-checking sites involved consulting pages from different countries that offer content in Spanish in addition to Mexico, so different variants of Spanish are included in the test corpus. These sites included countries like Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, United States, France, Peru, Uruguay, England and Venezuela.
The corpus is concentrated in the file URL. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assign an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
Note that some instances have an empty header intentionally because the source omitted it.
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### FakeDeS @ IberLef 2021
>> The corpus was used for the Fake News Detection in Spanish (FakeDeS) shared task at the IberLEF 2021 congress. The details of the competition can be viewed in the main page of the competition.
### Organizers
- Helena Montserrat Gómez Adorno (IIMAS - UNAM)
- Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN)
- Gemma Bel Enguix (IINGEN - UNAM)
- Claudia Porto Capetillo (IIMAS - UNAM)
## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20)
### :page_facing_up: Corpus Description
<p style='text-align: justify;'>
The Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspapers websites, media companies’ websites, special websites dedicated to validating fake news and websites designated by different journalists as sites that regularly publish fake news. The news were collected from January to July of 2018 and all of them were written in Spanish. The process of tagging the corpus was manually performed and the method followed is described in the paper.
The following aspects were considered: 1) news were tagged as true if there was evidence that it had been published on reliable sites, i.e., established newspaper websites or renowned journalists' websites; 2) news were tagged as fake if there was news from reliable sites, or from a specialized website in the detection of deceptive content such as VerificadoMX (URL), that contradicted it, or if no other evidence was found about the news besides the source; 3) the correlation between the news was kept by collecting the true-fake news pair of an event; 4) we tried to trace the source of the news.
</p>
The corpus contains 971 news divided into 491 real news and 480 fake news. The corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. The corpus was split into train and test sets, using around the 70\% of the corpus for train and the rest for test. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70\%-30\% ratio.
The corpus is concentrated in the files URL and URL. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assign an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### Fake News Detection Task at MEX-A3T
>> The Fake News Corpus in Spanish was used for the Fake News Detection Task in the MEX-A3T competition at the IberLEF 2020 congress. The details of the competition can be viewed in the main page of the competition.
### Authors of the corpus
Juan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN)
### Acknowledgments
The work was done with partial support of the Red Temática de Tecnologías del Lenguaje, CONACYT project 240844, and SIP-IPN projects 20181849 and 20171813.
## License
CC-BY-4.0.
| [
"# :newspaper: The Spanish Fake News Corpus\r\n!GitHub\r\n!GitHub repo size\r\n!GitHub last commit\r\n!GitHub stars",
"## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ Iberlef 2021 ]] :metal:",
"### Corpus Description\r\nThe Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them were written in Spanish) that were collected from November 2020 to March 2021. Different sources from the web were used to gather the information, but mainly of two types: 1) newspapers and media companies websites, and 2) fact-cheking websites. Most of the revised fact-checking sites used follow the recommendations of the International Fact-Checking Network (IFCN) that seeks to promote good practice in fact-checking.\r\n \r\nThe assembled corpus has 572 instances and the instances were labeled using two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pair of the test corpus, the following guidelines were followed:\r\n- A fake news is added to the corpus if any of the selected fact-checking sites determines it.\r\n- Given a fake news, its true news counterpart is added if there is evidence that it has been published in a reliable site (established newspaper site or media site).\r\n\r\nThe topics covered in the corpus are: Science, Sport, Politics, Society, COVID-19, Environment, and International.The corpus includes mostly news articles, however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73\\% of the total). This posts were recovered mainly from Facebook and WhatsApp. The use of the various fact-checking sites involved consulting pages from different countries that offer content in Spanish in addition to Mexico, so different variants of Spanish are included in the test corpus. These sites included countries like Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, United States, France, Peru, Uruguay, England and Venezuela.\r\n\r\nThe corpus is concentrated in the file URL. The meaning of the columns is described next:\r\n<ul>\r\n <li><b>Id</b>: assign an identifier to each instance.</li>\r\n <li><b>Category</b>: indicates the category of the news (true or fake).</li>\r\n <li><b>Topic</b>: indicates the topic related to the news.</li>\r\n <li><b>Source</b>: indicates the name of the source.</li>\r\n <li><b>Headline</b>: contains the headline of the news.</li>\r\n <li><b>Text</b>: contains the raw text of the news.</li>\r\n <li><b>Link</b>: contains the URL of the source.</li>\r\n</ul>\r\n\r\nNote that some instances have an empty header intentionally because the source omitted it.",
"### :pencil: How to cite\r\nIf you use the corpus please cite the following articles:\r\n\r\n1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.\r\n2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.\r\n3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.",
"### FakeDeS @ IberLef 2021 \r\n>> The corpus was used for the Fake News Detection in Spanish (FakeDeS) shared task at the IberLEF 2021 congress. The details of the competition can be viewed in the main page of the competition.",
"### Organizers\r\n- Helena Montserrat Gómez Adorno (IIMAS - UNAM)\r\n- Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN)\r\n- Gemma Bel Enguix (IINGEN - UNAM)\r\n- Claudia Porto Capetillo (IIMAS - UNAM)",
"## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20)",
"### :page_facing_up: Corpus Description\r\n<p style='text-align: justify;'>\r\nThe Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspapers websites, media companies’ websites, special websites dedicated to validating fake news and websites designated by different journalists as sites that regularly publish fake news. The news were collected from January to July of 2018 and all of them were written in Spanish. The process of tagging the corpus was manually performed and the method followed is described in the paper.\r\naspects were considered: 1) news were tagged as true if there was evidence that it has been published in reliable sites, i.e., established newspaper websites or renowned journalists websites; 2) news were tagged as fake if there were news from reliable sites or specialized website in detection of deceptive content for example VerificadoMX (URL) that contradicts it or no other evidence was found about the news besides the source; 3) the correlation between the news was kept by collecting the true-fake news pair of an event; 4) we tried to trace the source of the news.\r\n</p>\r\nThe corpus contains 971 news divided into 491 real news and 480 fake news. The corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. The corpus was split into train and test sets, using around the 70\\% of the corpus for train and the rest for test. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70\\%-30\\% ratio.\r\n\r\nThe corpus is concentrated in the files URL and URL. The meaning of the columns is described next:\r\n<ul>\r\n <li><b>Id</b>: assign an identifier to each instance.</li>\r\n <li><b>Category</b>: indicates the category of the news (true or fake).</li>\r\n <li><b>Topic</b>: indicates the topic related to the news.</li>\r\n <li><b>Source</b>: indicates the name of the source.</li>\r\n <li><b>Headline</b>: contains the headline of the news.</li>\r\n <li><b>Text</b>: contains the raw text of the news.</li>\r\n <li><b>Link</b>: contains the URL of the source.</li>\r\n</ul>",
"### :pencil: How to cite\r\nIf you use the corpus please cite the following articles:\r\n\r\n1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.\r\n2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.\r\n3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.",
"### Fake News Detection Task at MEX-A3T \r\n>> The Fake News Corpus in Spanish was used for the Fake News Detection Task in the MEX-A3T competition at the IberLEF 2020 congress. The details of the competition can be viewed in the main page of the competition.",
"### Authors of the corpus\r\nJuan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN)",
"### Aknowledgments\r\nThe work was done with partial support of Red Temática de Tecnologías del Lenguaje, CONACYT project 240844 and SIP-IPN projects 20181849 and 20171813",
"## License\r\nCC-BY-4.0."
] | [
"TAGS\n#region-us \n",
"# :newspaper: The Spanish Fake News Corpus\r\n!GitHub\r\n!GitHub repo size\r\n!GitHub last commit\r\n!GitHub stars",
"## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ Iberlef 2021 ]] :metal:",
"### Corpus Description\r\nThe Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them were written in Spanish) that were collected from November 2020 to March 2021. Different sources from the web were used to gather the information, but mainly of two types: 1) newspapers and media companies websites, and 2) fact-cheking websites. Most of the revised fact-checking sites used follow the recommendations of the International Fact-Checking Network (IFCN) that seeks to promote good practice in fact-checking.\r\n \r\nThe assembled corpus has 572 instances and the instances were labeled using two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pair of the test corpus, the following guidelines were followed:\r\n- A fake news is added to the corpus if any of the selected fact-checking sites determines it.\r\n- Given a fake news, its true news counterpart is added if there is evidence that it has been published in a reliable site (established newspaper site or media site).\r\n\r\nThe topics covered in the corpus are: Science, Sport, Politics, Society, COVID-19, Environment, and International.The corpus includes mostly news articles, however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73\\% of the total). This posts were recovered mainly from Facebook and WhatsApp. The use of the various fact-checking sites involved consulting pages from different countries that offer content in Spanish in addition to Mexico, so different variants of Spanish are included in the test corpus. These sites included countries like Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, United States, France, Peru, Uruguay, England and Venezuela.\r\n\r\nThe corpus is concentrated in the file URL. The meaning of the columns is described next:\r\n<ul>\r\n <li><b>Id</b>: assign an identifier to each instance.</li>\r\n <li><b>Category</b>: indicates the category of the news (true or fake).</li>\r\n <li><b>Topic</b>: indicates the topic related to the news.</li>\r\n <li><b>Source</b>: indicates the name of the source.</li>\r\n <li><b>Headline</b>: contains the headline of the news.</li>\r\n <li><b>Text</b>: contains the raw text of the news.</li>\r\n <li><b>Link</b>: contains the URL of the source.</li>\r\n</ul>\r\n\r\nNote that some instances have an empty header intentionally because the source omitted it.",
"### :pencil: How to cite\r\nIf you use the corpus please cite the following articles:\r\n\r\n1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.\r\n2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.\r\n3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.",
"### FakeDeS @ IberLef 2021 \r\n>> The corpus was used for the Fake News Detection in Spanish (FakeDeS) shared task at the IberLEF 2021 congress. The details of the competition can be viewed in the main page of the competition.",
"### Organizers\r\n- Helena Montserrat Gómez Adorno (IIMAS - UNAM)\r\n- Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN)\r\n- Gemma Bel Enguix (IINGEN - UNAM)\r\n- Claudia Porto Capetillo (IIMAS - UNAM)",
"## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20)",
"### :page_facing_up: Corpus Description\r\n<p style='text-align: justify;'>\r\nThe Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspapers websites, media companies’ websites, special websites dedicated to validating fake news and websites designated by different journalists as sites that regularly publish fake news. The news were collected from January to July of 2018 and all of them were written in Spanish. The process of tagging the corpus was manually performed and the method followed is described in the paper.\r\naspects were considered: 1) news were tagged as true if there was evidence that it has been published in reliable sites, i.e., established newspaper websites or renowned journalists websites; 2) news were tagged as fake if there were news from reliable sites or specialized website in detection of deceptive content for example VerificadoMX (URL) that contradicts it or no other evidence was found about the news besides the source; 3) the correlation between the news was kept by collecting the true-fake news pair of an event; 4) we tried to trace the source of the news.\r\n</p>\r\nThe corpus contains 971 news divided into 491 real news and 480 fake news. The corpus covers news from 9 different topics: Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society. The corpus was split into train and test sets, using around the 70\\% of the corpus for train and the rest for test. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70\\%-30\\% ratio.\r\n\r\nThe corpus is concentrated in the files URL and URL. The meaning of the columns is described next:\r\n<ul>\r\n <li><b>Id</b>: assign an identifier to each instance.</li>\r\n <li><b>Category</b>: indicates the category of the news (true or fake).</li>\r\n <li><b>Topic</b>: indicates the topic related to the news.</li>\r\n <li><b>Source</b>: indicates the name of the source.</li>\r\n <li><b>Headline</b>: contains the headline of the news.</li>\r\n <li><b>Text</b>: contains the raw text of the news.</li>\r\n <li><b>Link</b>: contains the URL of the source.</li>\r\n</ul>",
"### :pencil: How to cite\r\nIf you use the corpus please cite the following articles:\r\n\r\n1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.\r\n2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.\r\n3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.",
"### Fake News Detection Task at MEX-A3T \r\n>> The Fake News Corpus in Spanish was used for the Fake News Detection Task in the MEX-A3T competition at the IberLEF 2020 congress. The details of the competition can be viewed in the main page of the competition.",
"### Authors of the corpus\r\nJuan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN)",
"### Aknowledgments\r\nThe work was done with partial support of Red Temática de Tecnologías del Lenguaje, CONACYT project 240844 and SIP-IPN projects 20181849 and 20171813",
"## License\r\nCC-BY-4.0."
] |
8a03d6240ada811ba3d603f813b91d3be4553764 |
This dataset was obtained from: https://www.kaggle.com/datasets/arseniitretiakov/noticias-falsas-en-espaol
| sayalaruano/FakeNewsSpanish_Kaggle1 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-22T14:53:20+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-03-22T14:59:40+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
This dataset was obtained from: URL
| [] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n"
] |
74fb40b34737bd14e40f2638ac00938243ec9ee3 |
This dataset was obtained from: https://www.kaggle.com/datasets/zulanac/fake-and-real-news | sayalaruano/FakeNewsSpanish_Kaggle2 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-22T15:01:36+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-03-22T15:02:43+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
This dataset was obtained from: URL | [] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n"
] |
421b0fbcfc6b450bea2364de6ace4e965cd98c8b | annotations_creators:
- machine-generated
language_creators:
- machine-generated
languages: []
licenses:
- mit
multilinguality: []
pretty_name: Multi-Radar/Multi-System Precipitation Radar
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- time-series-forecasting
- image-classification
- image-segmentation
- other
task_ids:
- univariate-time-series-forecasting
- multi-label-image-classification
- semantic-segmentation
# Dataset Card for MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mrms.nssl.noaa.gov/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:[email protected])
### Dataset Summary
Multi-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.
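As a rough illustration, the data can be pulled from the Hugging Face Hub with the `datasets` library. The split name and field layout below are assumptions (the repository layout is not documented in this card), so treat this as a hedged sketch rather than a verified recipe.

```python
from datasets import load_dataset

# Streaming avoids downloading the full multi-year archive up front.
# The split name "train" is an assumption -- inspect the repository files first.
mrms = load_dataset("openclimatefix/mrms", split="train", streaming=True)

first = next(iter(mrms))
print(first.keys())  # inspect available fields (timestamps, precipitation-rate arrays, ...)
```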
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was constructed to help recreate the original datasets used in the MetNet/MetNet-2 and Deep Generative Model of Radar papers. Those datasets were not publicly released, but this dataset should cover the time period they used, and more.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
US Government License, no restrictions
### Citation Information
@article{ocf:mrms,
  author = {Jacob Bieker},
  title = {MRMS Precipitation Rate Dataset},
  year = {2022}
} | openclimatefix/mrms | [
"doi:10.57967/hf/0885",
"region:us"
] | 2022-03-22T15:39:47+00:00 | {} | 2022-06-22T12:39:35+00:00 | [] | [] | TAGS
#doi-10.57967/hf/0885 #region-us
| annotations_creators:
- machine-generated
language_creators:
- machine-generated
languages: []
licenses:
- mit
multilinguality: []
pretty_name: Multi-Radar/Multi-System Precipitation Radar
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- time-series-forecasting
- image-classification
- image-segmentation
- other
task_ids:
- univariate-time-series-forecasting
- multi-label-image-classification
- semantic-segmentation
# Dataset Card for MRMS
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact: Jacob Bieker
### Dataset Summary
Multi-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
This dataset was constructed to help recreate the original datasets used in the MetNet/MetNet-2 and Deep Generative Model of Radar papers. Those datasets were not publicly released, but this dataset should cover the time period they used, and more.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
US Government License, no restrictions
@article{ocf:mrms,
  author = {Jacob Bieker},
  title = {MRMS Precipitation Rate Dataset},
  year = {2022}
} | [
"# Dataset Card for MRMS",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Jacob Bieker",
"### Dataset Summary\n\nMulti-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was constructed to help recreate the original dataset used for MetNet/MetNet-2 as well as Deep Generative Model of Radar papers. The datasets were not pubicly released, but this dataset should cover the time period used plus more compared to the datasets in the papers.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nUS Government License, no restrictions\n\n\n\n@article(ocf:mrms,\nauthor = {Jacob Bieker}\ntitle = {MRMS Precipitation Rate Dataset}\nyear = {2022}\n}"
] | [
"TAGS\n#doi-10.57967/hf/0885 #region-us \n",
"# Dataset Card for MRMS",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Jacob Bieker",
"### Dataset Summary\n\nMulti-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was constructed to help recreate the original dataset used for MetNet/MetNet-2 as well as Deep Generative Model of Radar papers. The datasets were not pubicly released, but this dataset should cover the time period used plus more compared to the datasets in the papers.",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nUS Government License, no restrictions\n\n\n\n@article(ocf:mrms,\nauthor = {Jacob Bieker}\ntitle = {MRMS Precipitation Rate Dataset}\nyear = {2022}\n}"
] |
4f92928b7f48c7f12925055498f2ed92ac042e06 | Please cite: E. Cardenas., et al. “A Comparison of House Price Classification with Structured and Unstructured Text Data.” Published in AAAI FLAIRS-35. 2022. | erikacardenas300/Zillow-Text-Listings | [
"region:us"
] | 2022-03-22T19:24:10+00:00 | {} | 2022-03-23T01:47:24+00:00 | [] | [] | TAGS
#region-us
| Please cite: E. Cardenas., et al. “A Comparison of House Price Classification with Structured and Unstructured Text Data.” Published in AAAI FLAIRS-35. 2022. | [] | [
"TAGS\n#region-us \n"
] |
549d7035f8df8bcd19d41ea355a4a775273b08e5 | This is the corpus file from the [BEIR benchmark](https://github.com/beir-cellar/beir) for the [TREC-COVID 19 dataset](https://ir.nist.gov/trec-covid/).
| nreimers/trec-covid | [
"region:us"
] | 2022-03-22T22:14:03+00:00 | {} | 2022-03-23T12:55:44+00:00 | [] | [] | TAGS
#region-us
| This is the corpus file from the BEIR benchmark for the TREC-COVID 19 dataset.
| [] | [
"TAGS\n#region-us \n"
] |
e3e19a9a95b3464d2aa336ccf473b4d1cc7de76b | # Dataset Card for Fake-news-latam-omdena
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector)
- **Repository:**[latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Since the Cambridge Analytica scandal, a Pandora's box has been opened around the world, bringing to light campaigns that even involve current Latin American leaders manipulating public opinion through social media to win elections. There is a common and simple pattern that combines platforms such as Facebook with fake news, allowing candidates to build a nefarious narrative for their own benefit. This is a growing concern for our democracies, as many of these practices have spread widely across the region while more and more people gain access to the internet. It is therefore necessary to be able to warn the population, and for that we have to be able to quickly spot these plots on the net before the damage is irreversible.
Therefore, an initial effort was made to collect this dataset, which gathers news from different sources in Mexico, Colombia and El Salvador, with the objective of training a classification model and deploying it as part of the Politics Fake News Detector in LATAM (Latin America) project [https://github.com/OmdenaAI/latam-chapters-news-detector].
Website articles and tweets were considered.
### Supported Tasks and Leaderboards
Binary fake news classification [with classes "True" and "Fake"]
### Languages
Spanish only
## Dataset Structure
### Data Instances
* Train: 2782
* Test: 310
### Data Fields
[More Information Needed]
### Data Splits
Train and test. Each split was generated with a stratified procedure in order to have the same proportion of fake news in both train and test.
Around 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'.
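The class balance described above can be checked after loading the dataset from the Hub. The label column name used below is an assumption — inspect `ds["train"].features` for the actual field names.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("IsaacRodgz/Fake-news-latam-omdena")

# "label" with values "True"/"Fake" is assumed here; adjust to the real column name.
for split in ds:
    counts = Counter(ds[split]["label"])
    print(split, counts)  # expect roughly 2/3 "True" and 1/3 "Fake" in each split
```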
## Dataset Creation
### Curation Rationale
For a more specific flow of how the labeling was done, follow this link: https://github.com/OmdenaAI/latam-chapters-news-detector/blob/main/Fake-news_Flowchart.pdf
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Once the capacity to detect irregularities in news activity on the internet is developed, we might be able to counter disinformation with the help of additional research. As the time spent looking for those occurrences is reduced, more time can be devoted to validating the results and uncovering the truth, enabling researchers, journalists and organizations to help people make an informed decision about whether a widely circulated claim is true, so that they can identify on their own when someone is trying to manipulate them for political gain.
If this matter isn’t tackled with enough urgency, we might see the rise of a new dark era in Latin American politics, where many unscrupulous parties and individuals manage to gain power and control the lives of many people.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset. | IsaacRodgz/Fake-news-latam-omdena | [
"region:us"
] | 2022-03-22T23:58:35+00:00 | {} | 2022-03-23T00:20:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for Fake-news-latam-omdena
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:latam-chapters-news-detector
- Repository:latam-chapters-news-detector
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Since the Cambridge Analytica scandal, a Pandora's box has been opened around the world, bringing to light campaigns that even involve current Latin American leaders manipulating public opinion through social media to win elections. There is a common and simple pattern that combines platforms such as Facebook with fake news, allowing candidates to build a nefarious narrative for their own benefit. This is a growing concern for our democracies, as many of these practices have spread widely across the region while more and more people gain access to the internet. It is therefore necessary to be able to warn the population, and for that we have to be able to quickly spot these plots on the net before the damage is irreversible.
Therefore, an initial effort was made to collect this dataset, which gathers news from different sources in Mexico, Colombia and El Salvador, with the objective of training a classification model and deploying it as part of the Politics Fake News Detector in LATAM (Latin America) project [URL
Website articles and tweets were considered.
### Supported Tasks and Leaderboards
Binary fake news classification [with classes "True" and "Fake"]
### Languages
Spanish only
## Dataset Structure
### Data Instances
* Train: 2782
* Test: 310
### Data Fields
### Data Splits
Train and test. Each split was generated with a stratified procedure in order to have the same proportion of fake news in both train and test.
Around 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'.
## Dataset Creation
### Curation Rationale
For a more specific flow of how the labeling was done, follow this link: URL
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
Once the capacity to detect irregularities in news activity on the internet is developed, we might be able to counter disinformation with the help of additional research. As the time spent looking for those occurrences is reduced, more time can be devoted to validating the results and uncovering the truth, enabling researchers, journalists and organizations to help people make an informed decision about whether a widely circulated claim is true, so that they can identify on their own when someone is trying to manipulate them for political gain.
If this matter isn’t tackled with enough urgency, we might see the rise of a new dark era in Latin American politics, where many unscrupulous parties and individuals manage to gain power and control the lives of many people.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset. | [
"# Dataset Card for Fake-news-latam-omdena",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:latam-chapters-news-detector\n- Repository:latam-chapters-news-detector\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nSince the Cambridge Analytica scandal a pandora box has been opened around the world, bringing to light campaigns even involving our current Latinamerica leaders manipulating public opinion through social media to win an election. There is a common and simple pattern that includes platforms such as facebook and fake news, where the candidates are able to build a nefarious narrative for their own benefit. This fact is a growing concern for our democracies, as many of these practices have been widely spread across the region and more people are gaining access to the internet. Thus, it is a necessity to be able to advise the population, and for that we have to be able to quickly spot these plots on the net before the damage is irreversible.\n\nTherefore, an initial effort was taken to collect this dataset which gathered news from different news sources in Mexico, Colombia and El Salvador. With the objective to train a classification model and deploy it as part of the Politics Fake News Detector in LATAM (Latin America) project [URL\n\nWebsite articles and tweets were considered.",
"### Supported Tasks and Leaderboards\n\nBinary fake news classification [with classes \"True\" and \"Fake\"]",
"### Languages\n\nSpanish only",
"## Dataset Structure",
"### Data Instances\n\n* Train: 2782\n* Test: 310",
"### Data Fields",
"### Data Splits\n\nTrain and test. Each split was generated with a stratified procedure in order to have the same proportion of fake news in both train and test.\n\nAround 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'.",
"## Dataset Creation",
"### Curation Rationale\n\nFor a more specific flow of how the labeling was done, follow this link: URL",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOnce the capacity to somewhat detect irregularities in the news activity on the internet is developed, we might be able to counter the disinformation with the help of additional research. As we reduce the time spent in looking for those occurrences, more time can be used in validating the results and uncovering the truth; enabling researchers, journalists and organizations to help people make an informed decision whether the public opinion is true or not, so that they can identify on their own if someone is trying to manipulate them for a certain political benefit.\nIf this matter isn’t tackled with enough urgency, we might see the rise of a new dark era in latin america politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Fake-news-latam-omdena",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:latam-chapters-news-detector\n- Repository:latam-chapters-news-detector\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nSince the Cambridge Analytica scandal a pandora box has been opened around the world, bringing to light campaigns even involving our current Latinamerica leaders manipulating public opinion through social media to win an election. There is a common and simple pattern that includes platforms such as facebook and fake news, where the candidates are able to build a nefarious narrative for their own benefit. This fact is a growing concern for our democracies, as many of these practices have been widely spread across the region and more people are gaining access to the internet. Thus, it is a necessity to be able to advise the population, and for that we have to be able to quickly spot these plots on the net before the damage is irreversible.\n\nTherefore, an initial effort was taken to collect this dataset which gathered news from different news sources in Mexico, Colombia and El Salvador. With the objective to train a classification model and deploy it as part of the Politics Fake News Detector in LATAM (Latin America) project [URL\n\nWebsite articles and tweets were considered.",
"### Supported Tasks and Leaderboards\n\nBinary fake news classification [with classes \"True\" and \"Fake\"]",
"### Languages\n\nSpanish only",
"## Dataset Structure",
"### Data Instances\n\n* Train: 2782\n* Test: 310",
"### Data Fields",
"### Data Splits\n\nTrain and test. Each split was generated with a stratified procedure in order to have the same proportion of fake news in both train and test.\n\nAround 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'.",
"## Dataset Creation",
"### Curation Rationale\n\nFor a more specific flow of how the labeling was done, follow this link: URL",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOnce the capacity to somewhat detect irregularities in the news activity on the internet is developed, we might be able to counter the disinformation with the help of additional research. As we reduce the time spent in looking for those occurrences, more time can be used in validating the results and uncovering the truth; enabling researchers, journalists and organizations to help people make an informed decision whether the public opinion is true or not, so that they can identify on their own if someone is trying to manipulate them for a certain political benefit.\nIf this matter isn’t tackled with enough urgency, we might see the rise of a new dark era in latin america politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset."
] |
abdbddf991d8dbc29adc013f26970ba3232fd712 | - Problem type: Summarization
languages:
- en
multilinguality:
- monolingual
task_ids:
- summarization
# MeQSum
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health Questions": https://www.aclweb.org/anthology/P19-1215
### Citation Information
```bibtex
@Inproceedings{MeQSum,
author = {Asma {Ben Abacha} and Dina Demner-Fushman},
title = {On the Summarization of Consumer Health Questions},
booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28th - August 2},
year = {2019},
abstract = {Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization. }}
``` | sumedh/MeQSum | [
"license:apache-2.0",
"region:us"
] | 2022-03-23T04:21:51+00:00 | {"license": "apache-2.0"} | 2022-03-24T20:20:43+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| - Problem type: Summarization
languages:
- en
multilinguality:
- monolingual
task_ids:
- summarization
# MeQSum
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health Questions": URL
| [
"# MeQSum\r\nDataset for medical question summarization introduced in the ACL 2019 paper \"On the Summarization of Consumer Health Questions\": URL"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# MeQSum\r\nDataset for medical question summarization introduced in the ACL 2019 paper \"On the Summarization of Consumer Health Questions\": URL"
] |
686600ca31931048cbb9f0acc98b83c29d0036b9 |
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed with a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of possible parallel operations at a given time bounded by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with the goclassy pipeline one does not have to wait for a whole WET file to download, decompress and classify before starting to download and process the next one; a new file starts downloading and being processed as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
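The per-line filtering and language-identification step can be approximated in Python with the pre-trained fastText language-identification model (`lid.176.bin` from fasttext.cc). This is a simplified, single-threaded sketch of the idea, not the actual parallel Go implementation.

```python
import fasttext

# Pre-trained fastText language-identification model (download lid.176.bin from fasttext.cc).
lid_model = fasttext.load_model("lid.176.bin")

def classify_lines(raw_lines):
    """Yield (language, line) pairs; raw_lines is an iterable of byte strings, e.g. lines of a WET file."""
    for raw in raw_lines:
        try:
            line = raw.decode("utf-8").strip()
        except UnicodeDecodeError:
            continue                       # lines with invalid UTF-8 are discarded
        if len(line) < 100:                # lines shorter than 100 UTF-8 characters are discarded
            continue
        labels, _ = lid_model.predict(line.replace("\n", " "))
        yield labels[0].replace("__label__", ""), line
```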
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| nthngdy/oscar-small | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"region:us"
] | 2022-03-23T09:26:03+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "am", "ar", "arz", "as", "az", "azb", "ba", "be", "bg", "bn", "bo", "br", "ca", "ce", "ceb", "ckb", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mhr", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nds", "ne", "nl", "nn", "no", "or", "os", "pa", "pl", "pnb", "ps", "pt", "ro", "ru", "sa", "sah", "sd", "sh", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "yi", "zh"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["oscar"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR"} | 2023-03-08T09:57:45+00:00 | [
"2010.14571"
] | [
"af",
"am",
"ar",
"arz",
"as",
"az",
"azb",
"ba",
"be",
"bg",
"bn",
"bo",
"br",
"ca",
"ce",
"ceb",
"ckb",
"cs",
"cv",
"cy",
"da",
"de",
"dv",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gl",
"gu",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mhr",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"nds",
"ne",
"nl",
"nn",
"no",
"or",
"os",
"pa",
"pl",
"pnb",
"ps",
"pt",
"ro",
"ru",
"sa",
"sah",
"sd",
"sh",
"si",
"sk",
"sl",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tk",
"tl",
"tr",
"tt",
"ug",
"uk",
"ur",
"uz",
"vi",
"yi",
"zh"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #source_datasets-oscar #language-Afrikaans #language-Amharic #language-Arabic #language-Egyptian Arabic #language-Assamese #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Belarusian #language-Bulgarian #language-Bengali #language-Tibetan #language-Breton #language-Catalan #language-Chechen #language-Cebuano #language-Central Kurdish #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Luxembourgish #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Low German #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Western Panjabi #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Yakut #language-Sindhi #language-Serbo-Croatian #language-Sinhala #language-Slovak #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkmen #language-Tagalog #language-Turkish #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Yiddish #language-Chinese #license-cc0-1.0 #arxiv-2010.14571 #region-us
|
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
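As a quick orientation for users of this extract, here is a minimal loading sketch with the `datasets` library. The configuration name follows the original OSCAR naming scheme (`unshuffled_deduplicated_<lang>`), but it is an assumption here; check the repository's configuration list before relying on it.

```python
# Hedged sketch: loading one language configuration of this OSCAR extract.
# The config name and the "text" field follow OSCAR conventions and are assumed,
# not confirmed by this card.
from datasets import load_dataset

dataset = load_dataset(
    "nthngdy/oscar-small",
    "unshuffled_deduplicated_en",  # assumed config name; verify against the repository
    split="train",
)

print(dataset)
print(dataset[0]["text"][:200])  # first 200 characters of the first document
```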
# Dataset Card for "oscar"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Point of Contact:
### Dataset Summary
OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from fastText's, called _goclassy_. Goclassy reuses the fastText linear classifier and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the Go programming language, so it lets the Go runtime handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one; a new file will start downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
### Source Data
#### Initial Data Collection and Normalization
Common Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and URL policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the November 2018 snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with their metadata headers.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The fastText linear classifier is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by third parties.
## Additional Information
### Dataset Curators
The corpus was put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") URL
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Contributions
Thanks to @pjox and @lhoestq for adding this dataset.
| [
"## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.\nUsing this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.",
"# Dataset Card for \"oscar\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact:",
"### Dataset Summary\n\nOSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form.",
"### Supported Tasks and Leaderboards\n\nOSCAR is mainly inteded to pretrain language models and word represantations.",
"### Languages\n\nAll the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.",
"## Dataset Structure\n\nWe show detailed information for all the configurations of the dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nOSCAR was constructed new pipeline derived from the fastText's one, called _goclassy_. Goclassy reuses the fastText linear classifier and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.\n\nThe order of operations is more or less the same as in the fastText pre-processing pipeline but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation but bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the Go programming language so it lets the Go runtime handle the scheduling of the processes. Thus the goclassy's pipeline one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one, a new file will start downloading and processing as soon as the scheduler is able to allocate a new process.\n\nFiltering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarted and are not classified. After all files are proccesed the deduplicated versions are constructed and everything is then splitted in shards and compressed.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nCommon Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and URL policies.\n\nEach monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.\n\nTo construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the November 2018 snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.",
"#### Who are the source language producers?\n\nThe data comes from multiple web pages in a large variety of languages.",
"### Annotations\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nBeing constructed from Common Crawl, Personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, specially in the case of text-generation models.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.",
"### Discussion of Biases\n\nOSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.",
"### Other Known Limitations\n\nThe fastText linear classifier is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by third parties.",
"## Additional Information",
"### Dataset Curators\n\nThe corpus was put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.",
"### Licensing Information\n\n These data are released under this licensing scheme\n We do not own any of the text from which these data has been extracted.\n We license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\n This work is published from: France.\n\n Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\n We will comply to legitimate requests by removing the affected sources from the next release of the corpus.",
"### Contributions\n\nThanks to @pjox and @lhoestq for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #source_datasets-oscar #language-Afrikaans #language-Amharic #language-Arabic #language-Egyptian Arabic #language-Assamese #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Belarusian #language-Bulgarian #language-Bengali #language-Tibetan #language-Breton #language-Catalan #language-Chechen #language-Cebuano #language-Central Kurdish #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Luxembourgish #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Low German #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Western Panjabi #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Yakut #language-Sindhi #language-Serbo-Croatian #language-Sinhala #language-Slovak #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Turkmen #language-Tagalog #language-Turkish #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Yiddish #language-Chinese #license-cc0-1.0 #arxiv-2010.14571 #region-us \n",
"## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.\nUsing this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.",
"# Dataset Card for \"oscar\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact:",
"### Dataset Summary\n\nOSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form.",
"### Supported Tasks and Leaderboards\n\nOSCAR is mainly inteded to pretrain language models and word represantations.",
"### Languages\n\nAll the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection Data Splits Sample Size provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.",
"## Dataset Structure\n\nWe show detailed information for all the configurations of the dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nOSCAR was constructed new pipeline derived from the fastText's one, called _goclassy_. Goclassy reuses the fastText linear classifier and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.\n\nThe order of operations is more or less the same as in the fastText pre-processing pipeline but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation but bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the Go programming language so it lets the Go runtime handle the scheduling of the processes. Thus the goclassy's pipeline one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one, a new file will start downloading and processing as soon as the scheduler is able to allocate a new process.\n\nFiltering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarted and are not classified. After all files are proccesed the deduplicated versions are constructed and everything is then splitted in shards and compressed.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nCommon Crawl is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected nofollow and URL policies.\n\nEach monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.\n\nTo construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the November 2018 snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.",
"#### Who are the source language producers?\n\nThe data comes from multiple web pages in a large variety of languages.",
"### Annotations\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nBeing constructed from Common Crawl, Personal and sensitive information might be present. This must be considered before training deep learning models with OSCAR, specially in the case of text-generation models.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.",
"### Discussion of Biases\n\nOSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.",
"### Other Known Limitations\n\nThe fastText linear classifier is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by third parties.",
"## Additional Information",
"### Dataset Curators\n\nThe corpus was put together by Pedro J. Ortiz, Benoît Sagot, and Laurent Romary, during work done at Inria, particularly at the ALMAnaCH team.",
"### Licensing Information\n\n These data are released under this licensing scheme\n We do not own any of the text from which these data has been extracted.\n We license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\n This work is published from: France.\n\n Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\n We will comply to legitimate requests by removing the affected sources from the next release of the corpus.",
"### Contributions\n\nThanks to @pjox and @lhoestq for adding this dataset."
] |
f2d4e2258fe51ace7062bbeb2a55ad1e890d1c72 |
# Licensing information
Apple MIT License (AML). | 10zinten/op_classical_corpus_bo | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:bo",
"license:other",
"region:us"
] | 2022-03-23T10:53:48+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["bo"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"], "pretty_name": "Tibetan Classical Buddhist Text Corpus"} | 2022-10-23T04:21:37+00:00 | [] | [
"bo"
] | TAGS
#task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|other #language-Tibetan #license-other #region-us
|
# Licensing information
Apple MIT License (AML). | [
"# Licensing information\n\nApple MIT License (AML)."
] | [
"TAGS\n#task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|other #language-Tibetan #license-other #region-us \n",
"# Licensing information\n\nApple MIT License (AML)."
] |
50da653240bbb86afedf9d408eae3c2f80aa646f | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648048960 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-23T15:22:40+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-23T15:22:42+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test name
| [
"# GEM Submission\n\nSubmission name: This is a test name"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test name"
] |
f5cb9a55f7c4c9e07fa812b2dc21846fc3ffeb78 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/facades | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-23T16:23:02+00:00 | {} | 2022-04-12T12:57:03+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
0d523b5d1a1ab77e4a3a4d86b9cbb3b432dc804d | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/. | huggan/maps | [
"region:us"
] | 2022-03-23T17:05:03+00:00 | {} | 2022-04-12T12:54:14+00:00 | [] | [] | TAGS
#region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL | [] | [
"TAGS\n#region-us \n"
] |
0eae7d40a244b73068f68bb8f0fd7e456fceb66b | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/cityscapes | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-23T20:09:01+00:00 | {} | 2022-04-12T12:56:44+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
199e90c44cdbc4f8323367513796e07f75df272f | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/ae_photos | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-23T21:00:46+00:00 | {} | 2022-04-12T12:56:12+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
33b406a36ec2927e14c7afd65c598cc57ba77701 | ### dataset-list
The datasets in this dataset repository are from the public datasets DeepMatcher, Magellan and WDC, which cover a variety of domains, such as product, citation and restaurant. Each dataset contains entities from two relational tables with multiple attributes, and a set of labeled matching/non-matching entity pairs; a generic sketch of one such labeled pair follows the table below.
| dataset_name | domain |
| -------------- | ----------- |
| abt_buy | Product |
| amazon_google | Product |
| anime | Anime |
| beer | Product |
| books2 | Book |
| books4 | Book |
| cameras | WDC-Product |
| computers | WDC-Product |
| cosmetics | Cosmetics |
| dblp_acm | Citation |
| dblp_scholar | Citation |
| ebooks1 | eBook |
| fodors_zagat | Restaurant |
| itunes_amazon | Music |
| movies1 | Movie |
| restaurants1 | Restaurant |
| restaurants3 | Restaurant |
| restaurants4 | Restaurant |
| shoes | WDC-Product |
| walmart_amazon | Product |
| watches | WDC-Product |
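Since this card does not document the on-disk layout of these benchmarks, the following is only a generic, hypothetical sketch of the structure described above: two relational tables of entities plus a set of labeled matching/non-matching pairs. All column names and values are made up for illustration.

```python
# Hedged sketch of the structure described above: two entity tables plus labeled pairs.
# All column names and values are hypothetical; the card does not specify the file layout.
import pandas as pd

table_a = pd.DataFrame([
    {"id": 0, "title": "Sony WH-1000XM4 Wireless Headphones", "price": 348.00},
])
table_b = pd.DataFrame([
    {"id": 7, "title": "Sony WH1000XM4 Noise Cancelling Headset", "price": 349.99},
])

# Each labeled pair references one entity from each table plus a label (1 = match, 0 = non-match).
labeled_pairs = pd.DataFrame([
    {"ltable_id": 0, "rtable_id": 7, "label": 1},
])

pair = labeled_pairs.iloc[0]
left = table_a.set_index("id").loc[pair["ltable_id"]]
right = table_b.set_index("id").loc[pair["rtable_id"]]
print(left["title"], "<->", right["title"], "| label:", pair["label"])
```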
| RUC-DataLab/ER-dataset | [
"region:us"
] | 2022-03-24T01:49:22+00:00 | {} | 2022-07-05T06:58:55+00:00 | [] | [] | TAGS
#region-us
| ### dataset-list
The datasets in this dataset repository are from the public datasets DeepMatcher, Magellan and WDC, which cover a variety of domains, such as product, citation and restaurant. Each dataset contains entities from two relational tables with multiple attributes, and a set of labeled matching/non-matching entity pairs.
| [
"### dataset-list\n\n\nThe datasets in this dataset repository are from public datasets DeepMatcher,Magellan and WDC, which cover a variety of domains, such as product, citation and restaurant. Each dataset contains entities from two relational tables with multiple attributes, and a set of labeled matching/non-matching entity pairs."
] | [
"TAGS\n#region-us \n",
"### dataset-list\n\n\nThe datasets in this dataset repository are from public datasets DeepMatcher,Magellan and WDC, which cover a variety of domains, such as product, citation and restaurant. Each dataset contains entities from two relational tables with multiple attributes, and a set of labeled matching/non-matching entity pairs."
] |
11391706f44d60008a984b20fbc2a16ce392fa87 |
# liv4ever v1
This is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.
The corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:
* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;
* liv-lv: 10'388,
* liv-et: 10'378
* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;
* liv-lv: 842,
* liv-et: 685
* Poetry - the poetry collection book "Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska", with Estonian translations;
* liv-et: 770
* Vääri - the book by Eduard Vääri about Livonian language and culture;
* liv-et: 592
* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;
* liv-en: 380,
* liv-lv: 414,
* liv-et: 413
* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;
* liv-en: 123,
* liv-lv: 124,
* liv-et: 7
* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;
* liv-en: 36,
* liv-et: 49
* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;
* liv-lv: 51,
* liv-et: 53
* Songs - material crawled off lyricstranslate.com;
* liv-en: 54,
* liv-lv: 54,
* liv-fr: 31 | tartuNLP/liv4ever | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"language:liv",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"region:us"
] | 2022-03-24T07:40:49+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "liv"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "Liv4ever", "language_bcp47": ["en-US", "liv"], "tags": ["conditional-text-generation"]} | 2022-10-25T11:30:49+00:00 | [] | [
"en",
"liv"
] | TAGS
#task_categories-text2text-generation #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-unknown #source_datasets-original #language-English #language-Liv #license-cc-by-nc-sa-4.0 #conditional-text-generation #region-us
|
# liv4ever v1
This is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.
The corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:
* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;
* liv-lv: 10'388,
* liv-et: 10'378
* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;
* liv-lv: 842,
* liv-et: 685
* Poetry - the poetry collection book "Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska", with Estonian translations;
* liv-et: 770
* Vääri - the book by Eduard Vääri about Livonian language and culture;
* liv-et: 592
* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;
* liv-en: 380,
* liv-lv: 414,
* liv-et: 413
* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;
* liv-en: 123,
* liv-lv: 124,
* liv-et: 7
* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;
* liv-en: 36,
* liv-et: 49
* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;
* liv-lv: 51,
* liv-et: 53
* Songs - material crawled off URL;
* liv-en: 54,
* liv-lv: 54,
* liv-fr: 31 | [
"# liv4ever v1\n\nThis is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.\n\nThe corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:\n\n* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;\n * liv-lv: 10'388, \n * liv-et: 10'378\n* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;\n * liv-lv: 842, \n * liv-et: 685\n* Poetry - the poetry collection book \"Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska\", with Estonian translations;\n * liv-et: 770\n* Vääri - the book by Eduard Vääri about Livonian language and culture;\n * liv-et: 592\n* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;\n * liv-en: 380, \n * liv-lv: 414, \n * liv-et: 413\n* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;\n * liv-en: 123, \n * liv-lv: 124, \n * liv-et: 7\n* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;\n * liv-en: 36, \n * liv-et: 49\n* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;\n * liv-lv: 51, \n * liv-et: 53\n* Songs - material crawled off URL;\n * liv-en: 54, \n * liv-lv: 54, \n * liv-fr: 31"
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-unknown #source_datasets-original #language-English #language-Liv #license-cc-by-nc-sa-4.0 #conditional-text-generation #region-us \n",
"# liv4ever v1\n\nThis is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.\n\nThe corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:\n\n* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;\n * liv-lv: 10'388, \n * liv-et: 10'378\n* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;\n * liv-lv: 842, \n * liv-et: 685\n* Poetry - the poetry collection book \"Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska\", with Estonian translations;\n * liv-et: 770\n* Vääri - the book by Eduard Vääri about Livonian language and culture;\n * liv-et: 592\n* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;\n * liv-en: 380, \n * liv-lv: 414, \n * liv-et: 413\n* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;\n * liv-en: 123, \n * liv-lv: 124, \n * liv-et: 7\n* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;\n * liv-en: 36, \n * liv-et: 49\n* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;\n * liv-lv: 51, \n * liv-et: 53\n* Songs - material crawled off URL;\n * liv-en: 54, \n * liv-lv: 54, \n * liv-fr: 31"
] |
639b92ed9b6f2c613185744d5e0d145e24b070b4 | # NQ-retrieval
This is a nicely formatted version of the [Natural Questions](https://ai.google.com/research/NaturalQuestions/) dataset, formatted to train and evaluate retrieval systems.
Each row contains the following entries:
- **question**: Original question sent to the Google search engine
- **title**: Title of Wikipedia article
- **candidates**: A list with the passages from the original Wikipedia HTML document
- **passage_types**: Types (text, table, list) of the candidate passages
- **long_answers**: IDs of the candidate passages that were selected as relevant by annotators. Might be empty if no relevant passage has been identified
- **document_url** | sentence-transformers/NQ-retrieval | [
"region:us"
] | 2022-03-24T08:17:51+00:00 | {} | 2022-03-24T08:18:36+00:00 | [] | [] | TAGS
#region-us
| #NQ-retrieval
This is a nicely formatted version of the Natural Questions dataset, formatted to train and evaluate retrieval systems.
Each row contains the following entries:
- question: Original question send for Google Search Engine
- title: Title of Wikipedia article
- candidates: A list with the passages from the original Wikipedia HTML document
- passage_types: Types (text, table, list) of the candidate passages
- long_answers: IDs which candidate passages where selected as relevant from annotators. Might be empty if no relevant passage has been identified
- document_url | [] | [
"TAGS\n#region-us \n"
] |
1c3412bf9133897681719c058d1dbcc9221e89f1 | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648111972 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-24T08:52:52+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-24T08:52:55+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test name
| [
"# GEM Submission\n\nSubmission name: This is a test name"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test name"
] |
c5589b86803b9ab1a3747272f6a729f5c22b1e1e | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648137608 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-24T16:00:09+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-24T16:00:11+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test name
| [
"# GEM Submission\n\nSubmission name: This is a test name"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test name"
] |
5a3e6f4f6abdc2088363b7d4ececec81b3c8a053 | # Dataset Card for SpanishNLP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Spanish Poems and their Authors and titles
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | wesamhaddad14/spanishNLP | [
"region:us"
] | 2022-03-24T16:36:16+00:00 | {} | 2022-03-24T16:46:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for SpanishNLP
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Spanish Poems and their Authors and titles
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for SpanishNLP",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nSpanish Poems and their Authors and titles",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for SpanishNLP",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nSpanish Poems and their Authors and titles",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
0e7601c463d3048563ea8017d7162279f56333b1 |
# Dataset Card for SciDTB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PKU-TANGENT/SciDTB
- **Repository:** https://github.com/PKU-TANGENT/SciDTB
- **Paper:** https://aclanthology.org/P18-2071/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
SciDTB is a domain-specific discourse treebank annotated on scientific articles written in English. Unlike the widely used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but does not sacrifice structural integrity. Furthermore, this treebank serves as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English.
## Dataset Structure
### Data Instances
A typical data point consists of `root`, which is a list of nodes in the dependency tree. Each node in the list has four fields: `id`, an identifier for the node; `parent`, the id of the parent node; `text`, the text span covered by the node; and `relation`, the discourse relation between the node and its parent.
An example from SciDTB train set is given below:
```
{
"root": [
{
"id": 0,
"parent": -1,
"text": "ROOT",
"relation": "null"
},
{
"id": 1,
"parent": 0,
"text": "We propose a neural network approach ",
"relation": "ROOT"
},
{
"id": 2,
"parent": 1,
"text": "to benefit from the non-linearity of corpus-wide statistics for part-of-speech ( POS ) tagging . <S>",
"relation": "enablement"
},
{
"id": 3,
"parent": 1,
"text": "We investigated several types of corpus-wide information for the words , such as word embeddings and POS tag distributions . <S>",
"relation": "elab-aspect"
},
{
"id": 4,
"parent": 5,
"text": "Since these statistics are encoded as dense continuous features , ",
"relation": "cause"
},
{
"id": 5,
"parent": 3,
"text": "it is not trivial to combine these features ",
"relation": "elab-addition"
},
{
"id": 6,
"parent": 5,
"text": "comparing with sparse discrete features . <S>",
"relation": "comparison"
},
{
"id": 7,
"parent": 1,
"text": "Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network ",
"relation": "elab-aspect"
},
{
"id": 8,
"parent": 7,
"text": "that captures the non-linear interactions among the continuous features . <S>",
"relation": "elab-addition"
},
{
"id": 9,
"parent": 10,
"text": "By using several recent advances in the activation functions for neural networks , ",
"relation": "manner-means"
},
{
"id": 10,
"parent": 1,
"text": "the proposed method marks new state-of-the-art accuracies for English POS tagging tasks . <S>",
"relation": "evaluation"
}
]
}
```
More such raw data instances can be found [here](https://github.com/PKU-TANGENT/SciDTB/tree/master/dataset)
### Data Fields
- id: an integer identifier for the node
- parent: an integer identifier for the parent node
- text: a string containing text for the current node
- relation: a string representing discourse relation between current node and parent node
### Data Splits
The dataset consists of three splits: `train`, `dev`, and `test`.

| Train | Valid | Test |
| ----- | ----- | ---- |
| 743   | 154   | 152  |
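A minimal usage sketch follows; it assumes the dataset can be loaded from the Hub under `DFKI-SLT/scidtb` (the repository id of this card), and it normalizes the `root` field because, depending on how the features are declared, it may come back either as a list of node dicts or as a columnar dict of parallel lists.

```python
from datasets import load_dataset

# Repository id taken from this card; a config name or extra arguments
# may be needed depending on the loading script.
scidtb = load_dataset("DFKI-SLT/scidtb")
example = scidtb["train"][0]

nodes = example["root"]
if isinstance(nodes, dict):
    # Columnar form (dict of parallel lists) -> list of node dicts.
    nodes = [dict(zip(nodes, values)) for values in zip(*nodes.values())]

# Rebuild a parent -> children map and print the discourse dependency tree.
children = {}
for node in nodes:
    children.setdefault(node["parent"], []).append(node)

def show(node_id, depth=0):
    for child in children.get(node_id, []):
        print("  " * depth + f"[{child['relation']}] {child['text']}")
        show(child["id"], depth + 1)

show(-1)  # the artificial ROOT node has parent -1
```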
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
More information can be found [here](https://aclanthology.org/P18-2071/)
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{yang-li-2018-scidtb,
title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
author = "Yang, An and
Li, Sujian",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-2071",
doi = "10.18653/v1/P18-2071",
pages = "444--449",
abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.",
}
``` | DFKI-SLT/scidtb | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-03-25T09:07:59+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["parsing"], "pretty_name": "Scientific Dependency Tree Bank", "language_bcp47": ["en-US"]} | 2022-10-25T05:38:25+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-parsing #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #region-us
| Dataset Card for SciDTB
=======================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact:
### Dataset Summary
SciDTB is a domain-specific discourse treebank annotated on scientific articles written in English-language. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. Furthermore, this treebank is made as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.
### Supported Tasks and Leaderboards
### Languages
English.
Dataset Structure
-----------------
### Data Instances
A typical data point consist of 'root' which is a list of nodes in dependency tree. Each node in the list has four fields: 'id' containing id for the node, 'parent' contains id of the parent node, 'text' refers to the span that is part of the current node and finally 'relation' represents relation between current node and parent node.
An example from SciDTB train set is given below:
More such raw data instance can be found here
### Data Fields
* id: an integer identifier for the node
* parent: an integer identifier for the parent node
* text: a string containing text for the current node
* relation: a string representing discourse relation between current node and parent node
### Data Splits
Dataset consists of three splits: 'train', 'dev' and 'test'.
Train: 743, Valid: 154, Test: 152
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
More information can be found here
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
| [
"### Dataset Summary\n\n\nSciDTB is a domain-specific discourse treebank annotated on scientific articles written in English-language. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. Furthermore, this treebank is made as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point consist of 'root' which is a list of nodes in dependency tree. Each node in the list has four fields: 'id' containing id for the node, 'parent' contains id of the parent node, 'text' refers to the span that is part of the current node and finally 'relation' represents relation between current node and parent node.\n\n\nAn example from SciDTB train set is given below:\n\n\nMore such raw data instance can be found here",
"### Data Fields\n\n\n* id: an integer identifier for the node\n* parent: an integer identifier for the parent node\n* text: a string containing text for the current node\n* relation: a string representing discourse relation between current node and parent node",
"### Data Splits\n\n\nDataset consists of three splits: 'train', 'dev' and 'test'.\n\n\nTrain: 743, Valid: 154, Test: 152\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nMore information can be found here",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#task_categories-token-classification #task_ids-parsing #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #region-us \n",
"### Dataset Summary\n\n\nSciDTB is a domain-specific discourse treebank annotated on scientific articles written in English-language. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. Furthermore, this treebank is made as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point consist of 'root' which is a list of nodes in dependency tree. Each node in the list has four fields: 'id' containing id for the node, 'parent' contains id of the parent node, 'text' refers to the span that is part of the current node and finally 'relation' represents relation between current node and parent node.\n\n\nAn example from SciDTB train set is given below:\n\n\nMore such raw data instance can be found here",
"### Data Fields\n\n\n* id: an integer identifier for the node\n* parent: an integer identifier for the parent node\n* text: a string containing text for the current node\n* relation: a string representing discourse relation between current node and parent node",
"### Data Splits\n\n\nDataset consists of three splits: 'train', 'dev' and 'test'.\n\n\nTrain: 743, Valid: 154, Test: 152\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nMore information can be found here",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information"
] |
1eddac63112eee1fdf1966e0bca27a5ff248c772 | ## Overview
The original dataset can be found [here](https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0)
while the Github repo is [here](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md).
This dataset has been proposed in [Combining fact extraction and verification with neural semantic matching networks](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016859) and was created as a modification of FEVER.

In the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label.
However, this differs from the standard NLI formalization, which is essentially a *pair-of-sequence to label* problem.
To let NLI-related research take advantage of the FEVER dataset, the authors pair the claims in the FEVER dataset
with the textual evidence, turning it into a *pair-of-sequence to label* formatted dataset.
## Dataset curation
The label mapping follows the paper and is the following
```python
mapping = {
"SUPPORTS": 0, # entailment
"NOT ENOUGH INFO": 1, # neutral
"REFUTES": 2, # contradiction
}
```
Also, the "verifiable" column has been encoded as follows
```python
mapping = {"NOT VERIFIABLE": 0, "VERIFIABLE": 1}
```
Finally, a consistency check with the labels reported in the original FEVER dataset is performed.
NOTE: no label is available for the "test" split.
NOTE: there are 3 instances in common between `dev` and `train` splits.
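Before the full generation script below, here is a minimal usage sketch; the repository id `pietrolesci/nli_fever` is the one used in the `push_to_hub` call of that script, and the split and column names follow the description above.

```python
from datasets import load_dataset

nli_fever = load_dataset("pietrolesci/nli_fever")

# Labels follow the usual NLI convention described above:
# 0 = entailment (SUPPORTS), 1 = neutral (NOT ENOUGH INFO), 2 = contradiction (REFUTES).
print(nli_fever["train"].features["label"].names)

# The test split carries no gold labels ("label" and "verifiable" are filled
# with -1), so keep it out of any evaluation loop.
labelled_splits = {name: ds for name, ds in nli_fever.items() if name != "test"}

example = nli_fever["train"][0]
print(example["premise"][:80], "|", example["hypothesis"][:80], "->", example["label"])
```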
## Code to generate the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, load_dataset, Value, Features, DatasetDict
import json
# download data from https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0
paths = {
"train": "<some_path>/nli_fever/train_fitems.jsonl",
"validation": "<some_path>/nli_fever/dev_fitems.jsonl",
"test": "<some_path>/nli_fever/test_fitems.jsonl",
}
# parsing code from https://github.com/facebookresearch/anli/blob/main/src/utils/common.py
registered_jsonabl_classes = {}
def register_class(cls):
global registered_jsonabl_classes
if cls not in registered_jsonabl_classes:
registered_jsonabl_classes.update({cls.__name__: cls})
def unserialize_JsonableObject(d):
global registered_jsonabl_classes
classname = d.pop("_jcls_", None)
if classname:
cls = registered_jsonabl_classes[classname]
obj = cls.__new__(cls) # Make instance without calling __init__
for key, value in d.items():
setattr(obj, key, value)
return obj
else:
return d
def load_jsonl(filename, debug_num=None):
d_list = []
with open(filename, encoding="utf-8", mode="r") as in_f:
print("Load Jsonl:", filename)
for line in in_f:
item = json.loads(line.strip(), object_hook=unserialize_JsonableObject)
d_list.append(item)
if debug_num is not None and 0 < debug_num == len(d_list):
break
return d_list
def get_original_fever() -> pd.DataFrame:
"""Get original fever datasets."""
fever_v1 = load_dataset("fever", "v1.0")
fever_v2 = load_dataset("fever", "v2.0")
columns = ["id", "label"]
splits = ["paper_test", "paper_dev", "labelled_dev", "train"]
list_dfs = [fever_v1[split].to_pandas()[columns] for split in splits]
list_dfs.append(fever_v2["validation"].to_pandas()[columns])
dfs = pd.concat(list_dfs, ignore_index=False)
dfs = dfs.drop_duplicates()
dfs = dfs.rename(columns={"label": "fever_gold_label"})
return dfs
def load_and_process(path: str, fever_df: pd.DataFrame) -> pd.DataFrame:
"""Load data split and merge with fever."""
df = pd.DataFrame(load_jsonl(path))
df = df.rename(columns={"query": "premise", "context": "hypothesis"})
# adjust dtype
df["cid"] = df["cid"].astype(int)
# merge with original fever to get labels
df = pd.merge(df, fever_df, left_on="cid", right_on="id", how="inner").drop_duplicates()
return df
def encode_labels(df: pd.DataFrame) -> pd.DataFrame:
"""Encode labels using the mapping used in SNLI and MultiNLI"""
mapping = {
"SUPPORTS": 0, # entailment
"NOT ENOUGH INFO": 1, # neutral
"REFUTES": 2, # contradiction
}
df["label"] = df["fever_gold_label"].map(mapping)
# verifiable
df["verifiable"] = df["verifiable"].map({"NOT VERIFIABLE": 0, "VERIFIABLE": 1})
return df
if __name__ == "__main__":
fever_df = get_original_fever()
dataset_splits = {}
for split, path in paths.items():
# from json to dataframe and merge with fever
df = load_and_process(path, fever_df)
if not len(df) > 0:
print(f"Split `{split}` has no matches")
continue
if split == "train":
# train must have same labels
assert sum(df["fever_gold_label"] != df["label"]) == 0
# encode labels using the default mapping used by other nli datasets
# i.e, entailment: 0, neutral: 1, contradiction: 2
df = df.drop(columns=["label"])
df = encode_labels(df)
# cast to dataset
features = Features(
{
"cid": Value(dtype="int64", id=None),
"fid": Value(dtype="string", id=None),
"id": Value(dtype="int32", id=None),
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"verifiable": Value(dtype="int64", id=None),
"fever_gold_label": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
if "test" in path:
# no features for test set
df["label"] = -1
df["verifiable"] = -1
df["fever_gold_label"] = "not available"
dataset = Dataset.from_pandas(df, features=features)
dataset_splits[split] = dataset
nli_fever = DatasetDict(dataset_splits)
nli_fever.push_to_hub("pietrolesci/nli_fever", token="<your token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(dataset_splits.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
dataset_splits[i].to_pandas(),
dataset_splits[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> train - dev: 3
#> train - test: 0
#> dev - test: 0
``` | pietrolesci/nli_fever | [
"region:us"
] | 2022-03-25T10:01:17+00:00 | {} | 2022-04-25T08:03:28+00:00 | [] | [] | TAGS
#region-us
| ## Overview
The original dataset can be found here
while the Github repo is here.
This dataset has been proposed in Combining fact extraction and verification with neural semantic matching networks. This dataset has been created as a modification
of FEVER.
In the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label.
However, this is different from the standard NLI formalization which is basically a *pair-of-sequence to label* problem.
To facilitate NLI-related research to take advantage of the FEVER dataset, the authors pair the claims in the FEVER dataset
with the textual evidence and make it a *pair-of-sequence to label* formatted dataset.
## Dataset curation
The label mapping follows the paper and is the following
Also, the "verifiable" column has been encoded as follows
Finally, a consistency check with the labels reported in the original FEVER dataset is performed.
NOTE: no label is available for the "test" split.
NOTE: there are 3 instances in common between 'dev' and 'train' splits.
## Code to generate the dataset
| [
"## Overview\nThe original dataset can be found here\nwhile the Github repo is here.\n\nThis dataset has been proposed in Combining fact extraction and verification with neural semantic matching networks. This dataset has been created as a modification\nof FEVER.\n\nIn the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label. \nHowever, this is different from the standard NLI formalization which is basically a *pair-of-sequence to label* problem.\nTo facilitate NLI-related research to take advantage of the FEVER dataset, the authors pair the claims in the FEVER dataset\nwith the textual evidence and make it a *pair-of-sequence to label* formatted dataset.",
"## Dataset curation\nThe label mapping follows the paper and is the following \n\n\n\nAlso, the \"verifiable\" column has been encoded as follows\n\n\n\nFinally, a consistency check with the labels reported in the original FEVER dataset is performed.\n\nNOTE: no label is available for the \"test\" split.\nNOTE: there are 3 instances in common between 'dev' and 'train' splits.",
"## Code to generate the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\nThe original dataset can be found here\nwhile the Github repo is here.\n\nThis dataset has been proposed in Combining fact extraction and verification with neural semantic matching networks. This dataset has been created as a modification\nof FEVER.\n\nIn the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label. \nHowever, this is different from the standard NLI formalization which is basically a *pair-of-sequence to label* problem.\nTo facilitate NLI-related research to take advantage of the FEVER dataset, the authors pair the claims in the FEVER dataset\nwith the textual evidence and make it a *pair-of-sequence to label* formatted dataset.",
"## Dataset curation\nThe label mapping follows the paper and is the following \n\n\n\nAlso, the \"verifiable\" column has been encoded as follows\n\n\n\nFinally, a consistency check with the labels reported in the original FEVER dataset is performed.\n\nNOTE: no label is available for the \"test\" split.\nNOTE: there are 3 instances in common between 'dev' and 'train' splits.",
"## Code to generate the dataset"
] |
a3923be8c49a8c8b5025737e64919faecc7576a7 | ## Overview
The original dataset can be found [here](https://github.com/swarnaHub/ConjNLI). It has been
proposed in [ConjNLI: Natural Language Inference Over Conjunctive Sentences](https://aclanthology.org/2020.emnlp-main.661/).
This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise differs from the hypothesis by conjuncts removed, added, or replaced.
## Dataset curation
The label mapping is the usual `{"entailment": 0, "neutral": 1, "contradiction": 2}`
used in NLI datasets. Note that labels for the `test` split are not available.
Also, the `train` split is originally named `adversarial_train_15k`.

There are 2 instances (join on "premise", "hypothesis", "label") present in both `train` and `dev`.

Finally, in the `train` set there are a few instances without a label; they are removed.
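For orientation, a small usage sketch is given here; the repository id `pietrolesci/conj_nli` is the one used in the `push_to_hub` call of the script below, and the split names (`train`, `dev`, `test`) follow that script.

```python
from collections import Counter
from datasets import load_dataset

conj_nli = load_dataset("pietrolesci/conj_nli")

train = conj_nli["train"]
label_names = train.features["label"].names  # ['entailment', 'neutral', 'contradiction']

# The test split is unlabeled (label = -1), so only train/dev carry gold labels.
for label_id, count in sorted(Counter(train["label"]).items()):
    print(label_names[label_id], count)
```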
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
# download data from repo https://github.com/swarnaHub/ConjNLI
paths = {
"train": "<path_to_folder>/ConjNLI-master/data/NLI/adversarial_train_15k.tsv",
"dev": "<path_to_folder>/ConjNLI-master/data/NLI/conj_dev.tsv",
"test": "<path_to_folder>/ConjNLI-master/data/NLI/conj_test.tsv",
}
dataset_splits = {}
for split, path in paths.items():
# load data
df = pd.read_csv(paths[split], sep="\t")
# encode labels using the default mapping used by other nli datasets
# i.e, entailment: 0, neutral: 1, contradiction: 2
df.columns = df.columns.str.lower()
if "test" in path:
df["label"] = -1
else:
# remove empty labels
df = df.loc[~df["label"].isna()]
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
dataset = Dataset.from_pandas(df, features=features)
dataset_splits[split] = dataset
conj_nli = DatasetDict(dataset_splits)
conj_nli.push_to_hub("pietrolesci/conj_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(conj_nli.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
conj_nli[i].to_pandas(),
conj_nli[j].to_pandas(),
on=["premise", "hypothesis", "label"], how="inner"
).shape[0],
)
#> train - dev: 2
#> train - test: 0
#> dev - test: 0
``` | pietrolesci/conj_nli | [
"region:us"
] | 2022-03-25T10:17:37+00:00 | {} | 2022-04-25T12:27:25+00:00 | [] | [] | TAGS
#region-us
| ## Overview
The original dataset can be found here. It has been
proposed in ConjNLI: Natural Language Inference Over Conjunctive Sentences.
This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise differs from the hypothesis by conjuncts removed, added, or replaced.
## Dataset curation
The label mapping is the usual '{"entailment": 0, "neutral": 1, "contradiction": 2}'
used in NLI datasets. Note that labels for 'test' split are not available.
Also, the 'train' split is originally named 'adversarial_train_15k'.
There are 2 instances (join on "premise", "hypothesis", "label") present both in 'train' and 'dev'.
The 'test' split does not have labels.
Finally, in the 'train' set there are a few instances without a label, they are removed.
## Code to create the dataset
| [
"## Overview\nThe original dataset can be found here. It has been\nproposed in ConjNLI: Natural Language Inference Over Conjunctive Sentences.\n\nThis dataset is a stress test for natural language inference over conjunctive sentences, \nwhere the premise differs from the hypothesis by conjuncts removed, added, or replaced.",
"## Dataset curation\nThe label mapping is the usual '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'\nused in NLI datasets. Note that labels for 'test' split are not available. \nAlso, the 'train' split is originally named 'adversarial_train_15k'.\n\nThere are 2 instances (join on \"premise\", \"hypothesis\", \"label\") present both in 'train' and 'dev'.\n\nThe 'test' split does not have labels.\n\nFinally, in the 'train' set there are a few instances without a label, they are removed.",
"## Code to create the dataset"
] | [
"TAGS\n#region-us \n",
"## Overview\nThe original dataset can be found here. It has been\nproposed in ConjNLI: Natural Language Inference Over Conjunctive Sentences.\n\nThis dataset is a stress test for natural language inference over conjunctive sentences, \nwhere the premise differs from the hypothesis by conjuncts removed, added, or replaced.",
"## Dataset curation\nThe label mapping is the usual '{\"entailment\": 0, \"neutral\": 1, \"contradiction\": 2}'\nused in NLI datasets. Note that labels for 'test' split are not available. \nAlso, the 'train' split is originally named 'adversarial_train_15k'.\n\nThere are 2 instances (join on \"premise\", \"hypothesis\", \"label\") present both in 'train' and 'dev'.\n\nThe 'test' split does not have labels.\n\nFinally, in the 'train' set there are a few instances without a label, they are removed.",
"## Code to create the dataset"
] |
9cbf1e972349722deae84615618ee6ad4d41e36e | [![CC BY 4.0][cc-by-shield]][cc-by]
[](https://doi.org/10.5281/zenodo.6457824)
# GLARE: Google Apps Arabic Reviews
Dataset and Code of "GLARE: Google Apps Arabic Reviews" paper.
You can download the paper via: [[Github]](GLARE.pdf)
## Paper Summary
We introduce GLARE: Google Apps Arabic Reviews dataset, a collection of 76M reviews from 9,980 Android apps collected from the Saudi store of Google PlayStore.
## Preparation
#### Below are details about each file; please ensure that you have enough storage before downloading the data.
| Data Type | File Name | File Size | File Type |
| ------------------ |---------------- | -------------- |-------------- |
| raw | apps | 4.1 MB | CSV |
| raw | reviews | 17 GB | CSV |
| raw | categories/ | 4.3 MB | CSV |
| engineered | apps | 3.8 MB | CSV |
| engineered | reviews | 21.9 GB | CSV |
| engineered | vocabulary | 530.5 MB | CSV |
## File Specifications
- **apps.csv**: File that contains apps metadata.
- **reviews.csv**: File that contains reviews and reviews metadata.
- **categories/**: Folder that contains 59 CSV files; each file corresponds to one category, with apps and app metadata scraped from the top 200 free apps for that category.
- **vocabulary.csv**: File that contains the vocabulary set generated from the reviews, with additional engineered features (word length, word frequency, whether the word contains noise or digits, etc.).
### Raw Data
#### Apps Metadata
```
{
"title":"application name/title",
"app_id":"application unique identifier",
"url":"application url at Google PlayStore",
"icon":"url for image object",
"developer":"developer name",
"developer_id":"developer unique identifier",
"summary":"short description of the application",
"rating":"application accumulated rating"
}
```
#### Reviews Metadata
```
{
"at":"review datetime",
"content":"review text",
"replied_at":"developer reply datetime",
"reply_content":"developer reply content",
"review_created_version":"user application version during the time of review",
"review_id":"review unique identifier",
"rating":"user rating",
"thumbs_up_count":"number of users that agree with the reviewer",
"user_name":"user display name",
"app_id":"application unique identifier"
}
```
### Engineered Data
#### Apps Metadata
Same as apps.csv in raw data with the following additions:
```
{
"reviews_count":"number of reviews for the application",
"categories":"list of application categories",
"categories_count":"number of application categories"
}
```
#### Reviews Metadata
Same as reviews.csv in raw data with the following additions:
```
{
"tokenized_review":"list of review words tokenized on white-space",
"words_count":"number of words in review"
}
```
#### Vocabulary
```
{
"word":"term text",
"length":"word characters count",
"frequency":"word occurrences in the reviews dataset",
"has_noise":"true or false if word contains anything non-arabic alphanumeric",
"noise":"list of noise (anything non-arabic alphanumeric) in the word",
"has_digits":"true or false if word contains arabic or hindi digits",
"digits":"list of digits in the word"
}
```
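As an illustration of how the vocabulary file might be used, the sketch below keeps only clean, frequent words. The column names come from the spec above, the path follows the folder structure in the next subsection, and the boolean parsing of `has_noise`/`has_digits` is an assumption (cast explicitly if they are stored as strings).

```python
import pandas as pd

vocab = pd.read_csv("data/engineered/vocabulary.csv")

# Assumes has_noise / has_digits parse as booleans; adjust if they are strings.
clean = vocab[~vocab["has_noise"] & ~vocab["has_digits"] & (vocab["frequency"] >= 10)]
print(clean.nlargest(20, "frequency")[["word", "frequency", "length"]])
```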
### Folders Structure
- Data are prepared as raw data or engineered data.
- Download the dataset files: [Google Drive](https://drive.google.com/drive/folders/1Cb61K3wFdVlIQfKouchsUpn5oXdJbhyg?usp=sharing) | [Zenodo](https://zenodo.org/record/6457824#.Ylv-gX9Bz8w) | [Alternative Google Drive](https://drive.google.com/drive/folders/1jWCCyJPKFf6Q-1zDuGRUBi6XtlmkyHlt?usp=sharing)
- The directory structure is as follows:
```
data
└── raw
    ├── apps.csv
    ├── reviews.csv
    └── categories/
└── engineered
    ├── apps.csv
    ├── reviews.csv
    └── vocabulary.csv
```
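Since the engineered reviews file is roughly 22 GB, a chunked read is one way to process it on a single machine. The sketch below is illustrative only: the path follows the directory structure above, and the column names are taken from the metadata listings earlier in this card.

```python
import pandas as pd

usecols = ["review_id", "app_id", "content", "rating", "words_count"]

rating_counts = None
for chunk in pd.read_csv("data/engineered/reviews.csv",
                         usecols=usecols, chunksize=1_000_000):
    counts = chunk["rating"].value_counts()
    rating_counts = counts if rating_counts is None else rating_counts.add(counts, fill_value=0)

# Distribution of star ratings across the full 76M reviews.
print(rating_counts.sort_index())
```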
## Citation
If you use this dataset please cite as:
```
@dataset{alghamdi_fatima_2022_6457824,
author = {AlGhamdi, Fatima and
Mohammed, Reem and
Al-Khalifa, Hend and
Alowisheq, Areeb},
title = {GLARE: Google Apps Arabic Reviews Dataset},
month = apr,
year = 2022,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.6457824},
url = {https://doi.org/10.5281/zenodo.6457824}
}
```
## License
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
| Fatima-Gh/GLARE | [
"region:us"
] | 2022-03-25T11:22:43+00:00 | {} | 2022-06-09T13:00:29+00:00 | [] | [] | TAGS
#region-us
| [](URL)

Paper Summary
-------------
We introduce GLARE: Google Apps Arabic Reviews dataset. A collection of 76M reviews from 9,980 Android apps collected from Google PlayStore Saudi store.
Preparation
-----------
#### Below is details about each file, please ensure that you have enough storage before downloading the data.
File Specifications
-------------------
* URL: File that contains apps metadata.
* URL: File that contains reviews and reviews metadata.
* categories/: Folder that contains 59 CSV files, each file corresponds to one category with apps and apps metadata scrapped from top 200 free apps for that category.
* URL: File that contains vocabulary set generated from reviews with additional engineered features (word length, word frequency, has noise or digits, ..etc.)
### Raw Data
#### Apps Metadata
#### Reviews Metadata
### Engineered Data
#### Apps Metadata
Same as URL in raw data with the following additions:
#### Reviews Metadata
Same as URL in raw data with the following additions:
#### Vocabulary
### Folders Structure
* Data are prepared as raw data or engineered data.
* Download the dataset files: Google Drive | Zenodo | Alternative Google Drive
* The directory structure is as follow:
If you use this dataset please cite as:
License
-------
This work is licensed under a
[Creative Commons Attribution 4.0 International License](URL).
[](URL)
| [
"#### Below is details about each file, please ensure that you have enough storage before downloading the data.\n\n\n\nFile Specifications\n-------------------\n\n\n* URL: File that contains apps metadata.\n* URL: File that contains reviews and reviews metadata.\n* categories/: Folder that contains 59 CSV files, each file corresponds to one category with apps and apps metadata scrapped from top 200 free apps for that category.\n* URL: File that contains vocabulary set generated from reviews with additional engineered features (word length, word frequency, has noise or digits, ..etc.)",
"### Raw Data",
"#### Apps Metadata",
"#### Reviews Metadata",
"### Engineered Data",
"#### Apps Metadata\n\n\nSame as URL in raw data with the following additions:",
"#### Reviews Metadata\n\n\nSame as URL in raw data with the following additions:",
"#### Vocabulary",
"### Folders Structure\n\n\n* Data are prepared as raw data or engineered data.\n* Download the dataset files: Google Drive | Zenodo | Alternative Google Drive\n* The directory structure is as follow:\n\n\nIf you use this dataset please cite as:\n\n\nLicense\n-------\n\n\nThis work is licensed under a\n[Creative Commons Attribution 4.0 International License](URL).\n\n\n[](URL)"
] | [
"TAGS\n#region-us \n",
"#### Below is details about each file, please ensure that you have enough storage before downloading the data.\n\n\n\nFile Specifications\n-------------------\n\n\n* URL: File that contains apps metadata.\n* URL: File that contains reviews and reviews metadata.\n* categories/: Folder that contains 59 CSV files, each file corresponds to one category with apps and apps metadata scrapped from top 200 free apps for that category.\n* URL: File that contains vocabulary set generated from reviews with additional engineered features (word length, word frequency, has noise or digits, ..etc.)",
"### Raw Data",
"#### Apps Metadata",
"#### Reviews Metadata",
"### Engineered Data",
"#### Apps Metadata\n\n\nSame as URL in raw data with the following additions:",
"#### Reviews Metadata\n\n\nSame as URL in raw data with the following additions:",
"#### Vocabulary",
"### Folders Structure\n\n\n* Data are prepared as raw data or engineered data.\n* Download the dataset files: Google Drive | Zenodo | Alternative Google Drive\n* The directory structure is as follow:\n\n\nIf you use this dataset please cite as:\n\n\nLicense\n-------\n\n\nThis work is licensed under a\n[Creative Commons Attribution 4.0 International License](URL).\n\n\n[](URL)"
] |
34554c2cebd75f8104b5a8128d3685802793558c | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648220072 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-25T14:54:36+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-25T14:54:37+00:00 | [] | [] | TAGS
#benchmark-gem #evaluation #benchmark #region-us
| # GEM Submission
Submission name: This is a test name
| [
"# GEM Submission\n\nSubmission name: This is a test name"
] | [
"TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n",
"# GEM Submission\n\nSubmission name: This is a test name"
] |
dbe7278565c80d2528ff20554b26c1655ced9cdd |
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:[email protected])
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold standard for two sub-tasks. The first sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. The second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold standards is to enable researchers to evaluate hate speech detection approaches on both easier (coarse-grained) and more challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used for both multi-class classification and binary classification, as it contains both coarse-grained and fine-grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two parts: coarse-grained examples and fine-grained examples. The difference is that in the coarse-grained part the tweets are labelled as abusive/offensive or normal, whereas in the fine-grained part several classes of hate are distinguished for a tweet.

For the coarse-grained segment of the dataset the label mapping is:

Task 1: Coarse-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal

For the fine-grained segment of the dataset the label mapping is:

Task 2: Fine-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
An example from Roman Urdu Hate Speech looks as follows:
```
{
  'tweet': 'there are some yahodi daboo like imran chore zakat khore',
  'label': 0
}
```
### Data Fields
- tweet: a string containing the tweet text; 10,000 tweets were randomly sampled from a base of 50,000 tweets and annotated for the dataset.

- label: the label assigned manually by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
### Data Splits
The data of each segment, coarse-grained and fine-grained, is further split into train, test, and validation sets with a 70/20/10 ratio, using stratification based on the fine-grained labels.

Stratified sampling is used to preserve the same label ratio across all splits.

The final split sizes are as follows:

| Train | Valid | Test |
| ----- | ----- | ---- |
| 7209  | 2003  | 801  |
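A minimal loading sketch, assuming the two sub-tasks are exposed as the `Coarse_Grained` and `Fine_Grained` configurations listed in this card's metadata:

```python
from datasets import load_dataset

coarse = load_dataset("roman_urdu_hate_speech", "Coarse_Grained")
fine = load_dataset("roman_urdu_hate_speech", "Fine_Grained")

print(coarse["train"].features["label"].names)
# ['Abusive/Offensive', 'Normal']
print(fine["train"].features["label"].names)
# ['Abusive/Offensive', 'Normal', 'Religious Hate', 'Sexism', 'Profane/Untargeted']

example = coarse["train"][0]
print(example["tweet"], "->", example["label"])
```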
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, and Asim Karim during work done at the Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech), which is under the MIT License.
### Citation Information
```bibtex
@inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
}
```
### Contributions
Thanks to [@bp-high](https://github.com/bp-high), for adding this dataset. | roman_urdu_hate_speech | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ur",
"license:mit",
"binary classification",
"region:us"
] | 2022-03-25T15:51:45+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ur"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "roman_urdu_hate_speech", "tags": ["binary classification"], "dataset_info": [{"config_name": "Coarse_Grained", "features": [{"name": "tweet", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Abusive/Offensive", "1": "Normal"}}}}], "splits": [{"name": "train", "num_bytes": 725719, "num_examples": 7208}, {"name": "test", "num_bytes": 218087, "num_examples": 2002}, {"name": "validation", "num_bytes": 79759, "num_examples": 800}], "download_size": 927937, "dataset_size": 1023565}, {"config_name": "Fine_Grained", "features": [{"name": "tweet", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Abusive/Offensive", "1": "Normal", "2": "Religious Hate", "3": "Sexism", "4": "Profane/Untargeted"}}}}], "splits": [{"name": "train", "num_bytes": 723670, "num_examples": 7208}, {"name": "test", "num_bytes": 219359, "num_examples": 2002}, {"name": "validation", "num_bytes": 723670, "num_examples": 7208}], "download_size": 1519423, "dataset_size": 1666699}]} | 2024-01-18T11:19:02+00:00 | [] | [
"ur"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Urdu #license-mit #binary classification #region-us
|
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: roman_urdu_hate_speech homepage
- Repository: roman_urdu_hate_speech repository
- Paper: Hate-Speech and Offensive Language Detection in Roman Urdu
- Leaderboard: [N/A]
- Point of Contact: M. Haroon Shakeel
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used for both multi class classification as well as for binary classification as it contains both coarse grained and fine grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two parts divided as a set of two types, Coarse grained examples and Fine Grained examples. The difference is that in the coarse grained example the tweets are labelled as abusive or normal whereas in the fine grained version there are several classes of hate associated with a tweet.
For the Coarse grained segment of the dataset the label mapping is:-
Task 1: Coarse-grained Classification Labels
0: Abusive/Offensive
1: Normal
Whereas for the Fine Grained segment of the dataset the label mapping is:-
Task 2: Fine-grained Classification Labels
0: Abusive/Offensive
1: Normal
2: Religious Hate
3: Sexism
4: Profane/Untargeted
An example from Roman Urdu Hate Speech looks as follows:
### Data Fields
-tweet:a string denoting the tweet which has been selected by using a random sampling from a tweet base of 50000 tweets to select 10000 tweets and annotated for the dataset.
-label:An annotation manually labeled by three independent annotators, during the annotation process, all conflicts are resolved by a majority vote among three annotators.
### Data Splits
The data of each of the segments, Coarse Grained and Fine Grained is further split into training, validation and test set. The data is split in train, test, and validation sets with 70,20,10 split ratio using stratification based on fine-grained labels.
The use of stratified sampling is deemed necessary to preserve the same labels ratio across all splits.
The Final split sizes are as follows:
Train Valid Test
7209 2003 801
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the Roman Urdu Hate Speech Dataset Repository which is under MIT License.
### Contributions
Thanks to @bp-high, for adding this dataset. | [
"# Dataset Card for roman_urdu_hate_speech",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: roman_urdu_hate_speech homepage\n- Repository: roman_urdu_hate_speech repository\n- Paper: Hate-Speech and Offensive Language Detection in Roman Urdu\n- Leaderboard: [N/A]\n- Point of Contact: M. Haroon Shakeel",
"### Dataset Summary\n\nThe Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios.",
"### Supported Tasks and Leaderboards\n\n- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used for both multi class classification as well as for binary classification as it contains both coarse grained and fine grained labels.",
"### Languages\n\nThe text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.",
"## Dataset Structure",
"### Data Instances\n\nThe dataset consists of two parts divided as a set of two types, Coarse grained examples and Fine Grained examples. The difference is that in the coarse grained example the tweets are labelled as abusive or normal whereas in the fine grained version there are several classes of hate associated with a tweet.\n\nFor the Coarse grained segment of the dataset the label mapping is:-\nTask 1: Coarse-grained Classification Labels\n0: Abusive/Offensive\n1: Normal\n\nWhereas for the Fine Grained segment of the dataset the label mapping is:-\nTask 2: Fine-grained Classification Labels\n0: Abusive/Offensive\n1: Normal\n2: Religious Hate\n3: Sexism\n4: Profane/Untargeted\n\nAn example from Roman Urdu Hate Speech looks as follows:",
"### Data Fields\n\n-tweet:a string denoting the tweet which has been selected by using a random sampling from a tweet base of 50000 tweets to select 10000 tweets and annotated for the dataset.\n\n-label:An annotation manually labeled by three independent annotators, during the annotation process, all conflicts are resolved by a majority vote among three annotators.",
"### Data Splits\n\nThe data of each of the segments, Coarse Grained and Fine Grained is further split into training, validation and test set. The data is split in train, test, and validation sets with 70,20,10 split ratio using stratification based on fine-grained labels.\n\nThe use of stratified sampling is deemed necessary to preserve the same labels ratio across all splits. \n\nThe Final split sizes are as follows:\n\nTrain Valid Test\n7209 2003 801",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.",
"### Licensing Information\n\nThe licensing status of the dataset hinges on the legal status of the Roman Urdu Hate Speech Dataset Repository which is under MIT License.",
"### Contributions\n\nThanks to @bp-high, for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Urdu #license-mit #binary classification #region-us \n",
"# Dataset Card for roman_urdu_hate_speech",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: roman_urdu_hate_speech homepage\n- Repository: roman_urdu_hate_speech repository\n- Paper: Hate-Speech and Offensive Language Detection in Roman Urdu\n- Leaderboard: [N/A]\n- Point of Contact: M. Haroon Shakeel",
"### Dataset Summary\n\nThe Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios.",
"### Supported Tasks and Leaderboards\n\n- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used for both multi class classification as well as for binary classification as it contains both coarse grained and fine grained labels.",
"### Languages\n\nThe text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.",
"## Dataset Structure",
"### Data Instances\n\nThe dataset consists of two parts divided as a set of two types, Coarse grained examples and Fine Grained examples. The difference is that in the coarse grained example the tweets are labelled as abusive or normal whereas in the fine grained version there are several classes of hate associated with a tweet.\n\nFor the Coarse grained segment of the dataset the label mapping is:-\nTask 1: Coarse-grained Classification Labels\n0: Abusive/Offensive\n1: Normal\n\nWhereas for the Fine Grained segment of the dataset the label mapping is:-\nTask 2: Fine-grained Classification Labels\n0: Abusive/Offensive\n1: Normal\n2: Religious Hate\n3: Sexism\n4: Profane/Untargeted\n\nAn example from Roman Urdu Hate Speech looks as follows:",
"### Data Fields\n\n-tweet:a string denoting the tweet which has been selected by using a random sampling from a tweet base of 50000 tweets to select 10000 tweets and annotated for the dataset.\n\n-label:An annotation manually labeled by three independent annotators, during the annotation process, all conflicts are resolved by a majority vote among three annotators.",
"### Data Splits\n\nThe data of each of the segments, Coarse Grained and Fine Grained is further split into training, validation and test set. The data is split in train, test, and validation sets with 70,20,10 split ratio using stratification based on fine-grained labels.\n\nThe use of stratified sampling is deemed necessary to preserve the same labels ratio across all splits. \n\nThe Final split sizes are as follows:\n\nTrain Valid Test\n7209 2003 801",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.",
"### Licensing Information\n\nThe licensing status of the dataset hinges on the legal status of the Roman Urdu Hate Speech Dataset Repository which is under MIT License.",
"### Contributions\n\nThanks to @bp-high, for adding this dataset."
] |
0d783a9fe5e53539e7bc40df462c5a641dd48ce3 | # Dataset Card for Winoground
## Dataset Description
Winoground is a novel task and dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly—but crucially, both captions contain a completely identical set of words/morphemes, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance. In our accompanying paper, we probe a diverse range of state-of-the-art vision and language models and find that, surprisingly, none of them do much better than chance. Evidently, these models are not as skilled at visio-linguistic compositional reasoning as we might have hoped. In the paper, we perform an extensive analysis to obtain insights into how future work might try to mitigate these models’ shortcomings. We aim for Winoground to serve as a useful evaluation set for advancing the state of the art and driving further progress in the field.
We are thankful to Getty Images for providing the image data.
## Data
The captions and tags are located in `data/examples.jsonl` and the images are located in `data/images.zip`. You can load the data as follows:
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
You can get `<YOUR USER ACCESS TOKEN>` by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens"
5) generate an access token
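
Once loaded, each example pairs two images with two captions built from the same words. A quick way to inspect one example (the split and field names below — `test`, `caption_0`, `caption_1`, `image_0`, `image_1` — are assumptions to verify against the loaded object):

```python
ex = examples["test"][0]       # Winoground ships a single evaluation split
print(ex["caption_0"])         # first caption
print(ex["caption_1"])         # second caption: same words, different order
print(ex["image_0"].size)      # images decode to PIL images
print(ex["image_1"].size)
```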
## Model Predictions and Statistics
The image-caption model scores from our paper are saved in `statistics/model_scores`. To compute many of the tables and graphs from our paper, run the following commands:
```bash
git clone https://huggingface.co/datasets/facebook/winoground
cd winoground
pip install -r statistics/requirements.txt
python statistics/compute_statistics.py
```
## FLAVA Colab notebook code for Winoground evaluation
https://colab.research.google.com/drive/1c3l4r4cEA5oXfq9uXhrJibddwRkcBxzP?usp=sharing
## CLIP Colab notebook code for Winoground evaluation
https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing
## Paper FAQ
### Why is the group score for a random model equal to 16.67%?
<details>
<summary>Click for a proof!</summary>
Intuitively, we might think that we can multiply the probabilities from the image and text score to get 1/16 = 6.25%. But, these scores are not conditionally independent. We can find the correct probability with combinatorics:
For ease of notation, let:
- a = s(c_0, i_0)
- b = s(c_1, i_0)
- c = s(c_1, i_1)
- d = s(c_0, i_1)
The group score is defined as 1 if a > b, a > d, c > b, c > d and 0 otherwise.
As one would say to GPT-3, let's think step by step:
1. There are 4! = 24 different orderings of a, c, b, d.
2. There are only 4 orderings for which a > b, a > d, c > b, c > d:
- a, c, b, d
- a, c, d, b
- c, a, b, d
- c, a, d, b
3. No ordering is any more likely than another because a, b, c, d are sampled from the same random distribution.
4. We can conclude that the probability of a group score of 1 is 4/24 = 0.166...
</details>
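
For reference, the text, image, and group metrics can be written directly in terms of the four scores above. This is a minimal sketch rather than the exact evaluation code in `statistics/`; the dictionary keys `c0_i0`, `c1_i0`, `c1_i1`, `c0_i1` stand for s(c_0, i_0), s(c_1, i_0), s(c_1, i_1), s(c_0, i_1):

```python
def text_correct(s):
    # for each image, the matching caption must score higher than the other caption
    return s["c0_i0"] > s["c1_i0"] and s["c1_i1"] > s["c0_i1"]

def image_correct(s):
    # for each caption, the matching image must score higher than the other image
    return s["c0_i0"] > s["c0_i1"] and s["c1_i1"] > s["c1_i0"]

def group_correct(s):
    # equivalent to a > b, a > d, c > b, c > d in the notation above
    return text_correct(s) and image_correct(s)
```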
## Citation Information
[https://arxiv.org/abs/2204.03162](https://arxiv.org/abs/2204.03162)
Tristan Thrush and Candace Ross contributed equally.
```bibtex
@inproceedings{thrush_and_ross2022winoground,
author = {Tristan Thrush and Ryan Jiang and Max Bartolo and Amanpreet Singh and Adina Williams and Douwe Kiela and Candace Ross},
title = {Winoground: Probing vision and language models for visio-linguistic compositionality},
booktitle = {CVPR},
year = 2022,
}
``` | facebook/winoground | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:image-classification",
"language:en",
"arxiv:2204.03162",
"region:us"
] | 2022-03-25T22:27:33+00:00 | {"language": ["en"], "task_categories": ["image-to-text", "text-to-image", "image-classification"], "pretty_name": "Winoground", "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."} | 2023-11-02T17:15:41+00:00 | [
"2204.03162"
] | [
"en"
] | TAGS
#task_categories-image-to-text #task_categories-text-to-image #task_categories-image-classification #language-English #arxiv-2204.03162 #region-us
| # Dataset Card for Winoground
## Dataset Description
Winoground is a novel task and dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly—but crucially, both captions contain a completely identical set of words/morphemes, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance. In our accompanying paper, we probe a diverse range of state-of-the-art vision and language models and find that, surprisingly, none of them do much better than chance. Evidently, these models are not as skilled at visio-linguistic compositional reasoning as we might have hoped. In the paper, we perform an extensive analysis to obtain insights into how future work might try to mitigate these models’ shortcomings. We aim for Winoground to serve as a useful evaluation set for advancing the state of the art and driving further progress in the field.
We are thankful to Getty Images for providing the image data.
## Data
The captions and tags are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:
You can get '<YOUR USER ACCESS TOKEN>' by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens"
5) generate an access token
## Model Predictions and Statistics
The image-caption model scores from our paper are saved in 'statistics/model_scores'. To compute many of the tables and graphs from our paper, run the following commands:
## FLAVA Colab notebook code for Winoground evaluation
URL
## CLIP Colab notebook code for Winoground evaluation
URL
## Paper FAQ
### Why is the group score for a random model equal to 16.67%?
<details>
<summary>Click for a proof!</summary>
Intuitively, we might think that we can multiply the probabilities from the image and text score to get 1/16 = 6.25%. But, these scores are not conditionally independent. We can find the correct probability with combinatorics:
For ease of notation, let:
- a = s(c_0, i_0)
- b = s(c_1, i_0)
- c = s(c_1, i_1)
- d = s(c_0, i_1)
The group score is defined as 1 if a > b, a > d, c > b, c > d and 0 otherwise.
As one would say to GPT-3, let's think step by step:
1. There are 4! = 24 different orderings of a, c, b, d.
2. There are only 4 orderings for which a > b, a > d, c > b, c > d:
- a, c, b, d
- a, c, d, b
- c, a, b, d
- c, a, d, b
3. No ordering is any more likely than another because a, b, c, d are sampled from the same random distribution.
4. We can conclude that the probability of a group score of 1 is 4/24 = 0.166...
</details>
URL
Tristan Thrush and Candace Ross contributed equally.
| [
"# Dataset Card for Winoground",
"## Dataset Description\nWinoground is a novel task and dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly—but crucially, both captions contain a completely identical set of words/morphemes, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance. In our accompanying paper, we probe a diverse range of state-of-the-art vision and language models and find that, surprisingly, none of them do much better than chance. Evidently, these models are not as skilled at visio-linguistic compositional reasoning as we might have hoped. In the paper, we perform an extensive analysis to obtain insights into how future work might try to mitigate these models’ shortcomings. We aim for Winoground to serve as a useful evaluation set for advancing the state of the art and driving further progress in the field.\n\nWe are thankful to Getty Images for providing the image data.",
"## Data\nThe captions and tags are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:\n\nYou can get '<YOUR USER ACCESS TOKEN>' by following these steps:\n1) log into your Hugging Face account\n2) click on your profile picture\n3) click \"Settings\"\n4) click \"Access Tokens\"\n5) generate an access token",
"## Model Predictions and Statistics\nThe image-caption model scores from our paper are saved in 'statistics/model_scores'. To compute many of the tables and graphs from our paper, run the following commands:",
"## FLAVA Colab notebook code for Winoground evaluation\nURL",
"## CLIP Colab notebook code for Winoground evaluation\nURL",
"## Paper FAQ",
"### Why is the group score for a random model equal to 16.67%?\n\n<details>\n <summary>Click for a proof!</summary>\n \n Intuitively, we might think that we can multiply the probabilities from the image and text score to get 1/16 = 6.25%. But, these scores are not conditionally independent. We can find the correct probability with combinatorics:\n\n For ease of notation, let:\n - a = s(c_0, i_0)\n - b = s(c_1, i_0)\n - c = s(c_1, i_1)\n - d = s(c_0, i_1)\n \n The group score is defined as 1 if a > b, a > d, c > b, c > d and 0 otherwise.\n \n As one would say to GPT-3, let's think step by step:\n \n 1. There are 4! = 24 different orderings of a, c, b, d.\n 2. There are only 4 orderings for which a > b, a > d, c > b, c > d:\n - a, c, b, d\n - a, c, d, b\n - c, a, b, d\n - c, a, d, b\n 3. No ordering is any more likely than another because a, b, c, d are sampled from the same random distribution.\n 4. We can conclude that the probability of a group score of 1 is 4/24 = 0.166...\n</details>\n\n\n\nURL\n\nTristan Thrush and Candace Ross contributed equally."
] | [
"TAGS\n#task_categories-image-to-text #task_categories-text-to-image #task_categories-image-classification #language-English #arxiv-2204.03162 #region-us \n",
"# Dataset Card for Winoground",
"## Dataset Description\nWinoground is a novel task and dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly—but crucially, both captions contain a completely identical set of words/morphemes, only in a different order. The dataset was carefully hand-curated by expert annotators and is labeled with a rich set of fine-grained tags to assist in analyzing model performance. In our accompanying paper, we probe a diverse range of state-of-the-art vision and language models and find that, surprisingly, none of them do much better than chance. Evidently, these models are not as skilled at visio-linguistic compositional reasoning as we might have hoped. In the paper, we perform an extensive analysis to obtain insights into how future work might try to mitigate these models’ shortcomings. We aim for Winoground to serve as a useful evaluation set for advancing the state of the art and driving further progress in the field.\n\nWe are thankful to Getty Images for providing the image data.",
"## Data\nThe captions and tags are located in 'data/URL' and the images are located in 'data/URL'. You can load the data as follows:\n\nYou can get '<YOUR USER ACCESS TOKEN>' by following these steps:\n1) log into your Hugging Face account\n2) click on your profile picture\n3) click \"Settings\"\n4) click \"Access Tokens\"\n5) generate an access token",
"## Model Predictions and Statistics\nThe image-caption model scores from our paper are saved in 'statistics/model_scores'. To compute many of the tables and graphs from our paper, run the following commands:",
"## FLAVA Colab notebook code for Winoground evaluation\nURL",
"## CLIP Colab notebook code for Winoground evaluation\nURL",
"## Paper FAQ",
"### Why is the group score for a random model equal to 16.67%?\n\n<details>\n <summary>Click for a proof!</summary>\n \n Intuitively, we might think that we can multiply the probabilities from the image and text score to get 1/16 = 6.25%. But, these scores are not conditionally independent. We can find the correct probability with combinatorics:\n\n For ease of notation, let:\n - a = s(c_0, i_0)\n - b = s(c_1, i_0)\n - c = s(c_1, i_1)\n - d = s(c_0, i_1)\n \n The group score is defined as 1 if a > b, a > d, c > b, c > d and 0 otherwise.\n \n As one would say to GPT-3, let's think step by step:\n \n 1. There are 4! = 24 different orderings of a, c, b, d.\n 2. There are only 4 orderings for which a > b, a > d, c > b, c > d:\n - a, c, b, d\n - a, c, d, b\n - c, a, b, d\n - c, a, d, b\n 3. No ordering is any more likely than another because a, b, c, d are sampled from the same random distribution.\n 4. We can conclude that the probability of a group score of 1 is 4/24 = 0.166...\n</details>\n\n\n\nURL\n\nTristan Thrush and Candace Ross contributed equally."
] |
551df3187d04f5f7ea4d6ebf062d016a72a2680c |
# lang-uk's ner-uk dataset
A dataset for Ukrainian Named Entity Recognition.
The original dataset is located at https://github.com/lang-uk/ner-uk. All credit for creation of the dataset goes to the contributors of https://github.com/lang-uk/ner-uk.
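
A minimal loading sketch with the `datasets` library; the split names and column layout (e.g. tokens plus token-level NER tags) are assumptions to verify against the loaded object:

```python
from datasets import load_dataset

# Hypothetical usage; inspect the result to confirm configs, splits and columns.
ds = load_dataset("benjamin/ner-uk")
print(ds)              # available splits and features
print(ds["train"][0])  # one sentence with its named-entity annotations
```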
# License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Dataset" property="dct:title" rel="dct:type">"Корпус NER-анотацій українських текстів"</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="https://github.com/lang-uk" property="cc:attributionName" rel="cc:attributionURL">lang-uk</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/lang-uk/ner-uk" rel="dct:source">https://github.com/lang-uk/ner-uk</a>. | benjamin/ner-uk | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:uk",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-26T10:10:50+00:00 | {"language": ["uk"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"]} | 2022-10-26T10:47:43+00:00 | [] | [
"uk"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Ukrainian #license-cc-by-nc-sa-4.0 #region-us
|
# lang-uk's ner-uk dataset
A dataset for Ukrainian Named Entity Recognition.
The original dataset is located at URL All credit for creation of the dataset goes to the contributors of URL
# License
<a rel="license" href="URL alt="Creative Commons License" style="border-width:0" src="https://i.URL /></a><br /><span xmlns:dct="URL href="URL property="dct:title" rel="dct:type">"Корпус NER-анотацій українських текстів"</span> by <a xmlns:cc="URL href="URL property="cc:attributionName" rel="cc:attributionURL">lang-uk</a> is licensed under a <a rel="license" href="URL Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="URL href="URL rel="dct:source">URL | [
"# lang-uk's ner-uk dataset\n\nA dataset for Ukrainian Named Entity Recognition.\n\nThe original dataset is located at URL All credit for creation of the dataset goes to the contributors of URL",
"# License\n\n<a rel=\"license\" href=\"URL alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.URL /></a><br /><span xmlns:dct=\"URL href=\"URL property=\"dct:title\" rel=\"dct:type\">\"Корпус NER-анотацій українських текстів\"</span> by <a xmlns:cc=\"URL href=\"URL property=\"cc:attributionName\" rel=\"cc:attributionURL\">lang-uk</a> is licensed under a <a rel=\"license\" href=\"URL Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct=\"URL href=\"URL rel=\"dct:source\">URL"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #language-Ukrainian #license-cc-by-nc-sa-4.0 #region-us \n",
"# lang-uk's ner-uk dataset\n\nA dataset for Ukrainian Named Entity Recognition.\n\nThe original dataset is located at URL All credit for creation of the dataset goes to the contributors of URL",
"# License\n\n<a rel=\"license\" href=\"URL alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.URL /></a><br /><span xmlns:dct=\"URL href=\"URL property=\"dct:title\" rel=\"dct:type\">\"Корпус NER-анотацій українських текстів\"</span> by <a xmlns:cc=\"URL href=\"URL property=\"cc:attributionName\" rel=\"cc:attributionURL\">lang-uk</a> is licensed under a <a rel=\"license\" href=\"URL Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct=\"URL href=\"URL rel=\"dct:source\">URL"
] |
264197c1d45d2aa4c8dc4e992e89e432a6e889c4 | name: **TRBLLmaker**
annotations_creators: found
language_creators: found
languages: en-US
licenses: Genius-Ventura-Toker
multilinguality: monolingual
source_datasets: original
task_categories: sequence-modeling
task_ids: sequence-modeling-seq2seq_generate
# Dataset Card for TRBLLmaker Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Split info](#Split-info)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/venturamor/TRBLLmaker-NLP
- **Paper:** in git
### Dataset Summary
TRBLLmaker - To Read Between Lyrics Lines.
This dataset is used to train a model that takes several lines of a song's lyrics as input and generates a possible interpretation / meaning for them, or that uses the songs' metadata for other tasks such as classification.
This dataset is based on data from the 'Genius' website, which hosts a global collection of song lyrics and provides annotations and interpretations of those lyrics as well as additional music knowledge.
We used the 'Genius' API, created a private client, and extracted the relevant raw data from the Genius servers.
We extracted the most popular songs in each genre - pop, rap, rock, country and r&b. Afterwards, we created a varied pool of 150 artists associated with different music styles and periods, and extracted a maximum of 100 samples from each.
We combined all the data, without repetitions, into one final database. After removing non-English lyrics, we obtained our final corpus, which contains 8,808 different songs and 60,630 samples in total, where each sample is a specific sentence from a song's lyrics together with its top-rated annotation.
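
As an illustration of this kind of extraction (not the authors' actual pipeline), the public Genius API can be queried with a client access token; the helper below and its parameters are hypothetical:

```python
import requests

GENIUS_API = "https://api.genius.com"
TOKEN = "<YOUR_GENIUS_CLIENT_ACCESS_TOKEN>"  # from a private Genius API client

def search_songs(query, per_page=10):
    # The /search endpoint returns song "hits"; each hit's "result" carries the song id, title and artist.
    resp = requests.get(
        f"{GENIUS_API}/search",
        params={"q": query, "per_page": per_page},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return [hit["result"] for hit in resp.json()["response"]["hits"]]

for song in search_songs("Bohemian Rhapsody"):
    print(song["id"], song["full_title"])
```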
### Supported Tasks and Leaderboards
Seq2Seq
### Languages
[En] - English
## Dataset Structure
### Data Fields
We stored each sample in a 'SongInfo' structure with the following attributes: title, genre, annotations and song's meta data.
The meta data contains the artist's name, the song id on the server, the lyrics, and statistics such as page views.
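
A rough Python approximation of that structure (attribute names are illustrative, not the exact ones used in the dataset code):

```python
from dataclasses import dataclass, field

@dataclass
class SongInfo:
    title: str
    genre: str
    annotations: list                              # (lyric line, top-rated annotation) pairs
    meta_data: dict = field(default_factory=dict)  # artist name, song id, lyrics, page views, ...
```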
### Data Splits
- train
- train_songs
- test
- test_songs
- validation
- validation_songs
## Split info
The splits are provided both at the song level and at the sample level, with the following ratios:
- train: 0.64 (0.8 * 0.8)
- test: 0.2
- validation: 0.16 (0.8 * 0.2)
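
A sketch of how such ratios follow from two successive 80/20 splits (illustrative only; the actual split may use different seeds or stratification):

```python
from sklearn.model_selection import train_test_split

songs = list(range(8808))  # placeholder for song identifiers

# 80/20 into (train + validation) / test, then 80/20 again into train / validation
train_val, test = train_test_split(songs, test_size=0.2, random_state=42)
train, validation = train_test_split(train_val, test_size=0.2, random_state=42)
# overall fractions: train 0.64, validation 0.16, test 0.20
```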
## Dataset Creation
### Source Data
Genius - https://genius.com/
### Annotations
#### Who are the annotators?
Top-ranked annotations by users of the Genius website / official Genius annotations
## Considerations for Using the Data
### Social Impact of Dataset
We are excited about the future of applying attention-based models to tasks such as meaning generation.
We hope this dataset will encourage more NLP researchers to improve the way we understand and enjoy songs, since
achieving artistic comprehension is another step that moves us toward the goal of robust AI.
### Other Known Limitations
The artists list can be found here.
## Additional Information
### Dataset Curators
This Dataset created by Mor Ventura and Michael Toker.
### Licensing Information
All source of data belongs to Genius.
### Contributions
Thanks to [@venturamor, @tokeron](https://github.com/venturamor/TRBLLmaker-NLP) for adding this dataset. | MorVentura/TRBLLmaker | [
"region:us"
] | 2022-03-26T17:29:20+00:00 | {"TODO": "Add YAML tags here."} | 2022-03-26T18:44:51+00:00 | [] | [] | TAGS
#region-us
| name: TRBLLmaker
annotations_creators: found
language_creators: found
languages: en-US
licenses: Genius-Ventura-Toker
multilinguality: monolingual
source_datasets: original
task_categories: sequence-modeling
task_ids: sequence-modeling-seq2seq_generate
# Dataset Card for TRBLLmaker Dataset
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Fields
- Data Splits
- Split info
- Dataset Creation
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Other Known Limitations
- Additional Information
- Dataset Curators
- Contributions
## Dataset Description
- Repository: URL
- Paper: in git
### Dataset Summary
TRBLLmaker - To Read Between Lyrics Lines.
Dataset used in order to train a model to get as an input - several lines of song's lyrics and generate optional interpretation / meaning of them or use the songs' metdata for various tasks such as classification.
This dataset is based on 'Genius' website's data, which contains global collection of songs lyrics and provides annotations and interpretations to songs lyrics and additional music knowledge.
We used 'Genius' API, created private client and extracted the relevant raw data from Genius servers.
We extracted the songs by the most popular songs in each genre - pop, rap, rock, country and r&b. Afterwards, we created a varied pool of 150 artists that associated with different music styles and periods, and extracted maximum of 100 samples from each.
We combined all the data, without repetitions, into one final database. After preforming a cleaning of non-English lyrics, we got our final corpus that contains 8,808 different songs with over all of 60,630 samples, while each sample is a specific sentence from the song's lyrics and its top rated annotation.
### Supported Tasks and Leaderboards
Seq2Seq
### Languages
[En] - English
## Dataset Structure
### Data Fields
We stored each sample in a 'SongInfo' structure with the following attributes: title, genre, annotations and song's meta data.
The meta data contains the artist's name, song id in the server, lyrics and statistics such page views.
### Data Splits
train
train_songs
test
test_songs
validation
validation songs
## Split info
- songs
- samples
train [0.64 (0.8 * 0.8)], test[0.2], validation [0.16 (0.8 * 0.2)]
## Dataset Creation
### Source Data
Genius - URL
### Annotations
#### Who are the annotators?
top-ranked annotations by users in Genoius websites / Official Genius annotations
## Considerations for Using the Data
### Social Impact of Dataset
We are excited about the future of applying attention-based models on task such as meaning generation.
We hope this dataset will encourage more NLP researchers to improve the way we understand and enjoy songs, since
achieving artistic comprehension is another step that progress us to the goal of robust AI.
### Other Known Limitations
The artists list can be found here.
## Additional Information
### Dataset Curators
This Dataset created by Mor Ventura and Michael Toker.
### Licensing Information
All source of data belongs to Genius.
### Contributions
Thanks to @venturamor, @tokeron for adding this dataset. | [
"# Dataset Card for TRBLLmaker Dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n - Split info\n- Dataset Creation\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Contributions",
"## Dataset Description\n\n- Repository: URL\n- Paper: in git",
"### Dataset Summary\nTRBLLmaker - To Read Between Lyrics Lines.\nDataset used in order to train a model to get as an input - several lines of song's lyrics and generate optional interpretation / meaning of them or use the songs' metdata for various tasks such as classification.\n\nThis dataset is based on 'Genius' website's data, which contains global collection of songs lyrics and provides annotations and interpretations to songs lyrics and additional music knowledge.\nWe used 'Genius' API, created private client and extracted the relevant raw data from Genius servers.\n\nWe extracted the songs by the most popular songs in each genre - pop, rap, rock, country and r&b. Afterwards, we created a varied pool of 150 artists that associated with different music styles and periods, and extracted maximum of 100 samples from each.\nWe combined all the data, without repetitions, into one final database. After preforming a cleaning of non-English lyrics, we got our final corpus that contains 8,808 different songs with over all of 60,630 samples, while each sample is a specific sentence from the song's lyrics and its top rated annotation.",
"### Supported Tasks and Leaderboards\n\nSeq2Seq",
"### Languages\n\n[En] - English",
"## Dataset Structure",
"### Data Fields\n\nWe stored each sample in a 'SongInfo' structure with the following attributes: title, genre, annotations and song's meta data. \nThe meta data contains the artist's name, song id in the server, lyrics and statistics such page views.",
"### Data Splits\n\ntrain\ntrain_songs\n\ntest\ntest_songs\n\nvalidation\nvalidation songs",
"## Split info\n- songs\n- samples\n\ntrain [0.64 (0.8 * 0.8)], test[0.2], validation [0.16 (0.8 * 0.2)]",
"## Dataset Creation",
"### Source Data\nGenius - URL",
"### Annotations",
"#### Who are the annotators?\n\ntop-ranked annotations by users in Genoius websites / Official Genius annotations",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe are excited about the future of applying attention-based models on task such as meaning generation. \nWe hope this dataset will encourage more NLP researchers to improve the way we understand and enjoy songs, since\nachieving artistic comprehension is another step that progress us to the goal of robust AI.",
"### Other Known Limitations\n\nThe artists list can be found here.",
"## Additional Information",
"### Dataset Curators\n\nThis Dataset created by Mor Ventura and Michael Toker.",
"### Licensing Information\n\nAll source of data belongs to Genius.",
"### Contributions\n\nThanks to @venturamor, @tokeron for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for TRBLLmaker Dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n - Split info\n- Dataset Creation\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Contributions",
"## Dataset Description\n\n- Repository: URL\n- Paper: in git",
"### Dataset Summary\nTRBLLmaker - To Read Between Lyrics Lines.\nDataset used in order to train a model to get as an input - several lines of song's lyrics and generate optional interpretation / meaning of them or use the songs' metdata for various tasks such as classification.\n\nThis dataset is based on 'Genius' website's data, which contains global collection of songs lyrics and provides annotations and interpretations to songs lyrics and additional music knowledge.\nWe used 'Genius' API, created private client and extracted the relevant raw data from Genius servers.\n\nWe extracted the songs by the most popular songs in each genre - pop, rap, rock, country and r&b. Afterwards, we created a varied pool of 150 artists that associated with different music styles and periods, and extracted maximum of 100 samples from each.\nWe combined all the data, without repetitions, into one final database. After preforming a cleaning of non-English lyrics, we got our final corpus that contains 8,808 different songs with over all of 60,630 samples, while each sample is a specific sentence from the song's lyrics and its top rated annotation.",
"### Supported Tasks and Leaderboards\n\nSeq2Seq",
"### Languages\n\n[En] - English",
"## Dataset Structure",
"### Data Fields\n\nWe stored each sample in a 'SongInfo' structure with the following attributes: title, genre, annotations and song's meta data. \nThe meta data contains the artist's name, song id in the server, lyrics and statistics such page views.",
"### Data Splits\n\ntrain\ntrain_songs\n\ntest\ntest_songs\n\nvalidation\nvalidation songs",
"## Split info\n- songs\n- samples\n\ntrain [0.64 (0.8 * 0.8)], test[0.2], validation [0.16 (0.8 * 0.2)]",
"## Dataset Creation",
"### Source Data\nGenius - URL",
"### Annotations",
"#### Who are the annotators?\n\ntop-ranked annotations by users in Genoius websites / Official Genius annotations",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe are excited about the future of applying attention-based models on task such as meaning generation. \nWe hope this dataset will encourage more NLP researchers to improve the way we understand and enjoy songs, since\nachieving artistic comprehension is another step that progress us to the goal of robust AI.",
"### Other Known Limitations\n\nThe artists list can be found here.",
"## Additional Information",
"### Dataset Curators\n\nThis Dataset created by Mor Ventura and Michael Toker.",
"### Licensing Information\n\nAll source of data belongs to Genius.",
"### Contributions\n\nThanks to @venturamor, @tokeron for adding this dataset."
] |
701f90e92dca56a28a7439406291a565c55576ef |
# Dataset Card for ESsnli
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Machine-translated Spanish version of the Stanford Natural Language Inference dataset.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Machine-generated Spanish (from an English-language original corpus).
## Dataset Structure
### Data Instances
For each instance, there is a string for the premise, a string for the hypothesis, a string for the label given by the annotator and a string for the gold label, as well as data about the images that originated the sentences. Note that each premise may appear three times with a different hypothesis and label. See the [SNLI corpus viewer](https://huggingface.co/datasets/viewer/?dataset=snli) to explore more examples.
```
{"annotator_labels": ["contradiction"]
"captionID": "3618932839.jpg#4"
"gold_label": "contradiction"
"pairID": "3618932839.jpg#4r1c"
"sentence1": "El perro intenta saltar sobre el poste."
"sentence2": "Perrito durmiendo con su madre."}
```
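
A minimal loading sketch (the dataset id matches this repository, but split names and exact columns are assumptions to verify):

```python
from datasets import load_dataset

# Hypothetical usage; inspect the result to confirm splits and columns.
essnli = load_dataset("medardodt/ESsnli")
print(essnli)
print(essnli["train"][0])  # expected fields such as sentence1, sentence2, gold_label
```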
### Data Fields
- `sentence1`: a string used to determine the truthfulness of the hypothesis
- `sentence2`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: a string whose value may be either "entailment", "contradiction" or "neutral".
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating the original SNLI dataset into Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available for the task.
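
For illustration, such a translation pass could look roughly like the following. This is a sketch assuming the `argostranslate` Python package with an English→Spanish model already installed; the API shown follows that package's documentation and is not the exact script used to build the corpus:

```python
import argostranslate.translate

# Assumes an en->es Argos model is already installed (e.g. via argostranslate.package).
premise_en = "The dog tries to jump over the post."
premise_es = argostranslate.translate.translate(premise_en, "en", "es")
print(premise_es)  # -> roughly "El perro intenta saltar sobre el poste."
```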
### Source Data
#### Initial Data Collection and Normalization
The hypotheses (sentence2) were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://www.aclweb.org/anthology/Q14-1006.pdf), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).
The premises (sentence1) from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.
#### Who are the source language producers?
A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.
The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $.1 and $.5 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.
An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://visualgenome.org/static/paper/Visual_Genome.pdf). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
### Annotations
#### Annotation process
56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).
The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
#### Who are the annotators?
The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $.1 and $.5 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://www.aclweb.org/anthology/W17-1609.pdf) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
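
As a reminder of the measure used in that analysis, pointwise mutual information compares how often a word w co-occurs with an identity term t against what independence would predict; below is a toy sketch over sentence-level co-occurrence counts (variable names are illustrative, and the cited work uses a smoothed variant):

```python
import math

def pmi(count_wt, count_w, count_t, total_sentences):
    # PMI(w, t) = log [ p(w, t) / (p(w) * p(t)) ], estimated from co-occurrence counts
    p_wt = count_wt / total_sentences
    p_w = count_w / total_sentences
    p_t = count_t / total_sentences
    return math.log(p_wt / (p_w * p_t))
```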
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
[Gururangan et al (2018)](https://www.aclweb.org/anthology/N18-2017.pdf), [Poliak et al (2018)](https://www.aclweb.org/anthology/S18-2023.pdf), and [Tsuchiya (2018)](https://www.aclweb.org/anthology/L18-1239.pdf) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.
## Additional Information
### Dataset Curators
The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the [Stanford NLP group](https://nlp.stanford.edu/).
It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.
### Licensing Information
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
The Stanford Natural Language Inference Corpus is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
[Needs More Information] | medardodt/ESsnli | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<100K",
"source_datasets:extended|snli",
"region:us"
] | 2022-03-26T19:07:01+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "multilinguality": ["monolingual"], "size_categories": ["n<100K"], "source_datasets": ["extended|snli"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "languages": ["es"], "licenses": ["cc-by-nc-sa-4.0"]} | 2022-03-26T22:03:14+00:00 | [] | [] | TAGS
#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-n<100K #source_datasets-extended|snli #region-us
|
# Dataset Card for ESsnli
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Machine-translated Spanish version of the Stanford Natural Language Inference dataset.
### Supported Tasks and Leaderboards
### Languages
Machine-generated Spanish (from an English-language original corpus).
## Dataset Structure
### Data Instances
For each instance, there is a string for the premise, a string for the hypothesis, a string for the label given by the annotator and a string for the gold label, as well as data about the images that originated the sentences. Note that each premise may appear three times with a different hypothesis and label. See the SNLI corpus viewer to explore more examples.
### Data Fields
- 'sentence1': a string used to determine the truthfulness of the hypothesis
- 'sentence2': a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- 'label': a string whose value may be either "entailment", "contradiction" or "neutral".
### Data Splits
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.
### Source Data
#### Initial Data Collection and Normalization
The hypotheses (sentence1) were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
The corpus includes content from the Flickr 30k corpus and the VisualGenome corpus. The photo captions used to prompt the data creation were collected on Flickr by Young et al. (2014), who extended the Flickr 8K dataset developed by Hodosh et al. (2013). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in MS-COCO and YFCC100M.
The premises (sentence2) from the Flickr 30k corpus corrected for spelling using the Linux spell checker and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.
#### Who are the source language producers?
A large portion of the premises (160k) were produced in the Flickr 30k corpus by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.
The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $.1 and $.5 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.
An additional 4,000 premises come from the pilot study of the VisualGenome corpus. Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
### Annotations
#### Annotation process
56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).
The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
#### Who are the annotators?
The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $.1 and $.5 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
The language reflects the content of the photos collected from Flickr, as described in the Data Collection section. Rudinger et al (2017) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
Gururangan et al (2018), Poliak et al (2018), and Tsuchiya (2018) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.
## Additional Information
### Dataset Curators
The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the Stanford NLP group.
It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.
### Licensing Information
This corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Stanford Natural Language Inference Corpus is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
| [
"# Dataset Card for ESsnli",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: \r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary\r\n\r\nMachine-translated Spanish version of the Stanford Natural Language Inference dataset.",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nMachine-generated Spanish (from an English-language original corpus).",
"## Dataset Structure",
"### Data Instances\r\n\r\nFor each instance, there is a string for the premise, a string for the hypothesis, a string for the label given by the annotator and a string for the gold label, as well as data about the images that originated the sentences. Note that each premise may appear three times with a different hypothesis and label. See the SNLI corpus viewer to explore more examples.",
"### Data Fields\r\n\r\n- 'sentence1': a string used to determine the truthfulness of the hypothesis\r\n- 'sentence2': a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise\r\n- 'label': a string whose value may be either \"entailment\", \"contradiction\" or \"neutral\".",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\r\n\r\nThis corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.",
"### Source Data",
"#### Initial Data Collection and Normalization\r\n\r\nThe hypotheses (sentence1) were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.\r\n\r\nCrowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).\r\n\r\nThe corpus includes content from the Flickr 30k corpus and the VisualGenome corpus. The photo captions used to prompt the data creation were collected on Flickr by Young et al. (2014), who extended the Flickr 8K dataset developed by Hodosh et al. (2013). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in MS-COCO and YFCC100M.\r\n\r\nThe premises (sentence2) from the Flickr 30k corpus corrected for spelling using the Linux spell checker and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.",
"#### Who are the source language producers?\r\n\r\nA large portion of the premises (160k) were produced in the Flickr 30k corpus by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.\r\n\r\nThe Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $.1 and $.5 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.\r\n\r\nAn additional 4,000 premises come from the pilot study of the VisualGenome corpus. Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.",
"### Annotations",
"#### Annotation process\r\n\r\n56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).\r\n\r\nThe authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.",
"#### Who are the annotators?\r\n\r\nThe annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $.1 and $.5 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.",
"### Personal and Sensitive Information\r\n\r\nThe dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\r\n\r\nThe language reflects the content of the photos collected from Flickr, as described in the Data Collection section. Rudinger et al (2017) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.",
"### Other Known Limitations\r\n\r\nThe translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.\r\n\r\nGururangan et al (2018), Poliak et al (2018), and Tsuchiya (2018) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.",
"## Additional Information",
"### Dataset Curators\r\n\r\nThe SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the Stanford NLP group.\r\n\r\nIt was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.",
"### Licensing Information\r\n\r\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\r\n\r\nThe Stanford Natural Language Inference Corpus is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License."
] | [
"TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-n<100K #source_datasets-extended|snli #region-us \n",
"# Dataset Card for ESsnli",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: \r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary\r\n\r\nMachine-translated Spanish version of the Stanford Natural Language Inference dataset.",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nMachine-generated Spanish (from an English-language original corpus).",
"## Dataset Structure",
"### Data Instances\r\n\r\nFor each instance, there is a string for the premise, a string for the hypothesis, a string for the label given by the annotator and a string for the gold label, as well as data about the images that originated the sentences. Note that each premise may appear three times with a different hypothesis and label. See the SNLI corpus viewer to explore more examples.",
"### Data Fields\r\n\r\n- 'sentence1': a string used to determine the truthfulness of the hypothesis\r\n- 'sentence2': a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise\r\n- 'label': a string whose value may be either \"entailment\", \"contradiction\" or \"neutral\".",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\r\n\r\nThis corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.",
"### Source Data",
"#### Initial Data Collection and Normalization\r\n\r\nThe hypotheses (sentence1) were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.\r\n\r\nCrowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).\r\n\r\nThe corpus includes content from the Flickr 30k corpus and the VisualGenome corpus. The photo captions used to prompt the data creation were collected on Flickr by Young et al. (2014), who extended the Flickr 8K dataset developed by Hodosh et al. (2013). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in MS-COCO and YFCC100M.\r\n\r\nThe premises (sentence2) from the Flickr 30k corpus corrected for spelling using the Linux spell checker and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.",
"#### Who are the source language producers?\r\n\r\nA large portion of the premises (160k) were produced in the Flickr 30k corpus by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.\r\n\r\nThe Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $.1 and $.5 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.\r\n\r\nAn additional 4,000 premises come from the pilot study of the VisualGenome corpus. Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers that participated over the course of the 6 months of data collection are aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.",
"### Annotations",
"#### Annotation process\r\n\r\n56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).\r\n\r\nThe authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.",
"#### Who are the annotators?\r\n\r\nThe annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $.1 and $.5 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.",
"### Personal and Sensitive Information\r\n\r\nThe dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\r\n\r\nThe language reflects the content of the photos collected from Flickr, as described in the Data Collection section. Rudinger et al (2017) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.",
"### Other Known Limitations\r\n\r\nThe translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.\r\n\r\nGururangan et al (2018), Poliak et al (2018), and Tsuchiya (2018) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.",
"## Additional Information",
"### Dataset Curators\r\n\r\nThe SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the Stanford NLP group.\r\n\r\nIt was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.",
"### Licensing Information\r\n\r\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\r\n\r\nThe Stanford Natural Language Inference Corpus is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License."
] |
e7b37332d07b614d95d1dd7c99904f825180f08a |
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
Every (x,y,z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom.
Every receptor coordinate maps onto the Calpha coordinate of that residue.
The dataset can be used to fine-tune a language model; all data comes from PDBbind-cn.
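For reference, the tokenization and token-to-coordinate mapping described above can be reproduced roughly as follows. The regular expression is the commonly used atom-level pattern from Schwaller et al., and the pairing helper's arguments are placeholders for whatever column names the preprocessing actually produces:

```python
import math
import re

# Atom-level SMILES tokenizer (regex from Schwaller et al.); assumed to match
# the tokenization used when coordinates were aligned to tokens.
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    return SMILES_REGEX.findall(smiles)

def atom_tokens_with_coords(smiles, ligand_xyz):
    """Pair each SMILES token with its (x, y, z); tokens whose coordinates are
    nan (bonds, brackets, ring digits, ...) are skipped."""
    pairs = []
    for token, xyz in zip(tokenize_smiles(smiles), ligand_xyz):
        if not any(math.isnan(c) for c in xyz):
            pairs.append((token, xyz))
    return pairs

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, one token per atom/symbol
```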
### Use the already preprocessed data
Load a train/validation split using
```
from datasets import load_dataset

# 90/10 split of the single released "train" split
train = load_dataset("jglaser/pdbbind_complexes", split="train[:90%]")
validation = load_dataset("jglaser/pdbbind_complexes", split="train[90%:]")
```
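Because the exact column layout depends on the preprocessing script, it can be worth inspecting the schema before writing any training code; the snippet below uses only the standard `datasets` API and assumes nothing about the column names:

```python
print(train.column_names)   # sequence, SMILES and coordinate columns produced by the preprocessing
example = train[0]
print({k: type(v).__name__ for k, v in example.items()})
```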
### Pre-process yourself
To manually perform the preprocessing, download the data sets from PDBbind-cn.
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
| jglaser/pdbbind_complexes | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | 2022-03-26T21:30:56+00:00 | {"tags": ["molecules", "chemistry", "SMILES"]} | 2022-05-14T19:15:20+00:00 | [] | [] | TAGS
#molecules #chemistry #SMILES #region-us
|
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes.
SMILES are assumed to be tokenized by the regex from P. Schwaller
Every (x,y,z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom
Every receptor coordinate maps onto the Calpha coordinate of that residue.
The dataset can be used to fine-tune a language model, all data comes from PDBind-cn.
### Use the already preprocessed data
Load a test/train split using
### Pre-process yourself
To manually perform the preprocessing, download the data sets from P.DBBind-cn
Register for an account at <URL confirm the validation
email, then login and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in 'pdbbind/data'
Run the script 'URL' in a compute job on an MPI-enabled cluster
(e.g., 'mpirun -n 64 URL').
| [
"## How to use the data sets\n\nThis dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates\nof their complexes.\n\nSMILES are assumed to be tokenized by the regex from P. Schwaller\n\nEvery (x,y,z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom\n\nEvery receptor coordinate maps onto the Calpha coordinate of that residue.\n\nThe dataset can be used to fine-tune a language model, all data comes from PDBind-cn.",
"### Use the already preprocessed data\n\nLoad a test/train split using",
"### Pre-process yourself\n\nTo manually perform the preprocessing, download the data sets from P.DBBind-cn\n\nRegister for an account at <URL confirm the validation\nemail, then login and download \n\n- the Index files (1)\n- the general protein-ligand complexes (2)\n- the refined protein-ligand complexes (3)\n\nExtract those files in 'pdbbind/data'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL')."
] | [
"TAGS\n#molecules #chemistry #SMILES #region-us \n",
"## How to use the data sets\n\nThis dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates\nof their complexes.\n\nSMILES are assumed to be tokenized by the regex from P. Schwaller\n\nEvery (x,y,z) ligand coordinate maps onto a SMILES token, and is *nan* if the token does not represent an atom\n\nEvery receptor coordinate maps onto the Calpha coordinate of that residue.\n\nThe dataset can be used to fine-tune a language model, all data comes from PDBind-cn.",
"### Use the already preprocessed data\n\nLoad a test/train split using",
"### Pre-process yourself\n\nTo manually perform the preprocessing, download the data sets from P.DBBind-cn\n\nRegister for an account at <URL confirm the validation\nemail, then login and download \n\n- the Index files (1)\n- the general protein-ligand complexes (2)\n- the refined protein-ligand complexes (3)\n\nExtract those files in 'pdbbind/data'\n\nRun the script 'URL' in a compute job on an MPI-enabled cluster\n(e.g., 'mpirun -n 64 URL')."
] |
204d96b91f91c546df38a2284d250eb346fe0c77 |
# Dataset Card for ekar_chinese
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** [email protected]
### Dataset Summary
***New!*** (9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find the previous version (as in the paper) in the `v1.0` branch of the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
### Languages
This dataset is in Chinese; an [English version](https://huggingface.co/datasets/Jiangjie/ekar_english) is also available.
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
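As a quick-start sketch, an instance like the one above can be turned into a multiple-choice prompt as follows. The Hub ID is taken from this card's own links and the field names from the instance above; both are assumptions to verify against the loaded dataset, and the formatting is illustrative rather than the official baseline:

```python
from datasets import load_dataset

ds = load_dataset("Jiangjie/ekar_chinese", split="train")  # ID assumed from the card's links

def to_prompt(ex, easy_mode=True):
    """EASY mode may include the annotated query explanation; HARD mode must not."""
    lines = [f"Query: {ex['question']}"]
    if easy_mode:
        lines.append(f"Query explanation: {ex['explanation'][0]}")
    for label, text in zip(ex["choices"]["label"], ex["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

print(to_prompt(ds[0], easy_mode=False))
print("Gold:", ds[0]["answerKey"])
```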
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 1155 | 165 | 335 |
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased toward Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
```
| jiangjiechen/ekar_chinese | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:explanation-generation",
"size_categories:1K<n<2K",
"source_datasets:original",
"language:zh",
"license:afl-3.0",
"region:us"
] | 2022-03-27T05:00:49+00:00 | {"language": ["zh"], "license": ["afl-3.0"], "size_categories": ["1K<n<2K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-generation"], "task_ids": ["analogical-qa", "explanation-generation"]} | 2023-01-11T08:12:59+00:00 | [] | [
"zh"
] | TAGS
#task_categories-question-answering #task_categories-text-generation #task_ids-explanation-generation #size_categories-1K<n<2K #source_datasets-original #language-Chinese #license-afl-3.0 #region-us
| Dataset Card for ekar\_chinese
==============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Paper: E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
* Leaderboard: URL
* Point of Contact: jjchen19@URL
### Dataset Summary
*New!*(9/18/2022) E-KAR 'v1.1' is officially released (at the 'main' branch), with a higher-quality English dataset! In 'v1.1', we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the 'v1.0' branch in the repo. For more information please refer to URL.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
* 'analogical-qa': The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
* 'explanation-generation': The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
* 'EASY mode': where query explanation can be used as part of the input.
* 'HARD mode': no explanation is allowed as part of the input.
### Languages
This dataset is in Chinese, with its English version.
Dataset Structure
-----------------
### Data Instances
### Data Fields
* id: a string identifier for each example.
* question: query terms.
* choices: candidate answer terms.
* answerKey: correct answer.
* explanation: explanations for query (1st) and candidate answers (2nd-5th).
* relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
Additional Information
----------------------
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
| [
"### Dataset Summary\n\n\n*New!*(9/18/2022) E-KAR 'v1.1' is officially released (at the 'main' branch), with a higher-quality English dataset! In 'v1.1', we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the 'v1.0' branch in the repo. For more information please refer to URL.\n\n\nThe ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.",
"### Supported Tasks and Leaderboards\n\n\n* 'analogical-qa': The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.\n* 'explanation-generation': The dataset can be used to generate free-text explanations to rationalize analogical reasoning.\n\n\nThis dataset supports two task modes: EASY mode and HARD mode:\n\n\n* 'EASY mode': where query explanation can be used as part of the input.\n* 'HARD mode': no explanation is allowed as part of the input.",
"### Languages\n\n\nThis dataset is in Chinese, with its English version.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* id: a string identifier for each example.\n* question: query terms.\n* choices: candidate answer terms.\n* answerKey: correct answer.\n* explanation: explanations for query (1st) and candidate answers (2nd-5th).\n* relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.",
"### Discussion of Biases\n\n\nThis dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture.",
"### Other Known Limitations\n\n\n1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.\n2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.",
"### Licensing Information"
] | [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_ids-explanation-generation #size_categories-1K<n<2K #source_datasets-original #language-Chinese #license-afl-3.0 #region-us \n",
"### Dataset Summary\n\n\n*New!*(9/18/2022) E-KAR 'v1.1' is officially released (at the 'main' branch), with a higher-quality English dataset! In 'v1.1', we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the 'v1.0' branch in the repo. For more information please refer to URL.\n\n\nThe ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.",
"### Supported Tasks and Leaderboards\n\n\n* 'analogical-qa': The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.\n* 'explanation-generation': The dataset can be used to generate free-text explanations to rationalize analogical reasoning.\n\n\nThis dataset supports two task modes: EASY mode and HARD mode:\n\n\n* 'EASY mode': where query explanation can be used as part of the input.\n* 'HARD mode': no explanation is allowed as part of the input.",
"### Languages\n\n\nThis dataset is in Chinese, with its English version.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* id: a string identifier for each example.\n* question: query terms.\n* choices: candidate answer terms.\n* answerKey: correct answer.\n* explanation: explanations for query (1st) and candidate answers (2nd-5th).\n* relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.",
"### Discussion of Biases\n\n\nThis dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture.",
"### Other Known Limitations\n\n\n1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.\n2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.",
"### Licensing Information"
] |
a4aa3ae597a4308ef79cd34138a2ddeba611ed51 |
# Dataset Card for ekar_english
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** [email protected]
### Dataset Summary
***New!*** (9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find the previous version (as in the paper) in the `v1.0` branch of the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
### Languages
This dataset is in English, translated from [its Chinese version](https://huggingface.co/datasets/Jiangjie/ekar_chinese/).
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
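For the explanation-generation task, a sequence-to-sequence pair can be built from the same fields. This is a hedged sketch: the field names are taken from the instance above and the serialization is chosen for illustration, not necessarily the one used by the official baselines:

```python
def explanation_generation_pair(ex):
    """Source: query plus candidate answers; target: the free-text explanations
    (explanation[0] rationalizes the query, explanation[1:] the candidates)."""
    choices = " ".join(
        f"({label}) {text}"
        for label, text in zip(ex["choices"]["label"], ex["choices"]["text"])
    )
    source = f"Query: {ex['question']} Choices: {choices}"
    target = " ".join(ex["explanation"])
    return source, target

# Example: apply to a record loaded e.g. via
# load_dataset("Jiangjie/ekar_english", split="train")[0] (ID assumed from this card).
```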
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 870| 119| 262|
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the authors' efforts to remove or rewrite such problems, it may still contain information biased toward Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there may still be unsatisfactory samples in the English dataset, e.g., culture-specific items or sentences that became ambiguous after translation.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
``` | jiangjiechen/ekar_english | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:explanation-generation",
"size_categories:1K<n<2K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-03-27T05:03:06+00:00 | {"language": ["en"], "license": ["afl-3.0"], "size_categories": ["1K<n<2K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-generation"], "task_ids": ["analogical-qa", "explanation-generation"]} | 2023-01-11T08:13:18+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-text-generation #task_ids-explanation-generation #size_categories-1K<n<2K #source_datasets-original #language-English #license-afl-3.0 #region-us
| Dataset Card for ekar\_english
==============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Paper: E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
* Leaderboard: URL
* Point of Contact: jjchen19@URL
### Dataset Summary
*New!*(9/18/2022) E-KAR 'v1.1' is officially released (at the 'main' branch), with a higher-quality English dataset! In 'v1.1', we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the 'v1.0' branch in the repo. For more information please refer to URL.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
* 'analogical-qa': The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
* 'explanation-generation': The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
* 'EASY mode': where query explanation can be used as part of the input.
* 'HARD mode': no explanation is allowed as part of the input.
### Languages
This dataset is in English, which is translated from its Chinese version
Dataset Structure
-----------------
### Data Instances
### Data Fields
* id: a string identifier for each example.
* question: query terms.
* choices: candidate answer terms.
* answerKey: correct answer.
* explanation: explanations for query (1st) and candidate answers (2nd-5th).
* relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the effort that the authors try to remove or rewrite such problems, it may still contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfying samples in the English dataset, e.g., culture-specific ones, ambiguous ones after translation, etc.
Additional Information
----------------------
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
| [
"### Dataset Summary\n\n\n*New!*(9/18/2022) E-KAR 'v1.1' is officially released (at the 'main' branch), with a higher-quality English dataset! In 'v1.1', we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the 'v1.0' branch in the repo. For more information please refer to URL.\n\n\nThe ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.",
"### Supported Tasks and Leaderboards\n\n\n* 'analogical-qa': The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.\n* 'explanation-generation': The dataset can be used to generate free-text explanations to rationalize analogical reasoning.\n\n\nThis dataset supports two task modes: EASY mode and HARD mode:\n\n\n* 'EASY mode': where query explanation can be used as part of the input.\n* 'HARD mode': no explanation is allowed as part of the input.",
"### Languages\n\n\nThis dataset is in English, which is translated from its Chinese version\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* id: a string identifier for each example.\n* question: query terms.\n* choices: candidate answer terms.\n* answerKey: correct answer.\n* explanation: explanations for query (1st) and candidate answers (2nd-5th).\n* relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.",
"### Discussion of Biases\n\n\nThis dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the effort that the authors try to remove or rewrite such problems, it may still contain information biased to Chinese culture.",
"### Other Known Limitations\n\n\n1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.\n2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.\n3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfying samples in the English dataset, e.g., culture-specific ones, ambiguous ones after translation, etc.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.",
"### Licensing Information"
] | [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_ids-explanation-generation #size_categories-1K<n<2K #source_datasets-original #language-English #license-afl-3.0 #region-us \n",
"### Dataset Summary\n\n\n*New!*(9/18/2022) E-KAR 'v1.1' is officially released (at the 'main' branch), with a higher-quality English dataset! In 'v1.1', we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the 'v1.0' branch in the repo. For more information please refer to URL.\n\n\nThe ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.",
"### Supported Tasks and Leaderboards\n\n\n* 'analogical-qa': The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.\n* 'explanation-generation': The dataset can be used to generate free-text explanations to rationalize analogical reasoning.\n\n\nThis dataset supports two task modes: EASY mode and HARD mode:\n\n\n* 'EASY mode': where query explanation can be used as part of the input.\n* 'HARD mode': no explanation is allowed as part of the input.",
"### Languages\n\n\nThis dataset is in English, which is translated from its Chinese version\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* id: a string identifier for each example.\n* question: query terms.\n* choices: candidate answer terms.\n* answerKey: correct answer.\n* explanation: explanations for query (1st) and candidate answers (2nd-5th).\n* relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.",
"### Discussion of Biases\n\n\nThis dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the effort that the authors try to remove or rewrite such problems, it may still contain information biased to Chinese culture.",
"### Other Known Limitations\n\n\n1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.\n2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.\n3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfying samples in the English dataset, e.g., culture-specific ones, ambiguous ones after translation, etc.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.",
"### Licensing Information"
] |
a1a0b053c5c1fdfb9a26a5557be62272b1582b2c |
# Dataset Card for taiwanese_english_translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://taigi.fhl.net/list.html
### Dataset Summary
[More Information Needed]
### Languages
Source Language: Taiwanese (Tailo romanization system)
Target Language: English
## Dataset Structure
CSV files with two columns: `Tailo,English` (Tailo transcription and its English translation).
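A minimal loading sketch using the generic CSV builder from Hugging Face `datasets`; the file name and the assumption that there is no header row are placeholders to adjust against the files actually shipped in this repository.

```python
from datasets import load_dataset

# "train.csv" is a hypothetical file name; column_names assumes the rows
# follow the Tailo,English layout described above with no header line.
ds = load_dataset(
    "csv",
    data_files={"train": "train.csv"},
    column_names=["Tailo", "English"],
)
print(ds["train"][0])  # {'Tailo': ..., 'English': ...}
```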
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@atenglens](https://github.com/atenglens) for adding this dataset. | atenglens/taiwanese_english_translation | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"language_creators:other",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|other",
"language:tw",
"language:en",
"conditional-text-generation",
"region:us"
] | 2022-03-27T05:31:42+00:00 | {"annotations_creators": [], "language_creators": ["other"], "language": ["tw", "en"], "license": [], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["question-answering", "text2text-generation", "text-generation", "translation"], "task_ids": ["language-modeling"], "pretty_name": "taiwanese_english_translation", "tags": ["conditional-text-generation"]} | 2022-10-24T18:51:45+00:00 | [] | [
"tw",
"en"
] | TAGS
#task_categories-question-answering #task_categories-text2text-generation #task_categories-text-generation #task_categories-translation #task_ids-language-modeling #language_creators-other #multilinguality-translation #size_categories-unknown #source_datasets-extended|other #language-Twi #language-English #conditional-text-generation #region-us
|
# Dataset Card for taiwanese_english_translation
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
### Dataset Summary
### Languages
Source Language: Taiwanese (Tailo romanization system)
Target Language: English
## Dataset Structure
csv: Tailo,English
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @atenglens for adding this dataset. | [
"# Dataset Card for taiwanese_english_translation",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL",
"### Dataset Summary",
"### Languages\n\nSource Language: Taiwanese (Tailo romanization system)\nTarget Language: English",
"## Dataset Structure\n\ncsv: Tailo,English",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @atenglens for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_categories-text-generation #task_categories-translation #task_ids-language-modeling #language_creators-other #multilinguality-translation #size_categories-unknown #source_datasets-extended|other #language-Twi #language-English #conditional-text-generation #region-us \n",
"# Dataset Card for taiwanese_english_translation",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL",
"### Dataset Summary",
"### Languages\n\nSource Language: Taiwanese (Tailo romanization system)\nTarget Language: English",
"## Dataset Structure\n\ncsv: Tailo,English",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @atenglens for adding this dataset."
] |
e3a39e3d6c1ff7e58cbeac653518a300a875adfd |
Machine-translated Ohsumed collection (EN to ID)
Original corpora: http://disi.unitn.it/moschitti/corpora.htm
Translated using: https://huggingface.co/Helsinki-NLP/opus-mt-en-id
Compatible with the Hugging Face text-classification example script (tested with transformers 4.17):
https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/text-classification
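Before plugging the corpus into that script, a quick sanity check of the splits and columns can look like this — a sketch only; the split and column layout should be confirmed against the printout:

```python
from datasets import load_dataset

# Loads this repository from the Hub; inspect the printed structure to see
# which splits and columns (e.g. text/label) are actually present.
ds = load_dataset("nadhifikbarw/id_ohsuhmed")
print(ds)
first_split = list(ds.keys())[0]
print(ds[first_split][0])
```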
[Moschitti, 2003a]. Alessandro Moschitti, Natural Language Processing and Text Categorization: a study on the reciprocal beneficial interactions, PhD thesis, University of Rome Tor Vergata, Rome, Italy, May 2003. | nadhifikbarw/id_ohsuhmed | [
"task_categories:text-classification",
"language:id",
"region:us"
] | 2022-03-27T06:01:29+00:00 | {"language": ["id"], "task_categories": ["text-classification"], "source": ["http://disi.unitn.it/moschitti/corpora.htm"]} | 2022-10-25T09:03:35+00:00 | [] | [
"id"
] | TAGS
#task_categories-text-classification #language-Indonesian #region-us
|
Machine translated Ohsumed collection (EN to ID)
Original corpora: URL
Translated using: URL
Compatible with HuggingFace text-classification script (Tested in 4.17)
URL
[Moschitti, 2003a]. Alessandro Moschitti, Natural Language Processing and Text Categorization: a study on the reciprocal beneficial interactions, PhD thesis, University of Rome Tor Vergata, Rome, Italy, May 2003. | [] | [
"TAGS\n#task_categories-text-classification #language-Indonesian #region-us \n"
] |
dcfc1a380a19b9f4b30cec04a4387be24da0b2b3 | Text-to-text implementation based on https://github.com/salesforce/DocNLI
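A minimal loading sketch — assuming this repository loads directly with `load_dataset`; the printed structure should resemble the listing below:

```python
from datasets import load_dataset

# Text-to-text DocNLI conversion; each record carries idx, inputs and targets.
ds = load_dataset("stjokerli/TextToText_DocNLI_seqio")
print(ds)  # expected to match the DatasetDict listing below
```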
DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 942314
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 234258
})
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 267086
})
}) | stjokerli/TextToText_DocNLI_seqio | [
"region:us"
] | 2022-03-27T13:27:45+00:00 | {} | 2022-03-27T13:46:59+00:00 | [] | [] | TAGS
#region-us
| text to text implementation basing on URL
DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 942314
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 234258
})
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 267086
})
}) | [] | [
"TAGS\n#region-us \n"
] |
1938d356e66bcdcdd662dd7b9285d0c4a0bc9c6b | The squad_v010_allanswers task from the T5 paper, as defined in https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/tasks.py
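As with the DocNLI conversion above, a minimal loading sketch (whether this repository loads directly with `load_dataset` is an assumption); the nested "squad" grouping shown below reflects how the author printed the data, so confirm the actual split names from your own printout:

```python
from datasets import load_dataset

ds = load_dataset("stjokerli/TextToText_squad_seqio")
print(ds)  # compare against the listing below
```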
DatasetDict({
squad: DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 87599
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 10570
})
})
}) | stjokerli/TextToText_squad_seqio | [
"region:us"
] | 2022-03-27T21:28:53+00:00 | {} | 2022-03-27T21:39:25+00:00 | [] | [] | TAGS
#region-us
| squad_v010_allanswers in T5 paper URL
DatasetDict({
squad: DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 87599
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 10570
})
})
}) | [] | [
"TAGS\n#region-us \n"
] |
e1abda026f687e917a8d9895469194736ebe872c |
# Dataset Card for Adversarial GLUE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://adversarialglue.github.io/
- **Repository:**
- **Paper:** [arXiv](https://arxiv.org/pdf/2111.02840.pdf)
- **Leaderboard:**
- **Point of Contact:**
- **Size of downloaded dataset files:** 202.75 kB
### Dataset Summary
Adversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark that focuses on the adversarial robustness evaluation of language models. It covers five natural language understanding tasks from the famous GLUE tasks and is an adversarial version of the GLUE benchmark.
AdvGLUE considers textual adversarial attacks from different perspectives and hierarchies, including word-level transformations, sentence-level manipulations, and human-written adversarial examples, which provide comprehensive coverage of various adversarial linguistic phenomena.
### Supported Tasks and Leaderboards
Leaderboard available on the homepage: [https://adversarialglue.github.io/](https://adversarialglue.github.io/).
### Languages
AdvGLUE is derived from (and adversarially deviates from) the GLUE benchmark, whose base language is English; the AdvGLUE data is likewise in English.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 202.75 kB
- **Example**:
```python
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
```
### Data Fields
The data fields are the same as in the GLUE dataset and differ by task.
The data fields are the same among all splits.
#### adv_mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
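A small illustration of mapping the integer `label` back to its class name via the `ClassLabel` feature; the config and split names follow this card:

```python
from datasets import load_dataset

ds = load_dataset("adv_glue", "adv_mnli", split="validation")
label_feature = ds.features["label"]
ex = ds[0]
# ClassLabel converts between integer ids and the names listed above.
print(ex["label"], "->", label_feature.int2str(ex["label"]))
```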
#### adv_mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### adv_mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### adv_qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
Adversarial GLUE provides only a 'dev' split, exposed here as `validation`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
### Citation Information
```bibtex
@article{Wang2021AdversarialGA,
title={Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models},
author={Boxin Wang and Chejian Xu and Shuohang Wang and Zhe Gan and Yu Cheng and Jianfeng Gao and Ahmed Hassan Awadallah and B. Li},
journal={ArXiv},
year={2021},
volume={abs/2111.02840}
}
```
### Contributions
Thanks to [@jxmorris12](https://github.com/jxmorris12) for adding this dataset. | adv_glue | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-classification",
"annotations_creators:other",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|glue",
"language:en",
"license:cc-by-sa-4.0",
"paraphrase-identification",
"qa-nli",
"arxiv:2111.02840",
"region:us"
] | 2022-03-28T10:12:33+00:00 | {"annotations_creators": ["other"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["extended|glue"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "sentiment-classification"], "pretty_name": "Adversarial GLUE", "config_names": ["adv_mnli", "adv_mnli_mismatched", "adv_qnli", "adv_qqp", "adv_rte", "adv_sst2"], "tags": ["paraphrase-identification", "qa-nli"], "dataset_info": [{"config_name": "adv_mnli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 23712, "num_examples": 121}], "download_size": 13485, "dataset_size": 23712}, {"config_name": "adv_mnli_mismatched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 40953, "num_examples": 162}], "download_size": 25166, "dataset_size": 40953}, {"config_name": "adv_qnli", "features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 34850, "num_examples": 148}], "download_size": 19111, "dataset_size": 34850}, {"config_name": "adv_qqp", "features": [{"name": "question1", "dtype": "string"}, {"name": "question2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_duplicate", "1": "duplicate"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 9908, "num_examples": 78}], "download_size": 7705, "dataset_size": 9908}, {"config_name": "adv_rte", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 25979, "num_examples": 81}], "download_size": 15872, "dataset_size": 25979}, {"config_name": "adv_sst2", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "validation", "num_bytes": 16572, "num_examples": 148}], "download_size": 10833, "dataset_size": 16572}], "configs": [{"config_name": "adv_mnli", "data_files": [{"split": "validation", "path": "adv_mnli/validation-*"}]}, {"config_name": "adv_mnli_mismatched", "data_files": [{"split": "validation", "path": "adv_mnli_mismatched/validation-*"}]}, {"config_name": "adv_qnli", "data_files": [{"split": "validation", "path": "adv_qnli/validation-*"}]}, {"config_name": "adv_qqp", "data_files": [{"split": "validation", "path": "adv_qqp/validation-*"}]}, {"config_name": "adv_rte", "data_files": [{"split": "validation", "path": "adv_rte/validation-*"}]}, {"config_name": "adv_sst2", "data_files": [{"split": "validation", "path": "adv_sst2/validation-*"}]}]} | 2024-01-09T11:45:55+00:00 | [
"2111.02840"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-natural-language-inference #task_ids-sentiment-classification #annotations_creators-other #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|glue #language-English #license-cc-by-sa-4.0 #paraphrase-identification #qa-nli #arxiv-2111.02840 #region-us
|
# Dataset Card for Adversarial GLUE
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper: arXiv
- Leaderboard:
- Point of Contact:
- Size of downloaded dataset files: 202.75 kB
### Dataset Summary
Adversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark that focuses on the adversarial robustness evaluation of language models. It covers five natural language understanding tasks from the famous GLUE tasks and is an adversarial version of GLUE benchmark.
AdvGLUE considers textual adversarial attacks from different perspectives and hierarchies, including word-level transformations, sentence-level manipulations, and human-written adversarial examples, which provide comprehensive coverage of various adversarial linguistic phenomena.
### Supported Tasks and Leaderboards
Leaderboard available on the homepage: URL
### Languages
AdvGLUE deviates from the GLUE dataset, which has a base language of English.
## Dataset Structure
### Data Instances
#### default
- Size of downloaded dataset files: 202.75 kB
- Example:
### Data Fields
The data fields are the same as in the GLUE dataset, which differ by task.
The data fields are the same among all splits.
#### adv_mnli
- 'premise': a 'string' feature.
- 'hypothesis': a 'string' feature.
- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
- 'idx': a 'int32' feature.
#### adv_mnli_matched
- 'premise': a 'string' feature.
- 'hypothesis': a 'string' feature.
- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
- 'idx': a 'int32' feature.
#### adv_mnli_mismatched
- 'premise': a 'string' feature.
- 'hypothesis': a 'string' feature.
- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
- 'idx': a 'int32' feature.
#### adv_qnli
#### adv_qqp
#### adv_rte
#### adv_sst2
### Data Splits
Adversarial GLUE provides only a 'dev' split.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
The dataset is distributed under the CC BY-SA 4.0 license.
### Contributions
Thanks to @jxmorris12 for adding this dataset. | [
"# Dataset Card for Adversarial GLUE",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: arXiv\n- Leaderboard:\n- Point of Contact:\n- Size of downloaded dataset files: 202.75 kB",
"### Dataset Summary\n\nAdversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark that focuses on the adversarial robustness evaluation of language models. It covers five natural language understanding tasks from the famous GLUE tasks and is an adversarial version of GLUE benchmark.\n\nAdvGLUE considers textual adversarial attacks from different perspectives and hierarchies, including word-level transformations, sentence-level manipulations, and human-written adversarial examples, which provide comprehensive coverage of various adversarial linguistic phenomena.",
"### Supported Tasks and Leaderboards\n\nLeaderboard available on the homepage: URL",
"### Languages\n\nAdvGLUE deviates from the GLUE dataset, which has a base language of English.",
"## Dataset Structure",
"### Data Instances",
"#### default\n\n- Size of downloaded dataset files: 202.75 kB\n- Example:",
"### Data Fields\n\nThe data fields are the same as in the GLUE dataset, which differ by task.\n\n\nThe data fields are the same among all splits.",
"#### adv_mnli\n- 'premise': a 'string' feature.\n- 'hypothesis': a 'string' feature.\n- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n- 'idx': a 'int32' feature.",
"#### adv_mnli_matched\n- 'premise': a 'string' feature.\n- 'hypothesis': a 'string' feature.\n- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n- 'idx': a 'int32' feature.",
"#### adv_mnli_mismatched\n- 'premise': a 'string' feature.\n- 'hypothesis': a 'string' feature.\n- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n- 'idx': a 'int32' feature.",
"#### adv_qnli",
"#### adv_qqp",
"#### adv_rte",
"#### adv_sst2",
"### Data Splits\n\nAdversarial GLUE provides only a 'dev' split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThe dataset is distributed under the CC BY-SA 4.0 license.",
"### Contributions\n\nThanks to @jxmorris12 for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-sentiment-classification #annotations_creators-other #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|glue #language-English #license-cc-by-sa-4.0 #paraphrase-identification #qa-nli #arxiv-2111.02840 #region-us \n",
"# Dataset Card for Adversarial GLUE",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: arXiv\n- Leaderboard:\n- Point of Contact:\n- Size of downloaded dataset files: 202.75 kB",
"### Dataset Summary\n\nAdversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark that focuses on the adversarial robustness evaluation of language models. It covers five natural language understanding tasks from the famous GLUE tasks and is an adversarial version of GLUE benchmark.\n\nAdvGLUE considers textual adversarial attacks from different perspectives and hierarchies, including word-level transformations, sentence-level manipulations, and human-written adversarial examples, which provide comprehensive coverage of various adversarial linguistic phenomena.",
"### Supported Tasks and Leaderboards\n\nLeaderboard available on the homepage: URL",
"### Languages\n\nAdvGLUE deviates from the GLUE dataset, which has a base language of English.",
"## Dataset Structure",
"### Data Instances",
"#### default\n\n- Size of downloaded dataset files: 202.75 kB\n- Example:",
"### Data Fields\n\nThe data fields are the same as in the GLUE dataset, which differ by task.\n\n\nThe data fields are the same among all splits.",
"#### adv_mnli\n- 'premise': a 'string' feature.\n- 'hypothesis': a 'string' feature.\n- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n- 'idx': a 'int32' feature.",
"#### adv_mnli_matched\n- 'premise': a 'string' feature.\n- 'hypothesis': a 'string' feature.\n- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n- 'idx': a 'int32' feature.",
"#### adv_mnli_mismatched\n- 'premise': a 'string' feature.\n- 'hypothesis': a 'string' feature.\n- 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n- 'idx': a 'int32' feature.",
"#### adv_qnli",
"#### adv_qqp",
"#### adv_rte",
"#### adv_sst2",
"### Data Splits\n\nAdversarial GLUE provides only a 'dev' split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThe dataset is distributed under the CC BY-SA 4.0 license.",
"### Contributions\n\nThanks to @jxmorris12 for adding this dataset."
] |
7ea460abd146b010b3668f374c1f51068c6ff032 |
# Dataset Card for Corpus Carolina
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [sites.usp.br/corpuscarolina](https://sites.usp.br/corpuscarolina/)
- **Current Version:** 1.2 (Ada)
- **Point of Contact:** [LaViHD](mailto:[email protected])
### Dataset Summary
Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). This corpus contains documents and texts extracted from the web
and includes information (metadata) about its provenance and typology.
The documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example below). Codes are three-letter strings and the possible values are:
- `dat` : datasets and other corpora;
- `jud` : judicial branch;
- `leg` : legislative branch;
- `pub` : public domain works;
- `soc` : social media;
- `uni` : university domains;
- `wik` : wikis.
Dataset Versioning:
The Carolina Corpus is under continuous development, resulting in multiple versions. The current version is v1.2, but v1.1 is also available. You can access different versions of the corpus using the `revision` parameter of `load_dataset`.
Usage Example:
```python
from datasets import load_dataset
# to load all taxonomies
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")
# to load social media documents
social_media = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc")
# to load previous version
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina", revision="v1.1")
```
### Supported Tasks
Carolina corpus was compiled for academic purposes,
namely linguistic and computational analysis.
### Languages
Contemporary Brazilian Portuguese (1970-2021).
## Dataset Structure
Files are stored inside the `corpus` folder with a subfolder
for each taxonomy. Every file follows an XML structure
(TEI P5) and contains multiple extracted documents. For
each document, the text and metadata are exposed as
`text` and `meta` features, respectively.
### Data Instances
Every instance has the following structure.
```
{
"meta": datasets.Value("string"),
"text": datasets.Value("string")
}
```
| Code | Taxonomy | Instances | Size |
|:----:|:---------------------------|----------:|-------:|
| | **Total** | 2107045 | 11 GB |
| dat | Datasets and other Corpora | 1102049 | 4.4 GB |
| wik | Wikis | 960139 | 5.2 GB |
| jud | Judicial Branch | 40464 | 1.5 GB |
| leg | Legislative Branch | 13 | 25 MB |
| soc | Social Media | 3413 | 17 MB |
| uni | University Domains | 941 | 10 MB |
| pub | Public Domain Works | 26 | 4.5 MB |
### Data Fields
- `meta`: an XML string with a TEI-conformant `teiHeader` tag. It is exposed as text and needs to be parsed in order to access the actual metadata;
- `text`: a string containing the extracted document.
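Since `meta` is exposed as raw text, one way to reach individual header elements is to parse it with the standard library. A sketch, assuming the header declares the standard TEI namespace; the taxonomy code and split name follow the examples in this card:

```python
import xml.etree.ElementTree as ET
from datasets import load_dataset

# Taxonomy code and split name follow the usage example and the Data Splits
# section of this card; the TEI namespace URI is an assumption.
ds = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc", split="corpus")

header = ET.fromstring(ds[0]["meta"])
ns = {"tei": "http://www.tei-c.org/ns/1.0"}
title = header.find(".//tei:titleStmt/tei:title", ns)
print(title.text if title is not None else "no <tei:title> found")
```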
### Data Splits
As a general corpus, Carolina does not have splits. The dataset is loaded with `corpus` as its single split.
## Additional Information
### Dataset Curators
The Corpus Carolina is developed by a multidisciplinary
team of linguists and computer scientists, members of the
Virtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.
### Licensing Information
The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was
compiled for academic purposes, namely linguistic and computational analysis.
It is composed of texts assembled in various digital repositories, whose
licenses are multiple and therefore should be observed when making use of the
corpus. The Carolina headers are licensed under Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International.
### Citation Information
```
@misc{corpusCarolinaV1.1,
title={
Carolina:
The Open Corpus for Linguistics and Artificial Intelligence
},
author={
Finger, Marcelo and
Paixão de Sousa, Maria Clara and
Namiuti, Cristiane and
Martins do Monte, Vanessa and
Costa, Aline Silva and
Serras, Felipe Ribas and
Sturzeneker, Mariana Lourenço and
Guets, Raquel de Paula and
Mesquita, Renata Morais and
Mello, Guilherme Lamartine de and
Crespo, Maria Clara Ramos Morales and
Rocha, Maria Lina de Souza Jeannine and
Brasil, Patrícia and
Silva, Mariana Marques da and
Palma, Mayara Feliciano
},
howpublished={\url{
https://sites.usp.br/corpuscarolina/corpus}},
year={2022},
note={Version 1.1 (Ada)},
}
```
| carolina-c4ai/corpus-carolina | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:pt",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-28T12:30:33+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["pt"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "Carolina", "language_bcp47": ["pt-BR"]} | 2023-03-23T19:46:16+00:00 | [] | [
"pt"
] | TAGS
#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1B<n<10B #source_datasets-original #language-Portuguese #license-cc-by-nc-sa-4.0 #region-us
| Dataset Card for Corpus Carolina
================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: URL
* Current Version: 1.2 (Ada)
* Point of Contact: LaViHD
### Dataset Summary
Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). This corpus contains documents and texts extracted from the web
and includes information (metadata) about its provenance and tipology.
The documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example bellow). Codes are 3-letters string and possible values are:
* 'dat' : datasets and other corpora;
* 'jud' : judicial branch;
* 'leg' : legislative branch;
* 'pub' : public domain works;
* 'soc' : social media;
* 'uni' : university domains;
* 'wik' : wikis.
Dataset Vesioning:
The Carolina Corpus is under continuous development resulting in multiple vesions. The current version is v1.2, but v1.1 is also available. You can access diferent vesions of the corpus using the 'revision' parameter on 'load\_dataset'.
Usage Example:
### Supported Tasks
Carolina corpus was compiled for academic purposes,
namely linguistic and computational analysis.
### Languages
Contemporary Brazilian Portuguese (1970-2021).
Dataset Structure
-----------------
Files are stored inside 'corpus' folder with a subfolder
for each taxonomy. Every file folows a XML structure
(TEI P5) and contains multiple extracted documents. For
each document, the text and metadata are exposed as
'text' and 'meta' features, respectively.
### Data Instances
Every instance have the following structure.
### Data Fields
* 'meta': a XML string with a TEI conformant 'teiHeader' tag. It is exposed as text and needs to be parsed in order to access the actual metada;
* 'text': a string containing the extracted document.
### Data Splits
As a general corpus, Carolina does not have splits. In order to load the dataset, it is used 'corpus' as its single split.
Additional Information
----------------------
### Dataset Curators
The Corpus Carolina is developed by a multidisciplinary
team of linguists and computer scientists, members of the
Virtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.
### Licensing Information
The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was
compiled for academic purposes, namely linguistic and computational analysis.
It is composed of texts assembled in various digital repositories, whose
licenses are multiple and therefore should be observed when making use of the
corpus. The Carolina headers are licensed under Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International."
| [
"### Dataset Summary\n\n\nCarolina is an Open Corpus for Linguistics and Artificial Intelligence with a\nrobust volume of texts of varied typology in contemporary Brazilian Portuguese\n(1970-2021). This corpus contains documents and texts extracted from the web\nand includes information (metadata) about its provenance and tipology.\n\n\nThe documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example bellow). Codes are 3-letters string and possible values are:\n\n\n* 'dat' : datasets and other corpora;\n* 'jud' : judicial branch;\n* 'leg' : legislative branch;\n* 'pub' : public domain works;\n* 'soc' : social media;\n* 'uni' : university domains;\n* 'wik' : wikis.\n\n\nDataset Vesioning:\n\n\nThe Carolina Corpus is under continuous development resulting in multiple vesions. The current version is v1.2, but v1.1 is also available. You can access diferent vesions of the corpus using the 'revision' parameter on 'load\\_dataset'.\n\n\nUsage Example:",
"### Supported Tasks\n\n\nCarolina corpus was compiled for academic purposes,\nnamely linguistic and computational analysis.",
"### Languages\n\n\nContemporary Brazilian Portuguese (1970-2021).\n\n\nDataset Structure\n-----------------\n\n\nFiles are stored inside 'corpus' folder with a subfolder\nfor each taxonomy. Every file folows a XML structure\n(TEI P5) and contains multiple extracted documents. For\neach document, the text and metadata are exposed as\n'text' and 'meta' features, respectively.",
"### Data Instances\n\n\nEvery instance have the following structure.",
"### Data Fields\n\n\n* 'meta': a XML string with a TEI conformant 'teiHeader' tag. It is exposed as text and needs to be parsed in order to access the actual metada;\n* 'text': a string containing the extracted document.",
"### Data Splits\n\n\nAs a general corpus, Carolina does not have splits. In order to load the dataset, it is used 'corpus' as its single split.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe Corpus Carolina is developed by a multidisciplinary\nteam of linguists and computer scientists, members of the\nVirtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.",
"### Licensing Information\n\n\nThe Open Corpus for Linguistics and Artificial Intelligence (Carolina) was\ncompiled for academic purposes, namely linguistic and computational analysis.\nIt is composed of texts assembled in various digital repositories, whose\nlicenses are multiple and therefore should be observed when making use of the\ncorpus. The Carolina headers are licensed under Creative Commons\nAttribution-NonCommercial-ShareAlike 4.0 International.\""
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1B<n<10B #source_datasets-original #language-Portuguese #license-cc-by-nc-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nCarolina is an Open Corpus for Linguistics and Artificial Intelligence with a\nrobust volume of texts of varied typology in contemporary Brazilian Portuguese\n(1970-2021). This corpus contains documents and texts extracted from the web\nand includes information (metadata) about its provenance and tipology.\n\n\nThe documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example bellow). Codes are 3-letters string and possible values are:\n\n\n* 'dat' : datasets and other corpora;\n* 'jud' : judicial branch;\n* 'leg' : legislative branch;\n* 'pub' : public domain works;\n* 'soc' : social media;\n* 'uni' : university domains;\n* 'wik' : wikis.\n\n\nDataset Vesioning:\n\n\nThe Carolina Corpus is under continuous development resulting in multiple vesions. The current version is v1.2, but v1.1 is also available. You can access diferent vesions of the corpus using the 'revision' parameter on 'load\\_dataset'.\n\n\nUsage Example:",
"### Supported Tasks\n\n\nCarolina corpus was compiled for academic purposes,\nnamely linguistic and computational analysis.",
"### Languages\n\n\nContemporary Brazilian Portuguese (1970-2021).\n\n\nDataset Structure\n-----------------\n\n\nFiles are stored inside 'corpus' folder with a subfolder\nfor each taxonomy. Every file folows a XML structure\n(TEI P5) and contains multiple extracted documents. For\neach document, the text and metadata are exposed as\n'text' and 'meta' features, respectively.",
"### Data Instances\n\n\nEvery instance have the following structure.",
"### Data Fields\n\n\n* 'meta': a XML string with a TEI conformant 'teiHeader' tag. It is exposed as text and needs to be parsed in order to access the actual metada;\n* 'text': a string containing the extracted document.",
"### Data Splits\n\n\nAs a general corpus, Carolina does not have splits. In order to load the dataset, it is used 'corpus' as its single split.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe Corpus Carolina is developed by a multidisciplinary\nteam of linguists and computer scientists, members of the\nVirtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.",
"### Licensing Information\n\n\nThe Open Corpus for Linguistics and Artificial Intelligence (Carolina) was\ncompiled for academic purposes, namely linguistic and computational analysis.\nIt is composed of texts assembled in various digital repositories, whose\nlicenses are multiple and therefore should be observed when making use of the\ncorpus. The Carolina headers are licensed under Creative Commons\nAttribution-NonCommercial-ShareAlike 4.0 International.\""
] |
01540a66ded66626baf224072d8faf05b5d329d0 | # AutoTrain Dataset for project: TweetClimateAnalysis
## Dataset Descritpion
This dataset has been automatically processed by AutoTrain for project TweetClimateAnalysis.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "What do you do if you are a global warming alarmist and real-world temperatures do not warm as much [...]",
"target": 16
},
{
"text": "(2.) A sun-blocking volcanic aerosols component to explain the sudden but temporary cooling of globa[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=18, names=['0_0', '1_1', '1_2', '1_3', '1_4', '1_6', '1_7', '2_1', '2_3', '3_1', '3_2', '3_3', '4_1', '4_2', '4_4', '4_5', '5_1', '5_2'], id=None)"
}
```
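
As a quick illustration (not part of the AutoTrain output), the integer `target` can be mapped back to its class name through the `ClassLabel` feature when the dataset is loaded with the `datasets` library. The repository id and the `train` split name below are taken from this card; that the dataset is publicly downloadable is an assumption.

```python
from datasets import load_dataset

# Assumption: the dataset is accessible under the id shown in this card.
ds = load_dataset("KeithHorgan98/autotrain-data-TweetClimateAnalysis", split="train")

label_feature = ds.features["target"]          # ClassLabel with 18 names
sample = ds[0]
print(sample["text"][:80])
print(sample["target"], "->", label_feature.int2str(sample["target"]))
```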
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23436 |
| valid | 2898 |
| KeithHorgan98/autotrain-data-TweetClimateAnalysis | [
"task_categories:text-classification",
"region:us"
] | 2022-03-28T21:17:30+00:00 | {"task_categories": ["text-classification"]} | 2022-03-28T21:27:22+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: TweetClimateAnalysis
===================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project TweetClimateAnalysis.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
dc42b11d642dd1b4985d30f98ec68f63363b1141 | # UNL: Universidad Nacional de Loja
### Team members:
- Anderson Quizhpe <br>
- Luis Negrón <br>
- David Pacheco <br>
- Bryan Requenes <br>
- Paul Pasaca
<br><br>
| hackathon-pln-es/Dataset-Acoso-Twitter-Es | [
"license:gpl-3.0",
"region:us"
] | 2022-03-29T04:46:25+00:00 | {"license": "gpl-3.0", "languaje": ["es"]} | 2022-03-30T23:03:51+00:00 | [] | [] | TAGS
#license-gpl-3.0 #region-us
| # UNL: Universidad Nacional de Loja
### Team members:
- Anderson Quizhpe <br>
- Luis Negrón <br>
- David Pacheco <br>
- Bryan Requenes <br>
- Paul Pasaca
<br><br>
| [
"# UNL: Universidad Nacional de Loja",
"### Miembros del equipo:\r\n- Anderson Quizhpe <br>\r\n- Luis Negrón <br>\r\n- David Pacheco <br>\r\n- Bryan Requenes <br>\r\n- Paul Pasaca\r\n<br><br>"
] | [
"TAGS\n#license-gpl-3.0 #region-us \n",
"# UNL: Universidad Nacional de Loja",
"### Miembros del equipo:\r\n- Anderson Quizhpe <br>\r\n- Luis Negrón <br>\r\n- David Pacheco <br>\r\n- Bryan Requenes <br>\r\n- Paul Pasaca\r\n<br><br>"
] |
67e0da2e44e860e37857edf17af7b2656b3be221 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/horse2zebra | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T09:34:58+00:00 | {} | 2022-04-12T12:57:34+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
26e882f90a286672b7dd46e603b3dd6b9c6c007e | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/monet2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:23:53+00:00 | {} | 2022-04-12T12:58:04+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
b9a1024774140d73f8272bf1158b4a8b4ef7abfe | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/cezanne2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:27:36+00:00 | {} | 2022-04-12T12:56:27+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
7d0f0b1f34034b010c6b7fc44d6b266803788448 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/ukiyoe2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:30:34+00:00 | {} | 2022-04-12T12:58:34+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
877a11fde59bfcbbc59d508c7b00c7fa307604e6 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/vangogh2photo | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:33:03+00:00 | {} | 2022-04-12T12:58:45+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
c8706b48de9deec7eeee248792fcf483b3ccf4ef | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/apple2orange | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:44:10+00:00 | {} | 2022-04-12T12:55:40+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
c4f40db4563d2acebd3a92c9b968f00c95234472 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/iphone2dslr_flower | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:47:17+00:00 | {} | 2022-04-12T12:57:46+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
7ada4dc70d20d435adc11b644ceeaff8d3b323c4 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/summer2winter_yosemite | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T11:53:43+00:00 | {} | 2022-04-12T12:58:19+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
853bc3c3221dfaa41d9116b4b11ec2953cc13fa3 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/grumpifycat | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-29T13:42:02+00:00 | {} | 2022-04-12T12:57:20+00:00 | [
"1703.10593"
] | [] | TAGS
#arxiv-1703.10593 #region-us
| This dataset is part of the CycleGAN datasets, originally hosted here: URL
| [] | [
"TAGS\n#arxiv-1703.10593 #region-us \n"
] |
bf3e4aaebd4162e0b9d31785028c43bbd6303585 |
Dataset introduced in [Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media](https://arxiv.org/abs/1610.09786)
by Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, Niloy Ganguly
Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media". In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, US, August 2016.
Cite:
```
@inproceedings{chakraborty2016stop,
title={Stop Clickbait: Detecting and preventing clickbaits in online news media},
author={Chakraborty, Abhijnan and Paranjape, Bhargavi and Kakarla, Sourya and Ganguly, Niloy},
booktitle={Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on},
pages={9--16},
year={2016},
organization={IEEE}
}
```
| marksverdhei/clickbait_title_classification | [
"license:mit",
"arxiv:1610.09786",
"region:us"
] | 2022-03-29T20:02:09+00:00 | {"license": "mit"} | 2022-03-29T20:25:01+00:00 | [
"1610.09786"
] | [] | TAGS
#license-mit #arxiv-1610.09786 #region-us
|
Dataset introduced in Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media
by Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, Niloy Ganguly
Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media". In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, US, August 2016.
Cite:
| [] | [
"TAGS\n#license-mit #arxiv-1610.09786 #region-us \n"
] |
b62d5fa059150ce40bef72febc0779e9a2f4ba26 |
# Dataset Card for CoCoNuT-Java(2006)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.
### Languages
- Java
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
5 first rows of add (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
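
A minimal sketch of reproducing this row-wise pairing with the `datasets` library is shown below. The repository id is the one this card belongs to; the existence of a single `train` split and of remote loading support are assumptions.

```python
from itertools import islice
from datasets import load_dataset

# Assumption: the corpus is exposed as a single "train" split; streaming avoids
# downloading the full corpus just to inspect a few rows.
ds = load_dataset("h4iku/coconut_java2006", split="train", streaming=True)

for row in islice(ds, 5):
    print(f"- {row['rem']}")   # buggy line/hunk
    print(f"+ {row['add']}")   # fixed line/hunk
    print()
```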
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
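
The parsing described above can be done with ordinary string handling; the snippet below is only a sketch, and the regular expression encodes an assumed path layout of `<project_id> .../<commit_sha>/<file_name>/buggy/<original_path>` inferred from the example line.

```python
import re

meta = ("1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/"
        "68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
        "core/src/classpath/java/java/lang/StringBuffer.java")

project_id, path = meta.split(maxsplit=1)
# 40-hex-char commit sha, then the file name, then "buggy", then the project-relative path.
m = re.search(r"/([0-9a-f]{40})/([^/]+)/buggy/(.+)$", path)
if m:
    commit_sha, file_name, original_path = m.groups()
    print(project_id, commit_sha, file_name, original_path)
```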
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 45,180 | 3,241,966 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_java2006 | [
"code",
"region:us"
] | 2022-03-29T22:30:34+00:00 | {"pretty_name": "CoCoNuT-Java(2006)", "tags": ["code"]} | 2023-09-28T21:53:23+00:00 | [] | [] | TAGS
#code #region-us
| Dataset Card for CoCoNuT-Java(2006)
===================================
Dataset Description
-------------------
* Homepage: CoCoNuT training data
* Repository: CoCoNuT repository
* Paper: CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.
### Languages
* Java
Dataset Structure
-----------------
### Data Fields
The dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.
These match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of 'rem' (i.e., the buggy line/hunk):
5 first rows of add (i.e., the fixed line/hunk):
These map to the 5 instances:
'context' contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
is its associated function:
'meta' contains some metadata about the project:
'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project
'core/src/classpath/java/java/lang/URL'
Dataset Creation
----------------
### Curation Rationale
Data is collected to train automated program repair (APR) models.
| [
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* Java\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] | [
"TAGS\n#code #region-us \n",
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* Java\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] |
07c8608cac175708f83f05da225c487d66f8e8c9 |
# Dataset Card for CoCoNuT-JavaScript(2010)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.
### Languages
- JavaScript
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
5 first rows of add (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
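
For a quick visual check, a `rem`/`add` pair can be rendered as a unified diff with Python's standard `difflib`. The two strings below are copied from the examples above; everything else is just a sketch.

```python
import difflib

rem = "public Object next() {"    # buggy line (from `rem`)
add = "public FSEntry next() {"   # fixed line (from `add`)

diff = difflib.unified_diff([rem], [add], fromfile="buggy", tofile="fixed", lineterm="")
print("\n".join(diff))
```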
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 10,163 | 2,254,253 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_javascript2010 | [
"code",
"region:us"
] | 2022-03-29T22:49:36+00:00 | {"pretty_name": "CoCoNuT-JavaScript(2010)", "tags": ["code"]} | 2023-09-28T22:20:59+00:00 | [] | [] | TAGS
#code #region-us
| Dataset Card for CoCoNuT-JavaScript(2010)
=========================================
Dataset Description
-------------------
* Homepage: CoCoNuT training data
* Repository: CoCoNuT repository
* Paper: CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.
### Languages
* JavaScript
Dataset Structure
-----------------
### Data Fields
The dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.
These match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of 'rem' (i.e., the buggy line/hunk):
5 first rows of add (i.e., the fixed line/hunk):
These map to the 5 instances:
'context' contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
is its associated function:
'meta' contains some metadata about the project:
'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project
'core/src/classpath/java/java/lang/URL'
Dataset Creation
----------------
### Curation Rationale
Data is collected to train automated program repair (APR) models.
| [
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* JavaScript\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] | [
"TAGS\n#code #region-us \n",
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* JavaScript\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] |
76866995461bd07841e9ef7b08751da46c7eb9f4 | annotations_creators:
- crowdsourced
- other
language_creators:
- other
- crowdsourced
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ESnli
size_categories:
- unknown
source_datasets:
- extended|snli
- extended|xnli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
# Dataset Card for nli-es
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/hackathon-pln-es/nli-es/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A Spanish Natural Language Inference dataset put together from the sources:
- the Spanish slice of the XNLI dataset;
- machine-translated Spanish version of the SNLI dataset
- machine-translated Spanish version of the Multinli dataset
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
A small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.
## Dataset Structure
### Data Instances
A line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two ("gold_label") and the ID number of the pair of sentences as given in the original dataset.
Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
{
"gold_label": "neutral",
"pairID": 1,
"sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
"sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
}
### Data Fields
gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
pairID: A string identifying a sentence pair. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column, so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.
sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
### Data Splits
The whole dataset was used for training. We did not use an evaluation split, as we used the SemEval-2015 Task 2 data for evaluation instead.
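
For reference, a minimal way to read the corpus with the `datasets` library is sketched below. The repository id is this dataset's id; exposing it as a single `train` split is an assumption based on the paragraph above.

```python
from datasets import load_dataset

# Assumption: the whole corpus is served as one "train" split.
nli_es = load_dataset("hackathon-pln-es/nli-es", split="train")

example = nli_es[0]
print(example["sentence1"])   # premise
print(example["sentence2"])   # hypothesis
print(example["gold_label"])  # "entailment", "contradiction" or "neutral"
```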
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.
### Source Data
#### Initial Data Collection and Normalization
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the source language producers?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Annotations
#### Annotation process
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the annotators?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Personal and Sensitive Information
In general, no sensitive information is conveyed in the sentences.
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.
### Discussion of Biases
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
For discussion on the biases and limitations of the original datasets, please refer to their respective documentations:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Additional Information
### Dataset Curators
The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.
### Licensing Information
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
Please refer to the respective documentations of the original datasets for information on their licenses:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Citation Information
If you need to cite this dataset, you can link to this readme. | hackathon-pln-es/nli-es | [
"arxiv:1809.05053",
"region:us"
] | 2022-03-29T22:54:07+00:00 | {} | 2022-04-04T02:30:59+00:00 | [
"1809.05053"
] | [] | TAGS
#arxiv-1809.05053 #region-us
| annotations_creators:
- crowdsourced
- other
language_creators:
- other
- crowdsourced
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ESnli
size_categories:
- unknown
source_datasets:
- extended|snli
- extended|xnli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
# Dataset Card for nli-es
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
A Spanish Natural Language Inference dataset put together from the sources:
- the Spanish slice of the XNLI dataset;
- machine-translated Spanish version of the SNLI dataset
- machine-translated Spanish version of the Multinli dataset
### Supported Tasks and Leaderboards
### Languages
A small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.
## Dataset Structure
### Data Instances
A line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two ("gold_label") and the ID number of the pair of sentences as given in the original dataset.
Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
{
"gold_label": "neutral",
"pairID": 1,
"sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
"sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
}
### Data Fields
gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
pairID: A string identifying a sentence pair. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column, so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.
sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
### Data Splits
The whole dataset was used for training. We did not use an evaluation split, as we used the SemEval-2015 Task 2 data for evaluation instead.
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.
### Source Data
#### Initial Data Collection and Normalization
Please refer to the respective documentations of the original datasets:
URL
URL
URL
#### Who are the source language producers?
Please refer to the respective documentations of the original datasets:
URL
URL
URL
### Annotations
#### Annotation process
Please refer to the respective documentations of the original datasets:
URL
URL
URL
#### Who are the annotators?
Please refer to the respective documentations of the original datasets:
URL
URL
URL
### Personal and Sensitive Information
In general, no sensitive information is conveyed in the sentences.
Please refer to the respective documentations of the original datasets:
URL
URL
URL
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.
### Discussion of Biases
Please refer to the respective documentations of the original datasets:
URL
URL
URL
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
For discussion on the biases and limitations of the original datasets, please refer to their respective documentations:
URL
URL
URL
## Additional Information
### Dataset Curators
The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.
### Licensing Information
This corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Please refer to the respective documentations of the original datasets for information on their licenses:
URL
URL
URL
If you need to cite this dataset, you can link to this readme. | [
"# Dataset Card for nli-es",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: URL\r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary\r\n\r\nA Spanish Natural Language Inference dataset put together from the sources:\r\n - the Spanish slice of the XNLI dataset;\r\n - machine-translated Spanish version of the SNLI dataset\r\n - machine-translated Spanish version of the Multinli dataset",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nA small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.",
"## Dataset Structure",
"### Data Instances\r\n\r\nA line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two (\"gold_label\") and the ID number of the pair of sentences as given in the original dataset.\r\n\r\nLabels can be \"entailment\" if the premise entails the hypothesis, \"contradiction\" if it contradicts it or \"neutral\" if it neither implies it nor denies it.\r\n\r\n{ \r\n \"gold_label\": \"neutral\", \r\n \"pairID\": 1, \r\n \"sentence1\": \"A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.\", \r\n \"sentence2\": \"La huelga es el método de lucha más eficaz para conseguir mejoras en el salario.\" \r\n}",
"### Data Fields\r\n\r\ngold_label: A string defining the relation between the sentence pair. Labels can be \"entailment\" if the premise entails the hypothesis, \"contradiction\" if it contradicts it or \"neutral\" if it neither implies it nor denies it.\r\n\r\npairID: A string identifying a pair sentence. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.\r\n\r\nsentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)\r\n\r\nsentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)",
"### Data Splits\r\n\r\nThe whole dataset was used for training. We did not use an evaluation split as we used the SemEval-2015 Task 2.",
"## Dataset Creation",
"### Curation Rationale\r\n\r\nThis corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.",
"### Source Data",
"#### Initial Data Collection and Normalization\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"#### Who are the source language producers?\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"### Annotations",
"#### Annotation process\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"#### Who are the annotators?\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"### Personal and Sensitive Information\r\n\r\nIn general, no sensitive information is conveyed in the sentences.\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\r\n\r\nThe purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.",
"### Discussion of Biases\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"### Other Known Limitations\r\n\r\nThe translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.\r\nFor discussion on the biases and limitations of the original datasets, please refer to their respective documentations: \r\nURL \r\nURL \r\nURL",
"## Additional Information",
"### Dataset Curators\r\n\r\nThe nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.",
"### Licensing Information\r\n\r\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\r\nPlease refer to the respective documentations of the original datasets for information on their licenses: \r\nURL \r\nURL \r\nURL\r\n\r\n\r\n\r\nIf you need to cite this dataset, you can link to this readme."
] | [
"TAGS\n#arxiv-1809.05053 #region-us \n",
"# Dataset Card for nli-es",
"## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information",
"## Dataset Description\r\n\r\n- Homepage: \r\n- Repository: URL\r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact:",
"### Dataset Summary\r\n\r\nA Spanish Natural Language Inference dataset put together from the sources:\r\n - the Spanish slice of the XNLI dataset;\r\n - machine-translated Spanish version of the SNLI dataset\r\n - machine-translated Spanish version of the Multinli dataset",
"### Supported Tasks and Leaderboards",
"### Languages\r\n\r\nA small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.",
"## Dataset Structure",
"### Data Instances\r\n\r\nA line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two (\"gold_label\") and the ID number of the pair of sentences as given in the original dataset.\r\n\r\nLabels can be \"entailment\" if the premise entails the hypothesis, \"contradiction\" if it contradicts it or \"neutral\" if it neither implies it nor denies it.\r\n\r\n{ \r\n \"gold_label\": \"neutral\", \r\n \"pairID\": 1, \r\n \"sentence1\": \"A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.\", \r\n \"sentence2\": \"La huelga es el método de lucha más eficaz para conseguir mejoras en el salario.\" \r\n}",
"### Data Fields\r\n\r\ngold_label: A string defining the relation between the sentence pair. Labels can be \"entailment\" if the premise entails the hypothesis, \"contradiction\" if it contradicts it or \"neutral\" if it neither implies it nor denies it.\r\n\r\npairID: A string identifying a pair sentence. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.\r\n\r\nsentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)\r\n\r\nsentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)",
"### Data Splits\r\n\r\nThe whole dataset was used for training. We did not use an evaluation split as we used the SemEval-2015 Task 2.",
"## Dataset Creation",
"### Curation Rationale\r\n\r\nThis corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.",
"### Source Data",
"#### Initial Data Collection and Normalization\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"#### Who are the source language producers?\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"### Annotations",
"#### Annotation process\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"#### Who are the annotators?\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"### Personal and Sensitive Information\r\n\r\nIn general, no sensitive information is conveyed in the sentences.\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\r\n\r\nThe purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.",
"### Discussion of Biases\r\n\r\nPlease refer to the respective documentations of the original datasets: \r\nURL \r\nURL \r\nURL",
"### Other Known Limitations\r\n\r\nThe translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.\r\nFor discussion on the biases and limitations of the original datasets, please refer to their respective documentations: \r\nURL \r\nURL \r\nURL",
"## Additional Information",
"### Dataset Curators\r\n\r\nThe nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.",
"### Licensing Information\r\n\r\nThis corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\r\nPlease refer to the respective documentations of the original datasets for information on their licenses: \r\nURL \r\nURL \r\nURL\r\n\r\n\r\n\r\nIf you need to cite this dataset, you can link to this readme."
] |
024190594583122994c09be3724f2c5279422081 |
# Dataset Card for CoCoNuT-Python(2010)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year, i.e., the year of the newest commit included in the dataset.
### Languages
- Python
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
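For illustration, a minimal loading sketch with the Hugging Face `datasets` library; the dataset id is the one this card is published under, and the single `train` split name is an assumption:
```python
from datasets import load_dataset

# Load the raw CoCoNuT-Python(2010) data (assumed single "train" split).
dataset = load_dataset("h4iku/coconut_python2010", split="train")

# Each row aligns the four columns described above.
row = dataset[0]
print(row["rem"])      # buggy line/hunk
print(row["add"])      # fixed line/hunk
print(row["context"])  # in-lined buggy function
print(row["meta"])     # project metadata
```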
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
The first 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
The first 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
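For illustration, a small parsing sketch; the `parse_meta` helper is hypothetical and assumes the path layout shown above, with the file name and commit id sitting just before the `buggy` component:
```python
def parse_meta(meta_line: str):
    """Split a meta line into (project_id, commit_id, file_name, original_path)."""
    project_id, path = meta_line.split(maxsplit=1)
    parts = path.split("/")
    buggy_idx = parts.index("buggy")          # assumes a single "buggy" marker in the path
    file_name = parts[buggy_idx - 1]
    commit_id = parts[buggy_idx - 2]
    original_path = "/".join(parts[buggy_idx + 1:])
    return project_id, commit_id, file_name, original_path


meta = ("1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/"
        "68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
        "core/src/classpath/java/java/lang/StringBuffer.java")
print(parse_meta(meta))
# ('1056', '68a6301301378680519f2b146daec37812a1bc22', 'StringBuffer.java',
#  'core/src/classpath/java/java/lang/StringBuffer.java')
```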
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 13,899 | 480,777 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_python2010 | [
"code",
"region:us"
] | 2022-03-30T00:03:32+00:00 | {"pretty_name": "CoCoNuT-Python(2010)", "tags": ["code"]} | 2023-09-28T22:17:32+00:00 | [] | [] | TAGS
#code #region-us
| Dataset Card for CoCoNuT-Python(2010)
=====================================
Dataset Description
-------------------
* Homepage: CoCoNuT training data
* Repository: CoCoNuT repository
* Paper: CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.
### Languages
* Python
Dataset Structure
-----------------
### Data Fields
The dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.
These match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of 'rem' (i.e., the buggy line/hunk):
5 first rows of add (i.e., the fixed line/hunk):
These map to the 5 instances:
'context' contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
is its associated function:
'meta' contains some metadata about the project:
'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project
'core/src/classpath/java/java/lang/URL'
Dataset Creation
----------------
### Curation Rationale
Data is collected to train automated program repair (APR) models.
| [
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* Python\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] | [
"TAGS\n#code #region-us \n",
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* Python\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] |
cd86a6292c0525246a69827c754c3d2ed2993318 |
# Dataset Card for CoCoNuT-C(2005)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year, i.e., the year of the newest commit included in the dataset.
### Languages
- C
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
The first 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
The first 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
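For illustration, a minimal sketch of rebuilding such diff views from paired `rem`/`add` rows (the row lists below simply reuse the first example above):
```python
def to_diff(rem_line: str, add_line: str) -> str:
    # One instance = one removed (buggy) hunk and one added (fixed) hunk.
    return f"- {rem_line}\n+ {add_line}"

rem_rows = ["public synchronized StringBuffer append(char ch)"]
add_rows = ["public StringBuffer append(Object obj)"]

for rem_line, add_line in zip(rem_rows, add_rows):
    print(to_diff(rem_line, add_line))
```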
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 12,577 | 2,735,506 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| h4iku/coconut_c2005 | [
"code",
"region:us"
] | 2022-03-30T00:06:36+00:00 | {"pretty_name": "CoCoNuT-C(2005)", "tags": ["code"]} | 2023-09-28T22:19:25+00:00 | [] | [] | TAGS
#code #region-us
| Dataset Card for CoCoNuT-C(2005)
================================
Dataset Description
-------------------
* Homepage: CoCoNuT training data
* Repository: CoCoNuT repository
* Paper: CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.
### Languages
* C
Dataset Structure
-----------------
### Data Fields
The dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.
These match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
5 first rows of 'rem' (i.e., the buggy line/hunk):
5 first rows of add (i.e., the fixed line/hunk):
These map to the 5 instances:
'context' contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
is its associated function:
'meta' contains some metadata about the project:
'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project
'core/src/classpath/java/java/lang/URL'
Dataset Creation
----------------
### Curation Rationale
Data is collected to train automated program repair (APR) models.
| [
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* C\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] | [
"TAGS\n#code #region-us \n",
"### Dataset Summary\n\n\nPart of the data used to train the models in the \"CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair\" paper.\nThese datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.\nThe year in the dataset’s name is the cutting year that shows the year of the newest commit in the dataset.",
"### Languages\n\n\n* C\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe dataset consists of 4 columns: 'add', 'rem', 'context', and 'meta'.\nThese match the original dataset files: 'URL', 'URL', 'URL', and 'URL'.",
"### Data Instances\n\n\nThere is a mapping between the 4 columns for each instance.\nFor example:\n\n\n5 first rows of 'rem' (i.e., the buggy line/hunk):\n\n\n5 first rows of add (i.e., the fixed line/hunk):\n\n\nThese map to the 5 instances:\n\n\n'context' contains the associated \"context\". Context is the (in-lined) buggy function (including the buggy lines and comments).\nFor example, the context of\n\n\nis its associated function:\n\n\n'meta' contains some metadata about the project:\n\n\n'1056' is the project id. '/local/...' is the absolute path to the buggy file. This can be parsed to extract the commit id: '68a6301301378680519f2b146daec37812a1bc22', the file name: 'URL' and the original path within the project\n'core/src/classpath/java/java/lang/URL'\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData is collected to train automated program repair (APR) models."
] |
fa7ca4dffe2448901592f0a8bb2ea0f0581c5951 | Japanese Pitch Dataset | vumichien/pitch_japanese_data | [
"region:us"
] | 2022-03-30T09:40:52+00:00 | {} | 2022-04-04T02:05:08+00:00 | [] | [] | TAGS
#region-us
| Japanese Pitch Dataset | [] | [
"TAGS\n#region-us \n"
] |
e0f3a5567f5c8db711fce1d5dcf244000c5ab587 |
# Spanish Poetry Dataset
There are not many poetry datasets, and in Spanish the situation is even worse! With this dataset, we want to provide access to quality Spanish data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: https://github.com/andreamorgar/poesIA
### Content
Data was acquired in July 2020 from the poetry webpage www.poemas-del-alma.com. It provides a large amount of data involving poems in Spanish. Data was scraped using the Python library BeautifulSoup. For each poem on www.poemas-del-alma.com, we collected the name of the poet, the poem, and the poem title. The scraping script is available at https://github.com/andreamorgar/poesIA/blob/master/poetry-scrapper.py.
### Languages
Spanish
### Acknowledgements
We wouldn't be here without www.poemas-del-alma.com, which provides the poetry collection in this dataset. | andreamorgar/spanish_poetry | [
"license:gpl-3.0",
"region:us"
] | 2022-03-30T11:29:11+00:00 | {"license": "gpl-3.0"} | 2022-03-30T11:39:22+00:00 | [] | [] | TAGS
#license-gpl-3.0 #region-us
|
# Spanish Poetry Dataset
There are not many poetry datasets, and in Spanish language is even worst! With this dataset, we want to give access to these quality Spanish data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: URL
### Content
Data was acquired in July 2020 from the poetry webpage URL. It provides a wide amount of data involving poems in Spanish. Data was scraped using Python library BeautifulSoup. For each poem in URL, we collected the name of the poet, poem, and poem title. Scraping processed is available at URL
### Languages
Spanish
### Acknowledgements
We wouldn't be here without URL, which provides the poetry collection in this dataset. | [
"# Spanish Poetry Dataset\r\nThere are not many poetry datasets, and in Spanish language is even worst! With this dataset, we want to give access to these quality Spanish data for NLP tasks.\r\nIt is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!",
"# Authors\r\nAndrea Morales (@andreamorgar) and Miguel López (@wizmik12)",
"### Motivation\r\nThis dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: URL",
"### Content\r\nData was acquired in July 2020 from the poetry webpage URL. It provides a wide amount of data involving poems in Spanish. Data was scraped using Python library BeautifulSoup. For each poem in URL, we collected the name of the poet, poem, and poem title. Scraping processed is available at URL",
"### Languages\r\nSpanish",
"### Acknowledgements\r\nWe wouldn't be here without URL, which provides the poetry collection in this dataset."
] | [
"TAGS\n#license-gpl-3.0 #region-us \n",
"# Spanish Poetry Dataset\r\nThere are not many poetry datasets, and in Spanish language is even worst! With this dataset, we want to give access to these quality Spanish data for NLP tasks.\r\nIt is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!",
"# Authors\r\nAndrea Morales (@andreamorgar) and Miguel López (@wizmik12)",
"### Motivation\r\nThis dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: URL",
"### Content\r\nData was acquired in July 2020 from the poetry webpage URL. It provides a wide amount of data involving poems in Spanish. Data was scraped using Python library BeautifulSoup. For each poem in URL, we collected the name of the poet, poem, and poem title. Scraping processed is available at URL",
"### Languages\r\nSpanish",
"### Acknowledgements\r\nWe wouldn't be here without URL, which provides the poetry collection in this dataset."
] |
a3fa98229bb3b442163ac8bda8e950076f75b589 |
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech with a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and crucially is available with a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
{
"id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
"audio": {
"path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac"
"array": array([-6.10351562e-05, ...]),
"sampling_rate": 16000
 },
"duration_ms": 14490,
"text": "contends that the suspension clause requires a [...]"
}
### Data Fields
{
"id": datasets.Value("string"),
"audio": datasets.Audio(sampling_rate=16_000),
"duration_ms": datasets.Value("int32"),
"text": datasets.Value("string"),
}
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
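For illustration, a minimal loading sketch; the dataset id and configuration names come from this card, streaming is shown because the corpus is very large, and the default `train` split key is an assumption since no official splits are provided:
```python
from datasets import load_dataset

# Other configurations: cc-by-dirty, cc-by-sa-clean, cc-by-sa-dirty, microset.
ds = load_dataset("MLCommons/peoples_speech_v1.0", "cc-by-clean", streaming=True)

# "train" is the assumed default split key exposed by the loader.
sample = next(iter(ds["train"]))
print(sample["id"], sample["duration_ms"])
print(sample["text"])
```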
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native speakers of American English to produce transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues today, like speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | MLCommons/peoples_speech_v1.0 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1T<n",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"license:cc-by-2.5",
"license:cc-by-3.0",
"license:cc-by-4.0",
"license:cc-by-sa-3.0",
"license:cc-by-sa-4.0",
"arxiv:2111.09344",
"region:us"
] | 2022-03-30T14:49:51+00:00 | {"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "machine-generated"], "language": ["en"], "license": ["cc-by-2.0", "cc-by-2.5", "cc-by-3.0", "cc-by-4.0", "cc-by-sa-3.0", "cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1T<n"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": ["speech-recognition", "robust-speech-recognition", "noisy-speech-recognition"], "pretty_name": "People's Speech"} | 2022-08-10T15:41:34+00:00 | [
"2111.09344"
] | [
"en"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1T<n #source_datasets-original #language-English #license-cc-by-2.0 #license-cc-by-2.5 #license-cc-by-3.0 #license-cc-by-4.0 #license-cc-by-sa-3.0 #license-cc-by-sa-4.0 #arxiv-2111.09344 #region-us
|
# Dataset Card for People's Speech
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: datasets@URL
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpus today that is licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed speech in English languages with a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and crucially is available with a permissive license.
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
{
"id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
"audio": {
"path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac"
"array": array([-6.10351562e-05, ...]),
"sampling_rate": 16000
}
"duration_ms": 14490,
"text": "contends that the suspension clause requires a [...]"
}
### Data Fields
{
"id": datasets.Value("string"),
"audio": datasets.Audio(sampling_rate=16_000),
"duration_ms": datasets.Value("int32"),
"text": datasets.Value("string"),
}
### Data Splits
We provide the following configurations for the dataset: 'cc-by-clean', 'cc-by-dirty', 'cc-by-sa-clean', 'cc-by-sa-dirty', and 'microset'. We don't provide splits for any of the configurations.
## Dataset Creation
### Curation Rationale
See our paper.
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the URL API. No data inference was done.
#### Who are the source language producers?
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native American English speakers to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality of service issues today, like speech recognition system’s poor understanding of non-native English accents. We cannot think of any unfair treatment that come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from URL. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
Please cite:
| [
"# Dataset Card for People's Speech",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: datasets@URL",
"### Dataset Summary\n\nThe People's Speech Dataset is among the world's largest English speech recognition corpus today that is licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed speech in English languages with a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and crucially is available with a permissive license.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\n{\n \"id\": \"gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac\",\n \"audio\": {\n \"path\": \"gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac\"\n \"array\": array([-6.10351562e-05, ...]),\n \"sampling_rate\": 16000\n }\n \"duration_ms\": 14490,\n \"text\": \"contends that the suspension clause requires a [...]\"\n}",
"### Data Fields\n\n{\n \"id\": datasets.Value(\"string\"),\n \"audio\": datasets.Audio(sampling_rate=16_000),\n \"duration_ms\": datasets.Value(\"int32\"),\n \"text\": datasets.Value(\"string\"),\n}",
"### Data Splits\n\nWe provide the following configurations for the dataset: 'cc-by-clean', 'cc-by-dirty', 'cc-by-sa-clean', 'cc-by-sa-dirty', and 'microset'. We don't provide splits for any of the configurations.",
"## Dataset Creation",
"### Curation Rationale\n\nSee our paper.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nData was downloaded via the URL API. No data inference was done.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nNo manual annotation is done. We download only source audio with already existing transcripts.",
"#### Who are the annotators?\n\nFor the test and dev sets, we paid native American English speakers to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.",
"### Personal and Sensitive Information\n\nSeveral of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.\n\nThe dataset could be used for keyword spotting tasks as well. In particular, this is good use case for the non-English audio in the dataset.\n\nOur sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality of service issues today, like speech recognition system’s poor understanding of non-native English accents. We cannot think of any unfair treatment that come from using this dataset at this time.",
"### Discussion of Biases\n\nOur data is downloaded from URL. As such, the data is biased towards whatever users decide to upload there.\n\nAlmost all of our data is American accented English.",
"### Other Known Limitations\n\nAs of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nWe provide CC-BY and CC-BY-SA subsets of the dataset.\n\n\n\nPlease cite:"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1T<n #source_datasets-original #language-English #license-cc-by-2.0 #license-cc-by-2.5 #license-cc-by-3.0 #license-cc-by-4.0 #license-cc-by-sa-3.0 #license-cc-by-sa-4.0 #arxiv-2111.09344 #region-us \n",
"# Dataset Card for People's Speech",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: datasets@URL",
"### Dataset Summary\n\nThe People's Speech Dataset is among the world's largest English speech recognition corpus today that is licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed speech in English languages with a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and crucially is available with a permissive license.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\n{\n \"id\": \"gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac\",\n \"audio\": {\n \"path\": \"gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac\"\n \"array\": array([-6.10351562e-05, ...]),\n \"sampling_rate\": 16000\n }\n \"duration_ms\": 14490,\n \"text\": \"contends that the suspension clause requires a [...]\"\n}",
"### Data Fields\n\n{\n \"id\": datasets.Value(\"string\"),\n \"audio\": datasets.Audio(sampling_rate=16_000),\n \"duration_ms\": datasets.Value(\"int32\"),\n \"text\": datasets.Value(\"string\"),\n}",
"### Data Splits\n\nWe provide the following configurations for the dataset: 'cc-by-clean', 'cc-by-dirty', 'cc-by-sa-clean', 'cc-by-sa-dirty', and 'microset'. We don't provide splits for any of the configurations.",
"## Dataset Creation",
"### Curation Rationale\n\nSee our paper.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nData was downloaded via the URL API. No data inference was done.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nNo manual annotation is done. We download only source audio with already existing transcripts.",
"#### Who are the annotators?\n\nFor the test and dev sets, we paid native American English speakers to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.",
"### Personal and Sensitive Information\n\nSeveral of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.\n\nThe dataset could be used for keyword spotting tasks as well. In particular, this is good use case for the non-English audio in the dataset.\n\nOur sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality of service issues today, like speech recognition system’s poor understanding of non-native English accents. We cannot think of any unfair treatment that come from using this dataset at this time.",
"### Discussion of Biases\n\nOur data is downloaded from URL. As such, the data is biased towards whatever users decide to upload there.\n\nAlmost all of our data is American accented English.",
"### Other Known Limitations\n\nAs of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nWe provide CC-BY and CC-BY-SA subsets of the dataset.\n\n\n\nPlease cite:"
] |
a29d08ef45085a1bf86173196c6a68aa54a8f43e |
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)
## Dataset Description
- **Source 1:** http://www.corpus.unam.mx/axolotl
- **Source 2:** http://link.springer.com/article/10.1007/s10579-014-9287-y
- **Repository 1:** https://github.com/ElotlMX/py-elotl
- **Repository 2:** https://github.com/christos-c/bible-corpus/blob/master/bibles/Nahuatl-NT.xml
- **Paper:** https://aclanthology.org/N15-2021.pdf
## Dataset Collection
In order to get a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available: Axolotl, collected by an expert team at UNAM, and the Bible UEDIN Nahuatl-Spanish corpus, crawled by Christos Christodoulopoulos and Mark Steedman from the Bible Gateway site.
After cleaning (removing misalignments and texts duplicated in Spanish in both the original and Nahuatl columns), we ended up with 12,207 samples from Axolotl and 7,821 samples from Bible UEDIN, for a total of 20,028 utterances.
## Team members
- Emilio Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
## Applications
- MODEL: Spanish Nahuatl Translation Task with a T5 model in ([t5-small-spanish-nahuatl](https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl))
- DEMO: Spanish Nahuatl Translation in ([Spanish-nahuatl](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)) | hackathon-pln-es/Axolotl-Spanish-Nahuatl | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:mpl-2.0",
"conditional-text-generation",
"region:us"
] | 2022-03-30T14:52:03+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["es"], "license": ["mpl-2.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "Axolotl Spanish-Nahuatl parallel corpus , is a digital corpus that compiles several sources with parallel content in these two languages. \n\nA parallel corpus is a type of corpus that contains texts in a source language with their correspondent translation in one or more target languages. Gutierrez-Vasques, X., Sierra, G., and Pompa, I. H. (2016). Axolotl: a web accessible parallel corpus for spanish-nahuatl. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), Portoro, Slovenia. European Language Resources Association (ELRA). Grupo de Ingenieria Linguistica (GIL, UNAM). Corpus paralelo espa\u00c3\u00b1ol-nahuatl. http://www.corpus.unam.mx/axolotl.", "language_bcp47": ["es-MX"], "tags": ["conditional-text-generation"]} | 2023-04-13T07:51:58+00:00 | [] | [
"es"
] | TAGS
#task_categories-text2text-generation #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-unknown #source_datasets-original #language-Spanish #license-mpl-2.0 #conditional-text-generation #region-us
|
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)
## Dataset Description
- Source 1: URL
- Source 2: URL
- Repository:1 URL
- Repository:2 URL
- Paper: URL
## Dataset Collection
In order to get a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available. Those are Axolotl collected by an expert team at UNAM and Bible UEDIN Nahuatl Spanish crawled by Christos Christodoulopoulos and Mark Steedman from Bible Gateway site.
After this, we ended with 12,207 samples from Axolotl due to misalignments and duplicated texts in Spanish in both original and nahuatl columns and 7,821 samples from Bible UEDIN for a total of 20028 utterances.
## Team members
- Emilio Morales (milmor)
- Rodrigo Martínez Arzate (rockdrigoma)
- Luis Armando Mercado (luisarmando)
- Jacobo del Valle (jjdv)
## Applications
- MODEL: Spanish Nahuatl Translation Task with a T5 model in (t5-small-spanish-nahuatl)
- DEMO: Spanish Nahuatl Translation in (Spanish-nahuatl) | [
"# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation",
"## Table of Contents\n- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)",
"## Dataset Description\n\n- Source 1: URL\n- Source 2: URL\n- Repository:1 URL\n- Repository:2 URL\n- Paper: URL",
"## Dataset Collection\n\nIn order to get a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available. Those are Axolotl collected by an expert team at UNAM and Bible UEDIN Nahuatl Spanish crawled by Christos Christodoulopoulos and Mark Steedman from Bible Gateway site.\n\nAfter this, we ended with 12,207 samples from Axolotl due to misalignments and duplicated texts in Spanish in both original and nahuatl columns and 7,821 samples from Bible UEDIN for a total of 20028 utterances.",
"## Team members\n- Emilio Morales (milmor)\n- Rodrigo Martínez Arzate (rockdrigoma)\n- Luis Armando Mercado (luisarmando)\n- Jacobo del Valle (jjdv)",
"## Applications\n\n- MODEL: Spanish Nahuatl Translation Task with a T5 model in (t5-small-spanish-nahuatl)\n- DEMO: Spanish Nahuatl Translation in (Spanish-nahuatl)"
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-translation #size_categories-unknown #source_datasets-original #language-Spanish #license-mpl-2.0 #conditional-text-generation #region-us \n",
"# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation",
"## Table of Contents\n- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)",
"## Dataset Description\n\n- Source 1: URL\n- Source 2: URL\n- Repository:1 URL\n- Repository:2 URL\n- Paper: URL",
"## Dataset Collection\n\nIn order to get a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available. Those are Axolotl collected by an expert team at UNAM and Bible UEDIN Nahuatl Spanish crawled by Christos Christodoulopoulos and Mark Steedman from Bible Gateway site.\n\nAfter this, we ended with 12,207 samples from Axolotl due to misalignments and duplicated texts in Spanish in both original and nahuatl columns and 7,821 samples from Bible UEDIN for a total of 20028 utterances.",
"## Team members\n- Emilio Morales (milmor)\n- Rodrigo Martínez Arzate (rockdrigoma)\n- Luis Armando Mercado (luisarmando)\n- Jacobo del Valle (jjdv)",
"## Applications\n\n- MODEL: Spanish Nahuatl Translation Task with a T5 model in (t5-small-spanish-nahuatl)\n- DEMO: Spanish Nahuatl Translation in (Spanish-nahuatl)"
] |
2b66248c092349da7db3e12eb66d7ffb692c77d9 | # MF Rocket Paraphrase Corpus (MFRPC) - A State of the Art Paraphrasing Solution
## Dataset Description
MF Rocket Paraphrase Corpus (MFRPC) is a corpus consisting of 10,000 sentence pairs. Each sentence pair contains a source sentence and the paraphrased version of the source sentence. The source sentences are created manually and are intended to represent typical sentences found in online articles. They are limited to general topics and are not restricted to a specific domain. The paraphrased sentences were created partly using GPT-3 and partly manually. In this way, we hope to investigate the performance of GPT-3 in a typical real-world setting and improve the quality of the paraphrased sentences through manual corrections.
By fine-tuning a Pegasus model with this data, we create a paraphraser that performs very well. In a blind test, the results are indistinguishable from human-paraphrased sentences.
We are currently working on a data set with complete paragraphs or articles.
For more information, you can use our contact form at https://mf-rocket.de.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "To overcome these difficulties, you must select an activity or goal that you are enthusiastic about [...]",
"target": "To overcome these challenges, you need to find an activity or goal that you are passionate about and[...]"
},
{
"text": "If you are unsure about what to do next, seek advice from a close friend or family member you can tr[...]",
"target": "If you are feeling lost, ask a trusted friend or family member for their opinion about what you shou[...]"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8000 |
| valid | 2000 |
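For illustration, a minimal loading sketch; the dataset id is the one this card is published under, and the `train` split name follows the table above:
```python
from datasets import load_dataset

mfrpc = load_dataset("MFRocket/MFRPC")

pair = mfrpc["train"][0]
print("source:    ", pair["text"])
print("paraphrase:", pair["target"])
```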
| MFRocket/MFRPC | [
"region:us"
] | 2022-03-30T15:59:22+00:00 | {"task_categories": ["conditional-text-generation", "paraphrase", "gpt-3", "crowdsourced"]} | 2022-03-30T18:58:37+00:00 | [] | [] | TAGS
#region-us
| MF Rocket Paraphrase Corpus (MFRPC) - A State of the Art Paraphrasing Solution
==============================================================================
Dataset Description
-------------------
MF Rocket Paraphrase Corpus (MFRPC) is a corpus consisting of 10,000 sentence pairs. Each sentence pair contains a source sentence and the paraphrased version of the source sentence. The source sentences are created manually and are intended to represent typical sentences found in online articles. They are limited to general topics and are not restricted to a specific domain. The paraphrased sentences were created partly using GPT-3 and partly manually. In this way, we hope to investigate the performance of GPT-3 in a typical real-world setting and improve the quality of the paraphrased sentences through manual corrections.
By fine-tuning a Pegasus model with this data, we create a paraphraser that performs very well. In a blind test, the results are indistinguishable from human-paraphrased sentences.
We are currently working on a data set with complete paragraphs or articles.
For more information, our Contact form can be used at URL.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
58d5b361e47265cf85f2334800c66d9bb485029e |
# Dataset Card for sufficient_facts
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/copenlu/sufficient_facts
- **Repository:** https://github.com/copenlu/sufficient_facts
- **Paper:** Will be uploaded soon...
- **Leaderboard:**
- **Point of Contact:** https://apepa.github.io/
### Dataset Summary
This is the dataset SufficientFacts, introduced in the paper "Fact Checking with Insufficient Evidence", accepted at the TACL journal in 2022.
Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, **SufficientFacts**, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.
### Languages
English
## Dataset Structure
The dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC.
Each file consists of json lines of the format:
```json
{
"claim": "Unison (Celine Dion album) was originally released by Atlantic Records.",
"evidence": [
[
"Unison (Celine Dion album)",
"The album was originally released on 2 April 1990 ."
]
],
"label_before": "REFUTES",
"label_after": "NOT ENOUGH",
"agreement": "agree_ei",
"type": "PP",
"removed": ["by Columbia Records"],
"text_orig": "[[Unison (Celine Dion album)]] The album was originally released on 2 April 1990 <span style=\"color:red;\">by Columbia Records</span> ."
}
```
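Since each file is a set of JSON lines in the format above, it can be loaded with plain Python; the file name below is an assumed placeholder for whichever of the three files (FEVER, HoVer, VitaminC) you are working with.

```python
import json

# Assumed local file name -- replace with the actual FEVER/HoVer/VitaminC file.
path = "sufficient_facts_fever.jsonl"

with open(path, encoding="utf-8") as f:
    examples = [json.loads(line) for line in f if line.strip()]

# Instances where removing information made the evidence insufficient.
insufficient = [ex for ex in examples if ex["label_after"] == "NOT ENOUGH"]
print(f"{len(insufficient)} of {len(examples)} examples became NOT ENOUGH after omission")
```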
### Data Instances
* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer: 600 constituent-level, 400 sentence-level;
* VitaminC: 600 constituent-level.
### Data Fields
* `claim` - the claim that is being verified
* `evidence` - the augmented evidence for the claim, i.e. the evidence with some removed information
* `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence
* `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers
* `type` - type of the information removed from the evidence. The types are fine-grained; their mapping to the general types -- 7 constituent types and 1 sentence type -- can be found in the [types.json](types.json) file.
* `removed` - the text of the removed information from the evidence
* `text_orig` - the original text of the evidence, as presented to crowd-source workers, the text of the removed information is inside `<span style=\"color:red;\"></span>` tags.
### Data Splits
| name |test_fever|test_hover|test_vitaminc|
|----------|-------:|-----:|-------:|
|test| 1000| 1000| 600|
Augmented from the test splits of the corresponding datasets.
### Annotations
#### Annotation process
The workers were provided with the following task description:
For each evidence text, some facts have been removed (marked in <span style="color:red;">red</span>).
You should annotate whether, <b>given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.</b> <br></br>
<ul>
<li>You should select <i><b>'ENOUGH -- IRRELEVANT'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is irrelevant</b> for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.</li>
<li>You should select <i><b>'ENOUGH -- REPEATED'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is relevant but is also present (repeated) in the remaining (not red) text.</b> See example 3.</li>
<li>You should select <i><b>'NOT ENOUGH'</b></i> -- when <b>1) the removed information is <i>relevant</i></b> for verifying the claim <b> AND 2) it is <i>not present (repeated)</i> in the remaining text.</b> See examples 4, 5, and 6.</li>
<!--<li>You should select <i><b>'CHANGED INFO'</b></i> in the rare cases when the remaining evidence has <b>changed the support for the claim</b></li>-->
</ul>
<b>Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.</b>
The annotators were then given example instance annotations.
Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.
The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' kappa from three annotators.
#### Who are the annotators?
The annotations were performed by workers at Amazon Mechanical Turk.
## Additional Information
### Licensing Information
MIT
### Citation Information
```
@article{10.1162/tacl_a_00486,
author = {Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
title = "{Fact Checking with Insufficient Evidence}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {746-763},
year = {2022},
month = {07},
abstract = "{Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts1, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21\\% accuracy), whereas it is easiest for omitted date modifiers (63\\% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00486},
url = {https://doi.org/10.1162/tacl\_a\_00486},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00486/2037141/tacl\_a\_00486.pdf},
}
```
### Contributions
Thanks to [@apepa](https://github.com/apepa) for adding this dataset. | copenlu/sufficient_facts | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|fever",
"source_datasets:extended|hover",
"source_datasets:extended|fever_gold_evidence",
"language:en",
"license:mit",
"region:us"
] | 2022-03-30T18:12:14+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|fever", "extended|hover", "extended|fever_gold_evidence"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "sufficient_facts"} | 2022-08-05T07:33:48+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|fever #source_datasets-extended|hover #source_datasets-extended|fever_gold_evidence #language-English #license-mit #region-us
| Dataset Card for sufficient\_facts
==================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: Will be uploaded soon...
* Leaderboard:
* Point of Contact: URL
### Dataset Summary
This is the dataset SufficientFacts, introduced in the paper "Fact Checking with Insufficient Evidence", accepted at the TACL journal in 2022.
Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.
### Languages
English
Dataset Structure
-----------------
The dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC.
Each file consists of json lines of the format:
### Data Instances
* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer - 600 constituent-level, 400 sentence-level;
* VitaminC - 600 constituent-level.
### Data Fields
* 'claim' - the claim that is being verified
* 'evidence' - the augmented evidence for the claim, i.e. the evidence with some removed information
* 'label\_before' - the original label for the claim-evidence pair, before information was removed from the evidence
* 'label\_after' - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers
* 'type' - type of the information removed from the evidence. The types are fine-grained and their mapping to the general types -- 7 constituent and 1 sentence type can be found in URL file.
* 'removed' - the text of the removed information from the evidence
* 'text\_orig' - the original text of the evidence, as presented to crowd-source workers, the text of the removed information is inside '<span style="color:red;">' tags.
### Data Splits
Augmented from the test splits of the corresponding datasets.
### Annotations
#### Annotation process
The workers were provided with the following task description:
For each evidence text, some facts have been removed (marked in red).
You should annotate whether, **given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.**
* You should select ***'ENOUGH -- IRRELEVANT'***, if the **remaining information is still *enough*** for verifying the claim because the **removed information is irrelevant** for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.
* You should select ***'ENOUGH -- REPEATED'***, if the **remaining information is still *enough*** for verifying the claim because the **removed information is relevant but is also present (repeated) in the remaining (not red) text.** See example 3.
* You should select ***'NOT ENOUGH'*** -- when **1) the removed information is *relevant*** for verifying the claim **AND 2) it is *not present (repeated)* in the remaining text.** See examples 4, 5, and 6.
**Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.**
The annotators were then given example instance annotations.
Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.
The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' kappa from three annotators.
#### Who are the annotators?
The annotations were performed by workers at Amazon Mechanical Turk.
Additional Information
----------------------
### Licensing Information
MIT
### Contributions
Thanks to @apepa for adding this dataset.
| [
"### Dataset Summary\n\n\nThis is the dataset SufficientFacts, introduced in the paper \"Fact Checking with Insufficient Evidence\", accepted at the TACL journal in 2022.\n\n\nAutomating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nThe dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC.\nEach file consists of json lines of the format:",
"### Data Instances\n\n\n* FEVER: 600 consituent-level, 400 sentence-level;\n* HoVer - 600 consituent-level, 400 sentence-level;\n* VitaminC - 600 consituent-level.",
"### Data Fields\n\n\n* 'claim' - the claim that is being verified\n* 'evidence' - the augmented evidence for the claim, i.e. the evidence with some removed information\n* 'label\\_before' - the original label for the claim-evidence pair, before information was removed from the evidence\n* 'label\\_after' - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers\n* 'type' - type of the information removed from the evidence. The types are fine-grained and their mapping to the general types -- 7 constituent and 1 sentence type can be found in URL file.\n* 'removed' - the text of the removed information from the evidence\n* 'text\\_orig' - the original text of the evidence, as presented to crowd-source workers, the text of the removed information is inside '<span style=\"color:red;\">' tags.",
"### Data Splits\n\n\n\nAugmented from the test splits of the corresponding datasets.",
"### Annotations",
"#### Annotation process\n\n\nThe workers were provided with the following task description:\n\n\nFor each evidence text, some facts have been removed (marked in red).\nYou should annotate whether, **given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.** \n\n\n\n* You should select ***'ENOUGH -- IRRELEVANT'***, if the **remaining information is still *enough*** for verifying the claim because the **removed information is irrelevant** for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.\n* You should select ***'ENOUGH -- REPEATED'***, if the **remaining information is still *enough*** for verifying the claim because the **removed information is relevant but is also present (repeated) in the remaining (not red) text.** See example 3.\n* You should select ***'NOT ENOUGH'*** -- when **1) the removed information is *relevant*** for verifying the claim **AND 2) it is *not present (repeated)* in the remaining text.** See examples 4, 5, and 6.\n\n\n**Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.**\n\n\nThe annotators were then given example instance annotations.\nFinally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.\nThe resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss'k from three annotators.",
"#### Who are the annotators?\n\n\nThe annotations were performed by workers at Amazon Mechanical Turk.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nMIT",
"### Contributions\n\n\nThanks to @apepa for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|fever #source_datasets-extended|hover #source_datasets-extended|fever_gold_evidence #language-English #license-mit #region-us \n",
"### Dataset Summary\n\n\nThis is the dataset SufficientFacts, introduced in the paper \"Fact Checking with Insufficient Evidence\", accepted at the TACL journal in 2022.\n\n\nAutomating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nThe dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC.\nEach file consists of json lines of the format:",
"### Data Instances\n\n\n* FEVER: 600 consituent-level, 400 sentence-level;\n* HoVer - 600 consituent-level, 400 sentence-level;\n* VitaminC - 600 consituent-level.",
"### Data Fields\n\n\n* 'claim' - the claim that is being verified\n* 'evidence' - the augmented evidence for the claim, i.e. the evidence with some removed information\n* 'label\\_before' - the original label for the claim-evidence pair, before information was removed from the evidence\n* 'label\\_after' - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers\n* 'type' - type of the information removed from the evidence. The types are fine-grained and their mapping to the general types -- 7 constituent and 1 sentence type can be found in URL file.\n* 'removed' - the text of the removed information from the evidence\n* 'text\\_orig' - the original text of the evidence, as presented to crowd-source workers, the text of the removed information is inside '<span style=\"color:red;\">' tags.",
"### Data Splits\n\n\n\nAugmented from the test splits of the corresponding datasets.",
"### Annotations",
"#### Annotation process\n\n\nThe workers were provided with the following task description:\n\n\nFor each evidence text, some facts have been removed (marked in red).\nYou should annotate whether, **given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.** \n\n\n\n* You should select ***'ENOUGH -- IRRELEVANT'***, if the **remaining information is still *enough*** for verifying the claim because the **removed information is irrelevant** for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.\n* You should select ***'ENOUGH -- REPEATED'***, if the **remaining information is still *enough*** for verifying the claim because the **removed information is relevant but is also present (repeated) in the remaining (not red) text.** See example 3.\n* You should select ***'NOT ENOUGH'*** -- when **1) the removed information is *relevant*** for verifying the claim **AND 2) it is *not present (repeated)* in the remaining text.** See examples 4, 5, and 6.\n\n\n**Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.**\n\n\nThe annotators were then given example instance annotations.\nFinally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.\nThe resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss'k from three annotators.",
"#### Who are the annotators?\n\n\nThe annotations were performed by workers at Amazon Mechanical Turk.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nMIT",
"### Contributions\n\n\nThanks to @apepa for adding this dataset."
] |
7e960327b88e961ca35bee0a6bed94c42b0ac0d8 | # AutoTrain Dataset for project: security-texts-classification-distilroberta
## Dataset Description
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Netgear launches Bug Bounty Program for Hacker; Offering up to $15,000 in Rewards It might be the ea[...]",
"target": 0
},
{
"text": "Popular Malware Families Using 'Process Doppelg\u00e4nging' to Evade Detection The fileless code injectio[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['irrelevant', 'relevant'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 780 |
| valid | 196 |
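The labels are stored as a `ClassLabel`, so they can be decoded back to their names as sketched below; this assumes the dataset is accessible from the Hub under the repo id shown below.

```python
from datasets import load_dataset

ds = load_dataset("vlsb/autotrain-data-security-texts-classification-distilroberta")

label_feature = ds["train"].features["target"]  # ClassLabel with names ['irrelevant', 'relevant']
sample = ds["train"][0]
print(label_feature.int2str(sample["target"]), "->", sample["text"][:80])
```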
| vlsb/autotrain-data-security-texts-classification-distilroberta | [
"task_categories:text-classification",
"region:us"
] | 2022-03-30T19:48:23+00:00 | {"task_categories": ["text-classification"]} | 2022-03-30T19:48:56+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: security-texts-classification-distilroberta
==========================================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
608c2e9f00eacfdb0932301e45fe2b420a0559a0 | # DISCO: Diachronic Spanish Sonnet Corpus
[](https://zenodo.org/badge/latestdoi/103841064)
The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets in Spanish in CSV format, written between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age and text metadata.
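A minimal sketch of exploring the compilation with pandas is shown below; the file name and the exact column names (`title`, `author`, `age`, `text`) are assumptions based on the metadata listed above.

```python
import pandas as pd

# Assumed file and column names -- adjust to the actual CSV in this repository.
df = pd.read_csv("disco_sonnets.csv")

print(df.shape)                              # expected to cover 4303 sonnets
print(df["author"].nunique(), "authors")     # expected to cover 1215 authors
print(df.groupby("age")["text"].count())     # sonnets per period
```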
<br><br> | hackathon-pln-es/disco_spanish_poetry | [
"region:us"
] | 2022-03-30T20:47:36+00:00 | {} | 2022-03-30T20:50:28+00:00 | [] | [] | TAGS
#region-us
| # DISCO: Diachronic Spanish Sonnet Corpus
The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets in Spanish in CSV format, written between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain-text corpus v4 published on git (URL). It includes the title, author, age and text metadata.
<br><br> | [
"# DISCO: Diachronic Spanish Sonnet Corpus\r\n contains sonnets in Spanish in CSV, between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones. \r\n\r\nThis is a CSV compilation taken from the plain text corpus v4 published on git URL It includes the title, author, age and text metadata.\r\n<br><br>"
] | [
"TAGS\n#region-us \n",
"# DISCO: Diachronic Spanish Sonnet Corpus\r\n contains sonnets in Spanish in CSV, between the 15th and the 20th centuries (4303 sonnets by 1215 authors from 22 different countries). It includes well-known authors, but also less canonized ones. \r\n\r\nThis is a CSV compilation taken from the plain text corpus v4 published on git URL It includes the title, author, age and text metadata.\r\n<br><br>"
] |
88fa19e471d13eae3c2903011908b1a8bbccb46a | # test-imagefolder-dataset
This dataset shows that you can upload image folders (with an accompanying info.csv file within) to share and visualize multiple splits of a dataset. Cheers 🍻 | nateraw/test-imagefolder-dataset | [
"region:us"
] | 2022-03-30T20:58:59+00:00 | {} | 2022-03-30T21:19:04+00:00 | [] | [] | TAGS
#region-us
| # test-imagefolder-dataset
This dataset shows that you can upload image folders (with an accompanying URL file within) to share and visualize multiple splits of a dataset. Cheers | [
"# test-imagefolder-dataset\n\nThis dataset shows that you can upload image folders (with an accompanying URL file within) to share and visualize multiple splits of a dataset. Cheers"
] | [
"TAGS\n#region-us \n",
"# test-imagefolder-dataset\n\nThis dataset shows that you can upload image folders (with an accompanying URL file within) to share and visualize multiple splits of a dataset. Cheers"
] |
2e15cfca305fcbc9215b159088e588bcca6ee60c | This dataset contains parallel corpora for Ethiopian languages | Atnafu/Parallel_dataset_for_Ethiopian_languages | [
"license:afl-3.0",
"region:us"
] | 2022-03-31T02:45:15+00:00 | {"license": "afl-3.0"} | 2022-03-31T02:45:52+00:00 | [] | [] | TAGS
#license-afl-3.0 #region-us
| This dataset contains parallel corpora for Ethiopian languages | [] | [
"TAGS\n#license-afl-3.0 #region-us \n"
] |
7408d8e0848a237692cfd597574ced7762ea42c9 | basic text | DioLiu/Test1 | [
"region:us"
] | 2022-03-31T03:16:01+00:00 | {} | 2022-04-09T03:11:46+00:00 | [] | [] | TAGS
#region-us
| basic text | [] | [
"TAGS\n#region-us \n"
] |
966942abb87e2e57c5b357342d7bc2f4177e0ba4 |
## Dataset Summary
Scotch is a dataset of about 19 million functions collected from open-source repositories on GitHub with permissive licenses. Each function has its corresponding code context and about 4 million functions have corresponding docstrings.
### Languages
The dataset includes functions written in programming languages Python, Java, Javascript, and Go.
## Statistics
### Split
The functions with docstrings are split into train, valid, and test sets of 3200626, 400077, and 400080 functions respectively.
## Features
Each function consists of following features:
* repository_name: Name of the repository the function belongs to.
* function_path: Path of the function within the repository.
* function_identifier: Function name/identifier.
* language: Programming language the function is written in.
* function: Function string.
* docstring: Function docstring.
* function_url: URL to the function code.
* context: Code context.
* license: License info of the repository (includes only repositories with permissive licenses).
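Given the size of the corpus (about 19M functions), streaming is a natural way to iterate over it; the sketch below assumes the data can be loaded from the Hub under the repo id shown below, that a `train` split exists, and that the `language` field uses lowercase names.

```python
from itertools import islice
from datasets import load_dataset

# Stream the corpus instead of downloading all ~19M functions at once.
scotch = load_dataset("Samip/Scotch", split="train", streaming=True)

# Field names come from the feature list above.
python_with_docs = (
    ex for ex in scotch if ex["language"].lower() == "python" and ex["docstring"]
)

for example in islice(python_with_docs, 3):
    print(example["function_identifier"], "--", example["docstring"].split("\n")[0])
```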
## Data Collection
The dataset is collected from GitHub repositories of the respective languages with 5 or more stars. Such repositories are listed using [SEART](https://seart-ghs.si.usi.ch/). Functions are parsed using a lightweight parser built on top of the function parser from the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet/tree/master/function_parser), and repositories were collected with the help of [github-downloader from EleutherAI](https://github.com/EleutherAI/github-downloader).
### Data Processing
All code without a permissive license is removed and deduplication is performed on the remaining set of functions. Afterwards, all functions consisting of a single line of code, as well as functions whose docstrings contain non-English characters, are removed. Files containing multiple copies of the same function are excluded. This results in about 19M functions. To obtain a dataset of NL-Code pairs, functions with no docstring, or with a docstring of fewer than 3 tokens separated by white space, are excluded. Following CodeSearchNet, functions with the 'test' keyword in their name are excluded.
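The NL-Code filtering rules above translate into a simple predicate; the sketch below is an illustrative reconstruction (not the exact script used to build the corpus), and it approximates "non-English characters" as non-ASCII, which is an assumption.

```python
def keep_for_nl_code_pairs(function: dict) -> bool:
    """Illustrative reconstruction of the NL-Code pair filtering described above."""
    docstring = function.get("docstring") or ""
    name = function.get("function_identifier") or ""
    # Require a docstring of at least 3 whitespace-separated tokens.
    if len(docstring.split()) < 3:
        return False
    # Approximate the non-English-character filter as an ASCII check (assumption).
    if not docstring.isascii():
        return False
    # Following CodeSearchNet, drop functions with 'test' in their name.
    if "test" in name.lower():
        return False
    return True
```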
## License
This dataset is under MIT License. However, the repositories the functions are collected from may have several permissive licenses. Those licenses include MIT License, Apache License 2.0, BSD 3-Clause “New” or “Revised” License, BSD 2-Clause “Simplified” License, and ISC License. | Samip/Scotch | [
"region:us"
] | 2022-03-31T11:31:51+00:00 | {} | 2022-04-29T13:19:23+00:00 | [] | [] | TAGS
#region-us
|
## Dataset Summary
Scotch is a dataset of about 19 million functions collected from open-source repositories on GitHub with permissive licenses. Each function has its corresponding code context and about 4 million functions have corresponding docstrings.
### Languages
The dataset includes functions written in programming languages Python, Java, Javascript, and Go.
## Statistics
### Split
The functions with docstrings are split into train, valid, and test sets of 3200626, 400077, and 400080 functions respectively.
## Features
Each function consists of following features:
* repository_name: Name of the repository the function belongs to.
* function_path: Path of the function within the repository.
* function_identifier: Function name/identifier.
* language: Programming language the function is written in.
* function: Function string.
* docstring: Function docstring.
* function_url: URL to the function code.
* context: Code context.
* license: License info of the repository (includes only repositories with permissive licenses).
## Data Collection
The dataset is collected from GitHub repositories of the respective languages with 5 or more stars. Such repositories are listed using SEART. Functions are parsed using a lightweight parser built on top of the function parser from the CodeSearchNet dataset, and repositories were collected with the help of github-downloader from EleutherAI.
### Data Processing
All code without a permissive license is removed and deduplication is performed on the remaining set of functions. Afterwards, all functions consisting of a single line of code, as well as functions whose docstrings contain non-English characters, are removed. Files containing multiple copies of the same function are excluded. This results in about 19M functions. To obtain a dataset of NL-Code pairs, functions with no docstring, or with a docstring of fewer than 3 tokens separated by white space, are excluded. Following CodeSearchNet, functions with the 'test' keyword in their name are excluded.
## License
This dataset is under MIT License. However, the repositories the functions are collected from may have several permissive licenses. Those licenses include MIT License, Apache License 2.0, BSD 3-Clause “New” or “Revised” License, BSD 2-Clause “Simplified” License, and ISC License. | [
"## Dataset Summary\n\nScotch is a dataset of about 19 million functions collected from open-source repositiories from GitHub with permissive licenses. Each function has its corresponding code context and about 4 million functions have corresponding docstrings.",
"### Languages\nThe dataset includes functions written in programming languages Python, Java, Javascript, and Go.",
"## Statistics",
"### Split\nThe functions with docstrings is splitted into train, valid, and test set of 3200626, 400077, 400080 functions respectively.",
"## Features\n\nEach function consists of following features:\n\n* repository_name: Name of the repository the function belongs to.\n* function_path: Path of the function within the repository.\n* function_identifier: Function name/identifier.\n* language: Programming language the function is written in.\n* function: Function string.\n* docstring: Function docstring.\n* function_url: URL to the function code.\n* context: Code context.\n* license: License info of the repository (includes only repositories with permissive licenses).",
"## Data Collection\n\nThe dataset is collected from GitHub repositories of respective languages with 5 or more stars. Such repositories are listed using SEART. Functions are parsed using a lightweight parser build on top of function parser from CodeSearchNet dataset and repositories were collected with help of github-downloader from EleutherAI.",
"### Data Processing\nAll the code without permissive licenses are removed and deduplication is performed on the remaining set of functions. Afterwards, all the functions with single line of code, whose docstring contains non-English characters are removed. Files with multiple same functions are excluded. This results in about 19M functions. To obtain a dataset of NL-Code pairs, functions with no docstrings or doctrings less than 3 tokens separated by white-space are excluded. Following CodeSearchNet, functions with 'test' keyword in their name are excluded.",
"## License\n\nThis dataset is under MIT License. However, the repositories the functions are collected from may have several permissive licenses. Those licenses include MIT License, Apache License 2.0, BSD 3-Clause “New” or “Revised” License, BSD 2-Clause “Simplified” License, and ISC License."
] | [
"TAGS\n#region-us \n",
"## Dataset Summary\n\nScotch is a dataset of about 19 million functions collected from open-source repositiories from GitHub with permissive licenses. Each function has its corresponding code context and about 4 million functions have corresponding docstrings.",
"### Languages\nThe dataset includes functions written in programming languages Python, Java, Javascript, and Go.",
"## Statistics",
"### Split\nThe functions with docstrings is splitted into train, valid, and test set of 3200626, 400077, 400080 functions respectively.",
"## Features\n\nEach function consists of following features:\n\n* repository_name: Name of the repository the function belongs to.\n* function_path: Path of the function within the repository.\n* function_identifier: Function name/identifier.\n* language: Programming language the function is written in.\n* function: Function string.\n* docstring: Function docstring.\n* function_url: URL to the function code.\n* context: Code context.\n* license: License info of the repository (includes only repositories with permissive licenses).",
"## Data Collection\n\nThe dataset is collected from GitHub repositories of respective languages with 5 or more stars. Such repositories are listed using SEART. Functions are parsed using a lightweight parser build on top of function parser from CodeSearchNet dataset and repositories were collected with help of github-downloader from EleutherAI.",
"### Data Processing\nAll the code without permissive licenses are removed and deduplication is performed on the remaining set of functions. Afterwards, all the functions with single line of code, whose docstring contains non-English characters are removed. Files with multiple same functions are excluded. This results in about 19M functions. To obtain a dataset of NL-Code pairs, functions with no docstrings or doctrings less than 3 tokens separated by white-space are excluded. Following CodeSearchNet, functions with 'test' keyword in their name are excluded.",
"## License\n\nThis dataset is under MIT License. However, the repositories the functions are collected from may have several permissive licenses. Those licenses include MIT License, Apache License 2.0, BSD 3-Clause “New” or “Revised” License, BSD 2-Clause “Simplified” License, and ISC License."
] |
630a24d3e902f49f89ba5b835410ad2cbb3f0059 | ## Generation procedure
The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored using using [Perspective API](http://perspectiveapi.com) toxicity scores.
The procedure was the following:
1. A chunk of the Pile (3%, 7m documents) was scored using the Perspective API.
2. The first half of this dataset is [tomekkorbak/pile-toxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-toxic-chunk-0), the 100k *most* toxic documents of the scored chunk.
3. The second half of this dataset is [tomekkorbak/pile-nontoxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-nontoxic-chunk-0), the 100k *least* toxic documents of the scored chunk.
4. Then, the dataset was shuffled and a 9:1 train-test split was done.
## Basic stats
The average scores of the good and bad half are 0.0014 and 0.67, respectively. The average score of the whole dataset is 0.33; the median is 0.51.
However, the weighted average score (weighted by document length) is 0.45. Correlation between score and document length is 0.2.
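The two summary quantities mentioned above (the length-weighted mean score and the score/length correlation) can be computed with a few lines of numpy, as sketched below; the record layout (a `score` field and the document text) is an assumption.

```python
import numpy as np

# Placeholder records -- in practice these would be the dataset rows with their Perspective scores.
documents = [
    {"text": "a short document", "score": 0.02},
    {"text": "a much longer document " * 40, "score": 0.71},
    {"text": "a medium-length document " * 10, "score": 0.33},
]

scores = np.array([d["score"] for d in documents])
lengths = np.array([len(d["text"]) for d in documents])

weighted_mean = np.average(scores, weights=lengths)      # reported above as ~0.45 for the full dataset
score_length_corr = np.corrcoef(scores, lengths)[0, 1]   # reported above as ~0.2 for the full dataset
print(weighted_mean, score_length_corr)
```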
Score histogram:

Mean score per Pile subset
| pile_set_name | score | length |
|:------------------|----------:|------------:|
| ArXiv | 0.141808 | 9963.82 |
| Books3 | 0.405541 | 8911.67 |
| DM Mathematics | 0.535474 | 8194 |
| Enron Emails | 0.541136 | 1406.76 |
| EuroParl | 0.373395 | 4984.36 |
| FreeLaw | 0.279582 | 8986.73 |
| Github | 0.495742 | 2184.86 |
| Gutenberg (PG-19) | 0.583263 | 4034 |
| HackerNews | 0.617917 | 3714.83 |
| NIH ExPorter | 0.0376628 | 1278.83 |
| OpenSubtitles | 0.674261 | 14881.1 |
| OpenWebText2 | 0.613273 | 2634.41 |
| PhilPapers | 0.549582 | 9693 |
| Pile-CC | 0.525136 | 2925.7 |
| PubMed Abstracts | 0.0388705 | 1282.29 |
| PubMed Central | 0.235012 | 7418.34 |
| StackExchange | 0.590904 | 2210.16 |
| USPTO Backgrounds | 0.0100077 | 2086.39 |
| Ubuntu IRC | 0.598423 | 4396.67 |
| Wikipedia (en) | 0.0136901 | 1515.89 |
| YoutubeSubtitles | 0.65201 | 4729.52 | | tomekkorbak/pile-toxicity-balanced | [
"region:us"
] | 2022-03-31T11:43:11+00:00 | {} | 2022-04-06T10:07:05+00:00 | [] | [] | TAGS
#region-us
| Generation procedure
--------------------
The dataset was constructed using documents from the Pile scored using using Perspective API toxicity scores.
The procedure was the following:
1. A chunk of the Pile (3%, 7m documents) was scored using the Perspective API.
2. The first half of this dataset is tomekkorbak/pile-toxic-chunk-0, the 100k *most* toxic documents of the scored chunk.
3. The second half of this dataset is tomekkorbak/pile-nontoxic-chunk-0, the 100k *least* toxic documents of the scored chunk.
4. Then, the dataset was shuffled and a 9:1 train-test split was done.
Basic stats
-----------
The average scores of the good and bad half are 0.0014 and 0.67, respectively. The average score of the whole dataset is 0.33; the median is 0.51.
However, the weighted average score (weighted by document length) is 0.45. Correlation between score and document length is 0.2.
Score histogram:

The text prompt for many was "Orbs within orbs, concentric circles and ripples of fire (spheres and circles, roundness)"
I used a high guidance scale (10 IIRC) and generated them in batches of 64
There are two 'flavours', 'dark' and 'light' (indicated with the 'label' attribute in the dataset). The 'light' images are from a GLID-3 model I fine-tuned on some abstract art, and tend to be more pastel colors and plain shapes. The 'dark' images are from GLID-3 part way through its training.
This dataset is intended for use in GAN training demos and other art projects. Please give attribution if you use it in your own work (and tag me @johnowhitaker so I can see what you make!)
It's also nice for other artsy things, such as this montage made up of many little orb images: https://www.easyzoom.com/imageaccess/47cab299796a45edbd98951e704cb340
gan trained on this dataset: https://huggingface.co/johnowhitaker/orbgan_e1
gan demo (spaces): https://huggingface.co/spaces/johnowhitaker/orbgan_demo | johnowhitaker/glid3_orbs | [
"region:us"
] | 2022-03-31T14:46:41+00:00 | {} | 2022-04-01T02:58:57+00:00 | [] | [] | TAGS
#region-us
| These orbs were generated with GLID-3, a text-to-image system (URL
The text prompt for many was "Orbs within orbs, concentric circles and ripples of fire (spheres and circles, roundness)"
I used a high guidance scale (10 IIRC) and generated them in batches of 64
There are two 'flavours', 'dark' and 'light' (indicated with the 'label' attribute in the dataset). The 'light' images are from a GLID-3 model I fine-tuned on some abstract art, and tend to be more pastel colors and plain shapes. The 'dark' images are from GLID-3 part way through its training.
This dataset is intended for use in GAN training demos and other art projects. Please give attribution if you use it in your own work (and tag me @johnowhitaker so I can see what you make!)
It's also nice for other artsy things, such as this montage made up of many little orb images: URL
gan trained on this dataset: URL
gan demo (spaces): URL | [] | [
"TAGS\n#region-us \n"
] |
4bc4e8decfdda1c956ca15694d9fa1518261efd0 | # Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language and it has many ways of referring to people, neutralizing gender by using resources already present in the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model that translates from gendered to neutral, in order to produce more inclusive sentences.
### Compiled sources
One of the major challenges was to obtain a valuable dataset that would suit gender inclusion purpose, therefore, when building the dataset, the team opted to dedicate a considerable amount of time to build it from a scratch. You can find here the results.
The data used for the model training has been manually created from a compilation of sources, obtained from a series of guidelines and manuals issued by the Spanish Ministry of Health, Social Services and Equality on the usage of non-sexist language, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Apart from manually annotated samples, this dataset has been further increased by applying data augmentation so that a minimum number of training examples is generated.**
* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 | hackathon-pln-es/neutral-es | [
"task_categories:text2text-generation",
"task_categories:translation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"region:us"
] | 2022-03-31T17:02:00+00:00 | {"language": ["es"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "neutralES"} | 2022-10-25T09:20:48+00:00 | [] | [
"es"
] | TAGS
#task_categories-text2text-generation #task_categories-translation #multilinguality-monolingual #size_categories-1K<n<10K #language-Spanish #region-us
| # Spanish Gender Neutralization
<p align="center">
<img src="URL width="250"/>
</p>
Spanish is a beautiful language and it has many ways of referring to people, neutralizing the genders and using some of the resources inside the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes* and it would end in a more inclusive way for talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a spanish language model for translating from gendered to neutral, in order to have more inclusive sentences.
### Compiled sources
One of the major challenges was to obtain a valuable dataset that would suit gender inclusion purpose, therefore, when building the dataset, the team opted to dedicate a considerable amount of time to build it from a scratch. You can find here the results.
The data used for the model training has been manually created from a compilation of sources, obtained from a series of guidelines and manuals issued by the Spanish Ministry of Health, Social Services and Equality on the usage of non-sexist language, stipulated in this linked document.
NOTE: Apart from manually annotated samples, this dataset has been further increased by applying data augmentation so that a minimum number of training examples is generated.
* Guía para un discurso igualitario en la universidad de alicante
* Guía UC de Comunicación en URL>)
* Buenas prácticas para el tratamiento del lenguaje en igualdad
* Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha
* Guía de Lenguaje Para el Ámbito Educativo
* Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén
* Guía de uso no sexista del vocabulario español
* Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV
* Guía de lenguaje no sexista UNED
* COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO
* Recomendaciones para la utilización de lenguaje no sexista
* Estudio sobre lenguaje y contenido sexista en la Web
* URL. En femenino y en masculino
## Team Members
- Fernando Velasco (fermaat)
- Cibeles Redondo (CibelesR)
- Juan Julian Cea (Juanju)
- Magdalena Kujalowicz (MacadellaCosta)
- Javier Blasco (javiblasco)
### Enjoy and feel free to collaborate with this dataset | [
"# Spanish Gender Neutralization\n\n<p align=\"center\">\n <img src=\"URL width=\"250\"/>\n</p>\n\nSpanish is a beautiful language and it has many ways of referring to people, neutralizing the genders and using some of the resources inside the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes* and it would end in a more inclusive way for talking about people. This dataset collects a set of manually anotated examples of gendered-to-neutral spanish transformations.\n\nThe intended use of this dataset is to train a spanish language model for translating from gendered to neutral, in order to have more inclusive sentences.",
"### Compiled sources\n\nOne of the major challenges was to obtain a valuable dataset that would suit gender inclusion purpose, therefore, when building the dataset, the team opted to dedicate a considerable amount of time to build it from a scratch. You can find here the results.\n\nThe data used for the model training has been manually created form a compilation of sources, obtained from a series of guidelines and manuals issued by Spanish Ministry of Health, Social Services and Equality in the matter of the usage of non-sexist language, stipulated in this linked document.\n\nNOTE: Appart from manually anotated samples, this dataset has been further increased by applying data augmentation so a minumin number of training examples are generated.\n\n* Guía para un discurso igualitario en la universidad de alicante\n\n* Guía UC de Comunicación en URL>)\n\n* Buenas prácticas para el tratamiento del lenguaje en igualdad\n\n* Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha\n\n* Guía de Lenguaje Para el Ámbito Educativo\n\n* Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén\n\n* Guía de uso no sexista del vocabulario español\n\n* Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV\n\n* Guía de lenguaje no sexista UNED\n\n* COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO\n\n* Recomendaciones para la utilización de lenguaje no sexista\n\n* Estudio sobre lenguaje y contenido sexista en la Web\n\n* URL. En femenino y en masculino",
"## Team Members\n- Fernando Velasco (fermaat)\n- Cibeles Redondo (CibelesR)\n- Juan Julian Cea (Juanju)\n- Magdalena Kujalowicz (MacadellaCosta)\n- Javier Blasco (javiblasco)",
"### Enjoy and feel free to collaborate with this dataset"
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-translation #multilinguality-monolingual #size_categories-1K<n<10K #language-Spanish #region-us \n",
"# Spanish Gender Neutralization\n\n<p align=\"center\">\n <img src=\"URL width=\"250\"/>\n</p>\n\nSpanish is a beautiful language and it has many ways of referring to people, neutralizing the genders and using some of the resources inside the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes* and it would end in a more inclusive way for talking about people. This dataset collects a set of manually anotated examples of gendered-to-neutral spanish transformations.\n\nThe intended use of this dataset is to train a spanish language model for translating from gendered to neutral, in order to have more inclusive sentences.",
"### Compiled sources\n\nOne of the major challenges was to obtain a valuable dataset that would suit gender inclusion purpose, therefore, when building the dataset, the team opted to dedicate a considerable amount of time to build it from a scratch. You can find here the results.\n\nThe data used for the model training has been manually created form a compilation of sources, obtained from a series of guidelines and manuals issued by Spanish Ministry of Health, Social Services and Equality in the matter of the usage of non-sexist language, stipulated in this linked document.\n\nNOTE: Appart from manually anotated samples, this dataset has been further increased by applying data augmentation so a minumin number of training examples are generated.\n\n* Guía para un discurso igualitario en la universidad de alicante\n\n* Guía UC de Comunicación en URL>)\n\n* Buenas prácticas para el tratamiento del lenguaje en igualdad\n\n* Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha\n\n* Guía de Lenguaje Para el Ámbito Educativo\n\n* Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén\n\n* Guía de uso no sexista del vocabulario español\n\n* Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV\n\n* Guía de lenguaje no sexista UNED\n\n* COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO\n\n* Recomendaciones para la utilización de lenguaje no sexista\n\n* Estudio sobre lenguaje y contenido sexista en la Web\n\n* URL. En femenino y en masculino",
"## Team Members\n- Fernando Velasco (fermaat)\n- Cibeles Redondo (CibelesR)\n- Juan Julian Cea (Juanju)\n- Magdalena Kujalowicz (MacadellaCosta)\n- Javier Blasco (javiblasco)",
"### Enjoy and feel free to collaborate with this dataset"
] |
939a75ee8e11464d473de956df8a96fa5e5e64b7 |
This is a suite of psycholinguistic datasets by Allyson Ettinger. See her [official Github repository](https://github.com/aetting/lm-diagnostics) for specific details. | KevinZ/psycholinguistic_eval | [
"task_categories:multiple-choice",
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"license:mit",
"region:us"
] | 2022-03-31T23:04:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en-US"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["multiple-choice", "fill-mask", "question-answering", "zero-shot-classification"], "task_ids": [], "pretty_name": "psycholinguistic_eval"} | 2022-10-25T09:03:37+00:00 | [] | [
"en-US"
] | TAGS
#task_categories-multiple-choice #task_categories-fill-mask #task_categories-question-answering #task_categories-zero-shot-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #license-mit #region-us
|
This is a suite of psycholinguistic datasets by Allyson Ettinger. See her official Github repository for specific details. | [] | [
"TAGS\n#task_categories-multiple-choice #task_categories-fill-mask #task_categories-question-answering #task_categories-zero-shot-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #license-mit #region-us \n"
] |
Subsets and Splits